Image-generating technology is advancing rapidly, making it increasingly likely that you will encounter "digital replicas" (sometimes referred to as "deepfakes") of celebrities and non-celebrities alike across film, television, documentaries, marketing, advertising, and election materials. Meanwhile, legislators are advocating for protections against the exploitation of name, image, and likeness while attempting to balance the First Amendment rights that creatives enjoy.

In this post, we analyze significant recent developments in the regulation of digital replicas, particularly the U.S. Copyright Office Report on Copyright and Artificial Intelligence, Part 1: Digital Replicas; the federal NO FAKES Act; and other legislative and regulatory efforts to protect individuals against unauthorized generative AI re-creations. We also examine how these developments implicate First Amendment principles and might impact the creation and distribution of content.

U.S. Copyright Office and First Amendment Considerations

On July 31, 2024, the U.S. Copyright Office published a report summarizing its proposal for a new federal law that would protect against unauthorized digital replicas, recognizing there are gaps in existing legal frameworks (i.e., a patchwork of state right-of-publicity law and regulation, the Copyright Act, the Federal Trade Commission Act, the Lanham Act, and the Communications Act).

After collecting thousands of comments and conducting its own study, the office outlined the contours of a proposed new federal law targeting unauthorized digital replicas (AI-generated or otherwise). Its key features include:

  • Protection for private individuals as well as celebrities;
  • Liability for distribution only (not creation);
  • Secondary liability for distributors or other types of intermediaries (and exclusion from Section 230), but with safe harbors that incentivize prompt removal; and
  • Availability of injunctive relief, statutory damages, and attorneys' fees.

Although the office's proposal is not strictly limited to commercial uses, the office acknowledged that digital replicas have legitimate uses in the context of constitutionally protected speech, such as in news reporting, artistic works, parody, and political opinion.

There was significant disagreement among the thousands of comments the Copyright Office received on how to balance protecting individuals from unauthorized deepfakes with respecting speech and the distribution of content. Many commenters supported specific categorical exemptions from any federal law (e.g., for news reporting, various types of expressive works, sports broadcasting, and parody, comment, and criticism), which would provide greater certainty. The Motion Picture Association's comments, for instance, included several examples of expressive uses deserving of categorical protection, such as documentaries that use digital replicas "to re-create scenes from history where no actual footage exists." Other commenters instead preferred a fact-specific balancing test, worried that categorical exclusions could be simultaneously over- and under-inclusive depending on the context.

Ultimately, the Copyright Office recommended that any federal law include a balancing framework rather than categorical exemptions, which would require courts to assess a full range of factors, including:

  • the purpose of the use, including whether it is commercial;
  • whether the use is expressive or political in nature;
  • the relevance of the digital replica to the purpose of the use;
  • whether the use is intentionally deceptive;
  • whether the replica was labeled;
  • the extent of the harm caused; and
  • the good faith of the user.

As with any balancing test, this approach gives courts and other decisionmakers greater flexibility to strike the appropriate balance on a case-by-case basis, but it also creates greater uncertainty, a likelihood of inconsistent decisions across jurisdictions, increased litigation expense, and the potential to chill speech as creators steer clear of potential liability.

NO FAKES Act and Other Legislation

On July 31, 2024, the same day the U.S. Copyright Office released its report, a bipartisan group of federal lawmakers including Senators Coons, Blackburn, Klobuchar, and Tillis introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, commonly referred to as the NO FAKES Act. The bill, currently pending in the Senate, is designed to protect the voice and likeness of actors, singers, performers, and other individuals from the unauthorized use of AI-generated replicas. Among other things, the legislation attempts to address concerns regarding deepfake technology and seeks to create uniform protections across the United States. Upon the bill's introduction, entertainment industry stakeholders, including SAG-AFTRA, the Recording Industry Association of America, the Motion Picture Association, and The Walt Disney Company, released statements endorsing the proposed legislation, as did technology companies OpenAI and IBM.

The NO FAKES Act seeks to hold individuals or companies liable for damages for producing, hosting, or sharing digital replicas, including AI-generated replicas, of an individual performing in audiovisual works, images, or sound recordings without that person's consent or participation. At the same time, the draft legislation attempts to balance First Amendment protections through exclusions that apply when the digital replica is:

  • Produced or used in a bona fide news, public affairs, or sports broadcast if the replica is the subject of or materially relevant to the subject of the broadcast;
  • Used in a documentary or in a historical or biographical manner, including some degree of fictionalization, unless the use intends to and does create the false impression the work is authentic and the person participated;
  • Produced or used consistent with the public interest in bona fide commentary, criticism, scholarship, satire, or parody;
  • Used in a fleeting or negligible manner; and
  • Produced or used in an advertisement or commercial announcement for any of the foregoing purposes.

Of course, even with explicit First Amendment exclusions in the draft bill, the legislation raises various questions about when these protections apply. For example, what is a "bona fide news" broadcast? What constitutes "some degree of fictionalization," and how "fleeting or negligible" must a use be to qualify? And do these First Amendment protections sufficiently safeguard creative and artistic (or even commercial) works? If the legislation passes, courts will likely wrestle with these and other questions.

In recent months, other lawmakers have proposed their own bills to address digital replicas, all of which also grapple with balancing their prohibitions against First Amendment protections. For example, on August 9, 2024, Congressman Issa (CA-48) publicly released a discussion draft of the Preventing Abuse of Digital Replicas Act (PADRA), which would amend the Lanham (Trademark) Act to address digital replicas, including by providing a rebuttable presumption that such uses are likely to cause confusion, to cause mistake, or to deceive, with a carveout where the use is protected under the First Amendment. In Tennessee, the recent ELVIS Act extends right-of-publicity protection for names and likenesses to individuals' voices in light of the increased popularity and accessibility of AI-generated audio tracks, while memorializing First Amendment and fair use protections as express exemptions. Illinois also passed a similar amendment to its Right of Publicity Act to cover digital replicas, with express exclusions for news and sports broadcasts, uses for political or public interest purposes, documentary or biographical works, and satire or parody, all so long as the use does not create the false impression that the replica is authentic.

And in California, state lawmakers recently passed AB 1836, one of California's proposed digital replica laws, which would amend the state's right-of-publicity statute to prohibit the use of digital replicas of a deceased person in expressive audiovisual works or sound recordings without prior consent. That bill, which contains First Amendment exclusions similar to those proposed in the NO FAKES Act discussed above, is currently awaiting the governor's signature. And as of August 27, 2024, California lawmakers sent the governor AB 2602, which would require movie studios, streamers, and other content creators to obtain performers' permission before creating digital replicas of them. These are but a few of the bills under debate, all of which attempt to advance the legislation's protective goals while still safeguarding artists and content creators.

Conclusion

There is tension among the legislative proposals over how, and to what degree, they incorporate and apply First Amendment protections for expressive works, with approaches ranging from exempting categories of works from a bill's scope to creating a balancing test of factors for courts to consider. Ultimately, it remains unclear how lawmakers, and eventually courts, will resolve these pressing and evolving issues.

DWT will continue to provide updates regarding these and other issues in AI and the creation and distribution of content. If you have questions about digital replicas, deepfakes, or other issues relating to AI and the First Amendment, please contact the authors.