Two New California Laws Tackle Deepfake Videos in Politics and Porn
This article was originally featured as an Artificial Intelligence Law Advisor post on DWT.com on October 11, 2019. Our editors have chosen to feature it here for its relevance to this publication’s subject matter.
Effective in 2020, two new California laws will regulate the distribution of so-called "deepfakes"—manipulated images, audio, or video depicting someone in a way that appears to be genuine. But the laws, which are designed to prevent improper influence on elections and unauthorized use of a person’s likeness in pornography, have come under scrutiny from First Amendment advocates.
AB 730 prohibits the use of deepfakes to influence political campaigns. That law will sunset on January 1, 2023. AB 602, which addresses deepfakes in pornography, has no sunset provision.
These measures are part of California’s ongoing effort to regulate applications enabled by artificial intelligence (AI) and machine learning technologies. These technologies, once the province of high-powered computers and expert coders, are now accessible to consumers. Last year, the state passed a law prohibiting the use of bots—AI-enabled autonomous communications tools—in commerce or politics unless the bot’s autonomous nature is disclosed to consumers.
Current California Laws Address Only Campaign Material; Fail to Account for AI and Machine Learning
Until now, California has not expressly regulated the distribution of manipulated or altered content absent a showing of fraud or damages. See, e.g., Cal. Civ. Code § 1572 (fraud in contract); § 1709 (deceit). The Election Code, however, prohibits the production or distribution of "campaign material" that includes superimposed photographs of candidates intended "to create a false representation." Cal. Elec. Code § 20010(a).
But that provision, enacted in 1998, does not address or even anticipate AI and machine learning, which can now produce deepfake video and audio files that often appear authentic. For example, a well-publicized deepfake video of President Obama, voiced by actor Jordan Peele, is often used to illustrate the power of this technology.
The new laws are designed to bridge that gap.
AB 730
AB 730 prohibits distributing "with actual malice" materially deceptive audio or visual media showing a candidate for office within 60 days of an election, with the intent to injure the candidate’s reputation or deceive a voter into voting for or against the candidate. Cal. Elec. Code § 20010(a) (2020).
"Materially deceptive audio or visual media" must:
- 1. "Falsely appear to a reasonable person to be authentic"; and
- 2. "Cause a reasonable person to have a fundamentally different understanding or impression of the expressive content" than if he or she had seen the unaltered content. § 20010(e).
The law also permits an affected candidate to file a lawsuit and seek injunctive relief, general or special damages, and attorney’s fees. § 20010(c). Significantly, the measure exempts print and online media and websites that clearly disclose that the deepfake video or audio file is inaccurate or of questionable authenticity.
AB 602
AB 602 will provide a private right of action against any person who:
1. Creates and intentionally discloses sexually explicit material if that person knows or reasonably should have known the depicted individual did not consent; or
2. Intentionally discloses sexually explicit material that the person did not create if the person knows the depicted individual did not consent. Cal. Civ. Code § 1708.86(b) (2020).
A "depicted individual" is "an individual who appears, as a result of digitization, to be giving a performance they did not actually perform or to be performing in an altered depiction." § 1708.86(a)(4). A plaintiff can recover disgorgement of profits, economic and noneconomic damages, or statutory damages up to $150,000 if the act was "committed with malice." § 1708.86(e).
Addressing First Amendment Concerns
AB 730
Both laws contain exceptions nominally designed to alleviate First Amendment concerns. For example, AB 730 explicitly does not purport to alter protections under Section 230 of the Communications Decency Act. § 20010(d)(1). Nor does it apply to a radio or television station reporting on the news with a disclaimer, § 20010(d)(2), or "when it is paid to broadcast" the content, § 20010(d)(3).
In addition, the measure exempts from liability "internet website[s], or a regularly published newspaper, magazine, or other periodical . . . that routinely carries news and commentary of general interest" if such sites include proper disclaimers. Websites must "clearly state" that the "materially deceptive audio or visual media does not accurately represent the speech or conduct of the candidate."
Similarly, radio or television broadcasters must "clearly acknowledge" that there are "questions about the authenticity of the materially deceptive audio or visual media." AB 730 does not apply to altered content "that constitutes satire or parody." § 20010(d)(5).
AB 602
AB 602, for its part, permits disseminating altered content in the course of reporting unlawful activity or in a legal proceeding. § 1708.86(c)(1)(A). A person also cannot be held liable for creating or publishing content that is a "matter of legitimate public concern," a work of political or newsworthy value, or material within the state and federal protections for commentary, criticism, and disclosure. § 1708.86(c)(1)(B).
Under the law, however, altered pornography "is not of newsworthy value solely because the depicted individual is a public figure," § 1708.86(c)(2), and unlike AB 730, a disclaimer is not a defense, § 1708.86(d).
Advocates Identify Issues
Still, First Amendment advocates have identified problems with these statutes.
For example, the California News Publishers Association and the American Civil Liberties Union urged Governor Gavin Newsom to veto AB 730. The concerns about the constitutionality of the measures appear to be well founded.
- AB 730, for example, would likely prohibit the use of altered content to reenact true events that were not recorded, and could bar a candidate’s use of altered videos of himself or herself.
- AB 602 potentially imposes liability for content viewed solely by the creator and does not explain what happens when consent is revoked after creation or distribution.
Whether these laws will be challenged remains to be seen.
Possible Federal Action and What’s Next for Deepfake Laws
Recent activity in Congress suggests that federal lawmakers are also considering how to regulate deepfakes in politics. Earlier this month, Senators Warner (D-VA) and Rubio (R-FL) sent a letter to leading technology companies seeking information about those companies’ policies for detecting and deterring intentionally misleading or fabricated media, such as deepfakes.
That effort followed a House Intelligence Committee hearing earlier this year in which members suggested that social media companies should be held liable for the content distributed over their platforms, and that persons responsible for producing deepfakes should face potential criminal liability—a concept included in the recently introduced DEEPFAKES Accountability Act of 2019.
Whatever fate awaits these new California laws, the use and proliferation of deepfakes will very likely face greater legal and regulatory scrutiny in the months and years ahead. As a result, unless and until the deepfake laws are successfully challenged, online, broadcast, and print media and publishers must be cognizant of these new requirements to identify and disclose the presence of deepfakes.