FCC Declares AI Robocalls Subject to Federal Telemarketing Regulation
Only a week after issuing a statement calling for the regulation of robocalls featuring voices generated by artificial intelligence, the Federal Communications Commission (FCC) adopted a Declaratory Ruling confirming that telephone calls that use or include AI-generated voices are subject to the federal Telephone Consumer Protection Act (TCPA) and the FCC's implementing regulations, which impose strict requirements on entities placing or sending informational and marketing calls and text messages to consumers.
As a result of the Declaratory Ruling, effective immediately, entities placing calls that include AI-generated voices must comply with the TCPA's requirements – including obtaining the prior consent of the called party absent an emergency purpose or exemption, providing certain identification and disclosure information in the call, and honoring various opt-out requirements. Failure to comply could now result in an FCC investigation or enforcement proceeding, as well as potential class action liability for every AI-generated voice call placed without the required consent.
Background
The TCPA, enacted in 1991, and the FCC's corresponding regulations prohibit entities from placing telephone calls using, among other things, "artificial or prerecorded voices" absent the necessary level of consent. For years, this restriction was interpreted as applying to calls placed using a prerecorded message or via soundboard technology. However, in November 2023, the FCC released a Notice of Inquiry (NOI) seeking to better understand the emerging implications of AI technology and whether calls placed using such technology should fall within the TCPA's requirements applicable to the use of an "artificial or prerecorded voice" in making "unwanted and illegal robocalls."
In late January, thousands of New Hampshire voters received robocalls using a deepfake of President Joe Biden's voice urging them to stay home from the state's Democratic presidential primary. And a week later – seemingly in response to the New Hampshire incident, as well as NOI comments submitted by 26 state attorneys general – FCC Chairwoman Jessica Rosenworcel issued a proposal calling for the FCC to impose the TCPA's restrictions on the use of AI-generated voices in telephone calls to consumers. The FCC's Declaratory Ruling makes the Chairwoman's proposal a reality, with the agency noting that it will now "have another tool to go after voice cloning scams and get this [AI] junk off the line."
Expanding the TCPA's "Artificial or Prerecorded Voice" Restrictions to AI Robocalls
In the Declaratory Ruling, the FCC confirms that the TCPA's restrictions on the use of "artificial or prerecorded voice" calls encompass "current AI technologies that resemble human voices and/or generate call content using a prerecorded voice." Applying the existing "artificial or prerecorded voice" definition is logical, the FCC claims, because "[v]oice cloning and other similar technologies emulate real or artificially created human voices" and are thus "artificial," since a live person is not speaking to the called party.
The FCC goes on to explain that applying the definition to AI-generated voices is also necessary because, while voice cloning and other uses of AI are still evolving, "we have already seen their use in ways that can uniquely harm consumers and those whose voice is cloned," referencing the New Hampshire incident. Moreover, "voice cloning can convince a called party that a trusted person, or someone they care about such as a family member, wants or needs them to take some action that they would not otherwise take." Because of these harms and the consumer protection risks such calls create, requiring compliance with the TCPA – including its consent requirements – will ensure that consumers are aware of both the artificial nature of such calls and their right not to receive them, and accordingly will "be cautious about them."
Applying TCPA Regulations to AI Robocalls
As a result of the FCC's Declaratory Ruling, callers that place calls or generate call content using AI to mimic a human voice must comply with the TCPA's various – and voluminous – requirements. While not exhaustive, those requirements include:
- Obtaining the "prior express consent" of the called party to initiate such calls absent an emergency purpose or exemption, which includes providing call recipients with the necessary TCPA-compliant disclosure and, for marketing calls specifically, ensuring the call recipient provides their consent in writing;
- Providing various identification and disclosure information for the entity responsible for initiating the call, including, but not necessarily limited to, the identity of the call initiator and a telephone number where a representative of the calling party can be reached; and
- For marketing calls, providing an automated opt-out mechanism for the call recipient to make a do-not-call request (which must comply with all FCC automated opt-out requirements).
The consequences of failing to comply with these requirements can be severe: the FCC may investigate and pursue enforcement actions against violators, and affected call recipients may seek damages of up to $1,500 per call received, including on a class-wide basis.
Conclusion
With the FCC's Declaratory Ruling taking immediate effect, companies that place calls using AI-generated content are now subject to a host of regulations that, if not followed, can result in significant liability. And with the TCPA serving as the impetus for thousands of class action lawsuits annually, compliance is critical. If your company relies on AI-generated voices to place calls and you have questions about the necessary compliance requirements or the best risk mitigation strategies, DWT's attorneys are experienced and ready to assist.