Two FCC Actions on AI: Penalties for Deepfake Robocalls With Spoofed Caller ID; Rulemaking on AI in Political Ads
The FCC recently issued a Notice of Apparent Liability for Forfeiture (NAL) proposing a $6 million fine against a political consultant for allegedly carrying out an illegal robocall campaign that used caller ID spoofing and AI-generated deepfake technology replicating President Biden's voice to discourage New Hampshire voters from casting ballots in the state's January 2024 primary election. The allegedly unlawful robocalls reached voters two days before the primary. A telecommunications provider responsible for transmitting the calls without appropriate caller ID attestation is also facing a potential $2 million fine.
In conjunction with its increased focus on the use of AI in the communications industry, the FCC will also initiate a new rulemaking addressing disclosure of the use of AI technology in election advertisements. When the Notice of Proposed Rulemaking (NPRM) is issued, it will likely include disclosure requirements for broadcasters, cable operators, and satellite and other programming origination providers when AI-generated content appears in political ads aired over their networks. The rulemaking will be aimed at disclosure requirements, not a prohibition on the use of AI-generated content in political ads.
The anticipated rulemaking is one of several actions the FCC has taken in response to the use of deepfake voice-cloning technology in robocalls. Earlier this year, and largely in response to the New Hampshire robocall voter-misinformation campaign at issue in the NAL, the agency issued a Declaratory Ruling concluding that telephone calls using AI-generated voices are subject to the federal Telephone Consumer Protection Act (TCPA) and the FCC's implementing regulations, which impose strict requirements on entities placing informational and marketing calls and sending text messages to consumers. The Declaratory Ruling took effect immediately upon issuance, so companies that use AI-generated voices in calls placed to consumers must comply with the TCPA's consent, disclosure, opt-out, and other requirements.
Other federal agencies have recently taken measures to regulate deepfakes, including considering restrictions on the use of deepfakes in campaign ads. In addition, a number of states have enacted legislation prohibiting the use of deepfakes in campaigns and other areas.
Violations and Enforcement Mechanisms
Using AI-generated voices in robocalls without complying with the TCPA and the FCC's implementing regulations is prohibited under the recent Declaratory Ruling, which concluded that the TCPA's restrictions on calls using an "artificial or prerecorded voice" encompass "current AI technologies that resemble human voices and/or generate call content using a prerecorded voice." Notably, however, the NAL does not conclude that the New Hampshire robocall campaign violated the TCPA or its implementing regulations.
Instead, the NAL bases its liability determination on a finding that the New Hampshire robocall campaign, which disseminated the deepfake recording of the President's voice while maliciously spoofing a known political operative's phone number, violated the Truth in Caller ID Act. That statute prohibits the knowing transmission of inaccurate caller ID information with the intent to defraud, cause harm, or wrongfully obtain anything of value. The Commission found that the spoofed calls were transmitted over a telecommunications provider's voice services by circumventing the caller ID attestation that providers must implement in their networks to detect spoofed calls and prevent their transmission.
The Truth in Caller ID Act prohibits transmitting inaccurate or misleading caller ID information with the intent to defraud consumers, and penalties for spoofing violations can reach $10,000 per violation. A caller who deliberately falsifies caller ID information to disguise their identity and real phone number engages in prohibited caller ID spoofing. Although caller ID spoofing is sometimes used for legitimate purposes (such as transmitting the main phone number associated with a business), it is typically used to defraud and scam consumers while concealing the fraudulent caller's identity.
The Commission initiated a separate enforcement action against the carrier that transmitted the calls carrying the deepfake messages, alleging violations of the STIR/SHAKEN regulations, which require telecommunications providers to assign appropriate levels of caller ID attestation to prevent illegal robocalls and robotexts from reaching consumers. STIR/SHAKEN is an industry-standard caller ID authentication framework that facilitates the authentication and verification of calls placed over Internet Protocol (IP) networks to prevent the transmission of such illegal calls. In June 2020, the FCC implemented STIR/SHAKEN rules requiring telecommunications providers to authenticate caller ID information for calls transmitted over their networks. Providers must also implement robocall mitigation plans outlining procedures to protect their customers from fraudulent activity, including spoofed robocalls.
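For context on what "attestation" means in practice, STIR/SHAKEN has the originating provider attach a signed token (a "PASSporT" under RFC 8588) to each call, carrying an attestation level of A, B, or C that downstream providers and analytics engines can inspect. The sketch below is illustrative only and is not drawn from the FCC's filings; the field names follow the published standard, while the phone numbers, timestamp, and certificate URL are hypothetical.

```python
import base64
import json

# Hypothetical SHAKEN PASSporT claims (RFC 8588): the payload of the signed
# token an originating provider attaches to a call via the SIP Identity header.
# "attest" is the attestation level: "A" (full), "B" (partial), or "C" (gateway).
claims = {
    "attest": "A",                      # provider vouches for the customer and the number
    "dest": {"tn": ["12025550143"]},    # called number(s) -- example data
    "iat": 1706000000,                  # token creation time (Unix seconds)
    "orig": {"tn": "16035550172"},      # calling number asserted by the provider
    "origid": "123e4567-e89b-12d3-a456-426614174000",  # opaque ID used for traceback
}

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT-style tokens require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build the unsigned header.payload portion of the token. A real PASSporT is
# then signed with the provider's ES256 key, and verifiers fetch the signing
# certificate from the "x5u" URL to confirm the signature.
header = {"alg": "ES256", "ppt": "shaken", "typ": "passport",
          "x5u": "https://cert.example.org/cert.pem"}
unsigned_token = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

# A terminating provider's analytics might treat the attestation level as one
# input when deciding whether to label or block a call.
ATTESTATION_MEANING = {
    "A": "originating provider vouches for both the customer and the number",
    "B": "provider knows the customer but cannot vouch for the number",
    "C": "provider is only gatewaying the call (least trust)",
}
level = claims["attest"]
print(f"Attestation level {level}: {ATTESTATION_MEANING[level]}")
print("Unsigned PASSporT:", unsigned_token[:60] + "...")
```

A call originated by a bad actor who spoofs another person's number would not legitimately receive "A"-level attestation, which is why the NAL focuses on the carrier's failure to apply appropriate attestation before transmitting the calls.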
The FCC has previously taken measures to address illegal robocalls and robotexts, including issuing fines ranging from $10 million to $225 million for illegal telemarketing campaigns that used caller ID spoofing and artificial or prerecorded voices in robocalls placed without the recipients' consent.
Next Steps
The NAL and the proposed penalties outlined in it are, at this time, allegations that the named parties may rebut. The named parties can also seek a reduction or cancellation of the proposed penalties. Depending on their responses, these proceedings could move to formal enforcement, or the parties may choose to settle with the FCC.
The NAL directed at the political consultant, coupled with the NAL directed at the telecommunications provider involved in transmitting the spoofed calls and with similar FCC forfeiture actions, is a clear signal that the FCC is likely to continue zeroing in on illegal robocalls. Likewise, prior FCC actions and the upcoming NPRM focused on AI regulatory measures signal the FCC's continued efforts to expand its enforcement activity in AI-related matters.
DWT will monitor for the release of the NPRM and note deadlines to submit public comment on the proposed rules.
DWT's AI team regularly advises clients on the rapidly evolving AI regulatory landscape and on compliance with federal, state, and international laws and regulations. As new actions are taken to regulate emerging technologies, with the attendant risk of overlapping regulatory authority or overregulation, we are closely monitoring policy developments and proposed rules to provide clients with guidance, advise on potential impacts on the private sector, and prepare comments in response to proposed regulations.