Generative AI, ChatGPT Undergoing Heightened Regulatory Scrutiny
Regulators in the United States and around the globe are showing growing concern about the potential risks of using generative artificial intelligence (AI) systems for commercial and business purposes. The Federal Trade Commission (FTC) and international regulators alike are examining AI issues in greater depth.
In the U.S., Federal Trade Commissioner Alvaro Bedoya publicly warned companies that any current or pending deployment of generative AI must account for consumer interests and risks. This technology, Bedoya made clear, is already regulated in the U.S., in part by the FTC under its Section 5 authority over unfair and deceptive practices.
"The reality is AI is regulated (in the U.S.). Unfair and deceptive trade practices laws apply to AI. At the FTC, our core Section 5 authority extends to companies making, selling or using AI. If a company makes a deceptive claim using or about AI, that company can be held accountable," said Bedoya at the recent International Association of Privacy Professionals' Global Privacy Summit.
Commissioner Bedoya's comments reiterate the "tread lightly" advice in an FTC blog post published on March 20, 2023. In that piece, the FTC advises companies to consider the potentially deceptive or unfair use of AI tools capable of creating or generating synthetic media. Describing this issue as the "AI fake" problem, whereby generative AI deployments are used to develop phishing emails, generate malware and ransomware, create deepfake videos and voice clones, and launch prompt injection attacks, the FTC outlines guidance to eliminate or mitigate those risks. The guidance suggests developers consider risks at the design phase, prior to deployment; outlines certain risk mitigation measures; and reminds developers and users of these systems to avoid the fraud and deception risks associated with this technology.
In addition, the FTC issued earlier guidance on the use of generative AI in advertising, declaring that companies must be transparent about how an AI product actually works and what the particular technology is capable of. As an example of false and unsubstantiated claims that could constitute deceptive advertising subject to FTC enforcement, the FTC cited exaggerated assertions that a computer can predict human behavior. The FTC also noted that advertising claims that a product using AI technology is inherently stronger or performs better than a comparable non-AI product must be verifiable.
Generative AI and Data Privacy
Along with concerns about the "AI fake" problem and deceptive or misleading advertising, the FTC has provided guidance on how personal information is collected and maintained when used to train AI systems. Specifically, the FTC recommended that companies collecting consumer data to train their AI systems do so transparently and only after securing consent.
International Regulators Scrutinize Generative AI Technologies
Privacy regulators around the globe are expressing greater concern and skepticism as generative AI technologies proliferate, spurring multiple international regulators to initiate probes into ChatGPT. Italy became the first Western country to formally ban ChatGPT when the Garante, Italy's data protection authority, ordered OpenAI to temporarily halt processing Italian users' data over a suspected breach of European privacy regulations. The Garante cited a purported breach at OpenAI that enabled users to view the titles of other users' conversations, the lack of age restrictions for using ChatGPT, and ChatGPT's provision of factually incorrect information in its responses.
Following Italy's formal ban of ChatGPT, regulatory authorities in France and Ireland contacted the Garante to learn more about the OpenAI breach. Germany is reportedly contemplating blocking ChatGPT over concerns with OpenAI's data security practices, and the Office of the Privacy Commissioner of Canada is investigating OpenAI after receiving a complaint alleging that ChatGPT collects, uses, and discloses personal information without consent.
On a related topic, Brian Hood, the mayor of Hepburn Shire in Australia, is reportedly contemplating a defamation suit against OpenAI after discovering that ChatGPT was telling users he had served time in prison for bribery. In fact, according to Reuters, Hood was never charged with a crime; he was the whistleblower who alerted authorities to bribes paid to foreign officials to win currency-printing contracts. If Hood files, the suit would likely be the first defamation action against OpenAI, the owner of ChatGPT, over claims made by its automated language product.
Looking Ahead
The rapid moves to ban or investigate ChatGPT in multiple countries exemplify the regulatory scrutiny on the horizon for OpenAI. U.S.-based companies using generative AI technologies, or contemplating new AI deployments, should proceed with caution and take proactive steps to comply with guidance issued by the FTC and other regulatory authorities.
In the coming days, DWT will publish a series of articles providing insight on significant developments in the regulation of AI systems and deployments. For example, the New York City Department of Consumer and Worker Protection adopted final rules implementing regulations on the use of automated employment decision tools (AEDT) in hiring, set to go into effect in July 2023. In addition, the National Telecommunications and Information Administration issued an "AI Accountability Request for Comment" on April 11, 2023, seeking feedback on policies that could support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems.
DWT's Privacy and Security and Artificial Intelligence teams will continue to monitor and report on the legal and policy developments impacting cybersecurity, privacy, and AI.