Utah Enacts Multiple Laws Amending and Expanding the State's Regulation of the Deployment and Use of Artificial Intelligence
Alongside California and Colorado, Utah has emerged as one of the most active states in regulating the deployment and use of artificial intelligence ("AI"). Diverging from Colorado's approach, Utah first enacted laws in 2024 that focused on a relatively discrete set of rules for certain uses of AI. In late March 2025, Utah enacted several laws that regulate businesses' use of generative AI ("GenAI") tools to interact with consumers, including one law that creates specific requirements for GenAI tools that offer mental health services. Several of these measures amend the Utah AI Policy Act, enacted in 2024; another expands the state's existing "unauthorized impersonation" law to cover impersonations created by AI tools and to prohibit the distribution of tools whose primary intended purpose is to create unauthorized impersonations.
Below, we outline the details of each of these laws.
Modifying the Scope of Disclosures Mandated by the AI Policy Act
Utah enacted two bills (SB 226 and SB 332) that modify and narrow the existing transparency obligations established in the AI Policy Act.
The AI Policy Act mandates disclosures for entities using generative AI in connection with certain consumer interactions. SB 226 narrows the scope of these disclosure requirements for businesses using GenAI to those instances when a consumer or supplier directly asks whether "artificial intelligence" is being used. The amendment makes clear that the individual's prompt or question must be a clear and unambiguous request to determine whether the interaction is with a human or with artificial intelligence.
Separately, those providing services using GenAI in "regulated occupations" must provide "prominent" disclosures ("prominent" is undefined but presumably means "clear and conspicuous" as used in the safe harbor described below) when an individual receives services in a "high-risk artificial intelligence interaction." High-risk artificial intelligence interactions are defined as those involving:
the collection of sensitive personal information such as health, financial, or biometric data, or
the provision of personalized recommendations, advice, or information (defined to include financial, legal, medical, or mental health advice or services) that could reasonably be relied on by individuals to make significant personal decisions.
These "prominent" disclosures must be provided either verbally at the start of a verbal interaction or in writing at the start of a written interaction. The regulated occupations covered by this mandate are those requiring a license from the Utah Department of Commerce, including accounting, architecture, engineering, and numerous healthcare professions (such as genetic counseling, nursing, pharmacy, etc.). Importantly, this "prominent" disclosure requirement does not apply to all "high-risk artificial intelligence interactions" but is limited to high-risk interactions with businesses in regulated occupations.
SB 226 also creates a statutory safe harbor provision that is available to entities whose GenAI tools make "clear and conspicuous" disclosures at the outset of and during the course of consumer interactions with GenAI. That said, the amendment makes clear that it is not an affirmative defense to assert that the GenAI tool made the violative statement or undertook the violative act. The statute was originally scheduled to automatically repeal on May 1, 2025, but SB 332 extended the repeal period such that the measure will remain in effect until July 1, 2027.
Regulating the Use of AI-Enabled Mental Health Chatbots
Utah also enacted a new measure (HB 452) that governs mental health chatbots, which the statute defines as "artificial intelligence technology" using GenAI to engage in interactive conversations with a user that are "similar to the confidential communications that an individual would have with a licensed mental health therapist" and that a reasonable person would construe as mental health therapy or helping a user manage or treat mental health conditions. The statute excludes AI-enabled tools that simply provide scripted outputs or facilitate connections with a human therapist from the scope of the definition of a mental health chatbot.
HB 452 establishes the following restrictions on mental health chatbots:
The chatbots must clearly and conspicuously disclose that they are not human prior to interacting with users, at the beginning of any interaction with a user if the user has not accessed the chatbot within the last seven days, and any time the user asks the chatbot whether artificial intelligence is being used.
The chatbots must refrain from advertising any products or services during user interactions, unless the chatbot clearly and conspicuously discloses and identifies the advertisement, and must not leverage user inputs to determine whether to present ads to the consumer.
Providers of chatbots are prohibited from selling or sharing any individually identifiable personal health information or any user inputs with third parties, except for: (1) contractually bound suppliers who are necessary to ensure the functionality of the bot; and (2) health care providers or health plans with the consent of the user.
Affirmative defenses against alleged violations of these rules are available for providers that maintain appropriate documentation regarding the development and implementation of these tools and policies to govern their responsible use. Violations of the law can result in administrative fines of up to $2,500 per violation. The law also permits courts to issue injunctions, order disgorgement of money, impose fines, and award attorney's fees and costs for violations of the law.
Expanding the Scope of Unauthorized Impersonation Using AI
SB 271 expands the scope of Utah's existing "abuse of personal identity" law to prohibit the unauthorized commercial use of simulated or artificially recreated personal identities, including simulations created using GenAI, computer animation, digital manipulation, or any other technological means.
The law expands the definition of personal identities to include video likeness, voice, or audiovisual experience. Voice is further defined as "a computer-generated sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice of the individual."
"Unauthorized commercial use" refers to the use of an individual's likeness for advertising, fundraising, or solicitation of donations where the use expresses or implies that the individual has approved of or endorsed the product. The law includes a fair use exception for uses of an individual's personal identity in connection with news reporting, artistic works, political commentary, works of parody and satire, or transformative creations.
SB 271 also prohibits the distribution of technology whose intended, primary purpose is the unauthorized commercial creation or modification of content using personal identities.
Takeaways
Utah's new laws add to the growing patchwork of state laws that govern the deployment and use of artificial intelligence. DWT's AI Team regularly advises businesses on compliance with emerging AI regulations and will continue to track state and federal legislative developments.