Updated Oct. 2, 2024: Governor Newsom signed all but one of the bills this week. Read our update and Newsom’s statement on vetoing SB 1047 in this new post.


Last week, the California Legislature passed several bills that, if signed by the governor, would regulate how organizations develop, train, and use artificial intelligence (AI) models, systems, and applications. Of these bills, SB 1047—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—has been the most controversial, with many parties voicing support and others raising concerns in opposition. As discussed in detail below, SB 1047 would create a number of substantial obligations for developers of the most powerful "frontier" AI models, including conducting safety assessments and determining that their models are not capable of creating an unreasonable risk of causing or materially enabling a critical harm.

SB 1047 would come close to matching the breadth and reach of the recently enacted Colorado AI Act, the first comprehensive state law regulating AI developers and deployers. Concerns about the impact of SB 1047, in particular, have been raised by leading technology developers, as well as by members of Congress representing California districts where some of these companies operate. Indeed, former Speaker Nancy Pelosi released a statement that SB 1047 is "more hurtful than helpful" in protecting consumers, and Rep. Zoe Lofgren raised several concerns, including that the bill could have unintended consequences for open-source models, possibly making the original model developer liable for downstream uses. On the other hand, Elon Musk said on X that it "is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill," having previously warned of the "dangers of runaway AI." These and other arguments will likely figure prominently in the campaign to convince Governor Newsom to sign or veto the measure.

The Legislature also passed three other, less discussed bills that, if enacted, would (1) require developers of generative AI (GenAI) systems to disclose information about the data used to train their models, (2) require developers of GenAI systems to implement technical transparency measures, including identifying content as AI generated, and (3) create new requirements for employment agreements involving the use of digital replicas.

However, it should be noted that several other state and local laws already on the books regulate specific AI applications or use cases, such as the use of AI in hiring and promotions; the creation of deepfakes; the use of bots or other AI-enabled systems to engage with consumers in insurance sales; and the use of automated decisionmaking technology for certain profiling based on personal information.

Governor Newsom must either sign, allow to become law without his signature, or veto all four measures by the end of September. Should any be enacted into law, they would add to the growing number of state laws imposing new affirmative duties on the development and use of AI models, systems, and applications and could inspire other states to adopt similar measures.

SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

SB 1047 (the Bill) would apply to "developers" (defined below) of "Covered Models" that (1) are trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds $100 million, or (2) are created by fine-tuning a Covered Model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations, the cost of which exceeds $10 million. Notably, this computing threshold likely does not reach current large models but is expected to reach next-generation models in development. Importantly, the computing threshold is set to be reevaluated, revised, and approved after January 1, 2027, by a newly established "Board of Frontier Models" within the Government Operations Agency. The Bill also would establish a consortium within the agency to create "CalCompute," a "public cloud computing cluster" for research, development, and deployment of AI that is "safe, ethical, equitable, and sustainable."
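
For illustration, the two prongs of the definition can be read as a simple compute-and-cost test. The sketch below is a minimal, hypothetical screening helper and not part of the Bill; the function name, inputs, and the idea of reducing the statutory definition to a boolean check are assumptions made purely for illustration.

```python
# Illustrative only: a hypothetical helper for screening a training run against
# SB 1047's "Covered Model" thresholds described above. The numeric thresholds
# come from the Bill; the function name and inputs are assumptions.

def is_covered_model(training_flops: float, training_cost_usd: float,
                     fine_tune_of_covered_model: bool = False,
                     fine_tune_flops: float = 0.0,
                     fine_tune_cost_usd: float = 0.0) -> bool:
    """Return True if a model appears to meet either Covered Model prong."""
    # Prong (1): initial training using more than 10^26 operations at a cost above $100 million.
    if training_flops > 1e26 and training_cost_usd > 100_000_000:
        return True
    # Prong (2): fine-tuning a Covered Model using at least 3 x 10^25 operations
    # at a cost above $10 million.
    if (fine_tune_of_covered_model
            and fine_tune_flops >= 3e25
            and fine_tune_cost_usd > 10_000_000):
        return True
    return False


# Example: a hypothetical next-generation training run.
print(is_covered_model(training_flops=2e26, training_cost_usd=150_000_000))  # True
```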

In addition to Covered Models, the Bill would create obligations related to "covered model derivatives," defined as: (1) an unmodified copy of a Covered Model; (2) a copy of a Covered Model that has been subjected to post-training modifications unrelated to fine-tuning; or (3) a copy of a Covered Model that has been fine-tuned using a quantity of computing power not exceeding three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the Developer, does not exceed $10 million.

"Developers" impacted by the bill would include "person[s] that perform[] the initial training of a covered model either by training a model … or by fine-tuning an existing covered model or covered model derivative...."

This bill is broadly aimed at preventing "Critical Harms," which are defined as any of the following harms caused or materially enabled by a Covered Model or Covered Model derivative:

  1. The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
  2. Mass casualties or at least $500 million of damage resulting from a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.
  3. Mass casualties or at least $500 million of damage resulting from an artificial intelligence model engaging in conduct that does both of the following:
    1. Acts with limited human oversight, intervention, or supervision, and
    2. Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the [California] Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
  4. Other grave harms to public safety and security that are of comparable severity to the harms described above.

Pre-Training Requirements

Before training a Covered Model, Developers would have to:

  1. Implement reasonable security measures to prevent unauthorized access to, misuse of, or unsafe post-training modifications of the Covered Model and all Covered Model derivatives controlled by the Developer.
  2. Implement the capability to promptly enact a full shutdown.
  3. Implement a written safety and security protocol that:
    1. specifies protections and procedures that satisfy the Developer's duty to take reasonable care to avoid creating an AI model that poses unreasonable risk of causing or materially enabling a Critical Harm,
    2. objectively states compliance requirements with sufficient detail to allow a determination that requirements of the safety and security protocol have been followed,
    3. identifies a testing procedure that evaluates whether the Covered Model or Covered Model derivative poses an unreasonable risk of causing Critical Harm,
    4. describes in detail how testing procedures assess the risks associated with post-training modifications,
    5. describes in detail how the testing procedure addresses the possibility that a Covered Model or Covered Model derivatives could be used to make post-training modifications or create another Covered Model in a manner that may cause Critical Harm, and
    6. describes in detail how the Developer will fulfill its obligations, implement the safeguards required by the Bill, and enact a full shutdown, as well as how the protocol may be modified.
  4. Designate senior personnel to be responsible for ensuring compliance with the protocol.
  5. Retain an unredacted copy of the protocol for as long as the Covered Model is made available for public use, plus five years.
  6. Conduct an annual review of the protocol.
  7. Publish and submit a copy of the redacted protocol to the California Attorney General and grant the Attorney General access to the unredacted protocol upon request.

Pre-Deployment Requirements

Before using a Covered Model or making it available to the public or for commercial use, Developers would have to:

  1. Assess whether the Covered Model is reasonably capable of causing or materially enabling a Critical Harm.
  2. Record and retain information on the specific tests and test results used in the assessment for as long as the Covered Model is made available for commercial use, plus five years.
  3. Take reasonable care to implement appropriate safeguards to prevent the Covered Model and Covered Model derivatives from causing or materially enabling a Critical Harm.
  4. Take reasonable care to ensure, to the extent reasonably possible, that the Covered Model's actions and the actions of Covered Model derivatives, as well as Critical Harms resulting from those models' actions, can be accurately and reliably attributed to them.

Developers would be prohibited from using or making Covered Models available to the public or commercially available if there is an unreasonable risk that the Covered Model or Covered Model derivative will cause Critical Harm.

Post-Deployment Audit Requirements

Developers of Covered Models would be required to retain third-party auditors to conduct annual, independent audits of compliance with the requirements of the Bill. The auditor's report would have to include the following:

  1. A detailed assessment of the Developer's steps to comply with the requirements of the Bill.
  2. If applicable, any identified instances of noncompliance with the requirements of the Bill, and any recommendations for how the Developer can improve its policies and processes for ensuring compliance with the requirements of this section.
  3. A detailed assessment of the Developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the Developer, its employees, and its contractors.
  4. The signature of the lead auditor certifying the results of the audit.

Developers would be required to publish and submit to the California Attorney General a redacted copy of the auditor's report and grant the Attorney General access to the unredacted auditor's report upon request.

Post-Deployment Statement of Compliance Requirements

Developers of Covered Models would be required to annually submit to the California Attorney General a statement of compliance signed by the Developer's chief technology officer or a more senior corporate officer. The statement of compliance would have to include the following:

  1. An assessment of the nature and magnitude of Critical Harms that the Covered Model or Covered Model derivatives may reasonably cause or materially enable and the outcome of the pre-deployment assessment.
  2. An assessment of the risk that compliance with the safety and security protocol may be insufficient to prevent the Covered Model or Covered Model derivatives from causing or materially enabling Critical Harms.
  3. A description of the process used by the signing officer to verify compliance with the requirements of this section, including a description of the materials reviewed by the signing officer, a description of testing or other evaluation performed to support the statement, and the contact information of any third parties relied upon to validate compliance.

Incident Reporting Requirements

Developers of Covered Models would have to report each artificial intelligence safety incident affecting the Covered Model, or any Covered Model derivative controlled by the Developer, to the Attorney General within 72 hours of learning of the incident or of learning facts sufficient to establish a reasonable belief that an incident has occurred.

Computing Cluster Requirements – Disclosures of Customer Identities and Monitoring Resource Usage

Operators of computing clusters—defined as "a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence"—would have to implement procedures to identify and assess customers' intentions to train large AI models and take action to stop such training if necessary. These rules overlap, to some degree, with rules being developed by the U.S. Department of Commerce, at the direction of the Biden Administration's 2023 Executive Order on AI, that would require certain providers to report transactions that may allow a foreign person to train a "large AI model with potential capabilities that could be used in malicious cyber-enabled activity."

Under California's measure, operators of computing clusters would have to undertake the following when a customer utilizes computing resources sufficient to train a Covered Model (an illustrative sketch of such records follows the list):

  1. Obtain the prospective customer's basic identifying information and business purpose for utilizing the computing cluster, including all of the following:
    1. The identity of the prospective customer.
    2. The means and source of payment, including any associated financial institution, credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier.
    3. The email address and telephonic contact information used to verify the prospective customer's identity.
  2. Assess whether the prospective customer intends to utilize the computing cluster to train a Covered Model.
  3. If a customer repeatedly utilizes computing resources that would be sufficient to train a Covered Model, validate the information initially collected pursuant to paragraph (1) and conduct the assessment required pursuant to paragraph (2) prior to each utilization.
  4. Retain a customer's IP addresses used for access or administration and the date and time of each access or administrative action.
  5. Maintain for seven years and provide to the Attorney General, upon request, appropriate records of actions taken under this section, including policies and procedures put into effect.
  6. Implement the capability to promptly enact a full shutdown of any resources being used to train or operate models under the customer's control.
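
As a rough illustration of the record-keeping these provisions would entail, the sketch below shows one hypothetical way an operator might structure the required customer information and access logs. The Bill does not prescribe any format; the field names and structure here are assumptions made for illustration.

```python
# Illustrative only: a hypothetical record structure covering the customer
# information and assessments SB 1047 would require computing cluster operators
# to collect and retain. Field names are assumptions, not statutory terms.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ClusterCustomerRecord:
    customer_identity: str                 # identity of the prospective customer
    payment_means_and_source: str          # e.g., financial institution, account, or wallet identifiers
    email: str                             # contact information used to verify identity
    phone: str
    intends_to_train_covered_model: bool   # operator's assessment under the Bill
    access_log: list[tuple[str, datetime]] = field(default_factory=list)  # (IP address, timestamp)

    def log_access(self, ip_address: str) -> None:
        """Record an access or administrative action (IP address, date, and time)."""
        self.access_log.append((ip_address, datetime.now()))
```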

In addition, the bill would require that:

  1. A person operating a computing cluster shall consider industry best practices and applicable guidance from the U.S. Artificial Intelligence Safety Institute, National Institute of Standards and Technology, and other reputable standard-setting organizations, and
  2. In complying with the requirements of the above provisions, a person that operates a computing cluster may impose reasonable requirements on customers to prevent the collection or retention of personal information that the person that operates a computing cluster would not otherwise collect or retain, including a requirement that a corporate customer submit corporate contact information rather than information that would identify a specific individual.

Enforcement and Penalties

The Bill would grant the Attorney General authority to bring civil actions for violations and to recover the following:

  1. Civil penalties of up to 10% of the cost of the computing power used to train the model for violations that cause death or bodily harm to another human, harm to property, theft or misappropriation of property, or that constitute an imminent risk or threat to public safety, and civil penalties of up to 30% of the cost of the computing power for subsequent violations;
  2. Civil penalties up to $50,000 for a first violation and up to $100,000 for subsequent violations by operators of computing clusters or independent auditors;
  3. Injunctive relief;
  4. Monetary and punitive damages;
  5. Attorney's fees and costs.
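
For example, if the computing power used to train a Covered Model cost $200 million, the maximum civil penalty under the first of these provisions would be $20 million for an initial qualifying violation and $60 million for a subsequent one.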

AB 2013: Generative Artificial Intelligence: Training Data Transparency

If enacted, AB 2013 would require developers of generative AI (GenAI) systems made publicly available to Californians to publicly post, by January 1, 2026, disclosures regarding the data used to train the GenAI system. "Developer" under this bill is defined broadly to include anyone that "designs, codes, produces, or substantially modifies an artificial intelligence system or service for use by members of the public." That would include anyone who modifies (i.e., releases a new version or otherwise updates) a GenAI system or service in a way that materially changes its functionality or performance, such as through retraining or fine-tuning. Notably, because the definition in AB 2013 differs from the definition of "Developer" in SB 1047, entities could fall under the definition for one bill but not the other.

Developers' disclosures would be required to provide a high-level summary of the datasets used in the development of the GenAI system or service and include the following (an illustrative sketch of one possible format follows the list):

  1. The sources or owners of the datasets.
  2. A description of how the datasets further the intended purpose of the artificial intelligence system or service.
  3. The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets.
  4. A description of the types of data points (defined as the types of labels used or the general characteristics) within the datasets.
  5. Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain.
  6. Whether the datasets were purchased or licensed by the Developer.
  7. Whether the datasets include personal information.
  8. Whether the datasets include aggregate consumer information.
  9. Whether there was any cleaning, processing, or other modification to the datasets by the Developer, including the intended purpose of those efforts in relation to the artificial intelligence system or service.
  10. The time period during which the data in the datasets was collected, including a notice if the data collection is ongoing.
  11. The dates the datasets were first used during the development of the artificial intelligence system or service.
  12. Whether the GenAI system or service used or continuously uses synthetic data generation in its development. A Developer could include a description of the functional need or desired purpose of the synthetic data in relation to the intended purpose of the system or service.
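
In effect, the required disclosure functions as a structured summary of each training dataset. The sketch below shows one hypothetical way to organize those elements; AB 2013 does not prescribe a format, and the field names and placeholder values are assumptions made for illustration.

```python
# Illustrative only: one way a developer might organize an AB 2013 training-data
# disclosure. The bill does not prescribe a format; field names and placeholders
# are assumptions.
example_disclosure = {
    "dataset_sources_or_owners": ["<source or owner>"],
    "purpose_description": "How the datasets further the system's intended purpose",
    "data_point_count": "general range; estimated for dynamic datasets",
    "data_point_types": ["<labels used or general characteristics>"],
    "includes_copyright_trademark_or_patent_protected_data": None,  # or entirely public domain
    "datasets_purchased_or_licensed": None,
    "includes_personal_information": None,
    "includes_aggregate_consumer_information": None,
    "cleaning_or_processing_description": "Any modification and its intended purpose",
    "collection_period": {"start": "YYYY-MM", "end": "YYYY-MM", "ongoing": False},
    "first_used_in_development": "YYYY-MM-DD",
    "synthetic_data_used": False,  # optionally with a description of its functional need
}
```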

AB 2013 would exempt GenAI systems and services that are used solely to ensure system security and integrity (as defined in the California Consumer Privacy Act) or to operate aircraft, or that are developed for national security, military, or defense purposes and made available only to a federal entity. The bill would apply to GenAI systems or services released on or after January 1, 2022, regardless of whether the terms of use of those systems or services include compensation.

SB 942: California AI Transparency Act

This bill is intended to enhance digital provenance measures for content created by GenAI. It would create obligations for developers of GenAI systems to create AI detection tools and to allow users to mark content as generated by AI.

"Covered Providers" means a person that creates, codes or otherwise produces a GenAI system, that has over one million monthly visitors or users and is publicly available in California. Such providers are required to make available an AI detection tool that meets the following criteria:

  1. Allows a user to assess whether image, video, or audio content, or content that is any combination thereof, was created or altered by the Covered Provider's GenAI system.
  2. Outputs any system provenance data detected in the content, i.e., information regarding the type of device, system, or service that was used to generate a piece of digital content.
  3. Does not output any data capable of being associated with a particular user.
  4. Is publicly accessible, subject to certain limitations.
  5. Allows a user to upload content or provide a uniform resource locator (URL) linking to online content.
  6. Supports an application programming interface that allows a user to invoke the tool without visiting the Covered Provider's internet website.
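
As a rough illustration of the API-accessible detection tool described above, the sketch below shows a minimal, hypothetical interface. SB 942 does not specify how the tool or its API must work, and the names, parameters, marker string, and return fields are all assumptions made for illustration.

```python
# Illustrative only: a hypothetical interface for an SB 942-style AI detection
# tool. The bill requires an API but does not define one; the names, parameters,
# and marker below are assumptions, not statutory or standard terms.
from typing import Optional, TypedDict

class DetectionResult(TypedDict):
    created_or_altered_by_provider_genai: bool   # was the content created or altered by this provider's GenAI system?
    system_provenance_data: Optional[dict]       # provenance data detected in the content, if any
    # Note: no user-identifying data is returned, consistent with the bill's prohibitions.

def detect(content_bytes: bytes) -> DetectionResult:
    """Check uploaded content (or content fetched from a user-supplied URL) for a latent disclosure."""
    marker = b"example-provider:genai-disclosure"   # hypothetical marker, not an industry standard
    found = marker in content_bytes
    return DetectionResult(
        created_or_altered_by_provider_genai=found,
        system_provenance_data={"marker": marker.decode()} if found else None,
    )
```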

Covered Providers would also be required to collect user feedback related to the efficacy of the AI detection tool and incorporate relevant feedback into any attempt to improve the efficacy of the tool.

Covered Providers would be prohibited from doing any of the following:

  1. Collecting or retaining personal information from users of the Covered Provider's AI detection tool, unless the user submits feedback and opts in to being contacted by the provider.
  2. Retaining any content submitted to the AI detection tool for longer than is necessary to comply with this section.
  3. Retaining any personal data from content submitted to the AI detection tool by a user.

Covered Providers would be required to offer users the option to include a manifest disclosure, e.g., a watermark, on any content created by the GenAI system. The manifest disclosure would have to meet the following criteria:

  • Identify the content as AI generated.
  • Be clear, conspicuous, appropriate for the medium of the content, and understandable.
  • Be permanent or extraordinarily difficult to remove, to the extent technically feasible.

Covered Providers would also have to include latent disclosures, i.e., disclosures that are present but not easily perceived by natural persons, in AI-generated content. Where technically feasible and reasonable, these disclosures must contain the following elements, either directly or through a link to a permanent website:

  1. Name of the Covered Provider.
  2. Name and version number of the GenAI system that created or altered the content.
  3. Time and date of the content's creation or alteration.
  4. A unique identifier.

The disclosure would have to be detectable by the provider's AI detection tool, be consistent with widely accepted industry standards, and be permanent or extraordinarily difficult to remove to the extent technically feasible.
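
For illustration, the latent disclosure can be thought of as a small provenance payload carried in the content's metadata. The sketch below assembles the four required elements as JSON; the encoding, field names, and example values are assumptions, and real implementations would likely follow a widely accepted industry provenance standard (such as C2PA) rather than this ad hoc format.

```python
# Illustrative only: assembling the four latent-disclosure elements SB 942 would
# require, as a JSON payload. The encoding and field names are assumptions; the
# bill defers to widely accepted industry standards for how the disclosure is
# actually embedded and made permanent.
import json
import uuid
from datetime import datetime, timezone

latent_disclosure = {
    "provider_name": "Example Provider Inc.",          # hypothetical Covered Provider
    "system_name_and_version": "ExampleGen 2.1",       # hypothetical GenAI system
    "created_or_altered_at": datetime.now(timezone.utc).isoformat(),
    "unique_identifier": str(uuid.uuid4()),
}

payload = json.dumps(latent_disclosure)  # would then be embedded so the provider's detection tool can read it
```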

If the provider licenses its GenAI system to a third party, it would have to contractually require that third party to maintain the system's capability to include the latent disclosure. If the Covered Provider learns that the licensee modified the GenAI system such that it is no longer capable of providing the disclosure, the Covered Provider would have to revoke the license within 96 hours of the discovery.

Violations would be enforced by the state Attorney General, a city attorney, or a county counsel and could result in penalties of $5,000 per violation, with each day of a continuing violation constituting a discrete violation. A third-party licensee's failure to cease using the system after its license has been revoked could subject it to an action for injunctive relief and reasonable fees and costs. Plaintiffs could also recover reasonable attorneys' fees and costs.

Any product, service, website, or app that provides exclusively non-user-generated video games, television, streaming content, movies, or interactive experiences would be exempt.

If enacted, the effective date would be January 1, 2026.

AB 2602: Employment Contracts Involving Digital Replicas

If this bill is enacted, then starting on January 1, 2025, provisions in agreements for the performance of personal or professional services by a "digital replica" of an individual—defined as "a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work, or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered"—would be unenforceable if they meet all of the following conditions:

  1. The provision allows for the creation and use of a digital replica of the individual's voice or likeness in place of work the individual would otherwise have performed in person.
  2. The provision does not include a reasonably specific description of the intended uses of the digital replica. (However, failure to include a reasonably specific description of the intended uses does not render the provision unenforceable if the uses are consistent with the terms of the contract "for the performance of personal or professional services and the fundamental character of the photography or soundtrack as recorded or performed.")
  3. The individual was not represented by legal counsel or a labor union.

The bill makes clear that it renders unenforceable only those specific provisions of a contract that meet the criteria above and does not otherwise affect any other provisions. Moreover, the bill would not affect exclusivity grants contained in otherwise unenforceable provisions.

Notably, the bill does not include any enforcement mechanisms or penalties but simply establishes that provisions in agreements meeting the criteria above are not enforceable.

DWT's AI Team regularly advises on compliance with emerging AI regulations and will continue to track developments with these bills.