Governor Newsom has vetoed SB 1047 but signed into law three other bills regulating the development and deployment of certain artificial intelligence (AI) tools. As explained in detail in our prior post, California's Legislature passed a number of bills aimed at regulating AI.

SB 1047—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—had been the most controversial of the AI bills passed by the Legislature, drawing vocal support from some parties and strong opposition from others. The bill would have imposed substantial obligations on developers of the most powerful "frontier" AI models, including conducting safety assessments and determining that their models lack the capability to create an unreasonable risk of causing or enabling catastrophic harms.

In a statement accompanying his veto, Governor Newsom voiced concern that targeting only the largest AI models may not be the most appropriate way to ensure the development of trustworthy AI and that such legislation could curtail innovation. Specifically, the governor said:

Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system's actual risks regardless of these factors. … While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by this technology.

Apart from SB 1047, the governor signed the three other AI governance bills we discussed in our prior post. These new laws will (1) require developers of generative AI (GenAI) systems to disclose information about the data used to train their models, (2) require developers of GenAI systems to implement technical measures that identify content as AI-generated, in furtherance of transparency objectives, and (3) create new requirements for employment agreements involving the use of digital replicas.

DWT's AI Team regularly advises on compliance with emerging AI regulations and will continue to track state and federal legislative developments.