All-In on AI: Bipartisan Senate AI Policy Roadmap Identifies Areas of Consensus
On March 15, 2024, the Bipartisan Senate Artificial Intelligence Working Group (the "AI Working Group")—led by Senate Majority Leader Chuck Schumer (D-N.Y.) and Sens. Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.), and Todd Young (R-Ind.)—announced the release of their roadmap for developing AI policy in the U.S. Senate (the "Roadmap") growing out of nine AI Insight Forums identifying areas of consensus, as well as areas of disagreement.
Rather than detail specific proposals for regulating AI, the Roadmap instead identifies key areas of bipartisan consensus for the adoption of additional policies and encourages the executive branch and relevant congressional committees to move forward and collaborate closely and frequently on AI legislation focused on the identified areas. The 30-page Roadmap identifies eight key areas of bipartisan consensus and aims to generate further consideration of recommendations for AI legislation.
In addition to highlighting priority areas where there is apparent consensus, the Roadmap also reveals several important insights about how these legislators are thinking about future AI policies and regulations.
First, the Roadmap embraces the concept of using risk-based solutions to focus new policies and rules on those tools/applications that present the greatest risk, rather than applying broad rules indiscriminately. The Roadmap does not call for comprehensive AI regulation, akin to the European Union Artificial Intelligence Act (the "EU AI Act"), or the establishment of new federal regulatory agencies for AI, whether for technological assessment or general-purpose licensing regimes. Rather, the Roadmap encourages relevant committees to consider discrete AI policy issues, with the apparent intention of advancing more piecemeal legislation touching on different aspects of AI. Like the EU AI Act, however, categorical prohibitions or bans on certain applications of AI remain under consideration (e.g., to address online child sexual abuse material, protect children, prevent fraud and deception, and ban social scoring uses, among others) and "[r]eview whether other potential uses for AI should be either extremely limited or banned."
Second, the Roadmap reflects an understanding that the AI value chain includes numerous actors, and that duties and liabilities should be carefully considered (and allocated) according to an entity's role in the value chain.
Third, the Roadmap clearly supports adherence to the National Institute of Standards and Technology ("NIST") AI Risk Management Framework ("AI RMF") as a means for ensuring heightened AI governance processes and suggests that providers in the government procurement process should receive more favorable treatment if they comply with the AI RMF.
Fourth, and finally, the roadmap advocates for leveraging existing law and authority to address gaps in coverage, rather than adoption of a broad new national "horizontal" approach to new rules or policies. However, the Roadmap does not address the emergence of state and local laws regulating AI, leaving open questions of preemption and other federal action in the face of new state and local AI regulation such as the recently enacted Colorado Artificial Intelligence Act.
We highlight the key areas and policy recommendations of the Roadmap below.
1. Supporting U.S. Innovation in AI
The AI Working Group urged the executive branch and the Senate Appropriations Committee to pursue significant federal investment in AI, encouraging each to explicitly meet the level of spending proposed by the National Security Commission on Artificial Intelligence (NSCAI), which recommends investing at least $32 billion per year for non-defense AI innovation.
To that end, as well as to make up for the gap between current spending levels and the spending levels proposed by the NSCAI, the group recommended allocating emergency appropriations for a variety of priority endeavors, including among others:
- Funding for a cross-government AI research and development effort, including relevant infrastructure that spans the Department of Energy (DOE), Department of Commerce, National Science Foundation, NIST, National Institutes of Health, and National Aeronautics and Space Administration (NASA)
- Funding the outstanding CHIPS and Science Act (P.L. 117-167) accounts not yet fully funded
- Authorizing the National AI Research Resource by passing the CREATE AI Act (S.2714)
- Funding R&D activities, and developing appropriate policies, at the intersection of AI and robotics to advance national security, workplace safety, industrial efficiency, economic productivity, and competitiveness, through a coordinated interagency initiative, and AI in critical infrastructure, including for smart cities and intelligent transportation system technologies
- Funding "AI Grand Challenge" programs such as those described in Section 202 of the Future of AI Innovation Act (S. 4178) and the AI Grand Challenges Act (S. 4236)
- Funding for AI efforts at NIST, including AI testing and evaluation infrastructure and the U.S. AI Safety Institute
2. High-Impact Uses of AI
The Roadmap articulates a clear preference for using a risk-based approach to targeting new legislation, a welcome signal that members of the AI Working Group recognize the potential harm in over-regulating this emerging technology and instead encourage new policies focused on the highest-risk tools and applications.
The AI Working Group stated that "AI use cases should not directly or inadvertently infringe on constitutional rights, imperil public safety, or violate existing antidiscrimination laws," and underscored that where U.S. law requires a clear understanding of how automated systems work, the opaque nature of some AI systems may be unacceptable. Here, the group recommended developing legislation to ensure that regulators and impacted stakeholders can access information relevant to enforcing any existing laws and to consider, where appropriate, transparency, explainability, and testing and evaluation requirements for high-risk uses of AI. In particular, the Roadmap specifically calls out the financial services and housing sectors as areas where additional transparency and accountability measures may be needed. Additional priorities for high-impact uses of AI include, among others:
- Developing standards for use of AI in critical infrastructure
- Investigating risks and opportunities in the use of AI systems in the housing sector, with particular emphasis on issues of transparency and accountability
- Ensuring appropriate testing and evaluation of AI systems in the federal procurement process and streamlining the procurement process for AI systems
- Developing legislation to address online child sexual abuse material
- Considering legislation to ban the use of AI for social scoring
- Considering legislation to provide transparency for healthcare providers and the broader public regarding the use of AI in medical products and clinical support services, including the data used to train AI models
3. Safeguarding Against AI Risks
The Working Group encouraged private companies to perform detailed testing to understand the potential harms of the AI systems they are developing, and not to deploy AI systems that cannot meet industry standards. Moreover, the group pointed to the need for an analytical and legislative framework to address when pre-deployment evaluation of AI models is required.
The Working Group urged the relevant congressional committees to adopt "a resilient risk regime" that focuses on the capabilities of AI systems, protects proprietary information, and allows for continued AI innovation. Here, the Working Group underscored their support for legislative and policy efforts honed to capabilities-based regime that considers all levels of risk, particularly in regard to developing and standardizing risk testing and evaluation methodologies, such as "red-teaming, sandboxes and testbeds, commercial AI auditing standards, bug bounty programs, as well as physical and cyber security standards," with particular emphasis on the federal procurement process.
Further, the Working Group highlighted the need to understand the legal and policy implications of different product release choices for AI systems, such as investigating the differences between closed versus fully open-source models and the full spectrum of product release choices between those two ends.
4. Privacy and Liability
The AI Working Group underscored the need and its support for strong, baseline federal data privacy legislation to address AI risks and opportunities while maximizing the benefits of AI. The AI Working Group specifically called out the need for a federal privacy law to provide legal certainty for AI developers and protection for consumers in such areas as data minimization, data security, consumer data rights, consent and disclosure, and the activities of data brokers "to reduce the prevalence of non-public personal information being stored in, or used by, AI systems." The group also encouraged relevant committees to consider whether new standards are needed "to hold AI developers and deployers accountable if their products or actions cause harm to consumers, or to hold end users accountable if their actions cause harm, as well as how to enforce any such liability standards," recognizing that the "'black box' nature of some AI algorithms, and the layered developer-deployer structure of many AI products, along with the lack of legal clarity, might make it difficult to assign liability for any harms."
5. Transparency, Explainability, Intellectual Property, and Copyright
Here, the AI Working Group supports the development of legislation to establish transparency requirements for "public-facing" AI tools and applications, while allowing for use case specific requirements where necessary and beneficial. The Roadmap also suggests the development of best practices for AI deployers to disclose that their products use AI, building on ongoing federal effort in this space. In addition, the AI Working Group recommended further consideration of federal policy related to data sets used by AI developers to train their models, including data sets that might contain sensitive personal data or are protected by copyright, and evaluation of whether there is a need for transparency requirements in certain training data uses.
Also, the AI Working Group urged consideration of legislation to:
- establish a coherent approach to public-facing transparency requirements for AI systems
- incentivize providers of generative AI software and hardware products to develop content provenance information
- incentivize platforms to maintain access to and display content provenance information
- protect against the unauthorized use and commercialization of one's name, image, likeness, and voice using AI
6. AI and the Workforce
The AI Working Group encouraged the development of legislation aimed at upskilling the private-sector workforce to successfully participate in an AI-enabled economy, including potential incentives for businesses to develop strategies to integrate new technologies and reskilled employees into the workplace, as well as incentives for employees across the spectrum to obtain retraining from community colleges and universities.
In particular, the AI Working Group highlighted the Workforce Data for Analyzing and Tracking Automation Act (S. 2138), which would authorize the Bureau of Labor Statistics, with the assistance of the National Academies of Sciences, Engineering, and Medicine, to record the effect of automation on the workforce and measure those trends over time, including job displacement, the number of new jobs created, and the shifting in-demand skills.
7. Elections and Democracy
The AI Working Group encouraged relevant congressional committees as well as AI developers and deployers to move forward with efforts to advance effective watermarking and other digital media content provenance practices related to AI-generated or AI-augmented election content.
The AI Working Group also highlighted relevant toolkits developed by federal actors to support frontline election stakeholders, which include the U.S. Election Assistance Commission's AI Toolkit for Election Officials and the Cybersecurity and Infrastructure Agency's Cybersecurity Toolkit and Resources to Protect Elections.
8. National Security
The Working Group urged the relevant congressional committees to adopt legislation to bolster the use of AI in U.S. cyber capabilities, including developing career pathways and training programs for digital engineering, ensuring efficient and swift handling of security clearance applications particularly for AI talent, and improving lateral and senior placement opportunities and other mechanisms to expand the AI talent pathway into the military. The group also underscored the need to address and mitigate the rising energy demand for AI systems, as well as explore opportunities for leveraging advanced AI models to improve the management and risk mitigation of space debris and consider codifying DOD's transparency regarding its policy on fully autonomous lethal weapon systems.
Additionally, the group urged consideration of policy proposals that develop frameworks to determine:
- when, or if, export controls should be placed on certain, powerful AI systems
- when an AI system, if acquired by an adversary, would be powerful enough that it would pose such a grave risk to national security that it should be considered classified
Looking Ahead
While the AI policy roadmap contains significant and varied policy recommendations and directions for federal AI legislation, one thing is clear: U.S. AI policy is moving full speed ahead. The AI Working Group acknowledged the ongoing concerns and uncertainties associated with artificial general intelligence (AGI), including that being developed by adversaries, and wants to be sure long-term monitoring continues, encouraging "the relevant committees to better define AGI in consultation with experts, characterize both the likelihood of AGI development and the magnitude of the risks that AGI development would pose, and develop an appropriate policy framework based on that analysis." That said, many civil rights and civil liberties groups think the roadmap puts "industry interests ahead of the public interest."
DWT's AI team regularly counsels clients on how their business practices can navigate and comply with emerging AI laws. We will continue to monitor the rapid development of other state and new federal AI laws and regulations.