White House Issues Guidance for AI Regulation and “Non-Regulation”
In a highly orchestrated policy announcement at the 2020 CES show, the Trump Administration released a draft framework and set of principles for the governance of AI technology and applications in the U.S.
Specifically, the Office of Management and Budget (OMB) released a memorandum to all federal agencies and executive offices, the “Guidance for Regulation of Artificial Intelligence Applications” (AI Guidelines), a detailed policy document articulating the Administration’s regulatory and non-regulatory approach to the many forms of emerging artificial intelligence (AI) technology and applications in society today. The Administration is inviting public comment on the AI Guidelines for a period of sixty days, concluding on March 13, 2020.
According to the Administration’s AI policy lead, Chief Technology Officer Michael Kratsios, the AI Guidelines are intended to achieve three overarching policy goals:
- Encourage public engagement in the AI discussion
- Promote a “light-touch” AI regulatory approach
- Promote the adoption and development of trustworthy AI
The guidelines reflect a national AI strategy based on a philosophy of regulatory restraint, as mandated by President Trump’s Executive Order 13859. Indeed, the AI Guidelines call for federal agencies to consider a regulatory approach that fosters “innovation, growth, and engenders trust, while protecting core American values” through both regulatory and non-regulatory actions and by “reducing unnecessary barriers to the development and deployment of AI.” To that end, the AI Guidelines direct federal agencies to “avoid regulatory and non-regulatory actions that needlessly hamper AI innovation and growth.”
We purposely wanted to avoid top-down, one-size-fits-all, blanket regulations...
Lynne Parker, U.S. Deputy Chief Technology Officer,
White House Office of Science and Technology Policy
Notably, the AI Guidelines do not articulate a policy preference or approach for regulating any specific AI technology or application, such as facial recognition, deepfakes, or algorithmic decision-making systems. Nor do the guidelines articulate high-level governance or ethical principles that would cover all AI technologies.
Instead, the Administration is using the comment process to design potential regulation of AI technologies consistent with “the U.S. approach to free-market capitalism, federalism, and good regulatory practices.”
In conjunction with the AI Guidelines, the Department of Transportation also released its latest guidance on the development of autonomous vehicles, largely reflecting the regulatory restraint articulated in the AI Guidelines. Ironically, at the same time that the Administration was touting its philosophy of regulatory restraint, the Commerce Department’s Bureau of Industry and Security issued an interim final rule restricting the sale and export of geospatial AI software, which leverages AI to analyze satellite imagery.
Guidelines Encourage Innovation and Growth in AI
The Administration’s AI Guidelines direct agencies to first consider actions that encourage innovation and growth and engender trust while also “protecting core American values,” which, according to OMB, include “protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property.”
Specifically, agencies must consider approaches that reduce barriers to the development and deployment of AI and avoid actions that hamper AI innovation and growth. Further, agencies must undertake AI-specific cost-benefit analyses before taking any action.
These principles reflect a traditionally conservative preference for limited regulation of emerging technologies, intended to ensure that new regulations do not stifle innovation – what one agency head has called “regulatory humility.” Notably, this approach differs from some proposals in Congress, which seek to regulate or limit algorithms or certain AI applications, and marks a clear move away from the type of top-down regulatory oversight that many believe the EU may adopt in the months ahead.
Notably, the OMB guidance encourages federal agencies to consider preempting state laws in certain situations, including, where necessary, to address inconsistent, duplicative, or burdensome state laws “that prevent the emergence of a national market.” The AI Guidelines specifically direct federal agencies to consider the effect of potential federal regulation on existing or potential state actions.
This approach mirrors many of the same issues raised in the current debate over the need for a national privacy framework, and the potential preemption of certain state laws.
Key Principles Identified in the AI Guidelines
The 10 principles identified in the AI Guidelines are as follows:
1. Public trust in AI
The “continued adoption and acceptance [of AI] will depend significantly on public trust and validation,” and “risks to privacy, individual rights, autonomy, and civil liberties…must be carefully assessed and appropriately addressed.” Therefore, the government’s approach to AI should “promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI.”
2. Public participation
“Public participation…will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence.” Accordingly, “…agencies should provide ample opportunities for the public to … provide information and participate in all stages of the rulemaking process.” Further, “…agencies are also encouraged…to inform the public and promote awareness and widespread availability of standards and the creation of other informative documents.”
3. Scientific integrity and information quality
Agencies should implement transparent scientific processes when issuing regulations and guidance, which should include “…articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results.” Consistent with scientific integrity and information quality principles, “…agencies should hold information...that is likely to have a clear and substantial influence on important public policy or private sector decisions to a high standard of quality, transparency, and compliance.”
4. Risk assessment and management
Agencies should use a risk-based approach “…to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.” Transparent evaluations of risk are paramount and agencies should “re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability.” In addition, the assessment of the scale and nature of consequences resulting from the failure or success of AI “…can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks.”
5. Benefits and costs
Agencies should “…carefully consider the full societal costs, benefits, and distributional effects…” before regulating the development or deployment of AI. This includes weighing the “potential benefits and costs of employing AI” against those of the systems AI would complement or replace, and evaluating the degree of risk tolerated in existing, comparable applications. Agencies should also consider other outcomes that may result from implementation of the AI, such as “…critical dependencies…and changes in human processes associated with AI implementation [that] may alter the nature and magnitude of the risks and benefits.”
6. Flexibility
“Agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.”
7. Fairness and non-discrimination
“…agencies should consider issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.”
8. Disclosure and transparency
“… Transparency and disclosure can increase public trust,” but “…agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency. What constitutes appropriate disclosure and transparency is context-specific, depending on assessments of potential harms, the magnitude of those harms, the technical state of the art, and the potential benefits of the AI application.”
9. Safety and security
“Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses…”
10. Interagency coordination
“Agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI.”
Fairness, Bias Mitigation, and Transparency
The discussion of fairness, discrimination, disclosure, and transparency in principles #7 and #8 could be read as a tacit acknowledgement that some regulation may be necessary in certain circumstances. A number of high-profile issues, such as bias and discrimination, are called out in this framework.
For example, principle #7 (Fairness and Non-Discrimination) asserts that AI applications have the potential to reduce present-day discrimination caused by human subjectivity, while also acknowledging the risk that certain applications can introduce bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI.
Similarly, principle #8 (Disclosure and Transparency) recognizes that transparency can increase public trust and confidence in AI, and that at times certain disclosures may be necessary or appropriate. For example, this principle acknowledges that disclosing when AI is being used may be appropriate when the application is used to interface with human beings. At the same time, this principle also recognizes that certain existing legal or policy regimes may be sufficient to address such concerns, so careful evaluation is necessary.
Data Quality and Trustworthiness
Recognizing that data quality is key to enabling trustworthy and robust AI, principle #3 (Scientific Integrity and Information Quality) directs agencies to hold information that is likely to have a clear and substantial influence on important public policy or private sector decisions “to a high standard of quality, transparency, and compliance.” Specific best practices cited in the OMB Guidance include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of AI application results.
Non-Regulatory Actions
Although the AI Guidelines indicate a strong preference for agencies to adopt a light-touch approach, there is no express prohibition on the adoption of new regulations. However, the AI Guidelines do offer examples of several “non-regulatory” actions that agencies may take to address potential risks posed by AI, including:
- Sector-specific policy guidance or frameworks – Agencies should consider using existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks, and collaborate with industry on playbooks or voluntary frameworks.
- Pilot programs and experiments – Agencies should consider developing and permitting the use of pilot programs, including safe harbors for systems or applications tested or used within those programs.
- Voluntary consensus standards – Agencies should encourage the development of voluntary consensus standards and take the work of independent standards-setting organizations into account when deciding whether new regulations are necessary.
The promotion of safe harbors, pilot programs, best practices, and other frameworks is encouraging, and could be a useful tool for advocating alternative approaches where regulators appear intent on adopting new regulations.
Next Steps
Following the public comment period that ends March 13, and upon issuance of the finalized guidelines, agencies with regulatory authority relevant to AI will have 180 days to submit plans to the OMB to “demonstrate consistency” with the AI Guidelines. Such agencies must identify statutory authorities specifically authorizing agency regulation of AI applications or technology, as well as any collection of AI-related information from regulated entities.
Covered agencies are also expected to list and describe any planned or considered regulatory actions on AI.
Several agencies are already in the middle of developing new regulatory policies centered on AI and algorithmic decision-making, including the FDA, the PTO, the Commerce Department, and HUD. DWT’s AI Team will be closely following those proceedings, and any new proposals, for conformance with these guidelines. Please contact the authors to learn more about these and other AI regulatory and policy developments.