NIST Releases Final Risk Management Framework for Developing Trustworthy AI
On January 26, 2023, the National Institute of Standards and Technology (NIST) released the final version of its AI Risk Management Framework (RMF).
The RMF is the culmination of an intensive drafting process in which NIST offered interested stakeholders two opportunities to comment on working drafts over the past year.
Work on the RMF began in 2021, following a Congressional mandate set forth in the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), part of the 2021 National Defense Authorization Act, and well before new AI tools generated attention-grabbing headlines about generative AI producing captivating images and chat responses. The final version of the RMF arrives at a time when AI is facing increasing scrutiny, both from regulators across the globe and from litigants in the US, with "trustworthiness" being a key component of responsible AI and necessary to "minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts."
The RMF is a non-binding, voluntary framework. However, it may serve as an influential model that guides industry practices toward the development of trustworthy AI. While there is no formal safe harbor protection associated with the RMF, adherence to its principles could be evidence that an organization has worked in good faith to mitigate the potential harms of its AI systems.
The RMF largely follows the structure of the previous draft, although there are some notable changes and additions, which are summarized in more detail below. NIST also published a Playbook intended to guide businesses through the core components of the RMF, a Roadmap for contemplated further iterations, and Crosswalks comparing the RMF to other AI standards and proposed regulations, including the EU AI Act, the OECD Recommendations on AI, and the White House Blueprint for an AI Bill of Rights.
Seven Trustworthiness Characteristics
The RMF first sets out seven Trustworthiness Characteristics for AI:
Valid and Reliable
- Validation is the confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.
- Reliability is the ability of an AI system to perform as required, without failure, for a given time interval, under given conditions.
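By way of illustration only (this is not language or code from the RMF), the short Python sketch below treats validation as an objective-evidence check that a stated performance requirement for the intended use is met on held-out data; the 95 percent accuracy requirement and the toy labels and predictions are hypothetical assumptions.

```python
# Minimal sketch: treating "validation" as objective evidence that a stated
# requirement for the intended use is met. The 95% accuracy requirement and
# the toy data below are hypothetical placeholders, not values from the RMF.

REQUIRED_ACCURACY = 0.95  # hypothetical requirement for the intended use

def validate(predictions: list[int], labels: list[int]) -> bool:
    """Return True if the system meets the stated accuracy requirement."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    print(f"Measured accuracy: {accuracy:.2%} (required: {REQUIRED_ACCURACY:.0%})")
    return accuracy >= REQUIRED_ACCURACY

# Toy evaluation data standing in for a held-out test set.
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 0, 0]
print("Validated" if validate(predictions, labels) else "Requirement not met")
```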
Safe
- Safe AI systems are those that do "not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered."
Secure and Resilient
- AI systems, as well as the ecosystems in which they are deployed, may be said to be secure and resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary.
Accountable and Transparent
- Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so.
- The RMF offers less concrete recommendations on accountability, noting that the relationship between risk and accountability varies across legal, sectoral, and cultural contexts. The roles of AI actors (e.g., developers, deployers) should also be considered in setting levels of accountability appropriate to the context in which a system operates.
Explainable and Interpretable
- Explainability refers to a representation of the mechanisms underlying AI systems' operation, whereas interpretability refers to the meaning of AI systems' output in the context of their designed functional purposes.
- Transparency, explainability, and interpretability are distinct characteristics that support each other. Transparency can answer the question of "what happened" in the system. Explainability can answer the question of "how" a decision was made in the system. Interpretability can answer the question of "why" a decision was made by the system and its meaning or context to the user. Together, these allow AI system users and operators "to gain deeper insights into the functionality and trustworthiness of the system, including its outputs."
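As an illustration only, the sketch below applies one common explainability technique, permutation feature importance, to a toy model to show the kind of "how" answer explainability can provide; the RMF does not prescribe any particular method, and the scikit-learn tooling and synthetic data here are assumptions.

```python
# Illustrative sketch only: permutation feature importance on a toy model.
# The synthetic dataset and model choice are assumptions, not RMF guidance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a deployed system's inputs and outcomes.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# "How" was the decision made: rank features by their effect on accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```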
Privacy-Enhanced
- Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. Data minimizing methods such as de-identification and aggregation for certain model outputs can support design for privacy-enhanced AI systems.
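As a purely illustrative sketch (the RMF does not mandate any particular tooling), the Python example below shows the two data-minimization steps mentioned above, de-identification and aggregation, applied to a hypothetical table of model outputs; the pandas usage and column names are assumptions.

```python
# Illustrative sketch only: de-identification (dropping direct identifiers)
# and aggregation (reporting group-level outputs). All data is hypothetical.
import pandas as pd

records = pd.DataFrame({
    "name":     ["Ana", "Ben", "Cruz", "Dee"],
    "zip_code": ["98101", "98101", "98052", "98052"],
    "age":      [34, 41, 29, 52],
    "score":    [0.82, 0.67, 0.91, 0.74],  # hypothetical model output
})

# De-identification: remove direct identifiers before downstream use.
deidentified = records.drop(columns=["name"])

# Aggregation: release only group-level statistics rather than row-level outputs.
aggregated = deidentified.groupby("zip_code")["score"].agg(["count", "mean"])
print(aggregated)
```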
Fair – with Harmful Bias Managed
- Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. The three major categories of AI bias to be considered and managed (discussed below) are systemic, computational and statistical, and human-cognitive. Bias is tightly associated with the concepts of transparency as well as fairness in society and can perpetuate and amplify harm to society.
NIST recognizes that organizations deploying AI systems might "face a tradeoff between predictive accuracy and interpretability. Or, under certain conditions such as data sparsity, privacy-enhancing techniques can result in a loss in accuracy, affecting decisions about fairness."
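To make the idea of managing computational and statistical bias more concrete, the illustrative Python sketch below compares selection rates across two groups and flags a large disparity for review; the groups, outcomes, and the 0.80 reference ratio are hypothetical assumptions, not RMF requirements.

```python
# Illustrative sketch only: surface statistical bias by comparing selection
# rates across groups. Group labels, outcomes, and the 0.80 ("four-fifths")
# reference threshold are assumptions, not RMF requirements.
from collections import defaultdict

# Hypothetical (group, model_decision) pairs; 1 = favorable outcome.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

counts = defaultdict(lambda: [0, 0])          # group -> [favorable, total]
for group, outcome in decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f} (flag for review if below 0.80)")
```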
Four Core Components
Despite these challenges and tradeoffs, these seven characteristics of trustworthy AI provide a foundation for what NIST calls the core components of its AI Risk Management Framework. The RMF "Core" provides actionable steps to map, measure, manage and ultimately govern AI systems:
Map
- The Map function establishes the context to frame risks related to an AI system. The information gathered while carrying out the Map function enables negative risk mitigation and informs decisions about processes such as model management, as well as an initial determination of the appropriateness of, or need for, an AI solution. Outcomes of the Map function are the basis for the Measure and Manage functions.
Measure
- The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the Map function and informs the Manage function.
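A minimal sketch of the kind of quantitative monitoring the Measure function contemplates appears below; the single accuracy metric, the weekly figures, and the tolerance threshold are hypothetical and, in practice, would be set through the Govern function.

```python
# Illustrative sketch only: track a quantitative metric over time and flag
# degradation. The threshold and weekly accuracy figures are hypothetical.
ACCURACY_FLOOR = 0.90  # hypothetical risk tolerance set under the Govern function

weekly_accuracy = {
    "2023-W01": 0.94,
    "2023-W02": 0.93,
    "2023-W03": 0.88,  # degradation that should feed the Manage function
}

for week, accuracy in weekly_accuracy.items():
    status = "OK" if accuracy >= ACCURACY_FLOOR else "ESCALATE to risk owner"
    print(f"{week}: accuracy={accuracy:.2f} -> {status}")
```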
Manage
- The Manage function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the Govern function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.
Govern
- The Govern function is a crosscutting function that is infused throughout AI risk management and is informed by the previous steps of mapping, measuring and managing risks. Aspects of Govern, especially those related to compliance or evaluation, should be integrated into each of the other functions as organizations develop and evolve these frameworks.
What's New
As noted, the RMF is the product of an iterative process during which NIST received numerous comments from interested parties. Notable changes and additions from the last version of the RMF include:
- NIST explicitly acknowledges that some of the characteristics of trustworthy AI are in tension and that there are likely to be tradeoffs between them. For example, the RMF states that "harm/cost-benefit tradeoffs will continue to be developed and debated" as a means of mitigating negative AI risks while balancing "societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy." Importantly, dealing with tradeoffs depends on "the values at play in the relevant context and should be resolved in a manner that is both transparent and appropriately justifiable."
- The RMF acknowledges additional challenges in measuring AI risk, which may differ between early and late stages of the AI lifecycle:
- The current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, and applicability to different AI use cases, is an AI risk measurement challenge.
- Divergence of risk perspectives across different actors in the AI lifecycle. For example, an AI developer who makes AI software available, such as pre-trained models, can have a different risk perspective than an AI actor who is responsible for deploying that pre-trained model in a specific use case. Such deployers may not recognize that their particular uses could entail risks which differ from those perceived by the initial developer.
- Lack of an accepted baseline for comparison to human decision-making. Evaluating risks of AI systems that replace human decision-making requires some form of baseline for comparison. NIST acknowledges that baseline metrics are "difficult to systematize since AI systems carry out different tasks – and perform tasks differently – than humans."
- New guidance regarding risk prioritization: Risk prioritization may differ between AI systems that are designed or deployed to interact directly with humans and those that are not. For example, AI systems trained on sensitive or personal data may call for higher initial risk prioritization, whereas systems with less impact on humans (those that interact only with computational systems and are trained on non-sensitive datasets) may call for lower risk prioritization. Importantly, "regularly assessing and prioritizing risk based on context remains important because non-human-facing AI systems can have downstream safety or social implications."
- New Appendix C: AI Risk Management and Human-AI Interaction. In this Appendix, NIST highlights issues that merit further consideration and research, including:
- A clear definition and differentiation of human roles and responsibilities in decision-making and overseeing AI systems.
- An understanding of human cognitive biases that may impact the design, development, deployment, and evaluation of AI systems.
- The possibility that AI systems can amplify human biases, leading to more biased decisions than the AI or human alone would make.
- The RMF contains new acknowledgments that different parties in the AI lifecycle will have different obligations, depending on their roles, and that AI risks should not be considered in isolation. For example, "an AI designer or developer may have a different perception of the characteristics than the deployer," and "organizations developing an AI system often will not have information about how the system may be used." AI risk management "should be integrated and incorporated into broader enterprise risk management strategies and processes."
- The RMF introduces the new concept of residual risk, "defined as risk remaining after risk treatment," which "directly impacts end users or affected individuals and communities." The RMF states that "documenting residual risks will call for the system provider to fully consider the risks of deploying the AI product and will inform end users about potential negative impacts of interacting with the system." The term "system provider" is not defined within the RMF but would seem to refer to third-party "providers," which include "developers, vendors, and evaluators of data, algorithms, models, and/or systems and related services for another organization or the organization's customers or clients."
What's Next
NIST will continue to periodically review and update the RMF and is soliciting comments on the Roadmap and Playbook. Companies that develop or deploy AI tools should consider reviewing those materials and submitting comments. Please contact us with questions or if interested in submitting comments.
DWT's AI Team will continue to monitor and report on these and other legal and policy developments impacting AI.