CISA, UK NCSC, and 17 Other Countries Issue Landmark Joint Guidelines for Secure AI System Development
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (UK NCSC), along with partner agencies from 17 nations, have released Guidelines for Secure AI System Development (the "Guidelines"). The Guidelines provide security considerations and risk mitigations for four stages of the AI development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance. The Guidelines are recommendations "to help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties," and are intended for providers of any AI systems, whether the provider has developed those systems from scratch or built them on top of other providers' software, and "are aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs)."
The Guidelines call for AI systems developers to take primary responsibility for the security of AI systems rather than to push responsibility down to system users. This approach aligns with CISA's Secure by Design principles and the National Cybersecurity Strategy (NCS). As discussed in our prior blog post, the NSC emphasizes shifting responsibility for software security toward producers and away from users and calls for legislation that would prohibit software providers from disclaiming liability for security vulnerabilities. Consistent with the NCS, the Guidelines state that users of AI systems "do not typically have sufficient visibility and/or expertise to fully understand, evaluate or address risks associated with systems they are using."
CISA has been active on the AI front. Release of the Secure AI System Guidelines comes on the heels of the agency unveiling its Roadmap for AI on November 9, 2023. CISA's AI roadmap aligns with the Biden Administration's emphasis on encouraging private actors to develop safe and trustworthy AI in addition to bolstering alignment on AI governance with international allies, which were themes of the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110). For example, CISA's AI roadmap describes five "lines of effort" and specific CISA-led actions "as directed by EO 14110." Those lines of effort include: responsibly using AI to support the agency's mission, assess and assist secure by design, AI-based software adoption by different stakeholders, protect critical infrastructure from malicious use of AI, engage in interagency processes on AI-enabled software, and expand AI expertise in CISA's workforce. EO 14110 also assigned CISA with responsibilities to perform risk assessments when AI is used in critical infrastructure systems. Based on this delegation and the activity CISA has already undertaken, CISA is likely to continue focusing its attention on cybersecurity risks in AI use and development.
Although the Secure AI System Guidelines are not binding, developers of AI systems are advised to review the guidelines and benchmark their AI development practices against CISA and NCSC's recommendations. Numerous state and federal laws require companies to employ "reasonable" or "appropriate" security practices to protect their IT systems and data, and what security practices are reasonable or appropriate for AI systems is an emerging and largely unanswered question. Regulators seeking to enforce those laws may look to the Secure AI Systems Guidelines to establish a baseline of legally required reasonable or appropriate security practices for AI.
Overview of the Secure AI System Guidelines
CISA and the UK NCSC drafted the Guidelines to align to and be used alongside the Secure Software Development Framework (SSDF) published by the National Institute of Standards and Technology's (NIST) (we discussed the SSDF in a recent blog post on CISA's secure development self-attestation form), CISA's Secure by Design principles, and the UK NCSC's secure development and deployment guidance.
The Guidelines use the term "AI" to refer to all types of machine learning (ML) applications, as opposed to non-ML AI approaches such as rule-based systems. The Guidelines define ML applications as those that:
- Involve use of models that allow computers to recognize and bring context to patterns in data without the rules having to be explicitly programmed by a human, or
- Generate predictions, recommendations, or decisions based on statistical reasoning
The Guidelines emphasize that AI systems face unique security threats. In particular, through adversarial machine learning (AML) attackers can exploit vulnerabilities in ML-related hardware, software, and other systems, resulting in unintended performance of AI systems (for example, by causing the AI system to make incorrect decisions or decisions that are deliberately harmful) and compromise sensitive model information. These types of attacks can be accomplished in a number of ways, including through prompt injection attacks against a large-language model and deliberately corrupting the training data or user feedback.
The guidelines are broken down into four areas within the AI system development life cycle: secure design, development, deployment, and operation and maintenance.
Secure Design
This section discusses understanding cyber risks to AI systems, threat modeling, balancing security and usability, and selecting the appropriate AI model. Recommendations include:
- Raise staff awareness of cyber threats and risks. Personnel at AI systems developers should be made aware of AI security threats and mitigations. Developers should be trained in secure coding techniques.
- Model the threats to your system. Providers should assess security threats to AI systems and assess potential impact of those threats to the systems, users, and others. Security decisions should be based on the outcome of these risk assessments.
- Design your systems for security as well as functionality and performance. Security should be a primary focus of AI systems design, alongside functionality and performance. For example, due diligence should be performed on external model providers; third-party code should be subject to security scanning; and if the provider is using an external API, appropriate controls should be in place for sending data outside the provider's environment. Additionally, AI software development should be subject to the provider's existing secure development practices, the most secure system settings should be configured by default, and access to systems should be granted according to the principle of least privilege.
- Consider security benefits and trade-offs when selecting your AI model. Providers should consider various factors when choosing an appropriate AI model, including the model's complexity, its appropriateness for the provider's use case, and the supply chain security practices for the model's components.
Secure Development
This section addressees supply chain security, documentation, and asset and technical debt management. Recommendations include:
- Secure your supply chain. Providers should monitor the security of their AI supply chains and require suppliers to adhere to strong security practices. The NIST SSDF includes various recommendations for supply chain security.
- Identify, track, and protect your assets. Providers should secure sensitive system logs, track and secure their computing assets, and protect data inputs and outputs of AI systems.
- Document your data, models and prompts. Providers should document the development and maintenance of models, datasets, and meta- or system-prompts, including security-relevant information such as software bills of materials (SBOMs), use of cryptographic hashes and signatures, and retention periods.
- Manage your technical debt. Providers should identify, track, and manage their "technical debt," which the Guidelines define as situations "where engineering decisions that fall short of best practices to achieve short-term results are made, at the expense of longer-term benefits."
Secure Deployment
This section addresses protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release of AI systems. Recommendations include:
- Secure your infrastructure. Providers should implement strong security controls, including "appropriate access controls of APIs" and "appropriate segregation of environments holding sensitive code or data" in an effort to help mitigate cyber attacks focused on stealing models or harming the model's performance.
- Protect your model continuously. Providers should protect their models by implementing standard security best practices and by implementing controls on query interfaces to prevent unauthorized access or use. To ensure that consuming systems can validate models, the Guidelines recommend that organizations "compute and share" cryptographic hashes and/or signatures of model files (e.g., model weights) and datasets (including checkpoints) as soon as the model is trained. The Guidelines state that "good key management is essential."
- Develop incident management procedures. The Guidelines recommend developing and implementing an incident response plan that reflects different threat scenarios and is "regularly reassessed as the system and wider research evolves." The Guidelines also recommend storing critical company digital resources in offline backups, training responders "to assess and address AI-related incidents," and maintaining "high-quality audit logs and other security features."
- Release AI responsibility. Providers should release AI systems only after subjecting them to appropriate security evaluations and testing.
- Make it easy for users to do the right thing. Providers should make secure configurations the default configurations for users. Where appropriate, the only available options should be secure ones. Users should be provided with guidance on secure use of the AI system and clear statements about their responsibilities for maintaining systems security. As noted above, placing primary responsibility for system security on the system developer, including through the use of secure default configurations, closely aligns with CISA's Secure by Design principles and the NCS.
Secure Operation and Maintenance
This section discusses security activities that are particularly relevant once an AI system has been deployed, including logging and monitoring, update management, and information sharing. Recommendations include:
- Monitor your system's behavior. Providers and users should monitor AI systems outputs and identify potential intrusions and compromises.
- Monitor your system's inputs. Inputs, such as inference requests, queries, or prompts, should be monitored and logged to enable compliance obligations, audit, investigation, and remediation in the case of compromise or misuse. The Guidelines recommend using "explicit detection of out-of-distribution and/or adversarial inputs."
- Follow a secure-by-design approach to updates. Providers should include automated updates by default and use secure, modular update distribution procedures. Major updates should be treated like new software versions.
- Collect and share lessons learned. Providers should participate in information-sharing communities to share best practices and lessons learned. Open lines of communication should be maintained within organizations to identify and report vulnerabilities and provide feedback. Providers should publish vulnerability information and take prompt action to mitigate and remediate issues.
+++
DWT's privacy and security and artificial intelligence teams will continue to monitor CISA's guidance and other activities to promote cybersecurity in the development and use of AI, including potential impacts for clients as federal agencies start implementing directives from the AI Executive Order.