CISA and UK NCSC Unveil Joint Guidelines for Secure AI System Development

Today, in a landmark collaboration, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) are proud to announce the release of the Guidelines for Secure AI System Development. Co-sealed by 23 domestic and international cybersecurity organizations, this publication marks a significant step in addressing the intersection of artificial intelligence (AI), cybersecurity, and critical infrastructure.

The Guidelines, complementing the U.S. Voluntary Commitments on Ensuring Safe, Secure, and Trustworthy AI, provide essential recommendations for AI system development and emphasize the importance of adhering to Secure by Design principles. The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority.

The Guidelines apply to all types of AI systems, not just frontier models. They provide suggestions and mitigations covering the secure design, model development, system development, deployment, and operation of AI systems.

This document is aimed primarily at providers of AI systems, whether based on models hosted by an organization or making use of external application programming interfaces (APIs). However, we urge all stakeholders—including data scientists, developers, managers, decision-makers, and risk owners—to read this guidance to help them make informed decisions about the design, deployment, and operation of their AI systems.

CISA invites stakeholders, partners, and the public to explore the Guidelines for Secure AI System Development, as well as our recently published Roadmap for AI, to learn more about our strategic vision for AI technology and cybersecurity. To access these resources, visit CISA.gov/AI.
