By Dr. Darren Death, Vice President of Information Security, Chief Information Security Officer, ASRC Federal

As I wrote previously, CISA’s “Guidelines for Secure AI System Development” provide a clear path for safely managing AI systems. They highlight the importance of building security into AI systems from the beginning. These guidelines help organizations deal with specific AI threats, keep the supply chain safe, and use AI responsibly. The main aim is to create AI systems that are not only smart and effective but also safe and trustworthy. Let’s take a closer look at how to implement secure AI based on the principles of “secure by design,” threat modeling and risk assessment, cybersecurity supply chain risk management, documentation, infrastructure protection, and continuous monitoring.

Integrating ‘secure by design’ principles:

  • Organizations must take responsibility for the security of their AI systems, ensuring that security measures are integrated at every stage of the software development lifecycle. Integrating ‘secure by design’ principles means incorporating security considerations from the very beginning of AI system development.
  • This approach mandates that security is not an afterthought but a foundational element of the AI system. AI system developers should embrace transparency in their processes, clarifying how data is used and stored and how models are developed and updated.
  • This requires comprehensive planning and execution to ensure that every aspect of the AI system adheres to stringent security standards, from algorithm design to data handling and model deployment. This principle aims to build AI systems that are inherently resilient to threats and vulnerabilities, enhancing their overall reliability and trustworthiness. A top-down approach is needed, where secure AI system development is prioritized as a business imperative.

Regular threat modeling and risk assessment:

  • Threat modeling in AI involves identifying potential security threats and vulnerabilities specific to AI systems. This proactive approach allows organizations to anticipate and mitigate risks before they materialize. For AI systems, threat modeling must consider both traditional cybersecurity risks and those unique to AI, such as data poisoning, model evasion, and adversarial attacks (a minimal threat-register sketch follows this list).
  • Regular risk assessments should be an integral part of the software development lifecycle. This includes evaluating the potential impact of identified threats and vulnerabilities on the AI system and the organization. Risk assessments should be updated regularly to account for new threats and changes in the system’s architecture or usage.
  • Integrating threat modeling and risk assessment into the development lifecycle ensures that security considerations are addressed from design to deployment and maintenance. This approach aligns with the ‘secure by design’ philosophy, embedding security into every aspect of the AI system.
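To ground this, here is a minimal sketch of an AI threat register in Python. The scoring scale, categories, and example entries are illustrative assumptions rather than anything prescribed by CISA’s guidelines; mature programs typically layer in established methodologies such as STRIDE or MITRE ATLAS.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a lightweight AI threat register."""
    name: str
    category: str    # e.g., "AI-specific" or "traditional" (illustrative labels)
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- illustrative scale
    mitigation: str

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring; real programs may prefer
        # calibrated or qualitative scales.
        return self.likelihood * self.impact

# Example register mixing traditional and AI-specific threats.
register = [
    Threat("Training data poisoning", "AI-specific", 3, 5,
           "Provenance checks and anomaly detection on training data"),
    Threat("Model evasion via adversarial inputs", "AI-specific", 4, 4,
           "Adversarial training and input pre-processing"),
    Threat("Credential theft on the training pipeline", "traditional", 3, 4,
           "MFA and least-privilege access to MLOps tooling"),
]

# Rank threats so the highest-risk items drive the next assessment cycle.
for t in sorted(register, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.name}: {t.mitigation}")
```

Re-running this ranking as part of each regular risk assessment keeps the register aligned with changes in the system’s architecture or usage.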

Cybersecurity Supply Chain Risk Management (C-SCRM):

  • Comprehensive supplier vetting: Ensuring a secure supply chain for an AI system involves rigorous vetting of all suppliers and third-party service providers that support it. This goes beyond mere financial stability and reputation checks.
  • It involves thoroughly assessing their cybersecurity practices, data protection policies, and adherence to industry-specific security standards. Contracts with suppliers should include specific cybersecurity requirements. This includes clauses for regular security audits, immediate incident reporting, and adherence to agreed-upon cybersecurity standards.
  • These agreements should also outline the consequences of security breaches, ensuring suppliers are accountable for lapses in their security practices (a technical complement to these contractual controls is sketched after this list).
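One technical control worth writing into supplier agreements is artifact integrity verification. The Python sketch below checks a delivered model file against a SHA-256 digest that the supplier publishes over a separate, trusted channel; the file name and digest here are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True if the file's SHA-256 digest matches the digest the
    supplier published out-of-band (e.g., in a signed release note)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Hypothetical artifact name and digest -- substitute the values your
# supplier actually publishes.
artifact = Path("vendor_model.onnx")
EXPECTED = "9f2c..."  # a full 64-character hex digest in practice
if artifact.exists() and not verify_artifact(artifact, EXPECTED):
    raise RuntimeError("Digest mismatch: quarantine the artifact and notify the supplier")
```

Stronger programs go further, requiring cryptographically signed artifacts rather than bare checksums.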

Comprehensively documenting AI systems:

  • Documentation involves more than just record-keeping; it is essential for ensuring AI systems’ transparency, security, and compliance. The goal is not to produce a stack of compliance paperwork. Rather, documentation should guide developers, users, and stakeholders through the functionality and security of the environment, offering a map for navigating the complexities of AI systems. Data is the foundation of a successful AI implementation.
  • Documentation must detail data sources, collection methods, processing, and storage. It must also address privacy concerns and compliance with data protection regulations, ensuring responsible data stewardship. The AI model’s architecture, training data, algorithms, and performance metrics should also be well documented (a machine-readable sketch follows this list). This not only aids in understanding the model’s capabilities but also helps in identifying potential biases and ethical considerations.
  • Security documentation outlines the controls that protect the AI system against threats. It makes clear which security controls have been implemented and helps prospective users judge whether the system meets their business requirements.
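One way to make such documentation durable is to keep it machine-readable and version it alongside the model. The sketch below, for a hypothetical classifier, captures data sources, collection methods, model details, and security controls as a simple Python model card serialized to JSON; all field names and values are illustrative and should be aligned with your organization’s documentation standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal machine-readable model documentation (illustrative fields)."""
    model_name: str
    version: str
    data_sources: list = field(default_factory=list)
    collection_methods: str = ""
    training_algorithm: str = ""
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    security_controls: list = field(default_factory=list)

# Hypothetical system used purely for illustration.
card = ModelCard(
    model_name="claims-triage-classifier",
    version="1.3.0",
    data_sources=["internal claims database (2019-2023)"],
    collection_methods="Batch export; PII removed before training",
    training_algorithm="Gradient-boosted trees",
    performance_metrics={"accuracy": 0.91, "recall": 0.88},
    known_limitations=["Underrepresents claims filed before 2019"],
    security_controls=["Artifact signing", "Role-based access to training data"],
)

# Emit JSON so the card can be reviewed and versioned with the model artifact.
print(json.dumps(asdict(card), indent=2))
```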

Infrastructure and model protection measures:

  • Infrastructure security involves implementing security controls to protect the core of AI operations. AI infrastructures must incorporate controls such as network segmentation and strict access controls to protect sensitive data and AI models. Organizations should consider adopting a zero-trust architecture to protect the environment.
  • Regular vulnerability assessments and penetration testing should be conducted against the infrastructure and AI system to identify and mitigate security vulnerabilities. Protecting the AI model is equally important. Keep all software, including AI models, their dependencies, and system software, updated with the latest security patches.
  • Some patches will come from third-party vendors; others must be developed in-house for software written as part of the AI system and its dependencies. Models should also be continuously updated and tested against new threats, guided by the outcomes of ongoing risk assessments, to maintain their security and reliability (a simple patch-level check is sketched after this list).
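As a small illustration of that patching discipline, the stdlib-only Python sketch below compares installed package versions against minimum patched versions. The minimums shown are hypothetical; in practice they would come from vulnerability advisories or a dedicated scanner such as pip-audit, and a robust implementation would use the packaging library for version comparison.

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical minimum patched versions -- derive real values from
# vulnerability advisories or a scanning tool.
MIN_VERSIONS = {
    "numpy": (1, 26, 4),
    "requests": (2, 31, 0),
}

def parse(v: str) -> tuple:
    """Naive numeric parse; use the 'packaging' library for anything
    beyond simple X.Y.Z version strings."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

for pkg, minimum in MIN_VERSIONS.items():
    try:
        installed = parse(version(pkg))
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "OK" if installed >= minimum else "NEEDS PATCHING"
    print(f"{pkg} {'.'.join(map(str, installed))}: {status}")
```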

Continuous monitoring of AI system performance and security:

  • This involves monitoring the AI system and its underlying infrastructure for unusual activity that may signal performance or security issues requiring mitigation. Constant vigilance is key to detecting and mitigating threats promptly. An AI system’s performance depends heavily on the quality of the data it processes.
  • Regular monitoring should include checks for data integrity, ensuring that the data feeding into the AI system has not been tampered with, is consistent, and is of high quality. AI systems are susceptible to adversarial attacks, where slight, often imperceptible alterations to input data can lead to incorrect outputs.

  • Continuous monitoring should include mechanisms such as adversarial training and input pre-processing to detect and mitigate such attacks.
  • AI systems rely on machine learning models whose performance can degrade over time as data patterns change (a phenomenon known as concept drift). Continuous monitoring should include regular assessments of model accuracy, precision, and recall to ensure the AI system remains effective and reliable; a minimal drift check is sketched below.
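To make the concept-drift check concrete, here is a minimal, stdlib-only Python sketch of the Population Stability Index (PSI), one common drift statistic. The feature values and the alert threshold are illustrative assumptions; teams tune both to their own systems.

```python
import math

def psi(reference: list, current: list, bins: int = 10) -> float:
    """Population Stability Index between a reference window (e.g., the
    training distribution) and a live scoring window for one numeric
    feature. Rule of thumb (varies by team): < 0.1 stable, 0.1-0.25
    moderate drift, > 0.25 significant drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1  # clamp values outside the reference range
        # Small epsilon avoids log(0) and division by zero in empty bins.
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    p, q = proportions(reference), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical feature values: the live window has shifted upward.
train_window = [0.20, 0.30, 0.25, 0.40, 0.35, 0.30, 0.28, 0.33]
live_window = [0.50, 0.60, 0.55, 0.70, 0.65, 0.60, 0.58, 0.63]

score = psi(train_window, live_window)
print(f"PSI = {score:.3f}" + ("  -> investigate drift" if score > 0.25 else ""))
```

The same monitoring loop can track accuracy, precision, and recall on labeled samples, alerting when any metric falls below an agreed baseline.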