Biden’s Executive Order on AI Governance: How ValidMind Enables You to Comply

Nov 9, 2023

President Biden’s Executive Order on artificial intelligence, issued on October 30, mandates rigorous safety and security testing for high-risk AI systems. This includes requiring developers to report test results and undergo red-team exercises before deployment. The order also directs NIST to refine its AI Risk Management Framework (AI RMF), particularly concerning generative AI, and to establish guidelines and best practices for trustworthy AI systems. In this post, we show you how you can comply with these requirements using ValidMind’s new NIST AI Risk Framework template for model documentation.

What challenges does the executive order pose to organizations?

A key aspect of NIST’s responsibility will be to develop tests and benchmarks for AI system evaluations and to document results to ensure transparency and accountability. This responsibility in turn creates several challenges that AI developers and organizations must address to ensure compliance:

  • Creating standardized model documentation that aligns with the NIST framework. Structured, consistent reporting of test results and evidence is essential for regulators who need to review and compare documentation and validation across different AI systems.
  • Aligning quantitative metrics produced by developers with the governance frameworks established by validators. The lack of alignment between model developers and model validators may result in gaps between metrics and governance standards required for regulatory compliance, exposing AI systems to risks that could affect their trustworthiness.
  • Sharing test results and documentation with the government as transparent, structured evidence that complies with AI risk frameworks.

Supporting NIST accountability with ValidMind

ValidMind’s platform is designed to guide AI developers and validators through the process of collecting and presenting evidence that meets the regulatory requirements, including those outlined in the executive order and by NIST. We enable you to do this through the easy creation of documentation templates for AI systems, including our new template that aligns with the NIST AI Risk Management Framework.

For each stage of the AI development lifecycle — data collection, data preprocessing, data sampling, model selection, prompt engineering, output analysis — ValidMind automates the process of generating model testing evidence associated with the sources of risk outlined in the NIST framework, such as:

  • Reliability
  • Accuracy
  • Safety
  • Security
  • Explainability
  • Privacy
  • Fairness
  • Resilience
  • Interpretability
  • Transparency
  • Bias
  • Robustness
  • Validity

For each of these, our platform provides a clear linkage between quantitative metrics and the broader governance principles.
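To make that linkage concrete, here is a minimal, self-contained sketch of the kind of quantitative metric that can serve as evidence for the Fairness and Bias risk sources: a demographic parity difference between two groups. This is an illustrative example in plain Python, not ValidMind's API; the function name and the toy data are assumptions for the sake of the example.

```python
# Hypothetical sketch: a fairness metric that could serve as quantitative
# evidence for the Fairness and Bias risk sources. Illustrative only;
# this does not use ValidMind's developer framework.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates across groups."""
    rates = {}
    for group in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: binary model predictions for applicants in groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A metric like this, computed at a specific lifecycle stage, becomes a piece of documented test evidence that a validator can later assess against a governance threshold.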

Here’s an excerpt from the ValidMind platform that illustrates how a model is tested and documented for compliance with the NIST AI Risk Management Framework:

The NIST AI Risk Management Framework is seamlessly integrated with ValidMind’s Model Documentation functionality, providing clear linkages between quantitative metrics and broader governance principles:

  1. In the 3. Development section, AI model developers generate the bulk of the quantitative metrics throughout the development life cycle. For instance, they provide evidence of bias during the prompt engineering stage.
  2. These quantitative outputs are then translated into risk statements and categorized into different AI risk sources under the 6. NIST AI Principles Compliance section, which serves as a bridge between the quantitative metrics and governance standards. For example, the output of bias testing is assessed as Bias risk and then linked to the NIST principle Fair with Harmful Bias Managed.

The test data you’re looking at is generated by our developer framework, which integrates seamlessly with the documentation generation. We also make it easy to see if a test fails and needs further attention.
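The two-step flow above, quantitative output first, risk statement second, can be sketched in a few lines of Python. The threshold, the risk-to-principle mapping, and all names here are hypothetical assumptions for illustration; they are not ValidMind's implementation or API.

```python
# Hypothetical sketch of translating a quantitative test result into a
# risk statement tied to a NIST AI RMF principle. The threshold and the
# mapping are illustrative assumptions, not ValidMind's API.

# Map each risk source to the NIST trustworthiness principle it supports.
RISK_TO_PRINCIPLE = {
    "Bias": "Fair with Harmful Bias Managed",
    "Accuracy": "Valid and Reliable",
    "Privacy": "Privacy-Enhanced",
}

def assess(test_name, risk_source, value, threshold):
    """Flag a test result and attach it to a governance principle."""
    passed = value <= threshold
    return {
        "test": test_name,
        "risk_source": risk_source,
        "principle": RISK_TO_PRINCIPLE[risk_source],
        "value": value,
        "threshold": threshold,
        "status": "pass" if passed else "needs attention",
    }

# A bias metric of 0.50 against a 0.10 tolerance fails and is flagged.
result = assess("demographic_parity_difference", "Bias", 0.50, 0.10)
print(result["status"], "->", result["principle"])
```

The same pattern applies to any of the risk sources listed earlier: the developer produces the number, and the compliance section records whether it clears the threshold and which principle it evidences.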

A Call to Action for AI Governance

President Biden’s Executive Order, focusing on AI safety and security, creates urgency for AI developers and institutions to start aligning their testing and documentation practices with standards such as the NIST AI Risk Management Framework.

ValidMind offers an automated and efficient way for organizations developing AI models to streamline their documentation and validation processes in line with modern AI risk management frameworks.

If you are ready to try ValidMind yourself, request a demo or join our closed beta.