March 18, 2024

The EU AI Act is Here: What Model Risk Management Teams Need to Know


In a historic move this month, the European Union’s parliament passed the world’s first significant set of regulations to address artificial intelligence. The EU AI Act contains comprehensive guidelines that will have lasting and wide-reaching effects on commercialized artificial intelligence use.

If you are a global practitioner in the model risk management (MRM) field, you are likely either evaluating AI solutions or already using them in production, but you may only be beginning to understand how this legislation will affect your profession.

The EU AI Act, which is expected to enter into force in May 2024, will have different implications for MRM professionals depending on the types of models they develop. The first and second lines of defense in your organization should prepare for both profound and subtle changes to their day-to-day work in a number of important areas.

Understanding and assessing risk classification 

The AI Act has introduced a risk-based classification system for AI applications, dividing them into four categories: 

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

Your model risk management teams will need to understand the distinctions among these four designations and how to evaluate their systems against them. A deeper understanding of these categories will help your business evaluate new and existing tools. While the obligations imposed on each AI system vary according to its risk level, the strictest requirements apply to high-risk applications.
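As a practical first step, some teams encode the Act's four tiers directly in their model inventory so every system carries an explicit designation. A minimal sketch follows; the tier names come from the Act itself, but the example use-case mapping and helper function are purely illustrative and are no substitute for a legal classification review:

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping only -- real classification requires legal review
# of each system's intended purpose against the Act's annexes.
EXAMPLE_USE_CASES = {
    "social_scoring": AIActRiskTier.UNACCEPTABLE,
    "credit_scoring": AIActRiskTier.HIGH,
    "customer_chatbot": AIActRiskTier.LIMITED,
    "spam_filter": AIActRiskTier.MINIMAL,
}

def carries_strictest_requirements(tier: AIActRiskTier) -> bool:
    """High-risk systems carry the heaviest compliance obligations."""
    return tier is AIActRiskTier.HIGH
```

Tagging each inventory entry this way makes it easy to query which models need the deepest compliance work as enforcement begins.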

Identifying and avoiding prohibited AI systems

In these early days of the EU AI Act, education will be essential: MRM teams must stay informed as the regulations take shape and keep up to date on current and upcoming AI rules. Focusing on what the Act deems unacceptable or high-risk is critical, and MRM teams will need to work closely with their legal partners to understand the nuances of the law and how it applies to specific AI applications within the organization.

Ensuring a human touch 

In a LinkedIn post, European Parliament member Dragos Tudorache said, “[w]e have forever attached to the concept of Artificial Intelligence, the fundamental values that form the basis of our societies. With that alone, the (EU AI Act) has nudged the future of AI in a human-centric direction.” 

Model risk developers and validators should consider themselves the humans in Tudorache’s pointed “human-centric direction” call-out. When considering how to fold in AI, model risk management systems will require mechanisms for human oversight — sometimes also referred to as human-in-the-loop (HITL) — and processes for intervention to ensure decisions are reviewed and, if necessary, overridden by humans. 
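One common way to implement such an intervention mechanism is a review gate that routes low-confidence model decisions to a human validator, who can confirm or override the output. The sketch below assumes a simple confidence-threshold policy; the threshold value, field names, and routing logic are illustrative choices, not anything prescribed by the Act:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str              # e.g. "approve" / "decline"
    confidence: float              # model's own confidence score
    reviewed_by: Optional[str] = None
    final_output: Optional[str] = None

def route_decision(decision: Decision,
                   human_review: Callable[[Decision], str],
                   reviewer: str,
                   threshold: float = 0.9) -> Decision:
    """Send low-confidence decisions to a human reviewer (HITL gate).

    The 0.9 threshold is an illustrative policy choice; each
    organization would set its own escalation criteria.
    """
    if decision.confidence < threshold:
        # Human may confirm or override the model's output.
        decision.final_output = human_review(decision)
        decision.reviewed_by = reviewer
    else:
        decision.final_output = decision.model_output
    return decision
```

The key property is that every final decision records whether a human was in the loop, which also feeds the documentation requirements discussed next.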

Enforcing more transparent model documentation

The need for detailed documentation of the decision-making processes, data sources, and methodologies used in AI systems will only increase as the Act is enforced. This documentation should be readily available and validated for regulatory scrutiny. These systems should be designed with full transparency. They should be understandable and built in a way that facilitates easier assessment of compliance with regulatory standards.
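In practice, this means keeping a structured, exportable record for each model rather than scattered documents. The sketch below shows one possible shape for such a record; the field names are illustrative assumptions, not a schema prescribed by the EU AI Act:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDocumentation:
    """One possible shape for an auditable model record.

    Field names are illustrative, not prescribed by the EU AI Act.
    """
    model_name: str
    intended_purpose: str
    data_sources: list = field(default_factory=list)
    methodology: str = ""
    known_limitations: list = field(default_factory=list)
    validation_results: dict = field(default_factory=dict)

    def export(self) -> str:
        """Serialize to JSON so the record can be handed to
        validators or regulators on request."""
        return json.dumps(asdict(self), indent=2)
```

Keeping the record machine-readable makes it straightforward to validate completeness automatically and to produce it on demand during regulatory scrutiny.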

This won’t be the end of new regulations

The EU AI Act is just the start. Last year saw President Biden’s executive order on artificial intelligence, for example, and some U.S. lawmakers have been pushing for AI audits among technology providers while others are seeking greater transparency. Legislation hasn’t advanced in the U.S. as rapidly as it has in Europe, but don’t expect it to stay that way. Standards such as the NIST AI Risk Management Framework are already here.

Model risk professionals will be wise to stay on top of these trends and news as government agencies around the world look to the EU AI Act as a rubric for upcoming legislative change. 

Hope for the future and ample promise

Even as your MRM teams learn to navigate this rapidly evolving landscape, there is ample promise of great things to come. There’s no doubt that AI will revolutionize the way that model risk management teams work, even as more standards come to light. The finance industry will face substantial requirements when employing AI systems: expect in-depth risk assessments, further transparency measures, and significant accountability.

As your organization begins to plan for this new normal, partnerships will be key to understanding the changes and navigating this new landscape. ValidMind is compliant by design, hyperfocused on model risk governance, and built to give data scientists, model developers, model validators, and auditors full transparency into statistical and AI/ML models. 

Interested in learning more? Book a demo today to see how ValidMind can help as you navigate the EU AI Act. 
