December 3, 2025

10 Key Insights from the AI Risk Summit: Navigating Governance and Innovation

AI Risk Summit Summary

The AI Risk Summit 2025, hosted by Experian and ValidMind, brought together industry experts and risk professionals to discuss the rapidly evolving challenges of governing AI, particularly within regulated industries like financial services and insurance. The conversation focused heavily on balancing the rapid pace of technological innovation with the critical need for robust governance and resilience.

ValidMind Chief Revenue Officer Jan Larsen summed up the importance of this moment in AI, saying, “It’s not an overstatement to say we are at the dawn of one of the great technological revolutions in history. We have organizations that are rapidly deploying digital workforces. We, as a community, are figuring out what it means to manage these digital workforces effectively.”

With that, here are 10 key insights gleaned from the summit discussions:

Michael Versace, Chartis

1. Business Priorities Outpace Algorithmic Excellence

For large firms, the immediate priorities for AI deployment are often set at the “top of the house” (the CEO and board): strengthening controls, managing operational risk, or accelerating investor reporting. Building an algorithm designed primarily to maximize profit often takes a back seat to these foundational priorities. Governance is therefore viewed as the platform for growth; without it, organizations cannot move forward quickly.

2. Trust by Design is a Financial Imperative

Strong governance and ethical practice (“trust by design”) are not merely compliance constraints but are demonstrated catalysts for long-term financial performance. Organizations that are ethical and compliant by design tend to be more profitable and sustainable, with evidence suggesting a financial premium for ethical organizations. Building controls from the beginning ensures compliance by design, which can lead to “approval by default” and accelerate the adoption of models.

3. Regulators are Still Primarily in Data-Gathering Mode

While US regulators are very active and have issued some guidance on getting started, they are largely engaged in a data-gathering exercise right now. Regulators frequently ask organizations, “What are you doing?” and have requested AI inventory lists repeatedly (sometimes every three months). Overall, the current stage is considered “benign” until more prescriptive guidance is issued for specific use cases.

Rodanthy Tzani speaks at AI Risk Summit ’25 in New York City.

4. Regulatory Complexity Drives Conservative AI Adoption

Organizations are cautious about being the first to deploy advanced AI, especially in high-risk, audited areas like credit risk model development. In applications like credit and employment, organizations often maintain a very conservative approach (e.g., using statistical logistic regression) because they need to be able to explain exactly how a solution was reached.
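To make the explainability point concrete, here is a minimal sketch (with invented feature names and synthetic data, not any real credit model) of why logistic regression remains attractive in these settings: every coefficient maps one-to-one to an input, so the institution can state exactly how a decision was reached.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a credit dataset; feature names are illustrative only.
rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "utilization", "years_of_history"]
X = rng.normal(size=(500, 3))
# Synthetic labels: risk rises with the first two features, falls with the third.
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient maps one-to-one to an input, so the lender can state
# exactly how much a given feature moved the log-odds of a decision --
# the kind of explanation expected in credit and employment contexts.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f} log-odds per unit increase")
```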

5. Unified Frameworks are Essential for Multi-State Compliance

For insurance organizations, which are regulated by all 50 US states (each with different requirements), the regulatory landscape is extremely challenging. The preferred governance approach is to design a single, robust program that adheres to the strictest standards (e.g., aligning with Colorado for quantitative testing and California for explainability). This “unified framework” prevents the costly duplication of controls and ensures compliance across multiple jurisdictions.
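As a rough illustration of the “strictest standard wins” idea, the sketch below merges per-state obligations into a single program by taking the union of every control any state requires. The state requirements shown are invented for illustration, not real regulations.

```python
# Hypothetical per-state obligations: True means the state requires the control.
STATE_REQS = {
    "CO": {"quantitative_bias_testing": True, "explainability_report": False},
    "CA": {"quantitative_bias_testing": False, "explainability_report": True},
}

def unified_framework(reqs: dict) -> dict:
    """Build one program satisfying every jurisdiction at once.

    A control is adopted if any state requires it, so the single framework
    meets the strictest standard everywhere and avoids duplicated controls.
    """
    unified: dict = {}
    for state_reqs in reqs.values():
        for control, required in state_reqs.items():
            unified[control] = unified.get(control, False) or required
    return unified

print(unified_framework(STATE_REQS))
```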

6. Sufficient Guidance Exists Despite Regulatory Uncertainty

Despite the influx of hundreds of potential regulations and geopolitical concerns (such as the US/China race to dominate AI), there is a belief that sufficient guidance and “guardrails” are available today for organizations to move forward. Consensus standards, particularly those published by ISO and NIST, provide frameworks that can be applied broadly. Organizations cannot afford to wait for regulation to be fully harmonized before acting.

7. Defining ‘Model’ vs. ‘Tool’ Remains a Challenge

There is ongoing debate about how to categorize new AI applications, such as Microsoft Copilot. If an AI application is used solely for efficiency purposes (e.g., generating meeting summaries, drafting code) and the end product still goes through human review or oversight, it is often classified as an efficiency tool. However, if the tool is used for material business decision-making without human oversight, it is categorized as a model that requires validation. The classification remains a challenge in practice: determining whether a tool’s output is strictly for efficiency or influences material business decisions requires careful assessment, especially since code produced by efficiency tools often ends up in models that must ultimately be validated.
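Purely as a thought experiment, the classification logic described above might be sketched like this (the criteria and attribute names paraphrase the summit discussion, not any formal standard):

```python
from dataclasses import dataclass

@dataclass
class AIApplication:
    """Illustrative attributes only; real inventories track far more."""
    name: str
    has_human_review: bool           # is every output reviewed before use?
    drives_material_decisions: bool  # does output feed business decisions?

def classify(app: AIApplication) -> str:
    """Sketch of the tool-vs-model distinction discussed at the summit."""
    if app.drives_material_decisions and not app.has_human_review:
        return "model (requires validation)"
    return "efficiency tool (human oversight retained)"

print(classify(AIApplication("Copilot meeting summaries", True, False)))
print(classify(AIApplication("Automated credit decisioning", False, True)))
```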

8. Legacy MRM Systems Cannot Handle the Pace of AI

The traditional Model Risk Management (MRM) approach, built on manually updated platforms where validation might occur only once every year or two, is fundamentally inadequate for governing modern AI. AI models, especially Generative AI (GenAI) applications, can update weekly or monthly, and a single model version may support hundreds of use cases, requiring continuous and rapid testing. Automation of testing, documentation, and monitoring is therefore critical.
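A hypothetical sketch of what that automation implies: because one model version can back many use cases, a weekly release cadence fans out into many required re-tests, which a manually updated platform cannot keep up with. All names below are invented for illustration.

```python
import datetime

# Hypothetical registry: one model version backing several use cases,
# so a single weekly model update fans out into many required re-tests.
MODEL_VERSIONS = {
    "genai-summarizer-v7": ["earnings-recap", "kyc-notes", "complaint-triage"],
}

def due_validations(last_run: dict, now: datetime.date, cadence_days: int = 7):
    """Return (model, use_case) pairs whose automated tests are due.

    Traditional annual validation corresponds to cadence_days ~= 365;
    modern GenAI release cycles push this toward weekly, a pace only
    automated testing, documentation, and monitoring can sustain.
    """
    due = []
    for model, use_cases in MODEL_VERSIONS.items():
        for uc in use_cases:
            ran = last_run.get((model, uc), datetime.date.min)
            if (now - ran).days >= cadence_days:
                due.append((model, uc))
    return due

print(due_validations({}, datetime.date.today()))
```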

Kristof Horompoly, Head of AI, ValidMind

9. The Judge LLM Concept for Scalable Validation

To meet the high-speed demands of validation, the concept of a dedicated Judge LLM was introduced, sparking heavy debate. The Judge LLM would act as an independent, AI-driven evaluation tool owned by the second line of defense (validation team) that can assess the primary LLM application. The Judge LLM is designed to perform the majority of the evaluation work, providing instant feedback and constant monitoring, thereby allowing human experts to scale their knowledge and focus on high-risk assessments. This approach leverages AI “teaming” to empower human AI governance teams.
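A minimal sketch of the pattern, assuming a generic LLM API: the call_llm helper below is a hypothetical stand-in for whichever interface the validation team actually uses, and the prompt wording is invented for illustration.

```python
# Sketch of the "Judge LLM" pattern: a second-line evaluator that scores
# every first-line output so humans only review the cases it flags.

JUDGE_PROMPT = """You are an independent model validator. Score the
response below from 1 (unacceptable) to 5 (fully compliant) on
accuracy, and explain your score in one sentence.

Question: {question}
Response under review: {response}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; wire up the validation team's own LLM here."""
    raise NotImplementedError

def judge(question: str, response: str) -> str:
    """Have the second-line Judge LLM evaluate a first-line output.

    Running on every output gives constant monitoring and instant
    feedback, freeing human validators for high-risk assessments.
    """
    return call_llm(JUDGE_PROMPT.format(question=question, response=response))
```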

10. Risk Assessment Must Begin at Ideation

Effective AI governance requires shifting the focus to the entire application lifecycle, starting at the ideation phase, rather than waiting until development is complete. Companies should conduct an AI risk screening immediately to identify and prohibit unacceptable or “high-vigilance” use cases (e.g., applications involving cognitive behavioral manipulation or classifying people based on socioeconomic status). Responsibility for identifying and managing risk starts with the first line, supported by the second line’s risk assessment.
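An ideation-phase screen could be as simple as the sketch below; the prohibited categories paraphrase the examples above and are not an official list.

```python
# Hypothetical ideation-phase screen, run by the first line before any
# development begins; category names are invented for illustration.
PROHIBITED = {
    "cognitive_behavioral_manipulation",
    "socioeconomic_classification_of_people",
}

def screen_use_case(description: str, tags: set[str]) -> str:
    """First-line risk screening at ideation, before development starts."""
    blocked = tags & PROHIBITED
    if blocked:
        return f"PROHIBITED ({', '.join(sorted(blocked))}) -- do not build"
    return "Proceed to second-line risk assessment"

print(screen_use_case("Targeted nudging engine",
                      {"cognitive_behavioral_manipulation"}))
```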

At its core, the challenge of AI governance is akin to trying to install seatbelts and speed governors on a rocket that is still accelerating. Organizations must adopt AI-driven governance tools to match the velocity of technological advancement, transforming the risk function from a cost center into a decisive catalyst for sustainable growth.

Event Presentations

Emerging Trends in Model and AI Risk Management: Michael Versace, Risk and Regulatory Markets Lead, Chartis Research

AI Risk Assessment: A Central Governance Pillar: Rodanthy Tzani, Founder & Risk and Compliance Advisor, Sphaleron

Unified AI Governance for Insurance: Shawn Tumanov, Data, Model and AI Risk Governance Executive, GEICO
