November 25, 2025

Staying Agile: Managing Risk Across Fragmented Regulatory Environments 

As global regulations evolve at unprecedented speed, organizations face a growing list of requirements they need to meet. We spoke with ValidMind’s Chief Risk Officer, Jan Larsen, and Head of AI, Kristof Horompoly, about how they balance global compliance with adaptability and what it takes to build a risk culture that thrives amid constant regulatory change.

The New Complexity of Global Risk

In a fragmented regulatory landscape, firms must understand which workflows and regulations matter most. “The more it’s mapped out, the easier it is to understand what needs to be done, and the more agile the response can be,” Larsen says.

But clarity isn’t enough. Horompoly warns that complexity escalates when organizations focus on checking regulatory boxes instead of managing real exposure. “The danger is companies getting reactive rather than proactive in their governance and risk management frameworks,” he states. Strong internal frameworks, he argues, can absorb most regulatory changes. “If they set up frameworks meaningfully, it’ll cover 90-95% of all the regulations that will pop up.”

His recommendation is simple. “Focus on understanding and managing the risk you currently have. That will put you in a very good position to comply with regulations that appear anywhere you’re operating.”  

Building Agility into Governance

If adaptability begins with understanding the risk, governance is about making that understanding operational. Larsen highlights the importance of the traditional three lines of defense: model owners as the first line, independent risk and validation as the second, and audit and compliance as the third.

As AI becomes more embedded in the business, governance must support speed. Horompoly notes that effective controls must align with how engineers actually work. “Ultimately it’s about empowering your development teams to move fast and do what they like to do, which is implement and develop AI.”

That requires translating high level principles into actionable rules. “You need to distill your overall framework into practical and actionable guardrails so your teams are not constantly second-guessing what the framework means for them,” Horompoly states. When boundaries are clear, he says, teams can innovate safely “and move freely within that box that it sets for them.”

Explore potential AI trends for next year: 10 AI Risk Trends for 2026

Risk Awareness as a Shared Responsibility

Risk culture is built on everyday behavior. “We want people to be proactive in identifying and managing risk, because that’s their innate behavior as an employee of the company,” Larsen says. As the use of AI continues to expand, he expects this mindset to spread. “The biggest change will be the number of new stakeholders who need to adopt this risk-awareness-first mindset.” 

A shared culture also requires adaptable frameworks. “This space is incredibly dynamic, and so your frameworks need to be dynamic as well,” Horompoly says. As teams advance their AI capabilities, their controls must evolve with them. “Maturity in AI Development is growing and you need to grow awareness and risk management with it.”

This means updating practical guardrails without rewiring foundational principles. “You need to continually update day-to-day controls and guardrails, while keeping the overall framework and principles stable enough to allow for new changes,” Horompoly says. 

Learn more about responsible AI: Racing Toward Responsible AI: How Institutions Can Accelerate Adoption Without Losing Control

Staying Ahead of Change

Technology is advancing and so is the complexity of managing data responsibly. Cross-border data is a prime example. “There needs to be an authoritative map of where data is stored, processed, and accessed,” Horompoly states. With jurisdictions enforcing different rules, organizations must design for flexibility. “You want capabilities that can anonymize, pseudonymize, or create synthetic data. The more you do that, the more freely you can use your data across borders.”

“You need to bring your models to the data, rather than your data to the models.”

Kristof Horompoly, Head of AI, ValidMind

You need to bring your models to the data, rather than your data to the models”. – Kristof Horompoly, Head of AI, ValidMind

Complexity also challenges explainability and transparency. “For these large language models, explainability is inherently challenging because they are so big and complex,” Horompoly says. Post-hoc tools simplify reality, different stakeholders require different levels of detail, and transparency alone cannot solve the problem. “Something can be transparent and still not explainable to a human,” he notes.

The Future of Risk Management

Looking ahead, Larsen foresees a risk landscape defined by constant adaptation. “An advantage of AI is that it adapts to changing conditions really quickly,” he notes. Larsen believes this speed will ultimately shift how organizations communicate risk. “I think we’re going to converge over some period of time to explainability going away.” Adaptability, he argues, will define the next era of governance. 

At the same time, AI is broadening who owns risk. “You’re integrating AI risk into an existing taxonomy of risks within the organization,” Horompoly explains. “You can break down most of the risks that AI poses into existing risks that your organization is already dealing with.”

Horompoly expects AI to continue outpacing traditional oversight. “AI development is happening faster than evaluation or risk management can keep up,” he says. “We’ll see incidents where there’s no proper risk management, and those companies will make headlines.”

The result, he predicts, is a necessary pause, forcing organizations to ask a defining question: How can we make sure that our risk management is able to follow the pace of development?
