5 Tips to Improve Your AI Governance

As AI technologies—especially generative AI—become embedded in more aspects of financial services, strong AI governance has emerged as a critical pillar for institutions aiming to innovate responsibly. We recently spoke with a few of our ValidMind thought leaders to understand how effective governance unlocks safety, speed, and compliance while also providing scalability and a stark competitive advantage.
In this post, we break down five practical tips to enhance AI governance, drawing directly from insights shared by Jan Larsen (Chief Revenue Officer), Kristof Horompoly (VP – AI Risk Management), and David Asermely (VP of Global Business Development and Growth Strategy) at ValidMind.
1. Establish a Cross-Functional AI Governance Committee
“It’s very important for any organization… to have some kind of AI governance committee, some kind of internal AI board to set the risk tolerance of the organization for AI,” Larsen said. “There are risks, but we bear those risks because the payoff for bearing and managing those risks well is potentially extremely large in terms of efficiency, better customer acquisition, better revenue as a result.
To be effective, your AI governance council should include senior leaders from all stakeholder groups—risk, legal, compliance, technology, data science, and business units. This committee must set the organization’s AI risk appetite, review use cases, and ensure accountability is clearly assigned. It should also report regularly to the board of directors.
Tip: Make sure representation in your AI council aligns with your institution’s operating model and includes voices from all critical control functions.

2. Build and Maintain a Comprehensive AI Inventory
“Having an inventory of all of your AI and how that AI is used is the first step,” Asermely said.
Without a proper, robust inventory, Horompoly said, “you’re exposing yourself to a lot of risk, and that includes operational risk. Not knowing which models you have out there may expose you, and so will not having the proper stakeholders involved, not having the proper policies and procedures to make sure that everybody is consistently following the same processes for developing, validating, and maintaining/managing those models.”
Many banks struggle to know exactly which AI systems are in use, especially as generative AI tools proliferate through business lines via third-party applications and APIs. Without a centralized inventory, risk-tiering and monitoring become virtually impossible.
Tip: Use a flexible inventory framework that maps upstream/downstream dependencies and differentiates between traditional models, machine learning, and GenAI applications.
3. Tier AI Systems by Risk to Prioritize Controls
“Your organization should have an understanding of what constitutes a high-risk AI system, what constitutes moderate and minimal risk,” Asermely continued.
Not all AI systems pose the same level of risk. Institutions should develop clear criteria to tier AI systems and allocate governance resources accordingly. High-risk systems might involve sensitive data, financial decisions, or regulatory exposure and warrant more stringent validation and ongoing monitoring.
“By having your models in an inventory and having them risk tiered,” Asermely said, “you can apply the appropriate level of controls around those high risk AI systems and give them the proper attention they deserve.”
Tip: Define tiering rules collaboratively with risk, compliance, and audit stakeholders and apply them consistently across the enterprise.
4. Adapt Validation and Monitoring for Generative AI
“GenAI validation is incredibly different from traditional model validation,” Horompoly explained. With GenAI models, organizations often can’t access training data or model internals, especially when using third-party LLMs via APIs.
Horompoly advised: “You need to constrain the scope of the application to something that is testable… and make sure you do very comprehensive testing within that scope.” He also emphasized the importance of continuous monitoring, user feedback, and proper education on generative AI risks.
Tip: Rely on scenario testing, prompt evaluation, feedback loops, and human-in-the-loop safeguards to validate generative AI systems effectively.
5. Use Governance to Enable, Not Block, AI Adoption
“AI governance is the orchestration and enforcement of policies… that helps organizations end up in the place they want to,” said Asermely.
Instead of viewing governance as a blocker, the most forward-looking institutions treat it as an enabler. A well-defined governance framework reduces ambiguity, empowers innovation within clear boundaries, and accelerates safe AI adoption. “Without proper risk governance, it’s impossible ultimately to scale AI,” added Horompoly.
Tip: Balance risk mitigation with enablement—structure your policies to support experimentation while minimizing harm.
Final Thoughts on AI Governance
Larsen provided an insightful take on why proper AI governance is so important for businesses looking to spur innovation: “What’s holding AI adoption back now really is a lack of confidence… A stronger governance platform is what allows the institution to develop that confidence”.
Banks and financial institutions that embrace mature, scalable AI governance can mitigate risks, enhance regulatory readiness, and stay ahead of emerging fintech challengers. If your institution is still figuring out how to build confidence in AI, start with these five tips—and scale with trust.
Want to learn how ValidMind can help your organization boost its AI governance? Book a demo today.