London Stock Exchange (LSEG) ValidMind Panel Discussion: Event Summary
ValidMind recently convened a group of model risk management experts for an engaging and enlightening panel discussion at the London Stock Exchange (LSEG). ValidMind CEO Jonas Jacobi facilitated the event, which operated under the Chatham House Rule. The panel included several senior representatives from large European financial institutions and covered a range of topics related to Generative AI (GenAI), focusing on best practices for minimizing the risks associated with AI models and their governing frameworks.
Understanding the EU AI Act
The first topic was the soon-to-be-implemented EU AI Act, which panelists regarded less as a strict law and more as a framework for identifying appropriate guidelines for AI. They discussed the regulation's impact on financial operations, emphasizing how it would increase transparency and traceability, shedding light on previously unexamined areas and supporting sound investment decisions. One panelist shared their experience of how challenging a company's shift from classical AI to GenAI can be; they addressed this by involving as many collaborators as possible to ease the transition.
GenAI’s impact on operations
Discussion then shifted to GenAI's impact on a company's current operations, as well as the question of who owns AI risk within a given bank. The panelists' overall view was that GenAI will affect far more than just the MRM and technical sides of a company, making it important for every function to recognize and adapt to this new technology. The ownership of AI risk within an organization was debated; a consensus emerged that responsibility falls to whoever introduces a solution and implements it within the company.
Testing and validating GenAI
Another key point was the testing and validation of GenAI and the challenges it poses. Because GenAI can serve such a wide range of use cases, validators must now cover far more ground, yet current validation processes are insufficient for this change. 'Human influence' was also tied into the discussion, with panelists debating its reliability and effectiveness, among other concerns.
The democratization of AI
The panel also briefly discussed the democratization of AI, model assumptions, and regulatory frameworks, reflecting on the importance of regulation and proper AI governance within banks. One panelist explained that traditional AI models were often highly specific to a use case and therefore easier to manage. With GenAI, one sees more democratization: the same model is versatile and can be used by many people for diverse purposes, making it difficult to track and manage. This underscores the importance of companies adapting their MRM frameworks to handle GenAI.
Managing AI risks through culture shift
The second half of the discussion focused on solutions for managing AI risks, addressing potential biases in AI training, and other internal factors that AI can affect. This all came down to the importance of a company properly managing its employees, their access to AI tools, and the overall company culture.
Ensuring AI is used appropriately, and specifically not for high-risk banking scenarios, requires changing employees' perception of AI and properly shaping culture within the company. The panelists noted that without any form of regulation or guardrail, people could access AI on personal devices, which could be problematic. It was therefore agreed that implementing cultural change within an organization is vital to preventing this scenario.
Managing third-party risks
The conversation then moved to third-party risks. Properly managing and evaluating the risks of third-party AI solutions is vital for organizations to function well. The panelists agreed that learning to use AI tools effectively and safely must include some form of awareness training and governance frameworks embedded in the organization; a lack of structure could lead to a poor understanding of the benefits and risks associated with AI. One idea passed around the panel was that senior executives should be the initial adopters of these AI frameworks, leading by example in implementing and discussing AI-related changes, since their opinions and actions will influence the rest of their teams.
The ‘kill switch’ question
The discussion concluded with the panelists' thoughts on a 'kill switch' for AI. The idea, introduced at a recent AI summit in South Korea involving some of the world's top AI companies, was debated for its potential efficacy. There was skepticism about how effective such a switch would be, and how it would work with artificial general intelligence (AGI). Some panelists argued that if AGI becomes sufficiently advanced, a kill switch might come too late to be implemented effectively. This further underscores the importance of properly tested and validated AI.