5 Essential Steps for Banks to Implement LLMs Safely
The business case for using large language models (LLMs) is clear, and financial institutions are increasingly exploring ways to harness their potential. But adopting LLMs in banking requires a strategic approach to ensure that these tools are implemented safely and effectively.
JP Morgan Chase, for example, recently announced that it had rolled out an LLM-based virtual assistant called LLM Suite to its Asset and Wealth Management organization (around 60,000 employees), built on OpenAI models delivered through a closed, internal environment rather than the public ChatGPT. That one of the world's largest banks was able to accomplish this shows how quickly the banking industry is moving to integrate advanced AI tools. With that shift, however, comes the need for a careful and strategic approach.
Unlike the niche AI use cases typically seen in banks, which are narrowly tailored to specific functions or small groups of users, the deployment at JP Morgan Chase is much broader in scope: an internal LLM designed for general use across the organization and accessible to a wide range of employees. That breadth demands a different validation approach, because the tool is not limited to a single use case but is meant to support many functions. While its usefulness is obvious, rolling out such a tool requires a methodical approach that covers (but is not limited to) five areas: Identify, Validate, Educate, Control, and Monitor.
1. IDENTIFY: Understanding Use Cases and Cohorts
The first step in implementing LLMs is to clearly identify the specific use cases within the organization where the technology can add value. This involves selecting the right cohorts across various departments to test and evaluate the tool. By identifying these critical areas, banks can ensure that the LLM is deployed where it will be most effective, while also minimizing the risk of misuse.
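To make this concrete, the sketch below shows one way a use-case inventory might be represented: a simple Python record with a risk tier and a pilot cohort, sorted so the riskiest use cases get validation attention first. The fields, risk taxonomy, and example entries are illustrative assumptions, not drawn from any specific bank's framework.

```python
from dataclasses import dataclass, field

@dataclass
class LLMUseCase:
    """One entry in a hypothetical use-case inventory (illustrative fields only)."""
    name: str
    department: str
    description: str
    risk_tier: str                                          # assumed taxonomy: "low", "medium", "high"
    pilot_cohort: list[str] = field(default_factory=list)   # roles or employee groups testing the tool
    approved: bool = False

def highest_risk_first(use_cases: list[LLMUseCase]) -> list[LLMUseCase]:
    """Order the inventory so higher-risk use cases are validated first."""
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(use_cases, key=lambda uc: order.get(uc.risk_tier, 3))

if __name__ == "__main__":
    inventory = [
        LLMUseCase("Meeting summarization", "Wealth Management",
                   "Summarize internal meeting notes", "low", ["analyst"]),
        LLMUseCase("Client email drafting", "Asset Management",
                   "Draft client-facing emails for adviser review", "high", ["adviser"]),
    ]
    for uc in highest_risk_first(inventory):
        print(uc.risk_tier, "-", uc.name, "-", uc.department)
```

Even a lightweight registry like this gives the bank a single place to record who is piloting what, and a basis for deciding which use cases warrant the deepest validation.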
2. VALIDATE AND SECURE: Rigorous Testing and Risk Assessment
Validation is a crucial phase in the implementation of LLMs. Banks must conduct thorough testing of the technology by engaging a diverse group of users to assess the risks and benefits. This process should include evaluating the potential for biases, hallucinations, and other issues that could arise from the use of LLMs. Ensuring that the environment is secure for handling sensitive data is also a key part of this validation process.
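As a rough illustration of what such testing might look like in code, here is a minimal evaluation harness. It assumes the bank's LLM is exposed as a simple callable (`ask_model`) and that reviewers have written test cases listing facts that must appear and phrases that must not; both the harness and the keyword checks are simplified stand-ins for a real validation program, not a substitute for bias and hallucination review by humans.

```python
from typing import Callable

def run_validation(ask_model: Callable[[str], str],
                   test_cases: list[dict]) -> list[dict]:
    """Run each test prompt and flag responses that miss required facts or
    contain phrases reviewers have blacklisted (a crude proxy for
    hallucination and policy checks)."""
    results = []
    for case in test_cases:
        answer = ask_model(case["prompt"])
        missing = [fact for fact in case.get("must_contain", [])
                   if fact.lower() not in answer.lower()]
        banned = [term for term in case.get("must_not_contain", [])
                  if term.lower() in answer.lower()]
        results.append({
            "prompt": case["prompt"],
            "passed": not missing and not banned,
            "missing_facts": missing,
            "banned_terms": banned,
        })
    return results

if __name__ == "__main__":
    def fake_model(prompt: str) -> str:
        # Stand-in model so the sketch runs without any external service.
        return "Past performance does not guarantee future results."

    cases = [{"prompt": "Summarize the fund disclosure.",
              "must_contain": ["past performance"],
              "must_not_contain": ["guaranteed return"]}]
    for result in run_validation(fake_model, cases):
        print(result)
```

In practice the test set would be built by the diverse user cohorts identified earlier, and the checks would go well beyond keyword matching, but the structure of the loop (prompt, response, automated flag, human review) stays the same.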
3. EDUCATE: Training for Safe and Effective Use
Education is essential for the successful deployment of LLMs. Before any employee gains access to the tool, they should undergo comprehensive training to understand how LLMs work and how to use them responsibly. This training should cover the limitations of LLMs, the importance of human oversight, and the need for contextual awareness when interpreting outputs generated by the model.
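One way to enforce the "training before access" rule programmatically is an access gate keyed to training records. The snippet below is a hypothetical sketch with invented employee IDs and an in-memory record store rather than a real learning-management integration.

```python
# Illustrative gate: an employee only gets LLM access once training is recorded.
completed_training: set[str] = {"emp-1001", "emp-1002"}   # hypothetical IDs

def grant_llm_access(employee_id: str) -> bool:
    """Return True only if the employee has completed the mandatory LLM training."""
    if employee_id not in completed_training:
        print(f"Access denied for {employee_id}: training not completed.")
        return False
    print(f"Access granted for {employee_id}.")
    return True

grant_llm_access("emp-1001")   # granted
grant_llm_access("emp-2044")   # denied until training is logged
```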
4. CONTROL: Implementing Internal Guardrails
To mitigate risks, banks need to establish robust controls that govern the use of LLMs. This includes setting up internal guardrails such as hallucination checkers, disclosure protocols, and usage guidelines. Controls should also emphasize that the final responsibility for any decision or information derived from the LLM rests with the human user. This ensures that critical decisions, especially those related to regulatory or client-facing outputs, are thoroughly reviewed before being acted upon.
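Below is a minimal sketch of what such guardrails might look like in code, assuming responses can be compared against the source documents the user supplied. The word-overlap grounding score, the review threshold, and the disclosure text are all illustrative choices, not an actual hallucination detector or any bank's production control.

```python
DISCLOSURE = ("AI-generated draft. Verify against source material; "
              "responsibility for any decision rests with you.")

def grounding_score(response: str, source_text: str) -> float:
    """Fraction of response words that also appear in the source text --
    a rough proxy for 'is this answer grounded in the documents provided?'"""
    resp_words = set(response.lower().split())
    src_words = set(source_text.lower().split())
    return len(resp_words & src_words) / max(len(resp_words), 1)

def apply_guardrails(response: str, source_text: str,
                     threshold: float = 0.5) -> dict:
    """Attach the mandatory disclosure and flag weakly grounded answers for human review."""
    score = grounding_score(response, source_text)
    return {
        "text": f"{response}\n\n{DISCLOSURE}",
        "needs_human_review": score < threshold,
        "grounding_score": round(score, 2),
    }

if __name__ == "__main__":
    source = "The fund's fee is 0.25 percent annually, billed quarterly."
    out = apply_guardrails("The fee is 0.25 percent annually.", source)
    print(out["needs_human_review"], out["grounding_score"])
```

The key design point is that the guardrail never suppresses the human checkpoint: low-confidence outputs are routed to review, and every output carries the disclosure placing accountability with the user.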
5. MONITOR AND FEEDBACK: Continuous Improvement
Finally, ongoing feedback and monitoring are essential for the long-term success of LLMs in banking. Banks should implement mechanisms for users to provide real-time feedback on the tool’s performance, including issues like inaccurate responses or potential risks. Continuous monitoring allows for the iterative improvement of the LLM, ensuring that it remains aligned with the bank’s goals and compliance requirements.
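To illustrate, the sketch below assumes a simple append-only feedback log: users rate each response and tag issues, and a periodic summary surfaces the most common problems for the model owners. The file format, field names, and issue tags are assumptions made for the example.

```python
import json
import time
from collections import Counter
from pathlib import Path

FEEDBACK_LOG = Path("llm_feedback.jsonl")   # assumed location for the example

def record_feedback(user_id: str, prompt: str, rating: int, issue: str = "") -> None:
    """Append one feedback event (rating 1-5, optional issue tag) to the log."""
    event = {"ts": time.time(), "user": user_id, "prompt": prompt,
             "rating": rating, "issue": issue}
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def summarize_feedback() -> dict:
    """Aggregate ratings and issue tags so reviewers can see where the tool struggles."""
    ratings, issues = [], Counter()
    if FEEDBACK_LOG.exists():
        for line in FEEDBACK_LOG.read_text().splitlines():
            event = json.loads(line)
            ratings.append(event["rating"])
            if event["issue"]:
                issues[event["issue"]] += 1
    avg = sum(ratings) / len(ratings) if ratings else None
    return {"count": len(ratings), "avg_rating": avg,
            "top_issues": issues.most_common(3)}

if __name__ == "__main__":
    record_feedback("emp-1001", "Summarize the Q2 outlook", 2, issue="inaccurate")
    print(summarize_feedback())
```

In a real deployment this loop would feed model retraining, prompt and guardrail updates, and compliance reporting, closing the cycle that began with identifying use cases.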
The adoption of LLMs in the banking sector offers tremendous potential, but it also comes with significant risks. By focusing on these five practices (Identify, Validate, Educate, Control, and Monitor), banks can implement LLMs broadly, safely, and effectively, paving the way for innovative solutions while maintaining the highest standards of security and compliance.