The Future of Risk Management: How Oliver Wyman Transforms the Industry with ValidMind
We recently attended RiskMinds International 2023 in London, which featured a standout presentation by our friends at Oliver Wyman, Ian Shipley and Rainer Glaser. Their talk, Evaluating An LLM-Powered Model: A Practical Case Study With Oliver Wyman’s NewsTrack Model, offered an insightful look at the innovative application of large language models in risk management, including how Oliver Wyman uses ValidMind for LLM documentation and validation. In this blog post, we explore the functionality behind their talk in more detail, why it matters, and how you can try it out yourself.
NewsTrack, meet ValidMind
Oliver Wyman’s NewsTrack model is an advanced Natural Language Processing (NLP) and deep learning toolkit that processes structured and unstructured, high-frequency textual information in real time to generate state-of-the-art analysis and forecast risk-related signals, such as potential credit downgrade signals that are highly relevant for financial institutions. The toolkit serves as an early warning and monitoring system for risk-related events and is highly adaptable, allowing it to be applied to new use cases with minimal effort.
Since NewsTrack implements a large language model to generate early warning signals, it is subject to the same validation requirements being raised for other LLM applications. Banks and other financial institutions that use AI models must have processes in place to validate, document, and govern the risks related to these models, ensuring compliance with both current and forthcoming regulatory requirements. Efficient compliance requires automation of these processes, a role seamlessly filled by ValidMind.
An AI risk platform to automate compliance
ValidMind is highlighted in Oliver Wyman’s talk for its effectiveness in ensuring that NewsTrack’s use of large language models aligns with validation and testing requirements for AI models.
The benefits of using the ValidMind platform include:
- Automated model testing & documentation: Our developer framework enables fast, programmatic testing and documentation of models. We support a wide range of models, including AI/ML, LLM, and statistical models, built in Python or R.
- Automated validation report generation: The platform enables you to identify risk areas and generate validation reports based on test outcomes. It allows for the configuration of validation templates and risk areas specific to your organization.
- Model risk governance: The platform facilitates tracking of models, documentation versions, and risk findings across the inventory. You can easily manage workflows, approvals, decisions, and remediation actions.
Make the complex, simple
ValidMind has invested considerable effort into designing a product that streamlines testing and validation for AI/ML and LLM models. Our aim is to remove the guesswork and complexity from the model validation process, and Oliver Wyman’s decision to document and validate their toolkit with the ValidMind framework is evidence of that aim in practice.
Take a look at this short video to see what we mean:
Our product design enables you to:
- Run our developer framework inside your own development environment. Adding the ValidMind library to your Python environment is as simple as running pip install validmind, and connecting to our platform UI is done through a simple API connection (see the sketch after this list).
- Use validation tests from our default library, or bring your own use-case specific tests. Our demo notebooks and documentation let you explore various pre-configured tests, and we also offer utilities to make it easy to develop your own tests and integrate them into the library via test providers.
- Seamlessly synchronize the documentation output with your model development environment, to refine your test results and model documentation throughout the development cycle.
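To make the first two points concrete, here is a minimal sketch of connecting a development environment to the ValidMind platform and kicking off documentation tests. The API host, credentials, and project identifier are placeholders, and the exact function names and arguments shown (vm.init, vm.run_documentation_tests) may differ between framework releases, so treat this as an illustration of the workflow rather than copy-paste configuration; our demo notebooks contain the canonical versions.

```python
# Install the developer framework into your Python environment first:
#   pip install validmind

import validmind as vm

# Connect your development environment to the ValidMind platform.
# The host, key, secret, and project values below are placeholders;
# the real ones come from your project's settings in the platform UI.
vm.init(
    api_host="https://api.validmind.ai/api/v1/tracking",  # placeholder URL
    api_key="<your-api-key>",
    api_secret="<your-api-secret>",
    project="<your-project-identifier>",
)

# After registering your datasets and model with the framework, run the
# test suite defined by the project's documentation template and push the
# results to the platform, where they populate the model documentation.
vm.run_documentation_tests()
```

Because the framework runs inside your own environment, the test results and the generated documentation stay in sync with your code as the model evolves, which is what the third point above refers to.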
The metrics and tests we provide are aligned with industry best practices and reflect the ongoing evolution of model risk management.
Try it yourself — join our closed beta!
The LLM validation features showcased in the Oliver Wyman demo are readily available through our ongoing closed beta program, from the Jupyter notebooks that make use of the ValidMind Developer Framework to automate testing and documentation, to the AI Risk Platform UI where you can collaborate to validate your AI models.
All you need to do is register for our closed beta to begin your discovery of everything ValidMind has to offer.