10 Takeaways from the ‘Navigating the PRA’s SS1/23 Regulation’ Event

This week, ValidMind and Genpact partnered to welcome a packed room of model risk and AI governance professionals in London for our "Navigating the PRA's SS1/23" event. This off-the-record session was designed to unpack the current state of model risk management (MRM) and AI governance across the UK financial sector. The evening featured direct insights from regulatory practitioners and model risk professionals, including a candid panel discussion that brought together representatives from leading banks, MRM technology providers, and more.
With SS1/23 now formally in effect, this event offered a timely forum to explore how firms are grappling with the practicalities of implementation, what the PRA is observing through its thematic reviews, and where the future of model risk, especially under the growing influence of AI, is headed. Under the Chatham House Rule, we've compiled ten key takeaways that reflect the most pressing strategies, challenges, and opportunities raised during the session.
1. SS1/23 Is Driving a Cultural Shift in Model Risk Management
The goal of SS1/23 is not just compliance: it is to embed MRM as a distinct risk class on par with credit or market risk. Firms are expected to treat MRM with equivalent rigor, governance, and visibility at the board level.
2. Implementation Is a Journey, Not a Checklist
While some banks are further ahead in SS1/23 compliance than others (particularly those already aligned with SR 11-7), the SS1/23 rollout will be an iterative process. Ongoing dialogue is encouraged, recognizing the practical challenges firms face in embedding the framework across complex organizations.

3. Thematic Findings Are Meant to Inform and Prompt Action
The PRA’s thematic review yielded findings that are applicable industry-wide. Firms are expected to incorporate these findings into their remediation plans.
4. Model Inventories Must Go Beyond Compliance Metrics
Many firms’ risk appetite frameworks rely heavily on model lifecycle metrics (e.g., validation status), but these often fail to capture qualitative aspects like model performance and limitations. Supervisors expect firms to develop inventories that inform board-level decisions with insights into risk exposure, usage, and materiality.
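To make this concrete, here is a minimal sketch (plain Python, with hypothetical field names of our own choosing, not prescribed by SS1/23) of an inventory record that pairs the usual lifecycle metrics with the qualitative context supervisors expect to reach the board:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelInventoryRecord:
    # Lifecycle metrics most firms already track
    model_id: str
    validation_status: str          # e.g. "approved", "approved with conditions"
    last_validation_date: str
    # Qualitative context that supports board-level reporting
    tier: int                       # materiality tier under the firm's own methodology
    business_use: str               # the decisions the model actually drives
    known_limitations: List[str] = field(default_factory=list)
    performance_summary: str = ""   # plain-language view of recent monitoring results

record = ModelInventoryRecord(
    model_id="IFRS9-PD-Retail-01",
    validation_status="approved with conditions",
    last_validation_date="2025-03-31",
    tier=1,
    business_use="Retail impairment provisioning",
    known_limitations=["Limited data for post-pandemic vintages"],
    performance_summary="Stable discrimination; calibration drift under review",
)

The point of the extra fields is that a board pack built from records like this can speak to exposure, usage, and materiality, not just validation status.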
5. Beware of Excluding Models from the Framework
Red flags were raised about firms excluding certain model types, such as expert judgment models, vendor models, or single-use models, from their MRM scope. Such exclusions must be justified, with accompanying controls to mitigate associated risks.
Learn more in our technical brief: Navigating PRA’s SS1/23
6. DQMs Require a Practical, Risk-Based Approach
The "deterministic quantitative methods" (DQM) designation outlined in SS1/23 has created confusion among some MRM practitioners. Firms struggling with scoping are encouraged to focus on material DQMs that influence decisions, rather than attempting to catalog every non-model analytical tool. Controls should be proportionate and based on business impact.
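For illustration only, with made-up tool names and thresholds rather than PRA criteria, a risk-based scoping pass might look like this:

# Hypothetical catalogue of non-model analytical tools / DQMs
tools = [
    {"name": "collateral_haircut_lookup", "influences_decisions": True,  "business_impact": "high"},
    {"name": "branch_footfall_dashboard", "influences_decisions": False, "business_impact": "low"},
    {"name": "limit_utilisation_calc",    "influences_decisions": True,  "business_impact": "medium"},
]

# Apply controls to DQMs that drive decisions with material impact,
# rather than cataloguing every spreadsheet and script.
in_scope = [
    t for t in tools
    if t["influences_decisions"] and t["business_impact"] in {"high", "medium"}
]

for t in in_scope:
    print(f"{t['name']}: apply proportionate DQM controls")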
7. Model Tiering Should Drive Control Intensity
The tiering framework is central to a proportional MRM strategy. While firms are free to design their own methodologies, the PRA expects an independent review of tiering frameworks and clear alignment between tiers and control rigor. Low-tier models still require minimum controls.
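As a rough sketch of how tiering can drive control intensity (the tier labels and control sets below are our own illustrative assumptions, not requirements from SS1/23), a simple mapping makes the idea concrete:

# Illustrative mapping from model tier to minimum control requirements.
CONTROLS_BY_TIER = {
    1: ["independent validation", "annual revalidation", "ongoing monitoring", "board reporting"],
    2: ["independent validation", "biennial revalidation", "ongoing monitoring"],
    3: ["peer review", "periodic monitoring"],  # low-tier models still carry minimum controls
}

def required_controls(tier: int) -> list:
    """Return the minimum control set for a given model tier."""
    return CONTROLS_BY_TIER.get(tier, CONTROLS_BY_TIER[3])

print(required_controls(1))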

8. Validation Needs to Be Both Robust and Independent
Gaps in effective challenge persist, particularly around conceptual soundness: validation functions must go beyond code checks to assess whether a methodology is suitable for its intended use. Validation teams also need sufficient authority to delay or reject model deployment if issues remain unresolved.
9. AI Models Require Cross-Functional Oversight Based on Risk Materiality
As AI adoption expands across financial institutions, it’s critical to manage these technologies in proportion to the materiality of the risk they pose. Rather than defaulting to fragmented or siloed oversight, organizations should align stakeholders across key functions, including model risk management, data governance, fraud prevention, compliance, cybersecurity, and IT. This collaborative approach ensures holistic risk coverage and governance consistency. Participants showed interest in applying generative AI to validation workflows but also stressed the importance of maintaining robust explainability and control mechanisms.
10. Transparency and Board Engagement Are Essential
There was consensus across speakers and panelists that model risk governance must be intelligible to the board. This includes distilling complex models into clear language, surfacing key risks, and ensuring accountability, especially in AI use cases. Trust, explainability, and traceability were recurring themes in this context.
Want to learn more about how ValidMind helps banks comply with SS1/23? Get to know some of the features of our platform that can boost your confidence.