June 6, 2025

How to Navigate PRA’s SS1/23 Regulation

75% of British financial institutions already use artificial intelligence (AI), and a further 10% plan to adopt it within the next three years. This surge in adoption, up from 58% in 2022, is drawing heightened scrutiny from regulators, who are concerned about the associated model risks and the additional strain AI places on existing model risk management (MRM) frameworks.

| Get your copy of the technical brief: Navigating PRA’s SS1/23 Regulation

A Tech-Positive Regulatory Landscape

However, this is also the UK’s AI moment, and the importance of risk management should be understood against the backdrop of the significant opportunities at stake and the tech-forward stance that national institutions are embracing:

  • AI Playbook for the UK Government: The Government’s AI Playbook outlines how departments can adopt AI safely and effectively, demonstrating a pro-innovation stance by embedding AI into public service delivery, guided by practical safeguards and scalable use cases.

This policy alignment signals an ambition not just to regulate AI, but to enable its responsible acceleration across finance.

Introducing SS1/23: A Structured Approach to MRM

The Prudential Regulation Authority’s Supervisory Statement 1/23 (SS1/23) lays out a structured, principles-based approach to model risk management. Effective from May 17, 2024, it applies to PRA-regulated banks that use internal models to calculate regulatory capital requirements, and its expectations extend beyond capital models to the full range of models those firms rely on. Importantly, even firms outside the formal scope are encouraged to adopt SS1/23 as best practice.

Unlike rigid rulebooks, SS1/23 offers five guiding principles, leaving it to institutions to interpret and implement them proportionately:

  1. Model Identification & Classification
  2. Governance
  3. Development, Implementation, and Use
  4. Independent Validation
  5. Risk Mitigants

Each principle has significant implications for AI and machine learning systems.

What SS1/23 Means for AI

1. Enterprise Model Inventories Must Be AI-Literate

SS1/23 requires centralized, comprehensive model inventories, even for AI models embedded in customer service, fraud detection, or HR analytics. Foundation models such as LLMs must be inventoried alongside the applications built on them, with documented links and use cases, as in the illustrative sketch below.
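To make this concrete, here is one hypothetical way an inventory record could link a foundation model to a downstream application. The ModelRecord structure and its field names are illustrative assumptions, not a prescribed SS1/23 schema or the ValidMind data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """Illustrative inventory entry; field names are assumptions, not an SS1/23 schema."""
    model_id: str
    name: str
    model_type: str            # e.g. "foundation_model", "ml_classifier", "statistical"
    owner: str
    risk_tier: str             # e.g. "tier_1" (highest materiality) .. "tier_3"
    upstream_models: List[str] = field(default_factory=list)   # links to foundation models
    use_cases: List[str] = field(default_factory=list)         # documented business uses

# A foundation model and an application built on it, both inventoried with a documented link
llm = ModelRecord("M-001", "Vendor LLM v2", "foundation_model", "AI Platform Team", "tier_1")
chatbot = ModelRecord(
    "M-014", "Customer Service Assistant", "llm_application", "Retail Banking", "tier_2",
    upstream_models=["M-001"], use_cases=["customer_service_triage"],
)
```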

2. Governance Is Now Board-Level

AI model oversight is no longer optional. SS1/23 elevates MRM accountability to senior management, requiring AI-specific metrics like interpretability, complexity, and ethical risk to be part of risk tiering and reporting.
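As one illustration of how AI-specific factors could feed a tiering scheme for reporting, the sketch below combines interpretability, complexity, and ethical-risk scores into a tier. The 1–5 scale and the thresholds are assumptions chosen for demonstration, not values taken from SS1/23.

```python
def assign_risk_tier(interpretability: int, complexity: int, ethical_risk: int) -> str:
    """Combine illustrative 1-5 scores (5 = most concerning) into a reporting tier.
    Thresholds are assumptions for demonstration, not SS1/23 requirements."""
    score = interpretability + complexity + ethical_risk
    if score >= 12:
        return "tier_1"   # board-level reporting, most intensive validation
    if score >= 8:
        return "tier_2"
    return "tier_3"

# Example: an opaque, complex generative model used in a customer-facing process
print(assign_risk_tier(interpretability=5, complexity=4, ethical_risk=4))  # -> "tier_1"
```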

3. Multi-Disciplinary Oversight Is Essential

AI development can’t be siloed. SS1/23 pushes firms to embed data scientists, compliance, and business leads across the model lifecycle, from design to monitoring. Integrated platforms like ValidMind’s help automate this coordination.

4. AI Models Demand New Validation Techniques

Generative AI, dynamic ML models, and opaque third-party systems don’t fit neatly into traditional backtesting. SS1/23 expects techniques like generation testing, override tracking, and parallel outcome analysis for AI, and calls for tailored validation documentation, even for vendor models.
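As a simple illustration of parallel outcome analysis, the sketch below runs a candidate model alongside an incumbent on the same inputs and measures how often their decisions diverge. The function name, the divergence tolerance, and the output fields are assumptions for illustration, not a prescribed SS1/23 test.

```python
from typing import Any, Callable, Sequence

def parallel_outcome_analysis(
    incumbent: Callable[[Any], Any],
    candidate: Callable[[Any], Any],
    inputs: Sequence[Any],
    max_divergence: float = 0.05,   # illustrative tolerance, not a regulatory figure
) -> dict:
    """Run both models on the same inputs and report how often their outcomes differ."""
    disagreements = [x for x in inputs if incumbent(x) != candidate(x)]
    rate = len(disagreements) / len(inputs) if inputs else 0.0
    return {
        "divergence_rate": rate,
        "within_tolerance": rate <= max_divergence,
        "examples": disagreements[:10],   # keep a sample for the validation report
    }
```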

5. Mitigation Strategies Must Be Real-Time and Scalable

For generative and adaptive models, failure modes may be unpredictable. SS1/23 expects firms to proactively define fallback mechanisms, escalation paths, and real-time monitoring capabilities to mitigate risk from AI models, especially in customer-facing applications.
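A minimal sketch of a fallback mechanism with basic monitoring is shown below: low-confidence or failed generations are logged and routed to a predefined fallback path, such as a rules-based reply or a human hand-off. The generate and fallback callables, the confidence threshold, and the logging setup are hypothetical assumptions, not requirements stated in SS1/23.

```python
import logging
from typing import Callable, Tuple

logger = logging.getLogger("model_monitoring")

def guarded_generate(
    generate: Callable[[str], Tuple[str, float]],   # hypothetical model call: (text, confidence)
    fallback: Callable[[str], str],                 # e.g. rules-based reply or human hand-off
    prompt: str,
    min_confidence: float = 0.8,                    # illustrative threshold, not from SS1/23
) -> str:
    """Route low-confidence or failed generations to a predefined fallback path."""
    try:
        text, confidence = generate(prompt)
    except Exception as exc:
        logger.error("Generation failed, escalating to fallback: %s", exc)
        return fallback(prompt)
    if confidence < min_confidence:
        logger.warning("Low confidence (%.2f), using fallback", confidence)
        return fallback(prompt)
    return text
```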

Watch the Webinar: David Asermely on SS1/23

Gain expert insight into how financial institutions can implement robust, AI-specific MRM practices aligned to SS1/23. In this webinar, ValidMind’s VP Growth Strategy, David Asermely, outlines:

  • How to address emerging PRA expectations for AI and model risk governance
  • How to build validation frameworks that meet SS1/23 standards
  • How to future-proof your models for evolving regulatory scrutiny

How ValidMind Helps You Comply with SS1/23

ValidMind is purpose-built to help financial institutions operationalize the five principles of SS1/23.

With ValidMind, your team can:

  • Automate validation workflows with role-based controls
  • Maintain real-time, AI-specific model inventories
  • Generate board-ready documentation with explainability tooling
  • Ensure model risk is monitored, governed, and auditable

| Read the product brief: SS1/23 Compliance with the ValidMind Platform


Conclusion: Compliance as a Competitive Advantage

SS1/23 isn’t just about ticking regulatory boxes. It represents a shift toward AI accountability and strategic risk governance. Institutions that embrace this framework will be better positioned to scale their AI initiatives safely, ethically, and confidently.

Need help navigating compliance? Request a demo with ValidMind to see how we simplify model risk management for AI.
