The UK’s Financial Conduct Authority (FCA) has now issued its AI Update (2025), a significant step in its regulatory journey. It builds on the 2022 joint Discussion Paper on AI and Machine Learning (DP5/22), which set out early questions about AI’s transformative potential and the risks it introduces. Three years on, the FCA’s position has sharpened: rather than building an AI rulebook from scratch, the regulator is embedding AI oversight within the UK’s existing principles-based framework.
This shift – from exploration to direction – carries important implications for firms across the financial sector.
When the FCA and the Bank of England published their joint discussion paper in 2022, the aim was to open a dialogue rather than prescribe solutions. The paper framed AI as a transformative force already moving from proofs of concept to production use across the industry.

But it also highlighted uncertainty. Firms were unsure how AI fitted into the patchwork of existing regulation. Was there a need for bespoke AI rules, or could the UK’s established conduct, governance, and resilience requirements be stretched to cover new risks?
The paper grouped those risks into four categories:
- Consumer outcomes, including the potential for bias, discrimination, or opaque decision-making.
- Data and models, with questions around quality, lineage, explainability, and robustness.
- Governance and accountability, ensuring firms could not sidestep responsibility by delegating decisions to algorithms.
- Operational resilience, including concentration risk where critical third-party providers supply AI infrastructure at scale.
The tone was deliberately open. The FCA sought feedback, commissioned the second Machine Learning Survey with the Bank of England, and invited industry to help shape the next phase. The message in 2022 was one of listening and evidence-gathering.
Principles, Not Prescriptions
Three years later, the FCA’s AI Update makes its stance clearer. The regulator now emphasises that existing frameworks already capture most AI risks. Consumer Duty obligations, SM&CR responsibilities, SYSC rules on governance, outsourcing and operational resilience requirements – together these create a broad regulatory net that applies to AI-enabled solutions.
The UK’s approach is explicitly principles-based, technology-agnostic, and outcomes-focused. That means firms are not bound to specific technologies or processes, but are judged on the fairness, transparency, accountability, and resilience of their outcomes.
This is in sharp contrast with the EU AI Act, which sets prescriptive requirements depending on a system’s risk category. By sticking to principles, the FCA argues it can encourage innovation without sacrificing integrity.
A central message is accountability. The AI Update underscores that senior managers and boards remain responsible under SM&CR. Responsibility cannot be outsourced to a model or vendor.
Similarly, Consumer Duty applies to AI-driven processes: decisions must be fair, explainable, and free of unjustified discrimination.
How the FCA is Deploying AI Internally
Beyond supervising firms’ use of AI, the FCA is also deploying advanced analytics internally to strengthen oversight, improve detection, and test the resilience of regulated firms. This dual role, supervisor and practitioner, gives the FCA practical insight into the same challenges firms face when adopting AI, from explainability to operational resilience.

One prominent example is market abuse surveillance. In its 2025 AI Update, the FCA notes that trade surveillance strategies must evolve to detect “ever more complex forms of market abuse” and highlights how advanced analytics can be applied to cross-market manipulation and other hard-to-spot behaviours. This direction has already been tested in practice: the FCA ran a Market Abuse Surveillance TechSprint in 2024, bringing together firms and technologists to explore how AI and machine learning could be used to detect manipulative trading strategies across multiple venues.
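The FCA has not published the internals of its surveillance analytics, but the core idea behind cross-market detection is easy to illustrate. The sketch below is a deliberately simplified toy, not the regulator’s actual tooling: it flags entities that trade opposite sides of the same instrument on different venues within a short window, a crude proxy for the patterns such analytics aim to surface. The entity names, record fields, and the five-second window are all invented for illustration.

```python
# Toy cross-venue surveillance check -- illustrative only, not FCA tooling.
# Flags entities that buy an instrument on one venue and sell it on another
# within a short window, a crude proxy for cross-market manipulation signals.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trade:
    entity: str      # hypothetical trader identifier
    venue: str
    instrument: str
    side: str        # "BUY" or "SELL"
    ts: float        # timestamp in seconds, simplified

def flag_cross_venue(trades: list[Trade], window: float = 5.0) -> set[str]:
    """Return entities trading opposite sides of the same instrument
    on different venues within `window` seconds of each other."""
    by_key = defaultdict(list)
    for t in trades:
        by_key[(t.entity, t.instrument)].append(t)
    flagged = set()
    for (entity, _), legs in by_key.items():
        legs.sort(key=lambda t: t.ts)
        for a, b in zip(legs, legs[1:]):
            if (a.venue != b.venue and a.side != b.side
                    and b.ts - a.ts <= window):
                flagged.add(entity)
    return flagged

trades = [
    Trade("ACME", "VenueA", "XYZ", "BUY", 100.0),
    Trade("ACME", "VenueB", "XYZ", "SELL", 102.5),
    Trade("OTHER", "VenueA", "XYZ", "BUY", 100.0),
]
print(flag_cross_venue(trades))  # {'ACME'}
```

Real surveillance systems layer far richer features (order book data, cancellations, cross-asset links) on top of this kind of join, but the principle of correlating behaviour across venues is the same.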
The regulator has also applied AI to fraud and scam detection. Its Advanced Analytics Unit uses web-scraping and social media monitoring tools to identify and triage potential scam websites, supporting the FCA’s ongoing effort to disrupt fraud at scale. In a 2023 speech on fraud prevention, the FCA confirmed it scans around 100,000 websites a day, issuing public warnings through its Warning List and sharing intelligence with law enforcement and financial institutions (Frameworks for Effective Fraud Prevention Measures, FCA, 2023). These tools are now being extended to address Authorised Push Payment (APP) fraud, where synthetic datasets allow the FCA and industry participants to model how scams propagate through payment systems and test intervention strategies.
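To make the synthetic-data idea concrete, here is a minimal sketch of the kind of simulation the Update gestures at: a randomly generated payments network, a scam seeded at one account, and a comparison of how far it spreads with and without a simple intervention. Every parameter (the population size, edge count, the 30% “falls for the scam” probability, the top-20 blocking rule) is an invented assumption, not an FCA method.

```python
# Illustrative sketch only -- all parameters are invented. It mimics the
# idea of using a synthetic payments network to model how an APP scam
# spreads and to compare intervention strategies.
import random

random.seed(42)
N = 500                      # synthetic account population
edges = [(random.randrange(N), random.randrange(N)) for _ in range(2000)]
graph: dict[int, list[int]] = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)

def simulate(p_fall_for_scam: float, blocked: set[int]) -> int:
    """Spread a scam from a seed account; return the number of victims."""
    victims, frontier = {0}, [0]   # account 0 seeds the scam
    while frontier:
        nxt = []
        for acct in frontier:
            for neigh in graph.get(acct, []):
                if neigh in victims or neigh in blocked:
                    continue
                if random.random() < p_fall_for_scam:
                    victims.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return len(victims)

# Crude "intervention": freeze the best-connected, mule-like accounts.
by_out_degree = sorted(graph, key=lambda a: len(graph[a]), reverse=True)
print("no intervention:", simulate(0.3, set()))
print("top-20 blocked: ", simulate(0.3, set(by_out_degree[:20])))
```

The value of synthetic data here is exactly what the FCA describes: interventions can be stress-tested on a realistic-looking network without touching real customer payment records.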
Perhaps the clearest example of internal AI innovation is the FCA’s use of synthetic data for sanctions screening. The 2025 AI Update describes an “in-house synthetic data tool for Sanctions Screening Testing that has transformed our assessment of firms’ sanctions name screening systems”. This capability allows the regulator to test how effectively firms’ systems match names against the UK’s consolidated sanctions list, including whether they handle fuzzy matches, threshold logic, and false positives appropriately. In a 2023 speech, the FCA explained how the tool is used to probe governance, oversight, and vendor performance in sanctions screening, noting that it has revealed significant weaknesses in some firms’ controls – see How to Change in Response to Changing Threats, FCA, 2023. Beyond sanctions, synthetic data is also made available in the Digital Sandbox to help firms experiment with AI models without risking exposure of personal or sensitive financial data.
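Again, the FCA has not published its tool’s design, but the concept it describes, generating distorted name variants and measuring whether a screening system’s fuzzy matching and thresholds still catch them, can be sketched in a few lines. The listed name, the distortion rules, and the thresholds below are all hypothetical, and difflib’s similarity ratio stands in for whatever matching logic a real screening vendor uses.

```python
# Minimal sketch of the *concept* behind synthetic sanctions-screening
# tests: generate typo-style variants of a listed name, then check whether
# a fuzzy matcher at a given threshold still catches them. The name,
# distortions, and thresholds are invented for illustration.
import random
from difflib import SequenceMatcher

random.seed(7)
LISTED = "Aleksandr Petrov"   # hypothetical sanctioned name

def distort(name: str) -> str:
    """Apply one random edit: drop, swap, or substitute a character."""
    i = random.randrange(len(name) - 1)
    op = random.choice(["drop", "swap", "sub"])
    if op == "drop":
        return name[:i] + name[i + 1:]
    if op == "swap":
        return name[:i] + name[i + 1] + name[i] + name[i + 2:]
    return name[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + name[i + 1:]

def matches(candidate: str, listed: str, threshold: float) -> bool:
    score = SequenceMatcher(None, candidate.lower(), listed.lower()).ratio()
    return score >= threshold

variants = [distort(LISTED) for _ in range(100)]
for threshold in (0.80, 0.90, 0.95):
    hits = sum(matches(v, LISTED, threshold) for v in variants)
    print(f"threshold {threshold:.2f}: caught {hits}/100 synthetic variants")
```

Sweeping the threshold exposes the trade-off the FCA probes in practice: set it too high and near-miss names slip through; set it too low and false positives overwhelm analysts.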
The FCA has institutionalised this approach through the creation of a Synthetic Data Expert Group (SDEG), which brings together financial institutions, vendors, and academics to explore how synthetic data can support fraud detection, testing of machine learning models, and secure data sharing. By formalising these efforts, the regulator signals that synthetic data is not just a supervisory tool but a wider enabler of innovation in the UK’s financial system.
Together, these initiatives show that the FCA is “walking the talk” on AI. It is developing surveillance and fraud-detection tools, applying synthetic data to live supervisory challenges, and creating collaborative environments where firms can experiment safely. Experimenting inside its own operations gives the regulator hands-on understanding of the challenges firms face, from data quality to explainability, and strengthens its credibility when it calls on firms to ensure AI systems are explainable, fair, and accountable.
The AI Update looks forward as well as back. Over the next 12 months, several initiatives are planned:
- Machine Learning Survey 3.0: Working again with the Bank of England, the FCA will gather updated evidence on adoption, use cases, and risk management.
- Innovation testbeds: The AI Sandbox and Digital Hub will allow firms to trial AI solutions under regulatory oversight.
- Horizon scanning: Beyond today’s models, the FCA is monitoring emerging risks such as large language models (LLMs), deepfakes, and potential quantum computing disruption.
- International coordination: The FCA will continue engaging through the Digital Regulation Cooperation Forum (DRCF), the Global Financial Innovation Network (GFIN), and multilateral groups such as IOSCO, FSB, OECD and the G7.
While new legislation is not on the immediate horizon, the FCA intends to deepen its supervisory toolkit and sharpen expectations, particularly around explainability, fairness, and governance.
The Bottom Line for Firms
AI oversight falls squarely under existing accountability and fairness standards. Firms should be proactive in aligning AI projects with these obligations.
The FCA’s journey from its 2022 Discussion Paper to the 2025 AI Update provides much-needed clarity. The early stage was about asking questions, gathering evidence, and exploring whether new rules were needed. The latest update makes clear that the UK will rely on its established principles-based framework, demanding fairness, accountability, and resilience in AI without constraining innovation.
For firms, the message is twofold: AI adoption is welcome, but it must be explainable, governed, and aligned to consumer outcomes. Senior managers remain firmly on the hook. With surveys, sandboxes, and international coordination in motion, 2025 will be a pivotal year for embedding AI into the UK’s regulatory architecture.