
From London to New York: How Regulators and Firms Are Re-Drawing the AI Compliance Map

As artificial intelligence (AI) reshapes financial services, regulators and industry leaders are converging on a shared challenge: how to balance innovation with accountability. At A-Team Group’s recent RegTech Summit London, the conversation moved beyond theory into practice, with the Financial Conduct Authority (FCA) and leading firms outlining how principle-based regulation, collaborative testing, and emerging “agentic AI” models are changing the compliance landscape.

Those themes set the stage for A-Team Group’s RegTech Summit New York, coming up in November, where the discussion will continue and the focus will turn to implementation, global alignment, and the readiness of U.S. institutions to adopt similar approaches.

Regulation by Dialogue

Opening the London event, the FCA’s head of advanced analytics described the regulator’s role as “less about marking homework and more about creating dialogue.” Rather than imposing prescriptive AI rules, the FCA continues to apply an outcomes- and principles-based framework, relying on existing regimes such as the Consumer Duty, SMCR and Operational Resilience (OPRES) to cover AI systems.

The FCA has issued a clear message: there are no new AI-specific regulations planned, and firms can “have confidence in the stability of the environment.” The FCA’s approach remains tech-agnostic and tech-positive, supporting innovation while maintaining clear expectations for governance, testing, and accountability.

The regulator is applying AI internally, developing large language models to analyse the “hundreds of thousands of unstructured text documents” it receives annually. The goal is to speed decision-making without removing human oversight: “The human is still very much in the loop.” The FCA’s own use of generative AI, together with initiatives such as its partnership with the Alan Turing Institute to create synthetic AML data, signals a regulator that wants to lead by example rather than monitor from a distance.

From Sandbox to Live Testing

Perhaps the most striking development revealed in London was the launch of AI Live Testing, a first-of-its-kind environment that allows firms to test market-ready AI systems under supervisory guidance. Designed to bridge the gap between proof-of-concept and production, the scheme provides “appropriate levels of regulatory comfort” so firms can experiment “without fear of enforcement action.”

The FCA’s new supercharged sandbox – delivered with partners including Nvidia – complements this approach by offering enriched synthetic datasets and greater computing power for model training. Together, they aim to provide coverage “across the full AI lifecycle,” from development to deployment.

This test-and-learn model marks a significant evolution in regulatory engagement. Rather than waiting for compliance failures, supervisors and firms can now learn side-by-side, building the shared evidence base that has been missing from AI regulation to date. For global institutions preparing for New York, the question is whether U.S. regulators will adopt similar collaborative methods – and how firms can align their governance models accordingly.

Agentic AI and Human Oversight

As the keynote set the regulatory tone, the next panel – Navigating the Frontier of Agentic AI in RegTech – explored what happens when autonomy meets accountability. Agentic AI, capable of adaptive decisions with limited human intervention, is already finding practical footholds in surveillance, KYC, AML and communications monitoring.

One panellist noted that the technology delivers value “where you’re using AI to do something that just doesn’t scale for humans,” such as reviewing millions of voice, video and chat interactions for signs of market abuse or misconduct. Another described using AI to “scan documents for data” during periodic KYC reviews, reducing “painful manual work” and enabling “real-time due diligence.” The collective ambition is to move beyond fragmented controls toward continuous, unified oversight of conduct and risk.
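
That “scan documents for data” workflow can be pictured with a minimal sketch: pulling a couple of KYC fields out of unstructured text. The field names and patterns below are illustrative assumptions; production systems would typically use trained extraction models rather than hand-written rules.

```python
# A minimal sketch of extracting KYC fields from unstructured document text.
# Field names and regex patterns are illustrative assumptions only; real
# pipelines would use trained extraction models, not hand-written rules.
import re

KYC_PATTERNS = {
    "registration_number": re.compile(r"Registration No\.?\s*[:#]?\s*(\w+)"),
    "incorporation_date": re.compile(r"Incorporated on\s+(\d{4}-\d{2}-\d{2})"),
}

def extract_kyc_fields(document_text: str) -> dict:
    """Return the first match for each field, or None where nothing is found."""
    return {
        name: (m.group(1) if (m := pattern.search(document_text)) else None)
        for name, pattern in KYC_PATTERNS.items()
    }

sample = "ACME Ltd. Registration No: 0123456. Incorporated on 2015-03-09."
print(extract_kyc_fields(sample))
# {'registration_number': '0123456', 'incorporation_date': '2015-03-09'}
```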

But greater autonomy brings new governance demands. As one panellist cautioned, “Fully transparent explainability of complex models is not theoretically possible.” The practical aim, they argued, is to output a “reasoning trace” – a record of how a model reached its decision – supported by metrics and logs. Another advised treating AI “like another employee – an over-enthusiastic toddler” that requires supervision, explanation and remedial action if it misbehaves.
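
The “reasoning trace” idea maps naturally onto an append-only audit log. The Python sketch below shows one hypothetical shape such a record might take; the schema and field names are assumptions for illustration, not a published standard or any vendor’s product.

```python
# A hypothetical "reasoning trace" record written to an append-only audit log.
# The schema is an illustrative assumption, not a standard or a named product.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ReasoningTrace:
    model_id: str                  # which model version produced the decision
    input_ref: str                 # pointer to the reviewed item, not the raw data
    steps: list = field(default_factory=list)    # intermediate reasoning steps
    metrics: dict = field(default_factory=dict)  # e.g. confidence scores
    decision: str = ""             # the model's recommendation, not the final call
    timestamp: float = field(default_factory=time.time)

def log_trace(trace: ReasoningTrace, path: str = "audit_log.jsonl") -> None:
    """Append the trace as one JSON line, preserving a reviewable history."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(trace)) + "\n")

log_trace(ReasoningTrace(
    model_id="surveillance-model-v3",
    input_ref="chat/2025-10-02/msg-88415",
    steps=["flagged unusual counterparty mention", "matched escalation keywords"],
    metrics={"abuse_score": 0.91},
    decision="escalate to human reviewer",
))
```

Because the log is append-only, a compliance reviewer (the human still in the loop) can reconstruct after the fact why any given escalation was raised.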

These exchanges underscored a common understanding: AI should never make the final decision. Senior managers remain accountable, and model management must become a living discipline encompassing testing, validation, and periodic audit.

Data, Privacy, and Small Language Models

Much of the panel’s discussion centred on data readiness – the foundation of any trustworthy AI. Several speakers described the difficulty of combining structured and unstructured inputs across systems such as email, voice, and messaging. Building the right infrastructure to “integrate data of different types in a scalable way” was identified as one of the main barriers to wider adoption.

Privacy also remains a critical concern. One approach described was to replace reliance on public large language models with small language models (SLMs) hosted internally. These narrow, task-specific systems can “perform a narrow task repeatedly well” while maintaining full control over enterprise data – a practical compromise between innovation and confidentiality.
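
As an illustration of that pattern, the sketch below loads a compact, task-specific classifier from local disk via the Hugging Face transformers pipeline, so no enterprise data leaves the firm. The model path and output label are hypothetical; in practice the model would be fine-tuned on the firm’s own labelled data.

```python
# A minimal sketch of in-house inference with a small, task-specific model.
# The local checkpoint path and output label are hypothetical examples.
from transformers import pipeline

# Loading from local disk keeps enterprise data inside the firm's perimeter.
classifier = pipeline(
    "text-classification",
    model="./models/comms-risk-slm",  # hypothetical locally stored checkpoint
)

message = "Let's move this conversation to my personal phone."
print(classifier(message))
# e.g. [{'label': 'OFF_CHANNEL_RISK', 'score': 0.97}]
```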

Evolving Skills for Compliance Teams

Panellists agreed that while the accountability of compliance functions remains unchanged, the skill set must evolve. “We’re moving away from reactive control oversight into model management,” noted one panellist. Teams will need a working understanding of AI and model-risk techniques, along with stronger communication skills to collaborate with technology and business development teams from the outset.

Another remarked that this is “not new” – the financial sector has decades of experience in model risk management – but the surface form is changing. The same frameworks that once governed risk models now need to accommodate adaptive AI systems that learn continuously.

Looking Ahead

The FCA’s position contrasts with the European Union’s AI Act, which prescribes sector-agnostic rules. “The UK is not taking that approach,” the keynote noted, describing a vertical model where each regulator manages AI within its domain. That divergence sets up an important transatlantic conversation: how will U.S. regulators, from FINRA to the SEC, interpret accountability, explainability, and human oversight as AI adoption accelerates?

For firms operating globally, aligning to multiple interpretations of responsible AI will add to the already heavy “reg-change management” burden. Even so, the London sessions revealed a cautious optimism across regulators and practitioners alike. The biggest risk, one panellist warned, is “waiting too long and losing the opportunity to build knowledge.” The FCA echoed this concern – that hesitation, not misuse, could cause the UK financial sector to fall behind global peers.

As AI’s regulatory and operational frontiers expand, A-Team Group’s RegTech Summit New York will extend the London discussions, moving from experimentation to execution as the dialogue shifts from “can we?” to “how do we scale safely?”

