Regulators are losing patience. In the first half of 2025, global financial institutions were hit with fines totalling $1.23 billion, a 417% increase on the same period the year before. Sanctions failures alone surged from $3.7 million in H1 2024 to $228.8 million this year, underscoring just how closely watchdogs are monitoring AML, KYC and sanctions controls. Cryptocurrency exchanges were prominent in the firing line: OKX paid more than $500 million, while BitMEX settled for over $100 million after admitting AML failings.
The pressure is no longer confined to banks. Regulators are extending client due diligence obligations into adjacent sectors such as commodities and even football clubs, demonstrating how far compliance expectations now reach. As Rory Doyle, Head of Financial Crime Policy at Fenergo, put it: “These figures offer a stark warning to financial institutions across the globe – particularly those operating in the fast-growing digital assets sector, where watchdogs won’t hesitate to dole out hefty fines for AML shortcomings.”

Against this backdrop, the question is no longer whether firms can comply, but how they can prove that compliance programmes are effective. Increasingly, the answer lies in leveraging AI.
From Rules Engines to Agentic AI
For decades, financial crime compliance was built on rules-based engines: rigid “if-this-then-that” triggers designed to catch suspicious activity. They generated high alert volumes, demanded armies of analysts, and were prone to backlogs.
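The “if-this-then-that” pattern described above can be illustrated with a minimal sketch. All rule names, field names, and thresholds here are hypothetical; the point is that every rule fires independently, so alert volumes grow with the rule count regardless of how many alerts turn out to be noise:

```python
# Minimal illustration of a rules-based compliance engine:
# each rule is an independent "if-this-then-that" trigger.

RULES = [
    # (rule name, predicate over a transaction dict) -- thresholds are illustrative
    ("large_cash", lambda tx: tx["type"] == "cash" and tx["amount"] > 10_000),
    ("high_risk_country", lambda tx: tx["country"] in {"XX", "YY"}),
    ("rapid_movement", lambda tx: tx["in_out_gap_minutes"] < 10),
]

def evaluate(tx: dict) -> list[str]:
    """Return the names of every rule the transaction trips."""
    return [name for name, predicate in RULES if predicate(tx)]

alerts = evaluate({"type": "cash", "amount": 15_000,
                   "country": "XX", "in_out_gap_minutes": 45})
# Two rules fire on one transaction -- each one becomes an analyst task.
```

Because each trigger generates its own alert, a single transaction can spawn several review tasks, which is exactly the backlog dynamic the paragraph describes.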
Tracy Moore of Fenergo argues that this model is no longer fit for purpose. “Traditional AI was reactive – it processed compliance through rules. Agentic AI is different: it proactively seeks out compliance risks. It maintains KYC profiles, initiates investigative workflows, and acts without waiting for human instruction,” she explained.
This shift requires more than new tooling. “It’s not just a tool. You have to build it into your infrastructure,” Moore stressed. Embedding AI across client lifecycle management (CLM) and financial crime processes means rethinking operating models, governance, and strategy. The future is what some are calling audit-ready AI: agents that not only execute tasks but also leave a traceable, explainable record for regulators to review.
Perpetual KYC Meets Regulatory Effectiveness
One of the clearest applications of agentic AI is perpetual KYC (pKYC). Instead of waiting for periodic reviews, AI agents continuously update client files, monitor external events, and trigger new checks when risks change. Moore highlighted pKYC, alongside fraud detection and automated onboarding, as one of the most promising areas for scaling cost savings and operational efficiency.
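A perpetual-KYC loop of this kind can be sketched as an event-driven re-score: an external event (a sanctions-list update, an adverse media hit) adjusts the client's risk score, and a fresh review is opened only when the risk band actually changes. The class, scale, and band logic below are illustrative assumptions, not Fenergo's design:

```python
from dataclasses import dataclass, field

@dataclass
class ClientProfile:
    client_id: str
    risk_score: int                     # 0-100, illustrative scale
    pending_review: bool = False
    history: list = field(default_factory=list)

def on_external_event(profile: ClientProfile, event: dict) -> ClientProfile:
    """Re-score the client when an external event arrives; open a
    review only if the risk band changes, rather than on a timer."""
    old_band = profile.risk_score // 25          # four illustrative bands
    profile.risk_score = min(100, profile.risk_score + event["risk_delta"])
    profile.history.append(event["type"])        # keep a traceable record
    if profile.risk_score // 25 > old_band:
        profile.pending_review = True            # trigger a fresh KYC check
    return profile

p = ClientProfile("C-001", risk_score=20)
on_external_event(p, {"type": "sanctions_list_update", "risk_delta": 30})
# The score crossed a band boundary, so a review is now pending.
```

Triggering on band changes rather than on a fixed calendar is what distinguishes this from periodic-review KYC: quiet clients generate no work, while a material event prompts an immediate check.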
But here the regulatory challenge is acute. Doyle warns that regulators expect firms to adopt a risk-based approach rather than blanket automation. Supervisors want evidence that perpetual monitoring is not simply creating noise. Indeed, regulators are beginning to treat alert backlog management as a proxy for the quality of due diligence. If aged backlogs spiral out of control or alerts are routinely dismissed without escalation, firms risk being accused of “alert washing”. The regulatory test is effectiveness: are real risks being identified, escalated, and acted upon? Agentic AI can help achieve this, but only if it comes with decision tracing, audit logs, and human-in-the-loop oversight.
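The combination of decision tracing, audit logs, and human-in-the-loop oversight can be sketched as a thin wrapper that records every agent decision and holds dismissals for human sign-off, so no alert disappears silently. This is a hypothetical pattern under assumed names, not any specific product's implementation:

```python
import datetime

AUDIT_LOG = []   # in practice an append-only, tamper-evident store

def decide(alert_id: str, action: str, rationale: str) -> dict:
    """Record an agent's decision so a reviewer can replay it later.
    Dismissals are never final until a human signs off."""
    entry = {
        "alert_id": alert_id,
        "action": action,                          # e.g. "escalate" or "dismiss"
        "rationale": rationale,                    # the agent must state why
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "awaiting_human" if action == "dismiss" else "final",
    }
    AUDIT_LOG.append(entry)
    return entry

decide("A-42", "escalate", "matched updated sanctions entry")
decide("A-43", "dismiss", "false positive: name-only match")
# The dismissal is logged but held for human review -- no silent "alert washing".
```

The design choice worth noting is asymmetry: escalations can be automated end to end, but the risky action (dismissal) always routes through a person, and every decision carries a stated rationale a regulator could inspect.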
Cloud Comfort, AI Trust
Until recently, the industry was preoccupied with whether regulators would accept sensitive workloads in the cloud. That debate, Moore believes, is settled. “We’re no longer talking about the comfort level of the cloud. We are moving now to the comfort level of AI, and what’s the trustworthiness and the visibility and the transparency around adapting AI within the firm, and how do I explain that to the regulators?”
Fenergo’s answer is to make explainability and oversight non-negotiable. Its FinCrime operating system is cloud-native on AWS Bedrock, but what matters is the governance layer: agents are designed to be transparent, testable, and monitored. This human-in-the-loop framework ensures nothing is hidden “behind the curtain”.
The Economics of Adoption
The promise of AI in compliance has long been linked to cost reduction, and Fenergo’s research with Chartis suggests annual savings of around $3 million per firm. But Moore cautions that savings only materialise when AI is strategically embedded at scale. “If you build it into your strategy plan rather than just having a Band-Aid here… then we see the cost savings, particularly in the client lifecycle management compliance area,” she noted.
The biggest gains come from eliminating manual, labour-intensive processes: daily reviews, document collection, and KYC refreshes. Yet achieving them depends on scalability, infrastructure, and governance—not isolated pilots. In other words, the return depends not just on AI itself, but on whether firms re-engineer their operating models to make AI sustainable and compliant.
Towards Proactive, Audit-Ready Compliance
The trajectory is clear: fines are rising, oversight is expanding, and regulators are demanding evidence that compliance systems actually work. Firms that continue to rely on static rules engines and reactive processes will struggle to keep up.
Fenergo positions itself as a partner in the next phase: helping institutions move from rules to agents, from backlog management to audit-ready oversight, and from compliance as a cost centre to compliance as a competitive strength. As Doyle has stated, “The importance of integrating smarter financial crime technology with AI to increase accuracy and strengthen due diligence processes cannot be overstated.”
Find out More about AI FinCrime
For compliance and risk professionals who want to take a deeper dive into the practical application of AI in financial crime, A-Team Group’s RegTech Summits in London and New York offer a timely opportunity.
RegTech Summit London on 16 October brings together regulators, banks, and technology providers to explore the realities of deploying AI across compliance. Sessions will tackle the regulator’s perspective on AI governance, the frontier of agentic AI in RegTech, and the transition away from manual compliance. Particularly relevant is the panel on addressing financial crime and AML with RegTech, where experts will examine how AI can support firms in strengthening KYC, sanctions, and transaction monitoring frameworks. Check out the full agenda and register here.
RegTech Summit New York on 20 November will continue the discussion with a sharper US focus. Panels include regulators’ expectations for AI, practical approaches to harnessing agentic AI in compliance, and the governance challenges of building AI-driven change programmes. The AML-focused sessions provide valuable insight into how RegTech solutions can move firms towards more effective, auditable programmes. There will also be discussion of crypto and digital assets—particularly pertinent given the Department of Justice’s recent fines against exchanges for AML failings.