Financial Crime is a Decision-Speed Problem: Rethinking AI in AML and Compliance Controls

Financial crime compliance is often described as a resourcing challenge. Firms speak of analyst backlogs, alert volumes and the rising cost of surveillance and screening. Kieran Holland, Solutions Engineering Team Leader at Innovative Systems’ FinScan, argues that the underlying constraint has shifted. Financial crime has become a decision-speed problem.

“The fight against financial crime is no longer about headcount or process maturity,” he says. It is about how quickly institutions can gather, reconcile and interpret fragmented data in order to reach defensible decisions.

In a conversation with RegTech Insight, Holland outlines where time is really lost, how AI can either support or distort decision-making, and why data governance remains the defining constraint on AI-enabled compliance.

Decision Speed and the Data Burden

Asked where institutions are losing the most time today – data acquisition, reconciliation, triage or escalation – Holland points to the sprawl of information sources required for even routine decisions.

“Institutions are spending the most time on the many platforms, websites, and data sources that risk and compliance teams need in order to make informed decisions,” he explains. “Compliance teams need to analyse and accurately collate and summarise all that diverse information, and it’s not an easy task.”

The range of inputs is extensive. “With hundreds of global sanctions lists, Politically Exposed Persons databases, news articles, and company ownership information, the list feels almost endless and is ever-changing.”

In this environment, the bottleneck is less about workflow design and more about synthesis. AI, in Holland’s view, is most effective when it helps analysts reach a conclusion faster, not when it attempts to replace them.

“I see the best usage of AI (and wider IT systems) when it is working alongside compliance teams to help make sense of, and automate the collection of, such diverse sources of information to get to a human decision faster.”

The emphasis is clear: augmentation rather than substitution.
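As a concrete illustration of that division of labour, consider a minimal sketch in Python, with entirely hypothetical source names and a stubbed summarisation step (Holland does not describe a specific architecture): the pipeline collates hits from several screening sources into a consolidated brief, while the disposition is left to the analyst.

```python
from dataclasses import dataclass

@dataclass
class ScreeningHit:
    source: str   # e.g. a sanctions list, PEP database or news feed
    subject: str
    detail: str

def collect_hits(subject: str) -> list[ScreeningHit]:
    # Hypothetical stand-ins for the sanctions lists, PEP databases and
    # adverse media feeds described above; real connectors would query
    # each source and normalise the results.
    sources = {
        "sanctions_list_a": lambda s: [],
        "pep_database_b": lambda s: [ScreeningHit("pep_database_b", s, "possible PEP match")],
        "adverse_media_c": lambda s: [ScreeningHit("adverse_media_c", s, "2023 fraud allegation in trade press")],
    }
    hits: list[ScreeningHit] = []
    for fetch in sources.values():
        hits.extend(fetch(subject))
    return hits

def brief_for_analyst(subject: str, hits: list[ScreeningHit]) -> str:
    # A production system might use an LLM to summarise; a plain
    # aggregation keeps the sketch deterministic. The output is a
    # brief for a human, not a verdict.
    lines = [f"Subject: {subject}", f"Hits across {len(hits)} source(s):"]
    lines += [f"  - [{h.source}] {h.detail}" for h in hits]
    lines.append("Disposition: PENDING HUMAN REVIEW")
    return "\n".join(lines)

subject = "Acme Holdings Ltd"
print(brief_for_analyst(subject, collect_hits(subject)))
```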

When AI Supports Judgement – and When It Scales Weakness

Holland draws a firm distinction between AI that supports human judgement and AI that amplifies weaknesses in control design.

“AI is great at collating data, helping guide decisions, and summarising diverse information,” he says. “Where it falls down is that it lacks human insight, empathy and judgement.”

He notes that the wider technology sector offers cautionary examples. “We can see all throughout the press recently that AI is going through numerous growing pains, with companies replacing support staff or looking to have it do much of the software development heavy lifting, often with very evident failings.”

The same dynamic can surface in compliance. “It is now well understood that, where AI is being used for first-line decisioning, the gain is then lost to the need for increased resources related to ongoing training of the AI, or increased model validation scrutiny. It’s robbing Peter to pay Paul.”

In other words, automating the front line can increase the governance burden in the second and third lines.

Where Holland sees more sustainable outcomes is in targeted enablement. “Where we see success is helping the human in the chair to get to the important aspects quicker – be this through data collation and summarisation, or through data mining to create insights about common patterns or scenarios, so that approaches, rules and systems can be better adapted for the future.”
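A toy example of the data-mining side of that idea, assuming a hypothetical case-management extract of alert dispositions (none of this data comes from the interview), might simply count dispositions per pattern to surface rules worth re-tuning.

```python
from collections import Counter

# Hypothetical historical alert dispositions; in practice these would
# come from case-management data.
alerts = [
    {"rule": "name_match_fuzzy", "country": "GB", "disposition": "false_positive"},
    {"rule": "name_match_fuzzy", "country": "GB", "disposition": "false_positive"},
    {"rule": "name_match_fuzzy", "country": "DE", "disposition": "true_positive"},
    {"rule": "dob_mismatch",     "country": "GB", "disposition": "false_positive"},
    {"rule": "dob_mismatch",     "country": "GB", "disposition": "false_positive"},
    {"rule": "dob_mismatch",     "country": "GB", "disposition": "false_positive"},
]

# Count dispositions per (rule, country) pattern to surface the
# "common patterns or scenarios" that suggest a rule needs adapting.
patterns = Counter((a["rule"], a["country"], a["disposition"]) for a in alerts)
for (rule, country, disposition), n in patterns.most_common():
    if disposition == "false_positive" and n >= 3:
        print(f"Review threshold for {rule!r} in {country}: {n} false positives")
```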

He identifies two recurring failings in current AI programmes.

“The first was probably best described by Abraham Maslow in 1966: ‘It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.’ Companies are seeking to use AI without truly understanding the problem that it is trying to solve, or even if AI is the best way to solve it.”

The second is foundational. “Not ensuring data is accurate and timely enough to truly train AI for its task is another key issue.”

Holland challenges the assumption that large organisations possess clean, complete data. “We live in a world where many ordinary people might think that big companies have lots of (and very accurate) information about who we are and what we do, but this couldn’t be further from the truth.”

Legacy systems, acquisitions and manual processes continue to fragment data estates. “Many companies still have legacy platforms, acquired data through mergers and acquisitions, siloed systems, and manually maintained Excel sheets.”

In that context, he questions the allocation of investment. “One could surmise that much of the budget going towards shiny AI projects should be going back into the foundations of IT management, updating core systems and managing data quality. Institutions must secure the foundation before they build on it.”
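What that foundational work looks like in miniature: a sketch, under the assumption of a simplistic matching key (normalised name plus date of birth, chosen for illustration rather than as a recommended matching strategy), of reconciling one customer scattered across a core system, a CRM export and a spreadsheet.

```python
def norm(name: str) -> str:
    # Crude normalisation for illustration; real entity resolution is harder.
    return " ".join(name.lower().split())

# Hypothetical fragments of a single customer spread across siloed systems.
records = [
    {"system": "core_banking", "name": "Jane  Doe", "dob": "1980-02-01", "email": None},
    {"system": "crm_export",   "name": "jane doe",  "dob": "1980-02-01", "email": "jd@example.com"},
    {"system": "excel_sheet",  "name": "JANE DOE",  "dob": "1980-02-01", "email": None},
]

merged: dict[tuple[str, str], dict] = {}
for rec in records:
    key = (norm(rec["name"]), rec["dob"])
    entry = merged.setdefault(key, {"sources": []})
    entry["sources"].append(rec["system"])
    for field in ("name", "dob", "email"):
        # Keep the first non-null value seen for each attribute.
        if entry.get(field) is None and rec[field] is not None:
            entry[field] = rec[field]

for entry in merged.values():
    print(entry)  # one golden record, with provenance in "sources"
```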

He references external research to underline the point. “A recent MIT study, ‘State of AI in business 2025,’ shows that 95% of GenAI pilots fail to show any significant ROI, despite billions in investment. I can’t help but wonder if the lack of problem/solution mapping and poor data has led to these failed pilots.”

Regulatory Scrutiny: From Audit Trails to Model Validation

Supervisory expectations are also shifting. While Holland notes that he does not have direct access to supervisory reviews, he observes changes in how clients prepare for scrutiny.

“I personally have seen a shift away from post-activity quality checking (audit logs, case history and QC checks) towards model validation and being able to demonstrate and articulate that a regulated entity understands how their systems work, and how they went about testing and validating them.”

This reflects a broader move from retrospective assurance to forward-looking governance. Firms are expected not only to retain audit trails, but to explain how models are designed, trained and challenged.
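What that evidentiary discipline can look like in practice is easier to see with a sketch. The harness below (a toy scoring function, hypothetical field names and an arbitrary threshold, none drawn from the interview) runs a labelled benchmark through a screening model and records when and how it was tested.

```python
import json
import datetime

def validate_model(score_fn, benchmark, threshold=0.8):
    """Run a labelled benchmark through a screening model and record
    evidence of how it was tested, the kind of forward-looking
    artefact model validation reviews increasingly expect."""
    tp = fp = fn = tn = 0
    for case in benchmark:
        predicted_hit = score_fn(case["name"], case["candidate"]) >= threshold
        if predicted_hit and case["is_match"]:
            tp += 1
        elif predicted_hit:
            fp += 1
        elif case["is_match"]:
            fn += 1
        else:
            tn += 1
    return {
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "threshold": threshold,
        "cases": len(benchmark),
        "true_positives": tp, "false_positives": fp,
        "false_negatives": fn, "true_negatives": tn,
    }

# Toy model and benchmark for illustration only.
def toy_score(a: str, b: str) -> float:
    return 1.0 if a.lower() == b.lower() else 0.0

benchmark = [
    {"name": "Ivan Petrov", "candidate": "ivan petrov", "is_match": True},
    {"name": "Ivan Petrov", "candidate": "John Smith",  "is_match": False},
]
print(json.dumps(validate_model(toy_score, benchmark), indent=2))
```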

Holland points to engagement initiatives as evidence of that direction. “That the UK FCA now has its own AI lab, where regulated entities can utilise FCA resources to help test their own AI models, is a great step forward.”

The implication is that explainability and evidentiary discipline are becoming core supervisory themes, even where detailed AI rulebooks are still evolving.

Accountability in a Vendor-Enabled AI World

As AI capabilities are embedded into vendor platforms, accountability questions intensify. Holland is direct about where responsibility sits.

“It comes down to one thing: partnership.”

Technology providers, he argues, must be invested in client success. “Vendors should be invested in their customers’ success by providing great support, a clear development roadmap, and detailed technical documentation.”

But the obligation is mutual. “Likewise, as a customer of those vendors, institutions need to test these aspects of their given vendors.”

Procurement processes often focus on functionality at the expense of governance. “All too often, companies seek new solutions and vendors by looking at functional and non-functional requirements of the software but pay scant regard to the aspects of ongoing management of relationships, how that vendor listens to its customers, and how customer feedback influences their roadmap.”

In a compliance environment defined by regulatory change, stagnation carries risk. “A vendor that has created a good software solution but will provide only basic support and little in terms of updates over a multi-year relationship is no good in the modern, ever-changing world of compliance.”

Data Governance as the Enduring Constraint

If AI is an accelerator, data quality is the fuel. Holland resists prescriptive definitions of what “good enough” looks like, but he is clear on principle.

“The finer details of what ‘good enough’ is are going to vary greatly from organisation to organisation.”

What matters is posture. “If you can make data quality and governance a core pillar of your business, and continue to analyse, update, enrich and correct the information you have on your customers, you will be in a much better position to utilise that data well in future projects, especially AI.”

He stresses that governance is continuous. “Remember that data is not a point-in-time exercise. Your business is always gathering more information, onboarding new customers, and changing IT systems. You constantly need to be vigilant when it comes to data.”
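A minimal sketch of that vigilance as a recurring job, with illustrative field names and an assumed annual review cycle (both assumptions, not specifics from the interview): completeness and freshness checks over customer records, run continuously rather than once.

```python
from datetime import date, timedelta

# Illustrative customer records; field names are assumptions for the sketch.
customers = [
    {"id": 1, "name": "Acme Ltd", "country": "GB", "last_reviewed": date(2025, 1, 10)},
    {"id": 2, "name": "",         "country": None, "last_reviewed": date(2021, 6, 1)},
]

REQUIRED = ("name", "country")
STALE_AFTER = timedelta(days=365)  # hypothetical review cycle

def quality_issues(record, today=date(2025, 6, 1)):
    issues = [f"missing {f}" for f in REQUIRED if not record.get(f)]
    if today - record["last_reviewed"] > STALE_AFTER:
        issues.append("review overdue")
    return issues

# Intended to run as a scheduled job, not a one-off: data governance
# as a continuous process rather than a point-in-time exercise.
for c in customers:
    if (issues := quality_issues(c)):
        print(f"customer {c['id']}: {', '.join(issues)}")
```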

The strategic framing is unambiguous. “It is often said that data is the most valuable asset that modern companies hold, so organisations must invest in it.”

For institutions navigating financial crime risk, AI can compress decision cycles and improve analytical consistency. But without clear problem definition, disciplined validation and sustained investment in data foundations, it can just as easily expand governance burdens.

The question is not whether to deploy AI, but how deliberately it is aligned to the realities of compliance operations.
