
A-Team Insight Blogs

Unlocking the Potential of AI in AML


By Steve Marshall, Director of Advisory Services, FinScan.

AI has been increasingly used over the last few years to combat financial crime and money laundering. While it may not have lived up to its original billing as a silver bullet, some key lessons have been learned since those early days.

Firstly, while AI has many valuable applications in anti-money laundering (AML), it cannot be left to run independently. It requires human oversight, technical expertise and sound data. Secondly, AI can introduce new risks to the AML process, and numerous factors must be considered for a successful and ethical implementation. In this article, we detail the key considerations institutions need to factor into their strategic decision-making when using AI in AML and how to unleash its potential responsibly.

Define the problem

The first step in any successful technology implementation is to define the problem clearly: a well-defined problem puts the institution in a far better position to identify potential solutions. The next step is to take an open approach to exploring those solutions. There are usually several options available, and it is best to investigate them thoroughly before deciding on a course of action.

With this in mind, institutions must not fall into the trap of assuming that AI is the only answer to the AML challenges they face. Indeed, given the current hype surrounding the technology, it is easy to think of AI as a must-have. Yet, while AI is a powerful tool, it is one of many options available and may not always be the most appropriate solution to the problem at hand. In other words, if AI is a hammer, not every AML problem is a nail.

Examples of where AI can play a vital role in the fight against money laundering include automating KYC processes and improving alert resolution rates in name screening by triaging red flags and providing information to support decision-making. When it comes to real-time watchlist and payment screening, however, other technologies may be more appropriate. These tasks often demand millisecond response times, and complex AI models typically cannot deliver that latency at transaction-processing scale. As such, traditional, non-AI-based screening technologies are arguably the better choice.
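As a rough illustration of the alert-triage idea, the sketch below scores name-screening alerts and routes the weakest matches to auto-close so analysts see the riskiest hits first. The field names, weights and threshold are illustrative assumptions, not a description of any vendor's product.

```python
def triage_alerts(alerts, auto_close_below=0.3):
    """Split screening alerts into an analyst review queue (highest
    risk first) and an auto-close pile.

    Each alert is a dict with illustrative fields:
      match_strength - fuzzy name-match score, 0.0 to 1.0
      list_severity  - severity of the watchlist matched, 0.0 to 1.0
    """
    def score(alert):
        # Weight name-match strength more heavily than list severity;
        # these weights are assumptions for the sketch.
        return 0.7 * alert["match_strength"] + 0.3 * alert["list_severity"]

    for alert in alerts:
        alert["score"] = score(alert)

    review = sorted(
        (a for a in alerts if a["score"] >= auto_close_below),
        key=lambda a: a["score"],
        reverse=True,
    )
    closed = [a for a in alerts if a["score"] < auto_close_below]
    return review, closed
```

In practice the scoring function would be the AI model itself, and the threshold would be calibrated against historical adjudication outcomes; the point here is only the routing structure.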

Identify the potential risks

If an AI solution is judged the most appropriate for the institution’s AML needs, it is essential to understand the risks its implementation can bring. The potential pitfalls are manifold. A typical example is biased training data, which can result in discriminatory outcomes for specific groups. A related question is cultural recognition: how accurately does the solution handle names and data from different cultures in an increasingly international world? Another risk is failing to build human adjudication into model updates; if models retrain automatically, institutions may no longer fully understand, or be able to explain, how the model reaches its decisions.
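One way to make the training-data bias risk concrete is to audit screening outcomes by group. The minimal sketch below, written in plain Python with hypothetical record fields, compares false-positive rates across groups; a large disparity ratio between the best- and worst-treated groups would warrant investigation before the model is trusted.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute per-group false-positive rates from screening outcomes.

    records: iterable of (group, was_flagged, is_suspicious) tuples.
    A false positive is a record that was flagged but not suspicious.
    """
    flagged_clean = defaultdict(int)  # flagged-but-clean count per group
    clean = defaultdict(int)          # total clean records per group
    for group, was_flagged, is_suspicious in records:
        if not is_suspicious:
            clean[group] += 1
            if was_flagged:
                flagged_clean[group] += 1
    return {g: flagged_clean[g] / clean[g] for g in clean if clean[g]}

def disparity_ratio(rates):
    """Ratio of the highest to the lowest non-zero group false-positive
    rate; a large ratio suggests the model treats groups differently."""
    values = [v for v in rates.values() if v > 0]
    return max(values) / min(values) if values else 1.0
```

The grouping variable (for example, name origin or jurisdiction) and the disparity threshold are policy choices the institution must make and document as part of its risk framework.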

In the US, the National Institute of Standards and Technology (NIST) launched its AI Risk Management Framework in 2023, which encapsulates many of these risks. The European Commission is also developing the first legal framework for the use of AI, and the UK has recently consulted on AI regulation. Any institution looking to introduce AI must factor in the relevant AI frameworks for its jurisdictions from the outset, as they provide a clear guide to the key considerations for responsible AI implementation. They will also help institutions take a more methodical approach to identifying, assessing and mitigating key risks. Putting in place a cross-functional implementation team spanning technology, data, the business and project management will help to ensure this process is applied consistently across the entire organization.

Understand organizational implications

Institutions must also recognize that implementing AI will likely have staffing implications and change the organizational structure. Often, AI is sold on the efficiency gains it can bring when, in reality, it can reduce team sizes in one area but require additional hires elsewhere as skill-set requirements change. For example, there will be less need for manual reviewers but a greater call for a model validation team and AI algorithm experts to retrain models. Understanding these requirements in more detail will help institutions adjust their hiring and training practices as they realign their workforce.

Pay attention to data strategy

Finally, institutions must put data at the heart of their AI strategy. High-quality data is essential for a successful AI implementation; if the underlying data is not in shape, it will only exacerbate current challenges. As such, institutions need to consider the quality of any external and internal data sources used.

Many open-source AI models are trained on internet-scale data sets, which may be large but biased, and are often inappropriate for specific AML use cases. Institutions can instead train AI solutions on their internal data, but this too presents challenges regarding the robustness of the data sets. For example, does the data contain enough examples of suspicious activity to train the solution? Often, the proportion of suspicious names and activity is extremely low. Then there is the challenge of incorporating data from public sources, such as company registries, into the training data. Public-source information frequently contains gaps, which reduces productivity and can lead to more false positives. To avoid these pitfalls, an institution must implement strong data governance and quality practices before implementing AI.
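The class-imbalance point can be illustrated with standard inverse-frequency weighting: when only a tiny fraction of training records are suspicious, those records are up-weighted so a model cannot score well by simply predicting "clean" for everything. The helper below is a generic sketch of that technique, not part of any specific AML toolkit.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights to compensate for the rarity of
    suspicious examples in AML training data.

    Each class receives weight total / (n_classes * class_count), so
    rare classes get proportionally larger weights.
    """
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}
```

With 98 "clean" and 2 "suspicious" labels, the suspicious class is weighted roughly 49 times more heavily than the clean class; most training libraries accept such a weight map directly, though weighting alone does not fix gaps or errors in the underlying records.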

AI is a potent tool with many potential use cases in AML, but it is not the solution to everything. To ensure AI is used effectively, it is essential first to define the problem correctly and confirm that AI is the most appropriate solution. If it is, establishing cross-functional teams and following the proper AI risk frameworks will help institutions avoid the common pitfalls of an AI-driven approach. Finally, remember that strong data is vital to unlocking the potential of AI: high-quality inputs yield high-quality outputs.

