
A-Team Insight Blogs

Unlocking the Potential of AI in AML


By Steve Marshall, Director of Advisory Services, FinScan.

AI has been increasingly used over the last few years to combat financial crime and money laundering. While it may not have lived up to its original billing as a silver bullet, some key lessons have been learned since those early days.

Firstly, while AI has many valuable applications in anti-money laundering (AML), it cannot be left to run independently. It requires human oversight, technical expertise and sound data. Secondly, AI can introduce new risks to the AML process, and numerous factors must be considered for a successful and ethical implementation. In this article, we detail the key considerations institutions need to factor into their strategic decision-making when using AI in AML and how to unleash its potential responsibly.

Define the problem

The first step in any successful technology implementation is to define the problem clearly; an institution that does so is much better placed to identify potential solutions. The next key step is to take an open-minded approach to exploring those solutions. There are usually several options available, and it is best to investigate them thoroughly before deciding on a course of action.

With this in mind, institutions must not fall into the trap that AI is the only answer to the AML challenges faced. Indeed, given the current hype surrounding the technology, it is easy to think that AI is a must-have. Yet, while AI is a powerful tool, it is one of many options available and may not always be the most appropriate solution for the problem at hand. In other words, if AI is a hammer, not every AML problem is a nail.

Examples of where AI can play a vital role in the fight against money laundering include automating KYC processes and improving alert resolution rates in name screening by triaging red flags and providing information to support decision-making. However, when it comes to watchlist and payment screening, for instance, other technologies may be more appropriate. These tasks often require millisecond response times, whereas AI models typically add inference latency and scale poorly under such throughput demands. As such, traditional, non-AI screening technologies are arguably the better fit.
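To illustrate the kind of deterministic, non-AI screening the article has in mind for latency-sensitive checks, here is a minimal sketch of fuzzy watchlist name matching using only the Python standard library. The watchlist entries, the `screen` function and the 0.85 similarity threshold are illustrative assumptions, not any vendor's actual implementation.

```python
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading LLC"]  # hypothetical entries
THRESHOLD = 0.85  # hypothetical similarity cut-off

def normalize(name: str) -> str:
    """Lower-case and collapse whitespace so comparisons are stable."""
    return " ".join(name.lower().split())

def screen(name: str) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, normalize(name), normalize(entry)).ratio()
        if score >= THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("ivan  petrov"))  # exact match after normalization
print(screen("Jane Doe"))      # no hit
```

Because every step is a simple string comparison, latency is predictable and the logic is fully explainable to a regulator, which is precisely the trade-off that favors this approach over a model for real-time payment screening.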

Identify the potential risks

If an AI solution is considered the most appropriate for the institution’s AML needs, it is essential to understand the risks its implementation can bring. The potential risks and pitfalls are manifold, but a typical example is biased training data, which can result in discriminatory outcomes for specific groups. Similarly, there is the question of cultural recognition and how accurately the solution handles data from different cultures in an increasingly international world. Another risk is failing to build human adjudication into model updates; if models update automatically, institutions may not fully understand, or be able to explain, how the model works.
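One simple way to surface the discriminatory-outcome risk described above is to compare alert rates across groups in screening results. The sketch below is illustrative only: the group labels and outcome data are made up, and a real review would use proper fairness metrics and statistical tests.

```python
from collections import defaultdict

# (group, alerted) pairs: hypothetical screening outcomes
results = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False), ("B", False)]

counts = defaultdict(lambda: [0, 0])  # group -> [alerts, total]
for group, alerted in results:
    counts[group][0] += int(alerted)
    counts[group][1] += 1

rates = {g: alerts / total for g, (alerts, total) in counts.items()}
print(rates)  # {'A': 0.25, 'B': 0.5}

# A ratio far from 1.0 between groups flags potential bias for human review.
print(rates["B"] / rates["A"])  # 2.0
```

Even a crude check like this, run regularly against production outcomes, gives the cross-functional team an early warning that a model may be treating groups differently.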

In the US, the National Institute of Standards and Technology (NIST) launched its AI Risk Management Framework in 2023, which encapsulates many of the risks faced. The European Commission (EC) is also developing the first legal framework for the use of AI, and the UK has recently consulted on AI regulation. Any institution looking to introduce AI must factor in the relevant AI frameworks for its jurisdictions from the outset, as they provide a clear guide to the key considerations for responsible AI implementation. They will also help institutions take a more methodical approach to identifying, assessing and mitigating key risks. Putting in place a cross-functional team to carry out the implementation, spanning technology, data, the business and project management, will help ensure this process is applied consistently across the entire organization.

Understand organizational implications

Institutions must also recognize that implementing AI will likely have staffing implications and change the organizational structure. Often, AI is sold on the efficiency gains it can bring, when in reality, it can reduce team sizes in one area but require additional hires elsewhere as skill set requirements change. For example, there will be less need for manual reviewers, but there will be a greater call for a model validation team and AI algorithm experts to retrain models. Understanding these requirements in more detail will help institutions adjust their hiring and training practices as they realign their workforce.

Pay attention to data strategy

Finally, institutions must put data at the heart of their AI strategy. High-quality data is essential for a successful AI implementation; if the underlying data is not in good shape, AI will only exacerbate existing challenges. As such, institutions need to consider the quality of any external and internal data sources used.

Many open-source AI solutions are trained on internet data, where the data sets may be large but biased, making them inappropriate for specific AML use cases. Institutions can instead train AI solutions on their internal data, but this, too, can present challenges regarding the robustness of the data sets. For example, does the data contain sufficient examples of suspicious activity to train the solution? Often, the proportion of suspicious names and activity is extremely low. Then there is the challenge of incorporating data from public sources, such as company registries, into the training data. Public-source information frequently contains gaps, which lowers productivity and can lead to more false positives. To combat these pitfalls, an institution must implement strong data governance and quality practices before implementing AI.
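The class-imbalance problem described above can be made concrete with a little arithmetic. The counts below are illustrative assumptions; the point is that when suspicious cases are vanishingly rare, a model that flags nothing at all still scores near-perfect accuracy, which is why accuracy alone cannot validate an AML model.

```python
# Hypothetical counts for a screening data set
suspicious = 50
normal = 99_950
total = suspicious + normal

prevalence = suspicious / total
print(f"suspicious share: {prevalence:.4%}")  # 0.0500%

# A "predict never suspicious" baseline is almost always right, yet useless.
baseline_accuracy = normal / total
print(f"do-nothing accuracy: {baseline_accuracy:.4%}")  # 99.9500%
```

Checks like this belong in the data-governance step: before training, an institution should quantify how rare its positive examples are and choose metrics (precision, recall, alert-to-SAR conversion) that are meaningful at that prevalence.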

AI is a potent tool with many potential use cases in AML, but it is not the solution to everything. To ensure AI is used effectively, it is first essential to define the problem correctly and confirm that AI is the most appropriate solution. If it is, establishing cross-functional teams and following the relevant AI risk frameworks will help institutions avoid the common pitfalls of an AI-driven approach. Finally, remember that strong data is vital to unlocking the potential of AI: high-quality inputs yield high-quality outputs.

