A-Team Insight Blogs

Unlocking the Potential of AI in AML

By Steve Marshall, Director of Advisory Services, FinScan.

AI has been increasingly used over the last few years to combat financial crime and money laundering. While it may not have lived up to its original billing as a silver bullet, some key lessons have been learned since those early days.

Firstly, while AI has many valuable applications in anti-money laundering (AML), it cannot be left to run independently. It requires human oversight, technical expertise and sound data. Secondly, AI can introduce new risks to the AML process, and numerous factors must be considered for a successful and ethical implementation. In this article, we detail the key considerations institutions need to factor into their strategic decision-making when using AI in AML and how to unleash its potential responsibly.

Define the problem

The first step in any successful technology implementation is to clearly define the problem; with a well-defined problem, the institution is far better placed to identify potential solutions. The next step is to explore those solutions with an open mind. There are usually several options available, and it is best to investigate them thoroughly before deciding on a course of action.

With this in mind, institutions must not fall into the trap of assuming that AI is the only answer to the AML challenges they face. Indeed, given the current hype surrounding the technology, it is easy to see AI as a must-have. Yet, while AI is a powerful tool, it is one of many options available and may not always be the most appropriate solution for the problem at hand. In other words, if AI is a hammer, not every AML problem is a nail.

Examples of where AI can play a vital role in the fight against money laundering include automating KYC processes and improving alert resolution rates in name screening by triaging red flags and surfacing information to support decision-making. When it comes to watchlist and payment screening, however, other technologies may be more appropriate. These tasks often require millisecond response times, for which AI algorithms are typically neither designed nor able to scale. As such, traditional, non-AI screening technologies are arguably better suited.
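To make the triage idea concrete, here is a minimal sketch of routing name-screening alerts by match strength so human reviewers only see the strongest hits. Everything here is a hypothetical illustration, not any vendor's product: the `similarity` and `triage` functions, the thresholds, and the watchlist entries are all assumptions, and real screening systems use phonetic and culture-aware matching rather than simple string ratios.

```python
# Illustrative alert-triage sketch: score a candidate name against a
# watchlist and route the alert into one of three hypothetical tiers.
from difflib import SequenceMatcher


def similarity(candidate: str, watchlist_name: str) -> float:
    """Crude string similarity in [0, 1]. Stands in for the phonetic,
    transliteration-aware matching a production screener would use."""
    return SequenceMatcher(None, candidate.lower(), watchlist_name.lower()).ratio()


def triage(candidate: str, watchlist: list[str],
           review_threshold: float = 0.75,
           escalate_threshold: float = 0.9) -> str:
    """Return 'escalate', 'review', or 'auto-close' based on the best
    match score against the watchlist. Thresholds are illustrative."""
    best = max((similarity(candidate, name) for name in watchlist), default=0.0)
    if best >= escalate_threshold:
        return "escalate"
    if best >= review_threshold:
        return "review"
    return "auto-close"
```

A near-exact variant such as `triage("Ivan Petrof", ["Ivan Petrov"])` escalates, while an unrelated name auto-closes, leaving reviewers to focus on genuinely ambiguous middle-tier hits.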

Identify the potential risks

If an AI solution is considered the most appropriate for the institution’s AML needs, it is essential to understand the risks its implementation can bring. The potential risks and pitfalls are manifold, but a typical example is biased training data, which can produce discriminatory outcomes for specific groups. A related question is cultural recognition: how accurately the solution handles names and data from different cultures in an increasingly international world. Another pitfall is failing to build human adjudication into model updates; if models update automatically, institutions may not fully understand, or be able to explain, how the model works.
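One way to begin monitoring the bias risk above is a simple disparity check on model outcomes across groups. The sketch below is an assumption-laden illustration: the group labels and data are invented, and the 0.8 threshold borrows the well-known "four-fifths" rule of thumb purely for demonstration; real fairness testing is considerably more involved.

```python
# Hypothetical disparity check: compare per-group alert (flag) rates and
# raise a warning when the lowest-rate group diverges too far from the
# highest-rate group. Threshold and data shape are illustrative.
from collections import defaultdict


def flag_rates(outcomes):
    """outcomes: iterable of (group, flagged: bool) pairs.
    Returns the share of flagged cases per group."""
    total = defaultdict(int)
    flagged = defaultdict(int)
    for group, hit in outcomes:
        total[group] += 1
        if hit:
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}


def disparity_alert(outcomes, threshold=0.8):
    """True if the lowest group flag rate is less than `threshold`
    times the highest one (a four-fifths-style rule of thumb)."""
    rates = flag_rates(outcomes)
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi < threshold
```

Run periodically on model outputs, a check like this gives the cross-functional team an early, explainable signal that outcomes for one group are drifting out of line.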

In the US, the National Institute of Standards and Technology (NIST) launched its AI Risk Management Framework in 2023, which encapsulates many of the risks faced. The European Commission (EC) is also developing the first legal framework for using AI, and the UK has recently consulted on AI regulation. Any institution looking to introduce AI must factor in the relevant AI frameworks for its jurisdictions from the outset, as they provide a clear guide on key considerations for responsible AI implementation. They will also help institutions take a more methodical approach to identifying, assessing and mitigating key risks. Putting in place a cross-functional team to carry out the implementation, drawing on technology, data, business and project management functions, will help ensure this process is applied consistently across the entire business.

Understand organizational implications

Institutions must also recognize that implementing AI will likely have staffing implications and change the organizational structure. Often, AI is sold on the efficiency gains it can bring when, in reality, it may reduce team sizes in one area while requiring additional hires elsewhere as skill-set requirements change. For example, there will be less need for manual reviewers but a greater call for a model validation team and AI algorithm experts to retrain models. Understanding these requirements in detail will help institutions adjust their hiring and training practices as they realign their workforce.

Pay attention to data strategy

Finally, institutions must put data at the heart of their AI strategy. High-quality data is essential for a successful AI implementation; if the underlying data is not in good shape, AI will only exacerbate current challenges. As such, institutions need to consider the quality of any external and internal data sources used.

Many open-source AI solutions are trained on internet data sets that may be large but biased, making them inappropriate for specialized AML use cases. Institutions can train AI solutions on their internal data, but this, too, presents challenges around the robustness of the data sets. For example, does the data contain enough examples of suspicious activity to train the solution? Often, the proportion of suspicious names and activity is extremely low. Then there is the challenge of incorporating data from public sources, such as company registries, into the training data. Public-source information frequently contains gaps, which lowers productivity and can lead to more false positives. To avoid these pitfalls, an institution must establish strong data governance and quality practices before implementing AI.
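The class-imbalance question above can be made tangible with a pre-training sanity check: before any model work begins, measure what share of labelled records are actually suspicious. This is a minimal sketch under stated assumptions; the 1% floor is an arbitrary illustrative figure, not a regulatory or industry standard.

```python
# Illustrative pre-training check: does labelled AML data contain enough
# positive (suspicious) examples to be worth training on? The 1% floor
# is an assumed placeholder, not an authoritative cut-off.
def imbalance_report(labels, min_positive_share=0.01):
    """labels: iterable of 0/1 flags (1 = confirmed suspicious).
    Returns (positive_share, sufficient) where `sufficient` indicates
    whether the share meets the chosen floor."""
    labels = list(labels)
    share = sum(labels) / len(labels) if labels else 0.0
    return share, share >= min_positive_share
```

On a typical screening data set with, say, one confirmed hit per thousand records, a check like this fails fast and tells the team to gather more labelled positives or rebalance before training, rather than discovering the problem in model performance later.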

AI is a potent tool with many potential use cases in AML, but it is not the solution to everything. To ensure AI’s practical use, it is first essential to identify the problem correctly and confirm that AI is the most appropriate solution. If it is, then establishing cross-functional teams and following the proper AI risk frameworks will help institutions avoid the common pitfalls of an AI-driven approach. Finally, remember that strong data is vital to unlocking the potential of AI; high-quality inputs produce the best-quality outputs.
