
A-Team Insight Blogs

The Potential and Practicalities of Implementing Generative AI for Compliance


While AI has been around for 20 years or so, its time has come in capital markets with Generative AI and large language models (LLMs) able to handle vast volumes of compliance data and achieve outcomes that cannot be reached by humans. GenAI apps are not, however, a silver bullet, and compliance teams are not yet ready, on the whole, to use the technologies.

A panel session at A-Team Group’s recent RegTech Summit London considered these issues in the context of the risks, challenges and opportunities of GenAI and LLMs. The panel was moderated by Andrew Delaney, president and chief content officer at A-Team Group, and joined by Marili Anderson, chief compliance officer at William Blair International; Chris Beevor, UK MLRO and group compliance COO at GAM; Vall Herard, co-founder and CEO at Saifr; and Shaun Hurst, principal regulatory advisor at Smarsh.

The panel tracked the history of AI, noting the incremental increase in its capabilities and the real-world potential offered by GenAI and LLMs. “This is an opportunity to upscale compliance and develop skills other than understanding data,” said one panellist. Another commented: “AI and compliance go hand in hand. Compliance officers’ role is to find exceptions. AI can do this and will have a big impact in compliance.”

Use cases

The panel noted early use cases of GenAI including transaction and communications surveillance, financial crime issues such as anti-money laundering, customer onboarding and screening, KYC, and financial advisory chat bots.

Explaining the onboarding use case, a speaker said: “If you give the name of a person to a GenAI tool you should be able to spontaneously surface whether you should be doing business with this person.” Another added: “The core is finding focused models. GenAI models can do cool things, but you need something specific to test them, perhaps communications surveillance, where you can interrogate the data faster than ever before.”

Risks

Looking at the risks of GenAI and LLMs, an audience poll asked Summit delegates, ‘What do you consider to be the biggest risk around adopting GenAI and LLMs?’ More than half the delegates (52%) noted explainability as the biggest risk. This was followed by potential misuse/risk of misinformation, data quality, data privacy and managing bias.

The speakers concurred with the poll results, highlighting the need for explainability, but also problems of achieving it. “Models with many parameters cannot explain everything they do,” said a speaker. Another added: “Explainability is very important, but getting a framework and model governance right is a struggle. Then you need to make sure the model doesn’t drift.”
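
The point about drift can be made concrete. The sketch below uses the Population Stability Index (PSI), a widely used way to compare a model’s recent score distribution with the one it was validated on. The function, bin count and sample scores are illustrative assumptions, not anything the panel prescribed, and the common rule of thumb that a PSI above 0.2 signals significant drift is a convention, not a standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples (illustrative)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(values, i):
        left = lo + i * width
        if i == bins - 1:
            # last bin is closed on the right so the maximum value is counted
            count = sum(1 for v in values if left <= v <= hi)
        else:
            count = sum(1 for v in values if left <= v < left + width)
        return max(count / len(values), 1e-6)  # avoid log(0) for empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.12, 0.25, 0.31, 0.44, 0.52, 0.63, 0.71, 0.88]  # validation-time scores
recent = list(baseline)  # identical distribution, so PSI is zero
print(round(psi(baseline, recent), 4))  # 0.0
```

Run against a genuinely shifted batch of scores, the index rises above zero, giving a simple, auditable trigger for re-validating or retiring a model.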

Another speaker added: “Vendors need to take more responsibility for AI. They need to make models explainable – black boxes are no good anymore.”

Data quality was acknowledged as a common challenge across financial institutions, yet key to ensuring AI models learn from the right data. Whether internal or external, the data also needs to be trusted. One solution is to continue using a lexicon to identify words of interest while AI assesses their sentiment. Bias can begin to be addressed by including a diverse group of people in labelling the data used by AI models and by keeping a human in the loop when building the models.
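
The lexicon-plus-AI approach described above can be sketched in a few lines: a fixed lexicon flags candidate terms, and a sentiment check decides whether the hit warrants escalation. The watch terms, threshold and toy sentiment stub below are all invented for demonstration – a real deployment would call an actual sentiment or intent model in place of the stub.

```python
# Hypothetical watch terms; a real surveillance lexicon would be far larger.
LEXICON = {"guarantee", "off the books", "delete this"}

def lexicon_hits(message: str) -> set[str]:
    """Return the watch-list terms present in the message (case-insensitive)."""
    text = message.lower()
    return {term for term in LEXICON if term in text}

def sentiment_score(message: str) -> float:
    """Stand-in for an AI sentiment model: a toy heuristic so the example
    runs on its own. Negative scores indicate a suspicious tone."""
    negative_cues = ("don't tell", "keep quiet", "delete")
    return -1.0 if any(cue in message.lower() for cue in negative_cues) else 0.0

def should_escalate(message: str, threshold: float = -0.5) -> bool:
    """Escalate only when a lexicon term appears AND the tone looks suspicious."""
    return bool(lexicon_hits(message)) and sentiment_score(message) < threshold

print(should_escalate("Please delete this thread"))         # prints True
print(should_escalate("I guarantee the meeting is at 3pm"))  # prints False
```

The second message shows why the hybrid matters: the lexicon alone would flag “guarantee”, but the sentiment layer recognises a benign context and suppresses the alert.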

Acknowledging that AI is a journey, the panel noted that you must get the fundamentals in place before building models. Transparent policy and procedures around AI are key, along with understanding the business case, selecting trusted data, testing the data, putting governance in place, and being ready to scrap models and start again when necessary.

In conclusion, the panel said GenAI and LLMs will offer massive benefits in the long term, but there is still a lot to learn. Don’t rush to be first to market and, as one speaker put it: “If you are thinking of putting the technology in, but don’t understand the data – stop.”
