About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Potential and Practicalities of Implementing Generative AI for Compliance

While AI has been around for 20 years or so, its time has come in capital markets with Generative AI and large language models (LLMs) able to handle vast volumes of compliance data and achieve outcomes that cannot be reached by humans. GenAI apps are not, however, a silver bullet, and compliance teams are not yet ready, on the whole, to use the technologies.

A panel session at A-Team Group’s recent RegTech Summit London considered these issues in the context of the risks, challenges and opportunities of GenAI and LLMs. The panel was moderated by Andrew Delaney, president and chief content officer at A-Team Group, and joined by Marili Anderson, chief compliance officer at William Blair International; Chris Beevor, UK MLRO and group compliance COO at GAM; Vall Herard, co-founder and CEO at Saifr; and Shaun Hurst, principal regulatory advisor at Smarsh.

The panel tracked the history of AI, noting the incremental increase in its capabilities and the real-world potential offered by GenAI and LLMs. “This is an opportunity to upskill compliance and develop skills other than understanding data,” said one panellist. Another commented: “AI and compliance go hand in hand. Compliance officers’ role is to find exceptions. AI can do this and will have a big impact in compliance.”

Use cases

The panel noted early use cases of GenAI including transaction and communications surveillance, financial crime issues such as anti-money laundering, customer onboarding and screening, KYC, and financial advisory chat bots.

Explaining the onboarding use case, a speaker said: “If you give the name of a person to a GenAI tool you should be able to spontaneously surface whether you should be doing business with this person.” Another added: “The core is finding focused models. GenAI models can do cool things, but you need something specific to test them, perhaps communications surveillance, where you can interrogate the data faster than ever before.”

Risks and challenges

Looking at the risks of GenAI and LLMs, an audience poll asked Summit delegates, ‘What do you consider to be the biggest risk around adopting GenAI and LLMs?’ More than half the delegates (52%) noted explainability as the biggest risk. This was followed by potential misuse/risk of misinformation, data quality, data privacy and managing bias.

The speakers concurred with the poll results, highlighting the need for explainability, but also problems of achieving it. “Models with many parameters cannot explain everything they do,” said a speaker. Another added: “Explainability is very important, but getting a framework and model governance right is a struggle. Then you need to make sure the model doesn’t drift.”

Another speaker added: “Vendors need to take more responsibility for AI. They need to make models explainable; black boxes are no good anymore.”
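One lightweight way teams monitor the model drift the panel warned about is the Population Stability Index (PSI), which compares a model’s baseline score distribution against live scores. The sketch below is illustrative only; the sample data and the commonly quoted ~0.2 alert threshold are assumptions, not anything a panellist or vendor specified.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (e.g. training-time)
    score distribution and a live one. Readings above roughly 0.2 are
    often treated as a sign of significant drift (rule of thumb only)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(xs, i):
        # Share of scores falling in bin i; the last bin is closed at hi.
        n = sum(1 for x in xs
                if lo + i * width <= x < lo + (i + 1) * width
                or (i == bins - 1 and x == hi))
        return max(n / len(xs), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

# Hypothetical model scores: a baseline, a stable live sample, and a
# drifted one where scores have shifted sharply upward.
baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8]
live_ok  = [0.1, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.6, 0.7, 0.8]
live_bad = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 1.0, 1.0]
```

In practice a check like this would run on a schedule against production scores, with a breach routed into the model-governance framework the speakers described.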

Data quality was acknowledged as a common challenge across financial institutions, yet it is key to ensuring AI models learn from the right data. Whether internal or external, the data also needs to be trusted. One solution is to continue using a lexicon to identify words of interest while using AI to understand their sentiment. Bias can begin to be addressed by involving a diverse group of people in labelling the data used by AI models, and by keeping a human in the loop when building them.
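The lexicon-plus-sentiment approach above can be sketched as follows. This is a minimal illustration, not any vendor’s implementation: the lexicon terms, word lists, and the toy word-count scorer (standing in for a real sentiment model) are all assumptions.

```python
# Hybrid surveillance check: a lexicon flags terms of interest, and a
# sentiment score decides whether the hit escalates. All term lists are
# hypothetical; toy_sentiment is a placeholder for an ML sentiment model.

RISK_LEXICON = {"guarantee", "insider", "off the books"}
NEGATIVE_WORDS = {"hide", "delete", "never", "problem"}
POSITIVE_WORDS = {"compliant", "approved", "disclosed"}

def toy_sentiment(text: str) -> float:
    """Crude word-count sentiment in [-1, 1]; stands in for a real model."""
    words = text.lower().split()
    score = (sum(w in POSITIVE_WORDS for w in words)
             - sum(w in NEGATIVE_WORDS for w in words))
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def flag_message(text: str) -> bool:
    """Escalate only lexicon hits whose surrounding tone is negative."""
    lowered = text.lower()
    lexicon_hit = any(term in lowered for term in RISK_LEXICON)
    return lexicon_hit and toy_sentiment(text) < 0
```

The design point is the division of labour the panellist described: the deterministic lexicon stays auditable, while the sentiment layer cuts the false positives a keyword match alone would raise.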

Acknowledging that AI is a journey, the panel noted that you must get the fundamentals in place before building models. Transparent policy and procedures around AI are key, along with understanding the business case, selecting trusted data, testing the data, putting governance in place, and being ready to scrap models and start again when necessary.

In conclusion, the panel said GenAI and LLMs will offer massive benefits in the long term, but there is still a lot to learn. Don’t rush to be first to market and, as one speaker put it: “If you are thinking of putting the technology in, but don’t understand the data – stop.”
