About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Potential and Practicalities of Implementing Generative AI for Compliance

While AI has been around for 20 years or so, its time has come in capital markets with Generative AI and large language models (LLMs) able to handle vast volumes of compliance data and achieve outcomes that cannot be reached by humans. GenAI apps are not, however, a silver bullet, and compliance teams are not yet ready, on the whole, to use the technologies.

A panel session at A-Team Group’s recent RegTech Summit London considered these issues in the context of the risks, challenges and opportunities of GenAI and LLMs. The panel was moderated by Andrew Delaney, president and chief content officer at A-Team Group, and joined by Marili Anderson, chief compliance officer at William Blair International; Chris Beevor, UK MLRO and group compliance COO at GAM; Vall Herard, co-founder and CEO at Saifr; and Shaun Hurst, principal regulatory advisor at Smarsh.

The panel tracked the history of AI, noting the incremental increase in its capabilities and the real-world potential offered by GenAI and LLMs. “This is an opportunity to upscale compliance and develop skills other than understanding data,” said one panellist. Another commented: “AI and compliance go hand in hand. Compliance officers’ role is to find exceptions. AI can do this and will have a big impact in compliance.”

Use cases

The panel noted early use cases of GenAI including transaction and communications surveillance, financial crime issues such as anti-money laundering, customer onboarding and screening, KYC, and financial advisory chatbots.

Explaining the onboarding use case, a speaker said: “If you give the name of a person to a GenAI tool you should be able to spontaneously surface whether you should be doing business with this person.” Another added: “The core is finding focused models. GenAI models can do cool things, but you need something specific to test them, perhaps communications surveillance, where you can interrogate the data faster than ever before.”
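The onboarding idea described above can be sketched in code. This is a minimal, hypothetical illustration of wrapping a counterparty name in a structured screening prompt for a GenAI model; the prompt wording and the function name are assumptions for illustration, not any vendor’s actual method.

```python
# Hypothetical sketch of the onboarding-screening use case: build a
# structured prompt that asks a GenAI model to surface screening concerns
# for a named counterparty. The prompt text is an illustrative assumption.

def build_screening_prompt(name: str) -> str:
    """Compose a screening query to send to a GenAI model."""
    return (
        "You are a compliance screening assistant.\n"
        f"Counterparty name: {name}\n"
        "List any sanctions, PEP, or adverse-media concerns, and state "
        "whether enhanced due diligence is recommended. If nothing is "
        "known, say so explicitly rather than guessing."
    )

# The resulting string would be sent to whichever LLM the firm has approved.
print(build_screening_prompt("Jane Example"))
```

Note the instruction to admit when nothing is known: given the explainability and misinformation risks discussed below, prompts of this kind typically discourage the model from guessing.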

Risks

Looking at the risks of GenAI and LLMs, an audience poll asked Summit delegates, ‘What do you consider to be the biggest risk around adopting GenAI and LLMs?’ More than half the delegates (52%) noted explainability as the biggest risk. This was followed by potential misuse/risk of misinformation, data quality, data privacy and managing bias.

The speakers concurred with the poll results, highlighting the need for explainability, but also problems of achieving it. “Models with many parameters cannot explain everything they do,” said a speaker. Another added: “Explainability is very important, but getting a framework and model governance right is a struggle. Then you need to make sure the model doesn’t drift.”

Another speaker added: “Vendors need to take more responsibility for AI. They need to make models explainable; black boxes are no good anymore.”

Data quality was acknowledged as a common challenge across financial institutions, yet key to ensuring AI models learn from the right data. Whether internal or external, the data also needs to be trusted. One solution is to continue using a lexicon to identify words of interest and AI to understand their sentiment. Bias can begin to be addressed by including a diverse group of people in labelling the data used by AI models and by keeping a human in the loop when building the models.
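The lexicon-plus-sentiment approach mentioned above can be sketched as follows. This is a toy illustration, not a production surveillance system: the lexicon terms, the word-list stand-in for an ML sentiment model, and the threshold are all assumptions made for the example.

```python
# Hypothetical sketch: combine a compliance lexicon (exact term matching)
# with a sentiment signal to decide whether a message should be escalated
# for surveillance review. All terms and thresholds are illustrative.

RISK_LEXICON = {"guarantee", "off the record", "delete this", "side letter"}

# Stand-in for an ML sentiment model: in practice this would be a trained
# classifier, not a word list.
NEGATIVE_WORDS = {"angry", "hide", "worried", "loss"}

def sentiment_score(message: str) -> float:
    """Toy sentiment proxy: fraction of words that look negative."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def flag_message(message: str, threshold: float = 0.1) -> bool:
    """Escalate if a lexicon term appears or sentiment looks negative."""
    text = message.lower()
    lexicon_hit = any(term in text for term in RISK_LEXICON)
    return lexicon_hit or sentiment_score(message) >= threshold

print(flag_message("Please delete this email after reading"))  # lexicon hit
print(flag_message("Standard quarterly report attached"))
```

The design choice here mirrors the panel’s point: the deterministic lexicon keeps the decision explainable, while the AI sentiment layer adds coverage the word list alone would miss.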

Acknowledging that AI is a journey, the panel noted that you must get the fundamentals in place before building models. Transparent policy and procedures around AI are key, along with understanding the business case, selecting trusted data, testing the data, putting governance in place, and being ready to scrap models and start again when necessary.

In conclusion, the panel said GenAI and LLMs will offer massive benefits in the long term, but there is still a lot to learn. Don’t rush to be first to market and, as one speaker put it: “If you are thinking of putting the technology in, but don’t understand the data – stop.”
