
Risks and Concerns with Generative AI – What Financial Institutions Need to Consider

By Jennifer Clarke, Head of Content, Global Relay.

With AI use cases becoming more commonplace, many compliance teams have begun to employ AI-driven tools to assist with routine tasks such as regulatory change management and surveillance. However, as with most new technologies, innovation around generative AI brings with it a range of new risks and challenges.

Potential downsides of generative AI

The unknown power of generative AI appears to be the main concern. Recent reports have stated that JP Morgan, Citigroup and Deutsche Bank have banned staff from using ChatGPT, largely because the technology is still poorly understood. We have also recently seen more than 1,000 tech experts, including Elon Musk and Steve Wozniak, sign an open letter calling for all AI labs to pause the training of AI more powerful than GPT-4 for at least six months. The key driver here is that even the technology’s creators no longer fully understand its potential.

Another concern surrounding generative AI is data and the associated risk of exposure. OpenAI’s own FAQs note that certain OpenAI employees, as well as third-party contractors, can access the information or queries posted by users for review. Italy has recently banned ChatGPT because of GDPR compliance concerns. Financial services firms hold vast quantities of consumer and employee data – if a single employee were to plug this data into ChatGPT, the result could be far-reaching data exposure.

Beyond data, many see residual risks with the technology. Some institutions are worried that their analysts could use ChatGPT to build predictive models, which could produce inaccurate results because ChatGPT’s training data was cut off in 2021. Others are worried that, to keep pace with innovation, they must invest vast sums of money to either build similar technology or buy licenses that allow them to integrate the tool.

Who is liable for AI-based decisions that go wrong?

Businesses must consider who is responsible when AI gives bad advice, or communicates in a way that is not in keeping with the firm’s style or code of conduct. Regulators are beginning to increase their focus on the individual liability of senior managers and chief compliance officers in the event of wrongdoing. It will be interesting to see where responsibility lies when AI is used to provide financial advice or business communications.

What steps are regulators taking in response to the rise of generative AI?

While many compliance teams are acting fast to assess the potential risks associated with generative AI, regulators appear to be taking a slower, more considered approach to the rapid roll-out of generative AI products such as ChatGPT, DALL-E, and Bard.

That caution may explain the apparent lack of urgency. In February 2023, SEC Chair Gary Gensler noted in an interview with Politico that the “transformative technology right now of our times is predictive data analytics and everything underlying artificial intelligence”. Gensler’s comment may hint at future regulatory consideration, with new approaches to come.

One government body that appears to be paying particular attention is the US Federal Trade Commission (FTC), which recently said there is currently an “AI hype” and that AI is a “marketing term”. The FTC’s concern is that “some products with AI claims might not even work as advertised”. As such, it has warned companies to promote AI capabilities only if they are true. In short, it has asked firms to stop exaggerating the use of AI in their products, noting “you don’t need a machine to predict what the FTC might do when those claims are unsupported”.

Potential upsides of generative AI

It is worth mentioning that generative AI offers vast opportunities, benefits, and potential for the financial industry and the broader economy. As a rapidly evolving technology, generative AI is still being studied and explored by experts, regulatory bodies, and governments. As time goes on, it may serve as the basis for various functions, such as advanced search, translation, data analysis, and risk management.

It is important for firms to approach generative AI with caution and scrutiny. If you can’t explain how something happened, how will you report it to regulators when it fails? The FTC’s warning sets a good benchmark for how regulators expect firms to grapple with AI – cautiously. Understand it first, deploy it second. The potential benefits of integrating generative AI into existing tools may prove short-lived if compliance considerations are not given sufficient attention.
