
A-Team Insight Blogs

Risks and Concerns with Generative AI – What Financial Institutions Need to Consider


By Jennifer Clarke, Head of Content, Global Relay.

With use cases for AI becoming more commonplace, many compliance teams have begun to employ AI-driven tools to assist with ordinary tasks including regulatory change management and surveillance. However, as with most new technologies, as innovation around generative AI increases, various new risks and challenges emerge.

Potential downsides of generative AI

The unknown power of generative AI appears to be the main concern. Recent reports state that JP Morgan, Citigroup, and Deutsche Bank have banned staff from using ChatGPT, largely because the technology's behaviour is not yet well understood. In addition, more than 1,000 technology experts, including Elon Musk and Steve Wozniak, recently signed an open letter calling for all AI labs to pause the training of systems more powerful than GPT-4 for at least six months. The key driver is that even the creators no longer fully understand the technology's potential.

Another concern surrounding generative AI is the data it handles and the associated risk. OpenAI's own FAQs note that certain OpenAI employees, as well as third-party contractors, can review the information and queries users submit. Italy's data protection authority recently banned ChatGPT over GDPR compliance concerns. Financial services firms hold vast quantities of consumer and employee data – if a single employee were to paste that data into ChatGPT, the exposure could be far-reaching.

Beyond data, many see residual risks in the technology itself. Some institutions worry that their analysts could use ChatGPT to build predictive models that produce inaccurate results, since ChatGPT's training data has a cut-off in 2021. Others worry that, to keep pace with innovation, they must invest vast sums of money either to build similar technology or to buy licenses that allow them to integrate the tool.

Who is liable for AI-based decisions that go wrong?

Businesses must consider who is responsible if AI gives bad advice, or communicates in a way that is not commensurate with the business's style or code of conduct. Regulators are beginning to increase their focus on the individual liability of senior managers and chief compliance officers in the event of wrongdoing. It will be interesting to see where responsibility lies when AI is used to provide financial advice or business communications.

What steps are regulators taking in response to the rise of generative AI?

While many compliance teams are acting fast to assess the potential risks associated with generative AI, regulators appear to be taking a slower, more considered approach to the rapid roll-out of generative AI products such as ChatGPT, DALL-E, and Bard.

This caution may explain the apparent lack of urgency. In February 2023, SEC Chair Gary Gensler noted in an interview with Politico that the "transformative technology right now of our times is predictive data analytics and everything underlying artificial intelligence". Gensler's comment may hint at future regulatory consideration, with new approaches to come.

One government body that appears to be paying particular attention is the US Federal Trade Commission (FTC), which recently said there is currently an "AI hype" and that AI has become a "marketing term". The FTC's concern is that "some products with AI claims might not even work as advertised". As such, it has warned companies to promote AI capabilities only if the claims are true. In short, it has asked firms to stop exaggerating the use of AI in products, noting "you don't need a machine to predict what the FTC might do when those claims are unsupported".

Potential upsides of generative AI

It is worth noting that generative AI offers vast opportunities, benefits, and potential for the financial industry and the broader economy. As a rapidly evolving technology, it is still being studied and explored by experts, regulatory bodies, and governments. In time, it may serve as the basis for various functions, such as advanced search, translation, data analysis, and risk management.

It is important for firms to approach generative AI with caution and scrutiny. If you can't explain how something happened, how will you report it to the regulators if it fails? The FTC's warning sets a good benchmark for how regulators expect firms to grapple with AI: cautiously. Understand it first, deploy it second. The potential benefits of integrating generative AI into existing tools may prove short-lived if compliance considerations are not given sufficient attention.
