
BoE and FCA Plan Report on Impact of AI in Financial Services

The Bank of England (BoE) and the Financial Conduct Authority (FCA) have completed a one-year exploration into the deployment of artificial intelligence (AI) by financial institutions and the implications this may have for the financial system, including the potential need for additional regulatory standards. The findings – from four quarterly meetings and a series of workshops with private-sector organisations – will be published in a forthcoming final report.

The BoE’s then Governor Mark Carney announced the formation of an AI Public-Private Forum (AIPPF) in 2019. Due to the impact of Covid-19, however, the first meeting was delayed until October 2020, with the expectation that the initiative would run for 12 months. The forum was jointly hosted by the BoE and the FCA, with members including individuals from leading banks, corporates and asset management firms.

The final meeting of the AIPPF was held earlier this month and focused on issues surrounding governance. According to the minutes published by the Bank, members discussed whether there is a role for additional regulatory standards, what those standards might be, and whether there should be a certified auditing regime for AI. Following this final quarterly meeting, the Forum will also hold a number of workshops on governance-related topics.

Moderator Varun Paul, Head of the Fintech Hub at the BoE, said that a final report will be published on conclusion of the AIPPF. He noted that the Bank and FCA will be considering what future engagement with the wider financial industry could look like in light of the lessons learned through the AIPPF, including how to take forward the numerous findings and recommendations that have come out of the Forum and will be included in the final report.

Dave Ramsden, Deputy Governor, Markets and Banking at the BoE, had also noted earlier this year that the AIPPF will produce a final report upon its conclusion, adding that as the Bank and the industry learn more from this soft support around AI, “it may be that harder forms of public infrastructure will be needed, provided either by the Bank, another authority or as a collaboration between different authorities”.

“Those decisions will be taken at the appropriate time,” he said. “If that means that the Bank’s direct involvement in these areas passes over to another institution, then that’s fine: we will have played our role at the right time.”

According to the final meeting minutes of the AIPPF, members had also discussed whether the regulator could ask companies to develop policies that outline how they have considered the ethics of AI, and suggested that certification could be used as a mark of recognition to identify those firms that have developed AI policies. Members also suggested that audits of such policies could extend to looking at how firms plan to remediate errors in the case of false positives.

Joint Chair Dave Ramsden, Deputy Governor, Markets and Banking at the BoE, said that the topic of governance was crucial to the safe adoption of AI in UK financial services. He explained that AI may differ from other new and emerging technologies from a governance perspective because AI can limit, or even potentially eliminate, human judgement and oversight from key decisions. He added that this clearly poses challenges to existing governance frameworks in financial services and the concepts of individual and collective accountability enshrined in them, such as the elements of the Senior Managers and Certification Regime (SM&CR).

Co-Chair Jessica Rusu, Chief Data, Information and Intelligence Officer (CDIIO) at the FCA, added that it was also important that regulators learn from industry practice and current governance approaches to AI, and jointly explore how governance can contribute to the ethical, safe, robust and resilient use of AI in financial services.
