The Bank of England (BoE) and the Financial Conduct Authority (FCA) have completed a one-year exploration into the deployment of artificial intelligence (AI) by financial institutions and the implications this may have for the financial system, including the potential need for additional regulatory standards. The findings – from four quarterly meetings and a series of workshops with private-sector organisations – will be published in a final report to follow.
The BoE’s then-Governor Mark Carney announced the formation of an AI Public-Private Forum (AIPPF) in 2019, but due to the impact of Covid-19 the first meeting was delayed until October 2020, with the expectation that the initiative would run for 12 months. The forum was jointly hosted by the BoE and the FCA, with members including individuals from leading banks, corporates and asset management firms.
The final meeting of the AIPPF was held earlier this month and focused on the issues surrounding governance, with members discussing whether there is a role for additional regulatory standards, what those standards might be and whether there should be a certified auditing regime for AI, according to the minutes published by the Bank. Following this final quarterly meeting, the Forum will also hold a number of workshops focusing on governance-related topics.
Moderator Varun Paul, Head of the Fintech Hub at the BoE, said that a final report will be published at the conclusion of the AIPPF and noted that the Bank and FCA will be thinking about what future engagement with the financial industry more broadly could look like in light of the lessons learned through the AIPPF. According to Paul, this includes how to take forward the numerous findings and recommendations that have come out of the Forum and will be included in the final report.
Dave Ramsden, Deputy Governor, Markets and Banking at the BoE, had also noted earlier this year that the AIPPF will produce a final report upon its conclusion, adding that as the Bank and the industry learn more from this soft support around AI, “it may be that harder forms of public infrastructure will be needed, provided either by the Bank, another authority or as a collaboration between different authorities”.
“Those decisions will be taken at the appropriate time,” he said. “If that means that the Bank’s direct involvement in these areas passes over to another institution, then that’s fine: we will have played our role at the right time.”
According to the final meeting minutes of the AIPPF, members had also discussed whether the regulator could ask companies to develop policies outlining how they have considered the ethics of AI, and suggested that certification could be used as a mark of recognition to identify those firms that have developed AI policies. Members also suggested that audits of such policies could extend to looking at how firms plan to remediate errors in the case of false positives.
Joint Chair Ramsden said that the topic of governance was crucial to the safe adoption of AI in UK financial services. He explained that AI may differ from other new and emerging technologies from a governance perspective because AI can limit, or even potentially eliminate, human judgement and oversight from key decisions. He added that this clearly poses challenges to existing governance frameworks in financial services and to the concepts of individual and collective accountability enshrined in them, such as the elements of the Senior Managers and Certification Regime (SM&CR).
Co-Chair Jessica Rusu, Chief Data, Information and Intelligence Officer (CDIIO) at the FCA, added that it was also important for regulators to learn from industry practice and current governance approaches to AI, and to jointly explore how governance can contribute to the ethical, safe, robust and resilient use of AI in financial services.