The knowledge platform for the financial technology industry

A-Team Insight Blogs

Building Trust in AI: An Imperative for Widespread Adoption


By Anshuman Prasad, Global Head of Risk Analytics, CRISIL.

As large language models (LLMs) continue to surprise, there is a clamour to adopt AI more meaningfully across the financial services sector, where machine learning was previously confined to rarefied tech and quant circles.

From algorithmic trading to predictive analytics and chatbots, AI promises greater efficiency, insights and automation for financial institutions. But there is also a realization that with these benefits come risks around transparency, ethics and governance, and these factors are holding back mainstream adoption of AI.

Concerns around trust, reliability and risk are rightfully more pronounced within the regulated environment of capital markets, where there are fears that wider adoption of AI could pose systemic risks.

Regulators globally have shown concern. US SEC Chairman Gary Gensler was recently quoted as saying that AI could cause the next financial crisis through herd behaviour, as just a few AI companies will build the foundational models on which industry participants come to rely. The UK Prudential Regulation Authority (PRA) and Monetary Authority of Singapore (MAS), meanwhile, have set out guidelines and definitions that provide at least a framework for adopting AI safely.

The PRA’s supervisory statement – SS1/23 – focuses on model risk management for banks. It encourages a comprehensive understanding and management of the risks associated with AI, including those arising from machine learning models. To ensure the principles are implemented effectively and proportionately, the PRA proposes allocating responsibility for the overall model risk management framework to an appropriate Senior Management Function (SMF) holder.

MAS is promoting Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and data analytics. The Singapore regulator has released an open-source toolkit to aid the responsible use of AI in the financial industry. This includes a consistent and robust AI framework that spans geographies; a risk-based approach to determining the governance required for AI use cases; and responsible AI practices and training for the new generation of AI professionals in the financial sector.

Irrespective of the regulatory scenario, financial institutions need to take a lead role. They must convince their clients, and even the public, that their AI systems are fair, validated and accountable. At CRISIL, we believe there are a few key best practices that firms must adopt to ensure fail-safe AI implementation.

Explaining AI Decisions

A core component of trust is explainability – understanding how AI models work and arrive at outputs. AI systems for trading, credit decisions and fraud detection should not be ‘black boxes’. Their algorithms and data sources should be documented properly.

Teams should test models to ensure they perform as intended, monitor for bias or errors, and be able to explain why the AI program took certain actions. Explanations may be challenging with advanced techniques such as deep learning, but institutions should make appropriate efforts.
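One model-agnostic way to produce such explanations is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. As a minimal sketch, the toy `approve` credit rule and the feature names below are hypothetical, not any firm's actual model:

```python
import random

# Hypothetical credit-approval rule over three applicant features
# (income, debt ratio, postcode digit). The postcode feature is
# deliberately ignored by the rule, so its importance should be zero.
def approve(income, debt_ratio, postcode_digit):
    return income > 50_000 and debt_ratio < 0.4

def accuracy(rows, labels):
    return sum(approve(*r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's column is shuffled:
    a simple, model-agnostic explainability check."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return baseline - accuracy(shuffled, labels)

# Synthetic applicants; the rule's own decisions serve as labels.
rng = random.Random(42)
rows = [(rng.randint(20_000, 120_000), rng.random(), rng.randint(0, 9))
        for _ in range(500)]
labels = [approve(*r) for r in rows]

for i, name in enumerate(["income", "debt_ratio", "postcode_digit"]):
    print(name, round(permutation_importance(rows, labels, i), 3))
```

Because the check only needs the model's inputs and outputs, the same approach works for an opaque vendor model where the internals cannot be inspected.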

Upholding Ethics and Fairness

Ethical AI means fair treatment and outcomes. Data inputs and algorithms themselves can potentially introduce or exacerbate biases against protected groups. Financial firms must proactively assess models for issues like gender or racial bias and mitigate any unfair impacts.

Ongoing monitoring, impact assessments and staff training on ethics are key. Having diverse teams build and validate AI can help include different perspectives. AI should ultimately benefit all customers and society – not just the institutions deploying it.
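One widely used screening metric for the bias assessment described above is the disparate impact ratio, which compares approval rates across groups. This is a minimal sketch; the group names and counts are illustrative, and the four-fifths (80%) threshold is a common screening convention rather than a legal standard:

```python
# Each decision is (protected_group, approved) - an assumed data shape.
def approval_rate(decisions, group):
    sub = [approved for g, approved in decisions if g == group]
    return sum(sub) / len(sub)

def disparate_impact_ratio(decisions, group_a, group_b):
    """Ratio of approval rates between two groups; values well below
    1.0 suggest the model should be investigated for bias."""
    return approval_rate(decisions, group_a) / approval_rate(decisions, group_b)

# Illustrative outcomes: group_a approved 30/100, group_b approved 50/100.
decisions = ([("group_a", True)] * 30 + [("group_a", False)] * 70
             + [("group_b", True)] * 50 + [("group_b", False)] * 50)

ratio = disparate_impact_ratio(decisions, "group_a", "group_b")
print(round(ratio, 2))  # 0.3 / 0.5 = 0.6, below the 0.8 screening threshold
```

A ratio below the chosen threshold does not prove unfairness on its own, but it flags the model for the deeper impact assessment the text calls for.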

Enabling Transparency

Transparency breeds trust. The data and methodologies behind AI in capital markets should not be shrouded in secrecy. Firms should communicate openly about what data trains algorithms, how models work, their limitations and steps to address risks.

Third-party testing and audits enable outside scrutiny and help identify blind spots. Documentation helps, but interpreting complex algorithms requires expertise. Explaining AI systems in understandable terms demonstrates a commitment to building trust.

Instituting Governance

Robust AI governance establishes accountability. Policies and controls for acquiring data, building models, testing, documenting and monitoring results are needed. Risk assessment frameworks specifically tailored for AI can uncover vulnerabilities.

Roles and responsibilities for model development, compliance, audit and operations should be clearly defined. Firms must budget appropriately to support fairness, transparency and explainability capabilities. With the right governance model, financial institutions can deploy AI responsibly and at scale.

Building Confidence in AI 

Trust is essential for the adoption of artificial intelligence in capital markets. Through explainability, ethics, transparency and governance, institutions can deploy AI responsibly.

Financial companies should follow emerging regulatory guidance and industry best practices for fairness, testing and risk management. With thoughtful implementation, AI can transform finance for the better while maintaining public confidence.

