About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Building Trust in AI: An Imperative for Widespread Adoption


By Anshuman Prasad, Global Head of Risk Analytics, CRISIL.

As large language models (LLMs) continue to surprise, there is a clamour to adopt AI in a more meaningful fashion across the financial services sector, where machine learning was previously accessible only in rarefied tech and quant circles.

From algorithmic trading to predictive analytics and chatbots, AI promises greater efficiency, insights and automation for financial institutions. But there is also a realisation that with these benefits come risks around transparency, ethics and governance, and these factors are holding back mainstream adoption of AI.

Wider adoption of AI by financial institutions is also hampered by concerns around trust, reliability, and risk. These concerns are rightfully more pronounced within the regulated environment of capital markets, where there are fears that increased adoption of AI could pose systemic risks.

Regulators globally have shown concern. US SEC Chair Gary Gensler recently warned that AI could cause the next financial crisis through herd behaviour, as just a few AI companies will build the foundational models on which industry participants come to rely. The UK Prudential Regulation Authority (PRA) and the Monetary Authority of Singapore (MAS), meanwhile, have set out guidelines and definitions that provide at least a framework for adopting AI safely.

The PRA’s supervisory statement – SS1/23 – focuses on model risk management for banks. It encourages a comprehensive understanding and management of the risks associated with AI, including those arising from machine learning models. To ensure the principles are implemented effectively and proportionately, the PRA proposes allocating responsibility for the overall model risk management framework to an appropriate Senior Management Function (SMF) holder.

MAS is promoting Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and data analytics. The Singapore regulator has released an open-source toolkit to aid the responsible use of AI in the financial industry. This includes a consistent and robust AI framework that spans geographies; a risk-based approach to determining the governance required for AI use cases; and responsible AI practices and training for the new generation of AI professionals in the financial sector.

Irrespective of the regulatory scenario, financial institutions need to take a lead role. They must convince their clients, and even the public, that their AI systems are fair, validated and accountable. At CRISIL, we believe there are a few key best practices that firms must adopt to ensure safe, reliable AI implementation.

Explaining AI Decisions

A core component of trust is explainability – understanding how AI models work and arrive at outputs. AI systems for trading, credit decisions and fraud detection should not be ‘black boxes’. Their algorithms and data sources should be documented properly.

Teams should test models to ensure they perform as intended. And they must monitor for bias or errors, and be able to explain why the AI program took certain actions. Explanations may be challenging with advanced techniques like deep learning, but institutions should make appropriate efforts.
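One simple, model-agnostic way to approach the explanation problem is permutation importance: shuffle one input feature at a time and measure how much model accuracy degrades, revealing which inputs actually drive decisions. The sketch below illustrates the idea on a deliberately toy, hypothetical credit model (a single income threshold); the model, data and threshold are illustrative assumptions, not any real scoring system.

```python
import random

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Estimate each feature's importance by measuring how much
    shuffling that feature's values degrades model accuracy."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    importances = []
    for f in range(n_features):
        # Break the link between feature f and the labels by shuffling it.
        col = [r[f] for r in rows]
        rng.shuffle(col)
        permuted = [r[:f] + (v,) + r[f + 1:] for r, v in zip(rows, col)]
        importances.append(baseline - accuracy(permuted))
    return importances

# Hypothetical toy 'credit model': approve when income > 50.
model = lambda row: row[0] > 50
rows = [(60, 1), (40, 9), (70, 3), (30, 7)]   # (income, irrelevant feature)
labels = [True, False, True, False]

print(permutation_importance(model, rows, labels, n_features=2))
```

Because the toy model ignores the second feature entirely, its importance comes out as zero; in practice this kind of check helps teams verify that a model is relying on the inputs it is supposed to, and flag reliance on proxies it should not use.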

Upholding Ethics and Fairness

Ethical AI means fair treatment and outcomes. Data inputs and algorithms themselves can potentially introduce or exacerbate biases against protected groups. Financial firms must proactively assess models for issues like gender or racial bias and mitigate any unfair impacts.

Ongoing monitoring, impact assessments and staff training on ethics are key. Having diverse teams build and validate AI can help include different perspectives. AI should ultimately benefit all customers and society – not just the institutions deploying it.
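As a concrete starting point for such assessments, teams can compare approval rates across demographic groups. The sketch below computes a disparate impact ratio on hypothetical, made-up decision data; the 0.8 cut-off is the common 'four-fifths rule' screening heuristic, not a regulatory threshold, and a low ratio is a prompt for investigation rather than proof of bias.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (True = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of approval rates between two groups. Values well below
    1.0 suggest the model may disadvantage group_a; the 'four-fifths
    rule' heuristic flags ratios under 0.8 for review."""
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical decisions for two demographic groups (illustrative only).
group_a = [True, False, False, True, False]   # 40% approved
group_b = [True, True, False, True, True]     # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact - investigate before deployment")
```

Production fairness tooling would also control for legitimate risk factors and test additional criteria (equalised odds, calibration by group), but even this simple ratio makes bias monitoring a routine, measurable control rather than an aspiration.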

Enabling Transparency

Transparency breeds trust. The data and methodologies behind AI in capital markets should not be shrouded in secrecy. Firms should communicate openly about what data trains algorithms, how models work, their limitations and steps to address risks.

Third-party testing and audits enable outside scrutiny and help identify blind spots. Documentation helps, but interpreting complex algorithms requires expertise. Explaining AI systems in clear, understandable terms demonstrates a commitment to building trust.

Instituting Governance

Robust AI governance establishes accountability. Policies and controls for acquiring data, building models, testing, documenting and monitoring results are needed. Risk assessment frameworks specifically tailored for AI can uncover vulnerabilities.
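One widely used monitoring control that such a framework can mandate is a drift check: comparing the distribution of model inputs or scores in production against the distribution seen at training time. The sketch below computes a Population Stability Index (PSI) on hypothetical binned distributions; the data and the rule-of-thumb thresholds in the comment are illustrative conventions, not prescriptions.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of proportions that each sum to 1). Common rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    total = 0.0
    for e, a in zip(expected, actual):
        # Clamp to avoid log(0) for empty bins.
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical score distributions: at training time vs. in production.
training = [0.25, 0.25, 0.25, 0.25]
production = [0.10, 0.20, 0.30, 0.40]

print(f"PSI = {psi(training, production):.3f}")
```

Wired into a governance framework, a PSI breach becomes an auditable event: it can trigger escalation to the accountable model owner, revalidation, or retraining, turning 'monitoring results' from a policy statement into an operational control.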

Roles and responsibilities for model development, compliance, audit and operations should be clearly defined. Firms must budget appropriately to support fairness, transparency and explainability capabilities. With the right governance model, financial institutions can deploy AI responsibly and at scale.

Building Confidence in AI 

Trust is essential for the adoption of artificial intelligence in capital markets. Through explainability, ethics, transparency and governance, institutions can deploy AI responsibly.

Financial companies should follow emerging regulatory guidance and industry best practices for fairness, testing and risk management. With thoughtful implementation, AI can transform finance for the better while maintaining public confidence.
