
Building Trust in AI: An Imperative for Widespread Adoption

By Anshuman Prasad, Global Head of Risk Analytics, CRISIL.

As large language models (LLMs) continue to surprise, there is a clamour to adopt AI in a more meaningful fashion across the financial services sector, where machine learning was previously accessible only in rarefied tech and quant circles.

From algorithmic trading to predictive analytics and chatbots, AI promises greater efficiency, insights and automation for financial institutions. But there is also a realization that with these benefits come risks around transparency, ethics and governance, and these factors are holding back mainstream adoption of AI.

Wider adoption of AI by financial institutions is also hampered by concerns around trust, reliability, and risk. These concerns are rightfully more pronounced within the regulated environment of capital markets, where there are fears that increased adoption of AI could pose systemic risks.

Regulators globally have shown concern. US SEC Chairman Gary Gensler was recently quoted as saying that AI could cause the next financial crisis due to herd behaviour, as just a few AI companies will build the foundational models on which industry participants come to rely. The UK Prudential Regulation Authority (PRA) and the Monetary Authority of Singapore (MAS), meanwhile, have set out guidelines and definitions that provide at least a framework for adopting AI safely.

The PRA’s supervisory statement – SS1/23 – focuses on model risk management for banks. It encourages a comprehensive understanding and management of the risks associated with AI, including those arising from machine learning models. To ensure the principles are implemented effectively and proportionately, the PRA proposes allocating responsibility for the overall model risk management framework to an appropriate Senior Management Function (SMF) holder.

MAS is promoting Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and Data Analytics. The Singapore regulator has released an open-source toolkit to aid in the responsible use of AI in the financial industry. This includes a consistent and robust AI framework that spans geographies; a risk-based approach to determine the governance required for the AI use cases; and responsible AI practices and training for the new generation of AI professionals in the financial sector.

Irrespective of the regulatory scenario, financial institutions need to take a lead role. They must convince their clients, and even the public, that their AI systems are fair, validated and accountable. Furthermore, at CRISIL we believe there are a few key best practices that firms must adopt to ensure fail-safe AI implementation.

Explaining AI Decisions

A core component of trust is explainability – understanding how AI models work and arrive at outputs. AI systems for trading, credit decisions and fraud detection should not be ‘black boxes’. Their algorithms and data sources should be documented properly.

Teams should test models to ensure they perform as intended. They must also monitor for bias or errors, and be able to explain why an AI program took certain actions. Explanations may be challenging with advanced techniques like deep learning, but institutions should make appropriate efforts.
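
By way of illustration, open-source feature-attribution tools such as the shap library can show which inputs drove an individual score. The sketch below is a minimal, hypothetical example – the model, data and features are invented for demonstration, not drawn from any production system:

```python
# A minimal, hypothetical sketch of per-decision explainability using the
# open-source `shap` library; the model, data and features are invented.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))             # e.g. income, debt ratio, tenure
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy approval label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each score to individual input features,
# turning a 'black box' prediction into a per-applicant explanation.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:5])
print(contributions)  # per-feature contribution for five sample applicants
```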

Upholding Ethics and Fairness

Ethical AI means fair treatment and outcomes. Data inputs and algorithms themselves can introduce or exacerbate biases against protected groups. Financial firms must proactively assess models for issues like gender or racial bias and mitigate any unfair impacts.
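
One common, if simple, starting point for such an assessment is comparing approval rates across groups. The snippet below is a minimal sketch, assuming binary approve/decline decisions and a hypothetical protected-group flag; the 0.8 threshold reflects the widely cited 'four-fifths' rule of thumb, not a regulatory requirement:

```python
# A minimal sketch of a group-fairness check, assuming binary approve/decline
# decisions and a hypothetical protected-group flag.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Approval rate of the unprivileged group divided by that of the
    privileged group; values near 1.0 indicate parity."""
    return decisions[group == 0].mean() / decisions[group == 1].mean()

decisions = np.array([1, 0, 1, 0, 1, 1, 1, 0])  # 1 = approved
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # hypothetical group labels

ratio = disparate_impact_ratio(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the widely cited 'four-fifths' rule of thumb
    print("potential adverse impact: investigate and mitigate")
```

In practice firms would use richer measures, such as equalised odds or calibration by group, and dedicated fairness toolkits, but even a simple ratio like this makes bias measurable and auditable.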

Ongoing monitoring, impact assessments and staff training on ethics are key. Having diverse teams build and validate AI helps bring different perspectives to bear. AI should ultimately benefit all customers and society – not just the institutions deploying it.

Enabling Transparency

Transparency breeds trust. The data and methodologies behind AI in capital markets should not be shrouded in secrecy. Firms should communicate openly about what data trains algorithms, how models work, their limitations and steps to address risks.

Third-party testing and audits enable outside scrutiny and help identify blind spots. Documentation helps, but interpreting complex algorithms requires expertise. Providing transparency into AI systems in understandable terms demonstrates a commitment to building trust.

Instituting Governance

Robust AI governance establishes accountability. Policies and controls for acquiring data, building models, testing, documenting and monitoring results are needed. Risk assessment frameworks specifically tailored for AI can uncover vulnerabilities.
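
For the monitoring leg of such a framework, one widely used drift measure is the Population Stability Index (PSI), which compares live score distributions against the training-time baseline. The sketch below is illustrative only; the data are synthetic and the 0.25 review trigger is a common convention rather than a regulatory threshold:

```python
# A minimal sketch of drift monitoring with the Population Stability Index
# (PSI); data and the 0.25 review trigger are illustrative conventions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) score sample and live scores."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
reference = rng.beta(2, 5, size=10_000)  # scores at validation time
live = rng.beta(2, 4, size=10_000)       # shifted production scores

print(f"PSI = {psi(reference, live):.3f}")  # > 0.25 commonly triggers review
```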

Roles and responsibilities for model development, compliance, audit and operations should be clearly defined. Firms must budget appropriately to support fairness, transparency and explainability capabilities. With the right governance model, financial institutions can deploy AI responsibly and at scale.

Building Confidence in AI 

Trust is essential for the adoption of artificial intelligence in capital markets. Through explainability, ethics, transparency and governance, institutions can deploy AI responsibly.

Financial companies should follow emerging regulatory guidance and industry best practices for fairness, testing and risk management. With thoughtful implementation, AI can transform finance for the better while maintaining public confidence.
