A-Team Insight Blogs

Building Trust in AI: An Imperative for Widespread Adoption

By Anshuman Prasad, Global Head of Risk Analytics, CRISIL.

As large language models (LLMs) continue to surprise, there is a clamour to adopt AI in a more meaningful fashion across the financial services sector, where machine learning was previously accessible only in rarefied tech or quant circles.

From algorithmic trading to predictive analytics and chatbots, AI promises greater efficiency, insights and automation for financial institutions. But there is also a realisation that with these benefits come risks around transparency, ethics and governance, and these factors are holding back mainstream adoption of AI.

Wider adoption of AI by financial institutions is also hampered by concerns around trust, reliability, and risk. These concerns are rightfully more pronounced within the regulated environment of capital markets, where there are fears that increased adoption of AI could pose systemic risks.

Regulators globally have shown concern. US SEC Chairman Gary Gensler was recently quoted as saying that AI could cause the next financial crisis through herd behaviour, as just a few AI companies will build the foundational models on which industry participants come to rely. The UK Prudential Regulation Authority (PRA) and the Monetary Authority of Singapore (MAS), meanwhile, have set out guidelines and definitions that provide at least a framework for adopting AI safely.

The PRA’s supervisory statement – SS1/23 – focuses on model risk management for banks. It encourages a comprehensive understanding and management of the risks associated with AI, including those arising from machine learning models. To ensure the principles are implemented effectively and proportionately, the PRA proposes allocating responsibility for the overall model risk management framework to an appropriate Senior Management Function (SMF) holder.

MAS is promoting Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and data analytics. The Singapore regulator has released an open-source toolkit to aid the responsible use of AI in the financial industry. The initiative encompasses a consistent and robust AI framework that spans geographies; a risk-based approach to determining the governance required for each AI use case; and responsible AI practices and training for the new generation of AI professionals in the financial sector.

Irrespective of the regulatory scenario, financial institutions need to take a lead role. They must convince their clients, and even the public, that their AI systems are fair, validated and accountable. At CRISIL, we believe there are a few key best practices that firms must adopt to ensure fail-safe AI implementation.

Explaining AI Decisions

A core component of trust is explainability – understanding how AI models work and arrive at outputs. AI systems for trading, credit decisions and fraud detection should not be ‘black boxes’. Their algorithms and data sources should be documented properly.

Teams should test models to ensure they perform as intended, monitor for bias or errors, and be able to explain why an AI program took certain actions. Explanations may be challenging with advanced techniques like deep learning, but institutions should make appropriate efforts.
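One widely used explainability technique that fits the testing described above is permutation importance: shuffle a single feature's values and measure how much model accuracy drops. A large drop means the model leans heavily on that feature; no drop means the feature is ignored. The sketch below is illustrative only — the toy "credit model", its features and its data are assumptions, not any real scoring system.

```python
import random

def model(row):
    """Toy rule-based approver: approve when income exceeds debt.
    Deliberately ignores the third feature (an arbitrary postcode digit)."""
    income, debt, postcode_digit = row
    return income > debt

def accuracy(rows, labels):
    """Fraction of rows where the model's decision matches the label."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx, seed=42):
    """Drop in accuracy after shuffling one feature column across rows."""
    base = accuracy(rows, labels)
    column = [r[feature_idx] for r in rows]
    random.Random(seed).shuffle(column)
    shuffled = [list(r) for r in rows]
    for r, value in zip(shuffled, column):
        r[feature_idx] = value
    return base - accuracy(shuffled, labels)

# Illustrative data: (income, debt, postcode_digit); labels match income > debt.
rows = [(50, 20, 7), (30, 40, 1), (80, 10, 3), (25, 60, 9), (90, 15, 2)]
labels = [True, False, True, False, True]

# Shuffling a feature the model never reads cannot change predictions,
# so its importance is exactly zero.
print(permutation_importance(rows, labels, feature_idx=2))  # 0.0
```

A check like this helps document which inputs actually drive decisions — for instance, a nominally irrelevant postcode field showing non-zero importance would be a red flag worth investigating.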

Upholding Ethics and Fairness

Ethical AI means fair treatment and outcomes. Data inputs and algorithms themselves can potentially introduce or exacerbate biases against protected groups. Financial firms must proactively assess models for issues like gender or racial bias and mitigate any unfair impacts.

Ongoing monitoring, impact assessments and staff training on ethics are key. Having diverse teams build and validate AI can help include different perspectives. AI should ultimately benefit all customers and society – not just the institutions deploying it.
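One concrete form the bias assessment above can take is a demographic parity check: compare approval rates across groups and flag large gaps for investigation. The sketch below is a minimal illustration under assumed data — the decision lists are hypothetical, not real lending outcomes, and parity gaps are only one of several fairness metrics a firm might monitor.

```python
def approval_rate(decisions):
    """Fraction of decisions that are approvals (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups.
    A value near 0 suggests parity; a large value warrants investigation
    and, if confirmed, mitigation."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical decisions: group A approved 8 of 10, group B approved 5 of 10.
group_a = [True] * 8 + [False] * 2
group_b = [True] * 5 + [False] * 5

print(f"parity gap: {demographic_parity_gap(group_a, group_b):.2f}")  # parity gap: 0.30
```

Running such a check as part of ongoing monitoring, rather than only at model sign-off, is what catches bias that drifts in as input data changes over time.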

Enabling Transparency

Transparency breeds trust. The data and methodologies behind AI in capital markets should not be shrouded in secrecy. Firms should communicate openly about what data trains algorithms, how models work, their limitations and steps to address risks.

Third-party testing and audits enable outside scrutiny and help identify blind spots. Documentation helps, but interpreting complex algorithms requires expertise. Presenting AI systems transparently, in understandable terms, demonstrates a commitment to building trust.

Instituting Governance

Robust AI governance establishes accountability. Firms need policies and controls for acquiring data, building models, testing, documenting and monitoring results. Risk assessment frameworks tailored specifically to AI can uncover vulnerabilities.

Roles and responsibilities for model development, compliance, audit and operations should be clearly defined. Firms must budget appropriately to support fairness, transparency and explainability capabilities. With the right governance model, financial institutions can deploy AI responsibly and at scale.

Building Confidence in AI 

Trust is essential for the adoption of artificial intelligence in capital markets. Through explainability, ethics, transparency and governance, institutions can deploy AI responsibly.

Financial companies should follow emerging regulatory guidance and industry best practices for fairness, testing and risk management. With thoughtful implementation, AI can transform finance for the better while maintaining public confidence.
