
Trustworthy AI in Capital Markets


By Shameek Kundu, Head of Financial Services at TruEra.

The use of Artificial Intelligence (AI) is growing in capital markets. But the sector has work to do to capitalise on the investment already made, and this starts with making AI and machine learning (ML) models robust, transparent and fully accountable. Ensuring AI quality will be critical to building trust, increasing adoption, and realising the real but currently untapped potential of these technologies.

In risk and compliance, AI is helping improve effectiveness and expand the risk envelope in which firms can operate. Examples include: market abuse surveillance; anti-money laundering; liquidity, currency and counterparty risk management; and cyber-security threat monitoring. In operations, AI is being used to classify and extract relevant information from unstructured documents, such as contractual parties and terms from legal contracts. AI is also helping process internal and external data for confirmation, settlement, reconciliation and asset value calculation, and detect and mitigate data quality issues.

Firms have also started using AI to understand and engage clients, for example by creating sophisticated segmentation to understand and anticipate client needs, and by processing transaction quotes from clients using conversational AI. AI also plays a role in portfolio decision-making and trade execution. Analysis of large volumes of time-series and unstructured data helps with sentiment analysis and investment signal generation. AI is being used to evaluate trading venues, brokers and execution algorithms, and to determine the timing, price and size of particular orders.

More breadth than depth

However, while the breadth of AI use in capital markets is impressive, the depth of AI adoption is often shallow. Most firms have crossed the threshold of experimentation, but the use of AI models in production, at a scale that starts making a significant bottom-line impact, is still limited.

A key driver of this gap between the hype and investment around AI and its actual impact on the ground is the continued lack of trust in AI systems. AI-based systems can be more difficult to understand than their traditional rule-based or statistical counterparts, making it harder to justify decisions internally, or to clients and regulators. Because such algorithms learn from patterns in historical data, their reliability depends on the quality and representativeness of the input data. Finally, without human oversight, such algorithms can heighten risks to market stability and competition.

Regulators have begun recognising these concerns. The Bank of England and Financial Conduct Authority (FCA) in the UK formed a public-private forum in October 2020 with the aim of gathering industry input. US banking regulators have sought industry comment on a wide-ranging set of questions around AI risks. The International Organisation of Securities Commissions (IOSCO) published its own consultation report in June 2020. Capital markets firms are also recognising these challenges, and are enhancing existing risk frameworks to create safe guardrails for AI adoption.

A focus on AI quality

Many firms are looking to their experience with software quality to frame their response to the AI trust deficit. AI models are often built and deployed today the way software was in the 1980s: without the tools needed for systematic testing, review and monitoring. Just as software quality has received sustained attention over the past two decades, AI quality now needs a similarly systematic focus.

The most commonly understood attribute of AI quality is a model’s predictive accuracy. After all, a pricing model is no good if it cannot accurately predict historical prices in the test data set. However, even a model that performs well on training and test data may not generalise well in the real world. For that, it needs to learn robust predictive or causal relationships.
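
To make this concrete, here is a minimal sketch of such an accuracy check in Python, using scikit-learn and synthetic data purely as an illustration (the article does not prescribe any particular library or model, so every name below is an assumption):

```python
# Minimal sketch (illustrative assumptions throughout): compare in-sample
# and out-of-sample accuracy to spot a model that memorises rather than
# generalises.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # hypothetical pricing features
y = 2 * X[:, 0] + rng.normal(size=1000)   # hypothetical target prices

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# A large gap between these two scores suggests the model has learned
# patterns that will not hold up on live data.
print("train R^2:", r2_score(y_train, model.predict(X_train)))
print("test  R^2:", r2_score(y_test, model.predict(X_test)))
```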

This attribute of AI quality is sometimes referred to as conceptual soundness. Assessing it requires surfacing the important features, concepts and relationships that an AI model has learned and that drive its predictions. Human experts can then review this information to build confidence in the model or identify its weaknesses.

This is achieved through explainability, the science of reading the artificial minds of complex AI models such as tree ensembles and deep neural networks. One example is a communications surveillance system showing how it parses trading-room communications to draw conclusions about potential market abuse.
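
As one hedged illustration of surfacing important features, the sketch below uses permutation importance, a common model-agnostic technique (the article does not name a specific method), and assumes the `model`, `X_test` and `y_test` objects from the previous sketch:

```python
# Minimal sketch: rank features by how much shuffling each one degrades
# model performance. Assumes `model`, `X_test`, `y_test` from the
# previous sketch; the method choice is illustrative, not prescribed.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

A human reviewer can then check whether the highest-ranked features make economic sense, or whether the model is leaning on spurious relationships.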

Conceptual soundness and explainability are related to robustness, another attribute of AI quality. It captures the idea that small changes in inputs to a model should cause only small changes in its outputs. For example, a small change in the input data for a client risk assessment model should only change its outcomes by a small amount.
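
A simple way to probe this property is to nudge inputs by a small amount and measure how far the predictions move. The sketch below is one illustrative way to do that; the noise scale and tolerance are assumptions, not standards from the article:

```python
# Minimal sketch: a robustness probe. Perturb inputs slightly and report
# the share of predictions that move more than a tolerance. `eps` and
# `tolerance` are illustrative assumptions.
import numpy as np

def robustness_check(model, X, eps=0.01, tolerance=0.05, seed=1):
    base = model.predict(X)
    noise = np.random.default_rng(seed).normal(scale=eps, size=X.shape)
    shifted = model.predict(X + noise)
    return float(np.mean(np.abs(shifted - base) > tolerance))

# e.g. robustness_check(model, X_test) -> fraction of unstable predictions
```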

Recognising the dynamic nature of the world we inhabit, AI quality also encompasses attributes pertaining to the stability of AI models and the associated data. These offer answers to questions such as: How different are the model’s predictions now from when it was trained? Is the model still fit for purpose, or are there input regions where it is unreliable, either because the new data differs markedly from the training data or because the world has changed?
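
One common way to answer such questions is to compare the distribution of each live input feature against its training distribution. The hedged sketch below does this with a two-sample Kolmogorov-Smirnov test (the population stability index is another frequent choice; the significance level is an assumption):

```python
# Minimal sketch: flag features whose live distribution has drifted from
# the training distribution. The significance level is an assumption.
from scipy.stats import ks_2samp

def drift_report(X_train, X_live, alpha=0.01):
    drifted = []
    for col in range(X_train.shape[1]):
        stat, p_value = ks_2samp(X_train[:, col], X_live[:, col])
        if p_value < alpha:
            drifted.append((col, stat))
    return drifted  # list of (feature index, KS statistic)
```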

These AI quality attributes depend fundamentally on the data on which models are trained and to which they are applied. Data quality is thus another critical attribute of AI quality. A good AI system must be trained on data that is representative of the population to which it will be applied, and that meets necessary standards of accuracy and completeness.
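
Here is a hedged sketch of what basic completeness and validity checks might look like, using pandas; the column names and bounds are purely illustrative:

```python
# Minimal sketch: completeness and range-validity checks before training
# or scoring. Column names and bounds are illustrative assumptions.
import pandas as pd

def data_quality_report(df: pd.DataFrame, bounds: dict) -> dict:
    out_of_range = {col: float(((df[col] < lo) | (df[col] > hi)).mean())
                    for col, (lo, hi) in bounds.items()}
    return {"missing_share": df.isna().mean().to_dict(),
            "out_of_range_share": out_of_range}

# e.g. data_quality_report(trades, {"notional": (0, 1e9), "tenor_years": (0, 50)})
```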

These AI quality attributes are important throughout the lifecycle of AI systems: during development and validation, and after they go live. An effective diagnostic and monitoring capability can help capital markets firms operationalise AI quality.

