
Trustworthy AI in Capital Markets


By Shameek Kundu, Head of Financial Services at TruEra.

The use of Artificial Intelligence (AI) is growing in capital markets. But the sector has work to do to capitalise on the investment it has made, and this starts with making AI and machine learning (ML) models robust, transparent and fully accountable. Ensuring AI quality will be critical to building trust, increasing adoption, and unlocking the real, but currently untapped, potential of new technologies.

In risk and compliance, AI is helping improve effectiveness and expand the risk envelope in which firms can operate. Examples include market abuse surveillance; anti-money laundering; liquidity, currency and counterparty risk management; and cyber-security threat monitoring. In operations, AI is being used to classify and extract relevant information from unstructured documents, such as contractual parties and terms from legal contracts. AI is also helping to process internal and external data for confirmation, settlement, reconciliation and asset value calculation, and to detect and mitigate data quality issues.

Firms have also started using AI to understand and engage clients, for example by creating sophisticated segmentation to understand and anticipate client needs, and by processing transaction quotes from clients using conversational AI. AI also plays a role in portfolio decision-making and trade execution. Analysis of large volumes of time-series and unstructured data helps with sentiment analysis and investment signal generation. AI is being used to evaluate trading venues, brokers and execution algorithms, and to determine the timing, price and size of particular orders.

More breadth than depth

However, while the breadth of AI use in capital markets is impressive, the depth of AI adoption is often shallow. Most firms have crossed the threshold of experimentation, but the use of AI models in production, at a scale that starts making a significant bottom-line impact, is still limited.

A key driver of this gap between the hype and investment around AI and its actual impact on the ground is a continued lack of trust in AI systems. AI-based systems can be more difficult to understand than their traditional rule-based or statistical counterparts, making it harder to justify decisions internally, or to clients and regulators. Because such algorithms learn from patterns in historical data, their reliability depends on the quality and representativeness of the input data. Finally, without human oversight, such algorithms can heighten risks to market stability and competition.

Regulators have begun recognising these concerns. The Bank of England and Financial Conduct Authority (FCA) in the UK formed a public-private forum in October 2020, with the aim of gathering industry inputs. US banking regulators have sought industry comments on a wide-ranging set of questions around AI risks. The International Organisation of Securities Commissions (IOSCO) published its own consultation report in June 2020. Capital markets firms are also recognising these challenges, and enhancing existing risk frameworks to create safe guard-rails for AI adoption.

A focus on AI quality

Many firms are drawing on their experience with software quality to frame their response to the AI trust deficit. AI models today are often built and deployed the way software was in the 1980s: without the tools needed for systematic testing, review and monitoring. Just as software quality received systematic attention over the past two decades, AI quality now needs the same focus.

The most commonly understood attribute of AI quality is a model’s predictive accuracy. After all, a pricing model is no good if it cannot accurately predict historical prices in the test data set. However, even if a model performs well on training and test data, it may not generalise well in the real world. For that, it needs to learn robust predictive or causal relationships.
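
To make this concrete, here is a minimal sketch of the train-versus-test comparison, assuming a generic scikit-learn model on synthetic data (the model and data are illustrative, not specific to any firm's setup). A wide gap between training and held-out performance is an early sign that the model has not learned robust relationships.

```python
# Minimal sketch: compare training vs. held-out performance.
# The data and model choices are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

train_r2 = model.score(X_train, y_train)  # fit on historical data
test_r2 = model.score(X_test, y_test)     # proxy for generalisation

# A large gap suggests memorised noise rather than robust relationships.
print(f"train R^2: {train_r2:.3f}  test R^2: {test_r2:.3f}  gap: {train_r2 - test_r2:.3f}")
```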

This attribute of AI quality is sometimes referred to as conceptual soundness. Assessing conceptual soundness requires surfacing the important features, concepts and relationships a model has learned that drive its predictions. Human experts can then review this information to build confidence in the model or identify its weaknesses.

This is achieved through explainability, the science of reading the artificial minds of complex AI models such as tree ensembles and deep neural networks. One example is a communications surveillance system demonstrating how it parses trading-room communication to draw conclusions about potential market abuse.
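
As a generic illustration of surfacing important features for expert review, the sketch below uses permutation importance from scikit-learn; this stands in for the explainability tooling discussed above and is not a description of any particular vendor's method.

```python
# Illustrative sketch: surface the features driving a model's predictions
# so that a human expert can review them for conceptual soundness.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops flag the features actually driving predictions.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in imp.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{i}: importance {imp.importances_mean[i]:.3f}")
```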

Conceptual soundness and explainability are related to robustness, another attribute of AI quality. It captures the idea that small changes in inputs to a model should cause only small changes in its outputs. For example, a small change in the input data for a client risk assessment model should only change its outcomes by a small amount.
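
A toy version of such a robustness probe might look as follows, assuming a tabular classifier; the perturbation scale and model are illustrative assumptions, not a production test.

```python
# Toy robustness probe: perturb inputs slightly, check outputs move slightly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01 * X.std(axis=0), size=X.shape)  # ~1% of each feature's spread

base = model.predict_proba(X)[:, 1]
shifted = model.predict_proba(X + noise)[:, 1]

# For a robust model, a small input change yields a small score change.
print(f"max score shift under ~1% perturbation: {np.abs(shifted - base).max():.4f}")
```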

Recognising the dynamic nature of the world we inhabit, AI quality also encompasses attributes pertaining to the stability of AI models and the associated data, answering questions such as: How different are the model’s predictions now from when it was trained? Is the model still fit for purpose, or are there input regions where it is unreliable because the new data differs markedly from the training data or because the world has changed?
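
One common way to quantify this kind of drift is the population stability index, an industry convention rather than anything prescribed here; below is a minimal sketch for a single feature.

```python
# Sketch: population stability index (PSI) between training-time data and
# live data for one feature. Thresholds follow a common rule of thumb.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # fold live outliers into edge bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_feature = rng.normal(0.4, 1.2, 10_000)   # the world has changed

psi = population_stability_index(train_feature, live_feature)
print(f"PSI: {psi:.3f}")  # < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant shift
```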

These AI quality attributes depend in a fundamental way on the data on which models are trained and to which they are applied. Data quality is thus another critical attribute of AI quality. A good AI system must be trained on data that is representative of the population to which it will be applied and that meets necessary standards of accuracy and completeness.
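
A small pandas sketch of the completeness and representativeness checks this implies; the column names and the 95% dominance threshold are invented for illustration.

```python
# Illustrative pre-training data-quality checks; columns are invented.
import pandas as pd

df = pd.DataFrame({
    "notional": [1.2e6, 8.5e5, None, 2.3e6],
    "counterparty_rating": ["A", "BBB", "A", None],
    "region": ["EMEA", "EMEA", "EMEA", "EMEA"],
})

# Completeness: share of missing values per column.
print(df.isna().mean())

# Representativeness (crude proxy): a column dominated by a single value
# suggests the sample may not cover the population the model will see.
for col in df.columns:
    top_share = df[col].value_counts(normalize=True).iloc[0]
    if top_share > 0.95:
        print(f"warning: '{col}' is {top_share:.0%} one value")
```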

These AI quality attributes are important throughout the lifecycle of AI systems, during development and validation, and after they go live. An effective diagnostic and monitoring capability can help capital markets firms operationalise AI quality.

TruEra is a company dedicated to making AI trustworthy.

