About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Trustworthy AI in Capital Markets


By Shameek Kundu, head of Financial Services at TruEra.

The use of Artificial Intelligence (AI) is growing in capital markets. But the sector has work to do to capitalise on the investment already made, and that starts with making AI and machine learning (ML) models robust, transparent and fully accountable. Ensuring AI quality will be critical to building trust, increasing adoption, and realising the real but currently untapped potential of these new technologies.

In risk and compliance, AI is helping improve effectiveness and expand the risk envelope in which firms can operate. Examples include: market abuse surveillance; anti-money laundering; liquidity, currency and counterparty risk management; and cyber-security threat monitoring. In operations, AI is being used to classify and extract relevant information from unstructured documents, such as contractual parties and terms from legal contracts. AI is also helping process internal and external data for confirmation, settlement, reconciliation and asset value calculation, and detect and mitigate data quality issues.

Firms have also started using AI to understand and engage clients. For example, by creating sophisticated segmentation to understand and anticipate client needs and by processing transaction quotes from clients using conversational AI. AI plays a role in portfolio decision-making and trade execution. Analysis of large volumes of time-series and unstructured data helps with sentiment analysis and investment signal generation. AI is being used to evaluate trading venues, brokers and execution algorithms, and to determine the timing, price and size of particular orders.

More breadth than depth

However, while the breadth of AI use in capital markets is impressive, the depth of AI adoption is often shallow. Most firms have crossed the threshold of experimentation, but the use of AI models in production, at a scale that starts making a significant bottom-line impact, is still limited.

A key driver of this gap between the hype and investment around AI and its actual impact on the ground is a continued lack of trust in AI systems. AI-based systems can be harder to understand than their traditional rule-based or statistical counterparts, making it difficult to justify decisions internally, or to clients and regulators. Because such algorithms learn from patterns in historical data, their reliability depends on the quality and representativeness of the input data. Finally, without human oversight, such algorithms can heighten risks to market stability and competition.

Regulators have begun recognising these concerns. The Bank of England and Financial Conduct Authority (FCA) in the UK formed a public-private forum in October 2020, with the aim of gathering industry inputs. US banking regulators have sought industry comments on a wide-ranging set of questions around AI risks. The International Organisation of Securities Commissions (IOSCO) published its own consultation report in June 2020. Capital markets firms are also recognising these challenges, and enhancing existing risk frameworks to create safe guard-rails for AI adoption.

A focus on AI quality

Many firms are looking at the experience with software quality to frame their response to the AI trust deficit. AI models are often built and deployed today in the way that software was in the 1980s: without the tools needed for systematic testing, review and monitoring. As with software quality over the past two decades, there is a need for similar systematic focus on AI quality.

The most commonly understood attribute of AI quality is a model’s predictive accuracy. After all, a pricing model is no good if it cannot accurately predict historical prices in the test data set. However, even if a model performs well on training and test data, it may not generalise well in the real world. For that, it needs to learn robust predictive or causal relationships.

This attribute of AI quality is sometimes referred to as conceptual soundness. Assessing conceptual soundness requires surfacing important features, concepts, and relationships learned by AI models that are driving their predictions. This information can then be reviewed by human experts to build confidence in the model or identify model weaknesses.

This is achieved through explainability, the science of reading the artificial minds of complex AI models such as tree ensembles and deep neural networks. One example is a communications surveillance system showing how it parses trading room communications to draw conclusions about potential market abuse.
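To make the idea concrete, the sketch below implements one simple, model-agnostic explainability technique, permutation importance: shuffle one feature at a time and measure how much predictive accuracy degrades. The toy model and data are hypothetical illustrations, not a description of any particular surveillance system.

```python
# Permutation importance: a simple, model-agnostic way to surface which
# features actually drive a model's predictions (illustrative sketch only).
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Accuracy drop observed when each feature is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
        drops.append(baseline - np.mean(predict(Xp) == y))
    return np.array(drops)

# Toy "model" that only ever looks at feature 0:
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(lambda x: (x[:, 0] > 0).astype(int), X, y)
# imp[0] is large; imp[1] and imp[2] are near zero, exposing unused features
```

A human reviewer can compare such importances against domain expectations: a model that leans heavily on a field with no plausible causal link to the outcome would be flagged as conceptually unsound.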

Conceptual soundness and explainability are related to robustness, another attribute of AI quality. It captures the idea that small changes in inputs to a model should cause only small changes in its outputs. For example, a small change in the input data for a client risk assessment model should only change its outcomes by a small amount.
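A minimal sketch of how this property might be probed: apply small random perturbations to the inputs and measure the largest resulting change in the model's output. The linear scoring function and perturbation size below are illustrative assumptions standing in for a real model.

```python
# Probe robustness: small input perturbations should cause only small
# output changes (illustrative sketch, not a vendor implementation).
import numpy as np

def robustness_gap(predict, X, eps=0.01, seed=0):
    """Largest output change caused by a small relative input perturbation."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=eps, size=X.shape) * np.abs(X)
    return float(np.max(np.abs(predict(X + noise) - predict(X))))

# Hypothetical linear scoring function standing in for a client risk model:
X = np.array([[1.0, 2.0], [3.0, 4.0]])
gap = robustness_gap(lambda x: x @ np.array([0.5, -0.2]), X)
# a robust model keeps gap small; a large gap signals fragile behaviour
```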

Recognising the dynamic nature of the world we inhabit, AI quality also encompasses the stability of AI models and their associated data, answering questions such as: How different are the model’s predictions now from when it was trained? Is the model still fit for purpose, or are there input regions where it is unreliable, either because the new data differs markedly from the training data or because the world has changed?
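One widely used way to answer such questions for a single feature is the population stability index (PSI), which compares the live distribution of that feature against its training distribution. A minimal sketch with simulated data follows; the 0.1 and 0.25 thresholds are common rule-of-thumb bands in financial model monitoring, not universal standards.

```python
# Population stability index (PSI): a standard drift measure comparing a
# feature's live distribution with its training distribution.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training sample and a live sample of one feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # out-of-range -> edge bins
    e = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 5000)    # feature as seen at training time
stable = rng.normal(0.0, 1.0, 5000)   # live data, same distribution
shifted = rng.normal(0.8, 1.0, 5000)  # live data after the world changed
# psi(train, stable) stays below ~0.1; psi(train, shifted) exceeds ~0.25
```

A monitoring job can compute PSI per feature on each new batch and alert when it crosses the agreed threshold, prompting review or retraining.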

These AI quality attributes depend fundamentally on the data on which models are trained and to which they are applied. Data quality is thus another critical attribute of AI quality. A good AI system must be trained on data that is representative of the population to which it will be applied and that meets necessary standards of accuracy and completeness.
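Part of this can be enforced automatically in a deployment pipeline. The sketch below checks completeness only, assuming NaN encodes missing values and using a hypothetical 5% tolerance; checking representativeness would additionally require comparing feature distributions against the target population.

```python
# Illustrative data-quality gate: reject a training batch whose per-feature
# missingness exceeds a tolerance (NaN assumed to encode "missing").
import numpy as np

def completeness_gate(X, max_missing=0.05):
    """Per-feature missing-value rates plus an overall pass/fail verdict."""
    missing_rate = np.mean(np.isnan(X), axis=0)
    return {"missing_rate": missing_rate,
            "passes": bool(np.all(missing_rate <= max_missing))}

# Illustrative batch: column 1 is 25% missing and fails the 5% tolerance.
X = np.array([[1.0, np.nan], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]])
report = completeness_gate(X)
```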

These AI quality attributes are important throughout the lifecycle of AI systems, during development and validation, and after they go live. An effective diagnostic and monitoring capability can help capital markets firms operationalise AI quality.

TruEra is a company dedicated to making AI trustworthy

