A-Team Insight Blogs

Trustworthy AI in Capital Markets

By Shameek Kundu, Head of Financial Services at TruEra.

The use of Artificial Intelligence (AI) is growing in capital markets. But the sector has work to do to capitalise on the investment made, and that starts with making AI and machine learning (ML) models robust, transparent and fully accountable. Ensuring AI quality will be critical to building trust, increasing adoption, and unlocking the real, but currently untapped, potential of these new technologies.

In risk and compliance, AI is helping improve effectiveness and expand the risk envelope in which firms can operate. Examples include: market abuse surveillance; anti-money laundering; liquidity, currency and counterparty risk management; and cyber-security threat monitoring. In operations, AI is being used to classify and extract relevant information from unstructured documents, such as contractual parties and terms from legal contracts. AI is also helping process internal and external data for confirmation, settlement, reconciliation and asset value calculation, and detect and mitigate data quality issues.

Firms have also started using AI to understand and engage clients. For example, by creating sophisticated segmentation to understand and anticipate client needs and by processing transaction quotes from clients using conversational AI. AI plays a role in portfolio decision-making and trade execution. Analysis of large volumes of time-series and unstructured data helps with sentiment analysis and investment signal generation. AI is being used to evaluate trading venues, brokers and execution algorithms, and to determine the timing, price and size of particular orders.

More breadth than depth

However, while the breadth of AI use in capital markets is impressive, the depth of AI adoption is often shallow. Most firms have crossed the threshold of experimentation, but the use of AI models in production, at a scale that starts making a significant bottom-line impact, is still limited.

A key driver of this apparent gap between the hype and investment around AI and its actual impact on the ground is the continued lack of trust in AI systems. AI-based systems can be harder to understand than their traditional rule-based or statistical counterparts, making it difficult to justify decisions internally, or to clients and regulators. Because such algorithms learn from patterns in historical data, their reliability depends on the quality and representativeness of the input data. Finally, without human oversight, such algorithms can heighten risks to market stability and competition.

Regulators have begun recognising these concerns. The Bank of England and Financial Conduct Authority (FCA) in the UK formed a public-private forum in October 2020 with the aim of gathering industry input. US banking regulators have sought industry comment on a wide-ranging set of questions around AI risks. The International Organisation of Securities Commissions (IOSCO) published its own consultation report in June 2020. Capital markets firms are also recognising these challenges and enhancing existing risk frameworks to create safe guardrails for AI adoption.

A focus on AI quality

Many firms are looking at the experience with software quality to frame their response to the AI trust deficit. AI models are often built and deployed today in the way that software was in the 1980s: without the tools needed for systematic testing, review and monitoring. As with software quality over the past two decades, there is a need for similar systematic focus on AI quality.

The most commonly understood attribute of AI quality is a model’s predictive accuracy. After all, a pricing model is of little use if it cannot accurately predict historical prices in a held-out test data set. However, even if a model performs well on training and test data, it may not generalise well in the real world. For that, it needs to learn robust predictive or causal relationships.

This attribute of AI quality is sometimes referred to as conceptual soundness. Assessing conceptual soundness requires surfacing important features, concepts, and relationships learned by AI models that are driving their predictions. This information can then be reviewed by human experts to build confidence in the model or identify model weaknesses.

This is achieved through explainability, the science of reading the artificial minds of complex AI models such as tree ensembles and deep neural networks. One example is a communications surveillance system showing how it parses trading-room communications to reach conclusions about potential market abuse.
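
TruEra’s own explainability tooling is proprietary and not shown here. As a generic, model-agnostic illustration of how the features driving a model’s predictions can be surfaced, the sketch below uses permutation importance; the function name and interface are hypothetical:

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Measure the drop in a performance metric when each feature is
    shuffled: a simple, model-agnostic way to surface which features
    drive a model's predictions."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Shuffling one column breaks that feature's relationship to y
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores.append(metric(y, model.predict(Xp)))
        importances.append(baseline - np.mean(scores))
    return np.array(importances)
```

The larger the performance drop when a feature is shuffled, the more the model relies on it; features with near-zero drop can be flagged for review by human experts.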

Conceptual soundness and explainability are related to robustness, another attribute of AI quality. It captures the idea that small changes in inputs to a model should cause only small changes in its outputs. For example, a small change in the input data for a client risk assessment model should only change its outcomes by a small amount.
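
As a rough sketch of how this notion can be probed in practice, the local perturbation test below applies small random input changes and records the largest resulting change in the model’s output; the names are hypothetical and this is an illustration, not any particular vendor’s method:

```python
import numpy as np

def robustness_check(model, x, epsilon=0.01, n_trials=100, seed=0):
    """Perturb a single input slightly and record the largest change in
    the model's score: a rough probe of local robustness."""
    rng = np.random.default_rng(seed)
    base = model.predict(x[None, :])[0]
    worst = 0.0
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        out = model.predict((x + noise)[None, :])[0]
        worst = max(worst, abs(out - base))
    return worst
```

A large `worst` value for a small `epsilon` signals that the model’s outcome for that client could flip on the basis of immaterial input changes.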

Recognising the dynamic nature of the world we inhabit, AI quality also encompasses attributes pertaining to the stability of AI models and the associated data. These offer answers to questions such as: How different are the model’s predictions now from when it was trained? Is the model still fit for purpose, or are there input regions where it is unreliable, either because the new data differs markedly from the training data or because the world has changed?
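
One common way practitioners quantify such drift for a single feature is the population stability index (PSI). The sketch below is a minimal illustration; the thresholds in the comment are an industry rule of thumb, not a standard:

```python
import numpy as np

def population_stability_index(train_col, live_col, bins=10):
    """Compare a feature's live distribution against its training
    distribution. Common rule of thumb: PSI < 0.1 stable,
    0.1-0.25 worth investigating, > 0.25 significant shift."""
    # Bin edges from training-data quantiles, so each bin holds ~1/bins of training data
    edges = np.quantile(train_col, np.linspace(0, 1, bins + 1))
    # Clip live values into the training range so nothing falls outside the bins
    live_clipped = np.clip(live_col, edges[0], edges[-1])
    p = np.histogram(train_col, edges)[0] / len(train_col)
    q = np.histogram(live_clipped, edges)[0] / len(live_col)
    # Avoid division by zero in sparsely populated bins
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))
```

Computed per feature on a schedule, such a measure can flag when a model is being asked to score inputs unlike anything it was trained on.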

These AI quality attributes depend in a fundamental way on the data on which models are trained and to which they are applied. Data quality is thus another critical attribute of AI quality. A good AI system must be trained on data that is representative of the population to which it will be applied, and that meets necessary standards of accuracy and completeness.
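
As a minimal sketch of what such checks might look like for a single feature before scoring (names hypothetical; real pipelines would check far more, such as validity, consistency and timeliness):

```python
import numpy as np

def basic_data_quality(train_col, live_col):
    """Minimal pre-scoring checks on one feature: how much live data is
    missing, and how much of the observed data falls outside the value
    range seen in training."""
    live_col = np.asarray(live_col, dtype=float)
    missing = np.isnan(live_col)
    observed = live_col[~missing]
    lo, hi = np.min(train_col), np.max(train_col)
    return {
        "missing_frac": float(np.mean(missing)),
        "out_of_range_frac": float(np.mean((observed < lo) | (observed > hi))),
    }
```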

These AI quality attributes are important throughout the lifecycle of AI systems, during development and validation, and after they go live. An effective diagnostic and monitoring capability can help capital markets firms operationalise AI quality.

TruEra is a company dedicated to making AI trustworthy.
