Earlier this month, HSBC made headlines when it confirmed it will use UK start-up Quantexa’s artificial intelligence (AI) and machine learning technology to root out money laundering across its operations – the bank’s latest move in a multi-billion pound programme to fight financial crime.
This seal of approval by HSBC has focused attention on the use of AI and machine learning systems in financial services. It also came just as a House of Lords report into the UK’s use of AI declared: “Many jobs will be enhanced by AI, many will disappear and many new jobs will be created.” AI may not have reached Hollywood Terminator level yet, but it is the ultimate disruptive technology and trading firms are looking closely at how it can be used for competitive advantage.
The view from experts is: don’t wait. Chirag Patel, head of innovation and advisory solutions at financial services firm State Street Global Exchange, says: “This technology has crossed the tipping point in terms of being ready to be deployed in many aspects of financial services. It’s important for firms to at least start building proof of concepts to test applications.”
The key advantage of AI and machine learning – and where it’s best applied – is in its ability to digest and analyse huge volumes of data much more quickly than a human. Patel says: “It’s difficult to estimate what proportion of the market it’s used in, but in electronic trading in particular it’s fairly substantial.” He cites successful early deployments in high-frequency trading and order execution. Other existing applications include trade pricing, portfolio management, credit scoring models, sentiment analysis and recommendations.
Graham Biggart, risk and compliance solutions lead in the banking and financial markets group at IBM, and an AI specialist, agrees: “Trading relies on the use of trade prices and low latency systems to act on changes in pricing relationships. This makes it ideally suited to machine learning processes and some very successful trading strategies have resulted.”
Reflecting this, data management consultant Mady Korada highlighted one leading use case of AI at A-Team Group’s recent Data Management Summit. He described how a London-based investment bank is using a machine learning-based recommendations engine to detect trade anomalies across all asset classes, inside a target time of 15 minutes. The aim is to remove the need for the bank’s front, middle and back-office controls to spot any ‘fat finger’ trades that need to be fixed or cancelled.
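As an illustration only – Korada did not detail the bank’s engine, which is far richer than this – the core idea of screening for fat-finger trades can be sketched in a few lines of Python, using a simple z-score against recent trade history:

```python
from statistics import mean, stdev

def flag_fat_fingers(history, new_trades, threshold=4.0):
    """Flag new trades whose notional sits far outside recent history.

    A plain z-score stands in for the bank's (much richer) recommendations
    engine; `threshold` is the number of standard deviations tolerated.
    """
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_trades if abs(t - mu) > threshold * sigma]

# Eight ordinary trades around £1m, then two candidates to screen.
history = [1_000_000, 950_000, 1_100_000, 980_000,
           1_020_000, 990_000, 1_050_000, 1_010_000]
print(flag_fat_fingers(history, [1_005_000, 100_000_000]))  # only the £100m trade is flagged
```

Note the stats are fitted on past trades and applied to new ones; scoring a trade against a sample that includes itself would let a single huge outlier mask its own z-score.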
But alongside successes like these, there are plenty of problems too. To avoid them, AI watchers say it’s important to understand the limits of the technology. Patel says: “AI can be divided into ‘general’ and ‘narrow’ intelligence. General intelligence is the fictional Hollywood destroy-mankind type. But narrow intelligence has existed for several years, typified by robots being used to build cars, essentially machines trained to carry out repetitive tasks.”
Machine learning goes a step further: these systems learn from the data they process and improve their own operation. It divides in turn into supervised machine learning, which is monitored by humans, and unsupervised machine learning, which allows machines to interpret and process data on their own.
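The supervised/unsupervised distinction can be made concrete with a toy sketch (hypothetical trade sizes, not any firm’s system). The supervised approach learns a decision threshold from examples a human has already labelled; the unsupervised approach, here a two-means clustering, discovers the same split with no labels at all:

```python
# Supervised: learn a size threshold from trades already labelled by humans.
def fit_threshold(sizes, labels):
    largest_normal = max(s for s, l in zip(sizes, labels) if l == "normal")
    smallest_suspect = min(s for s, l in zip(sizes, labels) if l == "suspect")
    return (largest_normal + smallest_suspect) / 2

# Unsupervised: two-means clustering splits the data with no labels at all.
def two_means(values, iters=20):
    lo, hi = min(values), max(values)
    for _ in range(iters):
        near_lo = [v for v in values if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(near_lo) / len(near_lo), sum(near_hi) / len(near_hi)
    return lo, hi

sizes = [10, 12, 11, 500, 480]
labels = ["normal", "normal", "normal", "suspect", "suspect"]
print(fit_threshold(sizes, labels))  # midpoint between largest normal and smallest suspect
print(two_means(sizes))              # two cluster centres, found without labels
```

In the supervised case a human has done the labelling work up front; in the unsupervised case the machine infers the structure itself, which is precisely why the latter is harder to monitor.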
Supervised machine learning is the predominant form of AI used in financial services. As Patel says: “In the trading environment we’re using machine learning technologies to trade, for example, within extremely narrow ‘guard rails’ – perhaps high-frequency trading. The applications we have seen don’t offer the autonomy to actually choose what to buy and sell, but where to buy and sell. For example, order execution can be achieved by accessing multiple liquidity sources in a much more timely fashion. That’s something we’re already seeing.” He adds: “The next stage we’ll get to will take the guard rails off and the boundaries within which autonomy is permitted will widen.”
These suggestions are promising, but there are business and regulatory risks to using even supervised machine learning and related systems, and certainly to ‘taking the guard rails off’.
In business terms, Korada highlighted the drawbacks of the investment bank’s trade anomaly detection system. The project’s biggest problem was sourcing good quality data needed to train the detection models being set up. He said: “The quality of data was very poor and that took us a lot of time. You’d be amazed how difficult it is to get the good quality dataset that you need to train models and get something useful out of them.”
This case study pinpoints two key problems with machine learning: finding sufficient high-quality data to feed and train systems, and the scale of human effort required to run systems.
There are regulatory problems too. A November 2017 report by the Financial Stability Board, Artificial intelligence and machine learning in financial services, points out that AI and machine learning can bring benefits like faster transactions and therefore more market liquidity.
On the downside, it suggests machine learning driven algorithms may be less auditable and less predictable than others, so could create risk and market volatility. The FSB also questions whether algos trained in periods of low volatility will act correctly in a high volatility financial crisis. Patel agrees: “Being trained in certain environments that aren’t fully representative of all environments that might manifest in the future is precisely one of the major shortcomings of these systems.”
The potential nightmare scenario is uncontrolled learning algos leaving human control behind. The FSB report states: “Increased complexities of algorithmic models may strain the abilities of developers and users to fully explain, and/or, in some instances, understand how they work.”
Biggart too accepts the risks around controlling increasingly advanced AI and machine learning models. He says: “In future, management will need to understand the techniques used to build these models and learn new ways to rapidly fix a failing version if they are to exercise any semblance of control.”
This shortcoming shows it’s important for trading firms to recognise the limits of AI and machine learning technology – not only to gauge where it is best applied, but also to identify which tasks still need to be done by humans.
As Patel says: “Market making and broader electronic trading is an area where we’re seeing heavy application of algorithmic approaches that are to some degree autonomous. That’s going to be considerably transformed, I would argue, because the multitude of liquidity sources that can be accessed in real time makes for a meaningful shift in trading, as opposed to relying on a voice broker to talk to three or four counterparties.
“Higher value-add tasks – such as investment strategy and portfolio construction over longer horizons – are areas where we are not nearly ready to deploy these systems. You could not replace a strategist with these tools because, to the regulator’s point, the tools don’t know anything about what happens in a rising inflation marketplace where equities are considered overvalued and yet bonds are about to burst because of rising rates. This kind of context is very much the place where human intervention and practitioner insight is essential.”
AI and machine learning problems also illustrate how trading firms could optimise the technology by sourcing and developing better data to drive systems. Patel says: “I believe the biggest acceleration with this technology will result from larger quantities of data being available, and they are becoming available. We need a long enough history to be able to train AI engines.”
That said, some machine learning applications can operate with less source data. Patel says: “For certain use cases, six months of history is all you need. For example, in asset servicing, if you want to use automation to eliminate inefficiencies of manual reconciliations and error handling, being able to do that doesn’t require substantial amounts of history.
“In electronic trading there are rich datasets even from six months of history. If that data is captured at a millisecond level or as tick-by-tick information, it can lend itself to machine learning where you start to exploit arbitrage opportunities across, say, different liquidity venues. For market makers that becomes fairly attractive.”
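A cross-venue check of the kind Patel describes can be sketched as follows (illustrative quotes, not real market data): scan time-aligned bid/ask snapshots from two venues and flag any instant where one venue’s bid exceeds the other’s ask, meaning you could buy on one and sell on the other at a locked-in profit.

```python
def find_arbs(quotes_a, quotes_b):
    """quotes_*: lists of (bid, ask) snapshots aligned by timestamp.

    Returns the indices where buying on one venue and selling on the
    other would lock in a profit (one venue's bid above the other's ask).
    """
    arbs = []
    for i, ((bid_a, ask_a), (bid_b, ask_b)) in enumerate(zip(quotes_a, quotes_b)):
        if bid_a > ask_b or bid_b > ask_a:
            arbs.append(i)
    return arbs

venue_a = [(100.00, 100.02), (100.01, 100.03), (100.05, 100.07)]
venue_b = [(100.00, 100.01), (100.02, 100.04), (99.98, 100.00)]
print(find_arbs(venue_a, venue_b))  # only the third snapshot crosses
```

In practice the hard parts are exactly what this sketch assumes away – aligning tick streams at millisecond granularity and executing before the discrepancy closes – which is why the data capture Patel mentions matters so much.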
Biggart suggests there are solutions to the ‘learning’ problems with machine learning systems and their apparent lack of auditability. He says: “AI does suffer from the weakness that as it learns from history then it is only as good as the history is. However, humans are very similar in as much as Generals always try to fight the last war and not the next one. There are various technical responses, one of which is to fit your data to longer periods. This is very similar to the challenge in credit of measuring point in time probabilities of default (PDs) versus through the cycle PDs.”
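The point-in-time versus through-the-cycle distinction Biggart draws can be shown with a toy calculation (hypothetical default rates): a PIT probability of default reflects only current conditions, while a TTC figure averages over a full credit cycle, including the bad years.

```python
# Hypothetical yearly default rates across one full credit cycle.
default_rates = {2008: 0.060, 2009: 0.055, 2010: 0.030, 2011: 0.020,
                 2012: 0.015, 2013: 0.012, 2014: 0.010}

# Point-in-time: only the current (benign) year's conditions.
pit_pd = default_rates[2014]

# Through-the-cycle: average over the whole cycle, stressed years included.
ttc_pd = sum(default_rates.values()) / len(default_rates)

print(f"point-in-time PD:     {pit_pd:.1%}")
print(f"through-the-cycle PD: {ttc_pd:.1%}")
```

Fitting a model only to recent calm years is the credit analogue of training a trading algo only on low-volatility data: the longer window costs accuracy today but buys robustness when conditions turn.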
He also says machine learning systems are auditable, if you use the right tools, but notes that management may not be readily able to interpret audit answers or ask the complex questions that might give weight to changing models.
Securing benefit and avoiding risk often boils down to using AI and machine learning systems that have been primed with the right quantity and quality of data. According to the AI experts, there is a good choice of sufficiently advanced, off-the-shelf and specialist technology tools available for trading firms to get active in this area, although they may have to invest in human resource to put them in place.
The bottom line is that there are AI and machine learning use cases, potential applications, and enough benefits to push the start button. Biggart says: “At the moment, the challenge for all banks is getting superior returns to their competitors in the next recession. It has been 10 years since the last recession; the next one will come soon. AI can help.” Korada concludes: “Trust me, anything that can be automated will be automated. It’s just a matter of time.”