The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Increasing Trade Performance: A Quest for Lower Latency, or Simply Improved Monitoring?

Performance. We all grow up assessed for it. But in today’s trading environment, how can organisations balance increased regulatory requirements against the need to maximise the efficiency of their core trading functions? Fewer operational resources mean firms are challenged to create more sophisticated business logic that differentiates them from the competition in the never-ending quest for lower latency. As a result, many firms are trying to understand the relative performance of their electronic trade lifecycle against their business requirements. This has created a huge variety of potential avenues for exploration, largely split into two camps: trading strategy performance and systems performance.

Developing a bespoke system to implement each individual trader’s specific strategy, knowledge and skills is both expensive and inefficient because there are too many variables. As a result, the value a trader brings to the desk often resides in their head, not in a system. This means organisations are potentially missing out on opportunities. Firstly, within a particular desk there will be a variety of skills. If trades cannot be allocated to traders based on the structure of the trade and each trader’s individual skills, an organisation could be losing performance simply through the random allocation of trades to be executed. Secondly, a trader’s performance is likely to vary as market conditions change. Organisations should use standardised performance tools to capture the trader’s knowledge in an algo monitoring tool that tracks market conditions. They would then be able to withdraw from inefficient markets far more effectively than they could manually, a task that grows more complicated with the number of venues involved.
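The skill-based allocation idea above can be sketched in a few lines. This is purely illustrative: the `Trade` and `Trader` types, the skill-score keys and the scores themselves are all invented for the example, standing in for whatever historical performance statistics (e.g. slippage per trade type) a desk actually has.

```python
from dataclasses import dataclass, field

@dataclass
class Trade:
    instrument_type: str   # hypothetical category, e.g. "equity_block", "fx_spot"
    size: float

@dataclass
class Trader:
    name: str
    # Skill score per trade type (0.0-1.0), e.g. derived from historical slippage.
    skills: dict = field(default_factory=dict)

def allocate(trade: Trade, traders: list) -> Trader:
    """Route the trade to the trader with the highest skill score for this
    trade type, rather than allocating randomly across the desk."""
    return max(traders, key=lambda t: t.skills.get(trade.instrument_type, 0.0))

desk = [
    Trader("A", {"equity_block": 0.9, "fx_spot": 0.4}),
    Trader("B", {"equity_block": 0.3, "fx_spot": 0.8}),
]
print(allocate(Trade("fx_spot", 5e6), desk).name)  # prints "B"
```

A real implementation would of course weight more dimensions than trade type alone (size, urgency, current workload), but the principle is the same: make the allocation a function of measured skill rather than chance.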

We are beginning to see a few organisations develop algo switches: the ability to switch a range of algorithmic trading models in and out of the market based on predictive analysis of whether current and near-future market conditions suit each algo. To achieve this, organisations need to build confidence in these switches by back-testing the algo monitoring model against historical data, just as they do in algo model development. However, where an algo trading model could be back-tested using end-of-day, intra-day or conflated tick data, these lower frequencies are inadequate for back-testing the monitoring models, which require higher-frequency data. Typically that data is unavailable in sufficient quantities or is not integrated in a single environment.
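A minimal sketch of such a switch follows, under stated assumptions: the volatility proxy (standard deviation of simple returns over a short window) and the threshold are invented for illustration. In practice the operating envelope would be calibrated by back-testing the monitoring model against high-frequency historical data, as discussed above.

```python
import statistics

def realised_vol(prices):
    """Standard deviation of simple returns over the window --
    a crude stand-in for a real market-condition metric."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

class AlgoSwitch:
    def __init__(self, max_vol):
        self.max_vol = max_vol   # calibrated off-line by back-testing
        self.active = True

    def update(self, recent_prices):
        """Pull the algo out of the market when conditions exceed its
        back-tested operating envelope; re-enable when they normalise."""
        self.active = realised_vol(recent_prices) <= self.max_vol
        return self.active

switch = AlgoSwitch(max_vol=0.01)
switch.update([100, 100.1, 99.9, 100.05])   # calm market: algo stays in
switch.update([100, 103, 97, 105])          # volatile market: algo is pulled
```

The design choice worth noting is that the switch is a separate model from the algo itself, so it can be back-tested, monitored and tuned independently.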

The quest for faster execution has continued uninterrupted throughout the development of electronic trading strategies. The emergence of high-frequency trading (HFT) techniques makes it more important than ever to monitor performance in near real-time – not just of the strategy itself but also of market conditions and the underlying systems. Being able to identify within seconds an HFT technique that is failing to execute effectively could save minutes, and therefore thousands of potentially losing trades. Equally, it is crucial to spot an increase in market volatility and volume when you lack the underlying system resources to remain active in such a market.
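The "detect failure in seconds, not minutes" point can be illustrated with a rolling window of order outcomes. This is a hedged sketch: the class name, the five-second window and the 50% fill-rate threshold are all assumptions made for the example, not a reference to any real monitoring product.

```python
from collections import deque
import time

class FillRateMonitor:
    """Flags a strategy whose fill rate collapses within a sliding
    window of a few seconds, rather than at the next periodic report."""

    def __init__(self, window_seconds=5.0, min_fill_rate=0.5):
        self.window = window_seconds
        self.min_fill_rate = min_fill_rate
        self.events = deque()   # (timestamp, filled: bool)

    def record(self, filled, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, filled))
        # Drop outcomes that have aged out of the window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def healthy(self):
        """False as soon as the recent fill rate drops below threshold."""
        if not self.events:
            return True
        fills = sum(1 for _, f in self.events if f)
        return fills / len(self.events) >= self.min_fill_rate
```

The same sliding-window pattern applies to the other side of the argument: feed it system metrics (queue depth, CPU headroom) instead of fills, and it flags when you lack the resources to stay in a fast-moving market.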

Underpinning all this is the performance of the executing systems across the trade lifecycle, which is perhaps both the best understood and at the same time the most untapped source of benefits. Trade latency analysis has found that some organisations are actually too fast in executing trades: they are missing market opportunities and are no longer aligned with their business strategy. There will always be clients who want to be the fastest on the block, but the trend is currently moving towards efficient trade dynamics. This means understanding the trade lifecycle of internal systems and, where possible, of systems external to the organisation. Many organisations that have implemented Transaction Cost Analytics (TCA) to gain price efficiency across venues in real-time now look to enhance these analytics with the opportunity cost of missed, early or late trades.
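One simple way to express the opportunity cost of an early or late trade is to compare the price actually achieved against the price available at the intended decision time. The function below is an illustrative sketch only; the field names and sign convention are assumptions, and real TCA implementations use considerably richer benchmarks (arrival price, VWAP, implementation shortfall).

```python
def opportunity_cost(side, decision_price, execution_price, quantity):
    """Cost (in currency) of executing away from the decision-time price.
    Positive means the timing slip cost money; negative means it helped."""
    sign = 1 if side == "buy" else -1
    return sign * (execution_price - decision_price) * quantity

# A buy that slipped 5 cents on 10,000 shares cost roughly $500
# in opportunity terms; a sell filled 5 cents lower costs the same.
print(round(opportunity_cost("buy", 50.00, 50.05, 10_000), 2))   # 500.0
print(round(opportunity_cost("sell", 50.00, 49.95, 10_000), 2))  # 500.0
```

Summed per venue and per strategy, figures like this are what let firms weigh the cost of being too early or too late against the cost of being merely slow.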

As a result, a new content set is emerging in the process of executing a trade. We are seeing the inclusion of data collected and analysed from internal systems – optimal latency, TCA metrics and systems capacity, to name a few. And that is in addition to the consumption of reference data, market data, risk margins and limits, client policies, collateral requirements, counterparty measures and other traditional factors usually included in trade decisions and execution.

The net result of this is obvious. Organisations can achieve improved trade performance by understanding more about all the components involved in the trade lifecycle, from a trader’s knowledge to the available capacity in a server running the execution engine. This, in turn, is likely to result in increased customer loyalty and even flow, as well as a more targeted and efficient approach to improving those areas of the trade lifecycle that most need attention – not just a generic, continued drive for lower latency. The trick is ensuring your organisation captures as much relevant and granular data as possible whilst not relying on bespoke software development for the analysis and implementation of new capabilities. The result: efficiency improves and cost declines.
