

Eliminating Latency with Analytics

Latency is a continual challenge in trading systems. In High Frequency Trading (HFT) systems, the challenge is immediate and obvious: if your order isn’t in first, you won’t hit the liquidity you’re after. In other systems, it might not be so obvious, but it’s still an issue. For example, if you’re streaming FX quotes out to OTC venues, any latency you introduce increases the chances of a subsequent order being rejected due to changed market conditions; do this on a regular basis and your counterparties will quickly become wary of dealing with you, resulting in lost order flow.

Given this challenge, it’s not surprising that pretty much everyone in the market is working hard to reduce latency in key systems. This usually starts with an effort to measure and benchmark existing latency. After all, as the old adage goes, if you can’t measure it, you can’t manage it. And here lies the first challenge – in most trading systems, latency isn’t something that happens in one place; rather, it’s the time difference between information arriving from a source, and a resulting order/quote/trade being sent to a destination. Depending on the complexity of the system, there may be dozens of infrastructure and application components between these two points. If you only measure the end-to-end latency, then you’ll know whether it’s good or bad, but where do you take action to improve it? Likewise, if you only focus on one point (your market data source, or exchange connections, for example), how do you know its latency contribution is a significant component of the end-to-end trip?

So, if the objective of the process is to make improvements in your latency, then you need to be able to quantify the contribution of each of the components in the path. There are numerous ways to do this. You can leverage existing log files to track flow across multiple components (assuming you can address time synchronisation challenges); you can capture packet traces and use these in a similar way; you can purchase one of the many dedicated latency monitoring solutions on the market; or you can use some combination of all of these. The best approach depends on a whole range of variables and could easily fill another article. Let’s assume you’ve chosen and implemented your approach, and are now measuring both your end-to-end latency and how much each component contributes.
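As a simple illustration of the per-component measurement idea, here is a minimal sketch in Python. It assumes a hypothetical setup in which each component along the path records a UTC timestamp as a message passes through it; the component names, message IDs and timestamps are invented for illustration, and clock synchronisation across hosts is assumed to have been dealt with already.

```python
# Minimal sketch: derive per-hop latency contributions from timestamped
# observations of the same message at successive components. All names,
# IDs and timestamps here are invented for illustration; clock sync
# across hosts is assumed to be handled elsewhere.
from collections import defaultdict
from datetime import datetime, timedelta

# component name -> {message_id: timestamp at which the message was seen}
events = defaultdict(dict)

def record(component, message_id, ts_iso):
    """Store the time a message was observed at a component."""
    events[component][message_id] = datetime.fromisoformat(ts_iso)

# The ordered path a message takes through the system (assumed).
PATH = ["feed_handler", "pricing_engine", "order_gateway"]

def hop_latencies(message_id):
    """Return per-hop latency in microseconds for one message along PATH."""
    hops = {}
    for upstream, downstream in zip(PATH, PATH[1:]):
        t_in = events[upstream].get(message_id)
        t_out = events[downstream].get(message_id)
        if t_in is not None and t_out is not None:
            hops[f"{upstream} -> {downstream}"] = (t_out - t_in) / timedelta(microseconds=1)
    return hops

# Three observations of the same market data update at successive components.
record("feed_handler",   "msg-1", "2021-03-01T11:00:00.000100")
record("pricing_engine", "msg-1", "2021-03-01T11:00:00.000350")
record("order_gateway",  "msg-1", "2021-03-01T11:00:00.001200")

print(hop_latencies("msg-1"))
# {'feed_handler -> pricing_engine': 250.0, 'pricing_engine -> order_gateway': 850.0}
```

Summing the per-hop figures recovers the end-to-end number, so any hop that dominates the total is an obvious candidate for closer attention.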

Now the fun really begins …

Because measuring latency only tells you the what and the when. What is my latency now? What was it at 11am on Friday? It doesn’t tell you the why or the what if. Why is it higher than usual? What will it be when market volumes double, or when you add new customers? In order to truly manage your latency, you need to do more than just measure it – you need to model and analyse its relationship with other things that are happening in your environment: market data volumes, order volumes, network throughput, infrastructure utilisation and any other factor that could affect it.
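As a first pass at the “why”, a quick correlation screen of the latency series against candidate drivers can show which of them actually moves with it. The sketch below uses short, invented per-minute series purely for illustration:

```python
# Minimal sketch: which candidate driver moves most closely with latency?
# The per-minute series below are invented for illustration, not real data.
import numpy as np

latency_us       = np.array([310, 325, 340, 510, 620, 480, 330, 315])          # end-to-end latency
market_data_rate = np.array([12e3, 13e3, 14e3, 25e3, 31e3, 22e3, 13e3, 12e3])  # updates/sec
order_rate       = np.array([45, 50, 48, 60, 55, 52, 47, 44])                  # orders/sec
cpu_utilisation  = np.array([35, 36, 38, 71, 83, 65, 37, 34])                  # % on pricing host

drivers = {
    "market_data_rate": market_data_rate,
    "order_rate": order_rate,
    "cpu_utilisation": cpu_utilisation,
}

for name, series in drivers.items():
    r = np.corrcoef(latency_us, series)[0, 1]
    print(f"{name:>17}: r = {r:+.2f}")
```

Correlation alone doesn’t prove causation, but it narrows down which relationships are worth modelling properly.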

This is where IT Analytics comes to the fore. By bringing together the latency data, plus business volume data, plus infrastructure metrics into a single, large, normalised data set, you gain the ability to understand the relationships between them. You can quantify the impact of business volumes on latency. You can find the components whose latency is most sensitive to volume and quantify the improvement achievable by re-engineering them. You can identify the infrastructure components where capacity limitations are causing latency spikes. And because these Big Data models are re-usable, you can do all this repeatably and consistently – so you can quickly see evidence to validate any improvements you make.
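A minimal sketch of that idea follows, assuming three hypothetical per-source extracts (latency.csv, volumes.csv, infra.csv) with the columns shown in the comments; the one-minute bucketing, the column names and the deliberately simple linear model are all assumptions for illustration, not a prescription:

```python
# Minimal sketch: normalise latency, business volume and infrastructure
# metrics onto common time buckets, join them into one data set, and fit a
# simple linear model to quantify each driver's contribution. File names,
# columns and the model choice are assumptions for illustration only.
import numpy as np
import pandas as pd

# Hypothetical extracts, each with a UTC timestamp column "ts".
latency = pd.read_csv("latency.csv", parse_dates=["ts"])   # ts, hop, latency_us
volumes = pd.read_csv("volumes.csv", parse_dates=["ts"])   # ts, md_updates, orders
infra   = pd.read_csv("infra.csv",   parse_dates=["ts"])   # ts, host_cpu_pct, nic_mbps

def per_minute(df, agg):
    """Normalise a raw extract onto one-minute buckets."""
    return df.set_index("ts").resample("1min").agg(agg)

panel = (
    per_minute(latency[latency["hop"] == "pricing_engine"], {"latency_us": "mean"})
    .join(per_minute(volumes, {"md_updates": "sum", "orders": "sum"}))
    .join(per_minute(infra, {"host_cpu_pct": "mean", "nic_mbps": "mean"}))
    .dropna()
)

# Ordinary least squares: latency ~ intercept + drivers (deliberately simple).
X = np.column_stack([
    np.ones(len(panel)),
    panel[["md_updates", "orders", "host_cpu_pct"]].to_numpy(),
])
y = panel["latency_us"].to_numpy()
(intercept, b_md, b_orders, b_cpu), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"Each extra 1,000 market data updates/min adds ~{1000 * b_md:.1f} us of latency")

# "What if": projected mean latency at double the market data volume,
# holding the other drivers at their observed averages.
mean = panel.mean()
projected = (intercept
             + b_md * 2 * mean["md_updates"]
             + b_orders * mean["orders"]
             + b_cpu * mean["host_cpu_pct"])
print(f"Projected mean latency at 2x market data volume: {projected:.0f} us")
```

The point is not this particular model but that, once the data sits in one normalised frame, questions like “what happens at double the volume?” become a few lines of analysis rather than guesswork.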

This isn’t a pipe-dream. Yes, there are issues with data quality; yes, the data sets are large and the normalisations can be non-trivial; yes, finding the relationships requires specialised statistical techniques. But advances in Big Data in the past few years make all of this achievable – as an example, Sumerian helped one customer reduce the end-to-end latency in a key FX flow by 75% by using exactly this approach – and did it in less than eight weeks.

There are big wins to be had from applying IT Analytics to data you already have. Organisations that take advantage of it stand to gain that all-important competitive edge, achieving systematic, ongoing reductions in latency.
