
Eliminating Latency with Analytics


Latency is a continual challenge in trading systems. In High Frequency Trading (HFT) systems, the challenge is immediate and obvious: if your order isn’t in first, you won’t hit the liquidity you’re after. In other systems, it might not be so obvious, but it’s still an issue. For example, if you’re streaming FX quotes out to OTC venues, any latency you introduce increases the chances of a subsequent order being rejected due to changed market conditions; do this on a regular basis and your counterparties will quickly become wary of dealing with you, resulting in lost order flow.

Given this challenge, it’s not surprising that pretty much everyone in the market is working hard to reduce latency in key systems. This usually starts with an effort to measure and benchmark existing latency. After all, as the old adage goes, if you can’t measure it, you can’t manage it. And here lies the first challenge – in most trading systems, latency isn’t something that happens in one place; rather, it’s the time difference between information arriving from a source, and a resulting order/quote/trade being sent to a destination. Depending on the complexity of the system, there may be dozens of infrastructure and application components between these two points. If you only measure the end-to-end latency, then you’ll know whether it’s good or bad, but where do you take action to improve it? Likewise, if you only focus on one point (your market data source, or exchange connections, for example), how do you know its latency contribution is a significant component of the end-to-end trip?

So, if the objective of the process is to make improvements in your latency, then you need to be able to quantify the contribution of each of the components in the path. There are numerous ways to do this. You can leverage existing log files to track flow across multiple components (assuming you can address time synchronisation challenges); you can capture packet traces and use these in a similar way; you can purchase one of the many dedicated latency monitoring solutions on the market; or you can use some combination of all of these. The best approach depends on a number of variables and could easily fill another article in its own right. Let’s assume you’ve chosen and implemented your approach, and you’re now measuring your latency and how much each component contributes.
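
As a simple illustration of the log-based approach, here is a minimal sketch, assuming each component writes a timestamped log entry keyed by a shared order ID. The component names, file layout and clock-offset values are hypothetical, and in practice time synchronisation and log quality need careful handling before the per-hop numbers can be trusted.

```python
# Sketch: estimating per-component latency contributions from application logs.
# Assumes each component logs "timestamp_ns,order_id" as a message passes through it.
# Component names, file paths and clock offsets below are illustrative assumptions.
import csv
from collections import defaultdict

COMPONENTS = ["feed_handler", "strategy", "order_gateway"]          # hypothetical hop order
CLOCK_OFFSET_NS = {"feed_handler": 0, "strategy": 0, "order_gateway": 0}  # per-host corrections

def load_timestamps(component):
    """Read {order_id: timestamp_ns} for one component, applying its clock offset."""
    stamps = {}
    with open(f"{component}.csv", newline="") as f:
        for row in csv.DictReader(f):
            stamps[row["order_id"]] = int(row["timestamp_ns"]) - CLOCK_OFFSET_NS[component]
    return stamps

def hop_latencies():
    """Return per-hop latency samples (microseconds) keyed by 'upstream->downstream'."""
    stamps = {c: load_timestamps(c) for c in COMPONENTS}
    samples = defaultdict(list)
    # Only orders seen by every component can be attributed hop by hop.
    for order_id in set.intersection(*(set(s) for s in stamps.values())):
        for upstream, downstream in zip(COMPONENTS, COMPONENTS[1:]):
            delta_us = (stamps[downstream][order_id] - stamps[upstream][order_id]) / 1_000
            samples[f"{upstream}->{downstream}"].append(delta_us)
    return samples

if __name__ == "__main__":
    for hop, values in hop_latencies().items():
        values.sort()
        p50 = values[len(values) // 2]
        p99 = values[int(len(values) * 0.99)]
        print(f"{hop}: median {p50:.1f} us, p99 {p99:.1f} us over {len(values)} orders")
```

The same per-hop samples become the latency side of the analytics data set described below; reporting percentiles rather than averages keeps the occasional spike visible.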

Now the fun really begins …

Because measuring latency only tells you the what and the when. What is my latency now? What was it at 11am on Friday? It doesn’t tell you the why or the what if. Why is it higher than usual? What will it be when market volumes double, or when you add new customers? In order to truly manage your latency, you need to do more than just measure it – you need to model and analyse its relationship with everything else that is happening in your environment: market data volumes, order volumes, network throughput, infrastructure utilisation and any other factor that could affect it.

This is where IT Analytics comes to the fore. By bringing together the latency data, plus business volume data, plus infrastructure metrics into a single, large, normalised data set, you gain the ability to understand the relationships between them. You can quantify the impact of business volumes on latency. You can find the components whose latency is most sensitive to volume and quantify the improvement achievable by re-engineering them. You can identify the infrastructure components where capacity limitations are causing latency spikes. And because these Big Data models are re-usable, you can do all this repeatably and consistently – so you can quickly see evidence to validate any improvements you make.
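
To make that analytics step concrete, here is a minimal sketch, assuming you have already built a normalised table with per-interval latency percentiles for each component alongside market data and order volumes. The column names and the simple least-squares fit are illustrative stand-ins for the specialised statistical techniques the article refers to.

```python
# Sketch: ranking components by how sensitive their latency is to market data volume.
# Assumes a normalised CSV with one row per (interval, component):
#   interval_start, component, p99_latency_us, md_updates_per_sec, orders_per_sec
# Column names and the ordinary-least-squares fit are illustrative choices.
import pandas as pd

df = pd.read_csv("latency_and_volumes.csv", parse_dates=["interval_start"])

results = []
for component, group in df.groupby("component"):
    # Correlation: how tightly this component's latency tracks market data volume.
    corr = group["p99_latency_us"].corr(group["md_updates_per_sec"])
    # Slope of a simple linear fit: extra microseconds of p99 latency
    # per additional market-data update per second.
    x = group["md_updates_per_sec"]
    y = group["p99_latency_us"]
    slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    results.append({"component": component, "corr": corr, "us_per_update_per_sec": slope})

sensitivity = pd.DataFrame(results).sort_values("us_per_update_per_sec", ascending=False)
print(sensitivity.to_string(index=False))
```

Components at the top of such a ranking are the ones where re-engineering or added capacity is likely to buy the most end-to-end improvement as volumes grow, and re-running the same model after a change gives you the evidence to validate it.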

This isn’t a pipe-dream. Yes, there are issues with data quality; yes, the data sets are large and the normalisations can be non-trivial; yes, finding the relationships requires specialised statistical techniques. But advances in Big Data in the past few years make all of this achievable – as an example, Sumerian helped one customer reduce the end-to-end latency in a key FX flow by 75% by using exactly this approach – and did it in less than eight weeks.

There are big wins to be had from applying IT Analytics to data you already have. Organisations that take advantage of it stand to gain that all-important competitive edge, achieving systematic, ongoing reductions in latency.
