
Eliminating Latency with Analytics


Latency is a continual challenge in trading systems. In High Frequency Trading (HFT) systems, the challenge is immediate and obvious: if your order isn’t in first, you won’t hit the liquidity you’re after. In other systems, it might not be so obvious, but it’s still an issue. For example, if you’re streaming FX quotes out to OTC venues, any latency you introduce increases the chances of a subsequent order being rejected due to changed market conditions; do this on a regular basis and your counterparties will quickly become wary of dealing with you, resulting in lost order flow.

Given this challenge, it’s not surprising that pretty much everyone in the market is working hard to reduce latency in key systems. This usually starts with an effort to measure and benchmark existing latency. After all, as the old adage goes, if you can’t measure it, you can’t manage it. And here lies the first challenge – in most trading systems, latency isn’t something that happens in one place; rather, it’s the time difference between information arriving from a source, and a resulting order/quote/trade being sent to a destination. Depending on the complexity of the system, there may be dozens of infrastructure and application components between these two points. If you only measure the end-to-end latency, then you’ll know whether it’s good or bad, but where do you take action to improve it? Likewise, if you only focus on one point (your market data source, or exchange connections, for example), how do you know its latency contribution is a significant component of the end-to-end trip?
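To make this concrete, here's a minimal sketch of what component-level decomposition looks like once you have a timestamp at each hop in the path. The component names and figures below are purely illustrative:

```python
# A minimal sketch of breaking end-to-end latency into per-hop
# contributions. Component names and timestamps are hypothetical; in a
# real system they would come from capture points at each boundary.

# Timestamps (microseconds) recorded as one message traverses the path.
hops = [
    ("feed_handler_in",    0),
    ("feed_handler_out",  45),
    ("strategy_in",       52),
    ("strategy_out",     310),
    ("gateway_in",       318),
    ("gateway_out",      390),  # order leaves for the venue
]

end_to_end = hops[-1][1] - hops[0][1]
print(f"End-to-end latency: {end_to_end} us")

# Each hop's contribution is the gap between consecutive capture points.
for (prev_name, prev_ts), (name, ts) in zip(hops, hops[1:]):
    delta = ts - prev_ts
    print(f"{prev_name} -> {name}: {delta} us ({delta / end_to_end:.0%} of total)")
```

Laid out like this, it's immediately obvious where to look first: in this made-up example, the strategy component accounts for the bulk of the trip, not the network hops around it.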

So, if the objective of the process is to make improvements in your latency, then you need to be able to quantify the contribution of each of the components in the path. There are numerous ways to do this. You can leverage existing log files to track flow across multiple components (assuming you can address time synchronisation challenges); you can capture packet traces and use these in a similar way; you can purchase one of the many dedicated latency monitoring solutions on the market; or you can use some combination of all of these. The best approach depends on a number of variables, and could easily fill another article in itself. Let's assume you've chosen and implemented your approach, and you're now measuring your latency and how much each component contributes.
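For instance, if you take the log file route, the core of the job is stitching per-component timestamps together by a shared message identifier, after correcting for clock offsets between hosts. The log format, component names and offsets below are illustrative assumptions, not any particular system's output:

```python
# A sketch of deriving latency from existing log files, assuming each
# component logs a shared message ID plus a timestamp, and that clock
# offsets between hosts are known from your time-sync monitoring.
from datetime import datetime

logs = {
    "feed_handler":  ["1001 09:30:00.000120", "1002 09:30:00.000480"],
    "order_gateway": ["1001 09:30:00.000910", "1002 09:30:00.001150"],
}
# Offsets (microseconds) relative to a reference clock; values invented.
clock_offset_us = {"feed_handler": 0, "order_gateway": -35}

def parse(line, component):
    msg_id, ts = line.split()
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    micros = (t.hour * 3600 + t.minute * 60 + t.second) * 1_000_000 + t.microsecond
    return msg_id, micros + clock_offset_us[component]

# Group timestamps by message ID across components.
seen = {}
for component, lines in logs.items():
    for line in lines:
        msg_id, micros = parse(line, component)
        seen.setdefault(msg_id, {})[component] = micros

for msg_id, stamps in sorted(seen.items()):
    latency = stamps["order_gateway"] - stamps["feed_handler"]
    print(f"msg {msg_id}: feed_handler -> order_gateway = {latency} us")
```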

Now the fun really begins …

Because measuring latency only tells you the what and the when. What is my latency now? What was it at 11am on Friday? It doesn't tell you the why or the what if. Why is it higher than usual? What will it be when market volumes double, or when you add new customers? In order to truly manage your latency, you need to do more than just measure it – you need to model and analyse its relationship with other things that are happening in your environment: market data volumes, order volumes, network throughput, infrastructure utilisation and any other component that could affect it.
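As a simple illustration of the "what if" question, you can fit measured latency against market data volume and extrapolate. A straight-line fit is the crudest possible model (real latency curves typically bend upwards as queues build), and the numbers below are invented, but it shows the shape of the analysis:

```python
# A toy "what if": fit latency against market data volume, then ask what
# the fit predicts at double today's volume. Figures are invented; real
# inputs would come from the measurement layer described above.
import numpy as np

volumes = np.array([10_000, 20_000, 40_000, 80_000, 120_000])  # msgs/sec
latency = np.array([180, 210, 290, 480, 700])                  # p99, us

slope, intercept = np.polyfit(volumes, latency, 1)
doubled = 2 * int(volumes[-1])
predicted = slope * doubled + intercept
print(f"At {int(volumes[-1])} msgs/sec, measured p99 is {int(latency[-1])} us")
print(f"A linear fit predicts ~{predicted:.0f} us at {doubled} msgs/sec")
```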

This is where IT Analytics comes to the fore. By bringing together the latency data, plus business volume data, plus infrastructure metrics into a single, large, normalised data set, you gain the ability to understand the relationships between them. You can quantify the impact of business volumes on latency. You can find the components whose latency is most sensitive to volume and quantify the improvement achievable by re-engineering them. You can identify the infrastructure components where capacity limitations are causing latency spikes. And because these Big Data models are re-usable, you can do all this repeatably and consistently – so you can quickly see evidence to validate any improvements you make.
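A sketch of what that looks like in practice: one normalised table of per-interval component latencies alongside a business-volume series, with each component ranked by how steeply its latency rises with volume. Everything below is hypothetical data:

```python
# Rank components by latency sensitivity to business volume, using a
# normalised per-interval data set. All names and figures are invented.
import numpy as np

volume = np.array([10, 20, 40, 80])  # normalised market data volume per interval
component_latency_us = {
    "feed_handler":    np.array([40, 42, 45, 50]),
    "strategy_engine": np.array([250, 310, 450, 820]),
    "order_gateway":   np.array([70, 72, 75, 79]),
}

# Sensitivity = slope of latency vs. volume; the steepest slope points to
# the component where re-engineering buys the most as volumes grow.
sensitivity = {
    name: np.polyfit(volume, lat, 1)[0]
    for name, lat in component_latency_us.items()
}
for name, slope in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{slope:.2f} us per unit of volume")
```

Because the same model can be re-run on fresh data, the ranking doubles as a before-and-after check on any re-engineering work.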

This isn’t a pipe dream. Yes, there are issues with data quality; yes, the data sets are large and the normalisations can be non-trivial; yes, finding the relationships requires specialised statistical techniques. But advances in Big Data in the past few years make all of this achievable – as an example, Sumerian helped one customer reduce the end-to-end latency in a key FX flow by 75% using exactly this approach – and did it in less than eight weeks.

There are big wins to be had from applying IT Analytics to data you already have. Organisations that take advantage of it stand to gain that all-important competitive edge, achieving systematic, ongoing reductions in latency.
