The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Eliminating Latency with Analytics

Latency is a continual challenge in trading systems. In High Frequency Trading (HFT) systems, the challenge is immediate and obvious: if your order isn’t in first, you won’t hit the liquidity you’re after. In other systems, it might not be so obvious, but it’s still an issue. For example, if you’re streaming FX quotes out to OTC venues, any latency you introduce increases the chances of a subsequent order being rejected due to changed market conditions; do this on a regular basis and your counterparties will quickly become wary of dealing with you, resulting in lost order flow.

Given this challenge, it’s not surprising that pretty much everyone in the market is working hard to reduce latency in key systems. This usually starts with an effort to measure and benchmark existing latency. After all, as the old adage goes, if you can’t measure it, you can’t manage it. And here lies the first challenge – in most trading systems, latency isn’t something that happens in one place; rather, it’s the time difference between information arriving from a source, and a resulting order/quote/trade being sent to a destination. Depending on the complexity of the system, there may be dozens of infrastructure and application components between these two points. If you only measure the end-to-end latency, then you’ll know whether it’s good or bad, but where do you take action to improve it? Likewise, if you only focus on one point (your market data source, or exchange connections, for example), how do you know its latency contribution is a significant component of the end-to-end trip?

So, if the objective of the process is to make improvements in your latency, then you need to be able to quantify the contribution of each of the components in the path. Now, there are numerous ways to do this. You can leverage existing log files to track flow across multiple components (assuming you can address time synchronisation challenges); you can capture packet traces and use these in a similar way; you can purchase one of the many dedicated latency monitoring solutions on the market; or you can use some combination of all of these. The best approach for you depends on a whole number of variables, and could easily fill another full article. Let’s assume you’ve chosen and implemented your approach, and you’re now measuring your latency, and how much each system is contributing.
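To make the decomposition idea concrete, here is a minimal sketch of what quantifying per-component contribution looks like once you have timestamps. The component names and timestamp values are hypothetical, and it assumes the clocks at each capture point are already synchronised – which, as noted above, is a real challenge in practice:

```python
from collections import OrderedDict

# Hypothetical capture points (microseconds) for a single order as it
# crosses each component in the path; assumes synchronised clocks.
timestamps = OrderedDict([
    ("feed_handler_in",  1_000_000),
    ("feed_handler_out", 1_000_180),
    ("strategy_in",      1_000_195),
    ("strategy_out",     1_000_520),
    ("gateway_in",       1_000_535),
    ("gateway_out",      1_000_600),
])

# Latency of each hop is the difference between consecutive timestamps.
points = list(timestamps.items())
hops = {f"{a} -> {b}": tb - ta
        for (a, ta), (b, tb) in zip(points, points[1:])}

end_to_end = points[-1][1] - points[0][1]
print(f"end-to-end: {end_to_end} us")
for hop, latency in sorted(hops.items(), key=lambda kv: -kv[1]):
    print(f"{latency:5d} us  {hop}  ({100 * latency / end_to_end:.0f}%)")
```

With real data you would aggregate this over millions of messages, but even this toy example shows the point of the exercise: the hop breakdown tells you where to act, which the single end-to-end number cannot.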

Now the fun really begins …

Because measuring latency only tells you the what and the when. What is my latency now? What was it at 11am on Friday? It doesn’t tell you the why or the what if. Why is it higher than usual? What will it be when market volumes double, or when you add new customers? In order to truly manage your latency, you need to do more than just measure it – you need to model and analyse its relationship with other things that are happening in your environment: market data volumes, order volumes, network throughput, infrastructure utilisation and any other component that could affect it.

This is where IT Analytics comes to the fore. By bringing together the latency data, plus business volume data, plus infrastructure metrics into a single, large, normalised data set, you gain the ability to understand the relationships between them. You can quantify the impact of business volumes on latency. You can find the components whose latency is most sensitive to volume and quantify the improvement achievable by re-engineering them. You can identify the infrastructure components where capacity limitations are causing latency spikes. And because these Big Data models are re-usable, you can do all this repeatably and consistently – so you can quickly see evidence to validate any improvements you make.
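As a sketch of what "quantifying the impact of business volumes on latency" can mean in practice, the snippet below fits an ordinary least-squares slope of latency against message rate for two components. The data is synthetic and the component names are hypothetical; real analyses use far larger data sets and more sophisticated statistical techniques, but the principle – estimating how many extra microseconds each extra message per second costs you – is the same:

```python
def volume_sensitivity(volumes, latencies):
    """Least-squares slope: extra latency (us) per extra message/sec."""
    n = len(volumes)
    mean_v = sum(volumes) / n
    mean_l = sum(latencies) / n
    cov = sum((v - mean_v) * (l - mean_l) for v, l in zip(volumes, latencies))
    var = sum((v - mean_v) ** 2 for v in volumes)
    return cov / var

# Hypothetical hourly observations: message rate vs measured latency
# for two components in the same path.
volumes        = [10_000, 20_000, 30_000, 40_000, 50_000]
order_book_lat = [110, 205, 310, 395, 505]   # scales steeply with volume
gateway_lat    = [62, 60, 65, 63, 66]        # essentially flat

print(f"order book: {volume_sensitivity(volumes, order_book_lat):.4f} us per msg/s")
print(f"gateway:    {volume_sensitivity(volumes, gateway_lat):.4f} us per msg/s")
```

Here the order book's latency is strongly volume-sensitive while the gateway's is not – so if volumes are expected to double, the order book is where re-engineering effort pays off, and the model lets you put a number on the expected improvement.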

This isn’t a pipe-dream. Yes, there are issues with data quality; yes, the data sets are large and the normalisations can be non-trivial; yes, finding the relationships requires specialised statistical techniques. But advances in Big Data in the past few years make all of this achievable – as an example, Sumerian helped one customer reduce the end-to-end latency in a key FX flow by 75% by using exactly this approach – and did it in less than eight weeks.

There are big wins to be had from applying IT Analytics to data you already have. Organisations who take advantage of it stand to gain that all-important competitive edge, achieving systematic, ongoing reductions in latency.
