
Latency Monitoring Revs Up to Nanoseconds

“The Value of a Millisecond” – the title of a widely quoted April 2008 white paper by my esteemed industry colleague Larry Tabb – is now as obsolete a discussion as is the Renault F1 car that graced its cover. In the world of low latency – just as in F1 – three years of innovation has reset the performance metrics that matter. Winners, and losers, and all that.

Last week’s announcement by Corvil that it is enabling its customer Nomura to report latency to nanosecond accuracy highlighted the increasing need to monitor at this level of granularity. As trading firms’ execution platforms and messaging middleware offerings begin operating in the single-digit microsecond range, being ahead of that in terms of measurement is becoming an imperative. As I say above, winners and losers, and all that.

To recap from last week, Nomura has deployed Corvil in its equities DMA operations – its NXT Direct system – to monitor trade performance and to validate latency. NXT Direct has been operating at below three microseconds latency (so says Nomura, without getting at all granular on what that number actually means), and so measuring the latency of the individual components that make up the platform – and how that latency varies – is now seemingly a matter of talking in nanoseconds.

As would be expected, Nomura’s platform is co-located with major US equity trading venues, and provides per-client, per-venue analysis, including the ability to view latencies for single orders, acknowledgements, fills and all other order message types. All good stuff to know should a client complain about performance being a microsecond or two on the slow side.

For Nomura, Corvil has deployed the latest version of its CorvilNet offering, which has seen a software upgrade to increase the resolution of measurement and to optimise performance by leveraging Intel multi-core technology, as already deployed in Corvil’s hardware platforms.

But while the software within CorvilNet can measure down to a single nanosecond, the hardware timestamping on the currently installed network interface card has a resolution of 10 nanoseconds. So keeping up with the software will require a future hardware upgrade to produce more accurate timestamps.
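To make that resolution gap concrete, here is a toy C sketch. The event times are invented, and quantisation is modelled as simple truncation to the previous tick – an assumption on my part, not Corvil’s documented behaviour. With 10-nanosecond ticks, two events seven nanoseconds apart can be reported as ten nanoseconds apart, or as simultaneous, depending on where they land relative to a tick boundary.

```c
#include <stdio.h>

/* Toy illustration of 10ns hardware timestamp resolution. Event times
 * are invented; quantisation is modelled as truncation to the previous
 * 10ns tick, which is an assumption, not a documented behaviour. */
int main(void) {
    long long true_a_ns = 1000000003; /* hypothetical true event times */
    long long true_b_ns = 1000000010;

    long long stamp_a = (true_a_ns / 10) * 10; /* quantised to 10ns ticks */
    long long stamp_b = (true_b_ns / 10) * 10;

    printf("true gap: %lld ns, reported gap: %lld ns\n",
           true_b_ns - true_a_ns, stamp_b - stamp_a);
    return 0;
}
```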

“There has been an insatiable drive by our customers from milliseconds to microseconds and now to nanoseconds,” says Donal O’Sullivan, Corvil’s VP of product management. “Now with our latest release, Corvil customers can detect if someone inserts a 10m cable instead of a 5m cable by looking at the latency reports.”

[I am told that a metre of cable equates to three to four nanoseconds, so I think that claim flies.]
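For what it’s worth, the arithmetic is easy to sketch in a few lines of C. The four-nanoseconds-per-metre figure below is my working assumption from the range quoted above, not a Corvil-published number; on that basis, swapping a 5m cable for a 10m one adds roughly 20 nanoseconds each way.

```c
#include <stdio.h>

/* Back-of-the-envelope check of the cable-length claim. The propagation
 * delay per metre is an assumed figure from the 3-4ns range quoted above. */
int main(void) {
    const double ns_per_metre = 4.0;         /* assumed propagation delay */
    double delay_5m  = 5.0  * ns_per_metre;  /* ~20 ns */
    double delay_10m = 10.0 * ns_per_metre;  /* ~40 ns */

    printf("5m cable:  ~%.0f ns one way\n", delay_5m);
    printf("10m cable: ~%.0f ns one way\n", delay_10m);
    printf("difference: ~%.0f ns -- visible only at nanosecond resolution\n",
           delay_10m - delay_5m);
    return 0;
}
```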

The focus on increased granularity also comes from the more general deployment of the latest technologies, such as 10 gigabit Ethernet, InfiniBand and RDMA transports. These have been adopted by messaging middleware vendors such as IBM, Informatica/29West, NYSE Technologies, and now Tibco Software.

Recently published performance tests by IBM and Informatica show server-to-server latencies of single-digit microseconds (and Tibco’s imminent release of FTL is likely to compete at that level). Measuring latency variance – so-called jitter – across such middleware is going to require nanosecond resolution to make sense.
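To put a number on that, here is a minimal sketch – the latency samples are invented for illustration, not taken from the IBM, Informatica or Tibco tests – of jitter summarised as the standard deviation of per-message latencies. With a mean around four microseconds, jitter of a couple of hundred nanoseconds is invisible unless the measurement itself resolves to nanoseconds.

```c
/* compile with: cc jitter.c -lm */
#include <math.h>
#include <stdio.h>

/* Minimal sketch of summarising latency jitter from per-message latency
 * samples (nanoseconds). The samples are invented for illustration; real
 * figures would come from a capture device, not from this code. */
static double mean_ns(const double *s, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += s[i];
    return sum / n;
}

static double stddev_ns(const double *s, int n, double mean) {
    double sq = 0.0;
    for (int i = 0; i < n; i++) sq += (s[i] - mean) * (s[i] - mean);
    return sqrt(sq / n);
}

int main(void) {
    /* hypothetical server-to-server latencies in the single-digit
     * microsecond range */
    double samples_ns[] = { 4100, 4350, 3980, 4500, 4120, 4800, 4050, 4210 };
    int n = (int)(sizeof samples_ns / sizeof samples_ns[0]);

    double mean = mean_ns(samples_ns, n);
    printf("mean latency: %.0f ns, jitter (std dev): %.0f ns\n",
           mean, stddev_ns(samples_ns, n, mean));
    return 0;
}
```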

Coming soon, I would expect, will be latency measurement between applications running on the same server and communicating via shared memory. A number of middleware offerings already support this, and reported latencies are in the few-hundred-nanosecond range – a figure that will only come down as hardware advances.
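A rough illustration of why that is hard to do in software alone: the C sketch below (my own, not a description of any particular middleware) timestamps either side of a notional shared-memory hop with clock_gettime(). The timer call itself costs tens of nanoseconds, which is a sizeable error bar when the interval being measured is a few hundred.

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Illustrative software timestamping for intra-server latency. A sender
 * would record t_send just before writing into shared memory and the
 * receiver t_recv just after reading; the difference is the one-way
 * latency. CLOCK_MONOTONIC is shared across processes on one host, so
 * the comparison is valid, but each clock_gettime() call adds tens of
 * nanoseconds of overhead to the measurement. */
static long long now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void) {
    long long t_send = now_ns();
    /* ... message written to and read from a shared-memory ring here ... */
    long long t_recv = now_ns();

    printf("measured interval: %lld ns (includes timer overhead)\n",
           t_recv - t_send);
    return 0;
}
```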

TS-Associates’ Application Tap is an add-in server card that supports such latency measurement intra-server, albeit with some minor code changes, also down to 10 nanoseconds. “The New Paradigm of Nanometrics” is the name of a report from that company on its approach. Nice try, but I still prefer Larry’s, outdated as it is.
