“The Value of a Millisecond” – the title of a widely quoted April 2008 white paper by my esteemed industry colleague Larry Tabb – is now as obsolete a discussion as the Renault F1 car that graced its cover. In the world of low latency – just as in F1 – three years of innovation have reset the performance metrics that matter. Winners and losers, and all that.
Last week’s announcement by Corvil that it is enabling its customer Nomura to report latency to nanosecond accuracy highlighted the increasing need to monitor at this level of granularity. As trading firms’ execution platforms and messaging middleware offerings begin operating in the single-digit microsecond range, measuring more finely than those systems operate is becoming an imperative. As I say above, winners and losers, and all that.
To recap from last week, Nomura has deployed Corvil in its equities DMA operations – its NXT Direct system – to monitor trade performance and to validate latency. NXT Direct has been operating at below three microseconds of latency (so says Nomura, without getting at all granular on what that number actually means), so measuring the latency of the individual components that make up the platform – and how that latency varies – is now seemingly a matter of talking in nanoseconds.
As would be expected, Nomura’s platform is co-located with major US equity trading venues, and provides per-client, per-venue analysis, including the ability to view latencies for single orders, acknowledgments, fills and other message types. All good stuff to know should a client complain about performance being a microsecond or two on the slow side.
For Nomura, Corvil has deployed the latest version of its CorvilNet offering, which has seen a software upgrade to increase the resolution of measurement and to optimise performance by leveraging Intel multi-core technology, as already deployed in Corvil’s hardware platforms.
But while the software within CorvilNet can measure down to a single nanosecond, the hardware time stamping on the currently installed network interface card has a resolution of only 10 nanoseconds. So keeping up with the software will require a hardware upgrade to produce more accurate timestamps.
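A rough sense of what that resolution gap means in practice: if the NIC stamps packets on a 10-nanosecond grid, a latency built from two such stamps can be off by up to 10 nanoseconds in total, which swamps single-nanosecond software resolution. A toy illustration, with made-up numbers rather than anything from Corvil:

```python
# Illustration only: how 10 ns hardware timestamp granularity limits accuracy.
# The timestamps below are made up; this is not CorvilNet's implementation.

def quantize(ts_ns: float, resolution_ns: int = 10) -> int:
    """Snap a timestamp onto the NIC's timestamping grid."""
    return round(ts_ns / resolution_ns) * resolution_ns

true_send_ns, true_recv_ns = 1_000_004.0, 1_002_711.0    # true one-way latency: 2707 ns
measured = quantize(true_recv_ns) - quantize(true_send_ns)
print(f"true latency    : {true_recv_ns - true_send_ns:.0f} ns")
print(f"measured        : {measured} ns")                # 2710 ns, off by 3 ns
print("worst-case error: +/- 10 ns when both stamps are quantized")
```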
“There has been an insatiable drive by our customers from milliseconds to microseconds and now to nanoseconds,” says Donal O’Sullivan, Corvil’s VP of product management. “Now with our latest release, Corvil customers can detect if someone inserts a 10m cable instead of a 5m cable by looking at the latency reports.”
[I am told that a metre of cable equates to three to four nanoseconds, so I think that claim flies.]
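For the curious, the arithmetic behind that sanity check is easy to sketch. The snippet below is a back-of-the-envelope check only; the per-metre figure is an assumption, spanning roughly 3.3 ns/m (light in free space) to the roughly 5 ns/m rule of thumb for copper and fibre with a velocity factor around 0.67:

```python
# Back-of-the-envelope check of the "10 m versus 5 m cable" claim.
# ns_per_metre is an assumption: ~3.3 ns/m at free-space light speed,
# ~5 ns/m for typical copper or fibre (velocity factor ~0.67).

def extra_delay_ns(long_m: float, short_m: float, ns_per_metre: float) -> float:
    return (long_m - short_m) * ns_per_metre

for ns_per_metre in (3.3, 5.0):
    delta = extra_delay_ns(10, 5, ns_per_metre)
    print(f"at {ns_per_metre} ns/m, a 10 m cable adds {delta:.1f} ns over a 5 m one")

# Either way the difference is well above single-nanosecond resolution.
```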
The focus on increased granularity also comes from the wider deployment of the latest technologies, such as 10 gigabit Ethernet, InfiniBand and RDMA transports. These have been adopted by messaging middleware vendors such as IBM, Informatica/29West, NYSE Technologies, and now Tibco Software.
Recently published performance tests by IBM and Informatica show server-to-server latencies of single-digit microseconds (and Tibco’s imminent release of FTL is likely to compete at that level). Measuring latency variance – so-called jitter – across such middleware is going to require nanosecond resolution to make sense.
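To put jitter in concrete terms: once per-message timestamps exist at adequate resolution, variance is just a property of the latency distribution rather than its average. A minimal sketch with made-up numbers (real monitoring products work on captured packet streams, not hand-typed lists):

```python
# Minimal sketch: jitter statistics from per-message send/receive timestamps (ns).
# The data is made up; timestamp resolution caps how meaningful the tail is.
import statistics

send_ns = [0, 1_000, 2_000, 3_000, 4_000, 5_000]
recv_ns = [4_210, 5_190, 6_430, 7_205, 9_020, 10_198]    # includes a slow outlier

latencies = [r - s for s, r in zip(send_ns, recv_ns)]     # per-message latency
print("median latency :", statistics.median(latencies), "ns")
print("jitter (stdev) :", round(statistics.stdev(latencies)), "ns")
print("worst case     :", max(latencies), "ns")
# With microsecond-resolution timestamps, the few hundred nanoseconds separating
# most of these messages would largely disappear into rounding.
```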
Coming soon, I would expect, will be latency measurement between applications running on the same server and communicating via shared memory. A number of middleware offerings already support this, with reported latencies in the few-hundred-nanosecond range – a figure that will only shrink as hardware advances.
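Measuring that intra-server hop is conceptually the same exercise, just with no wire to tap. Below is a minimal sketch of the idea in Python, using a shared memory segment and a spinning reader. It is an illustration only: the segment name is invented, and time.time_ns() on a commodity OS is far coarser (and less stable across processes) than the sub-microsecond path being measured, which is precisely why vendors reach for instrumented hardware such as the card discussed next.

```python
# Illustration only: one-way hop between two processes over shared memory.
# Wall-clock time.time_ns() is far too coarse for real sub-microsecond work;
# this shows the shape of the measurement, not a production technique.
import struct
import time
from multiprocessing import Process, shared_memory

SLOT = "latency_demo"   # invented name for the shared segment

def writer():
    shm = shared_memory.SharedMemory(name=SLOT)
    time.sleep(0.1)                                    # let the reader start spinning
    shm.buf[1:9] = struct.pack("<q", time.time_ns())   # publish the send timestamp
    shm.buf[0] = 1                                     # then raise the "ready" flag
    shm.close()

def reader():
    shm = shared_memory.SharedMemory(name=SLOT)
    while shm.buf[0] == 0:                             # busy-wait on the flag
        pass
    recv_ns = time.time_ns()
    sent_ns = struct.unpack("<q", bytes(shm.buf[1:9]))[0]
    print(f"observed one-way hop: {recv_ns - sent_ns} ns")
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=SLOT, create=True, size=9)
    shm.buf[0] = 0
    procs = [Process(target=reader), Process(target=writer)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    shm.close()
    shm.unlink()
```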
TS-Associates’ Application Tap is an add-in server card that supports such intra-server latency measurement – also down to 10 nanoseconds – albeit with some minor code changes. “The New Paradigm of Nanometrics” is the name of a report from that company on its approach. Nice try, but I still prefer Larry’s, outdated as it is.