The one constant in financial markets IT is that nothing stays the same for very long, and market data feeds, and the applications that process them, are no exception.
As data rates increase, and as processing systems (or any component of them) are upgraded and modified, performance can suffer. The trick is to monitor the environment constantly so as to pre-empt problems before they bite. But what should one monitor?
According to the boffins at 29West, the answer is straightforward. To quote from an article in their most recent newsletter: “Focus more on measuring application latency and less on measuring data rates”. In fact, 29West reckons that latency is the “canary in the coalmine” when it comes to early warning indicators.
The reasoning, they say, is that when one has a measure of latency, it is possible to work to improve it. That’s in contrast to measuring data rates, which are essentially out of one’s control. 29West suggests measuring latency message by message, day in, day out.
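What does measuring latency message by message look like in practice? Here's a minimal sketch, ours rather than 29West's: each message is assumed to carry the sender's timestamp (the `sent_ns` field is a hypothetical name), and sender and receiver clocks are assumed to be synchronised, say via PTP or NTP, so the receiver can record a latency sample for every single message rather than the occasional spot check.

```python
import time

latencies_us = []  # one sample per message, kept so percentiles (not just averages) can be reported


def on_message(msg):
    # Assumes the sender stamped the message with its wall-clock time in
    # nanoseconds ('sent_ns' is a hypothetical field name), and that the
    # sender's and receiver's clocks are synchronised (e.g. via PTP).
    recv_ns = time.time_ns()
    latencies_us.append((recv_ns - msg["sent_ns"]) / 1_000)


# Example: a message "sent" 250 microseconds before it arrived here
on_message({"sent_ns": time.time_ns() - 250_000})
print(f"last message latency: {latencies_us[-1]:.0f} us")
```

Keeping every sample (or at least a histogram of them) is what lets you watch the tail of the distribution day in, day out, rather than a single averaged figure.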
29West also points out that data rate measurements are really averages, and often the sample period either isn't stated at all or is so long that short bursts, and the problems they cause, never show up in the figure.
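To illustrate that point with our own made-up numbers, not 29West's: a feed that averages under 2,000 messages per second over a minute can still contain a one-second burst of 50,000 messages, and it's the burst that builds the queues and the latency.

```python
# 60 one-second samples: a quiet feed with a single one-second burst.
per_second = [1_000] * 59 + [50_000]   # messages per second (illustrative figures)

minute_average = sum(per_second) / len(per_second)
peak_second = max(per_second)

print(f"average over the minute: {minute_average:,.0f} msgs/sec")  # ~1,817
print(f"worst one-second burst:  {peak_second:,} msgs/sec")        # 50,000
```

The minute-long average looks entirely benign; only a finer-grained view, or the per-message latency itself, reveals the spike.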
More on this subject from 29West here. Please get back to us with your own views on how best to monitor systems. Do you agree with 29West, or have you found some other metric to monitor?
Until next time … here’s some good music.
[tags]29West, data latency, low latency, latency measurement[/tags]