In the new era of microsecond latency, is the quickest path between two points a straight line? Traditionally, messaging implementations aim to move information between publishers and subscribers by the most direct means possible, which often becomes the argument against legacy centralised, middleware messaging and queuing systems. Proponents of direct-connect, peer-to-peer messaging systems contend that it is impossible to put an extra hop between two hosts and provide a lower-latency path. This is true, but only in theory.
Unfortunately, the foundation for this argument no longer makes sense given complex communication interdependencies and deterministic, end-to-end performance requirements. As soon as organisations move beyond a small number of direct connections, such inter-dependencies impact performance. Even worse, attempting to scale with direct connections typically results in systemic operational and performance issues.
In light of today’s increasing market volatility and unstable financial climate, taking such a risk could quickly force an otherwise viable business out of the market. When an organisation decides to decouple applications and scale its infrastructure beyond a handful of hosts to increase performance, the middleware messaging layer becomes the focus.
Organisations that can no longer bear the constant burdens of peer-to-peer messaging systems need to explore more efficient hardware-based alternatives. Real-market implementation examples coupled with quantifiable results are the data points required to help navigate this course.
Take, for example, a global investment banking customer who routed every piece of US market data available – including all equities, derivatives, commodities and FX instruments – through a pair of Tervela fault-tolerant message switches and sent it out to over 1,500 subscribers.
The findings follow:
• Mean baseline roundtrip message latency of 58 microseconds with a standard deviation of 8 microseconds
• Mean latency increase of only 18 microseconds during peak load conditions, which included sensitive phases such as market open and close
• Consistent, predictable performance even as subscribers exhibited diverse consumption patterns
As these results show, using hardware to address the messaging challenge improves overall efficiency. Unlike software-based peer-to-peer messaging systems, hardware-accelerated message networking provides faster, more predictable performance and lower latency across the entire operational spectrum.
It also scales more effectively, avoiding broad, detrimental impact on both applications and the network, which is especially crucial during periods of volatility. Making communications faster has many dependencies: operating systems, processors, memory, application programming interfaces, networks and so forth.
Offloading common messaging tasks to a hardware-accelerated message network frees the client API of many computationally intensive tasks such as routing and transport reliability. Adding intelligence and filtering to a message network also relieves the existing packet network of excessive traffic.
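To make the offloading concrete, the sketch below shows, in simplified Python, where the work moves: the publishing application hands a topic-tagged message to the message network once, and topic matching and fan-out happen inside the network rather than in the client API. The MessageSwitch class and its methods are illustrative assumptions only, not any vendor’s actual interface.

```python
# Hypothetical sketch of offloaded routing and filtering.
# Names (MessageSwitch, subscribe, publish) are illustrative, not a real API.
from collections import defaultdict

class MessageSwitch:
    """Stands in for the hardware message switch: it owns the routing table,
    so publishers and subscribers never track one another directly."""
    def __init__(self):
        self._routes = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._routes[topic].append(callback)

    def publish(self, topic, payload):
        # Topic matching and fan-out happen here, inside the network,
        # not in the publishing application's API thread.
        for deliver in self._routes[topic]:
            deliver(topic, payload)

# The client side stays thin: one send per message, regardless of how many
# subscribers exist or what subsets of the data they have asked for.
switch = MessageSwitch()
switch.subscribe("OPRA.AAPL", lambda t, p: print("desk A got", t, p))
switch.subscribe("OPRA.AAPL", lambda t, p: print("desk B got", t, p))
switch.publish("OPRA.AAPL", {"bid": 187.21, "ask": 187.23})
```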
Because peer-to-peer messaging systems lack centralised intelligence, they are a prime producer of unwarranted network traffic. This ultimately impacts all the servers and applications that connect to the network. So the real question remains: is the path from the wire to the application faster using a hardware-based message network or a software-based system with many publishers and subscribers?
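To put rough numbers on that unwarranted traffic, the back-of-envelope sketch below assumes a one-million-message-per-second feed and subscribers that each want only a small slice of it; the feed rate and the 2% interest figure are assumptions for illustration, not measurements from the deployments described above.

```python
# Illustrative arithmetic (assumed figures, not measured data): traffic reaching
# hosts when filtering happens at the edge versus inside the message network.
feed_rate = 1_000_000        # messages/sec on the full feed (assumed)
subscribers = 1_500          # as in the deployment described above
interest_fraction = 0.02     # each subscriber wants ~2% of the feed (assumed)

# Edge filtering (typical of peer-to-peer/multicast distribution): every host
# receives the full feed and discards what it does not need.
edge_delivered = feed_rate * subscribers
# Network filtering: hosts receive only the messages they subscribed to.
network_delivered = int(feed_rate * interest_fraction) * subscribers

print(f"delivered with edge filtering:    {edge_delivered:,} msg/s")
print(f"delivered with network filtering: {network_delivered:,} msg/s")
print(f"discarded, wasted traffic:        {edge_delivered - network_delivered:,} msg/s")
```

Under these assumed figures, the overwhelming majority of edge-filtered traffic is received only to be thrown away, which is work imposed on every connected server and on the packet network itself.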
The reality of peer-to-peer messaging systems is the inherent inability to scale deterministically due to software limitations. One must remember that performance changes dramatically from a lab environment of five servers (where most people make their purchase decisions) to a production environment with hundreds of hosts and ever-accelerating live message data rates.
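The same scaling problem shows up in the number of sessions each host must maintain. The sketch below uses an assumed topology of 50 publishing feed handlers and the 1,500 subscribers mentioned earlier: with direct connections the session count grows multiplicatively, while a message network keeps it additive (one session per host, or two for a fault-tolerant pair).

```python
# Illustrative connection-count arithmetic (assumed topology, not measured data).
publishers = 50      # feed handlers publishing data (assumed)
subscribers = 1_500  # consuming applications (as in the example above)

# Direct-connect worst case: every subscriber holds a session to every publisher.
p2p_sessions = publishers * subscribers
# Message network: every host holds one session to the switch
# (double it for a fault-tolerant pair).
switch_sessions = publishers + subscribers

print(f"peer-to-peer sessions:    {p2p_sessions:,}")
print(f"message-network sessions: {switch_sessions:,}")
```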
The diagram below, which compares peer-to-peer and message networking solutions at a top-tier investment services firm, uses several days of live feeds from the Options Price Reporting Authority (OPRA) and participant exchanges to measure latency. The results not only illustrate the message network’s ability to improve performance during quiescent market periods, but also depict a dramatic performance increase during market volatility.
Of course – with a limited number of hosts – some software-based messaging systems are able to operate at adequate (though not optimal) levels of performance during quiet periods when data rates tend to be low and more predictable. However, as volumes increase and message rates accelerate alongside rising market volatility, the tight coupling of hosts within a peer-to-peer messaging system becomes its Achilles heel.
In summary, message networking delivers the low-latency performance and deployment scale required to meet both today’s challenges and tomorrow’s demands. What’s the quickest path for real-market environments? A message network.