About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Performance by Numbers: Lessons on Latency


When sizing up a data sheet, what numbers pop out at you?  IOPS? GB/s? Latency?  If you’re like most IT professionals, you might be starting to pay more attention to latency.  In case you’re still wondering what all the fuss is about, let’s look at why low latency is so important, why it’s a challenge to get right, and how to avoid marketing tricks that attempt to dismiss its importance in favour of other benchmarks.

Before we dig into the details, however, let’s look at how much low latency performance can affect business success: a 2009 study showed that 40% of shoppers will wait no more than three seconds before abandoning a retail or travel site.  The old saying “time is money” clearly rings true, as data latency can dramatically impact a user’s experience and a company’s revenue.

The Latency Challenge

Until recently, many IT professionals reviewing storage options – whether mechanical disk or solid state memory – focused primarily on Input/Output Operations Per Second (IOPS) or bandwidth rates, because marketers focused on pushing bigger, better numbers.  Few marketers want to draw a chart that drops dramatically as it moves to the right; it's human nature to think bigger is better, and marketers know it.  High IOPS numbers are fantastic if they come with ultra-low latency.  However, it's fairly easy to boost bandwidth in ways that drastically raise latency, just to pad data sheets.

For example, take a solid state memory module: adding more chips increases bandwidth, but the extra fan-out on the address lines also increases latency. Unfortunately, most bandwidth improvements are achieved in just this way, by replicating components: lots of disks in an array, multiple memory chips on a module, many such modules in a large memory system, or processors scaled out across a cluster.

This forces the implementation of processor caching, file caches, disk caches, replication, pre-fetching, large block sizes, etc., all to deal with the bandwidth-to-latency imbalance.  If bandwidth rate gains come from adding more components and complexity, those latency numbers are also going to rise on the charts right along with the bandwidth access rates.
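To illustrate the caching point, a read-through cache masks backend latency by serving repeated requests from fast memory, so only misses pay the slow trip to the medium. A minimal sketch, with a hypothetical `slow_read` standing in for the high-latency backend:

```python
import functools

backend_reads = 0  # counts trips to the (simulated) slow medium

def slow_read(block: int) -> bytes:
    """Stand-in for a high-latency backend read."""
    global backend_reads
    backend_reads += 1
    return f"data-{block}".encode()

@functools.lru_cache(maxsize=1024)
def cached_read(block: int) -> bytes:
    """Read-through cache: only cache misses reach the backend."""
    return slow_read(block)

for block in [1, 2, 1, 1, 2, 3]:  # 6 requests, only 3 distinct blocks
    cached_read(block)

print(backend_reads)  # only the 3 misses paid the backend latency
```

The cache hides the imbalance rather than removing it: the first access to each block is still slow, which is exactly the complexity cost the paragraph above describes.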

Bandwidth is certainly important, but not at latency’s expense.  The good thing about low latency is that it will inherently increase bandwidth while directly impacting a user’s experience.
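The claim that low latency inherently increases bandwidth follows from Little's Law: sustained throughput equals outstanding requests divided by per-request latency, so at a fixed queue depth, cutting latency in half doubles IOPS. A quick illustration with made-up device numbers:

```python
def iops(queue_depth: int, latency_s: float) -> float:
    """Little's Law: sustained IOPS = outstanding I/Os / per-I/O latency."""
    return queue_depth / latency_s

# Two hypothetical devices at the same queue depth of 32 outstanding I/Os:
disk_like = iops(32, 100e-6)  # 100 microseconds per I/O
low_lat = iops(32, 25e-6)     # 25 microseconds per I/O

print(disk_like, low_lat)  # cutting latency 4x quadruples throughput
```

The reverse does not hold: adding components to raise bandwidth raises latency, which is why chasing the bandwidth number alone can leave applications slower.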

We Really Hate to Wait

Let’s look at a few real-world examples of what happens online when latency lags.  In 2008, Google ran an experiment to measure user satisfaction by increasing the number of search results displayed on a single page from 10 to 30.  This more than doubled latency, from 0.4 seconds to 0.9.  While user surveys unanimously showed that they wanted 30 search results per page, the added latency actually left users dissatisfied and decreased traffic by 20 percent – not to mention revenue.

In another case, a leading online wine vendor estimated it lost 15% of its business in 2007 due to poor latency experienced by customers.  In 2008, the vendor achieved an estimated $45 million in sales, so a 15% loss would equate to $6.75 million in lost revenue.  After implementing a solution with solid state flash memory connected to the server through PCI Express, the company was able to cut latency by a factor of four.  PCI Express attaches storage directly to the CPU, shortening the path between application and flash, and because latency is directly tied to CPU efficiency, the processor spends less time stalled on I/O.  The flash-based solution also allowed the company to reduce complexity by eliminating the need for shared storage, crushing latency while increasing performance per rack unit by six times. The company obtained enough storage capacity for up to three years of projected growth, and was able to keep up with as much as ten times the normal demand during the holiday season.

Complex Problem Seeks Simple Solution

Let’s recap a few points to keep in mind:

* Cache, replication, pre-fetching and large block sizes are used to overcome imbalances of bandwidth and latency.

* Scaling out components can boost bandwidth but will also increase latency.

Clearly, solving the latency problem can be complex if flash memory products are developed with a laser focus on big numbers while sacrificing latency.  Many SSDs rack up latency because they hide flash’s potential behind a controller that connects the same way as legacy mechanical disk drives.  If flash is integrated as a new memory tier without the disk-era protocols, latency drops dramatically – and that is a very good thing for application performance.

With its impact on how applications perform, user experience, and ultimately, revenues, it’s clear that latency matters.  When evaluating a flash memory solution, check under the hood to determine real-world latency to be sure you’re going to get the acceleration you expect for your applications.  It could mean a better experience for your customers, which ultimately means more revenue for your company, all thanks to your smart IT department.
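Checking under the hood means measuring the full latency distribution, not just the average: users feel the tail, and a single stall can hide inside a healthy-looking mean. A minimal sketch using simulated, hypothetical per-I/O timings (a real test would sample timestamps around actual reads):

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# Simulated per-I/O latencies in microseconds for a hypothetical device;
# nine fast reads and one long stall.
latencies = [80, 85, 82, 90, 88, 84, 3000, 86, 83, 87]

print(statistics.mean(latencies))  # the average blends the stall away
print(percentile(latencies, 99))   # the 99th percentile exposes it
```

A data sheet quoting only the mean would report a few hundred microseconds here, while one in a hundred operations takes 3 milliseconds – the kind of gap a real-world check is meant to catch.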
