
Performance by Numbers: Lessons on Latency

When sizing up a data sheet, what numbers pop out at you?  IOPS? GB/s? Latency?  If you’re like most IT professionals, you might be starting to pay more attention to latency.  In case you’re still wondering what all the fuss is about, let’s look at why low latency is so important, why it’s a challenge to get right, and how to avoid marketing tricks that attempt to dismiss its importance in favour of other benchmarks.

Before we dig into the details, however, let’s look at how much low latency performance can affect business success: a 2009 study showed that 40% of shoppers will wait no more than three seconds before abandoning a retail or travel site.  The old saying “time is money” clearly rings true, as data latency can dramatically impact a user’s experience and a company’s revenue.

The Latency Challenge

Until recently, many IT professionals reviewing storage options – whether mechanical disk or solid state memory – focused primarily on Input/Output Operations per Second (IOPS) or bandwidth rates, because those are the numbers marketers push.  Few marketers want to draw up a chart that drops dramatically as it moves to the right; it’s human nature to assume bigger is better, and marketers know it.  High IOPS numbers are fantastic if they come with ultra-low latency.  However, it’s fairly easy to boost bandwidth in ways that drastically raise latency, just to pad a data sheet.

For example, take a solid state memory module: adding more chips increases bandwidth, but the extra fan-out on the address lines also increases latency. Unfortunately, most bandwidth improvements come from adding more replicated components: lots of disks in an array, multiple memory chips on a module, many such modules in a large memory system, or processors scaled out across a cluster.

This forces the implementation of processor caches, file caches, disk caches, replication, pre-fetching, large block sizes and the like, all to compensate for the imbalance between bandwidth and latency.  If bandwidth gains come from adding more components and complexity, the latency numbers rise on the charts right along with the bandwidth access rates.
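To make that imbalance concrete, here is a minimal back-of-the-envelope model in Python. The device latency, coordination cost and operation size are invented for illustration and are not drawn from any product: aggregate bandwidth scales with the number of replicated components, but the latency of a single operation never improves – and grows with the coordination overhead.

```python
# Back-of-the-envelope model (illustrative numbers, not vendor data):
# scaling out replicated components multiplies aggregate bandwidth,
# but the latency of any single operation does not improve, and a
# per-component coordination cost (fan-out, controllers, caches)
# can make it worse.

def scaled_out(devices: int,
               device_latency_us: float = 100.0,   # one op on one device
               coordination_us: float = 5.0,       # assumed cost per extra device
               op_size_kb: int = 4) -> tuple[float, float]:
    """Return (aggregate bandwidth in MB/s, per-operation latency in microseconds)."""
    per_op_latency = device_latency_us + coordination_us * (devices - 1)
    # Each device retires one op per device_latency_us, all devices in parallel.
    ops_per_sec = devices * 1_000_000 / device_latency_us
    bandwidth_mb_s = ops_per_sec * op_size_kb / 1024
    return bandwidth_mb_s, per_op_latency

for n in (1, 4, 16, 64):
    bw, lat = scaled_out(n)
    print(f"{n:3d} devices: {bw:8.1f} MB/s aggregate, {lat:7.1f} us per operation")
```

With these made-up figures, 64 devices deliver 64 times the bandwidth of one device, yet each individual operation is slower than it was on a single device.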

Bandwidth is certainly important, but not at latency’s expense.  The good thing about low latency is that it inherently increases bandwidth while directly improving the user’s experience.
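The arithmetic behind that claim is worth spelling out. For a workload that issues one operation at a time and waits for each to complete, bandwidth is simply operation size divided by latency, so every cut in latency shows up directly as a bandwidth gain. The sketch below uses illustrative numbers only.

```python
# For a synchronous request stream (each operation waits for the previous
# one), bandwidth equals operation size divided by latency, so cutting
# latency raises bandwidth by the same factor. Numbers are illustrative.

def synchronous_bandwidth_mb_s(op_size_kb: float, latency_us: float) -> float:
    ops_per_sec = 1_000_000 / latency_us
    return ops_per_sec * op_size_kb / 1024

for latency_us in (1000, 250, 100, 25):
    bw = synchronous_bandwidth_mb_s(op_size_kb=4, latency_us=latency_us)
    print(f"{latency_us:5d} us per 4 KB operation -> {bw:7.1f} MB/s")
```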

We Really Hate to Wait

Let’s look at a few real-world examples of what happens online when latency lags.  In 2008, Google ran an experiment to measure user satisfaction by increasing the number of search results displayed on a single page from 10 to 30.  This increased latency by more than 100 percent, from 0.4 seconds to 0.9 seconds.  Although surveyed users said they wanted 30 search results per page, the added latency left them dissatisfied and cut traffic by 20 percent – with revenue falling along with it.

In another case, a leading online wine vendor estimated it lost 15% of its business in 2007 to the poor latency its customers experienced.  In 2008, the vendor achieved an estimated $45 million in sales, so a similar loss would equate to $6.75 million in foregone revenue.  After implementing a solution built on solid state flash memory connected to the server over PCI Express, the company reduced latency by a factor of four.  PCI Express places the flash close to the CPU, and storage latency directly determines how long the CPU sits idle waiting for data rather than doing useful work.  The flash-based solution also let the company reduce complexity by eliminating shared storage, crushing latency while increasing performance per rack unit six-fold. The company gained enough storage capacity for up to three years of projected growth, and was able to handle up to 10 times normal demand during the holiday season.
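The figures in these two examples hang together, as the quick check below shows; the 1 ms baseline in the last line is a made-up value used only to illustrate what a four-fold latency reduction means.

```python
# Quick check of the figures quoted above.

# Google experiment: 0.4 s -> 0.9 s
google_increase_pct = (0.9 - 0.4) / 0.4 * 100
print(f"Google latency increase: {google_increase_pct:.0f}%")   # 125%, i.e. more than 100%

# Wine vendor: 15% of $45M in annual sales
lost_revenue_m = 0.15 * 45
print(f"Estimated lost revenue: ${lost_revenue_m:.2f}M")         # $6.75M

# A 4x latency reduction means each operation takes a quarter as long.
old_latency_ms = 1.0   # hypothetical baseline, for illustration only
print(f"New latency: {old_latency_ms / 4:.2f} ms per operation")
```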

Complex Problem Seeks Simple Solution

Let’s recap a few points to keep in mind:

* Cache, replication, pre-fetching and large block sizes are used to overcome imbalances of bandwidth and latency.

* Scaling out components can boost bandwidth but will also increase latency.

Clearly, solving the latency problem becomes complex when flash memory products are developed with a laser focus on big headline numbers at latency’s expense.  Many SSDs rack up latency because they hide flash’s potential behind a controller that connects the same way as legacy mechanical disk drives.  If flash is instead integrated as a new memory tier, without the disk-era protocols, latency drops dramatically – and that is a very good thing for application performance.
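As a rough illustration of what stripping away disk-era layers means at the application level, the sketch below compares a conventional read() system call against memory-mapped access to the same data. The file path is hypothetical, this is not any vendor’s architecture, and a real measurement would have to control for the operating system’s page cache and queue depth; it simply shows the two access styles side by side.

```python
# Toy comparison: reaching data through the traditional read() path versus
# treating it as a memory tier via mmap. The path below is hypothetical,
# and on a real system the page cache, queue depth and driver stack matter
# far more than this single timing suggests.
import mmap
import os
import time

PATH = "/mnt/flash/sample.bin"   # hypothetical file on a flash-backed volume
OFFSET = 4096 * 1000             # an arbitrary 4 KiB-aligned offset
SIZE = 4096

fd = os.open(PATH, os.O_RDONLY)

# 1. Traditional path: a system call per access, as with a disk drive.
start = time.perf_counter()
data_read = os.pread(fd, SIZE, OFFSET)
print(f"pread : {(time.perf_counter() - start) * 1e6:8.1f} us")

# 2. Memory-tier style: map the file and touch the bytes directly.
mapped = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
start = time.perf_counter()
data_mapped = mapped[OFFSET:OFFSET + SIZE]
print(f"mmap  : {(time.perf_counter() - start) * 1e6:8.1f} us")

assert data_read == data_mapped
mapped.close()
os.close(fd)
```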

Given its impact on application performance, user experience and, ultimately, revenue, it’s clear that latency matters.  When evaluating a flash memory solution, check under the hood to determine real-world latency, to be sure you get the acceleration you expect for your applications.  That could mean a better experience for your customers, which ultimately means more revenue for your company – all thanks to your smart IT department.
