
Performance by Numbers: Lessons on Latency


When sizing up a data sheet, what numbers pop out at you?  IOPS? GB/s? Latency?  If you’re like most IT professionals, you might be starting to pay more attention to latency.  In case you’re still wondering what all the fuss is about, let’s look at why low latency is so important, why it’s a challenge to get right, and how to avoid marketing tricks that attempt to dismiss its importance in favour of other benchmarks.

Before we dig into the details, however, let’s look at how much latency can affect business success: a 2009 study showed that 40% of shoppers will wait no more than three seconds before abandoning a retail or travel site. The old saying “time is money” clearly rings true: data latency can dramatically impact a user’s experience and a company’s revenue.

The Latency Challenge

Until recently, many IT professionals reviewing storage options – whether mechanical disk or solid state memory – focused primarily on Input/Output Operations Per Second (IOPS) or bandwidth rates, because marketers focused on pushing bigger, better numbers. Few marketers want to draw up a chart that drops dramatically as it moves to the right: it’s human nature to assume bigger is better, and marketers know it. High IOPS numbers are fantastic if they come with ultra-low latency. However, it’s fairly easy to boost bandwidth in ways that drastically raise latency simply to pad data sheets.

Take, for example, a solid state memory module: you could add more chips to increase bandwidth, but the extra fan-out on the address lines also adds latency. Unfortunately, most bandwidth improvements are achieved by adding more replicated components: lots of replicated disks in an array, multiple memory chips on a module, many of those modules in a large memory system, or scaled-out processors in a cluster.

That imbalance between bandwidth and latency is what forces processor caches, file caches, disk caches, replication, pre-fetching, large block sizes and the like into the design. If bandwidth gains come from adding more components and complexity, the latency numbers are going to rise on the charts right along with the bandwidth rates.
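
To make the trade-off concrete, here is a deliberately simplified toy model (the constants are illustrative assumptions, not figures for any real product) in which a hypothetical memory system is scaled by replicating components: aggregate bandwidth grows roughly linearly, while per-access latency creeps up with each extra level of fan-out.

```python
# Toy model: replicating components buys bandwidth but costs latency.
# All constants are illustrative assumptions, not figures for a real device.
import math

BASE_BANDWIDTH_GBS = 1.0   # bandwidth of a single component (assumed)
BASE_LATENCY_US = 10.0     # access latency of a single component (assumed)
FANOUT_PENALTY_US = 2.0    # extra latency per level of address fan-out (assumed)

def scaled_system(components: int) -> tuple[float, float]:
    """Return (aggregate bandwidth in GB/s, per-access latency in µs)."""
    bandwidth = BASE_BANDWIDTH_GBS * components          # replication adds bandwidth...
    fanout_levels = math.ceil(math.log2(components)) if components > 1 else 0
    latency = BASE_LATENCY_US + FANOUT_PENALTY_US * fanout_levels  # ...and latency
    return bandwidth, latency

for n in (1, 4, 16, 64):
    bw, lat = scaled_system(n)
    print(f"{n:>3} components: {bw:6.1f} GB/s aggregate, {lat:5.1f} µs per access")
```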

Bandwidth is certainly important, but not at latency’s expense. The good thing about low latency is that it inherently increases bandwidth while directly improving the user’s experience.
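
One way to see why low latency “inherently” buys bandwidth is Little’s Law: with a fixed number of requests in flight, achievable throughput is simply that concurrency divided by per-request latency. The sketch below uses hypothetical numbers to show throughput climbing as latency falls, without adding a single extra component.

```python
# Little's Law: throughput = outstanding requests / per-request latency.
# Queue depth and latencies are illustrative assumptions, not benchmarks.

def throughput_iops(outstanding_requests: int, latency_s: float) -> float:
    """IOPS achievable when a fixed number of requests is kept in flight."""
    return outstanding_requests / latency_s

QUEUE_DEPTH = 32  # requests kept in flight (assumed)

for latency_us in (500, 250, 100, 50):
    iops = throughput_iops(QUEUE_DEPTH, latency_us / 1_000_000)
    print(f"latency {latency_us:>3} µs -> {iops:>9,.0f} IOPS at queue depth {QUEUE_DEPTH}")
```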

We Really Hate to Wait

Let’s look at a few real-world examples of what happens online when latency lags. In 2008, Google ran an experiment to measure user satisfaction by increasing the number of search results displayed on a single page from 10 to 30. This increased latency by more than 100 percent, from 0.4 seconds to 0.9 seconds. While user surveys had shown overwhelmingly that people wanted 30 search results per page, the added latency actually left users dissatisfied and decreased traffic by 20 percent, not to mention the revenue that went with it.

In another case, a leading online wine vendor estimated that it lost 15% of its business in 2007 because of the poor latency its customers experienced. In 2008 the vendor achieved an estimated $45 million in sales, which would equate to $6.75 million in lost revenue. After implementing a solution with solid state flash memory connected to the server through PCI Express, the company cut latency to roughly a quarter of its previous level. PCI Express offers the most direct connection to the CPU, and latency directly determines how efficiently that CPU is used. The flash-based solution also allowed the company to reduce complexity by eliminating the need for shared storage, crushing latency while increasing performance per rack unit by six times. The company gained enough storage capacity for up to three years of projected growth and was able to handle up to 10 times normal demand during the holiday season.
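
The arithmetic behind those figures is simple enough to check; a quick back-of-the-envelope sketch, using only the numbers quoted above, looks like this:

```python
# Back-of-the-envelope check of the figures quoted above.
annual_sales = 45_000_000   # estimated 2008 sales, in USD
lost_fraction = 0.15        # estimated share of business lost to poor latency

lost_revenue = annual_sales * lost_fraction
print(f"Estimated revenue lost to latency: ${lost_revenue:,.0f}")  # $6,750,000

# Relative improvements reported after the PCIe flash deployment:
latency_factor = 4          # latency cut to ~1/4 of its previous level
rack_unit_factor = 6        # performance per rack unit up ~6x
demand_headroom = 10        # able to absorb up to 10x seasonal demand
print(f"Latency reduced to ~{100 / latency_factor:.0f}% of its previous level")
```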

Complex Problem Seeks Simple Solution

Let’s recap a few points to keep in mind:

* Cache, replication, pre-fetching and large block sizes are used to overcome imbalances of bandwidth and latency.

* Scaling out components can boost bandwidth but will also increase latency.

Clearly, solving the latency problem becomes complex when flash memory products are developed with a laser focus on big headline numbers at latency’s expense. Many SSDs rack up latency because they hide flash’s potential behind a controller that connects the same way legacy mechanical disk drives do. If flash is instead integrated as a new memory tier, without the disk-era protocols, latency drops dramatically – and that is a very good thing for application performance.

With its impact on how applications perform, user experience, and ultimately, revenues, it’s clear that latency matters.  When evaluating a flash memory solution, check under the hood to determine real-world latency to be sure you’re going to get the acceleration you expect for your applications.  It could mean a better experience for your customers, which ultimately means more revenue for your company, all thanks to your smart IT department.
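
“Checking under the hood” in practice means measuring per-I/O latency on your own workload rather than trusting the data sheet. Below is a minimal, Linux-oriented sketch (the file path is a placeholder, and the results will flatter the device unless the page cache is dropped or bypassed, as the comments note) that reports mean, median and 99th-percentile read latency:

```python
# Minimal random-read latency probe. Path and sizes are placeholders to adapt.
# Note: without dropping or bypassing the page cache, cached reads will make
# the device look far faster than it really is.
import os
import random
import statistics
import time

PATH = "/path/to/test/file"   # hypothetical test file on the device under test
BLOCK = 4096                  # 4 KiB reads
SAMPLES = 1000

fd = os.open(PATH, os.O_RDONLY)
try:
    size = os.fstat(fd).st_size
    latencies_us = []
    for _ in range(SAMPLES):
        offset = random.randrange(0, max(size - BLOCK, 1)) // BLOCK * BLOCK
        # Ask the kernel to drop cached pages for this file (Linux only),
        # so each read is more likely to hit the device itself.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        t0 = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        latencies_us.append((time.perf_counter() - t0) * 1_000_000)
finally:
    os.close(fd)

latencies_us.sort()
print(f"mean   {statistics.mean(latencies_us):8.1f} µs")
print(f"median {statistics.median(latencies_us):8.1f} µs")
print(f"p99    {latencies_us[int(0.99 * len(latencies_us))]:8.1f} µs")
```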
