
Nanosecond Market Data Feeds – FPGA Centric vs. FPGA Accelerated Designs


The “Microburst” Problem

Trading architects face the constant challenge of lowering latency. Today, that challenge means achieving nanosecond speeds deterministically, even during periods of intense market activity, the so-called ‘microbursts’.

In fast-moving, ‘bursty’ markets, commodity CPUs can no longer deliver the lowest latency or absorb microbursts. Memory bottlenecks, operating system overheads, limited parallelism, compiler limitations and slow networking stacks are among the many factors constraining performance.
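
To put rough numbers on the problem: a fully saturated 10 Gb/s feed delivers a minimum-size Ethernet frame roughly every 67 nanoseconds, while a software path might need on the order of a microsecond per message. The short C++ sketch below works through the arithmetic; the service time and burst length are illustrative assumptions, not measurements.

    // Back-of-the-envelope queue growth during a microburst.
    // Assumed figures: 10 Gb/s link, minimum-size 64-byte frames
    // (84 bytes on the wire with preamble and inter-frame gap),
    // and a CPU path needing ~1 microsecond per message.
    #include <cstdio>

    int main() {
        const double link_bps   = 10e9;     // 10 Gb/s feed
        const double frame_bits = 84 * 8;   // wire footprint per frame
        const double arrival_ns = frame_bits / link_bps * 1e9; // ~67 ns
        const double service_ns = 1000.0;   // assumed CPU cost per message
        const double burst_ns   = 100000.0; // a 100-microsecond burst

        const double arrivals = burst_ns / arrival_ns; // ~1,488 messages
        const double serviced = burst_ns / service_ns; // only 100 messages
        std::printf("backlog after burst: ~%.0f messages\n",
                    arrivals - serviced);
        return 0;
    }

Under these assumptions, a single 100-microsecond burst leaves a backlog of well over a thousand messages, each one stale by the time the CPU finally reaches it.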

So, having exhausted conventional methods using commodity CPUs, architects of trading systems have been turning to FPGA technologies to improve latencies and address this microburst problem.

Applying FPGAs for Pipeline Processing

Because FPGA chips can perform thousands of operations per clock cycle, they can be programmed to process market data using a technique known as pipelining. Pipelining allows processing of a new message to begin before the previous messages have been fully handled.

A properly pipelined FPGA design can guarantee that every byte of incoming traffic is processed with the same latency, even at 100% network saturation. However, not all FPGA approaches are the same.
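
As a concrete illustration, the fragment below sketches a single pipelined parser stage in Vitis-HLS-style C++. The 64-bit input word, the field offsets and the normalised output layout are invented for the example; the essential point is the PIPELINE II=1 directive, which asks the tools to build logic that accepts a new input word on every clock cycle.

    // A minimal sketch of one pipelined parser stage (Vitis HLS style).
    // Field positions and the normalised message layout are illustrative
    // assumptions, not a real feed specification.
    #include <ap_int.h>
    #include <hls_stream.h>

    struct RawWord { ap_uint<64> data; };     // one 64-bit word off the wire

    struct NormMsg {                          // hypothetical normalised layout
        ap_uint<32> instrument;
        ap_uint<32> price;
    };

    void parse_stage(hls::stream<RawWord>& in, hls::stream<NormMsg>& out) {
    #pragma HLS PIPELINE II=1                 // accept one word per clock cycle
        RawWord w = in.read();
        NormMsg m;
        m.instrument = w.data.range(63, 32);  // assumed field positions
        m.price      = w.data.range(31, 0);
        out.write(m);
    }

Because each stage holds only a word or two of state, a chain of such stages keeps processing at wire rate no matter how many messages are in flight.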

Alternative FPGA Architectures

Today, there are two main approaches in use by the trading community:

* A CPU-based system, using FPGA acceleration technologies
* A pure FPGA system, using a matrix of FPGAs

Architecture 1 – CPU System with FPGA Acceleration:

Collectively known as ‘hardware acceleration’, this approach keeps the system primarily CPU-centric, using FPGAs selectively to off-load certain functions and accelerate data processing.

In this case, FPGA acceleration is predominantly achieved with a PCIe card carrying an embedded FPGA. In this CPU-centric architecture, the FPGA serves as an off-load engine. Since some market data processing does occur on the FPGA, improvements in latency are delivered. However, this approach is still affected by the inherent bottlenecks of a traditional CPU system.

For example, limitations on PCIe slots, space and power make it difficult to scale the number of FPGAs used. In addition, communication between two PCIe devices generally takes several microseconds, a very long time in the world of market data processing.

Therefore, this solution scales only by sharing the CPU further, so performance degrades when data rates increase, when more feeds are processed, or when more distribution interfaces are added.

This architecture is efficient for locally consumed market data, but it is still susceptible to microbursts.
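
To make that consumption path concrete, the sketch below shows what the CPU side of an accelerated design often reduces to: a thread busy-polling a memory-mapped ring that the card fills with already-parsed messages. The Slot layout and the sequence-number convention are assumptions for illustration; real cards expose their own vendor-specific APIs.

    // CPU-side sketch: busy-poll a DMA ring filled by a hypothetical
    // PCIe FPGA card. Layout and publish convention are assumptions.
    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    struct Slot {
        std::atomic<uint32_t> seq;   // written last by the card
        uint32_t instrument;
        uint64_t price;
        uint32_t size;
    };

    void poll_ring(Slot* ring, std::size_t n_slots) {
        uint32_t expected = 1;
        std::size_t idx = 0;
        for (;;) {
            // Spin until the card publishes the next sequence number.
            while (ring[idx].seq.load(std::memory_order_acquire) != expected) {
            }
            // handle_message(ring[idx]);  // application logic goes here
            idx = (idx + 1) % n_slots;
            ++expected;
        }
    }

Even with busy-polling, every message has already crossed PCIe once, and any response must cross it again; this is where the microseconds accumulate.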

Architecture 2 – A Pure FPGA Matrix Architecture:

When deterministic latency is critical, the goal is to avoid the bottlenecks of the CPU and maximise the use of FPGAs throughout the whole market data processing cycle. This FPGA-centric approach uses an expandable matrix of FPGA nodes linked by raw binary interconnections. The matrix architecture offers the flexibility to add FPGA resources efficiently as processing needs grow.

In addition, each FPGA node comes with its own I/O and memory, avoiding the bottlenecks that arise from contention for shared system resources. This modular matrix allows the system to grow proportionally across multiple feeds.

As a result, all functions of market data processing, parsing, book building, filtering and distribution, can be done in hardware with no bottleneck, irrespective of the number of feeds received, the rate of data or the number of downstream consumers.
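
For instance, a book-building stage can be written in the same HLS-style C++ as the parser sketch above. Holding a small fixed array of price levels, fully partitioned into registers, lets every level be examined in parallel, so the update pipelines at one message per clock. The five-level depth and amend-only logic are deliberate simplifications.

    // Sketch of a hardware book-building stage (Vitis HLS style).
    // Depth, fields and amend-only behaviour are illustrative assumptions.
    #include <ap_int.h>
    #include <hls_stream.h>

    const int DEPTH = 5;

    struct BookUpdate { ap_uint<32> price; ap_uint<32> size; };
    struct BookLevel  { ap_uint<32> price; ap_uint<32> size; };

    void build_book(hls::stream<BookUpdate>& in, BookLevel bids[DEPTH]) {
    #pragma HLS ARRAY_PARTITION variable=bids complete dim=1
    #pragma HLS PIPELINE II=1
        BookUpdate u = in.read();
    update_levels:
        for (int i = 0; i < DEPTH; ++i) {
    #pragma HLS UNROLL                 // compare all levels in parallel
            if (bids[i].price == u.price) {
                bids[i].size = u.size; // amend an existing level in place
            }
        }
    }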

A Platform for the Future

When optimising for speed, it is important to consider the complete set of functions needed to receive, manage and distribute market data. A solution making partial use of FPGAs may deliver some acceleration; however, the bottlenecks simply move to another part of the system, usually the function still implemented in software.

A pure FPGA-centric design that uses a modular FPGA approach to scale capacity can maintain nanosecond speeds even during market microbursts. Processing and normalising market data is the first step, but a matrix of FPGAs can then be extended to enrich fields, trigger order executions, conduct risk checks and even host the trading algorithm itself.
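
A pre-trade risk check shows how naturally such functions extend the pipeline. In the hedged sketch below, again HLS-style C++ with invented limit parameters, the check is a pair of comparisons evaluated in a single cycle, so it adds a small, fixed latency to every order.

    // Sketch of an in-FPGA pre-trade risk check (Vitis HLS style).
    // Order fields and limit parameters are illustrative assumptions.
    #include <ap_int.h>
    #include <hls_stream.h>

    struct Order { ap_uint<32> instrument; ap_uint<32> price; ap_uint<32> qty; };

    void risk_check(hls::stream<Order>& in, hls::stream<Order>& out,
                    ap_uint<32> max_qty, ap_uint<32> max_price) {
    #pragma HLS PIPELINE II=1
        Order o = in.read();
        // Both comparisons resolve in the same cycle; only compliant
        // orders are forwarded downstream.
        if (o.qty <= max_qty && o.price <= max_price) {
            out.write(o);
        }
    }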

