Nanosecond Market Data Feeds – FPGA Centric vs. FPGA Accelerated Designs

The “Microburst” Problem

Trading architects face the constant challenge of lowering latency. Today, that challenge means achieving nanosecond speeds deterministically, even during the short, intense bursts of market activity known as ‘microbursts’.

In fast-moving, ‘bursty’ markets, commodity CPUs can no longer offer the lowest latency or tolerate microbursts. Memory bottlenecks, OS overheads, poor parallelism, compiler limitations and slow networking stacks are among the many factors constraining performance.

So, having exhausted conventional methods using commodity CPUs, architects of trading systems have been turning to FPGA technologies to improve latencies and address this microburst problem.

Applying FPGAs for Pipeline Processing

Because FPGA chips can compute thousands of operations per clock cycle, they can be programmed to process market data using a technique known as pipelining. Pipelining allows processing of a new message to begin before previous messages have been fully processed.

A properly pipelined FPGA design can guarantee that every byte of incoming traffic is processed with the same latency, even at 100% network saturation. However, not all FPGA approaches are the same.
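
To make the idea concrete, below is a minimal sketch of a byte-per-cycle parser stage in HLS-style C++, assuming the AMD/Xilinx Vitis HLS toolchain (hls::stream, ap_uint and the PIPELINE pragma come from there); the function name, the FRAME_BYTES constant and the elided framing logic are hypothetical placeholders, not any vendor's actual feed handler.

```cpp
// A minimal sketch, assuming AMD/Xilinx Vitis HLS conventions.
// parse_stage and FRAME_BYTES are hypothetical placeholders.
#include <ap_int.h>      // arbitrary-width integers (ap_uint)
#include <hls_stream.h>  // on-chip FIFO streams (hls::stream)

const int FRAME_BYTES = 1500;  // assumed maximum frame size

void parse_stage(hls::stream<ap_uint<8> > &in,
                 hls::stream<ap_uint<8> > &out) {
    // II=1 asks the tool to accept one new byte every clock cycle, so
    // a new message can enter the pipeline before earlier ones have
    // drained, and per-byte latency stays constant even at line rate.
    for (int i = 0; i < FRAME_BYTES; ++i) {
#pragma HLS PIPELINE II=1
        ap_uint<8> b = in.read();  // blocking read from the ingress FIFO
        // ... framing and field-extraction logic would sit here ...
        out.write(b);              // hand the byte to the next stage
    }
}
```

The initiation interval of one is the property that matters: one byte in and one byte out every cycle, however saturated the link.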

Alternative FPGA Architectures

Today, there are two main approaches in use by the trading community:

* A CPU-based system, using FPGA acceleration technologies
* A pure FPGA system, using a matrix of FPGAs

Architecture 1 – CPU System with FPGA Acceleration:

Collectively known as ‘hardware acceleration’ techniques, this approach keeps the system primarily a CPU-centric architecture, with FPGAs used selectively to off-load certain functions and accelerate data processing.

In this case, FPGA acceleration is predominantly achieved through a PCIe card with an embedded FPGA. With this CPU-centric architecture, the FPGA is used as an off-load technology. Since some market data processing does occur in the FPGA, improvements in latency are delivered. However, this approach is still affected by the inherent bottlenecks of a traditional CPU system.

For example, limitations in PCIe slots, space and power make it difficult to scale the number of FPGAs used. In addition, communication between two PCIe devices generally takes several microseconds, which is a very long time in the world of market data processing.
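
To illustrate where those microseconds go, here is a hedged host-side sketch of the off-load pattern: the CPU submits a work descriptor across PCIe and polls for the result. The register map and field names are invented for illustration; real cards expose vendor-specific DMA and descriptor-ring interfaces.

```cpp
// Hypothetical host-side view of the CPU-centric off-load pattern.
// The memory-mapped register layout below is invented for illustration;
// real PCIe FPGA cards use vendor-specific DMA/descriptor interfaces.
#include <cstdint>

struct FpgaRegs {                  // imagined BAR-mapped register file
    volatile uint64_t doorbell;    // write here to submit a work item
    volatile uint64_t status;      // reads non-zero when the result is ready
    volatile uint64_t result;      // the FPGA's answer
};

uint64_t offload(FpgaRegs *regs, uint64_t descriptor) {
    regs->doorbell = descriptor;   // CPU -> FPGA: a posted PCIe write
    while (regs->status == 0) {    // FPGA -> CPU: polled PCIe reads,
        /* spin */                 // each one a full bus round trip
    }
    return regs->result;           // the round trip is measured in
}                                  // microseconds, not nanoseconds
```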

Therefore, this solution scales only by sharing the CPU further, so performance degrades when data rates increase, when more feeds are processed, or when more distribution interfaces are added.

This architecture is efficient for locally consumed market data, but it is still susceptible to microbursts.

Architecture 2 – A Pure FPGA Matrix Architecture:

When deterministic latency is critical, the goal is to avoid the bottlenecks of the CPU and maximise the use of FPGAs throughout the whole market data processing cycle. This FPGA-centric approach uses an expandable matrix of FPGA nodes linked by raw binary interconnections. The matrix architecture offers the flexibility to add FPGA resources efficiently as processing needs increase.

In addition, to avoid bottlenecks that can occur with other system resources, each FPGA node comes with its own set of I/Os and memory. This modular matrix allows the system to grow proportionally across multiple feeds.

As a result, market data parsing, book building, filtering and distribution can all be done in hardware with no bottleneck, irrespective of the number of feeds received, the data rate, or the number of downstream consumers.
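
The structure described above can be modelled in a few lines of plain C++. Everything here is a hypothetical illustration sketched from the text, not a vendor API: the point is simply that each node carries its own I/O and memory, and capacity grows by adding nodes rather than by sharing an already loaded CPU.

```cpp
// Hypothetical software model of the FPGA matrix described above; the
// types, fields and topology are illustrative, not a vendor API.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct FpgaNode {
    std::string           role;       // e.g. "feed_handler", "book_builder"
    uint32_t              io_ports;   // dedicated network I/Os on this node
    uint32_t              memory_mb;  // dedicated local memory on this node
    std::vector<uint32_t> links;      // raw binary interconnects to peers
};

int main() {
    // Adding a feed means adding a node with its own resources, leaving
    // the existing nodes, and their latency, untouched.
    std::vector<FpgaNode> matrix = {
        {"feed_handler", 2, 512,  {2}},  // node 0: parses exchange feed A
        {"feed_handler", 2, 512,  {2}},  // node 1: parses exchange feed B
        {"book_builder", 0, 1024, {3}},  // node 2: consolidates order books
        {"distributor",  4, 256,  {}},   // node 3: fans out to consumers
    };
    std::printf("matrix of %zu FPGA nodes\n", matrix.size());
    return 0;
}
```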

A Platform for the Future

When looking to optimise speed, it is important to consider the complete set of functions that must be performed to receive, manage and distribute market data. A solution making partial use of FPGAs may deliver some acceleration; however, the bottleneck simply moves to another part of the system, usually to whichever function remains implemented in software.

A pure FPGA-centric design that uses a modular FPGA matrix to scale capacity can maintain nanosecond speeds even during market microbursts. Processing and normalising market data is the first step, but the matrix of FPGAs can then be extended to enrich fields, trigger order executions, conduct risk checks and even host the trading algorithm itself.
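
As a sketch of what that extended pipeline might look like, the HLS-style C++ below chains the stages with a dataflow pragma so they run concurrently, connected by on-chip FIFOs. Again this assumes Vitis HLS conventions, and the Msg and Order types, stage names and their trivial bodies are placeholders for logic that would be far more involved in a real design.

```cpp
// Hedged sketch of the extended pipeline, in AMD/Xilinx Vitis HLS style.
// Msg, Order and every stage body are hypothetical placeholders.
#include <hls_stream.h>

struct Msg   { int field;  };  // stand-in for a normalised market data record
struct Order { int action; };  // stand-in for an outbound order

void build_book(hls::stream<Msg> &in, hls::stream<Msg> &out) {
#pragma HLS PIPELINE II=1
    out.write(in.read());              // real logic: update the order book
}

void strategy(hls::stream<Msg> &in, hls::stream<Order> &out) {
#pragma HLS PIPELINE II=1
    Msg m = in.read();
    out.write(Order{m.field});         // real logic: the trading decision
}

void risk_check(hls::stream<Order> &in, hls::stream<Order> &out) {
#pragma HLS PIPELINE II=1
    out.write(in.read());              // real logic: pre-trade limit checks
}

void trading_pipeline(hls::stream<Msg> &feed_in,
                      hls::stream<Order> &orders_out) {
#pragma HLS DATAFLOW                   // stages run concurrently,
    hls::stream<Msg>   books;          // linked by on-chip FIFOs
    hls::stream<Order> raw_orders;
    build_book(feed_in, books);
    strategy(books, raw_orders);
    risk_check(raw_orders, orders_out);
}
```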
