NovaSparks’ STAC-M1 Benchmark Highlights Determinism Under Load

A just-released STAC Report covering the performance of NovaSparks’ FPGA market data platform highlights not just its processing latency but also the deterministic nature of that latency under different data loads.

The STAC-M1 benchmark (as defined by financial markets participants and administered by the Securities Technology Analysis Center) measures the performance of direct data feed processing solutions according to a number of different criteria, including end-to-end latency and throughput.
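
As an illustration of what “end-to-end latency” and “throughput” mean for a feed handler, the sketch below wraps a hypothetical handle_message function with software timestamps and counts messages per second. It is a conceptual sketch only, not STAC’s measurement methodology; the handler and the sample messages are placeholders.

```python
import time

def handle_message(raw: bytes) -> bytes:
    """Hypothetical stand-in for the feed handler under test."""
    return raw.upper()  # pretend normalisation work

def measure(messages):
    """Collect one end-to-end latency sample (nanoseconds) per message,
    plus overall throughput in messages per second."""
    samples = []
    t0 = time.perf_counter_ns()
    for raw in messages:
        start = time.perf_counter_ns()
        handle_message(raw)
        samples.append(time.perf_counter_ns() - start)
    elapsed_s = (time.perf_counter_ns() - t0) / 1e9
    return samples, len(messages) / elapsed_s

latencies_ns, msgs_per_sec = measure([b"add order", b"trade", b"cancel"])
```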

The NovaSparks solution uses only FPGAs (field programmable gate arrays) in its architecture, in contrast to offerings that augment mainstream x86 processors with FPGA acceleration of certain functions. As such, the company claims its platform is less prone to latency variance – or jitter – than competing offerings.

The predictable – or deterministic – nature of the NovaSparks platform was borne out by the benchmark tests conducted by STAC, which simulated a Nasdaq TotalView ITCH feed being received at 2x and 20x a typical data rate at market open and close.
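
The report does not describe the replay harness in code terms, but the idea of driving a recorded feed at a multiple of its captured rate can be shown in a minimal sketch. The tuple format, the rate_multiple parameter and the send callback below are illustrative assumptions rather than part of the STAC-M1 setup.

```python
import time

def replay(messages, rate_multiple, send):
    """Replay recorded (timestamp_ns, payload) messages at a multiple of
    the captured rate, calling send(payload) for each one. A multiple of
    20 compresses the inter-message gaps to 1/20th of their recorded
    length -- the kind of acceleration described for the ITCH capture."""
    if not messages:
        return
    start = time.perf_counter_ns()
    first_ts = messages[0][0]
    for ts, payload in messages:
        # Scale each message's offset from the first captured timestamp.
        target = start + (ts - first_ts) // rate_multiple
        while time.perf_counter_ns() < target:
            pass  # spin-wait to preserve the compressed spacing
        send(payload)

# Three captured messages 1 ms apart, replayed at 20x (50 us apart).
recorded = [(0, b"A"), (1_000_000, b"B"), (2_000_000, b"C")]
replay(recorded, rate_multiple=20, send=lambda m: None)
```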

According to STAC: “During replay at 20 times recorded market data volumes, the NovaSparks solution demonstrated mean latency of just 1.4 microseconds, along with 99.9th percentile latency of just 2.8 microseconds. Jitter (standard deviation) was just 0.12 microseconds at 2x market rate and 0.15 microseconds at 20x market rate.”
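
For readers less familiar with how those three figures relate, the short sketch below derives a mean, a 99.9th percentile and a standard deviation from a set of per-message latency samples. The sample values and the nearest-rank percentile rule are illustrative assumptions; they are not how STAC computes the published numbers.

```python
import statistics

def latency_summary(samples_us):
    """Summarise per-message latencies (microseconds) with the statistics
    quoted in the report: mean, 99.9th percentile and standard deviation
    (the figure reported as jitter)."""
    ordered = sorted(samples_us)
    # Nearest-rank style index for the 99.9th percentile.
    idx = min(len(ordered) - 1, round(0.999 * (len(ordered) - 1)))
    return {
        "mean_us": statistics.fmean(ordered),
        "p99_9_us": ordered[idx],
        "stdev_us": statistics.pstdev(ordered),
    }

# Made-up samples; the real report aggregates millions of messages.
print(latency_summary([1.3, 1.4, 1.4, 1.5, 2.8]))
```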

While the push to reduce latency further is not as big a focus for many as it once was, maintaining deterministic latency is still important for many trading strategies. Keeping latency constant under extreme market conditions has historically been a challenge, and it’s one that NovaSparks is looking to solve with its FPGA platform.

“Deterministic processing of market data at ultra-low latency rates is a breakthrough for an industry that is constantly re-assessing their ability to trade across all market conditions,” says Michal Sanak, CIO at proprietary trading firm RSJ.

Related content

WEBINAR

Recorded Webinar: The Role of Data Fabric and Data Mesh in Modern Trading Infrastructures

The demands on trading infrastructure are intensifying. Increasing data volumes, the necessity for real-time processing, and stringent regulatory requirements are exposing the limitations of legacy data architectures. In response, firms are re-evaluating their data strategies to improve agility, scalability, and governance. Two architectural models central to this conversation are Data Fabric and Data Mesh. This...

BLOG

AI Personalization in Trading: Where We Are and Where We’re Heading

Ivan Kunyankin, Data Science Team Lead at Devexperts. AI may have started out its brokerage career in the back office, enhancing operational efficiency by providing human teams with actionable client insights, but it’s now being promoted to more sensitive client-facing roles. As AI tools continue to evolve and become normalized in more areas of daily life, financial...

EVENT

Buy AND Build: The Future of Capital Markets Technology

Buy AND Build: The Future of Capital Markets Technology London examines the latest changes and innovations in trading technology and explores how technology is being deployed to create an edge in sell-side and buy-side capital markets financial institutions.

GUIDE

Practicalities of Working with the Global LEI

This special report accompanies a webinar we held on the popular topic of The Practicalities of Working with the Global LEI, discussing the current thinking around best practices for entity identification and data management. You can register here to get immediate access to the Special Report.