
STAC Points to Everest Boost


In a report sponsored by data feed handler specialist SR Labs, the benchmarkers at STAC have just published data from initial tests run on Intel’s recently introduced Everest chip. Compared to Intel’s standard Westmere chip, one data point suggests a 22% reduction in mean latency.

Everest – or Intel’s Xeon X5698 – is a dual-core chip with each core running at 4.4 GHz, compared to the X5687 (aka Westmere), which has four cores at 3.6 GHz. That difference amounts to a roughly 22% per-core clock uplift (4.4/3.6 ≈ 1.22). Intel describes Everest as an “off roadmap” chip designed for “very specific, niche high performance computing applications” while still “running within warranty covered norms, specifications and safe thermal envelope.”

The tests were run using SR Labs’ MIPS (Market Data In Process System) feed handling software. While multi-core chips are often leveraged to boost application performance, some applications are inherently single-threaded and so benefit more from higher per-core clock speed than from additional cores. Market data feed handlers and exchange matching engines are two such applications.
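To make that single-threaded point concrete, here is a minimal, hypothetical sketch – not SR Labs’ MIPS code; the Message struct, pin_to_core helper and workload are invented purely for illustration – of a feed-handler-style hot loop pinned to one core. Because every message passes through one sequential decode-and-apply path, the time per message tracks the clock speed of that single core rather than the number of cores in the box.

```cpp
// Illustrative single-threaded hot loop (Linux/glibc; build with: g++ -O2 -pthread).
// Not production feed-handler code - just enough to show why per-core clock
// speed, not core count, governs this style of workload.
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Message { uint64_t seq; double price; };   // stand-in for a decoded ITCH message

// Keep the hot path on a single core so its speed is set by that core's clock.
static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    pin_to_core(0);

    // Synthetic "feed": one million pre-decoded messages.
    std::vector<Message> feed(1000000);
    for (uint64_t i = 0; i < feed.size(); ++i) feed[i] = {i, 100.0 + i * 1e-6};

    auto t0 = std::chrono::steady_clock::now();
    double book = 0.0;
    for (const Message& m : feed)     // strictly sequential: a second core cannot help
        book += m.price;              // stand-in for the per-message book update
    auto t1 = std::chrono::steady_clock::now();

    double ns_per_msg =
        std::chrono::duration<double, std::nano>(t1 - t0).count() / feed.size();
    std::printf("%.1f ns/msg (checksum %.2f)\n", ns_per_msg, book);
    return 0;
}
```

On a chip like Everest, the same loop would be expected to complete each iteration in proportionally fewer nanoseconds at 4.4 GHz than at 3.6 GHz.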

For the geeks, the two “stacks under test” comprised:

– SR Labs MIPS In-Process Market Data Line Handler for TVITCH 4.1 
– CentOS 5.5, 64-bit Linux 
– IBM x3650 Server 
– Myricom 10G-PCIE2-8B2-2S Network Interface 
– Processor: 
SUT A: 2 x quad-core Intel Xeon X5687 3.60 GHz (“Westmere”) 
SUT B: 2 x dual-core Intel Xeon X5698 4.40 GHz (“Everest”)

The test harness for this project incorporated TS-Associates’ TipOff and Simena F16 Fiber Optic Tap for wire-based observation, along with TS-Associates’ Application Tap cards for precise in-process observation. A Symmetricom SyncServer S350 was the time source for the harness.
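For readers curious how the wire-based and in-process observations turn into latency numbers, the sketch below shows the general idea rather than the actual TipOff/Application Tap tooling: each message’s capture timestamp at the network tap is matched by sequence number against the timestamp recorded inside the feed handler, with both referenced to a common clock, and the differences give the latency distribution. The Observation struct and field names are assumptions made for illustration.

```cpp
// Hypothetical correlation of wire-capture and in-process timestamps.
// Both observation streams are assumed to be disciplined to the same clock
// (the role played by the Symmetricom time source in the real harness).
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Observation { uint64_t seq; uint64_t t_ns; };  // message sequence number, timestamp in ns

// One-way latency per message: in-process timestamp minus wire-capture timestamp.
static std::vector<uint64_t> latencies(const std::vector<Observation>& wire,
                                       const std::vector<Observation>& in_process) {
    std::unordered_map<uint64_t, uint64_t> wire_by_seq;
    for (const Observation& w : wire) wire_by_seq[w.seq] = w.t_ns;

    std::vector<uint64_t> out;
    for (const Observation& p : in_process) {
        auto it = wire_by_seq.find(p.seq);
        if (it != wire_by_seq.end() && p.t_ns >= it->second)
            out.push_back(p.t_ns - it->second);   // nanoseconds from wire to application
    }
    return out;
}

int main() {
    // Toy data: three messages seen on the wire, then inside the handler.
    std::vector<Observation> wire       = {{1, 1000}, {2, 2000}, {3, 3000}};
    std::vector<Observation> in_process = {{1, 1550}, {2, 2490}, {3, 3620}};

    std::vector<uint64_t> lat = latencies(wire, in_process);
    if (lat.empty()) return 0;
    std::sort(lat.begin(), lat.end());

    uint64_t sum = 0;
    for (uint64_t v : lat) sum += v;
    std::printf("mean %.0f ns, max %llu ns over %zu msgs\n",
                double(sum) / lat.size(),
                static_cast<unsigned long long>(lat.back()), lat.size());
    return 0;
}
```

In the published tests, the wire timestamps would come from the fiber-tap capture and the in-process timestamps from the Application Tap cards; the sketch covers only the correlation-and-statistics step.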


Related content

WEBINAR

Recorded Webinar: The Role of Data Fabric and Data Mesh in Modern Trading Infrastructures

The demands on trading infrastructure are intensifying. Increasing data volumes, the necessity for real-time processing, and stringent regulatory requirements are exposing the limitations of legacy data architectures. In response, firms are re-evaluating their data strategies to improve agility, scalability, and governance. Two architectural models central to this conversation are Data Fabric and Data Mesh. This...

BLOG

AiMi Unveils Agentic Workflow to Automate Mandatory Market Changes

AiMi, a specialist in AI for trading and market data operations, has launched an end-to-end agentic workflow designed to streamline how firms manage mandatory changes from exchanges and market data vendors. The new capabilities build on AiMi’s existing AI-enabled platform, introducing a dynamic suite of digital agents that automate the tracking, review, and triage of market...

EVENT

TradingTech Summit New York

Our TradingTech Briefing in New York is aimed at senior-level decision makers in trading technology, electronic execution and trading architecture, and offers a day packed with insight from practitioners and from innovative suppliers happy to share their experiences in dealing with the enterprise challenges facing our marketplace.

GUIDE

Evaluated Pricing

Valuations and pricing teams are facing a much higher degree of scrutiny from both the regulatory community and the investor community in the glare of the post-crisis data transparency spotlight. Fair value price transparency requirements and the gradual move towards a more harmonised accounting standards environment are set within the context of the whole debate...