About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

BittWare’s TeraBox Bulks Up FPGA Processing for Trading Scale and Analytics


FPGA specialist BittWare has introduced TeraBox, an appliance that packs up to 16 FPGAs and is targeted at high-scale trading and analytics applications.

TeraBox supports up to eight BittWare S5-PCIe-DS cards, each of which hosts two Altera Stratix V FPGAs, 64 gigabytes of RAM and sixteen 10gE network ports. Thus, each 5U appliance can scale to 16 FPGAs, 512GB of RAM and 128 10gE ports. The appliance can also optionally host a traditional x86 processor, perhaps for monitoring or co-ordination functionality.
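The scaling arithmetic above can be checked with a short sketch; the per-card figures come straight from the specification quoted here:

```python
# Per-card resources for the BittWare S5-PCIe-DS, as described above.
FPGAS_PER_CARD = 2        # two Altera Stratix V FPGAs
RAM_GB_PER_CARD = 64
PORTS_10GE_PER_CARD = 16

CARDS_PER_APPLIANCE = 8   # a 5U TeraBox holds up to eight cards

def appliance_capacity(cards=CARDS_PER_APPLIANCE):
    """Total resources for a TeraBox populated with the given card count."""
    return {
        "fpgas": cards * FPGAS_PER_CARD,
        "ram_gb": cards * RAM_GB_PER_CARD,
        "ports_10ge": cards * PORTS_10GE_PER_CARD,
    }

print(appliance_capacity())
# → {'fpgas': 16, 'ram_gb': 512, 'ports_10ge': 128}
```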

According to BittWare’s vice president of systems and solutions Ron Huizen, TeraBox has two likely applications in the financial markets:

* For trading systems where the entire application logic is hosted on the FPGA card, TeraBox offers high scale in one appliance, thus reducing the cost of deployment compared with server-hosted approaches. Algorithmic trading and real-time risk control are applications that can likely be deployed more cost-effectively with TeraBox.
* For analytics applications, such as algo back testing, pre-trade analytics and risk management, TeraBox's multiple FPGAs can work together to provide parallelised performance. Connectivity between the FPGAs can be achieved via the chassis PCIe bus, or via the 10gE network.

The latter analytics use case would, to some extent, break new ground for FPGAs, which have typically been dedicated to specific functions, such as data feed processing. For the most part, parallel analytics applications have been targeted at multi-core x86 processors, or at GPUs.
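As an illustration of how such a workload might be spread across the appliance's FPGAs, the sketch below partitions back-test scenarios round-robin across devices. This is a generic scatter/gather pattern, not BittWare's API; the scenario count and chunking scheme are assumptions for the example:

```python
def partition(work_items, n_devices):
    """Round-robin the work items across n_devices accelerators."""
    buckets = [[] for _ in range(n_devices)]
    for i, item in enumerate(work_items):
        buckets[i % n_devices].append(item)
    return buckets

# Hypothetical example: 100 back-test scenarios spread over 16 FPGAs.
scenarios = list(range(100))
buckets = partition(scenarios, 16)

# Every scenario lands on exactly one device...
assert sum(len(b) for b in buckets) == 100
# ...and the load is balanced to within one item (100 = 6 * 16 + 4).
assert [len(b) for b in buckets[:4]] == [7, 7, 7, 7]
```

In practice each bucket would be dispatched to one FPGA (over PCIe or the 10gE fabric, as the article notes) and the partial results gathered and merged on the host.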

A key to deploying FPGAs for parallel applications will be the introduction of the next version of the OpenCL programming framework. OpenCL 2.0 – the specification for which was released this July – calls for support for dynamic parallelism and shared virtual memory.
