About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

BittWare’s TeraBox Bulks Up FPGA Processing for Trading Scale and Analytics


FPGA specialist BittWare has introduced TeraBox, an appliance that hosts up to 16 FPGAs and is targeted at high-scale trading and analytics applications.

TeraBox supports up to eight BittWare S5-PCIe-DS cards, each of which hosts two Altera Stratix V FPGAs, 64 gigabytes of RAM and 16 10GbE network ports. Each 5U appliance can thus scale to 16 FPGAs, 512GB of RAM and 128 10GbE ports. The appliance can optionally host a traditional x86 processor, perhaps for monitoring or co-ordination functionality.
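The chassis totals follow directly from the per-card figures quoted above; a minimal sketch (constants taken from those figures, naming is illustrative):

```python
# Per-card specs for the BittWare S5-PCIe-DS, as quoted above.
FPGAS_PER_CARD = 2      # Altera Stratix V FPGAs
RAM_GB_PER_CARD = 64    # gigabytes of RAM
PORTS_PER_CARD = 16     # 10GbE network ports

CARDS_PER_CHASSIS = 8   # maximum cards in one 5U TeraBox


def chassis_totals(cards=CARDS_PER_CHASSIS):
    """Scale the per-card figures up to a fully populated chassis."""
    return {
        "fpgas": cards * FPGAS_PER_CARD,
        "ram_gb": cards * RAM_GB_PER_CARD,
        "ports": cards * PORTS_PER_CARD,
    }


print(chassis_totals())  # {'fpgas': 16, 'ram_gb': 512, 'ports': 128}
```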

According to BittWare’s vice president of systems and solutions Ron Huizen, TeraBox has two likely applications in the financial markets:

* For trading systems where the entire application logic is hosted on the FPGA card, TeraBox offers high scale in one appliance, reducing the cost of deployment compared with server-hosted approaches. Algorithmic trading and real-time risk control are applications that can likely be deployed more cost effectively with TeraBox.
* For analytics applications, such as algo back testing, pre-trade analytics and risk management, TeraBox’s multiple FPGAs can work together to provide parallelised performance. Connectivity between the FPGAs can be achieved via the chassis PCIe bus, or via the 10GbE network.
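The announcement does not describe how such parallelisation would be orchestrated. As a purely hypothetical sketch, a host-side scheduler could shard back-testing jobs round-robin across the 16 FPGAs in a fully populated chassis (the device names and job structure below are illustrative assumptions, not BittWare APIs):

```python
from itertools import cycle

# Hypothetical device handles: 16 FPGAs in a fully populated chassis.
DEVICES = [f"fpga{i}" for i in range(16)]


def shard_jobs(jobs, devices=DEVICES):
    """Assign jobs to FPGA devices round-robin.

    Returns a mapping of device name -> list of jobs, so each device
    receives an even share of an embarrassingly parallel workload
    such as an algo back-testing parameter sweep.
    """
    assignment = {d: [] for d in devices}
    for job, device in zip(jobs, cycle(devices)):
        assignment[device].append(job)
    return assignment


# Example: 64 parameter variations of a back-test, 4 per FPGA.
jobs = [{"strategy": "momentum", "param": p} for p in range(64)]
plan = shard_jobs(jobs)
assert all(len(batch) == 4 for batch in plan.values())
```

Round-robin assignment is only sensible when jobs are roughly uniform in cost; a real scheduler would also have to account for transferring results back over PCIe or the 10GbE fabric.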

The latter analytics example would be to some extent breaking new ground for FPGAs, which typically have been dedicated to specific functions, such as data feed processing. For the most part, parallel analytics applications have been targeted at multi-core x86 processors, or at GPUs.

A key to deploying FPGAs for parallel applications will be the introduction of the next version of the OpenCL programming framework. OpenCL 2.0 – the specification for which was released this July – adds support for dynamic parallelism and shared virtual memory.
