Cantor Evaluating Calxeda ARM Chips for 10x Breakthrough

“I think the Calxeda-ARM machine is an exciting step … I’m evaluating carefully how it can impact the metrics I care about,” says Niall Dalton, director of high frequency trading at Cantor Fitzgerald. He is referring to today’s announcement by Calxeda of its very low power microprocessors based on the ARM architecture – and HP’s plan to build servers based on them.

ARM-based chips run on very low power, and are used by many manufacturers of consumer devices, such as mobile phones. Austin, Texas-based Calxeda is, however, building its chips for highly parallel server designs.

The initial EnergyCore processor – or Server on a Chip – from Calxeda includes four ARM cores, 4MB of L2 cache memory, an 80 gigabit per second interconnect and system/power management functions – all requiring just 1.5 watts of power.

HP will build servers with 288 EnergyCores in a 4U appliance. “A single rack of HP’s Calxeda servers delivers the throughput of some 700 traditional servers and dramatically simplifies the infrastructure needed to hook them all together and manage the cluster,” claims Calxeda co-founder and CEO Barry Evans.
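
Those density and throughput numbers invite a quick sanity check. The short Python sketch below works through the rack-level power comparison using the figures quoted above plus some purely illustrative assumptions of our own – roughly ten 4U appliances per rack and about 250 watts for a conventional server – which are not Calxeda or HP figures.

# Back-of-envelope comparison of Calxeda/HP node density versus conventional servers.
# The per-node wattage, node count and 700-server claim come from the announcement;
# the appliances-per-rack and conventional-server wattage are illustrative assumptions.
WATTS_PER_ENERGYCORE_NODE = 1.5      # announced draw per EnergyCore Server on a Chip
NODES_PER_4U_APPLIANCE = 288         # HP's planned 4U appliance
APPLIANCES_PER_RACK = 10             # assumption: ~40U of a 42U rack holds appliances
WATTS_PER_CONVENTIONAL_SERVER = 250  # assumption: typical 1U x86 server draw
CONVENTIONAL_SERVERS_REPLACED = 700  # Calxeda's claim for a single rack

calxeda_nodes = NODES_PER_4U_APPLIANCE * APPLIANCES_PER_RACK
calxeda_kw = calxeda_nodes * WATTS_PER_ENERGYCORE_NODE / 1000
conventional_kw = CONVENTIONAL_SERVERS_REPLACED * WATTS_PER_CONVENTIONAL_SERVER / 1000

print(f"Calxeda rack: {calxeda_nodes} nodes drawing ~{calxeda_kw:.1f} kW")
print(f"Conventional equivalent: {CONVENTIONAL_SERVERS_REPLACED} servers drawing ~{conventional_kw:.0f} kW")
print(f"Approximate power ratio: {conventional_kw / calxeda_kw:.0f}x")

On these assumptions the ARM-based rack draws a little over 4 kW against roughly 175 kW for the conventional servers it is claimed to replace – the sort of gap that makes the “10x breakthrough” talk plausible, even allowing for the memory, storage and fabric overhead not counted in the 1.5 watt chip figure.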

“Companies in our industry are constrained by space and power, yet our appetite for analysis is insatiable,” says Cantor’s Dalton, who continues: “We need a 10x breakthrough and this could be it. We are evaluating the Calxeda technology in hyperscale throughput computing for data and simulation intensive applications. The Calxeda Linux platform enables rapid porting of our software, enabling us to quickly leverage the energy-efficient ARM cores and Calxeda’s scalable communications fabric to scale our applications to new heights.”

For financial markets applications, it looks like Calxeda’s performance/power footprint could be a winner for firms that need to mine data to develop pre-trade models and post-trade simulations as fast as possible. And where those systems sit in outsourced managed environments – possibly in proximity and co-lo centres – the operational costs related to space and power can be considerable.
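
To give a feel for what that means in a hosted environment, here is an equally rough Python sketch of annual electricity cost, using an assumed blended power price and facility overhead (PUE) that are illustrative only – real co-lo contracts typically price committed kilowatts and space rather than metered kWh.

# Rough annual electricity cost for a given IT load in a hosted/co-lo setting.
# The rate and PUE below are illustrative assumptions, not quoted market prices.
HOURS_PER_YEAR = 8760
COST_PER_KWH = 0.12  # assumed blended electricity rate in USD
PUE = 1.8            # assumed power usage effectiveness (facility overhead)

def annual_power_cost(it_load_kw: float) -> float:
    """Yearly electricity cost for an IT load, including facility overhead."""
    return it_load_kw * PUE * HOURS_PER_YEAR * COST_PER_KWH

# Using the rack-level figures from the earlier sketch (~4.3 kW vs ~175 kW):
print(f"ARM-based rack: roughly ${annual_power_cost(4.3):,.0f} per year")
print(f"Conventional servers replaced: roughly ${annual_power_cost(175):,.0f} per year")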
 
