STAC Benchmarks IBM’s Hadoop

STAC – aka the Securities Technology Analysis Center – has benchmarked IBM’s proprietary Platform Symphony implementation of Hadoop MapReduce against the standard open source offering to compare their performance. On average, IBM’s implementation completed jobs 7.3 times faster than the standard and reduced total processing time by a factor of six.
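
The two figures measure different things: the 7.3x number averages the per-job speedup ratios, while the roughly 6x number is the ratio of total elapsed times, so they diverge whenever jobs of different sizes speed up by different amounts. The sketch below uses invented durations, not STAC’s data, purely to make that distinction concrete.

```java
// Illustrative only: made-up job durations, not STAC's measurements.
// Shows why the mean per-job speedup (average of ratios) and the
// reduction in total elapsed time (ratio of sums) are different metrics.
public class SpeedupMetrics {
    public static void main(String[] args) {
        double[] openSource = {10, 10, 10, 600};   // hypothetical seconds per job
        double[] symphony   = { 1,  1,  1, 120};   // same jobs on the faster runtime

        double sumRatios = 0, totalOpen = 0, totalSym = 0;
        for (int i = 0; i < openSource.length; i++) {
            sumRatios += openSource[i] / symphony[i];
            totalOpen += openSource[i];
            totalSym  += symphony[i];
        }
        // Prints a mean per-job speedup of 8.75x but a total-time reduction of only 5.12x.
        System.out.printf("Mean per-job speedup: %.2fx%n", sumRatios / openSource.length);
        System.out.printf("Total-time reduction: %.2fx%n", totalOpen / totalSym);
    }
}
```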

Better known for benchmarking low-latency trading platforms, STAC leveraged the Statistical Workload Injector for MapReduce (SWIM), developed at the University of California, Berkeley. SWIM provides a large set of diverse MapReduce jobs based on production Hadoop traces obtained from Facebook, along with information to enable characterisation of each job. STAC says it undertook the benchmarking because many financial markets firms are deploying Hadoop.
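
SWIM’s actual trace format and job generator live in the SWIM project itself and are not reproduced here. The sketch below is a hypothetical illustration of the general idea behind trace-driven replay: each synthetic job carries the inter-arrival gap and data-volume characteristics recorded in a production trace, and a replayer submits jobs in that pattern. All class and field names are invented for illustration.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of trace-driven workload replay in the spirit of SWIM.
// The field names and values are invented; SWIM's real trace format and
// tooling are defined by the SWIM project itself.
public class TraceReplaySketch {

    static class JobSpec {
        final long gapMs;        // wait this long after the previous submission
        final long inputBytes;   // bytes read by the map phase
        final long shuffleBytes; // bytes moved from map to reduce
        final long outputBytes;  // bytes written by the reduce phase

        JobSpec(long gapMs, long inputBytes, long shuffleBytes, long outputBytes) {
            this.gapMs = gapMs;
            this.inputBytes = inputBytes;
            this.shuffleBytes = shuffleBytes;
            this.outputBytes = outputBytes;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<JobSpec> trace = Arrays.asList(
            new JobSpec(0,    64L << 20,  8L << 20,   1L << 20),
            new JobSpec(500,   1L << 30, 256L << 20, 64L << 20),
            new JobSpec(1200, 128L << 20, 16L << 20,  2L << 20));

        for (JobSpec job : trace) {
            Thread.sleep(job.gapMs); // preserve the trace's inter-arrival pattern
            // A real replayer would submit a synthetic MapReduce job sized to
            // these byte counts; here we just log the intent.
            System.out.printf("submit job: in=%d shuffle=%d out=%d bytes%n",
                    job.inputBytes, job.shuffleBytes, job.outputBytes);
        }
    }
}
```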

The testbed comprised 17 IBM compute servers and one master server communicating over gigabit Ethernet. STAC compared Hadoop version 1.0.1 with Symphony version 5.2; both systems ran Red Hat Linux and used largely default configurations.

IBM attributes the superior performance of its offering in part to its scheduling speed. IBM’s Hadoop is API-compatible with the open source offering but runs on the Symphony grid middleware, which IBM gained through its acquisition of Platform Computing, completed in January of this year.
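
The compatibility claim means that MapReduce jobs written against the standard open source API should run unchanged whichever runtime schedules them. As a reference point, below is the canonical WordCount example written purely against the org.apache.hadoop.mapreduce API available in Hadoop 1.0.x; nothing in the job code refers to the underlying scheduler or grid middleware.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// The canonical Hadoop WordCount, written against the standard
// org.apache.hadoop.mapreduce API shipped with Hadoop 1.0.x.
public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE); // emit (word, 1) for each token
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result); // emit (word, total count)
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count"); // Job.getInstance() arrived later, in 2.x
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```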

For more information on STAC’s IBM Hadoop benchmark, see here.
