The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

In-Memory Heats Up for Low Latency and Big Data

Last week I was in Orlando for SAP’s SAPPHIRE NOW event, where the main focus was not on the company’s highly successful business and data warehouse applications – revenues in 2012 were around $20 billion – but on a product introduced in late 2010 and said to be its fastest selling ever. Mysteriously named HANA – some say in a nod to the company’s founder and current chairman Hasso Plattner, while others suggest it stands for High Performance Analytics Appliance – the product is an in-memory database, and it is a key player in a technology space that is heating up fast, with applications in both low latency and big data.

Performance is the driver of the move to in-memory. Accessing data in a server’s RAM (Random Access Memory) is about 100,000 times faster than accessing it from a hard disk. Since most trading applications – especially those implementing intelligent trading approaches – need to access some data to augment what might be contained in a price update, keeping that data in RAM is going to reduce tick-to-trade latency.
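To make that concrete, here is a minimal sketch of the enrichment step described above: a price update joined against reference data held in RAM (a plain Python dict) rather than fetched from disk on every tick. The symbols, field names and `enrich` function are hypothetical illustrations, not any vendor’s API.

```python
# Hypothetical sketch: augment a price update with reference data
# kept in RAM, avoiding a per-tick disk lookup.

reference_data = {
    "AAPL": {"lot_size": 100, "tick_size": 0.01},
    "IBM":  {"lot_size": 100, "tick_size": 0.01},
}

def enrich(tick):
    """Join a raw price update with in-memory reference data."""
    ref = reference_data.get(tick["symbol"], {})
    return {**tick, **ref}

enriched = enrich({"symbol": "AAPL", "price": 172.5})
```

In a real trading system the dict would be replaced by an in-memory data grid or database, but the latency argument is the same: the hot path never waits on a disk read.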

Truth be told, in-memory is nothing new. Just about all traditional disk-oriented databases – including those that came to SAP in its 2010 acquisition of Sybase, and financial markets-oriented offerings from Kx Systems – cache data in memory as part of a data retrieval process that is hard disk centric. HANA, though, is designed so that RAM is its primary data store, with hard disk (and non-volatile flash memory) for persistence.
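The distinction drawn above – disk-centric with a memory cache versus RAM as the primary store with disk kept only for persistence – can be sketched in a few lines. This is a toy illustration of the RAM-primary design, not how HANA is implemented; the class name and JSON snapshot format are assumptions for the example.

```python
import json
import os
import tempfile

class RamFirstStore:
    """Toy RAM-primary key-value store: every read is served from
    memory; disk (a JSON snapshot) exists only for durability."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)  # warm the in-memory store on start

    def put(self, key, value):
        self.data[key] = value            # primary store is RAM
        with open(self.path, "w") as f:   # write-through for persistence
            json.dump(self.data, f)

    def get(self, key):
        return self.data.get(key)         # never touches disk

# Demo: writes survive a restart, reads stay in memory.
path = os.path.join(tempfile.mkdtemp(), "store.json")
store = RamFirstStore(path)
store.put("AAPL", {"lot_size": 100})
reloaded = RamFirstStore(path)            # simulates a restart
```

A disk-centric database inverts this: `get` would read from disk (or a cache of recent disk pages), with memory treated as an optimization rather than the system of record.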

Even as an in-memory database, HANA is also not that new, or unique. One of the earliest in-memory databases that enjoyed commercial success (especially in the financial markets) was the TimesTen offering, spun out of Hewlett-Packard in 1996 (and acquired by Oracle in 2005).

What is new is that technology advances have made in-memory approaches more usable, and cost effective. 64-bit processor architectures can now address much larger RAM address spaces, servers can now pack in many terabytes of RAM, network protocols like RDMA can now connect servers together with very low latency so in-memory can scale out, and – last but not least – RAM is getting cheaper and cheaper.

SAP isn’t the only company with an in-memory offering, as several vendors have been turned on to its promise. Those include Software AG’s Terracotta unit (now under the stewardship of former Tibco Software exec Robin Gilthorpe), Tibco with ActiveSpaces, GigaSpaces Technologies with XAP, McObject’s eXtremeDB, GridGain, ScaleOut Software, and the new EMC/VMware Pivotal venture, which has sucked in GemFire as a key component.

Interestingly, in-memory is being explored not only by the low-latency world, but also by those looking to leverage big data approaches, such as Hadoop, and finding performance is an issue. Data warehouse vendor Teradata recently introduced its Intelligent Memory offering, which adds an in-memory component to its hard disk/flash product. Teradata determined that across its users’ implementations, 90% of data queries touched just 20% of the data stored, and the product uses algorithms to keep what it terms ‘hot’ data in RAM for fast retrieval.
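The hot-data idea – keep the small, frequently queried fraction in RAM and fall back to slower storage for the rest – is essentially a cache with an eviction policy. A minimal sketch using least-recently-used eviction follows; Teradata’s actual algorithms are not public, so the class name, LRU policy and `fetch_from_disk` callback here are illustrative assumptions only.

```python
from collections import OrderedDict

class HotDataCache:
    """Sketch of a hot-data tier: the most recently queried items
    stay in RAM; cold items are fetched from slower storage."""

    def __init__(self, fetch_from_disk, capacity):
        self.fetch = fetch_from_disk      # cold path: slow storage
        self.capacity = capacity
        self.hot = OrderedDict()          # insertion order tracks recency

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)     # hit: mark as recently used
            return self.hot[key]
        value = self.fetch(key)           # miss: go to slow storage
        self.hot[key] = value
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)  # evict least recently used
        return value

# Demo: the second lookup of "a" is served from RAM.
disk_reads = []
def slow_fetch(key):
    disk_reads.append(key)
    return key.upper()

cache = HotDataCache(slow_fetch, capacity=2)
cache.get("a")
cache.get("a")
```

If the 90/20 observation holds, a cache sized at roughly 20% of the data would absorb about 90% of queries at RAM speed.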

This hot data approach to cutting big data down to (less) size underpins most of the in-memory offerings, including HANA, though the implementation differs from product to product, and is certainly a ‘devil in the detail’ aspect to be understood when implementing this technology. Watch this space …
