Last week I was in Orlando for SAP’s SAPPHIRE NOW event, where the main focus was not on the company’s highly successful business and data warehouse applications – revenues in 2012 were around $20 billion – but on a product introduced in late 2010 and said to be its fastest-selling ever. Mysteriously named HANA – some say in a nod to the company’s founder and current chairman Hasso Plattner, while others suggest it stands for High Performance Analytic Appliance – the product is an in-memory database, and it is a key player in a technology space that is heating up fast, with relevance to both low-latency and big data applications.
Performance is the driver of the move to in-memory. Accessing data in a server’s RAM (Random Access Memory) is about 100,000 times faster than accessing it from a hard disk. Since most trading applications – especially those implementing intelligent trading approaches – need to access some reference data to augment what might be contained in a price update, keeping that data in RAM is going to reduce tick-to-trade latency.
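To make the point concrete, here is a minimal Python sketch – not drawn from any of the products discussed – that augments a price update with reference data twice: once by re-reading a file from disk on every lookup, and once from a dictionary already held in RAM. The file name and data set are purely illustrative, and the absolute numbers will vary by hardware; the orders-of-magnitude gap is the point.

```python
import json
import os
import time

# Purely illustrative: a small reference-data set persisted to a hypothetical file.
REF_FILE = "ref_data.json"
ref_data = {f"SYM{i}": {"lot_size": 100, "tick_size": 0.01} for i in range(10_000)}
with open(REF_FILE, "w") as f:
    json.dump(ref_data, f)

def augment_from_disk(symbol):
    # disk-centric path: re-read the file for every price update
    with open(REF_FILE) as f:
        return json.load(f)[symbol]

def augment_from_ram(symbol):
    # in-memory path: the reference data is already resident in RAM
    return ref_data[symbol]

def per_lookup(fn, n):
    # average seconds per lookup over n calls
    start = time.perf_counter()
    for i in range(n):
        fn(f"SYM{i % 10_000}")
    return (time.perf_counter() - start) / n

print(f"disk path: {per_lookup(augment_from_disk, 100):.6f} s per lookup")
print(f"RAM path:  {per_lookup(augment_from_ram, 100_000):.9f} s per lookup")
os.remove(REF_FILE)
```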
Truth be told, in-memory is nothing new. Just about all traditional disk-oriented databases – including those that came to SAP in its 2010 acquisition of Sybase, and financial markets-oriented offerings from Kx Systems – cache data in memory as part of a data retrieval process that is hard disk centric. HANA, though, is designed so that RAM is its primary data store, with hard disk (and non-volatile flash memory) for persistence.
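That distinction matters architecturally. As a rough sketch of the idea – and emphatically not HANA’s actual design – the toy Python class below treats RAM as the primary copy of the data and uses disk only as an append-only log for durability and recovery; a conventional disk-centric database inverts that relationship, treating memory as a cache in front of the disk.

```python
import json

class MemoryFirstStore:
    """Toy illustration of a memory-first store: RAM holds the primary copy,
    disk is only an append-only log used for durability and crash recovery."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}          # primary copy lives in RAM
        self._recover()

    def put(self, key, value):
        self.data[key] = value                      # reads will be served from RAM
        with open(self.log_path, "a") as log:       # disk touched only for persistence
            log.write(json.dumps({"k": key, "v": value}) + "\n")

    def get(self, key):
        return self.data[key]                       # never touches disk

    def _recover(self):
        # after a restart, rebuild the in-memory state by replaying the log
        try:
            with open(self.log_path) as log:
                for line in log:
                    rec = json.loads(line)
                    self.data[rec["k"]] = rec["v"]
        except FileNotFoundError:
            pass

# Usage: store = MemoryFirstStore(); store.put("SYM1", {"lot_size": 100}); store.get("SYM1")
```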
Even as an in-memory database, HANA is not that new, or unique. One of the earliest in-memory databases to enjoy commercial success (especially in the financial markets) was the TimesTen offering, spun out of Hewlett-Packard in 1996 (and acquired by Oracle in 2005).
What is new is that technology advances have made in-memory approaches more usable and cost effective. 64-bit processor architectures can address much larger memory address spaces, servers can now pack in many terabytes of RAM, network protocols like RDMA can connect servers together with very low latency so in-memory systems can scale out, and – last but not least – RAM is getting cheaper and cheaper.
SAP isn’t the only company with an in-memory offering, as several vendors have been turned on to its promise. Those include Software AG’s Terracotta unit (now under the stewardship of former Tibco Software exec Robin Gilthorpe), Tibco with ActiveSpaces, GigaSpaces Technologies with XAP, McObject’s eXtremeDB, GridGain, ScaleOut Software, and the new EMC/VMware Pivotal venture, which has sucked in GemFire as a key component.
Interestingly, in-memory is being explored not only by the low-latency world, but also by those looking to leverage big data approaches, such as Hadoop, and finding that performance is an issue. Data warehouse vendor Teradata recently introduced its Intelligent Memory offering, which adds an in-memory component to its hard disk/flash product. Teradata determined that, in its users’ implementations, 90% of queries touch just 20% of the data stored, so Intelligent Memory uses algorithms to keep what it terms ‘hot’ data in RAM for fast retrieval.
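As a rough illustration of the pattern – a guess at the general shape, not Teradata’s actual algorithm – a bounded LRU cache is one simple way to keep ‘hot’ rows in RAM while leaving the long tail on disk or flash. The cold_store callable below is a hypothetical stand-in for whatever fetches data from slower storage.

```python
from collections import OrderedDict

class HotDataCache:
    """Toy sketch of a 'hot data' policy: a bounded LRU cache keeps the most
    recently used rows in RAM and evicts the rest back to slower storage."""

    def __init__(self, cold_store, capacity=1000):
        self.cold_store = cold_store      # callable fetching a row from disk/flash
        self.capacity = capacity
        self.hot = OrderedDict()          # key -> row, kept in LRU order

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)     # mark as recently used
            return self.hot[key]
        row = self.cold_store(key)        # cache miss: fetch from cold storage
        self.hot[key] = row
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)  # evict the least recently used row
        return row

# Usage (load_row_from_disk is hypothetical):
# cache = HotDataCache(cold_store=lambda k: load_row_from_disk(k), capacity=10_000)
```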
This hot data approach to cutting big data down to (less) size underpins most of the in-memory offerings, including HANA, though the implementation differs from product to product, and it is certainly a ‘devil in the detail’ aspect to understand when adopting this technology. Watch this space …