
In-Memory Heats Up for Low Latency and Big Data


Last week I was in Orlando for SAP’s SAPPHIRE NOW event, where the main focus was not on the company’s highly successful business and data warehouse applications – revenues in 2012 were around $20 billion – but on a product introduced in late 2010 and said to be its fastest selling ever. Mysteriously named HANA – some say in a nod to the company’s founder and current chairman Hasso Plattner, while others suggest it stands for High Performance Analytics Appliance – the product is an in-memory database, and it is a key player in a technology space that is heating up fast, with applications for both low latency and big data applications.

Performance is the driver of the move to in-memory. Accessing data in a server’s RAM (Random Access Memory) is about 100,000 times faster than accessing it from a hard disk. Since most trading applications – especially those implementing intelligent trading approaches – need to look up reference or historical data to augment what is contained in a price update, keeping that data in RAM cuts tick-to-trade latency.
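To make the enrichment step concrete, here is a minimal Python sketch: a hypothetical in-memory reference-data dictionary (the names and fields are illustrative, not taken from any vendor product) augments an incoming price update with a single RAM-speed hash lookup.

```python
import time

# Hypothetical reference data, keyed by symbol and held entirely in RAM.
# Fields and names are illustrative only.
ref_data_in_ram = {
    "VOD.L": {"lot_size": 100, "tick_size": 0.0001, "venue": "XLON"},
}

def augment_tick(tick):
    # A RAM lookup is a hash-table probe measured in nanoseconds,
    # versus milliseconds for a cold read from spinning disk.
    meta = ref_data_in_ram[tick["symbol"]]
    return {**tick, **meta}

tick = {"symbol": "VOD.L", "bid": 1.2345, "ask": 1.2347}
start = time.perf_counter_ns()
enriched = augment_tick(tick)
elapsed = time.perf_counter_ns() - start
print(enriched, f"lookup took ~{elapsed} ns")
```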

Truth be told, in-memory is nothing new. Just about all traditional disk-oriented databases – including those that came to SAP in its 2010 acquisition of Sybase, and financial markets-oriented offerings from Kx Systems – cache data in memory as part of a data retrieval process that is hard disk-centric. HANA, though, is designed so that RAM is its primary data store, with hard disk (and non-volatile flash memory) used for persistence.
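The architectural distinction is easy to sketch. Below is a toy, hypothetical RamPrimaryStore in Python – a sketch of the general pattern, not any vendor’s design – in which RAM is the system of record and disk appears only as a persistence snapshot, never on the read path. A disk-primary database inverts this, treating its memory cache as a disposable accelerator in front of the disk.

```python
import json, os

class RamPrimaryStore:
    """Toy RAM-primary store: RAM is the system of record;
    disk is used only for persistence (here, a JSON snapshot)."""

    def __init__(self, snapshot_path="store.json"):
        self.path = snapshot_path
        # On startup, rebuild the in-memory image from the last snapshot.
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.data = json.load(f)
        else:
            self.data = {}

    def get(self, key):
        # Every read is served from RAM; disk is never on the read path.
        return self.data.get(key)

    def put(self, key, value):
        self.data[key] = value
        self._persist()  # real products use logs/flash, not full snapshots

    def _persist(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f)

store = RamPrimaryStore()
store.put("VOD.L", {"last": 1.2346})
print(store.get("VOD.L"))
```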

Even as an in-memory database, HANA is not that new, or unique. One of the earliest in-memory databases to enjoy commercial success (especially in the financial markets) was TimesTen, spun out of Hewlett-Packard in 1996 and acquired by Oracle in 2005.

What is new is that technology advances have made in-memory approaches more usable and cost effective. 64-bit processor architectures can now address much larger memory spaces, servers can now pack in many terabytes of RAM, network protocols like RDMA can now connect servers together with very low latency so in-memory can scale out, and – last but not least – RAM is getting cheaper and cheaper.
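The 64-bit point is simple arithmetic, sketched below: a 32-bit pointer can address at most 2^32 bytes (4 GiB), far too small for a terabyte-scale working set, while a 64-bit pointer raises the theoretical ceiling to 2^64 bytes (16 EiB).

```python
# 32-bit pointers cap a process at 2**32 bytes of addressable memory;
# 64-bit pointers raise the theoretical ceiling to 2**64 bytes, which is
# why terabyte-scale in-memory databases require 64-bit architectures.
print(f"32-bit: {2**32 / 2**30:.0f} GiB addressable")
print(f"64-bit: {2**64 / 2**60:.0f} EiB addressable")
```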

SAP isn’t the only company with an in-memory offering, as several vendors have been turned on to its promise. Those include Software AG’s Terracotta unit (now under the stewardship of former Tibco Software exec Robin Gilthorpe), Tibco with ActiveSpaces, GigaSpaces Technologies with XAP, McObject’s eXtremeDB, GridGain, ScaleOut Software, and the new EMC/VMware Pivotal venture, which has sucked in GemFire as a key component.

Interestingly, in-memory is being explored not only by the low-latency world, but also by those looking to leverage big data approaches, such as Hadoop, and finding performance is an issue. Data warehouse vendor Teradata recently introduced its Intelligent Memory offering, which adds an in-memory component to its hard disk/flash product. Teradata determined that 90% of its users’ data queries touched just 20% of the data stored, and the offering uses algorithms to keep what it terms ‘hot’ data in RAM for fast retrieval.
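A minimal sketch of the idea, assuming a hypothetical HotDataTier class (the promotion policy here is a simple frequency count, not Teradata’s actual algorithm): frequently requested rows are promoted into a bounded RAM tier, so the small fraction of data that draws most queries ends up served at memory speed.

```python
from collections import Counter

class HotDataTier:
    """Toy hot/cold tiering in the spirit of the 90%/20% observation:
    rows touched most often are promoted into a bounded RAM tier."""

    def __init__(self, cold_store, ram_capacity=2):
        self.cold = cold_store   # e.g. rows on disk/flash
        self.hot = {}            # RAM-resident copies
        self.hits = Counter()    # access frequency per key
        self.capacity = ram_capacity

    def get(self, key):
        self.hits[key] += 1
        if key in self.hot:
            return self.hot[key]   # fast path: served from RAM
        value = self.cold[key]     # slow path: fetched from the cold tier
        self._maybe_promote(key, value)
        return value

    def _maybe_promote(self, key, value):
        if len(self.hot) < self.capacity:
            self.hot[key] = value
            return
        # Evict the coldest resident row only if the new row is hotter.
        coldest = min(self.hot, key=lambda k: self.hits[k])
        if self.hits[key] > self.hits[coldest]:
            del self.hot[coldest]
            self.hot[key] = value

cold = {f"row{i}": f"value{i}" for i in range(5)}
tier = HotDataTier(cold)
for key in ["row0", "row0", "row1", "row2", "row0"]:
    tier.get(key)
print(sorted(tier.hot))  # the most-requested rows now live in RAM
```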

This hot data approach to cutting big data down to (less) size underpins most of the in-memory offerings, including HANA, though the implementation differs from product to product – certainly a ‘devil in the detail’ aspect to be understood when adopting this technology. Watch this space …

