
A-Team Insight Blogs

Opinion: Big Data Solutions to the Problem of Volume

By Amir Halfon, Senior Director for Technology, Capital Markets, Oracle Financial Services

With the latest developments in the European debt crisis reverberating across the globe, the importance of managing the large amounts of data related to risk exposures is more apparent than ever.

As I mentioned in my previous blog post, the ability to gain a holistic view of exposures and positions (which requires rapid, timely and aggregated access to exponentially growing amounts of financial data) is becoming paramount for financial institutions worldwide. The challenge many firms face right now is how to keep up with the sheer volume of data involved.

So, for this second installment, I’d like to focus on the seemingly most obvious of the “four Vs” of Big Data – volume – and talk about the technical patterns and approaches associated with processing very large amounts of data.

The most relevant strategy is of course parallelism, and while the industry has put a great deal of effort into parallelizing computation, data parallelism remains a challenge and is the focal point of most current projects. It is also becoming apparent that in many cases compute grids are bottlenecking on data access, so the pattern of moving compute tasks to the data, rather than moving large amounts of data over the network, is becoming more and more prevalent.
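
To make that principle concrete, here is a minimal Python sketch of the scatter-gather shape it takes: each worker aggregates only the partition it "owns" and returns a small partial result, so the bulk of the data never has to travel. The node names and exposure figures are invented for illustration, and no particular product's API is implied.

```python
# Toy sketch of data parallelism with compute moving to the data:
# each worker sums its own partition and only the partial results travel back.
from concurrent.futures import ProcessPoolExecutor

# Hypothetical per-node partitions of exposure figures (illustrative only).
PARTITIONS = {
    "node-1": [1_000_000.0, 250_000.0, 75_000.0],
    "node-2": [500_000.0, 1_250_000.0],
    "node-3": [900_000.0, 300_000.0, 60_000.0],
}

def local_exposure(node_id: str) -> float:
    # In a real deployment this would run on the node that already holds
    # the partition; here each worker simply reads its own slice.
    return sum(PARTITIONS[node_id])

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        partials = pool.map(local_exposure, PARTITIONS)
    # Only the small partial sums are shipped and combined.
    print("Aggregate exposure:", sum(partials))
```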

Several technical approaches combine these strategies, parallelizing both data management and computation, while bringing compute tasks close to the data:

Engineered Machines

Engineered machines integrate software and hardware mechanisms, combining data and compute parallelization with partitioning, compression and a high-bandwidth backplane to provide very high throughput data processing capabilities. Some of them are actually able to delegate query and analytics execution to the nodes that hold the data, thus radically minimizing data movement. They do this by replacing the traditional SAN with intelligent storage nodes that can do much more than simple I/O operations, and which are connected to the compute nodes with a high-throughput fabric such as InfiniBand.
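
To illustrate what delegating work to intelligent storage looks like in principle, here is a deliberately simplified sketch (the trade records and the smart_scan function are hypothetical, not any vendor's actual offload mechanism): the storage side applies the filter and projection itself, so only the already-reduced result set crosses the fabric.

```python
# Toy sketch of query offload to an intelligent storage node (illustrative only).
TRADES_ON_STORAGE_NODE = [
    {"desk": "rates", "notional": 5_000_000, "counterparty": "A"},
    {"desk": "credit", "notional": 12_000_000, "counterparty": "B"},
    {"desk": "rates", "notional": 800_000, "counterparty": "C"},
]

def smart_scan(predicate, columns):
    # Filter and project where the data sits; only the result set travels.
    return [{c: row[c] for c in columns}
            for row in TRADES_ON_STORAGE_NODE if predicate(row)]

# The compute node receives a small, already-filtered answer rather than raw blocks.
rates_exposure = smart_scan(lambda r: r["desk"] == "rates", ["counterparty", "notional"])
print(rates_exposure)
```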

Integrated Analytics

Whether using engineered machines or not, the concept of performing analytics right on the data management system is a very powerful one, again following the philosophy of moving computation to the data rather than the other way around. Whether it’s ROLAP, MOLAP, predictive, or statistical analytics, today’s relational database management systems are capable of doing a lot of computation right where the data is stored. Some of them actually integrate their data parallelism mechanisms with the analytical engines, so that analytical tasks are parallelized along the same principles.
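
As a minimal illustration of this "analytics where the data lives" idea, the sketch below pushes an aggregation into the database engine and pulls back only the summary, rather than fetching every row into the application. sqlite3 stands in for a full RDBMS purely for convenience, and the table and figures are made up.

```python
# Minimal sketch: the aggregation executes inside the database engine,
# so only the per-desk summary comes back to the application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (desk TEXT, instrument TEXT, exposure REAL)")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?)",
    [("rates", "IRS-10Y", 1_200_000.0),
     ("rates", "IRS-5Y", 700_000.0),
     ("credit", "CDS-XYZ", 2_500_000.0)],
)

# The database plans and runs the GROUP BY internally, close to the data.
for desk, total, avg in conn.execute(
    "SELECT desk, SUM(exposure), AVG(exposure) FROM positions GROUP BY desk"
):
    print(desk, total, avg)
```

In a full-scale system the engine would also parallelize that aggregation across its partitions, which is precisely the point: the application never has to see the raw rows.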

The combination of high throughput analytics with engineered machines has enabled several financial firms to dramatically reduce the time it takes to run analytical workloads. Whether it’s EOD batch processing, on-demand risk calculation, or pricing and valuations, firms are able to do a lot more in much less time, directly affecting the business by enabling continuous, on-demand data processing.

Data Grids

Unlike compute grids, data grids focus on the challenge of data parallelism. Some of them also provide the ability to ship compute tasks to the nodes holding the data in memory, rather than sending data to compute nodes as most compute grids do. Again, this is based on the principle that it’s cheaper to ship a compute task than it is to move large amounts of data across the wire.
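
The toy sketch below mimics that invocation pattern with a partitioned in-memory map. It is not the API of any actual data grid product, but it shows the shape of the idea: the supplied function is routed to the partition that owns the key and executed there, and only a small result travels back.

```python
# Toy data-grid sketch (hypothetical, not a real grid API): keys hash to
# partitions, and invoke() runs the task on the partition owning the key.
class ToyDataGrid:
    def __init__(self, partitions=4):
        self.partitions = [dict() for _ in range(partitions)]

    def _owner(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        self._owner(key)[key] = value

    def invoke(self, key, processor):
        # Ship the compute task to the owning partition; return a small result.
        store = self._owner(key)
        store[key], result = processor(store.get(key))
        return result

grid = ToyDataGrid()
grid.put("AAPL", {"position": 10_000, "last_px": 190.0})

def revalue(entry, new_px=191.5):
    # Mark-to-market on the node holding the entry; only the P&L comes back.
    pnl = entry["position"] * (new_px - entry["last_px"])
    entry["last_px"] = new_px
    return entry, pnl

print(grid.invoke("AAPL", revalue))
```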

Several firms have been using data grids to aggregate market data as well as positions data across desks and geographies. And some go even further by continuously executing certain analytics right on the nodes where this data is being held, achieving a real-time view of exposures, P&L and other calculated metrics.

NoSQL

The concept of schema-less data management (which is what NoSQL is really all about) has been gaining momentum in recent years. At its core is the notion that developers can be more productive by circumventing the need for complex schema design during the development lifecycle of data-intensive applications, especially when the data lends itself to key-value modelling (e.g. time-series data).

Despite being based on different principles, most of these technologies still follow a similar philosophy to data grids: they distribute the data horizontally across many nodes and model it in an object-oriented rather than a relational manner. They also enable the execution of compute tasks close to the data in order to minimize data movement over the network.
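
The sketch below captures that schema-less, shard-and-compute-locally flavour with plain Python structures; it is not modelled on any specific NoSQL product, and the tick data is invented. New fields can be written without a schema migration, and a map-style task computes a local VWAP on each shard so that only small per-symbol results need to be combined.

```python
# Schema-less sketch: ticks are stored as documents with no up-front schema,
# sharded by symbol, with a map-style aggregation running per shard.
from collections import defaultdict

shards = defaultdict(list)  # symbol -> list of tick documents

def write_tick(symbol, ts, price, size, **extra_fields):
    # Any extra field can be added later without altering a schema.
    shards[symbol].append({"ts": ts, "price": price, "size": size, **extra_fields})

write_tick("EURUSD", "2011-11-30T09:00:00Z", 1.3312, 5_000_000)
write_tick("EURUSD", "2011-11-30T09:00:01Z", 1.3315, 2_000_000, venue="EBS")

def local_vwap(ticks):
    notional = sum(t["price"] * t["size"] for t in ticks)
    volume = sum(t["size"] for t in ticks)
    return notional / volume if volume else None

# The "map" step runs where each shard lives; only per-symbol VWAPs travel.
print({symbol: local_vwap(ticks) for symbol, ticks in shards.items()})
```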

It is important to keep in mind that despite the name, NoSQL technologies are not necessarily antithetical to RDBMSs. In fact, they become much more powerful when combined with traditional data warehousing and business intelligence tools. I therefore tend to view these technologies on a continuum rather than in dialectical opposition.

In future posts, I’ll delve into this topic in more detail – particularly in relation to Hadoop, which is quickly becoming a de facto standard – and continue the discussion on the ‘four Vs’ of Big Data.
