
Big Data – The Other Side of Low Latency

We write a lot here about the latency of moving data from point A to point B. But latency is also inherent in the processing of that data at points A and B. Most likely, the processing of data is a more complex undertaking than transporting it. And that’s where big data comes in.

For sure, big data (or Big Data, as other publications refer to it) is the tech buzzword – and investment focus – of the moment, much as low latency has been in the past. Also like low latency, the definition of big data is fluid and is often bent to suit a particular vendor’s business. We’re no exception, so here’s the definition we use at BigDataForFinance.com:

“Datasets whose characteristics – size, data type and frequency – are beyond efficient processing, storage and extraction by traditional database management tools.”

Note that it’s not all about size, and that’s especially true for the financial markets, where even many years of time series tick data does not come close to the data volumes processed by the likes of Google and Facebook. But what the financial markets might lack in data size, they make up for in complexity and frequency – and in the need for accuracy and precision.

One of the more common big data applications relates to the storage of, and analytics on, tick-by-tick time series data. Here, the need is to capture data arriving at rates of up to several million updates per second for markets such as North American options. This generally requires specialised, in-memory approaches, since even massively parallel processing (MPP) databases – think EMC Greenplum, IBM Netezza and ParAccel – are not going to keep up. Vendors such as OneMarketData and Kx Systems are likely to be called upon for such storage, the latter perhaps paired with Kove’s RAM-based storage appliance.
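
To make the shape of that workload concrete, here is a minimal, purely illustrative in-memory tick store in Python – hypothetical code, not the API of kdb+, OneTick or any other vendor product – showing the two operations that matter: cheap appends on capture, and time-window lookups on query.

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

class InMemoryTickStore:
    """Illustrative per-symbol tick store: ticks are appended in
    timestamp order and queried back by time window."""

    def __init__(self):
        self._ticks = defaultdict(list)  # symbol -> [(ts, price, size), ...]

    def append(self, symbol, ts, price, size):
        # Capture path: one list append per update keeps ingest cheap.
        self._ticks[symbol].append((ts, price, size))

    def window(self, symbol, start_ts, end_ts):
        # Query path: binary search over the (monotonically increasing) timestamps.
        rows = self._ticks[symbol]
        keys = [r[0] for r in rows]
        lo = bisect_left(keys, start_ts)
        hi = bisect_right(keys, end_ts)
        return rows[lo:hi]

store = InMemoryTickStore()
store.append("AAPL", 1_000_000, 185.02, 100)
store.append("AAPL", 1_000_250, 185.03, 200)
print(store.window("AAPL", 1_000_000, 1_000_100))  # [(1000000, 185.02, 100)]
```

A real columnar tick database would hold timestamps, prices and sizes in separate arrays rather than rebuilding the key list on every query; the point here is simply the append-then-range-scan access pattern that these systems are built around.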

Time series applications include the creation and back-testing of quantitative trading models, some pre-trade risk checks, and transaction cost analysis (TCA). In the future, time series data could support more complex execution algorithms too.
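
As a flavour of the TCA side, the sketch below computes one common metric – implementation shortfall versus the arrival price, in basis points – over a handful of hypothetical fills; the figures and function name are ours, purely for illustration.

```python
def implementation_shortfall_bps(arrival_price, fills, side="buy"):
    """Slippage of the average executed price versus the arrival price,
    in basis points. `fills` is a list of (price, quantity) tuples."""
    total_qty = sum(q for _, q in fills)
    avg_px = sum(p * q for p, q in fills) / total_qty
    signed = (avg_px - arrival_price) if side == "buy" else (arrival_price - avg_px)
    return 10_000 * signed / arrival_price

# Hypothetical buy order: arrival price 50.00, three partial fills.
fills = [(50.01, 300), (50.02, 500), (50.04, 200)]
print(round(implementation_shortfall_bps(50.00, fills), 2))  # 4.2 bps of slippage
```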

Another growing big data application is the processing of natural language text and analytics on real-time and historical textual information in order to derive trading signals from news sources and social media. As an example, so-called sentiment analysis might process messages related to a company or market segment from Twitter, building a view on whether the subject of the tweets is being referred to in a positive or negative manner. This sentiment – combined with other inputs – can be used to make trading decisions ahead of systems that key off price changes in the market.
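
A heavily simplified, lexicon-based sketch of the idea follows – the word lists and ticker are made up, and real sentiment engines use far richer language models and entity resolution – but it shows the basic step from raw text to a numeric signal that could feed a trading decision.

```python
# Toy lexicon-based sentiment scorer, for illustration only.
POSITIVE = {"beat", "upgrade", "strong", "record", "growth"}
NEGATIVE = {"miss", "downgrade", "weak", "lawsuit", "recall"}

def tweet_sentiment(text):
    # Score one message: +1 per positive word, -1 per negative word.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate_signal(tweets):
    # Average per-tweet score; >0 leans positive, <0 leans negative.
    scores = [tweet_sentiment(t) for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0

tweets = [
    "ACME posts record quarter and strong growth",
    "analysts upgrade ACME to buy",
    "supplier lawsuit could hurt ACME",
]
print(aggregate_signal(tweets))  # 1.0 -> net positive sentiment
```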

The processing of big data generally requires grid or cluster infrastructure, with networks connecting 10s, 100s or 1,000s of servers and processing nodes. No wonder, then, that many of the network and middleware vendors engaged in low-latency connectivity also have a ‘story’ for big data. Names such as Informatica, Tibco Software, Cisco Systems, Arista Networks, Solarflare Communications, Mellanox Technologies, Tervela and Solace Systems all spring to mind. A couple of those – Informatica and Tibco – also offer more traditional big data analysis applications.

One technology often mentioned in the context of big data is Hadoop, an open source implementation of the MapReduce framework for processing very large datasets, such as searching for data patterns. It too is a multi-server parallel approach, but one that is historically batch oriented. One direction for it is to become more real time, introducing in-memory processing and low-latency middleware to boost performance. A real-time Hadoop approach would lend itself to driving electronic trading and on-demand determination of risk across an enterprise.
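
For readers unfamiliar with the programming model, here is a toy, single-process MapReduce sketch in Python – counting updates per symbol in a handful of made-up trade records. A real Hadoop job expresses the same map and reduce functions but runs them in parallel across many nodes over HDFS.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: emit (symbol, 1) for each trade record "symbol,price,size".
    symbol, _price, _size = record.split(",")
    yield (symbol, 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: sum the counts for each symbol.
    return (key, sum(values))

records = ["AAPL,185.02,100", "MSFT,410.10,50", "AAPL,185.03,200"]
mapped = chain.from_iterable(map_phase(r) for r in records)
print([reduce_phase(k, v) for k, v in shuffle(mapped).items()])
# [('AAPL', 2), ('MSFT', 1)]
```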

As automated trading systems move to leverage cloud-based infrastructure, accessing cloud-based big data services will be a natural route to take, leading to faster implementation and less ongoing management.

As we have suggested before, for many trading firms, tapping into the convergence of low latency, cloud and big data technologies will be the way to go. You’ll be hearing more on this architectural approach – so stay tuned!

