The knowledge platform for the financial technology industry

A-Team Insight Blogs

Big Data is Old News, We Need Meaningful Data


By Robert Iati, Senior Director, Capital Markets, Dun & Bradstreet

I spoke at the FIMA Canada event in September and heard many of the presentations and panels given by some of the industry’s best institutions, vendors, academics and data managers. Afterwards, I reflected on what I’d heard and found that all the content centred on one consistent theme: we have data, lots of data, Big Data. We are all data hogs, addicts really. But what do we do with all this data? How do we optimise its use?

The very term Big Data conveys the idea that, as firms that make their money trading, we need to take in all the data that is available. The more data we have, the more we can feed into our trading algorithms, risk systems and compliance reports. To steal (and alter) a phrase from Michael Douglas in ‘Wall Street’: ‘Big is good’.

Our trading institutions have indeed taken in data and benefitted greatly from it. The revolution in electronic trading was founded on access to all available data, as is modern risk management, while regulatory oversight depends on ever greater amounts of data to be effective.

As a result, decision makers in capital markets are bent on acquiring and using as much data as possible. To some extent, we’ve achieved this objective: all the data we want is close at hand, yet still we look for more. We’ve spent hundreds of millions of dollars on data and on technology to make it faster and more relevant, but not necessarily more meaningful.

How did this come about? When it comes to Big Data, we are in a seemingly endless loop. We collect more data and make more of it available through new channels, from which we then collect still more data. For example, internet blogs and news sites generate data at staggering rates, while hundreds of cable television channels, satellite radio stations and social media sources flood us with more data, often unstructured, yet now used in our decision-making models. We improve our analytics seemingly every day and, as we take in more data, we find more instances in which data is useful to us. The easy availability of this data further feeds our appetite for more. More will be better, we think.

We created technology to transmit, filter and scrub data, but the automation that helps us manage more data also creates more data. Trading algorithms create new orders and cancel others. Social media scrapers generate new trading signals. We develop different ways to aggregate data so that we have more indices and predictive metrics. In fact, the US Chamber of Commerce states that 90% of the world’s data has been created in the past three years and that 40% to 50% of all data created is created by technology itself.

Data will always be one step ahead of technology, but at this point, is more data necessarily better? I believe we have reached the point where most Wall Street institutions hold too much data that lacks clear definition or even a true purpose. As an industry, we often feel we are falling short unless we know everything about the data. It’s our nature, but taking in so much data without clearly understanding its purpose leaves institutions open to inefficiency and to the greater risk of drawing questionable conclusions from inaccurate data.

We can’t know it all, and we can’t wait until we do, because the pace of change is too quick and we never will. We need data but, more importantly, we need the ability to find unique, differentiated data and to leverage it intelligently. To optimise data is to draw the best insight from it, and that is what makes it meaningful.

When we look at all the data we have and all the models we create from it, we need to ask ourselves: what is missing? What is the data that, if we had it, would enable us to overcome our greatest obstacles?

I believe we would find it is data on opaque securities, private companies and the hidden linkages between them. To improve our trading acumen, we look for unique data that provides predictive signals of movement in a market, sector or name. We search for trends in the private sector that can help predict public markets’ activity. And to increase transparency, depth of insight into counterparty relationships and linkages reduces risk exposure and allows firms to deploy capital with confidence.

For more efficient data management, the ability to link entities precisely using reliable standard identifiers brings greater certainty to our enterprise data management infrastructure, which in turn improves operational efficiency. This unique data is out there but, in large part, it needs to be harvested more effectively to be meaningful and to maximise its value for capital markets institutions.

Big Data is great, but real insights are extracted from meaningful data. So, it’s good to be big, but it’s better to be meaningful.
