About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Big Data is Old News, We Need Meaningful Data


By Robert Iati, Senior Director, Capital Markets, Dun & Bradstreet

I spoke at the FIMA Canada event in September and heard many of the presentations and panels given by some of the best institutions, vendors, academics and data managers. Afterwards, I reflected on what I’d heard and found that all the content centred on one consistent theme: we have data, lots of data, Big Data. We are all data hogs, addicts really. But what do we do with all that data? How do we optimise its use?

The very term Big Data conveys the idea that, as firms that make their money trading, we need to take in all the data that is available. The more data we have, the more we can feed into our trading algorithms, risk systems and compliance reports. To steal (and alter) a phrase from Michael Douglas in ‘Wall Street’, ‘Big is good’.

Our trading institutions have indeed taken in data and have benefitted greatly from it. The revolution in electronic trading was founded on, and remains predicated on, access to all available data. So, too, is risk management, while regulatory oversight depends on ever greater amounts of data to be effective.

As a result, decision makers in capital markets are bent on acquiring and using as much data as possible. To some extent, we’ve achieved this objective: all the data we want is close at hand, yet still we look for more. We’ve spent hundreds of millions of dollars on data, and on technology to make it faster and more relevant, but not necessarily more meaningful.

How did this come about? When it comes to Big Data, we are in a seemingly endless loop. We collect more data and make more of it available through new channels, from which we then collect more data. For example, internet blogs and news sites generate data at staggering rates, while hundreds of cable television channels, satellite radio stations and social media sources flood us with more data, sometimes unstructured, but now used in our decision making models. We improve our technology analytics ostensibly every day and, as we get more data, we find more instances of data that is useful to us. The easy availability of this data further feeds our curiosity about the value of more data. More will be better, we think.

We created technology to transmit, filter and scrub data, but the automation that helps us manage more data also creates more data. Trading algorithms create new orders and cancel others. Social media scrapers generate new trading signals. We develop different ways to aggregate data so that we have more indices and predictive metrics. In fact, the US Chamber of Commerce states that 90% of the world’s data has been created in the past three years and that 40% to 50% of all data created is created by technology itself.

Data will always be one step ahead of technology, but at this point, is more data necessarily better? I believe we have reached the stage where most Wall Street institutions hold too much data without clear definition or even true purpose. As an industry, we often feel as if we are falling short unless we know everything about the data. It’s our nature, but taking in so much data without clearly understanding its purpose leaves institutions open to inefficiencies and to the greater risk of drawing questionable conclusions from data that may not be accurate.

We can’t know it all, and we can’t wait until we do, because the pace of change is too quick and we never will. We need data but, more importantly, we need the ability to find unique, differentiated data and to leverage it intelligently. To optimise data is to draw the best insight from it, and that is what makes it meaningful.

When we look at all the data we have and all the models we create from it, we need to ask ourselves: what is missing? What data, if we had it, would enable us to overcome our greatest obstacles?

I believe we would find it is data on opaque securities, private companies and the hidden linkages between them. To improve our trading acumen, we look for unique data that provides predictive signals of movement in a market, sector or name. We search for trends in the private sector that can help predict public markets’ activity. And to increase transparency, we need depth of insight into counterparty relationships and linkages, which reduces risk exposure and allows firms to deploy capital with confidence.

For more efficient data management, the ability to link entities precisely through reliable standard identifiers allows for greater certainty in our enterprise data management infrastructure, which in turn improves operational efficiency. This unique data is out there but, in large part, it needs to be harvested more effectively to be meaningful and to maximise its value for capital markets institutions.
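As a minimal illustration of the entity-linking idea above, the sketch below joins counterparty records from two sources through a shared standard identifier (modelled loosely on LEI-style codes). Every name, identifier and value here is invented for illustration; real enterprise data management would add validation, identifier mapping and far more besides.

```python
# Hypothetical sketch: linking records from two data sources through a
# standard entity identifier, then aggregating exposure per entity.
# All identifiers, names and figures are invented for illustration.

counterparties = [
    {"id": "5493001X7AB2QZ3V1H28", "name": "Acme Capital LLC"},
    {"id": "969500HZV3T9M8XW4K10", "name": "Borealis Trading Ltd"},
]

exposures = [
    {"id": "5493001X7AB2QZ3V1H28", "exposure_usd": 12_500_000},
    {"id": "969500HZV3T9M8XW4K10", "exposure_usd": 4_750_000},
    {"id": "969500HZV3T9M8XW4K10", "exposure_usd": 1_250_000},
]

def link_by_identifier(entities, records):
    """Join records to entities on the shared identifier and
    total the exposure held against each entity."""
    by_id = {e["id"]: {**e, "exposure_usd": 0} for e in entities}
    for r in records:
        if r["id"] in by_id:  # only link records with a known entity
            by_id[r["id"]]["exposure_usd"] += r["exposure_usd"]
    return list(by_id.values())

linked = link_by_identifier(counterparties, exposures)
for row in linked:
    print(row["name"], row["exposure_usd"])
```

The point of the sketch is that once every source carries the same reliable identifier, aggregation across systems becomes a simple join; without it, firms fall back on fuzzy name matching, which is exactly where hidden counterparty linkages get lost.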

Big Data is great, but real insights are extracted from meaningful data. So, it’s good to be big, but it’s better to be meaningful.
