
Big Data is Old News, We Need Meaningful Data


By Robert Iati, Senior Director, Capital Markets, Dun & Bradstreet

I spoke at the FIMA Canada event in September and heard many of the presentations, panels and talks given by some of the industry's best institutions, vendors, academics and data managers. When it was done, I thought back on what I'd heard and found that all the content centred on one consistent theme: we have data, lots of data, Big Data. We are all data hogs, addicts really. But what do we do with all the data? How do we optimise its use?

The term Big Data alone conveys the idea that, as firms that make their money trading, we need to take in all the data that is available. The more data we have, the more we can feed into our trading algorithms, risk systems and compliance reports. To steal (and alter) a phrase from Michael Douglas in ‘Wall Street’: ‘Big is good’.

Our trading institutions have indeed taken in data and have benefitted greatly from it. The electronic trading revolution was founded on, and still depends on, access to all available data. So, too, does risk management, while regulatory oversight depends on ever greater amounts of data to be fully effective.

As a result, the decision makers in capital markets are bent on acquiring and using as much data as possible. To some extent, we’ve achieved this objective: all the data we want is close at hand, yet still we look for more. We’ve spent hundreds of millions of dollars on data and technology to make it faster and more relevant, but not necessarily more meaningful.

How did this come about? When it comes to Big Data, we are in a seemingly endless loop. We collect more data and make more of it available through new channels, from which we then collect more data. For example, internet blogs and news sites generate data at staggering rates, while hundreds of cable television channels, satellite radio stations and social media sources flood us with still more data, much of it unstructured yet now used in our decision-making models. We improve our analytics seemingly every day and, as we get more data, we find more of it that is useful to us. The easy availability of this data further feeds our curiosity about the value of more data. More will be better, we think.

We created technology to transmit, filter and scrub data, but the automation that helps us manage more data also creates more data. Trading algorithms create new orders and cancel others. Social media scrapers generate new trading signals. We develop different ways to aggregate data so that we have more indices and predictive metrics. In fact, the US Chamber of Commerce states that 90% of the world’s data has been created in the past three years and that 40% to 50% of all new data is created by technology itself.
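To make that loop concrete, here is a minimal sketch of how a social media scraper’s output might be scored into a crude trading signal. The word lists, threshold and sample posts are all invented for illustration; they stand in for the far richer models firms actually use.

```python
# Minimal sketch: turning unstructured social media text into a crude
# trading signal. The word lists, threshold and data are hypothetical.

POSITIVE = {"beat", "upgrade", "growth", "record", "strong"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "recall", "weak"}

def sentiment_score(posts: list[str]) -> float:
    """Average of per-post scores in [-1, 1] based on keyword counts."""
    scores = []
    for post in posts:
        words = post.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        scores.append((pos - neg) / total if total else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

def signal(posts: list[str], threshold: float = 0.3) -> str:
    """Map an aggregate sentiment score to a notional trading signal."""
    score = sentiment_score(posts)
    if score > threshold:
        return "long"
    if score < -threshold:
        return "short"
    return "neutral"

posts = ["Record quarter, strong growth", "Analyst downgrade after earnings miss"]
print(signal(posts))  # -> "neutral" (the two posts cancel out)
```

Even this toy version shows the point of the paragraph above: the scraper’s output is itself new data, feeding the next round of collection and modelling.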

Data will always be one step ahead of technology, but at this point, is more data necessarily better? I believe we have reached the point where most Wall Street institutions have too much data that lacks clear definition or even true purpose. As an industry, we often feel as if we are not doing well unless we know everything about the data. It’s our nature, but taking in so much data without clearly understanding its purpose leaves institutions open to inefficiency and to the greater risk of drawing questionable conclusions from data that may not be accurate.

We can’t know it all, and we can’t wait until we do, because the pace of change is too quick and we never will. We need data but, more importantly, we need to be confident in our ability to find unique, differentiated data and to leverage it intelligently. To optimise data is to draw the best insight from it, and that is what makes it meaningful.

When we look at all the data we have and all the models we create from it, we need to ask ourselves: what is missing? What is the data that, if we had it, would enable us to overcome our greatest obstacles?

I believe we would find it is data on opaque securities, private companies and the hidden linkages between them. To improve our trading acumen, we look for unique data that provides predictive signals of movement in a market, sector or name, and we search for trends in the private sector that can help predict activity in the public markets. To increase transparency, we need depth of insight into counterparty relationships and linkages, which reduces risk exposure and allows firms to deploy capital with confidence.

For more efficient data management, the ability to link entities precisely with reliable standard identifiers brings greater certainty to our enterprise data management infrastructure, which in turn improves operational efficiency. This unique data is out there but, in large part, it needs to be harvested more effectively to be meaningful and to maximise its value for capital markets institutions.
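As a rough illustration of what identifier-based entity linking and counterparty roll-up can look like, here is a minimal sketch. The identifiers, names, hierarchy and figures are all hypothetical, standing in for reference data such as LEIs or similar standard identifiers and a proper matching engine.

```python
# Minimal sketch of identifier-based entity linking and exposure roll-up.
# All identifiers, names and amounts are invented for illustration.

# Reference data: normalised legal name -> standard entity identifier.
REFERENCE = {
    "acme holdings plc": "ID-001",
    "acme securities ltd": "ID-002",
}

# Corporate hierarchy: child identifier -> parent identifier.
PARENT = {"ID-002": "ID-001"}

def normalise(name: str) -> str:
    """Crude normalisation: lowercase and collapse whitespace."""
    return " ".join(name.lower().split())

def resolve(name: str) -> str | None:
    """Link a raw counterparty name to its standard identifier, if known."""
    return REFERENCE.get(normalise(name))

def ultimate_parent(entity_id: str) -> str:
    """Walk the hierarchy up to the top-level parent."""
    while entity_id in PARENT:
        entity_id = PARENT[entity_id]
    return entity_id

def rolled_up_exposure(trades: list[tuple[str, float]]) -> dict[str, float]:
    """Aggregate exposure per ultimate parent; unresolved names are flagged."""
    totals: dict[str, float] = {}
    for name, amount in trades:
        entity = resolve(name)
        key = ultimate_parent(entity) if entity else f"UNRESOLVED:{name}"
        totals[key] = totals.get(key, 0.0) + amount
    return totals

trades = [("ACME Holdings PLC", 5.0), ("Acme Securities Ltd", 3.0)]
print(rolled_up_exposure(trades))  # {'ID-001': 8.0} — one parent, total exposure
```

The point of the sketch is the roll-up: two counterparties that look distinct at the trade level resolve to a single ultimate parent, which is exactly the kind of hidden linkage that changes a firm’s view of its exposure.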

Big Data is great, but real insights are extracted from meaningful data. So, it’s good to be big, but it’s better to be meaningful.

