
Talking Reference Data with Andrew Delaney: The Humpty Dumpty Factor

Humpty Dumpty famously said:

“When I use a word, it means just what I choose it to mean—neither more nor less.”

I think he was on to something.

As I move within reference data circles – recently in London, Zurich, Frankfurt and New York – I find that the term reference data can be applied to a broad church of data disciplines. Old favourites include end-of-day pricing data, security master files and identifiers, valuations, corporate actions and entity identifiers. At a stretch, we could include calendars, perhaps. Tax information. And on.

But in my travels I’ve recently noticed a propensity among reference data practitioners to get passionate – nay, even hot under the collar – about data types heretofore considered outside of our remit here at Reference Data Review.

Two in particular have piqued my interest, and in line with Humpty’s pronouncement I’m going to add them to my personal list of what I consider to be reference data.

The first is index data.

As you may recall, at FIMA DACH in Frankfurt the other week I stumbled across a panel discussion that indicated a degree of outrage at the licensing fees and fee structures of some of the major providers of index calculations and the underlying data they need to function. Indeed, the panel alerted us to the fact that the EU is embarking on an inquiry into market practices of index providers, a study that kicked off last month.

What’s clear is that in these straitened times, scrutiny of data costs is extending into new areas. The result is that index data – for some reason held until now in greater reverence than boring old market data, for example – is being put through the cost and consolidation wringer, just like everything else. It will be interesting to watch how that pans out, and you’ll be able to read about it here.

The second is benchmark pricing, and by extension the whole of the contributed data segment.

The Libor scandal has truly opened a can of worms. Not only are the culprits being castigated for lying in their Libor contributions, but the affair has forced banks to ask themselves: where are we sourcing our contributed data, how is it being distributed, and to whom and for what purpose?

I’ll be looking at this in more depth in the coming weeks. But what started as a set of advertisements – bank currency rates on Reuters FXFX, for example – is now a much more serious proposition.

The consumer has changed, from a bank or asset manager counterparty making a call to the contributing bank to get a current market rate, to a machine that may take the rate and pump out a bunch of trades based on its value. And yet the process for sourcing that rate may not have changed in 30 years. Are banks aware of what rates they are posting to Thomson Reuters, Bloomberg and the rest? Do they know what systems – or spreadsheets, or people – came up with the rates? Can they stand by those rates?

We’ll be delving into this as the market wrestles with how it’s going to calculate Libor (and the other bors) going forward.
