The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

The Cost of Dirty Data

By Giles Nelson, Chief Technology Officer, Financial Services, MarkLogic

The cost of dirty data – data that is inaccurate, incomplete or inconsistent – is enormous. Earlier this year, Gartner reported that poor quality data cost the average organisation $15 million in 2017. These findings were reinforced by MIT Sloan Management Review, which reported that dirty data costs the average business an astonishing 15% to 25% of revenue.

With global revenues of around $80 billion per year, just in investment banking, this means the cost of dirty data in financial services is astronomical. So, where does it come from and what can be done about it?

What’s the source?

Human error is a significant source. An Experian study found human error is a factor in over 60% of dirty data. When different departments enter related data into separate data silos, without proper governance, fouling of downstream data warehouses, data marts and data lakes will occur. Records will be duplicated, with variations such as misspelled names and addresses. Data silos with poor constraints will also lead to dates, account numbers or personal information being held in different formats, making them difficult or impossible to reconcile automatically.
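The reconciliation problem is concrete: the same date captured in four silos can arrive in four formats. A minimal sketch of normalisation, using hypothetical example values and an assumed, vetted list of candidate formats:

```python
from datetime import datetime

# Hypothetical examples of one date as captured by different silos.
RAW_DATES = ["31/12/2017", "2017-12-31", "12/31/2017", "31 Dec 2017"]

# Candidate formats tried in order. Note the ambiguity risk: "01/02/2017"
# matches the first pattern even if a US silo meant 2 January, so in
# practice the format should be pinned per source system, not guessed.
FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

def normalise_date(raw: str) -> str:
    """Return an ISO-8601 date string, or raise if no format matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {raw!r}")

print([normalise_date(d) for d in RAW_DATES])
# All four normalise to "2017-12-31"
```

The point of the comment on ambiguity is the governance lesson of the paragraph above: normalisation per silo only works when each silo's conventions are known and enforced.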

Further, once created, dirty data can remain hidden for years, which makes it even more difficult to detect and deal with when it is actually found. Most businesses only find out about dirty data when it’s reported by customers or prospects – a particularly poor way to track down and solve data issues.

And, still in 2018, dealing with print is an issue for many financial services firms. The scanning, marking up and import of printed documents is a recipe for the introduction of errors.

Many organisations search for inconsistent and inaccurate data using manual processes because their data is decentralised and in too many different systems. Harvard Business Review reports that analysts spend 50% of their time searching for data, correcting errors and seeking out confirmatory sources for data they don’t trust. These processes tend to fall into the same trap as the data – instead of consolidated processing, each department is responsible for its own data inaccuracies. While this may work in some instances, it also contributes to internal inconsistencies between department silos. The fix happens in one place, but not in another, which just leads to more data problems.

The impacts of dirty data

All of these issues result in enormous productivity losses and, perhaps worse, to a systemic loss of confidence in the data being used to power the business. The estimates above of revenue loss because of poor data seem extraordinary, but even if they represent the upper limit of the true cost, the impact is still very significant.

In a highly regulated industry such as financial services, dirty data has an even greater cost. Missing, incomplete and inaccurate data can lead to the wrong trade being made, decisions taking even longer as further manual checks are carried out, and regulatory breaches. MiFID II has, of course, placed significant extra burdens on financial firms to ensure their data is in order.

Cleaning up the mess

What can be done? Here are a few things that organisations having difficulty with dirty data should be thinking about:

  • Achieving one golden version of data has long been an objective. Be careful though – doing this for all the data in an organisation, without setting the whole data estate in concrete, is an impossible task.
  • Take a data-first approach, rather than model first. Cleaning up dirty data involves the removal of invalid entries, duplicates, combining previously siloed records etc. The path to clean-up can be incremental. Taking the conventional approach and imposing a data model first, before doing anything with the data, leads to less flexibility and more cost.
  • Start building confidence in the data. Too often, data is present in isolation, with no knowledge of its provenance – when it was created, its source system and whether it’s been combined with other data. This metadata is valuable in proving a data item’s worth and actually preventing dirty data in the first place.
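The second and third points above can be sketched together: an incremental, data-first clean-up step that collapses duplicates across two hypothetical silo feeds while attaching provenance metadata (source system and load time) to each golden record. The feed names, field names and matching rule are all illustrative assumptions, not a prescribed implementation.

```python
from datetime import datetime, timezone

def canonical_key(record):
    """Normalise the fields used for matching.
    Production matching would be fuzzier (e.g. tolerate misspellings)."""
    return (record["name"].strip().lower(), record["account"].strip().upper())

def merge_with_provenance(feeds):
    """Collapse duplicates across feeds, keeping provenance metadata."""
    merged = {}
    for source, records in feeds.items():
        for rec in records:
            key = canonical_key(rec)
            if key not in merged:
                merged[key] = {
                    "record": rec,
                    "sources": [],  # which silos contributed this record
                    "loaded_at": datetime.now(timezone.utc).isoformat(),
                }
            merged[key]["sources"].append(source)
    return merged

# Hypothetical silo feeds: the CRM holds the same client twice,
# once with whitespace and case noise.
crm = [{"name": "Jon Smith", "account": "ACC-001"},
       {"name": "jon smith ", "account": "acc-001"}]
billing = [{"name": "Ada Lovelace", "account": "ACC-002"}]

golden = merge_with_provenance({"crm": crm, "billing": billing})
print(len(golden))  # 2 distinct records, not 3
```

Because the merge runs feed by feed, a new silo can be folded in later without remodelling what is already clean – the incremental path the data-first point argues for – and the `sources` list is exactly the provenance that builds confidence in each item.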

In conclusion, it’s worth stopping dirty data from slowing you down. The business impact of dirty data is staggering, but an individual organisation can avoid the morass if it takes the right approach. Clean, reliable data makes the business more agile and responsive, and cuts down wasted effort by data scientists and knowledge workers. And remember that 25% potential loss of revenue. It’s there to be clawed back.
