About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Cost of Dirty Data


By Giles Nelson, Chief Technology Officer, Financial Services, MarkLogic

The cost of dirty data – data that is inaccurate, incomplete or inconsistent – is enormous. Earlier this year, Gartner reported that poor quality data cost organisations an average of $15 million in 2017. These findings were reinforced by MIT Sloan Management Review, which reported that dirty data costs the average business an astonishing 15% to 25% of revenue.

With global revenues of around $80 billion per year, just in investment banking, this means the cost of dirty data in financial services is astronomical. So, where does it come from and what can be done about it?

What’s the source?

Human error is a significant source: an Experian study found that it plays a part in over 60% of dirty data. When different departments enter related data into separate silos without proper governance, downstream data warehouses, data marts and data lakes become fouled. Records are duplicated, names and addresses are misspelled, and poorly constrained silos store dates, account numbers and personal information in different formats, making records difficult or impossible to reconcile automatically.
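As a minimal sketch of the format problem, the Python snippet below (illustrative only – the dates and format list are hypothetical, not taken from any particular system) shows three silos capturing the same date three different ways, and a simple normalisation pass that reconciles them into one ISO 8601 form:

```python
from datetime import datetime

# Hypothetical examples of the same date as captured by three silos.
RAW_DATES = ["03/04/2018", "2018-04-03", "3 April 2018"]

# Candidate formats each silo might have used. Order matters for
# ambiguous inputs (e.g. UK day/month vs US month/day ordering).
KNOWN_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d %B %Y"]

def normalise_date(raw: str) -> str:
    """Return an ISO 8601 date string, trying each known format in turn."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue  # not this format; try the next one
    raise ValueError(f"Unrecognised date format: {raw!r}")

print([normalise_date(d) for d in RAW_DATES])
# All three normalise to "2018-04-03"
```

Without an agreed canonical form and a governed list of accepted input formats, each downstream consumer ends up writing (and maintaining) its own version of this logic – which is exactly how inconsistencies multiply.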

Further, once created, dirty data can remain hidden for years, which makes it even more difficult to detect and deal with when it is actually found. Most businesses only find out about dirty data when it’s reported by customers or prospects – a particularly poor way to track down and solve data issues.

And, still in 2018, dealing with print is an issue for many financial services firms. The scanning, marking up and import of printed documents is a recipe for the introduction of errors.

Many organisations search for inconsistent and inaccurate data using manual processes because their data is decentralised and in too many different systems. Harvard Business Review reports that analysts spend 50% of their time searching for data, correcting errors and seeking out confirmatory sources for data they don’t trust. These processes tend to fall into the same trap as the data – instead of consolidated processing, each department is responsible for its own data inaccuracies. While this may work in some instances, it also contributes to internal inconsistencies between department silos. The fix happens in one place, but not in another, which just leads to more data problems.

The impacts of dirty data

All of these issues result in enormous productivity losses and, perhaps worse, to a systemic loss of confidence in the data being used to power the business. The estimates above of revenue loss because of poor data seem extraordinary, but even if they represent the upper limit of the true cost, the impact is still very significant.

In a highly regulated industry, such as financial services, dirty data has an even greater cost. Missing, incomplete and inaccurate data can lead to the wrong trades being made, decisions taking longer as further manual checks are carried out, and regulatory breaches. MiFID II has, of course, placed significant extra burdens on financial firms to ensure their data is in order.

Cleaning up the mess

What can be done? Here are a few things that organisations having difficulty with dirty data should be thinking about:

  • Achieving one golden version of data has long been an objective. Be careful, though: attempting this for all the data in an organisation, without setting the whole data estate in concrete, is an impossible task.
  • Take a data-first approach, rather than model-first. Cleaning up dirty data involves removing invalid entries, eliminating duplicates, combining previously siloed records and so on. The clean-up can be incremental. Taking the conventional approach – imposing a data model before doing anything with the data – brings less flexibility and more cost.
  • Start building confidence in the data. Too often, data is present in isolation, with no knowledge of its provenance – when it was created, its source system and whether it’s been combined with other data. This metadata is valuable in proving a data item’s worth and actually preventing dirty data in the first place.
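The second and third points above can be sketched in code. The snippet below is a Python illustration with hypothetical records – not any particular product's API: it flags likely duplicates with simple fuzzy name matching (a stand-in for real entity resolution), and wraps each record in an envelope recording its source system and ingestion time, the kind of provenance metadata that builds confidence in a data item:

```python
import difflib
from datetime import datetime, timezone

# Hypothetical customer records drawn from two siloed systems.
records = [
    {"name": "Jonathan Smythe", "source": "crm"},
    {"name": "Jonathon Smythe", "source": "billing"},  # likely misspelling
    {"name": "Priya Patel",     "source": "crm"},
]

def likely_duplicates(recs, threshold=0.9):
    """Flag pairs of records whose names are suspiciously similar."""
    pairs = []
    for i, a in enumerate(recs):
        for b in recs[i + 1:]:
            ratio = difflib.SequenceMatcher(
                None, a["name"].lower(), b["name"].lower()).ratio()
            if ratio >= threshold:
                pairs.append((a["name"], b["name"], round(ratio, 2)))
    return pairs

def with_provenance(record, source_system):
    """Wrap a record in an envelope recording where and when it arrived."""
    return {
        "data": record,
        "provenance": {
            "source_system": source_system,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Flags the two Smythe records as a probable duplicate pair.
print(likely_duplicates(records))
```

Capturing provenance at ingestion, rather than trying to reconstruct it years later, is what makes an incremental clean-up feasible: suspect records can be traced back to the silo that produced them.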

In conclusion, it's worth stopping dirty data from slowing you down. The business impact of dirty data is staggering, but an individual organisation can avoid the morass if it takes the right approach. Clean, reliable data makes the business more agile and responsive, and cuts down wasted effort by data scientists and knowledge workers. And remember that 25% potential loss of revenue. It's there to be clawed back.
