
The Cost of Dirty Data

By Giles Nelson, Chief Technology Officer, Financial Services, MarkLogic

The cost of dirty data – data that is inaccurate, incomplete or inconsistent – is enormous. Earlier this year, Gartner reported that poor quality data cost the average organisation $15 million in 2017. These findings were reinforced by MIT Sloan Management Review, which reported that dirty data costs the average business an astonishing 15% to 25% of revenue.

With global revenues of around $80 billion per year in investment banking alone, the cost of dirty data in financial services is astronomical: even the lower 15% figure applied to that revenue implies losses in the region of $12 billion a year. So where does it come from, and what can be done about it?

What’s the source?

Human error is a significant source: an Experian study found that it plays a part in over 60% of dirty data. When different departments enter related data into separate silos without proper governance, downstream data warehouses, data marts and data lakes become fouled. Records are duplicated, and fields such as names and addresses are riddled with misspellings. Silos with poor constraints also allow dates, account numbers and personal information to be stored in different formats, making them difficult or impossible to reconcile automatically.
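
To make that concrete, here is a minimal Python sketch of the kind of normalisation needed before records from two silos can be reconciled automatically. It is purely illustrative: the field names, formats and matching rules are hypothetical, not taken from any particular system.

```python
from datetime import datetime

# Hypothetical records for the same client, entered in two different silos.
crm_record    = {"name": " John  Smith ", "dob": "07/03/1982", "account": "GB-0042-7781"}
ledger_record = {"name": "SMITH, JOHN",   "dob": "1982-03-07", "account": "GB00427781"}

def normalise_name(name: str) -> str:
    """Lower-case, drop commas and compare name parts order-insensitively."""
    parts = sorted(name.strip().lower().replace(",", " ").split())
    return " ".join(parts)

def normalise_date(value: str) -> str:
    """Accept the two date formats seen in these silos and emit ISO 8601."""
    for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {value!r}")

def normalise_account(value: str) -> str:
    """Strip separators so account numbers compare on characters only."""
    return value.replace("-", "").replace(" ", "").upper()

def same_client(a: dict, b: dict) -> bool:
    """True when the two records agree once their formats are reconciled."""
    return (normalise_name(a["name"]) == normalise_name(b["name"])
            and normalise_date(a["dob"]) == normalise_date(b["dob"])
            and normalise_account(a["account"]) == normalise_account(b["account"]))

print(same_client(crm_record, ledger_record))  # True once formats are reconciled
```

In practice, matching would also need fuzzy comparison to catch misspellings, but even this much shows why inconsistently formatted silos defeat automatic reconciliation.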

Further, once created, dirty data can remain hidden for years, which makes it all the harder to deal with when it is eventually found. Most businesses only find out about dirty data when customers or prospects report it, which is a particularly poor way to track down and solve data issues.

And, still in 2018, dealing with print is an issue for many financial services firms. The scanning, marking up and import of printed documents is a recipe for the introduction of errors.

Many organisations search for inconsistent and inaccurate data using manual processes because their data is decentralised and in too many different systems. Harvard Business Review reports that analysts spend 50% of their time searching for data, correcting errors and seeking out confirmatory sources for data they don’t trust. These processes tend to fall into the same trap as the data – instead of consolidated processing, each department is responsible for its own data inaccuracies. While this may work in some instances, it also contributes to internal inconsistencies between department silos. The fix happens in one place, but not in another, which just leads to more data problems.
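
As one hedged illustration of what consolidated processing could look like, the Python sketch below collects the same field for the same customer from several departmental extracts and reports every disagreement in one place, so that a fix can be applied once rather than silo by silo. The systems, identifiers and field names are invented for the example.

```python
from collections import defaultdict

# Hypothetical extracts from three departmental systems.
extracts = {
    "crm":        {"C-1001": {"address": "12 King St, Leeds"},
                   "C-1002": {"address": "8 Rose Ln, Bath"}},
    "billing":    {"C-1001": {"address": "12 King Street, Leeds"},
                   "C-1002": {"address": "8 Rose Ln, Bath"}},
    "onboarding": {"C-1001": {"address": "12 King St, Leeds"},
                   "C-1002": {"address": "8 Rose Lane, Bath"}},
}

def cross_silo_conflicts(extracts: dict, field: str) -> dict:
    """Return {customer_id: {system: value}} wherever systems disagree on `field`."""
    values = defaultdict(dict)
    for system, records in extracts.items():
        for cust_id, record in records.items():
            values[cust_id][system] = record.get(field)
    return {cust_id: seen for cust_id, seen in values.items()
            if len(set(seen.values())) > 1}

# One consolidated report instead of three departmental spot-checks.
for cust_id, seen in cross_silo_conflicts(extracts, "address").items():
    print(cust_id, seen)
```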

The impacts of dirty data

All of these issues result in enormous productivity losses and, perhaps worse, a systemic loss of confidence in the data being used to power the business. The estimates above of revenue lost to poor data seem extraordinary, but even if they represent the upper limit of the true cost, the impact is still very significant.

In a highly regulated industry such as financial services, dirty data has an even greater cost. Missing, incomplete and inaccurate data can lead to the wrong trade being made, decisions taking even longer as further manual checks are carried out, and regulatory breaches. MiFID II has, of course, placed significant extra burdens on financial firms to ensure their data is in order.

Cleaning up the mess

What can be done? Here are a few things that organisations having difficulty with dirty data should be thinking about:

  • Achieving one golden version of data has long been an objective. Be careful, though: achieving it for all the data in an organisation, without setting the whole data estate in concrete, is an impossible task.
  • Take a data-first approach, rather than model first. Cleaning up dirty data means removing invalid entries, eliminating duplicates, combining previously siloed records and so on, and the clean-up can be done incrementally. Taking the conventional approach of imposing a data model first, before doing anything with the data, leads to less flexibility and more cost.
  • Start building confidence in the data. Too often, data sits in isolation with no knowledge of its provenance: when it was created, which system it came from and whether it has been combined with other data. This metadata is valuable in proving a data item’s worth and in preventing dirty data arising in the first place (the sketch after this list illustrates the idea).
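
To illustrate the last two points, here is a minimal Python sketch of a data-first, provenance-aware load: raw records are stored as-is, wrapped with metadata recording their source system, ingest time and any transformations applied, and cleaned incrementally without discarding the original values. It is a sketch of the idea under assumed, simplified structures, not a description of any particular product's data model.

```python
from datetime import datetime, timezone

def wrap_with_provenance(raw: dict, source_system: str) -> dict:
    """Keep the raw payload untouched and attach provenance metadata alongside it."""
    return {
        "provenance": {
            "source_system": source_system,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "transformations": [],   # appended to as clean-up steps run
        },
        "raw": raw,        # original, untouched payload
        "canonical": {},   # cleaned fields are added here over time
    }

def apply_cleanup(envelope: dict, field: str, value, step: str) -> dict:
    """Record a single incremental clean-up step without losing the original value."""
    envelope["canonical"][field] = value
    envelope["provenance"]["transformations"].append(step)
    return envelope

# Load first, clean later, and keep a record of what was done and where it came from.
doc = wrap_with_provenance({"dob": "07/03/1982"}, source_system="legacy_crm")
doc = apply_cleanup(doc, "dob", "1982-03-07", step="dob normalised to ISO 8601")
print(doc["provenance"]["transformations"])
```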

In conclusion, it’s worth stopping dirty data from slowing you down. The business impact of dirty data is staggering, but an individual organisation can avoid the morass if it takes the right approach. Clean, reliable data makes the business more agile and responsive, and cuts down the effort wasted by data scientists and knowledge workers. And remember that potential 25% loss of revenue. It’s there to be clawed back.

