The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Cost of Dirty Data


By Giles Nelson, Chief Technology Officer, Financial Services, MarkLogic

The cost of dirty data – data that is inaccurate, incomplete or inconsistent – is enormous. Earlier this year, Gartner reported that poor quality data cost organisations an average of $15 million in 2017. These findings were reinforced by MIT Sloan Management Review, which reported that dirty data costs the average business an astonishing 15% to 25% of revenue.

With global revenues of around $80 billion per year in investment banking alone, the cost of dirty data in financial services is astronomical. So, where does it come from and what can be done about it?

What’s the source?

Human error is a significant source: an Experian study found that human error contributes to over 60% of dirty data. When different departments enter related data into separate silos without proper governance, downstream data warehouses, data marts and data lakes inevitably become fouled. Records are duplicated, with variations such as misspelt names and addresses. Silos with poor constraints also lead to dates, account numbers and personal information being stored in different formats, making them difficult or impossible to reconcile automatically.
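To make the reconciliation problem concrete, here is a minimal Python sketch (the records, field names and matching rule are illustrative assumptions, not from any real system): two silos hold the same customer with a misspelt name, a zero-padded account number and differently formatted dates, and a small normalisation step lets the entries reconcile automatically.

```python
from datetime import datetime

# Hypothetical records from two departmental silos: the same customer,
# entered twice with a misspelt name, a zero-padded account number and
# dates in different formats.
records = [
    {"name": "John Smith", "dob": "1980-03-05", "account": "00123456"},
    {"name": "Jon Smith",  "dob": "05/03/1980", "account": "123456"},
]

def normalise(rec):
    """Coerce each silo's conventions into one canonical form."""
    dob = rec["dob"]
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):  # the formats each silo uses
        try:
            dob = datetime.strptime(dob, fmt).date().isoformat()
            break
        except ValueError:
            continue
    return {
        "name": rec["name"].strip().lower(),
        "dob": dob,
        "account": rec["account"].lstrip("0"),  # drop zero-padding
    }

def likely_duplicates(a, b):
    """Flag records that agree on stable fields (dob + account)."""
    return a["dob"] == b["dob"] and a["account"] == b["account"]

clean = [normalise(r) for r in records]
print(likely_duplicates(clean[0], clean[1]))  # the two entries now reconcile
```

Without the normalisation step, a naive string comparison would treat these as two unrelated customers – which is exactly how duplicates accumulate across silos.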

Further, once created, dirty data can remain hidden for years, making it all the harder to detect and deal with when it finally surfaces. Most businesses only find out about dirty data when it’s reported by customers or prospects – a particularly poor way to track down and solve data issues.

And, still in 2018, dealing with print is an issue for many financial services firms. Scanning, marking up and importing printed documents is a recipe for introducing errors.

Many organisations search for inconsistent and inaccurate data using manual processes because their data is decentralised and in too many different systems. Harvard Business Review reports that analysts spend 50% of their time searching for data, correcting errors and seeking out confirmatory sources for data they don’t trust. These processes tend to fall into the same trap as the data – instead of consolidated processing, each department is responsible for its own data inaccuracies. While this may work in some instances, it also contributes to internal inconsistencies between department silos. The fix happens in one place, but not in another, which just leads to more data problems.

The impacts of dirty data

All of these issues result in enormous productivity losses and, perhaps worse, a systemic loss of confidence in the data being used to power the business. The estimates above of revenue lost to poor data seem extraordinary, but even if they represent the upper limit of the true cost, the impact is still very significant.

In a highly regulated industry, such as financial services, dirty data has an even greater cost. Missing, incomplete and inaccurate data can lead to the wrong trade being made, decisions taking even longer as further manual checks are carried out, and regulatory breaches. MiFID II has, of course, placed significant extra burdens on financial firms to ensure their data is in order.

Cleaning up the mess

What can be done? Here are a few things that organisations having difficulty with dirty data should be thinking about:

  • Achieving one golden version of data has long been an objective. Be careful, though – trying to do this for all the data in an organisation, without setting the whole data estate in concrete, is an impossible task.
  • Take a data-first approach, rather than model-first. Cleaning up dirty data involves removing invalid entries, eliminating duplicates, combining previously siloed records, and so on. The path to clean-up can be incremental. Taking the conventional approach and imposing a data model first, before doing anything with the data, leads to less flexibility and more cost.
  • Start building confidence in the data. Too often, data exists in isolation, with no knowledge of its provenance – when it was created, its source system, and whether it has been combined with other data. This metadata is valuable in proving a data item’s worth and in preventing dirty data in the first place.
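The provenance idea in the last point can be sketched in a few lines of Python. This is an illustrative pattern, not any particular product’s API: every data item carries metadata recording when it was created, which system it came from and which earlier items it was derived from, so downstream users can judge how much to trust it.

```python
from datetime import datetime, timezone

def with_provenance(record, source_system, derived_from=()):
    """Wrap a data item with basic provenance metadata: creation time,
    originating system, and the systems it was combined from.
    All names here are illustrative assumptions."""
    return {
        "data": record,
        "provenance": {
            "created_at": datetime.now(timezone.utc).isoformat(),
            "source_system": source_system,
            "derived_from": list(derived_from),
        },
    }

# Two hypothetical source systems contribute facts about one customer.
crm = with_provenance({"name": "Jane Doe"}, source_system="CRM")
kyc = with_provenance({"risk": "low"}, source_system="KYC")

# A merged record carries forward the systems it was combined from.
merged = with_provenance(
    {**crm["data"], **kyc["data"]},
    source_system="merge-job",
    derived_from=[crm["provenance"]["source_system"],
                  kyc["provenance"]["source_system"]],
)
```

A consumer of `merged` can now see at a glance that it was assembled from CRM and KYC data, rather than having to take the combined record on faith.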

In conclusion, it’s worth stopping dirty data from slowing you down. The business impact of dirty data is staggering, but an individual organisation can avoid the morass if it takes the right approach. Clean, reliable data makes the business more agile and responsive, and cuts down wasted effort by data scientists and knowledge workers. And remember that 25% potential loss of revenue. It’s there to be clawed back.

