Data quality has become an imperative of data management. While it is often difficult to achieve, improvements that meet both business and regulatory requirements can be made by implementing data governance policies and deploying big data and artificial intelligence technologies.
Andrew Delaney, chief content officer at A-Team Group, led a discussion on data quality at A-Team’s recent Data Management Summit in London. He was joined by Colin Gibson, group architecture director at Willis Group; Neville Homer, head of RWA reference data and data services at RBS; Ewan McAllan, EMEA head of data quality at HSBC; and Bob Mudhar, associate partner at Citihub Consulting.
Delaney started the discussion with questions about why data quality is important and what it is. Gibson answered: “Data quality is a meaningful outcome of data management and it is a top priority for many people. It is whatever you need it to be. If data is a problem, there is a data quality issue.” On why data quality is important, Mudhar added: “Poor data has to be fixed manually, which is expensive. Getting data right frees up money and people.” On the question of whether quality is being driven by regulators or industry participants, Homer commented: “The focus on data quality is getting broader, but the regulatory stick of BCBS 239 is driving change.”
Large volumes of data make quality improvements complex, but there are solutions. Gibson explained: “Be specific, a broad statement won’t support progress. Understand the pain point, focus on data that you care about and then tackle the issue of whether it is complete, correct and consistent.” McAllan added: “Data quality improvement initiatives must be applied across an organisation and must drive business benefits.”
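Gibson’s three tests of completeness, correctness and consistency can be made concrete with simple programmatic checks. The sketch below is illustrative only: the record fields and validation rules are assumptions for the example, not anything described by the panel.

```python
# Minimal sketch of the three Cs: complete, correct, consistent.
# Field names and rules are illustrative assumptions.

def check_record(record):
    """Return a list of data quality issues found in one trade record."""
    issues = []

    # Completeness: every required field must be present and non-empty.
    for field in ("trade_id", "isin", "notional", "currency"):
        if not record.get(field):
            issues.append(f"incomplete: missing {field}")

    # Correctness: values must satisfy basic validity rules,
    # e.g. an ISIN is 12 characters with a two-letter country prefix.
    isin = record.get("isin", "")
    if isin and (len(isin) != 12 or not isin[:2].isalpha()):
        issues.append(f"incorrect: malformed ISIN {isin!r}")

    # Consistency: related fields must agree with each other.
    if record.get("currency") == "GBP" and record.get("country") == "US":
        issues.append("inconsistent: GBP trade against US country code")

    return issues

record = {"trade_id": "T1", "isin": "GB00B03MLX29", "notional": 1_000_000,
          "currency": "GBP", "country": "GB"}
print(check_record(record))
```

Running such checks only over the data the business actually cares about, as Gibson suggests, keeps the rule set small and the pain points specific.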
To achieve improvement, Mudhar first identified three obstacles to data quality: securing investment, eliminating dirty data, and agreeing differences in data with other parties. He then advised: “Work with operational teams to figure out what you are spending the most time on and use tools and techniques that may not fix all the data, but will allow it to be used for something valuable. Also, look at data sources and how many golden copies of data you have across the business.”
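Mudhar’s point about golden copies can be explored with a simple inventory check: group records by a natural key and flag keys that more than one system claims to master. This is a hedged sketch under assumed inputs; the system names, the use of ISIN as the key, and the records themselves are invented for illustration.

```python
# Sketch: count how many "golden copies" of each instrument exist across
# source systems by grouping records on a natural key (ISIN here).
# System names and sample records are illustrative assumptions.
from collections import defaultdict

def find_duplicate_masters(records):
    """Map each key to the sorted list of systems that master it,
    keeping only keys mastered in more than one system."""
    masters = defaultdict(set)
    for rec in records:
        masters[rec["isin"]].add(rec["system"])
    return {k: sorted(v) for k, v in masters.items() if len(v) > 1}

records = [
    {"system": "risk", "isin": "GB00B03MLX29"},
    {"system": "settlements", "isin": "GB00B03MLX29"},
    {"system": "risk", "isin": "US0378331005"},
]
print(find_duplicate_masters(records))
```

Each key returned is a candidate for consolidation into a single golden copy, which is where, per Mudhar, both money and people’s time can be freed up.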
If data quality is an imperative of reference data management, Delaney asked how data managers can know when they have hit targets. McAllan said: “If the business can fulfil what it wants to achieve, data quality has reached its target.” On the role of data governance, Homer said: “Governance helps to manage data effectively. The focus of governance should be on priorities, outcomes and raising awareness of data quality. The danger is death by governance, so the right balance is needed.” Gibson added: “The upside of governance is clarity. Knowing who sets targets and who to shout at when things go wrong.”
Considering whether shared data management services can improve data quality, panel members suggested shared services could reduce data costs, but would not necessarily improve data quality as improvement depends on data sources and controls. Big data and artificial intelligence technologies found more favour, with panel members noting their capacity and capability to manage huge amounts of data and deliver high quality data quickly.