The knowledge platform for the financial technology industry

A-Team Insight Blogs

Making Data Valuable Requires Quality and Structure


Building the ability to recognise the value in data and derive that value for the enterprise is a complicated proposition to include in data management models, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York earlier this month.

First, firms should establish an “accountability structure through a model that cascades from the data owner at the highest level, to the people on the line, the operations, the people who originate accounts, and the IT people who fix accounts with business system owners,” said Tom Mavroudis, head of enterprise data governance at MUFG Union Bank.

The data management model needs a prioritisation capability, or a mechanism to recognise value, according to Jennifer Ippoliti, firmwide data governance lead at JP Morgan Chase. “Knowing which problems to go after, in what order and what to expect from them, means when you close out that issue, you know it was worth the time and effort,” she said. “It’s specific to the technology and business partnership – and really clear about roles, responsibilities and division of labour. You cannot ever stop iterating that model and making it more precise.”

JP Morgan Chase addressed data management by reconfiguring its leadership, establishing a chief data officer (CDO) role and naming CDOs in different business lines and departments. “It’s about how we apply governance to unstructured datasets and the cloud, to make sure the foundational tools we build have the right datasets to begin with,” said Ippoliti.

Evaluating the precision of a data model can be done in many ways, but one should not discount sheer confidence in the data being used, stated Peter Serenita, group CDO, HSBC. “We can come up with a lot of different algorithms and metrics,” he said. “Some of them will mean something to someone, but ultimately, the answer is the firm’s confidence in the data it’s using. Call it a ‘confidence index’, which can indicate if the data previously did not seem to be fit-for-purpose and you didn’t want to use it, or if now, you have strong confidence in that data – that it has been checked and double-checked.”
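
Serenita's "confidence index" is an idea rather than a published formula, but the notion of rolling individual quality checks up into a single score can be sketched. The checks, weights, and function below are purely illustrative assumptions, not HSBC's method:

```python
# Hypothetical "confidence index" sketch -- illustrative only, not a
# published metric. Scores a dataset on simple fitness-for-purpose
# checks, weighted by how much each check matters to the consumer.

def confidence_index(checks: dict[str, bool], weights: dict[str, float]) -> float:
    """Return a 0-1 score: the weighted share of passed quality checks."""
    total = sum(weights.values())
    passed = sum(weights[name] for name, ok in checks.items() if ok)
    return passed / total if total else 0.0

checks = {
    "completeness": True,      # no missing mandatory fields
    "validated": True,         # checked and double-checked against reference data
    "lineage_known": False,    # source system not yet documented
}
weights = {"completeness": 0.4, "validated": 0.4, "lineage_known": 0.2}

print(round(confidence_index(checks, weights), 2))  # 0.8
```

A low score would flag data that "did not seem to be fit-for-purpose", in Serenita's phrasing, while a score near 1.0 signals data the firm can use with strong confidence.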

Trust is key to such confidence in data, particularly the precision of calculations, according to David Saul, senior vice president and chief scientist at State Street. “For anyone making a decision based on the data, their level of confidence in that decision is going to be about how well they trust it, do they know where it came from and what it means,” he said. “That’s true internally and for delivering data to our clients, that they trust we’ve done all the calculations. Also, regulators need to trust that what we deliver is an accurate representation of what is needed to reduce the risk in the market. You must have hard data to prove whether it’s the legacy of the data or it’s how you did the calculations, the algorithm you used and ultimately whether it means what you say it means.”

Saul added that the origin of data can be tracked successfully by creating data ontologies in transactional or operational systems, which establish how the data should be used for risk management.
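
The core of what Saul describes is provenance: every derived dataset should be traceable back to the system that originated it. As a toy sketch only, assuming hypothetical dataset names and a simple parent-child lineage graph rather than a full ontology:

```python
# Toy lineage tracker -- an illustrative stand-in for ontology-based
# provenance, not any firm's implementation. Each derived dataset
# records the datasets it was produced from, so a risk user can trace
# any figure back to its originating system.

from collections import defaultdict

parents: dict[str, list[str]] = defaultdict(list)

def record(derived: str, *sources: str) -> None:
    """Register which source datasets a derived dataset came from."""
    parents[derived].extend(sources)

def origins(dataset: str) -> set[str]:
    """Walk the lineage graph back to datasets with no recorded parents."""
    if not parents[dataset]:
        return {dataset}
    found: set[str] = set()
    for src in parents[dataset]:
        found |= origins(src)
    return found

# Hypothetical lineage: a risk exposure figure derived from positions
# and prices, with positions originating in a trading system.
record("trade_positions", "trading_system")
record("risk_exposure", "trade_positions", "market_prices")

print(sorted(origins("risk_exposure")))  # ['market_prices', 'trading_system']
```

A real ontology would also capture what each dataset means and how it may be used, but even this minimal graph answers Saul's two trust questions: where the data came from, and what fed the calculation.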

To build the accountability structures and prioritisation capabilities that Mavroudis and Ippoliti advocated, firms need the confidence in data quality and sourcing that Serenita and Saul described. As the experts illustrated, deriving value from data requires well-defined accountability and priorities, applied to trustworthy data.

