
As artificial intelligence applications mature, financial institutions are finding that the solution to the technology’s data quality and trust challenges lies in data management.
Guaranteeing good outcomes from their models requires that organisations feed them good data, and the only way to ensure that is through good data management. That has become even more apparent with the emergence of AI agents, which are enabling firms to automate workflows and coordinate AI processes; with agents making decisions and humans increasingly out of the loop, it’s even more critical that the data they are fed is accurate and complete.

That organisations are realising this is apparent in the findings of a recent study of data leaders’ AI experiences, which found that data management is now the biggest challenge to capital markets firms’ agentic AI ambitions. The Semarchy-led study found that 51 per cent of 1,000 C-level executives surveyed in the US, UK and France said they were struggling with these foundational elements of agentic AI rollout.
The same percentage said they were deploying agentic AI without having their master data management foundations in place and 38 per cent said they were doing so without enforcing data standards.
This was reflected in respondents’ accounts of their AI experiences. A fifth said they had suffered project delays due to data quality concerns, a similar proportion had experienced operational inefficiencies as a result of unreliable data, and as many again had suffered compliance issues attributed to data protection shortcomings.
Existential Threat from Failing to Ensure Good Quality Data
Because of this “many are at risk of rendering their new agentic capabilities fundamentally unreliable, increasingly costly, and impossible to scale”, the report of the study stated.
Semarchy chief technology officer Craig Gravina said that with the “commoditisation” of the large language models that lie behind generative AI, organisations can no longer point the finger of blame at AI developers when their projects go wrong.
“It’s the best thing that’s ever happened to the data industry because it’s basically put a spotlight on the importance of data management, on data quality and the ability for the rest of the organisation to trust that, whether it’s humans, AI or applications using the data,” Gravina told Data Management Insight.
The US-based data leader said that the gulf between organisations’ perception of AI readiness and their actual AI capabilities is potentially dangerous. Without the ability to control data quality through effective management, organisations are putting their companies at existential risk.

The core of any solution, he said, can be found in the semantic layer that converts raw data into business-useful information. The success of the newest AI innovations still depends on tried-and-trusted data management practices. Companies have realised this over the past couple of years as AI has gained momentum, birthing what he refers to as a “master data management renaissance”.
“It has escalated what the value of MDM brings to the organisation because the last presentation [of AI] has identified the potential risks of putting uncontrolled AI out into the field,” he said.
Report Helps to Guide New and Existing Products
The study is the second annual one of its kind to be conducted by Semarchy and already the evolving patterns of thought and behaviour among capital markets data executives has become apparent. For instance, in this year’s study, more than three-quarters of respondents said they had prioritised the ethics and regulation of AI, a substantial jump from the 50 per cent recorded last year.
The report concluded that this was because many organisations were “retrofitting” compliance guardrails after failing to build them into their AI applications at the outset.
“It goes back to that concept of the last phase of AI; for the last couple of years it has really been the lab when the emphasis was on the AI and not necessarily the data,” Gravina said.
“Everything was running fast and wild in the past, and so that potential risk wasn’t highlighted, whereas now it’s made a primary concern. And so the concept of being able to incorporate ethics into the initiatives is becoming just as important of a facet of the effort.”
While the reports offer a window into the thinking of data chiefs, they also help Semarchy forge new products and adapt existing ones.
Gravina said that the company had doubled its investment and “clarified” its product strategy in response to the findings of the reports. They have highlighted that AI needs to be regarded as “another persona” in the data management hierarchy in order to get the data right.
“Where it has really focused our direction is around the context and understanding that is necessary from the data,” Gravina said. “The ability to produce meaningful information around the data, in order to provide instruction to the agent, is important.
“But what’s becoming more apparent is that MDM alone does not meet the task – it’s really a combination of MDM, governance, quality and other aspects converging together in a data management platform that is solving the more comprehensive problems around real trusted access for AI.”
Fear of Inaccuracies a Barrier to AI Rollouts
In a separate survey, data behemoth Bloomberg found that the fear of inaccurate outputs was impeding some organisations’ AI adoption plans.
In a survey of attendees at the company’s AI in Finance Summit in London, half of the more than 100 capital markets decision makers said hallucinated facts and errors were their primary concern when it comes to AI, while another 27 per cent cited a lack of explainability.
The features that would give respondents the most confidence in AI were verification and control processes, attribution of sources, built-in error checks and human oversight, the survey found.
“The results suggest that trustworthiness depends on whether an AI’s outputs can be interrogated and validated,” said Amanda Stent, head of AI strategy and research at Bloomberg.
“Solving this challenge depends on attribution, transparency and the quality of the underlying data so outputs can be traced to their sources, validated for accuracy, and confidently used in decision-making.”
Nevertheless, the survey also found strong appetite for AI, with two-thirds of respondents at the event saying that full workflow AI assistants are the most exciting next development in the technology. Others cited the emergence of personalised portfolio insights and no-code quant tools.