
Artificial intelligence deployment in capital markets’ data processes may be approaching an inflection point that, if not managed properly, could introduce dangerous risks to institutions’ operations.
The growing deployment of autonomous agents has the potential to hardwire data errors into workflows, magnifying data weaknesses as the automating technology scales processes, according to Informatica from Salesforce.
The challenge can be readily addressed by organisations with the AI literacy skills needed to identify data quality and reliability shortcomings. Unfortunately, few companies feel they possess them. A survey by the enterprise cloud data management specialist found that the 600 data leaders questioned believed two-thirds of their employees trusted the outputs of their AI applications, yet almost all of them said their staff needed more training in the topic.

This has created what the report calls a “trust paradox”, in which organisations place blind faith in their AI, said Gregory Hanson, Informatica Group Vice President and Head of EMEA North Sales.
“When you move into that agentic phase, which we’re in right now, and you start to automate business processes, that is next-level risk because that’s impacting the operation of the company,” Hanson told Data Management Insight.
“There’s a real risk of making incorrect decisions at an accelerated pace.”
Rapid Deployment
Agentic AI deployment is evolving rapidly in capital markets because agents can autonomously run processes that streamline data management capabilities, add analytical heft and foster collaborative innovation. All of these are also potential operational cost cutters. Data and technology vendors from Snowflake and Acceldata to Informatica have either introduced agentic products and services or updated existing offerings with AI agents.
While many organisations are convinced of the benefits agentic AI will bring, some observers have been more guarded, warning against its hurried implementation. Concern has focused, perhaps unsurprisingly, on data quality. AI processes need well-structured and complete data to operate effectively, so any weakness in inputs will be multiplied when agents run those processes at scale.

“People are trusting the outcomes without really having the awareness to question or really understand whether the results and the recommendations are accurate,” said Hanson.
Financial Institutions
The report was based on interviews with leaders in a range of industries, including financial services. Hanson said, however, that the findings were particularly pertinent for financial institutions because they rely heavily on data processes in their everyday operations, especially regulatory compliance.
The challenge is especially great for these institutions because many still operate on fragmented technology stacks that are ill-equipped to meet the demands of centralised, multi-asset, multi-vendor modern data management strategies.
“People are starting to think about this as this Nirvana moment but there’s a real danger that actually that will never come to pass,” he said. “All it will be is a different style of interface, but with inaccurate results very quickly.
“And even more dangerous than that, it will make incorrect decisions.”
The potential outcomes of embedding data weaknesses through AI agents are not only operational interruptions but also customer dissatisfaction, reputational damage and increased regulatory compliance risks, Hanson said.
The extent of the potential challenge presented by this trust paradox is underlined by the study’s findings that almost seven in 10 respondents said they will begin agentic AI pilots by the end of Q1 2026.
The report’s authors said that organisations can ensure AI-driven decision making is based on trusted, high-quality data by prioritising investment in data reliability, robust AI governance and workforce upskilling.
Challenge Recognition
Encouragingly, the importance of data quality in agentic AI creation was widely recognised by respondents to the survey. It topped a list of challenges to implementation, ahead of security and a lack of agentic expertise.
AI literacy can be improved without recruiting batteries of newly skilled employees, said Hanson. In-house education is being aided by low-code and no-code interfaces that enable business users to create the workflows they need simply by identifying their data requirements. Informatica is among the vendors that offer such tooling.
“It’s not an immediate picture of hiring different types of people with different sets of skills,” Hanson said.
“It’s more about making sure that you’ve got the right interfaces because the people already there are the people who know the data, and that’s the critical bit. If you could leverage those people with the right tools and the right literacy training, then you’re in good shape.”