The European Union’s Artificial Intelligence Act, which came into force this month, presents financial institutions with substantial opportunities as well as serious challenges, and both can be managed only from a strong data foundation.
Industry professionals have said that the Act’s provisions, though extensive, can bring clarity to a muddled regulatory view of AI within the bloc and help to unlock the technology’s potential to tease value from organisations’ data.
They warn, however, that compliance will be burdensome if institutions’ data management and governance strategies aren’t strong enough.
“The EU AI Act is more than just a compliance mandate; it’s an opportunity to build more robust data integrity frameworks,” Tendü Yogurtçu, chief technology officer at Precisely, told Data Management Insight. “As AI adoption grows, powering AI initiatives with high-quality, integrated, and contextualised data will be key to long-term success and responsible AI innovation.”
Safe AI
The Act is the world’s first major piece of legislation on the use of AI and has been created to provide a framework in which the technology can be developed and implemented safely and in line with the EU’s values. It will be phased in over the next year, but companies are already expected to adhere to its rules banning the use of AI for social scoring software, facial recognition packages and other applications that it deems to pose an unacceptable risk to citizens.
Companies will eventually have to put guardrails in place if they want to implement AI that the Act categorises as high risk, a tier that includes models used in the financial and healthcare sectors. Such AI must be accompanied by demonstrable safeguards on data governance and transparency, as well as provisions for human oversight, the Act requires.
For models regarded as offering limited risk, such as chatbots, the Act rules that organisations must make it clear to users that they are engaging with an AI system. There are no restrictions covering systems that pose minimal risk, such as that used in video games.
The Act places greatest emphasis on AI systems categorised as high-risk, whose requirements will most affect financial institutions. This section’s focus on data means that institutions in scope will be required to ensure robust governance structures and apply quality controls to their training, validation and testing datasets.
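To make that concrete, the sketch below shows the kind of basic quality screen a firm might run across those three datasets before a model ever sees them. It is purely illustrative: the loan-style columns, split ratios and metrics are hypothetical rather than anything the Act itself prescribes.

```python
# Illustrative sketch only: a hypothetical quality screen over the training,
# validation and testing datasets named by the Act's high-risk requirements.
import numpy as np
import pandas as pd

def basic_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Simple governance metrics: completeness, duplication, label balance."""
    return {
        "rows": len(df),
        "missing_fraction": float(df.isna().mean().mean()),
        "duplicate_fraction": float(df.duplicated().mean()),
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

# Hypothetical loan-application records standing in for a real data feed.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "age": rng.integers(18, 80, 1_000),
    "default": rng.integers(0, 2, 1_000),  # label column
})

# Shuffle once, then carve out the three datasets the Act refers to.
shuffled = data.sample(frac=1, random_state=0)
n = len(shuffled)
splits = {
    "train": shuffled.iloc[: int(0.7 * n)],
    "validate": shuffled.iloc[int(0.7 * n): int(0.85 * n)],
    "test": shuffled.iloc[int(0.85 * n):],
}

for name, subset in splits.items():
    print(name, basic_quality_report(subset, "default"))
```

In practice such checks would sit inside a governance pipeline with documented thresholds and sign-off, but the principle of measuring data quality before training rather than after deployment is what the drafters appear to have in mind.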
Costs/Benefits
The measures will undoubtedly place new cost pressures on companies and compliance burdens on chief data officers. Updating creaky data estates won’t come cheap and providing the sort of training required for humans to oversee and validate AI outputs will add to recruitment challenges, especially at a time when talent markets are tight.
Additionally, some prominent figures, including OpenAI chief Sam Altman, have warned that any regulation of AI will hold back technological innovation in the region. The UK and US have already ruled out adopting similar regulation.
Despite that, there is broad expectation that the Act will provide firms that want to use AI to gain a competitive edge with a powerful incentive to strengthen their data foundations.
The Act “will push financial firms to take a closer and – in some cases long overdue – look at the quality of the data powering their AI systems,” Nick Wood, AI product manager at Finbourne Technology, told Data Management Insight.
“In sectors like asset management – where AI adoption remains low – the Act could also serve as a catalyst for firms to re-evaluate their incumbent data management processes,” Wood added.
Benefits Realised
The Act has come into force as the potential benefits of AI become abundantly apparent to institutions. According to the recently published Informatica CDO Insights 2025, 82 per cent of EU companies expect to boost investment in generative AI this year.
It also comes at a time, however, when many companies remain unable, or unsure of how, to integrate the technology successfully.
A report by KPMG, for instance, found that the 100 data professionals questioned for its Asset Management Industry Outlook for 2025 believed AI maturity had shifted only slightly from the conceptual stage to the developmental stage over the past year. More than half said they were being held back by data integrity, statistical validity and model accuracy issues, while a lack of awareness and training, as well as the risk of security vulnerabilities, were also cited as impediments.
The Informatica study also found that 89 per cent of large businesses in the EU said they experienced conflicting expectations for their GenAI initiatives, while half said technology limitations were a substantial impediment to moving AI projects from the pilot stage into production.
Dual Pressures
Levent Ergin, chief strategist for climate, sustainability and AI at Informatica, said that organisations faced dual pressures in adopting the technology: they must not only prove the value of their investments in AI but also navigate “challenges around data quality and regulatory uncertainty”.
Nevertheless, the Act should have a galvanising effect in remedying this, Ergin told Data Management Insight.
“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential,” he said. “After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”
As Ergin suggests, the Act has been framed not only to prevent abuse and market distortions arising from the use of AI but also to strengthen the data foundations on which the technology is built and run.
The tech maxim GIGO (garbage in, garbage out) applies particularly to AI, which must be trained on existing data to produce new insights and other outputs. The Act expressly states that requiring high-risk AI systems to be fed accurate, representative and bias-free data will ensure that companies put in place the processes needed to achieve valuable and trusted outcomes. The same applies to its requirement that strong data governance and security measures be put in place.
Consequently, the Act states, data and insights produced by AI will be trusted, secure and, ultimately, more valuable.
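The Act does not specify what a “representative” dataset check looks like in code. As a minimal sketch, assuming a hypothetical “region” attribute and invented reference shares, a firm could compare group proportions in its training data against a reference population and flag material gaps:

```python
# Minimal sketch of a representativeness check. The attribute, reference
# shares and tolerance are hypothetical, not taken from the Act.
import pandas as pd

def representativeness_gaps(train: pd.DataFrame, attr: str,
                            reference: dict, tolerance: float = 0.05) -> dict:
    """Groups whose share of the training data deviates from the reference
    population by more than `tolerance`, as (observed, expected) pairs."""
    observed = train[attr].value_counts(normalize=True)
    return {
        group: (float(observed.get(group, 0.0)), expected)
        for group, expected in reference.items()
        if abs(observed.get(group, 0.0) - expected) > tolerance
    }

# A deliberately skewed training sample versus even reference shares.
train = pd.DataFrame({"region": ["north"] * 700 + ["south"] * 300})
print(representativeness_gaps(train, "region", {"north": 0.5, "south": 0.5}))
# {'north': (0.7, 0.5), 'south': (0.3, 0.5)} -> both flagged for review
```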
“For financial services, where AI is used in high-stakes applications like risk assessment and fraud detection, ensuring fairness and transparency is critical,” said Precisely’s Yogurtçu. “To achieve this, organisations must develop trustworthy AI systems by using representative and high-quality training data and continuously monitoring inference data so that it supports fair and reliable decision-making.
“With the correct AI literacy guardrails, organisations can detect and better understand why these biases are occurring – which often stem from poor data management practices – and prevent them from happening at the source.”
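Continuous monitoring of inference data, as Yogurtçu describes it, is commonly implemented in financial services with drift metrics such as the population stability index. The sketch below shows one illustrative way to compute it; the score distributions are invented for the example and nothing here is prescribed by the Act.

```python
# Hedged sketch: the population stability index (PSI), a drift metric widely
# used in credit modelling, comparing live inference data to training data.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time distribution and live data; values above
    roughly 0.2 are conventionally treated as material drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard empty bins before the log
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training_scores = rng.normal(0.0, 1.0, 10_000)  # distribution seen in training
live_scores = rng.normal(0.4, 1.0, 10_000)      # shifted live population

print(round(population_stability_index(training_scores, live_scores), 3))
```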
Finbourne’s Wood adds: “While AI can enhance workflows with powerful features and capabilities, firms must be able to explain the models utilised and trust the quality of the data underpinning them.”
What Success Looks Like
Until the Act is fully implemented it is difficult to ascertain whether it will achieve its data objectives. If it does, the benefits will be many.
By establishing a strong foundation built on high-quality data, organisations can overcome some of the most pressing challenges of AI integration. They can improve the reliability of AI results and reduce biases and inaccurate outcomes. The incorporation of trustworthy datasets will help them increase the accuracy of AI-generated actions and provide additional context. Diverse datasets can also greatly improve the accuracy and fairness of AI-driven decisions.
Precisely’s Yogurtçu adds that responsibly operationalised AI, with robust data and AI governance, will play a pivotal role in bridging the gap between regulatory compliance and ethical AI adoption.
“Strengthening data quality and governance is no longer optional, it’s critical,” said Informatica’s Ergin. “Ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”