Financial institutions are under pressure to put their data estates in order as the European Union’s artificial intelligence regulation comes into force this week, threatening huge fines for failures to observe its tough rules on the safe and fair use of the technology.
Nevertheless, industry observers said the introduction of stringent measures that will place new compliance burdens on institutions is unlikely to hamper innovation in AI applications and their resultant content, and should also have a positive impact on data management practices.

The EU AI Act is the world’s first comprehensive regulation covering the fast-growing technology and will require financial institutions to put robust and demonstrable data management strategies in place to ensure their AI systems don’t breach safety and privacy guidelines.
With fines of as much as €35 million, or 7 percent of annual turnover, at stake, data managers must be vigilant, said Niamh Kingsley, Head of Product Innovation and Artificial Intelligence at Delta Capita.
“The AI Act turns data management from a backend function into the frontline of regulatory exposure,” Kingsley told Data Management Insight. “Institutions that haven’t elevated their data lineage, documentation, and bias mitigation protocols will find themselves racing the clock.
“Financial institutions must now demonstrate that their training and validation datasets are representative, bias-checked, fully documented and traceable. This isn’t just about technical hygiene, it’s a legal imperative.”
Far-Reaching
The Act was passed in 2023 to put guardrails around the use of AI to protect the public from misuse of data and covers any financial institution that operates an AI system, supplies AI services or whose “output” from an AI system is used within the bloc. The Act classifies AI systems according to the level of risk they pose and its rules reach far across the economy. At its most stringent, the Act bans the use of facial recognition and any other potentially discriminatory technology.
Financial institutions are expected to be most affected by the rules covering the use of “high-risk” AI systems, such as those used in credit evaluations and risk and portfolio management. Key requirements for those in scope include ensuring their data is of high quality – meaning it must be representative, relevant and free of bias – providing human oversight mechanisms (“humans-in-the-loop”), and implementing cybersecurity and robust governance.
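The bias-check requirement can be made concrete with a toy example. The sketch below is illustrative only – the group labels, data and 10 percent threshold are assumptions for the example, not figures from the Act – and computes a simple demographic-parity gap between two applicant groups in a hypothetical credit-approval dataset:

```python
# Illustrative only: a toy demographic-parity check on a hypothetical
# credit-approval dataset. Group names, data and the 10% threshold are
# assumptions for this example, not requirements stated in the AI Act.

def approval_rate(records, group):
    """Share of approved applications within one group."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    return sum(r["approved"] for r in in_group) / len(in_group)

def parity_gap(records, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(records, group_a) - approval_rate(records, group_b))

applications = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

gap = parity_gap(applications, "A", "B")  # 2/3 vs 1/3 -> gap of 1/3
print(f"parity gap: {gap:.2f}")
print("flag for review" if gap > 0.10 else "within threshold")
```

In practice such checks would run over full training and validation sets, across every protected attribute, with the results fed into the documentation the Act expects firms to retain.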
“This is the first true test of AI supply chain transparency,” said Levent Ergin, Chief Climate, Sustainability & AI Strategist at Informatica. “If you can’t show where your data came from or how your model reasoned, your organisation’s data is not ready for AI.”
Kingsley concurs.
“It’s not just about what data you use; it’s how you source it, govern it, and prove it,” she said. “The AI Act demands auditable, transparent, and bias-mitigated data pipelines, with technical documentation that can withstand regulatory scrutiny.”
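What an “auditable, transparent” pipeline might actually record can be sketched in a few lines. The schema below is a hypothetical minimal lineage entry – the field names are this example’s assumption, not a format prescribed by the Act – which fingerprints each dataset version so a later audit can verify that what was documented is what was deployed:

```python
# Illustrative only: a minimal data-lineage record. The schema is a
# hypothetical example of what "auditable and traceable" could mean in
# practice; the AI Act does not prescribe this format.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class LineageRecord:
    dataset_name: str
    source: str          # where the data came from
    transformation: str  # what was done to it
    content_hash: str    # fingerprint of this dataset version

def fingerprint(rows):
    """Stable SHA-256 hash of a list of records."""
    blob = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

rows = [{"customer_id": 1, "income": 52000},
        {"customer_id": 2, "income": 41000}]
record = LineageRecord(
    dataset_name="credit_training_v1",
    source="core_banking_extract_2025_07",
    transformation="dropped rows with missing income",
    content_hash=fingerprint(rows),
)
print(json.dumps(asdict(record), indent=2))
```

Because the hash is deterministic, any silent change to the underlying data produces a mismatch against the documented record – the kind of evidence trail regulators could ask firms to produce.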
Gradual Introduction
Rules covering high-risk practices came into force in February, but from this week companies must comply with the Act’s full transparency, documentation and due diligence mandates, with no further delay or grace period.
For many in capital markets, the Act is a welcome piece of legislation.
“The EU’s AI Act will push financial firms to take a closer, and in some cases long overdue, look at the quality of the data powering their AI systems,” said Gus Sekhon, Head of Product at Finbourne Technology.
“While AI can enhance workflows with powerful features and capabilities, firms must be able to explain the models utilised and trust the quality of the data underpinning them.”
The Act dovetails with existing regulations, including DORA and GDPR, in seeking to reduce data risks and boost security. This alignment is seen by some as a streamlining benefit for compliance teams, even if it does require more data management.
Others see further benefits, too, from the Act’s requirement that institutions’ data estates be structured to meet its obligations. Finbourne’s Sekhon, for instance, suggests that asset managers – who have been relatively slow to adopt AI – could be catalysed to “re-evaluate their incumbent data management processes” and fully harness AI in sector-useful applications such as filling in due diligence questionnaires and requests for proposals.
Banking data management specialist Joanne Biggadike also sees opportunities for institutions in being required to strengthen their governance and other policies and processes.
“By calling out data governance, transparency, human in the loop and data ethics as minimum requirements, far from hindering innovation, this proportionate stance offers a framework to empower the creation of new and innovative content, without disregarding vital considerations such as data quality, reasonableness, fairness, and transparency,” Biggadike told Data Management Insight.
Global Adoption
The Act is not without its critics, although most challenges to its provisions have come from outside the financial sector. The concern shared by institutions and data managers alike is how the Act’s provisions will permeate the rest of the world and how that might shape the approaches of other jurisdictions.
There is some expectation that, like the EU’s sustainability reporting codes, the AI Act will set something of a benchmark for other nations to follow. However, there is a risk that any eventual dislocation between regimes will simply compound the safety and privacy risks the Act seeks to address.
“We are seeing a divergence in how countries across the globe regulate AI,” said Informatica’s Ergin. “It’s simply too complex, though, for large international companies to do AI regulation on a country-by-country basis. Instead, they need a core AI data governance framework putting data quality, lineage, governance and control at the centre.”
Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, the enterprise IT division of cloud software provider Zoho, agrees.
“While the EU takes a lead, global alignment is urgently needed,” Ramamoorthy said. “If we end up with a fragmented landscape of conflicting rules, it could stall innovation, creating a compliance nightmare for businesses operating across borders.”