
As the implementation date for the European Union’s AI Act looms, financial institutions are having to put their data estates on a secure footing to ensure they comply with the wide-ranging regulation.
The Act requires organisations to have a broad and granular view of their data in order to show that they can trace any AI-made decision back to the dataset from which it came and explain how any errors or inaccuracies might have occurred. They must be able to show that they have sought to ensure their models and agents are free from bias and are acting responsibly.
Not only that, the Act stresses more exacting guardrails around use cases that are largely specific to financial companies – the use of credit scores, anti-money laundering processes and fraud detection, for instance.
While the EU recently delayed the date of compliance for such high-risk AI to August 2027, organisations will have been preparing since it was announced in October 2024 and some will be struggling. Being able to show regulators that their data is properly governed, is explainable and has its quality maintained is proving difficult for companies with gaps in their data and controls.
Monitor and Intervene
To help them, Boston, Massachusetts-headquartered Ataccama has added functionality to its platform that helps organisations monitor, assess and intervene in their data as it passes through their pipelines and into AI processes to ensure it is compliant.
Ataccama ONE has taken capabilities that have been honed to solve a variety of data management challenges and packaged them to meet the specific demands placed on clients who are within scope of the Act. It has been designed to bring organisations up to compliance standards “without reconstructing pipeline history after the fact”, the master data management specialist said.
“As your models are providing answers to different questions that are coming in, you need to have a set of capabilities that allow you to capture why that decision was made, end-to-end,” Ataccama chief product officer Jay Limburn told Data Management Insight.
“What we’ve built is a unified platform across data quality, data lineage, data observability and data catalogue that allows clients to bring all of this information together across all of their estate.”
AI Ground Rules
The Act, which is applicable to AI use across industries, hopes to establish ground rules for the use of the technology before it becomes so widespread that retrofitting rules would be difficult. It wants to ensure that, as organisations switch from workflows based on deterministic decisions – those made by humans – to AI-driven ones, the models are trained on information that isn’t going to create problems further down the line.
Limburn said data quality is the answer to salving those concerns.
“If you start to automate those processes with AI – agents and AI models – it is based on probabilistic decisions and is making the best guess that it possibly can on a thing that it thinks is right. And it gets things wrong,” he said.
“But the better your data, the more trusted your data, the more understood your data, the more enriched your metadata, the more context you have around the data, the thinking is that those probabilistic decisions made by AIs are going to be more accurate.”
Legacy Systems a Legacy Headache
Many of the shortcomings that Ataccama ONE addresses are, in data management terms, historical ones that have existed for decades. Manual entry error, for example, is one that still dogs some institutions.
Limburn illustrates the point with reference to the experience of a US-based client who found during an audit of customer records that almost a third of them were listed as being Afghan by birth.
Thinking this unlikely for a database that was supposed to be limited to North America, the client ran a check and found that the nationality field had never been touched in its onboarding system entries. Given that the field was a drop-down box with nations listed alphabetically, those customers were registered as hailing from Afghanistan – the first option on the list.
“That was for an analytics use case, but if you put that into an AI use case with that same data set, whereby the AI agent will just work out the best answers and make decisions, who knows what I could end up with,” Limburn said.
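For illustration only – Ataccama has not published how its rules are expressed – a profiling check of the kind that would catch an untouched drop-down default might look something like the sketch below. It assumes customer records in a pandas DataFrame with a hypothetical nationality column; the threshold and column names are assumptions, not the vendor’s API.

```python
import pandas as pd

# Hypothetical customer records; in practice these would be loaded
# from the onboarding system's database.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "nationality": ["Afghanistan", "Afghanistan", "United States",
                    "Canada", "Afghanistan", "Afghanistan"],
})

# The first option in an alphabetical drop-down is a likely untouched default.
SUSPECT_DEFAULT = "Afghanistan"
THRESHOLD = 0.10  # flag if more than 10% of records share the suspect value

share = (customers["nationality"] == SUSPECT_DEFAULT).mean()
if share > THRESHOLD:
    print(f"Data quality alert: {share:.0%} of records list nationality "
          f"'{SUSPECT_DEFAULT}' - possible untouched drop-down default.")
```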
Legacy architecture is another longstanding data management hazard that remains. Limburn argues that this is an issue that is unlikely to go away as companies merge, spin off from parents or restructure, taking their data estates with them.
The impact of these hurdles is amplified in modern data setups by the sheer volume of information that now enters the systems. Managing that data, meanwhile, is all the more critical because it is also being fed into AI applications.
“The whole thing changes all the time,” said Limburn. “That’s creating a whole set of new data challenges as customers try to modernise their data.”
Retooled Platform
In the same way that many of these challenges are not new to organisations, creating solutions for them is no novelty for Ataccama. The difference now is that those solutions are being reconfigured specifically to let clients comply with the EU AI Act.
Ataccama’s retooled platform can validate data as it moves through training and inference pipelines, paying particular regard to the high-risk use cases flagged in the Act. When a violation is detected, quality gates halt the flow and route alerts to the data owner, generating a remediation ticket that ensures the issue is resolved within a governed workflow.
“We call it the data trust layer and it sits between the AI and the data,” explained Limburn. “It makes sure that as the AI systems are querying the data, we’re enriching it to inform the AI.
“That’s a real-time check as that piece of data is flowing through the system so that that data can never be propagated up into the models.”
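Ataccama has not detailed the mechanics, but the workflow it describes – validate records in-stream, block anything that fails, alert the data owner and raise a remediation ticket – maps onto a familiar quality-gate pattern. The sketch below is a minimal, hypothetical illustration of that pattern; the rules, field names and functions are assumptions for the purpose of illustration, not the vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Record:
    customer_id: int
    credit_score: float | None
    nationality: str

def quality_gate(rec: Record) -> list[str]:
    """Return a list of rule violations; an empty list means the record may flow on."""
    violations = []
    if rec.credit_score is None:
        violations.append("missing credit_score")
    elif not 300 <= rec.credit_score <= 850:
        violations.append("credit_score out of range")
    if not rec.nationality:
        violations.append("missing nationality")
    return violations

def route_to_model(rec: Record) -> None:
    print(f"record {rec.customer_id} released to the AI pipeline")

def open_remediation_ticket(rec: Record, violations: list[str]) -> None:
    # Stand-in for alerting the data owner and opening a governed remediation workflow.
    print(f"record {rec.customer_id} blocked: {violations} - ticket raised")

for rec in [Record(1, 712.0, "Canada"), Record(2, None, "")]:
    violations = quality_gate(rec)
    if violations:
        open_remediation_ticket(rec, violations)  # bad data never reaches the models
    else:
        route_to_model(rec)
```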
With the Act, and the large fines it will levy for non-compliance, about to create a financial incentive to update data quality processes, organisations should be considering one other substantial risk when assessing their capabilities.
“The biggest one in this new world, where every consumer cares about their data and their trust and security around their data, is the brand,” Limburn said. “In a world where we’ve got agents that are working on behalf of or with humans, if that company is in breach of the regulations or explainability it’s going to hurt the brand image.”