
George Tziahanas, VP of Compliance at Archive360.
Regulated enterprises are discovering that the hardest part of scaling new technology such as AI isn’t adoption; it’s proving those technologies are properly controlled. For financial institutions in particular – including banks, asset managers, insurers, and capital markets firms – this challenge is intensified by long-standing regulatory expectations around transparency, accountability and auditability. Governance has now expanded well beyond data retention into evidence for the full range of AI’s inputs and processes. This shift transforms governance from passive record-keeping into active decision provenance, where organisations must demonstrate how outcomes were reached, using which data, under which policies and with whose authority.
As a result, defensibility – the ability to prove that the full data pipeline of advanced technology is secure and well-governed – is now a top priority for businesses in regulated industries, and especially for financial institutions facing frequent supervisory reviews and audits. Companies can no longer ‘archive and forget’ – it’s crucial that they have clear, detailed insight into the full data lifecycle. And as regulatory requirements grow more stringent and data handling practices grow more complex, that’s no small feat.

Regulatory Pressures are Rising
For experienced practitioners in the data management industry, governance practices have already changed significantly over the past decade, as new regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have been introduced and enforced. Financial services firms have also had to contend with sector-specific rules governing recordkeeping, communications monitoring and risk management. Now, as the regulatory framework evolves to catch up with developments in AI, governance is no longer about storage alone; it’s about proving what happened, under which policy and with whose authority.
For example, the EU AI Act, which entered into force in August 2024 and will be fully enforced from August this year, establishes a broad risk-based framework, under which providers of general-purpose AI models must be able to disclose detailed information about their training data, including its sources and how it was processed. For financial institutions using AI rather than building it, the implications are equally significant: the Act brings new governance requirements around data lineage, meaning organisations must be able to demonstrate not only what data they possess, but which data was used to train AI models, at what time and under which authorisation.
For companies running AI systems for applications deemed high-risk – in areas such as employment decisions, credit assessment, law enforcement, education, or critical services – the governance obligations are even tougher. In financial services, this includes use cases such as credit risk assessment, fraud detection, trade surveillance, client services, and wealth management. Organisations are required to keep comprehensive records explaining how data was chosen, how the risk of biased or discriminatory outcomes was addressed, and how dataset quality was maintained. The AI Act also requires activity logging to enable traceability of outputs – enforcing audit trail expectations that go far beyond traditional data governance practices.
In short, data governance can no longer operate solely as an internal function. Organisations must be able to substantiate their governance practices through appropriate disclosures to regulators and, where required, to affected individuals. This shift from internal governance to external accountability represents a major readjustment in how businesses approach data stewardship.
Splintered Systems
To add to that complexity, high-risk data is now scattered across live systems and archives, making defensibility difficult. Financial institutions typically operate decades-old legacy platforms alongside modern cloud-based systems – from operational systems and legacy email archives to collaboration platforms, file shares, ERP databases, and countless other repositories – so data trails are increasingly hard to follow. These fragmented systems create gaps in the record, and when regulators, auditors, or the courts ask for evidence, enterprises can find themselves caught short.

On top of that, almost every system through which data passes or in which it resides comes with its own data management features, search tools, and governance mechanisms. Applying consistent policies is extremely difficult when every platform demands a distinct setup and oversight process. This is particularly problematic in capital markets environments, where communications, transaction records, and analytical data must all be governed to consistent regulatory standards.
As a result, the need to answer discovery requests or regulatory queries can force teams to hunt for relevant data across multiple disconnected systems. Put simply, the process is complex, time-consuming, and costly.
Five Steps to Defensibility
There’s no silver bullet to overcome these challenges, but companies can prioritise five key areas to push towards fully defensible, provable compliance.
- Complete audit trails: Recording interactions with data, capturing who accessed which information, when it occurred, under what conditions, and what actions were performed. These records serve as critical evidence during internal investigations or regulatory reviews.
- Immutable storage: Based on WORM (Write Once, Read Many) technology, this approach prevents alteration and preserves authenticity. Information can also be stored in designated geographic locations to satisfy data sovereignty requirements while safeguarding integrity.
- Classification-based architectures: Allowing organisations to attach metadata policies to distinct information types, ensuring correct retention periods, access rights, and handling based on classification. This fine-grained approach supports different rules for customer data, employee communications, and financial records.
- Zero-trust security models: Ensuring that even system administrators cannot access data unless they are explicitly authorised, significantly lowering the risk of breaches caused by compromised privileged accounts.
- Policy automation: Removing human error and inconsistency from governance operations. Retention rules, legal holds, and disposition processes run according to predefined policies rather than relying on individual discretion.
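To make these controls concrete, the sketch below combines two of the steps above – classification-based policy lookup and a tamper-evident audit trail – in a few lines of Python. All names, classifications, and retention periods here are hypothetical, chosen purely for illustration; a real deployment would rest on WORM storage and a dedicated governance platform rather than in-memory structures.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical classification-based policies (illustrative only,
# not regulatory guidance): each data class carries its own
# retention period and immutability requirement.
RETENTION_POLICIES = {
    "customer_data": {"retention_years": 7, "worm": True},
    "employee_comms": {"retention_years": 5, "worm": True},
    "marketing": {"retention_years": 2, "worm": False},
}

# Append-only audit trail. Each entry embeds a hash chained to the
# previous entry, so any later alteration becomes detectable.
audit_trail = []

def record_event(actor, action, record_id, classification):
    """Record who did what, to which record, when, and under
    which classification - then seal it into the hash chain."""
    prev_hash = audit_trail[-1]["hash"] if audit_trail else "genesis"
    entry = {
        "actor": actor,
        "action": action,
        "record_id": record_id,
        "classification": classification,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_trail.append(entry)
    return entry

def retention_policy_for(classification):
    """Policy automation: resolve the predefined rule for a
    classification instead of relying on individual discretion."""
    return RETENTION_POLICIES[classification]

# Example: an analyst reads a customer record; the access is logged
# and the applicable retention policy is looked up automatically.
record_event("analyst_42", "read", "rec-001", "customer_data")
policy = retention_policy_for("customer_data")
```

The hash chain is what makes the trail defensible rather than merely descriptive: an auditor can recompute each entry’s hash from its contents and its predecessor’s hash, so a silently edited record breaks the chain.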
Full defensibility in data governance is an increasingly steep hill to climb, but as data continues to proliferate and AI-enabled applications grow ever more prevalent and complex, it’s non-negotiable. For financial institutions, defensibility underpins not only regulatory compliance but also trust in automated decision-making. Only by establishing consistent, auditable governance practices can organisations keep pace with regulatory expectations and manage risk effectively.
A comprehensive data governance platform is crucial to making defensibility possible. By providing a unified, policy-driven approach to managing data across fragmented environments and automating key compliance processes, organisations can keep pace with regulatory requirements and unlock business value along the way.


