
By David Sewell, Chief Technology Officer, Synechron.
The fraud transaction takes milliseconds to clear. In that window, an agentic system has already queried three databases, cross-referenced two watchlists, and pinged the identity verification layer. It works – in the demo. Then the auditor asks where the decision log is, and nobody can find it, because the identity layer sits in a pilot two divisions over that was never designed to write to a shared record.
Walk through a tier-one financial institution and you’ll find the same pattern: a fraud detection pilot in one division, a credit risk algorithm in another, and a customer chatbot tucked away in a third. On their own, the projects hold promise. They demo well. They generate optimistic slide decks. But they don’t talk to each other.
This disconnect is about to matter far more. The next wave of AI shaping financial services is agentic – systems that act autonomously on data, evaluating transactions, coordinating identity checks or triggering compliance workflows. When those agents operate across dozens of disconnected experiments, they do not contribute to organisation-wide transformation. Instead, they entrench a new state of company-wide “pilot purgatory”, and the operational drag that follows smothers the very potential behind executive investment in AI.
CISOs must increasingly recognise this danger and become the lead internal advocate for change. Deploying agents into fragmented infrastructure multiplies risk before it delivers anything. The challenge can be broken into three barriers to overcome: fragmentation costs, fractured infrastructure, and outdated technical and organisational foundations.
The high cost of pilot purgatory
Companies frequently “wall off” experiments in controlled settings — small, sealed sandboxes to try the unorthodox while the rest of the group continues serving clients with proven techniques.
But this approach neuters agents, which are designed to operate across domains. A fraud detection agent needs to coordinate with identity verification, pull customer history from lending systems, and flag suspicious patterns to compliance in real time. Scatter that functionality across unconnected pilots and it will never be clear whether the system works end-to-end. The temporary salve is mock data – and that’s not a test. When auditors request documentation on AI risk management, institutions discover their audit trails don’t exist in any coherent form. “It’s just an experiment” is unlikely to satisfy an American, European, or British regulator.
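The coherent audit trail the paragraph above describes can be made concrete. The sketch below is purely illustrative – the agent names, fields, and `DecisionLog` class are hypothetical – but it shows the core idea: every agent, whatever its division, writes decisions to one shared, append-only record, so an end-to-end trail for any transaction can be reconstructed from a single place.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    # All names here are hypothetical, for illustration only.
    agent: str       # e.g. "fraud-detection"
    subject: str     # transaction or customer identifier
    action: str      # e.g. "flagged", "cleared", "escalated"
    inputs: dict     # which systems were consulted and what they returned
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    """Append-only log that every agent writes to, so an auditor can
    reconstruct an end-to-end decision without visiting each pilot."""

    def __init__(self):
        self._records = []

    def append(self, record: DecisionRecord) -> None:
        self._records.append(asdict(record))

    def trail_for(self, subject: str) -> list[dict]:
        # Everything any agent decided about this subject, in order.
        return [r for r in self._records if r["subject"] == subject]

log = DecisionLog()
log.append(DecisionRecord("fraud-detection", "txn-1042", "flagged",
                          {"watchlists_checked": 2, "identity": "pending"}))
log.append(DecisionRecord("identity-verification", "txn-1042", "verified",
                          {"method": "document+biometric"}))
print(json.dumps(log.trail_for("txn-1042"), indent=2))
```

The point of the sketch is the write path, not the storage: the identity layer in the opening anecdote failed the auditor precisely because it was never wired to a record like this.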
The result is a widening gap between experimental AI and enterprise AI. The former generates excitement but stays confined to pilots with limited impact — only 12% of financial firms have implemented a global, enterprise-wide AI strategy. Enterprise AI runs on unified infrastructure that enables deployment at scale while maintaining the security, auditability, and governance regulators increasingly demand.
The enterprise framework as a unifier
The march toward agentic workflows makes unified infrastructure a business requirement. Without policies and guardrails embedded at the framework level, institutions can’t control when agents act autonomously, what constraints govern their decisions, or how they coordinate with legacy systems.
Data product and platform-oriented architectures address this by treating datasets as managed assets that interact through standardised APIs and shared controls — giving agents a structured environment in which to operate rather than a patchwork of custom integrations to navigate.
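To illustrate the difference between a patchwork of custom integrations and a structured environment, here is a minimal sketch – class and product names are invented for illustration – in which each dataset is a managed product behind one standard interface, so an agent queries every source the same way:

```python
# Hypothetical sketch: each dataset is a "data product" exposing the
# same query interface, so agents need no bespoke integration per source.

class CustomerHistory:
    name = "customer-history"

    def __init__(self, records: dict):
        self._records = records

    def query(self, key: str) -> dict:
        return self._records.get(key, {})

class Watchlist:
    name = "sanctions-watchlist"

    def __init__(self, entries: set):
        self._entries = entries

    def query(self, key: str) -> dict:
        return {"listed": key in self._entries}

class Agent:
    """Discovers products by name and queries them all identically --
    the 'known interfaces' that let coordination scale across domains."""

    def __init__(self, products: list):
        self._products = {p.name: p for p in products}

    def check(self, product: str, key: str) -> dict:
        return self._products[product].query(key)

agent = Agent([CustomerHistory({"C-9": {"open_loans": 2}}),
               Watchlist({"C-7"})])
print(agent.check("customer-history", "C-9"))      # {'open_loans': 2}
print(agent.check("sanctions-watchlist", "C-9"))   # {'listed': False}
```

Adding an AML or payments product to this registry is one more entry in the list, not a new integration project – which is the scaling argument the next paragraph makes.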
The payoff is scope. When coordination happens through known interfaces, institutions can extend agentic deployment across AML monitoring, credit decisioning and real-time payments without each new domain requiring its own bespoke plumbing. That’s the difference between AI that scales and AI that accumulates.
The technical and organisational foundations required for responsible AI at scale
The infrastructure gap remains a fundamental hurdle. On the technical side, financial institutions are channelling 58% of AI budgets into data modernisation to bridge fraud detection gaps and patch legacy inefficiencies. Yet, for 18% of firms, poor data quality remains the primary barrier – a clear signal that spending doesn’t always equal solutions.
Agentic systems compound this problem. When agents chain decisions across multiple systems such as fraud checks, identity verification and payment blocking, flawed data doesn’t just produce a bad output. It potentially triggers a cascade of downstream decisions before anyone intervenes.
Many banks want to stack sophisticated AI workflows onto ageing legacy systems. The result is paralysing friction – for nearly one in five bank leaders, the fear of backing the wrong solution is now the single greatest investment risk.
Institutions must move toward standardised development pipelines. This means consistent validation, unified approval workflows, and automated deployment: proof-of-concept transformed into a production-grade system with a verifiable audit trail. This foundation makes agentic coordination and regulatory compliance a reality rather than a goal.
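A standardised pipeline of this kind reduces, in essence, to promotion gates that every model must pass identically, with each step leaving an audit entry. The sketch below is a simplified illustration – the gate names, thresholds, and sign-off roles are assumptions, not any institution’s actual controls:

```python
# Hypothetical sketch: a model reaches production only after passing the
# same validation and approval gates, and every gate leaves an audit entry.
audit_trail = []

def gate(name):
    """Wrap a check so its outcome is always recorded."""
    def wrap(check):
        def run(candidate):
            ok = check(candidate)
            audit_trail.append(
                {"gate": name, "model": candidate["id"], "passed": ok}
            )
            return ok
        return run
    return wrap

@gate("validation")
def validated(candidate):
    # Assumed threshold, for illustration only.
    return candidate["test_accuracy"] >= 0.95

@gate("approval")
def approved(candidate):
    # Assumed sign-off role, for illustration only.
    return "risk-officer" in candidate["signoffs"]

def promote(candidate) -> bool:
    # Deliberately runs every gate so the audit trail is complete.
    results = [check(candidate) for check in (validated, approved)]
    return all(results)

model = {"id": "fraud-v2", "test_accuracy": 0.97,
         "signoffs": ["risk-officer"]}
print(promote(model))       # True
print(len(audit_trail))     # 2 -- one entry per gate
```

The design choice worth noting is that the gates write the trail as a side effect of running – the verifiable record exists because promotion cannot happen without it, which is the inverse of the pilot-purgatory pattern described above.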
The way forward: beyond experimentation
Pilots had their purpose. The question now is whether institutions can make the harder transition — from environments designed to impress to infrastructure designed to last.
Banks that stay in sandbox mode aren’t standing still. Every quarter spent running disconnected experiments is a quarter spent building the wrong foundations: audit trails that won’t hold up to regulatory scrutiny, agent deployments that work in isolation and fail at scale, technical debt that compounds with each new pilot bolted onto the last.
The institution that can’t find its decision log when the auditor arrives demonstrates far more than a compliance problem. It’s evidence the strategy never left the slide deck.