FIS is working with Anthropic to bring agentic AI into banking operations, starting with a Financial Crimes AI Agent designed to support anti-money laundering investigations. The agent is intended to assemble evidence from bank systems, evaluate activity against known typologies, and surface higher-risk cases for investigator review. BMO and Amalgamated Bank are among the first institutions working with the agent, and broader availability is planned for the second half of 2026.
The initiative points to a more governed model for AI in financial crime compliance, where the agent is not positioned as a replacement for investigators but as a way to reduce manual evidence gathering across fragmented systems. FIS says the agent will operate within FIS-controlled infrastructure, with client data remaining inside that environment and agent outputs traceable and auditable. That governance layer is likely to be a key consideration for banks evaluating whether agentic AI can be used in regulated investigative workflows.
Financial crime has been selected as the first use case because of the operational burden AML teams face. FIS cites the United Nations estimate that $2 trillion in illicit funds flows through the global financial system each year and says US financial institutions spend $35–40 billion annually on AML operations. Much of that work remains tied to manual evidence assembly before investigators can make risk-based decisions.
The Financial Crimes AI Agent is designed to connect securely to relevant bank systems, whether run by FIS or the bank, and compile case evidence at the point an investigation is opened. The objective is to reduce case review time, cut low-value manual work and improve the quality of investigative and suspicious activity report (SAR) narratives, while keeping final decisions with investigators.
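FIS has not published the agent's internals, but the workflow described above, pulling evidence from connected systems when a case opens, scoring it against typologies, logging every step for audit, and leaving the final disposition to a human investigator, follows a recognizable pattern. The sketch below is purely illustrative: every connector, rule and field name is a hypothetical stand-in, not FIS's actual design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Evidence:
    source_system: str   # e.g. core banking, card processing (hypothetical)
    record_id: str
    summary: str

@dataclass
class CaseFile:
    case_id: str
    evidence: list = field(default_factory=list)
    risk_flags: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)
    # The agent never closes a case itself; it always ends in human review.
    status: str = "PENDING_INVESTIGATOR_REVIEW"

    def log(self, action: str) -> None:
        """Timestamped entry so every agent action is traceable and auditable."""
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), action))

def assemble_case(case_id, connectors, typology_rules) -> CaseFile:
    """Gather evidence from each connected system, then score it against
    known typologies. Returns an assembled case for investigator review."""
    case = CaseFile(case_id)
    for system_name, fetch in connectors.items():
        for record in fetch(case_id):
            case.evidence.append(
                Evidence(system_name, record["id"], record["summary"]))
            case.log(f"collected {record['id']} from {system_name}")
    for rule_name, rule in typology_rules.items():
        if rule(case.evidence):
            case.risk_flags.append(rule_name)
            case.log(f"flagged typology: {rule_name}")
    return case  # final risk decision stays with the investigator

# Toy example: one stubbed connector and one simplistic typology rule.
connectors = {
    "core_banking": lambda cid: [
        {"id": "TXN-1", "summary": "outbound wire $9,900"},
        {"id": "TXN-2", "summary": "outbound wire $9,800"},
    ],
}
typologies = {
    "possible_structuring": lambda ev: sum("wire" in e.summary for e in ev) >= 2,
}

case = assemble_case("CASE-001", connectors, typologies)
print(case.risk_flags, case.status)
```

The key design point, mirroring the article, is that the pipeline ends at `PENDING_INVESTIGATOR_REVIEW` rather than an automated disposition, and the audit log records each evidence pull and each flag.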
Stephanie Ferris, CEO and President of FIS, linked the initiative to FIS's role as the data, governance and orchestration layer for bank AI deployments. "Every bank in the world wants AI that acts, not just assists," Ferris said. "The future is about a trusted provider who manages the data, who governs the agents, and who stands between your customers and the AI making decisions about their money. FIS built the architecture that orchestrates this intelligence."
Anthropic’s Applied AI team and forward-deployed engineers are working with FIS to co-design the financial crimes agent, combining Claude’s reasoning capabilities with FIS’s banking data, regulatory infrastructure, and compliance and fraud systems. Jonathan Pelosi, Head of Financial Services at Anthropic, said: “FIS brings decades of trusted relationships with financial institutions, deep regulatory knowledge, and the transaction data that makes an AI agent useful in practice. That’s why FIS chose Claude: they needed a model that could reason through complex investigations accurately, explain its work, and operate safely inside regulated workflows.”
The financial crimes agent is the first step in a broader agent roadmap. FIS says future use cases will include credit decisioning, deposit retention, customer onboarding and fraud prevention, delivered through a single governed platform. For banks, the development reflects growing pressure to move beyond AI pilots toward more operationally embedded tools that can satisfy compliance, auditability, data governance and human oversight requirements.