A new report from Global Relay reveals a growing appetite for AI in financial compliance, with 31% of firms having already implemented AI in surveillance workflows or planning to do so within the next year.
The State of AI in Surveillance Report 2025 highlights changing attitudes across the sector, with a 19% decline in firms hesitant to adopt AI since June 2024. The shift comes as financial institutions face mounting regulatory scrutiny, including over $8 billion in SEC-issued fines last year for communication compliance failures.
The report, based on a global survey of compliance, surveillance, and risk leaders, underscores the limitations of traditional lexicon-based monitoring tools. These systems rely on pre-defined keyword lists, making it difficult to detect nuanced or evolving forms of misconduct. In contrast, AI-powered tools offer enhanced capabilities, such as reducing false positives, improving risk detection, and enabling voice transcription.
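To illustrate the limitation the report describes, the sketch below shows how a lexicon-based monitor works in its simplest form: a fixed keyword list flags any message containing a listed term, with no regard for context. The terms and messages are purely illustrative, not drawn from any real surveillance system.

```python
# Illustrative lexicon of "risk" terms a keyword-based monitor might use.
LEXICON = {"guarantee", "off the books", "inside info"}

def flag_message(message: str) -> list[str]:
    """Return every lexicon term found in the message (case-insensitive)."""
    text = message.lower()
    return [term for term in LEXICON if term in text]

# A benign compliance disclaimer still trips the keyword filter:
benign = "Our fund documentation can't guarantee future returns."
suspicious = "Keep this trade off the books until Friday."

print(flag_message(benign))      # false positive on "guarantee"
print(flag_message(suspicious))  # genuine hit on "off the books"
```

Because the filter matches words rather than meaning, the harmless disclaimer is flagged alongside the genuinely suspicious message; this context-blindness is what drives the high false-positive rates that LLM-based approaches aim to reduce.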
“Traditional AI models rely on pattern matching — they need to be trained on examples of, say, insider trading to detect similar behaviour. But this leads to high false positives, as they struggle with context and nuance,” explains Don McElligott, VP of Compliance Supervision at Global Relay, in conversation with TradingTech Insight.
“LLMs work differently. They understand concepts and can assess messages in context, in a similar way to a human. Instead of just reducing false positives, they identify genuine risk more accurately from the outset. That’s the key difference. And right now, Global Relay is the only firm capable of using LLMs to detect risk directly. Other vendors using LLMs tend to apply them after the fact — using older methods to flag potential risks, then relying on the LLM to help filter out false positives. That can work to a degree, but it’s far less effective than having the LLM identify the risks from the outset. That’s the real breakthrough.”
However, challenges remain. According to the report, data security is the most cited barrier to AI adoption, followed by budget constraints, concerns over transparency and explainability, and internal resistance.
Uncertainty around regulatory direction is also affecting adoption rates. While 31% of firms are moving ahead with AI integration, a further 38% are taking a wait-and-see approach, monitoring how the regulatory landscape evolves across jurisdictions.
“We’ve engaged with regulators to understand their perspective, and it’s been encouraging to hear them acknowledge that AI is the future of surveillance,” says McElligott. “They don’t want to hold back adoption, but they expect firms to understand and explain how it works. We use open-source LLMs with well-documented training and controls, and we carry out rigorous testing to validate performance and limitations. We then provide that information to clients so they can answer regulators’ questions confidently. Ultimately, it’s our responsibility to ensure they can explain what the model does and how it works.”
The report suggests that as compliance demands intensify, firms are increasingly recognising the role of AI in strengthening surveillance processes, despite ongoing concerns around implementation.