About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

How GenAI Is Reshaping Surveillance and Screening: Practical Takeaways for Compliance Leaders


The rapid expansion of Generative AI across financial institutions is often described in terms of technological capability, model performance, and data scale. But for compliance leaders, the more meaningful shift is organisational and operational. The recent A-Team Group webinar on GenAI and LLM case studies for surveillance, screening and scanning brought this into sharp focus. The discussion revealed not just where firms are deploying AI, but how these tools are reshaping surveillance workflows, testing governance frameworks, and altering the day-to-day behaviour of analysts.

Many firms are actively running models in production-adjacent environments, benchmarking them against human reviewers and engineering workflows to determine where AI genuinely improves outcomes. One panellist described how their teams have begun running multiple LLMs in parallel with senior analysts to expose strengths and blind spots: “We run different models against ‘the human’ to see where the potential gaps are.” This engineering-led approach – iterative, comparative, data-driven – offers a best practice for firms seeking to understand not just whether AI works, but how it works under production surveillance conditions.
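The parallel-benchmarking approach the panellist describes can be sketched in a few lines. This is a hypothetical illustration, not any firm's actual pipeline: each "model" is a stand-in callable that returns a disposition for an alert, and the function measures where each model agrees with the senior analyst's decision and which alert IDs expose potential gaps.

```python
# Hypothetical sketch: benchmark several automated reviewers against a
# human baseline on the same alert set. Model names, alert fields and
# thresholds below are invented for illustration.

def agreement_rates(alerts, models, human_decisions):
    """For each model, return the fraction of alerts where its disposition
    matches the senior analyst's, plus the IDs where they disagree."""
    report = {}
    for name, model in models.items():
        matches, gaps = 0, []
        for alert in alerts:
            if model(alert) == human_decisions[alert["id"]]:
                matches += 1
            else:
                gaps.append(alert["id"])  # potential blind spot to inspect
        report[name] = {"agreement": matches / len(alerts), "gaps": gaps}
    return report

# Stand-ins for real LLM calls (in practice these would invoke the models):
models = {
    "model_a": lambda a: "escalate" if a["score"] > 0.7 else "close",
    "model_b": lambda a: "escalate" if a["score"] > 0.5 else "close",
}
alerts = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.6}, {"id": 3, "score": 0.2}]
human = {1: "escalate", 2: "close", 3: "close"}

print(agreement_rates(alerts, models, human))
```

In this toy run, model_b's disagreement on alert 2 is exactly the kind of gap the panellist's comparison is designed to surface for human inspection.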

Panellists also highlighted an emerging challenge: the risk that junior analysts may not have the experience to question AI outputs. As one speaker put it, “The AI is very convincing; even if it is hallucinating, it’s a very convincing hallucination.” For compliance leaders, this is a reminder that GenAI deployment is more than a model-selection exercise. Compliance teams will need skills beyond merely using AI tools: they must be able to recognise when to trust AI-generated output and when to step back and re-examine the underlying evidence.

Instead of embedding manual reviews at every stage, panellists described adopting more nuanced risk-tiering strategies. Low-risk, high-volume items can be “closed by AI”, with human intervention reserved for the cases where judgement, proportionality and context matter most. In these higher-risk scenarios, analysts are being supported by AI-generated summaries, entity histories, communication extracts and adverse-media intelligence. Far from replacing the investigator, GenAI is beginning to reshape the investigative process – compressing time-to-insight while preserving accountability.
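A minimal sketch of this risk-tiering logic might look as follows. The thresholds, field names and support artefacts are assumptions made for illustration; real tiering would be calibrated to the firm's risk appetite and typology coverage.

```python
# Hedged sketch of risk-tiered alert routing: low-risk, high-volume items
# are closed by AI, while higher-risk cases go to a human with AI-generated
# context attached. All thresholds and field names are invented.

def route_alert(alert, low=0.2, high=0.7):
    if alert["risk_score"] < low:
        return {"id": alert["id"], "outcome": "closed_by_ai", "reviewer": None}
    queue = "senior_analyst" if alert["risk_score"] >= high else "analyst"
    return {
        "id": alert["id"],
        "outcome": "human_review",
        "reviewer": queue,
        # Context the investigator sees alongside the raw alert:
        "support": ["ai_summary", "entity_history",
                    "comms_extract", "adverse_media"],
    }

print(route_alert({"id": 42, "risk_score": 0.1}))
print(route_alert({"id": 43, "risk_score": 0.8}))
```

The design point is that accountability stays with the human on the cases that matter: AI closes only the lowest tier, and everything else arrives with supporting context rather than a pre-made decision.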

To counter the risk of analysts slipping into “autopilot approval mode,” one panellist described how their firm has introduced new controls such as randomised AI shut-offs, post-hoc sampling, and “review the reviewer” oversight to ensure institutional knowledge does not erode. These measures may not have been envisaged in earlier generations of surveillance design, but they now represent a critical safety net as GenAI models take on more substantive decision support.
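Two of those controls lend themselves to a simple sketch: a randomised "AI off" switch so analysts periodically work unaided, and post-hoc sampling of AI-closed items for "review the reviewer" checks. The rates below are illustrative assumptions, not figures the panellist gave.

```python
# Illustrative controls against "autopilot approval mode". Shut-off and
# sampling rates are invented for this sketch.
import random

def ai_assist_enabled(rng, shutoff_rate=0.05):
    """Randomly disable AI assistance for a fraction of cases, so analysts
    keep exercising unaided judgement."""
    return rng.random() >= shutoff_rate

def sample_for_qa(closed_by_ai, rng, rate=0.10):
    """Draw a random sample of AI-closed items for human re-review."""
    k = max(1, int(len(closed_by_ai) * rate))
    return rng.sample(closed_by_ai, k)

rng = random.Random(7)  # seeded only so this sketch is reproducible
closed = list(range(100))
print(len(sample_for_qa(closed, rng)))  # 10% of 100 -> 10 items
```

Both controls produce an audit trail as a by-product: the sampled items and the unaided decisions give governance teams evidence of whether institutional knowledge is holding up.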

The governance challenge extends beyond workflow design. Model-risk oversight was cited as one of the primary bottlenecks preventing faster adoption. Even when models perform strongly in controlled environments, production approval can be slow as governance teams interpret AI behaviour through frameworks built around earlier generations of statistical modelling. As one panellist noted, “It does take a long time to make sure that the model-governance teams are satisfied… before we can move from pilot to live usage.” Compliance innovators must therefore act as translators: mapping GenAI’s more complex reasoning patterns into documentation that aligns with internal standards while still meeting expectations from SR 11-7, the EU AI Act and other emerging frameworks.

Many institutions are discovering that GenAI works best when fed with consolidated, high-quality, cross-channel data – but their environments were not built with this in mind. The panel described ongoing efforts to centralise data into unified environments – a “single pane of glass” or data-lake architecture – so that AI can interpret trading records, communications, orders, case notes and contextual metadata in a coherent sequence. This consolidation is becoming indispensable for surveillance leaders as unstructured and unconventional data (voice, emojis, multilingual chat) increasingly carry compliance relevance.
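The "single pane of glass" idea can be illustrated by normalising heterogeneous records into one time-ordered event stream an AI reviewer could consume. The schema below is an assumption made for this sketch, not any vendor's data model.

```python
# Minimal sketch of cross-channel consolidation: trades and chat messages
# are mapped onto one Event schema and sorted into a coherent timeline.
# Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Event:
    ts: str        # ISO-8601 timestamp
    channel: str   # "trade", "chat", "voice", "case_note", ...
    entity: str    # trader / counterparty identifier
    payload: str   # normalised text content

def unify(trades, chats):
    events = [Event(t["time"], "trade", t["trader"],
                    f"{t['side']} {t['qty']} {t['symbol']}") for t in trades]
    events += [Event(c["time"], "chat", c["sender"], c["text"]) for c in chats]
    return sorted(events, key=lambda e: e.ts)  # one coherent sequence

timeline = unify(
    [{"time": "2025-01-02T09:31:00", "trader": "T1", "side": "BUY",
      "qty": 500, "symbol": "XYZ"}],
    [{"time": "2025-01-02T09:30:00", "sender": "T1", "text": "watch XYZ 👀"}],
)
print([e.channel for e in timeline])  # the chat precedes the trade
```

The sequencing is the point: a chat message followed a minute later by a related order is a pattern a model can only see once the channels share one timeline, which is why the panel treats consolidation as a prerequisite rather than an optimisation.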

An audience poll on the obstacles to GenAI adoption revealed a striking level of consensus: 82% of respondents cited meeting explainability and governance expectations as their biggest barrier – by far the highest-scoring category. This aligns closely with the panel’s lived experience: as noted above, even when a system is technically sound and performing well, the process of satisfying model-governance teams can significantly delay the move from pilot to live usage. The sentiment underscored a broader truth that compliance leaders increasingly recognise: the constraint is no longer the model, but the institution’s ability to evidence, document and govern it.

The session ultimately reinforced a key lesson: AI’s value lies not in the novelty of the models but in the discipline of their deployment. Efficiency gains – in time saved, noise reduced, and human capacity unlocked – are real and measurable. But they can only be realised sustainably when paired with strong guardrails, mature governance, and a culture that encourages challenge rather than passive acceptance.

For compliance leaders navigating this evolution, the message is clear. The future of GenAI and LLMs for compliance will be shaped as much by data foundations, holistic process understanding and governance maturity as by the sophistication of the underlying models. The firms that pull ahead will be those that integrate GenAI into their operating models without losing the critical thinking, scepticism and curiosity that remain the cornerstone of effective compliance.

