Global Relay has introduced a new AI-powered surveillance solution aimed at meeting the evolving regulatory demands for communications monitoring. The system is designed to address common challenges faced by existing surveillance tools, including poor data quality, the need for constant model retraining, and limited reasoning capabilities, while offering a scalable and future-proof platform.
AI-Driven Compliance Surveillance and Risk Detection
The new solution employs a five-layered approach that combines data standardisation, transcription and translation, noise reduction, risk identification, and streamlined alert management. By focusing on reducing false positives and improving the detection of true compliance risks, the technology enables organisations to enhance their surveillance accuracy and efficiency.
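To make the layered flow concrete, here is a minimal sketch of how such a five-stage pipeline could be wired together. The stage names, function signatures, and keyword lists are illustrative assumptions, not Global Relay's implementation.

```python
# Hypothetical sketch of the five-layer flow described above; all names and
# matching rules are illustrative, not the vendor's actual system.

def standardise(raw_messages):
    """Layer 1: normalise messages from different channels into one schema."""
    return [{"text": m.get("body", ""), "channel": m.get("channel", "unknown"),
             "metadata": m} for m in raw_messages]

def transcribe_and_translate(messages):
    """Layer 2: placeholder for voice transcription and translation."""
    return messages  # text messages pass through unchanged in this sketch

def reduce_noise(messages):
    """Layer 3: drop obvious noise such as disclaimers and spam."""
    noise_markers = ("unsubscribe", "this email is confidential")
    return [m for m in messages
            if not any(marker in m["text"].lower() for marker in noise_markers)]

def identify_risks(messages):
    """Layer 4: flag messages matching illustrative risk conditions."""
    risky_phrases = ("keep this off the record", "guaranteed return")
    return [dict(m, risk=True) for m in messages
            if any(phrase in m["text"].lower() for phrase in risky_phrases)]

def manage_alerts(flagged):
    """Layer 5: hand flagged items to an alert queue for analyst review."""
    for item in flagged:
        print(f"ALERT [{item['channel']}]: {item['text'][:80]}")

if __name__ == "__main__":
    raw = [{"body": "Keep this off the record, I can guarantee the fill.",
            "channel": "chat"}]
    manage_alerts(identify_risks(reduce_noise(
        transcribe_and_translate(standardise(raw)))))
```

In practice each layer would be far more sophisticated, but the ordering matters: noise is removed before risk identification so that fewer false positives ever reach the alert queue.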
“There are two main takeaways from the current advancements in AI, particularly within the financial industry,” observes Robert Nowacki, Compliance Surveillance SME at Global Relay, in conversation with TradingTech Insight. “The first is transcription. For years, we’ve had to rely on resource-intensive processes—like spending hours listening to dealer board recordings and sampling voice calls—to meet regulatory requirements. It was inefficient and often laborious. Now, with AI, we can intelligently and efficiently scan communications to address these requirements without wasting time or resources. This means we can do far more with less effort. For instance, we can analyse communications not only for market abuse but also for other use cases, even beyond compliance, such as tracking callbacks to clients or confirming orders. For example, if a client gives instructions over the phone, we can now ensure those instructions are confirmed and documented seamlessly. The potential use cases are virtually limitless with this technology, which has finally matured to deliver on its promise.”
He continues: “The second major area is what we can achieve once those voice communications are transcribed and integrated with other written communications. This is where generative AI (GenAI) and large language models (LLMs) come into play. These tools allow us to identify risks in ways that go beyond traditional lexicon-based methods. While lexicons are still important and will remain relevant for years to come, GenAI enables a more dynamic approach. Then, through techniques like prompt engineering, we can define the specific risks we want to identify and let the AI find them for us. This creates a significant enhancement to traditional, more prescriptive risk detection, offering greater flexibility and precision.”
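The kind of prompt engineering Nowacki describes might look something like the sketch below, where the risks to detect are defined in plain language rather than as a fixed lexicon. The template wording and the `call_llm()` stand-in are assumptions for illustration only.

```python
# Illustrative prompt template for defining risks in natural language; the
# risk list, wording, and call_llm() placeholder are hypothetical.

RISK_PROMPT = """You are a compliance surveillance assistant.
Review the conversation below and report whether it shows any of these risks:
1. Sharing of material non-public information.
2. Attempts to move the discussion to an unmonitored channel.
3. Promises of guaranteed returns to a client.

For each risk found, quote the relevant passage and briefly explain the concern.

Conversation:
{conversation}
"""

def build_risk_prompt(conversation: str) -> str:
    """Fill the template with a transcribed, normalised conversation."""
    return RISK_PROMPT.format(conversation=conversation)

# Example usage (the model call itself is out of scope for this sketch):
# response = call_llm(build_risk_prompt(transcript))
```

Changing what the system looks for then becomes a matter of editing the risk definitions in the prompt, rather than retraining a model or rebuilding a lexicon.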
Unified & Standardised Data
Central to the solution is Global Relay’s extensive range of data connectors, which integrate with business-approved communication channels to provide enriched and structured data. Features include ‘near-perfect’ voice transcription and translation across more than 50 languages, ensuring all communications are normalised into a unified format for comprehensive analysis.
“Data is absolutely critical—it’s essential to capture it directly from the source and in as much detail as possible,” notes Nowacki. “Ideally, this includes all the metadata that the source can provide. Maintaining those unique features of the data within the tool is key. Different sources will naturally provide different types of metadata, but the challenge lies in presenting this information to both the end user and LLMs in a unified format.”
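A unified record of the kind Nowacki describes could be modelled roughly as follows: a handful of normalised fields shared by every channel, with the untouched source metadata carried alongside. The field names are an assumption for illustration, not the product's schema.

```python
# A minimal sketch of a unified message record that preserves source metadata;
# field names and example values are illustrative only.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class UnifiedMessage:
    source: str                 # e.g. "mobile_voice", "persistent_chat"
    sender: str
    timestamp: str              # ISO 8601, normalised across sources
    text: str                   # transcribed and, if needed, translated body
    metadata: dict[str, Any] = field(default_factory=dict)  # raw source metadata

# Different channels contribute different metadata, but the top-level shape stays the same:
voice = UnifiedMessage("mobile_voice", "trader_a", "2024-11-05T09:12:00Z",
                       "Confirming the client's order for 10,000 shares.",
                       {"call_duration_s": 312, "language_detected": "de"})
chat = UnifiedMessage("persistent_chat", "trader_b", "2024-11-05T09:15:00Z",
                      "Let's take this offline.",
                      {"room_name": "FX-Desk-Europe", "participants": 14})
```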
The platform’s AI capabilities filter out irrelevant data, such as spam and disclaimers, while employing large language models to analyse the remaining information for compliance and conduct risks. This analysis extends to detecting evasion tactics and other obfuscation methods by examining entire conversations in context, explains Nowacki.
“By standardising the data while retaining the unique metadata from each platform, we can pre-filter and apply additional conditions more effectively. For example, if we’re dealing with chat rooms—such as persistent chat rooms that remain open and have specific names or identifiers—we can use that metadata to perform precise searches against those specific parameters. At the same time, for LLM purposes, the data is processed in a consistent, standardised format. This means that regardless of the source, all chat, mail, mobile and voice data will appear uniform, enabling the LLM to analyse it more efficiently. By eliminating the confusion caused by inconsistent formats or scattered data, we can maximise the effectiveness of AI processing while still leveraging the unique metadata for advanced filtering and analysis.”
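The two uses of the data that Nowacki outlines could be illustrated as follows: source-specific metadata drives precise pre-filtering, while the LLM sees every channel in the same flattened text view. The structure below is a hypothetical example, not the actual product behaviour.

```python
# Hypothetical illustration of metadata-driven pre-filtering followed by a
# uniform text view for the LLM; field names are illustrative only.

messages = [
    {"source": "persistent_chat", "sender": "trader_b",
     "text": "Let's take this offline.",
     "metadata": {"room_name": "FX-Desk-Europe"}},
    {"source": "mobile_voice", "sender": "trader_a",
     "text": "Confirming the client's order for 10,000 shares.",
     "metadata": {"call_duration_s": 312}},
]

def filter_by_room(msgs, room_name):
    """Pre-filter on metadata that only some sources provide (e.g. named chat rooms)."""
    return [m for m in msgs if m["metadata"].get("room_name") == room_name]

def to_llm_view(msgs):
    """Flatten any channel into the same 'sender: text' lines the model analyses."""
    return "\n".join(f"{m['sender']}: {m['text']}" for m in msgs)

print(to_llm_view(filter_by_room(messages, "FX-Desk-Europe")))
```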
Alert management tools form a critical component of the solution, with dashboards and reporting functionalities designed to streamline workflows and reduce manual effort. Analysts receive detailed compliance-based explanations for every flagged risk, providing clarity and actionable insights.
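The alert payload an analyst receives might resemble the sketch below, with a compliance-based explanation attached to each flagged risk. The structure and wording are assumed for illustration and do not reflect the product's actual alert format.

```python
# Hypothetical alert record with an attached compliance explanation;
# all field names and values are illustrative.
alert = {
    "alert_id": "A-0042",
    "risk_type": "Market abuse - potential front running",
    "channel": "persistent_chat",
    "participants": ["trader_a", "trader_b"],
    "flagged_text": "Hold your client order until I'm filled.",
    "explanation": ("The message suggests delaying a client order to benefit a "
                    "proprietary position, which may breach best-execution and "
                    "client-priority obligations."),
    "status": "open",
}

print(f"{alert['alert_id']} [{alert['risk_type']}]: {alert['explanation']}")
```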
The technology operates entirely within Global Relay’s proprietary data centres, equipped with Nvidia-powered infrastructure, ensuring robust data security and eliminating reliance on third-party cloud providers. This setup allows Global Relay to run large AI models at scale and at reduced costs while maintaining complete control over data processing.
Data Insights Report
In other news, the company has released its second annual Data Insights Report, highlighting a 400% year-on-year (YoY) increase in AI-related communication capture, driven by growing compliance risks associated with GenAI. The report, based on a global survey of 12,000 compliance and risk leaders in financial services, reveals how firms are adapting to regulatory scrutiny across communication channels. WhatsApp data capture also saw a significant rise of 258% YoY, reflecting intensified regulatory enforcement.
“Despite the growth in WhatsApp data capture, only 4.3% of firms currently monitor it,” points out Rob Mason, Global Relay’s Head of Regulatory Intelligence. “This is striking, given the nearly $3 billion in recordkeeping fines related to off-channel communications. Prohibiting WhatsApp has proven ineffective, as employees continue to use it for business purposes. Our focus is on enabling compliant use of WhatsApp and other channels by providing secure, business-friendly solutions that support safe and compliant communication.”
The report emphasises the compliance risks tied to GenAI, particularly with tools like ChatGPT, as firms aim to maintain audit trails and prepare for future AI-centric regulations. Social media communication capture is also increasing, especially in the U.S., where 38% of firms now monitor these channels, compared to just 9% in the U.K. Emerging platforms such as Instagram and YouTube are gaining traction, while traditional channels like Microsoft Teams and email remain vital, with a 13% YoY rise in Microsoft Teams adoption in 2024.
“One standout is the significant growth in social media channel capture, particularly LinkedIn,” notes Mason. “LinkedIn has become the go-to platform for financial services professionals, but it presents unique risks. For instance, users can connect with individuals without needing an email address, share information informally, or shift from social to business conversations without being monitored. A casual exchange like ‘What time is our golf tee-off?’ could easily segue into a business discussion such as ‘I’m long on dollar-yen—let me know if you find a buyer.’ This dual-use nature makes LinkedIn a rising compliance concern, similar to WhatsApp.”