
Data technology firm DiffusionData has released an open-source server designed to connect Large Language Models (LLMs) with real-time data streams, aiming to facilitate the development of Agentic AI in financial services. The new Diffusion MCP Server uses the Model Context Protocol (MCP), an open standard for AI models to interact with external tools and data sources.
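In broad terms, an MCP server exposes a set of named "tools" that an LLM-driven assistant can discover and invoke. The sketch below, written against the open-source MCP Python SDK, shows that general pattern only; the tool name, its parameter and the stubbed topic lookup are hypothetical illustrations, not DiffusionData's published interface.

```python
# Minimal sketch of an MCP server exposing a "tool" an LLM can call.
# Uses the open-source MCP Python SDK (pip install mcp); the tool name and
# the stubbed topic lookup are hypothetical, not DiffusionData's actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("diffusion-demo")


@mcp.tool()
def get_topic_value(topic_path: str) -> str:
    """Return the latest value published on a streaming topic (stubbed here)."""
    # A real integration would query the streaming platform; a canned value
    # keeps the example self-contained.
    return f"latest value for {topic_path}: 101.25"


if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable assistant can discover
    # and invoke it in response to a natural-language request.
    mcp.run()
```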
The server enables AI assistants to interact with DiffusionData’s platform using natural language commands. This allows technical teams to perform operational and monitoring tasks – such as querying data streams or configuring system metrics – through conversational interfaces rather than writing code. The initiative is part of a wider industry trend exploring how autonomous agents can act on live data, moving beyond traditional analytics based on historical information.

Raphael Vergnaud, Chief Revenue Officer at DiffusionData, explains the company’s longer-term strategy to TradingTech Insight. “MCP and our agents have initially been designed to interact with data streams in natural language, to consume data, but our vision – especially in capital markets – is to evolve into intelligent participants. This means moving beyond simply piping streaming data to models and instead combining real-time data with LLMs to build intelligence directly into the flow. We see this happening in stages: first establishing the right framework and infrastructure for streaming data; then adding intelligence; and ultimately enabling a controlled degree of autonomy where agents can act on that intelligence. In financial services, this must be underpinned by safeguards – security, transparency, auditability, and control. We expect a rapid evolution through these steps, and what we are building now is the foundation for that progression.”
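At a mechanical level, the conversational tasks described above resolve into structured MCP requests rather than bespoke code: the assistant’s host translates a natural-language instruction into a JSON-RPC “tools/call” message. A representative request shape is sketched below, with the tool name and arguments illustrative rather than a documented DiffusionData command.

```python
# Representative MCP "tools/call" request (JSON-RPC 2.0) that a host might send
# once the LLM decides to use a tool; the tool name and arguments are
# illustrative placeholders, not a documented DiffusionData command.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_topic_value",
        "arguments": {"topic_path": "markets/fx/EURUSD"},
    },
}
```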
Standardising AI and Data Communication
A significant technical hurdle for deploying AI in trading environments is bridging the gap between the static nature of LLMs and the dynamic, low-latency requirements of market data. DiffusionData’s approach is to use the MCP Server as a standardised communication layer that decouples the AI model from the underlying data infrastructure. This architecture is intended to allow firms to adopt new LLMs as they become available without re-engineering their data pipelines.

“MCP serves as a standardised language between our technology and LLMs – a format they inherently understand,” notes Huw Rees, Engineering Technology Officer at DiffusionData. “It’s designed to be flexible, so it can work with new models from day one. This architecture allows LLMs to interpret the DiffusionData environment, request the data they need, and receive it in a transparent, granular form. That level of detail lets users build what they want – whether using standard tools or proprietary systems – as long as any specific nuances are communicated to the LLM.”
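The decoupling Rees describes is visible on the client side of the protocol: because tool discovery and invocation are standardised, any MCP-capable host can connect to the same server regardless of which LLM it wraps. The sketch below uses the MCP Python SDK and assumes the hypothetical example server shown earlier is saved as server.py; it is a pattern illustration, not DiffusionData’s tooling.

```python
# Hedged sketch of the MCP client/host side: any MCP-capable host, whichever
# LLM it wraps, discovers the server's tools and invokes them the same way.
# Assumes the earlier example server is saved as "server.py"; names are illustrative.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # model-agnostic tool discovery
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "get_topic_value", arguments={"topic_path": "markets/fx/EURUSD"}
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```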
The company also highlights model latency as a key barrier to adoption in capital markets. Its design brings AI inference capabilities to the data stream, a method intended to reduce delays associated with moving large datasets to a separate AI environment for processing.
Architectural Approach and Security
DiffusionData states that its background in real-time data distribution informs its strategy for integrating AI. The MCP Server is built upon the company’s existing platform, which was designed for scalable and secure data streaming. As a result, the server incorporates features such as role-based access controls and auditable event streams to log and trace AI interactions.
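The article does not detail how those controls are surfaced, but the underlying pattern is a familiar one: gate each tool invocation on the caller’s role and write an audit record before acting. The sketch below is purely illustrative; the role names, audit format and helper functions are assumptions, not DiffusionData’s implementation.

```python
# Purely illustrative sketch of gating a tool call on a caller's role and
# recording an auditable event before acting. Role names, the audit format and
# the helper functions are assumptions, not DiffusionData's implementation.
import json
import time

ROLE_PERMISSIONS = {
    "operator": {"get_topic_value", "list_topics"},
    "viewer": {"list_topics"},
}


def audit(event: dict) -> None:
    # A real deployment would append to a durable, queryable event stream.
    print(json.dumps({"ts": time.time(), **event}))


def invoke_tool(role: str, tool: str, arguments: dict) -> str:
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    audit({"role": role, "tool": tool, "arguments": arguments, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"role '{role}' may not call '{tool}'")
    return f"executed {tool} with {arguments}"


# Example: an operator may query a stream; a viewer may only list topics.
print(invoke_tool("operator", "get_topic_value", {"topic_path": "markets/fx/EURUSD"}))
```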
“What sets us apart is that we begin with data streaming rather than the LLM – that’s a fundamentally different way of solving this problem for organisations,” observes Vergnaud. “The DiffusionData platform has provided this integration layer for years, supported by a gateway adapter framework that connects to external systems. This is simply another form of moving data from A to B, but it builds on that deep experience. And from the outset, everything has been designed with strict safeguards – per user, per feed, per topic – which is critical and built in by default.”


