
Now the MCP Layer is Commoditised, Are Entitlements the Next Challenge?

LSEG today added Amazon Quick to the growing list of AI-enabled workspaces in which its licensed data and analytics are exposed via a Model Context Protocol (MCP) server, giving customers natural-language and agentic access to pricing, fundamentals, estimates, ownership data, macroeconomic indicators, ESG content and analytical models. The move is the latest step in “LSEG Everywhere,” the firm’s strategy of meeting customers wherever their AI workflows happen to sit.

A Connection Layer That Is No Longer a Differentiator

The list of places those workflows now sit is long, and getting longer. LSEG has already announced MCP-based partnerships with Anthropic for Claude for Financial Services and with Microsoft for Copilot Studio, alongside named tie-ups with AWS, OpenAI, Snowflake and Databricks. S&P Global, working through its Kensho AI hub, announced its own Amazon Quick Research integration five months ago, alongside MCP-based connectors for Claude, a verified ChatGPT app, and a separate AI-Ready Data MCP server for energy and commodities content. FactSet launched what it billed as the industry’s first production-grade MCP server in December 2025, following a beta with 45 firms and more than 800 institutional users, and contrasted its approach with demo-grade or warehouse-dependent offerings. Bloomberg, more cautious about external exposure, has converged its internal GenAI tools protocol on MCP and is engaged in industry-level work on the protocol primitives that financial workloads need – async tools, structured outputs and HTTP-only transport among them. The competitive question is no longer whether to expose licensed data via MCP, but across how many surfaces, with how much governance, and on what terms.

By way of background, MCP is the open standard that Anthropic released in late 2024 and donated to the Linux Foundation’s Agentic AI Foundation in December 2025, since adopted across major foundation-model providers and enterprise data platforms. Its relevance to institutional data teams is that it standardises how an AI agent inside a host application – Claude, ChatGPT, Amazon Quick, an Excel or PowerPoint copilot – discovers and calls licensed data sources without bespoke per-vendor integration. The connection layer, in other words, is being commoditised. That is the bit today’s announcement from LSEG confirms.
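
For readers who have not yet touched the protocol, the sketch below shows roughly what exposing a single licensed-data tool looks like with the open-source MCP Python SDK. The server name, tool signature and stub payload are illustrative only, not any vendor’s actual interface.

```python
# A minimal sketch of an MCP server exposing one licensed-data tool,
# using the open-source MCP Python SDK (FastMCP). All names are
# illustrative; no vendor's real interface is shown.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-market-data")  # hypothetical server name

@mcp.tool()
def price_history(ric: str, start: str, end: str) -> list[dict]:
    """Return daily close prices for an instrument between two ISO dates."""
    # A real deployment would call the vendor's licensed API here;
    # this stub exists only so the shape of the contract is visible.
    return [{"date": start, "ric": ric, "close": 0.0}]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; HTTP transports also exist
```

The point for a data manager is that the contract an agent discovers is essentially this function signature and docstring – and, by default, there is nothing in it about who is asking.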

Whose Identity Is the Agent Acting Under?

What it does not address is the harder question underneath: when the connection is live, who is the agent acting as?

In capital markets, entitlements are granular and contractual. Front-office desks see content that back-office and compliance functions cannot, even within a single firm. Vendor data-redistribution clauses set explicit limits on what derived outputs can be persisted, shared internally, or exposed to a downstream client. None of that maps cleanly onto an architecture in which a non-human identity – the agent’s service account – fronts data requests on behalf of an end user whose identity must somehow be carried through the call chain and reflected in the response.

The industry has, in fairness, been working on this. LSEG itself has been vocal in industry discussions on data classification and entitlement standards as the foundation of AI governance in financial markets. And on the application-platform side, Genesis Global has built MCP on top of an existing entitlements framework so that “every piece of data has entitlements attached, specifying exactly who can see it,” while FINBOURNE describes its Claude integration as enabling agentic actions “with full entitlement checks and data lineage.” It is notable that the application platforms have foregrounded their entitlement model in this way, while the major content vendors – including in today’s announcement – have so far been less explicit about how user-level entitlements propagate through their MCP servers. That detail is what a buy-side data manager assessing the stack will want to see.
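
To make the identity question concrete, here is a deliberately toy sketch of checking the on-behalf-of user inside a tool handler. The entitlement table, claim names and dataset codes are invented for illustration; nothing here reflects any vendor’s published model.

```python
# Hypothetical per-user entitlement enforcement inside an MCP tool
# handler. The in-memory entitlement table, claim names and dataset
# codes are invented; no vendor's real model is shown.
from dataclasses import dataclass

# Toy entitlement table: which end users may see which datasets.
ENTITLEMENTS = {"analyst@fund.example": {"PRICES", "ESG_SCORES"},
                "ops@fund.example": {"PRICES"}}

@dataclass
class CallerContext:
    agent_id: str      # the non-human identity fronting the request
    on_behalf_of: str  # the end user's identity, carried through the call

def fetch_esg_scores(ctx: CallerContext, isin: str) -> dict:
    # The check is against the human the agent acts for, not against
    # the agent's own service account -- the distinction at issue.
    if "ESG_SCORES" not in ENTITLEMENTS.get(ctx.on_behalf_of, set()):
        raise PermissionError(f"{ctx.on_behalf_of} not licensed for ESG_SCORES")
    return {"isin": isin, "esg_score": 71.3,      # stubbed payload
            "audit": {"agent": ctx.agent_id,      # lineage for audit
                      "user": ctx.on_behalf_of}}

# Same instrument, two callers: the first succeeds, the second is blocked.
ctx_ok = CallerContext("svc-claude-prod", "analyst@fund.example")
ctx_no = CallerContext("svc-claude-prod", "ops@fund.example")
print(fetch_esg_scores(ctx_ok, "US0378331005"))
try:
    fetch_esg_scores(ctx_no, "US0378331005")
except PermissionError as err:
    print("blocked:", err)
```

The unsolved part, as noted above, is how the on_behalf_of claim gets into that context reliably across host applications – which is precisely the identity-propagation work now happening at the protocol level.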

A Vendor Portfolio Rebuilt One Layer Up

The second open question is what happens when every major vendor has an MCP endpoint. Institutional users will not consume one – they will consume a portfolio. LSEG, S&P Global, FactSet, Bloomberg, plus a long tail of alternative-data and commodity-data providers, each with its own authentication model, audit pattern, versioning cadence and rate-limit behaviour. FactSet has written explicitly about the governance pattern this implies: central tool registries, proxied access, controller/worker hierarchies. Databricks is positioning its Marketplace as the catalogue layer where buy-side firms discover and govern external MCP servers alongside their own, and Snowflake is moving in a similar direction. Reports from the wider technology community already flag context bloat – agents pulling in too much from too many tools – and a lack of synchronisation across MCP servers within a single organisation as live operational issues. The buy-side’s familiar challenge of managing a portfolio of vendor relationships is, in effect, being rebuilt one layer further up the stack. And while the data side is settling, the execution side remains nascent: MCP’s use as a conduit for trade instruction, rather than data retrieval, is only just being attempted.
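
The gateway pattern FactSet and the platform vendors allude to can be illustrated with a toy registry. The vendor names, URLs, auth modes and limits below are invented, and a production gateway would of course add token exchange, caching and failure handling that this sketch omits.

```python
# Hypothetical sketch of the "internal MCP gateway" pattern: a central
# registry of external vendor MCP endpoints through which all agent
# traffic is proxied, so authentication, audit and rate limits live in
# one place. All endpoints and limits are invented for illustration.
from dataclasses import dataclass

@dataclass
class VendorEndpoint:
    url: str          # the vendor's MCP server (illustrative URL)
    auth: str         # each vendor's own authentication model
    rate_limit: int   # requests per minute the gateway enforces

REGISTRY = {
    "vendor_a": VendorEndpoint("https://mcp.vendor-a.example", "oauth2", 600),
    "vendor_b": VendorEndpoint("https://mcp.vendor-b.example", "api_key", 120),
}

def route(tool: str, user: str) -> VendorEndpoint:
    """Resolve a namespaced tool like 'vendor_a.price_history' via the registry."""
    vendor, _, name = tool.partition(".")
    endpoint = REGISTRY[vendor]
    # A single choke point means one audit trail across the whole portfolio.
    print(f"audit: user={user} tool={name} via={endpoint.url}")
    return endpoint

route("vendor_a.price_history", "analyst@fund.example")
```

Whether this choke point ends up being built in-house, bought from a marketplace catalogue, or outsourced to an intermediary is exactly the adoption question the next section raises.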

Tests the Vendors Have Yet to Pass

Three things are worth tracking from here. The first is industry-level work on the protocol itself. Identity propagation, async tools and structured outputs are the primitives that determine whether agentic access can meet the contractual obligations of regulated buy- and sell-side firms. None of them is solved at the protocol level today. The second is the entitlement model that each vendor publishes alongside its MCP server – or, more tellingly, declines to. The third is which governance layer wins enterprise adoption: an internal MCP gateway built by the data-management team, a vendor marketplace catalogue, or a third-party intermediary that none of the incumbents has yet named.

Today’s announcement reads as a confirmation that the connection layer is settling into place, with the interesting questions now sitting one layer up. Whether the industry’s hard-won data-management discipline – entitlement propagation, audit, redistribution control, vendor governance – travels cleanly into the agentic era, or has to be rebuilt from scratch in this new operating context, is the question the next eighteen months will answer.
