In late November 2024, Anthropic unveiled the Model Context Protocol (MCP), an open standard and accompanying software development kits (SDKs). MCP is designed to address specific agentic AI adoption bottlenecks. By defining a common interface through which large language model (LLM) applications discover and invoke external tools, MCP promises to transform AI deployments from a patchwork of point-to-point integrations into a unified, auditable pipeline. For compliance practitioners charged with maintaining stringent oversight, MCP represents not just a technical convenience but a way to align AI innovation with exacting compliance demands.
MCP follows a host–client–server pattern. The host is the AI application (e.g., Claude, ChatGPT or an in-house agent orchestrator). The client inside that host converts user requests into MCP messages. The server wraps a data source or tool (e.g., a trade surveillance database, an order management system (OMS) or a vector store) and exposes three object types: resources (read-only data), tools (functions with side effects) and prompts (reusable instruction templates). Messages are transported either through local standard input-output (stdio) pipes, HTTP server-sent events (SSE) or its successor, Streamable HTTP, and are always encoded as JSON-RPC 2.0.
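To make that object model concrete, the sketch below uses the MCP Python SDK's FastMCP helper (package `mcp`) to expose one read-only resource and one tool from a hypothetical trade surveillance store. The server name, URI scheme and stub data are illustrative assumptions, not part of the protocol; the SDK handles the JSON-RPC 2.0 framing.

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
# The data source, URI scheme and tool are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("trade-surveillance")

@mcp.resource("surveillance://alerts/{alert_id}")
def read_alert(alert_id: str) -> str:
    """Read-only resource: return one surveillance alert as JSON text (stubbed)."""
    return f'{{"alert_id": "{alert_id}", "status": "open"}}'

@mcp.tool()
def search_alerts(account_id: str, limit: int = 10) -> list[dict]:
    """Tool: search alerts for an account (stubbed; a real server would query the OMS or archive)."""
    return [{"account_id": account_id, "rule": "wash-trade", "score": 0.92}][:limit]

if __name__ == "__main__":
    mcp.run()  # serves JSON-RPC 2.0 over stdio by default
```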
A ‘USB-C port’ for AI

OpenAI’s agent SDK documentation describes MCP as “a USB-C port for AI applications” because once a server implements the protocol, any compliant client can discover its tools automatically and invoke them without custom glue code. This plug-and-play promise is crucial for compliance teams, which often deal with legacy systems that were never designed to interact with generative models.
The official MCP website lists more than 1,000 community servers and over 70 compatible clients as of June 2025. Early enterprise adopters include Block and Apollo, which have integrated MCP connectors into internal repositories and messaging platforms to give AI assistants live context without breaching data segmentation policies.
Broker-dealers have steadily expanded their use of AI well beyond simple chatbots or automated customer service tools. As early as June 2020, FINRA’s Report on Artificial Intelligence in the Securities Industry noted that firms were deploying machine learning systems to scan communications and flag trading anomalies.
By January 2025, FINRA’s Annual Regulatory Oversight Report confirmed that generative AI pilots now touched almost every compliance domain, from anti-money laundering (AML) screening to books and records production, underscoring both the appetite for AI efficiency and the urgency of controlling its risks.
Despite this enthusiasm, two persistent obstacles hamper broad rollout. First, integration overhead remains high: each new data source (e.g., an order management system, voice recording archive or private equity ledger) demands bespoke connectors and lengthy development cycles. Second, data governance risk looms large. Firms must ensure that sensitive customer information, privileged communications and model training outputs cannot become vectors for bias or unauthorized disclosure, and that every AI-driven decision remains auditable under SEC and FINRA rules.

MCP addresses these pain points by funnelling every tool invocation through a single JSON-RPC gateway. This unified interface means that entitlement checks, multi-factor authentication and real-time logging need only be configured once, greatly reducing the burden on IT teams. For compliance officers, the payoff is immediate: whether an AI agent queries trade surveillance histories or extracts chat transcripts, all calls inherit the gateway’s security policies and are recorded in a consistent, machine-readable format.
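As an illustration of the “configure once at the gateway” idea, the framework-agnostic fragment below shows a check that could sit in front of an MCP server: it logs every tools/call request and rejects tools outside a firm-approved list. The tool names, entitlement set and policy shape are assumptions for the sketch, not part of the protocol.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

# Hypothetical firm-approved entitlement list
APPROVED_TOOLS = {"search_alerts", "fetch_chat_transcripts"}

def authorize_and_log(request: dict, user_id: str) -> dict:
    """Gateway policy for one JSON-RPC message: log it and block unapproved tool calls."""
    if request.get("method") == "tools/call":
        tool = request["params"]["name"]
        if tool not in APPROVED_TOOLS:
            raise PermissionError(f"Tool '{tool}' is not entitled for user {user_id}")
        logging.info(
            "mcp_call user=%s tool=%s ts=%s",
            user_id, tool, datetime.now(timezone.utc).isoformat(),
        )
    return request  # forward unchanged to the wrapped MCP server
```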
Equally important is MCP’s capacity to generate auditable context. Each request returns not only the data payload but also structured metadata, including timestamps, user identifiers, parameter hashes and server version stamps. Firms can ingest this metadata into their existing audit trail systems, helping satisfy requirements under Exchange Act Rule 17a-3 and FINRA Rule 4511 for recreating original records if altered. This automated “evidence wrapper” transforms AI-driven workflows into fully documented, regulator-ready processes with minimal human intervention.
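The sketch below shows what such an “evidence wrapper” might look like when assembled around a tool invocation. The field names, and the choice to hash request parameters rather than store them raw, are illustrative assumptions rather than fields mandated by the protocol.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_audit_record(user_id: str, tool: str, params: dict,
                       server_version: str, result_summary: str) -> dict:
    """Assemble a structured audit entry for one MCP tool invocation."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        # Hash rather than store raw parameters so sensitive values stay out of the log
        "params_sha256": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest(),
        "server_version": server_version,
        "result_summary": result_summary,
    }

# Example: feed the record into an existing audit-trail or WORM store
record = build_audit_record("analyst-042", "search_alerts",
                            {"account_id": "A123", "limit": 10},
                            "trade-surveillance/1.4.2", "3 alerts returned")
print(json.dumps(record, indent=2))
```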
Perhaps most striking for capital markets compliance teams is MCP’s promise of rapid onboarding for new regulatory and market data feeds. For example, under the European Union’s revised MiFIR framework, consolidated tape providers (CTPs) will stream bond, equity and derivatives transactions via standard APIs into a continuous live feed. Wrapping that tape in an MCP server means that a surveillance or best execution agent can consume the new feed simply by registering its URL, with no code changes, retraining or service downtime required. The same plug-and-play simplicity applies to AML workflows: the U.S. Treasury’s OFAC Sanctions List Service publishes its Specially Designated Nationals (SDN) and non-SDN lists via public APIs, enabling KYC agents to resolve entities against the latest sanctions data in real time.
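As a sketch of the “wrap the feed in a server” pattern, the example below exposes a sanctions-screening lookup as an MCP tool. The endpoint URL, query parameters and response shape are placeholders and do not reflect the real OFAC Sanctions List Service contract.

```python
# Hypothetical wrapper around a sanctions-screening API; endpoint and response
# fields are placeholders, not the actual OFAC service contract.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sanctions-screening")
SCREENING_ENDPOINT = "https://sanctions.example.gov/api/search"  # placeholder URL

@mcp.tool()
def screen_entity(name: str, min_score: float = 0.85) -> list[dict]:
    """Return candidate SDN / non-SDN matches for a counterparty name."""
    resp = httpx.get(SCREENING_ENDPOINT,
                     params={"name": name, "minScore": min_score},
                     timeout=10.0)
    resp.raise_for_status()
    return resp.json().get("matches", [])

if __name__ == "__main__":
    mcp.run()
```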
Together, these capabilities transform agentic AI deployments from a collection of custom integrations into a repeatable, standardized pipeline. Rather than allocating resources to build and maintain dozens of bespoke connectors, technology and compliance teams can focus on fine-tuning risk mitigation rules and model validation frameworks. For RegTech executives and senior practitioners, MCP thus represents a strategic control point, one that aligns cutting-edge AI innovation with the rigorous audit and security mandates of capital markets regulation.
MCP is deliberately narrow: it does not choose which tool to call or evaluate whether the model’s response is compliant. Those orchestration and risk management layers remain the institution’s responsibility, as IBM’s technical overview stresses. Nor does MCP obviate SR 11-7-style model risk validation; rather, it centralises the data lineage required for those tests.
Regulatory considerations
Regulators have begun flagging AI-specific risks, including data bias, privacy, supervisory control systems and recordkeeping; see FINRA guidance and FINRA Regulatory Notice 24-09, which explicitly addresses generative AI governance. MCP does not, by itself, solve these risks, but it creates clear interception points where compliance controls can be enforced:
- Bias mitigation. Tool outputs can be validated or filtered before being forwarded to the model.
- Privacy filtering. Sensitive fields can be redacted at the server layer before context reaches an LLM.
- Governance hooks. Since clients enumerate available tools up front, firms can implement allow/deny lists consistent with internal policies, an approach already documented in the OpenAI Agents SDK; a minimal sketch follows this list.
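The SDK-agnostic sketch below illustrates the allow/deny pattern: filter the tool list a client enumerates at start-up before any of it is exposed to the model. The tool names and policy sets are hypothetical, and a given SDK (such as the OpenAI Agents SDK) may provide its own built-in filtering mechanism.

```python
ALLOWED = {"search_alerts", "screen_entity"}        # firm-approved tools (hypothetical)
DENIED = {"delete_records", "send_external_email"}  # explicitly blocked (hypothetical)

def filter_tools(discovered: list[dict]) -> list[dict]:
    """Keep only tools permitted by policy; `discovered` is the result of a tools/list call."""
    permitted = []
    for tool in discovered:
        name = tool.get("name", "")
        if name in DENIED or name not in ALLOWED:
            continue
        permitted.append(tool)
    return permitted
```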
Outlook and a Call to Action
Gartner lists agentic AI among its top strategic technology trends for 2025, citing the need for standardised tool interfaces as a precondition for scale. By abstracting legacy systems behind a common protocol, MCP positions itself as a core piece of that infrastructure. OpenAI, Anthropic and Microsoft have each shipped first-party MCP clients, indicating multi-vendor momentum.
For capital markets compliance officers who must balance innovation with stringent regulatory duties, the protocol offers a pragmatic path: integrate once, supervise centrally, and iterate quickly as new rules and datasets emerge.
Are you ready to transform compliance from reactive to strategic? Join A-Team Group at RegTech Summit London on 16 October 2025, where industry leaders and regulator insiders come together to surface real-world use cases, practical frameworks, and emerging AI governance models.
Whether you’re focused on regulatory reporting reform, AML scalability, or building resilient, modular infrastructure in the post-DORA era, the Summit delivers the strategic insights you need. Visit the event page now to see the full agenda, view confirmed speakers and register your place.