AI is often seen as a disruptive force, but it is just as much an evolution as a revolution. Machine learning and automation have long been embedded in financial workflows—the key difference today is the scale and power of AI-driven technologies.
As these capabilities expand, how should firms navigate data governance, intellectual property protection, and licensing? Should policies regulate the technology itself, or does a use-case-based approach provide a more effective balance between innovation and compliance?
Balancing AI Adoption with Licensing Complexity
Data licensing in financial markets is highly complex, with none of the standardised frameworks found in other industries. Instead, firms navigate a fragmented landscape shaped by a vast number of data providers, each with its own policies. At the same time, banks, asset managers, and other institutions are continuously developing new AI applications—often within different divisions of the same organisation. This constant evolution makes it increasingly difficult for data providers to track, and therefore license, where and how their data is being used.
“At its core, our approach to AI is based on licensing the ‘what’ rather than the ‘how’,” explains Debbie Lawrence, Head of D&A Data Strategy and Management at London Stock Exchange Group (LSEG). “As long as our clients have the correct license for their use case, we don’t dictate the technology they use—whether that’s human analysts, traditional machine learning, or large language models (LLMs). We don’t license the technology itself; we license the use case, and that’s a key differentiator.”
Licensing structures must evolve to keep pace with AI-driven data discovery. As firms grapple with vast volumes of information, they need better ways to find, understand, and leverage data. Ensuring the right viewing permissions are in place is crucial—particularly as AI transforms how users interact with and extract insights from data.
“Ultimately, it’s the overarching contract that governs usage,” says Lawrence. “We want our clients to be able to consume as much data as they need under a broad licensing agreement. That makes discussions much more straightforward as, in practice, most firms want enterprise data licenses—they take a feed from us or another provider and integrate it into downstream technology. It’s not always AI-specific; often, it’s just part of a broader machine-driven workflow.”
Data Governance: The Foundation of AI and Regulatory Compliance
AI’s rise is not only reshaping financial workflows but also reinforcing the need for robust data governance. Key regulatory frameworks—such as BCBS 239, the Digital Operational Resilience Act (DORA), and upcoming Critical Third-Party (CTP) regulations—all demand a comprehensive understanding of data estates. At the core of these regulations is a simple yet critical requirement: firms must map, document, and control their data.
“One of our key strategic goals is to be as transparent as possible with our metadata,” states David Thomas, LSEG’s Chief Data Officer. “More and more, customers are asking for full data catalogues that they can ingest into their own systems. We’re working to facilitate that, which has also been valuable internally by giving us a clearer picture of our own data assets.”
To illustrate this approach, Thomas likens its data offerings to selling a cake while also providing the ingredients. “These ‘ingredients’ include a data dictionary, ownership details, and potentially even some of our data quality rules to demonstrate how we manage that data. Over time, we aim to expand this further to include regulatory fields—such as whether the data contains personal information and whether it falls under specific global regulations. We also want to provide clear guidance on rights and entitlements, helping customers understand exactly what they can and can’t do with a given dataset.”
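Thomas's "ingredients" map naturally onto a machine-readable catalogue entry. The sketch below is purely illustrative—the field names and the `can_use` helper are hypothetical, not LSEG's actual schema—but it shows how a data dictionary, quality rules, regulatory flags, and entitlements could travel together with a dataset:

```python
from dataclasses import dataclass

# Hypothetical, illustrative catalogue schema -- not LSEG's actual format.
@dataclass
class CatalogueEntry:
    dataset: str
    owner: str
    data_dictionary: dict          # field name -> description
    quality_rules: list            # human-readable data quality checks
    contains_personal_data: bool   # regulatory flag (e.g. GDPR relevance)
    in_scope_regulations: list     # regulations the dataset falls under
    permitted_uses: list           # rights and entitlements
    prohibited_uses: list

entry = CatalogueEntry(
    dataset="equity_eod_prices",
    owner="Market Data Team",
    data_dictionary={"close": "Official end-of-day closing price"},
    quality_rules=["close > 0", "one row per instrument per trading day"],
    contains_personal_data=False,
    in_scope_regulations=[],
    permitted_uses=["internal analytics", "model training under enterprise licence"],
    prohibited_uses=["redistribution to third parties"],
)

def can_use(entry: CatalogueEntry, use_case: str) -> bool:
    """Entitlement check keyed to the use case, not the technology."""
    return use_case in entry.permitted_uses and use_case not in entry.prohibited_uses

print(can_use(entry, "internal analytics"))  # True
```

Note that the entitlement check takes a use case, not a technology: that is the "licensing the what, not the how" principle Lawrence describes, expressed as data rather than contract language.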
AI Compliance: Strengthening Controls Without Changing Policy
A key risk introduced by AI—particularly LLMs and generative models—is data contamination. In a traditional database, incorrect or unlicensed data can simply be deleted; once that data is embedded in a trained model, however, it cannot easily be removed and can corrupt the model's outputs. In extreme cases, the only solution is to wipe and retrain the model from scratch—a costly and time-consuming process.
To mitigate this, LSEG has introduced strict pre-ingestion checks and controls. “Before any data is used in an LLM (or even a smaller, more specialised AI model), we conduct a rigorous assessment to ensure we’re not breaching regulatory requirements or licensing agreements,” says Thomas. “We take a conservative approach—diving deep into the details before allowing data to be used in these models.”
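A pre-ingestion gate of this kind can be sketched as a set of checks that must all pass before data reaches a model, failing closed when information is missing. This is a minimal illustration—the check names and metadata fields are assumptions for the example, not LSEG's actual controls:

```python
# Illustrative pre-ingestion gate: every check must pass before a dataset
# may be used to train or fine-tune a model. Unknown answers fail closed,
# reflecting the conservative approach described above.

def licence_permits_training(meta: dict) -> bool:
    # Licensing check: the dataset's entitlements must explicitly allow training.
    return "model training" in meta.get("permitted_uses", [])

def no_personal_data(meta: dict) -> bool:
    # Regulatory check: if the flag is absent, assume personal data is present.
    return not meta.get("contains_personal_data", True)

def approve_for_ingestion(meta: dict) -> tuple[bool, list[str]]:
    """Return (approved, list of failed checks) for a dataset's metadata."""
    checks = {
        "licence permits model training": licence_permits_training,
        "no personal data present": no_personal_data,
    }
    failures = [name for name, check in checks.items() if not check(meta)]
    return (not failures, failures)

ok, failures = approve_for_ingestion({
    "permitted_uses": ["model training"],
    "contains_personal_data": False,
})
print(ok)  # True
```

Because contaminated training data may force a full retrain, the gate defaults to rejection: a dataset with no metadata fails both checks rather than slipping through.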
Culturally, this mindset is becoming embedded across the organisation, points out Thomas. “Teams across regulation, privacy, and product development now work together more closely to ensure compliance. Rather than seeing these controls as a barrier, product teams increasingly view them as a partnership, working with us to understand what they can and can’t do with specific datasets.”
Standardising AI and Data Licensing: An Industry-Wide Challenge
One particular challenge in AI-driven financial markets is the lack of standardised terminology. Different firms interpret key concepts—such as derived data, AI licensing, and entitlements—in different ways. This creates potential conflicts where a firm’s interpretation of licensing rights may differ from that of the original data provider.
“We strongly advocate for standardised language in contracts, licensing agreements, and all the downstream components of the data ecosystem,” notes Lawrence. “When regulators ask us what the industry needs, that’s one of the first things I highlight. The faster the technology evolves, the more crucial it becomes to establish clear, shared definitions. Because the reality is, we’re all going to have to learn to operate in this rapidly changing, and often uncomfortable, environment.”
This effort extends to metadata standards, with LSEG actively participating in industry-wide discussions on how data classification and entitlements should be structured. “We’re seeing growing momentum from suppliers, vendors, and data consumers to align on these issues, and the discussions taking place today will shape the future of AI governance in financial markets,” says Lawrence.
The Road Ahead: AI as a Catalyst for Better Data Governance
Looking ahead, AI is not just driving technological transformation—it is also serving as a catalyst for stronger data governance across the industry. While data governance was once overlooked as a strategic priority, it is now rising to the top of the agenda for firms navigating AI adoption and increasing regulatory scrutiny.
“Ultimately, data governance underpins everything the industry is trying to achieve—whether it’s AI innovation or regulatory compliance,” says Thomas. “Both are pushing organisations in the same direction: towards a deeper understanding of their data estate and a more structured approach to mapping it.”
The result has been a cultural shift in how firms treat, manage, and license their data. As AI becomes ubiquitous, industry discussions will likely shift from debating where and how AI should be used, to establishing best practices for how it is governed.
With technology evolving at an unprecedented pace, the next 12 to 24 months will be critical in shaping how data is managed, protected, and leveraged across the financial ecosystem. The conversations happening today are likely to define the rules of engagement for AI-driven markets in the years to come.