
Regulatory data has become a firmly established part of the control architecture of capital markets firms. As transparency rules diverge across jurisdictions, liquidity monitoring becomes more granular, and supervisors demand stronger evidence of how figures are derived, firms are obliged to treat regulatory datasets as governed, versioned and explainable operating assets.
RegTech Insight: How does Bloomberg’s Regulatory Data product team support clients across their regulatory workflows? Can you share an overview of Bloomberg’s full Regulatory Data Solutions suite?
Kate Lee: Bloomberg provides clients with an interconnected suite of regulatory datasets designed to integrate seamlessly across front-, middle- and back-office enterprise workflows. As firms navigate increasing regulatory complexity across jurisdictions and asset classes, the challenge has evolved: it is no longer just about sourcing data, but ensuring that data is consistent, high-quality and interoperable across systems supporting compliance, risk, and reporting.
A key part of Bloomberg’s approach is delivering curated regulatory data rather than raw datasets – combining underlying reference data with regulatory interpretation to produce derived, decision-ready outputs. Instead of interpreting regulations and building classification logic internally, firms can utilize ready-to-use datasets with transparent methodologies that can be integrated directly into their workflows.
Bloomberg’s Regulatory Data Solutions comprise more than 60 datasets organized across four core pillars: Capital & Liquidity, Compliance, Tax, and Accounting. These solutions support a broad range of global regulatory requirements, including Basel III capital and liquidity rules, sanctions and compliance screening, AIFMD, MiFID II/MiFIR transparency, shareholding disclosure, tax obligations such as 871(m) and FTT, and accounting standards including IFRS 9 and IFRS 13.

Our team continues to expand the suite in response to regulatory change – enhancing existing fields, introducing new derived data where needed, and ensuring clients can continue to rely on consistent, up-to-date outputs to meet requirements without having to rebuild regulatory logic internally.
These datasets are part of Bloomberg’s Enterprise Data business, which delivers the same trusted data that powers the Bloomberg Terminal, ensuring consistency across desktop and enterprise workflows. Clients can access Bloomberg’s Regulatory Data Solutions via Bloomberg Data License at data.bloomberg.com, with flexible delivery options including SFTP, REST APIs and cloud environments.
RegTech Insight: Your role at Bloomberg is leading the Regulatory Data product team, which is focused on the underlying data rather than rules or workflows. How do you define the role of regulatory data within firms’ control frameworks, and to what extent do you see it as an operating layer rather than a supporting input?
Kate Lee: As regulatory requirements become more granular and increasingly time-sensitive, regulatory data is undergoing a fundamental shift – moving from a supporting input for reporting to functioning as a core operating layer within firms’ control frameworks. It no longer simply feeds controls, but actively shapes how they are structured, executed, and maintained across the organization. This shift is also expanding regulatory data’s role beyond post-trade reporting into pre-trade workflows, as firms assess regulatory impact earlier in the decision-making process.
Supporting this shift, Bloomberg’s Regulatory Data sits between raw underlying data and the rules themselves. Traditionally, firms source entity and security-level data, interpret regulatory text, and then build internal rules to derive classifications – such as determining HQLA levels under LCR. This creates ongoing complexity, as those interpretations and rules must be maintained and updated as regulations evolve.

Bloomberg’s ready-to-use datasets combine high-quality underlying data with codified regulatory interpretation, allowing firms to consume the outputs directly within their workflows.
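To make that contrast concrete, here is a purely illustrative sketch. The field names, issuer types and classification rules below are invented for illustration – they do not reflect Bloomberg’s actual schema or real LCR logic – but they show the operational difference between maintaining classification rules internally and consuming a pre-derived output.

```python
# Hypothetical illustration: deriving an HQLA level internally versus
# consuming a vendor-derived field. All field names and rules are
# invented and do not represent Bloomberg's schema or actual LCR rules.

def classify_hqla_internally(security: dict) -> str:
    """Toy stand-in for internally built and maintained LCR interpretation.
    Real rules span many more criteria and must track regulatory change."""
    if security["issuer_type"] == "sovereign" and security["risk_weight"] == 0.0:
        return "LEVEL_1"
    if security["risk_weight"] <= 0.2:
        return "LEVEL_2A"
    return "NON_HQLA"

def classify_hqla_from_vendor(record: dict) -> str:
    """With a derived dataset, the classification is consumed directly."""
    return record["hqla_level"]  # hypothetical field name

internal = classify_hqla_internally({"issuer_type": "sovereign", "risk_weight": 0.0})
vendor = classify_hqla_from_vendor({"id": "XS0000000000", "hqla_level": "LEVEL_1"})
print(internal, vendor)  # LEVEL_1 LEVEL_1
```

In the first approach the rule body itself must be revised whenever the regulation changes; in the second, only the upstream dataset changes and the consuming code stays stable.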
By delivering regulatory datasets that are standardized and interconnected, Bloomberg helps clients reduce fragmentation, improve transparency, and support more efficient, defensible regulatory workflows.
RegTech Insight: Bloomberg’s Regulatory Data Solutions span four core pillars: Capital & Liquidity, Compliance, Tax, and Accounting. How do you ensure data consistency and traceability across these regulatory and compliance workflows, particularly where firms are reconciling internal records with what is ultimately reported to regulators and the market?
Kate Lee: Consistency is achieved by treating regulatory data as part of a unified data architecture rather than a collection of isolated datasets. Firms are often reconciling data across multiple systems – including risk engines, accounting platforms and reporting tools – and inconsistencies can introduce significant operational risk. At the same time, traceability is becoming increasingly important as regulators place greater emphasis on transparency, auditability and firms’ ability to demonstrate how reported figures are derived.
For example, Bloomberg delivers the SPPI test under IFRS 9 and the NAIC Principles Based Bond (PBB) classifications as derived data points based on a combination of underlying data and codified regulatory interpretation. Because all of Bloomberg’s regulatory data is supported by transparent methodologies, clients can reconcile our outputs against their internal views. These methodologies are grounded not only in regulatory text, but also in industry interpretation and working group discussions, helping align our outputs with how regulations are applied in the market.
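The reconciliation pattern described here can be sketched in a few lines. This is a hypothetical example – the instrument identifiers and SPPI flag values are invented – but it shows the kind of break-detection a firm might run between a vendor-derived output and its own internal view, which transparent methodologies then make investigable.

```python
# Hypothetical reconciliation of a vendor-derived flag (e.g. an SPPI
# pass/fail view under IFRS 9) against a firm's internal classification.
# Identifiers and flag values are invented for illustration.

def reconcile(internal: dict, vendor: dict) -> list:
    """Return instrument IDs where the internal and vendor views disagree."""
    breaks = []
    for instrument_id, internal_flag in internal.items():
        vendor_flag = vendor.get(instrument_id)
        if vendor_flag is not None and vendor_flag != internal_flag:
            breaks.append(instrument_id)
    return breaks

internal_view = {"BOND_A": "SPPI_PASS", "BOND_B": "SPPI_FAIL"}
vendor_view = {"BOND_A": "SPPI_PASS", "BOND_B": "SPPI_PASS"}
print(reconcile(internal_view, vendor_view))  # ['BOND_B']
```

Each break can then be traced back through the published methodology to the underlying data point or interpretation that drives the difference.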
In addition, Bloomberg’s Regulatory Data Solutions are supported by the SOC 2 Type II framework, which provides independent assurance that we maintain robust controls around data security, availability, and processing integrity. This combination of standardization, transparent methodology, and independently validated controls enables both data consistency and traceability – reducing reconciliation friction while strengthening firms’ confidence in their regulatory reporting.
RegTech Insight: With MiFIR transparency moving toward revised thresholds, consolidated tape developments and evolving FIRDS/FITRS roles, where do you see the biggest data challenges for firms, and how is Bloomberg adapting its transparency datasets to support that transition?
Kate Lee: As the new regime comes into effect, we see two primary data challenges for market participants.
First, the increasing divergence between the EU and UK transparency frameworks – particularly for non-equity instruments – means firms can no longer rely on a single, harmonized view of transparency requirements. They must interpret and apply two parallel regimes, each with its own logic and thresholds.
Second, regulators are shifting away from publishing key transparency parameters and instead requiring firms to calculate and determine these thresholds themselves. This represents a fundamental change, moving complexity from regulators to the industry, and placing greater reliance on firms’ data, interpretation, and calculation capabilities.
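The operational shift described in these two points can be illustrated with a deliberately simplified sketch. The threshold values and the liquidity test below are invented – they do not reproduce actual EU or UK MiFIR calibrations – but they show what it means for a firm to compute a determination itself, per regime, rather than look it up from a regulator-published file.

```python
# Illustrative only: thresholds are invented and do not reflect real
# EU or UK MiFIR calibrations. The point is that the firm, not the
# regulator, now performs the calculation, and must do so per regime.

def is_liquid(avg_daily_notional: float, regime: str) -> bool:
    """Hypothetical firm-side liquidity determination for one instrument."""
    thresholds = {"EU": 100_000.0, "UK": 250_000.0}  # invented values
    return avg_daily_notional >= thresholds[regime]

print(is_liquid(150_000.0, "EU"))  # True
print(is_liquid(150_000.0, "UK"))  # False
```

The same instrument can land on different sides of the line in the two regimes, which is exactly the divergence problem firms now have to manage.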
Bloomberg aims to absorb that complexity within our Regulatory Data Solutions. As the rules evolve, we update our datasets to reflect the new requirements across both jurisdictions – aligning with implementation timelines and ensuring clients have access to consistent, decision-ready data. Rather than managing regulatory change themselves, clients can utilize curated datasets that incorporate the latest interpretations, methodologies, and calculations.
RegTech Insight: Regulators are placing more emphasis on explainability and evidence-based controls, particularly in reporting and risk calculations. How does Bloomberg approach data lineage, versioning and auditability across its regulatory datasets, and what level of transparency can clients expect when validating outputs?
Kate Lee: Explainability and auditability are central to how Bloomberg designs and delivers regulatory data. Clients need to understand not just the output, but how it was derived, how it changed over time, and how it aligns with evolving regulatory requirements.
From a data lineage and versioning perspective, we take a structured and transparent approach when regulations change. When amended regulation is announced, we use the lead time before it becomes effective to interpret the new requirements, implement and test updated logic, and engage with clients early, sharing both the upcoming changes and their expected impact well in advance of the effective date.
A good example is the FR 2052a transition from 5G to 6G, which introduced significantly greater granularity in liquidity monitoring, including more detailed High-Quality Liquid Assets (HQLA) classifications. For clients consuming Bloomberg’s HQLA dataset, these changes were reflected within the existing data structure, but with updated logic and outputs aligned to the new requirements. During a beta phase, clients could clearly see how many instruments would be reclassified and assess the impact across their portfolios before the rules went live.
On the regulatory effective date, the beta logic was promoted to production, allowing clients to seamlessly transition to the new regime. This approach ensures continuity while giving firms time and transparency to validate and prepare.
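As a hypothetical sketch of this beta-to-production pattern, consider logic that serves the new, more granular output during an opt-in beta window and switches over by default on the effective date. The field names, classification values and the go-live date below are invented for illustration and are not Bloomberg's actual schema or the real FR 2052a timetable.

```python
# Hypothetical sketch of promoting beta classification logic on a
# regulatory effective date (in the spirit of the FR 2052a 5G-to-6G
# transition). Field names, values and dates are invented.
from datetime import date

EFFECTIVE_DATE = date(2022, 10, 1)  # invented go-live date

def classify(record: dict, as_of: date, use_beta: bool = False) -> str:
    """Serve new-regime output in beta before go-live, then by default after."""
    if as_of >= EFFECTIVE_DATE or use_beta:
        return record["hqla_level_6g"]   # new, more granular output
    return record["hqla_level_5g"]       # legacy output

record = {"hqla_level_5g": "LEVEL_2", "hqla_level_6g": "LEVEL_2A"}
print(classify(record, date(2022, 6, 1)))                 # LEVEL_2
print(classify(record, date(2022, 6, 1), use_beta=True))  # LEVEL_2A
print(classify(record, date(2023, 1, 1)))                 # LEVEL_2A
```

Running both outputs side by side during the beta window is what lets clients quantify reclassification impact across their portfolios before the rules go live.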
In terms of auditability and controls, because Bloomberg’s Regulatory Data Solutions are supported by the SOC 2 Type II framework, there’s independent assurance that we maintain robust controls over data processing, change management, and system integrity. This ensures clients can rely on well-governed processes for how data is sourced, transformed, versioned, and delivered, with full traceability and control evidence available to support internal validation and regulatory scrutiny.
RegTech Insight: For regulatory reporting in certain markets, there is a clear trend away from regulator-defined templates towards granular data reporting, with the Common Domain Model (CDM) and Digital Regulatory Reporting (DRR) already adopted for OTC derivatives, repos and securities lending. As regulators in the UK, EU and APAC move further in this direction, how is Bloomberg’s Regulatory Data team responding, and to what extent have you enhanced your data solutions to account for the CDM and DRR?
Kate Lee: The direction of travel is clear: regulatory reporting is moving away from static templates toward more granular, standardized, data-driven reporting. Industry initiatives such as the CDM and DRR reflect that broader shift by seeking to create common definitions, common interpretations, and more machine-readable representations of reportable data. As a result, high-quality regulatory data has become a foundational requirement for accurate reporting.
From a client perspective, this shift creates a significant challenge. Firms are no longer just populating templates – they need consistent classifications, normalized data, and curated derived outputs that align to increasingly detailed reporting instructions. Bloomberg’s Regulatory Data team adds value by industrializing the data layer that sits between raw source data and the final report. We take underlying reference and instrument data, apply regulatory logic and interpretation, and deliver curated outputs that can be used directly in reporting, control, and validation workflows.
As markets in the UK, EU, and APAC continue to move in this direction, we are enhancing our datasets to be more granular, more standardized, and more transparent in methodology, so they can support reporting frameworks that increasingly depend on common data definitions and machine-readable logic. Rather than leaving firms to interpret each reporting requirement and build the logic themselves, Bloomberg helps absorb that complexity through curated regulatory datasets that are designed to fit directly into downstream reporting workflows.