
As institutions absorb ever greater volumes of data to meet their increasingly complex operational needs and those of regulators, they face a dilemma of how to store and distribute that critical information.
Fragmented legacy systems have long been an impediment to the smooth management of data, and corralling multi-cloud configurations now adds to the challenge of streamlining data pipelines.
With enterprise artificial intelligence deployment making easy data access and utilisation an imperative, operational agility has become the watchword for organisations. Recognising that this is impossible without modern data architectures, data chiefs now face a choice between two dominant philosophies. Do they implement a data fabric, a technical approach that uses AI and active metadata to weave disparate sources into a virtualised layer? Or do they opt for a data mesh strategy, an organisational framework that decentralises ownership by treating data as a product managed by the business domains themselves? This article profiles 10 leading vendors providing the infrastructure to realise these strategies.
IBM provides a comprehensive data management platform that utilises “active metadata” to automate data discovery, integration, and governance across hybrid cloud environments. It deploys AI-powered Knowledge Accelerators, which offer industry-specific ontologies (including banking and financial markets) to pre-classify data for regulatory compliance. This is designed to automatically identify and connect related datasets across the enterprise without moving the data, reducing the time needed for cross-border risk aggregation.
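As a loose illustration of the active-metadata idea, the sketch below tags dataset columns against a toy banking ontology so related datasets can be linked automatically; the terms and code are hypothetical and not drawn from IBM’s actual Knowledge Accelerators or APIs.

```python
# Illustrative sketch only: a toy "active metadata" classifier in the spirit
# of industry-specific ontologies. The ontology terms and column names below
# are hypothetical, not IBM's actual Knowledge Accelerator content.
BANKING_ONTOLOGY = {
    "lei": "Counterparty Identifier (ISO 17442)",
    "isin": "Instrument Identifier (ISO 6166)",
    "notional": "Exposure Measure",
}

def classify_columns(columns: list[str]) -> dict[str, str]:
    """Tag dataset columns with ontology terms so related datasets can be linked."""
    tags = {}
    for col in columns:
        for term, label in BANKING_ONTOLOGY.items():
            if term in col.lower():
                tags[col] = label
    return tags

print(classify_columns(["counterparty_lei", "trade_isin", "notional_usd"]))
```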
Denodo’s data virtualisation technology creates a logical data layer, allowing users to query data from any source in real time without physical movement. Its fast data access enables firms to join real-time market data with historical records via a single SQL query, eliminating the cost and risk of data silo replication and preventing the proliferation of redundant datasets that often lead to conflicting P&L reports.
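The pattern can be shown in a short sketch: Denodo publishes its virtual views over standard interfaces such as JDBC and ODBC, so one query can join sources that never physically meet. The DSN and view names below are assumptions, not Denodo defaults.

```python
# Minimal sketch: querying a logical (virtualised) layer through a standard
# ODBC connection. The DSN and view names ("trades_hist", "market_live")
# are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=denodo_vdp")  # assumed pre-configured DSN
cursor = conn.cursor()

# One SQL statement joins real-time market data with historical records,
# even though the two sources live in different physical systems.
cursor.execute("""
    SELECT h.trade_id, h.book, m.last_price
    FROM trades_hist h
    JOIN market_live m ON m.isin = h.isin
    WHERE h.trade_date = CURRENT_DATE
""")
for row in cursor.fetchall():
    print(row)
```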
Using a data product approach within a fabric architecture, K2view provides a 360-degree view of specific entities, such as a customer or a legal entity. Its patented Micro-Database technology stores the data for every individual entity in its own miniature schema, ensuring rapid access times. The solution seeks to solve the entity resolution challenge, allowing capital markets firms to instantly reconcile client data across disparate trading, booking and settlement systems.
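The sketch below illustrates the per-entity storage pattern conceptually, using SQLite as a stand-in; it is not K2view’s implementation, and all names are invented.

```python
# Conceptual sketch of the "one miniature database per entity" idea using
# SQLite. This illustrates the pattern only, not K2view's Micro-Database.
import sqlite3
from pathlib import Path

def entity_db(customer_id: str) -> sqlite3.Connection:
    """Open (or create) a self-contained micro-database for one customer."""
    path = Path("entities") / f"{customer_id}.db"
    path.parent.mkdir(exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS positions
                    (system TEXT, account TEXT, balance REAL)""")
    return conn

# One client's data from trading, booking and settlement systems lands in
# its own schema, so a 360-degree read touches exactly one small store.
with entity_db("CUST-0042") as db:
    db.execute("INSERT INTO positions VALUES ('booking', 'ACC-1', 1000000.0)")
    print(db.execute("SELECT * FROM positions").fetchall())
```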
Microsoft’s unified SaaS analytics platform, Fabric, integrates data engineering, data science and real-time analytics into a single environment powered by OneLake. Its native integration with the Microsoft 365 and Azure ecosystems enables traders and analysts to access governed data fabric assets directly within Excel or Power BI. It reduces architectural complexity by replacing fragmented tools with one cohesive environment, lowering the total cost of ownership for data-heavy AI initiatives.
Qlik provides a data integration and analytics platform that’s designed to deliver real-time, actionable data through change data capture (CDC) and automated warehouse transformation. Its associative engine and real-time streaming capabilities enable financial institutions to maintain live, always-on data pipelines that connect legacy mainframes directly to modern cloud lakehouses. By automating the ingestion and movement of data, Qlik accelerates reporting times, ensuring that risk officers and traders are making decisions based on the most current market positions rather than T+1 data.
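A minimal sketch of the CDC pattern itself (not Qlik’s engine or API) shows why targets stay current: each change event from the source is applied to the target as an upsert or delete the moment it arrives.

```python
# Generic change-data-capture sketch: apply a stream of change events from a
# legacy source to a cloud target. The event shape and target are
# hypothetical; this shows the CDC pattern, not Qlik's replication engine.
events = [
    {"op": "insert", "key": "T-1", "row": {"position": 100}},
    {"op": "update", "key": "T-1", "row": {"position": 150}},
    {"op": "delete", "key": "T-2", "row": None},
]

target: dict[str, dict] = {}  # stand-in for a cloud lakehouse table

for ev in events:  # in production this would be a continuous stream
    if ev["op"] in ("insert", "update"):
        target[ev["key"]] = ev["row"]   # upsert keeps the target current
    elif ev["op"] == "delete":
        target.pop(ev["key"], None)

print(target)  # risk officers read live positions, not T+1 snapshots
```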
Starburst’s distributed SQL query engine, based on the open-source Trino, is designed to query data across multiple clouds and on-premises lakes. Built specifically for data federation, it allows domain teams to own their data while providing a single point of access for the rest of the firm. It seeks to ensure that business units no longer have to wait for a central IT team to build an ETL pipeline before they can analyse new market datasets.
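Because the engine is Trino-based, the open-source Trino Python client gives a feel for the federation model: one query spans two catalogs owned by different teams. The endpoint, catalog, schema and table names below are assumptions.

```python
# Sketch using the open-source Trino Python client (pip install trino),
# which Starburst's engine is based on. Host and object names are assumed.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",  # hypothetical endpoint
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# A single federated query joins a data-lake table with an on-prem Postgres
# table; each domain keeps ownership of its own catalog.
cur.execute("""
    SELECT t.trade_id, i.issuer
    FROM hive.trades.fills t
    JOIN postgresql.refdata.instruments i ON i.isin = t.isin
""")
print(cur.fetchall())
```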
Founded by the creator of the Data Mesh concept, Zhamak Dehghani, Nextdata provides mesh-native tools specifically designed to create, share and govern data products. It focuses exclusively on the data product lifecycle, providing a decentralised operating system that packages data with its own APIs and SLAs. The company seeks to eliminate operational ambiguity in mesh implementations by providing the specific tooling required to move from theoretical decentralisation to a practical, manageable product-oriented workflow.
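As a hypothetical sketch of what a self-describing data product might carry in a mesh, the snippet below bundles an owning domain, an access API and an explicit SLA; the fields are illustrative, not Nextdata’s actual specification.

```python
# Hypothetical data product descriptor; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    owner_domain: str           # the business domain accountable for the data
    api_endpoint: str           # how consumers access it
    freshness_sla_minutes: int  # the contract consumers can rely on

    def describe(self) -> str:
        return (f"{self.name} (owned by {self.owner_domain}) via "
                f"{self.api_endpoint}, refreshed every "
                f"{self.freshness_sla_minutes} min")

print(DataProduct("eod-positions", "treasury",
                  "https://data.example.com/eod-positions", 30).describe())
```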
Collibra’s data intelligence platform acts as the system of record for data governance, cataloguing and quality across a mesh or fabric. It serves as a non-technical data marketplace in which business users can shop for verified data products, complete with quality scores and owner contact info. It is designed to ensure that expensive proprietary data purchased by one desk isn’t accidentally re-purchased by another because they didn’t know it existed.
Databricks’ Data Intelligence Platform unifies lakehouse architectures with a centralised governance layer for files, tables and AI models. Its Unity Catalog provides the federated governance necessary for a mesh, allowing different desks to manage their own data products while maintaining global security standards. It addresses governance fragmentation in decentralised models, ensuring that even when ownership is distributed, audit trails and data lineage remain centralised for regulators.
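A brief sketch shows the division of labour, using Unity Catalog’s SQL GRANT syntax run through the databricks-sql-connector package; the workspace details and object names are placeholders.

```python
# Sketch of federated governance via Unity Catalog SQL, executed through
# the databricks-sql-connector package (pip install databricks-sql-connector).
# Hostname, warehouse path, token and object names are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234.azuredatabricks.net",  # placeholder workspace
    http_path="/sql/1.0/warehouses/abc123",          # placeholder warehouse
    access_token="dapi-placeholder",                 # placeholder token
) as conn, conn.cursor() as cur:
    # The rates desk grants access to its own data product in its own schema...
    cur.execute("GRANT SELECT ON TABLE main.rates.curve_marks TO `risk_team`")
    # ...while audit trails and lineage stay centralised in the catalog.
    cur.execute("DESCRIBE TABLE EXTENDED main.rates.curve_marks")
```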
Snowflake’s cloud-native data platform enables secure data sharing and collaborative data clean rooms across organisational boundaries, helping enable mesh strategies. Its marketplace and sharing capabilities allow firms to facilitate the seamless exchange of data between buy-side and sell-side partners, solving the secure-sharing challenge by replacing manual FTP uploads and API integrations with a native, governed share that updates in real time.
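A minimal sketch using Snowflake’s official Python connector shows how a governed share replaces a file drop; the account, credentials and object names are placeholders, while the SHARE statements are standard Snowflake SQL.

```python
# Sketch of a Snowflake secure share via the official connector
# (pip install snowflake-connector-python). Credentials and object names
# are placeholders; the SHARE statements are standard Snowflake SQL.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",   # placeholder account identifier
    user="data_admin",
    password="***",
)
cur = conn.cursor()

# Publish a governed, read-only view of live data to a counterparty account,
# replacing FTP drops: the consumer always sees the current rows.
cur.execute("CREATE SHARE IF NOT EXISTS eod_marks_share")
cur.execute("GRANT USAGE ON DATABASE marks TO SHARE eod_marks_share")
cur.execute("GRANT USAGE ON SCHEMA marks.public TO SHARE eod_marks_share")
cur.execute("GRANT SELECT ON TABLE marks.public.eod TO SHARE eod_marks_share")
cur.execute("ALTER SHARE eod_marks_share ADD ACCOUNTS = partner_org")
conn.close()
```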