The demands placed upon modern trading infrastructures, driven by increasing data volumes, the mandate for real-time processing, and stringent regulatory requirements, are exposing the limitations of legacy data architectures. In response, capital markets firms are accelerating the re-evaluation of their data strategies to secure greater agility, scalability, and enhanced governance. A recent webinar hosted by A-Team Group, sponsored by LSEG Data and Analytics, offered a timely analysis of the two dominant architectural models central to this evolution: Data Fabric and Data Mesh.
The expert panel featured Maxim Morozov, Head of Engineering, ArcticDB, at Man Group; Sri Bhupatiraju, VP, Senior AI Engineer at BlackRock AI Labs; and David McLennan, Head of Data Platforms | Microsoft Partnership at LSEG, with Mike O’Hara, Editorial Contributor, TradingTech Insight at A-Team Group, moderating. The discussion moved swiftly past basic definitions to focus on the practical deployment, organisational considerations, and governance challenges inherent in adopting these architectures. The consensus was clear: these models are complementary and require a holistic approach if they are to unlock the next generation of trading analytics.
The Essential Hybrid Strategy
A primary takeaway from the discussion was the need to dismiss the idea that Data Fabric and Data Mesh are competing frameworks. Rather, they fulfil distinct, yet mutually necessary, functions. Data Fabric is defined as the ‘technological backbone’ or platform concept, providing the means to unify data access, automate metadata annotation, and enforce centralised control; a model suited to real-time analytics and rigorous regulatory compliance. Data Mesh, conversely, is an ‘organisational concept’ focused on decentralised data ownership, where the teams closest to the data treat it as a primary product. This structure is designed to foster domain autonomy and innovation.
Given the high-stakes, high-speed environment of finance, panellists agreed that a hybrid approach is becoming commonplace. This strategy blends the integration strengths of the Fabric with the agility afforded by the Mesh. The decision on which path to prioritise must be contextual, balancing the requirement for speed and autonomy against the absolute necessity for unified control and governance. As one panellist suggested, firms should first define their goals and bottlenecks, then apply assessment frameworks such as the FAIR principles (Findable, Accessible, Interoperable, Reusable) to gauge their data readiness.
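By way of illustration, a data-readiness check along those FAIR dimensions can be as simple as scoring each dataset on whether it meets each principle. The sketch below uses hypothetical dataset names and a Python-style checklist; it is not drawn from the webinar itself.

```python
from dataclasses import dataclass

# Hypothetical FAIR readiness checklist: each flag records whether a dataset
# meets one of the four FAIR dimensions. Names and datasets are illustrative.
@dataclass
class DatasetReadiness:
    name: str
    findable: bool        # registered in a searchable catalogue with rich metadata
    accessible: bool      # retrievable via a standard, authenticated protocol
    interoperable: bool   # uses shared vocabularies and open formats
    reusable: bool        # carries licence, provenance, and quality metadata

    def fair_score(self) -> float:
        """Fraction of FAIR dimensions satisfied (0.0 to 1.0)."""
        flags = (self.findable, self.accessible, self.interoperable, self.reusable)
        return sum(flags) / len(flags)

datasets = [
    DatasetReadiness("intraday_trades", True, True, False, False),
    DatasetReadiness("reference_data", True, True, True, True),
]

# Rank datasets so remediation effort targets the weakest areas first.
for ds in sorted(datasets, key=lambda d: d.fair_score()):
    print(f"{ds.name}: FAIR score {ds.fair_score():.2f}")
```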
Metadata as a First-Class Citizen for AI
The panel stressed that modern data ecosystems should not simply be referred to as ‘data platforms.’ They are fundamentally metadata management, curation, and delivery ecosystems. This shift in terminology reflects the criticality of metadata – encompassing entity relationships, natural language descriptions of attributes, and context – for powering AI outcomes, particularly in natural language processing.
In this context, the architectures play a direct governance role. Data Fabric provides automated metadata capture and lineage tracing across data transformations, while Data Mesh ensures data quality is curated by the owning domain team. One panellist highlighted that relying on “a significant amount of manual curation” for data lineage is a “failing strategy.” Instead, these new systems must automatically curate model documentation and lineage by instrumenting transformation operations, ensuring data quality and providing insight into rights management for regulatory purposes. Metadata also becomes a crucial component of ethical AI deployment, enabling traceability and contextual filtering to identify and manage sensitive or potentially biased data.
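To make the idea of instrumenting transformations concrete, the minimal sketch below wraps a transformation in a decorator that records its inputs, output, and run time into a lineage log each time it executes. The registry, dataset names, and the enrichment function are hypothetical illustrations, not a description of any panellist's platform.

```python
import functools
from datetime import datetime, timezone

# Hypothetical in-memory lineage registry; a production fabric would persist
# these records to a metadata catalogue instead.
LINEAGE_LOG: list[dict] = []

def traced(inputs: list[str], output: str):
    """Decorator that records a lineage edge every time a transformation runs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            LINEAGE_LOG.append({
                "transformation": func.__name__,
                "inputs": inputs,
                "output": output,
                "run_at": datetime.now(timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator

@traced(inputs=["raw_trades", "instrument_reference"], output="enriched_trades")
def enrich_trades(trades: list[dict], reference: dict) -> list[dict]:
    # Illustrative transformation: join trades to reference data.
    return [{**t, "instrument_name": reference.get(t["isin"], "UNKNOWN")} for t in trades]

enrich_trades([{"isin": "GB00B03MLX29", "qty": 100}], {"GB00B03MLX29": "Shell plc"})
print(LINEAGE_LOG)  # the lineage edge was captured without any manual curation
```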
Managing Organisational Inertia and Legacy Systems
The complexity of integration, particularly with decades-old legacy systems, was identified by the audience as the biggest implementation challenge. The panellists offered targeted strategies to mitigate this disruption.
For legacy integration, a common strategy is to treat the heritage system as the ‘source of record’ and create a mirror of its data within the Fabric architecture, using techniques such as change data capture rather than complex ETL processes. Another approach is incremental, rather than disruptive, migration: one speaker advised firms not to migrate everything at once, but to “identify the areas where it will be most valuable,” such as quantitative research that requires joining disparate data sources.
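At its simplest, the change-data-capture pattern replays an ordered stream of change events from the legacy platform against a mirror held inside the fabric, rather than re-extracting the data in bulk. The event shape and the 'positions' table in this sketch are assumptions for illustration only.

```python
from typing import Any

def apply_change_event(mirror: dict[str, dict[str, Any]], event: dict[str, Any]) -> None:
    """Apply one insert/update/delete event to the in-fabric mirror."""
    key = event["key"]
    if event["op"] in ("insert", "update"):
        mirror[key] = event["row"]          # upsert the latest image of the row
    elif event["op"] == "delete":
        mirror.pop(key, None)

# Mirror of a hypothetical legacy 'positions' table, keyed by position identifier.
positions_mirror: dict[str, dict[str, Any]] = {}

change_stream = [
    {"op": "insert", "key": "POS-1", "row": {"book": "EQ-DESK", "qty": 500}},
    {"op": "update", "key": "POS-1", "row": {"book": "EQ-DESK", "qty": 750}},
    {"op": "delete", "key": "POS-1"},
]

for event in change_stream:
    apply_change_event(positions_mirror, event)

print(positions_mirror)  # {} – the position was created, amended, then closed
```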
Beyond technology, the hardest transformation is often cultural. Decentralisation requires data and technology leaders to confront the fact that data is often treated as a by-product rather than a core offering. The solution lies in setting clear objectives and incentives for data owners to treat data as a primary product. Furthermore, while distributed ownership is desirable for productivity, it must be coupled with rigorous risk management through a centralised “centre of excellence” dedicated to writing the governance rules of the road and ensuring common standards across all federated domains.
Finally, firms must temper expectations regarding immediate return on investment. While key outcomes include shorter new-product cycle times and the ability to combine information across silos into a whole greater than the sum of its parts, the transformation is a substantial undertaking. One panellist stated that a realistic timeframe for a mid-size firm to implement and begin seeing tangible benefits is two to three years. This requires executive sponsorship and an iterative approach, acknowledging that the transition is a marathon, not a sprint.
To avoid pitfalls during this journey, firms must ensure their architecture is designed to be cloud and tool agnostic, thereby preventing tooling overload or vendor lock-in. By focusing on abstraction layers and open standards, firms can maintain the flexibility necessary to adapt to the accelerating pace of technological change.
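One straightforward way to keep the platform cloud and tool agnostic is to have business logic depend on a thin abstraction rather than a specific vendor SDK. The protocol and in-memory backend below are hypothetical placeholders, intended only to show the shape of such an abstraction layer rather than a recommended stack.

```python
from typing import Protocol

class ObjectStore(Protocol):
    """Storage abstraction the rest of the platform codes against."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Stand-in backend; a cloud bucket or on-prem store would slot in here."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, report_id: str, payload: bytes) -> None:
    # Business logic depends only on the abstraction, so backends can be
    # swapped without touching callers.
    store.put(f"reports/{report_id}", payload)

store = InMemoryStore()
archive_report(store, "eod-report-001", b"...")
print(store.get("reports/eod-report-001"))
```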
In conclusion, the successful deployment of Data Fabric and Data Mesh architectures hinges not only on technical integration but also on a foundational shift in how organisations structure governance, incentivise data ownership, and prioritise metadata curation. By adopting a pragmatic, hybrid, and iterative approach, capital markets firms can address legacy limitations and build the scalable, context-rich infrastructures required to power modern trading and AI innovation.