
Capital markets firms pursuing data platform modernisation have largely solved the technical challenges of compute and storage, but the organisational, governance and architectural decisions surrounding those platforms remain stubbornly difficult, according to practitioners from Northern Trust, RBC Wealth Management and LSEG, speaking at a recent A-Team Group webinar entitled Data platform modernisation: Best practice approaches for unifying data, real time data and automated processing, sponsored by LSEG Data & Analytics.
The panel discussion, moderated by TradingTech Insight editor Mike O’Hara, featured Jez Davies, Head of Information Architecture at Northern Trust; Vinod Surasani, Sr Software Engineer – MDM at RBC Wealth Management; Alket Memushaj, Principal Architect at AWS; and Patrik Färnlöf, Group Head of Real-Time Engineering at LSEG. It revealed a striking consensus: the firms most at risk of failed modernisation are those treating it as a technology upgrade rather than a fundamental rethinking of how data is owned, governed and consumed.
Automation demands more governance, not less
An audience poll during the webinar identified enabling automation across data processing and analytics as the primary driver of modernisation efforts, ahead of reducing operational complexity and technical debt. Cost optimisation, notably, received zero votes.
But the panel was quick to challenge any assumption that automation simplifies governance. Surasani argued that with streaming, event-driven pipelines and automated decisioning, manual checkpoints are no longer viable; data quality frameworks must instead be embedded directly into the ingestion and transformation layers. Firms often go wrong, he said, by assuming automation reduces the need for governance, when in reality the more autonomous the platform becomes, the more important it is to build in policy-driven quality checks and operational accountability.
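To make the idea concrete, the sketch below shows what a policy-driven quality gate at the ingestion layer can look like, here in Python; the rule names, record shape and dead-letter handling are illustrative assumptions rather than anything the panel specified.
```python
from dataclasses import dataclass
from typing import Callable

# A quality rule is a named predicate over a raw record (illustrative shape).
@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]

# Policy-driven rule set: declared once, enforced on every record at the
# ingestion layer rather than at a downstream manual checkpoint.
RULES = [
    Rule("has_instrument_id", lambda r: bool(r.get("instrument_id"))),
    Rule("price_positive", lambda r: r.get("price", 0) > 0),
    Rule("timestamp_present", lambda r: "ts" in r),
]

def ingest(record: dict, sink: list, dead_letter: list) -> None:
    """Route a record onward, or to a dead-letter queue with its failures."""
    failures = [rule.name for rule in RULES if not rule.check(record)]
    if failures:
        dead_letter.append({"record": record, "failed_rules": failures})
    else:
        sink.append(record)
```
The structural point is that the rules are declared as policy and applied to every record automatically, so quality checking scales with throughput instead of depending on a human checkpoint.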
Färnlöf reinforced this point, noting that automation and operational complexity reduction are complementary rather than competing objectives. The critical question, he suggested, is whether the data can be trusted, and whether automation across the full data lifecycle is robust enough to ensure a stable experience.
Data mesh: architectural merit, organisational friction
The discussion surfaced a pragmatic view of data mesh adoption. Davies described how domain-oriented ownership – where individual teams push data into a shared mesh rather than funnelling everything through a central IT function – has genuine architectural merit but remains difficult to implement organisationally. Getting buy-in across the business to see the ultimate utility, he acknowledged, is the real challenge.
Memushaj echoed this from the AWS perspective, noting that larger firms face particular interoperability challenges given the variety of tools used by different business units. He argued that a data marketplace model – where all data is discoverable in one place, layered with quality controls and lineage – is what business stakeholders actually want, even if the underlying architecture is distributed. Crucially, he added, providing cost transparency to consuming teams changes their behaviour and makes the overall platform more sustainable.
Real-time is a spectrum, not a binary
On real-time data, the panel pushed back against the assumption that faster is always better. Färnlöf argued that firms must match their consumption pattern to the pace of their business, whether that means full real-time streaming, a conflated feed, or a daily bulk operation. Attempting to keep up with a flow processing millions of messages per second, he noted, is a demanding engineering problem that introduces its own fragility, including the challenge of real-time reconciliation.
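As an illustration of the middle of that spectrum, a conflated feed can be as simple as retaining only the latest update per instrument between timer-driven flushes, trading per-tick granularity for a bounded downstream rate. The sketch below, with an assumed record shape and publish callback, is a minimal rendering of that idea, not a production feed handler.
```python
import threading

class ConflatedFeed:
    """Retain only the most recent update per symbol between flushes."""

    def __init__(self, publish):
        self._latest: dict[str, dict] = {}
        self._lock = threading.Lock()
        self._publish = publish  # downstream callback, e.g. a consumer queue

    def on_tick(self, symbol: str, update: dict) -> None:
        # Overwrite in place: intermediate ticks for a symbol are deliberately
        # dropped, bounding the downstream rate regardless of input volume.
        with self._lock:
            self._latest[symbol] = update

    def flush(self) -> None:
        # Driven externally on a timer; emits at most one update per symbol
        # per flush interval.
        with self._lock:
            snapshot, self._latest = self._latest, {}
        for symbol, update in snapshot.items():
            self._publish(symbol, update)
```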
The shift towards 24/7 markets intensifies that reconciliation challenge. Färnlöf described how reconciliation must move from batch-driven to incremental monitoring, with automated recovery processes triggered by deviations. The key, he said, is not letting the data get ahead of you.
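A minimal sketch of that incremental pattern: running totals on both sides of a flow are compared on every update, and an automated recovery hook fires as soon as the deviation breaches a tolerance. The notional-based measure, field names and tolerance are assumptions for the example.
```python
class IncrementalReconciler:
    """Compare two sides of a flow continuously rather than in a nightly batch."""

    def __init__(self, tolerance: float, on_break):
        self.tolerance = tolerance
        self.on_break = on_break  # automated recovery hook (replay, alert, ...)
        self.sent = 0.0           # running total, producer side
        self.received = 0.0       # running total, consumer side

    def record_sent(self, notional: float) -> None:
        self.sent += notional
        self._check()

    def record_received(self, notional: float) -> None:
        self.received += notional
        self._check()

    def _check(self) -> None:
        # Recovery is triggered by the deviation itself, so a break surfaces
        # within one update instead of at the end of a batch cycle.
        deviation = abs(self.sent - self.received)
        if deviation > self.tolerance:
            self.on_break(deviation)
```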
Memushaj pointed to AWS’s Zero ETL concept as one approach to reducing the complexity of merging streaming data with historical records, removing fragile real-time pipelines in favour of fully managed integration. With settlement times compressing and markets moving toward round-the-clock operation, he argued, systems built in the past simply cannot keep pace.
Legacy integration: knowing when not to migrate
The audience’s second poll confirmed that integrating legacy systems with modern platforms is the most challenging aspect of modernisation in practice. Davies described the reality of working with 40-year-old DB2 databases with limited documentation, noting that even modern tooling, including large language models, takes considerable time to interpret legacy data structures.
Surasani offered a three-part framework for deciding what to modernise incrementally versus what to replace: assess business criticality, evaluate technical debt and coupling, and consider the velocity of change the business requires. The common failures, he argued, are over-modernising through big bang approaches that destroy embedded business logic, under-modernising through pure lift-and-shift that merely re-platforms legacy complexity, and ignoring data gravity by assigning modernisation to teams without sufficient domain knowledge.
Importantly, the panel resisted the instinct to migrate everything. Davies pointed out that if a legacy pipeline is simple, understood, and low-maintenance, spending significant resources to move it to the cloud may deliver negative ROI. Färnlöf agreed, adding that forcing a big bang replacement risks losing the in-built history embedded in legacy platforms.
Open standards and the vendor lock-in question
On vendor strategy, the panel converged on interoperability through open standards as the primary defence against lock-in. Davies highlighted the use of Apache Iceberg with Snowflake, Delta tables with Databricks, and XTable as an interoperability layer between table formats. He described as “highly exciting” an emerging open standard for semantic modelling involving Snowflake, BlackRock, Databricks and dbt, which he said would improve AI context grounding and reduce hallucinations.
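As a small illustration of the open-formats point, the snippet below registers an Iceberg catalog in PySpark and creates a table in the open format, which any Iceberg-aware engine can then read, not just the engine that wrote it. The catalog name, warehouse path and schema are assumptions for the sketch, and the Iceberg Spark runtime is assumed to be on the classpath.
```python
from pyspark.sql import SparkSession

# Assumed setup: pyspark installed and the Iceberg Spark runtime jar available;
# the "demo" catalog name and local warehouse path are illustrative.
spark = (
    SparkSession.builder
    .appName("iceberg-open-format-sketch")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Because Iceberg is an open specification, the table written here stays
# readable by other engines that implement it, which is the hedge against
# lock-in the panel described.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.md.prices (
        symbol STRING,
        price  DOUBLE,
        ts     TIMESTAMP
    ) USING iceberg
""")
```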
Memushaj emphasised AWS’s investment in open-source formats and managed open-source services, arguing that interoperability must be planned from the outset. Without it, he warned, data can easily become locked into a silo, defeating the purpose of modernisation.
Skills, not tools, determine outcomes
A recurring theme was the critical importance of skilled practitioners. Davies was blunt: despite the promise of agentic engineering and AI-assisted development, low-skilled engineering teams carry a significant long-term cost. Success, he argued, requires experienced individuals who understand both the domain and the pipelines, people whose decisions drastically affect the cost footprint of solutions.
What genuine modernisation looks like
Surasani offered the clearest articulation of what separates genuine modernisation from mere re-platforming. Truly modernised firms, he said, operate data as a product with clear SLAs and quality guarantees, support real-time and event-driven processing rather than legacy batch reporting, embed governance and observability directly into the platform, and enable self-service access guided by guardrails, including AI ethics guidelines.
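One way to picture data as a product is as an explicit, machine-readable contract published alongside the data. The sketch below shows a hypothetical shape for such a contract; the fields, SLA values and rule references are illustrative, not a standard the panel cited.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProductContract:
    """A hypothetical data-product contract: SLAs and quality guarantees are
    stated explicitly rather than implied."""
    name: str
    owner: str                  # accountable domain team, not a central queue
    freshness_sla_seconds: int  # maximum data age a consumer should observe
    completeness_pct: float     # guaranteed share of expected records present
    schema_version: str
    quality_rules: tuple[str, ...] = ()  # references to enforced checks

positions = DataProductContract(
    name="wealth.positions",
    owner="positions-domain-team",
    freshness_sla_seconds=60,
    completeness_pct=99.9,
    schema_version="2.3.0",
    quality_rules=("has_instrument_id", "price_positive"),
)
```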
The panel’s closing remarks carried a note of urgency. Färnlöf warned that firms failing to modernise now face a difficult business scenario within three years, given the pace of innovation and 24/7 market velocity. Davies predicted that insight will increasingly come from natural language queries rather than months-long dashboard builds, calling the current moment “the new industrial age.” Memushaj brought it back to fundamentals: if the data platform cannot deliver data fast enough for critical business decisions, modernisation has failed on the only metric that matters – time to value.