
The definition of high performance in trading infrastructure is shifting. Raw speed, once the key benchmark, is increasingly being subsumed into a broader set of requirements around determinism, provability and architectural simplicity. For firms operating in fragmented, event-driven and increasingly automated markets, the competitive edge is no longer measured in nanoseconds alone; it lies in the ability to prove what happened, when it happened, and in what order.
That was the overarching message from a panel discussion at A-Team Group’s TradingTech Summit London 2026, entitled “High Performance Trading Infrastructure – The Blueprint for Speed, Trust and Competitive Edge”. The session, moderated by John Owens, FinTech and Trading Technology Specialist, brought together Anthony Warden, MD and Citi Tech Fellow, Global Head of High Performance Architectures at Citi; Diana Stanescu, Director of Finance and Capital Markets at Keysight Technologies; Vlad Ilyushchenko, Co-Founder and CTO of QuestDB; and Deepak Dhayatker, CTO of Rapid Addition.
Beyond latency: determinism as a design philosophy
The panel opened with a reframing of what high performance means in 2026. Latency measurement has progressed from seconds through milliseconds and microseconds to nanoseconds, but panellists argued that the more significant evolution is qualitative. One speaker described the new benchmark as deterministic infrastructure: one where operators can see, control and prove what is happening at every point in the trading path.
Several forces are driving this shift. AI is pushing vastly larger data volumes into infrastructure. Automation is making execution more event-driven, producing burst patterns that stress systems in ways average-load testing cannot anticipate. Hybrid environments are introducing blind spots that legacy monitoring was never designed to detect. As one panellist put it, infrastructure does not fail under average conditions; it fails during bursts.
Tail latency during sustained volatility – not under laboratory conditions – was identified as the metric that matters most. Recovery time and recovery point objectives are becoming first-class measures of platform quality. Jitter caused by garbage collection or background cloud patching can produce detrimental effects at precisely the worst moment. And scalability without degradation is now a baseline expectation: firms want to add instruments, sessions and counterparties intraday without any deterioration in performance. The winners, one panellist argued, will be platforms that deliver deterministic low latency alongside scalability and resilience as defaults.
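The gap between average and tail latency during a burst can be illustrated with a minimal sketch. The numbers below are synthetic stand-ins, not measurements from any panellist's system; the point is simply that a mean hides what a high percentile reveals:

```python
# Minimal sketch: why burst-period tail latency diverges from the average.
# All latencies here are synthetic illustrations in microseconds.
import random

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

random.seed(42)
# Steady-state latencies clustered around 5 microseconds...
steady = [random.gauss(5.0, 0.5) for _ in range(9_000)]
# ...plus a burst where queueing and GC-style pauses stretch the tail.
burst = [random.gauss(5.0, 0.5) + random.expovariate(1 / 40.0) for _ in range(1_000)]
samples = steady + burst

mean = sum(samples) / len(samples)
p999 = percentile(samples, 99.9)
print(f"mean  : {mean:.1f} us")
print(f"p99.9 : {p999:.1f} us")
```

Even though 90% of the samples sit near 5 microseconds, the p99.9 figure is dominated by the burst, which is why panellists treated it, rather than the average, as the metric that matters.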
The complexity tax
An audience poll confirmed that managing the cost of hardware and connectivity is the dominant practitioner concern, followed by visibility across hybrid environments and keeping pace with data volumes. Cyber security and operational resilience registered zero responses, a result one panellist described as surprising.
The panel’s interpretation was that cost pressure and complexity are deeply intertwined. When firms lack control over their software, hardware is thrown at the problem. And often performance does not improve. Applications working at high levels of abstraction can consume twenty times the CPU resources without anyone on the development team being aware, because the cost is obscured through layers of internal charge-backs. The forensic obsession required to build performant trading systems was described as a dying art, with one speaker warning that developers can now use prompt-based tools to produce code that passes all functional tests but operates with no awareness of resource efficiency.
The prescription was consistent: simplify. Databases can operate effectively with 32 gigabytes of RAM rather than a terabyte, provided data flows are properly understood. Applications must be profiled and parallelised correctly. Simplifying workflows, one speaker argued, ultimately reduces latency because there are fewer things to debug and fewer things to maintain.
Data architecture: tiering, openness and minimalist design
On data architecture, the panel converged on a tiered model. Recently acquired data sits in hot storage close to the execution path; historical data moves to cheaper tiers – Parquet files on object storage, sometimes structured through Delta Lake or Iceberg – with a system that stitches the tiers into a single logical view. The critical principle is openness: data infrastructure should exist in an accessible state so that the right tools can be applied, rather than a single system gatekeeping access.
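The stitching of hot and cold tiers into a single logical view can be sketched in a few lines. This is a deliberately simplified stand-in: an in-memory list plays the hot tier near the execution path, and a second list plays the cold tier (where a real system would flush to Parquet files on object storage); the `TieredStore` class and its parameters are hypothetical, not any vendor's API:

```python
# Minimal sketch of a tiered store that stitches hot and cold data into one
# logical view. The "hot" list stands in for memory near the execution path;
# the "cold" list stands in for Parquet files on cheaper object storage.
from collections import namedtuple

Tick = namedtuple("Tick", "ts symbol price")

class TieredStore:
    def __init__(self, hot_capacity=1000):
        self.hot = []                  # recent ticks, ordered by timestamp
        self.cold = []                 # older ticks demoted out of hot
        self.hot_capacity = hot_capacity

    def append(self, tick):
        self.hot.append(tick)
        if len(self.hot) > self.hot_capacity:
            # Demote the oldest half to the cold tier (a real system would
            # write a Parquet file here and register it in a catalogue).
            cutoff = self.hot_capacity // 2
            self.cold.extend(self.hot[:cutoff])
            del self.hot[:cutoff]

    def query(self, ts_from, ts_to):
        """Single logical view: the caller never sees the tier boundary."""
        out = [t for t in self.cold if ts_from <= t.ts <= ts_to]
        out += [t for t in self.hot if ts_from <= t.ts <= ts_to]
        return out

store = TieredStore(hot_capacity=100)
for ts in range(500):
    store.append(Tick(ts, "XYZ", 100.0 + ts * 0.01))

window = store.query(40, 460)
print(len(window), "ticks returned, spanning both tiers")
```

The openness principle from the panel maps onto the `cold` side of this sketch: because the demoted data would live as plain Parquet, any tool that reads the format can query it, rather than one system gatekeeping access.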
The minimalist design philosophy extended to the execution path itself. One speaker described a whiteboard-first approach: identify exactly what data is needed for a trading decision, in what timeframe, and from where, then build outward from that minimum. Asynchronous computation can be offloaded to commodity cloud infrastructure. Only the data required for the execution itself should occupy expensive co-located rack space. Emerging technologies such as CXL 3, which enables cache-coherent access to disaggregated RAM across CPUs and FPGAs, were cited as offering new options for state recovery and replication without adding inline latency.
Trust as engineering, not compliance theatre
The panel pushed back on the framing of speed versus trust as a trade-off. One speaker invoked Fred Brooks’ distinction between essential complexity – matching engines, order routing, risk management – and accidental complexity: legacy constraints such as multiple operating systems, incompatible network stacks and accumulated environmental drift. It is accidental complexity, not the pursuit of transparency, that creates tension with performance.
The remedy proposed was standardised, immutable infrastructure with automated deployment pipelines producing versioned, drift-free images. PTP-synchronised clocks delivering sub-microsecond timestamping were cited as infrastructure that satisfies both execution quality analysis and regulatory requirements simultaneously. The way you prove your infrastructure to yourself, one panellist argued, should be the same way you prove it to the regulator.
Event sourcing was described as the established recipe for deterministic recovery. As long as transactions are sequenced, they can be replayed in order, making systems fully reproducible. One panellist illustrated this with a case where a power interruption caused a trading system to go down, self-heal and continue processing, prompting a philosophical debate among senior leaders over whether the system had technically been kill-switched. A cautionary note was sounded on dual-publishing data to multiple database instances for redundancy, a common pattern that was identified as a major source of divergence and non-reproducible environments.
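The event-sourcing recipe the panel described, sequenced transactions replayed in order, can be shown in a minimal sketch. This is an illustrative toy, not any panellist's production design; the key property is that state is a pure function of the ordered log, so recovery is fully reproducible:

```python
# Minimal sketch of event sourcing: as long as events carry a total sequence
# order, replaying them reproduces exactly the same state after a failure.

def apply(positions, event):
    """Pure state transition: same event in, same state change out."""
    seq, side, symbol, qty = event
    delta = qty if side == "BUY" else -qty
    positions = dict(positions)
    positions[symbol] = positions.get(symbol, 0) + delta
    return positions

def replay(event_log):
    """Deterministic recovery: sort by sequence number, then re-apply."""
    state = {}
    for event in sorted(event_log, key=lambda e: e[0]):
        state = apply(state, event)
    return state

log = [
    (1, "BUY", "XYZ", 100),
    (2, "SELL", "XYZ", 40),
    (3, "BUY", "ABC", 10),
]
live_state = replay(log)

# Simulate a crash: the durable log survives (possibly read back out of
# order), the in-memory state does not. Replay rebuilds it identically.
recovered = replay(list(reversed(log)))
assert recovered == live_state
print(live_state)
```

Note how this also explains the panel's warning about dual-publishing to multiple database instances: two consumers applying the same events in different orders, without a sequencing step like the `sorted` call above, can silently diverge.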
The 2027 blueprint
Asked to identify the single most important component for high performance infrastructure in 2027, the panel’s answers converged despite approaching from different angles. One speaker raised the prospect of prompt-generated trading code and argued that the industry will need to get there, but that confidence in AI-generated systems can only come from knowing they can be stopped before causing harm. The analogy was with safety-critical systems engineering: the controls regime around the black box matters more than what is inside it.
From the data perspective, the continued growth of AI, automation and eventually quantum computing all point toward more data flowing faster with greater consequences if it arrives corrupted. Infrastructure must maintain deterministic packet behaviour and end-to-end lifecycle visibility. And if constructing a complete operational picture requires visiting ten different systems and navigating multiple approval processes, the environment is not sustainable. The blueprint, the panel suggested, is data governed cohesively in one place using open standards.
The closing consensus was captured in the phrase deterministic transparency: the ability to produce a single, provable, immutable version of the truth across the entire trading landscape. For firms that can achieve this, the benefits extend beyond compliance into operational visibility, strategic confidence and the kind of trust that underpins competitive advantage in markets where complexity shows no signs of abating.