As trading desks evolve, firms are turning to AI, interoperability, and modern technology platforms to create smarter, more connected environments that reduce friction for traders, enhance decision-making, and allow users to interact seamlessly with data and tools.
However, one of the biggest hurdles they face is the sheer weight of the past. Decades of bespoke scripts and user-specific workarounds have been layered onto legacy systems, creating a complex but familiar environment that traders have mastered. To them, talk of modernisation often sounds like a threat to remove the very tools they rely on to navigate familiar workflows. This resistance to change can leave firms stuck between a past that no longer differentiates them and a future that seems impossibly difficult to reach.

Breaking this stalemate requires a new blueprint that goes beyond surface-level fixes. This emerging strategy is increasingly framed not as a simple choice between ‘buy’ or ‘build’, but as a sophisticated hybrid of both. This ‘Buy AND Build’ philosophy, the central theme of A-Team Group’s upcoming Buy AND Build Summit in London, forces firms to confront a new set of strategic questions: how can they build a modern technology foundation without resorting to a hugely costly and disruptive ‘rip and replace’? What does it practically mean to embed AI into daily workflows? How can firms manage a new, hybrid workforce of human and digital employees? And as speed and automation accelerate, what are the essential governance and controls that will ensure these new systems remain secure, explainable, and resilient?
The Modernisation Mandate
For decades, firms have empowered their sharpest minds to solve complex problems with whatever tools were available, leading to a sprawling, highly customised technology environment that is now a source of significant technical debt. This has created a deep-seated inertia that is as much cultural as it is technical. The resistance begins with the traders themselves, who have spent years honing their skills on systems that, while imperfect, are intimately known.
“One of the biggest challenges isn’t just the legacy system itself, but the sheer volume of customisation that has ‘accreted’ on top of it over many years,” says Matt Barrett, CEO of Adaptive, the custom trading technology solutions provider. “Desks have cobbled together countless bespoke scripts and workarounds to solve specific problems. This creates a huge amount of institutional knowledge and ‘muscle memory’ that is deeply embedded in that custom layer.” For such users, talk of modernisation raises the anxiety that expertise acquired over years is under threat of being erased, forcing them to start over.
This human challenge is compounded by entrenched commercial and structural realities: many firms find themselves shackled to inflexible, monolithic systems that are difficult to adapt and evolve.
“Many firms are entangled with large, legacy trading vendors and the inflexible contracts that come with them,” points out Jon Butler, CEO of Velox, the development platform for front- and middle-office systems. “The problem is that this behemoth that they pay $10 million a year for, which has tentacles all over the organisation, doesn’t provide the data or the internal access needed to build higher-level functions.”
This lack of access to foundational data massively hampers innovation and often renders new technology investments ineffective. “Banks and broker-dealers almost always have major data silo problems,” continues Butler. “Even if your developers are efficient, their progress is limited by their ability to get access to the fragmented data they need to build new applications. From this perspective, new technologies like AI and desktop interoperability can often act as ‘band-aids’: attempts to make some semblance of progress without tackling these underlying core issues.”
An even deeper problem is that of architectural debt. Many systems in use today were simply not designed for the data velocity and complexity of modern markets, leading to critical performance failures as they attempt to scale.
“The catch, and the reason so many systems fail, is that pushing all data everywhere works in the beginning, but as the platform scales, that approach’s overhead grows exponentially,” notes Robert Cooke, CEO of 3forge, the provider of high-performance code solutions for business-critical applications. “More data means more calculations, more users, more visualisations across more screens. Suddenly, performance issues become crippling and at that point moving to a virtualisation pub/sub model isn’t possible without a complete rewrite.”
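To make the distinction concrete, here is a minimal sketch – purely illustrative, and not drawn from 3forge’s implementation – of the subscription-based alternative Cooke alludes to: instead of pushing every update to every screen, the server fans each update out only to the clients that have registered interest in that topic.

```python
# Illustrative sketch of topic-based pub/sub (not any vendor's actual product):
# updates reach only the screens that subscribed, so server work scales with
# interest rather than with "all data everywhere".

from collections import defaultdict
from typing import Callable

class MarketDataHub:
    def __init__(self) -> None:
        # topic -> list of subscriber callbacks
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        """Register interest in a single topic, e.g. one instrument."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, update: dict) -> None:
        """Fan an update out only to subscribers of this topic,
        rather than broadcasting every tick to every screen."""
        for callback in self._subscribers.get(topic, []):
            callback(update)

hub = MarketDataHub()
hub.subscribe("EURUSD", lambda u: print("FX blotter:", u))
hub.publish("EURUSD", {"bid": 1.0841, "ask": 1.0843})   # delivered
hub.publish("GBPUSD", {"bid": 1.2710, "ask": 1.2712})   # no subscribers, no work
```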
Faced with these overlapping challenges of human resistance, vendor entanglement, and architectural debt, a mandate is clearly emerging across the industry. The desire to break free from these constraints has become a primary driver of change. As Bart Joris, Head of FX Sell-Side Trading at LSEG, observes, “There’s a clear shift toward open, interoperable platforms; firms want to plug in best-of-breed tools without being locked into closed ecosystems.”
This demand sets the stage for a new blueprint – one that provides a practical path forward without requiring firms to start over from scratch.
A New Blueprint
Leading firms have abandoned the unrealistic ‘rip and replace’ fantasy in favour of a more pragmatic, multi-faceted strategy. This new blueprint is not about a single ‘silver bullet’, but about systematically building a modern technology practice on top of – and in spite of – legacy constraints. This approach rests on three core pillars: creating a foundational abstraction layer; empowering a new hybrid workforce; and connecting the entire ecosystem with robust interoperability.
The first and most critical pillar insulates developers from the chaos of the old world. Rather than trying to untangle decades of legacy code, the strategy is to build a clean, modern facade that provides a single point of access to all necessary data, regardless of where it lives.
Butler explains: “First, you introduce a development platform that acts as an abstraction layer, essentially putting a facade over your legacy issues. It’s an exercise in keeping the legacy system in place but opening it up just enough so that this new layer can access all the data it needs. This means the developer no longer has to care where the data is located or how it’s accessed. They just go to that one place.” This foundational layer removes the data-wrangling bottleneck and creates the stable ground for building future innovations.
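A minimal sketch can illustrate the pattern Butler describes (all class names and data sources below are hypothetical): developers query a single facade, which routes each request to whichever adapter owns that data, so they never need to know where it lives or how it is accessed.

```python
# Hypothetical sketch of a data facade over legacy systems. The developer
# asks one layer for data and never learns whether it came from a legacy
# OMS, a vendor API, or a database.

from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def fetch(self, criteria: str) -> list[dict]: ...

class LegacyOmsAdapter(DataSource):
    def fetch(self, criteria: str) -> list[dict]:
        # In practice: a stored procedure, vendor API call, or file drop.
        return [{"order_id": "A1", "status": "FILLED"}]

class PositionsDbAdapter(DataSource):
    def fetch(self, criteria: str) -> list[dict]:
        return [{"book": "RATES", "position": 150}]

class DataFacade:
    """Single point of access: routes each query to whichever adapter
    owns that domain, hiding the legacy plumbing behind it."""
    def __init__(self) -> None:
        self._routes: dict[str, DataSource] = {
            "orders": LegacyOmsAdapter(),
            "positions": PositionsDbAdapter(),
        }

    def query(self, domain: str, criteria: str = "") -> list[dict]:
        return self._routes[domain].fetch(criteria)

facade = DataFacade()
print(facade.query("orders"))   # the developer never touches the legacy OMS directly
```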
The focus of the second pillar shifts from plumbing to intelligence. The goal is to move beyond isolated experiments and make AI a core component of the daily workflow. According to Joris, the industry is reaching a new level of maturity in this regard. “AI is becoming embedded, not bolted on,” he states. “We’re seeing firms move beyond experimentation to embed AI directly into the workflow, particularly in areas like price discovery, price generation, risk signalling, liquidity optimisation, and compliance.”
This deep integration has given rise to a reimagined, hybrid workforce, manifested in two distinct but complementary ways. On one hand, there are digital assistants that operate externally, augmenting the human user by acting as a conversational interface to an application. A trader might instruct an AI assistant, which then interacts with trading systems through secure protocols to carry out tasks in a governed manner. On the other hand, fully autonomous agents are being deployed within applications. These agents are designed to ‘listen’ for specific market conditions or internal triggers and then execute predefined actions automatically, without needing direct instruction for each task. This dual approach – augmenting the user from the outside and automating processes from within – is creating a new category of worker, explains Stephen Murphy, CEO of Genesis, the software development platform specialising in financial markets.
“We’re seeing the rise of ‘digital employees,’ which can range from digital assistants that augment human workers to fully autonomous digital agents. These agents operate ‘at runtime’ once the software is built and deployed, to transform functions across the business. These digital employees can work 24/7, become highly specialised in very specific tasks, and fundamentally change how work gets done.”
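The autonomous half of this hybrid workforce can be pictured as a simple rule-driven loop. The sketch below is illustrative only – it is not Genesis’s product, and the trigger and action are hypothetical – but it shows the pattern: an agent listens for a market condition and fires a predefined, pre-approved action without per-task human instruction.

```python
# Illustrative sketch of an autonomous agent: a condition evaluated on each
# incoming event, paired with a predefined, pre-approved action.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    condition: Callable[[dict], bool]   # trigger, evaluated on each event
    action: Callable[[dict], None]      # predefined, pre-approved response

    def on_event(self, event: dict) -> None:
        if self.condition(event):
            self.action(event)

def widen_quotes(event: dict) -> None:
    # Placeholder action; in production this would call a governed API
    # and write to an audit trail.
    print(f"Widening quotes for {event['symbol']} (vol={event['vol']:.1%})")

vol_watcher = Agent(
    name="volatility-watcher",
    condition=lambda e: e["vol"] > 0.30,   # hypothetical trigger threshold
    action=widen_quotes,
)

vol_watcher.on_event({"symbol": "EURUSD", "vol": 0.35})  # condition met: fires
vol_watcher.on_event({"symbol": "EURUSD", "vol": 0.12})  # ignored
```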
The third pillar ensures this entire new ecosystem – composed of legacy systems, new applications, and digital agents – can communicate seamlessly.
“The battle for internal interoperability has largely been won,” notes Barrett. “Major sell-side and buy-side firms now operate within very healthy internal ecosystems, all leveraging a common interoperability platform and framework. The next frontier, which we haven’t seen fully realised yet, is interoperability that spans multiple vendor products. Getting all of those to support integrated workflows on a single desktop is a different kind of challenge.”
Together, these three pillars – the foundational abstraction layer, the empowered hybrid workforce and the connected ecosystem – form a blueprint that acknowledges the past while pragmatically building a more intelligent and resilient trading desk for the future.
The Governance Imperative
The power to automate and accelerate workflows with AI brings with it an urgent responsibility to govern them. As firms move from isolated experiments to embedding intelligent agents at the core of their operations, the need for robust, multi-layered controls becomes paramount. The speed of AI-driven decisions means that risks can materialise faster than ever before, demanding a governance framework that is as dynamic and intelligent as the technology it oversees.
Fortunately, the industry is not starting from scratch. The principles of managing automated systems have been honed over years in the world of algorithmic trading, and the requirement for active human supervision and intervention remains the cornerstone of any responsible AI strategy.
“We champion the principle of ‘human in the loop,’ which is especially critical in high-stakes environments like trading,” says Adam Toms, CEO Europe at HERE (formerly OpenFin). “Traders have been working with a similar paradigm for years through algorithmic execution. You had sophisticated algos, but you also had a team of people providing oversight, monitoring exceptions and risk dashboards, and ensuring everything performed as expected.” This established model of human oversight provides a familiar and proven foundation for managing the next generation of autonomous agents.
However, the nature of generative AI introduces new and specific challenges when applied to software development itself. A significant risk emerges from the practice of using AI to write entire applications from scratch, an approach that has come to be known as ‘vibe coding’. While appealing for its speed, this method is widely seen as limited in its applicability to financial markets. Any code, especially code written at machine speed, must be rigorously validated for performance, security, and compliance before it can be deployed. This creates a massive governance bottleneck.
It is precisely this challenge that Murphy addresses when he outlines the critical need for lifecycle controls. He explains the fundamental questions that arise when AI is the primary author of the code:
“As AI is used more in development, critical questions arise about the long-term governance of what it creates. We are seeing real concerns around maintainability, auditability and explainability. If an AI generates code, who is responsible for maintaining it over its lifetime? How do you prove to a regulator what the AI did, who approved it, and that it was checked? You need a clear and defensible audit trail.”
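What such an audit trail might capture can be sketched in a few lines. The record structure below is purely hypothetical, but it illustrates the elements Murphy lists: what the AI generated, who approved it, which checks it passed, and a content hash so the record’s integrity can be demonstrated later.

```python
# Hypothetical audit record for AI-generated code: an append-only entry
# capturing provenance, approval, and checks, hashed so tampering is
# detectable. Field names and values are illustrative only.

import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    artifact_id: str        # e.g. a commit hash of the generated code
    generated_by: str       # model name and version
    prompt_ref: str         # pointer to the stored prompt/context
    approved_by: str        # human reviewer
    checks_passed: tuple    # e.g. ("unit-tests", "security-scan")
    timestamp: str

    def digest(self) -> str:
        """Content hash, so a regulator can verify the record is unaltered."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = AuditRecord(
    artifact_id="9f3c2ab",
    generated_by="codegen-model-v2",            # hypothetical model id
    prompt_ref="prompts/2025-03-11/ticket-482",
    approved_by="j.smith",
    checks_passed=("unit-tests", "security-scan", "compliance-review"),
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.digest())
```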
The alternative, more governable model involves using AI not to write the bulk of the code, but to configure and connect pre-built, trusted application components. In this scenario, the platform provides the core, validated building blocks for functions like data handling, reporting, and audit. The AI then acts as an intelligent assembler, allowing users (even non-coders) to define how these components interact to achieve their desired custom functionality. This ensures that the resulting application is inherently built on a foundation that is already compliant, performant, and secure.
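A toy example makes the distinction clear. In the sketch below (all component names are hypothetical), the AI’s output is declarative configuration rather than raw code, and the platform refuses to assemble anything that is not in its registry of pre-validated components.

```python
# Illustrative sketch of the "AI as assembler" model: the AI emits
# configuration, never code, and only vetted components can be wired together.

TRUSTED_COMPONENTS = {"csv_loader", "position_aggregator", "pdf_report"}

# What an AI assistant might produce from the request
# "report positions by book each morning":
pipeline_config = [
    {"component": "csv_loader", "params": {"path": "positions.csv"}},
    {"component": "position_aggregator", "params": {"group_by": "book"}},
    {"component": "pdf_report", "params": {"recipients": ["desk-heads"]}},
]

def validate(config: list[dict]) -> None:
    """Reject any step that is not a trusted, pre-built component, so the
    assembled application inherits the platform's compliance guarantees."""
    for step in config:
        if step["component"] not in TRUSTED_COMPONENTS:
            raise ValueError(f"Untrusted component: {step['component']}")

validate(pipeline_config)   # passes: the AI only configured vetted blocks
```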
Ultimately, even with the most rigorous technical and lifecycle controls in place, the success of any AI tool hinges on a critical human factor: trust. If traders and risk managers do not have confidence in a model’s behaviour and outputs, they simply will not use it, rendering any technological investment useless. “Trust is a key challenge,” notes Joris. “Traders and risk teams need to understand what the model is doing, the accuracy of the responses, why it behaves a certain way, and how it responds in volatile conditions. This demands rigorous back-testing and effective controls.”
Governance, therefore, is not just about preventing bad outcomes, but about building the institutional confidence required for any AI-powered transformation to succeed.
Measuring What Matters
As firms invest significant capital and resources into reimagining their trading desks, the definition of success is also evolving. While traditional return on investment remains crucial, the primary metrics for these modernisation projects go beyond simple cost savings. They measure a more fundamental shift in a firm’s ability to operate with agility, empower its people, and ultimately deliver a step-change in performance.
The most critical indicator of success, according to some, is not the efficiency of a single new workflow, but the organisation’s newfound ability to innovate at speed. The core failure of legacy technology was its rigidity. Therefore, the ultimate test of any new platform is its flexibility.
“The primary pain point of old systems was how time-consuming, expensive, and difficult it was to implement any change,” says Barrett. “Therefore, the success of a modernisation project should be measured by its ability to solve that core problem. And we advise clients to think at a meta-level: don’t get fixated on solving one specific trader’s complaint. Instead, focus on improving your ability to improve all business workflows. The goal is to build the new workflow on a platform that enables change to happen more quickly and efficiently.”
Beyond this strategic agility, the next set of key performance indicators is deeply human-centric. Given the industry’s history of building powerful but underused tools, one hurdle is simply getting people to engage.
“The first and most fundamental metric is adoption,” states Toms. “This has been a major barrier for the industry. Huge sums of money have been spent on AI tooling that saw low user adoption. So, the number one focus for most firms right now is getting that adoption rate up. The second key metric is capacity creation, i.e. how much time is this technology giving back to our people? Success isn’t just about internal efficiency; it’s about translating that new capacity into tangible, positive business outcomes.”
This drive to create capacity is fuelled by the recognition that AI is a powerful catalyst for broader digital transformation. With software tools improving at an unprecedented rate, there is a growing sense of urgency that firms must move now to produce applications more quickly and prepare for an AI-driven future, or risk being left behind. This urgency is driven by a dramatic shift in expectations; the goal is no longer the incremental efficiency gains of past automation projects. Clients are now targeting gains that are an order of magnitude larger, justifying the push for a new, AI-native approach, observes Murphy.
“A number of our clients have told us that previous automation phases delivered marginal gains of 5% to 10%,” he says. “While valuable, they are now looking for the 40% to 50% transformational gains that AI can deliver.”
Ultimately, these transformational gains must also manifest in the traditional, hard-nosed metrics that define trading success. The new, intelligent workflows are still accountable to the performance of the desk and must prove their worth in tangible market outcomes. As Joris points out, the fundamentals still apply. “It starts with examining efficiency gains and whether new workflows provide the desired effects in price discovery, price creation, ideation, execution quality. Speed, cost, accuracy and slippage also remain central.”
By these measures, a successful modernisation project is one that can deliver strategic agility and human empowerment while simultaneously improving the core P&L of the trading business.
The Exception-Based Future and the AI-Ready Platform
The ultimate ambition of this technological shift is not merely to create faster or more efficient workflows, but to fundamentally change the nature of work on a trading desk. The end goal is an environment where human intellect is reserved for the most complex challenges, while intelligent systems handle the rest. Will this future render our current understanding of process obsolete?
“I think the term ‘workflow’ itself is a bit of a legacy concept,” reflects Butler. “The future probably doesn’t have workflows in the traditional sense. In a futuristic environment, everything is exception-based. You only interact with the system at a point when it doesn’t have the confidence to know what to do on its own.” This vision represents the final destination: a truly symbiotic relationship between trader and technology.
This shift towards an exception-based desk is not a theoretical leap, but a practical evolution that starts with solving simple, everyday frustrations. As Cooke explains, the journey is a natural progression driven by user needs. “A workflow often begins with a simple, user-driven task, such as manually reconciling data between two windows. The next question is almost always, ‘Can we automate this?’ At that stage, the reconciliation logic should move to the server, where the system performs the calculations and sends only the results to the front end – typically the exceptions or the most relevant data. What starts as basic context-sharing evolves into automated processing handled entirely on the back end.”
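The progression Cooke outlines can be sketched in a few lines: the comparison logic lives on the server, and only the breaks – the exceptions – travel to the trader’s screen. This is illustrative only; the field names and tolerance are hypothetical.

```python
# Illustrative server-side reconciliation: compare two position snapshots
# and return only the exceptions, not both full datasets.

def reconcile(system_a: dict[str, float], system_b: dict[str, float],
              tolerance: float = 0.01) -> list[dict]:
    """Return only the breaks between two position snapshots."""
    breaks = []
    for key in system_a.keys() | system_b.keys():
        a, b = system_a.get(key, 0.0), system_b.get(key, 0.0)
        if abs(a - b) > tolerance:
            breaks.append({"id": key, "system_a": a, "system_b": b})
    return breaks

oms_positions = {"EURUSD": 100.0, "GBPUSD": 50.0, "USDJPY": 75.0}
risk_positions = {"EURUSD": 100.0, "GBPUSD": 48.5, "USDJPY": 75.0}

# The front end receives one break, not thousands of matching rows.
print(reconcile(oms_positions, risk_positions))
# [{'id': 'GBPUSD', 'system_a': 50.0, 'system_b': 48.5}]
```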
However, for this vision to become a reality, technology platforms must meet a new and non-negotiable standard. Being ‘AI-ready’ needs to be more than a marketing slogan; in truth it is a strict technical requirement centred on data accessibility at the deepest level. As Barrett concludes, this is the foundational prerequisite that will separate the platforms of the future from the relics of the past. “The next generation of trading workflows will be defined by platforms that are truly ‘AI-ready,’ meaning they solve the challenges people face now in trying to build AI workflows on top of systems that don’t expose the data, and certainly don’t expose the data model for modification,” he states. “Access to both the data and the data model is the lifeblood of the entire AI flywheel, but traditional vendor products with rigid data models make this incredibly difficult.”
If the goal for trading firms is to reserve human intellect for the most complex challenges, then the journey forward is defined by a single, critical question: how to build the resilient, open, and governable platforms necessary to support this new division of labour, finally closing the ‘digital last mile’?