
Across capital markets, AI pilots are proliferating – on trading desks, in compliance functions, within data teams. The proof-of-concept phase has, for many firms, delivered enough evidence of value to settle the strategic debate. What remains stubbornly unresolved is the operational question: how to translate a growing patchwork of departmental experiments into scalable, production-grade capability.
That was the overarching message from a panel discussion at A-Team Group’s TradingTech Summit London 2026, entitled “Harnessing AI in Trader Workflows”. The session brought together Matthew McLoughlin, Strategic Technology Buy Side Leader; Yarden Jacobson, Head of Order Management and Analytics at Man Group; Adam Ragol-Levy, Head of European and Asian Product for Multi-Asset Agency Solutions at RBC Capital Markets; and Andre Nedelcoux, VP Financial Services at Intellias. It was moderated by Hayley McDowell, Head of European Market Structure at RBC Capital Markets.
Unstructured Data: Where the Value Is Landing
Panellists identified the processing of unstructured data – chat transcripts, voice communications, inter-broker messages – as the area delivering the most tangible impact today. In credit markets, where liquidity discovery still relies on conversational channels rather than electronic order books, AI models that extract intent, price signals and context from these interactions represent a step-change in workflow efficiency.
One panellist described this as the mechanism that could bring systematic scalability to credit and rates: not by forcing equity-style workflows onto different market structures, as the industry has repeatedly attempted, but by working with the grain of how those markets operate. The implication is that AI-driven processing of unstructured data could open access to previously untapped segments of the credit market at a scale that was not viable before.
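To make the idea concrete, the sketch below shows the kind of structured fields (side, instrument, price, size) an extraction model might pull from a broker chat message. It is a minimal, rule-based illustration only: the field names, patterns and example message are hypothetical, not any panellist's system, and in practice the extraction would be done by an LLM or a trained entity-recognition model rather than regular expressions.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuoteSignal:
    """Structured fields an extraction model might pull from a broker chat line."""
    side: Optional[str] = None        # "buy" or "sell"
    instrument: Optional[str] = None  # e.g. an ISIN mentioned in the chat
    price: Optional[float] = None     # indicative price level, if stated
    size_mm: Optional[float] = None   # notional in millions, if stated

def extract_signal(message: str) -> QuoteSignal:
    """Rough rule-based stand-in for an AI extraction step."""
    signal = QuoteSignal()
    text = message.lower()

    if any(w in text for w in ("bid for", "buying", "lifting")):
        signal.side = "buy"
    elif any(w in text for w in ("offer", "selling", "axe to sell")):
        signal.side = "sell"

    isin = re.search(r"\b[A-Z]{2}[A-Z0-9]{9}\d\b", message)
    if isin:
        signal.instrument = isin.group(0)

    price = re.search(r"\bat\s+(\d{2,3}(?:\.\d+)?)\b", text)
    if price:
        signal.price = float(price.group(1))

    size = re.search(r"\b(\d+(?:\.\d+)?)\s*(?:mm|mio|million)\b", text)
    if size:
        signal.size_mm = float(size.group(1))

    return signal

# Example: an inter-dealer style message
print(extract_signal("Axe to sell 5mm XS1234567890 at 98.75, can work an order"))
```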
More broadly, panellists noted that AI is surfacing insights from behavioural data – trader decision patterns, investor biases, execution habits – that are invisible to human analysis at scale, though one speaker cautioned against allowing human biases to be replaced by model biases in the process.
A People Problem, Not a Data Problem
An audience poll split evenly between data quality concerns and regulatory uncertainty as the biggest pain points. One panellist pushed back directly: “AI at scale is a people transformation. We need to change people, mindset and skills. It’s not a tech rollout.” Successful adoption, he argued, requires firms to approach AI the way they would onboard a new team member: identifying use cases, building champions, developing skill sets and scaling team by team. Starting from data quality inverts the problem.
Others reinforced the point. Data quality will never be perfect, and waiting for pristine data before deploying AI is a form of institutional procrastination; AI itself can improve data quality in parallel with its deployment. Visible CEO-level commitment matters too: firms where senior leadership uses AI tools and communicates a clear strategic rationale are far more likely to see adoption embedded across the organisation. As one panellist put it: “If a new junior can’t use your docs and onboard, your agent won’t be able to be helped either.”
Enterprise Platforms and the Compliance Dividend
The challenge now is architectural: moving from vertical, siloed pilots to a unified AI platform that supports multiple use cases while meeting governance requirements. Panellists outlined the key components – evaluation frameworks for continuous model performance measurement, observability infrastructure to track drift, and centralised connections to trusted internal data sources – all underpinned by a firmwide AI strategy. Without one, firms risk duplicating effort and failing to realise cross-functional benefits.
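As a rough illustration of what “evaluation plus observability” can mean in practice, the sketch below logs every model interaction with a quality score and flags drift when recent scores fall below a baseline. The class names, scoring scheme and thresholds are hypothetical, chosen for illustration rather than drawn from any panellist's platform.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EvalRecord:
    """One logged model interaction, kept for traceability and benchmarking."""
    prompt: str
    output: str
    score: float  # e.g. graded against a reference answer or a rubric

@dataclass
class EvalHarness:
    """Sketch of a continuous-evaluation loop: log every call, watch for drift."""
    baseline_score: float
    drift_tolerance: float = 0.05
    records: list[EvalRecord] = field(default_factory=list)

    def log(self, prompt: str, output: str, score: float) -> None:
        self.records.append(EvalRecord(prompt, output, score))

    def drift_detected(self, window: int = 100) -> bool:
        """True if the recent average score has slipped below the baseline."""
        recent = self.records[-window:]
        if not recent:
            return False
        return mean(r.score for r in recent) < self.baseline_score - self.drift_tolerance

harness = EvalHarness(baseline_score=0.90)
harness.log("Summarise this RFQ thread", "…", score=0.82)
print(harness.drift_detected(window=10))  # drift flagged once scores degrade
```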
A particularly useful insight for a trading technology audience emerged from the guardrails discussion: the infrastructure required for regulatory explainability and for AI performance optimisation is substantially the same. Traceability, continuous evaluation and model benchmarking all serve both the regulator and the firm’s own improvement cycle. Investment in observability and feedback loops pays dividends on both sides simultaneously. Beyond explainability, firms must also address cost controls for token consumption, security protocols for proprietary data, and information security frameworks to prevent sensitive trading data or IP from leaking to external providers.
The practical framing offered by one panellist was instructive: treat AI like a new junior employee. You would not hand a new hire decision-making authority on day one; you would onboard them, assign progressively larger tasks, and maintain oversight throughout. For now, the trader still makes the call and accountability remains with people.
Buy the Models, Build the Framework
On buy versus build, the panel reached clear consensus: no firm should be training its own large language models. Models are commoditising rapidly, and the investment required to compete with frontier providers is prohibitive. The strategic value lies in the integration layer: the frameworks connecting commercial models to proprietary data, internal workflows and governance structures.
One panellist described his firm’s recent partnership with a frontier model provider as a collaboration rather than a dependency: the firm works alongside the provider’s engineers while retaining the flexibility to switch models without re-engineering the surrounding infrastructure. Others echoed this: the current landscape still requires assembly from multiple vendors, but if a firm is building significant AI infrastructure itself, that should be a warning sign. The value is not in the plumbing.
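The “switch models without re-engineering” point amounts to keeping workflow code behind a thin, provider-agnostic interface. The sketch below is a hypothetical illustration of that pattern only: the vendor classes are stubs, not real APIs, and the function name is invented for the example.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the surrounding workflow code depends on."""
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    def complete(self, prompt: str) -> str:
        # Would call vendor A's API here; stubbed for illustration.
        return f"[vendor A] {prompt[:40]}"

class VendorBModel:
    def complete(self, prompt: str) -> str:
        # Would call vendor B's API here; stubbed for illustration.
        return f"[vendor B] {prompt[:40]}"

def summarise_desk_notes(model: ChatModel, notes: str) -> str:
    """Workflow code targets the interface, not a specific provider."""
    return model.complete(f"Summarise the following desk notes:\n{notes}")

# Swapping providers is a one-line change, not a re-engineering effort.
print(summarise_desk_notes(VendorAModel(), "Client asked for colour on EUR IG new issues."))
```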
Budgets, Expansion and the Augmentation Question
With an estimated 60 to 70 percent of technology spend at many firms consumed by maintaining existing systems, panellists argued that AI should be viewed not as an incremental cost line but as a way to unlock that trapped expenditure: modernising legacy stacks, automating low-value processes and cleansing data at a pace impossible with manual effort, then reinvesting the savings in higher-value use cases.
The more ambitious reframe went beyond cost reduction. The choice, as one speaker put it, is between using AI to do the same things more cheaply and using it to do things that were not possible before, such as entering new markets, offering new products, pursuing opportunities that could not previously be resourced. The firms that will differentiate are those choosing expansion over efficiency alone.
That logic extended to the question of trading roles. The consensus was firmly on the side of augmentation: AI gives teams greater velocity and analytical reach, but relationships, judgement and contextual understanding remain essential sources of alpha. The overarching takeaway was clear: invest in enterprise AI platform capability, own the feedback loop, and above all, recognise that scaling AI is a people challenge. The technology is ready. The question is whether organisations are.