
A-Team Insight Blogs

Integrating AI into Legacy Infrastructure – Balancing Innovation and Reality


Integrating cutting-edge AI into the financial sector’s entrenched legacy systems is one of today’s most pressing challenges. How can firms balance the promise of innovation with the hard reality of their existing infrastructure?

At the A-Team Group’s recent AI in Capital Markets Summit London, a panel of experts tackled this very issue. Moderated by Naomi Clarke of CDO Advisory, the panel featured Alpesh Doshi of Redcliffe Capital, Leslie Kanthan PhD of TurinTech AI, and Neil Naidoo of Rimes. They examined the nuanced opportunities and hurdles of deploying AI into established infrastructures and workflows, addressing critical questions such as:

  • How can AI be effectively integrated with existing legacy systems?
  • What are the merits of building versus buying AI solutions?
  • How should financial institutions evaluate the myriad of AI tools flooding the market?
  • What role does cloud architecture play in ensuring scalability and cost-efficiency?
  • How can organizations align AI adoption with governance, ethics, and sustainability?

The Enduring Challenge: Legacy Systems and the Case for a Hybrid Strategy

The panel reached a clear consensus: legacy systems remain a formidable barrier to AI adoption. Crucially, “legacy” no longer just refers to decades-old mainframes; systems only a few years old can stall integration efforts in today’s rapidly evolving landscape. While financial institutions, burdened with these entrenched systems, face significant hurdles, the panel observed that firms are making tangible progress. Fewer organizations are stuck in the exploration phase; more are now actively formulating strategies and beginning to implement AI.

When considering how to acquire these capabilities, a hybrid approach dominated the discussion. The panel showed a strong preference for a strategy that combines specialist external solutions with bespoke internal development. This gives institutions the flexibility to use the best external tools while keeping control of their core technology stack.

Many firms are still wrestling with what AI can realistically deliver before even reaching the build-versus-buy decision. While startups offer a wave of innovation, integration remains a lengthy and complex process, especially in regulated environments. The panel noted that while generative AI excels at creating new code, it struggles with the intricacies of integrating with legacy systems. This limitation reinforces the case for a hybrid approach, where new technologies like large language models are closely aligned with an institution’s business goals and operational frameworks to deliver real value.

Beyond the Hype: Building Rigorous Frameworks for AI Evaluation

With an explosion of AI tools on the market, building robust evaluation frameworks is non-negotiable. Rather than focusing solely on a vendor’s claims, institutions must critically examine what is realistically deliverable and how a solution will measurably improve performance or efficiency. This evaluation must include total cost of ownership—from cloud and on-premises expenses to energy consumption.
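A total-cost-of-ownership comparison of this kind can be made concrete with a simple model. The sketch below is purely illustrative: the cost categories and figures are hypothetical assumptions, not numbers from the panel, and a real assessment would be far more granular.

```python
# Illustrative total-cost-of-ownership (TCO) comparison for acquiring an
# AI capability. All cost categories and figures below are hypothetical.

def total_cost_of_ownership(annual_costs: dict[str, float], years: int) -> float:
    """Sum annual cost categories (e.g. cloud, on-premises, energy) over a horizon."""
    return sum(annual_costs.values()) * years

# Hypothetical annual costs for a "buy" option (vendor solution)...
buy = {"licence": 120_000, "cloud": 40_000, "integration": 25_000, "energy": 5_000}
# ...versus a "build" option (in-house development).
build = {"engineering": 150_000, "cloud": 30_000, "maintenance": 40_000, "energy": 8_000}

horizon = 3  # evaluation horizon in years
tco_buy = total_cost_of_ownership(buy, horizon)      # 570,000
tco_build = total_cost_of_ownership(build, horizon)  # 684,000
```

Even a toy model like this makes the panel's point visible: once cloud, maintenance, and energy costs are counted alongside headline licence or engineering spend, the cheaper-looking option can change.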

One panelist emphasized that despite the hype, traditional vendor due diligence must persist, now augmented with AI-specific impact assessments. Firms must establish objective success criteria and engage domain specialists in the evaluation process to ensure practical, outcome-driven decisions. The panel agreed that effectively evaluating AI is a skill that demands rigour and consistency, and that improves with experience.

The Three Pillars of Success: Data, Governance, and Sustainable AI

Success with AI hinges on a solid foundation. The conversation highlighted three critical pillars: data architecture, governance, and sustainability.

First, a scalable, cost-effective data architecture is fundamental. This requires a dual focus: robust governance for incoming data and flexible structures for its extraction and use. While structured data management is relatively mature, the unstructured data pipelines vital for areas like compliance and KYC are still in their infancy. The panel stressed that designing architectures that effectively merge structured and unstructured data is now a critical priority.
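A minimal sketch can show what merging structured and unstructured data looks like in practice. The KYC scenario, field names, and identifiers below are hypothetical assumptions for illustration; production pipelines would use far more robust extraction than regular expressions.

```python
import re

# Hypothetical KYC example: enrich a structured client record with fields
# extracted from an unstructured onboarding note. All data is invented.

client = {"client_id": "C-001", "name": "Acme Ltd", "jurisdiction": "UK"}

note = "Onboarding note: Acme Ltd, LEI 5493001KJTIIGC8Y1R12, risk rating: medium."

def extract_fields(text: str) -> dict[str, str]:
    """Pull simple structured fields out of free text with regular expressions."""
    fields = {}
    if m := re.search(r"LEI\s+([A-Z0-9]{20})", text):       # 20-character LEI code
        fields["lei"] = m.group(1)
    if m := re.search(r"risk rating:\s*(\w+)", text):
        fields["risk_rating"] = m.group(1)
    return fields

# Merge the governed structured record with the extracted unstructured fields.
merged = {**client, **extract_fields(note)}
```

The design point is the one the panel raised: the structured side (`client`) sits under mature governance, while the unstructured side (`note`) needs an extraction step before the two can be merged into a single view.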

Second, the panel placed a strong focus on governance and ethics. Establishing AI councils that bring together risk managers and domain experts was identified as an effective way to govern AI use. Panelists raised serious concerns about the technical debt generated by unrefined AI code, highlighting the need for rigorous validation to prevent future inefficiencies. Although AI governance frameworks are still immature, they are essential for managing risks like bias, toxicity, and regulatory compliance, particularly with emerging legislation like the EU AI Act.

Finally, the panel acknowledged the environmental costs of AI. As organizations move forward, they must scrutinize the carbon footprint of AI models and applications, ensuring that technology adoption aligns with broader ESG goals and corporate values.

While the path to AI integration is undeniably complex, the discussion drove home a powerful point. A deliberate, pragmatic approach, grounded in clear objectives, disciplined execution, and a strong governance framework, is what will unlock significant value, protecting firms from the temptation of premature or ill-considered deployments.

