About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Potential and Pitfalls of Large Language Models


By Tony Seale, Knowledge Graph Engineer at Tier 1 Bank.

Large Language Models (LLMs) like ChatGPT possess enormous power, stemming from their capability to ingest and compress vast amounts of general information gathered from the web. However, this capability is general rather than tailored to your specific business needs. To effectively utilise these models in a context relevant to your business, it’s essential to provide them with specific information and data related to your sector and niche. After all, if the general LLM knows everything your business knows – what’s the point of your business? But here’s the kicker: if you put garbage in, you get garbage out. Disorganised data will result in vague or even inaccurate answers.
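The idea of supplying a general model with your own specific information can be sketched as a minimal retrieval step: rank internal documents against the question and prepend the most relevant ones to the prompt. The documents, the keyword-overlap scoring, and the prompt template below are all invented for illustration; a production system would use embeddings and a vector store rather than word counting.

```python
# Minimal sketch of grounding an LLM prompt in business-specific context.
# Documents and scoring are hypothetical illustrations, not any product's API.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy relevance)."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in set(query.lower().split()) if w in doc_words)

def build_grounded_prompt(query: str, documents: list[str], top_k: int = 2) -> str:
    """Prepend the top_k most relevant internal documents to the user query."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use only this internal context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our settlement cut-off for EUR payments is 16:00 CET.",
    "The cafeteria menu changes every Monday.",
    "FX trades above 10m EUR require desk-head approval.",
]
prompt = build_grounded_prompt("EUR settlement cut-off", docs)
```

Note that if the internal documents themselves are vague or wrong, the grounded answer will be too: the retrieval step can only surface the data quality you already have.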

The quality of your AI offering will depend directly on the quality of the data you feed into the LLM. In other words, the quality, connectivity, organisation, and availability of information within your organisation are key factors in determining the success of your main generative AI use cases. However, there is a harsh truth to acknowledge: the data estates of most large organisations are currently very disorganised.

Given that the organisation of our data is directly related to the quality of our LLM’s responses, perhaps our primary AI strategy should actually be to double down on our data strategy!

Organising your total data estate is no trivial task, but I believe the great AI acceleration will soon make it necessary. While there are no simple answers, here are some links offering insights into building a semantic data mesh, an architectural blueprint that could help you navigate this complex journey:
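As a toy illustration of the connectivity a semantic approach aims for, organisational facts can be expressed as subject-predicate-object triples and traversed uniformly across what would otherwise be siloed systems. The entities and predicates below are invented examples (real semantic data meshes would use RDF and shared ontologies):

```python
# Toy sketch of connected organisational data: facts as
# (subject, predicate, object) triples, traversable across domains.
# All entity and predicate names are invented for illustration.

triples = {
    ("TradeDesk:FX1", "executes", "Product:EURUSD-Spot"),
    ("Product:EURUSD-Spot", "settlesVia", "System:CLS"),
    ("System:CLS", "governedBy", "Regulation:PSD2"),
}

def objects_of(subject: str, predicate: str) -> set[str]:
    """Follow one link in the graph."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Traverse the links: which regulation ultimately governs desk FX1's product?
product = objects_of("TradeDesk:FX1", "executes").pop()
system = objects_of(product, "settlesVia").pop()
regulation = objects_of(system, "governedBy").pop()
```

It is exactly this kind of explicit linkage, from desk to product to settlement system to regulation, that lets an LLM answer a cross-domain question instead of guessing from disconnected silos.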

