
The Potential and Pitfalls of Large Language Models

By Tony Seale, Knowledge Graph Engineer at Tier 1 Bank.

Large Language Models (LLMs) like ChatGPT possess enormous power, stemming from their capability to ingest and compress vast amounts of general information gathered from the web. However, this capability is general rather than tailored to your specific business needs. To effectively utilise these models in a context relevant to your business, it’s essential to provide them with specific information and data related to your sector and niche. After all, if the general LLM knows everything your business knows – what’s the point of your business? But here’s the kicker: if you put garbage in, you get garbage out. Disorganised data will result in vague or even inaccurate answers.
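To make this concrete, here is a minimal retrieval-augmented sketch of what "providing the model with your own information" can look like in practice. It assumes the openai Python package; the search_internal_docs function, the example documents, and the model name are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal retrieval-augmented sketch: ground the LLM in your own data
# rather than relying on its general, web-derived knowledge.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_internal_docs(question: str) -> list[str]:
    """Hypothetical retrieval step: in practice this would query your
    organisation's (well-organised!) data estate - a search index,
    vector store, or knowledge graph."""
    return [
        "ACME Bank's FX desk settles EUR/USD trades on a T+2 cycle.",
        "Internal policy P-17 requires dual approval above EUR 1m.",
    ]

def answer_with_context(question: str) -> str:
    context = "\n".join(search_internal_docs(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context provided. "
                        "If the context is insufficient, say so.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_with_context("What is the settlement cycle for EUR/USD?"))
```

The key design point is that the model is told to answer only from the supplied context, so the quality of its answer is bounded by the quality of the data your retrieval step can actually find.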

The quality of your AI offering will depend directly on the quality of the data you feed into the LLM. In other words, the quality, connectivity, organisation, and availability of information within your organisation are the key factors determining the success of your main generative AI use cases. However, there is a harsh truth to acknowledge: the data estates of most large organisations are currently very disorganised.

Given that the organisation of our data is directly related to the quality of our LLM’s responses, perhaps our primary AI strategy should actually be to double down on our data strategy!

Organising your total data estate is no trivial task, but I believe the great AI acceleration will soon make it necessary. There are no simple answers, but one architectural blueprint that could help you navigate this complex journey is the semantic data mesh; the sketch below gives a flavour of the underlying idea.
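As a taste of what "semantic" organisation means, here is a minimal sketch using the open-source rdflib library: business facts are expressed as explicit subject-predicate-object triples against a shared vocabulary, so they stay connected and machine-readable wherever they live in the mesh. The ex: namespace and the trade facts are invented purely for illustration.

```python
# A minimal sketch of semantically organised data: describe business
# entities as a small knowledge graph using rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("https://example.com/ontology/")  # illustrative vocabulary

g = Graph()
g.bind("ex", EX)

# Each fact is one explicit, machine-readable triple.
g.add((EX.Trade42, RDF.type, EX.FXTrade))
g.add((EX.Trade42, EX.currencyPair, Literal("EUR/USD")))
g.add((EX.Trade42, EX.bookedBy, EX.DeskLondonFX))
g.add((EX.DeskLondonFX, RDFS.label, Literal("London FX Desk")))

# Serialised Turtle is connected, self-describing context - exactly the
# kind of well-organised input you can hand to an LLM.
print(g.serialize(format="turtle"))
```

Serialised like this, the data carries its meaning with it, which is precisely what makes it useful grounding context for an LLM.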
