The Potential and Pitfalls of Large Language Models

By Tony Seale, Knowledge Graph Engineer at Tier 1 Bank.

Large Language Models (LLMs) like ChatGPT possess enormous power, stemming from their capability to ingest and compress vast amounts of general information gathered from the web. However, this capability is general rather than tailored to your specific business needs. To effectively utilise these models in a context relevant to your business, it’s essential to provide them with specific information and data related to your sector and niche. After all, if the general LLM knows everything your business knows – what’s the point of your business? But here’s the kicker: if you put garbage in, you get garbage out. Disorganised data will result in vague or even inaccurate answers.
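To make that concrete, here is a minimal sketch of the retrieval-augmented pattern this implies: fetch your own business documents first, then hand them to the model as context. The keyword-overlap retriever and the stubbed call_llm() are illustrative stand-ins for a real vector search and a real LLM endpoint, not any particular product's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the LLM's
# answer in your own business documents instead of relying on its
# general web knowledge. The keyword-overlap retriever and the stubbed
# call_llm() are illustrative stand-ins, not a specific product's API.

BUSINESS_DOCS = [
    "Our settlement cut-off for EUR payments is 16:00 CET.",
    "Client onboarding requires a signed ISDA master agreement.",
    "Internal risk limits cap single-counterparty exposure at 5%.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call to an LLM endpoint."""
    return f"[LLM answer grounded in a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    # Inject only retrieved, business-specific context into the prompt.
    context = "\n".join(retrieve(question, BUSINESS_DOCS))
    prompt = (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("What is the settlement cut-off for EUR payments?"))
```

Because the model only sees what retrieval finds, its answer can never be better than the data sitting behind it.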

Put simply, the quality of your AI offering will depend directly on the quality of the data you feed into the LLM. In other words, the quality, connectivity, organisation, and availability of information within your organisation are key factors in determining the success of your main generative AI use cases. However, there is a harsh truth to acknowledge: the data estates of most large organisations are currently very disorganised.
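As a small illustration of "garbage in, garbage out", a quality gate like the sketch below can quarantine disorganised records before they ever reach the index that feeds the LLM. The field names and validation rules are assumptions for illustration, not a standard schema.

```python
# "Garbage in, garbage out" made concrete: a quality gate that quarantines
# disorganised records before they reach the index feeding the LLM.
# The field names and rules are illustrative assumptions, not a standard.

REQUIRED_FIELDS = {"id", "counterparty", "asset_class", "as_of_date"}

def quality_issues(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if not record.get("counterparty", "").strip():
        issues.append("blank counterparty")
    return issues

records = [
    {"id": "T1", "counterparty": "ACME Corp", "asset_class": "FX",
     "as_of_date": "2024-06-30"},
    {"id": "T2", "counterparty": "", "asset_class": "Rates"},  # disorganised
]

clean = [r for r in records if not quality_issues(r)]
for r in records:
    for issue in quality_issues(r):
        print(f"{r['id']}: {issue}")  # T2 is quarantined, not indexed
print(f"{len(clean)} of {len(records)} records admitted to the index")
```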

Given that the organisation of our data is directly related to the quality of our LLM’s responses, perhaps our primary AI strategy should actually be to double down on our data strategy!

Organising your total data estate is no trivial task, but I believe the great AI acceleration will soon make it necessary. While there are no simple answers, one promising direction is the semantic data mesh, an architectural blueprint that could help you navigate this complex journey.
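To give a flavour of the "semantic" part, the sketch below publishes a few facts as an RDF knowledge graph using the open-source rdflib library, then assembles LLM-ready context with a single SPARQL query that walks across connected domains. The ex: schema, the entity identifiers, and the LEI value are all illustrative assumptions.

```python
# A glimpse of the "semantic" in semantic data mesh: publish facts as an
# RDF knowledge graph so that data from different domains is connected
# and queryable, then assemble LLM-ready context with one SPARQL query.
# Requires rdflib (pip install rdflib); the ex: schema and identifiers
# are illustrative assumptions, including the dummy LEI value.
from rdflib import RDF, Graph, Literal, Namespace

EX = Namespace("http://example.com/schema/")
g = Graph()
g.bind("ex", EX)

# Two data products from different domains, linked by a shared entity.
g.add((EX.Trade42, RDF.type, EX.Trade))
g.add((EX.Trade42, EX.counterparty, EX.AcmeCorp))
g.add((EX.AcmeCorp, RDF.type, EX.LegalEntity))
g.add((EX.AcmeCorp, EX.lei, Literal("5493001EXAMPLE000000")))

# Because the facts are connected, one query walks across both domains.
results = g.query("""
    PREFIX ex: <http://example.com/schema/>
    SELECT ?trade ?lei WHERE {
        ?trade ex:counterparty ?cp .
        ?cp ex:lei ?lei .
    }
""")
for trade, lei in results:
    print(f"Context for the LLM: trade {trade} faces counterparty LEI {lei}")
```

The design point is that connected, well-described data can be queried and recombined on demand, which is exactly what grounding an LLM requires.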
