About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Rise of Large Language Models in Financial Markets


When OpenAI introduced ChatGPT to the public in November 2022, giving users access to its large language model (LLM) through a simple human-like chatbot, it took the world by storm, reaching 100 million users within three months. By comparison, it took TikTok nine months and Instagram two and a half years to hit that milestone.

It’s clear from the fervour surrounding ChatGPT that LLMs are proving to be a highly disruptive manifestation of artificial intelligence (AI). Virtually every industry now seems to be rushing to embrace the technology. But how are LLMs being utilised in the real world of financial markets? What kinds of applications are being built to draw upon the benefits they promise? What challenges do they present, and how are those challenges being addressed?

In this feature, we discuss the rise of LLMs in financial markets, investigating various ways in which the technology is being applied.

Human-like interactions

To start with, what exactly are LLMs, and why are they generating such excitement? At the risk of over-simplifying, large language models are a subset of AI designed to understand and generate natural language, where the user inputs a question – or prompt – and the LLM generates a human-like response. Large language models are generally trained on vast amounts of data, often billions of words of text, and can be fine-tuned on smaller, industry-specific or task-specific datasets for more precise use cases. The most common architecture behind LLMs is the Transformer, a type of neural network effective in handling long-range dependencies in text, a version of which underpins OpenAI’s ubiquitous GPT (Generative Pre-trained Transformer).
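To make the Transformer idea above concrete, here is a minimal sketch of its core operation, scaled dot-product attention, using NumPy. The dimensions and random values are arbitrary illustrations, not a real model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each query attends to every key,
    so distant tokens can influence each other directly -- the
    'long-range dependencies' in text that LLMs handle well."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of value vectors

# Toy example: 4 "tokens", each an 8-dimensional embedding
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): one contextualised vector per token
```

A full LLM stacks many such attention layers (with learned projections for Q, K and V) and trains them on those billions of words of text.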

Earlier this year, Bloomberg unveiled BloombergGPT, an LLM tailored for the financial industry, trained on a massive domain-specific dataset, and designed to improve tasks such as sentiment analysis, named entity recognition, news classification, and question answering.

“What’s so exciting about the current leap to Large Language Models and Generative Pre-Trained Transformers is that they’re more accessible, general and broad,” suggests Andrew Skala, Global Head of Research – Core Product at Bloomberg. “As a result, we believe LLMs have the potential to increase the speed at which we can bring new solutions to clients, especially since they have the potential to reduce the number of bespoke models we need to build.”

Fundamental to LLMs’ value is the human-like interaction they offer. “This is a completely new way of interacting with data, and one key feature is the ability to ask follow-up questions as it remembers the context,” says Roger Burkhardt, CTO Capital Markets at Broadridge, which, together with its subsidiary LTX, recently launched BondGPT, an application designed to respond to bond-related queries and assist users in identifying and accessing real-time bond liquidity.

“You can start by asking a complex question and receive an answer,” he explains. “Then you have the capability to inquire further while maintaining the context. Traders are always pressed for time, so being able to quickly get what they need and see it in the form of numbers, tables, and graphs is invaluable. From a development perspective, it also allows for rapid iteration and the addition of new capabilities to the system.”
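The context-retention Burkhardt describes is typically implemented by resending the full conversation history with each follow-up question. A minimal sketch of that pattern, in which the `ask_llm` placeholder stands in for a real model call (an assumption, not BondGPT’s actual interface):

```python
def ask_llm(messages):
    # Placeholder: a real system would call an LLM API here,
    # passing the whole message history so references like
    # "those bonds" can be resolved.
    return f"[answer based on {len(messages)} turns of context]"

class Conversation:
    def __init__(self):
        self.history = []  # list of {"role", "content"} turns

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        answer = ask_llm(self.history)  # prior turns travel with the question
        self.history.append({"role": "assistant", "content": answer})
        return answer

chat = Conversation()
chat.ask("Show me investment-grade bonds maturing in 2030")
print(chat.ask("Which of those have the highest yield?"))  # context retained
```

Because the model sees every earlier turn, the follow-up question can be terse while still being unambiguous.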

Dealing with jargon

This ability to rapidly develop user-friendly applications that can draw upon vast quantities of data to generate meaningful results is one of the key reasons why many data-rich technology companies are now building their own innovative LLM-based solutions.

“We were exploring ways to put a user interface on this extensive data and AI analytics,” says Jim Kwiatkowski, CEO of Broadridge subsidiary LTX. “However, we encountered challenges because we had many requirements to fulfil, and clients said they could only allocate minimal screen space for this purpose. When GPT-4 was released, we saw that we could provide a simple, natural language user interface to access a wide array of data and models.”

In financial markets, jargon is often heavily used, so the natural language processing (NLP) aspect of the system needed to take that into account, explains Kwiatkowski. “Regarding prompts, our aim is to accept natural language inputs without requiring users to be trained on how to use the product. For instance, the security master table might categorise sectors as ‘food and beverages’ or ‘gaming,’ but a trader might ask for ‘beer bonds,’ ‘casino bonds’ or ‘cruise bonds.’ We needed the system to understand the various terms and vernacular used by bond traders. This information comes from interactions with many market participants, insights from our teams at LTX and Broadridge, and ultimately learning from our customers.”
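The jargon-handling Kwiatkowski describes can be pictured as a normalisation layer that maps trader vernacular onto the canonical sector labels in a security master table. A minimal sketch, in which the alias list and the “leisure” label are illustrative assumptions, not BondGPT’s actual vocabulary:

```python
# Illustrative slang-to-sector mapping; real systems learn these
# aliases from market participants and customer interactions.
SECTOR_ALIASES = {
    "beer bonds":   "food and beverages bonds",
    "casino bonds": "gaming bonds",
    "cruise bonds": "leisure bonds",  # assumed sector label
}

def normalise_query(query: str) -> str:
    """Replace known trader slang with canonical sector names."""
    out = query.lower()
    for slang, canonical in SECTOR_ALIASES.items():
        out = out.replace(slang, canonical)
    return out

print(normalise_query("Show me liquid beer bonds under 5y"))
# "show me liquid food and beverages bonds under 5y"
```

In practice this mapping sits in front of (or inside) the LLM prompt, so the query that reaches the data layer uses the security master’s own terms.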

He continues: “In terms of helping users get more out of the system, from the beginning we have intentionally provided sample questions on the screen because the system interface is primarily an empty text box. We wanted to give users a bit of encouragement by offering some standard questions while ensuring we provide accurate answers. Every time we receive a question and go to answer it, we echo back to the user what we think the question was. And at the end of every answer, we suggest other questions that users might consider, helping them extract more value from the system.”

Market and trade surveillance

Another strong use case for LLMs is in the area of market and trade surveillance. Earlier this year, Steeleye, a surveillance solutions provider, integrated ChatGPT-4 into its compliance platform to enhance compliance officers’ ability to conduct surveillance investigations.

“When we first explored AI’s application in compliance, we valued its potential for market surveillance to identify patterns and unusual behaviours that might elude the human eye, and we now have technology in place for that purpose,” says Matt Smith, Steeleye’s CEO. “However, when we considered LLMs, we decided they wouldn’t serve that same purpose. Instead, they could be tools to enhance the efficiency of compliance organisations and professionals. For example, in our surveillance system, we could have a voice or video recording of a 45-minute call, in which the system detected something said that could signal insider information leakage.

“The compliance professional receives an alert, but previously their only recourse was to watch the video, listen to the call or read the transcript. To sift through the information, they had to manually view it. We now provide the ability to work through this information, whether calls, Bloomberg or IM chats, lengthy email exchanges or other communications, and verify those quickly and efficiently through a set of nine pre-defined prompts. These are designed to provide contextual information rather than definitive answers, allowing compliance officers to use their judgement and expertise alongside the AI-generated insights to more efficiently come to a conclusion.”
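The pre-defined-prompt pattern Smith describes can be sketched as a fixed battery of questions run against each flagged communication, with every result surfaced as context rather than a verdict. The prompts and the `summarise` placeholder below are illustrative assumptions, not Steeleye’s actual nine prompts:

```python
# Illustrative prompt set; Steeleye's real system uses nine
# pre-defined prompts whose wording is not public.
PREDEFINED_PROMPTS = [
    "Summarise the call in three sentences.",
    "List any mentions of non-public financial information.",
    "Identify the participants and their apparent roles.",
]

def summarise(transcript: str, prompt: str) -> str:
    # Placeholder for an LLM call; here it just reports what was asked.
    return f"{prompt} -> reviewed {len(transcript.split())} words"

def review_alert(transcript: str) -> list[str]:
    """Run every pre-defined prompt; the compliance officer reads
    the results and makes the actual judgement."""
    return [summarise(transcript, p) for p in PREDEFINED_PROMPTS]

for line in review_alert("trader: we should move before the results are out"):
    print(line)
```

The design choice worth noting is that the system never answers “was this insider dealing?”; it assembles context so the human can answer that faster.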

Data transparency

Despite the excitement around the numerous use cases for NLP and LLMs within financial markets, challenges do exist, as Mike Lynch, Chief Product Officer at Symphony, the market infrastructure and technology platform, points out.

“Our market is uniquely positioned to benefit from AI, given the abundance of unstructured content and the need for quick decision-making,” he says. “However, it’s also one of the most challenging markets due to the highly sensitive data involved, the necessity of robust security controls, the evolving regulatory landscape, and the compliance frameworks that firms need to implement around this content.”

In December 2022, Symphony acquired NLP data analytics solution provider Amenity Analytics, specialists in extracting and delivering actionable insights from unstructured content types. Lynch stresses the need for data transparency when working with NLP and LLMs.

“Being able to not only process insights in real time or at least intraday, but also demonstrate the source of the content is crucial for providing timely and trusted insight in this market,” he says. “The exciting aspect for us is having a real-time feed of incoming content, including earnings, transcripts, news, etc., together with a model that can update content quickly and identify the reasons for changes. That provides data transparency, because users can click through a sentiment score and see the news articles or content driving that change. The key is that you don’t have to trust us blindly. We process the content with high accuracy and empower end users to validate and verify where the content comes from. This way, they can take our input and make their own decisions and assessments based on the available information.”
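The click-through transparency Lynch describes amounts to keeping, alongside every aggregate score, links back to the documents that produced it. A minimal sketch of that idea, with made-up scores and headlines (not Amenity Analytics’ actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class SentimentScore:
    """An aggregate score that retains its driving sources, so a
    user can verify where the number came from."""
    entity: str
    score: float = 0.0
    sources: list = field(default_factory=list)  # (headline, doc_score)

    def add(self, headline: str, doc_score: float):
        self.sources.append((headline, doc_score))
        # Simple mean; a real system would weight by recency, etc.
        self.score = sum(s for _, s in self.sources) / len(self.sources)

acme = SentimentScore("ACME Corp")
acme.add("ACME beats earnings estimates", +0.8)
acme.add("ACME faces supply-chain probe", -0.4)
print(round(acme.score, 2), acme.sources)  # 0.2 plus the driving articles
```

Because the sources travel with the score, the end user can audit any change in sentiment rather than trusting the number blindly.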


Another potential issue with LLMs is their tendency to ‘hallucinate’, i.e. to provide a factually incorrect answer to a question. This is a well-recognised problem and the subject of much ongoing research. However, the issue can be addressed in domain-specific LLM implementations, explains Bloomberg’s Andrew Skala.

“While there is clearly a lot of potential from LLMs, there are also concerns throughout the industry around the potential for model hallucination, and the need for robust processes to curate the data and train the models,” he says. “For example, there is a need to ensure that these models are being trained with high quality data. Our approach at Bloomberg has been to carefully curate the data so we have relevant financial documents. We look for high quality sources like quarterly filings and earnings transcripts. You want to make sure the model is trained on high-quality content. To address hallucinations, we avoid closed-loop programming that requires the LLM to give answers from its memory. Instead, it is more of an open book exam where we instruct the LLM to source responses from a defined set of appropriate financial documents and real-time data. The approach we are pursuing is based on published research that shows that a combination of LLMs and data models working together will deliver more accurate and timely responses.”
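The “open book exam” Skala describes is commonly implemented as retrieval-augmented generation: relevant documents are retrieved first, and the model is instructed to answer only from them. A minimal sketch under stated assumptions (the documents are invented, and the keyword-overlap retrieval stands in for the embedding-based search a production system would use):

```python
# Invented corpus standing in for curated filings and transcripts.
DOCUMENTS = [
    "Q2 filing: revenue rose 12% year on year to $4.1bn.",
    "Earnings call transcript: management reiterated full-year guidance.",
    "Press release: the company announced a new credit facility.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Instruct the model to answer from retrieved context only --
    the 'open book', rather than the model's own memory."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (f"Answer ONLY from the documents below.\n"
            f"Documents:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("What was revenue in the Q2 filing?"))
```

Grounding the answer in retrieved documents is what turns the closed-book memory test, where hallucination thrives, into the open-book exam Skala describes.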

He continues: “Most conversations around bias in AI tend to focus on racial and social biases, such as those that can negatively impact decisions related to creditworthiness. But there are also issues of recency bias, where the AI makes decisions based on its last queries or its short-term memory. For example, when you’re dealing with real-time data, you need to know the current EPS of a company, not what it was the most recent time someone requested it. As with hallucinations, this is most likely to occur with a closed-loop model where the model is using its own memory. Again, we can reduce this through the combination of LLMs and data models working together.”

Context is also important, explains Matt Smith of Steeleye. “The real challenge lies in prompt engineering. Earlier this year, one of our competitors posted on LinkedIn that ChatGPT and LLMs are ineffective, and they provided an incorrect answer to a question as evidence. We examined their approach and considered how the outcome might change if the question were framed differently. By asking more specific questions with relevant parameters, in context, we found that 99% of the time we received a near-perfect response. This experience taught us the importance of carefully contextualising prompts; otherwise, it’s easy to get incorrect answers. GPT tends to provide the answer it thinks you want, rather than the answer you seek. You have to teach it to learn to say no. It’s also important to note that GPT becomes smarter over time as it has more information to work with, and you must monitor that. As it learns more, it can change the type of answers it provides. It’s essential to focus on all these different components, as neglecting any can quickly result in a solution that gives you incorrect information.”
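Smith’s point about framing can be illustrated with a prompt template: the same vague question, wrapped with explicit parameters and permission to decline, is far less likely to yield a confident wrong answer. The template wording and parameters below are illustrative assumptions:

```python
def contextualised_prompt(question: str, instrument: str,
                          as_of: str, universe: str) -> str:
    """Wrap a vague question with explicit scope, and 'teach it to
    say no' by telling the model when to refuse rather than guess."""
    return (
        f"You are answering about {instrument} as of {as_of}, "
        f"using only the {universe} dataset. "
        f"If the answer is not in that dataset, say so rather than guessing.\n"
        f"Question: {question}"
    )

vague = "What's the yield?"
print(contextualised_prompt(vague,
                            instrument="ACME 4.25% 2030 bond",
                            as_of="2023-09-01",
                            universe="end-of-day pricing"))
```

The vague question alone invites the model to guess; the wrapped version constrains it to a named instrument, date and dataset, and gives it an explicit way to say no.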

Future outlook

Financial market participants are clearly enthusiastic about the potential uses of LLMs, and technologists are working on ever more innovative ways to utilise this type of AI. So where might things go from here?

“As AI technology evolves and advances, it’s going to enable us to build solutions that we’ve never been able to create before,” says Bloomberg’s Andrew Skala. “One good example of this is Text2BQL, a tool we are planning to build to enable users to query our databases in plain language. An LLM will then generate and return the code for the BQL query to them. This way, they can review and/or edit it before they execute it in Excel or BQuant, our cloud-based quantitative analytics and investment technology platform.”
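The text-to-query pattern Skala describes (generate the code, then let the user review it before execution) can be sketched as below. The tiny rule-based “translator” stands in for the LLM, and the SQL-style output is an assumption for illustration; real BQL syntax differs and Text2BQL’s internals are not public:

```python
import re

def text_to_query(request: str) -> str:
    """Draft a query from plain language. A real system would use an
    LLM; this toy rule handles one request shape for illustration."""
    m = re.match(r"average (\w+) of (\w+) members", request.lower())
    if not m:
        return "-- could not translate; please rephrase"
    field, index = m.groups()
    return f"SELECT AVG({field}) FROM members_of('{index.upper()}')"

draft = text_to_query("Average eps of spx members")
print(draft)  # the user reviews/edits this draft before executing it
```

The important design choice is the review step: the generated query is shown to the user as editable code, so mistakes in translation are caught before anything runs.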

Symphony’s Mike Lynch predicts wider adoption of the technology. “One of the interesting trends we are observing is that many of our customers, particularly the larger ones, have their own teams of data scientists and machine learning engineers who are building their own AI systems. Many of our customers are in the process of developing and testing these systems internally, with the ultimate goal of offering these products to their end users. For example, a bank creating an AI system for research or event insights and making it available to their hedge fund or asset manager customers. This is an exciting prospect because Symphony, as a secure conversational interface, would be one of the key platforms for this. We can help our customers expose their AI systems, even in cases where we are not the ones providing the AI technology. This could apply to various use cases, including the upcoming shift to T+1 settlement in the operations community, for example. We want to support our customers in making their AI systems accessible, even when we are not the ‘brain’ behind those systems.”

For those worried about AI taking over the world, however, Steeleye’s Matt Smith contends that AI is not about to replace humans any time soon. “If you rely on AI to do a human’s job, you encounter problems, with explainability being a significant one. We have had extensive discussions with regulators about the use of this technology and being able to explain and trace back results. That’s why we view AI as a utility, a tool to help a human. The explainability is there because a human is working through the information to make a decision. The information presented back isn’t just a single piece of information, it’s all the information the compliance professional has been given to assess. Ultimately, the decision is a human one. However, AI will allow compliance officers to become faster, more efficient and more accurate. Their ultimate objective won’t change, but their ability to work through information will be orders of magnitude faster than it was in the past.”
