The knowledge platform for the financial technology industry

A-Team Insight Blogs

Bloomberg Unveils BloombergGPT: A Large-Language Model Tailored for the Financial Industry

Bloomberg has introduced BloombergGPT, a generative artificial intelligence (AI) model specifically designed to enhance natural language processing (NLP) tasks within the financial sector. Developed using a vast range of financial data, this large language model (LLM) represents a significant step forward in the application of AI technology in the financial industry.

While recent advancements in AI and LLMs have generated promising new applications across various domains, the financial sector’s complexity and unique terminology necessitate a bespoke model. BloombergGPT will help improve existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering. Moreover, the model will unlock new possibilities for efficiently utilising the extensive data available on the Bloomberg Terminal, ensuring that customers reap the full benefits of AI in the financial realm.

A pioneer in AI, machine learning, and NLP applications in finance for over a decade, Bloomberg now supports an extensive range of NLP tasks that stand to gain from a finance-aware language model. The company's researchers employed a mixed approach, combining finance data with general-purpose datasets, to train a model that excels on financial benchmarks while maintaining competitive performance on general-purpose LLM benchmarks.

“For all the reasons generative LLMs are attractive – few-shot learning, text generation, conversational systems, etc. – we see tremendous value in having developed the first LLM focused on the financial domain,” commented Shawn Edwards, Bloomberg’s Chief Technology Officer. “BloombergGPT will enable us to tackle many new types of applications, while it delivers much higher performance out-of-the-box than custom models for each application, at a faster time-to-market.”
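The few-shot learning Edwards refers to means steering a generative LLM with a handful of labelled examples placed directly in the prompt, rather than training a separate model per task. As a rough illustration only, here is a minimal sketch of how such a prompt might be assembled for the financial sentiment-analysis task mentioned above; the headlines, labels, and helper function are all hypothetical and do not reflect any actual Bloomberg interface.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot sentiment-classification prompt for a generative LLM.

    `examples` is a list of (headline, label) pairs shown to the model
    as demonstrations; `query` is the unlabelled headline to classify.
    """
    lines = [
        "Classify the sentiment of each headline as Positive, Negative, or Neutral.",
        "",
    ]
    for headline, label in examples:
        lines.append(f"Headline: {headline}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The prompt ends mid-pattern so the model completes the final label.
    lines.append(f"Headline: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

# Illustrative demonstrations (invented for this sketch).
examples = [
    ("Company A beats quarterly earnings estimates", "Positive"),
    ("Regulator fines Company B over disclosure lapses", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Company C announces share buyback")
print(prompt)
```

The appeal, as the quote suggests, is that one general model handles many such tasks out of the box simply by varying the prompt.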

Bloomberg’s ML Product and Research group joined forces with the AI Engineering team to build one of the largest domain-specific datasets to date, leveraging the company’s existing data creation, collection, and curation resources. Bloomberg’s data analysts have been amassing and managing financial language documents for forty years, and the team utilised this extensive archive to create a comprehensive dataset of 363 billion English-language financial tokens.

The team supplemented this data with a 345-billion-token public dataset, resulting in a training corpus exceeding 700 billion tokens. They then trained a 50-billion-parameter decoder-only causal language model using part of this corpus. Validated on existing finance-specific NLP benchmarks, Bloomberg internal benchmarks, and general-purpose NLP tasks from well-known benchmarks, the BloombergGPT model surpasses comparable open models on financial tasks by considerable margins while matching or exceeding their performance on general NLP benchmarks.
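As a quick sanity check on the figures reported above, the two datasets do sum to a corpus of just over 700 billion tokens, split roughly evenly between financial and general-purpose text:

```python
# Corpus figures as reported in the article.
financial_tokens = 363e9  # Bloomberg's curated English-language financial tokens
public_tokens = 345e9     # supplementary general-purpose public dataset

total = financial_tokens + public_tokens
financial_share = financial_tokens / total

print(f"Total corpus: {total / 1e9:.0f}B tokens")      # 708B tokens
print(f"Financial share: {financial_share:.1%}")       # 51.3%
```

The near 50/50 mix reflects the stated goal: strong performance on financial benchmarks without sacrificing competitiveness on general-purpose ones.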

“The quality of machine learning and NLP models comes down to the data you put into them,” said Gideon Mann, Head of Bloomberg’s ML Product and Research team. “Thanks to the collection of financial documents Bloomberg has curated over four decades, we were able to carefully create a large and clean, domain-specific dataset to train a LLM that is best suited for financial use cases. We’re excited to use BloombergGPT to improve existing NLP workflows, while also imagining new ways to put this model to work to delight our customers.”
