Bloomberg Unveils BloombergGPT: A Large Language Model Tailored for the Financial Industry

Bloomberg has introduced BloombergGPT, a generative artificial intelligence (AI) model specifically designed to enhance natural language processing (NLP) tasks within the financial sector. Developed using a vast range of financial data, this large language model (LLM) represents a significant step forward in the application of AI technology in the financial industry.

While recent advancements in AI and LLMs have generated promising new applications across various domains, the financial sector’s complexity and unique terminology necessitate a bespoke model. BloombergGPT will help improve existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering. Moreover, the model will unlock new possibilities for efficiently utilising the extensive data available on the Bloomberg Terminal, ensuring that customers reap the full benefits of AI in the financial realm.
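To illustrate the kind of task listed above, the sketch below assembles a few-shot prompt for financial sentiment analysis, one of the NLP tasks the article mentions. The headlines, labels, and prompt wording are hypothetical illustrations, not Bloomberg data or BloombergGPT's actual interface; any instruction-following LLM could complete such a prompt with a sentiment label.

```python
# Hypothetical few-shot prompt for financial sentiment analysis.
# Examples and label set are illustrative only, not from BloombergGPT's training data.
EXAMPLES = [
    ("Shares of Acme Corp surged 12% after earnings beat estimates.", "positive"),
    ("The bond issuer missed its coupon payment for the second quarter.", "negative"),
    ("The central bank left interest rates unchanged, as expected.", "neutral"),
]

def build_sentiment_prompt(headline: str) -> str:
    """Assemble a few-shot prompt that an LLM would complete with a label."""
    lines = [
        "Classify the sentiment of each financial headline as "
        "positive, negative, or neutral.",
        "",
    ]
    for text, label in EXAMPLES:
        lines.append(f"Headline: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unlabeled query goes last; the model's completion is the prediction.
    lines.append(f"Headline: {headline}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_sentiment_prompt(
    "Regulators fined the brokerage $10 million over disclosure failures."
)
```

Few-shot prompting of this form is what makes a single general model attractive compared with training a separate supervised classifier per task.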

As a pioneer in AI, machine learning, and NLP applications in finance for over a decade, Bloomberg now supports an extensive range of NLP tasks that stand to gain from a finance-aware language model. The company’s researchers employed a mixed approach, incorporating finance data with general-purpose datasets, to train a model that excels in financial benchmarks while maintaining competitive performance in general-purpose LLM benchmarks.

“For all the reasons generative LLMs are attractive – few-shot learning, text generation, conversational systems, etc. – we see tremendous value in having developed the first LLM focused on the financial domain,” commented Shawn Edwards, Bloomberg’s Chief Technology Officer. “BloombergGPT will enable us to tackle many new types of applications, while it delivers much higher performance out-of-the-box than custom models for each application, at a faster time-to-market.”

Bloomberg’s ML Product and Research group joined forces with the AI Engineering team to build one of the largest domain-specific datasets to date, leveraging the company’s existing data creation, collection, and curation resources. Bloomberg’s data analysts have been amassing and managing financial language documents for forty years, and the team utilised this extensive archive to create a comprehensive dataset of 363 billion English-language financial tokens.

The team supplemented this data with a 345 billion token public dataset, resulting in a training corpus exceeding 700 billion tokens. They then trained a 50-billion parameter decoder-only causal language model using part of this corpus. Validated on existing finance-specific NLP benchmarks, Bloomberg internal benchmarks, and general-purpose NLP tasks from well-known benchmarks, the BloombergGPT model surpasses comparable open models in financial tasks by considerable margins while matching or exceeding performance in general NLP benchmarks.
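The corpus figures above can be sanity-checked with simple arithmetic; the shares computed here follow directly from the reported token counts:

```python
# Reported training-corpus composition (tokens).
financial_tokens = 363e9   # Bloomberg's curated financial archive
public_tokens = 345e9      # general-purpose public dataset

total = financial_tokens + public_tokens          # 708 billion, i.e. "exceeding 700 billion"
financial_share = financial_tokens / total        # domain data is just over half the corpus
```

The roughly even split reflects the mixed-training approach described earlier: enough domain data to excel on financial benchmarks, enough general data to stay competitive on general-purpose ones.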

“The quality of machine learning and NLP models comes down to the data you put into them,” said Gideon Mann, Head of Bloomberg’s ML Product and Research team. “Thanks to the collection of financial documents Bloomberg has curated over four decades, we were able to carefully create a large and clean, domain-specific dataset to train an LLM that is best suited for financial use cases. We’re excited to use BloombergGPT to improve existing NLP workflows, while also imagining new ways to put this model to work to delight our customers.”
