Bloomberg has introduced BloombergGPT, a generative artificial intelligence (AI) model specifically designed to enhance natural language processing (NLP) tasks within the financial sector. Developed using a vast range of financial data, this large language model (LLM) represents a significant step forward in the application of AI technology in the financial industry.
While recent advancements in AI and LLMs have generated promising new applications across various domains, the financial sector’s complexity and unique terminology necessitate a bespoke model. BloombergGPT will help improve existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering. Moreover, the model will unlock new possibilities for efficiently utilising the extensive data available on the Bloomberg Terminal, ensuring that customers reap the full benefits of AI in the financial realm.

As a pioneer in AI, machine learning, and NLP applications in finance for over a decade, Bloomberg now supports an extensive range of NLP tasks that stand to gain from a finance-aware language model. The company’s researchers employed a mixed approach, combining financial data with general-purpose datasets, to train a model that excels on financial benchmarks while maintaining competitive performance on general-purpose LLM benchmarks.
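As a rough illustration of what such a mixed-data approach can look like in practice, the sketch below interleaves a domain-specific corpus with a general-purpose one using the Hugging Face `datasets` library. The file names and the 50/50 sampling ratio are assumptions for illustration only; Bloomberg has not published its training pipeline.

```python
# Illustrative sketch only: blending a financial corpus with general-purpose
# text, in the spirit of the mixed training-data approach described above.
# The file paths and the 50/50 sampling ratio are hypothetical.
from datasets import load_dataset, interleave_datasets

finance = load_dataset("text", data_files="finance_corpus.txt", split="train")
general = load_dataset("text", data_files="general_corpus.txt", split="train")

# Draw examples from each source with fixed probabilities so the model
# sees both financial and general-purpose text throughout training.
mixed = interleave_datasets(
    [finance, general],
    probabilities=[0.5, 0.5],
    seed=42,
)

# Peek at the first few examples of the blended stream.
for example in mixed.select(range(3)):
    print(example["text"][:80])
```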
“For all the reasons generative LLMs are attractive – few-shot learning, text generation, conversational systems, etc. – we see tremendous value in having developed the first LLM focused on the financial domain,” commented Shawn Edwards, Bloomberg’s Chief Technology Officer. “BloombergGPT will enable us to tackle many new types of applications, while it delivers much higher performance out-of-the-box than custom models for each application, at a faster time-to-market.”
Bloomberg’s ML Product and Research group joined forces with the AI Engineering team to build one of the largest domain-specific datasets to date, leveraging the company’s existing data creation, collection, and curation resources. Bloomberg’s data analysts have been amassing and managing financial language documents for forty years, and the team utilised this extensive archive to create a comprehensive dataset of 363 billion English-language financial tokens.
The team supplemented this data with a 345-billion-token public dataset, resulting in a training corpus exceeding 700 billion tokens. They then trained a 50-billion-parameter decoder-only causal language model on part of this corpus. Validated on finance-specific NLP benchmarks, Bloomberg-internal benchmarks, and general-purpose NLP tasks drawn from well-known benchmark suites, BloombergGPT surpasses comparable open models on financial tasks by considerable margins while matching or exceeding their performance on general NLP benchmarks.
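To make the phrase “decoder-only causal language model” concrete, here is a toy PyTorch sketch of the mechanism: each position may attend only to earlier tokens, and the model outputs a next-token prediction at every position. This is an illustrative miniature under assumed dimensions, not BloombergGPT’s actual architecture or code; the real model has 50 billion parameters.

```python
# Toy decoder-only causal language model. All dimensions are illustrative.
import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=128, n_heads=4,
                 n_layers=2, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        seq_len = ids.size(1)
        x = self.embed(ids) + self.pos(torch.arange(seq_len, device=ids.device))
        # Causal mask: True above the diagonal blocks attention to future
        # tokens, which is what makes the model "causal" (left-to-right).
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=ids.device),
            diagonal=1,
        )
        x = self.blocks(x, mask=causal_mask)
        return self.lm_head(x)  # logits over the vocabulary at each position

model = TinyCausalLM()
tokens = torch.randint(0, 32000, (2, 16))  # batch of 2 sequences, 16 tokens each
logits = model(tokens)
print(logits.shape)  # torch.Size([2, 16, 32000])
```

Training such a model minimises cross-entropy between the logits at each position and the actual next token; “decoder-only” means there is no separate encoder, just this single left-to-right stack.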
“The quality of machine learning and NLP models comes down to the data you put into them,” said Gideon Mann, Head of Bloomberg’s ML Product and Research team. “Thanks to the collection of financial documents Bloomberg has curated over four decades, we were able to carefully create a large and clean, domain-specific dataset to train an LLM that is best suited for financial use cases. We’re excited to use BloombergGPT to improve existing NLP workflows, while also imagining new ways to put this model to work to delight our customers.”