By Marsal Gavaldà, CTO, Clarity AI.
Unlike some technology trends, the hype around generative AI will not fade, and I expect AI to remain a priority for investors in 2024.
The emergence of generative artificial intelligence (GenAI) is a watershed moment for the tech industry, as transformational as the advent of the internet, web search, cloud computing, or mobile devices. It will be a significant differentiator for companies racing for a competitive edge.
As GenAI becomes more mainstream, more challenges will arise – especially when it comes to the collection, quality, and analysis of the data extracted and processed by AI, which ultimately ends up informing investment decisions.
In 2024, I expect many businesses to find ways to use generative AI more securely, for example, by fine-tuning their own Large Language Models (LLMs) for internal use. When it comes to investment decisions and assessing investment risk, it’s critical that investors who put their trust in any data or information generated by AI have confidence that the process through which it is trained and validated is controlled and supervised.
But how can investors gain that certainty? To safeguard the accuracy of data for investors and financial institutions alike, a judicious approach to generative AI must be taken.
Data collection and quality assurance
Standardisation of data remains a challenge across the sustainability space, but AI is making progress in helping investors make meaningful comparisons. In 2024, we expect these methods to develop dramatically.
For example, LLMs are often applied to extract quantitative and qualitative metrics from corporate reports. Examples of quantitative metrics include greenhouse gas (GHG) emissions (Scope 1, 2, and 3) and waste (hazardous, non-hazardous, and recycled), for which the LLM can identify and extract not only the value but also the unit and the year.
It’s important that these models can better extract these metrics from free text, tables, or graphics and that the values are converted to consistent units to allow a comparable analysis across companies.
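The normalisation step described above can be sketched in a few lines. This is a hypothetical illustration, not Clarity AI's actual pipeline: the record shape and the conversion table are invented for the example, assuming emissions are standardised to tonnes of CO2 equivalent.

```python
# Hypothetical sketch: normalising LLM-extracted emissions figures to a
# common unit (tonnes CO2e) so values are comparable across companies.
# The record schema and conversion factors are illustrative assumptions.

UNIT_TO_TONNES_CO2E = {
    "tCO2e": 1.0,
    "ktCO2e": 1_000.0,
    "MtCO2e": 1_000_000.0,
    "kgCO2e": 0.001,
}

def normalise(record: dict) -> dict:
    """Convert an extracted metric to tonnes CO2e, keeping provenance."""
    factor = UNIT_TO_TONNES_CO2E[record["unit"]]
    return {
        "metric": record["metric"],
        "year": record["year"],
        "value_tco2e": record["value"] * factor,
    }

extracted = {"metric": "scope_1", "value": 12.5, "unit": "ktCO2e", "year": 2023}
print(normalise(extracted))
# → {'metric': 'scope_1', 'year': 2023, 'value_tco2e': 12500.0}
```

Once every value carries the same unit, cross-company comparison reduces to comparing numbers rather than reconciling reporting conventions.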
Reliability models, trained on expected ranges for each metric and industry as well as individual company attributes, will be able both to select the most likely value from multiple data sources and to flag data points that look suspicious.
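A minimal sketch of such a reliability check follows, assuming made-up per-industry ranges; a production model would learn these from data and weigh company attributes, but the select-and-flag logic is the same idea.

```python
# Illustrative reliability check: keep candidate values that fall inside
# an expected range for the (metric, industry) pair, take the median of
# the plausible ones, and flag the rest. Ranges here are invented.
from statistics import median

EXPECTED_RANGES = {
    ("scope_1_tco2e", "utilities"): (1e4, 5e7),
    ("scope_1_tco2e", "software"): (0.0, 1e5),
}

def select_and_flag(metric: str, industry: str, candidates: list[float]):
    """Return (most likely value, suspicious values) for a metric."""
    lo, hi = EXPECTED_RANGES[(metric, industry)]
    plausible = [v for v in candidates if lo <= v <= hi]
    suspicious = [v for v in candidates if not (lo <= v <= hi)]
    best = median(plausible) if plausible else None
    return best, suspicious

best, flagged = select_and_flag("scope_1_tco2e", "software",
                                [40_000.0, 2_000_000.0, 42_000.0])
print(best, flagged)  # → 41000.0 [2000000.0]
```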
In the year ahead, we expect these intelligent models to become customary across many more businesses beginning to use and develop their own LLMs for internal data collection and analysis.
Learnings from COP28
The future of AI was high on the agenda at COP28 in Dubai. The increasing adoption of AI may make it one of the biggest consumers of energy globally — putting pressure on AI providers to measure and publish data on energy use and energy sources.
The global effort to measure ongoing climate efforts to lower carbon emissions can be accelerated through the power of GenAI. But companies need the tools to start delivering successfully. Research and data, augmented by advanced technology and AI, represent some of our most powerful tools to combat climate change. They also allow financial institutions to integrate ESG data into their existing workflows for easier and more integrated data management.
What does this mean for investors and financial institutions mapping out their GenAI usage in 2024? We believe high-quality estimations should be used to make investment decisions in global markets. A lack of reliable data can no longer be used as an excuse. While regulations still need to develop and enforce corporate non-financial disclosures, AI-powered estimations can help inform investment decisions, ensuring data equity and a just transition.
Although currently a proof of concept, the launch of the Net Zero Public Utility Alliance will provide a free data platform across hundreds of companies and will transform the debate around quality of, and access to, reliable data. Equally, the newly announced Global Climate Finance Centre is led by some of the world’s most influential financial institutions and will be a huge accelerant to progress in the year ahead.
GenAI agents
In 2024, we expect the next frontier in AI to be something we call “analyst as a service”, where we build GenAI agents to perform increasingly complex analytical tasks.
Starting with portfolio analysis, for example, these agents will analyse the composition of an entire portfolio and compare it against industry benchmarks or previous versions of itself, creating a report that summarises the portfolio's ESG scores, climate alignment, exposures, and so on, and highlighting the organisations that contribute most to each factor, much as a human analyst would.
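The kind of roll-up such an agent would automate can be sketched as below. The holdings, weights, and scores are invented for illustration; the point is the analyst-style workflow of aggregating weighted scores and ranking contributors.

```python
# Minimal sketch of a portfolio ESG roll-up, with invented holdings:
# compute the weighted portfolio score, then rank companies by how much
# each contributes to it, as an analyst reviewing a portfolio would.
holdings = [
    {"company": "Acme Utilities",  "weight": 0.40, "esg_score": 55.0},
    {"company": "Globex Software", "weight": 0.35, "esg_score": 80.0},
    {"company": "Initech Retail",  "weight": 0.25, "esg_score": 65.0},
]

portfolio_score = sum(h["weight"] * h["esg_score"] for h in holdings)

by_contribution = sorted(holdings,
                         key=lambda h: h["weight"] * h["esg_score"],
                         reverse=True)

print(round(portfolio_score, 2))          # → 66.25
print(by_contribution[0]["company"])      # → Globex Software
```

An agent would wrap this arithmetic with benchmark comparisons and a natural-language summary, but the quantitative core is a roll-up of exactly this shape.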
Another use could be using GenAI agents to interface with multiple databases to access the information they need to perform a particular task. For example, a GenAI agent could construct a query to obtain the revenue of a company in order to compute a new intensity metric.
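Once the agent has retrieved emissions from one database and revenue from another, the intensity computation itself is simple. The figures and unit choices below are hypothetical, for illustration only.

```python
# Hypothetical sketch of the intensity metric an agent might compute
# after querying emissions and revenue from separate databases.
def carbon_intensity(emissions_tco2e: float, revenue_musd: float) -> float:
    """Emissions intensity in tonnes CO2e per million USD of revenue."""
    if revenue_musd <= 0:
        raise ValueError("revenue must be positive")
    return emissions_tco2e / revenue_musd

print(carbon_intensity(50_000.0, 2_500.0))  # → 20.0 tCO2e per $1m revenue
```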
This higher degree of freedom — not just summarising the data provided but actively searching for new information — will get us closer to human-level analytical capabilities, while retaining the trust and timeliness of the reports upon which investors increasingly rely.