Artificial intelligence (AI) seemed to take on a life of its own in 2023. ChatGPT, the generative AI (GenAI) chatbot built on a large language model (LLM), grabbed headlines with horror stories of hallucinations and doom-laden sci-fi predictions of lost jobs and machines running wild over humanity.
Most ignored the fact that AI in its various forms had been helping investors make sense of ESG data and risks for years. Nevertheless, 2023 felt like the year it began to deliver on its enormous potential.
Sustainability data and data services companies stepped up their use of machine learning (ML), natural language processing (NLP) and other applications to provide financial institutions with more, and better-managed, information and new analytics. They also began using GenAI, which until this year had seemed little more than an eccentric concept.
“Throughout Q4, our clients have consistently communicated the growing mandate to integrate AI solutions and products for heightened workplace efficiency,” said Tori Harris of EthicsAnswer, a start-up that launched a GenAI-powered ESG disclosure product in the late summer. “AI emerges as an invaluable ally in roles burdened by time-consuming manual tasks.”
NLP and ML have been applied to data gathering processes for some time. This year, data and tech companies including Clarity AI and Alygne extended or introduced services built on ML, and NatWest explained how it was harnessing cloud-based AI applications to embed ESG processes throughout the bank.
Alongside EthicsAnswer, GaiaLens began rolling out GenAI applications. The former deploys GenAI to help companies automate the completion of ESG questionnaires sent by investors, while the latter uses the technology to scrape data from news reports.
Clients of AI-enabled services benefit from the technology’s ability to process huge swathes of data in a fraction of the time it would take humans to do so manually. In doing so, they can streamline their workflows, achieve better economies of scale in their research and mine ever greater insights from their data.
“When tackling ESG disclosures, the traditionally arduous process of locating specific information within a report can be drastically reduced,” said Harris. “AI, specially trained to interpret ESG data, can generate responses in mere seconds.”
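The retrieval step Harris describes can be pictured with a toy sketch. This is not EthicsAnswer’s actual system – commercial products use LLM embeddings and far richer ranking – but a simple word-overlap scorer shows the basic idea of locating the report passage most relevant to a questionnaire query:

```python
# Illustrative only: find the ESG report passage most relevant to a query
# using naive word-overlap scoring. Real disclosure tools would use LLM
# embeddings; this toy version just demonstrates the retrieval concept.

def tokenize(text):
    """Lower-case the text and strip trailing punctuation from each word."""
    return {w.strip(".,?").lower() for w in text.split()}

def best_passage(query, passages):
    """Return the passage sharing the most words with the query."""
    q = tokenize(query)
    return max(passages, key=lambda p: len(q & tokenize(p)))

report = [
    "Our factories reduced Scope 1 emissions by 12% in 2023.",
    "Board diversity targets were met across all regions.",
    "Packaging is sourced from FSC-recommended suppliers.",
]
print(best_passage("What were your Scope 1 emissions reductions?", report))
```

In a production setting, the top-ranked passages would then be handed to an LLM to draft the actual questionnaire response.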
An Ideal Match
AI lends itself to solving the particular challenges presented by ESG integration. The large volumes of information needed for the most accurate analyses, the patchiness of some datasets and the ubiquity of unstructured data within those information pools require powerful applications to wrangle them into shape.
“ESG information is somewhat scattered and not necessarily reported in a standardised way – sometimes there are templates, and even then there’s variations and the tables can differ quite a bit from company to company, as can the formatting – so AI is good for that,” said Marsal Gavalda, chief technology officer at Clarity AI.
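The mapping problem Gavalda describes – the same metric reported under different headings from one company to the next – can be sketched in a few lines. This is an assumed, simplified illustration, not Clarity AI’s actual pipeline; the canonical names and headings below are invented for the example:

```python
# Toy sketch of schema normalisation: map the varied column headings found
# in company ESG tables onto one canonical set of field names. All names
# here are hypothetical, chosen purely to illustrate the approach.
CANONICAL = {
    "scope 1 emissions": "scope1_emissions",
    "direct emissions": "scope1_emissions",
    "ghg emissions (tco2e)": "scope1_emissions",
    "board gender diversity": "board_diversity_pct",
    "female board members (%)": "board_diversity_pct",
}

def normalise_row(row):
    """Rename company-specific headings to canonical field names."""
    return {CANONICAL.get(k.strip().lower(), k): v for k, v in row.items()}

company_a = {"Scope 1 emissions": 41200, "Board gender diversity": 38}
print(normalise_row(company_a))
# {'scope1_emissions': 41200, 'board_diversity_pct': 38}
```

An ML-based system replaces the hand-written lookup table with a model that learns these correspondences from many examples, which is what makes it robust to headings it has never seen before.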
Gavalda also argued that AI offers the best solution to greenwashing. Companies may overstate their sustainability when there is an absence of data to prove otherwise, and detecting this, he said, requires an unbiased assessment of the information presented.
AI, he said, can be trained to make its calculations using such vast volumes of data that, when prompted, it can offer a more objective response built on a wider knowledge base than any human could achieve.
“Here’s an area where these more sophisticated models can make some nuanced distinctions and not be ‘fooled’ by attempts at greenwashing,” he said.
Gavalda cited an example encountered by Clarity AI in which a company declared its products had been produced in accordance with the Forest Stewardship Council’s recommendations though, critically, they were not certified by it. This gave a misleading impression of the company’s sustainability, especially as the claim applied only to its packaging.
AI’s potential pervasiveness also became apparent this year, raising questions about its application to data management more generally. On the positive side, it became clear that GenAI could be a powerful addition to companies’ customer-facing activities, in the form of interactive chatbots, for example. Marketing departments, too, are getting a boost from AI’s involvement in content presentation.
Rinesh Patel, global head of financial services at data cloud specialist Snowflake, said this use of AI would also suit asset managers who might need a “co-pilot” in their customer engagements.
There are other, more complicated, questions to answer too. Among them, said Patel, is the impact on institutions of their AI’s use of ungoverned data or data derived from external sources.
“We’re starting to see regulators looking at this technology and how it can be used in a safe way in financial services,” Patel said. “That’s consistent with how financial regulators looked at the cloud years ago and is the first step on the path to greater transparency and adoption that I anticipate we’ll see by this time next year.”
With AI established within the ESG ecosystem, its roll-out is expected to continue at pace throughout 2024. A recent survey by IBM suggested that while only a quarter of businesses in the UK and a third of those worldwide had embraced AI by the end of last year, about half of the rest said they expected to adopt it in the coming years.
However, public disquiet has grown over a perceived lack of guardrails. Fears of a “Terminator”-style scenario, in which super-intelligent computers overthrow humanity, have accompanied headlines about AI. Those fears haven’t been allayed by reports of hallucinations – confidently delivered but incorrect answers produced by some GenAI in the absence of suitable data.
The ESG data sector is not immune to such alarm. For that reason, companies that use AI-derived insights typically ensure that human expertise has final oversight of output. It’s reasonable to suggest that validation and oversight will dominate the next stage in the development of AI.
“Inherent to the use of AI is the potential for inaccuracies in responses,” said EthicsAnswer’s Harris. “Our primary recommendation is to maintain a human in the loop, ensuring that there is a final review of AI-generated answers. The role of AI should be that of an assistant, not the ultimate decision-maker.
“This approach not only safeguards against errors but also preserves the essential human touch in nuanced decision-making.”
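The human-in-the-loop pattern Harris recommends can be sketched as a simple review gate, in which no AI-drafted answer is published until a reviewer has signed off. This is a minimal, assumed illustration of the workflow, not any vendor’s implementation:

```python
# Minimal human-in-the-loop sketch (illustrative only): AI-drafted answers
# sit in a review queue as "pending" until a human approves or rejects them.
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    ai_answer: str
    status: str = "pending"   # pending -> approved / rejected

@dataclass
class ReviewQueue:
    drafts: list = field(default_factory=list)

    def submit(self, question, ai_answer):
        """Queue an AI-generated draft for human review."""
        draft = Draft(question, ai_answer)
        self.drafts.append(draft)
        return draft

    def approve(self, draft):
        draft.status = "approved"

    def reject(self, draft):
        draft.status = "rejected"

queue = ReviewQueue()
d = queue.submit("Scope 1 emissions?", "41,200 tCO2e in 2023.")
queue.approve(d)      # nothing is published until a human signs off
print(d.status)       # approved
```

The key design point is that the AI only ever produces a `Draft`; the state change to “approved” is reserved for the human reviewer, keeping the AI an assistant rather than the ultimate decision-maker.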