
The headlong rush to adopt artificial intelligence poses multiple risks to financial institutions that don’t take the necessary preparatory steps before implementation. One potential source of risk is the increasing AI savviness of company employees.
As they become accustomed to using the technology on consumer devices and websites, there is a greater risk they’ll inadvertently leak or compromise company data on extramural AI models.
The issue is one that data officers must address as they come under pressure from boards eager to quickly roll out AI programmes, said Matt Flenley, head of product and marketing at data solutions provider Datactics. In a bid to understand how acute the challenge is, Flenley has created an AI-maturity questionnaire for institutional data chiefs to complete.

“The hypothesis is that the people who are being pressured to adopt AI also have a very AI-savvy workforce; they’re wrestling with getting AI done, getting it done right and getting it done in an accountable, organised way that in banking, finance, public sector covers all the inherent risks of just going full-blooded into this thing,” Flenley told Data Management Insight.
It’s More Than Getting the Right Models in Place
If his hunch is correct, organisations readying AI rollouts will need to take very seriously matters of data quality, data governance and AI governance. They must also decide how to give non-technical workers – whose experience of AI may amount to simple chatbot interactions – access to powerful AI applications and complex data operations.
“I’m trying to expose, really, that while the pressure is there to adopt AI as soon as possible, we also have a responsibility to make sure that where people are coming in as an AI-savvy workforce, they’re not just using any tool, putting data anywhere, that there’s no guardrails,” he said. “In the old days that would have been called AI ethics but it’s really AI governance.”
Research has shown that many companies have suffered data leaks and worse as a result of employees misusing AI at work. A Microsoft survey last year concluded that more than two-thirds of organisations’ employees in the UK had used external AI tools in the workplace and that half of them did so every week. Just a third, however, understood the risks their actions posed to corporate data security.
The use of autonomous AI agents has also raised security concerns, especially after a major breach at Meta led to the leakage of sensitive employee data.
Opening a Window onto Demands on Data Chiefs
Flenley’s questionnaire – which is available online and was available to delegates at A-Team Group’s Data Management Summit London this week – follows one he devised last year to ascertain the extent to which organisations were dealing with the challenges of data quality, metadata and matching.
The survey concluded that respondents were still struggling with those issues in ways that hadn’t changed in years. Data quality is still managed heavily within manual workflows, metadata is rarely optimised despite awareness of its importance, and data matching and entity resolution solutions are expensive and opaque.
“What emerged from that survey was that quite a lot of the time, the people we would term data leaders – who we would sell to – were often struggling with old problems like standardisation, integration and third parties,” Flenley said.
“Even though the technology had moved on a lot, actually the issues still remained the same.”
Flenley said that the results of this year’s survey would help better inform the industry but also enable Datactics to better tailor its data offerings to the market.
He said he hoped it would identify gaps in data and governance provisioning for AI and expose the challenges that data leaders face within their organisations.
“Let’s say the hypothesis is proved correct – that there is all this pressure – it’s going to mean that there will be a higher premium and priority to develop things that are explainable by design, that have accountability by design baked into them,” he said.
“It means that when an implementer of AI data management capability within an organisation goes off and either architects or buys something off the market, that one of the absolute principles from the start is that they always know who they’re doing business with because that’s a fundamental thing and not just a vague approximated guess by a large language model.”
New Survey Needed as Technology Evolves
The initial survey was born of comments made by Datactics clients, which inspired Flenley to gauge opinions more objectively. The success of that survey prompted this year’s edition, and it is hoped the exercise can be repeated regularly to gauge the needs of market participants and Datactics’ clients as technology evolves.
“I want to understand things like the pressure to adopt and data chiefs’ level of comfort with the pressure to adopt… the things that really keep them awake at night when it comes to AI adoption,” he said.