The knowledge platform for the financial technology industry

A-Team Insight Blogs

AI Depends On Collecting Adequate Data and Organizing Correctly, Experts Say


Capitalizing on internal data repositories, deciding how to stage data, choosing data wisely and achieving semantic interoperability are all ways in which firms can better apply emerging artificial intelligence (AI) technologies for greater data quality and data-driven insight, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York on April 4.

“Where you have enormous internal data repositories, immediate business needs are what force changes,” said Jared Klee, who works on Watson business development at IBM. “As we start to look at the internal processes and data that has been captured over many years, we find through combinations of techniques like cognitive or robotic process automation, we can leverage that knowledge to move much more quickly.”

Cognitive tools, as AI technology may also be called, require data before they can be applied, stated J.R. Lowry, head of State Street Global Exchange, EMEA. “Pulling that data together is a prerequisite,” he said. “First and foremost for data professionals is the task of aggregating data, tagging it, cleansing it, normalizing it, enriching it and staging it for whatever you want to do with it. Without that, you’re hindered in your ability to apply augmentative AI capability for what you want to do.”
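The staging sequence Lowry describes can be sketched in code. This is an illustrative toy only; the record fields, step names and ordering are assumptions for the example, not State Street's actual pipeline.

```python
# Illustrative sketch of an aggregate -> cleanse -> normalize -> tag staging
# pipeline applied to toy vendor records. All names and data are assumptions.

raw_sources = [
    [{"name": " ACME Corp ", "country": "us"}],              # source A
    [{"name": "Beta Ltd", "country": "GB"}, {"name": ""}],   # source B (one bad record)
]

def aggregate(sources):
    # Pull records from every source into one working set.
    return [r for src in sources for r in src]

def cleanse(records):
    # Drop records with no usable name; trim stray whitespace.
    return [{**r, "name": r["name"].strip()}
            for r in records if r.get("name", "").strip()]

def normalize(records):
    # Upper-case country codes so downstream joins and lookups match.
    return [{**r, "country": r.get("country", "").upper()} for r in records]

def tag(records):
    # Record provenance so later consumers know where each row came from.
    return [{**r, "source": "vendor-feed"} for r in records]

staged = tag(normalize(cleanse(aggregate(raw_sources))))
print(staged)
# [{'name': 'ACME Corp', 'country': 'US', 'source': 'vendor-feed'},
#  {'name': 'Beta Ltd', 'country': 'GB', 'source': 'vendor-feed'}]
```

Each step is a small, independent function, which mirrors the point that staging is a sequence of distinct preparation tasks rather than a single transformation.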

The volume of data that firms hold is so large that “it’s very difficult to unlock the value in it,” said Tony Brownlee, partner at Kingland, a provider of risk and data management software. “You’ll have a department that has a giant file repository of 85,000 documents from the past 20 years. … How do you start to unlock that value at scale?”

Data selection is certainly critical to AI applications, added Klee, who noted this has been evident in IBM’s experience applying Watson in healthcare as well as in financial risk. “It’s knowing and understanding what the data set is and having a strong point of view on what is trustworthy, and going from there,” said Klee. “In some applications, all data may be useful; in many applications, highly trusted data is absolutely critical.”

Once the right data from the right sources is in place, the final piece supporting AI is how data is organized semantically and how data management concepts relate to one another. Efforts to address data quality issues may be designed and coded independently, yet end up depending on each other logically, stated Mark Temple-Raston, chief data officer and chief data scientist at Decision Machine, a predictive analytics company.

“If I have two clinical diagnostic tests, if the first test is positive, I may know that the possibility of the second test being positive increases,” he said. “With advanced analytics, we assume that things are independent, multiplying the probabilities, but where they are logically dependent, we can’t assume [statistical] independence.”
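Temple-Raston's point can be made concrete with a few lines of arithmetic. The probabilities below are illustrative assumptions, not figures from the article: if a positive first test raises the chance of a positive second test, multiplying the two marginal probabilities badly understates the joint probability.

```python
# Sketch: why multiplying marginal probabilities fails for dependent events.
# All numbers are illustrative assumptions.

p_t1 = 0.10            # P(test 1 positive)
p_t2 = 0.10            # P(test 2 positive), marginal
p_t2_given_t1 = 0.60   # P(test 2 positive | test 1 positive) -- the tests are related

# Naive independence assumption: P(both) = P(T1) * P(T2)
joint_naive = p_t1 * p_t2            # 0.01

# Correct joint probability uses the conditional: P(both) = P(T1) * P(T2 | T1)
joint_actual = p_t1 * p_t2_given_t1  # 0.06

print(joint_naive, joint_actual)
```

Here the independence assumption understates the true joint probability sixfold, which is exactly the kind of error that compounds when quality checks designed in isolation turn out to be logically linked.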

Similarly, where there is semantic interoperability, being able to reference both items “is absolutely critical,” IBM’s Klee said. “If I’m asking what controls we have on lending products, I need to understand all that is within that purview. You can get some of the way there by referring directly from the data, but much of it comes from deep expertise applied in cleansing and normalization.”
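One minimal form of the semantic interoperability Klee describes is mapping department-specific terms onto a canonical concept so a single query covers everything in its purview. The synonym table, record fields and data below are hypothetical, invented for the example.

```python
# Illustrative sketch (all terms and data are assumptions): mapping local
# product names to one canonical concept so a query for "lending products"
# reaches every record within that purview.

SYNONYMS = {
    "loan": "lending product",
    "mortgage": "lending product",
    "credit line": "lending product",
    "deposit": "deposit product",
}

def canonical(term: str) -> str:
    # Fall back to the lower-cased term when no mapping is known.
    return SYNONYMS.get(term.lower(), term.lower())

records = [
    {"product": "Mortgage", "control": "income verification"},
    {"product": "Credit Line", "control": "exposure limit"},
    {"product": "Deposit", "control": "KYC check"},
]

# "What controls do we have on lending products?"
lending_controls = [r["control"] for r in records
                    if canonical(r["product"]) == "lending product"]
print(lending_controls)  # ['income verification', 'exposure limit']
```

The synonym table stands in for the "deep expertise applied in cleansing and normalization" that Klee says the raw data alone cannot supply.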
