
A-Team Insight Blogs

AI Depends On Collecting Adequate Data and Organizing It Correctly, Experts Say


Capitalizing on internal data repositories, deciding how to stage data, choosing data wisely and achieving semantic interoperability are all ways in which firms can better apply emerging artificial intelligence (AI) technologies for improved data quality and deeper data-driven insight, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York on April 4.

“Where you have enormous internal data repositories, immediate business needs are what force changes,” said Jared Klee, who works on Watson business development at IBM. “As we start to look at the internal processes and data that has been captured over many years, we find through combinations of techniques like cognitive or robotic process automation, we can leverage that knowledge to move much more quickly.”

Cognitive tools, as AI technologies are also called, require data before they can be applied, stated J.R. Lowry, head of Global Exchange EMEA at State Street. “Pulling that data together is a prerequisite,” he said. “First and foremost for data professionals is the task of aggregating data, tagging it, cleansing it, normalizing it, enriching it and staging it for whatever you want to do with it. Without that, you’re hindered in your ability to apply augmentative AI capability for what you want to do.”
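The preparation steps Lowry lists translate fairly directly into a data pipeline. As a minimal sketch only, not something discussed on the panel, here is what that aggregate, tag, cleanse, normalize, enrich and stage sequence might look like for a hypothetical securities reference file (the pandas approach, column names and file formats are all illustrative assumptions):

```python
# Illustrative sketch only: a hypothetical aggregate -> tag -> cleanse ->
# normalize -> enrich -> stage pipeline; all column and file names are invented.
import pandas as pd

def stage_reference_data(raw_files, sector_lookup):
    # Aggregate and tag: pull the raw extracts together, noting each row's source.
    frames = []
    for path in raw_files:
        frame = pd.read_csv(path)
        frame["source_file"] = path          # tag rows for lineage
        frames.append(frame)
    df = pd.concat(frames, ignore_index=True)

    # Cleanse: drop rows missing the identifier, then de-duplicate on it.
    df = df.dropna(subset=["isin"]).drop_duplicates(subset=["isin"])

    # Normalize: consistent casing for issuer names and currency codes.
    df["issuer_name"] = df["issuer_name"].str.strip().str.upper()
    df["currency"] = df["currency"].str.upper()

    # Enrich: join in additional reference data (here, a sector classification).
    df = df.merge(sector_lookup, on="isin", how="left")

    # Stage: write a clean snapshot for downstream AI and analytics jobs to read.
    df.to_parquet("staged_reference_data.parquet", index=False)
    return df
```

The point is the ordering: downstream AI or analytics jobs read the staged snapshot rather than the raw extracts.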

The volume of data that firms hold is so large that “it’s very difficult to unlock the value in it,” said Tony Brownlee, partner at Kingland, a provider of risk and data management software. “You’ll have a department that has a giant file repository of 85,000 documents from the past 20 years. … How do you start to unlock that value at scale?”

Data selection is certainly critical to AI applications, added Klee, who noted that this has been evident in IBM’s experience applying Watson in healthcare as well as in financial risk. “It’s knowing and understanding what the data set is and having a strong point of view on what is trustworthy, and going from there,” said Klee. “In some applications, all data may be useful; in many applications, highly trusted data is absolutely critical.”

So, once a firm has the right data from the right sources, the last piece in supporting AI appears to be how that data is organized semantically and how data management concepts relate to one another. Efforts to address data quality issues may be designed and coded independently, yet end up depending on each other logically, stated Mark Temple-Raston, chief data officer and chief data scientist at Decision Machine, a predictive analytics company.

“If I have two clinical diagnostic tests and the first test is positive, I may know that the probability of the second test being positive increases,” he said. “In advanced analytics, we often assume that things are independent and multiply the probabilities, but where they are logically dependent, we can’t assume that [functional] independence.”
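Temple-Raston’s caution can be made concrete with a small numerical sketch (the probabilities below are invented purely for illustration): when two tests are dependent, multiplying their marginal probabilities badly misstates the joint probability.

```python
# Invented numbers, for illustration only: two diagnostic tests whose outcomes
# are positively dependent, as in the example above.
p_first = 0.10               # P(first test positive)
p_second = 0.10              # P(second test positive)
p_second_given_first = 0.60  # P(second positive | first positive)

# Joint probability via the chain rule, respecting the dependence.
p_both = p_first * p_second_given_first   # 0.06

# Joint probability under a (mistaken) independence assumption.
p_both_naive = p_first * p_second         # 0.01

print(f"P(both positive), dependent model:       {p_both:.2f}")
print(f"P(both positive), assuming independence: {p_both_naive:.2f}")
# Assuming independence understates the joint probability by a factor of six here.
```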

Similarly, where there is semantic interoperability, being able to reference both items “is absolutely critical,” IBM’s Klee said. “If I’m asking what controls we have on lending products, I need to understand all that is within that purview. You can get some of the way there by working directly from the data, but much of it comes from deep expertise applied in cleansing and normalization.”
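One way to picture the kind of semantic layer Klee describes, using entirely hypothetical system and product names, is a shared concept map that lets a single question about “lending products” reach records that different source systems label differently:

```python
# Hypothetical example: two source systems label the same business concept
# differently; a shared vocabulary bridges them so one query covers both.
CONCEPT_MAP = {
    "lending_product": {"MORT_LOAN", "AUTO_LN", "HELOC", "CC_REVOLVER"},
}

system_a = [
    {"id": "A-1", "product_code": "MORT_LOAN", "control": "income verification"},
    {"id": "A-2", "product_code": "FX_SWAP",   "control": "collateral check"},
]
system_b = [
    {"id": "B-9", "product_code": "HELOC",     "control": "appraisal review"},
]

def controls_for_concept(concept, *record_sets):
    """Collect the controls attached to every product that maps to the concept."""
    codes = CONCEPT_MAP[concept]
    return [
        rec["control"]
        for records in record_sets
        for rec in records
        if rec["product_code"] in codes
    ]

print(controls_for_concept("lending_product", system_a, system_b))
# -> ['income verification', 'appraisal review']
```

Building and maintaining that concept map is exactly where the “deep expertise applied in cleansing and normalization” comes in.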

