The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

AI Depends on Collecting Adequate Data and Organizing It Correctly, Experts Say

Capitalizing on internal data repositories, deciding how to stage data, choosing data wisely and achieving semantic interoperability are all ways in which firms can better apply emerging artificial intelligence (AI) technologies for greater data quality and insight, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York on April 4.

“Where you have enormous internal data repositories, immediate business needs are what force changes,” said Jared Klee, who works on Watson business development at IBM. “As we start to look at the internal processes and data that have been captured over many years, we find that through combinations of techniques like cognitive or robotic process automation, we can leverage that knowledge to move much more quickly.”

Cognitive tools, as AI technology may also be called, require data for application, stated J.R. Lowry, head of Global Exchange EMEA at State Street. “Pulling that data together is a prerequisite,” he said. “First and foremost for data professionals is the task of aggregating data, tagging it, cleansing it, normalizing it, enriching it and staging it for whatever you want to do with it. Without that, you’re hindered in your ability to apply augmentative AI capability for what you want to do.”
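The preparation steps Lowry lists can be sketched in a few lines. This is a minimal illustration only: the records, field names and asset-class mapping below are invented, and a production pipeline would run on a data management platform rather than plain Python.

```python
# Sketch of the steps Lowry names: aggregate, cleanse, normalize,
# tag/enrich, and stage. All records and field names are hypothetical.

raw_sources = [
    [{"ticker": "ibm ", "price": 142.5}, {"ticker": None, "price": 99.0}],
    [{"ticker": "STT", "price": 71.2}],
]

# Aggregate: pull records together from every internal repository.
records = [r for source in raw_sources for r in source]

# Cleanse: drop records missing a key identifier.
records = [r for r in records if r["ticker"]]

# Normalize: standardize identifier formatting.
for r in records:
    r["ticker"] = r["ticker"].strip().upper()

# Tag and enrich: attach metadata (an invented asset-class lookup).
ASSET_CLASS = {"IBM": "equity", "STT": "equity"}
for r in records:
    r["asset_class"] = ASSET_CLASS.get(r["ticker"], "unknown")

# Stage: the cleaned, enriched set is now ready for an AI application.
staged = records
print(staged)
```

The point of the sketch is the ordering: each step assumes the previous one, which is why Lowry calls the aggregation work a prerequisite for applying AI at all.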

The volume of data that firms hold is so large that “it’s very difficult to unlock the value in it,” said Tony Brownlee, partner at Kingland, a provider of risk and data management software. “You’ll have a department that has a giant file repository of 85,000 documents from the past 20 years. … How do you start to unlock that value at scale?”

Data selection is certainly critical to AI applications, added Klee, who noted that this has been evident in IBM’s experience applying Watson in the healthcare industry, as well as in financial risk. “It’s knowing and understanding what the data set is and having a strong point of view on what is trustworthy, and going from there,” said Klee. “In some applications, all data may be useful; in many applications, highly trusted data is absolutely critical.”

Once a firm has the right data from the right sources, the last piece supporting AI appears to be how that data is organized semantically and how data management concepts relate to one another. Efforts to address data quality issues may be designed and coded independently, but end up depending on each other logically, stated Mark Temple-Raston, chief data officer and chief data scientist at Decision Machine, a predictive analytics company.

“If I have two clinical diagnostic tests, if the first test is positive, I may know that the possibility of the second test being positive increases,” he said. “In advanced analytics, we assume that things are independent and multiply the probabilities, but where they are logically dependent, we can’t assume that [functional] independence.”
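Temple-Raston's point can be shown with simple arithmetic. The probabilities below are invented for illustration; the only claim is the structural one, that multiplying marginal probabilities understates the joint probability when the two tests are positively dependent.

```python
# Illustration of the independence pitfall Temple-Raston describes.
# All probability values are hypothetical.

p_a = 0.10            # P(first test positive)
p_b = 0.10            # P(second test positive)
p_b_given_a = 0.60    # dependence: a positive first test raises the odds

naive_joint = p_a * p_b          # assumes independence
true_joint = p_a * p_b_given_a   # uses the conditional probability

print(naive_joint, true_joint)   # the naive estimate is far too low
```

Here the independence assumption underestimates the joint probability by a factor of six, which is exactly the kind of error that coupled data quality checks can introduce when they are treated as unrelated.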

Similarly, where there is semantic interoperability, being able to reference both items “is absolutely critical,” IBM’s Klee said. “If I’m asking what controls we have on lending products, I need to understand all that is within that purview. You can get some of the way there by referring directly from the data, but much of it comes from deep expertise applied in cleansing and normalization.”
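Klee's lending-products example amounts to resolving a concept to everything within its purview before answering the question. A minimal sketch follows; the taxonomy, the control mapping and the function name are all invented for illustration, and a real system would draw on a curated ontology built through the cleansing and normalization work he describes.

```python
# Hypothetical concept taxonomy and control mapping illustrating
# semantic interoperability: a query about "lending products" must
# cover every product the concept subsumes.

TAXONOMY = {
    "lending products": ["mortgages", "credit cards", "auto loans"],
}
CONTROLS = {
    "mortgages": ["income verification"],
    "credit cards": ["credit limit review"],
    "auto loans": ["collateral valuation"],
}

def controls_for(concept):
    """Collect controls for a concept and every term it subsumes."""
    terms = TAXONOMY.get(concept, [concept])
    return sorted(c for t in terms for c in CONTROLS.get(t, []))

print(controls_for("lending products"))
```

Without the taxonomy, a query for "lending products" would match nothing directly and miss all three controls, which is Klee's point about why expertise-driven normalization matters.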
