About a-team Marketing Services
The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

AI Depends on Collecting Adequate Data and Organizing It Correctly, Experts Say


Capitalizing on internal data repositories, deciding how to stage data, choosing data wisely and achieving semantic interoperability are all ways in which firms can better apply emerging artificial intelligence (AI) technologies for greater data quality and data-driven insight, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York on April 4.

“Where you have enormous internal data repositories, immediate business needs are what force changes,” said Jared Klee, who works on Watson business development at IBM. “As we start to look at the internal processes and data that has been captured over many years, we find through combinations of techniques like cognitive or robotic process automation, we can leverage that knowledge to move much more quickly.”

Cognitive tools, as AI technology may also be called, require data for application, stated J.R. Lowry, head of global exchange EMEA at State Street. “Pulling that data together is a prerequisite,” he said. “First and foremost for data professionals is the task of aggregating data, tagging it, cleansing it, normalizing it, enriching it and staging it for whatever you want to do with it. Without that, you’re hindered in your ability to apply augmentative AI capability for what you want to do.”
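The staging steps Lowry lists can be sketched as a simple pipeline. This is a minimal illustration, not any State Street system: the feed data, field names and enrichment tags below are all invented for the example.

```python
# Hypothetical sketch of the steps Lowry describes: aggregate, cleanse,
# normalize, tag/enrich, stage. Records and fields are made up.

RAW_FEEDS = [
    [{"isin": " us0378331005 ", "price": "189.50"}],  # messy but usable
    [{"isin": "US5949181045", "price": None}],        # missing a price
]

def aggregate(feeds):
    """Pull records from all source feeds into one list."""
    return [rec for feed in feeds for rec in feed]

def cleanse(records):
    """Drop records missing required fields."""
    return [r for r in records if r.get("isin") and r.get("price") is not None]

def normalize(record):
    """Standardize identifier casing/whitespace and price type."""
    return {"isin": record["isin"].strip().upper(),
            "price": float(record["price"])}

def tag_and_enrich(record):
    """Attach metadata a downstream AI model can rely on (invented tags)."""
    return {**record, "asset_class": "equity", "source_trusted": True}

def stage(feeds):
    """Run the full pipeline and return model-ready records."""
    return [tag_and_enrich(normalize(r)) for r in cleanse(aggregate(feeds))]

staged = stage(RAW_FEEDS)
print(staged)  # only the cleansed, normalized, enriched record survives
```

The point of the sketch is Lowry's ordering: a model applied to the raw feeds would see inconsistent identifiers and null prices; applied after staging, it sees uniform, enriched records.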

The volume of data that firms hold is so large that “it’s very difficult to unlock the value in it,” said Tony Brownlee, partner at Kingland, a provider of risk and data management software. “You’ll have a department that has a giant file repository of 85,000 documents from the past 20 years. … How do you start to unlock that value at scale?”

Data selection is certainly critical to AI applications, added Klee, who noted that this has been evident in IBM’s experience applying Watson in the healthcare industry, as well as in financial risk. “It’s knowing and understanding what the data set is and having a strong point of view on what is trustworthy, and going from there,” said Klee. “In some applications, all data may be useful; in many applications, highly trusted data is absolutely critical.”

Once firms have the right data from the right sources, the last piece in supporting AI appears to be how that data is organized semantically and how data management concepts relate to one another. Efforts to address data quality issues may be designed and coded independently, but end up depending on each other logically, stated Mark Temple-Raston, chief data officer and chief data scientist at Decision Machine, a predictive analytics company.

“If I have two clinical diagnostic tests, if the first test is positive, I may know that the probability of the second test being positive increases,” he said. “In advanced analytics, we assume that things are independent, multiplying the probabilities, but where they are logically dependent, we can’t assume that [functional] independence.”
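Temple-Raston's point can be shown with a few lines of arithmetic. The probabilities below are invented for illustration; the takeaway is that multiplying marginals understates the joint probability when the tests are positively dependent.

```python
# Illustrative numbers only: two diagnostic tests where a positive first
# result raises the chance of a positive second result.

p_a = 0.10          # P(first test positive)
p_b = 0.12          # marginal P(second test positive)
p_b_given_a = 0.60  # P(second positive | first positive) -- the dependence

joint_true = p_a * p_b_given_a  # correct joint probability: 0.06
joint_naive = p_a * p_b         # independence assumption:   0.012

print(joint_true, joint_naive)  # the naive estimate is 5x too low here
```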

Similarly, where there is semantic interoperability, being able to reference both items “is absolutely critical,” IBM’s Klee said. “If I’m asking what controls we have on lending products, I need to understand all that is within that purview. You can get some of the way there by referring directly from the data, but much of it comes from deep expertise applied in cleansing and normalization.”
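Klee's lending-products example hints at why a shared concept map matters. The toy ontology and controls below are invented: without a mapping from synonymous terms ("loan", "mortgage", "credit line") to one canonical concept, a query over "lending products" would miss controls recorded under any of those labels.

```python
# Hypothetical concept map standing in for the "deep expertise applied in
# cleansing and normalization" Klee describes. All terms are invented.

CONCEPT_MAP = {
    "loan": "lending_product",
    "mortgage": "lending_product",
    "credit line": "lending_product",
    "deposit": "deposit_product",
}

controls = [
    {"applies_to": "mortgage", "control": "affordability check"},
    {"applies_to": "credit line", "control": "exposure limit"},
    {"applies_to": "deposit", "control": "rate cap"},
]

def controls_for(concept):
    """Return every control whose target term maps to the given concept."""
    return [c["control"] for c in controls
            if CONCEPT_MAP.get(c["applies_to"]) == concept]

print(controls_for("lending_product"))
# a literal text search for "lending" would have found none of these
```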
