
AI Depends On Collecting Adequate Data and Organizing Correctly, Experts Say


Capitalizing on internal data repositories, deciding how to stage data, choosing data wisely and achieving semantic interoperability are all ways in which firms can better apply emerging artificial intelligence (AI) technologies to improve data quality and draw insight from their data, according to experts who spoke at the Data Management Summit hosted by A-Team Group in New York on April 4.

“Where you have enormous internal data repositories, immediate business needs are what force changes,” said Jared Klee, who works on Watson business development at IBM. “As we start to look at the internal processes and data that has been captured over many years, we find through combinations of techniques like cognitive or robotic process automation, we can leverage that knowledge to move much more quickly.”

Cognitive tools, as AI technology may also be called, require data for application, stated J.R. Lowry, head of Global Exchange EMEA at State Street. “Pulling that data together is a prerequisite,” he said. “First and foremost for data professionals is the task of aggregating data, tagging it, cleansing it, normalizing it, enriching it and staging it for whatever you want to do with it. Without that, you’re hindered in your ability to apply augmentative AI capability for what you want to do.”
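Lowry’s sequence (aggregate, tag, cleanse, normalize, enrich, stage) maps naturally onto a dataframe pipeline. A minimal sketch assuming pandas; the column names and the source-trust lookup are hypothetical illustrations, not details from the talk:

```python
import pandas as pd

def stage_for_ai(frames):
    """Aggregate, tag, cleanse, normalize, enrich, and stage raw extracts."""
    # Aggregate: pull the per-system extracts into one table
    df = pd.concat(frames, ignore_index=True)

    # Tag: record lineage so every row can be traced back
    df["ingested_at"] = pd.Timestamp.now(tz="UTC")

    # Cleanse: drop exact duplicates and rows missing key fields
    df = df.drop_duplicates().dropna(subset=["trade_date", "notional"])

    # Normalize: enforce consistent types
    df["trade_date"] = pd.to_datetime(df["trade_date"])
    df["notional"] = df["notional"].astype(float)

    # Enrich: join reference data (here, a hypothetical source-trust table)
    trust = pd.DataFrame({"source": ["internal", "vendor"],
                          "trust": [1.0, 0.8]})
    return df.merge(trust, on="source", how="left")  # staged for modeling
```

Each step is the code analogue of one verb in Lowry’s list; a production pipeline would add validation and audit logging around every stage.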

The volume of data that firms hold is so large that “it’s very difficult to unlock the value in it,” said Tony Brownlee, partner at Kingland, a provider of risk and data management software. “You’ll have a department that has a giant file repository of 85,000 documents from the past 20 years. … How do you start to unlock that value at scale?”
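A common first step toward Brownlee’s “unlock at scale” question is simply to build a machine-readable inventory of the repository, so that later extraction and classification can be prioritized. A brief sketch; the mount point and metadata fields are hypothetical:

```python
from pathlib import Path

def inventory(repo):
    """Yield one metadata record per file in the repository."""
    for f in repo.rglob("*"):
        if f.is_file():
            stat = f.stat()
            yield {"path": str(f),
                   "type": f.suffix.lower(),
                   "bytes": stat.st_size,
                   "modified": stat.st_mtime}

# Hypothetical mount point for the department's 20-year archive.
docs = list(inventory(Path("/shared/department_archive")))
print(f"{len(docs)} documents inventoried")
```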

Data selection is certainly critical to AI applications, added Klee, who noted that this has been evident in IBM’s experience applying Watson in the healthcare industry as well as in financial risk. “It’s knowing and understanding what the data set is and having a strong point of view on what is trustworthy, and going from there,” said Klee. “In some applications, all data may be useful; in many applications, highly trusted data is absolutely critical.”
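Klee’s distinction between “all data may be useful” and “highly trusted data is absolutely critical” can be expressed as an application-specific selection rule. A minimal sketch building on the hypothetical trust column from the staging example above; the application names and thresholds are invented for illustration:

```python
# Hypothetical minimum trust per application; thresholds are illustrative.
MIN_TRUST = {"credit_risk": 0.9,           # highly trusted data only
             "exploratory_research": 0.0}  # all data may be useful

def select_training_data(df, application):
    """Keep only rows whose source trust meets the application's bar."""
    return df[df["trust"] >= MIN_TRUST[application]]
```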

Once a firm has the right data from the right sources, the last piece of the AI puzzle is how data is organized semantically and how data management concepts relate to one another. Efforts to address data quality issues may be designed and coded independently, yet end up depending on each other logically, stated Mark Temple-Raston, chief data officer and chief data scientist at Decision Machine, a predictive analytics company.

“If I have two clinical diagnostic tests, and the first test is positive, I may know that the probability of the second test being positive increases,” he said. “In advanced analytics, we assume that things are independent, multiplying the probabilities, but where they are logically dependent, we can’t assume that [functional] independence.”
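Temple-Raston’s point is the difference between multiplying marginal probabilities and conditioning on the first result. A quick numeric illustration with made-up test probabilities (the figures are not from the talk):

```python
p_a = 0.10           # P(first test positive)
p_b = 0.10           # P(second test positive), marginal
p_b_given_a = 0.40   # P(second positive | first positive): the tests are dependent

naive = p_a * p_b           # independence assumption: 0.010
actual = p_a * p_b_given_a  # respecting the dependence: 0.040

print(f"assuming independence: {naive:.3f}")
print(f"with conditioning:     {actual:.3f}")  # four times larger here
```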

Similarly, where there is semantic interoperability, being able to reference logically related data items together “is absolutely critical,” IBM’s Klee said. “If I’m asking what controls we have on lending products, I need to understand all that is within that purview. You can get some of the way there by referring directly from the data, but much of it comes from deep expertise applied in cleansing and normalization.”
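Klee’s lending-products example is, at bottom, a mapping from many surface terms to one canonical concept, so that a single query covers the whole purview. A minimal sketch with an invented concept map (none of the labels come from the talk):

```python
# Hypothetical concept map; the labels are invented for illustration.
CANONICAL = {"mortgage": "lending_product",
             "home loan": "lending_product",
             "heloc": "lending_product",
             "auto loan": "lending_product"}

def controls_in_purview(records, concept):
    """Return controls whose product term maps to the queried concept."""
    return [r for r in records if CANONICAL.get(r["product"]) == concept]

records = [{"product": "home loan", "control": "income verification"},
           {"product": "heloc", "control": "loan-to-value cap"},
           {"product": "checking", "control": "overdraft limit"}]

print(controls_in_purview(records, "lending_product"))
# The "checking" row falls outside the lending purview and is excluded.
```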

