About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Ensuring AI-Focussed Institutions Take out the Garbage: A-Team Group Webinar Preview


As data quality rises up institutions’ AI-implementation agendas, the next A-Team Group Data Management Insight webinar will take a deep dive into how they can ensure the information they feed into their models delivers accurate and valuable outputs.

Avoiding Chaos

The data management maxim of “garbage in, garbage out” couldn’t be more appropriate for artificial intelligence (AI), especially in financial services, where billions of dollars ride on decisions informed by the technology.

Without good-quality data, AI applications are not only likely to provide little value to users; they can also make matters worse by generating inaccurate outputs.

Examples of such errors are legion. Millions of people last year watched videos on social media of confused McDonald’s customers battling with AI-based drive-thru interfaces that placed wrong orders. And AI-hallucinated court cases upended real proceedings the year before when they were cited as precedents in a hearing.

The burger giant eventually abandoned that particular AI ordering system and the lawyer responsible for the court-room chaos was fined. Imagine, however, if a similar AI error had occurred in a multi-million-dollar financial trade or had prompted an adjustment to a portfolio that wiped out its annual gains.

For this reason – among many others – financial institutions are taking steps to ensure the quality of the data that they feed into their AI models. For many, however, it isn’t an easy task.

Along the entire length of the data management pipeline, processes need to be put in place to ensure that the information passing from internal and external sources is clean, standardised and accurate. Governance, verification, validation and distribution must also be robust enough to safeguard the data’s continued efficacy as it makes its way into centralised storage and end users’ terminals.

Expert Opinion

The webinar, entitled “In data we trust – How to ensure high quality data to power AI”, brings together three leading minds on the use of AI in financial institutions to offer their expert views on how organisations can manage these important tasks.

Marla Dans is head of data management and governance at Chicago Trading Company; Arijit Bhattacharya is head of data governance at Northern Trust Asset Management; and Jerry Calvanese is senior director, technical sales – industries at Informatica, the webinar’s lead sponsor.

The event, due on March 13, will consider key issues facing institutions as they integrate AI into their systems: how data collection processes can be optimised to reduce errors and inaccuracies; data cleansing strategies at the start of organisations’ data pipelines; standardisation and mastering to ensure data integrity and validity; and the need for robust governance strategies to ensure data security.

The challenges that organisations face with their AI-focussed data quality were highlighted in recent comments by Jason du Preez, vice president of privacy and security at cloud data management provider Informatica. Du Preez said AI models needed to be subject to strong governance and privacy controls to prevent them from causing harm through the generation of inaccurate outputs.

Although he was speaking specifically about the large language models (LLMs) that provide the backbone of generative AI, his comments could be seen as applicable to all AI.

“Inadvertently leaking or exposing sensitive information can pose an existential threat to organisations leading to massive reputational damage and economic loss – this includes data not consented by individuals, a risk exacerbated by the use of nascent LLM-based applications that can produce unpredictable outputs,” he said.

More Spending

The pressing need for organisations to get their data processes in order was highlighted in a study late last year, which found that institutions were targeting increases in data spending focussed largely on AI applications.

To catch up on the latest thinking on how companies can guard against data-centred challenges, register here for “In data we trust – how to ensure high quality data to power AI” on 13 March at 11:00am ET/3:00pm London/4:00pm CET.

