About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Inaugural AI in Data Management Summit NYC Sets New Benchmark in AI Discussion


A-Team Group’s inaugural AI in Data Management Summit NYC set a new benchmark in the global discussion around artificial intelligence.

Leading figures from the worlds of finance and technology gathered in New York to share best practice guidance and observations, real-world case studies and forecasts for the exciting – and challenging – year ahead.

The summit couldn’t have been better timed, as agentic AI hits its stride, promising remarkable efficiency and operational gains, and as regulators around the world begin to tackle the thorny matters of AI ethics, data privacy and governance.

Guiding AI Projects from Pilots to Production

The full-day event kicked off with a Keynote Practitioner’s discussion in which Jennifer Ippoliti, chief data officer for the legal department at JP Morgan Chase, offered her take on how financial institutions can guide AI applications from pilot to production.

Interviewed by Jane Conway, managing director, head of digital, data and enablement – client and product solutions at Apollo Global Management, Ippoliti explained how processing AI-mined unstructured data was critical to JP Morgan Chase. The bank had implemented policies to guide data quality, preparation and governance to ensure AI projects could be put into production at scale.

She noted that the bank sought to mitigate challenges such as model hallucinations and inaccuracies through stringent data hygiene and risk assessment processes from product origination to implementation.

Communication throughout the organisation is critical to driving change with new AI products, and all projects should support business strategy, she advised. Ippoliti concluded that humans continually check all model outputs to reduce model drift.

The Continuing Need to Ensure Good Data Quality

Data quality was a core theme of a keynote address by Rohan Kodialam, co-founder and chief executive of Sphinx AI. Kodialam explained that the failure to grasp the differences in how humans and AI think has led to model inaccuracies, forcing the introduction of human-in-the-loop guardrails that can erode the quality of the technology’s outputs.

Instead, he suggested the alternative in which Sphinx AI specialises: using purpose-built algorithms to adjust models so they can think differently. This enables AI to become an autonomous co-worker rather than just a supervised co-pilot, Kodialam said.

The real-world benefit of agentic AI was explained in a separate session by Suemee Shin, senior vice president of product management at Northern Trust Asset Management. Shin built her own multi-agent orchestration system to solve a data reconciliation challenge that had taken days to complete using other technology tools.

Automation, she concluded, empowered professionals by giving them more time to focus on solving business problems and adding value.
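To illustrate the kind of multi-agent reconciliation workflow described above, here is a minimal, purely hypothetical sketch: the agent roles, function names and matching logic below are our own illustration, not Northern Trust’s actual implementation. An orchestrator routes two raw record sets through a normalising agent and a matching agent, which reports the breaks.

```python
# Toy multi-agent reconciliation sketch (hypothetical, for illustration only).

def normalizer_agent(records):
    """Normalise raw records into a {trade_id: amount} view."""
    return {r["id"].strip().upper(): round(float(r["amount"]), 2) for r in records}

def matcher_agent(ours, theirs):
    """Compare two normalised views and report reconciliation breaks."""
    breaks = []
    for trade_id, amount in ours.items():
        other = theirs.get(trade_id)
        if other is None:
            breaks.append((trade_id, "missing from counterparty"))
        elif other != amount:
            breaks.append((trade_id, f"amount mismatch: {amount} vs {other}"))
    return breaks

def orchestrator(internal_raw, external_raw):
    """Coordinate the agents and collect the reconciliation result."""
    ours = normalizer_agent(internal_raw)
    theirs = normalizer_agent(external_raw)
    return matcher_agent(ours, theirs)

internal = [{"id": "t1", "amount": "100.00"}, {"id": "t2", "amount": "250.50"}]
external = [{"id": "T1", "amount": "100.0"}, {"id": "T2", "amount": "250.55"}]
print(orchestrator(internal, external))  # flags the T2 amount mismatch
```

In a production agentic system each “agent” would typically be an LLM-backed component with its own tools and context, but the orchestration pattern – specialised workers coordinated by a controller – is the same.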

Product ROI Won’t Come Without Good Preparation

In another keynote address, Chris Pierpan, global leader, communities of practice at Salesforce, highlighted how the slow pace of ROI on some AI projects was often due to an absence of context that makes it difficult for agents to understand metadata and nuance. A single source of truth is necessary, giving discipline to data and helping to eliminate inaccuracies. From this platform, organisations can move from pilot to successful production.

A host of panel sessions allowed participants to dig deeper into a range of important topics.

User C Level Panel: The AI Value Mandate – A CDO Playbook For Measuring And Delivering ROI: Clear business ownership of good-quality data is the essential ingredient for delivering ROI, agreed panellists moderated by Julia Bardmesser, chief executive at Data4Real. Organisations must transition from simple usage metrics to robust data governance and rich metadata that provide the necessary context for AI to solve specific business problems, the panel concluded.

Holistic AI Governance – From Black Box To Business Value: Effective AI governance requires a unified, cross-functional framework where leaders in data, security and risk collaborate under a single taxonomy to manage system-level threats, agreed the panel moderated by chief data officer Marla Dans. Organisations are shifting toward continuous, runtime monitoring – including “kill switches” and human-in-the-loop validation – to address unique genAI risks. Success depends on operationalising these policies through risk registries and stress tests, ensuring that every AI decision is rooted in a “golden source” of classified, authoritative data.

Architecting The Intelligent Ecosystem: AI As The Blueprint And The Builder: Some things never change, especially data management challenges, and the same old remedies apply: governance and control. Except, today, these can be achieved through AI, so long as firms prioritise business process architecture, embed context and lineage directly into data pipelines, ensure AI execution remains deterministic and traceable, and recognise that high-quality, well-governed data is the foundational requirement for any intelligent ecosystem.

The Intelligent Data Marketplace: From Static Products To AI-Powered Agents: Agentic AI products offer a more dynamic and interactive approach to problem solving, but need clean and good quality data to work effectively, panellists said. Their benefits are many and include monetisation of unstructured data and more effective decision making. Baking trust into data and focusing products on producing good decisions will ensure they work effectively, the panel concluded.

From Rulebook To Report – Applying GenAI To Automate Regulatory Compliance: AI can help compliance teams in their reporting because it is better suited to ensuring compliance data lineage is complete, it can track changes to regulations and it can automate reporting routines. Humans will still play an essential role, for instance in complex filings and in ensuring accountability – all data submitted must be explainable and defensible. Foundational data best practices, such as data mastering and quality, apply here too, to ensure accurate outcomes.

