
Ensuring artificial intelligence deployments are securely governed without stymieing their potential is a delicate balancing act. It requires carefully drawn policies, frameworks and processes.
As deployment of the technology expands and its capabilities and complexity multiply, the governance structure must adapt and evolve.
How to get this right is among the most important topics swirling right now around AI in the financial space. In some cases, it is requiring a rethink of how organisations work as an enterprise.
The thorny practicalities of devising an AI governance strategy that is robust and responsive to change will be a core theme at the inaugural AI in Data Management Summit New York, held by A-Team Group this week.
During the panel discussion “Holistic AI Governance – From Black Box To Business Value”, speakers drawn from the data and financial industries will discuss the key challenges in building a governance strategy and the best practices and methods for executing those plans.

Security Forms the Bedrock of Any Governance Strategy
Among the most important considerations for any governance strategy is the need for a secure policy. Not only is security essential to the safe use of AI, but the fear of getting it wrong is also a drag on AI deployment for many firms. According to research firm Verdantix, almost two-thirds of firms “consider cybersecurity a significant or the most significant barrier to AI adoption”.
One of the leaders set to speak at the session, Arun Maheshwari, head of model risk control, legal and compliance at Morgan Stanley, laid out four tenets for a secure governance policy.
Data protection is first on his list of must-haves, in which any policy stresses “strict data classification/allowed-use rules, minimisation, encryption, redaction and tight retention, especially regarding prompts/logs/vector database”, Maheshwari told Data Management Insight.

Control and access, defence against generative AI-specific attacks and vendor monitoring complete Maheshwari’s four-point safety net. The final protection also includes responding to vendor issues and holding vendors accountable.
A robust governance policy will ensure that AI is implemented safely and responsibly throughout its lifetime. It will also establish accountability for the decisions the AI makes. An important enabler of this is model transparency.
Prising Open Black Box AI
Without the ability to see how the AI is being directed and controlled, efforts to determine accountability will be hamstrung, AI risks become amplified and trust in outputs is undermined.
Nevertheless, opaque “black box AI” is attractive to many organisations because it can deliver better accuracy and performance, it protects firms’ IP, it can prevent human bias and it can scale very well.
Cheryl Benoit, executive director – risk data steward at Mizuho Bank, another of the panellists, said black boxes shouldn’t be secret boxes.
“Black-box cannot mean ‘no visibility’,” Benoit told Data Management Insight. “Firms need traceability of inputs and transformations, proof of data quality and monitoring that detects performance degradation, bias and unexpected behaviour so that issues can be explained, escalated and remediated.”
Black box AI can be deleterious to regulatory compliance but needn’t be, she added.
“Regulators expect evidence-based oversight,” she said. “Clear ownership, robust testing and ongoing monitoring – not just good back-testing results – are vital.
“Strong data governance, including lineage and control documentation, is what makes black-box models regulator-ready.”
New Approaches Needed to Govern AI
Among the other topics to be discussed by the panel will be:
- How the CDO, CISO and chief risk officer can collaborate to build a single framework that covers model risk, bias and data security
- How to protect sensitive data against attacks such as prompt injection, and who ultimately owns the risk of leakage in GenAI applications
- How firms can use AI as an active tool to automatically scan governance metrics and KPIs.
The session will be moderated by Marla Dans, chief data officer and head of data governance, formerly of Chicago Trading. Her panel will also include Peggy Tsai, AI and data product director at JP Morgan, and Chris Pierpan, senior director, communities of practice at Informatica.
- A-Team Group’s AI In Data Management NYC will be held at @Ease 1345 Avenue of the Americas, New York, on March 19. To book your place, click here for registration details.