About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Adopting a Principles-Based Framework for AI Governance


Governance in IT is a well-established discipline underpinned by multiple standards from international and national organisations such as ISO, IEC and AICPA. What these standards share is a principles-based approach to implementation within the organisation.

A new governance framework to cover emerging AI technologies is not just a necessity but an urgent need. The significant potential for productivity gains, and the increased risk profiles for firms that deploy these technologies in critical business functions, demand swift action. It’s reasonable to expect AI governance certifications to be added to service contracts requiring SOC 2 and ISO 27001/27701 certifications, and the time to prepare for this is now.

AI has been widely adopted across capital markets for over two decades, in areas including algorithmic trading, wealth management, risk management, and compliance. But in October 2022, a highly disruptive AI technology was released that took the market by storm, leaving regulators, compliance teams, and governments alike wondering how to respond. Within two months of its launch, ChatGPT from OpenAI had already gained over 100 million monthly users. The ease of access to new, powerful technology was met with excitement and apprehension as people began to understand that beyond the potential for good, there was also potential for harm if left unchecked.

This article examines the common principles that underpin current and emerging best practices for AI governance.

The Bletchley Declaration

In November 2023, representatives of governments and international organisations from 28 countries came together at the UK government-sponsored AI Safety Summit to consider the need for a global commitment to AI’s safe, trustworthy, and responsible development.

Bletchley Park’s historical significance as the venue for this summit is symbolic. It was the wartime cryptography hub and home of the codebreakers who contributed significantly to the Allied victory in World War II, and it is also where Alan Turing worked; his later proposal of the imitation game, now known as the Turing Test, helped lay the foundations of artificial intelligence. This historical nexus of innovation and strategic significance makes it a fitting backdrop for advancing AI safety and governance.

The summit’s output is an international commitment to the responsible development of AI and provides a helpful backdrop for considering a principles-based approach to AI governance.

The Bletchley Declaration highlights critical principles for developing and governing artificial intelligence, emphasising AI’s potential to significantly enhance global well-being. The declaration advocates for AI systems to be developed and utilised in a manner that ensures safety, centres on human needs, and maintains trustworthiness and responsibility. It underscores the importance of inclusive AI that promotes public trust and contributes to economic growth, sustainability, and the protection of human rights.

Additionally, the declaration stresses the need for international collaboration to manage AI’s global challenges effectively. It calls for a unified approach to AI governance that fosters innovation while upholding rigorous safety and ethical standards.

The declaration also recognises the necessity for vigilance and adaptability in AI governance to address emerging risks and unforeseen consequences, especially from advanced AI technologies that may have significant impacts. This requires a governance framework that is flexible enough to evolve as new information and technologies emerge.

Lastly, the declaration advocates for a proactive, principle-based approach to AI governance. This approach aims to harness AI’s transformative potential while ensuring its development is aligned with human values and global standards. It emphasises safety, ethical responsibility, and inclusiveness through sustained international efforts and continuous evaluation.

SR 11-7 – Model Risk Management

All AI systems are built on an underlying model, typically trained using supervised or unsupervised methods.
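To make that distinction concrete, the following is a minimal, illustrative Python sketch (toy code, not a production method, and not taken from any of the standards discussed here) of the two training regimes: a supervised fit learned from labelled pairs, and an unsupervised clustering that discovers structure without labels.

```python
def fit_supervised(xs, ys):
    """Supervised: learn from labelled (input, output) pairs.
    Here, a least-squares slope through the origin."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_unsupervised(xs, iters=10):
    """Unsupervised: discover structure with no labels at all.
    Here, a tiny 1-D two-centroid k-means."""
    centroids = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for x in xs:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)
```

The supervised routine needs the answer (`ys`) for every training example; the unsupervised one is handed raw observations and finds the grouping itself.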

Issued on April 4, 2011, by the Board of Governors of the Federal Reserve and the Office of the Comptroller of the Currency, SR 11-7 sets forth comprehensive principles and practices for managing the risks associated with analytical models. While initially designed for banking institutions, these principles can be readily adapted for general AI governance across capital markets, including Generative AI (GenAI) and large language models (LLMs).

Banks and financial institutions rely on quantitative models in decision-making processes for activities ranging from credit underwriting to risk measurement, capital adequacy assessments and compliance. SR 11-7 emphasises the necessity of robust model risk management due to increased reliance on these models, particularly as they are applied to more complex products in broader, dynamic cross-border conditions.

The guidance establishes a comprehensive framework for effective model risk management, with rigorous validation as a central feature. It underscores the importance of sound development, implementation, and utilisation within comprehensive governance and control mechanisms. It points out that while some firms may already possess robust practices, all must ensure their policies and procedures align with these risk management principles and supervisory expectations, tailored to their specific risk exposures and business activities.

Model risk arises from potential errors in models or their incorrect application, which can lead to significant adverse outcomes. The guidance delineates the core components of a model—input data, processing, and reporting—and the necessity for each to be well-designed to mitigate risks. Principles of effective model risk management are laid out, emphasising the importance of comprehensive validation practices, active management, and fostering an organisational culture that supports robust risk assessment.
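The three components named above can be sketched in code. This is a hedged illustration only — the structure (input validation, processing, reporting) follows the guidance’s decomposition, but the scoring formula and field names are hypothetical stand-ins, not anything prescribed by SR 11-7.

```python
from dataclasses import dataclass

@dataclass
class ScoreReport:
    applicant_id: str
    score: float
    flags: list

def validate_inputs(record: dict) -> list:
    """Input-data component: flag malformed data before it reaches the model."""
    flags = []
    if record.get("income", -1) < 0:
        flags.append("negative income")
    if not 0 <= record.get("utilisation", 0) <= 1:
        flags.append("utilisation outside [0, 1]")
    return flags

def score(record: dict) -> float:
    """Processing component: a deliberately simple linear stand-in model."""
    return 300 + 0.002 * record["income"] - 200 * record["utilisation"]

def run_model(record: dict) -> ScoreReport:
    """Reporting component: pair the model output with any data-quality flags
    so downstream users see both the result and its caveats."""
    flags = validate_inputs(record)
    value = score(record) if not flags else float("nan")
    return ScoreReport(record["id"], value, flags)
```

Keeping the three components separate is what lets each be reviewed, tested, and challenged on its own — the point the guidance makes about well-designed components mitigating model risk.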

SR 11-7’s Model Development, Implementation and Use section highlights best practices in model development, emphasising the importance of a development process that aligns with the firm’s objectives and policies. It stresses the critical nature of selecting appropriate data and methodologies, thorough testing, and continuous monitoring to ensure models perform as intended under varying conditions. These principles also include the importance of a representative development team incorporating business, technical and data science expertise. Understanding the limitations and assumptions embedded in models is critical to their safe deployment.

Model validation is an essential process that ensures models perform as expected and highlights potential limitations and assumptions. Best practices include conducting validation activities independently from model development and use, employing personnel with appropriate expertise, and ensuring that validation processes are thorough and lead to actionable insights. Validation should be an ongoing process that adapts as new information becomes available and market conditions evolve.
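One common ongoing-validation check is population stability: comparing the distribution of a model input or score at development time against recent production data. The sketch below implements the Population Stability Index (PSI); the 0.1/0.25 thresholds are widely used rules of thumb, not values set by SR 11-7 or any regulator.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a recent one.
    Higher values mean the production distribution has drifted further
    from the one the model was validated on."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_status(value: float) -> str:
    """Map a PSI value to a review action (rule-of-thumb thresholds)."""
    if value < 0.1:
        return "stable"
    if value < 0.25:
        return "monitor"
    return "revalidate"
```

A check like this running on a schedule is one concrete way validation “adapts as new information becomes available and market conditions evolve”: a drifting input distribution triggers review before model performance visibly degrades.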

Effective governance requires clear policies and procedures that define risk management roles and responsibilities across the organisation. The guidance stresses the need for a robust governance framework that supports rigorous practices in model development, implementation, use, and validation. It also highlights the importance of senior management and board oversight in establishing and maintaining a culture that prioritises sound model risk management.

SR 11-7’s conclusion reiterates that firms must ensure their model risk management practices are robust, comprehensive, and consistent with any supervisory guidance provided. It calls for firms to review and adjust their practices continually to keep pace with changes in markets and business operations, emphasising the dynamic nature of model risk management.

SOC 2 – AICPA

The core principles of SOC 2, defined as Trust Service Criteria, encompass five main areas essential for managing and protecting customer data, particularly in service organisations. These principles are:

Security is fundamental for protecting systems and data from unauthorised access, disclosure, and damage. It includes implementing safeguards, including firewalls, encryption, and access controls, to maintain data confidentiality and integrity.

Availability ensures that the systems and data are available for operation, typically documented in a service level agreement (SLA). This involves having redundant systems, disaster recovery plans, and performance monitoring to minimise downtime.

Processing Integrity ensures system processing is complete, valid, accurate, timely, and authorised. Organisations must use process monitoring and quality assurance to maintain data processing integrity and avoid errors or unauthorised alterations.

Confidentiality protects sensitive or confidential information from unauthorised access and disclosure throughout its lifecycle. Measures include encryption, secure data storage, and regular updates to security protocols to address emerging threats.

Privacy addresses the proper handling of personal information in accordance with applicable data protection regulations. This includes minimising data duplication, consent management, and implementing robust access controls to safeguard personal information.

Achieving SOC 2 compliance involves a rigorous process that starts with a readiness assessment to evaluate current practices against these principles, identifying gaps, and implementing necessary controls. Continuous monitoring and regular audits are required to maintain compliance and ensure the organisation adapts to new security challenges and regulatory requirements.
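The readiness-assessment step described above — comparing current practices against the criteria and identifying gaps — can be illustrated with a toy sketch. The mapping of criteria to controls below is hypothetical example data, not AICPA text; a real assessment uses the full Trust Services Criteria.

```python
# Hypothetical example controls per Trust Service Criterion (illustrative only).
REQUIRED = {
    "security":             {"access_control", "encryption_at_rest"},
    "availability":         {"disaster_recovery_plan", "uptime_monitoring"},
    "processing_integrity": {"input_validation", "change_approval"},
    "confidentiality":      {"data_classification", "encryption_in_transit"},
    "privacy":              {"consent_management", "retention_policy"},
}

def readiness_gaps(implemented: set) -> dict:
    """Return, per criterion, the required controls not yet implemented.
    Criteria with no gaps are omitted from the result."""
    return {
        criterion: sorted(required - implemented)
        for criterion, required in REQUIRED.items()
        if required - implemented
    }
```

The output of a pass like this is essentially the remediation backlog: each listed control becomes a work item to implement before the audit, and re-running the check supports the continuous monitoring the process requires.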

ISO/IEC 42001

ISO/IEC 42001:2023 is a standard specifying requirements and guidance for establishing, implementing, maintaining, and continually improving an organisation’s artificial intelligence (AI) management system. Published in December 2023, it provides a comprehensive set of governance principles for managing AI and follows the same structure as ISO 27001, which covers security, and ISO 27701, which covers privacy.

Firms already adopting these standards will find incorporating ISO 42001 in an overall governance framework relatively straightforward. It should be noted that achieving ISO certification requires a top-down commitment from the board and C-suite. All ISO management system standards follow the same harmonised structure, beginning with the management system requirements in Clauses 4 through 10, followed by nine control objectives and 38 controls in Annex A, with implementation guidance in Annex B.

The ISO standards emphasise the importance of ownership at the organisation’s highest levels to help establish a culture of compliance and adoption of best practices.

Steering Towards a Unified Future in AI Governance

Adopting a principles-based framework for AI governance builds on established best practice. The core principles highlighted in the Bletchley Declaration, SR 11-7, SOC 2, and ISO 42001 provide a robust foundation for organisations navigating the complexities of AI development and implementation.

These principles emphasise the importance of safety and security, a human-centric approach, ethical responsibility, international cooperation, ownership by the highest levels of the organisation and comprehensive risk management.

Looking to the future, the path is challenging, but the outlook is promising. Initiatives like the Bletchley AI Safety Summit help to foster essential global dialogue and cooperation. These and similar efforts are critical in shaping a future where AI governance allows AI to deliver sustainable growth and meet compliance requirements.

The journey towards effective AI governance is challenging. However, the concerted efforts of global stakeholders to adopt and adapt to principles-based frameworks signal a proactive commitment to shaping a future where technology serves the industry with minimal risks. For senior executives and leaders in RegTech, compliance, and capital markets firms, this evolving landscape offers a unique opportunity to lead with innovation, enabling their organisations to comply with and shape the standards that will define the future of AI governance.

