
FSB Guidance for Supervisors – Tracking Systemic AI Adoption Risk

The Financial Stability Board (FSB) has released detailed guidance on how regulators and supervisors should monitor the adoption of artificial intelligence (AI) across the financial system. The report, Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector, provides a practical framework for identifying where AI use may introduce or amplify systemic risks.

The paper marks one of the most comprehensive global efforts to define what supervisors should look for as firms deploy AI and generative AI (GenAI) in trading, lending, insurance, and compliance operations.

Structured Monitoring

At the core of the FSB’s framework is a menu of indicators designed to help authorities observe both direct and indirect signals of AI adoption. These indicators cover six areas: adoption levels, third-party dependencies, market correlations, cyber threats, model risk and governance, and AI-enabled fraud or disinformation.

The FSB encourages supervisors to use both quantitative and qualitative inputs, combining firm surveys, supervisory dialogues, and publicly available datasets such as patent filings, job postings, and technology expenditure. It also advises aligning data collection with existing operational and model-risk reporting to avoid duplication and unnecessary cost.
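
As a rough illustration of how an authority might organise such a collection, the sketch below encodes the six indicator areas named in the report together with the quantitative/qualitative source distinction. The area names come from the FSB paper; the structure and field names are illustrative assumptions, not anything the report prescribes.

    from dataclasses import dataclass
    from enum import Enum

    class IndicatorArea(Enum):
        # The six areas named in the FSB's menu of indicators
        ADOPTION = "adoption levels"
        THIRD_PARTY = "third-party dependencies"
        MARKET_CORRELATION = "market correlations"
        CYBER = "cyber threats"
        MODEL_RISK = "model risk and governance"
        FRAUD_DISINFO = "AI-enabled fraud or disinformation"

    @dataclass
    class Indicator:
        area: IndicatorArea
        name: str
        quantitative: bool  # surveys and dialogues are qualitative; patent
                            # filings, job postings, tech spend are quantitative

    menu = [
        Indicator(IndicatorArea.ADOPTION, "patent filings", quantitative=True),
        Indicator(IndicatorArea.ADOPTION, "supervisory dialogue notes", quantitative=False),
    ]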

Current Supervisory Practice

According to the report, most authorities already use surveys of regulated entities as their primary method for gauging AI adoption. Some supplement this with roundtables and data from technology vendors. However, the FSB notes wide variation in definitions, scope, and frequency. There is no consistent view of what counts as AI, and few jurisdictions publish aggregated findings.

Key challenges include defining AI and GenAI in ways that remain stable across use cases, ensuring representativeness in samples, and assessing the criticality of AI applications when much of the infrastructure is supplied by third parties. The report also highlights difficulties in identifying the point at which an AI service becomes systemically important.

Focus Areas

The report outlines an illustrative, non-prescriptive “menu of indicators”, beginning with adoption patterns. Authorities are advised to track the number and type of AI applications in use, distinguishing between predictive, natural-language, and generative models, and between internally developed and externally sourced systems. Patent activity, recruitment trends, and R&D expenditure can all act as proxies for underlying innovation intensity.

Next is third-party dependency. Supervisors should record the proportion of AI systems sourced from external providers, maintain registers of critical services, and monitor incidents affecting those providers. The goal is to understand concentration risks – especially where multiple firms rely on the same data, models, or cloud infrastructure.
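
The report stops short of prescribing a concentration metric, but a Herfindahl-style index over a provider register is one conventional way to make the "many firms, one provider" pattern measurable. A minimal Python sketch, with invented provider names and counts:

    def herfindahl(counts):
        """Herfindahl-Hirschman Index over shares (0-1; higher = more concentrated)."""
        total = sum(counts)
        return sum((c / total) ** 2 for c in counts)

    # Hypothetical register: number of supervised firms relying on each AI provider
    register = {"provider_a": 12, "provider_b": 3, "provider_c": 1}
    print(f"HHI = {herfindahl(register.values()):.2f}")  # 0.60, heavy reliance on one provider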

For market correlations, the FSB recommends watching for reliance on common data or pre-trained models that could drive herd behaviour. This includes examining how AI-based decision-making might increase market volatility if many firms respond to similar signals.
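
To make the herding concern concrete, one hedged approach is to check pairwise correlation across firms' model-driven signals and flag pairs that co-move beyond a threshold. The synthetic data and the 0.8 cut-off below are illustrative assumptions, not FSB guidance.

    import numpy as np

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(4, 250))                   # 4 firms, 250 days of model signals
    signals[1] = signals[0] + 0.1 * rng.normal(size=250)  # two firms share a data source

    corr = np.corrcoef(signals)                 # pairwise correlation matrix
    pairs = corr[np.triu_indices(4, k=1)]       # the six distinct firm pairs
    print("herding flag:", bool((pairs > 0.8).any()))  # True: firms 0 and 1 co-move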

Cyber resilience is another priority. The report suggests that supervisors capture data on AI-related attacks such as model poisoning or prompt injection, as well as the growing use of AI in defensive cybersecurity tools. The FSB’s Format for Incident Reporting Exchange (FIRE) is cited as a mechanism to standardise such reporting.
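
The article does not reproduce the FIRE schema, so the record below only gestures at the kind of fields a standardised AI-incident report might carry; every field name here is a hypothetical placeholder, not the actual FIRE format.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AIIncidentRecord:
        # Hypothetical fields; NOT the actual FIRE schema
        incident_id: str
        occurred_at: datetime
        attack_type: str            # e.g. "model poisoning", "prompt injection"
        affected_service: str
        third_party_provider: str | None
        ai_used_in_defence: bool    # captures the defensive-AI dimension too

    incident = AIIncidentRecord(
        "inc-0042", datetime.now(timezone.utc),
        "prompt injection", "customer chatbot", "provider_a", ai_used_in_defence=True,
    )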

Model-risk governance and data quality remain central. Authorities are encouraged to monitor the share of AI models within firms’ model inventories, supervisory findings related to explainability or validation, and the extent of human oversight in automated systems. Finally, the FSB urges inclusion of AI-enabled fraud and disinformation in monitoring frameworks, noting that deepfakes and synthetic-identity fraud are rising sources of operational loss.
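
On the inventory point, the share indicator reduces to a simple ratio once a firm's model register carries an AI flag. A minimal sketch, assuming invented field names and a dictionary-based register:

    def ai_share(inventory):
        """Fraction of a firm's model inventory flagged as AI/ML."""
        return sum(m["is_ai"] for m in inventory) / len(inventory)

    inventory = [
        {"name": "credit_pd_v3",   "is_ai": True,  "human_in_loop": True},
        {"name": "alm_rate_model", "is_ai": False, "human_in_loop": True},
        {"name": "chat_triage",    "is_ai": True,  "human_in_loop": False},
    ]
    print(f"AI share of inventory: {ai_share(inventory):.0%}")  # 67%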

GenAI Supply-chain Risks

One of the report’s most detailed sections analyses the GenAI supply chain, divided into five layers: hardware, cloud compute, training data, pre-trained models, and user applications. The FSB observes that vertical integration and reliance on a few major technology firms create significant concentration risk. Substitutability can be limited where proprietary architectures or large fixed training costs reduce competition.

Open-weight models and smaller providers may mitigate some of this risk, but the FSB cautions that reasoning-capable models could increase inference costs even as they lower training barriers. Supervisors are advised to apply existing third-party risk frameworks – such as assessing criticality, concentration, and substitutability – to these GenAI layers.
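
The five layers come straight from the report; the mapping and the crude substitutability test below are illustrative assumptions about how a supervisor might apply those third-party criteria layer by layer.

    GENAI_LAYERS = ["hardware", "cloud compute", "training data",
                    "pre-trained models", "user applications"]

    def substitutability_flags(dependencies, min_alternatives=2):
        """Flag layers with too few viable providers to switch away from easily."""
        return {layer: len(providers) < min_alternatives
                for layer, providers in dependencies.items()}

    # Hypothetical provider mapping for one supervised firm
    deps = {
        "hardware": ["chip_vendor_x"],
        "cloud compute": ["cloud_a", "cloud_b"],
        "pre-trained models": ["model_lab_y"],
    }
    print(substitutability_flags(deps))
    # {'hardware': True, 'cloud compute': False, 'pre-trained models': True}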

Implementation Considerations

For financial institutions, the report implies a need for structured inventories of AI use cases, clear ownership lines, and assessments of materiality. Firms should extend third-party risk frameworks to cover second- and third-tier suppliers, particularly in cloud and model provision. AI services should also be reflected in recovery and resolution planning, and the balance between human-in-the-loop and fully autonomous systems should be reviewed for high-impact functions.
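
As a sketch of how a firm might operationalise that review, the rule below flags high-impact use cases running without human oversight. The materiality labels and field names are assumptions for illustration, not report language.

    def needs_oversight_review(use_case: dict) -> bool:
        """Flag high-materiality AI use cases that run fully autonomously."""
        return use_case["materiality"] == "high" and not use_case["human_in_loop"]

    use_cases = [
        {"name": "aml_alert_triage", "materiality": "high", "human_in_loop": False},
        {"name": "marketing_copy",   "materiality": "low",  "human_in_loop": False},
    ]
    print([u["name"] for u in use_cases if needs_oversight_review(u)])  # ['aml_alert_triage']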

Technology providers can expect greater scrutiny. The FSB indicates that supervisors may request disclosures about performance, reliability, and incident response for critical AI services. Interoperability and switching costs are also likely to come under examination as part of systemic-risk assessments.

For regulators and supervisors, the FSB recommends beginning with small, regular, and comparable data collections rather than large one-off surveys. Consistent taxonomies are essential, as is domestic coordination between prudential, conduct, and competition authorities. Internationally, supervisors are urged to share data and align indicator sets to improve comparability across jurisdictions.

Next Steps

The FSB intends to continue its analysis in cooperation with standard-setting bodies. Priorities include deeper assessment of third-party relationships, algorithmic trading use, and the overall intensity of AI adoption. Particular attention will be paid to areas where market behaviour may become correlated or where model-governance weaknesses could propagate across firms.

The report concludes that monitoring frameworks must be risk-based, proportionate, and timely, and stresses that most supervisors continue to prefer technology-neutral approaches. Such frameworks should leverage existing regulatory infrastructure wherever possible and focus on identifying critical dependencies before they become systemic vulnerabilities.
