In a move to support regulated firms across Europe, ISITC Europe CIC has forged a partnership with AI compliance platform Genbounty to deliver third-party AI audits and accreditation. The objective: help organizations build credible AI governance programs and ensure alignment with evolving regulatory standards.
Tackling the AI Governance Gap
As AI deployments proliferate in financial services and capital markets, many firms struggle to translate high-level rules into operational controls. While the EU AI Act defines a compliance baseline, supervisory expectations vary across jurisdictions. In the UK, the FCA’s stated approach is technology-agnostic and outcomes-focused, applying existing regulatory frameworks to firms’ use of AI rather than creating AI-specific rules. This stance is reflected across its AI Update, press material and speeches, and is operationalised via initiatives like AI Live Testing.
Gary Wright, Director of Industry Affairs and one of ISITC Europe’s founders, underscores the challenge. He argues that firms need clarity on how their AI components affect risk, controls, and accountability. Under this partnership, “ISITC Europe as a not-for-profit community interest company will act as an independent resource for members to help manage their AI components and ensure compliance.” The offering will include audits, access to verified AI testers, workshops, benchmark reports, and post-market monitoring.
From Genbounty’s side, co-founder Rob Morel positioned the platform’s role as providing ongoing regulatory insight. He explained that the system will keep users apprised of changes and help them understand “AI components utilised across enterprises.” Through the MOU, Genbounty’s tools and vetted testers will be made available to ISITC Europe’s membership.
What This Enables, and What Still Must Be Built
This collaboration is not simply a new service launch, but a signal of maturation in the AI compliance space. By offering independent assessments, it helps fill a gap between regulation and implementation, particularly for institutions lacking internal AI expertise.
Still, several challenges lie ahead:
- Operationalizing audits. Translating compliance results into practical remediation plans will require domain experience in AI, model risk management, and financial workflows.
- Maintaining independence. Because audits are performed by a provider with commercial ties to the scheme, perceptions of impartiality must be actively managed.
- Evolving rules. The EU AI Act itself is dynamic; firms will need to continually adapt their controls as standards are refined.
Nonetheless, for firms seeking credible third-party validation of AI governance, this partnership offers a clearer route than attempting that validation in isolation.