Global Regulators Turn Up Heat on Exaggerated AI Claims

Supervisors on both sides of the Atlantic are no longer content with soft warnings about artificial intelligence (AI) hype. From the United States Securities and Exchange Commission (SEC) to the United Kingdom’s Advertising Standards Authority (ASA), the direction of travel is clear: say what you do, do what you say – and prove it.

Regulators have begun testing marketing language against operational reality. In March 2024, the SEC settled cases with two investment advisers over false and misleading statements about their use of AI, imposing US$400,000 in penalties and signalling that “AI washing” – dressing up ordinary tools as cutting-edge systems – now sits in the enforcement crosshairs.

In the United Kingdom (UK), the ASA has scaled up oversight. Its November 2024 analysis found roughly 16,000 ads referring to AI across a three-month window and reiterated Committee of Advertising Practice (CAP) Code principles: don’t assert AI where none exists, don’t exaggerate capability, and avoid superiority claims without substantiation.

FCA expectations still apply

For retail business, the Consumer Duty applies; elsewhere, Principle 7 requires that communications be clear, fair and not misleading. The FCA’s AI Update ties those expectations to both model use and marketing, and locates accountability within the Senior Managers and Certification Regime (SM&CR).

The UK Competition and Markets Authority (CMA) has warned about the dynamics of foundation model markets and stands ready to use its competition and consumer protection tools. In the United States (US), the Federal Trade Commission (FTC) has put “AI washing” on its enforcement agenda – including through Operation AI Comply – signalling action where claims are deceptive or unsubstantiated.

Frequently Cited Failures

When regulators and compliance teams review AI-related marketing and disclosures, a familiar pattern of missteps emerges. These are not isolated slip-ups but recurring weaknesses that undermine credibility and invite regulatory challenge. From vague language and exaggerated claims to governance blind spots, the following examples highlight where firms most often stumble – and where enforcement scrutiny is increasingly concentrated:

  • Vague, sweeping labels. “AI-powered” as a blanket descriptor, with no explanation of scope, method or the part of the workflow involved.
  • Implied equivalence to humans. Language that suggests human-level (or superhuman) performance without robust, reproducible results under production conditions.
  • Over-attribution. Outcomes largely driven by rules, heuristics or manual review are credited to an “AI model”, creating a misleading impression of autonomy or novelty.
  • Selective evidence. Benchmarks drawn from narrow pilots or best-case datasets are presented as typical live performance.
  • Undisclosed limitations. Known issues – bias, error rates, brittleness, drift and guardrails – are omitted or buried in footnotes.
  • Badges and borrowed authority. Terms such as “regulated”, “approved”, “certified” or “first regulated AI X” are used without a genuine designation from a regulator such as the FCA or SEC.
  • Third-party conflation. Capabilities of suppliers – particularly foundation model (FM) providers – are implied to be the firm’s own, without clarifying dependency, constraints or service level limits.
  • Data ambiguity. Claims about training data, provenance or consent are high-level or contradictory, creating uncertainty about lawful basis and representativeness.
  • Version mismatch. Marketing materials describe a model lineage or release that is no longer in production, or that differs materially from what customers experience.
  • Governance gaps. No visible link between claims and accountable owners under SM&CR, nor evidence that Senior Management Function (SMF) holders have reviewed risk language.

Emerging Best Practices

In response to regulatory scrutiny, leading firms are beginning to tighten their approach to AI disclosures. Instead of relying on broad slogans or selective evidence, they are building structured processes to substantiate every claim. From cataloguing statements and instituting sign-off protocols to publishing model fact sheets and training frontline teams, these practices illustrate how compliance and credibility can move in step:

  • Claims catalogues. Many firms are now scraping and cataloguing every AI-related statement across websites, decks, factsheets, sales scripts and social posts, linking each line to its supporting evidence – or withdrawing it.
  • Promotion sign-offs. Financial promotion workflows are expanding to capture AI claims explicitly, with Legal/Compliance review and attestation from the accountable model owner.
  • Model “fact sheets”. Short model cards – purpose, data sources, limits, monitoring, human-in-the-loop design and typical error modes – are being maintained internally and, in some cases, published (a minimal sketch of such a record follows this list).
  • Language guardrails. Word lists that trigger extra scrutiny (for example, “fully autonomous”, “guaranteed accuracy”, “human equivalent”) are increasingly common; some firms block them outright absent strong evidence (a sketch of such a check also follows).
  • Pause and update protocols. Monitoring for drift, material incidents or performance regressions triggers a ‘claims freeze’ and a rapid refresh of customer-facing materials.
  • Supplier diligence. Due diligence checklists for FM and other third-party providers now cover training data provenance, evals, red teaming, safety policies and service level constraints; marketing is required to reflect those realities.
  • Advertising alignment. CAP Code guidance is being embedded into creative reviews, so ad copy mirrors the substantiation held on file and avoids superiority claims without head-to-head evidence.
  • Records and retrieval. Firms are enhancing record keeping – what was claimed, when, where it was approved, and the evidence relied upon – to support internal audit and regulator queries.
  • Front-line training. Sales and support teams are briefed on what the models can and cannot do, reducing off-script embellishment.
  • Consistency checks. Periodic sweeps compare product reality with public claims, catching version drift and ensuring disclaimers match the current build.
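
To make the fact-sheet idea concrete, here is a minimal sketch, in Python, of what such an internal record might capture. The schema, field names and sample values are illustrative assumptions only – not a regulatory template or any particular firm’s format.

```python
# Hypothetical model "fact sheet" record - schema and values are illustrative.
from dataclasses import dataclass

@dataclass
class ModelFactSheet:
    name: str                    # internal model identifier
    version: str                 # the release actually running in production
    purpose: str                 # what the model does, and where in the workflow
    data_sources: list[str]      # provenance of training/reference data
    known_limits: list[str]      # bias, drift, brittleness, typical error modes
    human_in_the_loop: bool      # are outputs reviewed before they reach customers?
    monitoring: str              # how live performance is tracked
    accountable_owner: str       # named accountable owner under SM&CR

# Example entry (all details invented for illustration).
sheet = ModelFactSheet(
    name="adverse-media-screener",
    version="2.3.1",
    purpose="Rank adverse-media hits for human review",
    data_sources=["licensed news archive", "internal case outcomes"],
    known_limits=["recall drops on non-English sources", "scores drift between retrains"],
    human_in_the_loop=True,
    monitoring="weekly precision/recall checks against analyst labels",
    accountable_owner="Head of Compliance (SMF16)",
)
print(sheet)
```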
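
Similarly, a language-guardrail check can start as little more than a word list applied to draft copy before sign-off. The sketch below is a minimal illustration; the trigger phrases and the block/review split are assumptions a firm would calibrate for itself.

```python
# Minimal language-guardrail scan for marketing copy - the phrases and actions
# below are illustrative; a real list would be owned by Compliance.
import re

TRIGGER_PHRASES = {
    "fully autonomous": "block",      # barred outright absent strong evidence
    "guaranteed accuracy": "block",
    "human equivalent": "review",     # allowed only with substantiation on file
    "ai powered": "review",
}

def scan_copy(text: str) -> list[tuple[str, str]]:
    """Return (phrase, action) pairs for each trigger phrase found in text."""
    lowered = text.lower()
    return [
        (phrase, action)
        for phrase, action in TRIGGER_PHRASES.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

draft = "Our fully autonomous, AI powered engine delivers guaranteed accuracy."
for phrase, action in scan_copy(draft):
    print(f"{action.upper()}: '{phrase}' needs substantiation or removal")
```

In practice such a scan would sit inside the financial promotion workflow described above, routing flagged copy to the Legal/Compliance review step rather than blocking it silently.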

Loose Language & Regulatory Risk

Across markets, “AI washing” is under increasing scrutiny. Regulators have moved beyond guidance: the SEC has brought cases; the ASA’s Active Ad Monitoring system now scans millions of adverts; and the FCA can act where promotions are not “clear, fair and not misleading”, with the CMA and the FTC poised to use competition and consumer protection tools. The message for firms is clear: monitor claims proactively and test them against evidence, including claims inherited from suppliers. In a market where loose language is a regulatory risk, precision and proof matter as much as product features.
