
The Latest Changes to the EU AI Act – What You Need to Know


By Nik Kairinos, CEO and Co-Founder of RAIDS AI.

AI’s influence continues to grow, reaching across sectors and across borders. In financial services, it’s fundamentally changing how firms operate, shifting them towards more precise, efficient and innovative ways of working. At a foundational level, it can analyse large volumes of data quickly, spotting patterns, generating insights and highlighting errors. But as well as speed, AI brings accuracy. In investments, it’s now being used to execute trades, with algorithms analysing markets more precisely than humans can, and it can spot potential investment risks by monitoring market conditions in real time.
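As a simple illustration of that kind of real-time monitoring, the sketch below flags price moves that are unusually large relative to recent volatility. It is a minimal, hypothetical example – a rolling z-score on returns – not anything a production trading system would rely on.

```python
import numpy as np

def flag_unusual_moves(prices: np.ndarray, window: int = 20, threshold: float = 3.0) -> list[int]:
    """Flag points where the latest return is an outlier versus recent history.

    A toy 'pattern spotting' example: compute log returns, then flag any return
    more than `threshold` standard deviations away from the rolling mean.
    """
    returns = np.diff(np.log(prices))
    alerts = []
    for i in range(window, len(returns)):
        recent = returns[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(returns[i] - mu) > threshold * sigma:
            alerts.append(i)  # index of the suspicious return
    return alerts
```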

But AI is not without its risks – rogue AI has the potential to be devastatingly costly to the financial services organisations that deploy it, and there is no shortage of examples. In January 2025, the U.S. Securities and Exchange Commission (SEC) charged quantitative investment firm Two Sigma Investments LP after an employee was able to make unauthorised changes to at least 14 of its live trading models – the algorithmic investment models used to produce forecasts and drive live trading decisions for clients – altering investment decisions relative to the intended strategies. As a result, Two Sigma voluntarily repaid around $165 million to impacted funds and accounts and agreed to pay $90 million in civil penalties. The SEC found that the firm had failed to implement appropriate access controls and to address known deficiencies over several years, resulting in investment decisions that the models would not otherwise have made.
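The failure the SEC described was one of change control: live models could be altered without authorisation. As a purely hypothetical sketch of the kind of guardrail involved, the snippet below authorises a change to a live model only when it carries an approval from a recognised role, and records every attempt in an audit log. The role names and fields are illustrative assumptions, not drawn from any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical roles allowed to approve changes to live trading models.
AUTHORISED_APPROVERS = {"model-risk-committee", "head-of-quant-research"}

@dataclass
class ModelChange:
    model_id: str
    changed_by: str
    approved_by: str | None
    description: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def authorise_change(change: ModelChange, audit_log: list[ModelChange]) -> bool:
    """Return True only if the change carries an authorised approval.

    Every attempt, approved or not, is written to the audit log so that
    unauthorised modifications are visible rather than silent.
    """
    audit_log.append(change)
    return change.approved_by in AUTHORISED_APPROVERS
```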

Similarly, on June 3, 2024, the New York Stock Exchange (NYSE) experienced a software-related technical error that led to erroneous price displays for numerous stocks, triggering volatility halts across dozens of symbols before the problem was corrected. The issue stemmed from incorrect pricing bands used in the Limit Up-Limit Down (LULD) mechanism, caused by a software release affecting consolidated pricing data. This in turn led to sharp, spurious price moves and temporary trading halts. Trading eventually resumed once the software was rolled back and normal pricing restored. An additional consequence was a reported $48 million loss absorbed by one brokerage firm, because clients had placed buy orders based on the incorrect prices during the disruption.
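For readers unfamiliar with the mechanism, LULD keeps trading inside price bands computed from a recent reference price: quotes outside the bands cannot execute, and a persistent breach pauses trading. The sketch below is a deliberately simplified illustration of that idea – in the real mechanism the band percentage varies by security tier, price level and time of day, and the reference price is a rolling average of recent trades – but it shows why corrupted reference data can make perfectly normal prices look like extreme moves.

```python
def luld_bands(reference_price: float, band_pct: float = 0.05) -> tuple[float, float]:
    """Compute simplified Limit Up-Limit Down bands around a reference price.

    Simplification: a flat 5% band. The real band percentage depends on the
    security's tier and price level, and the reference price is an average
    of trades over the preceding five minutes.
    """
    return reference_price * (1 - band_pct), reference_price * (1 + band_pct)

def check_price(price: float, reference_price: float) -> str:
    lower, upper = luld_bands(reference_price)
    if price < lower or price > upper:
        return "outside bands - order cannot execute; persistent breach pauses trading"
    return "within bands - trading proceeds normally"

# With a sane reference price, an ordinary quote is fine...
print(check_price(price=100.0, reference_price=100.50))
# ...but pair the same quote with a corrupted reference price and it looks
# like an extreme move, which is how bad band data can trigger spurious halts.
print(check_price(price=100.0, reference_price=2.00))
```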

It’s failures like these, and the increasing severity of risk as AI becomes more intelligent, that have led to the EU AI Act. The world’s first comprehensive law for AI, the Act categorises AI systems by risk level – unacceptable, high, limited and minimal – and establishes different rules for providers and users depending on that level. Different parts of the Act are being gradually rolled out and, in November, the EU announced some changes to the original legislation.

Headlines focused on alterations to the timeframe, noting that the six-month extension of high-risk system enforcement to December 2027 was a victory for the big tech companies, which had been vocal in their opposition.

However, there was another change – arguably more significant – that reports of the delay barely referenced: the shift from national authority classification to self-assessment. Because it has received so little attention, the shift risks going unnoticed by organisations. But for companies operating in financial services and using AI, understanding it is critical: fines for non-compliance can reach €35 million or 7% of total worldwide annual turnover, whichever is higher, not to mention the reputational damage that comes with public sanctions.
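The penalty ceiling is simply the higher of those two figures, so for large firms the turnover-based number dominates. A quick illustration, using arbitrary turnover figures:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious breaches: the higher of EUR 35m
    or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(f"{max_fine_eur(200_000_000):,.0f}")    # 35,000,000  (7% would only be 14m)
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 (7% of 2bn exceeds 35m)
```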

From national authority classification to self-assessment

The changes the EU made mean that legal accountability for compliance with the Act now falls directly on organisations. Essentially, the onus is on companies themselves to self-assess and certify the classification and compliance of their high-risk AI systems, rather than an outside body deciding who is and who isn’t compliant. It’s an important change because it places all legal accountability with the organisation, exposing it to substantial liability: in simple terms, there is no one else to blame if it is found to violate the Act.

For this reason, many will want to seek third-party validation. Indeed, insurance companies, investors and enterprise customers are increasingly demanding it in any case. According to the IAPP’s 2025 AI Governance survey of over 670 respondents across 45 countries, 77% of organisations are currently working on AI governance, rising to nearly 90% among those already using AI.

Additionally, engaging a third party gives the firm reassurance that it is compliant and significantly reduces the risk of liability exposure, as well as reducing the time, money and resources needed for self-assessment.


Article 17, prEN 18286, ISO 42001 – what does it all mean?

  • Article 17 of the Act mandates a quality management system (QMS) for providers of high-risk AI. Following its publication, the EU issued a European standard specifically addressing its requirements – prEN 18286.
  • prEN 18286 (Artificial Intelligence – Quality Management System for EU AI Act Regulatory Purposes) specifies the requirements for that QMS. With the presumption of conformity, organisations implementing prEN 18286 can assume they meet their Article 17 obligations.
  • ISO 42001 is the existing international standard for AI management systems, published in December 2023. While it’s voluntary, organisations that already hold ISO 42001 certification have a significant head start, as it provides the operational foundation for prEN 18286.

What should organisations be doing now?

Organisations affected by the Act mustn’t waste the six-month extension recently announced. To make sure they are prepared, they should treat it as a strategic adoption window.

The immediate steps they need to take – illustrated in the sketch after this list – are:

  • Understand whether they use technology that falls into the ‘high risk’ category. The scope of the Act is wide-reaching; any AI model used in the EU – regardless of where it originates – is covered. So, if a financial services firm, for example, is deploying AI in the EU, or is a user of AI with colleagues, partners, teams or stakeholders in the EU, then it needs to comply.
  • Know whether they have existing ISO 42001 certification or are working towards it.
  • Understand the requirements of prEN 18286 and take steps to ensure they’re met.
  • Decide whether they wish to have third-party validation and find a provider, or take on the legal accountability that comes with self-assessment.
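As a purely hypothetical triage aid, the sketch below turns the checklist above into a function that lists a firm’s open actions for a given AI use case. The field names and conditions are illustrative assumptions, not terms taken from the Act or either standard.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Illustrative record for triaging one AI system against the steps above."""
    name: str
    used_or_deployed_in_eu: bool      # includes EU partners, teams or stakeholders
    likely_high_risk: bool            # e.g. uses in the Act's high-risk category
    iso_42001_certified: bool
    pren_18286_gap_assessed: bool
    third_party_validation_decided: bool

def open_actions(uc: AIUseCase) -> list[str]:
    if not uc.used_or_deployed_in_eu:
        return [f"{uc.name}: outside the Act's territorial scope - keep under review"]
    actions = []
    if uc.likely_high_risk and not uc.iso_42001_certified:
        actions.append("Consider ISO 42001 as the operational foundation")
    if uc.likely_high_risk and not uc.pren_18286_gap_assessed:
        actions.append("Run a prEN 18286 gap assessment against Article 17 QMS duties")
    if not uc.third_party_validation_decided:
        actions.append("Decide between third-party validation and self-assessment")
    return actions or [f"{uc.name}: no open actions recorded"]
```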

As the first comprehensive legislation governing AI, the EU AI Act will be watched closely by everyone. For companies operating in financial services, where reputation is everything, becoming compliant early on will ensure they’re not caught out later.

Nik Kairinos is the CEO and Co-Founder of RAIDS AI, a real-time AI safety monitoring platform designed to detect and alert organisations to rogue AI behaviour before harm spreads.

With over 40 years of experience in artificial intelligence and deep learning, Nik has dedicated his career to turning advanced research into practical, trustworthy solutions that empower people to use AI safely and effectively. He believes that true innovation demands safety, transparency, and responsibility – principles that ensure AI strengthens society instead of threatening it.

