
EU Parliament Approves Landmark Artificial Intelligence Act

The EU Parliament has approved the Artificial Intelligence Act, marking the world’s first comprehensive regulation of AI. The regulation establishes obligations for AI based on its potential risks and level of impact, and is designed to ensure safety and compliance with fundamental rights, democracy, the rule of law and environmental sustainability, while boosting innovation.

The act still needs to be formally endorsed by the European Council and will enter into force 20 days after its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with exceptions: bans on prohibited practices will apply after six months; codes of practice after nine months; general-purpose AI rules, including governance, after 12 months; and obligations for high-risk systems after 36 months.

The regulation covers all types of AI, including generative AI, and is no doubt being scrutinised by capital markets participants as they continue to extend their use of the technology – more on this coming soon.

The act sets out key measures including:

  • Safeguards on general-purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations

It also covers high-risk AI systems, which are not specifically identified but are likely to include those used in capital markets. These systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.
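To make the use-log and human-oversight obligations more concrete, below is a purely illustrative Python sketch – not drawn from the act or from any firm’s implementation – of how a capital markets participant might record each output of a high-risk model and hold it for human review. All names (AIDecisionRecord, credit_risk_scorer_v2) are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class AIDecisionRecord:
        """One entry in a hypothetical use log for a high-risk AI system."""
        system_name: str                       # e.g. an internal scoring model
        input_summary: str                     # what the model was asked to assess
        output_summary: str                    # what it recommended
        explanation: str                       # plain-language rationale, for transparency
        human_reviewer: Optional[str] = None   # set once a person signs off
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def awaiting_oversight(self) -> bool:
            # Any record without a named reviewer is still awaiting human oversight.
            return self.human_reviewer is None

    # Log a model output and hold it for review before it is acted on.
    record = AIDecisionRecord(
        system_name="credit_risk_scorer_v2",
        input_summary="SME loan application #1042",
        output_summary="Recommend decline (score 0.31)",
        explanation="Short trading history and high leverage drove the low score.",
    )
    print(record.awaiting_oversight())  # True until a reviewer is recorded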

To encourage innovation across the board, regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to SMEs and start-ups to develop and train innovative AI before it goes to market.
