EU AI Act

The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels and imposing requirements for high-risk AI systems.

Also known as: AI Act, European AI Regulation

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. Adopted in 2024, it takes a risk-based approach to AI regulation, applying stricter rules to higher-risk applications. It aims to ensure that AI systems are safe and transparent and that they respect fundamental rights.

Risk Categories

Unacceptable Risk (Banned)

  • Social scoring of individuals by public or private actors
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
  • AI that manipulates behavior or exploits the vulnerabilities of specific groups
  • Emotion recognition in workplaces and schools

High Risk

  • Critical infrastructure
  • Education and employment
  • Law enforcement
  • Migration and asylum
  • Access to essential services

Limited Risk

  • Chatbots (transparency required)
  • Emotion recognition systems (outside the banned workplace and school settings)
  • Deepfakes (disclosure required)

Minimal Risk

  • AI-enabled video games
  • Spam filters
  • Most AI applications
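
For teams triaging their own systems, the four tiers above can be represented as a simple classification map. The sketch below is illustrative only: the use-case labels, their tier assignments, and the default behavior are assumptions for demonstration, not classifications taken from the Act's legal text.

    # Illustrative only: a hypothetical internal triage helper, not legal advice.
    # The tier names follow the Act's four categories; the example use cases and
    # their assignments are assumptions for demonstration.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"    # banned practices (Article 5)
        HIGH = "high-risk"             # e.g. Annex III areas such as employment
        LIMITED = "limited-risk"       # transparency obligations
        MINIMAL = "minimal-risk"       # no specific obligations

    # Hypothetical mapping a compliance team might keep for known use cases.
    USE_CASE_TIERS = {
        "cv_screening": RiskTier.HIGH,
        "customer_chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
        "workplace_emotion_tracking": RiskTier.UNACCEPTABLE,
    }

    def triage(use_case: str) -> RiskTier:
        """Return the assumed tier for a known use case, defaulting to HIGH
        so unclassified systems get reviewed rather than waved through."""
        return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

    print(triage("customer_chatbot").value)  # limited-risk

Defaulting unknown use cases to the high-risk tier is a conservative design choice: it forces a review rather than silently treating an unclassified system as minimal risk.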

Key Requirements for High-Risk Systems

  • Risk management systems
  • Data governance standards
  • Technical documentation
  • Record-keeping
  • Transparency to users
  • Human oversight
  • Accuracy and robustness
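
One way to track these obligations internally is a self-assessment checklist that mirrors the requirement headings above. The sketch below is a hypothetical illustration: the field names are assumptions, not terminology from the Act, and completing such a checklist does not by itself establish conformity.

    # Illustrative sketch of an internal self-assessment checklist mirroring the
    # requirement areas listed above. Field names are assumptions, not Act terms.
    from dataclasses import dataclass, fields

    @dataclass
    class HighRiskChecklist:
        risk_management_system: bool = False
        data_governance: bool = False
        technical_documentation: bool = False
        record_keeping: bool = False
        user_transparency: bool = False
        human_oversight: bool = False
        accuracy_and_robustness: bool = False

        def gaps(self) -> list[str]:
            """List requirement areas not yet marked as addressed."""
            return [f.name for f in fields(self) if not getattr(self, f.name)]

    checklist = HighRiskChecklist(technical_documentation=True, human_oversight=True)
    print(checklist.gaps())  # remaining areas to address before deployment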

Timeline

  • 2024: Act enters into force
  • 2025: Prohibitions on unacceptable-risk practices begin to apply
  • 2026: Most remaining obligations, including those for high-risk AI systems, apply