Active International Standard. Last Updated: August 2025

EU AI Act

Regulation (EU) 2024/1689 — Artificial Intelligence Act

Publishing Organization: European Union

Standard Introduction

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament in March 2024. The AI Act establishes a risk-based approach to AI regulation, categorizing AI systems into four risk levels: unacceptable (banned), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers of AI systems placed on the EU market, regardless of where they are established.

The AI Act prohibits AI practices deemed an unacceptable risk, including social scoring by governments and real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement). High-risk AI systems, such as those used in hiring, credit scoring, healthcare, education, and critical infrastructure, must meet strict requirements including risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations around transparency and copyright compliance. Fines reach up to €35 million or 7% of global annual turnover, whichever is higher. The Act is implemented in phases: prohibited practices have been banned since February 2025, and high-risk system rules apply from August 2026.


Risk-Based Classification

AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).


Prohibited Practices

Bans social scoring by governments, real-time remote biometric identification in public spaces (with exceptions), and AI that manipulates or exploits vulnerabilities.


Transparency Requirements

AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. General-purpose AI models must publish training data summaries.

Key Obligations

  • Prohibited AI practices (social scoring, manipulative AI)
  • High-risk AI system conformity assessments
  • Risk management system for high-risk AI
  • Data governance and quality requirements
  • Transparency and human oversight obligations
  • General-purpose AI model obligations
  • AI literacy requirements for deployers
  • Post-market monitoring and incident reporting

Who Needs to Comply?


Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.

Key Requirements

1. Risk Classification

Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.

2. Conformity Assessment (High-Risk)

High-risk AI systems must undergo conformity assessment before market placement. This includes risk management, data quality checks, technical documentation, and in some cases third-party audits.

3. Transparency Obligations

Ensure users know they are interacting with AI. Label AI-generated content including deepfakes. General-purpose AI providers must publish model cards and training data summaries.

4. Human Oversight

High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.

5. Post-Market Monitoring

Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.
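The tiered classification underlying step 1 can be sketched in code. This is a minimal illustrative sketch, not a legal tool: the `RiskTier` enum and the `USE_CASE_TIERS` mapping are hypothetical names invented here, and the mapping only covers the example use cases named in this document; real classification requires legal analysis of Article 5 and Annex III of the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # banned outright
    HIGH = "high-risk"            # strict conformity requirements
    LIMITED = "limited-risk"      # transparency obligations
    MINIMAL = "minimal-risk"      # no specific requirements

# Illustrative mapping of use cases to tiers, drawn from the
# categories named in this document (not an exhaustive or
# authoritative list).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return USE_CASE_TIERS[use_case]
```

In practice a compliance workflow would branch on the tier: an `UNACCEPTABLE` result blocks market placement entirely, while `HIGH` triggers the conformity-assessment, oversight, and monitoring steps above.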

Penalties & Enforcement


Fines up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices. Up to EUR 15 million or 3% of turnover for other violations. SMEs and startups face proportionally lower caps.
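The "whichever is higher" rule means the effective cap depends on company size. A simplified sketch of the arithmetic (the function name is invented here, and SME caps and the Act's other fine tiers are deliberately omitted):

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 prohibited_practice: bool) -> float:
    """Simplified upper bound on AI Act administrative fines.

    Prohibited practices: up to EUR 35M or 7% of global annual
    turnover, whichever is higher. Most other violations: up to
    EUR 15M or 3%, whichever is higher.
    """
    if prohibited_practice:
        return max(35_000_000, 0.07 * global_annual_turnover_eur)
    return max(15_000_000, 0.03 * global_annual_turnover_eur)
```

For a firm with EUR 1 billion in turnover, the prohibited-practice cap is 7% (EUR 70 million) because it exceeds the EUR 35 million floor; for a small firm the fixed amount dominates.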


Implementation Timeline

  • April 2021: European Commission proposal published, presenting the first draft of a comprehensive AI regulation
  • March 2024: European Parliament approved the final text, officially adopting the AI Act
  • August 2024: The Regulation entered into force 20 days after publication in the Official Journal
  • February 2025: Ban on prohibited AI practices took effect (social scoring, manipulative AI, and certain biometric uses)
  • August 2025: General-purpose AI rules apply; transparency and copyright obligations for GPAI models become enforceable
  • August 2026: High-risk AI system rules apply; full compliance requirements become mandatory
