EU AI Act
Regulation (EU) 2024/1689 — Artificial Intelligence Act
Standard Introduction
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, approved by the European Parliament in March 2024 and in force since 1 August 2024. The AI Act establishes a risk-based approach to AI regulation, categorizing AI systems into four risk levels: unacceptable (banned), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers of AI systems placed on the EU market, regardless of where they are established.
The AI Act prohibits AI practices that pose an unacceptable risk, including social scoring and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). High-risk AI systems — used in hiring, credit scoring, healthcare, education, and critical infrastructure — must meet strict requirements including risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations around transparency and copyright compliance. Fines reach up to €35 million or 7% of global annual turnover, whichever is higher. The Act is implemented in phases: prohibited practices are banned from February 2025, general-purpose AI obligations apply from August 2025, and most high-risk system rules apply from August 2026.
Risk-Based Classification
AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).
Prohibited Practices
Bans social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and AI that manipulates people or exploits their vulnerabilities.
Transparency Requirements
AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. Providers of general-purpose AI models must publish summaries of their training content.
Key Obligations
- Prohibited AI practices (social scoring, manipulative AI)
- High-risk AI system conformity assessments
- Risk management system for high-risk AI
- Data governance and quality requirements
- Transparency and human oversight obligations
- General-purpose AI model obligations
- AI literacy requirements for deployers
- Post-market monitoring and incident reporting
Who Needs to Comply?
Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.
Key Requirements
Risk Classification
Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.
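As a first-pass illustration of this triage, a deployer's intake tooling might encode the tiers as a simple lookup. The sketch below is a hypothetical helper: the use-case strings and the classify_risk function are assumptions for illustration, and Article 5 and Annex III of the Regulation remain the authoritative lists.

```python
# Hypothetical intake helper for triaging AI systems by EU AI Act risk tier.
# The category sets below are illustrative, not the Regulation's exhaustive lists;
# Article 5 (prohibitions) and Annex III (high-risk) are authoritative.

PROHIBITED_USES = {"social_scoring", "manipulative_ai", "exploiting_vulnerabilities"}
HIGH_RISK_USES = {"critical_infrastructure", "education", "employment",
                  "credit_scoring", "law_enforcement", "migration"}
LIMITED_RISK_USES = {"chatbot", "deepfake_generation"}

def classify_risk(use_case: str) -> str:
    """Return a first-pass risk tier for an AI system's intended use."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"        # banned outright
    if use_case in HIGH_RISK_USES:
        return "high"                # conformity assessment required
    if use_case in LIMITED_RISK_USES:
        return "limited"             # transparency obligations apply
    return "minimal"                 # no specific AI Act requirements

print(classify_risk("employment"))   # -> "high"
```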
Conformity Assessment (High-Risk)
High-risk AI systems must undergo conformity assessment before being placed on the market. This includes risk management, data quality checks, technical documentation, and, in some cases, third-party assessment by a notified body.
Transparency Obligations
Ensure users know they are interacting with AI. Label AI-generated content, including deepfakes, in a machine-readable way. General-purpose AI providers must maintain technical documentation and publish summaries of the content used for training.
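As one way a provider might make such labeling machine-readable, the sketch below wraps generated output in a disclosure envelope. The schema and field names (content, disclosure, ai_generated, model, generated_at) are assumptions for illustration; the Act requires machine-readable marking but prescribes no particular format.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Wrap generated text with a machine-readable AI-disclosure record.

    The envelope format is a hypothetical convention for illustration only.
    """
    return json.dumps({
        "content": text,
        "disclosure": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    })

print(label_generated_content("Quarterly summary...", "example-model-v1"))
```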
Human Oversight
High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.
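A minimal sketch of this pattern, assuming a hypothetical require_human_approval hook that stands in for whatever review workflow a deployer actually operates:

```python
# Minimal human-in-the-loop gate for a high-risk decision, e.g. a hiring
# recommendation. The AI output is advisory; only the human decision takes effect.

def require_human_approval(candidate_id: str, ai_recommendation: str) -> bool:
    """Ask a designated reviewer to confirm or override the AI output."""
    answer = input(f"Approve recommendation '{ai_recommendation}' "
                   f"for candidate {candidate_id}? [y/N] ")
    return answer.strip().lower() == "y"

def decide(candidate_id: str, ai_recommendation: str) -> str:
    if require_human_approval(candidate_id, ai_recommendation):
        return ai_recommendation
    return "referred_for_manual_review"
```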
Post-Market Monitoring
Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.
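As a sketch of what an internal incident record might capture before a report is filed, assuming a hypothetical schema (the Regulation defines what must be reported and by when, not this structure):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Internal record of a serious incident involving a high-risk AI system.

    Field names are illustrative assumptions, not a prescribed reporting format.
    """
    system_name: str
    description: str
    harm_observed: str
    detected_at: datetime
    reported_to_authority: bool = False
    corrective_actions: list[str] = field(default_factory=list)

incident = SeriousIncidentReport(
    system_name="resume-screening-v2",
    description="Model systematically rejected applicants over 55",
    harm_observed="potential breach of fundamental rights (non-discrimination)",
    detected_at=datetime.now(timezone.utc),
)
incident.corrective_actions.append("model rolled back; bias review opened")
```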
Penalties & Enforcement
Fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices. Up to EUR 15 million or 3% of turnover for most other violations, and up to EUR 7.5 million or 1% for supplying incorrect information to authorities. For SMEs and startups, the lower of the two amounts in each tier applies.
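Because the applicable cap is whichever figure is higher, the percentage dominates for large firms. A quick worked example with an illustrative turnover figure:

```python
def max_fine_prohibited_practices(global_annual_turnover_eur: float) -> float:
    """Upper bound for prohibited-practice violations: EUR 35 million or
    7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Illustrative turnover of EUR 2 billion -> cap of EUR 140 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(f"{max_fine_prohibited_practices(2_000_000_000):,.0f}")
```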
Official Documentation
EU AI Act (EU) 2024/1689
EUR-Lex • Full Regulation Text • All EU Languages
European Commission AI Policy
digital-strategy.ec.europa.eu • Regulatory Framework
AI Act Explorer
artificialintelligenceact.eu • Interactive Guide