Overview
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for AI, approved by the European Parliament in March 2024. The Act establishes a risk-based approach to AI regulation, classifying AI systems into four risk levels: unacceptable (prohibited), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers placing AI systems on the EU market, regardless of where they are established.
The Act prohibits AI practices deemed to pose unacceptable risk, including government social scoring and real-time biometric surveillance in public spaces. High-risk AI systems (those used in recruitment, credit scoring, healthcare, education, and critical infrastructure) must meet strict requirements covering risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations on transparency and copyright compliance. Violations can draw fines of up to EUR 35 million or 7% of global annual turnover. The Act is being phased in: bans on prohibited AI practices apply from February 2025, and rules for high-risk systems apply from August 2026.
Risk-Based Classification
AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).
Prohibited Practices
Bans social scoring by governments, real-time remote biometric identification in public spaces (with exceptions), and AI that manipulates or exploits vulnerabilities.
Transparency Requirements
AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. General-purpose AI models must publish training data summaries.
Key Obligations
- Prohibited AI practices (social scoring, manipulative AI)
- High-risk AI system conformity assessments
- Risk management system for high-risk AI
- Data governance and quality requirements
- Transparency and human oversight obligations
- General-purpose AI model obligations
- AI literacy requirements for deployers
- Post-market monitoring and incident reporting
Who Needs to Comply?
Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.
Key Requirements
Risk Classification
Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.
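The tiering decision above can be sketched as a simple triage helper. This is only an illustrative sketch: the keyword sets below are hypothetical placeholders, and the Act's actual scoping rules (e.g. the Annex III high-risk categories and their exemptions) are far more detailed and require case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative keyword sets only; not a substitute for the Act's definitions.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"critical infrastructure", "education", "employment",
                  "law enforcement", "migration", "credit scoring"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a free-text use-case description to an illustrative risk tier,
    checking the most restrictive categories first."""
    uc = use_case.lower()
    if any(k in uc for k in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(k in uc for k in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(k in uc for k in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note the ordering: prohibited uses are checked before high-risk ones, mirroring how the Act's categories escalate in restrictiveness.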
Conformity Assessment (High-Risk)
High-risk AI systems must undergo conformity assessment before market placement. This includes risk management, data quality checks, technical documentation, and in some cases third-party audits.
Transparency Obligations
Ensure users know they are interacting with AI. Label AI-generated content including deepfakes. General-purpose AI providers must publish model cards and training data summaries.
Human Oversight
High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.
Post-Market Monitoring
Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.
Penalties & Enforcement
Fines up to EUR 35 million or 7% of global annual turnover for prohibited AI practices. Up to EUR 15 million or 3% of turnover for other violations. SMEs and startups face proportionally lower caps.
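The two-tier cap described above (the higher of a fixed amount and a percentage of global annual turnover) can be expressed as a small calculation. This is a minimal sketch of the stated caps only; it does not model the proportionally lower SME caps or how authorities set actual fine amounts.

```python
# Fine ceilings stated in the text: (fixed cap in EUR, share of global turnover)
FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
}

def max_fine_eur(global_turnover_eur: float, violation: str) -> float:
    """Return the upper bound of the fine: whichever is higher,
    the fixed cap or the turnover-based cap."""
    fixed_cap, turnover_share = FINE_CAPS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)
```

For example, a company with EUR 1 billion in global turnover faces a ceiling of EUR 70 million (7%) for a prohibited practice, since that exceeds the EUR 35 million fixed cap.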