Standard Overview
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for AI, adopted by the European Parliament in March 2024. The Act establishes a risk-based approach to AI regulation, classifying AI systems into four risk levels: unacceptable (prohibited), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers placing AI systems on the EU market, regardless of where they are established.
The Act prohibits AI practices deemed to pose unacceptable risk, including government social scoring and real-time biometric surveillance in public spaces. High-risk AI systems, such as those used in recruitment, credit scoring, healthcare, education, and critical infrastructure, must meet strict requirements covering risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations on transparency and copyright compliance. Fines for violations can reach EUR 35 million or 7% of global annual turnover. The Act takes effect in phases: the bans on prohibited AI practices apply from February 2025, and the rules for high-risk systems from August 2026.
Risk-Based Classification
AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).
Prohibited Practices
Bans social scoring by governments, real-time remote biometric identification in public spaces (with exceptions), and AI that manipulates or exploits vulnerabilities.
Transparency Requirements
AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. General-purpose AI models must publish training data summaries.
Key Obligations
- Prohibited AI practices (social scoring, manipulative AI)
- High-risk AI system conformity assessments
- Risk management system for high-risk AI
- Data governance and quality requirements
- Transparency and human oversight obligations
- General-purpose AI model obligations
- AI literacy requirements for deployers
- Post-market monitoring and incident reporting
Who Needs to Comply?
Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.
Key Requirements
Risk Classification
Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.
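As a first triage step, the four-tier classification above can be sketched as a simple lookup. This is a hypothetical illustration, not legal advice: the use-case names below loosely paraphrase the Act's categories (e.g. Annex III high-risk areas) and are assumptions for the example, not the Act's authoritative wording.

```python
# Hypothetical first-pass risk triage under the EU AI Act's four tiers.
# Use-case labels are illustrative assumptions, not the Act's legal text.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {
    "critical_infrastructure", "education", "employment",
    "credit_scoring", "law_enforcement", "migration",
}
LIMITED_RISK_USES = {"chatbot", "deepfake_generation"}

def classify_risk(use_case: str) -> str:
    """Map an AI use case to a rough EU AI Act risk tier (triage only)."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high-risk"
    if use_case in LIMITED_RISK_USES:
        return "limited-risk"
    return "minimal-risk"

print(classify_risk("employment"))  # high-risk
print(classify_risk("chatbot"))     # limited-risk
```

In practice, classification depends on the specific deployment context and legal analysis; a real assessment cannot be reduced to a keyword lookup.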
Conformity Assessment (High-Risk)
High-risk AI systems must undergo conformity assessment before market placement. This includes risk management, data quality checks, technical documentation, and in some cases third-party audits.
Transparency Obligations
Ensure users know they are interacting with AI. Label AI-generated content including deepfakes. General-purpose AI providers must publish model cards and training data summaries.
Human Oversight
High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.
Post-Market Monitoring
Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.
Penalties & Enforcement
Fines up to EUR 35 million or 7% of global annual turnover for prohibited AI practices. Up to EUR 15 million or 3% of turnover for other violations. SMEs and startups face proportionally lower caps.
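The fine caps above follow a "whichever is higher" rule: the applicable ceiling is the greater of the fixed amount or the turnover percentage. A minimal sketch of that arithmetic, assuming the two tiers stated above (actual fines are set case by case by regulators):

```python
def max_fine_eur(global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of a fine: the higher of the fixed cap or the
    turnover-based cap. Illustrative only, not a penalty calculator."""
    if prohibited_practice:
        fixed_cap, pct = 35_000_000, 0.07  # EUR 35M or 7% of turnover
    else:
        fixed_cap, pct = 15_000_000, 0.03  # EUR 15M or 3% of turnover
    return max(fixed_cap, pct * global_turnover_eur)

# A firm with EUR 1 billion global turnover, prohibited-practice violation:
print(max_fine_eur(1_000_000_000, True))  # 70000000.0 (7% exceeds EUR 35M)
```

For smaller firms the fixed amount dominates, which is why the Act provides proportionally lower caps for SMEs and startups.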