Standardful
International standard in force · Last updated: August 2025

EU AI Act

Regulation (EU) 2024/1689 - Artificial Intelligence Act

Issuing organization: European Union

Overview

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament in March 2024. The Act establishes a risk-based approach to AI regulation, classifying AI systems into four risk levels: unacceptable (prohibited), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers placing AI systems on the EU market, regardless of where they are established.

The Act prohibits AI practices deemed to pose unacceptable risk, including government social scoring and real-time biometric surveillance in public spaces. High-risk AI systems, such as those used in recruitment, credit scoring, healthcare, education, and critical infrastructure, must meet strict requirements covering risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations on transparency and copyright compliance. Fines for non-compliance can reach EUR 35 million or 7% of global annual turnover. The Act is being implemented in phases: prohibitions on banned AI practices apply from February 2025, and the rules for high-risk systems from August 2026.

Risk-Based Classification

AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).
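The four-tier scheme can be sketched as a simple lookup. This is purely illustrative: the use-case names and tier assignments below paraphrase the Act's examples and are not legal categories or a compliance tool.

```python
# Illustrative sketch of the Act's four risk tiers. Use-case names are
# our own shorthand, not terms from the Regulation.
PROHIBITED = {"social_scoring", "realtime_public_biometric_id", "manipulative_ai"}
HIGH_RISK = {"recruitment", "credit_scoring", "medical_care", "education",
             "critical_infrastructure", "law_enforcement", "migration"}
LIMITED_RISK = {"chatbot", "deepfake_generator"}  # transparency obligations only

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a named use case (anything else: minimal)."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(risk_tier("recruitment"))  # high
print(risk_tier("chatbot"))      # limited
```

In practice the classification depends on the concrete deployment context (Annex III of the Regulation), not just the application domain, so any real assessment needs legal review.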

Prohibited Practices

Bans social scoring by governments, real-time remote biometric identification in public spaces (with exceptions), and AI that manipulates or exploits vulnerabilities.

Transparency Requirements

AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. General-purpose AI models must publish training data summaries.

Key Obligations

  • Prohibited AI practices (social scoring, manipulative AI)
  • High-risk AI system conformity assessments
  • Risk management system for high-risk AI
  • Data governance and quality requirements
  • Transparency and human oversight obligations
  • General-purpose AI model obligations
  • AI literacy requirements for deployers
  • Post-market monitoring and incident reporting

Who Needs to Comply?

Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.

Key Requirements

1. Risk Classification

Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.

2. Conformity Assessment (High-Risk)

High-risk AI systems must undergo conformity assessment before market placement. This includes risk management, data quality checks, technical documentation, and in some cases third-party audits.

3. Transparency Obligations

Ensure users know they are interacting with AI. Label AI-generated content including deepfakes. General-purpose AI providers must publish model cards and training data summaries.

4. Human Oversight

High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.

5. Post-Market Monitoring

Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.

Penalties & Enforcement

Fines up to EUR 35 million or 7% of global annual turnover for prohibited AI practices. Up to EUR 15 million or 3% of turnover for other violations. SMEs and startups face proportionally lower caps.
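The caps above amount to taking the greater of a fixed sum and a turnover percentage. A minimal sketch, assuming the figures quoted here (EUR 35M / 7% for prohibited practices, EUR 15M / 3% otherwise); the function name is ours, and the SME handling (lower of the two figures rather than the higher) is our reading of the Act's proportionality rule, labeled as an assumption:

```python
def max_fine_eur(turnover_eur: float, prohibited: bool, sme: bool = False) -> float:
    """Upper bound on a fine under the caps quoted above. Illustrative only,
    not legal advice. Assumption: for SMEs/startups the cap is the *lower*
    of the fixed amount and the turnover percentage."""
    fixed, pct = (35e6, 0.07) if prohibited else (15e6, 0.03)
    bound = min if sme else max
    return bound(fixed, pct * turnover_eur)

# A firm with EUR 1 billion global turnover, prohibited practice:
print(max_fine_eur(1e9, prohibited=True))  # 70000000.0, since 7% exceeds EUR 35M
```

Note that these are ceilings, not mandatory amounts: national authorities set the actual fine based on the gravity and duration of the infringement.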

Official Documents


Implementation Timeline

April 2021
European Commission proposal published - the first comprehensive draft AI regulation is presented
March 2024
European Parliament approves the final text - the AI Act is formally adopted
August 2024
AI Act enters into force - the Regulation takes effect 20 days after publication in the Official Journal
February 2025
Ban on prohibited AI practices takes effect - social scoring, manipulative AI, and certain biometric uses are banned
August 2025
General-purpose AI rules apply - transparency and copyright obligations for GPAI models become enforceable
August 2026
High-risk AI system rules apply - full compliance requirements for high-risk AI systems become mandatory
