Standardful
International standard currently in force. Last updated: August 2025

EU AI Act

Regulation (EU) 2024/1689, the Artificial Intelligence Act

Issuing organization: European Union

Overview

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament in March 2024. The Act establishes a risk-based approach to AI regulation, classifying AI systems into four risk levels: unacceptable (banned), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). It applies to providers and deployers that place AI systems on the EU market, regardless of where they are established.

The Act prohibits AI practices deemed to pose unacceptable risk, including government social scoring and real-time biometric surveillance in public spaces. High-risk AI systems, such as those used in hiring, credit scoring, healthcare, education, and critical infrastructure, must meet strict requirements covering risk management, data governance, transparency, human oversight, and accuracy. General-purpose AI models carry additional obligations on transparency and copyright compliance. Fines for violations can reach EUR 35 million or 7% of global annual turnover. The Act takes effect in phases: bans on prohibited AI practices apply from February 2025, and the rules for high-risk systems from August 2026.


Risk-Based Classification

AI systems are classified into four risk levels — unacceptable (banned), high-risk (strict rules), limited risk (transparency), and minimal risk (no obligations).


Prohibited Practices

Bans social scoring by governments, real-time remote biometric identification in public spaces (with exceptions), and AI that manipulates or exploits vulnerabilities.


Transparency Requirements

AI-generated content must be labeled. Chatbots and deepfakes must be clearly identified as AI. General-purpose AI models must publish training data summaries.

Key Obligations

  • Prohibited AI practices (social scoring, manipulative AI)
  • High-risk AI system conformity assessments
  • Risk management system for high-risk AI
  • Data governance and quality requirements
  • Transparency and human oversight obligations
  • General-purpose AI model obligations
  • AI literacy requirements for deployers
  • Post-market monitoring and incident reporting

Who Needs to Comply?


Providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. Applies to companies worldwide if their AI systems affect people in the EU.

Key Requirements

1. Risk Classification

Determine whether your AI system falls under prohibited, high-risk, limited-risk, or minimal-risk categories. High-risk includes AI in critical infrastructure, education, employment, law enforcement, and migration.
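The four-tier triage described above can be sketched as a simple lookup. This is an illustrative sketch only: the use-case names and groupings here are placeholders, not the Act's legally binding Annex III wording.

```python
# Illustrative mapping of example use cases to the Act's four risk tiers.
# The sets below are placeholders, not an authoritative legal classification.
PROHIBITED_USES = {"social_scoring", "manipulative_ai", "realtime_public_biometrics"}
HIGH_RISK_USES = {"critical_infrastructure", "education", "employment",
                  "law_enforcement", "migration", "credit_scoring", "medical"}
LIMITED_RISK_USES = {"chatbot", "deepfake_generation"}

def classify_risk(use_case: str) -> str:
    """Return the risk tier for a given AI use case (illustrative)."""
    if use_case in PROHIBITED_USES:
        return "unacceptable (banned)"
    if use_case in HIGH_RISK_USES:
        return "high-risk (strict rules)"
    if use_case in LIMITED_RISK_USES:
        return "limited risk (transparency obligations)"
    return "minimal risk (no specific obligations)"

print(classify_risk("employment"))      # high-risk (strict rules)
print(classify_risk("social_scoring"))  # unacceptable (banned)
```

In practice the classification depends on the specific context of deployment, not just the use-case label, so any such lookup is only a starting point for a legal assessment.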

2. Conformity Assessment (High-Risk)

High-risk AI systems must undergo conformity assessment before market placement. This includes risk management, data quality checks, technical documentation, and in some cases third-party audits.

3. Transparency Obligations

Ensure users know they are interacting with AI. Label AI-generated content including deepfakes. General-purpose AI providers must publish model cards and training data summaries.

4. Human Oversight

High-risk AI systems must be designed to allow effective human oversight. Deployers must assign competent persons to monitor operation and intervene when necessary.

5. Post-Market Monitoring

Providers of high-risk AI must establish post-market monitoring systems. Serious incidents and malfunctions must be reported to national authorities.

Penalties & Enforcement


Fines up to EUR 35 million or 7% of global annual turnover for prohibited AI practices. Up to EUR 15 million or 3% of turnover for other violations. SMEs and startups face proportionally lower caps.
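The caps above follow a "higher of" rule: a fixed amount or a percentage of global annual turnover, whichever is greater. A minimal sketch of that arithmetic, using the figures from this section (the function name and interface are illustrative, and SME-specific caps are not modeled):

```python
def max_fine_eur(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of a fine: the higher of a fixed cap or a turnover share."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * turnover_eur)  # EUR 35M or 7% of turnover
    return max(15_000_000, 0.03 * turnover_eur)      # EUR 15M or 3% of turnover

# For a company with EUR 1 billion global annual turnover,
# 7% of turnover (EUR 70M) exceeds the EUR 35M floor:
print(max_fine_eur(1_000_000_000, prohibited_practice=True))
```

For large firms the percentage term dominates; the fixed amounts matter mainly for smaller companies whose turnover share would fall below them.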

Official Documents


Implementation Timeline

  • April 2021: European Commission proposal published. The first comprehensive draft AI regulation is presented.
  • March 2024: European Parliament approves the final text. The AI Act is formally adopted.
  • August 2024: The AI Act enters into force, 20 days after publication in the Official Journal.
  • February 2025: Bans on prohibited AI practices take effect. Social scoring, manipulative AI, and certain biometric uses are banned.
  • August 2025: General-purpose AI rules apply. Transparency and copyright obligations for GPAI models become enforceable.
  • August 2026: High-risk AI system rules apply. Full compliance requirements for high-risk AI systems become mandatory.
