EU AI Act (Regulation 2024/1689)

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) was published in the Official Journal of the European Union on 2024-07-12 and entered into force on 2024-08-01 as the world’s first horizontal AI regulation. Core design:

  1. Four-tier risk classification: unacceptable (prohibited) / high-risk / limited risk (transparency) / minimal risk
  2. Dedicated chapter on general-purpose AI models (GPAI): Articles 51-56 + Annexes XI / XII / XIII
  3. Phased application: prohibited list 2025-02-02 → GPAI provisions 2025-08-02 → most high-risk obligations (Annex III) 2026-08-02 → high-risk systems embedded in Annex I products 2027-08-02
  4. Governance architecture: EU AI Office (center) + member-state market surveillance authorities; centralized GPAI oversight
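The four-tier logic can be sketched as a simple decision cascade. The function name and its boolean inputs are illustrative assumptions, not terminology from the Regulation; real classification requires case-by-case legal analysis of Article 5, Annex I, and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Articles 6-49)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no additional obligations"

def classify(is_prohibited_practice: bool,
             is_annex_i_or_iii_use: bool,
             triggers_transparency_duty: bool) -> RiskTier:
    # Tiers are checked strictest-first; a system falls into the first
    # tier whose condition it meets.
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_annex_i_or_iii_use:
        return RiskTier.HIGH
    if triggers_transparency_duty:  # e.g. chatbots, synthetic content
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, True).name)  # HIGH
```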

Penalty ceilings are exceptionally high: violations of the prohibited list may reach 7% of global annual turnover; other obligations up to 3%.

The Act also has extraterritorial reach: it applies to providers established outside the EU when their systems are placed on the EU market or their output is used in the EU.

| Chapter | Content | Articles |
| --- | --- | --- |
| I | General provisions | 1-4 |
| II | Prohibited AI practices | 5 |
| III | High-risk AI systems | 6-49 |
| IV | Transparency obligations (limited risk) | 50 |
| V | General-purpose AI models (GPAI) | 51-56 |
| VI | Measures in support of innovation | 57-63 |
| VII | Governance | 64-70 |
| VIII | EU database for high-risk systems | 71 |
| IX | Post-market monitoring and market surveillance | 72-94 |
| X | Codes of conduct | 95-96 |
| XI-XIII | Delegation of power, penalties, final provisions | 97-113 |

Prohibited practices (Article 5)

Effective 2025-02-02. The Act prohibits placing on the EU market, putting into service, or using AI systems that:

  1. Use subliminal, manipulative, or deceptive techniques causing material harm
  2. Exploit vulnerabilities of specific groups (including age, disability, socio-economic circumstance) to materially distort behavior
  3. Conduct social scoring (general-purpose classification/scoring of social behavior or personal characteristics) leading to disproportionate or out-of-context adverse treatment
  4. Conduct predictive policing based solely on profiling or personality features
  5. Perform untargeted scraping of facial images to build facial recognition databases
  6. Deploy emotion recognition in workplace or education settings (with medical / safety exceptions)
  7. Perform biometric categorization using sensitive attributes (with statutory law-enforcement exceptions)
  8. Conduct real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (with three narrow exceptions)

High-risk AI systems (Articles 6-49)

Classification covers two categories:

  • Annex I product safety: AI components embedded in products already covered by EU product-safety legislation (toys, machinery, medical devices, etc.)
  • Annex III standalone use cases: biometrics, critical infrastructure, education / vocational training, employment / HR management, access to essential private / public services (including credit, insurance), law enforcement, migration and borders, judicial and democratic processes

| Obligation | Article | Key point |
| --- | --- | --- |
| Risk management system | 9 | Entire lifecycle |
| Data governance | 10 | Training / validation / testing data quality |
| Technical documentation | 11 + Annex IV | Detailed documentation |
| Logging | 12 | Automatic event logs |
| Transparency | 13 | Information to deployers |
| Human oversight | 14 | Feasible and effective |
| Accuracy, robustness, cybersecurity | 15 | Quantitative metrics |
| Quality management system | 17 | Providers |
| Conformity assessment | 43 + Annex VI / VII | Self-assessment or third-party |
| CE marking and EU declaration | 47-48 | |
| EU database registration | 49 + 71 | |

General-purpose AI models (GPAI) (Articles 51-56)

  1. All GPAI models: training documentation, downstream documentation, copyright compliance (EU Copyright Directive TDM exception), publicly available training-data summary
  2. GPAI with systemic risk: cumulative training compute ≥ 10²⁵ FLOP creates an automatic presumption of systemic risk; or Commission designation
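As a rough sanity check against the 10²⁵ FLOP presumption, training compute is often estimated with the common 6 × parameters × tokens approximation for dense transformers. Both the approximation and the example model size below are assumptions for illustration, not part of the Regulation:

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2) presumption

def training_compute_flop(n_params: float, n_tokens: float) -> float:
    # Widely used heuristic for dense transformer training:
    # total FLOPs ~= 6 * N * D (forward + backward pass).
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens
compute = training_compute_flop(70e9, 15e12)
print(f"{compute:.2e}")                         # 6.30e+24
print(compute >= SYSTEMIC_RISK_THRESHOLD_FLOP)  # False: below the presumption
```

Under this heuristic, the example model sits just below the threshold; scaling either parameters or tokens by roughly 2x would cross it.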

Additional obligations for systemic-risk GPAI (Article 55)

  • Model evaluation including adversarial testing
  • Systemic-risk assessment and mitigation
  • Serious-incident reporting to the AI Office (Article 55(1)(c))
  • Cybersecurity protection of the model and physical infrastructure

Article 56 authorizes the AI Office to convene a GPAI Code of Practice. The Code was finalized on 2025-07-10; the European Commission and the AI Board confirmed its adequacy on 2025-08-01 and published the list of signatories. Since the GPAI provisions entered into application on 2025-08-02, signing the Code has become the de facto path to demonstrating compliance. See the standalone GPAI Code of Practice page.

Transparency obligations (Article 50 · limited risk)

  • Notify users when interacting with AI (chatbot exception: “obvious to a reasonably informed natural person”)
  • Machine-readable marking of synthetic content (Article 50(2))
  • Disclosure for biometric categorization / emotion recognition systems
  • Deepfake “artificially generated or manipulated” disclosure, with limited artistic / satirical exceptions

Governance (Articles 64-70)

  • EU AI Office (within Commission DG CNECT): GPAI supervision, Codes of Practice, standardization push
  • EU AI Board (member-state representatives): coordination mechanism
  • Member-state market surveillance authorities (MSAs): enforcement of high-risk systems
  • EU database (Article 71): registration of high-risk systems

Penalties (Articles 99-101)

  • Breach of the Article 5 prohibited list: up to EUR 35M or 7% of global annual turnover, whichever is higher
  • Breach of other applicable obligations (high-risk, transparency): up to EUR 15M or 3%
  • Supply of false or misleading information: up to EUR 7.5M or 1%
  • GPAI provider penalties (Article 101): up to EUR 15M or 3%
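For undertakings, Article 99 applies these ceilings on a "whichever is higher" basis (for SMEs, Article 99(6) instead takes the lower of the two). A minimal sketch of the general rule, with a hypothetical turnover figure:

```python
def penalty_ceiling(fixed_eur: float, pct: float, worldwide_turnover_eur: float) -> float:
    # Article 99 general rule for undertakings: the applicable ceiling is the
    # fixed amount or the percentage of total worldwide annual turnover,
    # whichever is higher. (SMEs: whichever is lower, Article 99(6).)
    return max(fixed_eur, pct * worldwide_turnover_eur)

# Hypothetical undertaking with EUR 2bn worldwide annual turnover
turnover = 2e9
print(penalty_ceiling(35e6, 0.07, turnover))   # 140000000.0 (Article 5 breach)
print(penalty_ceiling(15e6, 0.03, turnover))   # 60000000.0  (other obligations)
print(penalty_ceiling(7.5e6, 0.01, turnover))  # 20000000.0  (false information)
```

For a small undertaking (say EUR 100M turnover), the fixed amount dominates, so the ceiling for an Article 5 breach stays at EUR 35M.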

| Date | Content that applies |
| --- | --- |
| 2024-08-01 | Regulation enters into force |
| 2025-02-02 | Article 5 prohibited list + general provisions (Arts. 1-4) + common definitions + AI literacy obligation |
| 2025-08-02 | GPAI provisions (Arts. 51-56) + governance (Chapter VII) + most penalties |
| 2026-08-02 | High-risk systems (Annex III) + most remaining provisions |
| 2027-08-02 | Embedded high-risk (in Annex I products) + deferred public-sector provisions |

Comparison with China's AI governance regime

  • Risk tiering: EU four tiers vs. China’s Generative AI Measures Article 3 “tiered and graded supervision,” which has yet to produce concrete tiers
  • GPAI threshold: EU 10²⁵ FLOP vs. China’s absence of a quantitative compute threshold, with lines drawn by service type instead
  • Transparency: AI Act Article 50 requires machine-readable marking vs. China’s Labeling Measures + GB 45438 with more granular technical fields
  • Prohibited list: AI Act Article 5 contains 8 express prohibitions vs. China’s “shall not” clauses scattered across sectoral regulations

| Language | Source | Link |
| --- | --- | --- |
| English (original) | EUR-Lex | eur-lex.europa.eu |
| 24 official EU languages | EUR-Lex | EUR-Lex multilingual |
| Chinese (unofficial academic translation) | | Cite recognized academic translations; avoid self-made full-text translations |

  • EU AI Act Explorer (Future of Life Institute): artificialintelligenceact.eu
  • Article lookup tool (EU AI Office): digital-strategy.ec.europa.eu
  • Mueller (CEPS) commentary series
  • Bradford’s “Brussels Effect” framework applied to the AI Act
  • Engler (Brookings) on GPAI provisions
  • MacCarthy’s ongoing tracking of the Code of Practice

| Date | Event |
| --- | --- |
| 2021-04-21 | Commission proposal |
| 2023-06 | Parliament first-reading position |
| 2023-12-08 | Trilogue political agreement |
| 2024-03-13 | Parliament final vote |
| 2024-05-21 | Council approval |
| 2024-07-12 | Published in Official Journal |
| 2024-08-01 | Entry into force |
| 2025-02-02 | Prohibited list applies |
| 2025-08-02 | GPAI provisions apply |
| 2026-08-02 | High-risk systems apply |