
EU — Risk Classification

EU AI Act · Four-Tier Risk Pyramid


The four tiers (top to bottom): Unacceptable Risk · High-Risk · Limited Risk · Minimal Risk

Rule                     | Provisions             | Relationship to risk classification
EU AI Act                | Arts. 5 / 6 / 50 / 51  | Four tiers + the GPAI track
GPAI Code of Practice    | Art. 56                | Presumption-of-conformity route for GPAI
GDPR                     | Art. 35                | DPIA’s independent “high-risk” determination
DSA                      | Art. 34                | VLOP systemic-risk assessment
Digital Omnibus Proposal | n/a                    | Proposes to postpone the high-risk provisions to Dec 2027
Spain AESIA              | Member-state MSA       | First dedicated AI authority in the EU
France CNIL              | GDPR × AI              | Among the most active DPAs

Article 5 · Applicable from Feb 2, 2025.

Eight explicit prohibitions (see the AI Act Rules page):

  1. Subliminal or manipulative techniques causing harm.
  2. Exploitation of vulnerabilities.
  3. Social scoring.
  4. Predictive policing based solely on profiling.
  5. Untargeted scraping to build facial-recognition databases.
  6. Emotion recognition in the workplace or educational settings.
  7. Biometric categorisation based on sensitive attributes.
  8. Real-time remote biometric identification by law enforcement in public spaces.

Article 6 + Annex III (standalone use cases) / Annex I (product-embedded).

Annex III (standalone use cases) covers eight domains:

  • Biometrics
  • Critical infrastructure
  • Education / vocational training
  • Employment / workforce management
  • Essential private and public services (credit, insurance, public benefits)
  • Law enforcement
  • Migration / border / asylum
  • Administration of justice and democratic processes

See the AI Act Rules page for the full matrix of obligations.

Article 50 · Applicable from Aug 2, 2026.

  • AI systems interacting with natural persons (chatbots): disclose.
  • Emotion recognition / biometric categorisation: disclose.
  • Deepfakes: disclose that content is AI-generated or manipulated.
  • Text on matters of public interest: disclose (unless subject to human editorial review).
  • Generative AI output: machine-readable marking (e.g., C2PA).

Minimal risk · Voluntary best practice only (no binding obligations).
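The tier logic above (Arts. 5, 6 and 50, plus the minimal-risk default) can be sketched as a toy triage function. The field names, domain list, and boolean flags are illustrative assumptions for this sketch, not a legal test:

```python
from dataclasses import dataclass
from typing import Optional

# Toy triage of the AI Act's four tiers. All names here are
# illustrative assumptions, not a legal decision procedure.

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    prohibited_practice: bool               # any Art. 5 practice
    annex_iii_domain: Optional[str]         # one of ANNEX_III_DOMAINS, or None
    significantly_influences_outcome: bool  # i.e. the Art. 6(3) carve-out does NOT apply
    transparency_trigger: bool              # chatbot / deepfake / emotion recognition

def classify(s: AISystem) -> str:
    if s.prohibited_practice:
        return "unacceptable"   # Art. 5: banned outright
    if s.annex_iii_domain in ANNEX_III_DOMAINS and s.significantly_influences_outcome:
        return "high-risk"      # Art. 6 + Annex III
    if s.transparency_trigger:
        return "limited"        # Art. 50 disclosure duties
    return "minimal"            # voluntary best practice only

# Example: a CV-screening tool used in hiring decisions
print(classify(AISystem(False, "employment", True, False)))  # prints: high-risk
```

Note how the Art. 6(3) carve-out appears as a single flag: an Annex III system that does not significantly influence the outcome drops out of the high-risk tier, which is precisely the point of controversy flagged further down.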

The GPAI track · Two sub-tiers:

  • All GPAI: training documentation, downstream documentation, copyright policy, public training-data summary.
  • Systemic-risk GPAI (≥ 10²⁵ FLOP): adversarial testing, incident reporting, cybersecurity.

Interfaces with adjacent regimes:

  • GDPR DPIA (art. 35): makes its own “high-risk processing” determination; its scope does not perfectly overlap with AI Act high-risk.
  • DSA systemic risk (art. 34): obliges VLOPs to assess generative-AI-related risks.
  • Product Liability Directive (2024/2853): ex-post defect determinations and AI Act compliance reference each other.

Open questions:

  1. Expandability of Annex III: the Commission can extend the list by delegated act; the art. 6(3) exception (systems that do not significantly influence the outcome of a decision) is a focal point of controversy.
  2. The 10²⁵ FLOP threshold: the cumulative-compute figure can be crossed after initial training, and more models will reach it over time, raising the question of whether the “systemic-risk” population will inflate.
  3. Coupling with harmonised standards: conformity with CEN-CENELEC harmonised standards yields a presumption of compliance (art. 40); the pace of standard-setting is the critical path to full applicability in 2026.
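The compute threshold can be made concrete with the common ~6 · N · D training-FLOP approximation (6 FLOP per parameter per token for a dense transformer); the model sizes below are hypothetical, not figures for any real model:

```python
# Rough sketch of the systemic-risk compute test (≥ 10^25 FLOP).
# Uses the standard ~6 * params * tokens dense-transformer estimate;
# the example numbers are hypothetical.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOP

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """~6 FLOP per parameter per token of training data."""
    return 6.0 * n_params * n_tokens

def is_systemic_risk_gpai(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flop(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical 200B-parameter model trained on 10T tokens:
flop = estimated_training_flop(200e9, 10e12)
print(f"{flop:.2e}", is_systemic_risk_gpai(200e9, 10e12))  # prints: 1.20e+25 True
```

Because the test is cumulative, continued pre-training or large fine-tuning runs can push a model over the line after release, which is why point 2 above treats the threshold as dynamic rather than a one-off gate.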

Cross-jurisdiction comparison:

  • Systematisation: EU > China ≫ US (federal).
  • Predictability: the EU’s obligations checklist is the clearest; China relies on filing practice; the US federal layer is the vaguest.
  • Dedicated GPAI chapter: unique to the EU. China achieves functional equivalence via TC260-003, though not through legislation; the US previously had a nascent version under EO 14110 (now revoked).