
Risk Classification

Risk classification is the load-bearing engineering decision in AI governance: regulatory intensity should scale with risk, but how risk is defined, who defines it, and in what units it is measured are the three axes along which the jurisdictions diverge most.

| Jurisdiction | Classification basis | Tiers | Triggering mechanism | Representative instruments |
| --- | --- | --- | --- | --- |
| EU | Use case / context (Annex III list) + compute (GPAI 10²⁵ FLOP) | Four tiers (prohibited / high-risk / limited-risk / minimal-risk) + a separate GPAI track | Enumerated list + compute threshold | AI Act arts. 5 / 6 / 50 / 51 |
| China | Service type (deep synthesis / generative / algorithmic recommendation / anthropomorphic) + compute / user scale | De facto tiers (each departmental rule is freestanding) | Scenario lists + 1M-user threshold + filing | Generative AI Interim Measures · TC260-003 · AI Safety Governance Framework |
| United States | Federal: process-based (NIST AI RMF, context-driven); states: consequential decisions + compute | No unified federal tiers; state-by-state variation | State statutes define their own | NIST AI RMF + Colorado AI Act + California SB 53 |
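The divergence in triggering mechanisms can be made concrete in code. The Python sketch below is a toy model of the table, not any regulator's algorithm: the thresholds and list entries come from the text above, while the function names, data structure, and abbreviated lists are illustrative assumptions.

```python
from dataclasses import dataclass

# Toy sketch of the three triggering mechanisms in the table above.
# Thresholds and list entries are from the text; the routing logic
# is a simplification for illustration, not a legal test.

EU_GPAI_FLOP = 1e25            # AI Act art. 51 presumption of systemic risk
CA_SB53_FLOP = 1e26            # California SB 53 frontier threshold
CN_USER_THRESHOLD = 1_000_000  # simplified 1M-user trigger

ANNEX_III = {"biometric_id", "critical_infrastructure",
             "employment", "education", "law_enforcement"}    # abbreviated
ART_5_PROHIBITED = {"social_scoring", "predictive_policing"}  # abbreviated

@dataclass
class System:
    use_cases: set          # e.g. {"employment"}
    service_types: set      # e.g. {"generative", "deep_synthesis"}
    training_flop: float
    registered_users: int = 0

def eu_tier(s: System) -> str:
    """EU: enumerated use-case lists first, compute as a secondary axis."""
    if s.use_cases & ART_5_PROHIBITED:
        return "prohibited (art. 5)"
    if s.use_cases & ANNEX_III:
        return "high-risk (art. 6 / Annex III)"
    if s.training_flop >= EU_GPAI_FLOP:
        return "GPAI with systemic risk (art. 51, separate track)"
    return "limited / minimal risk"

def ca_in_scope(s: System) -> bool:
    """California SB 53: compute-first."""
    return s.training_flop >= CA_SB53_FLOP

def cn_filings(s: System) -> set:
    """China: scenario lists plus a user-scale trigger, each its own rule."""
    hits = s.service_types & {"algorithmic_recommendation",
                              "deep_synthesis", "generative"}
    if s.registered_users >= CN_USER_THRESHOLD:
        hits.add("user-scale obligations")
    return hits

m = System(use_cases={"chatbot"}, service_types={"generative"},
           training_flop=5e25, registered_users=2_000_000)
print(eu_tier(m), ca_in_scope(m), cn_filings(m))
# -> GPAI with systemic risk (art. 51, separate track) False
#    {'generative', 'user-scale obligations'}
```

The same system lands in three unrelated places: a separate EU track keyed to compute, out of scope in California, and two freestanding filing obligations in China.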

Theoretical foundations of the EU’s four-tier model

  • Veale & Zuiderveen Borgesius (2021), “Demystifying the Draft EU Artificial Intelligence Act” (Computer Law Review International): the earliest systematic reading of the draft AI Act, criticising the Annex III list as a “political compromise rather than risk science”.
  • Smuha, Ahmed-Rengers, Harkens et al. (2021), “How the EU Can Achieve Legally Trustworthy AI”: an assessment grounded in fundamental-rights analysis.
  • Bradford (2020), The Brussels Effect (Oxford): the framework of EU regulatory exportation that is now routinely applied to the AI Act as a textbook case.
  • Engler (Brookings): a running commentary on gaps and patches during AI Act implementation, with especially detailed work on the GPAI provisions.
  • Mueller (CEPS): sustained critique of the dynamic expansion of the high-risk list and the presumption-of-conformity mechanism.

Academic readings of China’s “scenario-based, agile” approach

  • Xue Lan 薛澜 (Tsinghua School of Public Policy and Management): articulated the formula “inclusive, prudent, agile, and effective” (baorong shenshen, minjie youxiao), now the standard official and academic shorthand for China’s AI governance paradigm.
  • Zhang Linghan 张凌寒 (China University of Political Science and Law):
    • Argues that Chinese governance is “scenario-based ex-ante regulation” — sorted by service form rather than by risk tier.
    • Calls for a transition from “policy-driven” to “rule-of-law-based” governance.
  • Matt Sheehan (Carnegie Endowment):
    • “Tracing the Roots of China’s AI Regulations” (2024) is the most systematic English-language analysis of the evolution of China’s AI rule-making.
    • Argues that China’s scenario-based path predates portions of the EU AI Act and exerted reverse influence.
  • Paul Triolo: policy-practitioner tracking; has repeatedly noted that China’s “classified and graded” supervision principle remains largely hollow.
  • Olivia’s thesis, A Comparative Analysis of AI Governance in China, the US, and the EU (2026): proposes a three-layer “structure — institutions — choices” framework, dividing the jurisdictions into China’s “agile coordination”, the US’s “voluntary risk management”, and the EU’s “ex-ante compliance”.

Structural explanations of the US’s “no unified tier” model

  • Calo (University of Washington), “Artificial Intelligence Policy: A Primer and Roadmap” (2017): an early diagnosis of US AI-governance fragmentation.
  • Selbst & Barocas (2018), “The Intuitive Appeal of Explainable Machines” (Fordham Law Review): structural problems in algorithmic accountability.
  • Lehr & Ohm (2017), “Playing with the Data: What Legal Scholars Should Learn About Machine Learning”.
  • Pasquale (2015), The Black Box Society: algorithmic opacity as the baseline problem that any tiered regulation must first confront.
  • Ho & Casey (Stanford RegLab): empirical analysis of federal AI governance.
  • Engler (Brookings): ongoing comparative work on EU / US AI governance.

Cross-jurisdictional classics and recent work

  • Anu Bradford (2023), Digital Empires: a general framework for three-jurisdiction comparison.
  • Floridi et al. (2022), “CapAI: A Procedure for Conducting Conformity Assessment of AI Systems”: implementation-oriented tooling for the AI Act.
  • Fjeld et al. (2020), “Principled Artificial Intelligence” (Berkman Klein): a cross-national map of AI principles.
  • CAIDP (Center for AI & Digital Policy), annual AI and Democratic Values Index: cross-national comparison.

1. Units of “risk”: capability vs. use case vs. compute

  • EU AI Act: primarily use-case lists (Annex III) with compute as a secondary axis (GPAI 10²⁵ FLOP).
  • California SB 53: compute-first (10²⁶ FLOP).
  • China: primarily service type (deep synthesis / generative / recommendation), with no quantitative compute threshold.
  • Academic critique: the DeepSeek shock of January 2025 demonstrated that a compute threshold is not a capability threshold, since frontier-level capability is reachable with far less compute. This undermines the threshold designs in both the EU and California; the sketch below makes the failure mode mechanical.
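In this toy Python sketch, the two statutory thresholds come from the text above, while the model names and FLOP figures are invented for illustration. An efficient model with frontier-level capability falls through both compute nets:

```python
# Thresholds from the text; every model entry below is hypothetical.
EU_GPAI_FLOP = 1e25   # EU AI Act GPAI systemic-risk presumption
CA_SB53_FLOP = 1e26   # California SB 53 frontier threshold

models = {
    # name: (training FLOP, frontier-level capability?)  -- assumed values
    "big_frontier_model": (3e26, True),
    "efficient_model":    (4e24, True),   # DeepSeek-style: capable, sub-threshold
    "small_model":        (1e22, False),
}

for name, (flop, frontier_capable) in models.items():
    in_eu_scope = flop >= EU_GPAI_FLOP
    in_ca_scope = flop >= CA_SB53_FLOP
    if frontier_capable and not (in_eu_scope or in_ca_scope):
        print(f"{name}: frontier capability, but outside both compute triggers")
# Prints only for efficient_model: the threshold measures inputs, not capability.
```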

2. Dynamic expansion of the high-risk list

  • AI Act Annex III lets the Commission expand the list by delegated acts.
  • Veale and others object that this hands the Commission excessive legislative power, bypassing normal procedure.
  • China’s scenario-based rule-making is itself a sequence of ever-growing lists (2022 algorithmic recommendation → 2023 deep synthesis → 2023 generative AI → 2025 content labeling → 2026 anthropomorphic interaction), with each addition materialising as a new departmental rule.

3. The political boundaries of a “prohibited list”

  • Article 5 of the AI Act sets out eight prohibitions (social scoring, predictive policing, etc.) and is frequently criticised as ideologically driven — a response to China’s social credit system.
  • China has no explicit “prohibited list”; the equivalent work is done by scattered “shall not” clauses in individual departmental rules.
  • The US federal government has no prohibitions; California’s SB 1047 (vetoed in 2024) would have introduced them.

4. Presumption of conformity vs. independent assessment

  • EU AI Act: conformity with harmonised standards = presumption of compliance (art. 40) — delegating judgement to standardisation bodies.
  • US: NIST AI RMF is treated as a compliance-presumption on-ramp in several state laws.
  • China: TC260-003 is the de facto compliance yardstick — failure means the service cannot be filed and cannot launch.
  • Controversy: whether conformity presumption places excessive public-policy discretion on private standardisation bodies (Almada 2025, EU commentary).

How firms map onto different tiering systems

| Company | EU AI Act trigger | California SB 53 trigger | China filing trigger | Corporate response |
| --- | --- | --- | --- | --- |
| Anthropic | Yes (GPAI + systemic risk) | Yes (10²⁶ FLOP frontier) | Not in China | RSP v3 ASL tiers (map onto most regulations) |
| OpenAI | Yes (GPAI + systemic risk) | Yes | Not in China | Preparedness Framework v2 High / Critical thresholds |
| Google DeepMind | Yes (GPAI + systemic risk) | Yes | Not in China | FSF v3 Critical + Tracked CLs |
| Mistral | Yes (GPAI + systemic risk) | Borderline (Mistral Large 3 ≈ 10²⁶) | Not in China | GPAI CoP signatory + open-source transparency |
| Meta | Partial | Yes | Not in China | Frontier AI Framework v2 |
| ByteDance Doubao | Not in market | Not in market | Yes (deep synthesis + generative) | CAC filing + TC260-003 compliance |
| Alibaba Qwen | Open-weight downloads create latent EU obligations | Open weights → latent California trigger | Yes | Open source + domestic filing |
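Read as data, the table above is a sparse mapping from firm to jurisdictional triggers. A minimal encoding (company names and trigger values from the table; the representation and helper function are my own illustration) also previews the strategy split described next: firms whose only clear trigger is the Chinese filing regime cluster into Strategy 3 below.

```python
# Encoding of the table above; trigger values are simplified to booleans.
# None marks the table's "partial" / "borderline" / "latent" cells.
TRIGGERS = {
    #  company            (EU,    CA SB 53, China filing)
    "Anthropic":          (True,  True,  False),
    "OpenAI":             (True,  True,  False),
    "Google DeepMind":    (True,  True,  False),
    "Mistral":            (True,  None,  False),   # borderline in California
    "Meta":               (None,  True,  False),   # partial EU trigger
    "ByteDance Doubao":   (False, False, True),
    "Alibaba Qwen":       (None,  None,  True),    # open weights: latent triggers
}

def china_only(triggers: dict) -> list:
    """Firms whose only clear trigger is the Chinese filing regime."""
    return [c for c, (eu, ca, cn) in triggers.items()
            if cn and not eu and not ca]

print(china_only(TRIGGERS))  # ['ByteDance Doubao', 'Alibaba Qwen']
```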

Strategy 1: one consolidated document across jurisdictions (Anthropic / Google DeepMind)

  • A single RSP / FSF document mapped simultaneously onto the EU GPAI CoP, California SB 53, and UK AISI evaluations.
  • Upside: low cost. Downside: must satisfy the strictest standard.

Strategy 2: tiered, per-jurisdiction documents (OpenAI / Mistral)

  • Preparedness Framework as the baseline, with jurisdiction-specific supplements.
  • Upside: flexibility. Downside: duplicated paperwork and cross-jurisdictional consistency risk.

Strategy 3: de facto compliance + minimal disclosure (most Chinese companies)

  • Meet Chinese requirements through CAC filing + TC260-003.
  • No independent safety-framework document is published.
  • Upside: low compliance cost. Downside: constrained international expansion.
Recent developments (2025–2026)

  • Anthropic RSP v3 (Feb 2026): drops the pause commitment and separates “unilateral commitments” from “industry-wide obligations”.
  • OpenAI Preparedness v2 (Apr 2025): simplified to two tiers, High / Critical.
  • Google DeepMind FSF v3 (Apr 2026): introduces Tracked Capability Levels (TCLs) for early-warning signal + a new Harmful Manipulation CCL.
  • California SB 53 in force (Jan 2026): the first dedicated US state law on frontier AI.
  • Trump EO 14365 (Dec 2025): attempts to preempt state AI laws → rejected by California and Colorado.

See the “Safety Framework” sections of the individual company pages — the analysis is deepest for Anthropic, OpenAI, and Google DeepMind.