Topic comparisons

Three-Jurisdiction Legislative Density

X = year, Y = topic. Each cell splits into CN / US / EU sub-cells giving the count of major rules; higher counts mean denser legislative activity. The pattern shows which jurisdiction moves first on each topic.

| Topic ↓ Year → | 2022 | 2023 | 2024 | 2025 | 2026 |
| --- | --- | --- | --- | --- | --- |
| Risk classification | CN 1 · US 0 · EU 0 | CN 1 · US 1 · EU 0 | CN 1 · US 2 · EU 1 | CN 2 · US 2 · EU 1 | CN 2 · US 3 · EU 2 |
| Frontier / GPAI | CN 0 · US 0 · EU 0 | CN 1 · US 1 · EU 0 | CN 1 · US 1 · EU 1 | CN 2 · US 2 · EU 2 | CN 1 · US 1 · EU 1 |
| Data & training | CN 0 · US 0 · EU 0 | CN 1 · US 1 · EU 0 | CN 0 · US 2 · EU 1 | CN 1 · US 1 · EU 1 | CN 1 · US 1 · EU 1 |
| Content labeling | CN 1 · US 0 · EU 0 | CN 2 · US 0 · EU 0 | CN 0 · US 1 · EU 1 | CN 2 · US 1 · EU 1 | CN 2 · US 1 · EU 1 |
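The density data behind the heatmap can be sketched as a small nested mapping and queried for first movers. A minimal illustration (not this site's actual data pipeline; only two of the four topics are included for brevity):

```python
# counts[topic][year][jurisdiction] = number of major primary-source rules,
# using the analytical-proxy values from the table above.
counts = {
    "Risk classification": {
        2022: {"CN": 1, "US": 0, "EU": 0},
        2023: {"CN": 1, "US": 1, "EU": 0},
        2024: {"CN": 1, "US": 2, "EU": 1},
        2025: {"CN": 2, "US": 2, "EU": 1},
        2026: {"CN": 2, "US": 3, "EU": 2},
    },
    "Content labeling": {
        2022: {"CN": 1, "US": 0, "EU": 0},
        2023: {"CN": 2, "US": 0, "EU": 0},
        2024: {"CN": 0, "US": 1, "EU": 1},
        2025: {"CN": 2, "US": 1, "EU": 1},
        2026: {"CN": 2, "US": 1, "EU": 1},
    },
}

def first_movers(counts):
    """For each topic, return the earliest year any jurisdiction acted
    and which jurisdiction(s) had a nonzero count in that year."""
    out = {}
    for topic, years in counts.items():
        for year in sorted(years):
            movers = [j for j, n in years[year].items() if n > 0]
            if movers:
                out[topic] = (year, movers)
                break
    return out

print(first_movers(counts))  # both topics: China first, in 2022
```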

Observations

  1. China moves first on labeling (2022) and content-synthesis (2023); the US follows in 2024–2025 via state laws; the EU codifies in 2024–2025.
  2. GPAI density spikes in 2025 across all three: EU GPAI CoP (Jul), CA SB 53 (Sep), CN Safety Governance 2.0 (Sep).
  3. Data & training is the thinnest row everywhere — in the US the topic is court-driven (case law), not legislated.

Counts are analytical proxies (major primary-source rules), not an exhaustive tally. See Methodology (/en/methodology/).

The topic as this site’s unit of analysis

Every topic page has the same four sections — an overview plus China, US, and EU — and uses the same comparative framework (see Methodology).

Topic pages do horizontal analysis: how the three jurisdictions approach the same question, where the differences lie, and where genuine disagreements remain. The specific statutory text lives on the top-level Rules and Subnational pages; topic pages cite those rather than reproducing them.

The shared structure of each topic page:

  1. Topic framing. Why this is a distinct topic within AI governance.
  2. Three-jurisdiction comparison at a glance. One table that captures the structural differences.
  3. Scholarly discussion. Citations to the principal literature (Bradford; Xue Lan 薛澜; Anderljung; Farid; Solove, and others).
  4. Core controversies. Open questions.
  5. Industry practice. How companies actually respond (with links to Company practice).
  6. Related rules. Cross-references to the Rules pages.

| Slug | Topic | Brief |
| --- | --- | --- |
| risk-classification | Risk classification | Differences in the prohibited / high / limited / minimal-risk approach; three threshold types — compute, use case, capability |
| frontier-gpai | Frontier models and general-purpose AI | Specialised duties for foundation models and GPAI; the 10²⁵ / 10²⁶ FLOP thresholds; industry self-regulation via Anthropic’s RSP, OpenAI’s Preparedness Framework, and Google’s FSF |
| data-training | Data and training | Lawful basis for training data; copyright (fair use, TDM opt-out); personal information; training-data summaries; cross-border flows |
| content-labeling-provenance | Synthetic-content labelling | The dual-track explicit / implicit labelling regime; C2PA versus GB 45438; deepfakes as a democratic risk |
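The two compute thresholds cited in the frontier-gpai brief can be illustrated with a small classifier. A hedged sketch: the constants reflect the EU AI Act’s 10²⁵ FLOP systemic-risk presumption for GPAI and the 10²⁶ FLOP figure used in US instruments such as CA SB 53; the function name and tier strings are invented for illustration:

```python
# Illustrative compute thresholds (training compute, in FLOP).
EU_GPAI_SYSTEMIC = 1e25   # EU AI Act: presumption of systemic risk for GPAI
US_FRONTIER = 1e26        # US-style frontier threshold, e.g. CA SB 53

def thresholds_crossed(training_flop: float) -> list[str]:
    """Return which of the two thresholds a training-compute figure crosses."""
    crossed = []
    if training_flop >= EU_GPAI_SYSTEMIC:
        crossed.append("EU GPAI systemic-risk presumption (1e25 FLOP)")
    if training_flop >= US_FRONTIER:
        crossed.append("US frontier threshold (1e26 FLOP)")
    return crossed

print(thresholds_crossed(3e25))  # crosses the EU line only
```

Note the gap between the two lines: a model trained with, say, 3 × 10²⁵ FLOP is presumptively systemic-risk GPAI in the EU but sits below the US-style frontier cutoff.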

Theoretical resources for the topic analyses

Cross-jurisdictional comparative frameworks

  • Bradford (2023), Digital Empires — the general framework for US / EU / China digital-governance comparison.
  • Bradford (2020), The Brussels Effect — theory of EU regulatory export.
  • Fjeld et al. (2020), “Principled Artificial Intelligence” (Berkman Klein) — a cross-national map of AI principles.
  • Olivia’s thesis, A Comparative Study of AI Governance Models in China, the US, and the EU (2026) — a three-layer “structure–institutions–choices” framework, characterising China as agile coordination, the US as voluntary risk control, and the EU as compliance-first.
  • Risk classification: Bradford; Xue Lan 薛澜; Veale and Borgesius; Engler (Brookings); Matt Sheehan (Carnegie).
  • Frontier GPAI: Anderljung et al. (GovAI); Amodei and Christiano; Bengio, Hinton, and Russell; LeCun (on the opposing side); Hacker (author of the earliest paper on ChatGPT regulation).
  • Data and training: Solove; Lemley and Casey, Fair Learning; Bender and Gebru, Stochastic Parrots; the French DPA CNIL’s guidance; Grimmelmann.
  • Content labelling: Chesney and Citron (the foundational deepfake paper); Farid (forensic AI, UC Berkeley); Paris and Donovan; Kirchenbauer (text watermarking); the C2PA community.
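The dual-track labelling regime discussed in the content-labelling literature can be made concrete with two illustrative record types: an explicit label the viewer perceives directly, and an implicit label carried in file metadata. The field names below are invented for illustration and do not follow GB 45438’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class ExplicitLabel:
    """Track 1: user-perceptible marking (on-screen text, audio cue, etc.)."""
    text: str        # e.g. "AI-generated"
    placement: str   # where the mark is rendered, e.g. "bottom-left"

@dataclass
class ImplicitLabel:
    """Track 2: machine-readable marking embedded in file metadata."""
    generated_by_ai: bool
    service_provider: str   # who generated or disseminated the content
    content_id: str         # identifier for tracing this piece of content

# Under a dual-track regime, a compliant item carries both tracks at once.
explicit = ExplicitLabel(text="AI-generated", placement="bottom-left")
implicit = ImplicitLabel(generated_by_ai=True,
                         service_provider="provider-001",
                         content_id="c-42")
```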

The following topics are on the roadmap but will not be developed in v1. They will be advanced once the four v1 topics have stabilised.

  • High-risk systems — focused on a clause-by-clause reading of the EU AI Act’s Annex III.
  • Transparency and disclosure.
  • Red-teaming and evaluation.
  • Human oversight.
  • Algorithm registration (the Chinese perspective).
  • Bias and discrimination.
  • Protection of minors (closely related to the 2026 Chinese Measures on Humanised Interaction, the Character.AI litigation, and similar developments).
  • Liability allocation.
  • Cross-border transfer and export control (BIS, CFIUS, Schrems, and China’s cross-border “three-piece set”).

To push a topic forward, please file a “topic request” on GitHub Issues.