Topic comparisons
Three-Jurisdiction Legislative Density
X = year, Y = topic. Each cell splits into CN / US / EU sub-cells; saturation encodes count. Reveals who moves first.
Observations
- China moves first on labeling (2022) and content-synthesis (2023); the US follows in 2024–2025 via state laws; the EU codifies in 2024–2025.
- GPAI density spikes in 2025 across all three: EU GPAI CoP (Jul), CA SB 53 (Sep), CN Safety Governance 2.0 (Sep).
- Data-training is the sparsest topic everywhere — in the US it is court-driven (case law), not legislated.
Counts are analytical proxies (major primary-source rules), not exhaustive. See [Methodology](/en/methodology/).
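As a rough sketch of the chart's encoding (each year-by-topic cell split into per-jurisdiction sub-cells, with saturation scaling in the rule count), something like the following could drive the rendering. All names, the example counts, and the saturation ramp are illustrative assumptions, not the site's actual code:

```typescript
// Hypothetical encoding for the three-jurisdiction density grid.
type Jurisdiction = "CN" | "US" | "EU";
type Cell = Record<Jurisdiction, number>; // count of major rules per sub-cell

// Example grid: topic -> year -> per-jurisdiction counts (illustrative values)
const grid: Record<string, Record<number, Cell>> = {
  "content-labeling-provenance": {
    2022: { CN: 1, US: 0, EU: 0 }, // CN moves first on labeling
    2024: { CN: 0, US: 1, EU: 1 },
  },
};

// Map a count to an HSL saturation percentage, capped at an assumed max count.
function saturation(count: number, maxCount = 4): number {
  const clamped = Math.min(count, maxCount);
  return Math.round((clamped / maxCount) * 100);
}

// One color per jurisdiction hue; saturation carries the density signal.
function cellColor(count: number, hue: number): string {
  return `hsl(${hue}, ${saturation(count)}%, 55%)`;
}
```

Capping at a maximum count keeps one outlier year from washing out the rest of the grid.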
The topic as this site’s unit of analysis
Every topic page has the same four sections — overview + China + US + EU — and uses the same comparative framework (see Methodology).
Topic pages do horizontal analysis: how the three jurisdictions think about the same question, where the differences lie, and where the disagreements are. The specific statutory text lives on the top-level Rules and Subnational pages; topic pages cite them rather than reproduce them.
The shared structure of each topic page:
- Topic framing. Why this is a distinct topic within AI governance.
- Three-jurisdiction comparison at a glance. One table that captures the structural differences.
- Scholarly discussion. Citations to the principal literature (Bradford; Xue Lan 薛澜; Anderljung; Farid; Solove, and others).
- Core controversies. Open questions.
- Industry practice. How companies actually respond (with links to Company practice).
- Related rules. Cross-references to the Rules pages.
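The shared skeleton above can be sketched as a typed content record; the field names here are hypothetical, not the site's actual schema:

```typescript
// Hypothetical shape for one topic page's structured content.
interface TopicPage {
  slug: string;
  title: string;
  framing: string;             // why this is a distinct topic
  comparisonTable: string[][]; // three-jurisdiction comparison at a glance
  scholarship: string[];       // citations to the principal literature
  controversies: string[];     // open questions
  industryPractice: string[];  // links into Company practice
  relatedRules: string[];      // cross-references to the Rules pages
}

// Illustrative instance for the first v1 topic.
const riskClassification: TopicPage = {
  slug: "risk-classification",
  title: "Risk classification",
  framing: "Differences in the prohibition / high / limited / minimal-risk approach",
  comparisonTable: [["", "CN", "US", "EU"]],
  scholarship: ["Bradford", "Xue Lan", "Veale and Borgesius"],
  controversies: ["Which of the three threshold types should dominate?"],
  industryPractice: [],
  relatedRules: [],
};
```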
v1 topics (four)
| Slug | Topic | Brief |
|---|---|---|
| risk-classification | Risk classification | Differences in the prohibition / high / limited / minimal-risk approach; three threshold types — compute, use case, capability |
| frontier-gpai | Frontier models and general-purpose AI | Specialised duties for foundation models and GPAI; the 10²⁵ / 10²⁶ FLOP thresholds; industry self-regulation via Anthropic’s RSP, OpenAI’s Preparedness, and Google’s FSF |
| data-training | Data and training | Lawful basis for training data; copyright (fair use, TDM opt-out); personal information; training-data summaries; cross-border flows |
| content-labeling-provenance | Synthetic-content labelling | The dual-track explicit / implicit labelling regime; C2PA versus GB 45438; deepfakes as a democratic risk |
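For the compute-threshold type in the table above, a model's training compute is conventionally estimated with the C ≈ 6·N·D approximation (N = parameters, D = training tokens) and compared against the 10²⁵ / 10²⁶ FLOP lines. A sketch under that assumption, with hypothetical function names and an illustrative model size:

```typescript
// Standard dense-transformer estimate of training compute: C ≈ 6 * N * D.
function trainingFlop(params: number, tokens: number): number {
  return 6 * params * tokens;
}

// Does the estimated compute cross a regulatory threshold?
function crossesThreshold(flop: number, threshold: number): boolean {
  return flop >= threshold;
}

const EU_GPAI_THRESHOLD = 1e25; // systemic-risk line
const US_EO_THRESHOLD = 1e26;   // reporting line

// A hypothetical 70B-parameter model trained on 2T tokens:
// 6 * 7e10 * 2e12 = 8.4e23 FLOP, below both thresholds.
const flop = trainingFlop(70e9, 2e12);
```

The approximation ignores architecture details (sparsity, multiple epochs, fine-tuning compute), which is exactly where threshold disputes tend to arise.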
Theoretical resources for the topic analyses
Cross-jurisdictional comparative frameworks
- Bradford (2023), Digital Empires — the general framework for US / EU / China digital-governance comparison.
- Bradford (2020), The Brussels Effect — theory of EU regulatory export.
- Fjeld et al. (2020), “Principled Artificial Intelligence” (Berkman Klein) — a cross-national map of AI principles.
- Olivia’s thesis, A Comparative Study of AI Governance Models in China, the US, and the EU (2026) — a three-layer “structure–institutions–choices” framework, characterising China as agile coordination, the US as voluntary risk control, and the EU as compliance-first.
Key scholars by topic
- Risk classification: Bradford; Xue Lan 薛澜; Veale and Borgesius; Engler (Brookings); Matt Sheehan (Carnegie).
- Frontier GPAI: Anderljung et al. (GovAI); Amodei and Christiano; Bengio, Hinton, and Russell; LeCun (on the opposing side); Hacker (author of the earliest paper on ChatGPT regulation).
- Data and training: Solove; Lemley and Casey, Fair Learning; Bender and Gebru, Stochastic Parrots; the French DPA CNIL’s guidance; Grimmelmann.
- Content labelling: Chesney and Citron (the foundational deepfake paper); Farid (forensic AI, UC Berkeley); Paris and Donovan; Kirchenbauer (text watermarking); the C2PA community.
v2 plan (not in v1; anchors only)
The following topics are on the roadmap but will not be developed in v1. They will be advanced once the four v1 topics have stabilised.
- High-risk systems — focused on a clause-by-clause reading of the EU AI Act’s Annex III.
- Transparency and disclosure.
- Red-teaming and evaluation.
- Human oversight.
- Algorithm registration (the Chinese perspective).
- Bias and discrimination.
- Protection of minors (closely related to the 2026 Chinese Measures on Humanised Interaction, the Character.AI litigation, and similar developments).
- Liability allocation.
- Cross-border transfer and export control (BIS, CFIUS, Schrems, and China’s cross-border “three-piece set”).
To push a topic forward, please file a “topic request” on GitHub Issues.