Top-level rules
Why “top-level rules” is a separate axis
This axis collects the central-level primary texts of AI governance across the three jurisdictions: China’s five-tier hierarchy (statutes / administrative regulations / ministerial regulations / normative documents / technical standards); US federal executive orders, OMB memoranda, and NIST technical frameworks; and the EU’s secondary legislation (regulations / directives), harmonised standards, and codes of practice. Subnational rules (Chinese localities, US states and cities, EU Member States) form a separate axis at Subnational.
Division of labour with the “Topic comparisons” axis: the topic pages do horizontal analysis (how the three jurisdictions think about the same question); this axis does vertical archiving (a primary-source index of specific provisions). Topic pages cite the rules pages rather than reproduce them.
Three forms of governance at a glance
| Dimension | 🇨🇳 China | 🇺🇸 US (federal) | 🇪🇺 EU |
|---|---|---|---|
| Comprehensive AI legislation | None; relies on layered ministerial regulations | None; relies on state law plus executive orders | AI Act horizontal regulation (2024) |
| Main tier | Ministerial regulations (tier 3) | Executive orders plus NIST soft law | Secondary legislation (regulations directly applicable) |
| Issuing body | CAC-led multi-ministry coordination | White House / OMB / NIST / FTC, etc. | Parliament + Council + Commission |
| Enforcement body | CAC-led | No specialised body (FTC, EEOC, state AGs) | Member-State MSAs + AI Office + EDPB |
| Governance philosophy | Agile coordination + scenario-specific rulemaking | Voluntary risk control + deregulation | Compliance-first + human-rights-led |
| Policy stability | High (Party Central coordination) | Low (EOs flip with administrations) | High (stable legislative procedure) |
Common scholarly references: Anu Bradford (2023), Digital Empires, for the “US / EU / China” trilateral framework on digital governance; the Xue Lan 薛澜 team’s characterisation of Chinese “agile governance”; Anu Bradford (2020), The Brussels Effect, on EU regulatory spillover.
By jurisdiction
🇨🇳 China
Legal hierarchy at a glance:
| Tier | Issuing body | Representative rules |
|---|---|---|
| ① Statute | National People’s Congress (Standing Committee) | CSL (2017); DSL (2021); PIPL (2021) |
| ② Administrative regulation | State Council | Regulations on the Protection of Minors Online (2024) |
| ③ Ministerial regulation | Joint ministries | Algorithmic Recommendation Provisions; Deep Synthesis Provisions; Interim Measures on Generative AI; Labelling Measures; Measures on Humanised Interaction |
| ④ Normative document | Ministries / special committees | New Generation AI Governance Principles; AI Safety Governance Framework 1.0/2.0 |
| ⑤ Technical standard | TC260 / SAMR | TC260-003-2024; GB 45438-2025 (mandatory national standard) |
Key observation. Tier-3 ministerial regulations are the main theatre of Chinese AI governance: rules styled “interim measures” are neither NPC statutes nor State Council administrative regulations, but ministerial regulations. Grasping this is the first prerequisite for any serious discussion of Chinese AI rules.
🇺🇸 United States — Federal-layer overview →
The federal legislative vacuum. To date there is no comprehensive or specialised federal statute regulating AI systems themselves. Three compensating tracks fill the gap:
| Compensation | Form | Representative documents |
|---|---|---|
| ① Presidential executive orders | EO | EO 14179 (2025-01, revoking Biden EO); EO 14365 (2025-12, state-law pre-emption); Trump AI Action Plan |
| ② OMB memoranda | Federal government’s own AI use | M-25-21 / M-25-22 (2025-04) |
| ③ NIST soft law | Technical frameworks | AI RMF 1.0 + GenAI Profile (AI 600-1) |
State law is the real theatre. Of 1,208 state-level AI bills introduced in 2025, 145 passed. See Subnational: US.
Analogical application of general laws. FTC Act §5 (deceptive practices), Title VII (anti-discrimination), the FCRA (credit), COPPA, HIPAA, and sector-specific privacy statutes: these general laws are applied by analogy to AI scenarios through agency interpretation and judicial precedent.
🇪🇺 European Union — EU-layer overview →
The AI Act’s horizontal-regulation model (Reg 2024/1689):
| Layer | Form | Representative |
|---|---|---|
| ① Secondary legislation | Regulation / Directive | AI Act (2024); GDPR (2016); DSA (2022); Product Liability Directive 2024/2853 |
| ② Harmonised standards (hEN) | CEN-CENELEC JTC 21 | prEN 18286 and others (pending publication) |
| ③ Soft law | Codes of conduct / guidelines | GPAI Code of Practice (2025-07) |
| ④ Legislative proposals (not yet adopted) | Draft regulations | Digital Omnibus Proposal (2025-11) |
Three compliance paths. (1) Full technical compliance (demonstrating conformity with the relevant provisions directly); (2) a conformity presumption via harmonised standards; (3) a signatory-based presumption via the GPAI Code of Practice.

The Brussels Effect. Through global market access, the AI Act becomes a de facto global standard. But 2025–2026 also saw a “Brussels Effect backlash”: public White House pressure, and a Trump executive order prohibiting federal procurement from entities that “comply with foreign AI laws”.
Institutional tensions across the three models
China: low tier × strong enforcement
Locating the main theatre at ministerial regulations (tier 3) creates several problems:
- Low tier → risk of conflict with upstream law (competition with PIPL, DSL, and so on).
- Jointly issued by ministries → substantial cross-bureau coordination burden.
- The word “interim” → frequent revision (the Interim Measures on Generative AI underwent major rewriting within two months between consultation and final version).
- Enforcement is led by the CAC → concentrated but non-specialised (the CAC is simultaneously content regulator, data regulator, and AI regulator).
United States: soft-law hardening × policy reversal
- NIST AI RMF is nominally recommendatory, but hardens de facto through “reasonable care” obligations, insurance, procurement, and state-law citation.
- Executive orders flip with administrations: Biden EO 14110 → Trump EO 14179 (revocation) → Trump EO 14365 (reverse pre-emption of state law).
- More than 100 bills pending in Congress → none enacted.
EU: compliance-first × delayed enforcement
- The AI Act’s prohibitions take effect 2025-02; GPAI obligations 2025-08; high-risk obligations 2026-08; further provisions 2027-08.
- But Member-State MSA (market surveillance authority) designation is lagging (as of 2026-04, roughly half the Member States still have no designated MSA).
- Where does enforcement start? The AI Office only covers GPAI; specific cases still depend on Member-State MSAs; the DPA / DSA coordination mechanisms (AI Pact Board, AI Board) have only recently been set up.
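The staggered phase-in above can be sketched as a small lookup. This is an illustrative sketch, not legal advice: the month-level dates come from the list above, and the day of month (the 2nd) is an assumption drawn from the Regulation’s entry-into-force rhythm.

```python
from datetime import date

# AI Act phase-in dates as listed above (Reg. 2024/1689).
# Day-of-month is an assumption; the source gives only year-month.
PHASES = [
    (date(2025, 2, 2), "prohibitions (Art. 5)"),
    (date(2025, 8, 2), "GPAI obligations"),
    (date(2026, 8, 2), "high-risk obligations"),
    (date(2027, 8, 2), "remaining provisions"),
]

def in_force(on: date) -> list[str]:
    """Return which obligation sets already apply on a given date."""
    return [label for start, label in PHASES if on >= start]

# As of 2026-04 (the MSA-designation snapshot above), only the first
# two phases apply:
print(in_force(date(2026, 4, 1)))
```

The point the sketch makes concrete: on the 2026-04 snapshot date mentioned above, high-risk obligations are not yet applicable, which is part of why lagging MSA designation has not yet produced visible enforcement gaps.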
Cross-jurisdiction map for the four core topics
| Topic | 🇨🇳 China | 🇺🇸 US | 🇪🇺 EU |
|---|---|---|---|
| Risk classification (→) | Service / user scale + scenario (filing + evaluation) | No uniform federal scheme; some states define their own (e.g. the Colorado AI Act’s “high-risk AI” category) | Four tiers: prohibited / high / limited / minimal (AI Act Arts. 5–52) |
| Frontier GPAI (→) | No specialised threshold; user scale of 1 million triggers evaluation (Humanised Interaction Measures) | No federal scheme; California SB 53’s 10²⁶ FLOP state-level threshold | 10²⁵ FLOP systemic-risk threshold (AI Act Art. 51) + GPAI CoP |
| Data and training (→) | PIPL + Deep Synthesis Provisions + Article 7 of the Interim Measures | No federal scheme; copyright via case law (NYT v. OpenAI, Authors Guild v. Anthropic) | GDPR + DSM Directive Article 4 TDM opt-out + AI Act Article 53 training summary |
| Synthetic-content labelling (→) | Mandatory dual-track (explicit + implicit): Labelling Measures + GB 45438-2025 | No federal scheme; state laws focus on election deepfakes (CA AB 2655) | AI Act Art. 50 + GPAI CoP watermarking recommendation |
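The two compute thresholds in the frontier-GPAI row (EU 10²⁵ FLOP vs California 10²⁶ FLOP) differ by an order of magnitude, which matters in practice. A back-of-the-envelope sketch, using the standard 6·N·D training-compute heuristic (N = parameters, D = training tokens); the heuristic and the example model size are illustrative assumptions, not part of either law:

```python
# Statutory compute thresholds from the table above.
EU_AI_ACT_ART_51 = 1e25   # systemic-risk presumption, AI Act Art. 51
CA_SB_53 = 1e26           # California SB 53 state-level threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6 * n_params * n_tokens

def thresholds_crossed(flops: float) -> list[str]:
    crossed = []
    if flops >= EU_AI_ACT_ART_51:
        crossed.append("EU AI Act Art. 51 (1e25)")
    if flops >= CA_SB_53:
        crossed.append("CA SB 53 (1e26)")
    return crossed

# Hypothetical 400B-parameter model trained on 15T tokens:
c = training_flops(4e11, 1.5e13)   # ~3.6e25 FLOP
print(thresholds_crossed(c))       # crosses the EU threshold only
```

The order-of-magnitude gap means a frontier-scale run can trigger the EU systemic-risk presumption while staying well under California’s threshold, one concrete way the two regimes diverge.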
Methodological note
The hard-law / soft-law division on this axis is independent of legal hierarchy. For example:
- TC260-003 is a technical standard (nominally recommendatory, i.e. soft law), but it is de facto binding because it functions as a filing threshold.
- NIST AI RMF is a non-binding framework, yet hardens de facto through reasonable-care / insurance / state-law citation.
- The EU GPAI CoP is a voluntary code of conduct, but signing triggers an AI Act conformity presumption — a textbook case of “soft law as a hard-law compliance path”.
See Methodology — hard law / soft law.
AI proximity. Not every rule that touches data, networks, or algorithms is included on this axis. The site applies a strict “AI proximity” filter — general data / infrastructure / anti-fraud rules (such as the Regulations on the Security Protection of Critical Information Infrastructure or the Anti-Telecom and Online Fraud Law) do not receive standalone pages and are merely cross-referenced from related AI-rule pages. See Methodology — inclusion criteria.