
Top-level rules

Why “top-level rules” is a separate axis

This axis collects the central-level primary texts of AI governance across the three jurisdictions: China’s five-tier hierarchy (statutes / administrative regulations / ministerial regulations / normative documents / technical standards); the US’s executive orders, OMB memoranda, and NIST technical frameworks; and the EU’s secondary legislation (regulations / directives), harmonised standards, and codes of practice. Subnational rules (Chinese localities, US states and cities, EU Member States) form a separate axis at Subnational.

Division of labour with the “Topic comparisons” axis: the topic pages do horizontal analysis (how the three jurisdictions think about the same question); this axis does vertical archiving (a primary-source index of specific provisions). Topic pages cite the rules pages rather than reproduce them.

| Dimension | 🇨🇳 China | 🇺🇸 US (federal) | 🇪🇺 EU |
| --- | --- | --- | --- |
| Comprehensive AI legislation | None; relies on layered ministerial regulations | None; relies on state law plus executive orders | AI Act horizontal regulation (2024) |
| Main tier | Ministerial regulations (tier 3) | Executive orders plus NIST soft law | Secondary legislation (regulations, directly applicable) |
| Issuing body | CAC-led multi-ministry coordination | White House / OMB / NIST / FTC, etc. | Parliament + Council + Commission |
| Enforcement body | CAC-led | No specialised body (FTC, EEOC, state AGs) | Member-State MSAs + AI Office + EDPB |
| Governance philosophy | Agile coordination + scenario-specific rulemaking | Voluntary risk control + deregulation | Compliance-first + human-rights-led |
| Policy stability | High (Party Central coordination) | Low (EOs flip with administrations) | High (stable legislative procedure) |

Common scholarly references: Bradford (2023), Digital Empires, for the “US / EU / China” trilateral framework on digital governance; the Xue Lan 薛澜 team’s characterisation of Chinese “agile governance”; Anu Bradford (2020), “The Brussels Effect”, on EU regulatory spillover.

Legal hierarchy at a glance:

| Tier | Issuing body | Representative rules |
| --- | --- | --- |
| ① Statute | National People’s Congress (Standing Committee) | CSL (2017); DSL (2021); PIPL (2021) |
| ② Administrative regulation | State Council | Regulations on the Protection of Minors Online (2024) |
| ③ Ministerial regulation | Joint ministries | Algorithmic Recommendation Provisions; Deep Synthesis Provisions; Interim Measures on Generative AI; Labelling Measures; Measures on Humanised Interaction |
| ④ Normative document | Ministries / special committees | New Generation AI Governance Principles; AI Safety Governance Framework 1.0/2.0 |
| ⑤ Technical standard | TC260 / SAMR | TC260-003-2024; GB 45438-2025 (mandatory national standard) |

Key observation. Tier 3 ministerial regulations are the main theatre of Chinese AI governance: a rule styled “interim measures” is neither an NPC statute nor a State Council administrative regulation — it is a ministerial regulation. Grasping this is the first prerequisite for any serious discussion of Chinese AI rules.

The federal legislative vacuum. To date there is no comprehensive or specialised federal statute regulating AI systems themselves. Three compensating tracks fill the gap:

| Compensating track | Form | Representative documents |
| --- | --- | --- |
| ① Presidential executive orders | EO | EO 14179 (2025-01, revoking the Biden EO); EO 14365 (2025-12, state-law pre-emption); Trump AI Action Plan |
| ② OMB memoranda | Federal government’s own AI use | M-25-21 / M-25-22 (2025-04) |
| ③ NIST soft law | Technical frameworks | AI RMF 1.0 + GenAI Profile (AI 600-1) |

State law is the real theatre. Of 1,208 state-level AI bills introduced in 2025, 145 passed. See Subnational: US.

Analogical application of general laws. FTC Act §5 (deceptive practices), Title VII (anti-discrimination), the FCRA (credit), COPPA, HIPAA, and sector-specific privacy statutes — these general laws are “analogised” onto AI scenarios through agency interpretation plus judicial precedent.

The AI Act’s horizontal-regulation model (Reg 2024/1689):

| Layer | Form | Representative |
| --- | --- | --- |
| ① Secondary legislation | Regulation / Directive | AI Act (2024); GDPR (2016); DSA (2022); Product Liability Directive 2024/2853 |
| ② Harmonised standards (hEN) | CEN-CENELEC JTC 21 | prEN 18286 and others (pending publication) |
| ③ Soft law | Codes of conduct / guidelines | GPAI Code of Practice (2025-07) |
| ④ Legislative proposals (not yet adopted) | Draft regulations | Digital Omnibus Proposal (2025-11) |

Three compliance paths. (1) Full technical compliance (demonstrating conformity with the relevant provisions); (2) conformity presumption via harmonised standards; (3) signatory-based presumption via the GPAI Code of Practice.

The Brussels Effect. Through global market access, the AI Act becomes a de facto standard — but 2025–2026 also saw a “Brussels Effect backlash”: public White House pressure, and a Trump executive order prohibiting federal procurement from entities that “comply with foreign AI laws”.

Institutional tensions across the three models

China: ministerial-regulation main theatre × upstream friction

Locating the main theatre at ministerial regulations (tier 3) creates several problems:

  • Low tier → risk of conflict with upstream law (competition with PIPL, DSL, and so on).
  • Jointly issued by ministries → substantial cross-bureau coordination burden.
  • The word “interim” → frequent revision (the Interim Measures on Generative AI underwent major rewriting within two months between consultation and final version).
  • Enforcement is led by the CAC → concentrated but non-specialised (the CAC is simultaneously content regulator, data regulator, and AI regulator).

United States: soft-law hardening × policy reversal

  • NIST AI RMF is nominally recommendatory, but hardens de facto through “reasonable care” obligations, insurance, procurement, and state-law citation.
  • Executive orders flip with administrations: Biden EO 14110 → Trump EO 14179 (revocation) → Trump EO 14365 (reverse pre-emption of state law).
  • More than 100 bills pending in Congress → none enacted.

EU: compliance-first × delayed enforcement

  • The AI Act’s prohibitions take effect 2025-02; GPAI obligations 2025-08; high-risk obligations 2026-08; further provisions 2027-08.
  • But Member-State MSA (market surveillance authority) designation is lagging (as of 2026-04, roughly half the Member States still have no designated MSA).
  • Where does enforcement start? The AI Office only covers GPAI; specific cases still depend on Member-State MSAs; the DPA / DSA coordination mechanisms (AI Pact Board, AI Board) have only recently been set up.

Cross-jurisdiction map for the four core topics

| Topic | 🇨🇳 China | 🇺🇸 US | 🇪🇺 EU |
| --- | --- | --- | --- |
| Risk classification | Service / user scale + scenario (filing + evaluation) | No uniform scheme; some states (Colorado AI Act’s “high-risk AI”) | Prohibited / high / limited / minimal four tiers (AI Act Arts. 5–52) |
| Frontier GPAI | No specialised threshold; user scale of 1 million triggers evaluation (Humanised Interaction Measures) | No federal scheme; California SB 53’s 10²⁶ FLOP state-level threshold | 10²⁵ FLOP systemic-risk threshold (AI Act Art. 51) + GPAI CoP |
| Data and training | PIPL + Deep Synthesis Provisions + Article 7 of the Interim Measures | No federal scheme; copyright via case law (NYT v. OpenAI, Authors Guild v. Anthropic) | GDPR + DSM Directive Article 4 TDM opt-out + AI Act Article 53 training summary |
| Synthetic-content labelling | Mandatory dual-track (explicit + implicit): Labelling Measures + GB 45438-2025 | No federal scheme; state laws focus on election deepfakes (CA AB 2655) | AI Act Art. 50 + GPAI CoP watermarking recommendation |
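The two frontier-model compute thresholds above differ by an order of magnitude, which matters in practice: a training run can cross the EU presumption line without touching California’s. A toy numerical sketch (not legal advice — the threshold values come from the table; the function name and return shape are illustrative):

```python
# Compute thresholds from the table above:
#   EU AI Act Art. 51 systemic-risk presumption: 10^25 FLOP
#   California SB 53 state-level threshold:      10^26 FLOP
EU_SYSTEMIC_RISK_FLOP = 1e25
CA_SB53_FLOP = 1e26

def frontier_flags(training_flop: float) -> dict:
    """Return which compute thresholds a training run crosses."""
    return {
        "eu_systemic_risk": training_flop >= EU_SYSTEMIC_RISK_FLOP,
        "ca_sb53": training_flop >= CA_SB53_FLOP,
    }

# A 3e25 FLOP run crosses the EU threshold but not California's.
print(frontier_flags(3e25))  # {'eu_systemic_risk': True, 'ca_sb53': False}
```

Note that crossing the numerical line is only a presumption trigger, not the whole test — under the AI Act the Commission can also designate systemic-risk GPAI on other grounds.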

The hard-law / soft-law division on this axis is independent of legal hierarchy. For example:

  • TC260-003 is a technical standard (nominally recommendatory = soft law), but it is de facto binding (filing threshold).
  • NIST AI RMF is a non-binding framework, yet hardens de facto through reasonable-care / insurance / state-law citation.
  • The EU GPAI CoP is a voluntary code of conduct, but signing triggers an AI Act conformity presumption — a textbook case of “soft law as a hard-law compliance path”.

See Methodology — hard law / soft law.

AI proximity. Not every rule that touches data, networks, or algorithms is included on this axis. The site applies a strict “AI proximity” filter — general data / infrastructure / anti-fraud rules (such as the Regulations on the Security Protection of Critical Information Infrastructure or the Anti-Telecom and Online Fraud Law) do not receive standalone pages and are merely cross-referenced from related AI-rule pages. See Methodology — inclusion criteria.