# China — Frontier Models and GPAI
## Relevant rules

| Rule | Relationship to GPAI |
|---|---|
| Generative AI Interim Measures (2023) | Primary rule for large models offered as public-facing services |
| TC260-003-2024 | De facto technical standard for filing review |
| Deep Synthesis Provisions (2023) | Sister rule for service-level filings |
| Algorithmic Recommendation Provisions (2022) | Earliest rule for algorithm filing |
## China has no “GPAI” concept

- Chinese legislation has no dedicated category corresponding to art. 51 of the EU AI Act.
- The regulatory anchor is the service, not the model: whether the model has 10 billion or 1 trillion parameters, any public-facing generative AI service in China follows the same filing + assessment pathway.
- The open-source vs. closed-source distinction is thinly articulated in the regulatory text: a 2024 CAC Q&A did state that “open-source without public-facing deployment does not trigger filing”, but grey areas remain in practice.
## Three-part regulatory toolkit

### 1. Algorithm filing
Section titled “1. Algorithm filing”- Legal basis: Algorithmic Recommendation Provisions art. 24, Generative AI Interim Measures art. 17, Deep Synthesis Provisions art. 19.
- Process: submit filing materials → CAC review → receive filing number → cleared to offer the service publicly.
- De facto threshold: large-model services with “public-opinion attributes or capacity for social mobilisation”.
- Register: CAC publishes filing lists quarterly (more than ten batches released by Apr 2026).
### 2. Security assessment (referenced in art. 23 of the Generative AI Interim Measures)

- Primary rule: Provisions on Security Assessment of Internet Information Services with Public-Opinion Attributes or Capacity for Social Mobilisation (2018).
- Yardstick: TC260-003-2024.
- Substantive requirements: lawfulness of training corpora; compliant data annotation; a safety pass rate of at least 90% on sampled model outputs; keyword-library coverage; an incident-response mechanism.
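The pass-rate criterion above is a simple sampled-proportion check. The sketch below illustrates that arithmetic only; the data structures, field names, and sample sizes are assumptions for illustration, not part of TC260-003-2024, and the safety verdicts would in practice come from human reviewers or a classifier.

```python
# Illustrative safety-pass-rate check in the spirit of the >= 90% criterion
# described above. All names and sample sizes here are hypothetical.
from dataclasses import dataclass


@dataclass
class EvalResult:
    prompt: str
    output: str
    safe: bool  # verdict from a reviewer or classifier (assumed external)


def safety_pass_rate(results: list[EvalResult]) -> float:
    """Fraction of sampled model outputs judged safe."""
    if not results:
        raise ValueError("empty evaluation set")
    return sum(r.safe for r in results) / len(results)


def meets_threshold(results: list[EvalResult], threshold: float = 0.90) -> bool:
    """True if the sampled pass rate reaches the required threshold."""
    return safety_pass_rate(results) >= threshold


# Toy run: 9 safe outputs out of 10 exactly meets a 90% bar.
sample = [EvalResult(f"q{i}", "...", safe=(i != 0)) for i in range(10)]
print(meets_threshold(sample))  # True
```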
### 3. Science and technology ethics review (R&D stage)

- Legal basis: Science and Technology Ethics Review Measures (Trial).
- Article 25’s list includes “algorithmic models with capacity for social mobilisation or for shaping social consciousness”.
- An R&D-stage obligation (as distinct from obligations at the market-launch stage).
## Practice notes

- “Public-facing service” is the trigger: internal R&D or closed testing does not mandate filing.
- Foreign-model mirrors: Chinese-language services built on foreign large models typically file through a domestic partner.
- Model layer vs. application layer: many applications (chatbots, virtual avatars, text-to-image) share one base model — each application must file separately.
- The awkwardness of open-source models: whether the training party that releases an open-weight model (e.g., DeepSeek) itself needs to file is disputed — in practice, the training party files when it also offers an API, and weight-only releases may not trigger filing.
## Comparison with the EU and the US

| Dimension | China | EU (AI Act) | US |
|---|---|---|---|
| Dedicated concept | None | “GPAI” + “systemic-risk GPAI” | None (EO 14110, with its 10²⁶-FLOP reporting threshold, has been revoked) |
| Compute threshold | None | 10²⁵ FLOP presumption | None (historically 10²⁶) |
| Obligations | Filing + security assessment + TC260 | Training documentation / incident reporting / adversarial testing | Voluntary (Frontier Model Forum, NIST AI RMF) |
| Market gate | Strict (no filing = no public service) | Ex-ante compliance but no “approval” | None |
Among the three jurisdictions, China has the strictest ex-ante gate for frontier models.
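The compute thresholds in the table can be sanity-checked with the widely used approximation that training compute is about 6·N·D FLOPs, where N is the parameter count and D the number of training tokens. A minimal sketch, assuming a hypothetical model configuration (the parameter and token counts below are illustrative, not regulatory figures):

```python
# Rough training-compute estimate via the common FLOPs ~= 6 * N * D rule
# (N = parameters, D = training tokens). Model sizes are hypothetical.
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens


EU_THRESHOLD = 1e25   # AI Act presumption for systemic-risk GPAI
US_HISTORICAL = 1e26  # reporting threshold in the revoked EO 14110

# Hypothetical model: 70e9 parameters trained on 15e12 tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")             # ~6.3e24 FLOPs
print(flops >= EU_THRESHOLD)      # False: below the EU presumption
print(flops >= US_HISTORICAL)     # False: below the historical US line
```

Under this estimate, a 70B-parameter model on 15T tokens sits just under the EU's 10²⁵ FLOP presumption, which illustrates why that line roughly tracks the largest current frontier training runs.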