AI Safety Governance Framework 1.0 / 2.0
📑 Legal hierarchy: Level 4 · Normative document (not a departmental rule, not a law) | Issuance: TC260 under CAC guidance | 1.0 effective: 2024-09-09 | 2.0 effective: 2025-09-15 | Character: soft law · policy guidance
⚠️ Hierarchy note: This document is a normative document (Level 4), below departmental rules and above technical standards in hierarchy. It is not directly legally binding, but it strongly shapes corporate compliance in practice. See Index of Chinese Rules.
Chinese Summary
The AI Safety Governance Framework (《人工智能安全治理框架》) was issued by the National Cybersecurity Standardization Technical Committee (TC260) under the guidance of the Cyberspace Administration of China. Version 1.0 was officially released at the Third China Cyberspace Civilization Conference on 2024-09-09; version 2.0 was released on 2025-09-15.
Significance: this is China’s first cross-scenario AI risk classification-and-grading system, marking China’s strategic turn from scenario-specific (“one rule per scenario”) to systemic (a unified risk coordinate system) governance.
Following the reading of Zhang Linghan (张凌寒) and other commentators:
- 1.0 established the first cross-scenario risk classification-and-grading system;
- 2.0 further refined the “full chain from risk identification to governance response”;
- together, the Framework marks a shift from “policy-driven” toward more stable “rule-of-law-based governance.”
English One-Sentence Summary
The AI Safety Governance Framework, issued by TC260 under the guidance of CAC (v1.0 on 2024-09-09; v2.0 on 2025-09-15), is China’s first cross-scenario risk classification and mitigation scheme for AI — a turning point from the scenario-specific regulatory approach (2022-2025) toward a more systemic, horizontal governance paradigm.
Version 1.0 (2024-09) — Highlights
Three Governance Principles
- Inclusive and prudent, ensuring safety (包容审慎,确保安全);
- Risk-oriented, agile governance (风险导向,敏捷治理);
- Combining technology and management, coordinated response (技管结合,协同应对).
Four Categories of Risk
- Model and algorithm security risks: poor explainability, bias / discrimination, insufficient robustness, theft.
- Data security risks: unlawful training data, leakage of personal information, data poisoning.
- AI system security risks: adversarial attacks, backdoors, supply chain.
- AI application security risks:
  - Social ethics: algorithmic discrimination, information cocoons;
  - Digital trust: misuse of deep synthesis, information forgery;
  - Cyberspace: AI-enabled cyberattacks;
  - Economy and employment: substitution effects;
  - National security: terrorism, dual civilian-military use.
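The four-category checklist above can be encoded as a simple data structure for internal self-assessment. This is an illustrative sketch only: the dictionary keys and the `unassessed_risks` helper are assumptions of this page, not part of the Framework; the risk names follow the v1.0 list above.

```python
# Illustrative sketch: the Framework's four risk categories (v1.0) as a
# Python dict. Keys and the helper below are hypothetical conventions.
RISK_CATEGORIES = {
    "model_and_algorithm": [
        "poor explainability", "bias / discrimination",
        "insufficient robustness", "model theft",
    ],
    "data_security": [
        "unlawful training data", "personal information leakage",
        "data poisoning",
    ],
    "system_security": [
        "adversarial attacks", "backdoors", "supply chain",
    ],
    "application_security": [
        "algorithmic discrimination", "information cocoons",
        "deep synthesis misuse", "AI-enabled cyberattacks",
        "employment substitution", "dual-use / national security",
    ],
}

def unassessed_risks(assessed: set[str]) -> dict[str, list[str]]:
    """Return, per category, the listed risks not yet covered by an assessment."""
    return {
        cat: [r for r in risks if r not in assessed]
        for cat, risks in RISK_CATEGORIES.items()
    }
```

Keeping the catalogue as data rather than prose lets a compliance team diff their internal assessments against each Framework revision.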
Technical Response and Integrated Governance
For each risk category, the Framework provides checklists of technical measures and governance measures.
Version 2.0 (2025-09) — Evolutions
Compared with 1.0, 2.0 deepens the Framework along the following lines:
- Finer risk classification and grading: a more granular risk map under each of the four major categories;
- Responsibility of governance actors: explicit differentiation among model developers, service providers, and users;
- Full lifecycle coverage: requirements across R&D → deployment → operations → decommissioning;
- Coverage of new forms such as Agents and embodied intelligence: not addressed in 1.0, now incorporated;
- Governance capacity building: assessment systems, monitoring and early warning, emergency response.
Relationship with Downstream Rules
- Upstream guidance: provides a risk-classification frame for technical standards such as TC260-003.
- Parallel to departmental rules: does not amend the Deep Synthesis Provisions or Generative AI Interim Measures, but supplies a unified risk vocabulary across rules.
- International interface: an important text carrier for China’s participation in global AI governance (UN, G20, BRICS dialogue).
Practical Significance for Corporate Compliance
- Upgraded compliance baseline: from “satisfy each departmental rule” to “conduct a systematic self-assessment against the Framework’s risk categories.”
- Internal governance documentation: leading firms now build internal AI governance handbooks around the Framework’s spine.
- Communication language with regulators: using the Framework’s terminology in filings and regulatory dialogues supports clearer exchange.
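The self-assessment practice described above can be sketched as a coverage check: map each internal control document onto a Framework risk category and flag categories with no coverage. The category keys, control file names, and `coverage_gaps` helper are all hypothetical illustrations, not anything prescribed by the Framework.

```python
# Hypothetical sketch: flag Framework risk categories that no internal
# control document is mapped to. Names below are illustrative only.
FRAMEWORK_CATEGORIES = {
    "model_and_algorithm", "data_security",
    "system_security", "application_security",
}

def coverage_gaps(controls: dict[str, str]) -> set[str]:
    """Categories not covered by any mapped internal control document."""
    return FRAMEWORK_CATEGORIES - set(controls.values())

# Example: two controls mapped, two categories left uncovered.
controls = {
    "training-data-review.md": "data_security",
    "red-team-playbook.md": "system_security",
}
print(sorted(coverage_gaps(controls)))
# → ['application_security', 'model_and_algorithm']
```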
Comparison with Other Jurisdictions
- Vis-à-vis EU AI Act risk tiering: AI Act is hard law + list-based; this Framework is soft law + classification guidance.
- Vis-à-vis NIST AI RMF: both are cross-scenario risk-management frameworks, but NIST emphasizes process more, while this Framework emphasizes the risk catalogue more.
Source Text and Archival Copies
| Version | Source | Link |
|---|---|---|
| 1.0 (Chinese) | CAC / TC260 | cac.gov.cn |
| 1.0 (bilingual PDF) | TC260 | tc260.org.cn |
| 2.0 (Chinese) | CAC / TC260 | — |
| 1.0 (English) | TC260 / CAC | official English release (2024); see TC260 / CAC English pages |
Version History
| Date | Event |
|---|---|
| 2024-09-09 | Version 1.0 released at the Third China Cyberspace Civilization Conference |
| 2025-09-15 | Version 2.0 released (Eighth Digital China Summit / WAIC venues) |
| Future versions | Further iteration likely, with expanded coverage of Agents and embodied intelligence |