
Company Practice

Why “Company Practice” is a separate axis


Voluntary industry commitments are a central pillar of contemporary AI governance. In several critical areas, company-authored frameworks, policies, and commitments predated and shaped the regulatory texts that followed:

  • Anthropic’s Responsible Scaling Policy (RSP, Sept 2023) preceded the EU AI Act GPAI chapter (2024) and California SB 53 (2025)
  • The NIST AI RMF (2023) exported a “risk management” paradigm that several U.S. states later adopted as a rebuttable compliance-by-reference path
  • The Frontier Model Forum (2023) supplied the shared industry vocabulary underneath the White House Voluntary Commitments
  • The GPAI Code of Practice (2025) turned voluntary industry practice into a formal presumption-of-conformity tool under the AI Act

But 2025–2026 shows structural erosion:

  • Anthropic RSP v3 (Feb 2026) removes the pause commitment
  • OpenAI Preparedness v2 (Apr 2025) simplifies thresholds (deleting Low / Medium tiers)
  • Google DeepMind (2024) deleted its military-use prohibition from the AI Principles
  • xAI rejects the industry self-regulation paradigm outright

This exposes the inherent limit of voluntary commitments: absent binding law, the “floor” of self-regulation is set by the least self-disciplined player in the market.

This axis archives publicly released, durably accessible company documents under that structural framing. Each company page organises materials in the same five categories.

See Methodology §3–4 for archival principles and attribution rules.

| Company | Safety framework | Positioning | Page |
| --- | --- | --- | --- |
| Anthropic | RSP v3 (Feb 2026, pause commitment withdrawn) | Safety-first | anthropic |
| OpenAI | Preparedness v2 (Apr 2025, simplified thresholds) | Commercial acceleration | openai |
| Google DeepMind | FSF v3 (Apr 2026, expanded CCL + TCL) | Balanced | google-deepmind |
| xAI | Weak (no counterpart document) | Anti-self-regulation | en/companies/xai |

| Company | Role | Positioning | Page |
| --- | --- | --- | --- |
| NVIDIA | Single-vendor dominance in AI GPUs + export-control focal point | “Selling shovels” in the AI gold rush | nvidia |

| Company | Safety framework | Positioning | Page |
| --- | --- | --- | --- |
| Mistral AI (France) | No standalone framework; relies on open source + GPAI CoP signature | Sovereign European AI | mistral |

| Company | Safety framework | Positioning | Page |
| --- | --- | --- | --- |
| Baidu 百度 | None standalone; participates in national standard-setting (TC260) | National-team player | baidu |
| Alibaba 阿里巴巴 | None standalone; open source across the line (Qwen 通义千问) | Open-source ecosystem | alibaba |
| ByteDance 字节跳动 | None standalone; embedded compliance (CAC + Party committee) | Globally fragmented | bytedance |
| Tencent 腾讯 | None standalone; full-line multimodal open source (Hunyuan 混元) | Platform-embedded AI | tencent |

| Company | Safety framework | Positioning | Page |
| --- | --- | --- | --- |
| ZhipuAI 智谱 | None standalone; listed on HKEX (2513.HK) | National-team + academic heritage | zhipuai |
| Moonshot 月之暗面 | None standalone; open-source Kimi K2.5 | Long-context + paid breakout | moonshot |
| MiniMax | None standalone; Talkie is an overseas hit | AI companionship + going global | minimax |
| DeepSeek | None standalone; maximalist open source (MIT) | The “DeepSeek moment” of Jan 2025 | deepseek |

U.S. frontier labs — “structured self-regulation”:

  • Publish standalone safety frameworks (RSP / Preparedness / FSF)
  • Publicly commit to a capability-threshold → mitigation mapping
  • Coordinate through industry venues (Frontier Model Forum, GPAI CoP)
  • 2025–2026: broad erosion (Anthropic withdraws pause, OpenAI simplifies, Google deletes the military prohibition)

EU — “institutionalised self-regulation”:

  • Few standalone corporate frameworks (Mistral has no RSP-equivalent either)
  • Presumption of conformity achieved via GPAI Code of Practice signature
  • Open source + hard law used as a joint transparency mechanism
  • Self-regulation tightly coupled to the AI Act

China — “embedded self-regulation”:

  • No standalone corporate safety framework documents
  • Self-regulation equals participation in national standard-setting (TC260-003, the AI Safety Governance Framework)
  • Compliance delivered through CAC algorithm filings (算法备案) plus internal Party-committee coordination
  • Open source functions as the de facto transparency mechanism (Alibaba Qwen 通义千问, part of Baidu ERNIE 文心, DeepSeek maximalist)

Cross-jurisdictional observations (2026 Q1 snapshot)

Signature status for the three chapters of the GPAI Code of Practice:

| Company | Transparency | Copyright | Safety & Security |
| --- | --- | --- | --- |
| Anthropic | Signed | Signed | Signed |
| Google DeepMind | Signed | Signed | Signed |
| Microsoft | Signed | Signed | Signed |
| Mistral | Signed | Signed | Signed |
| OpenAI | Signed | Signed | Signed with partial reservations |
| Meta | Signed | Objected | Signed |
| xAI | Signed | Signed | Objected |
| Alibaba / Baidu / DeepSeek / ByteDance | Not signed | Not signed | Not signed |

Compute thresholds and compliance pressure

  • California SB 53 10^26 FLOP threshold: Claude Opus, GPT-5, Gemini Ultra, Grok 4, Llama 4 Max
  • EU AI Act 10^25 FLOP systemic-risk threshold: the above plus Mistral Large 2/3, and likely Qwen 3.5 and ERNIE 5.0
  • China filings (备案): services with ≥ 1 million users trigger mandatory safety assessment (Art. 22 of the 2026-04 Anthropomorphic Interaction Measures 人工智能拟人化互动服务管理暂行办法)

Positioning at a glance:

  • “Safety first”: Anthropic
  • “Acceleration first”: OpenAI, xAI
  • “Balanced safety”: Google DeepMind
  • “Sovereign AI”: Mistral
  • “Serving the state”: Baidu
  • “Open-source ecosystem”: Alibaba, DeepSeek
  • “Geopolitical crossfire”: ByteDance (involuntarily)
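The numeric triggers listed above can be expressed as a simple classifier. This is a hedged sketch under stated assumptions: the constants mirror the figures cited in this section, but the function and its names are illustrative, the real legal tests involve more than a single number (the AI Act presumption can be rebutted or imposed by designation), and the China trigger applies per service under the cited Measures, not per model.

```python
# Illustrative only: which disclosure regimes a model/service may trigger,
# using the thresholds cited in this section. Not legal advice.

SB53_FLOP = 1e26         # California SB 53 training-compute threshold
AIACT_FLOP = 1e25        # EU AI Act GPAI systemic-risk presumption
CHINA_USERS = 1_000_000  # user count triggering mandatory safety assessment

def applicable_regimes(training_flops: float, cn_service_users: int = 0) -> list[str]:
    """Return the regimes whose numeric trigger is met (sketch only)."""
    regimes = []
    if training_flops >= SB53_FLOP:
        regimes.append("California SB 53")
    if training_flops >= AIACT_FLOP:
        regimes.append("EU AI Act systemic risk (presumption)")
    if cn_service_users >= CHINA_USERS:
        regimes.append("China mandatory safety assessment")
    return regimes
```

Note the nesting this makes visible: because 10^26 > 10^25, every model over the SB 53 line also meets the EU systemic-risk presumption, which is why the EU list in this section is a strict superset of the California list.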

Five-category archive structure (per company)

  1. Usage / Acceptable Use Policy — what users may and may not do with the model
  2. Model Card / System Card — capabilities, training data, evaluations
  3. Safety Framework — responsible scaling, preparedness, frontier-risk management
  4. Transparency Report — periodic disclosures (data requests, content moderation, etc.)
  5. Red-Team & Eval Disclosures — third-party evaluations and red-team results

Each item carries a snapshot_date, a link to the original source, and an archived copy on this site (PDFs in public/archives/). No normative framing — we list the original clauses and factual differences, not editorial judgments.
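The per-item record described above can be sketched as a small data structure. Only snapshot_date and the public/archives/ location come from the text; every other field name and the validation logic are assumptions about how such an archive might be modelled, not the site's actual schema.

```python
# Hedged sketch of an archive item; field names other than snapshot_date
# are illustrative, not the site's real schema.
from dataclasses import dataclass

CATEGORIES = (
    "usage-policy",         # 1. Usage / Acceptable Use Policy
    "model-card",           # 2. Model Card / System Card
    "safety-framework",     # 3. Safety Framework
    "transparency-report",  # 4. Transparency Report
    "red-team-evals",       # 5. Red-Team & Eval Disclosures
)

@dataclass(frozen=True)
class ArchiveItem:
    company: str        # page slug, e.g. "anthropic"
    category: str       # one of CATEGORIES
    snapshot_date: str  # ISO date of capture
    source_url: str     # link to the original document
    archived_path: str  # local copy, e.g. a PDF under public/archives/

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
```

Keeping the record immutable (frozen=True) matches the archival intent: a snapshot is a fixed capture, and a new crawl produces a new item rather than mutating an old one.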

After strict AI-relevance filtering, the following companies may be added later:

  • United States: Meta (the Llama team is important, but open-source governance can be represented through Mistral for v1)
  • European Union: Aleph Alpha, Stability AI, Black Forest Labs (confirmed excluded)
  • China: Baichuan 百川智能, StepFun 阶跃星辰, 01.AI 零一万物 (medium priority among AI-native startups); Huawei (confirmed excluded)

Current focus: deepening the existing 13 companies. Five frontier labs (Anthropic, OpenAI, Google DeepMind, ByteDance, DeepSeek) already have substantive analysis across all five subpages; the remaining eight are rendered as consolidated index pages, with subpages expanded as company activity warrants.