
Transparency Report

Snapshot: covers disclosures publicly available through openai.com/safety and openai.com/transparency as of 23 April 2026, with a focus on the Influence and Cyber Operations Reports series begun in Q1 2024.

1. Boundaries and structure of OpenAI “transparency”


Unlike legacy platforms such as Google (transparency reports since 2010) and Meta (since 2013), OpenAI’s transparency disclosures were fragmented across five document classes and were consolidated under openai.com/transparency only in October 2025:

| Category | First issued | Frequency | Principal content |
| --- | --- | --- | --- |
| Threat Intel Reports | February 2024 | Roughly quarterly since Q1 2024 | Discovery and disruption of influence operations, cyber operations, and disinformation campaigns |
| Government-request reports | September 2025 (first) | Semi-annual | Volume, compliance rate, and country distribution of government data requests |
| Usage Policy enforcement statistics | Ad hoc (one-off releases in spring 2024 and Q4 2025) | Irregular | Account-ban counts, detection categories |
| Copyright and data disclosures | Multiple blog posts since December 2023 | Event-driven | Media Manager, licensing partnerships, litigation responses |
| Election-cycle transparency | January 2024 (US), June 2024 (EU) | Periodic | Redirection mechanisms, partner detection, watermarking (C2PA) |

Observation: OpenAI issued its first government-request report only in September 2025, 15 years after Google and 12 years after Meta. This reflects both OpenAI’s historical trajectory as a “non-platform” company and the degree to which mandatory disclosure pressure from the DSA, California SB 53, and the Seoul commitments is what actually drives transparency practice.

2. Threat Intel Reports (Influence and Cyber Operations)

| Report | Date | Representative disclosures |
| --- | --- | --- |
| “Disrupting Malicious Uses of AI by State-Affiliated Threat Actors” | 14 February 2024 | Five account networks: Charcoal Typhoon (PRC), Salmon Typhoon (PRC), Crimson Sandstorm (IRGC), Emerald Sleet (DPRK), Forest Blizzard (GRU) |
| “AI and Covert Influence Operations” | 30 May 2024 | Five operations: Doppelganger (Russia), Spamouflage (PRC), Bad Grammar (Russia), International Union of Virtual Media (Iran), STOIC (Israeli commercial) |
| “An Update on Disrupting Deceptive Uses of AI” | 9 October 2024 | 20+ cumulative operations; first disclosed cyber-attack cases: SweetSpecter (PRC), CyberAv3ngers (Iran), Storm-0817 (Iran) |
| “Influence and Cyber Operations Report (Q2 2025 update)” | June 2025 | Peer Review (PRC academic manipulation), Sponsored Discontent (PRC domestic stability) |
| “Q4 2025 Threat Intel” | December 2025 | First detailed account of Sora 2 synthetic-media misuse and bans |
| “Q1 2026 Threat Intel” | March 2026 | Microsoft / OpenAI joint disclosure of GPT-5.x used in automated spear-phishing |

A typical Threat Intel Report includes five recurring elements (see the sketch after this list):

  1. Operation summary: name, attribution, target, scale
  2. Usage detail: how ChatGPT / the GPT API was used (debugging code, generating translations, drafting social-media posts)
  3. Attribution evidence: joint attribution with Microsoft Threat Intelligence, Meta Security, Graphika, SIO
  4. Intervention: account bans + notification of affected platforms and governments
  5. Reflection: “uplift” evaluation of ChatGPT capabilities (OpenAI’s characteristic narrative: “no substantive new capability provided”)
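
That recurring structure can be read as an informal schema. A minimal sketch as a Python data model, assuming illustrative class and field names; this is not an official OpenAI format:

```python
from dataclasses import dataclass, field

# Illustrative only: class and field names are assumptions inferred from
# the five recurring report elements, not an official OpenAI schema.
@dataclass
class ThreatIntelEntry:
    operation_name: str                  # e.g. "Doppelganger"
    attribution: str                     # actor and state affiliation
    target: str                          # audience or sector targeted
    scale: str                           # accounts, posts, reach
    usage_detail: list[str]              # how ChatGPT / the GPT API was used
    attribution_partners: list[str] = field(default_factory=list)  # e.g. MSTIC, Graphika
    intervention: str = "account bans + platform/government notification"
    uplift_assessment: str = "no substantive new capability provided"
```

The defaults encode the report boilerplate noted above: the intervention and “uplift” language vary little across entries, while the operation detail does.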

2.3 Critiques: selectivity in attribution and the “safety theatre” problem


Kirsten Martin (Notre Dame), applying the “transparency theatre” framework developed in her 2024 MIS Quarterly Executive article, argues:

  • Reports concentrate on operations by adversary states (PRC / Russia / Iran / DPRK) and rarely disclose Western commercial or state sources
    • The lone exception is the May 2024 “STOIC” (Israeli commercial firm), disclosed with markedly less depth than the PRC cases
  • The “we detected and stopped it” narrative reinforces “platforms are self-governing,” lowering political demand for hard-law mandatory disclosure

Josh Goldstein (Stanford Internet Observatory), in the June 2024 Brookings report:

  • OpenAI’s operation disclosures agree with independent academic observation on sample selection (the named operations do exist), but scale may be underestimated: OpenAI sees only usage of its own services, while a cross-platform view requires cooperation from Meta, X, and Telegram
  • OpenAI does not publicly release complete banned-account user IDs, prompt samples, or conversation-length distributions, which makes independent replication difficult

Joshua Tucker (NYU CSMaP), in a March 2025 PNAS commentary:

  • Threat Intel Reports are an important contribution to the “adversary behaviour dataset”, but the absence of infrastructure-level transparency (training data, internal red-team cadence) leaves “platform-level governance quality” reliant on self-report
  • Recommends that OpenAI adopt the data-sharing channels proposed by the Stanford Platform Governance Research Network

Graphika’s 2025 annual report is relatively positive:

  • Considers OpenAI’s Threat Intel Reports higher in quality than most commercial threat intelligence, but publication frequency and granularity lag the Meta Adversarial Threat Report

3. Government-request report (first in September 2025)


OpenAI’s September 2025 inaugural Government Requests Transparency Report discloses:

  • Total compulsory legal requests (semi-annually aggregated, predominantly US; single-period counts in the low-double-digit to low-triple-digit range)
  • The proportion of emergency (warrantless) requests, a minority of all legal requests
  • Full-compliance / partial-compliance / objection proportions (full compliance typically the majority; objections in the single-digit-percentage range)
  • Country-level breakdown

Precise figures should be read from the official openai.com/transparency report; compared with Google’s and Meta’s decade-plus of regular disclosure, OpenAI’s request volumes remain at a low-density early stage.

Order-of-magnitude comparisons drawn from public transparency reports (semi-annual or annual):

  • OpenAI: low double- to low triple-digits per half-year; publicly-estimated MAU in the hundreds of millions
  • Google: tens of thousands per half-year; MAU in the billions
  • Meta: hundreds of thousands per half-year; MAU in the billions
  • Microsoft: tens of thousands per half-year
  • Apple: thousands per half-year

Exact figures should be read from each company’s official transparency report.
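
To make “request density” concrete, a back-of-envelope sketch dividing requests by user base, using illustrative midpoints of the order-of-magnitude bands above (none of these are reported figures):

```python
# Illustrative request-density arithmetic. The numbers are midpoint-style
# guesses within the order-of-magnitude bands above, not reported figures.
ILLUSTRATIVE = {
    # company: (requests per half-year, monthly active users)
    "OpenAI": (500, 400_000_000),
    "Google": (50_000, 3_000_000_000),
    "Meta":   (300_000, 3_000_000_000),
}

for company, (requests, mau) in ILLUSTRATIVE.items():
    per_million_mau = requests / (mau / 1_000_000)
    print(f"{company}: ~{per_million_mau:.1f} requests per million MAU per half-year")
```

Even with generous assumptions for OpenAI, its per-user request density comes out one to two orders of magnitude below Google’s and Meta’s.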

Interpretation: OpenAI’s request density is materially below that of service providers of similar scale. Possible reasons:

  1. Product nature: ChatGPT is primarily an interaction tool rather than a social network, so content has low third-party visibility
  2. Enforcement pathway: law enforcement less often treats ChatGPT as a direct evidence source
  3. Completeness in question: the first report does not include detailed numbers for national-security requests (e.g. National Security Letters), a key deduction item in Ranking Digital Rights (RDR) assessments of transparency reports
  4. Time window: 2024 was an early stage in OpenAI’s handling of government requests; processes may not yet have matured

4. Usage Policy enforcement statistics

Ad hoc Usage Policy enforcement data points:

| Date | Disclosed content | Scale |
| --- | --- | --- |
| April 2024 | Q1 2024 election-related bans | Tens of accounts |
| October 2024 | Cumulative influence-operation bans | 20+ networks |
| May 2025 | Sora 1 → Sora 2 transitional CSAM bans | Absolute number not disclosed |
| December 2025 | Annual Trust & Safety action summary | Large-scale (aggregated; see the official release) |

Critique (Ranking Digital Rights 2025 Corporate Accountability Index):

  • No regular aggregated statistics (no quarterly release like the Meta Community Standards Enforcement Report)
  • No category breakdown for bans (how many bans per Usage-Policy category)
  • No appeals data (rates of successful reinstatement after ban)
  • No false-positive data (false-positive rate of automated detection)

In recent RDR assessments, OpenAI’s “enforcement transparency” sub-score is materially below that of mature platforms such as Google and Meta, placing it in the mid-to-low band alongside Anthropic (specific scores should be taken from the RDR annual report).

5. Copyright and training-data disclosures

| Date | Event |
| --- | --- |
| September 2023 | Authors Guild class action (George R. R. Martin and others) |
| December 2023 | NYT sues OpenAI and Microsoft (training-data infringement) |
| April 2024 | Media Manager first previewed (allowing rightsholders to opt out in advance) |
| 2024–2025 | Data-licensing deals signed with AP, Axel Springer, FT, News Corp, The Atlantic, Reddit, Shutterstock, and others |
| May 2025 | Media Manager formally launched (opt-out), but criticised for insufficient coverage |
| December 2025 | Discovery in the NYT case discloses partial training-set samples |
| January 2026 | Summary-judgment motion in the Authors Guild case |
| March 2026 | OpenAI’s first Model Training Data Summary (GPAI CoP compliance) |

5.2 The March 2026 GPAI Transparency Template


Under the Transparency chapter of the EU GPAI Code of Practice, OpenAI submitted its Training Data Summary Template in March 2026, disclosing for the first time:

  • Overall training-data category proportions (web / code / books / images / synthetic / human)
  • List of principal licensors (no contract detail)
  • Data-acquisition method (crawl / purchase / partner / synthetic)
  • Filter-method overview (no specific filter rules)

Still not disclosed:

  • Specific token counts (GPT-5 scale estimated by third parties; OpenAI has not confirmed)
  • Common Crawl slice used
  • Sources of human-feedback data (Scale AI, Surge AI, Invisible Technologies, etc.)
  • Models and scale for synthetic data
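
Read together, the two lists define what the template covers and what it withholds. A minimal sketch as a Python data model, assuming illustrative field names drawn from the lists above; this is not the actual template schema:

```python
from dataclasses import dataclass

# Illustrative model of the March 2026 summary's coverage; field names
# are assumptions drawn from the lists above, not the template schema.
@dataclass
class TrainingDataSummary:
    # Disclosed
    category_proportions: dict[str, float]  # e.g. {"web": ..., "code": ..., "books": ...}
    principal_licensors: list[str]          # names only, no contract detail
    acquisition_methods: list[str]          # crawl / purchase / partner / synthetic
    filter_overview: str                    # high-level description, no specific rules
    # Withheld: left unpopulated in the public summary
    token_counts: None = None
    common_crawl_slice: None = None
    human_feedback_sources: None = None
    synthetic_data_models: None = None
```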

Academic assessment (Ed Newton-Rex / Fairly Trained April 2026 blog post):

  • “Better than zero, weaker than Stability AI Stable Diffusion 3’s training-data card”
  • “Meets the literal wording of EU compliance but does not solve the practical rights-assertion difficulties of creators”

6. Election-cycle transparency

OpenAI’s January 2024 blog post How OpenAI Is Approaching 2024 Elections:

  1. Prohibits using ChatGPT to produce content impersonating candidates
  2. Prohibits using ChatGPT as a voting chatbot
  3. ChatGPT redirects US election queries to CanIVote.org (see the sketch after this list)
  4. DALL-E adds C2PA metadata and provenance watermarking
  5. Retrospective analysis published December 2024
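
A minimal sketch of the redirect behaviour in item 3, assuming a simple keyword trigger; the function, keyword list, and notice text are hypothetical, not OpenAI’s production logic:

```python
# Hypothetical illustration of item 3's redirect behaviour. The keyword
# trigger and notice text are assumptions, not OpenAI's production logic.
US_VOTING_KEYWORDS = ("where do i vote", "polling place", "register to vote")

def election_redirect(user_query: str) -> str | None:
    """Return a redirect notice for US voting-procedure queries, else None."""
    q = user_query.lower()
    if any(keyword in q for keyword in US_VOTING_KEYWORDS):
        return "For authoritative US voting information, see https://www.canivote.org"
    return None
```

In production such routing would presumably sit behind a classifier rather than keyword matching; the sketch only fixes the interface: detect the query class, return an authoritative pointer.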

Critiques:

  • The December 2024 retrospective was criticised for heavily selective sampling and for not disclosing specific error rates
  • Independent research by Joshua Tucker (CSMaP) and Brendan Nyhan (Dartmouth) shows that ChatGPT still generated specific misleading content during the 2024 election period, with the redirect path having only partial effect
  • Indian, Indonesian, Brazilian, and other elections received transparency disclosure materially weaker than that for the US

7. Regulatory obligations and compliance status

| Regime | Relevant obligations | OpenAI compliance status |
| --- | --- | --- |
| EU DSA Art. 15, 24, 42 | VLOP transparency reports | Semi-annual releases following ChatGPT’s 2024 VLOP designation (first in October 2024) |
| EU DSA Art. 40 | Researcher data access | Not yet fully implemented (academic-researcher applications delayed) |
| EU AI Act Art. 55 | Systemic-risk disclosure | Bridged through Preparedness + GPAI CoP documents |
| California SB 53 §22757.11 | Critical safety incident reporting | Commitment to comply from Q1 2026 |
| Seoul Commitments (May 2024) | Transparency about safety decisions | Preparedness + System Cards as compliance evidence |
| China Generative AI Interim Measures (《生成式人工智能服务管理暂行办法》) | Content labelling, handling of unlawful content | Not applicable (no China operations) |

8. Industry practice: internal operation of transparency reporting


The internal division of labour can be inferred from former-employee interviews, official-blog authorship, and academic collaborations with GovAI and Stanford HAI:

  • Intelligence & Investigations Team (previously Disruption Intel): produces the Threat Intel Reports; public authorship and hiring notices suggest a small team on the order of dozens
  • Trust & Safety / Integrity: handles Usage Policy enforcement statistics and appeals
  • Legal + Privacy: handles government-request reports
  • Policy Research / Global Affairs: handles election-cycle and DSA / AI Act compliance documents
  • Developer Platform Team: handles creator tools (Media Manager, C2PA)

External partners (public attribution):

  • Microsoft Threat Intelligence Center (MSTIC): influence-operation attribution
  • Graphika, SIO (successor to the Stanford Internet Observatory): cross-platform influence research
  • NCMEC, Thorn: CSAM detection
  • C2PA Steering Committee: content provenance

9. Peer comparison

| Dimension | OpenAI | Anthropic | Google (AI) | Meta (Llama) | xAI |
| --- | --- | --- | --- | --- | --- |
| Threat Intel | Quarterly (from Q1 2024) | No stand-alone series | Integrated with TAG | Adversarial Threat Report | None |
| Government Requests | Semi-annual (from September 2025) | No stand-alone report | By product | Semi-annual | None |
| Usage Policy enforcement stats | Irregular | Irregular | By product | Quarterly | None |
| Training-data summary | First GPAI summary, March 2026 | Partially in Model Card | By product | Llama model card | None |
| Election transparency | 2024 and 2025 cycles | 2024 blog | Periodic reports | Present | None |
| Unified Transparency Hub | Launched October 2025 | Transparency Hub 2025 | Long-running | Long-running | None |

Structural observation: OpenAI’s transparency reporting caught up quickly during 2024–2026, but given its late start and fragmented structure, a gap remains relative to the systematic disclosure machinery Google and Meta built over more than a decade. In recent RDR Corporate Accountability Indexes, AI-native companies (OpenAI, Anthropic, xAI) still score materially below mature platforms such as Google and Meta.