United States — Content Labeling and Provenance

The US posture is state-led: the states have enacted binding rules while federal measures remain in deliberation, supplemented by voluntary industry commitments:

  • Federal executive orders: EO 14110 (Oct 2023, revoked Jan 2025) had directed the Department of Commerce / NIST to study watermarking and provenance standards. Successor orders (EO 14179, EO 14365) pivot toward limiting regulation and preempting state AI law.
  • Federal legislation (pending):
    • COPIED Act (provenance, artist protection).
    • NO FAKES Act (non-consensual deepfakes, rights of publicity).
    • DEFIANCE Act (civil remedies for intimate forgery).
  • State law: California, Texas, Washington, Minnesota, and others have enacted AI-labeling-related statutes, focused principally on elections and sexual-abuse forgery contexts.
  • Industry commitments: the 2023 White House–led voluntary commitments by major AI companies (watermarking, provenance), together with C2PA’s technical specifications.
Obligations fall unevenly across actors:

  • Publishers: required under state election laws to disclose the synthetic origin of election-related material.
  • Large platforms: must verify, label, and take down suspicious synthetic content.
  • Developers: most state laws do not yet impose labeling duties directly on foundation-model developers.
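To make the provenance plumbing behind the C2PA commitments mentioned above concrete, here is a minimal sketch of checking whether a manifest declares AI-generated content. The flat JSON shape is a simplified illustration only; the actual C2PA specification defines a signed binary container (JUMBF/CBOR), though the assertion label `c2pa.actions` and the source type `trainedAlgorithmicMedia` do come from the real C2PA and IPTC vocabularies.

```python
import json

# Hypothetical, simplified manifest for illustration; a real C2PA manifest
# is a signed binary structure, not flat JSON like this.
SAMPLE_MANIFEST = json.dumps({
    "claim_generator": "ExampleTool/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created",
                     "digitalSourceType": "trainedAlgorithmicMedia"}
                ]
            },
        }
    ],
})

def is_labeled_ai_generated(manifest_json: str) -> bool:
    """Return True if any recorded action declares an AI-generated source type."""
    manifest = json.loads(manifest_json)
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                    return True
    return False

print(is_labeled_ai_generated(SAMPLE_MANIFEST))  # True for the sample above
```

A platform-side check like this is only as trustworthy as the manifest's signature chain, which the sketch deliberately omits.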
The substantive rules cluster in three areas:
  1. Elections: distributing unlabeled synthetic content within a statutory window before an election is unlawful (most states set the window at 60–90 days).
  2. Non-consensual sexual synthesis: almost every state law prohibits it and permits civil remedies; some states add criminal penalties.
  3. General consumer contexts: still largely governed by industry self-regulation, with no unified federal mandate.
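The election-window rule in item 1 reduces to a simple date comparison. The sketch below assumes a hypothetical 60-day window for illustration; the actual window length varies by state and is set by each statute.

```python
from datetime import date, timedelta

def in_pre_election_window(publish_date: date, election_date: date,
                           window_days: int = 60) -> bool:
    """True if publish_date falls within window_days before the election,
    inclusive of election day. window_days is statute-specific (often 60-90)."""
    window_start = election_date - timedelta(days=window_days)
    return window_start <= publish_date <= election_date

# Example: for a Nov 3, 2026 election, a 60-day window opens Sep 4, 2026.
print(in_pre_election_window(date(2026, 10, 1), date(2026, 11, 3)))  # True
print(in_pre_election_window(date(2026, 8, 1), date(2026, 11, 3)))   # False
```

Compliance tooling would layer the state-specific window length and content classification on top of a check like this.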
Enforcement runs through three channels:
  • FTC: pursues deceptive AI promotion under Section 5 of the FTC Act (unfair or deceptive acts or practices).
  • State attorneys general: the primary enforcers at the state level.
  • Private right of action: some state laws (and NO FAKES-type proposals) grant individuals a direct right to sue.