
United States — Risk Classification

| Rule | Legal tier | Relationship to risk classification |
| --- | --- | --- |
| NIST AI RMF 1.0 (2023) | Technical specification | Organised around a risk-management process, not graded risk tiers |
| EO 14179 (Jan 2025) | Executive order | Revokes EO 14110 and reshapes the federal posture |
| Trump AI Action Plan (Jul 2025) | Strategy | Deregulation + global-dominance agenda |
| EO 14365 (Dec 2025) | Executive order | Attempts to preempt state AI laws |
| Colorado AI Act (in force Jun 30, 2026) | State law | First US state law to introduce a “High-Risk AI System” category |
| California SB 53 (Jan 1, 2026) | State law | Frontier-AI transparency |
| Texas TRAIGA (Jan 1, 2026) | State law | Prohibits harmful uses |
| NYC Local Law 144 (2023) | Municipal law | Employment-specific |
  • NIST AI RMF does not assign “risk levels”; it lays out a risk-management process (GOVERN / MAP / MEASURE / MANAGE).
  • Its seven “trustworthiness characteristics” (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; fair with harmful bias managed) carry no graded intensity.
  • EO 14110 (Biden) had introduced a 10²⁶ FLOP reporting obligation — an embryonic “frontier model” tier — but was revoked in Jan 2025.
  • EO 14179 and the AI Action Plan (Jul 2025) explicitly rule out a unified federal intensity tier.

State laws: three tiering models (in force from 2026)


2026–2027 marks the ramp-up phase of US state AI laws, which are crystallising into three distinct structural patterns:

  1. A “consequential decision” high-risk tier (Democratic model): Colorado AI Act (in force Jun 30, 2026) — a single “High-Risk” category covering eight types of consequential decision.
  2. A compute-threshold frontier-model tier (Democratic + tech-industry model): California SB 53 (Jan 1, 2026) — models above 10²⁶ FLOP must publish transparency reports and file critical incident reports.
  3. A prohibited-use tier (Republican model): Texas TRAIGA (Jan 1, 2026) — prohibits specific harmful uses without classifying general AI systems.
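The SB 53 trigger in the list above is a bright-line compute test rather than a use-based tier. As a minimal sketch of how a provider might screen a training run against it (the 6 × parameters × tokens estimate of dense-transformer training compute is a common industry heuristic, not anything the statute specifies):

```python
# Screen a training run against the 10^26 FLOP threshold used by
# California SB 53 (and formerly by EO 14110's reporting obligation).
# The 6 * N * D compute estimate is an assumption (a standard heuristic
# for dense transformers), not a method defined in the law itself.

SB53_THRESHOLD_FLOP = 1e26

def estimated_training_flop(params: float, tokens: float) -> float:
    """Estimate total training compute with the common 6ND heuristic."""
    return 6 * params * tokens

def exceeds_sb53_threshold(params: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the 10^26 FLOP line."""
    return estimated_training_flop(params, tokens) >= SB53_THRESHOLD_FLOP

# Example: a 70B-parameter model trained on 15T tokens lands around
# 6.3e24 FLOP, well below the threshold.
print(exceeds_sb53_threshold(70e9, 15e12))
```

Because the trigger is a single scalar comparison, the compliance question reduces to how training compute is measured and documented, which is where the transparency-report obligation does the real work.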

The Dec 2025 EO 14365 threat: the Trump administration stood up an AI-litigation task force to challenge state AI laws on constitutional and federal-preemption grounds, although the legal consensus is that an executive order cannot unilaterally preempt state legislation. Federal litigation is expected in Q1–Q2 2026, and state attorneys general have signalled that enforcement will continue.

Sectoral classification

  • Finance: SR 11-7 model risk management (2011, banking).
  • Medical: FDA/IMDRF SaMD risk categorisation (Categories I–IV) + the 2024 AI/ML Predetermined Change Control Plan guidance.
  • Employment: NYC LL 144 + EEOC 2023 guidance.
  • CFPB applying ECOA / FCRA to AI-driven credit decisions.
  • HUD applying FHA to AI-driven tenant selection.
Key takeaways

  1. No unified federal tiering exists, and none is expected in the near term after EO 14179.
  2. State tiering gravitates toward the lowest common denominator (a single “high-risk” category): simpler than the EU’s multi-tier structure but also incompatible with it.
  3. Sectoral regulators classify more finely, but each only within its vertical.
  4. The NIST AI RMF functions as the common yardstick, referenced across federal, state, and voluntary industry practice.
International comparison

  • Versus the EU: the EU’s four risk tiers plus GPAI rules stand against the US pattern of “no federal tier, a single state-level tier, per-sector classification.”
  • Versus China: China uses filing as a de facto filter; the US federal government has no comparable gateway, while state-level impact assessments overlap in function with China’s security assessments.