
Compute Governance


Approach: Regulate AI via compute access
Status: Emerging policy area

Compute governance uses computational hardware as a lever to regulate AI development. Because advanced AI requires enormous amounts of computing power, and that compute comes from concentrated supply chains, controlling compute provides a tractable way to govern AI before models are built.

This is one of the most promising governance approaches because compute is:

  • Measurable: FLOP, GPU-hours, chip counts (see the sketch after this list)
  • Concentrated: Few manufacturers and cloud providers
  • Physical: Hardware is trackable in ways software isn’t
  • Necessary: Can’t train frontier models without it
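
To make the "measurable" point concrete, total training compute can be estimated directly from hardware usage: accelerators × time × peak throughput × utilization. A minimal sketch in Python; the peak-throughput and utilization figures are illustrative assumptions, not vendor specifications:

```python
def training_flop(num_gpus: int, days: float, peak_flop_per_s: float, utilization: float) -> float:
    """Rough estimate of total training compute: chips x time x peak throughput x utilization."""
    return num_gpus * days * 24 * 3600 * peak_flop_per_s * utilization

# Illustrative assumptions: ~1e15 FLOP/s peak per accelerator, 40% sustained utilization.
total = training_flop(num_gpus=10_000, days=90, peak_flop_per_s=1e15, utilization=0.4)
print(f"{total:.1e} FLOP")  # ~3.1e25, in the range where compute thresholds start to apply
```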
| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Hardware chokepoints exist; some measures already implemented |
| Neglectedness | Low-Medium | Active government priority, growing research field |
| Potential Impact | High | One of few levers that creates physical constraints |
| Time Horizon | Near to medium-term | Already being implemented |
| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing Dynamics | Slows development pace, reduces competitive pressure | High |
| Proliferation | Restricts access to training compute for new actors | Medium-High |
| Bioweapons | Prevents adversaries from training dangerous models | Medium |
| Cyberweapons | Same mechanism as for bioweapons | Medium |
| Concentration of Power | Can enforce compute access limits on any actor | Low-Medium |

Compute governance encompasses several distinct policy approaches, each with different mechanisms and tradeoffs:

Export controls: restricting who can access AI chips and manufacturing equipment.

US-led export controls restrict advanced semiconductor exports to China and other countries. This is the most aggressive compute governance measure currently in place—essentially sanctions on AI hardware.

  • Chips restricted: NVIDIA A100/H100 and above
  • Equipment restricted: EUV lithography (ASML)
  • Enforcement: Bureau of Industry and Security (BIS)

Key question: How much time do export controls buy, and is that time used productively?
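
To illustrate how chip-level restrictions are typically parameterized, here is a simplified sketch in the spirit of the BIS "total processing performance" (TPP) metric. The formula (dense throughput times operand bit length) and the 4800 cutoff reflect common summaries of the original October 2022 rule; the chip figures are approximate public spec numbers, and actual classification depends on the current regulation text, not this sketch:

```python
# Simplified, illustrative TPP-style check. Not the authoritative BIS definition.
TPP_CUTOFF = 4800  # widely cited cutoff from the October 2022 rule

def tpp(dense_tflops: float, bit_length: int) -> float:
    """Common simplification: dense throughput (TFLOPS/TOPS) times operation bit length."""
    return dense_tflops * bit_length

chips = {
    "A100 (FP16 tensor)": tpp(312, 16),   # ~4992
    "H100 (FP16 tensor)": tpp(990, 16),   # ~15840
}
for name, score in chips.items():
    status = "above cutoff" if score >= TPP_CUTOFF else "below cutoff"
    print(f"{name}: TPP ~{score:.0f} ({status})")
```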

Compute thresholds: using training compute as a trigger for regulatory requirements.

Both the EU AI Act and US Executive Order define compute thresholds (10^25 to 10^26 FLOP) that trigger safety requirements. This creates a tiered approach: models trained above the threshold face additional obligations.

  • EU AI Act: 10^25 FLOP triggers GPAI requirements
  • US EO: 10^26 FLOP triggers reporting requirements
  • Lower thresholds for biological sequence models

Key question: Can thresholds keep pace with algorithmic efficiency improvements?
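
A common rule of thumb estimates training compute as roughly 6 FLOP per parameter per training token, which makes it straightforward to check a planned run against the two thresholds above. A hedged sketch; the model size and token count are hypothetical:

```python
EU_GPAI_THRESHOLD = 1e25  # EU AI Act: triggers GPAI systemic-risk requirements
US_EO_THRESHOLD = 1e26    # US Executive Order: triggers reporting requirements

def training_compute(params: float, tokens: float) -> float:
    """Standard approximation: ~6 FLOP per parameter per training token."""
    return 6 * params * tokens

# Hypothetical run: 400B parameters trained on 15T tokens.
c = training_compute(params=4e11, tokens=1.5e13)
print(f"{c:.1e} FLOP")                                  # 3.6e25
print("EU GPAI obligations:", c >= EU_GPAI_THRESHOLD)   # True
print("US EO reporting:", c >= US_EO_THRESHOLD)         # False
```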

Monitoring and verification: ongoing visibility into who's training large models.

Rather than just restricting access, monitoring approaches create visibility into AI development. This includes KYC (Know Your Customer) requirements for cloud providers and proposals for hardware-level governance.

  • Cloud KYC: Verify customers, report large training runs
  • Hardware governance: Chips with built-in monitoring
  • Verification: Cryptographic attestation of what’s running

Key question: Can monitoring work without enabling surveillance overreach?
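
A minimal sketch of how a cloud KYC rule might operate: verify the customer's identity, then flag reservations whose implied compute crosses a reporting threshold. The class, fields, and threshold here are hypothetical illustrations, not any provider's actual API or any regulation's actual terms:

```python
from dataclasses import dataclass

REPORTING_THRESHOLD_FLOP = 1e26  # hypothetical reporting trigger

@dataclass
class ComputeReservation:
    customer_id: str
    identity_verified: bool       # KYC check completed
    num_accelerators: int
    days: float
    peak_flop_per_s: float
    expected_utilization: float

    def estimated_flop(self) -> float:
        return (self.num_accelerators * self.days * 86_400
                * self.peak_flop_per_s * self.expected_utilization)

def review(res: ComputeReservation) -> str:
    if not res.identity_verified:
        return "reject: KYC incomplete"
    if res.estimated_flop() >= REPORTING_THRESHOLD_FLOP:
        return "approve and report: exceeds large-training-run threshold"
    return "approve"

print(review(ComputeReservation("acme-labs", True, 50_000, 120, 1e15, 0.4)))
```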

International coordination: aligning compute governance policy across nations.

Unilateral controls have limits. International regimes—analogous to nuclear non-proliferation—could enable verification and prevent races. These are mostly proposals rather than implemented policies.

  • IAEA-like institution for AI compute
  • Compute allocation treaties
  • Verification through hardware governance (see the attestation sketch below)

Key question: Is US-China cooperation on compute governance possible?
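
The attestation idea behind hardware-based verification can be illustrated with a toy example: a chip signs a measurement of the workload it is running, and an auditor verifies the report. Real proposals rely on hardware roots of trust and asymmetric signatures; this sketch only shows the shape of the check, and all keys and values are made up:

```python
import hashlib
import hmac

DEVICE_KEY = b"per-chip secret provisioned at manufacture"  # hypothetical

def attest(measurement: bytes) -> bytes:
    """Chip-side: produce a signed report of what is running."""
    return hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()

def verify(measurement: bytes, report: bytes) -> bool:
    """Auditor-side: check the report against the claimed measurement."""
    expected = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, report)

measurement = hashlib.sha256(b"hash of training code + config").digest()
print(verify(measurement, attest(measurement)))  # True
```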


For compute governance to substantially reduce AI risk, several conditions need to hold:

  1. Compute remains necessary: Can’t achieve dangerous AI without massive compute
  2. Chokepoints persist: Semiconductor supply chain remains concentrated
  3. Algorithmic efficiency limits: Improvements don’t make thresholds obsolete too quickly
  4. Political will: Governments prioritize enforcement
  5. International coordination: Major AI powers (US, China, EU) eventually cooperate
  6. Technical feasibility: Monitoring and verification systems work
  7. Comprehensive coverage: Can’t easily evade through alternative providers

Will compute remain the bottleneck? Algorithmic efficiency is improving rapidly. Each year, the same capabilities require less compute. If this trend continues, compute-based governance becomes less relevant over time. However, frontier capabilities may always require frontier compute.
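
A worked illustration of the erosion concern: if the compute needed for a fixed capability halves on a regular cadence, a fixed FLOP threshold covers progressively weaker systems. The two-year doubling time below is an assumption for illustration, not a measured figure:

```python
def compute_needed(initial_flop: float, years: float, doubling_years: float = 2.0) -> float:
    """Raw compute needed for a fixed capability after `years`, assuming
    algorithmic efficiency doubles every `doubling_years` (illustrative)."""
    return initial_flop / (2 ** (years / doubling_years))

# A capability that costs 1e26 FLOP today:
for years in (0, 2, 4, 6):
    print(years, f"{compute_needed(1e26, years):.2e} FLOP")
# After ~6 years the same capability needs ~1.25e25 FLOP, slipping below
# a fixed 1e26 reporting threshold.
```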

Can international coordination emerge? Export controls are currently unilateral (US-led). Without broader coordination, they create bifurcation rather than global governance. The US-China relationship makes cooperation difficult but not impossible.

Does governance enable or prevent safety? Some argue compute governance is primarily geopolitical rather than safety-motivated. Others argue that slowing global AI development buys time for alignment research. The answer depends on how the time is used.


If you're interested in working on compute governance:

  1. Build foundations:

    • Learn AI basics (understand what compute enables)
    • Study semiconductor industry (supply chains, manufacturing)
    • Read compute governance research (GovAI, CSET papers)
  2. Entry paths:

    • Policy school with tech/national security focus
    • Government fellowship (AAAS, TechCongress)
    • Think tank research assistant
    • Industry policy/compliance roles
  3. Key organizations:

    • Centre for the Governance of AI (Oxford)
    • Center for Security and Emerging Technology (Georgetown)
    • US Bureau of Industry and Security
    • US/UK AI Safety Institutes

Compute governance improves multiple factors in the AI Transition Model:

| Factor | Parameter | Impact |
|---|---|---|
| Misuse Potential | AI Control Concentration | Prevents concentration by distributing access |
| Transition Turbulence | Racing Intensity | Slows racing dynamics by limiting training resources |
| Civilizational Competence | International Coordination | Creates coordination leverage through hardware chokepoints |

Compute governance affects both Existential Catastrophe (by slowing development and enabling oversight) and Long-term Trajectory (by shaping power distribution).