Compute Governance
Overview
Compute governance uses computational hardware as a lever to regulate AI development. Because advanced AI requires enormous amounts of computing power, and that compute comes from concentrated supply chains, controlling compute provides a tractable way to govern AI before models are built.
This is one of the most promising governance approaches because compute is:
- Measurable: FLOP, GPU-hours, chip counts
- Concentrated: Few manufacturers and cloud providers
- Physical: Hardware is trackable in ways software isn't
- Necessary: Can't train frontier models without it
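Measurability is what makes the other properties actionable: chip counts and GPU-hours can be converted into an estimate of total training FLOP. The sketch below shows one such back-of-envelope conversion; the peak-throughput and utilization figures are illustrative assumptions, not authoritative hardware specs.

```python
# Rough estimate of total training compute from hardware usage.
# Both constants below are illustrative assumptions.
H100_PEAK_FLOPS = 9.9e14   # ~989 TFLOP/s dense BF16 (assumed peak)
UTILIZATION = 0.4          # assumed model FLOP utilization

def total_training_flop(num_chips: int, hours: float,
                        peak_flops: float = H100_PEAK_FLOPS,
                        utilization: float = UTILIZATION) -> float:
    """Total FLOP = chips x seconds x peak throughput x utilization."""
    return num_chips * hours * 3600 * peak_flops * utilization

# A hypothetical 10,000-chip cluster running for 90 days:
flop = total_training_flop(num_chips=10_000, hours=90 * 24)
print(f"{flop:.2e}")  # → 3.08e+25
```

Note that this lands in the same 10^25 range as the regulatory thresholds discussed later, which is why cluster size and rental duration are themselves useful governance signals.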
Quick Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Hardware chokepoints exist, some measures already implemented |
| Neglectedness | Low-Medium | Active government priority, growing research field |
| Potential Impact | High | One of few levers that creates physical constraints |
| Time Horizon | Near to medium-term | Already being implemented |
Risks Addressed
| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing Dynamics | Slows development pace, reduces competitive pressure | High |
| Proliferation | Restricts access to training compute for new actors | Medium-High |
| Bioweapons | Prevents adversaries from training dangerous models | Medium |
| Cyberweapons | Prevents adversaries from training dangerous models (same mechanism as bioweapons) | Medium |
| Concentration of Power | Can enforce compute access limits on any actor | Low-Medium |
Approaches
Compute governance encompasses several distinct policy approaches, each with different mechanisms and tradeoffs:
Restricting who can access AI chips and manufacturing equipment.
US-led export controls restrict advanced semiconductor exports to China and other countries. This is the most aggressive compute governance measure currently in place: essentially sanctions on AI hardware.
- Chips restricted: NVIDIA A100/H100 and above
- Equipment restricted: EUV lithography (ASML)
- Enforcement: Bureau of Industry and Security (BIS)
Key question: How long do controls buy, and does the time get used productively?
Using training compute as a trigger for regulatory requirements.
Both the EU AI Act and US Executive Order define compute thresholds (10^25 to 10^26 FLOP) that trigger safety requirements. This creates a structured approach: cross the threshold → face additional obligations.
- EU AI Act: 10^25 FLOP triggers GPAI requirements
- US EO: 10^26 FLOP triggers reporting requirements
- Lower thresholds for biological sequence models
Key question: Can thresholds keep pace with algorithmic efficiency improvements?
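To make the thresholds concrete, training compute is commonly estimated with the ~6ND heuristic (FLOP ≈ 6 × parameters × training tokens), which can then be checked against the statutory triggers cited above. The model size and token count below are hypothetical, chosen for illustration.

```python
# Sketch: estimate training FLOP with the common ~6*N*D heuristic and
# check which regulatory thresholds it crosses. Figures are illustrative.
EU_GPAI_THRESHOLD = 1e25   # EU AI Act systemic-risk presumption
US_EO_THRESHOLD = 1e26     # US Executive Order reporting trigger

def training_flop(params: float, tokens: float) -> float:
    """FLOP ≈ 6 * N * D (forward + backward pass heuristic)."""
    return 6 * params * tokens

def triggered(flop: float) -> list[str]:
    hits = []
    if flop >= EU_GPAI_THRESHOLD:
        hits.append("EU GPAI obligations")
    if flop >= US_EO_THRESHOLD:
        hits.append("US EO reporting")
    return hits

# A hypothetical 500B-parameter model trained on 20T tokens:
flop = training_flop(500e9, 20e12)
print(f"{flop:.1e}", triggered(flop))  # → 6.0e+25 ['EU GPAI obligations']
```

The gap between the two thresholds is visible here: a run can trigger EU obligations while staying below the US reporting line, which is one reason threshold harmonization comes up in coordination discussions.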
Ongoing visibility into who's training large models.
Rather than just restricting access, monitoring approaches create visibility into AI development. This includes KYC (Know Your Customer) requirements for cloud providers and proposals for hardware-level governance.
- Cloud KYC: Verify customers, report large training runs
- Hardware governance: Chips with built-in monitoring
- Verification: Cryptographic attestation of what's running
Key question: Can monitoring work without enabling surveillance overreach?
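The attestation idea above can be illustrated with a toy sketch: a device signs a digest of the workload it ran, so a verifier can later check the claim without trusting the operator. Real hardware attestation relies on keys fused into the chip and a certificate chain; this simulation uses a shared HMAC key purely as a stand-in.

```python
# Toy sketch of workload attestation (assumed design, not a real protocol).
import hashlib
import hmac
import json

DEVICE_KEY = b"fused-device-key"  # stand-in for a hardware root of trust

def attest(workload: dict) -> tuple[bytes, bytes]:
    """Digest the workload manifest and sign it with the device key."""
    digest = hashlib.sha256(json.dumps(workload, sort_keys=True).encode()).digest()
    signature = hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()
    return digest, signature

def verify(workload: dict, signature: bytes) -> bool:
    """Recompute the signature for the claimed workload and compare."""
    _, expected = attest(workload)
    return hmac.compare_digest(signature, expected)

# A hypothetical training-job manifest:
job = {"model_hash": "abc123", "flop_budget": 1e25, "chips": 4096}
_, sig = attest(job)
print(verify(job, sig))                     # → True
print(verify({**job, "chips": 8192}, sig))  # → False
```

The second check fails because any change to the manifest changes the digest, which is the property that would let a regulator detect misreported training runs.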
Coordinating compute governance across nations.
Unilateral controls have limits. International regimes, analogous to nuclear non-proliferation, could enable verification and prevent races. These are mostly proposals rather than implemented policies.
- IAEA-like institution for AI compute
- Compute allocation treaties
- Verification through hardware governance
Key question: Is US-China cooperation on compute governance possible?
What Needs to Be True
For compute governance to substantially reduce AI risk:
- Compute remains necessary: Can't achieve dangerous AI without massive compute
- Chokepoints persist: Semiconductor supply chain remains concentrated
- Algorithmic efficiency limits: Improvements don't make thresholds obsolete too quickly
- Political will: Governments prioritize enforcement
- International coordination: Major AI powers (US, China, EU) eventually cooperate
- Technical feasibility: Monitoring and verification systems work
- Comprehensive coverage: Can't easily evade through alternative providers
Key Uncertainties
Will compute remain the bottleneck? Algorithmic efficiency is improving rapidly. Each year, the same capabilities require less compute. If this trend continues, compute-based governance becomes less relevant over time. However, frontier capabilities may always require frontier compute.
Can international coordination emerge? Export controls are currently unilateral (US-led). Without broader coordination, they create bifurcation rather than global governance. The US-China relationship makes cooperation difficult but not impossible.
Does governance enable or prevent safety? Some argue compute governance is primarily geopolitical rather than safety-motivated. Others argue that slowing global AI development buys time for alignment research. The answer depends on how the time is used.
Getting Started
If interested in compute governance:
1. Build foundations:
   - Learn AI basics (understand what compute enables)
   - Study the semiconductor industry (supply chains, manufacturing)
   - Read compute governance research (GovAI, CSET papers)
2. Entry paths:
   - Policy school with a tech/national security focus
   - Government fellowship (AAAS, TechCongress)
   - Think tank research assistant
   - Industry policy/compliance roles
3. Key organizations:
   - Centre for the Governance of AI (Oxford)
   - Center for Security and Emerging Technology (Georgetown)
   - US Bureau of Industry and Security
   - US/UK AI Safety Institutes
AI Transition Model Context
Compute governance improves the AI Transition Model across multiple factors:
| Factor | Parameter | Impact |
|---|---|---|
| Misuse Potential | AI Control Concentration | Prevents concentration by distributing access |
| Transition Turbulence | Racing Intensity | Slows racing dynamics by limiting training resources |
| Civilizational Competence | International Coordination | Creates coordination leverage through hardware chokepoints |
Compute governance affects both Existential Catastrophe (by slowing development and enabling oversight) and Long-term Trajectory (by shaping power distribution).
Related Pages
- AI Governance and Policy (crux)
- GovAI (lab research)
- Epoch AI (organization)
- MIRI (organization)
- US AI Safety Institute (organization)
- Dan Hendrycks (researcher)
- Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy)
- China AI Regulatory Framework (policy)
- Compute Thresholds (policy)
- Compute Monitoring (policy)
- International Compute Regimes (policy)
- EU AI Act (policy)
- Executive Order on Safe, Secure, and Trustworthy AI (policy)
- Pause Advocacy (intervention)
- AI Proliferation (risk)
- Racing Dynamics (risk)