Compute (AI Capabilities)

Compute refers to the hardware resources required to train and run AI systems: GPUs, TPUs, and other specialized accelerators. Training a single frontier model costs tens to hundreds of millions of dollars in compute alone.

Compute is uniquely tractable for governance because it is measurable (FLOPs, GPU-hours), concentrated (few chokepoints like ASML, TSMC, NVIDIA), and physical (can be tracked and controlled).
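Measurability is worth making concrete. A rough back-of-the-envelope sketch of the metrics named above, using the widely cited FLOPs ≈ 6·N·D approximation for dense transformer training (N = parameters, D = training tokens); the model size, token count, per-GPU throughput, and utilization below are illustrative assumptions, not figures from this page:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

def gpu_hours(total_flops: float, peak_flops_per_gpu: float, utilization: float) -> float:
    """Convert total FLOPs into GPU-hours at a given sustained utilization."""
    sustained = peak_flops_per_gpu * utilization  # FLOP/s actually achieved
    return total_flops / sustained / 3600         # seconds -> hours

# Hypothetical run: a 70B-parameter model trained on 2 trillion tokens,
# on accelerators with 1 PFLOP/s peak at 40% sustained utilization.
flops = training_flops(70e9, 2e12)   # ~8.4e23 FLOPs
hours = gpu_hours(flops, 1e15, 0.4)  # ~5.8e5 GPU-hours
print(f"{flops:.2e} FLOPs, {hours:,.0f} GPU-hours")
```

Because both quantities reduce to counting chips and hours, they are auditable in a way that data quality or algorithmic progress is not, which is the core of the tractability claim.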

Metric             Score  Notes
Changeability      30     Requires international coordination
X-risk Impact      70     Directly affects capability timelines
Trajectory Impact  80     Primary driver of AI advancement speed
Uncertainty        35     Hardware trends relatively predictable

Governance Approaches:

Key Debates:

  • Can compute controls effectively slow dangerous AI development?
  • Will efficiency gains outpace hardware restrictions?

Ratings

Metric             Score   Interpretation
Changeability      30/100  Hard to prevent or redirect
X-risk Impact      70/100  Substantial extinction risk
Trajectory Impact  80/100  Major effect on long-term welfare
Uncertainty        35/100  Moderate uncertainty in estimates