Compute (AI Capabilities)
Compute refers to the hardware resources required to train and run AI systems—GPUs, TPUs, and specialized accelerators. Training frontier models costs tens to hundreds of millions of dollars in compute alone.
Compute is uniquely tractable for governance because it is measurable (in FLOPs and GPU-hours), concentrated (the supply chain runs through a few chokepoints such as ASML, TSMC, and NVIDIA), and physical (hardware can be tracked and controlled in a way that algorithms and data cannot).
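To make the measurability point concrete, the sketch below estimates training compute with the widely used ~6ND rule of thumb (about 6 FLOPs per parameter per training token) and converts it to GPU-hours and dollars. The parameter count, token count, throughput, utilization, and price are all illustrative assumptions, not figures for any real model or vendor.

```python
# Back-of-envelope training compute estimate using the common
# C ~= 6 * N * D approximation (~6 FLOPs per parameter per token).
# Every input below is an illustrative assumption.

params = 400e9              # hypothetical model size: 400B parameters
tokens = 15e12              # hypothetical training data: 15T tokens
train_flops = 6 * params * tokens            # ~3.6e25 FLOPs

peak_flops_per_sec = 4e14   # assumed accelerator throughput: 400 TFLOP/s
utilization = 0.40          # assumed sustained hardware utilization
gpu_seconds = train_flops / (peak_flops_per_sec * utilization)
gpu_hours = gpu_seconds / 3600               # ~62 million GPU-hours

price_per_gpu_hour = 2.00   # assumed cloud rental rate in USD
cost_usd = gpu_hours * price_per_gpu_hour    # ~$125 million

print(f"Training compute: {train_flops:.2e} FLOPs")
print(f"GPU-hours:        {gpu_hours:,.0f}")
print(f"Compute cost:     ${cost_usd:,.0f}")
```

Under these assumed inputs the estimate lands around 3.6e25 FLOPs and roughly $125 million, consistent with the cost range cited above; the same arithmetic is what makes compute thresholds auditable in practice.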
| Metric | Score (0-100) | Notes |
|---|---|---|
| Changeability | 30 | Requires international coordination |
| X-risk Impact | 70 | Directly affects capability timelines |
| Trajectory Impact | 80 | Primary driver of AI advancement speed |
| Uncertainty | 35 | Hardware trends relatively predictable |
Related Content
Governance Approaches:
Key Debates:
- Can compute controls effectively slow dangerous AI development?
- Will efficiency gains outpace hardware restrictions?