Compute (AI Capabilities)
Compute refers to the hardware resources required to train and run AI systems, including GPUs, TPUs, and specialized AI accelerators. The current generation of frontier AI models requires extraordinary amounts of computational power: training runs cost tens to hundreds of millions of dollars in compute alone. The significance of compute for AI governance stems from several unique properties: it is measurable (training runs can be quantified in FLOPs), concentrated (the global semiconductor supply chain depends on chokepoints like ASML, TSMC, and NVIDIA), and physical (unlike algorithms that can be copied infinitely, hardware must be manufactured and shipped).
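The measurability property can be made concrete with the widely used ~6ND approximation, which estimates total training compute as roughly 6 FLOPs per parameter per training token. The model size and token count below are illustrative assumptions, not figures from any specific disclosed training run.

```python
# Rough training-compute estimate using the common ~6 * N * D
# approximation (N = parameters, D = training tokens).

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")  # on the order of 10^24
```

Estimates like this are what make FLOP-based thresholds auditable in a way that purely qualitative capability claims are not.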
Current Assessment
What Drives Effective AI Compute?
[Diagram: causal factors affecting frontier AI training compute. Note: this forms a cycle (AI capabilities drive revenue, which funds more compute), but feedback loops are omitted for clarity.]
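The cycle described above, where capabilities generate revenue that is reinvested in compute, can be sketched as a toy simulation. All rates and conversion factors here are hypothetical placeholders chosen only to show the compounding dynamic, not empirical estimates.

```python
# Toy model of the compute -> capability -> revenue -> compute cycle.
# All parameters are hypothetical placeholders, not calibrated to
# real AI-industry data.

def simulate_cycle(compute: float, steps: int,
                   capability_per_compute: float = 0.5,
                   revenue_per_capability: float = 2.0,
                   reinvestment_rate: float = 0.4) -> list[float]:
    """Return compute levels over time under simple reinvestment."""
    history = [compute]
    for _ in range(steps):
        capability = capability_per_compute * compute
        revenue = revenue_per_capability * capability
        compute = compute + reinvestment_rate * revenue
        history.append(compute)
    return history

levels = simulate_cycle(1.0, 5)
# With these placeholder rates, compute compounds by a constant
# factor of 1 + 0.4 * 2.0 * 0.5 = 1.4 per step.
```

The point of the sketch is structural: because each loop iteration multiplies compute by a constant factor, the feedback produces exponential rather than linear growth, which is why the cycle matters for governance.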
Warning Indicators
| Indicator | Status | Trend | Concern |
|---|---|---|---|
| Training run costs | $100M+ for frontier models | ↑ rising | Medium |
| Chip concentration | TSMC produces 90%+ of advanced chips | → stable | High |
| Export control effectiveness | Significant circumvention observed | ↑ worsening | High |
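One concrete governance use of compute's measurability is a FLOP-based reporting threshold. The 1e26 figure below mirrors the threshold used in the 2023 US Executive Order on AI; treating a single estimate as the compliance trigger is a simplification of how such rules work in practice.

```python
# Check an estimated training run against a FLOP-based reporting
# threshold. The 1e26 value mirrors the 2023 US Executive Order on
# AI; real regimes involve more than a single point estimate.

REPORTING_THRESHOLD_FLOPS = 1e26

def requires_reporting(estimated_flops: float) -> bool:
    """True if the estimated run meets or exceeds the threshold."""
    return estimated_flops >= REPORTING_THRESHOLD_FLOPS

print(requires_reporting(6.3e24))  # False: below threshold
print(requires_reporting(2.1e26))  # True: above threshold
```

Because FLOPs are estimable from hardware counts and training duration, such thresholds are externally verifiable in a way that capability-based triggers are not.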
Addressed By
| Intervention | Effect | Strength |
|---|---|---|
| Compute Governance | ↓ positive | strong |
| Export Controls | ↓ positive | strong |
Scenarios Influenced
| Scenario | Effect | Strength |
|---|---|---|
| AI Takeover | ↑ Increases | strong |
| Human-Caused Catastrophe | ↑ Increases | medium |
| Long-term Lock-in | ↑ Increases | medium |