Compute Forecast Model Sketch
This is a sketch of what a quantitative compute forecasting model might look like.
1. Enriched Data Structure
Instead of just qualitative causal diagrams, each node would have quantitative estimates:
```yaml
computeForecastModel:
  target:
    id: effective-compute
    label: "Effective Compute for Frontier AI"
    unit: "FLOP/s (peak training)"
    current:
      value: 5e24
      date: "2024-01"
      source: "Epoch AI"
    projections:
      - year: 2027
        p10: 1e25
        p50: 5e25
        p90: 2e26
        notes: "Depends heavily on investment trajectory"
      - year: 2030
        p10: 5e25
        p50: 5e26
        p90: 5e27
      - year: 2035
        p10: 1e26
        p50: 5e27
        p90: 1e29
        notes: "Wide uncertainty; could hit physical limits or breakthrough"
  factors:
    - id: asml-capacity
      label: "ASML EUV Production"
      unit: "machines/year"
      current:
        value: 50
        date: "2024"
        source: "ASML annual report"
      projections:
        - year: 2027
          p10: 60
          p50: 80
          p90: 100
        - year: 2030
          p10: 80
          p50: 120
          p90: 180
      constraints:
        - "Factory expansion takes 3-4 years"
        - "High-NA EUV adds capacity but different machines"
      keyQuestions:
        - question: "Will ASML build a second major facility?"
          impact: "Could add 50% capacity by 2030"
        - question: "Will high-NA EUV be production-ready by 2026?"
          impact: "2-3x improvement in transistor density"
    - id: fab-capacity
      label: "Advanced Node Fab Capacity"
      unit: "wafer starts/month (3nm equivalent)"
      current:
        value: 100000
        date: "2024"
        source: "TrendForce"
      projections:
        - year: 2027
          p10: 150000
          p50: 200000
          p90: 280000
      dependsOn:
        - factor: asml-capacity
          relationship: "~2000 wafers/month per EUV machine"
          elasticity: 0.8
        - factor: power-grid
          relationship: "~100MW per major fab"
          elasticity: 0.3
        - factor: taiwan-stability
          relationship: "Disruption could remove 70% of capacity"
          elasticity: -0.9
    - id: ai-chip-production
      label: "AI Chip Production"
      unit: "H100-equivalents/year"
      current:
        value: 2000000
        date: "2024"
        source: "Estimated from NVIDIA revenue"
      projections:
        - year: 2027
          p10: 5000000
          p50: 10000000
          p90: 20000000
      dependsOn:
        - factor: fab-capacity
          relationship: "~500 chips per wafer, 30% of capacity to AI"
        - factor: ai-compute-spending
          relationship: "Demand signal drives allocation"
    - id: ai-compute-spending
      label: "AI Compute Spending"
      unit: "$/year"
      current:
        value: 100e9
        date: "2024"
        source: "Sum of major lab capex"
      projections:
        - year: 2027
          p10: 150e9
          p50: 300e9
          p90: 600e9
        - year: 2030
          p10: 200e9
          p50: 800e9
          p90: 2000e9
      dependsOn:
        - factor: ai-valuations
          relationship: "High valuations enable equity financing"
          elasticity: 0.7
        - factor: ai-revenue
          relationship: "Revenue enables sustainable spending"
          elasticity: 0.9
      keyQuestions:
        - question: "Will AI revenue justify current valuations by 2027?"
          scenarios:
            "yes": "Spending continues exponential growth"
            "no": "Pullback to ~$150B/year, slower growth"
    - id: algorithmic-efficiency
      label: "Algorithmic Efficiency"
      unit: "multiplier vs 2024 baseline"
      current:
        value: 1.0
        date: "2024"
      projections:
        - year: 2027
          p10: 2
          p50: 8
          p90: 30
          notes: "Historical ~4x/year, but may slow"
        - year: 2030
          p10: 5
          p50: 50
          p90: 500
      keyQuestions:
        - question: "Will efficiency gains continue at 4x/year?"
          impact: "Difference between p50 and p90"
        - question: "Is there a 'DeepSeek moment' coming?"
          impact: "Could see sudden 10x jump"
  scenarios:
    - id: base-case
      probability: 0.55
      description: "Current trends continue, moderate growth"
      assumptions:
        taiwan-stability: "No major disruption"
        ai-revenue: "Grows but below hype expectations"
        asml-capacity: "Steady expansion"
      outcome:
        effective-compute-2030: 5e26
        effective-compute-2035: 5e27
    - id: bull-case
      probability: 0.20
      description: "AI boom continues, massive investment"
      assumptions:
        taiwan-stability: "Stable"
        ai-revenue: "Exceeds expectations, clear ROI"
        asml-capacity: "Aggressive expansion"
        algorithmic-efficiency: "Continued 4x/year gains"
      outcome:
        effective-compute-2030: 2e27
        effective-compute-2035: 1e29
    - id: bear-case
      probability: 0.20
      description: "AI winter or investment pullback"
      assumptions:
        ai-revenue: "Disappoints, valuations crash"
        ai-compute-spending: "Drops 50%"
      outcome:
        effective-compute-2030: 1e26
        effective-compute-2035: 5e26
    - id: disruption-case
      probability: 0.05
      description: "Major supply shock (Taiwan, other)"
      assumptions:
        taiwan-stability: "Major disruption"
        fab-capacity: "Drops 50-70%"
      outcome:
        effective-compute-2030: 2e25
        effective-compute-2035: 1e26
```
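The scenario block already implies a quick headline number: weighting the four 2030 outcomes by their probabilities (which sum to 1.0) gives a single rough distribution. The snippet below is just an illustration of that, written in the Squiggle notation introduced in the next section; `mx` is Squiggle's mixture constructor, and the values come directly from the scenarios above.

```squiggle
// Illustrative only: probability-weighted mixture of the four 2030 scenario outcomes
scenarioMix2030 = mx(5e26, 2e27, 1e26, 2e25, [0.55, 0.20, 0.20, 0.05])
```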
2. Squiggle Model
Here’s what the actual quantitative model might look like in Squiggle:
```squiggle
// === INPUT PARAMETERS ===
// ASML EUV machine production (machines/year)
asmlProduction2024 = 50
asmlGrowthRate = normal(0.08, 0.03) // 8% ± 3% annual growth
asmlProduction(year) = asmlProduction2024 * (1 + asmlGrowthRate)^(year - 2024)
// Wafers per EUV machine per year
wafersPerMachine = normal(24000, 3000) // ~2000/month
// Advanced fab capacity (wafer starts/year, 3nm equivalent)
fabCapacity(year) = asmlProduction(year) * wafersPerMachine * 0.7 // 70% utilization
// Taiwan risk - probability of major disruption by year
taiwanDisruptionProb(year) = 0.02 * (year - 2024) // 2% per year, cumulative
taiwanImpact = beta(2, 2) // If disruption, lose 20-80% of capacity
// Equals 1 when the bernoulli draw is 0 (no disruption), (1 - taiwanImpact) when it is 1
taiwanMultiplier(year) = 1 - bernoulli(taiwanDisruptionProb(year)) * taiwanImpact
// AI chips per wafer
chipsPerWafer = normal(400, 50)
// Fraction of advanced capacity going to AI chips
aiCapacityShare2024 = 0.25
aiCapacityShareGrowth = normal(0.03, 0.01) // Growing ~3 percentage points per year
aiCapacityShare(year) = min(0.6, aiCapacityShare2024 + aiCapacityShareGrowth * (year - 2024))
// AI chip production (H100-equivalents/year)
aiChipProduction(year) = {
  baseProduction = fabCapacity(year) * chipsPerWafer * aiCapacityShare(year)
  baseProduction * taiwanMultiplier(year)
}
// FLOPS per chip (H100 = 2e15 FLOPS for training)
flopsPerChip2024 = 2e15
chipImprovementRate = normal(0.25, 0.08) // 25% per year Moore's law continuation
flopsPerChip(year) = flopsPerChip2024 * (1 + chipImprovementRate)^(year - 2024)
// Algorithmic efficiency multiplier
algoEfficiency2024 = 1.0
algoEfficiencyGrowth = lognormal(1.4, 0.5) // ~4x/year but high variance
algoEfficiency(year) = algoEfficiency2024 * algoEfficiencyGrowth^(year - 2024)
// AI company revenue and investment
aiRevenue2024 = 200e9 // $200B
revenueGrowthRate = normal(0.20, 0.10) // 20% ± 10% annual growth
aiRevenue(year) = aiRevenue2024 * (1 + revenueGrowthRate)^(year - 2024)
// Investment as fraction of revenue/valuation
investmentRate = beta(3, 7) // 20-40% of revenue goes to compute
aiComputeSpending(year) = aiRevenue(year) * investmentRate * 1.5 // 1.5x for valuation leverage
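// Note: in this sketch aiComputeSpending is a demand-side indicator only; it is not
// fed back into aiChipProduction or aiCapacityShare. A fuller model would let spending
// shift how much fab capacity gets allocated to AI chips.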
// Utilization rate (what fraction of chips are used for frontier training)
utilizationRate = beta(5, 5) // ~50% utilization
// === MAIN MODEL ===
// Total AI chip FLOPS available
totalChipFlops(year) = {
  // Stock of chips (assume 3 year lifespan, accumulating)
  stock = sum(
    List.map(
      List.range(max(2024, year - 3), year),
      y -> aiChipProduction(y) * flopsPerChip(y)
    )
  )
  stock * utilizationRate
}
// Effective compute (accounting for algorithmic efficiency)
effectiveCompute(year) = totalChipFlops(year) * algoEfficiency(year)
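// Units: effectiveCompute(year) is "effective FLOP/s" - the utilized hardware FLOP/s of the
// chip stock, multiplied by the algorithmic-efficiency factor relative to the 2024 baseline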
// === OUTPUTS ===
effectiveCompute2027 = effectiveCompute(2027)
effectiveCompute2030 = effectiveCompute(2030)
effectiveCompute2035 = effectiveCompute(2035)
// Training run size (largest single run, ~10% of total capacity running for a year)
largestTrainingRun(year) = effectiveCompute(year) * 0.1 * (365 * 24 * 3600) // FLOP per run
// === KEY METRICS ===
// Years to 10x current compute
yearsTo10x = {
  current = effectiveCompute(2024)
  target = current * 10
  // Find year where we cross threshold
  List.findIndex(
    List.map(List.range(2024, 2040), y -> effectiveCompute(y) > target),
    x -> x
  )
}
```
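A quick sanity check that is not part of the sketch above: rather than reading the absolute FLOP/s numbers, look at the implied growth multiple, using only arithmetic on quantities the model already defines.

```squiggle
// Hypothetical follow-up check: distribution over the 2024-to-2030 growth multiple
growthMultiple2030 = effectiveCompute2030 / effectiveCompute(2024)
```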
3. What This Enables
With this structure, you could:
- Generate probabilistic forecasts - not just point estimates
- Run sensitivity analysis - which inputs matter most? (see the sketch after this list)
- Scenario modeling - what if Taiwan is disrupted? What if AI revenue disappoints?
- Update on evidence - new ASML numbers? Update the model
- Identify cruxes - where do optimists and pessimists disagree?
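As one illustration of the sensitivity-analysis point, the sketch below swaps out only the algorithmic-efficiency growth assumption while reusing `totalChipFlops` from section 2. The helper `effectiveComputeWithAlgo` and the two growth distributions are hypothetical additions, not part of the model above.

```squiggle
// One-factor sensitivity sketch (assumes the section 2 model is in scope)
effectiveComputeWithAlgo(algoGrowth, year) = totalChipFlops(year) * algoGrowth^(year - 2024)

slowAlgo2030 = effectiveComputeWithAlgo(lognormal(0.7, 0.3), 2030) // ~2x/year efficiency gains
fastAlgo2030 = effectiveComputeWithAlgo(lognormal(1.8, 0.3), 2030) // ~6x/year efficiency gains
// Comparing these two distributions shows how much this single input moves the 2030 forecast
```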
4. Key Uncertainties Ranked by Impact
| Factor | Impact on 2030 Compute | Current Uncertainty |
|---|---|---|
| Algorithmic efficiency | 10-100x range | Very high |
| Taiwan stability | 0.3-1.0x | Low prob, high impact |
| AI revenue/investment | 2-5x range | High |
| ASML expansion | 1.5-2x range | Medium |
| Chip architecture | 2-4x range | Medium |
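A back-of-envelope reading of the table: if these factors were roughly independent, the overall spread in 2030 compute would be on the order of the product of each row's high/low ratio. The arithmetic below just restates the table's ranges; independence is an assumption.

```squiggle
// Illustrative only: product of each row's high/low ratio from the table above
combinedSpreadRatio = (100 / 10) * (1.0 / 0.3) * (5 / 2) * (2 / 1.5) * (4 / 2) // roughly 220x
```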
5. Integration with Diagram
The causal diagram could become an interface to this model:
- Click a node → see current estimate, distribution, sources (a sketch of one such node payload follows this list)
- Hover over edge → see elasticity/relationship strength
- Scenario selector → see how diagram changes under different assumptions
- Time slider → see which bottlenecks dominate when
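As a sketch of what the first bullet might read from the model, here is a hypothetical per-node record; the field names are invented for illustration and simply mirror the enriched data structure in section 1.

```squiggle
// Hypothetical per-node payload the diagram UI could show on click
asmlNode = {
  id: "asml-capacity",
  unit: "machines/year",
  current: 50,                 // 2024 value from section 1
  projection2030: 80 to 180,   // roughly the 2030 p10-p90 range from section 1
  elasticityToFabCapacity: 0.8 // edge strength, shown on hover
}
```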