
Compute Forecast Model Sketch

This is a sketch of what a quantitative compute forecasting model might look like.

Instead of a purely qualitative causal diagram, each node would carry quantitative estimates:

computeForecastModel:
  target:
    id: effective-compute
    label: "Effective Compute for Frontier AI"
    unit: "FLOP/s (peak training)"
    current:
      value: 5e24
      date: "2024-01"
      source: "Epoch AI"
    projections:
      - year: 2027
        p10: 1e25
        p50: 5e25
        p90: 2e26
        notes: "Depends heavily on investment trajectory"
      - year: 2030
        p10: 5e25
        p50: 5e26
        p90: 5e27
      - year: 2035
        p10: 1e26
        p50: 5e27
        p90: 1e29
        notes: "Wide uncertainty; could hit physical limits or breakthrough"
  factors:
    - id: asml-capacity
      label: "ASML EUV Production"
      unit: "machines/year"
      current:
        value: 50
        date: "2024"
        source: "ASML annual report"
      projections:
        - year: 2027
          p10: 60
          p50: 80
          p90: 100
        - year: 2030
          p10: 80
          p50: 120
          p90: 180
      constraints:
        - "Factory expansion takes 3-4 years"
        - "High-NA EUV adds capacity, but via different machines"
      keyQuestions:
        - question: "Will ASML build a second major facility?"
          impact: "Could add 50% capacity by 2030"
        - question: "Will high-NA EUV be production-ready by 2026?"
          impact: "2-3x improvement in transistor density"
    - id: fab-capacity
      label: "Advanced Node Fab Capacity"
      unit: "wafer starts/month (3nm equivalent)"
      current:
        value: 100000
        date: "2024"
        source: "TrendForce"
      projections:
        - year: 2027
          p10: 150000
          p50: 200000
          p90: 280000
      dependsOn:
        - factor: asml-capacity
          relationship: "~2000 wafers/month per EUV machine"
          elasticity: 0.8
        - factor: power-grid
          relationship: "~100MW per major fab"
          elasticity: 0.3
        - factor: taiwan-stability
          relationship: "Disruption could remove 70% of capacity"
          elasticity: -0.9
    - id: ai-chip-production
      label: "AI Chip Production"
      unit: "H100-equivalents/year"
      current:
        value: 2000000
        date: "2024"
        source: "Estimated from NVIDIA revenue"
      projections:
        - year: 2027
          p10: 5000000
          p50: 10000000
          p90: 20000000
      dependsOn:
        - factor: fab-capacity
          relationship: "~500 chips per wafer, 30% of capacity to AI"
        - factor: ai-compute-spending
          relationship: "Demand signal drives allocation"
    - id: ai-compute-spending
      label: "AI Compute Spending"
      unit: "$/year"
      current:
        value: 100e9
        date: "2024"
        source: "Sum of major lab capex"
      projections:
        - year: 2027
          p10: 150e9
          p50: 300e9
          p90: 600e9
        - year: 2030
          p10: 200e9
          p50: 800e9
          p90: 2000e9
      dependsOn:
        - factor: ai-valuations
          relationship: "High valuations enable equity financing"
          elasticity: 0.7
        - factor: ai-revenue
          relationship: "Revenue enables sustainable spending"
          elasticity: 0.9
      keyQuestions:
        - question: "Will AI revenue justify current valuations by 2027?"
          scenarios:
            "yes": "Spending continues exponential growth"
            "no": "Pullback to ~$150B/year, slower growth"
    - id: algorithmic-efficiency
      label: "Algorithmic Efficiency"
      unit: "multiplier vs 2024 baseline"
      current:
        value: 1.0
        date: "2024"
      projections:
        - year: 2027
          p10: 2
          p50: 8
          p90: 30
          notes: "Historical trend is ~4x/year, but may slow"
        - year: 2030
          p10: 5
          p50: 50
          p90: 500
      keyQuestions:
        - question: "Will efficiency gains continue at 4x/year?"
          impact: "The difference between p50 and p90"
        - question: "Is there a 'DeepSeek moment' coming?"
          impact: "Could see a sudden 10x jump"
  scenarios:
    - id: base-case
      probability: 0.55
      description: "Current trends continue, moderate growth"
      assumptions:
        taiwan-stability: "No major disruption"
        ai-revenue: "Grows, but below hype expectations"
        asml-capacity: "Steady expansion"
      outcome:
        effective-compute-2030: 5e26
        effective-compute-2035: 5e27
    - id: bull-case
      probability: 0.20
      description: "AI boom continues, massive investment"
      assumptions:
        taiwan-stability: "Stable"
        ai-revenue: "Exceeds expectations, clear ROI"
        asml-capacity: "Aggressive expansion"
        algorithmic-efficiency: "Continued 4x/year gains"
      outcome:
        effective-compute-2030: 2e27
        effective-compute-2035: 1e29
    - id: bear-case
      probability: 0.20
      description: "AI winter or investment pullback"
      assumptions:
        ai-revenue: "Disappoints, valuations crash"
        ai-compute-spending: "Drops 50%"
      outcome:
        effective-compute-2030: 1e26
        effective-compute-2035: 5e26
    - id: disruption-case
      probability: 0.05
      description: "Major supply shock (Taiwan, other)"
      assumptions:
        taiwan-stability: "Major disruption"
        fab-capacity: "Drops 50-70%"
      outcome:
        effective-compute-2030: 2e25
        effective-compute-2035: 1e26
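
Even before building a structural model, the scenario table above pins down a first-pass forecast: mix the four scenario outcomes, weighted by their probabilities. A minimal Squiggle sketch (the low-to-high ranges expressing within-scenario spread are assumptions; the central values and weights come from the YAML):

// First-pass 2030 forecast: probability-weighted mixture of scenario outcomes.
// Each `low to high` range is an assumed 90% CI around the scenario's point outcome.
compute2030 = mixture(
  2e26 to 1e27, // base case, centered near 5e26
  8e26 to 5e27, // bull case, centered near 2e27
  4e25 to 3e26, // bear case, centered near 1e26
  8e24 to 5e25, // disruption case, centered near 2e25
  [0.55, 0.20, 0.20, 0.05]
)

The structural model below derives these distributions from the supply chain instead of hand-setting them.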

Here’s what the actual quantitative model might look like in Squiggle:

// === INPUT PARAMETERS ===
// ASML EUV machine production (machines/year)
asmlProduction2024 = 50
asmlGrowthRate = normal(0.08, 0.03) // 8% ± 3% annual growth
asmlProduction(year) = asmlProduction2024 * (1 + asmlGrowthRate)^(year - 2024)
// Wafers per EUV machine per year
wafersPerMachine = normal(24000, 3000) // ~2000/month
// Advanced fab capacity (wafer starts/year, 3nm equivalent)
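// (Simplification: treats this year's machine production as a proxy for the
// installed base; in reality EUV machines accumulate over years)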
fabCapacity(year) = asmlProduction(year) * wafersPerMachine * 0.7 // 70% utilization
// Taiwan risk - cumulative probability of a major disruption by a given year
taiwanDisruptionProb(year) = 0.02 * (year - 2024) // ~2%/year, accumulated linearly
taiwanImpact = beta(2, 2) // if disruption hits, lose roughly 20-80% of capacity
// Mix the disrupted and undisrupted branches by the disruption probability
taiwanMultiplier(year) = mixture(1 - taiwanImpact, 1, [taiwanDisruptionProb(year), 1 - taiwanDisruptionProb(year)])
// AI chips per wafer
chipsPerWafer = normal(400, 50)
// Fraction of advanced capacity going to AI chips
aiCapacityShare2024 = 0.25
aiCapacityShareGrowth = normal(0.03, 0.01) // Growing 3% per year
aiCapacityShare(year) = min(0.6, aiCapacityShare2024 + aiCapacityShareGrowth * (year - 2024))
// AI chip production (H100-equivalents/year)
aiChipProduction(year) = {
  baseProduction = fabCapacity(year) * chipsPerWafer * aiCapacityShare(year)
  baseProduction * taiwanMultiplier(year)
}
// Peak training FLOP/s per chip (H100 ~ 2e15 FLOP/s at FP8)
flopsPerChip2024 = 2e15
chipImprovementRate = normal(0.25, 0.08) // 25% per year Moore's law continuation
flopsPerChip(year) = flopsPerChip2024 * (1 + chipImprovementRate)^(year - 2024)
// Algorithmic efficiency multiplier
algoEfficiency2024 = 1.0
algoEfficiencyGrowth = lognormal(1.4, 0.5) // log-space mean 1.4 => median ~4x/year, with high variance
algoEfficiency(year) = algoEfficiency2024 * algoEfficiencyGrowth^(year - 2024)
// AI company revenue and investment
aiRevenue2024 = 200e9 // $200B
revenueGrowthRate = normal(0.20, 0.10) // 20% ± 10% annual growth
aiRevenue(year) = aiRevenue2024 * (1 + revenueGrowthRate)^(year - 2024)
// Investment as a fraction of revenue/valuation
investmentRate = beta(3, 7) // mean ~30% of revenue goes to compute
aiComputeSpending(year) = aiRevenue(year) * investmentRate * 1.5 // 1.5x for valuation leverage (demand-side diagnostic; not yet coupled to chip supply)
// Utilization rate (what fraction of chips are used for frontier training)
utilizationRate = beta(5, 5) // ~50% utilization
// === MAIN MODEL ===
// Total AI chip FLOP/s available
totalChipFlops(year) = {
  // Stock of chips: assume a 3-year lifespan, so sum production over the
  // last three years (accounting starts in 2024)
  firstYear = if year - 2 > 2024 then year - 2 else 2024
  stock = List.reduce(
    List.upTo(firstYear, year),
    0,
    {|acc, y| acc + aiChipProduction(y) * flopsPerChip(y)}
  )
  stock * utilizationRate
}
// Effective compute (accounting for algorithmic efficiency)
effectiveCompute(year) = totalChipFlops(year) * algoEfficiency(year)
// === OUTPUTS ===
effectiveCompute2027 = effectiveCompute(2027)
effectiveCompute2030 = effectiveCompute(2030)
effectiveCompute2035 = effectiveCompute(2035)
// Largest single training run: ~10% of total capacity running for one year
largestTrainingRun(year) = effectiveCompute(year) * 0.1 * (365 * 24 * 3600) // total FLOP per run
// === KEY METRICS ===
// Years until effective compute reaches 10x its 2024 level (comparing means)
yearsTo10x = {
  current = mean(effectiveCompute(2024))
  target = current * 10
  // Index of the first year whose mean crosses the threshold = years after 2024
  List.findIndex(List.upTo(2024, 2040), {|y| mean(effectiveCompute(y)) > target})
}
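
To read these outputs as calibrated intervals rather than point estimates, pull quantiles off the result distributions (a usage sketch; `quantile` and `mean` are Squiggle built-ins):

// p10/p50/p90 readout for 2030, mirroring the projections in the YAML sketch
p10_2030 = quantile(effectiveCompute2030, 0.1)
p50_2030 = quantile(effectiveCompute2030, 0.5)
p90_2030 = quantile(effectiveCompute2030, 0.9)
mean2030 = mean(effectiveCompute2030)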

With this structure, you could:

  1. Generate probabilistic forecasts - full distributions, not just point estimates
  2. Run sensitivity analysis - which inputs matter most? (sketched after the table below)
  3. Model scenarios - what if Taiwan is disrupted? What if AI revenue disappoints?
  4. Update on evidence - new ASML numbers? Plug them in and re-run
  5. Identify cruxes - where exactly do optimists and pessimists disagree?

Factor                 | Impact on 2030 Compute | Current Uncertainty
-----------------------|------------------------|-----------------------
Algorithmic efficiency | 10-100x range          | Very high
Taiwan stability       | 0.3-1.0x               | Low prob, high impact
AI revenue/investment  | 2-5x range             | High
ASML expansion         | 1.5-2x range           | Medium
Chip architecture      | 2-4x range             | Medium
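
A crude way to generate a table like this is one-at-a-time sensitivity: hold every input at its median, swing one input across its p10-p90 range, and record the output ratio. A minimal sketch (the wrapper function and the percentile values are illustrative assumptions, backed out of the projections above):

// One-at-a-time sensitivity on 2030 effective compute.
// model2030 re-expresses the pipeline with its key uncertainties as arguments.
model2030(algoGrowth, taiwanMult, spendMult) = 1e25 * algoGrowth ^ 6 * taiwanMult * spendMult // illustrative baseline, FLOP/s

// Swing each input across its p10-p90 range, holding the others at p50
algoSwing = model2030(2.8, 1.0, 1.0) / model2030(1.3, 1.0, 1.0)   // ~100x
taiwanSwing = model2030(1.9, 1.0, 1.0) / model2030(1.9, 0.3, 1.0) // ~3.3x
spendSwing = model2030(1.9, 1.0, 2.0) / model2030(1.9, 1.0, 0.7)  // ~2.9x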

The causal diagram could become an interface to this model:

  • Click a node → see current estimate, distribution, sources
  • Hover over edge → see elasticity/relationship strength (sketched below)
  • Scenario selector → see how diagram changes under different assumptions
  • Time slider → see which bottlenecks dominate when
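
For the edge hover, the elasticities in the YAML translate directly into code: an edge with elasticity e scales the downstream node by (input/baseline)^e. A minimal sketch using the asml-capacity → fab-capacity edge (the function name is illustrative; the numbers are the p50s from the YAML sketch above):

// An edge with elasticity e: downstream scales as (input / baseline)^e
applyElasticity(downstreamBase, input, inputBase, e) = downstreamBase * (input / inputBase) ^ e

// fab-capacity responding to ASML output with elasticity 0.8:
// 50% more machines (80 -> 120) yields only ~38% more wafer starts
fabCapacity2030 = applyElasticity(200000, 120, 80, 0.8) // ~277000 wafers/month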