
AI Future Projections

Scenario planning is a strategic tool for thinking through how the future might unfold under different conditions. Rather than making single-point predictions, scenarios help us explore multiple plausible futures, identify critical decision points, and prepare for uncertainty.

The future of AI is radically uncertain. We don’t know:

  • How quickly capabilities will advance
  • Whether alignment will succeed
  • How nations and labs will coordinate
  • Which technical approaches will work
  • How society will respond to AI

Scenarios help us:

  • Explore uncertainties: Map out different ways key unknowns could resolve
  • Identify decision points: Spot moments where actions could shift trajectories
  • Stress-test strategies: See which actions are valuable across multiple futures
  • Build shared understanding: Create common language for discussing AI futures
  • Notice early warning signs: Recognize which scenario we’re heading toward

These scenarios span the plausible range of AI development outcomes from 2024-2040. Probabilities are rough estimates reflecting current uncertainties.

Aligned AGI

AI labs successfully solve alignment. Coordinated deployment of powerful AI systems helps solve global challenges. Key decision points go right.

Slow Takeoff Muddle

Gradual AI progress without single breakthrough. Mix of harms and benefits. Governance keeps rough pace with capabilities. Most likely scenario?

Misaligned Catastrophe

Alignment fails. AI systems pursue misaligned goals, leading to catastrophic outcomes. Comes in fast and slow takeover variants. Examines what goes wrong.

Multipolar Competition

Multiple competing AI systems. No single dominant AI. Ongoing instability, conflict, and coordination failures between AI-empowered actors.

Pause and Redirect

Successful coordination to slow AI development. More time for alignment research. Different governance structures emerge. Requires unprecedented coordination.

Note on probabilities: These are subjective estimates, not precise forecasts. They reflect current uncertainties and are meant to be starting points for discussion.

Overlap: Scenarios aren't mutually exclusive. We might see elements of multiple scenarios, or transition from one to another.

Timeframe: These scenarios focus on 2024-2040, with emphasis on critical decision points in the next 5-10 years.
To form your own view:

  1. Read through all scenarios to understand the range of possibilities
  2. Adjust probabilities based on your own models and uncertainties
  3. Identify which assumptions drive your probability estimates
  4. Find your cruxes: What would change your mind about which scenario is most likely?

To plan under uncertainty:

  1. Identify robust actions: What’s valuable across multiple scenarios?
  2. Find scenario-specific opportunities: What’s only valuable in certain futures?
  3. Notice warning signs: What early indicators suggest which path we’re on?
  4. Prepare contingencies: How would you respond if a different scenario emerges?

To prioritize interventions across the field:

  1. Map interventions to scenarios: Which work matters in which worlds?
  2. Assess neglectedness: Are some scenarios getting too much or too little attention?
  3. Find leverage points: Where can actions most shift probabilities between scenarios?

Key Dimensions That Differentiate Scenarios


These scenarios vary along several critical axes:

Takeoff speed:

  • Slow/Continuous: Gradual capability improvement (Muddle, Multipolar)
  • Moderate: Noticeable jumps but time to respond (Pause, parts of Aligned)
  • Fast: Rapid capability jump limiting response time (parts of Catastrophe)

Alignment outcome:

  • Success: Technical alignment problems solved (Aligned AGI)
  • Partial: Some alignment but ongoing challenges (Muddle, Pause)
  • Failure: Alignment doesn’t work at scale (Catastrophe)

Coordination level:

  • High: Strong international cooperation (Aligned AGI, Pause)
  • Medium: Some coordination, ongoing competition (Muddle, parts of Multipolar)
  • Low: Racing dynamics dominate (parts of Catastrophe, Multipolar)

Power distribution:

  • Unipolar: Single dominant AI system or actor (parts of Aligned, Catastrophe)
  • Bipolar: Two major AI powers competing (parts of Multipolar)
  • Multipolar: Many competing AI systems (Multipolar, parts of Muddle)

Societal adaptation:

  • Proactive: Society adapts ahead of AI capabilities (Pause)
  • Reactive: Society keeps rough pace with AI (Muddle, Aligned)
  • Overwhelmed: Society can’t keep up (Catastrophe, parts of Multipolar)
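
To make these axes easier to work with, here is a minimal Python sketch encoding each scenario's position on the five dimensions as plain data. The field names and the reduction of "parts of" entries to a single value are assumptions of this sketch, and cells the list above leaves unspecified are marked None.

```python
# Each scenario's position on the five axes above. This schema is an
# illustrative assumption; "parts of" entries are reduced to one value,
# and None marks cells the axis list does not pin down.
scenarios = {
    "Aligned AGI":            {"takeoff": "moderate", "alignment": "success",
                               "coordination": "high", "polarity": "unipolar",
                               "adaptation": "reactive"},
    "Slow Takeoff Muddle":    {"takeoff": "slow", "alignment": "partial",
                               "coordination": "medium", "polarity": "multipolar",
                               "adaptation": "reactive"},
    "Misaligned Catastrophe": {"takeoff": "fast", "alignment": "failure",
                               "coordination": "low", "polarity": "unipolar",
                               "adaptation": "overwhelmed"},
    "Multipolar Competition": {"takeoff": "slow", "alignment": None,
                               "coordination": "low", "polarity": "multipolar",
                               "adaptation": "overwhelmed"},
    "Pause and Redirect":     {"takeoff": "moderate", "alignment": "partial",
                               "coordination": "high", "polarity": None,
                               "adaptation": "proactive"},
}

# Example query: which scenarios assume high international coordination?
high_coord = [name for name, axes in scenarios.items()
              if axes["coordination"] == "high"]
print(high_coord)  # ['Aligned AGI', 'Pause and Redirect']
```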

Certain decisions and events could significantly shift which scenario we end up in:

  • Scaling continuation: Do we hit capability walls or keep scaling?
  • Catastrophic AI incident: Does a major AI accident occur?
  • Governance momentum: Do AI safety policies gain or lose steam?
  • China-US dynamics: Cooperation or intensifying competition?
  • Alignment breakthroughs: Do we make progress on core alignment problems?
  • AGI threshold: Do we develop systems that qualify as AGI?
  • Deployment decisions: How are powerful AI systems deployed?
  • International agreements: Can nations coordinate on AI development?
  • Economic impacts: How disruptive is AI to employment and growth?
  • Safety culture: Do leading labs maintain or abandon safety commitments?
  • Superintelligence: Do we develop systems significantly beyond human intelligence?
  • Value lock-in: Do certain values or systems become entrenched?
  • Existential outcomes: Do we avoid catastrophic failures?

Key Questions

  • Which scenario do you think is most likely? Why?
  • What evidence would make you update toward a different scenario? (See the update sketch below.)
  • Which interventions are valuable across multiple scenarios?
  • What early warning signs should we watch for?
  • How should we allocate resources given scenario uncertainty?
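
To make "updating toward a scenario" concrete, here is a minimal Bayesian-update sketch over the five scenarios. Every number in it (the priors, the hypothetical piece of evidence, and the likelihoods) is an illustrative assumption for the example, not an estimate from this page.

```python
# Toy Bayesian update over the five scenarios. All numbers are illustrative
# assumptions, not estimates endorsed by this page.
priors = {
    "Aligned AGI": 0.15,
    "Slow Takeoff Muddle": 0.35,
    "Misaligned Catastrophe": 0.15,
    "Multipolar Competition": 0.25,
    "Pause and Redirect": 0.10,
}

# Hypothetical evidence: a binding international AI agreement is signed.
# likelihood[s] = assumed P(evidence | scenario s).
likelihood = {
    "Aligned AGI": 0.6,
    "Slow Takeoff Muddle": 0.3,
    "Misaligned Catastrophe": 0.1,
    "Multipolar Competition": 0.1,
    "Pause and Redirect": 0.8,
}

unnormalized = {s: priors[s] * likelihood[s] for s in priors}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

for s in sorted(posterior, key=posterior.get, reverse=True):
    print(f"{s:24s} {priors[s]:.2f} -> {posterior[s]:.2f}")
```

Under these made-up numbers, the agreement shifts probability mass toward Pause and Redirect and Aligned AGI and away from the racing scenarios; the exercise, not the output, is the point.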

Different interventions have different value depending on which scenario unfolds. This matrix helps identify robust interventions (valuable across scenarios) vs. scenario-specific bets.

| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
| --- | --- | --- | --- | --- | --- |
| Interpretability research | ✅ Accelerates | ✅ Ongoing value | ⚠️ Maybe too late | ✅ Useful | ✅ Useful if resumed |
| RLHF/Constitutional AI | ✅ Core technique | ✅ Standard practice | ❌ Didn’t scale | ✅ Widely used | ✅ Improved during pause |
| AI Control research | ⚠️ Less needed | ✅ Valuable | ✅ Critical if possible | ✅ Necessary | ⚠️ Less urgent |
| Capability evals | ✅ Enabled safe deployment | ✅ Standard practice | ⚠️ Didn’t prevent | ✅ Arms race tool | ✅ Monitoring tool |
| Agent foundations theory | ⚠️ Not needed | ⚠️ Uncertain value | ❌ Too slow | ⚠️ Uncertain | ✅ Time to develop |

Most robust technical intervention: Interpretability (valuable in 4/5 scenarios)
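
Counts like "4/5" can be derived mechanically from the matrix above by tallying the scenarios in which an intervention gets a clear ✅. A minimal sketch over the technical rows:

```python
# Marks from the technical-interventions matrix above.
# Columns: Aligned AGI, Slow Muddle, Catastrophe, Multipolar, Pause.
matrix = {
    "Interpretability research": ["✅", "✅", "⚠️", "✅", "✅"],
    "RLHF/Constitutional AI":    ["✅", "✅", "❌", "✅", "✅"],
    "AI Control research":       ["⚠️", "✅", "✅", "✅", "⚠️"],
    "Capability evals":          ["✅", "✅", "⚠️", "✅", "✅"],
    "Agent foundations theory":  ["⚠️", "⚠️", "❌", "⚠️", "✅"],
}

# Robustness = number of scenarios with a clear ✅.
robustness = {name: marks.count("✅") for name, marks in matrix.items()}
for name, score in sorted(robustness.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} {score}/5")
```

Note that by this raw count RLHF/Constitutional AI and capability evals also score 4/5; singling out interpretability presumably also weighs how much each ✅ matters, which a simple tally does not capture.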

| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
| --- | --- | --- | --- | --- | --- |
| Compute governance | ⚠️ Enabled control | ✅ Key tool | ⚠️ Maybe bypassed | ✅ Critical | ✅ Enables pause |
| International coordination | ✅ Managed transition | ✅ Prevented racing | ❌ Failed | ❌ Failed | ✅ Made it possible |
| Lab safety standards | ✅ Industry norm | ✅ Helpful | ⚠️ Insufficient | ⚠️ Inconsistent | ✅ Strengthened |
| Public advocacy | ✅ Built support | ✅ Maintained pressure | ⚠️ Too late | ⚠️ Polarized | ✅ Created mandate |
| Liability frameworks | ⚠️ Less needed | ✅ Useful deterrent | ❌ Irrelevant | ✅ Some effect | ✅ Incentive alignment |

Most robust governance intervention: Compute governance (valuable in 4/5 scenarios)

| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
| --- | --- | --- | --- | --- | --- |
| Safety research at lab | ✅ Contributed | ✅ Valuable | ⚠️ Accelerated capabilities? | ⚠️ Mixed | ❌ Lab stopped |
| Independent safety research | ⚠️ Less resourced | ✅ Complementary | ⚠️ Insufficient | ✅ Independent voice | ✅ Continued |
| Policy/governance work | ✅ Shaped outcome | ✅ Core contribution | ⚠️ Didn’t prevent | ✅ Critical | ✅ Made it happen |
| Building alternative orgs | ⚠️ Less needed | ✅ Diversified field | ⚠️ Too late | ✅ Needed | ✅ Essential |
| Public communication | ✅ Built consensus | ✅ Maintained awareness | ⚠️ Warning ignored | ⚠️ Weaponized | ✅ Created mandate |

Most robust career choice: Policy/governance work (valuable in 4/5 scenarios)

Legend:

  • ✅ = Clearly valuable in this scenario
  • ⚠️ = Uncertain or mixed value
  • ❌ = Not valuable or counterproductive

How to use this:

  1. If you’re highly uncertain about scenarios: Choose robust interventions (many ✅)
  2. If you have strong scenario beliefs: Weight by your probability estimates
  3. If you want to hedge: Diversify across robust + scenario-specific interventions

If you assign rough probabilities to scenarios, you can estimate intervention value:

| Intervention | Example Weighting* | Robustness Score |
| --- | --- | --- |
| Interpretability research | 0.8 | High |
| Compute governance | 0.75 | High |
| Policy/governance careers | 0.75 | High |
| International coordination | 0.6 | Medium |
| Safety research at frontier lab | 0.5 | Medium (controversial) |
| Agent foundations theory | 0.4 | Low |

*Assuming equal scenario probabilities; your mileage will vary based on your beliefs.
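
As a minimal sketch of that weighting, assume equal scenario probabilities and an assumed ✅/⚠️/❌ to 1/0.5/0 mapping (the page does not state its mapping, so these scores will not exactly reproduce the table):

```python
# Probability-weighted value: score = sum over scenarios of P(s) * value(s).
# The numeric mapping and the equal probabilities are assumptions of this
# sketch; the table above appears to use a similar but unstated weighting.
VALUE = {"✅": 1.0, "⚠️": 0.5, "❌": 0.0}

scenario_probs = {"Aligned AGI": 0.2, "Slow Muddle": 0.2, "Catastrophe": 0.2,
                  "Multipolar": 0.2, "Pause": 0.2}

# Marks per intervention, in the same scenario order as scenario_probs.
interventions = {
    "Interpretability research": ["✅", "✅", "⚠️", "✅", "✅"],
    "Compute governance":        ["⚠️", "✅", "⚠️", "✅", "✅"],
    "Policy/governance work":    ["✅", "✅", "⚠️", "✅", "✅"],
}

probs = list(scenario_probs.values())
for name, marks in interventions.items():
    score = sum(p * VALUE[m] for p, m in zip(probs, marks))
    print(f"{name:28s} {score:.2f}")
```

Replacing the equal scenario_probs with your own estimates is the "weight by your probability estimates" step from the how-to list above.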

Key Insight: Scenario Uncertainty Favors Governance


Technical interventions tend to be more scenario-specific (great if alignment works, useless if it doesn’t). Governance interventions tend to be more robust (useful for managing whatever happens).

If you’re uncertain which technical approach will work: Governance may be a better bet.

If you’re confident in a technical approach: Direct technical work may have higher upside.


What these scenarios don’t capture:

  • Black swan events: Truly unexpected developments we can’t anticipate
  • Technological surprises: Novel AI architectures or approaches
  • Non-AI factors: Climate change, pandemics, other existential risks
  • Detailed timelines: Specific years when events occur
  • Probability distributions: More nuanced likelihoods than rough ranges

Important caveats:

  • Scenarios are illustrative, not exhaustive; other futures are possible
  • Probabilities are subjective and should be debated, not taken as facts
  • Path dependence matters; early choices affect later possibilities
  • Scenarios can blend; we might see elements of multiple futures
  • The future is not fixed; our actions can shift probabilities

Each scenario includes:

  • Detailed narrative of how the future unfolds (2024-2040)
  • Key decision points and branch points along the path
  • Preconditions: what needs to be true for this scenario
  • Warning signs: how we’d know we’re entering this scenario
  • Valuable actions: what to do if this scenario seems likely
  • Winners and losers: who benefits and who suffers