AI Future Projections
Scenario planning is a strategic tool for thinking through how the future might unfold under different conditions. Rather than offering a single-point prediction, scenarios help us explore multiple plausible futures, identify critical decision points, and prepare for uncertainty.
Why Scenario Planning?
The future of AI is radically uncertain. We don’t know:
- How quickly capabilities will advance
- Whether alignment will succeed
- How nations and labs will coordinate
- Which technical approaches will work
- How society will respond to AI
Scenarios help us:
- Explore uncertainties: Map out different ways key unknowns could resolve
- Identify decision points: Spot moments where actions could shift trajectories
- Stress-test strategies: See which actions are valuable across multiple futures
- Build shared understanding: Create common language for discussing AI futures
- Notice early warning signs: Recognize which scenario we’re heading toward
The Five Scenarios
These scenarios span the plausible range of AI development outcomes from 2024-2040. Probabilities are rough estimates reflecting current uncertainties.
Aligned AGI
AI labs successfully solve alignment. Coordinated deployment of powerful AI systems helps solve global challenges. Key decision points go right.
Slow Takeoff Muddle
Gradual AI progress without a single breakthrough. A mix of harms and benefits. Governance keeps rough pace with capabilities. Arguably the most likely scenario.
Misaligned Catastrophe
Alignment fails. AI systems pursue misaligned goals, leading to catastrophic outcomes. Comes in fast and slow takeover variants.
Multipolar Competition
Multiple competing AI systems. No single dominant AI. Ongoing instability, conflict, and coordination failures between AI-empowered actors.
Pause and Redirect
Successful coordination to slow AI development. More time for alignment research. Different governance structures emerge. Requires unprecedented coordination.
How to Use These Scenarios
For Understanding Your Views
- Read through all scenarios to understand the range of possibilities
- Adjust probabilities based on your own models and uncertainties
- Identify which assumptions drive your probability estimates
- Find your cruxes: What would change your mind about which scenario is most likely?
For Strategic Planning
- Identify robust actions: What’s valuable across multiple scenarios?
- Find scenario-specific opportunities: What’s only valuable in certain futures?
- Notice warning signs: What early indicators suggest which path we’re on?
- Prepare contingencies: How would you respond if a different scenario emerges?
For Research and Advocacy
- Map interventions to scenarios: Which work matters in which worlds?
- Assess neglectedness: Are some scenarios getting too much or too little attention?
- Find leverage points: Where can actions most shift probabilities between scenarios?
Key Dimensions That Differentiate Scenarios
These scenarios vary along several critical axes:
1. Takeoff Speed
- Slow/Continuous: Gradual capability improvement (Muddle, Multipolar)
- Moderate: Noticeable jumps but time to respond (Pause, parts of Aligned)
- Fast: Rapid capability jump limiting response time (parts of Catastrophe)
2. Alignment Success
- Success: Technical alignment problems solved (Aligned AGI)
- Partial: Some alignment but ongoing challenges (Muddle, Pause)
- Failure: Alignment doesn’t work at scale (Catastrophe)
3. Coordination Level
- High: Strong international cooperation (Aligned AGI, Pause)
- Medium: Some coordination, ongoing competition (Muddle, parts of Multipolar)
- Low: Racing dynamics dominate (parts of Catastrophe, Multipolar)
4. Distribution of AI Power
- Unipolar: Single dominant AI system or actor (parts of Aligned, Catastrophe)
- Bipolar: Two major AI powers competing (parts of Multipolar)
- Multipolar: Many competing AI systems (Multipolar, parts of Muddle)
5. Societal Response
- Proactive: Society adapts ahead of AI capabilities (Pause)
- Reactive: Society keeps rough pace with AI (Muddle, Aligned)
- Overwhelmed: Society can’t keep up (Catastrophe, parts of Multipolar)
Critical Branch Points
Certain decisions and events could significantly shift which scenario we end up in:
Near-Term (2024-2027)
- Scaling continuation: Do we hit capability walls or keep scaling?
- Catastrophic AI incident: Does a major AI accident occur?
- Governance momentum: Do AI safety policies gain or lose steam?
- China-US dynamics: Cooperation or intensifying competition?
- Alignment breakthroughs: Do we make progress on core alignment problems?
Medium-Term (2027-2032)
- AGI threshold: Do we develop systems that qualify as AGI?
- Deployment decisions: How are powerful AI systems deployed?
- International agreements: Can nations coordinate on AI development?
- Economic impacts: How disruptive is AI to employment and growth?
- Safety culture: Do leading labs maintain or abandon safety commitments?
Long-Term (2032-2040)
- Superintelligence: Do we develop systems significantly beyond human intelligence?
- Value lock-in: Do certain values or systems become entrenched?
- Existential outcomes: Do we avoid catastrophic failures?
Using Scenario Analysis
Intervention Robustness Matrix
Different interventions have different value depending on which scenario unfolds. This matrix helps identify robust interventions (valuable across scenarios) vs. scenario-specific bets.
Technical Interventions
Section titled “Technical Interventions”| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
|---|---|---|---|---|---|
| Interpretability research | ✅ Accelerates | ✅ Ongoing value | ⚠️ Maybe too late | ✅ Useful | ✅ Useful if resumed |
| RLHF/Constitutional AI | ✅ Core technique | ✅ Standard practice | ❌ Didn’t scale | ✅ Widely used | ✅ Improved during pause |
| AI Control research | ⚠️ Less needed | ✅ Valuable | ✅ Critical if possible | ✅ Necessary | ⚠️ Less urgent |
| Capability evals | ✅ Enabled safe deployment | ✅ Standard practice | ⚠️ Didn’t prevent | ✅ Arms race tool | ✅ Monitoring tool |
| Agent foundations theory | ⚠️ Not needed | ⚠️ Uncertain value | ❌ Too slow | ⚠️ Uncertain | ✅ Time to develop |
Most robust technical intervention: Interpretability (valuable in 4/5 scenarios)
Governance Interventions
| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
|---|---|---|---|---|---|
| Compute governance | ⚠️ Enabled control | ✅ Key tool | ⚠️ Maybe bypassed | ✅ Critical | ✅ Enables pause |
| International coordination | ✅ Managed transition | ✅ Prevented racing | ❌ Failed | ❌ Failed | ✅ Made it possible |
| Lab safety standards | ✅ Industry norm | ✅ Helpful | ⚠️ Insufficient | ⚠️ Inconsistent | ✅ Strengthened |
| Public advocacy | ✅ Built support | ✅ Maintained pressure | ⚠️ Too late | ⚠️ Polarized | ✅ Created mandate |
| Liability frameworks | ⚠️ Less needed | ✅ Useful deterrent | ❌ Irrelevant | ✅ Some effect | ✅ Incentive alignment |
Most robust governance intervention: Compute governance (clearly valuable or potentially valuable in all 5 scenarios)
Career/Personal Interventions
| Intervention | Aligned AGI | Slow Muddle | Catastrophe | Multipolar | Pause |
|---|---|---|---|---|---|
| Safety research at a frontier lab | ✅ Contributed | ✅ Valuable | ⚠️ Accelerated capabilities? | ⚠️ Mixed | ❌ Lab stopped |
| Independent safety research | ⚠️ Less resourced | ✅ Complementary | ⚠️ Insufficient | ✅ Independent voice | ✅ Continued |
| Policy/governance work | ✅ Shaped outcome | ✅ Core contribution | ⚠️ Didn’t prevent | ✅ Critical | ✅ Made it happen |
| Building alternative orgs | ⚠️ Less needed | ✅ Diversified field | ⚠️ Too late | ✅ Needed | ✅ Essential |
| Public communication | ✅ Built consensus | ✅ Maintained awareness | ⚠️ Warning ignored | ⚠️ Weaponized | ✅ Created mandate |
Most robust career choice: Policy/governance work (valuable in 4/5 scenarios)
Reading the Matrix
Legend:
- ✅ = Clearly valuable in this scenario
- ⚠️ = Uncertain or mixed value
- ❌ = Not valuable or counterproductive
How to use this:
- If you’re highly uncertain about scenarios: Choose robust interventions (many ✅; see the sketch below for one way to count them)
- If you have strong scenario beliefs: Weight by your probability estimates
- If you want to hedge: Diversify across robust + scenario-specific interventions
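To make the “many ✅” heuristic concrete, here is a minimal sketch that treats the technical-interventions table above as data and counts the scenarios in which each intervention is clearly valuable. Encoding the matrix as Python data is an illustrative assumption, not part of the original framework.

```python
# Minimal sketch of the "count the checkmarks" robustness heuristic, using the
# technical-interventions table above. Encoding the matrix as Python data is an
# illustrative assumption, not part of the original framework.

SCENARIOS = ["Aligned AGI", "Slow Muddle", "Catastrophe", "Multipolar", "Pause"]

# Symbols copied row by row from the technical-interventions table.
TECHNICAL = {
    "Interpretability research": ["✅", "✅", "⚠️", "✅", "✅"],
    "RLHF/Constitutional AI":    ["✅", "✅", "❌", "✅", "✅"],
    "AI Control research":       ["⚠️", "✅", "✅", "✅", "⚠️"],
    "Capability evals":          ["✅", "✅", "⚠️", "✅", "✅"],
    "Agent foundations theory":  ["⚠️", "⚠️", "❌", "⚠️", "✅"],
}

def robustness(symbols: list[str]) -> int:
    """Count the scenarios in which an intervention is clearly valuable (✅)."""
    return sum(1 for s in symbols if s == "✅")

# Rank interventions by how many futures they clearly help in.
for name, symbols in sorted(TECHNICAL.items(), key=lambda kv: -robustness(kv[1])):
    print(f"{name:26} clearly valuable in {robustness(symbols)}/{len(SCENARIOS)} scenarios")
```

Running it reproduces the 4/5 count cited for interpretability above, and the same loop works unchanged on the governance and career tables.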
Scenario-Weighted Expected Value
If you assign rough probabilities to scenarios, you can estimate intervention value, as in the table below and the sketch that follows it:
| Intervention | Example Weighted Value* | Robustness Score |
|---|---|---|
| Interpretability research | 0.8 | High |
| Compute governance | 0.75 | High |
| Policy/governance careers | 0.75 | High |
| International coordination | 0.6 | Medium |
| Safety research at frontier lab | 0.5 | Medium (controversial) |
| Agent foundations theory | 0.4 | Low |
*Assuming equal scenario probabilities; your mileage will vary based on your beliefs.
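To show how such a weighting can be computed, here is a minimal sketch under stated assumptions: each symbol is mapped to a numeric score (✅ = 1.0, ⚠️ = 0.5, ❌ = 0.0, an illustrative choice), and each intervention’s value is its probability-weighted average across scenarios. Because the table’s numbers reflect judgment rather than a formula, the outputs will not match them exactly.

```python
# Minimal sketch of scenario-weighted expected value. The symbol-to-score
# mapping below is an illustrative assumption; substitute your own scores and
# scenario probabilities.

SYMBOL_SCORE = {"✅": 1.0, "⚠️": 0.5, "❌": 0.0}
SCENARIOS = ["Aligned AGI", "Slow Muddle", "Catastrophe", "Multipolar", "Pause"]

def weighted_value(symbols: list[str], probabilities: list[float]) -> float:
    """Expected value across scenarios: sum over i of p_i * score_i."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities should sum to 1"
    return sum(p * SYMBOL_SCORE[s] for p, s in zip(probabilities, symbols))

# Equal weight on each scenario, matching the assumption behind the table above.
equal_probs = [0.2, 0.2, 0.2, 0.2, 0.2]

# Two rows copied from the technical-interventions table for illustration.
examples = {
    "Interpretability research": ["✅", "✅", "⚠️", "✅", "✅"],
    "Agent foundations theory":  ["⚠️", "⚠️", "❌", "⚠️", "✅"],
}

for name, symbols in examples.items():
    print(f"{name:26} weighted value = {weighted_value(symbols, equal_probs):.2f}")
```

Swapping `equal_probs` for your own scenario probabilities turns the matrix into a personal prioritization tool; the relative ranking matters more than the absolute numbers.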
Key Insight: Scenario Uncertainty Favors Governance
Technical interventions tend to be more scenario-specific (great if alignment works, useless if it doesn’t). Governance interventions tend to be more robust (useful for managing whatever happens).
If you’re uncertain which technical approach will work: Governance may be a better bet.
If you’re confident in a technical approach: Direct technical work may have higher upside.
Limitations of This Framework
What these scenarios don’t capture:
- Black swan events: Truly unexpected developments we can’t anticipate
- Technological surprises: Novel AI architectures or approaches
- Non-AI factors: Climate change, pandemics, other existential risks
- Detailed timelines: Specific years when events occur
- Probability distributions: More nuanced likelihoods than rough ranges
Important caveats:
- Scenarios are illustrative, not exhaustive - other futures are possible
- Probabilities are subjective and should be debated, not taken as facts
- Path dependence matters - early choices affect later possibilities
- Scenarios can blend - we might see elements of multiple futures
- The future is not fixed - our actions can shift probabilities
Further Reading
Each scenario includes:
- Detailed narrative of how the future unfolds (2024-2040)
- Key decision points and branch points along the path
- Preconditions - what needs to be true for this scenario
- Warning signs - how we’d know we’re entering this scenario
- Valuable actions - what to do if this scenario seems likely
- Winners and losers - who benefits and who suffers
- Scenarios for the Future of AI - Future of Life Institute
- AGI Safety from First Principles - Richard Ngo
- What Failure Looks Like - Paul Christiano
- Superintelligence: Paths, Dangers, Strategies - Nick Bostrom
- The Precipice - Toby Ord