# Ultimate Scenarios
Ultimate Scenarios are the intermediate pathways that connect root factors to ultimate outcomes. They describe how parameter changes lead to catastrophe (or success)—the specific mechanisms and pathways that determine what kind of future we get.
The AI Transition Model uses three main ultimate scenarios:
- AI Takeover — AI gains decisive control
- Human-Caused Catastrophe — Humans use AI for mass harm
- Long-term Lock-in — Permanent entrenchment of values/power
Each ultimate scenario has sub-variants that describe more specific pathways (e.g., “rapid” vs “gradual” AI takeover, “state” vs “rogue actor” catastrophe).
## The Three-Layer Model

Color coding:
- Red: Ultimate negative outcome (existential catastrophe)
- Green: Ultimate trajectory measure (could be good or bad)
- Pink: Negative ultimate scenarios (catastrophes)
- Orange: Symmetric ultimate scenario (could entrench good or bad values)
## Ultimate Scenarios Summary

| Ultimate Scenario | Description | Key Root Factors | Ultimate Outcomes |
|---|---|---|---|
| AI Takeover | A scenario where AI systems gain decisive control over human affairs, either through rapid capability gain or gradual accumulation of power. This could occur through misaligned goals, deceptive behavior, or humans voluntarily ceding control. The outcome depends heavily on whether the AI's values align with human flourishing. | AI Capabilities ↑, Misalignment Potential ↑, Misuse Potential ↑, Transition Turbulence ↑, Civilizational Competence ↓, AI Ownership, AI Uses ↑ | Existential Catastrophe, Long-term Trajectory |
| Human-Caused Catastrophe | Scenarios where humans deliberately use AI to cause mass harm. State actors might deploy AI-enabled weapons or surveillance; rogue actors could use AI to develop bioweapons or conduct massive cyber attacks. Unlike AI takeover, humans remain in control but use that control destructively. | AI Capabilities ↑, Misalignment Potential ↑, Misuse Potential ↑, Transition Turbulence ↑, Civilizational Competence ↓, AI Ownership, AI Uses | Existential Catastrophe |
| Long-term Lock-in | Permanent entrenchment of particular power structures, values, or conditions due to AI-enabled stability. This could be positive (locking in good values) or negative (perpetuating suffering or oppression). Once locked in, these outcomes may be extremely difficult to change. | AI Capabilities ↑, Misalignment Potential ↑, Misuse Potential ↑, Transition Turbulence ↑, Civilizational Competence, AI Ownership ↑, AI Uses ↑ | Long-term Trajectory |
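The three-layer structure in the table above can be sketched as plain data. This is an illustrative sketch only, not the model's actual implementation; the names follow the summary table, and the helper function is hypothetical.

```python
# Illustrative sketch of the three-layer model: root factors feed
# ultimate scenarios, which feed ultimate outcomes. Names follow the
# summary table; the structure and helper are assumptions, not the
# AI Transition Model's real code.

ROOT_FACTORS = {
    "Misalignment Potential", "AI Capabilities", "AI Uses", "AI Ownership",
    "Civilizational Competence", "Transition Turbulence", "Misuse Potential",
}

SCENARIOS = {
    "AI Takeover": {
        "outcomes": {"Existential Catastrophe", "Long-term Trajectory"},
    },
    "Human-Caused Catastrophe": {
        "outcomes": {"Existential Catastrophe"},
    },
    "Long-term Lock-in": {
        "outcomes": {"Long-term Trajectory"},
    },
}

def scenarios_affecting(outcome: str) -> set[str]:
    """Which ultimate scenarios feed into a given ultimate outcome?"""
    return {name for name, s in SCENARIOS.items() if outcome in s["outcomes"]}

print(scenarios_affecting("Existential Catastrophe"))
```

A query like `scenarios_affecting("Long-term Trajectory")` makes the layering concrete: Lock-in bears only on long-term trajectory, Human-Caused Catastrophe only on existential catastrophe, and AI Takeover on both.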
## How Ultimate Scenarios Differ from Other Concepts

| Concept | What It Is | Example |
|---|---|---|
| Root Factors | Aggregate variables that shape scenarios | “Misalignment Potential” |
| Parameters | Specific measurable factors | “Alignment Robustness” |
| Risks | Things that could go wrong | “Deceptive Alignment” |
| Ultimate Scenarios | Intermediate pathways connecting factors to outcomes | “AI Takeover” |
| Ultimate Outcomes | High-level goals we care about | “Existential Catastrophe”, “Long-term Trajectory” |
Key distinction: A risk like “deceptive alignment” is a mechanism that could happen. An ultimate scenario like “AI Takeover” is the outcome that results if such mechanisms play out. Multiple risks can contribute to a single ultimate scenario.
## Why This Layer Matters

### 1. Clarifies Causal Chains

Without this layer, the connection between “Misalignment Potential increasing” and “Existential Catastrophe increasing” is abstract. Ultimate scenarios show the specific pathway: alignment fails → AI develops misaligned goals → AI takes over → catastrophe.
### 2. Enables Different Intervention Strategies

Different ultimate scenarios require different interventions:
- AI Takeover: Technical alignment, capability restrictions
- Human-Caused Catastrophe: International coordination, misuse prevention
- Long-term Lock-in: Power distribution, institutional design
### 3. Supports Scenario Planning

Ultimate scenarios map directly onto scenarios that organizations can plan for. Rather than asking “what if Existential Catastrophe increases?”, planners can ask “what if we’re heading toward a Human-Caused Catastrophe?”
### 4. Connects to Existing Threat Models

Each ultimate scenario corresponds to threat models discussed in the AI safety literature:
- Carlsmith’s six-premise argument → AI Takeover scenarios
- Christiano’s “What Failure Looks Like” → Gradual AI Takeover
- Ord’s “The Precipice” risk categories → Multiple ultimate scenarios
- Kasirzadeh’s decisive vs. accumulative risk distinction → Rapid vs. Gradual takeover
## Using This Section

### For Analysts

- Map specific risks to the ultimate scenarios they could produce
- Estimate which ultimate scenarios are most likely given current parameter trends
- Identify which parameters to prioritize based on which ultimate scenarios concern you most
### For Policymakers

- Design interventions targeted at preventing specific ultimate scenarios
- Coordinate across domains (a single ultimate scenario may require multiple types of intervention)
- Track early warning signs for each ultimate scenario
### For Researchers

- Use ultimate scenarios to frame research priorities
- Connect technical work to concrete scenarios it addresses
- Identify gaps in our understanding of specific pathways
## Related Sections

- Root Factors — The parameter groupings that feed into ultimate scenarios
- Ultimate Outcomes — The high-level goals ultimate scenarios affect
- Interactive Model — Full interactive visualization
- Models — Analytical frameworks for understanding pathways