Ultimate Outcomes
Ultimate Outcomes represent what we fundamentally care about when thinking about AI’s impact on humanity. Unlike Scenarios (which describe intermediate states along the way) or Parameters (which measure specific factors), Ultimate Outcomes describe the final states we’re trying to achieve or avoid.
There are two Ultimate Outcomes:
- Existential Catastrophe — Does catastrophe occur?
- Long-term Trajectory — What’s the expected value of the future?
The Two Outcomes
| Outcome | Question | Key Ultimate Scenarios |
|---|---|---|
| Existential Catastrophe | Does civilization-ending harm occur? | AI Takeover, Human-Caused Catastrophe |
| Long-term Trajectory | What's the quality of the post-transition future? | AI Takeover, Long-term Lock-in |
Why Two Outcomes?
Previous versions of this framework had three outcomes (including “Transition Smoothness”). We moved to two because:
1. Transition Turbulence is a pathway, not an endpoint: How rough the transition is affects both existential catastrophe and long-term trajectory. It belongs in Root Factors.
2. Cleaner analytical structure: The two outcomes are genuinely orthogonal:
   - You can have low existential-catastrophe risk but a poor long-term trajectory (a safe dystopia)
   - You can have high existential-catastrophe risk but good conditional value (a high-stakes gamble)
3. Temporal clarity: Existential Catastrophe is primarily about the transition period; Long-term Trajectory is about what comes after.
How They Relate
These outcomes are partially independent—you can have different combinations:
| Scenario | Existential Catastrophe | Long-term Trajectory | Example |
|---|---|---|---|
| Best case | Low | High | Aligned AI, smooth transition, flourishing |
| Safe dystopia | Low | Low | No catastrophe but authoritarian lock-in |
| High-stakes success | High (but survived) | High | Near misses but a good outcome |
| Extinction | Very High | N/A | Catastrophe occurs |
This independence means:
- Different Ultimate Scenarios affect different Ultimate Outcomes
- Trade-offs exist: Some approaches that reduce existential catastrophe might worsen long-term trajectory (e.g., authoritarian control)
- Both matter: We shouldn’t sacrifice one entirely for the other
How Ultimate Scenarios Flow to Ultimate Outcomes
Each Ultimate Scenario has sub-variants with different probability estimates. See the Ultimate Scenarios section for details.
Temporal Structure
These outcomes map to different phases of the AI transition:
| Phase | Primary Concern | Relevant Outcome |
|---|---|---|
| Pre-transformative AI (now) | Building capacity, avoiding racing | Existential Catastrophe (preparation) |
| Existential Catastrophe Period | Surviving the transition | Existential Catastrophe |
| Resolution | How it resolves | Both |
| Long-run Trajectory | Quality of the future | Long-term Trajectory |
Related Pages
- Scenarios — The intermediate scenarios
- AI Transition Model — All parameters
- Aggregate Parameters — How parameters group together