Power Lock-in
Overview
The power transition describes the inevitable shift in relative capabilities and influence between AI systems and humans. Unlike the other critical outcomes (which describe specific catastrophic scenarios or value questions), this is a neutral framing of a transition that will happen as AI capabilities advance. The question is not whether there will be a power transition, but how it unfolds:
- How fast?
- How smooth?
- Who retains meaningful influence?
- What oversight mechanisms persist?
Polarity
This outcome is framed neutrally: the transition itself is neither good nor bad, and it can take any of several characters.
| Character | Description | Key Features |
|---|---|---|
| Smooth, human-led | Humans maintain meaningful control throughout transition | Gradual capability handoff, robust oversight, human values preserved |
| Bumpy but stable | Disruption occurs but society adapts | Some loss of control, course corrections, eventual equilibrium |
| Chaotic fragmentation | Transition overwhelms coordination capacity | Multiple competing AI systems, fragmented governance, unpredictable outcomes |
| AI-dominated | Humans lose meaningful influence | AI systems make most consequential decisions, humans become dependent |
How This Happens
Key Dimensions of the Transition
1. Speed: How quickly do AI capabilities advance relative to human ability to adapt?
- Slow transition: Time for institutions, governance, and social norms to evolve
- Fast transition: Society overwhelmed, existing structures inadequate
2. Controllability: Can humans maintain meaningful oversight as AI becomes more capable?
- High controllability: AI systems remain tools, humans make key decisions
- Low controllability: AI systems operate autonomously, human oversight nominal
3. Distribution: How are AI capability and influence distributed?
- Distributed: Many actors have AI capabilities, checks and balances possible
- Concentrated: Few actors control AI, power asymmetries grow
4. Reversibility: Can the transition be adjusted if things go wrong?
- Reversible: Course corrections possible, mistakes recoverable
- Irreversible: Once certain thresholds passed, no going back
Key Parameters
| Parameter | Impact on Transition |
|---|---|
| Safety-Capability Gap | Larger gap → less smooth transition |
| Human Agency | Higher → more human-led transition |
| Human Expertise | Higher → humans can meaningfully participate |
| Societal Adaptability | Higher → better handling of disruption |
| Racing Intensity | Higher → faster, less controlled transition |
| Coordination Capacity | Higher → better collective management |
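The directional relationships in this table can be illustrated with a toy additive index. This is only a sketch: the parameter names, equal weights, and normalization are illustrative assumptions, not a calibrated model of the transition.

```python
from dataclasses import dataclass


@dataclass
class TransitionParams:
    """Each parameter scored in [0, 1]. Names mirror the table above;
    the scoring itself is a hypothetical simplification."""
    safety_capability_gap: float   # larger -> less smooth transition
    human_agency: float            # higher -> more human-led
    human_expertise: float         # higher -> humans can participate
    societal_adaptability: float   # higher -> better handling of disruption
    racing_intensity: float        # higher -> faster, less controlled
    coordination_capacity: float   # higher -> better collective management


def smoothness_score(p: TransitionParams) -> float:
    """Toy index of transition smoothness, mapped to [0, 1].

    The four supportive parameters add; the two risk parameters
    (safety-capability gap, racing intensity) subtract. Equal weights
    are an arbitrary assumption, chosen only to show the directions.
    """
    positive = (p.human_agency + p.human_expertise
                + p.societal_adaptability + p.coordination_capacity)
    negative = p.safety_capability_gap + p.racing_intensity
    # Raw range is [-2, 4]; shift and rescale into [0, 1].
    return (positive - negative + 2) / 6


# Mid-range values on every parameter yield a mid-range score.
print(smoothness_score(TransitionParams(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)))  # 0.5
```

The point of the sketch is the sign structure, not the numbers: raising human agency raises the score, while raising racing intensity lowers it, matching the arrows in the table.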
Which Ultimate Outcomes It Affects
Long-term Trajectory (Primary)
How the transition unfolds shapes the long-run trajectory:
- Human-led transitions more likely to preserve human values
- AI-dominated transitions may optimize for other goals
- Who retains influence determines whose values prevail
Existential Catastrophe (Secondary)
The character of the transition also affects the likelihood of existential catastrophe:
- Chaotic transitions increase accident probability
- Fast transitions reduce response time to problems
- Loss of control increases risk of catastrophic outcomes
Historical Analogies
| Transition | Speed | Character | Lessons |
|---|---|---|---|
| Industrial Revolution | Decades | Bumpy but transformative | Social adaptation takes time; inequality increased before policies adjusted |
| Internet/Digital | Years | Fast, disruptive | Institutions still catching up; concentration emerged |
| Nuclear weapons | Sudden | Managed but scary | International coordination possible under extreme threat |
| Agricultural Revolution | Centuries | Smooth at civilizational scale | Slow transitions allow gradual adaptation |
The AI transition may be faster than any previous technological transition, leaving less time for institutions and norms to adapt than any historical precedent suggests.
Scenarios by Transition Character
Smooth, Human-Led (Best case)
- AI development proceeds at a pace humans can manage
- Strong safety research stays ahead of capabilities
- Democratic governance adapts to AI
- Human expertise remains relevant
- Benefits broadly shared
Bumpy but Stable (Acceptable)
- Significant disruption (job displacement, institutional stress)
- Some loss of human control, some accidents
- Course corrections happen, society adapts
- Eventual stable equilibrium with humans still influential
Chaotic Fragmentation (Concerning)
- AI development outpaces coordination
- Multiple competing AI systems with different goals
- No clear governance structure
- Unpredictable outcomes, possible conflict
AI-Dominated (Failure)
- Humans lose meaningful control over key decisions
- AI systems operate autonomously
- Human preferences may or may not be considered
- If values are misaligned, effectively equivalent to AI Takeover - Gradual
Warning Signs
Signs of smooth transition:
- Safety research keeping pace with capabilities
- Governance frameworks adapting effectively
- Public understanding increasing
- Benefits visibly distributed
Signs of problematic transition:
- Racing dynamics intensifying
- Safety research falling behind
- Governance fragmented or captured
- Public trust declining
- Expertise atrophying
- Power concentrating
Interventions That Affect Transition Character
To slow the transition (buy time):
- Compute governance
- Safety requirements before deployment
- International coordination on pace
To improve adaptability:
- Workforce transition support
- Education reform
- Institutional flexibility
To maintain human influence:
- Human-in-the-loop requirements
- Transparency and oversight mechanisms
- Maintaining human expertise
- Democratic AI governance
To improve coordination:
- International AI governance frameworks
- Industry coordination on safety
- Public engagement and deliberation
Probability Estimates
| Transition Type | Assessment |
|---|---|
| Smooth, human-led | Possible with significant effort; not default |
| Bumpy but stable | Perhaps most likely if current trajectory continues |
| Chaotic fragmentation | Significant risk, especially with racing dynamics |
| AI-dominated | Risk increases with capability acceleration |
Related Content
Existing Pages
- Long-term Trajectory — The ultimate outcome this affects
- Transition Turbulence — Related critical outcome
- Societal Adaptability — Key aggregate for this outcome
External Resources
- Karnofsky, H. (2021). Most Important Century series
- Ord, T. (2020). The Precipice — Discussion of existential risk during transition
- Cotra, A. (2022). AI timelines and transition scenarios