
Power Lock-in

The power transition is the shift in relative capabilities and influence between humans and AI systems. Unlike the other critical outcomes (which describe specific catastrophic scenarios or value questions), this is a neutral framing of a transition that will happen as AI capabilities advance. The question is not whether there will be a power transition, but how it unfolds:

  • How fast?
  • How smooth?
  • Who retains meaningful influence?
  • What oversight mechanisms persist?

This neutral framing admits several possible transition characters:

| Character | Description | Key Features |
| --- | --- | --- |
| Smooth, human-led | Humans maintain meaningful control throughout transition | Gradual capability handoff, robust oversight, human values preserved |
| Bumpy but stable | Disruption occurs but society adapts | Some loss of control, course corrections, eventual equilibrium |
| Chaotic fragmentation | Transition overwhelms coordination capacity | Multiple competing AI systems, fragmented governance, unpredictable outcomes |
| AI-dominated | Humans lose meaningful influence | AI systems make most consequential decisions, humans become dependent |


1. Speed: How quickly do AI capabilities advance relative to the human ability to adapt?

  • Slow transition: Time for institutions, governance, and social norms to evolve
  • Fast transition: Society overwhelmed, existing structures inadequate

2. Controllability: Can humans maintain meaningful oversight as AI becomes more capable?

  • High controllability: AI systems remain tools, humans make key decisions
  • Low controllability: AI systems operate autonomously, human oversight nominal

3. Distribution: How are AI capability and influence distributed?

  • Distributed: Many actors have AI capabilities, checks and balances possible
  • Concentrated: Few actors control AI, power asymmetries grow

4. Reversibility: Can the transition be adjusted if things go wrong?

  • Reversible: Course corrections possible, mistakes recoverable
  • Irreversible: Once certain thresholds passed, no going back
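Taken together, the four dimensions can be read as a rough decision procedure for which transition character is most plausible. The Python sketch below is purely illustrative: the rules, their ordering, and the two-valued inputs are hypothetical simplifications of the text above, not a validated model.

```python
# Toy decision procedure: map the four transition dimensions onto the
# four transition characters. Rules and their ordering are hypothetical
# illustrations, not a validated model.

def transition_character(speed: str, controllability: str,
                         distribution: str, reversibility: str) -> str:
    """speed: 'slow' | 'fast'
    controllability: 'high' | 'low'
    distribution: 'distributed' | 'concentrated'
    reversibility: 'reversible' | 'irreversible'
    """
    if speed == "slow" and controllability == "high":
        return "Smooth, human-led"
    if controllability == "low" and distribution == "concentrated":
        return "AI-dominated"
    if speed == "fast" and reversibility == "irreversible":
        return "Chaotic fragmentation"
    return "Bumpy but stable"

print(transition_character("fast", "low", "concentrated", "irreversible"))
# → AI-dominated
```

Even a caricature like this makes one point visible: no single dimension determines the outcome; it is the combinations that matter.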

| Parameter | Impact on Transition |
| --- | --- |
| Safety-Capability Gap | Larger gap → less smooth transition |
| Human Agency | Higher → more human-led transition |
| Human Expertise | Higher → humans can meaningfully participate |
| Societal Adaptability | Higher → better handling of disruption |
| Racing Intensity | Higher → faster, less controlled transition |
| Coordination Capacity | Higher → better collective management |
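One way to make the table's directional claims concrete is a toy linear score, where each parameter pushes the transition toward or away from smoothness. The signs below follow the table; the unit weights, the parameter names as identifiers, and the [0, 1] scale are assumptions made for illustration only.

```python
# Toy linear score combining the six parameters from the table.
# Signs follow the table's stated directions; the unit weights and
# the [0, 1] input scale are hypothetical assumptions.

PARAMETER_SIGNS = {
    "safety_capability_gap": -1,  # larger gap -> less smooth
    "human_agency": +1,
    "human_expertise": +1,
    "societal_adaptability": +1,
    "racing_intensity": -1,       # more racing -> less controlled
    "coordination_capacity": +1,
}

def smoothness_score(params: dict[str, float]) -> float:
    """params maps each parameter name to a level in [0, 1].
    Higher scores suggest a smoother, more human-led transition
    under these toy assumptions; the range is [-2, +4]."""
    return sum(PARAMETER_SIGNS[name] * level for name, level in params.items())
```

The linear form is the simplest possible choice; in reality the parameters interact (for example, racing intensity likely widens the safety-capability gap).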

How the transition unfolds shapes the long-run trajectory:

  • Human-led transitions more likely to preserve human values
  • AI-dominated transitions may optimize for other goals
  • Who retains influence determines whose values prevail

Transition character also affects the probability of existential catastrophe:

  • Chaotic transitions increase accident probability
  • Fast transitions reduce response time to problems
  • Loss of control increases risk of catastrophic outcomes

| Transition | Speed | Character | Lessons |
| --- | --- | --- | --- |
| Industrial Revolution | Decades | Bumpy but transformative | Social adaptation takes time; inequality increased before policies adjusted |
| Internet/Digital | Years | Fast, disruptive | Institutions still catching up; concentration emerged |
| Nuclear weapons | Sudden | Managed but scary | International coordination possible under extreme threat |
| Agricultural Revolution | Centuries | Smooth at civilizational scale | Slow transitions allow gradual adaptation |

The AI transition may be faster than any previous technological transition, leaving less time for institutions, norms, and expertise to adapt.


Smooth, human-led:

  • AI development proceeds at a pace humans can manage
  • Strong safety research stays ahead of capabilities
  • Democratic governance adapts to AI
  • Human expertise remains relevant
  • Benefits broadly shared

Bumpy but stable:

  • Significant disruption (job displacement, institutional stress)
  • Some loss of human control, some accidents
  • Course corrections happen, society adapts
  • Eventual stable equilibrium with humans still influential

Chaotic fragmentation:

  • AI development outpaces coordination
  • Multiple competing AI systems with different goals
  • No clear governance structure
  • Unpredictable outcomes, possible conflict

AI-dominated:

  • Humans lose meaningful control over key decisions
  • AI systems operate autonomously
  • Human preferences may or may not be considered
  • Effectively equivalent to AI Takeover - Gradual if values misaligned

Signs of smooth transition:

  • Safety research keeping pace with capabilities
  • Governance frameworks adapting effectively
  • Public understanding increasing
  • Benefits visibly distributed

Signs of problematic transition:

  • Racing dynamics intensifying
  • Safety research falling behind
  • Governance fragmented or captured
  • Public trust declining
  • Expertise atrophying
  • Power concentrating

Interventions That Affect Transition Character

To slow the transition (buy time):

  • Compute governance
  • Safety requirements before deployment
  • International coordination on pace

To improve adaptability:

  • Workforce transition support
  • Education reform
  • Institutional flexibility

To maintain human influence:

  • Human-in-the-loop requirements
  • Transparency and oversight mechanisms
  • Maintaining human expertise
  • Democratic AI governance

To improve coordination:

  • International AI governance frameworks
  • Industry coordination on safety
  • Public engagement and deliberation

| Transition Type | Assessment |
| --- | --- |
| Smooth, human-led | Possible with significant effort; not default |
| Bumpy but stable | Perhaps most likely if current trajectory continues |
| Chaotic fragmentation | Significant risk, especially with racing dynamics |
| AI-dominated | Risk increases with capability acceleration |

  • Karnofsky, H. (2021). "The Most Important Century" series.
  • Ord, T. (2020). *The Precipice*. Discussion of existential risk during the transition.
  • Cotra, A. (2022). AI timelines and transition scenarios.