# Lock-in Probability Model
## Overview

This model provides a quantitative framework for assessing AI-enabled lock-in risk. It complements Lock-in Risk, which covers mechanisms, current trends, and responses in detail.
## Probability Estimates by Scenario

The following probability estimates draw on expert assessments from the Future of Life Institute↗, EA Forum analyses↗, and Future of Humanity Institute↗ research:
| Scenario | Probability by 2050 | Duration if Realized | Key Drivers | Reversibility Window |
|---|---|---|---|---|
| Totalitarian surveillance state | 5-15% | Potentially indefinite | AI-enhanced monitoring, predictive policing, autonomous enforcement | 5-10 years before fully entrenched |
| Value lock-in via AI training | 10-20% | Centuries to millennia | Constitutional AI approaches, training data choices, RLHF value embedding↗ | 3-7 years during development phase |
| Economic power concentration | 15-25% | Decades to centuries | Network effects, compute monopoly↗, data advantages | 10-20 years with antitrust action |
| Geopolitical lock-in | 10-20% | Decades to centuries | First-mover AI advantages, regulatory capture | Uncertain, depends on coordination |
| Aligned singleton (positive) | 5-10% | Indefinite | Successful alignment, beneficial governance | N/A (desirable outcome) |
| Misaligned AI takeover | 2-10% | Permanent | Deceptive alignment, capability overhang | Days to weeks at critical juncture |
Note: These ranges reflect significant uncertainty. The stable totalitarianism analysis↗ suggests extreme scenarios may be below 1%, while other researchers place combined lock-in risk at 10-30%.
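As a rough sanity check on how these per-scenario ranges aggregate, the sketch below combines the five negative scenarios under an independence assumption. This is a minimal illustration: the variable names and the independence assumption are editorial choices, not part of the cited analyses.

```python
# Sketch: combine per-scenario lock-in probabilities assuming the
# scenarios are independent events. The probability ranges are copied
# from the table above; independence is an illustrative assumption.

NEGATIVE_SCENARIOS = {
    "totalitarian_surveillance": (0.05, 0.15),
    "value_lock_in": (0.10, 0.20),
    "economic_concentration": (0.15, 0.25),
    "geopolitical_lock_in": (0.10, 0.20),
    "misaligned_takeover": (0.02, 0.10),
}

def combined_risk(probs: list[float]) -> float:
    """P(at least one scenario occurs) under independence:
    1 minus the product of the per-scenario complements."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

low = combined_risk([lo for lo, _ in NEGATIVE_SCENARIOS.values()])
high = combined_risk([hi for _, hi in NEGATIVE_SCENARIOS.values()])
print(f"Combined risk under independence: {low:.0%}-{high:.0%}")
# Prints roughly 36%-63%, well above the 10-30% band cited above.
```

That the naive combination lands far above the cited 10-30% estimates is itself informative: the scenarios overlap and correlate, so their probabilities cannot simply be combined as independent events.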
## Risk Factor Framework

| Risk Factor | Mechanism | AI Amplification | Reversibility |
|---|---|---|---|
| Enforcement capability | Autonomous systems maintain control | 10-100x more comprehensive surveillance; no human defection risk | Very Low |
| Path dependence | Early choices constrain future options | Faster deployment cycles compress decision windows | Low |
| Network effects | Systems become more valuable as adoption grows | AI models compound advantages via data and compute | Low-Medium |
| Value embedding | Preferences encoded during development persist | Constitutional AI approaches embed values during training | Medium |
| Complexity barriers | System understanding requires specialized expertise | AI systems may become inscrutable even to developers | Very Low |
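One way to work with this table is to map the qualitative reversibility labels onto a numeric scale, for instance to rank where early intervention matters most. The sketch below does this; the numeric scores are hypothetical placeholders, not calibrated values from the table or its sources.

```python
# Hypothetical score mapping for the qualitative reversibility labels;
# the numbers are illustrative placeholders, not calibrated estimates.
REVERSIBILITY_SCORE = {
    "Very Low": 0.1,
    "Low": 0.3,
    "Low-Medium": 0.4,
    "Medium": 0.5,
}

# Risk factors and their reversibility ratings, copied from the table.
RISK_FACTORS = {
    "Enforcement capability": "Very Low",
    "Path dependence": "Low",
    "Network effects": "Low-Medium",
    "Value embedding": "Medium",
    "Complexity barriers": "Very Low",
}

def least_reversible(factors: dict[str, str], n: int = 3) -> list[str]:
    """Rank factors from least to most reversible -- a rough proxy for
    where intervention windows close first."""
    return sorted(factors, key=lambda f: REVERSIBILITY_SCORE[factors[f]])[:n]

print(least_reversible(RISK_FACTORS))
# ['Enforcement capability', 'Complexity barriers', 'Path dependence']
```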
## Timeline Indicators

The IMD AI Safety Clock↗ tracks lock-in urgency:
| Date | Clock Position | Key Developments |
|---|---|---|
| September 2024 | 29 minutes to midnight | Clock launched |
| December 2024 | 26 minutes | AGI timeline acceleration |
| February 2025 | 24 minutes | California SB 1047 vetoed |
| September 2025 | 20 minutes | Agentic AI proliferation |
The nine-minute advance in one year reflects compressed decision timelines.
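A back-of-envelope extrapolation of these readings is sketched below. It is illustrative only: the clock encodes expert judgment, not a linear process, so a constant-rate projection is not a forecast.

```python
from datetime import date

# Linear extrapolation of the clock readings in the table above.
READINGS = [
    (date(2024, 9, 1), 29),  # minutes to midnight at launch
    (date(2025, 9, 1), 20),
]

(d0, m0), (d1, m1) = READINGS
years_elapsed = (d1 - d0).days / 365.25
rate = (m0 - m1) / years_elapsed   # ~9 minutes per year
years_to_midnight = m1 / rate      # ~2.2 years at a constant rate

print(f"Advance rate: {rate:.1f} min/year; "
      f"'midnight' in ~{years_to_midnight:.1f} years at this pace")
```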
## Model Limitations

This framework cannot capture:
- Novel lock-in pathways not yet identified
- Interaction effects between scenarios
- Tail risks from capability discontinuities
- Political feasibility of interventions
For intervention analysis, see Lock-in Risk: Responses.
## Sources

- Stable Totalitarianism: An Overview↗ - EA Forum analysis
- IMD AI Safety Clock↗ - Real-time risk tracking
- Bostrom on permanent lock-in scenarios↗
- Finnveden, Riedel, and Shulman on AI-enabled dictatorship↗