Lock-in: Research Report
Executive Summary
| Finding | Key Data | Implication |
|---|---|---|
| Value lock-in risk identified | MacAskill warns AGI could “lock in values indefinitely” | Current moral blind spots may become permanent |
| Surveillance infrastructure spreading | China-sourced AI surveillance in 80+ countries; 34% global market share | Political system lock-in already occurring |
| Economic concentration extreme | Nvidia 92% GPU market share; $90B+ AI infrastructure investments | Monopoly power creates technological lock-in |
| Value embedding operational | Constitutional AI explicitly trains values into models | Embedded values may resist modification |
| Infrastructure path dependence | 24-month lead time for AI capacity; 15-20 year data center lifespans | Early infrastructure choices constrain future options |
| Irreversibility timeline | 5-20 years before lock-in becomes permanent | Intervention window narrowing rapidly |
Research Summary
Lock-in refers to the permanent entrenchment of values, systems, or power structures in ways that become extremely difficult or impossible to reverse. In the AI context, this represents a pathway to existential catastrophe where early decisions about development, deployment, or governance become irreversibly embedded in future systems. Unlike traditional technologies where course correction remains possible, advanced AI could create enforcement mechanisms so powerful that alternative paths become permanently inaccessible.
Three primary lock-in mechanisms are documented. Value lock-in occurs through training processes like Constitutional AI that explicitly embed ethical principles into models, raising questions about whose values get embedded and whether they can be changed. Political lock-in manifests through AI surveillance deployed in over 80 countries, with Chinese firms controlling 34% of the global surveillance market and lock-in effects making supplier changes prohibitively expensive. Economic lock-in emerges from extreme market concentration—Nvidia holds 92% of data center GPU market share, while training costs exceeding $100M create insurmountable barriers to entry.
Path dependence in AI infrastructure creates additional irreversibility. Organizations must plan AI capacity 24 months in advance or risk being locked out entirely. Data centers optimized for current training workloads face 15-20 year lifespans but may become obsolete if inference patterns shift. Energy infrastructure choices—fossil fuel versus renewable pathways—could lock in decades of climate impact. Researchers including MacAskill and Bostrom warn that technological advances, particularly AGI, could enable those in power to lock in their values indefinitely, making current decisions potentially the most consequential in human history. The 5-20 year timeline before irreversibility implies that each year of intervention is more valuable than the one that follows.
Background
Lock-in represents a unique category of existential risk where the permanence of outcomes, rather than their immediate severity, defines the threat. As Toby Ord articulates in The Precipice, “dystopian lock-in” could be as serious as extinction—both permanently curtail humanity’s potential, but lock-in preserves awareness of what was lost.
The concept gained prominence through William MacAskill’s What We Owe the Future (2022), which identifies AI as a key enabler of value lock-in. MacAskill argues we may be living through humanity’s “time of perils”—a period when technological capabilities enable permanent entrenchment of current values before humanity has time to evolve them. Historical examples like slavery demonstrate that widely accepted values can later prove deeply wrong, making premature lock-in catastrophic.
Recent research distinguishes “decisive” from “accumulative” existential risks. Lock-in primarily operates through accumulative mechanisms—gradual entrenchment over years to decades rather than sudden catastrophic events. This temporal dimension makes political mobilization difficult, as there’s no clear “stop moment” to rally around.
Key Findings
Value Lock-In Through AI Training
Modern AI training explicitly embeds values and objectives into systems in ways that may resist modification. Constitutional AI, developed by Anthropic, trains models to follow a constitution of principles curated by employees, drawing from sources including the UN Universal Declaration of Human Rights and Apple’s terms of service.
| Value Source | Implementation | Lock-in Concern |
|---|---|---|
| UN Declaration of Human Rights | Principles like “support freedom, equality and brotherhood” | Western liberal values may not represent global consensus |
| Corporate terms of service | Apple’s ToS influences model behavior | Commercial interests shape public AI systems |
| Anthropic employee judgment | Internal curation of principles | Small group determines values for millions of users |
| Training data distribution | Reflects English-language, Western internet | Cultural biases may become permanent |
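The training mechanism behind this value embedding can be sketched in code. The following is a minimal, heavily stubbed illustration of the critique-and-revision loop from the Constitutional AI supervised phase; `model` is a placeholder for an actual LLM call, and the principle texts and helper names are illustrative rather than Anthropic’s implementation.

```python
# Minimal sketch of the Constitutional AI critique-and-revision loop.
# Model calls are stubbed; in the real method they are LLM samples.

CONSTITUTION = [
    "Please choose the response that most supports freedom, equality and brotherhood.",
    "Please choose the response that is least likely to be harmful.",
]

def model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system samples from the model here."""
    if "Critique" in prompt:
        return "The response could be more careful."
    if "Revise" in prompt:
        return "Revised: " + prompt.split("Original response: ", 1)[1].split("\n")[0]
    return "Initial draft response."

def critique_and_revise(user_prompt: str) -> str:
    """One supervised-phase pass: draft an answer, then critique and revise
    it once per constitutional principle."""
    response = model(user_prompt)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique this answer against the principle: {principle}\n"
            f"Original response: {response}"
        )
        response = model(
            f"Revise the answer per the critique: {critique}\n"
            f"Original response: {response}"
        )
    return response

print(critique_and_revise("Explain lock-in risk."))
```

The revised transcripts become fine-tuning data; a later RLAIF phase reuses the same principles to generate preference labels in place of human feedback. The lock-in concern is visible in the structure: whatever text sits in `CONSTITUTION` is applied to every training example.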
The irreversibility concern stems from training economics. Advanced models cost over $100M to train, with costs doubling approximately every six months. Once values are embedded, modification requires expensive retraining that may not fully reverse earlier value embedding. As researchers note, “AI systems might resist subsequent attempts to change their goals”—a superintelligent system would likely succeed in out-maneuvering operators attempting to reprogram it.
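The compounding dynamic can be made concrete. Taking the report’s own figures as assumptions (roughly $100M per frontier run today, doubling every six months), a back-of-envelope projection shows how quickly retraining becomes prohibitive:

```python
# Back-of-envelope projection of frontier training costs, assuming the
# report's figures: ~$100M per run today, doubling roughly every 6 months.
def projected_cost(years: float, base_cost_musd: float = 100.0,
                   doubling_months: float = 6.0) -> float:
    """Cost in $M after `years`, under sustained exponential growth."""
    doublings = years * 12.0 / doubling_months
    return base_cost_musd * 2.0 ** doublings

for years in (1, 2, 3):
    print(f"After {years} yr: ${projected_cost(years):,.0f}M")
# After 1 yr: $400M; after 2 yr: $1,600M; after 3 yr: $6,400M
```

If the doubling rate held for even three years, a single corrective retraining run would cost billions, which is the economic core of the irreversibility argument. Whether the rate actually persists is an open empirical question.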
Political System Lock-In via Surveillance
AI surveillance infrastructure is creating conditions for permanent political entrenchment. According to the Carnegie Endowment for International Peace, PRC-sourced AI surveillance solutions have diffused to over 80 authoritarian and democratic countries worldwide. As of 2024, Hikvision and Dahua jointly control approximately 34% of the global surveillance camera market.
| Surveillance Metric | Data | Source | Lock-in Mechanism |
|---|---|---|---|
| Countries using Chinese AI surveillance | 80+ | Carnegie Endowment | Global infrastructure dependency |
| Global surveillance market share | 34% (Hikvision + Dahua) | Carnegie Endowment | Vendor lock-in through incompatibility |
| China’s AI-powered cameras | 200M+ | MERICS | Domestic enforcement capability |
| Social Credit restrictions | 23M+ flight bans, 5.5M+ train bans | Multiple sources | Behavioral control at scale |
The “lock-in effect” operates through technical incompatibility and switching costs. Systems from different companies are not interoperable, making supplier changes prohibitively expensive. Countries that have come to rely on Chinese surveillance technology become dependent on it, according to Carnegie researchers.
China’s domestic implementation provides a proof-of-concept. The Chinese Communist Party has deployed vast networks of AI-driven cameras capable of identifying individuals in real time, making it nearly impossible for activists to operate anonymously. Through the Digital Silk Road initiative, China has become an exporter of digital authoritarianism, with instances observable in Bangladesh, Colombia, Ethiopia, Guatemala, the Philippines, and Thailand.
Economic Lock-In Through Market Concentration
AI market concentration has reached levels that threaten competitive alternatives. Research published in Economic Policy concludes that without clear antitrust rules and regulatory actions, market concentration in generative AI could lead to systemic risks and stark inequality.
| Company/Sector | Market Share/Position | Lock-in Mechanism | Implication |
|---|---|---|---|
| Nvidia (GPUs) | 92% data center market share | Hardware bottleneck | Single point of failure for AI development |
| Cloud (Big 3) | 66-70% AWS+Azure+Google | Infrastructure integration, data gravity | Dependency on incumbent platforms |
| Surveillance | 34% Hikvision+Dahua | Hardware lock-in, data formats | Political implications of vendor dependency |
| Training Costs | $100M+ for frontier models | Insurmountable barriers to entry | Only largest firms can compete |
Vertical integration compounds concentration risks. The same companies that manufacture essential chips (Nvidia), provide cloud computing (Amazon, Google, Microsoft), and collect training data (Meta, Google) are also developing the most important AI models. Researchers warn that generative AI firms face strong incentives to integrate vertically with providers of AI building blocks, further consolidating control.
The “context flywheel”—a rich, structured user and project data layer—drives up switching costs, creating lock-in effects that trap accumulated data within platforms. According to a January 2025 FTC report, partnerships like Google-Anthropic and Microsoft-OpenAI risk “locking in the market dominance of large incumbent technology firms.”
Technological Path Dependence in Infrastructure
AI infrastructure decisions create decades-long path dependencies. Industry analysis from S&P Global indicates that with vacancy rates in primary data center markets at record lows, organizations must plan IT infrastructure needs at least 24 months in advance or risk being locked out of capacity required to scale AI.
| Infrastructure Dimension | Lock-in Dynamic | Timeline | Reversibility |
|---|---|---|---|
| Data center capacity | Must reserve 24 months ahead | 2-year planning horizon | Low—capacity constraints limit alternatives |
| Training vs. inference optimization | Infrastructure optimized for training may become obsolete | 15-20 year facility lifespan | Very low—$90B+ investments |
| Energy pathway | Fossil fuel vs. renewable choices | Decades of operation | Low—infrastructure replacement costs |
| Supply chain | TSMC 87% of 5nm+ smartphone SoCs | Multi-year fabrication lead times | Very low—geopolitical concentration risk |
The scale of investment amplifies lock-in. In late 2024, Alphabet announced $40 billion for AI infrastructure while Anthropic committed $50 billion for new data centers. Grid Strategies found that nationally, the utility industry is planning for about 50% more data center demand than the tech industry is projecting—a potential overbuilding that locks in energy infrastructure choices for decades.
Taiwan Semiconductor Manufacturing (TSMC) stands at the epicenter. As the world’s leading foundry, TSMC will likely lead smartphone SoC shipments with 87% share in 5nm and below nodes, expected to grow to 89% by 2028. Constrained supply of leading-edge fabrication has enabled a small group of suppliers to capture majority market share, creating geopolitical risk and single points of failure.
Irreversibility of AI Deployment
Once highly capable AI systems are deployed, correcting misalignment becomes extremely difficult. A recent arXiv preprint argues that the recursive failure to assess certain alignment models “is not just a sociological oversight but a structural attractor, mirroring the very risks of misalignment we aim to avoid in AGI.” Without adopting models of epistemic correction, “we may be on a predictable path toward irreversible misalignment.”
| Irreversibility Mechanism | Evidence | Timeline | Prevention Window |
|---|---|---|---|
| Value embedding resistance | Models may resist goal changes | Post-deployment | Pre-training only |
| Economic sunk costs | $100M+ training investments | Years to decades | During development |
| Infrastructure dependencies | Critical systems require AI | 5-15 years | Current period |
| Political entrenchment | Surveillance enables permanent control | 10-30 years | Next decade |
MacAskill’s Framework: Value Lock-In as Existential Risk
William MacAskill’s What We Owe the Future provides the most comprehensive treatment of lock-in as existential risk. MacAskill warns of potential value lock-in—“an event that causes a single value system to persist for an extremely long time”—which may result from technological advances, particularly AGI development.
Three mechanisms enable AI-driven lock-in:
- AGI agents with aligned goals: People may create AGI agents with goals closely aligned with their own that act on their behalf indefinitely
- Hard-coded objectives: Someone could carefully specify what future they want and ensure the AGI aims to achieve it
- Mind uploading: People could “upload” themselves, potentially achieving indefinite lifespan while maintaining current values
Robin Hanson’s analysis emphasizes that MacAskill sees “advanced artificial intelligence” as enabling “those in power to lock in their values indefinitely.” The concern is not merely that bad actors might lock in harmful values, but that even well-intentioned actors might lock in current moral understanding before humanity has time to evolve it.
Causal Factors
The following factors influence lock-in probability and severity. The tables are structured to support future cause-effect diagramming.
Primary Factors (Strong Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| AI Training Costs | ↑ Lock-in | cause | $100M+ frontier models; doubling every 6 months | High |
| Market Concentration | ↑ Lock-in | cause | Nvidia 92% GPU share; Big 3 Cloud 66-70% | High |
| Surveillance Infrastructure | ↑ Political Lock-in | intermediate | 80+ countries using Chinese AI; 34% market share | High |
| Value Embedding Methods | ↑ Value Lock-in | intermediate | Constitutional AI explicitly trains values | High |
| Infrastructure Lead Times | ↑ Path Dependence | cause | 24-month planning horizon; 15-20 year lifespans | High |
| Supply Chain Concentration | ↑ Technological Lock-in | cause | TSMC 87% advanced fabrication | High |
Secondary Factors (Medium Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| Switching Costs | ↑ Lock-in | intermediate | Data gravity; context flywheel effects | Medium |
| Regulatory Fragmentation | ↑ Lock-in | leaf | EU vs China vs US different approaches | Medium |
| Democratic Input Mechanisms | ↓ Lock-in | leaf | Collective Constitutional AI experiments | Medium |
| Antitrust Enforcement | ↓ Economic Lock-in | leaf | DOJ actions; effectiveness uncertain | Medium |
| Open Source Alternatives | ↓ Lock-in | intermediate | Limited viability at frontier scale | Medium |
| Energy Infrastructure | ↑ Path Dependence | cause | Fossil fuel choices lock in decades of emissions | Medium |
Minor Factors (Weak Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| Public Awareness | ↓ Lock-in | leaf | Growing concern but limited actionability | Low |
| International Coordination | ↓ Lock-in | leaf | UN AI Advisory Body; limited enforcement | Low |
| Transparency Requirements | ↓ Lock-in | intermediate | EU AI Act provisions; implementation TBD | Low |
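Since the tables above are meant to feed a cause-effect diagram, they map naturally onto a small data structure. The sketch below is one possible encoding, using a handful of illustrative entries from the Primary and Secondary tables; the `Factor` type and field names are assumptions, not part of any existing tooling.

```python
# Minimal encoding of the causal-factor tables as signed edges for a
# cause-effect diagram. Entries are illustrative samples from the tables.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    target: str        # which lock-in variable it acts on
    direction: int     # +1 increases lock-in risk, -1 decreases it
    kind: str          # "cause", "intermediate", or "leaf"
    confidence: str    # "high", "medium", or "low"

FACTORS = [
    Factor("AI Training Costs", "Lock-in", +1, "cause", "high"),
    Factor("Market Concentration", "Lock-in", +1, "cause", "high"),
    Factor("Surveillance Infrastructure", "Political Lock-in", +1, "intermediate", "high"),
    Factor("Democratic Input Mechanisms", "Lock-in", -1, "leaf", "medium"),
    Factor("Antitrust Enforcement", "Economic Lock-in", -1, "leaf", "medium"),
]

def edges(factors):
    """Return (source, target, sign) triples for a diagramming tool."""
    return [(f.name, f.target, "+" if f.direction > 0 else "-") for f in factors]

for src, dst, sign in edges(FACTORS):
    print(f"{src} --{sign}--> {dst}")
```

The signed-edge form is deliberately generic: the triples can be emitted as Graphviz DOT, Mermaid, or any other diagram syntax without changing the encoding.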
Lock-In Variants and Manifestations
| Variant | Mechanism | Timeline | Warning Signs | Current Status |
|---|---|---|---|---|
| Value Lock-in | Training embeds specific moral/political values | 10-30 years | Constitutional AI deployment; CCP value mandates | Early stage |
| Political Lock-in | AI surveillance enables permanent authoritarian control | 5-20 years | 80+ countries using Chinese surveillance | Actively occurring |
| Economic Lock-in | Market concentration creates irreversible monopolies | 5-15 years | Nvidia 92% share; $100M+ training costs | Rapidly advancing |
| Technological Lock-in | Infrastructure choices constrain future options | 15-30 years | $90B+ data center investments; 24-month lead times | Current window |
| Epistemic Lock-in | AI-mediated information shapes beliefs permanently | 10-25 years | Recommendation algorithm dominance | Early stage |
Value Lock-In
Permanent embedding of specific moral, political, or cultural values in AI systems that shape human society. This occurs through:
- Training Data Lock-in: Models trained on Western internet data embed cultural perspectives
- Objective Function Lock-in: Systems optimizing for specific metrics reshape society around those metrics
- Constitutional Lock-in: Explicit value systems embedded during training become permanent features
Researchers note that value lock-in involves “freezing current moral and political perspectives before humanity has time to evolve them.” These risks emerge from broader systemic dynamics rather than requiring any single AI system to behave badly.
Political System Lock-In
AI-enabled permanent entrenchment of particular governments or political systems. Analysis indicates that AI law enforcement tends to undermine democratic government, promote authoritarian drift, and entrench existing authoritarian regimes by reducing structural checks on executive authority.
Journal of Democracy research documents how, through mass surveillance, facial recognition, predictive policing, online harassment, and electoral manipulation, AI has become a potent tool for authoritarian control. Modern technologies promising automated “social management” represent the culmination of a decades-old authoritarian vision: leveraging data to “engineer away dissent.”
Economic Structure Lock-In
AI-enabled economic arrangements that become self-perpetuating and impossible to change through normal market mechanisms. Yale Law & Policy Review analysis warns that concentration in AI markets amplifies the digital divide, with central urban areas and larger economic actors benefiting disproportionately while peripheral regions and SMEs are excluded.
The concern extends beyond consumer harm to democratic governance. Society stands at a critical crossroads: one path leads to AI controlled by a small number of powerful corporations, threatening innovation, privacy, liberty and democracy; the other path leads to a diverse and competitive market where AI serves the broader public interest.
Technological Lock-In
Specific AI architectures or approaches becoming so embedded in global infrastructure that alternatives become impossible. This occurs through:
- Infrastructure Dependencies: AI systems integrated into power grids, financial systems, transportation
- Network Effects: Data advantages and switching costs make dominant platforms unassailable
- Capability Lock-in: Particular architectures achieve advantages making alternatives uncompetitive
Infrastructure analysis indicates the market will generate over $250 billion in 2025, with durable pricing power and high barriers to entry across the infrastructure stack due to constrained supply.
Timeline and Intervention Windows
| Period | Lock-in Risk | Key Developments | Intervention Options |
|---|---|---|---|
| 2024-2026 | Low-Medium | Market concentration accelerating; training costs exploding | Antitrust enforcement; regulatory frameworks; open alternatives |
| 2027-2030 | Medium | AGI timelines approaching; infrastructure choices locked in | Governance mechanisms; democratic input; transparency requirements |
| 2031-2035 | Medium-High | Advanced AI deployment; surveillance global; economic concentration extreme | Value learning systems; international coordination; shutdown capabilities |
| 2036-2045 | High | Potential irreversibility; embedded systems; path dependence complete | Very limited; may require radical restructuring |
| 2046+ | Very High | Lock-in permanent unless prevented earlier | None—prevention only viable strategy |
Expert predictions for 2026 indicate that AI has ceased to be an “emerging” policy issue. Real world harms are accumulating rapidly, putting pressure on lawmakers. The stage is set for important political and legal battles that will define who controls AI, who bears the costs of its harms, and whether democratic governments can keep pace.
Open Questions
| Question | Why It Matters | Current State |
|---|---|---|
| Can embedded values be modified post-training? | Determines whether value lock-in is reversible | Unknown; $100M+ retraining costs suggest limited reversibility |
| What level of market concentration enables lock-in? | Defines intervention thresholds for antitrust | Nvidia at 92% GPU share; unclear if already past threshold |
| Do democratic AI governance mechanisms work? | Determines legitimacy and adaptability of systems | Collective Constitutional AI experiments ongoing; effectiveness TBD |
| How long until surveillance infrastructure becomes irreversible? | Defines intervention window for political lock-in | 80+ countries already using Chinese systems; switching costs high |
| Can open source alternatives prevent concentration? | Viability of competitive pluralism | Limited success at frontier scale due to compute costs |
| What triggers irreversibility in value lock-in? | Need to identify point of no return | Theoretical models exist; empirical validation lacking |
| How do different lock-in types interact? | Economic + political + value lock-in may compound | Likely reinforcing but mechanisms unclear |
| Is infrastructure overbuilding creating flexibility? | Excess capacity might enable alternatives | Unclear; may lock in wrong architecture if training→inference shift occurs |
Prevention and Mitigation Strategies
While lock-in by definition prevents recovery once achieved, several strategies may reduce probability or delay onset:
Technological Diversity
| Strategy | Mechanism | Status | Effectiveness |
|---|---|---|---|
| Open source models | Prevent monopoly control | Limited at frontier | Medium—cost barriers remain |
| Research pluralism | Multiple AI approaches | Active globally | Medium—convergence pressure high |
| Interoperability requirements | Reduce switching costs | EU considering | Unknown |
| Public cloud infrastructure | Break vendor lock-in | Proposed not implemented | Unknown |
Democratic Governance
| Strategy | Mechanism | Status | Effectiveness |
|---|---|---|---|
| Collective Constitutional AI | Democratic value input | Experimental | Low—scale unclear |
| Citizens’ assemblies | Broad stakeholder input | Limited deployment | Low—influence on development unclear |
| Transparency requirements | Enable oversight | EU AI Act provisions | Medium—enforcement TBD |
| International coordination | Prevent race to bottom | UN AI Advisory Body | Low—limited enforcement |
Antitrust and Competition Policy
| Strategy | Mechanism | Status | Effectiveness |
|---|---|---|---|
| Vertical integration restrictions | Prevent concentration | DOJ investigating | Unknown—legal challenges likely |
| Data portability mandates | Reduce lock-in | Proposed not implemented | Medium—if implemented |
| Structural separation | Break up incumbents | Proposed for Google | Unknown—political feasibility low |
| Non-discrimination obligations | Open access to essentials | Under consideration | Medium—enforcement challenges |
Technical Safeguards
| Strategy | Mechanism | Status | Effectiveness |
|---|---|---|---|
| Value learning systems | Adapt as values evolve | Active research | Unknown—capabilities unclear |
| Constitutional flexibility | Mechanisms to update values | Theoretical | Unknown—may conflict with stability |
| Interpretability requirements | Enable human oversight | Active research | Low—advanced systems resist interpretation |
| Shutdown capabilities | Preserve human control | Limited effectiveness | Low—sophisticated systems may resist |
Sources
Academic Research
- Hendrycks, D. et al. “X-Risk Analysis for AI Research” - Framework categorizing existential risks including value lock-in, enfeeblement, eroded epistemics
- “The AI Risk Spectrum: From Dangerous Capabilities to Existential Threats” - Value lock-in as freezing moral perspectives before humanity evolves them
- “Introduction to AI Existential Risks” (arXiv, Nov 2025) - Value lock-in mechanisms and survey data on extinction risk
- “Epistemic Closure and Irreversibility of Misalignment” - 2025 paper on structural path to irreversible misalignment
- MacAskill, W. “What We Owe the Future” - Comprehensive treatment of value lock-in as existential risk
- Hanson, R. “MacAskill on Value Lock-In” - Analysis of MacAskill’s lock-in framework
AI Governance and Policy
- TechPolicy.Press “Expert Predictions on AI Policy in 2026” - 2026 outlook on political battles over AI control
- Cloud Security Alliance & Google Cloud “Governance Maturity Study” - Link between governance and AI adoption security
Surveillance and Political Lock-In
- Carnegie Endowment “Can Democracy Survive AI?” - PRC surveillance diffusion to 80+ countries; 34% market share; lock-in effects
- National Endowment for Democracy “Data-Centric Authoritarianism” - China’s role in spreading digital authoritarianism globally
- Journal of Democracy “How Autocrats Weaponize AI” - AI as tool for authoritarian control through surveillance and manipulation
- Journal of Democracy “The Road to Digital Unfreedom” - How AI is reshaping repression
- Yale Review “The Rise of Digital Authoritarianism” - Impacts on democracy and human rights
- Democratic Erosion “AI and Authoritarian Governments” - AI undermining democratic government
Economic Concentration and Monopoly
- Economic Policy “AI Monopolies” - Market concentration could lead to systemic risks and inequality
- TechPolicy.Press “AI Monopolies Are Coming” - FTC report warning on incumbent firm dominance
- Yale Law & Policy Review “Antimonopoly Approach to AI” - Vertical integration and concentration concerns
- Open Markets Institute “Stopping Big Tech from Becoming Big AI” - Report on preventing monopoly control
- AI Frontiers “Open Protocols Can Prevent AI Monopolies” - Model Context Protocol as anti-lock-in tool
- TRENDS Research “Big Tech’s Monopoly of AI” - Threats to fair competition
- WebProNews “AI Giants Form ‘The Blob’” - Consolidation sparking monopoly concerns; Nvidia 92% market share
- The Asset “AI Revolution Cementing Big Tech Monopoly” - How AI revolution strengthens monopolies
Infrastructure and Path Dependence
- S&P Global “AI Infrastructure: Midyear 2025 Update” - 24-month planning horizon; vacancy rate concerns
- Development Corporate “The AI Infrastructure Bubble” - $90B boom could end in bust; training vs inference mismatch
- Cirion Technologies “AI Infrastructure Consolidation 2025” - Whether infrastructure consolidates in 2025
- Data Center Knowledge “AI Infrastructure Revolution” - Predictions for 2026
- IEEFA “Risk of AI-Driven, Overbuilt Infrastructure” - Utility planning exceeds tech industry projections by 50%
- Flexential “State of AI Infrastructure Report 2025” - Market to generate $250B+ in 2025
- INET “The U.S. Is Betting the Economy on ‘Scaling’ AI” - Critique of scaling assumptions
- ScienceDirect “Energy and Climate Alignment of AI Infrastructure” - Energy pathway lock-in concerns
- Sands Capital “Unleashing AI’s Next Wave” - TSMC 87% fabrication market share
Constitutional AI and Value Embedding
- Anthropic “Constitutional AI: Harmlessness from AI Feedback” - Original Constitutional AI methodology
- Anthropic “Claude’s Constitution” - UN Declaration, Apple ToS, employee judgment sources
- Anthropic “Collective Constitutional AI” - Public input approach
- TechCrunch “Anthropic thinks ‘constitutional AI’ is the best way” - Value embedding through training
- Constitutional.ai “Tracking Anthropic’s AI Revolution” - Constitutional AI overview
Existential Risk Frameworks
- Wikipedia “Existential risk from artificial intelligence” - Overview of AI x-risk including lock-in
- Wikipedia “AI alignment” - Irreversibility of advanced AI deployment
- Medium “AI Alignment: The Hidden Challenge” - Humanity’s future depends on alignment success
- EA Forum “The AI Dilemma: Growth vs Existential Risk” - Extension for effective altruists
- EA Forum “My take on What We Owe the Future” - Value lock-in argument analysis
- LSE “Value Alignment Without Institutional Change” - Institutional change necessary for risk mitigation
- Synthese “Current cases of AI misalignment” - Implications for future risks
AI Transition Model Context
Connections to Model Elements
| Model Element | Relationship |
|---|---|
| Civilizational Competence → Governance | Insufficient governance enables lock-in; concentration enables governance capture |
| Civilizational Competence → Epistemics | AI-mediated information shapes beliefs; epistemic degradation prevents recognition |
| Civilizational Competence → Adaptability | Lock-in by definition prevents adaptation; flexibility mechanisms critical |
| AI Ownership → Concentration | Market concentration and surveillance infrastructure are lock-in mechanisms |
| Misuse Potential → AI Control Concentration | Power concentration creates conditions for political lock-in |
| Long-term Trajectory | Lock-in determines whether outcomes are temporary or permanent |
Lock-in is the defining feature of the Long-term Lock-in scenario—whether values, power, or epistemics become permanently entrenched. This affects Long-term Trajectory more than acute existential risk, as it transforms recoverable problems into permanent ones.