
Lock-in: Research Report

| Finding | Key Data | Implication |
| --- | --- | --- |
| Value lock-in risk identified | MacAskill warns AGI could “lock in values indefinitely” | Current moral blind spots may become permanent |
| Surveillance infrastructure spreading | China-sourced AI surveillance in 80+ countries; 34% global market share | Political system lock-in already occurring |
| Economic concentration extreme | Nvidia 92% GPU market share; $90B+ AI infrastructure investments | Monopoly power creates technological lock-in |
| Value embedding operational | Constitutional AI explicitly trains values into models | Embedded values may resist modification |
| Infrastructure path dependence | 24-month lead time for AI capacity; 15-20 year data center lifespans | Early infrastructure choices constrain future options |
| Irreversibility timeline | 5-20 years before lock-in becomes permanent | Intervention window narrowing rapidly |

Lock-in refers to the permanent entrenchment of values, systems, or power structures in ways that become extremely difficult or impossible to reverse. In the AI context, this represents a pathway to existential catastrophe where early decisions about development, deployment, or governance become irreversibly embedded in future systems. Unlike traditional technologies where course correction remains possible, advanced AI could create enforcement mechanisms so powerful that alternative paths become permanently inaccessible.

Three primary lock-in mechanisms are documented. Value lock-in occurs through training processes like Constitutional AI that explicitly embed ethical principles into models, raising questions about whose values get embedded and whether they can be changed. Political lock-in manifests through AI surveillance deployed in over 80 countries, with Chinese firms controlling 34% of the global surveillance market and lock-in effects making supplier changes prohibitively expensive. Economic lock-in emerges from extreme market concentration—Nvidia holds 92% of data center GPU market share, while training costs exceeding $100M create insurmountable barriers to entry.

Path dependence in AI infrastructure creates additional irreversibility. Organizations must plan AI capacity 24 months in advance or risk being locked out entirely. Data centers optimized for current training workloads face 15-20 year lifespans but may become obsolete if inference patterns shift. Energy infrastructure choices—fossil fuel versus renewable pathways—could lock in decades of climate impact. Researchers including MacAskill and Bostrom warn that technological advances, particularly AGI, could enable those in power to lock in their values indefinitely, making current decisions potentially the most consequential in human history. The 5-20 year timeline before irreversibility suggests each intervention year is more valuable than the next.


Lock-in represents a unique category of existential risk where the permanence of outcomes, rather than their immediate severity, defines the threat. As Toby Ord articulates in The Precipice, “dystopian lock-in” could be as serious as extinction—both permanently curtail humanity’s potential, but lock-in preserves awareness of what was lost.

The concept gained prominence through William MacAskill’s What We Owe the Future (2022), which identifies AI as a key enabler of value lock-in. MacAskill argues we may be living through humanity’s “time of perils”—a period when technological capabilities enable permanent entrenchment of current values before humanity has time to evolve them. Historical examples like slavery demonstrate that widely accepted values can later prove deeply wrong, making premature lock-in catastrophic.

Recent research distinguishes “decisive” from “accumulative” existential risks. Lock-in primarily operates through accumulative mechanisms—gradual entrenchment over years to decades rather than sudden catastrophic events. This temporal dimension makes political mobilization difficult, as there’s no clear “stop moment” to rally around.


Modern AI training explicitly embeds values and objectives into systems in ways that may resist modification. Constitutional AI, developed by Anthropic, trains models to follow a constitution of principles curated by employees, drawing from sources including the UN Universal Declaration of Human Rights and Apple’s terms of service.
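To make the mechanism concrete, the sketch below mirrors the critique-and-revision loop described in the published Constitutional AI recipe. It is illustrative only: `generate` stands in for a real language-model call, the principles are paraphrased from the sources above, and nothing here is Anthropic's actual code or API.

```python
# Minimal sketch of the Constitutional AI critique-and-revision loop
# (the supervised phase). Illustrative only: `generate` is a stand-in
# for a real language-model call, not Anthropic's implementation.

CONSTITUTION = [
    "Choose the response that most supports freedom, equality, and brotherhood.",
    "Choose the response that is least likely to be harmful.",
]

def generate(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned reply here."""
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(user_prompt: str, rounds: int = 1) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = generate(
                f"Principle: {principle}\nResponse: {draft}\n"
                "Identify ways the response violates the principle."
            )
            draft = generate(
                f"Response: {draft}\nCritique: {critique}\n"
                "Rewrite the response to address the critique."
            )
    return draft

print(critique_and_revise("Draft a reply to a user request."))
```

The revised outputs are then used as fine-tuning data (and to train a preference model), so the constitution's values end up baked into the model's weights; that weight-level embedding is the step the lock-in concern targets.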

| Value Source | Implementation | Lock-in Concern |
| --- | --- | --- |
| UN Declaration of Human Rights | Principles like “support freedom, equality and brotherhood” | Western liberal values may not represent global consensus |
| Corporate terms of service | Apple’s ToS influences model behavior | Commercial interests shape public AI systems |
| Anthropic employee judgment | Internal curation of principles | Small group determines values for millions of users |
| Training data distribution | Reflects English-language, Western internet | Cultural biases may become permanent |

The irreversibility concern stems from training economics. Advanced models cost over $100M to train, with costs doubling approximately every six months. Once values are embedded, modification requires expensive retraining that may not fully reverse earlier value embedding. As researchers note, “AI systems might resist subsequent attempts to change their goals”—a superintelligent system would likely succeed in out-maneuvering operators attempting to reprogram it.
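As a back-of-envelope illustration (a sketch assuming both the $100M baseline and the six-month doubling persist, which is itself uncertain), the compounding is rapid:

```python
# Back-of-envelope projection of frontier training costs, assuming a
# $100M baseline and a six-month doubling time (both figures from the
# text; whether the trend continues is an open question).

BASELINE_COST_USD = 100e6
DOUBLING_TIME_YEARS = 0.5

def projected_cost(years_from_now: float) -> float:
    """Cost after t years under exponential growth: C(t) = C0 * 2**(t / T)."""
    return BASELINE_COST_USD * 2 ** (years_from_now / DOUBLING_TIME_YEARS)

for years in (1, 2, 3, 5):
    print(f"{years} yr: ${projected_cost(years):,.0f}")
# 1 yr: $400,000,000
# 2 yr: $1,600,000,000
# 3 yr: $6,400,000,000
# 5 yr: $102,400,000,000
```

Under these assumptions, full retraining to displace embedded values becomes an order of magnitude more expensive roughly every 20 months, which is the economic core of the irreversibility argument.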

AI surveillance infrastructure is creating conditions for permanent political entrenchment. According to the Carnegie Endowment for International Peace, PRC-sourced AI surveillance solutions have diffused to over 80 authoritarian and democratic countries worldwide. As of 2024, Hikvision and Dahua jointly control approximately 34% of the global surveillance camera market.

| Surveillance Metric | Data | Source | Lock-in Mechanism |
| --- | --- | --- | --- |
| Countries using Chinese AI surveillance | 80+ | Carnegie Endowment | Global infrastructure dependency |
| Global surveillance market share | 34% (Hikvision + Dahua) | Carnegie Endowment | Vendor lock-in through incompatibility |
| China’s AI-powered cameras | 200M+ | MERICS | Domestic enforcement capability |
| Social Credit restrictions | 23M+ flight bans, 5.5M+ train bans | Multiple sources | Behavioral control at scale |

The “lock-in effect” operates through technical incompatibility and switching costs. Systems from different companies are not interoperable, making supplier changes prohibitively expensive. Countries that have come to rely on Chinese surveillance technology become dependent on it, according to Carnegie researchers.
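A stylized payback calculation makes the mechanism concrete. Every figure below is hypothetical, chosen only to illustrate how replacement hardware, proprietary data formats, and retraining stack up against any savings from switching vendors:

```python
# Stylized switching-cost calculation for a deployed surveillance
# network. All figures are hypothetical, purely to illustrate why
# non-interoperable systems make vendor changes prohibitively expensive.

hardware_replacement = 400e6   # cameras + servers, non-interoperable
data_migration       = 150e6   # proprietary formats, re-indexing archives
staff_retraining     = 50e6    # operators, integrators, maintenance
switching_cost = hardware_replacement + data_migration + staff_retraining

annual_savings = 60e6          # assumed savings from a cheaper vendor

payback_years = switching_cost / annual_savings
print(f"Switching cost: ${switching_cost/1e6:.0f}M, "
      f"payback: {payback_years:.1f} years")  # -> $600M, 10.0 years
```

Even with generous assumed savings, the payback horizon exceeds typical political and budget cycles, which is why dependency tends to persist once the initial deployment is made.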

China’s domestic implementation provides a proof-of-concept. The Chinese Communist Party has deployed vast networks of AI-driven cameras capable of identifying individuals in real time, making it nearly impossible for activists to operate anonymously. Through the Digital Silk Road initiative, China has become an exporter of digital authoritarianism, with instances observable in Bangladesh, Colombia, Ethiopia, Guatemala, the Philippines, and Thailand.

Economic Lock-In Through Market Concentration


AI market concentration has reached levels that threaten competitive alternatives. Research published in Economic Policy concludes that without clear antitrust rules and regulatory actions, market concentration in generative AI could lead to systemic risks and stark inequality.

| Company/Sector | Market Share/Position | Lock-in Mechanism | Implication |
| --- | --- | --- | --- |
| Nvidia (GPUs) | 92% data center market share | Hardware bottleneck | Single point of failure for AI development |
| Cloud (Big 3) | 66-70% (AWS + Azure + Google) | Infrastructure integration, data gravity | Dependency on incumbent platforms |
| Surveillance | 34% (Hikvision + Dahua) | Hardware lock-in, data formats | Political implications of vendor dependency |
| Training costs | $100M+ for frontier models | Insurmountable barriers to entry | Only largest firms can compete |

Vertical integration compounds concentration risks. The same companies that manufacture essential chips (Nvidia), provide cloud computing (Amazon, Google, Microsoft), and collect training data (Meta, Google) are also developing the most important AI models. Researchers warn that generative AI firms face strong incentives to integrate vertically with providers of AI building blocks, further consolidating control.

The “context flywheel”—a rich, structured user and project data layer—drives up switching costs, creating lock-in effects that trap accumulated data within platforms. According to a January 2025 FTC report, partnerships like Google-Anthropic and Microsoft-OpenAI risk “locking in the market dominance of large incumbent technology firms.”

Technological Path Dependence in Infrastructure


AI infrastructure decisions create decades-long path dependencies. Industry analysis from S&P Global indicates that with vacancy rates in primary data center markets at record lows, organizations must plan IT infrastructure needs at least 24 months in advance or risk being locked out of capacity required to scale AI.

| Infrastructure Dimension | Lock-in Dynamic | Timeline | Reversibility |
| --- | --- | --- | --- |
| Data center capacity | Must reserve 24 months ahead | 2-year planning horizon | Low: capacity constraints limit alternatives |
| Training vs. inference optimization | Infrastructure optimized for training may become obsolete | 15-20 year facility lifespan | Very low: $90B+ investments |
| Energy pathway | Fossil fuel vs. renewable choices | Decades of operation | Low: infrastructure replacement costs |
| Supply chain | TSMC 87% of 5nm+ smartphone SoCs | Multi-year fabrication lead times | Very low: geopolitical concentration risk |

The scale of investment amplifies lock-in. In late 2024, Alphabet announced $40 billion for AI infrastructure while Anthropic committed $50 billion for new data centers. Grid Strategies found that nationally, the utility industry is planning for about 50% more data center demand than the tech industry is projecting—a potential overbuilding that locks in energy infrastructure choices for decades.

Taiwan Semiconductor Manufacturing (TSMC) stands at the epicenter. As the world’s leading foundry, TSMC will likely lead smartphone SoC shipments with 87% share in 5nm and below nodes, expected to grow to 89% by 2028. Constrained supply of leading-edge fabrication has enabled a small group of suppliers to capture majority market share, creating geopolitical risk and single points of failure.

Once highly capable AI systems are deployed, correcting misalignment becomes extremely difficult. A recent arXiv preprint argues that the recursive failure to assess certain alignment models “is not just a sociological oversight but a structural attractor, mirroring the very risks of misalignment we aim to avoid in AGI.” Without adopting models of epistemic correction, “we may be on a predictable path toward irreversible misalignment.”

| Irreversibility Mechanism | Evidence | Timeline | Prevention Window |
| --- | --- | --- | --- |
| Value embedding resistance | Models may resist goal changes | Post-deployment | Pre-training only |
| Economic sunk costs | $100M+ training investments | Years to decades | During development |
| Infrastructure dependencies | Critical systems require AI | 5-15 years | Current period |
| Political entrenchment | Surveillance enables permanent control | 10-30 years | Next decade |

MacAskill’s Framework: Value Lock-In as Existential Risk


William MacAskill’s What We Owe the Future provides the most comprehensive treatment of lock-in as existential risk. MacAskill warns of potential value lock-in—“an event that causes a single value system to persist for an extremely long time”—which may result from technological advances, particularly AGI development.

Three mechanisms enable AI-driven lock-in:

  1. AGI agents with aligned goals: People may create AGI agents with goals closely aligned with their own that act on their behalf indefinitely
  2. Hard-coded objectives: Someone could carefully specify what future they want and ensure the AGI aims to achieve it
  3. Mind uploading: People could “upload” themselves, potentially achieving indefinite lifespan while maintaining current values

Robin Hanson’s analysis emphasizes that MacAskill sees “advanced artificial intelligence” as enabling “those in power to lock in their values indefinitely.” The concern is not merely that bad actors might lock in harmful values, but that even well-intentioned actors might lock in current moral understanding before humanity has time to evolve it.


The following factors influence lock-in probability and severity. The table is designed to inform future cause-effect diagram creation; a sketch of one possible machine-readable encoding follows it.

| Factor | Direction | Type | Evidence | Confidence |
| --- | --- | --- | --- | --- |
| AI Training Costs | ↑ Lock-in | cause | $100M+ frontier models; doubling every 6 months | High |
| Market Concentration | ↑ Lock-in | cause | Nvidia 92% GPU share; Big 3 Cloud 66-70% | High |
| Surveillance Infrastructure | ↑ Political Lock-in | intermediate | 80+ countries using Chinese AI; 34% market share | High |
| Value Embedding Methods | ↑ Value Lock-in | intermediate | Constitutional AI explicitly trains values | High |
| Infrastructure Lead Times | ↑ Path Dependence | cause | 24-month planning horizon; 15-20 year lifespans | High |
| Supply Chain Concentration | ↑ Technological Lock-in | cause | TSMC 87% advanced fabrication | High |
| Switching Costs | ↑ Lock-in | intermediate | Data gravity; context flywheel effects | Medium |
| Regulatory Fragmentation | ↑ Lock-in | leaf | EU vs China vs US different approaches | Medium |
| Democratic Input Mechanisms | ↓ Lock-in | leaf | Collective Constitutional AI experiments | Medium |
| Antitrust Enforcement | ↓ Economic Lock-in | leaf | DOJ actions; effectiveness uncertain | Medium |
| Open Source Alternatives | ↓ Lock-in | intermediate | Limited viability at frontier scale | Medium |
| Energy Infrastructure | ↑ Path Dependence | cause | Fossil fuel choices lock in decades of emissions | Medium |
| Public Awareness | ↓ Lock-in | leaf | Growing concern but limited actionability | Low |
| International Coordination | ↓ Lock-in | leaf | UN AI Advisory Body; limited enforcement | Low |
| Transparency Requirements | ↓ Lock-in | intermediate | EU AI Act provisions; implementation TBD | Low |
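As a sketch of one possible machine-readable encoding of the table above (the schema and field names are ours; only the values come from the rows):

```python
# Hypothetical encoding of the factor table above as a small causal
# graph for later diagram generation. The schema (field names, keys)
# is ours; the values are taken directly from the table rows.

FACTORS = {
    "AI Training Costs": {
        "direction": "increases", "target": "Lock-in",
        "type": "cause", "confidence": "high",
    },
    "Surveillance Infrastructure": {
        "direction": "increases", "target": "Political Lock-in",
        "type": "intermediate", "confidence": "high",
    },
    "Democratic Input Mechanisms": {
        "direction": "decreases", "target": "Lock-in",
        "type": "leaf", "confidence": "medium",
    },
    "Public Awareness": {
        "direction": "decreases", "target": "Lock-in",
        "type": "leaf", "confidence": "low",
    },
    # ...remaining rows follow the same schema.
}

def edges(min_confidence: str = "medium"):
    """Yield (factor, signed effect, target) edges at or above a threshold."""
    rank = {"low": 0, "medium": 1, "high": 2}
    for factor, attrs in FACTORS.items():
        if rank[attrs["confidence"]] >= rank[min_confidence]:
            sign = "+" if attrs["direction"] == "increases" else "-"
            yield factor, sign, attrs["target"]

for factor, sign, target in edges():
    print(f"{factor} --({sign})--> {target}")
```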

| Variant | Mechanism | Timeline | Warning Signs | Current Status |
| --- | --- | --- | --- | --- |
| Value Lock-in | Training embeds specific moral/political values | 10-30 years | Constitutional AI deployment; CCP value mandates | Early stage |
| Political Lock-in | AI surveillance enables permanent authoritarian control | 5-20 years | 80+ countries using Chinese surveillance | Actively occurring |
| Economic Lock-in | Market concentration creates irreversible monopolies | 5-15 years | Nvidia 92% share; $100M+ training costs | Rapidly advancing |
| Technological Lock-in | Infrastructure choices constrain future options | 15-30 years | $90B+ data center investments; 24-month lead times | Current window |
| Epistemic Lock-in | AI-mediated information shapes beliefs permanently | 10-25 years | Recommendation algorithm dominance | Early stage |

Value lock-in is the permanent embedding of specific moral, political, or cultural values in AI systems that shape human society. It occurs through:

  • Training Data Lock-in: Models trained on Western internet data embed cultural perspectives
  • Objective Function Lock-in: Systems optimizing for specific metrics reshape society around those metrics
  • Constitutional Lock-in: Explicit value systems embedded during training become permanent features

Researchers note that value lock-in involves “freezing current moral and political perspectives before humanity has time to evolve them.” These risks emerge from broader systemic dynamics rather than requiring any single AI system to behave badly.

Political lock-in is the AI-enabled permanent entrenchment of particular governments or political systems. Analysis indicates that AI law enforcement tends to undermine democratic government, promote authoritarian drift, and entrench existing authoritarian regimes by reducing structural checks on executive authority.

Journal of Democracy research documents how, through mass surveillance, facial recognition, predictive policing, online harassment, and electoral manipulation, AI has become a potent tool for authoritarian control. Modern technologies promising automated “social management” represent the culmination of a decades-old authoritarian vision: leveraging data to “engineer away dissent.”

Economic lock-in refers to AI-enabled economic arrangements that become self-perpetuating and impossible to change through normal market mechanisms. Yale Law & Policy Review analysis warns that concentration in AI markets amplifies the digital divide, with central urban areas and larger economic actors benefiting disproportionately while peripheral regions and SMEs are excluded.

The concern extends beyond consumer harm to democratic governance. Society stands at a critical crossroads: one path leads to AI controlled by a small number of powerful corporations, threatening innovation, privacy, liberty and democracy; the other path leads to a diverse and competitive market where AI serves the broader public interest.

Technological lock-in occurs when specific AI architectures or approaches become so embedded in global infrastructure that alternatives become impossible. This happens through:

  • Infrastructure Dependencies: AI systems integrated into power grids, financial systems, transportation
  • Network Effects: Data advantages and switching costs make dominant platforms unassailable
  • Capability Lock-in: Particular architectures achieve advantages making alternatives uncompetitive

Infrastructure analysis indicates the AI infrastructure market will generate over $250 billion in 2025, with durable pricing power and high barriers to entry across the stack due to constrained supply.


| Period | Lock-in Risk | Key Developments | Intervention Options |
| --- | --- | --- | --- |
| 2024-2026 | Low-Medium | Market concentration accelerating; training costs exploding | Antitrust enforcement; regulatory frameworks; open alternatives |
| 2027-2030 | Medium | AGI timelines approaching; infrastructure choices locked in | Governance mechanisms; democratic input; transparency requirements |
| 2031-2035 | Medium-High | Advanced AI deployment; surveillance global; economic concentration extreme | Value learning systems; international coordination; shutdown capabilities |
| 2036-2045 | High | Potential irreversibility; embedded systems; path dependence complete | Very limited; may require radical restructuring |
| 2046+ | Very High | Lock-in permanent unless prevented earlier | None—prevention only viable strategy |

Expert predictions for 2026 indicate that AI has ceased to be an “emerging” policy issue. Real-world harms are accumulating rapidly, putting pressure on lawmakers. The stage is set for important political and legal battles that will define who controls AI, who bears the costs of its harms, and whether democratic governments can keep pace.


| Question | Why It Matters | Current State |
| --- | --- | --- |
| Can embedded values be modified post-training? | Determines whether value lock-in is reversible | Unknown; $100M+ retraining costs suggest limited reversibility |
| What level of market concentration enables lock-in? | Defines intervention thresholds for antitrust | Nvidia at 92% GPU share; unclear if already past threshold |
| Do democratic AI governance mechanisms work? | Determines legitimacy and adaptability of systems | Collective Constitutional AI experiments ongoing; effectiveness TBD |
| How long until surveillance infrastructure becomes irreversible? | Defines intervention window for political lock-in | 80+ countries already using Chinese systems; switching costs high |
| Can open source alternatives prevent concentration? | Viability of competitive pluralism | Limited success at frontier scale due to compute costs |
| What triggers irreversibility in value lock-in? | Need to identify point of no return | Theoretical models exist; empirical validation lacking |
| How do different lock-in types interact? | Economic + political + value lock-in may compound | Likely reinforcing but mechanisms unclear |
| Is infrastructure overbuilding creating flexibility? | Excess capacity might enable alternatives | Unclear; may lock in wrong architecture if training→inference shift occurs |

While lock-in by definition prevents recovery once achieved, several strategies may reduce probability or delay onset:

| Strategy | Mechanism | Status | Effectiveness |
| --- | --- | --- | --- |
| Open source models | Prevent monopoly control | Limited at frontier | Medium—cost barriers remain |
| Research pluralism | Multiple AI approaches | Active globally | Medium—convergence pressure high |
| Interoperability requirements | Reduce switching costs | EU considering | Unknown |
| Public cloud infrastructure | Break vendor lock-in | Proposed, not implemented | Unknown |
| Collective Constitutional AI | Democratic value input | Experimental | Low—scale unclear |
| Citizens’ assemblies | Broad stakeholder input | Limited deployment | Low—influence on development unclear |
| Transparency requirements | Enable oversight | EU AI Act provisions | Medium—enforcement TBD |
| International coordination | Prevent race to bottom | UN AI Advisory Body | Low—limited enforcement |
| Vertical integration restrictions | Prevent concentration | DOJ investigating | Unknown—legal challenges likely |
| Data portability mandates | Reduce lock-in | Proposed, not implemented | Medium—if implemented |
| Structural separation | Break up incumbents | Proposed for Google | Unknown—political feasibility low |
| Non-discrimination obligations | Open access to essentials | Under consideration | Medium—enforcement challenges |
| Value learning systems | Adapt as values evolve | Active research | Unknown—capabilities unclear |
| Constitutional flexibility | Mechanisms to update values | Theoretical | Unknown—may conflict with stability |
| Interpretability requirements | Enable human oversight | Active research | Low—advanced systems resist interpretation |
| Shutdown capabilities | Preserve human control | Limited effectiveness | Low—sophisticated systems may resist |


| Model Element | Relationship |
| --- | --- |
| Civilizational Competence → Governance | Insufficient governance enables lock-in; concentration enables governance capture |
| Civilizational Competence → Epistemics | AI-mediated information shapes beliefs; epistemic degradation prevents recognition |
| Civilizational Competence → Adaptability | Lock-in by definition prevents adaptation; flexibility mechanisms critical |
| AI Ownership → Concentration | Market concentration and surveillance infrastructure are lock-in mechanisms |
| Misuse Potential → AI Control Concentration | Power concentration creates conditions for political lock-in |
| Long-term Trajectory | Lock-in determines whether outcomes are temporary or permanent |

Lock-in is the defining feature of the Long-term Lock-in scenario—whether values, power, or epistemics become permanently entrenched. This affects Long-term Trajectory more than acute existential risk, as it transforms recoverable problems into permanent ones.