Political Power Lock-in: Research Report
Executive Summary
| Finding | Key Data | Implication |
|---|---|---|
| Global autocratization acceleration | 72% of humanity (5.7B people) under autocracy; 45 countries autocratizing in 2024 vs. 12 in 2004 | Democracy decline coincides with AI surveillance proliferation |
| 15-year internet freedom decline | Freedom House: 15 consecutive years of declining internet freedom; 283 internet shutdowns in 39 countries (2023) | Digital control mechanisms spreading globally |
| AI surveillance proliferation | AI surveillance deployed in 80+ countries; Chinese companies control 34% of global surveillance camera market | Technology exported from authoritarian regimes to democracies |
| Irreversibility mechanism | AI addresses all traditional overthrow pathways simultaneously | First potentially permanent authoritarian systems in history |
| Democratic backsliding | 50% of “Free” countries experienced internet freedom declines (2024-2025) | Even democracies vulnerable to AI-enabled drift |
| Mass surveillance scale | China: 400 million CCTV cameras (54% of global total) | Infrastructure for comprehensive population monitoring exists |
Research Summary
Political power lock-in occurs when AI-enabled surveillance and control mechanisms make authoritarian governance structures effectively permanent by closing all traditional pathways for regime change. Unlike historical authoritarianism—always vulnerable to revolution, coup, elite defection, economic crisis, or external pressure—AI surveillance potentially creates the first truly stable authoritarian systems in human history by addressing each vulnerability simultaneously.
Current trajectories are concerning. V-Dem’s 2025 Democracy Report finds 72% of humanity (5.7 billion people) now lives under autocracy, with 91 autocracies outnumbering 88 democracies globally for the first time in two decades. Forty-five countries are actively autocratizing in 2024—up from only 12 countries twenty years ago. This democratic erosion coincides with AI surveillance proliferation: over 80 countries have deployed AI surveillance technologies, and Freedom House documents 15 consecutive years of declining internet freedom, with 283 internet shutdowns across 39 countries in 2023 alone.
The mechanisms enabling permanent control are already visible. China’s Xinjiang deployment showcases the integrated approach: facial recognition identifies individuals in real-time, predictive systems flag potential dissidents before they act, social credit restricts movement and employment, and automated enforcement reduces reliance on human agents who might defect. Research warns that “the normalization of AI monitoring may mold future generations to automatically conform to the regime’s stringent standards, making rebellion and dissident ideas difficult to come to fruition.” Academic analysis identifies “AI-tocracy”—a self-reinforcing cycle where AI innovation entrenches regimes while regime investment in AI for political control stimulates further innovation. Critically, democracies show vulnerability: half of Freedom House “Free” countries experienced internet freedom declines in 2024-2025, suggesting emergency surveillance powers adopted for security purposes may prove impossible to roll back.
Background
Political power lock-in represents a qualitative departure from all historical forms of authoritarianism. Every previous autocratic system remained vulnerable to traditional overthrow mechanisms: popular uprisings, military coups, elite defections, economic crises, or external pressure. AI surveillance technology potentially closes all these pathways simultaneously, creating what researchers call the first “irreversible” authoritarian systems.
The scale of global autocratization is accelerating. V-Dem’s 2025 Democracy Report documents that as of 2024, autocracies (N=91) outnumber democracies (N=88) globally, with 72% of the world’s population living under autocratic governance. Forty-five countries were actively autocratizing in 2024, up from 42 in 2023 and only 12 two decades ago. The level of democracy enjoyed by the average world citizen has declined to 1985 levels.
Freedom House’s Freedom on the Net 2025 report finds internet freedom has declined for 15 consecutive years, with conditions deteriorating in 27 countries while only 17 experienced improvements. Critically, even among countries rated “Free,” significant backsliding occurred: Georgia declined by 4 points, Germany by 3, and the United States by 3.
AI surveillance has spread to over 80 countries, with Chinese companies Hikvision and Dahua controlling 34% of the global surveillance camera market. China operates an estimated 400 million CCTV cameras—54% of the world’s total—creating infrastructure for comprehensive population monitoring. This technology is being exported globally: governments across Africa utilize 266 Chinese tech projects, including Zimbabwe’s facial recognition system for monitoring critics.
Key Findings
The Global Autocratization Trend
The statistical evidence for democratic erosion is unambiguous:
| Metric | 2014 | 2024 | Trend | Source |
|---|---|---|---|---|
| Autocracies vs. democracies | Democracies dominant | 91 autocracies, 88 democracies | Autocracies outnumber for first time in 20 years | V-Dem 2025 |
| Population under autocracy | 48% | 72% (5.7B people) | +24 percentage points (a 50% relative increase) | V-Dem 2025 |
| Countries autocratizing | 12 | 45 | +275% increase | V-Dem 2025 |
| Liberal democracies | More common | 29 (least common regime type) | Steepest decline | V-Dem 2025 |
| Freedom of expression decline | 35 countries | 44 countries | +26% | V-Dem 2025 |
| Internet freedom | Baseline | 15 consecutive years of decline | Longest continuous decline on record | Freedom House 2025 |
| Internet shutdowns | Lower | 283 shutdowns in 39 countries (2023) | Highest annual tally ever; +41% from prior year | Freedom House |
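As a quick consistency check, the relative changes in the table can be reproduced with a short calculation (a minimal sketch; the input figures are the ones tabulated above):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100

# Figures from the table above (V-Dem 2025)
autocratizing = pct_change(12, 45)  # countries autocratizing, 2004 -> 2024
expression = pct_change(35, 44)     # countries with declining freedom of expression

print(f"{autocratizing:.0f}%")  # 275%
print(f"{expression:.0f}%")     # 26%
```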
Mechanisms of AI-Enabled Permanent Control
Academic research identifies how AI addresses each traditional vulnerability of authoritarian regimes:
1. Closing the Popular Uprising Pathway
arXiv research on catastrophic AI risks (Hendrycks & Woodside) warns about “leveraging censorship and mass surveillance to irreversibly concentrate power.” The mechanism: comprehensive surveillance detects organizing at its earliest stages, while predictive analytics identify potential leaders for preemptive neutralization.
Research from Oxford on resisting AI-enabled authoritarianism documents how AI surveillance undermines dissent: “In Argentina, two days before a major demonstration, the government threatened to use face recognition to identify people and then cut their social benefits. As a result, only a few people took to the streets—the government successfully intimidated the population and prevented public political protest.”
In Russia, people attending dissident Alexei Navalny’s funeral were arrested after identification through facial recognition analyzing surveillance camera footage and social media images.
2. Closing the Military Coup Pathway
Journal of Democracy research on digital unfreedom identifies how AI monitoring prevents military coups: continuous surveillance of officer communications maps potential conspiracies before they materialize. Unlike human informants who might have divided loyalties, AI systems execute monitoring with perfect compliance.
JSTOR analysis on AI and authoritarianism notes: “In a democracy, soldiers are likely to hesitate before firing on peaceful protesters or opposition lawmakers. But automated systems will not hesitate to follow orders, and shame will not prevent them from using deadly force when commanded. Commanding a violent force characterized by total loyalty to an administrator empowers authoritarians and facilitates human rights abuses.”
3. Closing the Elite Defection Pathway
Elite coordination against autocrats requires secret communication. Research on the surveillance AI pipeline, which analyzed three decades of computer vision research papers and downstream patents (40,000+ documents), found that “the large majority of annotated computer vision papers and patents self-report their technology enables extracting data about humans, specifically about human bodies and body parts.”
When surveillance makes coordination visible, elite defection becomes prohibitively risky. The social credit system in China restricts movement and employment based on “trustworthiness” scores, creating economic costs for any detected disloyalty.
4. Creating Self-Reinforcing “AI-tocracy”
MIT research on how “AI-tocracy” emerges describes a self-reinforcing cycle: “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.” The scholars describe “the connected cycle in which increased deployment of AI-driven technology quells dissent while also boosting the country’s innovation capacity.”
This creates path dependency: early AI surveillance investment generates both political control and technical capabilities, which enable further investment, creating escalating advantages for incumbents.
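This self-reinforcing cycle can be made concrete with a toy difference-equation model (purely illustrative; the variables, coupling constants, and functional form are assumptions, not estimates from the literature):

```python
# Toy model of the AI-tocracy loop: political control (c) and AI
# capability (k), each in [0, 1], reinforce one another.
def step(c: float, k: float, alpha: float = 0.1, beta: float = 0.1):
    """One iteration: capability boosts control; control funds capability."""
    c_next = min(1.0, c + alpha * k * (1.0 - c))  # surveillance quells dissent
    k_next = min(1.0, k + beta * c * (1.0 - k))   # regime investment spurs innovation
    return c_next, k_next

c, k = 0.2, 0.2  # modest starting levels
for _ in range(50):
    c, k = step(c, k)
# Increments are always non-negative, so both variables ratchet upward:
# the escalating incumbent advantage described above.
```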
The Surveillance Infrastructure Reality
Global Deployment Scale
Research on digital authoritarianism documents that AI surveillance has spread to over 80 countries. China operates approximately 400 million CCTV cameras, representing 54% of the global total.
The technology is being actively exported. Chinese surveillance technology companies Hikvision and Dahua control 34% of the global surveillance camera market. Analysis from the National Bureau of Asian Research notes: “China’s mass production of information and communications technology and surveillance equipment, fueled by government subsidization and unparalleled domestic demand, has facilitated the exportation of low-cost digital infrastructure around the world via the BRI and DSR.”
Technological Capabilities
Analysis from AlgorithmWatch on biometric surveillance documents current capabilities: real-time facial recognition operating in publicly accessible areas such as stadiums, airports, casinos, and schools. Police authorities use these systems for law enforcement, and several countries deployed them to enforce social distancing during COVID-19.
The Brennan Center’s analysis on AI in policing warns: “Today’s data fusion tools claim a broad array of uses, including crime trend prediction, threat identification, sentiment analysis, risk score calculations, relationship mapping, and pattern and anomaly detection. The capacity to access them in tandem while drawing on such large volumes of data magnifies the risks that outputs will be inaccurate, bake in bias, and enable indiscriminate surveillance.”
Predictive Policing and Preemptive Control
Journal of Democracy analysis documents how predictive policing enables preemptive control: “Powered by AI that analyzes data from various sources such as police records, surveillance footage, social media activity, and public and private databases, these tools forecast potential crimes or unrest. While the technology has legitimate uses, it has been widely criticized for perpetuating systemic bias and enabling authoritarian control.”
The critical shift: from reactive law enforcement (responding to crimes) to preemptive neutralization (acting before dissent manifests). This eliminates the window for organized opposition to form.
China’s Integrated Surveillance Model
China’s deployment in Xinjiang showcases the fully integrated approach:
| Component | Function | Lock-in Mechanism |
|---|---|---|
| Facial recognition | Real-time individual identification | Eliminates anonymity in public spaces |
| Predictive analytics | Flag potential dissidents before action | Preemptive neutralization prevents organizing |
| Social credit system | Restrict movement, employment, services | Economic punishment without legal process |
| Automated enforcement | Reduce human agents | Eliminates defection pathway via human informants |
| Data integration | Combine police, social media, financial, biometric | Creates comprehensive profiles enabling precise targeting |
Cambridge University research on China’s corporate social credit system finds the system creates “surveillance state capitalism” where politically connected firms receive higher scores by accumulating “soft merits” for charitable donations sanctioned by the party-state, volunteer activities, and awards from government organs.
Democratic Backsliding and Vulnerability
Democracies are not immune. Freedom House’s 2025 report found significant declines even among “Free” countries: Georgia (-4 points), Germany (-3), and the United States (-3).
Mechanisms of Democratic Drift
Research published in Democratization on how digital authoritarianism harms democracy identifies practices spreading to democratic contexts: official disinformation on social media, abuse of defamation and copyright laws for censorship, social media surveillance, and government access to personal internet data.
CSIS analysis on digital authoritarianism strategy documents: “A range of democratically elected governments also apply practices of digital authoritarianism.” The pathway: emergency powers adopted for legitimate security concerns (terrorism, public health) create surveillance infrastructure that proves difficult to dismantle once established.
U.S. Surveillance Expansion
The Intercept reports the FBI is seeking AI and machine learning technology for unmanned aerial systems, including facial recognition, license plate recognition, and weapons detection capabilities.
MIT Technology Review documents how police circumvent facial recognition bans: “Police and federal agencies have found a new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model called Track that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.” The tool is used by 400 customers, including state and local police departments and US attorneys at the Department of Justice.
Election Manipulation and Information Control
AI enables manipulation of democratic processes at unprecedented scale:
Deepfakes and Disinformation
Brookings analysis on AI and elections warns: “Political campaigns in America have always featured misinformation about issues, but today, AI and other new technologies represent an unprecedented challenge to the electorate and our political system. The scale and sophistication of AI-generated deepfake images, voice recordings, and videos are widespread and could alter the outcome in many elections.”
Carnegie Endowment research documents actual cases: “In Zimbabwe’s 2018 election, reports indicated that AI-powered bots were used to spread false information about voter-registration deadlines, leading to voter suppression in opposition strongholds. Similarly in Russia, AI has been used to manipulate public opinion by amplifying state-sponsored narratives while silencing critics, as seen in the 2021 parliamentary elections.”
In Slovakia, AI-generated audio damaged candidate reputations ahead of elections. Nature analysis warns: “Deep fake AI content poses increasing risks of election manipulation.”
Foreign Interference Capabilities
Yale ISPS research documents that “states including Iran, Russia, and Venezuela are purposefully experimenting with and weaponizing generative AI to manipulate the information space and undermine democracy. Countries like Georgia, Moldova, Romania, and Ukraine face a deluge of hybrid threats and AI-generated disinformation campaigns aimed at destabilizing societies and disrupting electoral processes.”
Scalability Advantage
Research in Technology and Innovation in Politics notes: “The core of the problem lies in the speed and scale at which AI tools, once deployed on social media platforms, can generate misleading content—outpacing both governmental oversight and society’s ability to manage the consequences.”
The Irreversibility Question
Multiple research sources warn about permanent lock-in:
arXiv research on AI governance identifies as a catastrophic outcome: “Authoritarianism/Lock-in: The world locks in values or conditions that are harmful, such as a stable, global authoritarian regime.”
EU AI Act analysis warns: “Mass surveillance and censorship can consequently facilitate a self-reinforcing regime, i.e., lead to an authoritarian ‘lock-in.’ AI, in the hands of non-democratic leaders, could serve as a blessing for regimes that want to surveil their population. Supercharging the collection and processing of an unprecedented load of information, the technology bears the risk of entrenching totalitarian systems.”
Oxford research on AI-enabled authoritarianism identifies the mechanism: “The continuous and pervasive surveillance by AI, especially in the context of China’s extensive camera network, not only instills fear in citizens but also molds behavior. The insidious nature of this surveillance means that future generations, growing up under the watchful eyes of AI, might internalize self-censorship and conformity as the norm. The normalization of AI monitoring may mold future generations to automatically conform to the regime’s stringent standards and control, making rebellion and dissident ideas and opinions difficult to come to fruition.”
Causal Factors
The following factors influence political power lock-in probability and severity. This analysis is designed to inform future cause-effect diagram creation.
Primary Factors (Strong Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| AI Surveillance Proliferation | ↑ Lock-in | intermediate | 80+ countries deployed; 400M cameras in China alone | High |
| Facial Recognition Capability | ↑ Lock-in | leaf | Real-time identification eliminates anonymity; deployed in stadiums, airports, public spaces | High |
| Predictive Policing Systems | ↑ Lock-in | intermediate | Enables preemptive neutralization before dissent manifests | High |
| Social Credit Systems | ↑ Lock-in | intermediate | Restricts movement/employment without legal process; economic coercion | High |
| Data Integration Infrastructure | ↑ Lock-in | cause | Combines police, social media, financial, biometric data for comprehensive profiling | High |
| Internet Shutdowns | ↑ Lock-in | intermediate | 283 shutdowns in 39 countries (2023); +41% from prior year | High |
| Global Autocratization Trend | ↑ Lock-in | cause | 45 countries autocratizing (2024) vs. 12 (2004); 72% of humanity under autocracy | High |
| Automated Enforcement Systems | ↑ Lock-in | intermediate | Eliminates human defection pathway; perfect compliance | High |
Secondary Factors (Medium Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| Chinese Tech Exports | ↑ Lock-in | leaf | Hikvision/Dahua 34% market share; 266 projects in Africa; locks countries into tech dependencies | Medium |
| AI Election Manipulation | ↑ Lock-in | intermediate | Deepfakes, bot networks suppress opposition; Zimbabwe, Russia, Slovakia cases documented | Medium |
| Algorithmic Bias | ↑ Lock-in | leaf | Higher false positives for marginalized groups creates asymmetric pressure | Medium |
| Emergency Power Normalization | ↑ Lock-in | cause | Surveillance adopted for security proves difficult to dismantle; democratic backsliding | Medium |
| Platform Content Control | ↑ Lock-in | intermediate | Information overloading, filtering, gatekeeping, echo chambers limit access to reliable info | Medium |
| Generational Normalization | ↑ Lock-in | cause | Growing up under surveillance internalizes self-censorship; cultural capacity for resistance erodes | Medium |
| AI-tocracy Feedback Loop | ↑ Lock-in | cause | AI innovation entrenches regime; regime investment stimulates further innovation | Medium |
| Cross-Border Repression | ↑ Lock-in | intermediate | Surveillance of diaspora communities; transnational cooperation among autocracies | Medium |
Minor Factors (Weak Influence)
| Factor | Direction | Type | Evidence | Confidence |
|---|---|---|---|---|
| Privacy Regulations | ↓ Lock-in | leaf | GDPR, BIPA, CCPA provide some protection; enforcement varies widely | Low |
| Tech Company Restraint | ↓ Lock-in | leaf | Facebook, Microsoft, Amazon, IBM stopped facial recognition sales; voluntary, reversible | Low |
| Civil Society Monitoring | ↓ Lock-in | leaf | NGOs track surveillance proliferation; limited enforcement power | Low |
| Counter-Surveillance Tools | ↓ Lock-in | leaf | Real-time perturbations (visual, acoustic, traffic) restore some activist agency | Low |
| Public Awareness | ↓ Lock-in | leaf | Growing concern about surveillance; not yet translating to policy action | Low |
| International Norms | ↓ Lock-in | leaf | Human rights frameworks exist; weak enforcement mechanisms | Low |
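To support the planned cause-effect diagram, the three factor tables above can be captured in a small data structure (a minimal sketch; the names, directions, node types, and confidence levels come from the tables, while the `Factor` class and helper function are hypothetical conveniences):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Factor:
    name: str
    increases_lockin: bool  # direction column: True for "up Lock-in"
    node_type: str          # "cause", "intermediate", or "leaf"
    confidence: str         # "High", "Medium", or "Low"

# A representative subset of the factors tabulated above.
FACTORS = [
    Factor("Data Integration Infrastructure", True, "cause", "High"),
    Factor("AI Surveillance Proliferation", True, "intermediate", "High"),
    Factor("Chinese Tech Exports", True, "leaf", "Medium"),
    Factor("Generational Normalization", True, "cause", "Medium"),
    Factor("Privacy Regulations", False, "leaf", "Low"),
]

def by_confidence(factors, increases=True):
    """Lock-in factors of one direction, strongest evidence first."""
    rank = {"High": 0, "Medium": 1, "Low": 2}
    return sorted((f for f in factors if f.increases_lockin == increases),
                  key=lambda f: rank[f.confidence])
```

Because Python's `sorted` is stable, factors with equal confidence keep their table order, which preserves the primary/secondary grouping above.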
Intervention Mechanisms and Challenges
Proposed Interventions
Research communities have identified several potential intervention strategies:
1. Export Controls on Surveillance Technology
NBR analysis on confronting digital authoritarianism recommends restricting exports of surveillance capabilities to authoritarian regimes. However, implementation faces challenges:
| Challenge | Description | Mitigation |
|---|---|---|
| Dual-use technology | Same tech has legitimate and repressive uses | Context-based evaluation; end-user monitoring |
| Alternative suppliers | China provides unrestricted alternatives | Coordinate with allies; offer competitive alternatives |
| Technological diffusion | Know-how spreads regardless of hardware controls | Focus on specialized components (AI chips) |
2. Privacy-Preserving Technologies and Encryption
Oxford research on resisting AI-enabled authoritarianism emphasizes developing defensive tools: “Because authoritarian actors already enjoy a data and resource surplus (asymmetry gap), even marginal defensive tools can change outcomes for individual activists. Real-time perturbations—visual, acoustic or traffic-based—restore some agency.”
However, governments are actively undermining encryption. The challenge: backdoors created for law enforcement also create vulnerabilities for malicious actors.
3. International Human Rights Enforcement
Existing frameworks include the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. The problem: weak enforcement mechanisms and lack of jurisdiction over domestic surveillance.
Research on AI and global governance notes enforcement gaps: authoritarian states simply ignore international norms when convenient.
4. Transparency and Accountability Requirements
Brennan Center analysis on AI threats to elections recommends: “Social media platforms, AI developers, and policymakers must act now to implement transparency requirements, strengthen trust and safety protections, and establish accountability mechanisms for AI-generated content.”
The Department of Homeland Security instituted requirements that all uses of face recognition technologies be thoroughly tested for unintended bias, with U.S. citizens afforded opt-out rights for non-law enforcement uses.
5. Democratic AI Development
Brookings research on public AI proposes government-developed AI under political oversight rather than market control. The Biden administration’s Executive Order on AI called for a National AI Research Resource pilot program.
University research on AI and democracy found that when generative AI was used in deliberation platforms, participants showed more willingness to find compromise, reported higher satisfaction, and felt more respected. The key: “Generative AI can act as a guide, not an actor, to amplify agency, respect, and inclusiveness.”
6. Activist Support and Training
Journal of Democracy analysis recommends: “Activists must also be included in policy discussions about AI governance to ensure that AI systems are designed with transparency, accountability, and human rights in mind. By providing activists with early access to AI tools, training, funding, and collaboration opportunities, the global community can better equip them to counter repression.”
Why Interventions May Fail
Each proposed intervention faces substantial challenges:
| Intervention | Primary Challenge | Lock-in Enabling Mechanism |
|---|---|---|
| Export controls | China provides unrestricted alternatives; dual-use technology hard to categorize | Technology diffusion continues via alternative suppliers |
| Privacy tech/encryption | Governments actively undermine encryption; technical sophistication required | Asymmetric resource gap favors surveillance states |
| International enforcement | Weak jurisdiction over domestic surveillance; authoritarian states ignore norms | No credible enforcement mechanism |
| Transparency requirements | Applies only to democratic jurisdictions; autocracies ignore | Creates competitive disadvantage for democracies |
| Democratic AI | Requires sustained political will; vulnerable to administration changes | Market dynamics favor incumbent platforms |
| Activist support | Authoritarian regimes criminalize support; resource asymmetry | Activists individually vulnerable despite collective capacity |
Open Questions
| Question | Why It Matters | Current State |
|---|---|---|
| What defines the irreversibility threshold for political lock-in? | Need to identify intervention window before crossing point of no return | Theoretical frameworks exist; no empirical validation or early warning indicators |
| Can technical counter-surveillance restore meaningful privacy? | Determines whether resistance remains possible under comprehensive surveillance | Marginal defensive tools help individual activists; unclear if scalable to population level |
| Will generational normalization create cultural inability to resist? | Lock-in may operate through socialization rather than purely technological control | Evidence from China suggests internalization occurring; long-term effects unknown |
| How do AI safety requirements interact with surveillance proliferation? | Safety measures may require monitoring that enables repression | Active policy debate; no consensus on balancing safety and liberty |
| Can democracies maintain surveillance restraint under security pressures? | Tests whether liberal norms can withstand authoritarian competition | 50% of “Free” countries experienced internet freedom decline; trajectory concerning |
| Is the China model replicable in other authoritarian contexts? | Determines speed of global proliferation | Technology exported to 80+ countries; adaptation to local contexts varies |
| Will international coordination on surveillance controls succeed? | Unilateral restrictions ineffective if authoritarian alternatives exist | Minimal coordination; China actively provides unrestricted alternatives |
| Can predictive policing be constrained to legitimate public safety uses? | Boundary between crime prevention and political repression unclear | Multiple U.S. cities abandoned predictive policing; others expanding use |
| How does AI election manipulation affect democratic legitimacy long-term? | Repeated manipulation may erode belief in democratic processes | Evidence from 2024-2025 elections shows growing impact; trust effects unknown |
| Is technological lock-in reversible if political will emerges? | Critical for determining whether late intervention can succeed | Historical examples of surveillance rollback rare; network effects may prevent reversal |
Sources
Academic Research Papers
- Hendrycks, D., & Woodside, T. “An Overview of Catastrophic AI Risks” - Surveillance enabling irreversible power concentration
- Vidgen, B., et al. (2023). “The Surveillance AI Pipeline” - Analysis of 40,000+ computer vision papers enabling human data extraction
- Anderljung, M., et al. (2025). “AI Governance to Avoid Extinction” - Authoritarian lock-in as catastrophic outcome
- Veale, M., & Borgesius, F. (2024). “Taxonomy to Regulation: AI Risks and the EU AI Act” - Mass surveillance facilitating self-reinforcing regimes
- Maas, M. (2024). “AI, Global Governance, and Digital Sovereignty” - AI empowering surveillance capacity and structural power
- Schneier, B., & Barez, F. (2025). “Artificial Intelligence and Democracy” - Digital authoritarianism vs. democratic upgrade pathways
- Future of Humanity Institute. “The Malicious Use of Artificial Intelligence” - State surveillance automation and privacy elimination
- Barez, F. (2025). “Toward Resisting AI-Enabled Authoritarianism” - Defensive tools and generational normalization concerns
Policy and Think Tank Research
- V-Dem Institute. (2025). “Democracy Report 2025: 25 Years of Autocratization” - 72% of humanity under autocracy; 45 countries autocratizing
- Freedom House. (2025). “Freedom on the Net 2025” - 15 consecutive years of internet freedom decline
- Polyakova, A., & Meserole, C. (2019). “Exporting Digital Authoritarianism” - Chinese technology export and soft power
- Brookings Institution. “How AI Can Enable Public Surveillance” - Surveillance capabilities and political environment risks
- Brookings Institution. “How AI Impacts Democratic Engagement” - Generative AI and foreign interference
- Brookings Institution. “Is the Politicization of Generative AI Inevitable?” - Political bias in AI chatbots
- Brennan Center for Justice. “The Dangers of Unregulated AI in Policing” - Data fusion tools and indiscriminate surveillance
- Brennan Center for Justice. “Gauging the AI Threat to Free and Fair Elections” - AI-generated content accountability
- Carnegie Endowment for International Peace. (2024). “Can Democracy Survive the Disruptive Power of AI?” - Institutional capacity and democratic outcomes
- CSIS. “Promote and Build: A Strategic Approach to Digital Authoritarianism” - Democratic governments applying authoritarian practices
Case Studies and Country-Specific Analysis
- Stanford Freeman Spogli Institute. “Assessing China’s National Model Social Credit System” - Social credit system structure and implementation
- Cambridge University Press. “China’s Corporate Social Credit System” - Surveillance state capitalism and political connections
- Journal of Politics. “Information Control and Public Support for Social Credit in China” - Public support despite repressive potential
- ORF Online. “China’s Social Credit System and Information Control Regime” - Information control mechanisms
- MERICS. “China’s Social Credit Score: Untangling Myth from Reality” - Fragmentation vs. consolidation
Surveillance Technology and Capabilities
- AlgorithmWatch. “Show Your Face and AI Tells Who You Are” - Biometric surveillance in publicly accessible areas
- ISACA. (2025). “Facial Recognition and Privacy Concerns” - AI-powered FRT evolution and concerns
- The Intercept. (2025). “The FBI Wants AI Surveillance Drones With Facial Recognition” - U.S. law enforcement AI capabilities
- MIT Technology Review. (2025). “How AI Helps Police Skirt Facial Recognition Bans” - Track system circumventing regulations
- NAACP. “Artificial Intelligence in Predictive Policing Issue Brief” - Bias perpetuation in AI-driven policing
Election Manipulation and Democratic Processes
- Journal of Democracy. “How Autocrats Weaponize AI—And How to Fight Back” - AI for monitoring, targeting, silencing activists
- Westminster Foundation for Democracy. “How AI Might Impact Democracy” - Information overloading and echo chambers
- Yale ISPS. (2025). “AI and Democracy: Scholars Unpack Technology and Governance” - Foreign interference using generative AI
- Nature. (2025). “AI Has a Democracy Problem” - Threats to democratic trust
- International IDEA. “Democracy in the Age of AI” - Ethical standards and transparency needs
Digital Authoritarianism Analysis
- Sage Journals. (2025). “Authoritarianism in the Digital Age” - Digital authoritarianism characterization
- Democratization. (2025). “How Practices of Digital Authoritarianism Harm Democracy” - Specific practices in democratic contexts
- Yale Review of International Studies. “The Rise of Digital Authoritarianism” - Impacts on global democracy
- Centre for International Governance Innovation. “Evolving Surveillance Tech” - Authoritarian impulse to comprehensive surveillance
- Centre for International Governance Innovation. “Authoritarianism Reinvented” - 80+ countries with AI surveillance
- National Bureau of Asian Research. “Confronting Digital Authoritarianism” - Technology export and path dependencies
- European Centre for Populism Studies. “Transnational Diffusion” - Learning and emulation mechanisms
- Philosophy & Technology. “Defining Digital Authoritarianism” - Conceptual frameworks
- SSRN (G’sell). “Digital Authoritarianism: From State Control to Algorithmic Despotism” - Evolution of control mechanisms
Additional Academic Sources
- MIT News. (2023). “How an ‘AI-tocracy’ Emerges” - Self-reinforcing cycle of AI innovation and regime entrenchment
- JSTOR. “Artificial Intelligence, Authoritarianism and the Future of Political Systems” - Automated enforcement and total loyalty
- Journal of Democracy. “The Road to Digital Unfreedom” - AI reshaping repression mechanisms
- Lawfare. “The Authoritarian Risks of AI Surveillance” - Undermining democratic government
- Foreign Affairs. (2018). “How AI Will Reshape the Global Order” - Geopolitical implications
- Technology and Innovation in Politics. (2025). “AI: Challenges for Democracy” - Policy solutions
- ITU. (2025). “The Annual AI Governance Report 2025” - International governance frameworks
AI Transition Model Context
Connections to Other Model Elements
| Model Element | Relationship to Political Lock-in |
|---|---|
| AI Capabilities (Algorithms) | Computer vision enables facial recognition; LLMs enable scaled disinformation |
| AI Capabilities (Compute) | Large-scale surveillance requires massive compute; concentrated infrastructure enables control |
| AI Capabilities (Adoption) | Rapid surveillance adoption before democratic safeguards creates path dependency |
| AI Ownership (Companies) | Chinese tech companies export surveillance globally; U.S. companies developing capabilities |
| AI Ownership (Countries) | China drives proliferation to authoritarian regimes; geopolitical competition |
| AI Uses (Governments) | Surveillance, predictive policing, censorship, propaganda, electoral manipulation |
| AI Uses (Coordination) | International cooperation among autocracies; surveillance technology transfer |
| Civilizational Competence (Governance) | Democratic governance struggles to constrain surveillance; regulatory lag |
| Civilizational Competence (Epistemics) | Information control undermines shared reality; echo chambers and manipulation |
| Civilizational Competence (Adaptability) | Emergency powers prove difficult to roll back; normalization of surveillance |
| Misalignment Potential (AI Governance) | Surveillance requirements may conflict with privacy protections |
| Transition Turbulence (Racing Intensity) | Security competition drives surveillance adoption despite democratic concerns |
| Long-term Lock-in (Economic Power) | Economic concentration enables political influence; social credit restricts mobility |
| Long-term Lock-in (Values) | Generational normalization embeds authoritarian values; cultural capacity for resistance erodes |
Key Insights for the Model
- Lock-in operates incrementally: Unlike catastrophic scenarios with clear inflection points, political lock-in proceeds through the accumulation of individually justifiable surveillance capabilities. Each step appears reasonable, making intervention politically difficult.
- Irreversibility threshold is uncertain but critical: Research suggests comprehensive AI surveillance may create the first truly permanent authoritarian systems by closing all traditional overthrow pathways simultaneously. However, empirical validation of the irreversibility claim is lacking.
- Democracies are vulnerable: Half of Freedom House “Free” countries experienced internet freedom declines in 2024-2025. Emergency powers adopted for legitimate security concerns create infrastructure that is difficult to dismantle, enabling authoritarian drift even in democracies.
- Intervention timing paradox: Effective intervention requires action before lock-in, but political will emerges only after harms are evident. By the time democratic publics mobilize, surveillance infrastructure may be embedded and difficult to reverse.
- Technology export accelerates proliferation: Chinese companies control 34% of the global surveillance camera market and actively export integrated systems to 80+ countries. This creates technological path dependencies and locks recipient countries into future development controlled by authoritarian suppliers.
- Generational normalization may be a key mechanism: Lock-in may operate through cultural transmission—generations growing up under comprehensive surveillance internalize self-censorship and lose the cultural capacity for resistance—rather than purely technological control.
- No effective international enforcement: Human rights frameworks exist but lack credible enforcement mechanisms for domestic surveillance. Unilateral export controls are ineffective when China provides unrestricted alternatives.
The research suggests political power lock-in should be treated as a high-probability failure mode that receives insufficient attention because its harms are distributed and incremental rather than sudden and catastrophic. The trajectory is concerning: 45 countries actively autocratizing, 72% of humanity living under autocracy, 15 consecutive years of declining internet freedom, and a proliferation of surveillance capabilities that may make resistance structurally impossible rather than merely difficult.