
State-Caused Catastrophe

A state actor catastrophe occurs when governments use AI capabilities to cause mass harm—either through interstate conflict (great power war enhanced by AI), internal repression (AI-enabled authoritarian control), or state-sponsored attacks (biological, cyber, or other weapons of mass destruction). Unlike rogue actor catastrophes, these scenarios involve the resources and legitimacy of nation-states.

This is the “bad actor risk” that governance researchers emphasize alongside technical alignment concerns. Even perfectly aligned AI systems could enable catastrophic outcomes if wielded by states with harmful intentions.


This pathway is inherently negative. Beneficial state use of AI (effective governance, improved public services) is not the focus here; this page specifically addresses catastrophic misuse pathways.



AI transforms military capabilities, increasing the risk and severity of great power conflict:

  • Autonomous weapons: AI-enabled weapons systems that can select and engage targets without human intervention
  • Speed of conflict: AI accelerates decision-making beyond human timescales, making escalation harder to control
  • New attack surfaces: AI enables novel attack vectors (cyber, information, economic)
  • Deterrence instability: AI may undermine nuclear deterrence or create first-strike incentives

AI provides tools for unprecedented state control over populations:

  • Mass surveillance: AI-powered monitoring of all communications and movements
  • Predictive policing: Preemptive detention based on predicted behavior
  • Propaganda optimization: AI-generated content that maximally influences beliefs
  • Economic control: AI management of resources to reward loyalty and punish dissent

If such systems become entrenched globally, this could constitute a permanent loss of human freedom—a form of existential catastrophe.

AI enhances state capacity to develop and deploy weapons of mass destruction:

  • Bioweapons: AI-designed pathogens optimized for lethality or spread
  • Cyberweapons: AI-enabled attacks on critical infrastructure at civilizational scale
  • Novel weapons: AI-discovered attack vectors that humans have not yet conceived

| Parameter | Direction | Impact |
|---|---|---|
| International Coordination | Low → Enables | Unable to establish norms or verify compliance |
| Racing Intensity | High → Accelerates | Pressure to deploy military AI without adequate safety |
| Governance Capacity | Low → Enables | Institutions can’t manage AI development |
| Cyber Threat Exposure | High → Amplifies | More attack surfaces for state-level conflict |
| Biological Threat Exposure | High → Amplifies | AI-enabled bioweapons become more feasible |

State actor catastrophe is a major pathway to acute existential risk:

  • Nuclear war escalated by AI systems
  • Engineered pandemic released by state program
  • Permanent global authoritarianism

Even short of extinction, state misuse shapes the long-run trajectory:

  • Authoritarian control may become the global norm
  • International system may fragment or collapse
  • Trust and cooperation may be permanently damaged
  • State conflict intensifies racing dynamics and diverts resources from beneficial development

| Technology | State Misuse | Lessons |
|---|---|---|
| Nuclear weapons | Arms race, Cold War brinkmanship | International coordination possible but fragile |
| Chemical weapons | WWI, ongoing use | Norms can develop but enforcement is hard |
| Biological weapons | State programs (USSR, others) | Even with treaties, verification is difficult |
| Cyber capabilities | State-sponsored attacks | Attribution difficult, escalation risks |

  1. Military AI deployments: Autonomous weapons systems entering service
  2. AI arms race rhetoric: Leaders framing AI as key to military dominance
  3. Coordination breakdown: International AI governance efforts failing
  4. Authoritarian AI exports: Surveillance technology spreading to repressive states
  5. State bioweapon indicators: AI capabilities at state biological research facilities
  6. Escalation incidents: Near-misses involving AI-enabled military systems

International:

  • Arms control agreements for AI weapons systems
  • Verification regimes for military AI
  • Confidence-building measures between great powers
  • Export controls on surveillance AI

Domestic:

  • Human control requirements for lethal autonomous systems
  • Democratic oversight of military AI programs
  • Whistleblower protections for concerning programs

Technical:

  • AI systems designed with escalation prevention
  • Kill switches and human override capabilities
  • Defensive AI (cyber defense, attribution)

This is one of the harder catastrophe pathways to estimate because it depends heavily on geopolitical developments:

| Factor | Assessment |
|---|---|
| Great power war probability | Low but non-trivial; AI may increase risk |
| AI impact on war severity | Likely significant: faster, more autonomous, new domains |
| Authoritarian AI entrenchment | Already occurring in some states |
| State WMD enhancement | Plausible; verification very difficult |

  • Dafoe, A. (2018). “AI Governance: A Research Agenda”
  • Ord, T. (2020). The Precipice, which discusses state-level AI risks
  • Future of Life Institute's work on lethal autonomous weapons

Ratings

| Metric | Score | Interpretation |
|---|---|---|
| Changeability | 45/100 | Somewhat influenceable |
| X-risk Impact | 75/100 | Substantial extinction risk |
| Trajectory Impact | 70/100 | Major effect on long-term welfare |
| Uncertainty | 55/100 | Moderate uncertainty in estimates |