Rogue Actor Catastrophe

A rogue actor catastrophe occurs when non-state actors use AI to cause mass harm—potentially at civilizational scale. Unlike state actor catastrophes, these scenarios involve individuals or groups operating outside governmental authority. AI lowers the barriers to acquiring dangerous capabilities, potentially enabling small groups to cause harm previously requiring nation-state resources.

This is a key “misuse” risk that may be more tractable than alignment failures, since it involves known bad actors using AI as a tool rather than AI systems developing misaligned goals.


Inherently negative. There is no positive version of rogue actors causing mass harm. Beneficial non-state use of AI (innovation, civil society empowerment) is a separate consideration.


1. AI-Enabled Bioweapons

AI could help non-experts design and synthesize dangerous pathogens:

  • LLMs providing step-by-step synthesis guidance
  • AI-designed pathogens optimized for transmissibility or lethality
  • Reduced need for tacit knowledge that currently limits bioweapon development
  • Potential for pandemic-scale casualties (millions to billions)

2. AI-Enhanced Cyberattacks

AI dramatically improves offensive cyber capabilities:

  • Automated vulnerability discovery and exploitation
  • AI-generated social engineering at scale
  • Attacks on critical infrastructure (power grids, water, financial systems)
  • Potential for cascading failures across interdependent systems
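
The cascading-failure concern can be illustrated with a toy dependency-graph model. The graph, system names, and the simple failure rule below are illustrative placeholders, not a real infrastructure model:

```python
# Toy model of cascading infrastructure failure: systems form a
# dependency graph, and a system fails if anything it depends on fails.
# The graph, node names, and failure rule are illustrative placeholders.

def cascade(depends_on: dict[str, set[str]], initially_failed: set[str]) -> set[str]:
    """Propagate failures until no new system fails; return all failed systems."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for system, deps in depends_on.items():
            if system not in failed and deps & failed:
                failed.add(system)
                changed = True
    return failed

# Hypothetical dependencies: water pumps and telecom need power;
# payment systems need telecom.
GRID = {
    "power": set(),
    "water": {"power"},
    "telecom": {"power"},
    "finance": {"telecom"},
}

print(sorted(cascade(GRID, {"power"})))    # a single power failure takes down all four
print(sorted(cascade(GRID, {"finance"})))  # a finance failure stays contained
```

Even this crude model shows why interdependence matters: a single well-chosen initial failure can propagate far beyond its starting point.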

3. Coordination and Recruitment

AI amplifies organizational capabilities of rogue actors:

  • AI-optimized radicalization and recruitment
  • Better operational security and planning
  • Coordination of complex multi-stage attacks
  • Harder for defenders to infiltrate or monitor

| Parameter | Direction | Impact |
| --- | --- | --- |
| Biological Threat Exposure | High → Enables | Easier access to dangerous biological knowledge |
| Cyber Threat Exposure | High → Enables | More attack surfaces and vulnerabilities |
| Information Authenticity | Low → Enables | Harder to counter radicalization content |
| Safety Culture Strength | Low → Enables | Labs may not implement access controls |

Rogue actor catastrophes could cause existential-scale harm:

  • Engineered pandemic causing billions of deaths
  • Cascading infrastructure failures
  • Even if extinction is avoided, such attacks could cause civilizational collapse

Successful attacks would reshape the long-run trajectory:

  • Permanent surveillance and security measures
  • Loss of trust and openness
  • Reduced innovation due to fear of misuse
  • Backlash could lead to heavy-handed regulation or divert resources from beneficial development

| Dimension | Pre-AI | Post-AI |
| --- | --- | --- |
| Expertise required | High (tacit knowledge needed) | Lower (AI provides guidance) |
| Resources required | Significant (state-level for WMD) | Reduced (smaller groups can act) |
| Attack sophistication | Limited by human planning | Enhanced by AI optimization |
| Defense effectiveness | Often adequate | Offense may outpace defense |

The “Democratization of Destruction” Problem


AI potentially allows small groups to cause harm that previously required nation-state resources. This is particularly concerning for bioweapons, where the barriers have been:

  1. Access to dangerous pathogen sequences (now more available)
  2. Knowledge of synthesis techniques (AI can provide)
  3. Lab equipment (increasingly available)
  4. Tacit knowledge (AI reduces this requirement)

Early warning signs to monitor include:

  1. Capability proliferation: AI tools that could assist attack planning becoming widely available
  2. Concerning queries: Reports of AI systems being asked about attack methods
  3. Radicalization AI: Use of AI for recruitment by extremist groups
  4. Near-misses: Foiled attacks that show AI involvement in planning
  5. Lab security failures: Breaches at facilities with dangerous biological materials
  6. Infrastructure vulnerabilities: Discovery of critical systems susceptible to AI-enhanced attack

Technical/Access Controls:

  • DNA synthesis screening — Prevent synthesis of dangerous sequences (see Bioweapons Risk for details)
  • AI model access restrictions for dangerous queries
  • Know-Your-Customer requirements for AI services
  • Watermarking and monitoring of AI-generated content
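
As a rough illustration of the first control, here is a minimal sketch of sequence screening by shared k-mer matching. The function names are hypothetical and the sequences are biologically meaningless placeholders; real screening systems use curated hazard databases and far more sophisticated (e.g., fuzzy and homology-based) matching:

```python
# Toy sketch of DNA synthesis order screening: flag an order that shares
# any length-k subsequence (k-mer) with an entry on a watchlist.
# Sequences below are meaningless placeholders, not real hazard data.

def kmers(seq: str, k: int) -> set[str]:
    """All length-k substrings of seq (empty set if seq is shorter than k)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, watchlist: list[str], k: int = 12) -> bool:
    """Return True if the order shares a k-mer with any watchlist entry."""
    order_kmers = kmers(order_seq.upper(), k)
    return any(order_kmers & kmers(entry.upper(), k) for entry in watchlist)

WATCHLIST = ["ATGCGTACGTTAGCATGCATGC"]  # placeholder entry

print(screen_order("CCCCATGCGTACGTTAGCGGGG", WATCHLIST))    # shares a 12-mer → True
print(screen_order("AAAATTTTCCCCGGGGAAAATTTT", WATCHLIST))  # no shared 12-mer → False
```

Exact k-mer matching is easy to evade by introducing small edits, which is why deployed screening relies on fuzzier similarity measures; the sketch only conveys the basic shape of the control.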

Defensive Measures:

  • AI-enhanced detection and response
  • Infrastructure hardening and redundancy
  • Broad-spectrum medical countermeasures (e.g., metagenomic sequencing)

Governance:

  • International coordination on AI misuse prevention
  • Export controls on dual-use capabilities
  • Liability frameworks for AI providers

| Factor | Assessment |
| --- | --- |
| Bio attack capability | Increasing; current LLMs provide some uplift |
| Bio attack motivation | Low base rate but non-zero |
| Cyber attack capability | Significantly enhanced by AI |
| Civilizational-scale outcome | Uncertain; depends on specific attack and response |

The combination of low base rates (most people don’t want to cause mass harm) with increasing capability (AI lowers barriers) creates genuine uncertainty about risk levels.
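
That tension can be made explicit with a toy expected-harm identity, in which AI-driven capability uplift raises an attacker's success probability while the base rate of motivated actors stays fixed. All numbers below are illustrative placeholders, not estimates:

```python
# Toy expected-harm sketch: annual risk ≈ (motivated actors)
# × (probability a motivated actor succeeds) × (harm if successful).
# The misuse concern: AI uplift raises success probability even when
# the base rate of motivated actors is unchanged and low.
# All numbers are illustrative placeholders, not real estimates.

def expected_annual_harm(motivated_actors: float,
                         p_success: float,
                         harm_if_success: float) -> float:
    return motivated_actors * p_success * harm_if_success

baseline = expected_annual_harm(10, 0.001, 1e6)     # pre-AI: success is hard
with_uplift = expected_annual_harm(10, 0.01, 1e6)   # 10x success probability

print(with_uplift / baseline)  # risk scales roughly with the capability uplift
```

The point of the identity is qualitative: because expected harm is multiplicative, a capability factor that grows while motivation stays constant still drives total risk upward.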


Ratings

| Metric | Score | Interpretation |
| --- | --- | --- |
| Changeability | 35/100 | Somewhat influenceable |
| X-risk Impact | 70/100 | Substantial extinction risk |
| Trajectory Impact | 45/100 | Significant effect on long-term welfare |
| Uncertainty | 65/100 | Moderate uncertainty in estimates |