
Global Resilience

Most AI safety work focuses on prevention: ensuring AI systems don’t cause harm. But prevention may fail. Resilience asks: If things go wrong, how do we limit damage and recover?

Resilience is valuable because:

  • Uncertainty: We may not prevent all AI harms
  • Redundancy: Defense in depth is wise
  • Graceful degradation: Partial failures shouldn’t become total failures
  • Recovery capacity: Even after harm, rebuilding matters

Epistemic Resilience

Maintaining society’s ability to know what’s true despite AI-enabled deception:

  • Epistemic Security — Protecting collective knowledge and truth-finding capacity
  • Content Authentication — Verifying what’s real in a synthetic content era
  • Institutional Trust — Preserving and rebuilding trust in key institutions
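One concrete building block for content authentication is cryptographic provenance: a publisher attaches a tag to content at creation time, and anyone can later check that the content has not been altered. Below is a minimal sketch using Python's standard library; the function names are illustrative, and the shared-key HMAC scheme stands in for the asymmetric signatures (and standards such as C2PA) that real provenance systems use.

```python
import hmac
import hashlib

def sign_content(content: bytes, key: bytes) -> str:
    """Publisher-side step: compute an authentication tag over the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    """Consumer-side step: recompute the tag and compare in constant time."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"shared-secret"  # illustrative only; deployed systems use public-key infrastructure
tag = sign_content(b"original article text", key)

print(verify_content(b"original article text", key, tag))  # True: untouched content verifies
print(verify_content(b"tampered article text", key, tag))  # False: any edit breaks the tag
```

Note that with a symmetric key, anyone who can verify can also forge; that is why production provenance schemes sign with a private key and verify with a public one.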

Infrastructure Resilience

Ensuring critical systems function despite AI-related disruption:

  • Critical Infrastructure Protection — Power, communications, finance, healthcare
  • AI Dependency Management — Avoiding single points of AI failure
  • Cyber Resilience — Defending against AI-enhanced cyber attacks
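The "avoid single points of AI failure" idea from AI Dependency Management can be made concrete as a fallback chain: route requests to an AI-backed service, and if it errors or times out, degrade to a simpler deterministic backup rather than failing outright. A minimal sketch, with all service names hypothetical:

```python
from typing import Callable

def with_fallback(primary: Callable[[str], str],
                  backup: Callable[[str], str]) -> Callable[[str], str]:
    """Route requests to `primary`; on any failure, degrade to `backup`."""
    def handler(request: str) -> str:
        try:
            return primary(request)
        except Exception:
            # Reduced function beats no function: serve the simpler answer.
            return backup(request)
    return handler

def ai_triage(request: str) -> str:
    """Hypothetical AI-backed service, unavailable in this scenario."""
    raise TimeoutError("model endpoint unavailable")

def rule_based_triage(request: str) -> str:
    """Deterministic backup that needs no AI system at all."""
    return "default-queue"

handler = with_fallback(ai_triage, rule_based_triage)
print(handler("incoming ticket"))  # -> default-queue
```

The design choice worth noting: the backup is deliberately low-tech, so a correlated failure of AI systems cannot take out both layers at once.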

Societal Resilience

Maintaining social cohesion and function under AI-induced stress:

  • Economic Adaptation — Managing AI-driven labor disruption
  • Democratic Resilience — Protecting democratic processes from AI manipulation
  • Community Resilience — Local capacity for mutual aid and recovery

Governance Resilience

Ensuring governance systems remain functional and legitimate:

  • Regulatory Adaptability — Governance that can respond to rapid AI change
  • International Stability — Avoiding AI-triggered conflict escalation
  • Institutional Redundancy — Backup systems for critical governance functions

Design Principles

Resilient systems share several design principles:

  • Redundancy: No single point of failure; multiple systems can perform critical functions.
  • Diversity: Avoid monoculture; different approaches reduce correlated failures.
  • Containment: Failures should be contained; damage to one component shouldn’t cascade.
  • Graceful degradation: Systems should fail partially, not totally; reduced function beats no function.
  • Adaptability: Systems should learn and adjust; rigid systems break, flexible systems bend.
  • Recoverability: After failure, systems should be rebuildable; preserve knowledge and capacity for reconstruction.
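Several of these principles compose naturally in software: query diverse redundant components, contain each component's failure so it cannot cascade, and degrade gracefully by returning whatever subset still works. A schematic sketch (the component names are hypothetical):

```python
from typing import Callable

def resilient_query(components: list[Callable[[], str]]) -> list[str]:
    """Redundancy + containment: ask every component, isolate each failure,
    and return the partial results instead of nothing."""
    results = []
    for component in components:
        try:
            results.append(component())
        except Exception:
            continue  # containment: one failed component doesn't cascade
    return results    # graceful degradation: fewer answers, not zero function

def system_b() -> str:
    raise RuntimeError("system B down")  # simulated component failure

components = [
    lambda: "answer from system A",
    system_b,
    lambda: "answer from system C",
]
print(resilient_query(components))  # -> ['answer from system A', 'answer from system C']
```

Diversity matters here too: if all three components shared the same underlying model or dependency, their failures would be correlated and the redundancy would be illusory.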


Prevention Focus                        | Resilience Focus
Stop bad outcomes from occurring        | Survive and recover if bad outcomes occur
Requires accurate prediction            | Robust to prediction failure
High value if successful                | Valuable even if prevention succeeds
May create fragility (single strategy)  | Builds robustness (multiple defenses)

Best approach: Both. Prevention is primary; resilience is backup.


AI Risk              | Resilience Relevance
Misalignment         | May not be preventable; need to survive initial failures
Misuse               | Can’t prevent all misuse; need to limit damage
Racing dynamics      | May not be stoppable; need to handle fast development
Coordination failure | If coordination fails, resilience is fallback

Resilience isn’t a substitute for alignment or safety research. It’s a complement:

  • Safety research: Make AI systems safe
  • Resilience: Survive if safety fails

Neither alone is sufficient. Both together provide defense in depth.


Open questions:

  • How much should we invest in resilience vs. prevention?
  • Which resilience measures are most tractable and important?
  • How do we build resilience without creating new risks?
  • What resilience measures are valuable across many AI scenarios?