Structural Risks

Structural risks are harmful end states that emerge from how AI development and deployment reshape society—rather than from individual AI systems failing or being misused.

[Diagram: relationships among the five structural risks (concentration of power, authoritarian takeover, erosion of agency, enfeeblement, lock-in), with edges labeled “enables”, “entrenches”, “permanent”, “leads to”, “increases”, and “can’t resist”.]

| Risk | Description |
|---|---|
| Concentration of Power | AI enabling unprecedented power accumulation by small groups |
| Authoritarian Takeover | Stable, durable authoritarianism harder to reverse than historical autocracies |
| Erosion of Human Agency | Humans losing meaningful control over their lives and decisions |
| Enfeeblement | Humanity losing the capability to function independently of AI |
| Lock-in | Permanent entrenchment of values, systems, or structures |

These are end-state harms that emerge from system dynamics, not individual failures:

  1. System-level emergence - They arise from how AI reshapes society, not from any single AI system misbehaving
  2. Could occur even if AI works as intended - Perfect AI alignment doesn’t prevent power concentration or human skill loss
  3. Difficult to attribute - No single actor “causes” these risks; they emerge from collective dynamics
  4. Path-dependent - Early decisions constrain later options in ways that may be hard to reverse

These structural outcome risks are influenced by the following amplifiers (also covered in this section):

| Factor | How It Contributes |
|---|---|
| Racing Dynamics | Accelerates development, reduces safety margins |
| Winner-Take-All | Amplifies concentration of power |
| Economic Disruption | Creates dependency, erodes agency |
| Flash Dynamics | Reduces human oversight capacity |
| Irreversibility | Makes lock-in more likely |
| Multipolar Trap | Drives racing, concentration |
| Proliferation | Spreads capabilities to more actors |

These risks also interact with other risk categories. Racing dynamics (an amplifier) increase the probability of accidents, and power concentration means that accidents by dominant actors have larger consequences.

Power concentration enables misuse at scale; authoritarian takeover is both a structural risk in itself and an enabler of systematic misuse.

Power concentration also enables information control, and enfeeblement reduces humanity’s ability to evaluate AI systems.


Key Questions

  - Does AI actually concentrate power more than previous technologies?
  - Is capability loss from AI qualitatively different from that caused by previous technological shifts?
  - Can coordination mechanisms address structural risks, or are they inherent to AI development?
  - How would we know if we're entering a lock-in scenario?
  - Are structural risks amenable to technical solutions, or only to governance solutions?

Sources & References

| Source | Focus |
|---|---|
| Bostrom (2014): *Superintelligence* | AI trajectories, “dystopian” scenarios |
| Ord (2020): *The Precipice* | Existential risk, “dystopian lock-in” |
| MacAskill (2022): *What We Owe the Future* | Value lock-in, trajectory changes |
| GovAI | AI governance research |
| AI Now Institute | Concentration, power, inequality |