Structural Risks
Structural risks are harmful end states that emerge from how AI development and deployment reshape society—rather than from individual AI systems failing or being misused.
How These Risks Connect
[Diagram: contributing amplifiers increase the structural risks listed below.]
The Risks
| Risk | Description |
|---|---|
| Concentration of Power | AI enabling unprecedented power accumulation by small groups |
| Authoritarian Takeover | Stable, durable authoritarianism that is harder to reverse than historical autocracies |
| Erosion of Human Agency | Humans losing meaningful control over their lives and decisions |
| Enfeeblement | Humanity losing capability to function independently of AI |
| Lock-in | Permanent entrenchment of values, systems, or structures |
What Makes These “Structural”?
These are end-state harms that emerge from system dynamics, not individual failures:
- System-level emergence - They arise from how AI reshapes society, not from any single AI system misbehaving
- Could occur even if AI works as intended - Perfect AI alignment doesn’t prevent power concentration or human skill loss
- Difficult to attribute - No single actor “causes” these risks; they emerge from collective dynamics
- Path dependent - Early decisions constrain later options in ways that may be hard to reverse
Contributing Amplifiers
These structural outcome risks are influenced by the following amplifiers (also in this section):
| Factor | How It Contributes |
|---|---|
| Racing Dynamics | Accelerates development, reduces safety margins |
| Winner-Take-All | Amplifies concentration of power |
| Economic Disruption | Creates dependency, erodes agency |
| Flash Dynamics | Reduces human oversight capacity |
| Irreversibility | Makes lock-in more likely |
| Multipolar Trap | Drives racing, concentration |
| Proliferation | Spreads capabilities to more actors |
Relationship to Other Risk Categories
Structural + Accident Risks
Racing dynamics (an amplifier) increase accident probability. Power concentration means accidents by dominant actors have larger consequences.
Structural + Misuse Risks
Power concentration enables misuse at scale. Authoritarian takeover is both a structural risk in itself and an enabler of systematic misuse.
Structural + Epistemic Risks
Power concentration enables information control. Enfeeblement reduces humanity’s ability to evaluate AI systems.
Key Uncertainties
Key Questions
Does AI actually concentrate power more than previous technologies?
Is capability loss from AI qualitatively different from previous technological shifts?
Can coordination mechanisms address structural risks, or are these risks inherent to AI development?
How would we know if we're entering a lock-in scenario?
Are structural risks amenable to technical solutions, or only governance solutions?
Research Landscape
| Source | Focus |
|---|---|
| Bostrom (2014): Superintelligence | AI trajectories, “dystopian” scenarios |
| Ord (2020): The Precipice | Existential risk, “dystopian lock-in” |
| MacAskill (2022): What We Owe the Future | Value lock-in, trajectory changes |
| GovAI | AI governance research |
| AI Now Institute | Concentration, power, inequality |
Sources & References
- Superintelligence - Nick Bostrom (2014)
- The Precipice - Toby Ord (2020)
- What We Owe the Future - Will MacAskill (2022)
- GovAI Research
- AI Now Institute