Epistemic Harms
Epistemic harms are end states in which AI has damaged humanity's ability to know things, verify claims, and make good collective decisions.
How These Risks Connect
Pathways and amplifiers (covered later in this section) increase the likelihood of the outcome-level harms below.
The Risks
These are end-state harms to humanity's epistemic capacity:
| Risk | Description |
|---|---|
| Knowledge Monopoly | A few AI systems become sole arbiters of “truth” |
| Institutional Capture | AI advisors subtly bias all organizational decisions |
| Reality Fragmentation | Populations live in incompatible information environments |
| Epistemic Collapse | Society loses ability to establish shared facts |
| Trust Cascade Failure | Once trust breaks, no mechanism exists to rebuild it |
| Learned Helplessness | People give up trying to know anything; epistemic nihilism |
Additional Specific Harms
| Risk | Description |
|---|---|
| Legal Evidence Crisis | Courts can no longer trust digital evidence |
| Cyber Psychosis | AI-induced psychological harm from reality distortion |
Contributing Pathways and Amplifiers
These outcome-level harms are driven by pathways and amplifiers (also in this section):
| Factor | How It Contributes |
|---|---|
| Authentication Collapse | Verification fails → can’t distinguish real from fake |
| Scientific Corruption | Fake papers proliferate → knowledge base corrupted |
| Expertise Atrophy | Humans lose ability to evaluate AI outputs |
| Sycophancy at Scale | AI confirms biases → no reality check |
| Preference Manipulation | AI shapes what people want, not just believe |
| Consensus Manufacturing | Fake agreement masks actual disagreement |
| Historical Revisionism | Past becomes contested → shared history lost |
| Automation Bias | Over-reliance on AI recommendations |
| Trust Erosion | Gradual decline enables sudden collapse |
Causal Levels Explained
This section includes risks at three causal levels:
Outcomes (what we ultimately want to avoid):
- Society can’t agree on basic facts
- Institutions make systematically biased decisions
- People stop trying to know things
Pathways (mechanisms leading to outcomes):
- Authentication systems failing
- Expertise degrading from disuse
- Trust eroding over time
Amplifiers (conditions that increase risk):
- AI telling users what they want to hear (sycophancy)
- Over-reliance on AI recommendations (automation bias)
The distinction matters because intervening on amplifiers and pathways can prevent multiple downstream outcomes.
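The three-level distinction can be made concrete with a small causal-graph sketch. The risk names below come from this section's own tables, but the specific edges are illustrative assumptions chosen for demonstration, not claims made by the text:

```python
# Illustrative sketch only: the section says pathways and amplifiers drive
# outcomes, but the exact edges below are assumed for demonstration.

# Edges run "contributes to": amplifier -> pathway -> outcome.
EDGES = {
    "Sycophancy at Scale": ["Trust Erosion", "Preference Manipulation"],
    "Automation Bias": ["Expertise Atrophy"],
    "Authentication Collapse": ["Reality Fragmentation", "Epistemic Collapse"],
    "Expertise Atrophy": ["Institutional Capture", "Learned Helplessness"],
    "Trust Erosion": ["Trust Cascade Failure", "Epistemic Collapse"],
    "Preference Manipulation": ["Reality Fragmentation"],
}

# End-state harms, from "The Risks" table.
OUTCOMES = {
    "Knowledge Monopoly", "Institutional Capture", "Reality Fragmentation",
    "Epistemic Collapse", "Trust Cascade Failure", "Learned Helplessness",
}

def reachable_outcomes(graph, start):
    """Depth-first search: which end-state harms can `start` feed into?"""
    seen, stack, hits = set(), [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in OUTCOMES:
            hits.add(node)
        stack.extend(graph.get(node, []))
    return hits

def without(graph, removed):
    """The graph after intervening on (removing) one pathway or amplifier."""
    return {k: [v for v in vs if v != removed]
            for k, vs in graph.items() if k != removed}

# Intervening on the single pathway "Trust Erosion" cuts off every outcome
# it feeds, not just one -- the section's point about upstream interventions.
before = reachable_outcomes(EDGES, "Sycophancy at Scale")
after = reachable_outcomes(without(EDGES, "Trust Erosion"), "Sycophancy at Scale")
print(sorted(before - after))  # ['Epistemic Collapse', 'Trust Cascade Failure']
```

Removing one mid-level node blocks two distinct downstream outcomes at once, which is why intervening at the pathway or amplifier level can be more leveraged than addressing each outcome separately.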
What Makes AI Different
Previous information problems (propaganda, fake news) were limited by human capacity. AI changes the game:
| Old Problem | AI Escalation |
|---|---|
| Propaganda exists | Personalized propaganda for each individual |
| Fake news spreads | Generated faster than verification |
| Evidence can be faked | All digital evidence becomes deniable |
| Experts can be wrong | Expertise itself atrophies |
| Institutions can be corrupted | AI advisors capture decisions invisibly |
Severity and Reversibility
| Risk | Severity | Reversibility |
|---|---|---|
| Knowledge Monopoly | High | Difficult (infrastructure lock-in) |
| Institutional Capture | High | Moderate (requires institutional reform) |
| Reality Fragmentation | High | Difficult (no shared ground to rebuild) |
| Epistemic Collapse | Catastrophic | Very difficult |
| Trust Cascade Failure | High | Very difficult (no trusted rebuilder) |
| Learned Helplessness | High | Generational |
Relationship to Other Risk Categories
Epistemic + Structural
- Knowledge monopoly enables concentration of power
- Trust collapse can accelerate lock-in
Epistemic + Misuse
- Disinformation is a key driver of reality fragmentation
- Deepfakes accelerate authentication collapse
Epistemic + Accident
- Epistemic collapse makes it harder to evaluate AI alignment
- Sycophancy in AI systems contributes to sycophancy at scale