Worldview Profiles

People’s views on AI safety tend to cluster into recognizable worldviews. Each worldview combines beliefs about risk, feasibility, and strategy into a coherent package.

| Worldview | Core Belief | P(doom) | Priority Approaches |
|---|---|---|---|
| AI Doomer | Alignment is very hard; timelines are short | 30-90% | Pause, agent foundations, governance |
| Optimistic Alignment | Alignment is tractable engineering | <5% | RLHF, evals, lab culture |
| Governance-Focused | Technical solutions need policy support | 10-30% | Policy, compute governance, coordination |
| Long-Timelines Technical | We have time for careful research | 5-20% | Interpretability, agent foundations |

Your worldview emerges from your answers to key cruxes:

Short timelines? ──┬── Yes ──► Alignment hard? ──┬── Yes ──► DOOMER
                   │                             └── No ───► Urgent but optimistic
                   └── No ───► Alignment hard? ──┬── Yes ──► LONG-TIMELINES TECHNICAL
                                                 └── No ───► OPTIMISTIC ALIGNMENT

The Governance-Focused worldview cuts across these branches: it rests on the belief that technical solutions alone are insufficient without policy and coordination.
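
To make the branching concrete, here is a minimal sketch in Python (not from the source; the function name, parameter names, and the separate governance flag are illustrative assumptions) that maps answers to the two cruxes, plus the governance belief, onto the profiles above.

```python
def classify_worldview(short_timelines: bool,
                       alignment_hard: bool,
                       technical_alone_sufficient: bool = True) -> str:
    """Map answers to the key cruxes onto a worldview label.

    Illustrative sketch only: the cruxes and labels follow the decision
    tree above; the governance overlay is modeled as a separate flag.
    """
    # The Governance-Focused view cuts across the tree: it is driven by
    # doubting that technical work alone is enough, whatever the timelines.
    if not technical_alone_sufficient:
        return "Governance-Focused"
    if short_timelines:
        return "AI Doomer" if alignment_hard else "Urgent but optimistic"
    return "Long-Timelines Technical" if alignment_hard else "Optimistic Alignment"


# Example: short timelines, alignment looks tractable, trusts technical work
print(classify_worldview(short_timelines=True, alignment_hard=False))
# -> Urgent but optimistic
```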

  1. Identify your worldview: Which profile is closest to your current beliefs?
  2. Understand alternatives: What would you need to believe to hold a different view?
  3. Find cruxes: Where do you differ from other worldviews?
  4. Steel-man: Can you articulate the strongest version of views you disagree with?

Some beliefs vary within each worldview:

| Dimension | Spectrum |
|---|---|
| AI assistance | Trust AI to help with safety ↔ Keep humans in charge |
| Open source | Net positive ↔ Net negative |
| Lab work | Change from inside ↔ Change from outside |
| Theory vs. empirical | Formal methods ↔ Experimental safety |