# Worldview Profiles
People’s views on AI safety tend to cluster into recognizable worldviews. Each worldview combines beliefs about risk, feasibility, and strategy into a coherent package.
## Major Worldviews
| Worldview | Core Belief | P(doom) | Priority Approaches |
|---|---|---|---|
| AI Doomer | Alignment is very hard; timelines short | 30-90% | Pause, agent foundations, governance |
| Optimistic Alignment | Alignment is tractable engineering | <5% | RLHF, evals, lab culture |
| Governance-Focused | Technical solutions need policy support | 10-30% | Policy, compute governance, coordination |
| Long-Timelines Technical | We have time for careful research | 5-20% | Interpretability, agent foundations |
## How Worldviews Form
Your worldview emerges from your answers to key cruxes:
```
Short timelines? ──┬── Yes ──► Alignment hard? ──┬── Yes ──► DOOMER
                   │                             └── No ───► Urgent but optimistic
                   │
                   └── No ───► Alignment hard? ──┬── Yes ──► LONG-TIMELINES TECHNICAL
                                                 └── No ───► OPTIMISTIC ALIGNMENT
```

The governance-focused worldview cuts across these branches: it rests on the belief that technical solutions alone are insufficient.
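The same branching can be written as a small decision function. The sketch below is a minimal Python illustration; `classify_worldview` and its parameter names are hypothetical, introduced here only to make the tree explicit. The governance-focused worldview is omitted because it cuts across both branches rather than falling out of these two cruxes.

```python
def classify_worldview(short_timelines: bool, alignment_hard: bool) -> str:
    """Map answers to the two key cruxes onto the nearest worldview profile.

    Hypothetical helper mirroring the decision tree above; not from any library.
    """
    if short_timelines:
        # Short timelines: alignment difficulty decides between doom and urgency.
        return "AI Doomer" if alignment_hard else "Urgent but optimistic"
    # Long timelines: alignment difficulty decides how the extra time is spent.
    return "Long-Timelines Technical" if alignment_hard else "Optimistic Alignment"

# Example: short timelines + hard alignment -> "AI Doomer"
print(classify_worldview(short_timelines=True, alignment_hard=True))
```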
## Using This Section
- Identify your worldview: Which profile is closest to your current beliefs?
- Understand alternatives: What would you need to believe to hold a different view?
- Find cruxes: Where do you differ from other worldviews?
- Steel-man: Can you articulate the strongest version of views you disagree with?
## Cross-Cutting Dimensions
Some beliefs vary within each worldview:
| Dimension | Spectrum |
|---|---|
| AI assistance | Trust AI to help with safety ↔ Keep humans in charge |
| Open source | Net positive ↔ Net negative |
| Lab work | Change from inside ↔ Change from outside |
| Theory vs. empirical | Formal methods ↔ Experimental safety |