Worldviews
Overview
People working on AI safety hold diverse worldviews that lead to different risk assessments and priorities. Understanding these worldviews helps explain disagreements and enables more productive dialogue.
Major Worldviews
Doomer
Believes AI existential risk is very high (often >50% p(doom)):
- Alignment is fundamentally hard
- Current approaches are inadequate
- We may not get many chances to get it right
- Often associated with: MIRI, Eliezer Yudkowsky, some AI safety researchers
Long Timelines
Believes transformative AI is further away (20+ years):
- More time to solve alignment
- Current risks are more speculative
- Near-term concerns deserve more attention
- Associated with: Some ML researchers, AI ethics community
Governance Focused
Prioritizes policy and institutional solutions:
- Technical alignment is necessary but insufficient
- Racing dynamics are the key problem
- International coordination is critical
- Associated with: GovAI, policy researchers, some EA organizations
Optimistic
Believes alignment is likely to succeed:
- Current research is making real progress
- Markets and institutions create safety incentives
- Past technology fears were often overblown
- Associated with: Some lab researchers, techno-optimists
How Worldviews Affect Priorities
| Worldview | Technical Research | Governance | Timelines | Key Intervention |
|---|---|---|---|---|
| Doomer | Critical but may be hopeless | Valuable | Short | Pause/slow down |
| Long Timelines | Important, time available | Moderate priority | Long | Gradual progress |
| Governance Focused | Necessary | Highest priority | Medium | Coordination |
| Optimistic | On track | Light touch | Medium | Continue current |
Using Worldview Analysis
Explicitly identifying worldview assumptions helps:
- Understand why experts disagree
- Identify which cruxes would change conclusions
- Design robustly valuable interventions
- Have more productive conversations