This wiki maps the landscape of AI existential risk—the arguments, disagreements, organizations, and interventions that matter for ensuring advanced AI goes well for humanity.
Whether you’re new to AI safety or a researcher looking for a comprehensive reference, this site aims to be your guide.
🎯 Understanding AI Risk
The core argument for why AI might pose existential risk, broken into key claims and cruxes.
⚖️ Key Debates
Structured arguments on contested questions:
🏢 Organizations
Profiles of key players:
🛡️ Safety Approaches
Technical and governance solutions:
📐 Key Parameters
Foundational variables AI affects in both directions:
📊 History
AI safety timeline from the 1956 Dartmouth Conference to the present day.
🗺️ Entity Graph
Visual dependency graph showing how entities connect.
🔮 Scenarios
Projections of how AI development might unfold.
📚 Knowledge Base
Browse all entries — risks, responses, organizations, people, and cruxes.
New to AI safety? Start with:
Want to contribute? Explore:
Looking for depth? Try:
| Metric | Estimate |
|---|---|
| P(transformative AI by 2040) | 40-80% (varies by source) |
| P(doom) | 5-90% (wide disagreement) |
| AI safety researchers | ~300-1,000 FTE (full-time equivalents) |
| Annual safety funding | ~$100-500M |
| Frontier lab safety spend | ~$50-200M (combined across labs) |
See the dashboard for more details.
Key voices in AI safety:
✅ Comprehensive — Covers technical, governance, and strategic perspectives
✅ Structured — Organized by cruxes, not just topics
✅ Parameter-oriented — Tracks foundational variables, not just risks
✅ Interactive — Timeline, risk maps, argument maps
✅ Practical — Career and funding guidance
This wiki is not neutral. It was created within the AI safety community and reflects that perspective. While we strive to present counterarguments fairly, readers should be aware:
What this wiki does well:
What this wiki does less well:
Key assumptions embedded in this wiki:
If you’re skeptical of these assumptions, this wiki may still be useful for understanding what AI safety researchers believe and why—but you should seek out alternative perspectives as well.
Recommended alternative viewpoints:
Read our full transparency statement →
This is an open project. Key areas where contributions would be valuable:
Explore by Topic
Browse the sidebar to explore specific topics, risks, and organizations.
Start Learning