# Start Here
This wiki maps the landscape of AI existential risk—the arguments, key uncertainties, organizations, and interventions. Here’s how to navigate it.
## Core Framework: Key Parameters

The wiki is organized around Key Parameters—foundational variables that AI development can push in either direction. This framework connects:
- Risks — What decreases these parameters (51 documented risks)
- Responses — What increases or protects them (technical & governance approaches)
Parameters include things like alignment robustness, racing intensity, societal trust, and human agency. Start with the Parameters overview to understand the analytical framework.
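The framework above can be pictured as a toy data model—a minimal sketch for intuition only, with hypothetical names that are not the wiki's actual schema. Each parameter carries two opposing lists: risks that push it down and responses that push it up.

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    """Toy illustration of a Key Parameter (hypothetical structure,
    not the wiki's real data model)."""
    name: str
    # Forces that decrease the parameter (the Risks section)
    risks: list[str] = field(default_factory=list)
    # Forces that increase or protect it (the Responses section)
    responses: list[str] = field(default_factory=list)

# Example using one parameter named in this page
alignment = Parameter("alignment robustness")
alignment.risks.append("racing dynamics")
alignment.responses.append("technical alignment research")
```

The point of the structure is simply that every parameter page links outward in both directions: to the risks that erode it and the responses that defend it.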
## Main Sections

| Section | What’s There |
|---|---|
| Key Parameters | 22 foundational variables with trends, risks, and interventions |
| Risks | Accident, misuse, structural, and epistemic risks |
| Responses | Technical alignment approaches and governance interventions |
| Organizations | Frontier labs, safety research orgs, government bodies |
| People | Key researchers and their positions |
| Key Debates | Structured arguments on contested questions |
| Cruxes | 53 key uncertainties driving disagreements |
## Quick Paths

- Want to understand the risk argument? → AI Transition Model presents the framework for understanding AI outcomes
- Want to see what can be done? → Responses covers technical and governance approaches
- Want to understand disagreements? → Key Debates presents the strongest arguments on each side
- Want data and estimates? → Key Metrics has forecasts and measurements
## Key Numbers

| Question | Range |
|---|---|
| P(transformative AI by 2040) | 40-80% |
| P(doom) estimates | 5-90% |
| AI safety researchers | ~300-1000 FTE |
| Annual safety funding | ~$100-500M |
## This Wiki’s Perspective

This wiki was created within the AI safety community and reflects that perspective. It:
- Maps arguments, organizations, and research in the field
- Presents the range of views within AI safety
- Uses the Key Parameters framework to connect risks and responses
- Does not claim neutrality—see the About page for limitations
If you’re skeptical of the AI safety framing, this wiki can help you understand what researchers believe and why.
## Browse

- Knowledge Base — All categories
- All Entities — Searchable database
- Entity Graph — Visual relationships