Start Here

This wiki maps the landscape of AI existential risk—the arguments, key uncertainties, organizations, and interventions. Here’s how to navigate it.

The wiki is organized around Key Parameters—foundational variables that AI development affects in both directions. This framework connects:

  • Risks — What decreases these parameters (51 documented risks)
  • Responses — What increases or protects them (technical & governance approaches)

Parameters include things like alignment robustness, racing intensity, societal trust, and human agency. Start with the Parameters overview to understand the analytical framework.

| Section | What’s There |
| --- | --- |
| Key Parameters | 22 foundational variables with trends, risks, and interventions |
| Risks | Accident, misuse, structural, and epistemic risks |
| Responses | Technical alignment approaches and governance interventions |
| Organizations | Frontier labs, safety research orgs, government bodies |
| People | Key researchers and their positions |
| Key Debates | Structured arguments on contested questions |
| Cruxes | 53 key uncertainties driving disagreements |

Want to understand the risk argument? The AI Transition Model presents the framework for understanding AI outcomes.

Want to see what can be done? Responses covers technical and governance approaches.

Want to understand disagreements? Key Debates presents the strongest arguments on each side.

Want data and estimates? Key Metrics has forecasts and measurements.

| Question | Range |
| --- | --- |
| P(transformative AI by 2040) | 40-80% |
| P(doom) estimates | 5-90% |
| AI safety researchers | ~300-1000 FTE |
| Annual safety funding | ~$100-500M |

This wiki was created within the AI safety community and reflects that perspective. It:

  • Maps arguments, organizations, and research in the field
  • Presents the range of views within AI safety
  • Uses the Key Parameters framework to connect risks and responses
  • Does not claim neutrality—see the About page for limitations

If you’re skeptical of the AI safety framing, this wiki can help you understand what researchers believe and why.