Key Uncertainties & Cruxes

A crux is a question where:

  1. Different answers lead to significantly different conclusions or priorities
  2. The question is (at least somewhat) empirically resolvable
  3. Reasonable people currently disagree on the answer

Identifying cruxes helps:

  • Focus research on the most decision-relevant questions
  • Enable productive disagreement (find what actually matters)
  • Track how evidence should update views over time

Key uncertainties about which risks matter most and how severe they are:

| Domain | Page | Key Questions |
| --- | --- | --- |
| Accident Risks | Technical failures, misalignment | Will we get warning signs? Can we verify alignment? |
| Misuse Risks | Intentional harm, weapons | How much does AI amplify malicious actors? |
| Structural Risks | Concentration, racing, lock-in | Are racing dynamics inevitable? |
| Epistemic Risks | Truth, knowledge, trust | Can verification keep pace with generation? |

Key uncertainties about what interventions work:

| Domain | Page | Key Questions |
| --- | --- | --- |
| Solutions | Epistemic & coordination tools | Can AI defense match AI offense? |

Some questions cut across multiple domains:

  • Timelines: When will transformative AI arrive? How much does this affect which interventions matter?
  • Alignment difficulty: Is alignment a hard technical problem or an engineering challenge? Do current techniques scale?
  • Coordination: Can labs and governments coordinate effectively? Do racing dynamics prevent adequate safety margins?
  • Warning signs: Will we get clear signals before catastrophic capabilities? Can evaluations provide meaningful safety guarantees?

For prioritization:

  • Identify which cruxes most affect your own views
  • Focus your learning on resolving those key uncertainties

For research:

  • Target empirical work at resolvable cruxes
  • Design studies that could shift community views

For dialogue:

  • Locate the crux where you and others actually disagree
  • Avoid arguing past each other over downstream conclusions