
Formal Arguments

This section presents formal, structured arguments about AI risk. Rather than mixing claims and evidence throughout narrative text, we lay out explicit premises, evidence, and logical structures.

Benefits of this approach:

  - Clarity: Explicit premises make it clear exactly what each argument claims
  - Falsifiability: It is clear what evidence would change the conclusion
  - Steelmanning: Each position is presented in its strongest form
  - Intellectual honesty: Uncertainty is expressed ("I assign 30% to P2") rather than pretending certainty

Each argument page follows this format:

  1. Thesis Statement: One-sentence summary
  2. Formal Structure: Premises (P1, P2, …) leading to conclusion (C)
  3. Evidence for Each Premise: Empirical data, theoretical reasoning, expert opinions
  4. Objections and Responses: Strongest counterarguments and potential replies
  5. Cruxes: What evidence would change the conclusion?
On Existential Risk

Case FOR AI X-Risk: The strongest argument that AI poses existential risk.

Case AGAINST AI X-Risk: A steelmanned skeptical position on AI risk.

On Alignment Difficulty

Why Alignment Might Be Hard: Arguments for fundamental alignment difficulty.

Why Alignment Might Be Easy: Arguments for alignment tractability.

If you are a skeptic: Read the AGAINST case first, then identify which premises in the FOR case you reject.

If you are concerned: Read the FOR case first, then seriously engage with the AGAINST case.

If you are uncertain: Read both, identify which premises you find most and least convincing, and track your credences.


See also: Key Debates for less formal explorations of contested questions.