Superforecasting the Premises in 'Is Power-Seeking AI an Existential Risk?'
Abstract
We compare Carlsmith's probability estimates with those of superforecasters on the six premises of his argument that power-seeking AI poses an existential risk. The comparison reveals key cruxes: superforecasters are more optimistic about the tractability of alignment (P3) and more skeptical that misaligned systems will seek power in high-impact ways (P4).