The possibility that advanced AI systems could cause human extinction or permanent civilizational collapse has moved from science fiction speculation to mainstream concern among AI researchers, policymakers, and technology leaders. Surveys of AI researchers consistently find substantial probability estimates for existential outcomes: the 2023 AI Impacts survey elicited median estimates of roughly 5-10% (depending on question framing) for “extremely bad outcomes (e.g., human extinction)” from advanced AI, while some leading researchers estimate significantly higher probabilities.
Multiple pathways to existential catastrophe have been identified. The most discussed is misaligned superintelligence: an AI system with capabilities far exceeding human intelligence that pursues goals misaligned with human values, potentially eliminating humanity as an obstacle or side effect. Other pathways include human misuse of powerful AI (e.g., engineered pandemics, automated warfare), loss of human control and agency to AI systems even without explicit “takeover,” and AI-enabled totalitarian lock-in that permanently forecloses humanity’s future potential.
The field remains deeply uncertain. Some researchers consider existential risk from AI speculative and unlikely; others view it as the most important problem facing humanity. This uncertainty itself is important: given the irreversibility of existential outcomes, even modest probabilities warrant substantial preventive effort.
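To make that precautionary logic concrete, consider a deliberately stylized expected-loss sketch; the probability p and the stakes V below are hypothetical placeholders chosen for illustration, not estimates drawn from the surveys above:

\[
\mathbb{E}[\text{loss}] = p \cdot V, \qquad \text{e.g. } p = 0.01,\; V = 10^{4} \;\Rightarrow\; \mathbb{E}[\text{loss}] = 100.
\]

Even a 1% probability applied to stakes valued at ten thousand units yields an expected loss of one hundred units, and because an existential outcome is irreversible, no later correction can recover that loss. This is why advocates of preventive effort argue that the appropriate level of investment should scale with the magnitude of what is at stake, not only with the point estimate of the probability.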