State actors represent the most capable and consequential class of AI users, with both the resources to develop advanced systems and the power to deploy them at scale. The intersection of AI and state power creates multiple catastrophic risk pathways. Military AI could enable rapid-onset conflicts that escalate beyond human control, with autonomous systems making life-and-death decisions in milliseconds. AI-enhanced surveillance and control systems could enable unprecedented authoritarianism, locking in oppressive regimes indefinitely.
Great-power competition between the United States and China is the primary driver of state AI risk. Both nations view AI leadership as essential to national security and economic competitiveness, creating intense racing dynamics that pressure each side to prioritize speed over safety. The US invests over $15 billion annually in military AI; China aims to be the world leader in AI by 2030. This competition makes international coordination on AI safety extremely difficult.
The nuclear analogy is imperfect but instructive. Like nuclear weapons, AI could enable rapid, devastating attacks that outpace human decision-making. Unlike nuclear technology, however, AI development is more diffuse, harder to verify, and delivers continuous rather than discrete capability gains. Its governance challenges may therefore be even harder than those of nuclear arms control, a regime that itself took decades and several near-catastrophes to build.