Rogue actors (terrorists, criminal organizations, lone wolves, and ideologically motivated individuals) have historically been limited in their capacity to cause catastrophic harm by capability constraints: building weapons of mass destruction required resources and expertise that few non-state actors possessed. AI threatens to change this calculus by dramatically lowering the knowledge and skill barriers to catastrophic attacks.
The most concerning pathways involve AI-assisted development of biological weapons and AI-enabled cyberattacks on critical infrastructure. Studies have shown that current LLMs provide meaningful assistance to individuals seeking to develop biological agents, with “uplift” factors of 1.3-2.5x for non-experts. In cybersecurity, AI tools can automate vulnerability discovery and attack execution, enabling sophisticated operations by less skilled actors. As AI capabilities advance, these risks will grow.
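To make the uplift figure concrete: an uplift factor is the ratio of a non-expert's task success rate with AI assistance to their success rate without it. The sketch below works through the arithmetic; the baseline rate is a hypothetical placeholder, and only the 1.3-2.5x range comes from the studies cited above.

```python
# Illustrative arithmetic for an "uplift" factor: the multiplier applied to a
# non-expert's unassisted success rate. All baseline numbers are hypothetical.

def assisted_success_rate(baseline_rate: float, uplift: float) -> float:
    """Success rate with AI assistance, capped at 100%."""
    return min(baseline_rate * uplift, 1.0)

# Hypothetical baseline: a non-expert succeeds at a hard technical task 4% of the time.
baseline = 0.04
for uplift in (1.3, 2.5):
    print(f"uplift {uplift}x: {assisted_success_rate(baseline, uplift):.1%}")
# uplift 1.3x: 5.2%
# uplift 2.5x: 10.0%
```

Even at the low end of the range, the absolute increase in success probability compounds across many attempts and many actors, which is why modest uplift figures are treated as significant.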
Unlike state actors, rogue actors are harder to deter and harder to negotiate with; they may hold apocalyptic or nihilistic motivations that make them indifferent to consequences. The “long tail” of ideologically motivated individuals means that even if the vast majority of people would never misuse AI, a small fraction of billions of potential users could cause enormous harm, as the back-of-envelope sketch below illustrates. Traditional security approaches focused on preventing capability acquisition may be insufficient when those capabilities are embedded in widely available AI systems.
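The long-tail argument is ultimately arithmetic: multiplying a tiny misuse rate by a very large user base still yields a non-trivial absolute number of motivated actors. The sketch below makes that explicit; every number in it is a hypothetical placeholder chosen for illustration, not an estimate from the text.

```python
# Back-of-envelope sketch of the "long tail" argument: a tiny misuse rate
# across billions of users still yields many motivated actors in absolute terms.
# Every number below is a hypothetical placeholder, not an empirical estimate.

users = 3e9              # hypothetical global user base for widely available AI
misuse_rate = 1e-6       # hypothetical fraction with both intent and persistence
success_prob = 0.01      # hypothetical per-actor chance of a catastrophic success

motivated_actors = users * misuse_rate
expected_successes = motivated_actors * success_prob
print(f"motivated actors:   {motivated_actors:,.0f}")   # 3,000
print(f"expected successes: {expected_successes:.0f}")  # 30
```

The point of the sketch is that defenses cannot rely on the rarity of malicious intent alone: with a denominator in the billions, even one-in-a-million rates leave thousands of actors for whom capability, not motivation, is the binding constraint.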