AI's impact on biological threat exposure represents one of the most concrete and near-term AI safety concerns. Multiple evaluations have demonstrated that current large language models can provide meaningful assistance to individuals seeking to develop biological weapons. OpenAI's evaluation of GPT-4 found it provided "at most a mild uplift" to experts but more significant assistance to non-experts. Anthropic's studies suggest uplift factors of 1.3-2.5x for certain tasks.
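The cited studies each define their metrics somewhat differently, so the following is an illustrative reading rather than the definition used in those evaluations: an uplift factor of this kind can be understood as a ratio of task success rates with and without model access,

\[
\text{uplift} = \frac{\Pr(\text{task success} \mid \text{model access})}{\Pr(\text{task success} \mid \text{no model access})}.
\]

On this reading, an uplift of 2x would mean participants with model access completed a task at roughly twice the rate of a comparison group working without one.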
The concern is not that AI enables biological attacks that were previously impossible: determined state actors have long possessed these capabilities. Rather, AI lowers the knowledge barriers that previously limited the pool of potential actors. Tasks that once required years of specialized training or access to classified information can now be partially assisted by widely available AI systems. This includes guidance on pathogen selection, synthesis routes, genetic modification strategies, and evasion of detection.
Current safeguards are inconsistent. While frontier labs implement filters on bioweapon-related queries, these protections vary in effectiveness, can sometimes be circumvented through prompt engineering, and may not exist in open-weight models at all. The dual-use nature of biological knowledge makes it difficult to restrict harmful applications without also limiting beneficial research.
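To make the circumvention problem concrete, the sketch below shows one filter architecture commonly described in this space: a cheap pattern screen backed by a learned classifier. Every name, phrase, and threshold here is invented for illustration and does not reflect any lab's actual system. The weakness the paragraph notes falls directly out of the design: a paraphrase that avoids the blocklist and scores below the classifier threshold passes both stages.

import re

# Illustrative two-stage query filter (hypothetical, not any lab's
# deployed implementation): a fast regex screen followed by a
# learned harm classifier.

BLOCKLIST = [
    r"\bsynthesis route\b",
    r"\bpathogen enhancement\b",
]

def pattern_screen(query: str) -> bool:
    """Stage 1: cheap regex screen for overtly flagged phrases."""
    return any(re.search(p, query, re.IGNORECASE) for p in BLOCKLIST)

def classifier_score(query: str) -> float:
    """Stage 2: placeholder for a learned harm classifier.

    A real system would call a trained model here; this stub returns
    a constant so the sketch runs end to end.
    """
    return 0.0

def should_refuse(query: str, threshold: float = 0.8) -> bool:
    # Either stage can trigger a refusal. Note the gap: a reworded
    # query that dodges the blocklist and scores under the threshold
    # is answered normally.
    return pattern_screen(query) or classifier_score(query) >= threshold

The same logic also explains the open-weight gap the paragraph mentions: filters of this kind sit in the serving stack, so anyone running model weights locally simply never passes through them.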