Robot Threat Exposure
Robot Threat Exposure measures the degree to which AI-controlled physical systems, particularly lethal autonomous weapons systems (LAWS), enable deliberate harm at scale. Unlike cyber threats, which operate in digital space, robotic threats cause direct physical casualties, and autonomous weapons represent one of the most immediate military applications of AI.
Autonomous weapons are already battlefield realities. The March 2020 Libya incident marked a watershed: Turkish Kargu-2 drones allegedly engaged human targets autonomously. AI-guided drones in Ukraine reportedly achieve 70-80% hit rates, versus 10-20% for manually operated systems. The LAWS Proliferation Model projects that autonomous weapons will proliferate 4-6x faster than nuclear weapons did.
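To make the "4-6x faster" projection concrete, the toy sketch below compares two logistic adoption curves, interpreting the multiplier as time compression of a nuclear-style proliferation curve. The functional form, parameter values, and baseline timeline are illustrative assumptions, not the actual LAWS Proliferation Model.

```python
import math

def adopters(t_years: float, ceiling: int = 40, rate: float = 0.1, midpoint: float = 45.0) -> float:
    """Toy logistic curve: states fielding the capability t years after first deployment.

    All parameters are illustrative guesses, not fitted values from the model.
    """
    return ceiling / (1.0 + math.exp(-rate * (t_years - midpoint)))

SPEEDUP = 5  # midpoint of the projected 4-6x faster-than-nuclear proliferation

for t in (10, 20, 30):
    nuclear_like = adopters(t)
    laws_like = adopters(t * SPEEDUP)  # same curve, compressed 5x in time
    print(f"year {t:>2}: nuclear-style ≈ {nuclear_like:4.1f} states, LAWS-style ≈ {laws_like:4.1f} states")
```

With these illustrative parameters, the compressed curve reaches most of its ceiling within one to two decades, while the baseline curve is still in its early phase.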
| Metric | Score | Notes |
|---|---|---|
| Changeability | 40 | Moderately difficult—strong military incentives drive adoption |
| X-risk Impact | 60 | Moderate-high—direct pathway to mass casualties and escalation |
| Trajectory Impact | 50 | Shapes norms around AI in warfare and human control |
| Uncertainty | 65 | High uncertainty around proliferation speed and escalation dynamics |
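For readers comparing scorecards across threat categories, here is a minimal sketch of the table above encoded as data, assuming the scores are on a 0-100 scale; the equal-weight composite is a hypothetical aggregation, not one defined on this page.

```python
from dataclasses import dataclass

@dataclass
class ThreatExposure:
    """One exposure scorecard (scores assumed to be on a 0-100 scale)."""
    name: str
    changeability: int      # how hard the trajectory is to alter
    xrisk_impact: int       # contribution to existential-scale harm
    trajectory_impact: int  # influence on long-run norms around AI in warfare
    uncertainty: int        # spread around the above estimates

    def composite(self) -> float:
        """Hypothetical equal-weight mean; the source defines no aggregation rule."""
        return (self.changeability + self.xrisk_impact
                + self.trajectory_impact + self.uncertainty) / 4

robot = ThreatExposure("Robot Threat Exposure", changeability=40, xrisk_impact=60,
                       trajectory_impact=50, uncertainty=65)
print(f"{robot.name}: composite ≈ {robot.composite():.1f}")  # ≈ 53.8
```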
Related Content
Risks:
Models:
Key Debates:
- At what level of autonomy do AI weapons become unacceptably dangerous?
- Can autonomous weapons be controlled like nuclear weapons, or are they too easy to develop?
- Do coordinated autonomous swarms create qualitatively new risks beyond individual systems?