Misuse Risks
Misuse risks arise when humans deliberately use AI systems to cause harm. Unlike accident risks, these require malicious intent; what AI changes is how dramatically it amplifies the harm a bad actor can do.
How These Risks Connect
Weapons & Violence
AI enabling new forms of violence:
| Risk | Description |
|---|---|
| Bioweapons | AI-assisted pathogen design and synthesis |
| Cyberweapons | Autonomous hacking and vulnerability exploitation |
| Autonomous Weapons | Lethal autonomous weapons systems (LAWS) |
Manipulation & Deception
AI-powered influence and fraud:
| Risk | Description |
|---|---|
| Disinformation | AI-generated propaganda and influence operations |
| Deepfakes | Synthetic media for impersonation and fabrication |
| AI-Powered Fraud | Automated scams, social engineering, impersonation |
Surveillance
| Risk | Description |
|---|---|
| Mass Surveillance | AI-enabled monitoring at scale |
Related: Authoritarian Applications
Surveillance and other misuse capabilities can enable Authoritarian Takeover (a structural risk). The tools and techniques that enable authoritarian control are documented in Authoritarian Tools.
Key Dynamics
Democratization of Harm
Section titled “Democratization of Harm”AI lowers barriers to sophisticated attacks. A lone actor can now generate personalized disinformation at scale, or potentially access dangerous knowledge that previously required rare expertise.
Offense-Defense Balance
Many misuse risks shift the balance toward offense: it is easier to generate disinformation than to detect it, and easier to find vulnerabilities than to patch them.
Dual-Use Dilemma
Most AI capabilities have both beneficial and harmful applications. Language models that help with coding can also help with malware. Models that accelerate drug discovery could accelerate bioweapon design.
Relationship to Other Risk Categories
Misuse + Accident Risks
Section titled “Misuse + Accident Risks”- Misuse is more dangerous when AI systems are more capable
- Accident risks become more severe if misused AI is harder to control
- Both require grappling with AI capabilities we do not fully understand or control
Misuse + Structural Risks
- Surveillance and disinformation enable authoritarian takeover
- Misuse capabilities contribute to concentration of power
Misuse + Epistemic Risks
- Disinformation drives reality fragmentation
- Deepfakes accelerate authentication collapse
Contributing Amplifiers
These misuse risks are influenced by the following amplifiers from other risk categories:
| Factor | How It Contributes |
|---|---|
| Authoritarian Tools | Capabilities that enable surveillance and control |
| Proliferation | More actors gain access to dangerous capabilities |
| Authentication Collapse | Harder to distinguish real from fake content |