nodes:
- id: misalignment-potential
label: "Misalignment Potential"
type: cause
description: "The potential for AI systems to be misaligned with human values - pursuing goals that diverge from human intentions. This encompasses technical alignment research, interpretability of AI reasoning, and robustness of safety measures. Lower misalignment potential reduces the risk of AI takeover."
- id: ai-capabilities
label: "AI Capabilities"
type: cause
description: "How powerful and general AI systems become over time. This includes raw computational power, algorithmic efficiency, and breadth of deployment. More capable AI can bring greater benefits but also amplifies risks if safety doesn't keep pace."
- id: civ-competence
label: "Civilizational Competence"
type: cause
description: "Humanity's collective ability to understand AI risks, coordinate responses, and adapt institutions. This includes quality of governance, epistemic health of public discourse, and flexibility of economic and political systems. Higher competence enables better navigation of the AI transition."
- id: transition-turbulence
label: "Transition Turbulence"
type: cause
description: "Background instability during the AI transition period. Economic disruption from automation, competitive racing dynamics between labs or nations, and social upheaval can create pressure that leads to hasty decisions or reduced safety margins."
- id: misuse-potential
label: "Misuse Potential"
type: cause
description: "The degree to which AI enables humans to cause deliberate harm at scale. This includes biological weapons development, cyber attacks, autonomous weapons, and novel threat vectors. Even well-aligned AI could be catastrophic if misused by malicious actors."
- id: ai-ownership
label: "AI Ownership"
type: cause
description: "Who controls the most powerful AI systems and their outputs. Concentration among a few companies, countries, or individuals creates different risks than broad distribution. Ownership structure shapes incentives, accountability, and the distribution of AI benefits."
- id: ai-uses
label: "AI Uses"
type: cause
description: "Where and how AI is actually deployed in the economy and society. Key applications include recursive AI development (AI improving AI), integration into critical industries, government use for surveillance or military, and tools for coordination and decision-making."
- id: ai-takeover
label: "AI Takeover"
type: intermediate
description: "A scenario where AI systems gain decisive control over human affairs, either through rapid capability gain or gradual accumulation of power. This could occur through misaligned goals, deceptive behavior, or humans voluntarily ceding control. The outcome depends heavily on whether the AI's values align with human flourishing."
- id: human-catastrophe
label: "Human-Caused Catastrophe"
type: intermediate
description: "Scenarios where humans deliberately use AI to cause mass harm. State actors might deploy AI-enabled weapons or surveillance; rogue actors could use AI to develop bioweapons or conduct massive cyber attacks. Unlike AI takeover, humans remain in control but use that control destructively."
- id: long-term-lockin
label: "Long-term Lock-in"
type: intermediate
description: "Permanent entrenchment of particular power structures, values, or conditions due to AI-enabled stability. This could be positive (locking in good values) or negative (perpetuating suffering or oppression). Once locked in, these outcomes may be extremely difficult to change."
- id: existential-catastrophe
label: "Existential Catastrophe"
type: effect
description: "Outcomes that permanently and drastically curtail humanity's potential. This includes human extinction, irreversible collapse of civilization, or permanent subjugation. The key feature is irreversibility—recovery becomes impossible or extremely unlikely."
- id: long-term-trajectory
label: "Long-term Trajectory"
type: effect
description: "The quality and character of the post-transition future, assuming civilization survives. This encompasses how much of humanity's potential is realized, the distribution of wellbeing, preservation of human agency, and whether the future remains open to positive change."
edges:
- source: ai-capabilities
target: ai-takeover
strength: strong
effect: increases
- source: ai-capabilities
target: human-catastrophe
strength: medium
effect: increases
- source: ai-capabilities
target: long-term-lockin
strength: medium
effect: increases
- source: misalignment-potential
target: ai-takeover
strength: strong
effect: increases
- source: misalignment-potential
target: human-catastrophe
strength: weak
effect: increases
- source: misalignment-potential
target: long-term-lockin
strength: medium
effect: increases
- source: misuse-potential
target: human-catastrophe
strength: strong
effect: increases
- source: misuse-potential
target: ai-takeover
strength: weak
effect: increases
- source: misuse-potential
target: long-term-lockin
strength: weak
effect: increases
- source: transition-turbulence
target: ai-takeover
strength: medium
effect: increases
- source: transition-turbulence
target: human-catastrophe
strength: medium
effect: increases
- source: transition-turbulence
target: long-term-lockin
strength: weak
effect: increases
- source: civ-competence
target: ai-takeover
strength: medium
effect: decreases
- source: civ-competence
target: human-catastrophe
strength: medium
effect: decreases
- source: civ-competence
target: long-term-lockin
strength: strong
effect: mixed
- source: ai-ownership
target: ai-takeover
strength: weak
effect: mixed
- source: ai-ownership
target: human-catastrophe
strength: weak
effect: mixed
- source: ai-ownership
target: long-term-lockin
strength: strong
effect: increases
- source: ai-uses
target: ai-takeover
strength: medium
effect: increases
- source: ai-uses
target: human-catastrophe
strength: medium
effect: mixed
- source: ai-uses
target: long-term-lockin
strength: strong
effect: increases
- source: ai-takeover
target: existential-catastrophe
strength: strong
effect: increases
- source: human-catastrophe
target: existential-catastrophe
strength: strong
effect: increases
- source: ai-takeover
target: long-term-trajectory
strength: strong
effect: mixed
- source: long-term-lockin
target: long-term-trajectory
strength: strong
effect: mixed
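# ---------------------------------------------------------------------------
# Usage sketch, kept as comments so this file stays valid YAML. A minimal
# example of loading and traversing the graph, assuming Python with PyYAML
# and networkx installed; the filename "ai-risk-graph.yaml" is a placeholder
# for wherever this file is saved.
#
#   import yaml
#   import networkx as nx
#
#   with open("ai-risk-graph.yaml") as f:
#       data = yaml.safe_load(f)
#
#   g = nx.DiGraph()
#   for node in data["nodes"]:
#       g.add_node(node["id"], label=node["label"], type=node["type"])
#   for edge in data["edges"]:
#       g.add_edge(edge["source"], edge["target"],
#                  strength=edge["strength"], effect=edge["effect"])
#
#   # Enumerate causal chains from one driver to the terminal outcome.
#   for path in nx.all_simple_paths(g, "ai-capabilities",
#                                   "existential-catastrophe"):
#       print(" -> ".join(path))
# ---------------------------------------------------------------------------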