Root Factors
Root Factors are the top-level causal drivers that shape AI transition outcomes. They bridge the gap between specific component parameters and ultimate outcomes such as existential catastrophe and the long-term trajectory.
See the interactive AI transition model for a visual representation.
The Seven Root Factors
| Root Factor | Description | Sub-components | Scenarios Influenced |
|---|---|---|---|
| Misalignment Potential | The potential for AI systems to be misaligned with human values, pursuing goals that diverge from human intentions. This encompasses technical alignment research, interpretability of AI reasoning, and robustness of safety measures. Lower misalignment potential reduces the risk of AI takeover. | — | All scenarios |
| AI Capabilities | How powerful and general AI systems become over time. This includes raw computational power, algorithmic efficiency, and breadth of deployment. More capable AI can bring greater benefits but also amplifies risks if safety doesn't keep pace. | — | All scenarios |
| AI Uses | Where and how AI is actually deployed in the economy and society. Key applications include recursive AI development (AI improving AI), integration into critical industries, government use for surveillance or military, and tools for coordination and decision-making. | — | All scenarios |
| AI Ownership | Who controls the most powerful AI systems and their outputs. Concentration among a few companies, countries, or individuals creates different risks than broad distribution. Ownership structure shapes incentives, accountability, and the distribution of AI benefits. | — | All scenarios |
| Civilizational Competence | Humanity's collective ability to understand AI risks, coordinate responses, and adapt institutions. This includes quality of governance, epistemic health of public discourse, and flexibility of economic and political systems. Higher competence enables better navigation of the AI transition. | — | All scenarios |
| Transition Turbulence | Background instability during the AI transition period. Economic disruption from automation, competitive racing dynamics between labs or nations, and social upheaval can create pressure that leads to hasty decisions or reduced safety margins. | — | All scenarios |
| Misuse Potential | The degree to which AI enables humans to cause deliberate harm at scale. This includes biological weapons development, cyber attacks, autonomous weapons, and novel threat vectors. Even well-aligned AI could be catastrophic if misused by malicious actors. | — | All scenarios |
Factor Components
Each root factor contains multiple sub-components. Click through to explore each factor’s detailed breakdown.
Factor Impact on Scenarios
Each root factor affects the three scenarios differently. The grid below shows quantified impact scores (0-100) and direction:
| Source → Target | AI Takeover | Human-Caused Catastrophe | Long-term Lock-in |
|---|---|---|---|
| Misalignment Potential | 85↑ | 30↑ | 60↑ |
| AI Capabilities | 80↑ | 65↑ | 70↑ |
| AI Uses | 50↑ | 55↔ | 80↑ |
| AI Ownership | 40↔ | 45↔ | 85↑ |
| Civilizational Competence | 55↓ | 60↓ | 75↔ |
| Transition Turbulence | 45↑ | 55↑ | 40↑ |
| Misuse Potential | 25↑ | 90↑ | 35↑ |
↑ Increases risk
↓ Decreases risk
↔ Mixed effect
Numbers = impact magnitude (0-100)
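The grid above is just a factor-to-scenario lookup table, so it can be encoded directly. A minimal sketch: the `IMPACTS` dict transcribes the table, and the `top_drivers` helper is purely illustrative, not part of the model's actual tooling.

```python
# Impact grid transcribed from the table above: factor -> scenario -> (score, direction).
# Directions: "↑" increases risk, "↓" decreases risk, "↔" mixed effect.
IMPACTS = {
    "Misalignment Potential":   {"AI Takeover": (85, "↑"), "Human-Caused Catastrophe": (30, "↑"), "Long-term Lock-in": (60, "↑")},
    "AI Capabilities":          {"AI Takeover": (80, "↑"), "Human-Caused Catastrophe": (65, "↑"), "Long-term Lock-in": (70, "↑")},
    "AI Uses":                  {"AI Takeover": (50, "↑"), "Human-Caused Catastrophe": (55, "↔"), "Long-term Lock-in": (80, "↑")},
    "AI Ownership":             {"AI Takeover": (40, "↔"), "Human-Caused Catastrophe": (45, "↔"), "Long-term Lock-in": (85, "↑")},
    "Civilizational Competence": {"AI Takeover": (55, "↓"), "Human-Caused Catastrophe": (60, "↓"), "Long-term Lock-in": (75, "↔")},
    "Transition Turbulence":    {"AI Takeover": (45, "↑"), "Human-Caused Catastrophe": (55, "↑"), "Long-term Lock-in": (40, "↑")},
    "Misuse Potential":         {"AI Takeover": (25, "↑"), "Human-Caused Catastrophe": (90, "↑"), "Long-term Lock-in": (35, "↑")},
}

def top_drivers(scenario: str, n: int = 3) -> list[tuple[str, tuple[int, str]]]:
    """Return the n root factors with the largest impact magnitude on a scenario."""
    ranked = sorted(IMPACTS.items(), key=lambda kv: kv[1][scenario][0], reverse=True)
    return [(factor, scores[scenario]) for factor, scores in ranked[:n]]
```

For example, `top_drivers("Human-Caused Catastrophe", 1)` surfaces Misuse Potential (90↑), matching the table's strongest cell for that scenario.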
Related Pages
- Outcome Parameters — Ultimate outcomes these factors affect
- Scenarios — Intermediate pathways to outcomes
- AI Transition Model — Full model overview