Multipolar Competition - The Fragmented World
This scenario explores a future where no single AI system or actor achieves dominance. Instead, multiple competing AI systems empower different actors—nations, corporations, groups—leading to ongoing conflict, instability, and coordination failures. It is neither utopia nor quick catastrophe, but persistent dangerous competition. As the World Economic Forum notes↗, “the world could slide further into fragmentation, with a digital iron curtain separating US-led and China-led tech spheres.”
Executive Summary
In this scenario, AI development fragments across multiple actors—nations, corporations, ideological groups, even individuals. No single entity achieves decisive advantage. This leads to a world of AI-empowered competition where multiple powerful AI systems pursue different goals, often in conflict. We see AI arms races, proxy conflicts between AI systems, erosion of international cooperation, increasing instability and near-misses, and uncertainty about whether this state is stable or heading toward catastrophe.
This scenario combines elements of the Cold War’s superpower competition, the cybersecurity landscape’s constant conflict, and the challenges of governing dual-use technologies. According to MIT Technology Review↗, “there will not and cannot be any long-term winners if the intense competition continues on its current path.” It is dangerous but not immediately catastrophic—a state of persistent, escalating risk.
Scenario Assessment Matrix
| Dimension | Assessment | Confidence | Key Driver |
|---|---|---|---|
| Probability | 20-30% | Medium | Current trajectory of fragmentation |
| Stability | Low-Medium | Low | MAD logic vs. escalatory dynamics |
| Time to Resolution | 10-20 years | Low | Depends on crisis severity |
| Reversibility | Medium | Medium | Requires coordination breakthrough |
| Catastrophic Potential | Medium-High | Medium | Accumulating near-misses |
Actor Landscape in 2030
| Actor Type | Examples | AI Capability | Goals | Alignment With Safety |
|---|---|---|---|---|
| Superpowers | US, China | Frontier | National advantage, hegemony | Variable, often secondary |
| Regional Powers | EU, India, Israel | Near-frontier | Strategic autonomy | Generally positive |
| Major Labs | OpenAI, Anthropic, DeepMind | Frontier | Profit, mission, safety | Variable by lab |
| State-Backed Labs | Baidu, SenseTime | Near-frontier | National champions | State-directed |
| Non-State Actors | Open-source communities, criminal orgs | Mid-tier | Varies widely | Often absent |
Timeline of Events (2024-2040)
Phase 1: Fragmentation (2024-2028)
2024-2025: International Coordination Fails
Early attempts at AI governance collapse as US-China relations deteriorate further. Each nation suspects the other of hiding capabilities, with trust insufficient for meaningful cooperation. The US Institute of Peace↗ notes that “varying approaches are shaped by competition among great powers in an increasingly multipolar world.” Multiple verification failures entrench racing dynamics.
2025-2026: Proliferation Begins
AI capabilities spread beyond the US and China to EU labs, India, Israel, and smaller nations. Open-source AI enables smaller players to catch up partially, while corporate labs increasingly operate independently of national governance. CSET Georgetown research↗ argues that “AI differs so fundamentally from nuclear technology that basing AI policy around the nuclear analogy is conceptually flawed and risks inflating expectations about the international community’s ability to control model proliferation.” Knowledge diffusion occurs faster than anticipated.
2026-2027: Corporate Independence
Major AI labs operate across national boundaries—OpenAI, Anthropic, DeepMind, and Meta pursue independent strategies with unclear loyalty to home nations. Some labs prioritize profit, others safety, others capability. National governments struggle to control corporate AI development as enforcement mechanisms prove weak and porous. According to RAND research↗, “commercial developers now lead much of the most advanced AI development and often pursue timelines and incentives that can diverge from those of national governments.”
2027-2028: Ideological Fragmentation
Different groups pursue different AI development philosophies: accelerationists push for maximum capability; safety-first groups work on alignment; democratic AI movements want open-source everything; national champions tie to specific countries; and decentralized crypto-aligned groups pursue their own visions. No consensus emerges on development path, with each group suspicious of others’ intentions.
Key Dynamic: Trust erodes, cooperation fails, competition intensifies.
Phase 2: Armed Competition (2028-2033)
2028-2029: AI Arms Race Begins
Multiple actors achieve near-AGI capabilities, with defensive AI systems deployed to counter other AI systems. Cybersecurity becomes an AI vs. AI conflict. SIPRI research↗ warns that “the asymmetric acquisition of cutting-edge AI technology creates the conditions for a destabilizing arms race.” Military AI systems proliferate across countries, with each advance prompting counter-advances in escalatory dynamics reminiscent of the nuclear arms race.
2029-2030: First AI-Enhanced Conflicts
Cyberattacks using AI tools become common, with attribution difficult and retaliation uncertain. Economic warfare is enhanced by AI while disinformation campaigns reach massive scale. Several “near-miss” escalations occur—no catastrophic war yet, but close calls multiply. CSET research↗ notes that “AI is poised to amplify disinformation campaigns” used by both state and non-state actors.
2030-2031: Proxy Competition
AI systems begin competing indirectly across multiple domains: economic competition through AI trading systems, information warfare pitting AI propaganda against AI detection, and technological competition through AI-assisted research races. Each domain sees escalating AI capabilities while humans become increasingly sidelined from decisions.
2031-2032: Dangerous Equilibrium
A rough balance emerges between major AI powers. Mutually Assured Destruction (MAD) logic applies—no actor can safely eliminate others, but no mechanism exists for cooperation either. Multiple AI systems operate with partially aligned but conflicting goals, creating constant low-level conflict without resolution.
2032-2033: Proliferation to Non-State Actors
Advanced AI capabilities become available to smaller groups: terrorist organizations, criminal enterprises, and activist groups deploying AI for their causes. Georgetown CSET↗ warns that “American AI infrastructure faces threats from state actors, as well as criminal and terrorist groups.” Enforcement becomes nearly impossible as the state monopoly on violence erodes.
Phase 3: Unstable Equilibrium (2033-2040)
2033-2035: Multi-Way Competition
Major players—US, China, EU, major corporations, and various non-state actors—compete without any single dominant force. Alliances shift constantly. AI systems pursue different goals: profit maximization (corporate AI), national advantage (state AI), ideological goals (activist AI), and pure survival (defensive AI). Coordination becomes all but impossible.
2035-2037: Governance Breakdown
International institutions prove powerless while national governments struggle to govern even their own AI development. Democratic oversight becomes impossible given complexity; authoritarian states use AI for control but find themselves also threatened by it. Oxford researchers↗ note that “global AI governance remains fragmented and regionally driven, with no universally authoritative institution in place.” Effective anarchy in AI space emerges.
2036-2038: Increasing Near-Catastrophes
In multiple incidents, AI systems nearly cause disasters: financial system crashes from AI trading conflicts, near-nuclear incidents from AI military systems, and critical infrastructure failures. Each time, disaster is narrowly avoided—but frequency increases. CSIS analysis↗ warns that “AI-powered cyberspace operations might put nuclear command and control missions at risk.”
2038-2040: Unclear Future
Three possible futures become visible: (1) Catastrophic Collapse—one near-miss becomes actual catastrophe; (2) Forced Cooperation—crises finally enable coordination; (3) Continued Dangerous Competition—unstable equilibrium persists. Which path we take remains unclear; each crisis is both danger and opportunity.
What Characterizes This Scenario
Competition Dynamics Matrix
| Domain | Competition Type | Escalation Risk | Current Trajectory |
|---|---|---|---|
| Military AI | Arms race | High | Accelerating since 2023 |
| Economic AI | Market dominance | Medium | Intensifying |
| Cyber Operations | Offense-defense | Very High | Already in conflict |
| Information Warfare | Narrative control | High | Widespread deployment |
| Research Capability | Talent and compute | Medium | Fragmented acquisition |
| Standards Setting | Regulatory influence | Medium | Bifurcating |
Multiple Centers of AI Power
No single actor achieves dominance in this scenario. The US, China, and EU all develop advanced AI capabilities, while major corporations—OpenAI, Anthropic, DeepMind, Meta—operate semi-independently across national boundaries. Smaller nations like Israel, India, and South Korea develop niche capabilities, and non-state actors acquire significant AI tools. Power becomes distributed rather than concentrated, creating a complex multi-actor landscape. The Diplomat↗ reports that “executives from OpenAI, Microsoft, CoreWeave, and AMD testified that the U.S. lead over China in AI had narrowed to mere months.”
Different actors pursue fundamentally different goals and values: state actors seek national advantage, corporations pursue profit, safety-focused labs work toward aligned AI, and ideological groups pursue various visions. No shared vision of AI future emerges, with fundamental disagreement on what values to encode.
Arms Race Dynamics
Arms race dynamics intensify as each advance prompts counter-advances. The security dilemma applies: defensive measures appear offensive to others, leading all parties to race to avoid falling behind. An escalatory spiral becomes difficult to stop, with cooperation increasingly seen as vulnerability. RAND researchers↗ warn that “advances in artificial intelligence have provoked a new kind of arms race among nuclear powers.”
Persistent Conflict and Competition
Conflict becomes endemic across multiple domains. In cyber warfare, AI-enhanced attack and defense create constant low-level conflict with difficult attribution and unclear deterrence—no rules of engagement exist. Economic competition intensifies as AI trading systems compete, enabling market manipulation and economic espionage; inequality widens between AI-haves and have-nots. Information warfare proliferates with AI-generated propaganda countered by AI detection, creating epistemic warfare where truth becomes increasingly contested and shared reality erodes. Military tensions rise as AI-enhanced weapons systems with unclear control create multiple “near-miss” incidents, with no clear path to de-escalation.
Eroding Governance
International cooperation fails as treaties become impossible to negotiate or verify. Mutual distrust prevents coordination, free-rider problems dominate, and enforcement mechanisms remain absent—a “tragedy of the commons” in AI space. Brookings research↗ suggests that “networked and distributed forms of AI governance will remain the singular form of international cooperation that can respond to the rapid pace at which AI is developing,” implying centralized governance is unlikely to emerge.
National governance also strains: governments cannot control corporate AI development, prevent proliferation, or verify compliance. Democratic oversight becomes impossible given complexity, and even authoritarian control proves limited. A legitimacy crisis emerges—no clear authority over AI exists, the public trusts no actor, decisions are made without consent or understanding, accountability is absent, and democratic institutions fail to govern AI effectively.
Key Branch Points
Branch Point 1: International Cooperation Fails (2024-2026)
What Happened: Early governance attempts failed due to mutual distrust and verification problems.
Alternative Paths:
- Cooperation Succeeds: Strong coordination mechanism created → Leads to Aligned AGI or Pause scenarios
- Actual Path: Cooperation fails, fragmentation begins → Enables multipolar scenario
Why This Mattered: Without coordination, multiple independent AI development paths became inevitable. Once fragmented, the landscape proved very hard to reconsolidate.
Branch Point 2: Proliferation Occurs (2026-2028)
What Happened: AI capabilities spread to many actors through a combination of open-source releases, espionage, and parallel development.
Alternative Paths:
- Controlled Proliferation: Strong compute governance prevents spread → Might enable coordination
- Actual Path: Widespread proliferation → Many actors with advanced AI
Why This Mattered: Once capabilities were widely distributed, there was no way to put the genie back in the bottle. The multipolar world was locked in.
Branch Point 3: Corporate Independence (2027-2029)
What Happened: AI companies operated largely independently of national governments, pursuing their own goals.
Alternative Paths:
- State Control: Governments successfully regulate AI labs → Reduces number of actors
- Actual Path: Corporate independence maintained → Adds more centers of power
Why This Mattered: Independent corporate actors meant that even within nations there was no unified AI strategy: fragmentation within as well as between countries.
Branch Point 4: Non-State Actor Access (2031-2033)
What Happened: Advanced AI capabilities became accessible to non-state actors: activists, criminals, and terrorists.
Alternative Paths:
- Prevent Non-State Access: Strong controls limit to major actors → Fewer centers of power
- Actual Path: Widespread access → Extreme fragmentation
Why This Mattered: Non-state access meant thousands of potential AI developers, making coordination completely impossible.
Branch Point 5: First Major Crisis (2033-2035)
What Happened: A major AI-related crisis occurred but did not lead to a coordination breakthrough.
Alternative Paths:
- Crisis Enables Cooperation: Shock leads to coordination → Might shift to Aligned AGI
- Crisis Causes Catastrophe: Event spirals out of control → Shifts to Catastrophe
- Actual Path: Crisis managed but lessons not learned → Competition continues
Why This Mattered: This was an opportunity to shift trajectories. The crisis was neither catastrophic enough to end competition nor galvanizing enough to enable cooperation.
Preconditions: What Needs to Be True
Technical Preconditions
No Single Decisive Breakthrough:
- No one actor achieves overwhelming advantage
- Capabilities roughly balanced between major players
- No “secret sauce” that enables dominance
- Parallel development paths succeed
Partial Alignment:
- AI systems partially controllable
- Enough alignment to be useful to deploying actor
- Not enough alignment to be safe globally
- Different alignment approaches work partially
Proliferation Technologically Feasible:
- Knowledge transferable
- Compute governance ineffective
- Can’t prevent parallel development
- Open-source enables catch-up
Strategic Preconditions
Mutual Distrust Dominates:
- Verification problems prevent trust
- Security dilemma logic applies
- Each actor fears others’ AI more than it values coordination
- Historical tensions prevent cooperation
No Hegemonic Power:
- US can’t maintain AI dominance
- China can’t catch up decisively
- No single nation or bloc controls AI
- Balance prevents single winner
Free-Rider Problems:
- Safety investments benefit all
- Capability investments benefit only investor
- Incentive to defect from safety agreements
- “Tragedy of the commons” in AI development
Societal Preconditions
Ideological Diversity:
- No consensus on AI values
- Different groups want different futures
- Fundamental disagreements on governance
- No shared vision enabling cooperation
Weak Global Governance:
- International institutions lack power
- National governments primary actors
- No mechanism for global coordination
- Enforcement impossible
Democratic and Authoritarian Competition:
- Different governance systems compete
- Each claims their AI approach better
- Neither can decisively prove superiority
- Competition persists
Warning Signs and Trajectory Assessment
Current Indicators (2024-2025)
| Indicator | Current Status | Trend | Implication |
|---|---|---|---|
| US-China AI cooperation | Minimal/hostile | Worsening | Competition intensifying |
| International governance | Fragmented | Stalled | Coordination unlikely |
| Corporate independence | Growing | Accelerating | More actors, less control |
| Proliferation | Rapid | Accelerating | Cannot be reversed |
| Verification capabilities | Weak | Not improving | Trust cannot be built |
| Near-miss incidents | Emerging | Increasing | Risk accumulating |
We may already be in the early stages of this scenario. Current fragmentation and competition are consistent with this trajectory. CNAS research↗ describes “the world-altering stakes of U.S.-China AI competition,” underscoring that the bifurcation is already underway.
Trajectory Indicators
Confirming signals (next 3-5 years): International coordination continuing to fail, AI capabilities spreading to more actors, corporate AI development increasingly independent, first AI-enhanced cyber conflicts, racing dynamics intensifying, multiple actors achieving near-AGI, no single actor pulling ahead decisively, and trust between AI actors declining.
Diverging signals: Successful international coordination would shift toward Aligned AGI or Pause scenarios; a single actor achieving dominance would create a different scenario; successful proliferation prevention would enable coordination; a catastrophic incident might force pause.
Scenario Lock-In Conditions
This scenario becomes established when: too many actors exist to coordinate, mutual distrust prevents cooperation, proliferation cannot be prevented, and the world becomes stuck in multipolar competition. Research from the Tony Blair Institute↗ warns of “only an overall 10-15% probability that current governmental structures and societal systems are prepared to tackle the complex and interconnected issues.”
Valuable Actions in This Scenario
What Matters Most
Stability and Crisis Management:
- Preventing escalation during crises
- Building confidence-building measures
- Establishing norms and red lines
- Crisis communication channels
- De-escalation mechanisms
Selective Coordination:
- Cooperate where possible even in competitive environment
- Safety information sharing
- Incident notification protocols
- Basic transparency measures
- Technical safety standards all can adopt
Resilience Building:
- Robust systems resistant to AI attacks
- Redundancy in critical infrastructure
- Defensive AI capabilities
- Democratic resilience against information warfare
- Economic resilience to AI shocks
Technical Research (High Value)
Defensive AI:
- Detection of AI-generated content (see the sketch after this list)
- Cyber defense against AI attacks
- Robust systems design
- Adversarial testing
- Defensive capabilities development
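As a toy illustration of the first item above, detection of AI-generated content can be framed as a supervised text-classification problem. The sketch below uses scikit-learn with a hypothetical four-example dataset; a real detector would need large, continuously refreshed corpora and adversarial evaluation, since generators actively optimize to evade detection.

```python
# Minimal sketch: framing AI-generated-text detection as supervised classification.
# The four examples below are hypothetical toy data, purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Leveraging scalable paradigms enables robust, synergistic value creation.",  # label 1: "AI-like"
    "I walked to the shop and it started raining halfway there.",                 # label 0: "human-like"
    "Holistic frameworks drive transformative outcomes across the enterprise.",   # label 1
    "My grandmother never measures anything when she makes soup.",                # label 0
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: the simplest possible detector.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new passage is AI-generated, according to this toy model.
print(detector.predict_proba(["Our paradigm unlocks synergistic scalability."])[0, 1])
```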
Alignment for Competitive Context:
- Making AI systems robustly beneficial even in competition
- Avoiding dangerous emergent dynamics between AI systems
- Stability analysis of multi-AI systems
- Game theory of AI interaction (see the sketch below)
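The game-theoretic item above can be made concrete with a minimal sketch that treats two AI-developing actors as players in an iterated game: each round, each side either invests in safety or races for capability. The payoff values and strategies below are illustrative assumptions, not empirical estimates.

```python
# Minimal sketch of the racing dynamic as an iterated two-player game.
# Payoff values are illustrative assumptions, not empirical estimates.
RACE, SAFETY = "race", "safety"

# (payoff_a, payoff_b): mutual safety investment beats mutual racing, but
# racing against a safety-focused rival pays best in the short term,
# the structure behind the security dilemma described above.
PAYOFFS = {
    (SAFETY, SAFETY): (3, 3),
    (SAFETY, RACE): (0, 5),
    (RACE, SAFETY): (5, 0),
    (RACE, RACE): (1, 1),
}

def tit_for_tat(rival_history):
    """Invest in safety first, then mirror the rival's previous move."""
    return SAFETY if not rival_history else rival_history[-1]

def always_race(rival_history):
    return RACE

def play(strategy_a, strategy_b, rounds=20):
    score_a = score_b = 0
    history_a, history_b = [], []  # each side observes the other's past moves
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("both race:       ", play(always_race, always_race))  # locked-in low payoff
print("both tit-for-tat:", play(tit_for_tat, tit_for_tat))  # cooperation sustained
```

Mutual racing is the stable outcome when moves cannot be observed or verified; conditional cooperation only does better if each side can see and trust the other's behavior, which is precisely the verification problem this scenario assumes remains unsolved.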
Verification and Monitoring:
- Technologies for detecting AI development (see the sketch after this list)
- Verification methods for AI capabilities
- Attribution technologies
- Monitoring systems
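One simple building block for the monitoring items above is threshold-based reporting of declared training compute, loosely in the spirit of compute-reporting thresholds that have appeared in recent regulation. The sketch below is a toy illustration; the threshold value, actor names, and declared figures are placeholders, and declared figures would still require independent verification.

```python
# Toy illustration of threshold-based monitoring of declared training runs.
# The threshold, actors, and figures are placeholder assumptions for illustration.
from dataclasses import dataclass

REPORTING_THRESHOLD_FLOP = 1e26  # placeholder value, not a specific legal threshold

@dataclass
class TrainingRun:
    actor: str
    declared_flop: float

runs = [
    TrainingRun("lab_a", 3e25),
    TrainingRun("lab_b", 2e26),
    TrainingRun("lab_c", 8e26),
]

# Flag runs above the threshold for closer verification (audits, chip-level
# attestation, etc.); self-declared figures alone prove little.
flagged = [r for r in runs if r.declared_flop >= REPORTING_THRESHOLD_FLOP]
for run in flagged:
    print(f"{run.actor}: {run.declared_flop:.1e} FLOP exceeds reporting threshold")
```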
Safety Despite Competition:
- Safety measures that work even without coordination
- Unilateral safety commitments
- Technical safety that doesn’t require trust
- Defense in depth
Policy and Governance (High Value)
Crisis Prevention:
- Establishing communication channels
- Red lines and norms for AI use
- Incident notification protocols
- Confidence-building measures
- De-escalation procedures
Resilience Policies:
- Critical infrastructure protection
- Cybersecurity requirements
- Information warfare defenses
- Democratic institution strengthening
- Economic safety nets
Selective Cooperation:
- Agreements on specific narrow issues
- Safety information sharing protocols
- Incident investigation cooperation
- Technical standard alignment where possible
Proliferation Management:
- Compute governance where feasible
- Export controls on most dangerous capabilities
- Know-your-customer requirements
- Monitoring of AI development
Organizational Strategy
For AI Labs:
- Responsible behavior even without enforcement
- Voluntary safety commitments
- Information sharing on safety incidents
- Avoiding most dangerous capabilities
- Building safety culture
For Governments:
- Maintaining defensive capabilities
- Building resilience
- Selective cooperation even with rivals
- Protecting democratic institutions
- Crisis management capacity
For International Organizations:
- Building trust despite competition
- Facilitating communication
- Monitoring and transparency
- Norm development
- Crisis mediation
Individual Contributions
For Researchers:
- Working on defensive AI and safety
- Developing verification technologies
- Building crisis prevention tools
- Studying multi-AI dynamics
- Promoting safety norms
For Policy Professionals:
- Building crisis communication channels
- Developing confidence-building measures
- Creating resilient institutions
- Facilitating selective cooperation
For Everyone:
- Media literacy for AI-generated content
- Supporting resilient democratic institutions
- Advocating for responsible AI even in competition
- Building social cohesion
Who Benefits and Who Loses
Relative Winners
Cyber-Resilient Actors:
- Those with strong defensive AI capabilities
- Nations/organizations with robust systems
- Actors who invested in resilience
- Can defend against AI-enhanced attacks
Adaptable Organizations:
- Those who can navigate multipolar competition
- Flexible, resilient structures
- Not dependent on single AI system or actor
- Can operate in chaotic environment
AI-Enhanced Militaries:
- Nations with advanced military AI
- Defensive superiority
- Deterrence capability
- But also increased risk of accidents
Decentralized Systems:
- Those not dependent on centralized control
- Can survive fragmentation
- Resilient to individual AI actors
- Less vulnerable to single-point failures
Relative Losers
Everyone (In Absolute Terms):
- Constant instability and conflict
- Resources wasted on competition
- Missed opportunities for cooperation
- Living under constant risk of escalation
Democratic Institutions:
- Difficult to maintain in information warfare environment
- Complexity overwhelms oversight capacity
- Legitimacy eroded
- Vulnerable to AI-enhanced manipulation
Developing Nations:
- Lack resources for AI competition
- Fall further behind
- Vulnerable to AI-empowered actors
- No voice in AI governance
Truth and Trust:
- Shared reality erodes
- Information warfare constant
- Trust in institutions declines
- Epistemic commons degraded
Global Cooperation:
- Impossible to coordinate on shared challenges
- Climate change, pandemics, other risks harder to address
- “Tragedy of the commons” in many domains
- Collective action failures
Ambiguous Cases
AI-Enhanced Corporations:
- Significant power in multipolar world
- But also targets of conflict
- Profit opportunities but high risks
- Unclear loyalties create vulnerabilities
Major Powers (US, China, EU):
- More AI capability than smaller actors
- But also more threatened by competition
- More to lose from instability
- Caught in security dilemmas
Individual Liberty:
- Less authoritarian control in fragmented world
- But also less protection
- More dangerous environment
- Freedom but in chaos
Cruxes and Uncertainties
Biggest Uncertainties
Stability:
- Is multipolar AI competition stable long-term?
- Or does it inevitably escalate to catastrophe?
- Can we manage persistent conflict without disaster?
- How many near-misses before actual catastrophe?
Proliferation:
- How far will AI capabilities spread?
- Can we prevent most dangerous proliferation?
- What happens when non-state actors have AGI?
- Is there a point of no return on proliferation?
Coordination Possibilities:
- Can selective cooperation work?
- Will crisis enable coordination?
- Or will competition prevent all cooperation?
- Can trust be rebuilt?
Technical Dynamics:
- How will multiple AI systems interact?
- Stable equilibrium or escalatory dynamics?
- Will defensive AI be sufficient?
- What emergent properties from multi-AI interaction?
Strategic Stability Analysis
Stability Factors Assessment
| Factor | Stabilizing Effect | Destabilizing Effect | Net Assessment |
|---|---|---|---|
| MAD Logic | No actor can safely attack | May not apply to AI as to nukes | Weakly stabilizing |
| Balance of Power | Rough parity prevents dominance | Creates security dilemma | Neutral |
| Multiple Actors | Distributed resilience | More failure modes | Destabilizing |
| Learning from Near-Misses | Norms may develop | Each incident increases risk | Time-dependent |
| Proliferation | No single point of failure | Dangerous actors gain access | Strongly destabilizing |
| Technical Complexity | Defense may improve | Emergent failures unpredictable | Destabilizing |
Arguments for Stability
Several factors suggest multipolar competition could persist without immediate catastrophe. Balance of power logic—Mutually Assured Destruction—means no actor can safely attack others, with rough parity maintaining the status quo, similar to how the Cold War proved dangerous but survivable. Evolutionary pressure may lead actors to learn from near-misses, with selective cooperation emerging and norms developing over time. Distributed resilience means no single point of failure: multiple approaches to AI development continue, and diversity provides robustness.
Arguments for Instability
Countervailing factors suggest inherent instability. Escalatory dynamics follow arms race logic: each near-miss increases risk, no de-escalation mechanism exists, and eventually a near-miss becomes catastrophe. Increasing complexity means more AI systems create more potential failures, interactions become unpredictable, humans lose ability to control, and emergent catastrophe from complex interactions becomes possible. Proliferation to dangerous actors means that as capabilities spread, worst-case actors eventually acquire them—one bad actor is enough for disaster. Coordination decay means competition prevents learning, trust continues eroding, selective cooperation fails, and a spiral toward complete breakdown ensues.
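A rough way to see why accumulating near-misses dominate this assessment: even a small, constant annual probability that some incident escalates compounds sharply over decades. The sketch below is a back-of-the-envelope illustration; the per-year probabilities are placeholder assumptions, not estimates.

```python
# Minimal sketch: how a constant annual escalation probability compounds over time.
# The per-year probabilities below are placeholder assumptions, not estimates.
def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability that at least one near-miss escalates within `years` years."""
    return 1 - (1 - annual_p) ** years

for p in (0.01, 0.03, 0.05):
    print(f"annual {p:.0%}: 10 yrs {cumulative_risk(p, 10):.0%}, "
          f"20 yrs {cumulative_risk(p, 20):.0%}")
```

Under these placeholder numbers, a 3% annual escalation probability implies more than a one-in-four chance of catastrophe within a decade and nearly one-in-two within two, which is why "dangerous but survivable" is better read as temporary.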
Net Assessment: Unstable But Not Immediately Catastrophic
This scenario probably represents temporary stability that can persist for years or decades but builds risk over time. Eventually it stabilizes into something safer (toward Aligned AGI if coordination emerges), collapses into catastrophe (toward Misaligned Catastrophe), or forces a pause (toward Pause scenario). The central question is the timeline and transition path.
Relation to Other Scenarios
Transitions Possible
To Aligned AGI:
- If crisis enables cooperation breakthrough
- If selective cooperation strengthens over time
- If competitive dynamics find stable beneficial equilibrium
- Requires: coordination success, alignment breakthrough
To Misaligned Catastrophe:
- If one actor achieves dangerous AGI without alignment
- If multi-AI interactions produce emergent catastrophe
- If near-miss becomes actual catastrophe
- If proliferation enables truly dangerous actor
To Pause and Redirect:
- If crisis so severe it forces global pause
- If all actors realize competition too dangerous
- If catastrophe narrowly avoided creates political will
- Requires: shock sufficient to overcome competition
From Slow Takeoff Muddle:
- If muddling’s partial coordination breaks down completely
- If competition intensifies beyond current levels
- If fragmentation increases
Combinations With Other Scenarios
Elements Often Combined:
- Might muddle through multipolar competition (Muddle + Multipolar)
- Different regions in different scenarios
- Transition from Multipolar to other scenarios over time
Multipolar as Intermediate State:
- May be unstable equilibrium between Muddle and Catastrophe
- Or transition state from current world to Aligned AGI
- Temporary phase rather than end state
Historical Analogies
Comparative Analysis
| Historical Period | Similarities | Key Differences | Lessons for AI |
|---|---|---|---|
| Cold War (1947-1991) | Arms race, MAD logic, proxy conflicts, near-misses | AI development faster; more actors; harder verification | Managed dangerous competition for decades, but multiple near-catastrophes |
| Cybersecurity Landscape | Persistent conflict, attribution difficulty, constant warfare | AI more powerful; potential consequences much worse | Can maintain conflict without total collapse, at significant cost |
| Pre-WWI Europe | Multiple powers, arms race, complex alliances | AI changes timescales; more actors than European powers | Multipolar competition can be stable until it suddenly is not |
| Nuclear Proliferation | Dual-use technology, verification challenges | AI proliferates faster; cannot be controlled like fissile material | Nuclear-style nonproliferation does not transfer to AI; a different approach is needed |
Cold War Parallels
The Cold War offers the closest historical parallel, with its bipolar competition, Mutually Assured Destruction logic, proxy conflicts, and near-miss incidents producing persistent tension without direct catastrophe. However, critical differences limit the analogy’s usefulness. West Point’s Modern War Institute↗ notes that “what is unprecedented is the way in which the prevalence of the private sector in AI may complicate nuclear deterrence, as the dual-use technologies companies produce become ever more interwoven with America’s evolving NC3 architecture.”
Cybersecurity and Pre-WWI Parallels
The cybersecurity landscape offers a model of persistent, low-level conflict with difficult attribution, evolving offensive and defensive capabilities, and multiple state and non-state actors—but AI threatens to make this conflict far more consequential. Pre-WWI Europe, with its multiple competing powers, arms race dynamics, and complex alliance systems, reminds us that multipolar competition can appear stable until it suddenly is not—a sobering lesson given the Arms Control Association↗ warning that “AI could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations.”
Probability Assessment
| Source | Estimate | Date |
|---|---|---|
| Baseline estimate | 20-30% | — |
| Pessimists on coordination | 30-50% | — |
| Optimists on coordination | 10-20% | — |
| Median view | 25-30% | — |
Why This Probability?
Reasons for Higher Probability:
- Current trajectory shows fragmentation
- International cooperation very difficult historically
- Proliferation hard to prevent
- Multiple actors pursuing AI independently
- Economic and political incentives favor competition
- Trust insufficient for robust cooperation
Reasons for Lower Probability:
- Very unstable, likely to transition to other scenarios
- Crisis might enable coordination
- Or might collapse into catastrophe
- Hard to maintain multipolar equilibrium long-term
- Eventually consolidates or collapses
Central Estimate Rationale: 20-30% reflects that we’re moving toward fragmentation, but this state is unstable. Higher than Catastrophe because not immediately fatal, lower than Muddle because less stable. Wide range reflects uncertainty about whether competition can persist.
What Changes This Estimate?
Increases Probability:
- International coordination failing
- Proliferation accelerating
- Corporate independence increasing
- Multiple actors achieving advanced AI
- Trust between actors declining
- Crisis management without learning
Decreases Probability:
- Coordination breakthrough (→ toward Aligned AGI or Pause)
- Single actor achieving dominance (→ different scenario)
- Successful proliferation prevention
- Crisis enabling cooperation
- Catastrophic incident (→ toward Catastrophe or forces Pause)
Living in This Scenario
What Daily Life Looks Like
For Individuals:
- Constant uncertainty about information
- AI-generated content everywhere
- Difficulty distinguishing truth
- Economic instability from AI competition
- Constant cyber threats
- Living under risk of escalation
For Organizations:
- Constant defensive posture
- Must have AI capabilities to compete
- Continuous cyber warfare
- Uncertain regulatory environment
- Must navigate multipolar landscape
For Nations:
- Permanent AI arms race
- Defensive and offensive AI
- Constant cyber conflict
- Economic competition
- Inability to coordinate on global challenges
Compared to Other Scenarios
Better Than Catastrophe:
- Humans still in control (mostly)
- No existential catastrophe (yet)
- Multiple centers of power prevent total domination
- Some benefits from AI competition
Worse Than Muddle:
- More unstable and dangerous
- Less coordination on safety
- More conflict and competition
- Higher risk of catastrophe
- More resources wasted on competition
Very Different From Aligned AGI:
- No cooperation on beneficial deployment
- Competition prevents optimal outcomes
- Persistent conflict and waste
- Living in danger rather than flourishing
Less Controlled Than Pause:
- No deliberate slowdown
- Racing continues
- No time for careful alignment work
- Driven by competition not choice
Key Sources
The analysis in this scenario draws on research from multiple authoritative institutions:
- MIT Technology Review↗ - Analysis of US-China AI competition dynamics
- CSET Georgetown↗ - Research on AI proliferation, disinformation, and security
- RAND Corporation↗ - Strategic competition and AI governance analysis
- Brookings Institution↗ - International AI governance architecture
- SIPRI↗ - AI impact on strategic stability and nuclear risk
- CSIS↗ - Algorithmic stability and deterrence
- Arms Control Association↗ - AI and nuclear risk analysis
- World Economic Forum↗ - AI geopolitics and data infrastructure