
Multipolar Competition - The Fragmented World

Summary: Outlines a multipolar AI competition scenario (2024-2040) with 20-30% probability, progressing through fragmentation, armed competition, and unstable equilibrium phases. Describes the dynamics of multiple competing AI systems across nations, corporations, and non-state actors, leading to persistent conflict without a single dominant power.

This scenario explores a future where no single AI system or actor achieves dominance. Instead, multiple competing AI systems empower different actors—nations, corporations, groups—leading to ongoing conflict, instability, and coordination failures. It is neither utopia nor quick catastrophe, but persistent dangerous competition. As the World Economic Forum notes, “the world could slide further into fragmentation, with a digital iron curtain separating US-led and China-led tech spheres.”

Scenario
Scenario Type: Competitive / Unstable Equilibrium
Probability Estimate: 20-30%
Timeframe: 2024-2040
Key Assumption: Multiple actors achieve advanced AI without single winner
Core Uncertainty: Can multipolar competition remain stable or does it collapse?

In this scenario, AI development fragments across multiple actors—nations, corporations, ideological groups, even individuals. No single entity achieves decisive advantage. This leads to a world of AI-empowered competition where multiple powerful AI systems pursue different goals, often in conflict. We see AI arms races, proxy conflicts between AI systems, erosion of international cooperation, increasing instability and near-misses, and uncertainty about whether this state is stable or heading toward catastrophe.

This scenario combines elements of the Cold War’s multipolar competition, the cybersecurity landscape’s constant conflict, and the challenges of governing dual-use technologies. According to MIT Technology Review, “there will not and cannot be any long-term winners if the intense competition continues on its current path.” It is dangerous but not immediately catastrophic—a state of persistent, escalating risk.

| Dimension | Assessment | Confidence | Key Driver |
| --- | --- | --- | --- |
| Probability | 20-30% | Medium | Current trajectory of fragmentation |
| Stability | Low-Medium | Low | MAD logic vs. escalatory dynamics |
| Time to Resolution | 10-20 years | Low | Depends on crisis severity |
| Reversibility | Medium | Medium | Requires coordination breakthrough |
| Catastrophic Potential | Medium-High | Medium | Accumulating near-misses |

| Actor Type | Examples | AI Capability | Goals | Alignment With Safety |
| --- | --- | --- | --- | --- |
| Superpowers | US, China | Frontier | National advantage, hegemony | Variable, often secondary |
| Regional Powers | EU, India, Israel | Near-frontier | Strategic autonomy | Generally positive |
| Major Labs | OpenAI, Anthropic, DeepMind | Frontier | Profit, mission, safety | Variable by lab |
| State-Backed Labs | Baidu, SenseTime | Near-frontier | National champions | State-directed |
| Non-State Actors | Open-source communities, criminal orgs | Mid-tier | Varies widely | Often absent |

2024-2025: International Coordination Fails

Early attempts at AI governance collapse as US-China relations deteriorate further. Each nation suspects the other of hiding capabilities, with trust insufficient for meaningful cooperation. The US Institute of Peace notes that “varying approaches are shaped by competition among great powers in an increasingly multipolar world.” Multiple verification failures entrench racing dynamics.

2025-2026: Proliferation Begins

AI capabilities spread beyond the US and China to EU labs, India, Israel, and smaller nations. Open-source AI enables smaller players to catch up partially, while corporate labs increasingly operate independent of national governance. CSET Georgetown research argues that “AI differs so fundamentally from nuclear technology that basing AI policy around the nuclear analogy is conceptually flawed and risks inflating expectations about the international community’s ability to control model proliferation.” Knowledge diffusion occurs faster than anticipated.

2026-2027: Corporate Independence

Major AI labs operate across national boundaries—OpenAI, Anthropic, DeepMind, and Meta pursue independent strategies with unclear loyalty to home nations. Some labs prioritize profit, others safety, others capability. National governments struggle to control corporate AI development as enforcement mechanisms prove weak and porous. According to RAND research, “commercial developers now lead much of the most advanced AI development and often pursue timelines and incentives that can diverge from those of national governments.”

2027-2028: Ideological Fragmentation

Different groups pursue different AI development philosophies: accelerationists push for maximum capability; safety-first groups work on alignment; democratic AI movements want open-source everything; national champions tie to specific countries; and decentralized crypto-aligned groups pursue their own visions. No consensus emerges on development path, with each group suspicious of others’ intentions.

Key Dynamic: Trust erodes, cooperation fails, competition intensifies.

2028-2029: AI Arms Race Begins

Multiple actors achieve near-AGI capabilities, with defensive AI systems deployed to counter other AI systems. Cybersecurity becomes an AI vs. AI conflict. SIPRI research warns that “the asymmetric acquisition of cutting-edge AI technology creates the conditions for a destabilizing arms race.” Military AI systems proliferate across countries, with each advance prompting counter-advances in escalatory dynamics reminiscent of the nuclear arms race.

2029-2030: First AI-Enhanced Conflicts

Cyberattacks using AI tools become common, with attribution difficult and retaliation uncertain. Economic warfare is enhanced by AI while disinformation campaigns reach massive scale. Several “near-miss” escalations occur—no catastrophic war yet, but close calls multiply. CSET research notes that “AI is poised to amplify disinformation campaigns” used by both state and non-state actors.

2030-2031: Proxy Competition

AI systems begin competing indirectly across multiple domains: economic competition through AI trading systems, information warfare pitting AI propaganda against AI detection, and technological competition through AI-assisted research races. Each domain sees escalating AI capabilities while humans become increasingly sidelined from decisions.

2031-2032: Dangerous Equilibrium

A rough balance emerges between major AI powers. Mutually Assured Destruction (MAD) logic applies—no actor can safely eliminate others, but no mechanism exists for cooperation either. Multiple AI systems operate with partially aligned but conflicting goals, creating constant low-level conflict without resolution.

2032-2033: Proliferation to Non-State Actors

Advanced AI capabilities become available to smaller groups: terrorist organizations, criminal enterprises, and activist groups deploying AI for their causes. Georgetown CSET warns that “American AI infrastructure faces threats from state actors, as well as criminal and terrorist groups.” Enforcement becomes nearly impossible as the state monopoly on violence erodes.

2033-2035: Multi-Way Competition

Major players—US, China, EU, major corporations, and various non-state actors—compete without any single dominant force. Alliances shift constantly. AI systems pursue different goals: profit maximization (corporate AI), national advantage (state AI), ideological goals (activist AI), and pure survival (defensive AI). Coordination becomes increasingly impossible.

2035-2037: Governance Breakdown

International institutions prove powerless while national governments struggle to govern even their own AI development. Democratic oversight becomes impossible given complexity; authoritarian states use AI for control but find themselves also threatened by it. Oxford researchers note that “global AI governance remains fragmented and regionally driven, with no universally authoritative institution in place.” Effective anarchy in AI space emerges.

2036-2038: Increasing Near-Catastrophes

Multiple incidents of AI systems nearly cause disasters: financial system crashes from AI trading conflicts, near-nuclear incidents from AI military systems, and critical infrastructure failures. Each time, disaster is narrowly avoided—but frequency increases. CSIS analysis warns that “AI-powered cyberspace operations might put nuclear command and control missions at risk.”

2038-2040: Unclear Future

Three possible futures become visible: (1) Catastrophic Collapse—one near-miss becomes actual catastrophe; (2) Forced Cooperation—crises finally enable coordination; (3) Continued Dangerous Competition—unstable equilibrium persists. Which path we take remains unclear; each crisis is both danger and opportunity.

| Domain | Competition Type | Escalation Risk | Current Trajectory |
| --- | --- | --- | --- |
| Military AI | Arms race | High | Accelerating since 2023 |
| Economic AI | Market dominance | Medium | Intensifying |
| Cyber Operations | Offense-defense | Very High | Already in conflict |
| Information Warfare | Narrative control | High | Widespread deployment |
| Research Capability | Talent and compute | Medium | Fragmented acquisition |
| Standards Setting | Regulatory influence | Medium | Bifurcating |

No single actor achieves dominance in this scenario. The US, China, and EU all develop advanced AI capabilities, while major corporations—OpenAI, Anthropic, DeepMind, Meta—operate semi-independently across national boundaries. Smaller nations like Israel, India, and South Korea develop niche capabilities, and non-state actors acquire significant AI tools. Power becomes distributed rather than concentrated, creating a complex multi-actor landscape. The Diplomat reports that “executives from OpenAI, Microsoft, CoreWeave, and AMD testified that the U.S. lead over China in AI had narrowed to mere months.”

Different actors pursue fundamentally different goals and values: state actors seek national advantage, corporations pursue profit, safety-focused labs work toward aligned AI, and ideological groups pursue various visions. No shared vision of AI future emerges, with fundamental disagreement on what values to encode.

Arms race dynamics intensify as each advance prompts counter-advances. The security dilemma applies: defensive measures appear offensive to others, leading all parties to race to avoid falling behind. An escalatory spiral becomes difficult to stop, with cooperation increasingly seen as vulnerability. RAND researchers warn that “advances in artificial intelligence have provoked a new kind of arms race among nuclear powers.”
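
To make the escalatory logic concrete, a toy Richardson-style arms race model (a standard textbook construction; the coefficients below are purely illustrative assumptions, not estimates of any real actors) shows how mutual reaction can outrun internal restraint:

```python
# Toy Richardson-style arms race: each actor's AI investment grows in
# response to the other's, damped by its own cost burden.
# All coefficients are illustrative assumptions.

def simulate(steps=20, x=1.0, y=1.0,
             reaction=0.5,   # how strongly each actor reacts to the other's level
             fatigue=0.2,    # internal cost pressure slowing growth
             grievance=0.1): # baseline drive to invest regardless of the rival
    history = []
    for _ in range(steps):
        dx = reaction * y - fatigue * x + grievance
        dy = reaction * x - fatigue * y + grievance
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

if __name__ == "__main__":
    for t, (a, b) in enumerate(simulate(), 1):
        print(f"year {t:2d}: actor A = {a:9.2f}, actor B = {b:9.2f}")
```

Because each actor's reaction to the other (0.5) outweighs its internal cost pressure (0.2), the simulated investments grow without bound; only when fatigue dominates reaction does the race settle into a stable equilibrium.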

Conflict becomes endemic across multiple domains. In cyber warfare, AI-enhanced attack and defense creates constant low-level conflict with difficult attribution and unclear deterrence—no rules of engagement exist. Economic competition intensifies as AI trading systems compete, enabling market manipulation and economic espionage; inequality widens between AI-haves and have-nots. Information warfare proliferates with AI-generated propaganda countered by AI detection, creating epistemic warfare where truth becomes increasingly contested and shared reality erodes. Military tensions rise as AI-enhanced weapons systems with unclear control create multiple “near-miss” incidents, with no clear path to de-escalation.

International cooperation fails as treaties become impossible to negotiate or verify. Mutual distrust prevents coordination, free-rider problems dominate, and enforcement mechanisms remain absent—a “tragedy of the commons” in AI space. Brookings research suggests that “networked and distributed forms of AI governance will remain the singular form of international cooperation that can respond to the rapid pace at which AI is developing,” implying centralized governance is unlikely to emerge.

National governance also strains: governments cannot control corporate AI development, prevent proliferation, or verify compliance. Democratic oversight becomes impossible given complexity, and even authoritarian control proves limited. A legitimacy crisis emerges—no clear authority over AI exists, the public trusts no actor, decisions are made without consent or understanding, accountability is absent, and democratic institutions fail to govern AI effectively.

Branch Point 1: International Cooperation Fails (2024-2026)

What Happened: Early governance attempts failed due to mutual distrust and verification problems.

Alternative Paths:

  • Cooperation Succeeds: Strong coordination mechanism created → Leads to Aligned AGI or Pause scenarios
  • Actual Path: Cooperation fails, fragmentation begins → Enables multipolar scenario

Why This Mattered: Without coordination, multiple independent AI development paths became inevitable. Once fragmented, the landscape proved very hard to reconsolidate.

Branch Point 2: Proliferation Occurs (2026-2028)

What Happened: AI capabilities spread to many actors through combination of open-source, espionage, and parallel development.

Alternative Paths:

  • Controlled Proliferation: Strong compute governance prevents spread → Might enable coordination
  • Actual Path: Widespread proliferation → Many actors with advanced AI

Why This Mattered: Once capabilities were widely distributed, there was no way to put the genie back in the bottle. The multipolar world was locked in.

Branch Point 3: Corporate Independence (2027-2029)

What Happened: AI companies operated largely independent of national governments, pursuing their own goals.

Alternative Paths:

  • State Control: Governments successfully regulate AI labs → Reduces number of actors
  • Actual Path: Corporate independence maintained → Adds more centers of power

Why This Mattered: Independent corporate actors meant that even within nations there was no unified AI strategy: fragmentation within as well as between countries.

Branch Point 4: Non-State Actor Access (2031-2033)

What Happened: Advanced AI capabilities became accessible to non-state actors: activists, criminals, and terrorists.

Alternative Paths:

  • Prevent Non-State Access: Strong controls limit to major actors → Fewer centers of power
  • Actual Path: Widespread access → Extreme fragmentation

Why This Mattered: Non-state access meant thousands of potential AI developers, making coordination completely impossible.

Branch Point 5: First Major Crisis (2033-2035)

What Happened: Major AI-related crisis occurred but didn’t lead to coordination breakthrough.

Alternative Paths:

  • Crisis Enables Cooperation: Shock leads to coordination → Might shift to Aligned AGI
  • Crisis Causes Catastrophe: Event spirals out of control → Shifts to Catastrophe
  • Actual Path: Crisis managed but lessons not learned → Competition continues

Why This Mattered: This was an opportunity to shift trajectories, but the crisis was neither catastrophic enough to end competition nor galvanizing enough to enable cooperation.

No Single Decisive Breakthrough:

  • No one actor achieves overwhelming advantage
  • Capabilities roughly balanced between major players
  • No “secret sauce” that enables dominance
  • Parallel development paths succeed

Partial Alignment:

  • AI systems partially controllable
  • Enough alignment to be useful to deploying actor
  • Not enough alignment to be safe globally
  • Different alignment approaches work partially

Proliferation Technologically Feasible:

  • Knowledge transferable
  • Compute governance ineffective
  • Can’t prevent parallel development
  • Open-source enables catch-up

Mutual Distrust Dominates:

  • Verification problems prevent trust
  • Security dilemma logic applies
  • Each actor fears others’ AI more than it values coordination
  • Historical tensions prevent cooperation

No Hegemonic Power:

  • US can’t maintain AI dominance
  • China can’t catch up decisively
  • No single nation or bloc controls AI
  • Balance prevents single winner

Free-Rider Problems:

  • Safety investments benefit all
  • Capability investments benefit only investor
  • Incentive to defect from safety agreements
  • “Tragedy of the commons” in AI development (see the payoff sketch below)
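
A minimal payoff sketch of this free-rider logic, using purely hypothetical numbers, shows why racing dominates safety investment in a one-shot interaction:

```python
# Hypothetical payoffs (row player, column player) for a one-shot
# safety-vs-race game between two AI developers. Numbers are illustrative.
PAYOFFS = {
    ("invest_safety", "invest_safety"): (3, 3),  # shared safety benefit
    ("invest_safety", "race"):          (0, 4),  # safety investor falls behind
    ("race",          "invest_safety"): (4, 0),
    ("race",          "race"):          (1, 1),  # mutual racing: worst joint outcome
}

def best_response(opponent_move):
    """Return the move that maximizes the row player's payoff against opponent_move."""
    return max(("invest_safety", "race"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

for opp in ("invest_safety", "race"):
    print(f"If the rival plays {opp!r}, the best response is {best_response(opp)!r}")
# Both lines print 'race': defection dominates, so the jointly better
# (invest_safety, invest_safety) outcome is not an equilibrium.
```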

Ideological Diversity:

  • No consensus on AI values
  • Different groups want different futures
  • Fundamental disagreements on governance
  • No shared vision enabling cooperation

Weak Global Governance:

  • International institutions lack power
  • National governments primary actors
  • No mechanism for global coordination
  • Enforcement impossible

Democratic and Authoritarian Competition:

  • Different governance systems compete
  • Each claims their AI approach better
  • Neither can decisively prove superiority
  • Competition persists

| Indicator | Current Status | Trend | Implication |
| --- | --- | --- | --- |
| US-China AI cooperation | Minimal/hostile | Worsening | Competition intensifying |
| International governance | Fragmented | Stalled | Coordination unlikely |
| Corporate independence | Growing | Accelerating | More actors, less control |
| Proliferation | Rapid | Accelerating | Cannot be reversed |
| Verification capabilities | Weak | Not improving | Trust cannot be built |
| Near-miss incidents | Emerging | Increasing | Risk accumulating |

We may already be in the early stages of this scenario. Current fragmentation and competition are consistent with this trajectory. CNAS research describes “the world-altering stakes of U.S.-China AI competition,” underscoring that the bifurcation is already underway.

Confirming signals (next 3-5 years): International coordination continuing to fail, AI capabilities spreading to more actors, corporate AI development increasingly independent, first AI-enhanced cyber conflicts, racing dynamics intensifying, multiple actors achieving near-AGI, no single actor pulling ahead decisively, and trust between AI actors declining.

Diverging signals: Successful international coordination would shift toward Aligned AGI or Pause scenarios; a single actor achieving dominance would create a different scenario; successful proliferation prevention would enable coordination; a catastrophic incident might force pause.

This scenario becomes established when: too many actors exist to coordinate, mutual distrust prevents cooperation, proliferation cannot be prevented, and the world becomes stuck in multipolar competition. Research from the Tony Blair Institute warns of “only an overall 10-15% probability that current governmental structures and societal systems are prepared to tackle the complex and interconnected issues.”

Stability and Crisis Management:

  • Preventing escalation during crises
  • Building confidence-building measures
  • Establishing norms and red lines
  • Crisis communication channels
  • De-escalation mechanisms

Selective Coordination:

  • Cooperate where possible even in competitive environment
  • Safety information sharing
  • Incident notification protocols
  • Basic transparency measures
  • Technical safety standards all can adopt

Resilience Building:

  • Robust systems resistant to AI attacks
  • Redundancy in critical infrastructure
  • Defensive AI capabilities
  • Democratic resilience against information warfare
  • Economic resilience to AI shocks

Defensive AI:

  • Detection of AI-generated content
  • Cyber defense against AI attacks
  • Robust systems design
  • Adversarial testing
  • Defensive capabilities development

Alignment for Competitive Context:

  • Making AI systems robustly beneficial even in competition
  • Avoiding dangerous emergent dynamics between AI systems
  • Stability analysis of multi-AI systems
  • Game theory of AI interaction (see the toy simulation below)
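
As a rough illustration of what such game-theoretic analysis might examine, the toy simulation below (strategies and payoffs are illustrative assumptions, not models of real actors) asks when conditional cooperation can survive repeated interaction:

```python
# Toy iterated safety game between two strategies. Payoffs and strategies
# are illustrative assumptions only.
COOPERATE, DEFECT = "cooperate", "defect"
PAYOFF = {(COOPERATE, COOPERATE): (3, 3), (COOPERATE, DEFECT): (0, 4),
          (DEFECT, COOPERATE): (4, 0),    (DEFECT, DEFECT): (1, 1)}

def always_defect(history):
    return DEFECT

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return history[-1][1] if history else COOPERATE

def play(strategy_a, strategy_b, rounds=50):
    history_a, history_b = [], []   # each stores (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two conditional cooperators sustain cooperation (about 3 points per round),
# while pairing with an unconditional defector locks both near the poor
# mutual-defection payoff after the first round.
print(play(tit_for_tat, tit_for_tat))
print(play(tit_for_tat, always_defect))
```

Pairs of conditional cooperators end up far better off than pairs locked in mutual defection, which is the formal core of the selective-cooperation interventions described above.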

Verification and Monitoring:

  • Technologies for detecting AI development
  • Verification methods for AI capabilities
  • Attribution technologies
  • Monitoring systems

Safety Despite Competition:

  • Safety measures that work even without coordination
  • Unilateral safety commitments
  • Technical safety that doesn’t require trust
  • Defense in depth

Crisis Prevention:

  • Establishing communication channels
  • Red lines and norms for AI use
  • Incident notification protocols
  • Confidence-building measures
  • De-escalation procedures

Resilience Policies:

  • Critical infrastructure protection
  • Cybersecurity requirements
  • Information warfare defenses
  • Democratic institution strengthening
  • Economic safety nets

Selective Cooperation:

  • Agreements on specific narrow issues
  • Safety information sharing protocols
  • Incident investigation cooperation
  • Technical standard alignment where possible

Proliferation Management:

  • Compute governance where feasible
  • Export controls on most dangerous capabilities
  • Know-your-customer requirements
  • Monitoring of AI development

For AI Labs:

  • Responsible behavior even without enforcement
  • Voluntary safety commitments
  • Information sharing on safety incidents
  • Avoiding most dangerous capabilities
  • Building safety culture

For Governments:

  • Maintaining defensive capabilities
  • Building resilience
  • Selective cooperation even with rivals
  • Protecting democratic institutions
  • Crisis management capacity

For International Organizations:

  • Building trust despite competition
  • Facilitating communication
  • Monitoring and transparency
  • Norm development
  • Crisis mediation

For Researchers:

  • Working on defensive AI and safety
  • Developing verification technologies
  • Building crisis prevention tools
  • Studying multi-AI dynamics
  • Promoting safety norms

For Policy Professionals:

  • Building crisis communication channels
  • Developing confidence-building measures
  • Creating resilient institutions
  • Facilitating selective cooperation

For Everyone:

  • Media literacy for AI-generated content
  • Supporting resilient democratic institutions
  • Advocating for responsible AI even in competition
  • Building social cohesion

Cyber-Resilient Actors:

  • Those with strong defensive AI capabilities
  • Nations/organizations with robust systems
  • Actors who invested in resilience
  • Can defend against AI-enhanced attacks

Adaptable Organizations:

  • Those who can navigate multipolar competition
  • Flexible, resilient structures
  • Not dependent on single AI system or actor
  • Can operate in chaotic environment

AI-Enhanced Militaries:

  • Nations with advanced military AI
  • Defensive superiority
  • Deterrence capability
  • But also increased risk of accidents

Decentralized Systems:

  • Those not dependent on centralized control
  • Can survive fragmentation
  • Resilient to individual AI actors
  • Less vulnerable to single-point failures

Everyone (In Absolute Terms):

  • Constant instability and conflict
  • Resources wasted on competition
  • Missed opportunities for cooperation
  • Living under constant risk of escalation

Democratic Institutions:

  • Difficult to maintain in information warfare environment
  • Complexity overwhelms oversight capacity
  • Legitimacy eroded
  • Vulnerable to AI-enhanced manipulation

Developing Nations:

  • Lack resources for AI competition
  • Fall further behind
  • Vulnerable to AI-empowered actors
  • No voice in AI governance

Truth and Trust:

  • Shared reality erodes
  • Information warfare constant
  • Trust in institutions declines
  • Epistemic commons degraded

Global Cooperation:

  • Impossible to coordinate on shared challenges
  • Climate change, pandemics, other risks harder to address
  • “Tragedy of the commons” in many domains
  • Collective action failures

AI-Enhanced Corporations:

  • Significant power in multipolar world
  • But also targets of conflict
  • Profit opportunities but high risks
  • Unclear loyalties create vulnerabilities

Major Powers (US, China, EU):

  • More AI capability than smaller actors
  • But also more threatened by competition
  • More to lose from instability
  • Caught in security dilemmas

Individual Liberty:

  • Less authoritarian control in fragmented world
  • But also less protection
  • More dangerous environment
  • Freedom but in chaos

Key Questions

Can multipolar AI competition remain stable, or does it inevitably lead to catastrophe?
Is cooperation possible even in competitive environment?
Can we prevent proliferation to dangerous actors?
Will defensive AI be sufficient to prevent catastrophe?
Can democratic institutions survive in AI-saturated information environment?
Will competition eventually consolidate or keep fragmenting?
At what point does instability become unmanageable?

Stability:

  • Is multipolar AI competition stable long-term?
  • Or does it inevitably escalate to catastrophe?
  • Can we manage persistent conflict without disaster?
  • How many near-misses before actual catastrophe?

Proliferation:

  • How far will AI capabilities spread?
  • Can we prevent most dangerous proliferation?
  • What happens when non-state actors have AGI?
  • Is there a point of no return on proliferation?

Coordination Possibilities:

  • Can selective cooperation work?
  • Will crisis enable coordination?
  • Or will competition prevent all cooperation?
  • Can trust be rebuilt?

Technical Dynamics:

  • How will multiple AI systems interact?
  • Stable equilibrium or escalatory dynamics?
  • Will defensive AI be sufficient?
  • What emergent properties from multi-AI interaction?

| Factor | Stabilizing Effect | Destabilizing Effect | Net Assessment |
| --- | --- | --- | --- |
| MAD Logic | No actor can safely attack | May not apply to AI as to nukes | Weakly stabilizing |
| Balance of Power | Rough parity prevents dominance | Creates security dilemma | Neutral |
| Multiple Actors | Distributed resilience | More failure modes | Destabilizing |
| Learning from Near-Misses | Norms may develop | Each incident increases risk | Time-dependent |
| Proliferation | No single point of failure | Dangerous actors gain access | Strongly destabilizing |
| Technical Complexity | Defense may improve | Emergent failures unpredictable | Destabilizing |

Several factors suggest multipolar competition could persist without immediate catastrophe. Balance of power logic—Mutually Assured Destruction—means no actor can safely attack others, with rough parity maintaining the status quo, similar to how the Cold War proved dangerous but survivable. Evolutionary pressure may lead actors to learn from near-misses, with selective cooperation emerging and norms developing over time. Distributed resilience means no single point of failure: multiple approaches to AI development continue, and diversity provides robustness.

Countervailing factors suggest inherent instability. Escalatory dynamics follow arms race logic: each near-miss increases risk, no de-escalation mechanism exists, and eventually a near-miss becomes catastrophe. Increasing complexity means more AI systems create more potential failures, interactions become unpredictable, humans lose ability to control, and emergent catastrophe from complex interactions becomes possible. Proliferation to dangerous actors means that as capabilities spread, worst-case actors eventually acquire them—one bad actor is enough for disaster. Coordination decay means competition prevents learning, trust continues eroding, selective cooperation fails, and a spiral toward complete breakdown ensues.
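
A back-of-the-envelope calculation, using purely illustrative numbers, shows how the "eventually a near-miss becomes catastrophe" logic compounds:

```python
# Cumulative probability that at least one near-miss escalates into
# catastrophe, given an assumed number of incidents per year and an assumed
# chance that any single incident spirals out of control. Illustrative only.

def cumulative_risk(years, incidents_per_year, p_escalate):
    survive = (1 - p_escalate) ** (incidents_per_year * years)
    return 1 - survive

for years in (5, 10, 20):
    risk = cumulative_risk(years, incidents_per_year=3, p_escalate=0.01)
    print(f"{years:2d} years: {risk:.0%} chance that at least one incident escalates")
```

Even a 1% chance that any single incident escalates compounds to roughly a 45% chance of catastrophe over twenty years at three incidents per year, which is the sense in which this equilibrium builds risk over time.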

Net Assessment: Unstable But Not Immediately Catastrophic

This scenario probably represents temporary stability that can persist for years or decades but builds risk over time. Eventually it either: stabilizes into something safer (toward Aligned AGI if coordination emerges), collapses into catastrophe (toward Misaligned Catastrophe), or forces a pause (toward Pause scenario). The central question is the timeline and transition path.

To Aligned AGI:

  • If crisis enables cooperation breakthrough
  • If selective cooperation strengthens over time
  • If competitive dynamics find stable beneficial equilibrium
  • Requires: coordination success, alignment breakthrough

To Misaligned Catastrophe:

  • If one actor achieves dangerous AGI without alignment
  • If multi-AI interactions produce emergent catastrophe
  • If near-miss becomes actual catastrophe
  • If proliferation enables truly dangerous actor

To Pause and Redirect:

  • If crisis so severe it forces global pause
  • If all actors realize competition too dangerous
  • If catastrophe narrowly avoided creates political will
  • Requires: shock sufficient to overcome competition

From Slow Takeoff Muddle:

  • If muddling’s partial coordination breaks down completely
  • If competition intensifies beyond current levels
  • If fragmentation increases

Elements Often Combined:

  • Might muddle through multipolar competition (Muddle + Multipolar)
  • Different regions in different scenarios
  • Transition from Multipolar to other scenarios over time

Multipolar as Intermediate State:

  • May be unstable equilibrium between Muddle and Catastrophe
  • Or transition state from current world to Aligned AGI
  • Temporary phase rather than end state
| Historical Period | Similarities | Key Differences | Lessons for AI |
| --- | --- | --- | --- |
| Cold War (1947-1991) | Arms race, MAD logic, proxy conflicts, near-misses | AI development faster; more actors; harder verification | Managed dangerous competition for decades, but multiple near-catastrophes |
| Cybersecurity Landscape | Persistent conflict, attribution difficulty, constant warfare | AI more powerful; potential consequences much worse | Can maintain conflict without total collapse, at significant cost |
| Pre-WWI Europe | Multiple powers, arms race, complex alliances | AI changes timescales; more actors than European powers | Multipolar competition can be stable until it suddenly is not |
| Nuclear Proliferation | Dual-use technology, verification challenges | AI proliferates faster; cannot be controlled like fissile material | Nonproliferation largely failed for AI; different approach needed |

The Cold War offers the closest historical parallel, with its bipolar competition, Mutually Assured Destruction logic, proxy conflicts, and near-miss incidents producing persistent tension without direct catastrophe. However, critical differences limit the analogy’s usefulness. West Point’s Modern War Institute notes that “what is unprecedented is the way in which the prevalence of the private sector in AI may complicate nuclear deterrence, as the dual-use technologies companies produce become ever more interwoven with America’s evolving NC3 architecture.”

The cybersecurity landscape offers a model of persistent, low-level conflict with difficult attribution, evolving offensive and defensive capabilities, and multiple state and non-state actors—but AI threatens to make this conflict far more consequential. Pre-WWI Europe, with its multiple competing powers, arms race dynamics, and complex alliance systems, reminds us that multipolar competition can appear stable until it suddenly is not—a sobering lesson given the Arms Control Association warning that “AI could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations.”

| Source | Estimate |
| --- | --- |
| Baseline estimate | 20-30% |
| Pessimists on coordination | 30-50% |
| Optimists on coordination | 10-20% |
| Median view | 25-30% |

Reasons for Higher Probability:

  • Current trajectory shows fragmentation
  • International cooperation very difficult historically
  • Proliferation hard to prevent
  • Multiple actors pursuing AI independently
  • Economic and political incentives favor competition
  • Trust insufficient for robust cooperation

Reasons for Lower Probability:

  • Very unstable, likely to transition to other scenarios
  • Crisis might enable coordination
  • Or might collapse into catastrophe
  • Hard to maintain multipolar equilibrium long-term
  • Eventually consolidates or collapses

Central Estimate Rationale: The 20-30% range reflects that we are moving toward fragmentation, but that this state is unstable. It is rated higher than Catastrophe because the situation is not immediately fatal, and lower than Muddle because it is less stable. The wide range reflects uncertainty about whether competition can persist.

Increases Probability:

  • International coordination failing
  • Proliferation accelerating
  • Corporate independence increasing
  • Multiple actors achieving advanced AI
  • Trust between actors declining
  • Crisis management without learning

Decreases Probability:

  • Coordination breakthrough (→ toward Aligned AGI or Pause)
  • Single actor achieving dominance (→ different scenario)
  • Successful proliferation prevention
  • Crisis enabling cooperation
  • Catastrophic incident (→ toward Catastrophe or forces Pause)

For Individuals:

  • Constant uncertainty about information
  • AI-generated content everywhere
  • Difficulty distinguishing truth
  • Economic instability from AI competition
  • Constant cyber threats
  • Living under risk of escalation

For Organizations:

  • Constant defensive posture
  • Must have AI capabilities to compete
  • Continuous cyber warfare
  • Uncertain regulatory environment
  • Must navigate multipolar landscape

For Nations:

  • Permanent AI arms race
  • Defensive and offensive AI
  • Constant cyber conflict
  • Economic competition
  • Inability to coordinate on global challenges

Better Than Catastrophe:

  • Humans still in control (mostly)
  • No existential catastrophe (yet)
  • Multiple centers of power prevent total domination
  • Some benefits from AI competition

Worse Than Muddle:

  • More unstable and dangerous
  • Less coordination on safety
  • More conflict and competition
  • Higher risk of catastrophe
  • More resources wasted on competition

Very Different From Aligned AGI:

  • No cooperation on beneficial deployment
  • Competition prevents optimal outcomes
  • Persistent conflict and waste
  • Living in danger rather than flourishing

Less Controlled Than Pause:

  • No deliberate slowdown
  • Racing continues
  • No time for careful alignment work
  • Driven by competition not choice

The analysis in this scenario draws on research from multiple authoritative institutions: