Multi-Actor Strategic Landscape
Core thesis: Risk is primarily determined by which actors develop transformative AI (TAI) and under what incentive structures. The strategic landscape of competition and cooperation shapes outcomes.
Overview
This model analyzes how AI existential risk depends on which actors—US frontier labs, Chinese developers, open-source communities, or malicious actors—develop transformative AI first, and under what competitive conditions. The core insight is that actor identity and incentive structures may matter as much as technical alignment progress in determining outcomes.
The strategic landscape shifted dramatically in 2024-2025. According to Recorded Future analysis↗, the gap in overall model performance between the best US and Chinese models narrowed from 9.26% in January 2024 to just 1.70% by February 2025. This was catalyzed by DeepSeek’s R1 release in January 2025, which matched OpenAI’s o1 performance↗ while training for just $1.6 million—a fraction of US costs. Similarly, open-source models closed to within 1.70%↗ of frontier closed models on Chatbot Arena, fundamentally changing proliferation dynamics.
Despite narrowing capability gaps, structural asymmetries persist. US private AI investment topped $109 billion in 2024↗, over 12 times China's roughly $9 billion. The US operates some 4,049 data centers versus China's 379, the largest single advantage↗ the US enjoys. Yet China leads in deployment: it installed approximately 295,000 industrial robots in 2024, more than the rest of the world combined, and accounts for 69.7% of global AI patent filings.
Capability Gap Estimates (2024-2025)
The following table synthesizes publicly available data on relative AI capabilities across actor categories. Estimates draw from benchmark performance, investment levels, and expert assessments.
| Actor Category | Capability vs Frontier | Trend | Key Evidence | Source |
|---|---|---|---|---|
| US Frontier Labs | 100% (reference) | Stable | GPT-4.5, Claude 3.5, Gemini 2.0 define frontier | Industry consensus |
| Chinese Labs (aggregate) | 98.3% | Rapidly closing | Gap narrowed from 9.26% to 1.70% (Jan 2024 - Feb 2025) | Recorded Future↗ |
| DeepSeek specifically | ~100% on benchmarks | Matched frontier | R1 matched o1 at $1.6M training cost; gold medal at IMO 2025 | CSIS↗ |
| Open-Source (Llama, Qwen) | 98.3% | Rapidly closing | Gap narrowed from 8.04% to 1.70% on Chatbot Arena | State of Open-Source AI↗ |
| Malicious Actor Access | ~40-60% | Increasing | Access via open-source, jailbreaks, or theft | Expert estimate |
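As a quick plausibility check on the headline trend, the sketch below converts the Recorded Future gap figures into a monthly closure rate. The linear extrapolation is purely illustrative, not a forecast.

```python
# Back-of-envelope on the US-China benchmark gap (Recorded Future figures).
# A linear trend is an illustrative assumption, not a forecast.
gap_start, gap_end = 9.26, 1.70   # percentage points, Jan 2024 -> Feb 2025
months = 13

rate = (gap_start - gap_end) / months    # ~0.58 points/month
months_to_parity = gap_end / rate        # ~2.9 months if the trend held

print(f"Closure rate: {rate:.2f} points/month")
print(f"Naive months to parity: {months_to_parity:.1f}")
```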
Investment and Infrastructure Asymmetries
| Dimension | United States | China | Ratio | Implications |
|---|---|---|---|---|
| Private AI Investment (2024) | $109 billion | ~$9 billion | 12:1 | US leads funding despite capability parity |
| Data Centers | 4,049 | 379 | 11:1 | Largest structural US advantage |
| New Data Center Capacity (2024) | 5.8 GW | Lower | — | Continued infrastructure expansion |
| Industrial Robot Installations (2024) | 34,000 | 295,000 | 1:9 | China leads deployment/application |
| AI Patents (2023) | 13% of global | 69.7% of global | 1:5 | China dominates IP filings |
| AI Research Citations (2023) | 13% of global | 22.6% of global | 1:2 | China leads academic output |
Sources: CFR↗, RAND↗, Stanford HAI↗
Key Dynamics
The following diagram illustrates how actor competition dynamics flow through to risk outcomes:
The key mechanisms are (a toy numerical sketch follows this list):

- Competition intensity → Safety shortcuts → Misalignment risk: As US-China competition intensifies (currently 0.75 on a normalized 0-1 scale), labs face pressure to accelerate timelines, potentially cutting safety corners.
- Capability diffusion → Malicious access → Misuse risk: Open-source releases (now within 1.70% of frontier) enable rapid proliferation to actors who may lack safety constraints or beneficial intent.
- First-mover advantage → Winner-take-all → Reduced caution: If a decisive strategic advantage accrues to the first mover, actors rationally accept higher alignment risk to capture it.
- Democratic oversight → Deployment delays → Capability gaps: Strong oversight in democratic nations may create windows in which authoritarian actors gain advantages, creating perverse incentives against regulation.
- Transparency → Better coordination → Reduced racing: Conversely, capability transparency and safety research sharing (currently ~0.6 on the same normalized scale) can reduce competitive pressure.
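A minimal sketch of how these mechanisms might compose, assuming the model's normalized inputs (competition 0.75, openness 0.6, first-mover advantage 0.7). The functional forms and weights are illustrative assumptions, not the model's actual equations.

```python
# Toy composition of the mechanisms above. All inputs are on a 0-1 scale;
# the linear weights are illustrative assumptions, not the model's equations.

def racing_pressure(competition: float, transparency: float, fma: float) -> float:
    """Competition and first-mover advantage raise racing pressure;
    transparency (coordination) dampens it (mechanisms 1, 3, 5)."""
    raw = 0.5 * competition + 0.5 * fma - 0.4 * transparency
    return max(0.0, min(1.0, raw))

def misuse_exposure(open_gap_pct: float) -> float:
    """Smaller open-source gap -> wider malicious-actor access (mechanism 2)."""
    return max(0.0, 1.0 - open_gap_pct / 10.0)  # a 10-point gap ~ negligible access

print(racing_pressure(competition=0.75, transparency=0.6, fma=0.7))  # ~0.49
print(misuse_exposure(open_gap_pct=1.70))                            # ~0.83
```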
Risk Pathways
| Pathway | Description | Estimate |
|---|---|---|
| Unaligned Singleton | One misaligned AI gains decisive advantage | 8% |
| Multi-Agent Conflict | Multiple powerful AI systems in conflict | 6% |
| Authoritarian Lock-in | AI enables permanent authoritarian control | 5% |
| Catastrophic Misuse | Intentional misuse causes catastrophe | 7% |
| Combined X-Risk | Total from all pathways | ~25% |
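The four pathway estimates sum to 26%, slightly above the stated ~25% combined figure. One reading, which is our assumption rather than anything stated in the model, is that the pathways combine as roughly independent events rather than additively:

```python
# Combining the pathway estimates under independence (our assumption)
# rather than simple addition, which would give 26%.
pathways = {
    "unaligned_singleton":  0.08,
    "multi_agent_conflict": 0.06,
    "authoritarian_lockin": 0.05,
    "catastrophic_misuse":  0.07,
}

p_none = 1.0
for p in pathways.values():
    p_none *= 1.0 - p

print(f"Combined risk: {1.0 - p_none:.1%}")  # ~23.6%, in the ballpark of ~25%
```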
Actor Categories
| Category | Key Actors |
|---|---|
| Leading US | OpenAI, Anthropic, Google DeepMind, Meta |
| Leading China | DeepSeek, Baidu, Alibaba, ByteDance |
| Open-Source | Meta (Llama), Mistral, Hugging Face ecosystem |
| Malicious | Cybercriminals, terrorists, rogue states |
| Governments | US (NSA, DARPA), China (PLA, MSS), EU |
Full Variable List
The key-dynamics diagram simplifies the full model. The complete Multi-Actor Strategic Landscape includes:
Actor Capabilities (15 variables): Leading US lab, leading Chinese lab, US government AI, Chinese government AI, open-source ecosystem, second-tier corporate labs, academic research, cybercriminal AI, terrorist access, authoritarian regime AI, democratic allies AI, corporate espionage, state IP theft, insider threat, supply chain security.
Actor Incentives (12 variables): US-China competition, profit pressure, academic openness, classification levels, democratic accountability, authoritarian control, geopolitical crises, economic desperation, military doctrine, regulatory arbitrage, talent mobility, public-private partnerships.
Information & Transparency (7 variables): Capability disclosure, safety sharing, incident reporting, capability intelligence, dual-use publication norms, evaluation standards, third-party verification.
Alignment & Control (8 variables): US actor alignment, China actor alignment, Constitutional AI effectiveness, human oversight scalability, kill switch reliability, containment protocols, red-teaming, post-deployment monitoring.
Strategic Outcomes (8 variables): First-mover advantage, winner-take-all dynamics, diffusion speed, multipolar vs bipolar, offense-defense balance, escalation control, governance lock-in, misuse probability.
Existential Risk Paths (5 variables): Unaligned singleton, multi-agent conflict, authoritarian lock-in, economic/social collapse, combined risk.
Strategic Importance
Magnitude Assessment
The multi-actor landscape determines whether AI development is coordinated or conflictual. Actor heterogeneity creates both risks (racing, proliferation) and opportunities (diverse approaches).
| Dimension | Assessment | Quantitative Estimate |
|---|---|---|
| Potential severity | High - multipolar dynamics drive racing and proliferation | Actor landscape contributes 40-60% of total risk variance |
| Probability-weighted importance | High - currently in competitive multipolar phase | 75% probability of continued multipolar competition through 2030 |
| Comparative ranking | Essential context for governance and coordination strategies | #2 priority behind technical alignment |
| Malleability | Medium - actor incentives partially shiftable | 20-30% of racing dynamics addressable via policy |
Actor Safety Assessment
| Actor Category | Safety Investment | Safety Culture | Transparency | Overall Safety Grade |
|---|---|---|---|---|
| Anthropic | ~30% of budget | Strong | High | A- |
| OpenAI | ~15% of budget | Declining | Medium | B- |
| Google DeepMind | ~20% of budget | Strong | Medium | B+ |
| Meta AI | ~10% of budget | Moderate | High (open-source) | B- |
| Chinese Labs | ~5% of budget | Unknown | Low | C- (estimated) |
| Open-Source Ecosystem | Minimal | Variable | Very high | C |
Diffusion Timeline Estimates
| Capability Level | US Labs | Chinese Labs | Open-Source | Malicious Actors |
|---|---|---|---|---|
| GPT-4 class | 2023 | 2024-2025 | 2024-2025 | 2025-2026 |
| GPT-5 class (projected) | 2025 | 2026-2027 | 2027-2028 | 2028-2030 |
| Autonomous agents (dangerous) | 2025-2026 | 2026-2027 | 2027-2028 | 2028-2029 |
Key Finding: The open-source lag has collapsed. As of late 2025, the center of gravity for open-weight models has shifted toward China↗, with DeepSeek and Qwen becoming household names. US firms released fewer open-weight models citing commercial and safety constraints, while Chinese labs treated open-weight leadership as a deliberate catch-up strategy. Meta—long a champion of frontier open models—has delayed release of Llama Behemoth↗ and suggested it may keep future “superintelligence” models behind paywalls.
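Reading the timeline table as midpoint lags behind US labs gives a rough sense of diffusion speed per actor category. The midpoint arithmetic below is our own simplification of the stated year ranges.

```python
# Midpoint diffusion lags (years behind US labs) implied by the table above.
# Midpoints of the stated year ranges are our own simplification.
timeline = {  # capability: (US, China, open-source, malicious) midpoints
    "GPT-4 class":       (2023.0, 2024.5, 2024.5, 2025.5),
    "GPT-5 class":       (2025.0, 2026.5, 2027.5, 2029.0),
    "Autonomous agents": (2025.5, 2026.5, 2027.5, 2028.5),
}
for cap, (us, cn, oss, mal) in timeline.items():
    print(f"{cap}: China +{cn - us:.1f}y, open-source +{oss - us:.1f}y, "
          f"malicious +{mal - us:.1f}y")
```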
First-Mover Advantage: Evidence Assessment
The model's risk estimates depend critically on the magnitude of first-mover advantage (FMA). Strong first-mover advantages create racing incentives; weak ones reduce them. Current evidence suggests first-mover advantages are significant but not overwhelming:
| Evidence Type | Finding | Implication for FMA |
|---|---|---|
| Historical analysis | First movers have 47% failure rate; only 11% become market leaders (Golder & Tellis↗) | Weak FMA |
| AI competitive landscape | 2,011 companies in 2024 ML/AI landscape, 578 new entrants since 2023 | Weak FMA |
| Model replication | 11 different developers globally achieved GPT-4-level models in 2024 | Weak FMA |
| Cloud market | AWS and Azure trading leadership position; “more than one winner” possible | Moderate FMA |
| Network effects | AI systems less network-effect-driven than social platforms | Weak FMA |
| TAI-specific dynamics | Decisive strategic advantage at TAI level remains uncertain | Unknown |
Key insight: Evidence from the Abundance Institute↗ suggests “no signs of winner-take-all dynamics” in the current AI ecosystem. However, TAI (transformative AI) may differ qualitatively if it enables rapid capability improvements or strategic advantages not available to followers. The model’s 0.7 first-mover advantage estimate may be too high based on current evidence, but TAI-level dynamics remain highly uncertain.
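To see why the FMA parameter matters, consider a toy expected-value calculation. The payoffs and probabilities below are entirely illustrative assumptions; only the 0.7 figure comes from the model discussed above.

```python
# Toy racing decision under first-mover advantage (FMA). All payoffs are
# illustrative assumptions; FMA = 0.7 is the model estimate discussed above.

def expected_value(win_prob, prize, misalignment_prob, catastrophe_cost):
    return win_prob * prize - misalignment_prob * catastrophe_cost

FMA = 0.7        # fraction of total value captured by the first mover
PRIZE = 100.0    # arbitrary units
COST = 200.0     # cost of a misalignment catastrophe, same units

# Racing improves the odds of arriving first but cuts more safety corners.
race    = expected_value(0.6, FMA * PRIZE, 0.15, COST)  # 12.0
caution = expected_value(0.3, FMA * PRIZE, 0.05, COST)  # 11.0

# With these (assumed) payoffs, racing narrowly wins at FMA = 0.7;
# below FMA ~ 0.67 caution dominates, illustrating why the parameter matters.
print(f"Race: {race:.1f}, Caution: {caution:.1f}")
```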
Resource Implications
Understanding the actor landscape enables:
- Targeted engagement with highest-leverage actors: Focus on top 3-4 US labs could cover 70% of frontier capability
- Coalition-building for safety standards: Anthropic-OpenAI-DeepMind coalition would set de facto standards
- Monitoring of capability diffusion: $50-100M/year for comprehensive capability intelligence
- Anticipation of strategic behavior and reactions: Game-theoretic modeling investment ~$10-20M/year
Recommended investment: $100-200M/year in actor-focused governance work (vs. ~$20-30M current).
Key Cruxes
| Crux | If True | If False | Current Probability |
|---|---|---|---|
| Leading coalition is stable | Top 3 can set norms | Racing to bottom | 45% |
| Safety can be coordination point | Voluntary standards viable | Regulation required | 35% |
| China is engageable on safety | Global coordination possible | Bifurcated governance | 30% |
| Diffusion to malicious actors is slow | Window for governance | Proliferation dominates | 50% |
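If the four cruxes were independent, which is our assumption rather than the model's, the joint scenarios would split roughly as follows; in practice the cruxes are likely correlated, so treat these numbers as rough intuition.

```python
# Joint crux outcomes under an independence assumption (ours; the cruxes
# are likely correlated in practice, so treat these as rough intuition).
cruxes = {
    "stable_coalition":    0.45,
    "safety_coordination": 0.35,
    "china_engageable":    0.30,
    "slow_diffusion":      0.50,
}

p_all = 1.0
p_none = 1.0
for p in cruxes.values():
    p_all *= p
    p_none *= 1.0 - p

print(f"All four resolve favorably:   {p_all:.1%}")   # ~2.4%
print(f"All four resolve unfavorably: {p_none:.1%}")  # ~12.5%
```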
Multipolar vs Unipolar Governance Considerations
A crucial variable in this model is whether AI development converges toward unipolar (single dominant actor or coalition) or multipolar (distributed power among multiple actors) outcomes. Each presents distinct risk profiles:
| Governance Structure | Key Risks | Key Advantages |
|---|---|---|
| Unipolar (single dominant actor) | Value lock-in, institutional stagnation, internal corruption, single points of failure | Coordination easier, racing reduced, unified safety standards |
| Multipolar (distributed power) | Unchecked proliferation, system instability, coordination failures, racing dynamics | Diversity of approaches, no single point of failure, competitive pressure for safety |
Current research from AI Impacts↗ identifies key research questions: What “considerations might tip us between multipolar and unipolar scenarios”? What “risks [are] distinctive to a multipolar scenario”? The CO/AI analysis↗ notes that while current AI safety discussions often default to unipolar frameworks, “exploring decentralized governance structures could address key risks like value lock-in and institutional stagnation.”
Current assessment: The model estimates 55% probability of continued multipolar development, with the US-China bifurcation appearing increasingly stable. Geopolitical tensions, divergent regulatory approaches, and the collapse of open-source lags all point toward a world with multiple competing AI powers rather than a single dominant actor.
Limitations
- Capability estimates date rapidly: The 2024-2025 data showing near-parity may not persist; breakthrough capabilities could restore gaps.
- Safety investment data are opaque: Lab safety budgets are not publicly disclosed; the estimates above are inferential.
- TAI dynamics are uncertain: Current competitive patterns may not predict TAI-level dynamics, where decisive advantages could differ fundamentally.
- Geopolitical volatility: US-China relations, export control effectiveness, and regulatory trajectories are highly uncertain.
- Malicious actor access is hard to estimate: Underground markets and state-sponsored theft create significant uncertainty in capability diffusion.
Sources
- Recorded Future: US-China AI Gap Analysis (2025)↗
- RAND: China’s AI Models Closing the Gap (2025)↗
- Council on Foreign Relations: China, the United States, and the AI Race↗
- Boston University: DeepSeek and AI Frontier (2025)↗
- State of Open-Source AI 2025↗
- CSIS: DeepSeek, Huawei, and US-China AI Race↗
- Abundance Institute: AI Competitive Landscape↗
- AI Impacts: Multipolar Research Projects↗
- Frontier Model Forum: Progress Update 2024↗