International Coordination
Overview
International Coordination measures the degree of global cooperation on AI governance and safety across nations and institutions. Higher international coordination is better—it enables collective responses to risks that transcend borders, prevents regulatory arbitrage, and reduces dangerous racing dynamics between nations. This parameter tracks treaty participation, shared standards adoption, institutional network strength, and the quality of bilateral and multilateral dialogues on AI risks.
Geopolitical dynamics, AI development trajectories, and near-miss incidents all shape whether international coordination strengthens or fragments—with major implications for collective action on AI risk. High coordination enables shared safety standards that prevent racing to the bottom; low coordination risks regulatory fragmentation and competitive dynamics that undermine safety.
This parameter underpins critical AI governance mechanisms. Strong international coordination enables shared safety standards that prevent racing dynamics where competitive pressure sacrifices safety for speed—research suggests uncoordinated development could reduce safety investment by 30-60% compared to coordinated scenarios. Technical cooperation enables faster dissemination of safety research and evaluation methods across borders, while multilateral mechanisms prove essential for coordinated responses to AI incidents with global implications. Finally, global governance of transformative technology requires international buy-in for democratic legitimacy; unilateral action by individual powers lacks broader consent.
Parameter Network
Contributes to: Governance Capacity
Primary outcomes affected:
- Existential Catastrophe ↓↓ — Coordination prevents racing dynamics and enables collective response
- Transition Smoothness ↓↓ — International frameworks manage global disruption
- Steady State ↓↓ — Who governs AI shapes long-term power distribution
Current State Assessment
Key Metrics
| Metric | Current Value | Historical Baseline | Trend |
|---|---|---|---|
| AISI Network countries | 11 + EU | 0 (2022) | Growing |
| Combined AISI budget | ~$150M annually | $0 | Establishing |
| Binding treaty signatories | 14 (Council of Europe) | 0 | Growing |
| Summit declaration adherence | Mixed (US/UK refused Paris 2025) | High (2023 Bletchley) | Fragmenting |
| Industry safety commitments | 16 companies (Seoul) | 0 | Stable |
Sources: AISI Network↗, Council of Europe AI Treaty↗, Bletchley Declaration↗
Institutional Infrastructure
| Institution | Type | Participants | Budget | Status |
|---|---|---|---|---|
| UK AI Security Institute | National | UK | ~$65M | Operational |
| US AI Safety Institute (CAISI) | National | US | ~$10M | Refocused (2025) |
| EU AI Office | Supranational | EU-27 | ~$8M | Operational |
| International AISI Network | Multilateral | 11 countries + EU | $11M+ joint research | Building capacity |
| G7 Hiroshima Process | Multilateral | G7 | N/A | Monitoring framework active |
| UN Scientific Panel on AI | Global | Pending | TBD | Proposed Sept 2024 |
Note: The AISI Network held its inaugural meeting in November 2024 in San Francisco, completing a first joint testing exercise across the US, UK, and Singapore institutes. The network announced $11M+ in commitments, including $7.2M from South Korea for synthetic content research and $3M from the Knight Foundation.
What “Healthy International Coordination” Looks Like
Healthy international coordination on AI safety would exhibit several key characteristics that enable effective global governance while respecting national sovereignty and diverse regulatory philosophies.
Key Characteristics of Healthy Coordination
- Binding mutual commitments: Countries agree to enforceable safety standards with verification mechanisms, not just voluntary declarations
- Technical cooperation infrastructure: Robust information sharing on capabilities, risks, and evaluation methods across national boundaries
- Inclusive participation: Major AI powers (US, China, EU) and emerging AI nations (India, UAE, Singapore) all engaged substantively
- Rapid response capability: Mechanisms exist for coordinated action if concerning capabilities emerge or incidents occur
- Sustained political commitment: Cooperation survives changes in national leadership and geopolitical tensions
Current Gap Assessment
| Characteristic | Current Status | Gap | Trend |
|---|---|---|---|
| Binding commitments | Council of Europe treaty (14 signatories) | Large—most frameworks voluntary | Worsening (US/UK declined to sign at Paris 2025) |
| Technical cooperation | AISI network operational; joint evaluations begun | Medium—capacity still building | Improving (first joint tests Nov 2024) |
| Inclusive participation | US/UK diverging from broader consensus | Large—key actors withdrawing | Worsening (governance bifurcation) |
| Rapid response | No mechanism exists | Very large | Flat (no progress since Bletchley) |
| Sustained commitment | Fragile—US pivoted away in 2025 | Large—political volatility | Worsening (administration reversals) |
Coordination Mechanisms: Comparative Effectiveness
| Mechanism Type | Example | Enforcement | Coverage | Effectiveness Score (1-5) |
|---|---|---|---|---|
| Binding treaties | Council of Europe AI Treaty | Legal obligations | 14 countries | 2/5 (limited participation) |
| Voluntary summits | Bletchley, Seoul, Paris | Reputational pressure | 28+ countries | 2/5 (non-binding, fragmenting) |
| Technical networks | AISI Network | Peer cooperation | 11 countries + EU | 3/5 (building capacity, concrete outputs) |
| Industry commitments | Frontier AI Safety Commitments | Self-regulation | 16 companies | 2/5 (voluntary, variable compliance) |
| Regulatory extraterritoriality | EU AI Act | Legal for EU market access | Global (via Brussels Effect) | 4/5 (enforceable, broad reach) |
| Bilateral agreements | US-UK MOU, US-China dialogue | Government-to-government | Pairwise | 3/5 (limited scope but sustained) |
| UN frameworks | Global Digital Compact, proposed Scientific Panel | Norm-setting | Universal participation | 2/5 (early stage, unclear enforcement) |
Note: Effectiveness scores assess actual impact on coordination quality based on enforcement capability, coverage breadth, and sustained operation. The EU AI Act scores highest due to legal enforceability and market leverage creating de facto global standards.
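One way to make this rubric explicit is to treat each headline score as the rounded mean of 1-5 sub-scores for enforcement, coverage, and sustained operation. The sketch below is a hypothetical operationalization with illustrative sub-scores; it is not how the scores in the table above were actually derived.

```python
# Hypothetical rubric sketch: each mechanism gets 1-5 sub-scores for
# enforcement, coverage, and sustained operation; the headline score is the
# rounded mean. The sub-scores below are illustrative assumptions.
from statistics import mean

mechanisms = {
    "Council of Europe AI Treaty": {"enforcement": 4, "coverage": 1, "sustained": 2},
    "EU AI Act":                   {"enforcement": 5, "coverage": 4, "sustained": 3},
    "AISI Network":                {"enforcement": 2, "coverage": 3, "sustained": 3},
}

for name, subs in mechanisms.items():
    score = round(mean(subs.values()))
    print(f"{name}: {score}/5 (enforcement={subs['enforcement']}, "
          f"coverage={subs['coverage']}, sustained={subs['sustained']})")
```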
Factors That Decrease International Coordination (Threats)
Geopolitical Competition
| Threat | Mechanism | Evidence |
|---|---|---|
| US-China rivalry | AI seen as decisive for economic/military competition | Export controls since 2022; $150B+ Chinese AI investment |
| National security framing | Safety cooperation viewed as sharing strategic advantage | UK renamed AISI to “AI Security Institute” (Feb 2025) |
| Trust deficit | Decades of strategic competition limit information sharing | US/China dialogue constrained to “working level” |
Domestic Political Volatility
| Threat | Mechanism | Evidence |
|---|---|---|
| Administration changes | New governments can reverse predecessors’ commitments | Trump revoked Biden EO 14110 within hours of taking office |
| Innovation vs. regulation framing | Safety cooperation portrayed as competitiveness threat | Vance at Paris: “cannot and will not” accept foreign regulation |
| Industry influence | Tech companies lobby against binding international rules | $100B+ annual AI investment creates strong lobbying capacity |
Structural Barriers
AI governance faces fundamental challenges that make international coordination harder than previous technology regimes. Unlike nuclear weapons, AI capabilities cannot be physically inspected through traditional verification methods—recent research on verification methods for international AI agreements explores techniques like differential privacy and secure multi-party computation, but these remain immature compared to nuclear inspection regimes. Nearly all AI research exhibits dual-use universality with both beneficial and harmful applications, making export controls more difficult than weapons-specific technologies. The speed mismatch proves severe: AI capabilities advance weekly while international diplomacy operates on annual cycles, creating persistent gaps between technical reality and governance frameworks. Finally, distributed development across thousands of organizations globally—from major labs to academic institutions to startups—makes comprehensive monitoring far harder than tracking state-run weapons programs.
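To make the verification point concrete, the toy sketch below shows the flavor of a secure multi-party computation approach: additive secret sharing lets participants disclose an aggregate figure (here, hypothetical compute totals from three labs) without revealing any individual value. This is an illustrative simplification, not a protocol from the cited research; the lab names and numbers are made up.

```python
# Toy illustration (not a production protocol): additive secret sharing lets
# several labs report an aggregate figure -- e.g. total training compute --
# without any single lab's number being revealed to the verifier.
import random

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime


def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def aggregate(all_shares: list[list[int]]) -> int:
    """Each party sums the shares it received; the verifier sums those totals."""
    per_party_totals = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(per_party_totals) % PRIME


# Hypothetical compute disclosures (in PF-days) from three labs.
lab_compute = {"lab_a": 120, "lab_b": 75, "lab_c": 210}
shares_matrix = [share(v, n_parties=3) for v in lab_compute.values()]

print(aggregate(shares_matrix))  # 405 -- the aggregate is learned, but no
                                 # individual value was sent in the clear.
```

Even this toy shows the remaining gap: secret sharing hides values but does not prove they were reported honestly, which is precisely where current verification methods remain immature.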
Factors That Increase International Coordination (Supports)
Shared Risk Recognition
| Factor | Mechanism | Status | Evidence |
|---|---|---|---|
| Catastrophic risk consensus | 28+ countries acknowledged “potential for serious, even catastrophic harm” | Bletchley Declaration (2023) | First formal international recognition of existential AI risks |
| Near-miss incidents | AI-caused harms could motivate stronger cooperation | No major incidents yet | Academic research suggests 15-25% probability of major AI incident by 2030 |
| Scientific consensus | Growing expert agreement on risk severity | AI Safety Summit series building evidence base | UN Scientific Panel on AI proposed Sept 2024, modeled on IPCC |
| US-China dialogue | Limited technical cooperation despite broader tensions | Biden-Xi agreement Nov 2024 | Agreement to exclude AI from nuclear command/control systems |
Institutional Development
| Factor | Mechanism | Status |
|---|---|---|
| AISI network expansion | Technical cooperation builds trust and shared methods | 11 countries + EU; $11M+ joint research funding; inaugural meeting Nov 2024 completed first joint testing exercise |
| Joint model evaluations | Practical cooperation on pre-deployment testing | US-UK-Singapore joint evaluations of Claude 3.5 Sonnet, o1; demonstrates feasible cross-border technical collaboration |
| EU AI Act extraterritoriality | “Brussels Effect” creates de facto global standards | Implementation began August 2024; prohibited practices effective Feb 2025; GPAI obligations Aug 2025 |
| UN institutional frameworks | Global governance architecture development | Scientific Panel on AI proposed Sept 2024; Global Digital Compact adopted Sept 2024; biannual intergovernmental dialogues recommended |
Crisis Motivation Potential
| Factor | Mechanism | Probability |
|---|---|---|
| Major AI incident | Catastrophic event could trigger emergency cooperation | 15-25% by 2030 |
| Capability surprise | Unexpected AI advancement could motivate precaution | 10-20% |
| International incident | AI-related conflict between states could drive agreements | 5-10% |
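Taken at face value, and under the strong simplifying assumption that these triggers are independent, the ranges above imply roughly a 27-46% chance that at least one coordination-forcing event occurs by 2030. A minimal calculation:

```python
# Illustrative arithmetic only: combining the table's probability ranges under
# a (strong) independence assumption to estimate the chance that at least one
# coordination-forcing event occurs by 2030.
low_estimates = {"major_incident": 0.15, "capability_surprise": 0.10, "international_incident": 0.05}
high_estimates = {"major_incident": 0.25, "capability_surprise": 0.20, "international_incident": 0.10}


def p_at_least_one(probs: dict[str, float]) -> float:
    p_none = 1.0
    for p in probs.values():
        p_none *= 1 - p
    return 1 - p_none


print(round(p_at_least_one(low_estimates), 2))   # ~0.27
print(round(p_at_least_one(high_estimates), 2))  # ~0.46
```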
Why This Parameter Matters
Consequences of Low International Coordination
| Domain | Impact | Severity | Quantified Risk |
|---|---|---|---|
| Racing dynamics | Countries cut safety corners to maintain competitive advantage | Critical | 30-60% reduction in safety investment vs. coordinated scenarios |
| Regulatory arbitrage | AI development concentrates in least-regulated jurisdictions | High | Similar to tax havens; creates “safety havens” for risky development |
| Fragmented standards | Incompatible safety frameworks multiply compliance costs | High | Estimated 15-25% increase in compliance costs for multinational deployment |
| Crisis response | No mechanism for coordinated action during AI emergencies | Critical | Zero current capacity for rapid multilateral intervention |
| Democratic deficit | Global technology governed by few powerful actors | High | 2-3 countries controlling 80%+ of frontier AI development |
| Verification gaps | No credible monitoring of commitments | Critical | Unlike nuclear regime with IAEA inspections; AI lacks equivalent |
International Coordination and Existential Risk
International coordination directly affects existential risk through several quantifiable mechanisms that determine whether the global community can respond effectively to advanced AI development.
Racing prevention: Without coordination, competitive dynamics between the US and China or between AI labs pressure actors to deploy insufficiently tested systems. Game-theoretic modeling suggests racing conditions reduce safety investment by 30-60% compared to coordinated scenarios. Coordination mechanisms such as shared safety standards, administered through institutions like model registries or compute governance frameworks, could prevent this “race to the bottom” by creating common compliance obligations.
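The 30-60% figure comes from the cited game-theoretic research; the sketch below is only a toy model of the underlying logic. Two actors each choose a safety level: more safety lowers the chance of winning the race but reduces a shared catastrophe risk. The payoff constants (PRIZE, DISASTER) and functional forms are illustrative assumptions, not the published model.

```python
# Toy racing model: each actor picks a safety level s in [0, 1]; higher s means
# a slower system (lower chance of "winning") but less shared catastrophe risk.
import numpy as np

PRIZE, DISASTER = 1.0, 2.0
GRID = np.linspace(0.0, 1.0, 101)


def payoff(s_own: float, s_other: float) -> float:
    speed_own, speed_other = 1.001 - s_own, 1.001 - s_other
    p_win = speed_own / (speed_own + speed_other)            # contest success
    p_catastrophe = p_win * (1 - s_own) + (1 - p_win) * (1 - s_other)
    return p_win * PRIZE - p_catastrophe * DISASTER          # shared downside


def best_response(s_other: float) -> float:
    return GRID[np.argmax([payoff(s, s_other) for s in GRID])]


# Racing: iterate best responses until the safety choices stop moving.
s = 0.5
for _ in range(50):
    s = best_response(s)
print("racing equilibrium safety:", round(float(s), 2))

# Coordination: jointly pick the common safety level maximizing total payoff.
joint = GRID[np.argmax([2 * payoff(s, s) for s in GRID])]
print("coordinated safety:", round(float(joint), 2))
```

Under these particular assumptions, best-response competition settles at a safety level around 0.75 while joint optimization chooses full safety, illustrating how racing incentives suppress safety investment.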
Collective response capability: If dangerous AI capabilities emerge, effective response may require coordinated global action—pausing development, sharing countermeasures, or coordinating deployment restrictions. Current coordination gaps leave no rapid response mechanism for AI emergencies, despite 28 countries acknowledging catastrophic risk potential. The absence of such mechanisms increases the probability that capability surprises proceed unchecked.
Legitimacy and compliance: International frameworks provide legitimacy for domestic AI governance that purely national approaches lack, similar to how climate agreements strengthen domestic climate policy. This legitimacy increases the likelihood of sustained compliance even when politically inconvenient. Research on international organizations suggests effectiveness improves dramatically with technical levers (like ICANN’s DNS control), monetary levers (IMF/WTO), or reputation mechanisms—suggesting AI governance requires similar institutional design.
Trajectory and Scenarios
Projected Trajectory
| Timeframe | Key Developments | Coordination Impact |
|---|---|---|
| 2025-2026 | India hosts AI Impact Summit; CAISI mission shift; EU AI Act enforcement | Mixed—institutional building continues but US/UK divergence deepens |
| 2027-2028 | Next-gen AI systems deployed; potential incidents | Uncertain—depends on whether incidents motivate cooperation |
| 2029-2030 | Council of Europe treaty enforcement; potential new frameworks | Could crystallize into either coordination or fragmentation |
Scenario Analysis
| Scenario | Probability | Outcome |
|---|---|---|
| Coordination consolidation | 20-25% | Major incident or leadership change drives renewed US engagement; binding international framework emerges by 2030 |
| Muddle through | 40-50% | Voluntary frameworks continue with mixed compliance; AISI network grows but lacks enforcement; fragmented governance persists |
| Governance bifurcation | 25-30% | US/UK pursue innovation-focused approach; EU/China/Global South develop alternative framework; AI governance splits into competing blocs |
| Coordination collapse | 5-10% | Geopolitical crisis undermines all cooperation; AI development proceeds with minimal international oversight |
Key Debates
Is US-China Cooperation Possible?
The question of US-China AI cooperation represents perhaps the most critical governance uncertainty, given these two nations’ dominance in AI development and their broader geopolitical rivalry.
Arguments for cooperation:
- Both countries have expressed concern about AI risks through official channels and academic research
- Precedents exist for technical cooperation during broader competition (climate research, pandemic preparedness)
- Chinese officials engaged substantively in Bletchley Declaration (2023) and supported US-led UN resolution on AI safety (March 2024)
- November 2024 Biden-Xi agreement to exclude AI from nuclear command/control systems demonstrates concrete cooperation is achievable
- First bilateral AI governance meeting occurred in Geneva (May 2024), establishing working-level dialogue
- China’s performance gap with US AI models shrank from 9.3% (2024) to 1.7% (February 2025), reducing asymmetric incentives
- Both nations supported each other’s UN resolutions: US backed China’s capacity-building resolution, China backed US trustworthy AI resolution (June 2024)
Arguments against:
- AI framed as central to economic and military competition in both countries’ strategic planning
- Broader US-China relations have deteriorated since 2018, with trust deficit spanning decades
- Export controls (since 2022) signal strategic containment rather than cooperation framework
- Verification of AI commitments fundamentally more difficult than nuclear arms control—no physical inspection equivalent exists
- US $150B+ investment in AI competitiveness creates domestic political barriers to cooperation perceived as “sharing advantage”
- China’s July 2025 Action Plan for Global AI Governance proposes alternative institutional architecture potentially competing with US-led frameworks
Summit Process: Foundation or Theater?
The AI Safety Summit process (Bletchley 2023, Seoul 2024, Paris 2025) represents a major diplomatic investment, but its ultimate effectiveness remains contested among governance researchers.
Arguments summits are building blocks:
- Bletchley Declaration↗ achieved first formal international recognition of catastrophic AI risks across 28 countries
- Summit process created institutional infrastructure (AISIs) that continues operating beyond summits—AISI network completed first joint testing exercise November 2024
- Voluntary commitments from 16 major AI companies at Seoul represent meaningful industry engagement with safety protocols
- Technical cooperation through AISI network provides practical foundation for future frameworks, with $11M+ in joint research commitments
- UN adopted Global Digital Compact (September 2024) building on summit momentum
- Carnegie Endowment analysis (October 2024) suggests summits created “governance arms race” spurring national regulatory action
Arguments summits are insufficient:
- All commitments remain voluntary with no enforcement mechanisms—16 company commitments are “nonbinding”
- Speed mismatch: annual summits cannot keep pace with weekly AI advances, creating persistent governance gaps
- Paris Summit criticized↗ as “missed opportunity” by Anthropic CEO and others for lacking binding agreements
- US/UK refusal to sign Paris declaration suggests coordination is fragmenting, not building—represents governance bifurcation
- Research identifies “confusing web of summits” (UK, UN, Seoul, G7, France) that may undermine coherent global governance
- No progress toward rapid response mechanisms for AI emergencies despite repeated acknowledgment of need
Related Pages
Related Risks
- Racing Dynamics — Competitive pressures that coordination could address; uncoordinated racing could reduce safety investment by 30-60%
Related Interventions
- International AI Safety Summits — Primary diplomatic mechanism for coordination
- International Coordination Overview — Detailed analysis of coordination mechanisms
- Model Registries — Technical infrastructure that could enable coordination verification
- Compute Governance — Hardware-based coordination mechanisms
Related Parameters
- Regulatory Capacity — Domestic capacity enables international engagement
- Institutional Quality — Healthy institutions required for sustained coordination
- Societal Trust — Public confidence affects compliance with international frameworks
- Human Agency — Coordination must preserve meaningful human control over AI systems
Sources & Key Research
Recent Academic Research (2024-2025)
International Institutional Frameworks:
- Saran, Samir. “Establishment of an international AI agency: an applied solution to global AI governance.” International Affairs 101, no. 4 (2025): 1483-1502. Oxford Academic. Proposes UN-based International Artificial Intelligence Agency (IAIA) as solution to governance gaps.
- Allan, Bentley B., et al. “Global AI governance: barriers and pathways forward.” International Affairs 100, no. 3 (2024): 1275-1293. Oxford Academic. Maps geopolitical and institutional barriers; notes centrality of AI to interstate competition problematizes substantive cooperation.
- “Verification methods for international AI agreements.” ResearchGate (2024). Examines techniques like differential privacy and secure multi-party computation for compliance verification.
US-China Cooperation Prospects:
- Sandia National Laboratories. “Challenges and Opportunities for US-China Collaboration on Artificial Intelligence Governance.” April 2025. Technical cooperation possible without compromising security or trade secrets.
- Mukherjee, Sayash, et al. “Promising Topics for US–China Dialogues on AI Risks and Governance.” Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (2025). Identifies viable cooperation areas despite broader tensions.
- Brookings Institution. “A roadmap for a US-China AI dialogue” (2024). Framework for bilateral technical dialogue.
Summit Effectiveness:
- Carnegie Endowment for International Peace. “The AI Governance Arms Race: From Summit Pageantry to Progress?” October 2024. Analyzes whether summits produce substantive progress or symbolic gestures.
- Centre for International Governance Innovation. “China’s AI Governance Initiative and Its Geopolitical Ambitions” (2025). Examines China’s July 2025 Action Plan for competing governance architecture.
Summit Documentation
- The Bletchley Declaration↗ - UK Government (November 2023)
- Seoul Declaration for Safe, Innovative and Inclusive AI↗ - AI Seoul Summit (May 2024)
- Frontier AI Safety Commitments↗ - AI Seoul Summit (May 2024)
Institutional Analysis
- International Network of AI Safety Institutes↗ - US Commerce Department
- US Department of Commerce. “FACT SHEET: Launch of International Network of AI Safety Institutes.” November 2024. Details inaugural San Francisco meeting and $11M+ funding commitments.
- Council of Europe Framework Convention on AI↗ - First binding AI treaty
- The AI Safety Institute International Network: Next Steps↗ - CSIS analysis
Geopolitical Research
- Potential for U.S.-China Cooperation on Reducing AI Risks↗ - RAND Corporation
- Insights from Nuclear History for AI Governance↗ - RAND Corporation
- International Control of Powerful Technology: Lessons from the Baruch Plan↗ - GovAI