
International Coordination

Importance: 80
Direction: Higher is better
Current Trend: Mixed (11-country AISI network, but US/UK refused Paris 2025 declaration)
Key Measurement: Treaty signatories, AISI network participation, shared evaluation standards

Prioritization

Importance: 80
Tractability: 30
Neglectedness: 50
Uncertainty: 60
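
For illustration, one way to combine such scores is the importance × tractability × neglectedness heuristic common in cause prioritization. The sketch below assumes a simple multiplicative weighting; the `priority_score` function is hypothetical, not this page's actual methodology.

```python
# Hypothetical ITN-style aggregation of the prioritization scores above.
# The multiplicative weighting is an assumption, not this page's method.

def priority_score(importance: float, tractability: float,
                   neglectedness: float) -> float:
    """Combine 0-100 scores multiplicatively and rescale to 0-100."""
    return (importance / 100) * (tractability / 100) * (neglectedness / 100) * 100

score = priority_score(importance=80, tractability=30, neglectedness=50)
print(f"Composite priority: {score:.1f}/100")  # -> 12.0/100
```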

International Coordination measures the degree of global cooperation on AI governance and safety across nations and institutions. Higher international coordination is better—it enables collective responses to risks that transcend borders, prevents regulatory arbitrage, and reduces dangerous racing dynamics between nations. This parameter tracks treaty participation, shared standards adoption, institutional network strength, and the quality of bilateral and multilateral dialogues on AI risks.

Geopolitical dynamics, AI development trajectories, and near-miss incidents all shape whether international coordination strengthens or fragments—with major implications for collective action on AI risk. High coordination enables shared safety standards that prevent racing to the bottom; low coordination risks regulatory fragmentation and competitive dynamics that undermine safety.

This parameter underpins critical AI governance mechanisms. Strong international coordination enables shared safety standards that prevent racing dynamics where competitive pressure sacrifices safety for speed—research suggests uncoordinated development could reduce safety investment by 30-60% compared to coordinated scenarios. Technical cooperation enables faster dissemination of safety research and evaluation methods across borders, while multilateral mechanisms prove essential for coordinated responses to AI incidents with global implications. Finally, global governance of transformative technology requires international buy-in for democratic legitimacy; unilateral action by individual powers lacks broader consent.



Contributes to: Governance Capacity



| Metric | Current Value | Historical Baseline | Trend |
| --- | --- | --- | --- |
| AISI Network countries | 11 + EU | 0 (2022) | Growing |
| Combined AISI budget | ~$150M annually | $0 | Establishing |
| Binding treaty signatories | 14 (Council of Europe) | 0 | Growing |
| Summit declaration adherence | Mixed (US/UK refused Paris 2025) | High (2023 Bletchley) | Fragmenting |
| Industry safety commitments | 16 companies (Seoul) | 0 | Stable |

Sources: AISI Network, Council of Europe AI Treaty, Bletchley Declaration

| Institution | Type | Participants | Budget | Status |
| --- | --- | --- | --- | --- |
| UK AI Security Institute | National | UK | ~$65M | Operational |
| US AI Safety Institute (CAISI) | National | US | ~$10M | Refocused (2025) |
| EU AI Office | Supranational | EU-27 | ~$8M | Operational |
| International AISI Network | Multilateral | 11 countries + EU | $11M+ joint research | Building capacity |
| G7 Hiroshima Process | Multilateral | G7 | N/A | Monitoring framework active |
| UN Scientific Panel on AI | Global | Pending | TBD | Proposed Sept 2024 |

Note: AISI Network held inaugural meeting November 2024 in San Francisco, completing first joint testing exercise across US, UK, and Singapore institutes. Network announced $11M+ in commitments including $7.2M from South Korea for synthetic content research and $3M from Knight Foundation.


What “Healthy International Coordination” Looks Like


Healthy international coordination on AI safety would exhibit several key characteristics that enable effective global governance while respecting national sovereignty and diverse regulatory philosophies.

Key Characteristics of Healthy Coordination

  1. Binding mutual commitments: Countries agree to enforceable safety standards with verification mechanisms, not just voluntary declarations
  2. Technical cooperation infrastructure: Robust information sharing on capabilities, risks, and evaluation methods across national boundaries
  3. Inclusive participation: Major AI powers (US, China, EU) and emerging AI nations (India, UAE, Singapore) all engaged substantively
  4. Rapid response capability: Mechanisms exist for coordinated action if concerning capabilities emerge or incidents occur
  5. Sustained political commitment: Cooperation survives changes in national leadership and geopolitical tensions

| Characteristic | Current Status | Gap | Trend |
| --- | --- | --- | --- |
| Binding commitments | Council of Europe treaty (14 signatories) | Large—most frameworks voluntary | Worsening (US/UK withdrew Paris 2025) |
| Technical cooperation | AISI network operational; joint evaluations begun | Medium—capacity still building | Improving (first joint tests Nov 2024) |
| Inclusive participation | US/UK diverging from broader consensus | Large—key actors withdrawing | Worsening (governance bifurcation) |
| Rapid response | No mechanism exists | Very large | Flat (no progress since Bletchley) |
| Sustained commitment | Fragile—US pivoted away in 2025 | Large—political volatility | Worsening (administration reversals) |

Coordination Mechanisms: Comparative Effectiveness

| Mechanism Type | Example | Enforcement | Coverage | Effectiveness Score (1-5) |
| --- | --- | --- | --- | --- |
| Binding treaties | Council of Europe AI Treaty | Legal obligations | 14 countries | 2/5 (limited participation) |
| Voluntary summits | Bletchley, Seoul, Paris | Reputational pressure | 28+ countries | 2/5 (non-binding, fragmenting) |
| Technical networks | AISI Network | Peer cooperation | 11 countries + EU | 3/5 (building capacity, concrete outputs) |
| Industry commitments | Frontier AI Safety Commitments | Self-regulation | 16 companies | 2/5 (voluntary, variable compliance) |
| Regulatory extraterritoriality | EU AI Act | Legal for EU market access | Global (via Brussels Effect) | 4/5 (enforceable, broad reach) |
| Bilateral agreements | US-UK MOU, US-China dialogue | Government-to-government | Pairwise | 3/5 (limited scope but sustained) |
| UN frameworks | Global Digital Compact, proposed Scientific Panel | Norm-setting | Universal participation | 2/5 (early stage, unclear enforcement) |

Note: Effectiveness scores assess actual impact on coordination quality based on enforcement capability, coverage breadth, and sustained operation. The EU AI Act scores highest due to legal enforceability and market leverage creating de facto global standards.
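
A minimal sketch of how such a composite might be computed, assuming equal weights across enforcement capability, coverage breadth, and sustained operation (the subscores and the `effectiveness` helper below are illustrative; the table's 1-5 scores were assigned qualitatively):

```python
# Hypothetical composite scoring for coordination mechanisms.
# Equal weighting of the three criteria is an assumption; the table's
# 1-5 scores were assigned qualitatively, not computed this way.

def effectiveness(enforcement: int, coverage: int, sustained: int) -> float:
    """Average three 1-5 subscores into a single 1-5 effectiveness score."""
    return round((enforcement + coverage + sustained) / 3, 1)

# Example: EU AI Act -- strong enforcement, broad reach, sustained operation.
print(effectiveness(enforcement=5, coverage=4, sustained=4))  # -> 4.3
```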


Factors That Decrease International Coordination (Threats)

| Threat | Mechanism | Evidence |
| --- | --- | --- |
| US-China rivalry | AI seen as decisive for economic/military competition | Export controls since 2022; $150B+ Chinese AI investment |
| National security framing | Safety cooperation viewed as sharing strategic advantage | UK renamed AISI to “AI Security Institute” (Feb 2025) |
| Trust deficit | Decades of strategic competition limit information sharing | US/China dialogue constrained to “working level” |
| Administration changes | New governments can reverse predecessors’ commitments | Trump revoked Biden EO 14110 within hours of taking office |
| Innovation vs. regulation framing | Safety cooperation portrayed as competitiveness threat | Vance at Paris: “cannot and will not” accept foreign regulation |
| Industry influence | Tech companies lobby against binding international rules | $100B+ annual AI investment creates strong lobbying capacity |

AI governance faces fundamental challenges that make international coordination harder than previous technology regimes. Unlike nuclear weapons, AI capabilities cannot be physically inspected through traditional verification methods—recent research on verification methods for international AI agreements explores techniques like differential privacy and secure multi-party computation, but these remain immature compared to nuclear inspection regimes. Nearly all AI research exhibits dual-use universality with both beneficial and harmful applications, making export controls more difficult than weapons-specific technologies. The speed mismatch proves severe: AI capabilities advance weekly while international diplomacy operates on annual cycles, creating persistent gaps between technical reality and governance frameworks. Finally, distributed development across thousands of organizations globally—from major labs to academic institutions to startups—makes comprehensive monitoring far harder than tracking state-run weapons programs.
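
To make the secure multi-party computation idea concrete, the toy sketch below shows additive secret sharing, one building block such verification schemes could use: each lab splits a private compute figure into random shares so a verifier can confirm the aggregate without seeing any individual value. All names and figures are illustrative; real protocols add authentication and protections against malicious parties.

```python
# Toy additive secret sharing: labs report total compute used without
# revealing individual figures. Illustrative only; real MPC protocols
# add authentication and malicious-security guarantees.
import random

MODULUS = 2**61 - 1  # arithmetic over a large prime field

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n random shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Three labs with private compute usage (arbitrary units, hypothetical).
private_usage = [1200, 800, 2500]
n = len(private_usage)

# Each lab shares its value; party i collects the i-th share from everyone.
all_shares = [share(u, n) for u in private_usage]
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]

# The verifier learns only the total, never any single lab's usage.
total = sum(partial_sums) % MODULUS
print(total)  # -> 4500
```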


Factors That Increase International Coordination (Supports)

| Factor | Mechanism | Status | Evidence |
| --- | --- | --- | --- |
| Catastrophic risk consensus | 28+ countries acknowledged “potential for serious, even catastrophic harm” | Bletchley Declaration (2023) | First formal international recognition of existential AI risks |
| Near-miss incidents | AI-caused harms could motivate stronger cooperation | No major incidents yet | Academic research suggests 15-25% probability of major AI incident by 2030 |
| Scientific consensus | Growing expert agreement on risk severity | AI Safety Summit series building evidence base | UN Scientific Panel on AI proposed Sept 2024, modeled on IPCC |
| US-China dialogue | Limited technical cooperation despite broader tensions | Biden-Xi agreement Nov 2024 | Agreement to exclude AI from nuclear command/control systems |

| Factor | Mechanism | Status |
| --- | --- | --- |
| AISI network expansion | Technical cooperation builds trust and shared methods | 11 countries + EU; $11M+ joint research funding; inaugural meeting Nov 2024 completed first joint testing exercise |
| Joint model evaluations | Practical cooperation on pre-deployment testing | US-UK-Singapore joint evaluations of Claude 3.5 Sonnet, o1; demonstrates feasible cross-border technical collaboration |
| EU AI Act extraterritoriality | “Brussels Effect” creates de facto global standards | Implementation began August 2024; prohibited practices effective Feb 2025; GPAI obligations Aug 2025 |
| UN institutional frameworks | Global governance architecture development | Scientific Panel on AI proposed Sept 2024; Global Digital Compact adopted Sept 2024; biannual intergovernmental dialogues recommended |

| Factor | Mechanism | Probability |
| --- | --- | --- |
| Major AI incident | Catastrophic event could trigger emergency cooperation | 15-25% by 2030 |
| Capability surprise | Unexpected AI advancement could motivate precaution | 10-20% |
| International incident | AI-related conflict between states could drive agreements | 5-10% |
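
Treating these three triggers as independent events (a simplifying assumption, not a claim from the underlying research), a rough combined estimate follows:

```python
# Probability that at least one coordination trigger occurs by 2030,
# assuming the three events are independent -- a simplifying assumption.
# Ranges are taken from the table above.

triggers = {
    "major AI incident": (0.15, 0.25),
    "capability surprise": (0.10, 0.20),
    "international incident": (0.05, 0.10),
}

def p_any(probs):
    """P(at least one event) = 1 - product of P(no event)."""
    p_none = 1.0
    for p in probs:
        p_none *= 1 - p
    return 1 - p_none

lows = [low for low, high in triggers.values()]
highs = [high for low, high in triggers.values()]
print(f"At least one trigger: {p_any(lows):.0%} - {p_any(highs):.0%}")
# -> roughly 27% - 46%
```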

Consequences of Low International Coordination

| Domain | Impact | Severity | Quantified Risk |
| --- | --- | --- | --- |
| Racing dynamics | Countries cut safety corners to maintain competitive advantage | Critical | 30-60% reduction in safety investment vs. coordinated scenarios |
| Regulatory arbitrage | AI development concentrates in least-regulated jurisdictions | High | Similar to tax havens; creates “safety havens” for risky development |
| Fragmented standards | Incompatible safety frameworks multiply compliance costs | High | Estimated 15-25% increase in compliance costs for multinational deployment |
| Crisis response | No mechanism for coordinated action during AI emergencies | Critical | Zero current capacity for rapid multilateral intervention |
| Democratic deficit | Global technology governed by few powerful actors | High | 2-3 countries controlling 80%+ of frontier AI development |
| Verification gaps | No credible monitoring of commitments | Critical | Unlike nuclear regime with IAEA inspections; AI lacks equivalent |

International Coordination and Existential Risk


International coordination directly affects existential risk through several quantifiable mechanisms that determine whether the global community can respond effectively to advanced AI development.

Racing prevention: Without coordination, competitive dynamics between US-China or between AI labs pressure actors to deploy insufficiently tested systems. Game-theoretic modeling suggests racing conditions reduce safety investment by 30-60% compared to coordinated scenarios. Coordination mechanisms like shared safety standards administered through institutions like model registries or compute governance frameworks could prevent this “race to the bottom” by creating common compliance obligations.
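
A minimal game-theoretic sketch of this dynamic, assuming two symmetric developers who each choose a safety-investment level (the payoff function and constants below are illustrative, not the cited modeling):

```python
# Toy two-player "race" model: each lab picks a safety level s in [0, 1].
# Higher safety slows you down (lower chance of winning the race) but
# reduces accident risk. All functional forms and constants are
# illustrative assumptions.
import numpy as np

V, D = 10.0, 8.0                      # prize for winning; loss if system fails
grid = np.linspace(0.01, 0.99, 99)    # candidate safety levels

def payoff(s_i, s_j):
    speed_i, speed_j = 1 - s_i, 1 - s_j
    p_win = speed_i / (speed_i + speed_j)   # faster lab more likely to win
    return p_win * V - (1 - s_i) * D        # minus expected accident loss

# Nash equilibrium via iterated best response from a symmetric start.
s = 0.5
for _ in range(100):
    s = grid[np.argmax([payoff(si, s) for si in grid])]

# Coordinated benchmark: both labs commit to the same s, so the race
# term is fixed at 1/2 and only accident risk matters.
coop = grid[np.argmax([0.5 * V - (1 - si) * D for si in grid])]

print(f"racing equilibrium safety: {s:.2f}, coordinated: {coop:.2f}")
# Racing drives safety investment well below the coordinated level.
```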

Collective response capability: If dangerous AI capabilities emerge, effective response may require coordinated global action—pausing development, sharing countermeasures, or coordinating deployment restrictions. Current coordination gaps leave no rapid response mechanism for AI emergencies, despite 28 countries acknowledging catastrophic risk potential. The absence of such mechanisms increases the probability that capability surprises proceed unchecked.

Legitimacy and compliance: International frameworks provide legitimacy for domestic AI governance that purely national approaches lack, similar to how climate agreements strengthen domestic climate policy. This legitimacy increases the likelihood of sustained compliance even when politically inconvenient. Research on international organizations finds that effectiveness improves dramatically with technical levers (like ICANN’s DNS control), monetary levers (IMF/WTO), or reputation mechanisms—suggesting AI governance requires similar institutional design.


| Timeframe | Key Developments | Coordination Impact |
| --- | --- | --- |
| 2025-2026 | India hosts AI Impact Summit; CAISI mission shift; EU AI Act enforcement | Mixed—institutional building continues but US/UK divergence deepens |
| 2027-2028 | Next-gen AI systems deployed; potential incidents | Uncertain—depends on whether incidents motivate cooperation |
| 2029-2030 | Council of Europe treaty enforcement; potential new frameworks | Could crystallize into either coordination or fragmentation |

| Scenario | Probability | Outcome |
| --- | --- | --- |
| Coordination consolidation | 20-25% | Major incident or leadership change drives renewed US engagement; binding international framework emerges by 2030 |
| Muddle through | 40-50% | Voluntary frameworks continue with mixed compliance; AISI network grows but lacks enforcement; fragmented governance persists |
| Governance bifurcation | 25-30% | US/UK pursue innovation-focused approach; EU/China/Global South develop alternative framework; AI governance splits into competing blocs |
| Coordination collapse | 5-10% | Geopolitical crisis undermines all cooperation; AI development proceeds with minimal international oversight |
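
As a quick consistency check on these ranges (using midpoints is a convention adopted here, not the source's):

```python
# Sanity check: midpoints of the scenario probability ranges should
# sum to roughly 100%.
scenarios = {
    "Coordination consolidation": (20, 25),
    "Muddle through": (40, 50),
    "Governance bifurcation": (25, 30),
    "Coordination collapse": (5, 10),
}
midpoints = {k: (low + high) / 2 for k, (low, high) in scenarios.items()}
print(sum(midpoints.values()))  # -> 102.5, consistent with rough ranges
```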

The question of US-China AI cooperation represents perhaps the most critical governance uncertainty, given these two nations’ dominance in AI development and their broader geopolitical rivalry.

Arguments for cooperation:

  • Both countries have expressed concern about AI risks through official channels and academic research
  • Precedents exist for technical cooperation during broader competition (climate research, pandemic preparedness)
  • Chinese officials engaged substantively in Bletchley Declaration (2023) and supported US-led UN resolution on AI safety (March 2024)
  • November 2024 Biden-Xi agreement to exclude AI from nuclear command/control systems demonstrates concrete cooperation is achievable
  • First bilateral AI governance meeting occurred in Geneva (May 2024), establishing working-level dialogue
  • China’s performance gap with US AI models shrank from 9.3% (2024) to 1.7% (February 2025), reducing asymmetric incentives
  • Both nations supported each other’s UN resolutions: US backed China’s capacity-building resolution, China backed US trustworthy AI resolution (June 2024)

Arguments against:

  • AI framed as central to economic and military competition in both countries’ strategic planning
  • Broader US-China relations have deteriorated since 2018, with trust deficit spanning decades
  • Export controls (since 2022) signal strategic containment rather than cooperation framework
  • Verification of AI commitments fundamentally more difficult than nuclear arms control—no physical inspection equivalent exists
  • US $150B+ investment in AI competitiveness creates domestic political barriers to cooperation perceived as “sharing advantage”
  • China’s July 2025 Action Plan for Global AI Governance proposes alternative institutional architecture potentially competing with US-led frameworks

The AI Safety Summit process (Bletchley 2023, Seoul 2024, Paris 2025) represents a major diplomatic investment, but its ultimate effectiveness remains contested among governance researchers.

Arguments summits are building blocks:

  • Bletchley Declaration achieved first formal international recognition of catastrophic AI risks across 28 countries
  • Summit process created institutional infrastructure (AISIs) that continues operating beyond summits—AISI network completed first joint testing exercise November 2024
  • Voluntary commitments from 16 major AI companies at Seoul represent meaningful industry engagement with safety protocols
  • Technical cooperation through AISI network provides practical foundation for future frameworks, with $11M+ in joint research commitments
  • UN adopted Global Digital Compact (September 2024) building on summit momentum
  • Carnegie Endowment analysis (October 2024) suggests summits created “governance arms race” spurring national regulatory action

Arguments summits are insufficient:

  • All commitments remain voluntary with no enforcement mechanisms—16 company commitments are “nonbinding”
  • Speed mismatch: annual summits cannot keep pace with weekly AI advances, creating persistent governance gaps
  • Paris Summit criticized as “missed opportunity” by Anthropic CEO and others for lacking binding agreements
  • US/UK refusal to sign Paris declaration suggests coordination is fragmenting, not building—represents governance bifurcation
  • Research identifies “confusing web of summits” (UK, UN, Seoul, G7, France) that may undermine coherent global governance
  • No progress toward rapid response mechanisms for AI emergencies despite repeated acknowledgment of need

Related Parameters

  • Racing Dynamics — Competitive pressures that coordination could address; coordination reduces safety investment cuts by 30-60%
  • Regulatory Capacity — Domestic capacity enables international engagement
  • Institutional Quality — Healthy institutions required for sustained coordination
  • Societal Trust — Public confidence affects compliance with international frameworks
  • Human Agency — Coordination must preserve meaningful human control over AI systems
