Coordination Capacity: Research Report

| Finding | Key Data | Implication |
|---|---|---|
| International coordination weak | No binding AI treaties | Global risks unaddressed |
| National coordination polarized | Limited domestic consensus | Inconsistent policy |
| Industry coordination limited | Some lab coordination | Self-interested |
| Collective action problems | Racing, free-riding | Structural barriers |
| Unprecedented need | AI requires more coordination than has historically been achieved | Major gap |

Coordination capacity—the ability of multiple actors to align behavior toward common goals—is essential for addressing AI risks that cross organizational, national, and sectoral boundaries. AI’s global development means that purely local governance leaves gaps. AI’s racing dynamics create collective action problems where individually rational behavior produces collectively harmful outcomes. And AI’s complexity requires coordination across technical, policy, and business domains.
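
The racing dynamic has the familiar structure of a prisoner's dilemma: whatever the other actor does, each is individually better off cutting safety investment and moving fast, even though mutual restraint would leave both better off. The sketch below makes that concrete with purely hypothetical payoff numbers (the values are assumptions chosen for illustration, not estimates of anything real): it computes each actor's best response and confirms that mutual racing is the only equilibrium, despite being jointly worse than mutual restraint.

```python
# Illustrative two-actor "racing" game with hypothetical payoffs.
# Payoff convention: higher is better; the numbers are invented for illustration only.
# Each actor chooses "restrain" (invest in safety, move slower) or "race" (move fast).

from itertools import product

ACTIONS = ["restrain", "race"]

# payoffs[(a_action, b_action)] = (payoff_to_A, payoff_to_B)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # shared safety benefit, shared slower progress
    ("restrain", "race"):     (0, 4),  # A bears the restraint cost, B gains the lead
    ("race",     "restrain"): (4, 0),  # mirror image
    ("race",     "race"):     (1, 1),  # both race: the lead cancels out, risk rises
}

def best_response(opponent_action: str, player: int) -> str:
    """Return the action maximizing this player's payoff, holding the opponent fixed."""
    def payoff(my_action: str) -> int:
        pair = (my_action, opponent_action) if player == 0 else (opponent_action, my_action)
        return payoffs[pair][player]
    return max(ACTIONS, key=payoff)

# Pure-strategy Nash equilibria: profiles where both actions are best responses to each other.
equilibria = [
    (a, b)
    for a, b in product(ACTIONS, repeat=2)
    if a == best_response(b, player=0) and b == best_response(a, player=1)
]

print("Nash equilibria:", equilibria)                                         # [('race', 'race')]
print("Payoffs at equilibrium:", payoffs[("race", "race")])                   # (1, 1)
print("Payoffs under mutual restraint:", payoffs[("restrain", "restrain")])   # (3, 3)
```

The point of the exercise is that the bad outcome is structural rather than a failure of goodwill; coordination mechanisms work by changing the payoffs (through verification, binding commitments, or reputational costs) so that mutual restraint becomes an equilibrium.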

Current coordination capacity is weak relative to what AI governance requires. Internationally, there are no binding treaties and US-China cooperation is minimal. Domestically, political polarization limits consensus on AI policy. Industry coordination exists but is constrained by competitive pressures. And coordination across government, industry, civil society, and technical communities is ad hoc.

Building coordination capacity requires investment in institutions, trust, and mechanisms for collective decision-making. Historical examples—from arms control to environmental agreements—show that coordination is possible but typically requires decades and often crises to achieve. The question is whether AI timelines allow for traditional coordination processes or whether new, faster approaches are needed.


| Level | Description | Current Status |
|---|---|---|
| International | Between nations | Very weak |
| National | Within governments | Moderate, fragmented |
| Industry | Between companies | Limited |
| Cross-sector | Government-industry-civil society | Ad hoc |
| Technical | Among researchers | Moderate |

| Challenge | Description | Mechanism |
|---|---|---|
| Racing | Each actor benefits from moving fast | Prisoner's dilemma |
| Free-riding | Benefit from others' safety investment | Collective action (see the sketch after this table) |
| Information asymmetry | Actors don't know what others are doing | Trust problems |
| Value differences | Disagreement on goals | Negotiation difficulty |
| Complexity | Too many dimensions to coordinate | Cognitive limits |
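
Free-riding has a similar structure once there are many actors. The minimal public-goods sketch below (all numbers are hypothetical assumptions) models each lab's choice of how much to invest in shared safety work: every lab benefits from the total investment, but each bears its own cost alone, so the privately optimal investment falls far below the socially optimal one.

```python
# Minimal linear public-goods sketch of safety free-riding.
# All numbers are hypothetical and chosen only to show the structure of the problem.

N_LABS = 5              # number of labs (assumption)
BUDGET = 10             # units each lab could invest in shared safety work (assumption)
MARGINAL_BENEFIT = 0.4  # benefit each lab receives per unit of *total* safety investment

def lab_payoff(own_investment: float, others_total: float) -> float:
    """One lab's payoff: its benefit from total safety investment minus its private cost."""
    total = own_investment + others_total
    return MARGINAL_BENEFIT * total - own_investment

def best_private_choice(others_total: float) -> int:
    """Investment level a self-interested lab picks, holding the others fixed."""
    return max(range(BUDGET + 1), key=lambda x: lab_payoff(x, others_total))

def total_welfare(common_investment: int) -> float:
    """Sum of all labs' payoffs when every lab invests the same amount."""
    return N_LABS * lab_payoff(common_investment, (N_LABS - 1) * common_investment)

# Each unit of investment returns only 0.4 to the investor, so the private best choice is 0 ...
print("Privately optimal investment:", best_private_choice(others_total=0))        # 0
# ... but each unit returns 0.4 * 5 = 2.0 to the group, so the social optimum is the full budget.
print("Socially optimal investment:", max(range(BUDGET + 1), key=total_welfare))   # 10
```

With a per-unit benefit of 0.4 to each of five labs, a unit of investment returns 0.4 to the investor but 2.0 to the group as a whole; that gap between private and collective returns is exactly what coordination mechanisms have to close.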

| International Mechanism | Status | Effectiveness |
|---|---|---|
| Binding treaties | None | N/A |
| Soft commitments | Bletchley and Seoul declarations | Normative |
| Technical cooperation | AI Safety Institutes | Building |
| US-China dialogue | Very limited | Minimal |
| Multilateral forums | Active | Slow |

| Country | Political Consensus | Agency Coordination | Status |
|---|---|---|---|
| US | Low (polarized) | Low (fragmented) | Weak |
| EU | Moderate | Building | Moderate |
| UK | Moderate | Improving | Moderate |
| China | High (state-directed) | High | Strong internally |

| Industry Mechanism | Participants | Scope | Enforcement |
|---|---|---|---|
| Frontier Model Forum | Major labs | Safety practices | Voluntary |
| Partnership on AI | Broader industry | Principles | None |
| Standards bodies | Various | Technical | Market |
| Bilateral agreements | Lab pairs | Specific issues | Contractual |

| Type | Examples | Effectiveness |
|---|---|---|
| Government-industry | White House commitments | Ad hoc |
| Industry-academia | Research partnerships | Variable |
| Civil society-government | Advocacy influence | Limited |
| Multi-stakeholder | AI governance forums | Early |

| Barrier | Mechanism | Severity |
|---|---|---|
| Competition | Zero-sum framing | High |
| Trust deficit | Actors don't believe others will comply | High |
| Verification difficulty | Compliance can't be checked | High |
| Speed | Coordination can't keep pace with development | High |
| Complexity | Too many dimensions | Moderate |

| Enabler | Mechanism | Status |
|---|---|---|
| Shared risk perception | Common threat motivates cooperation | Growing |
| Technical community | Researchers collaborate | Active |
| Crisis | Creates urgency | Not yet |
| Leadership | Champions push coordination | Some |
| Institutions | Coordinating bodies facilitate agreement | Building |

| Binding Mechanism | Description | AI Status |
|---|---|---|
| Treaties | Binding international law | None |
| Regulations | Binding national law | Emerging |
| Contracts | Binding agreements | Some |
| Standards | Required specifications | Developing |

| Soft Mechanism | Description | AI Status |
|---|---|---|
| Norms | Shared expectations | Building |
| Declarations | Non-binding commitments | Active |
| Best practices | Shared methods | Developing |
| Information sharing | Mutual awareness | Limited |

| Informal Mechanism | Description | AI Status |
|---|---|---|
| Focal points | Natural coordination points | Some |
| Imitation | Follow leaders | Active |
| Reputation | Maintain good standing | Some |
| Networks | Relationship-based coordination | Building |

| Institution | Function | Status |
|---|---|---|
| AI Safety Institutes | Technical coordination | Growing |
| International bodies | Diplomatic coordination | Weak |
| Industry associations | Sector coordination | Existing |
| Research networks | Technical coordination | Active |

| Trust-Building Approach | Mechanism | Status |
|---|---|---|
| Information sharing | Demonstrate good faith | Limited |
| Verification | Reduce the need for trust | Difficult |
| Small wins | Build a track record | Some |
| Relationships | Personal connections | Variable |

| Related Parameter | Connection |
|---|---|
| International Coordination | Specific to the international level |
| Societal Trust | Trust enables coordination |
| Governance | Governance requires coordination |
| Institutional Quality | Institutions enable coordination |