Coordination capacity—the ability of multiple actors to align behavior toward common goals—is essential for addressing AI risks that cross organizational, national, and sectoral boundaries. AI’s global development means that purely local governance leaves gaps. AI’s racing dynamics create collective action problems where individually rational behavior produces collectively harmful outcomes. And AI’s complexity requires coordination across technical, policy, and business domains.
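The collective action problem named here has the familiar structure of a prisoner's dilemma, which a short sketch can make concrete. The payoff numbers and the invest/race framing below are illustrative assumptions chosen only to reproduce the dilemma's logic, not figures from any analysis:

```python
# Minimal sketch of the racing dynamic as a two-player game
# (a prisoner's dilemma). All payoff values and the invest/race
# labels are illustrative assumptions.

# Payoffs as (row player, column player): racing while the other
# invests in safety yields a private edge, but mutual racing is
# the worst joint outcome.
PAYOFFS = {
    ("invest", "invest"): (3, 3),   # shared benefit of mutual restraint
    ("invest", "race"):   (0, 4),   # the investor falls behind
    ("race",   "invest"): (4, 0),
    ("race",   "race"):   (1, 1),   # collectively harmful equilibrium
}

def best_response(opponent_action: str) -> str:
    """Action that maximizes the row player's payoff against a
    fixed opponent action."""
    return max(("invest", "race"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# "race" is the best response to either opponent choice, so both
# players race and each gets 1, even though mutual investment pays 3:
# individually rational behavior, collectively harmful outcome.
for action in ("invest", "race"):
    print(f"best response to {action!r}: {best_response(action)!r}")
```

Any payoff matrix in which defection dominates but mutual cooperation beats mutual defection yields the same conclusion; coordination mechanisms work precisely by changing these payoffs, through verification, enforcement, or repeated interaction.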
Current coordination capacity is weak relative to what AI governance requires. Internationally, no binding treaty governs frontier AI development, and US-China cooperation on AI safety is minimal. Domestically, political polarization limits consensus on AI policy. Industry coordination exists but is constrained by competitive pressures. And coordination across government, industry, civil society, and technical communities remains ad hoc.
Building coordination capacity requires investment in institutions, trust, and mechanisms for collective decision-making. Historical examples, from arms control to environmental agreements, show that coordination is possible but typically takes decades and often a crisis to achieve: the Nuclear Non-Proliferation Treaty came more than two decades after Hiroshima, and the Montreal Protocol followed over a decade of scientific warnings plus the shock of the Antarctic ozone hole. The question is whether AI timelines allow for traditional coordination processes or whether new, faster approaches are needed.