AI governance encompasses the rules, norms, institutions, and practices that shape AI development and deployment. Despite the rapid advancement of AI capabilities since 2022, governance frameworks have struggled to keep pace. The EU AI Act, adopted in 2024, represents the most comprehensive binding regulation globally, establishing risk-based requirements for AI systems. However, most of its provisions will not apply until 2026, by which time AI capabilities may have advanced significantly further.
The United States lacks a unified federal AI framework, relying instead on a patchwork of executive orders, agency guidance, and voluntary commitments. Executive Order 14110 on safe, secure, and trustworthy AI (October 2023) established reporting requirements for developers of frontier models, but it lacks robust enforcement mechanisms and may not survive changes of administration. China has implemented binding regulations focused on specific applications (recommender systems, generative AI) while maintaining state direction over AI development priorities.
International governance remains nascent. The Bletchley Declaration (November 2023) and the Seoul Summit (May 2024) produced voluntary commitments and an agreement to establish an international network of AI safety institutes, but binding international agreements remain elusive. The pace of capability advancement continues to outstrip governance capacity, creating a persistent gap between what AI systems can do and what rules govern their use.