| Finding | Key Data | Implication |
| --- | --- | --- |
| Frameworks emerging | EU AI Act, US EO, China rules | Some structure |
| Enforcement weak | Limited capacity, unclear jurisdiction | Rules unenforced |
| Fragmentation high | No unified approach | Gaps and conflicts |
| Speed mismatch | Regulation takes years; AI moves in months | Always behind |
| Expertise gap | 10-100x fewer experts in government than industry | Can't evaluate |
Governance as a parameter measures society’s capacity to effectively regulate, steer, and oversee AI development. This includes the formal legal frameworks that establish rules, the institutional capacity to monitor compliance, the enforcement mechanisms to address violations, and the adaptive processes to update rules as technology changes.
Current AI governance is emerging but inadequate. The EU AI Act is the most comprehensive framework, but its implementation faces challenges. The US relies on sector-specific regulation and executive action. China has moved quickly on specific AI applications, though with different priorities. International coordination is minimal. And across jurisdictions, the gap between regulatory ambition and enforcement capacity is wide.
The fundamental challenge is that governance evolves slowly while AI evolves rapidly. Building regulatory capacity requires expertise, resources, and political will that accumulate over years or decades. AI capabilities advance in months. This mismatch means governance is perpetually behind—regulating yesterday’s AI while tomorrow’s AI is being developed.
Why Governance Capacity Matters
Even well-designed AI policies fail without governance capacity to implement them. Governance is the practical ability to translate intentions into outcomes—and it’s severely limited for AI.
Governance capacity has several components, and every one of them is currently weak:

| Component | Description | Current Status |
| --- | --- | --- |
| Legal frameworks | Laws and regulations | Emerging |
| Regulatory capacity | Staff, expertise, resources | Very limited |
| Enforcement mechanisms | Penalties, monitoring | Weak |
| Adaptive processes | Updating rules as technology changes | Slow |
| International coordination | Cross-border governance | Minimal |
Regulators have several broad approaches to choose from:

| Approach | Description | Examples |
| --- | --- | --- |
| Horizontal regulation | Rules covering all AI | EU AI Act |
| Sector-specific | Rules for particular domains | FDA for medical AI |
| Self-regulation | Industry-led governance | Voluntary commitments |
| Standards-based | Technical requirements | NIST AI RMF |
| Liability | Ex post accountability | Product liability |
The major jurisdictions differ sharply in both framework and enforcement:

| Jurisdiction | Framework | Status | Enforcement |
| --- | --- | --- | --- |
| EU | AI Act | Implementing | Building |
| US | Executive Order, sector rules | Fragmented | Limited |
| China | Algorithm and generative AI rules | Active | State capacity |
| UK | Pro-innovation approach | Developing | Sector-based |
| International | No binding rules | Minimal | None |
The gap between current capacity and what effective oversight would require is stark (rough estimates; a quick calculation follows the table):

| Dimension | Current Global Capacity | Estimated Need |
| --- | --- | --- |
| AI experts in government | ~1,000-2,000 | 10,000+ |
| Dedicated AI regulators | ~500 | 5,000+ |
| Government AI compute | Minimal | Significant |
| International coordinators | Dozens | Hundreds |
| Enforcement staff | Very limited | Substantial |
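To put the staffing rows in perspective, here is a minimal sketch computing the shortfall multiples; the figures come from the table above, and taking midpoints of the quoted ranges is my own simplification for illustration.

```python
# Shortfall multiples implied by the table's rough staffing estimates.
# Taking midpoints of the quoted ranges is a simplification for illustration.

capacity_vs_need = {
    # dimension: (current global capacity, estimated need)
    "AI experts in government": (1_500, 10_000),  # midpoint of ~1,000-2,000
    "Dedicated AI regulators": (500, 5_000),
}

for dimension, (current, need) in capacity_vs_need.items():
    print(f"{dimension}: roughly {need / current:.0f}x below estimated need")
```

Even on these rough numbers, government staffing would need to grow by close to an order of magnitude to meet the estimated need.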
Governance processes run on timescales that AI development simply outruns (a back-of-envelope conversion follows the table):

| Process | Typical Duration | AI Equivalent |
| --- | --- | --- |
| Major legislation | 3-7 years | Multiple model generations |
| Agency rulemaking | 1-3 years | Several capability doublings |
| International treaty | 5-15 years | Transformative advances |
| Court decisions | 2-5 years | Major shifts |
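To make the mismatch concrete, the sketch below converts each duration into elapsed capability doublings. The ~6-month doubling time is an assumption chosen purely for illustration; the qualitative conclusion holds for any doubling time much shorter than the process durations.

```python
# Back-of-envelope: capability doublings that elapse during each governance
# process. ASSUMPTION: a ~6-month capability doubling time, for illustration.

DOUBLING_TIME_YEARS = 0.5

processes = {  # durations (in years) from the table above
    "Major legislation": (3, 7),
    "Agency rulemaking": (1, 3),
    "International treaty": (5, 15),
    "Court decisions": (2, 5),
}

for name, (low, high) in processes.items():
    print(f"{name}: {low / DOUBLING_TIME_YEARS:.0f}-"
          f"{high / DOUBLING_TIME_YEARS:.0f} capability doublings")
```

Under this assumption even the fastest process, agency rulemaking, spans two to six doublings; a treaty negotiation spans ten to thirty.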
The most serious gaps in current governance:

| Gap | Description | Severity |
| --- | --- | --- |
| Frontier models | The most capable systems are largely unregulated | Critical |
| International | No cross-border coordination | High |
| Open source | Released models can't be recalled or regulated | High |
| Dual-use | Same technology, different uses | Moderate |
| Enforcement | Compliance can't be verified | High |
Several factors make AI governance hard, and most are getting worse:

| Factor | Mechanism | Trend |
| --- | --- | --- |
| Complexity | AI systems are hard to understand | Increasing |
| Speed | Change outpaces regulation | Accelerating |
| Expertise gap | Government lacks technical knowledge | Persistent |
| Industry power | Resources for lobbying | Strong |
| Jurisdictional limits | AI crosses borders | Structural |
Countervailing factors could strengthen governance over time:

| Factor | Mechanism | Status |
| --- | --- | --- |
| Investment | More resources for regulators | Growing |
| AI Safety Institutes | Build technical capacity in government | Emerging |
| Crisis events | Incidents motivate action | Pending |
| International cooperation | Coordinated approaches | Early |
| AI assistance | Use AI to regulate AI | Experimental |
Five regulatory models dominate current practice, each identified here by its archetypal exemplar (a classification sketch follows the table):

| Model | Description | Exemplar |
| --- | --- | --- |
| Risk-based | Regulate by risk level | EU approach |
| Principles-based | Flexible guidelines | UK approach |
| Sector-specific | Domain regulators | US approach |
| State-directed | Government control | China approach |
| Industry self-regulation | Voluntary commitments | Dominant globally |
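As an illustration of the risk-based model, here is a minimal sketch in the style of the EU AI Act's four-tier structure. The tier names (unacceptable, high, limited, minimal) follow the Act; the use-case mapping and obligation summaries are simplified assumptions for illustration, not the Act's legal criteria.

```python
# A minimal sketch of risk-based classification in the style of the EU AI
# Act's four tiers. The use-case-to-tier mapping below is a simplified
# assumption, not the Act's actual legal test.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, registration, ongoing monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from use case to tier, for illustration only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the (illustrative) obligations attached to a use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations(case))
```

Even this toy version shows the model's appeal: obligations scale with stakes rather than applying uniformly. It also shows its weakness: everything depends on classifying use cases correctly, which is exactly where regulator expertise is thinnest.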
Beyond these, several governance approaches remain proposals rather than practice (a threshold-checking sketch follows the table):

| Approach | Description | Status |
| --- | --- | --- |
| Compute governance | Regulate via hardware and training compute | Proposed |
| Liability expansion | Increase accountability for harms | Debated |
| Licensing regimes | Require approval before development or deployment | Proposed |
| International regimes | Treaty-based governance | Discussed |
| Auditing requirements | Independent assessment | Early |
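Compute governance stands out as the most mechanically checkable proposal, because training compute can in principle be measured and compared against published thresholds. In the sketch below, the 10^25 FLOP figure reflects the EU AI Act's systemic-risk presumption for general-purpose models and 10^26 the US Executive Order's reporting threshold, as I understand them; the checking workflow itself is hypothetical.

```python
# A sketch of compute-threshold checks. The threshold values are published
# figures (see the lead-in above); the workflow around them is hypothetical.

EU_SYSTEMIC_RISK_FLOP = 1e25  # EU AI Act presumption for general-purpose models
US_EO_REPORTING_FLOP = 1e26   # US Executive Order reporting threshold

def check_training_run(training_flop: float) -> list[str]:
    """Return which illustrative regulatory triggers a training run crosses."""
    triggers = []
    if training_flop > EU_SYSTEMIC_RISK_FLOP:
        triggers.append("EU: presumed systemic risk, extra obligations apply")
    if training_flop > US_EO_REPORTING_FLOP:
        triggers.append("US: report the training run to the government")
    return triggers

print(check_training_run(3e25))
# -> ['EU: presumed systemic risk, extra obligations apply']
```

The catch, consistent with the enforcement gaps above, is verification: a threshold check is only as good as regulators' ability to measure or audit the compute actually used.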
Implications for AI development:

| Implication | Description |
| --- | --- |
| Regulatory uncertainty | Rules are unclear and may change |
| Compliance burden | Varies by jurisdiction |
| Race dynamics | May accelerate development |
| Safety incentives | Governance creates motivation to invest in safety |
Implications for AI safety:

| Implication | Description |
| --- | --- |
| External pressure | Governance can mandate safety practices |
| Verification gap | Regulators can't check safety claims |
| Accountability | Limited consequences for harms |
| Public input | Governance channels public concerns |