AI Governance: Research Report

| Finding | Key Data | Implication |
|---|---|---|
| Regulatory lag | 2-5 year gap behind capabilities | Governance often reactive |
| Limited coverage | 6 countries with binding rules | Most AI development unregulated |
| EU leadership | AI Act fully effective 2026 | Sets global benchmark |
| US fragmentation | 10+ agencies with AI oversight | No unified framework |
| International gaps | No binding treaties | Coordination insufficient |

AI governance encompasses the rules, norms, institutions, and practices that shape AI development and deployment. Despite the rapid advancement of AI capabilities since 2022, governance frameworks have struggled to keep pace. The EU AI Act, adopted in 2024, represents the most comprehensive binding regulation globally, establishing risk-based requirements for AI systems. However, it won’t be fully effective until 2026, by which time AI capabilities may have advanced significantly further.

The United States lacks a unified federal AI framework, instead relying on a patchwork of executive orders, agency guidance, and voluntary commitments. The October 2023 Executive Order on AI Safety established some requirements for frontier models but lacks enforcement mechanisms and may not survive administration changes. China has implemented binding regulations focused on specific applications (recommender systems, generative AI) but maintains state direction over AI development priorities.

International governance remains nascent. The Bletchley Declaration (November 2023) and the Seoul Summit (May 2024) established voluntary principles and an international network of AI safety institutes, but binding international agreements remain elusive. The pace of capability advancement continues to outstrip governance capacity, creating persistent gaps between what AI systems can do and what rules govern their use.


| Period | Governance Focus | Key Developments |
|---|---|---|
| 2016-2019 | Ethical principles | Asilomar AI Principles, OECD guidelines |
| 2019-2022 | Soft law | National AI strategies, voluntary commitments |
| 2022-2024 | Hard law emergence | EU AI Act, China regulations, US EO |
| 2024-present | Implementation | Enforcement begins, international coordination |

| Dimension | Description |
|---|---|
| Technical standards | Safety requirements, testing protocols |
| Legal frameworks | Binding regulations, liability rules |
| Industry self-governance | Voluntary commitments, best practices |
| International coordination | Treaties, mutual recognition |
| Institutional capacity | Regulatory expertise and resources |

| Jurisdiction | Framework | Status | Scope |
|---|---|---|---|
| EU | AI Act | In force Aug 2024; fully applicable 2026 | Risk-based, comprehensive |
| China | Multiple regulations | Effective | Application-specific |
| UK | Pro-innovation approach | Framework only | Sector-based, light touch |
| US | Executive Order + agency rules | Fragmented | Limited binding requirements |
| Canada | AIDA (proposed) | Pending | Risk-based |
| Japan | Guidelines | Voluntary | Principles-based |

| Risk Level | Requirements | Examples |
|---|---|---|
| Unacceptable | Prohibited | Social scoring, real-time biometric surveillance |
| High | Strict compliance | Critical infrastructure, employment, law enforcement |
| Limited | Transparency | Chatbots, emotion recognition |
| Minimal | None | Most AI applications |
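
The tiered structure above maps cleanly onto a simple lookup. The sketch below expresses that logic in minimal Python, using the example applications from the table; the `classify` helper and its string keys are illustrative simplifications, not a legal classifier.

```python
# Illustrative sketch of the EU AI Act's four-tier risk structure.
# The tier mapping uses the example applications from the table above;
# it is an expository simplification, not a legal classification tool.

RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time biometric surveillance"},
    "high": {"critical infrastructure", "employment", "law enforcement"},
    "limited": {"chatbot", "emotion recognition"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # most AI applications fall here

assert classify("social scoring") == "unacceptable"  # prohibited outright
assert classify("employment") == "high"              # strict compliance duties
assert classify("chatbot") == "limited"              # transparency obligations
assert classify("spam filter") == "minimal"          # no new obligations
```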

| Approach | Actors | Mechanism |
|---|---|---|
| Responsible scaling | Anthropic, OpenAI, DeepMind | Self-imposed capability thresholds |
| Safety evaluations | Labs + third parties | Pre-deployment testing |
| Compute thresholds | EU AI Act, US EO | Training compute triggers requirements |
| Licensing | Proposed in EU | May require approval for frontier models |
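
Of these approaches, compute thresholds are the most mechanically checkable. As a rough illustration, the sketch below uses the standard ~6 × parameters × tokens approximation for training FLOPs and compares the estimate against the two published triggers named in the table: the EU AI Act's 10^25 FLOP systemic-risk presumption and the US Executive Order's 10^26 FLOP reporting threshold. The model scale in the example is hypothetical.

```python
# Rough training-compute check against published regulatory thresholds.
# Uses the common approximation: training FLOPs ~= 6 * parameters * tokens.

THRESHOLDS = {
    "EU AI Act systemic-risk presumption": 1e25,
    "US Executive Order reporting trigger": 1e26,
}

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def triggered_rules(params: float, tokens: float) -> list[str]:
    """List the thresholds a hypothetical training run would cross."""
    flops = estimate_training_flops(params, tokens)
    return [name for name, limit in THRESHOLDS.items() if flops >= limit]

# Hypothetical run: 1e12 parameters on 2e13 tokens -> 1.2e26 FLOPs,
# which crosses both thresholds above.
print(f"{estimate_training_flops(1e12, 2e13):.1e} FLOPs")
for rule in triggered_rules(1e12, 2e13):
    print(f"  crosses: {rule}")
```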

| Initiative | Year | Participants | Status |
|---|---|---|---|
| Bletchley Declaration | 2023 | 28 countries | Voluntary principles |
| Seoul Declaration | 2024 | 16 companies, 27 countries | Safety commitments |
| AI Safety Institutes | 2023-2024 | US, UK, Japan, Singapore | National bodies |
| Frontier Model Forum | 2023 | Major labs | Industry coordination |
| GPAI | 2020 | 29 countries | Research coordination |

| Factor | Mechanism | Severity |
|---|---|---|
| Capability pace | Technology outruns rules | High |
| Technical complexity | Regulators lack expertise | High |
| Industry lobbying | Weakens proposed rules | Medium-High |
| Jurisdictional arbitrage | Development moves to lenient jurisdictions | Medium |
| Coordination failures | Countries cannot agree | High |

| Factor | Mechanism | Current Status |
|---|---|---|
| Safety incidents | Visible harm creates political will | Not yet significant |
| Technical standards | Provide basis for regulation | Developing (NIST, ISO) |
| Institutional capacity | Dedicated regulators | Emerging (AI Safety Institutes) |
| International agreement | Mutual standards | Early stage |
| Industry support | Self-interest in predictable rules | Mixed |

| Gap | Description | Risk |
|---|---|---|
| Frontier models | Limited oversight of most capable systems | High |
| Open weights | No governance framework | Medium-High |
| International | No binding global rules | High |
| Enforcement | Limited capacity to monitor compliance | High |
| Speed | Governance lags capability | High |

| Challenge | Status | Urgency |
|---|---|---|
| AGI governance | No framework exists | High |
| Autonomous weapons | Incomplete treaties | High |
| AI agents | Undefined liability | Growing |
| Compute governance | Early proposals | Medium |

| Related Factor | Connection |
|---|---|
| Lab Safety Practices | Governance shapes lab requirements |
| Racing Intensity | Weak governance enables racing |
| Technical AI Safety | Standards depend on technical research |
| Concentration of Power | Governance affects power distribution |