
Regulatory Capacity

Parameter: Regulatory Capacity

  • Importance: 75
  • Direction: Higher is better
  • Current Trend: Growing but constrained (AISI budgets ~$10-50M vs. $100B+ industry spending)
  • Key Measurement: Agency technical expertise, enforcement actions, evaluation capability

Prioritization

  • Importance: 75
  • Tractability: 50
  • Neglectedness: 45
  • Uncertainty: 45

Regulatory Capacity measures the ability of governments to effectively understand, evaluate, and regulate AI systems. Higher regulatory capacity is better—it enables evidence-based oversight that can actually keep pace with AI development. The parameter encompasses technical expertise within regulatory agencies, institutional resources for enforcement, and the ability to adapt frameworks as the technology advances. Unlike international coordination, which focuses on cooperation between nations, regulatory capacity addresses the fundamental question of whether any government—acting alone—can meaningfully oversee AI development.

Institutional investments, talent flows, and political priorities all shape whether regulatory capacity grows or declines. High capacity enables evidence-based regulation and credible enforcement; low capacity results in either ineffective oversight or innovation-stifling rules that fail to address actual risks.

This parameter underpins:

  • Credible oversight: Without technical understanding, regulators cannot distinguish genuine safety measures from compliance theater—a capability gap that creates risks of institutional decision capture
  • Evidence-based policy: Effective regulation requires capacity to evaluate AI systems and their impacts, which AI Safety Institutes attempt to provide
  • Enforcement capability: Rules without enforcement resources become voluntary guidelines, undermining frameworks like the NIST AI RMF
  • Adaptive governance: Rapidly advancing technology requires regulators who can update frameworks as capabilities evolve—a challenge that becomes more severe as racing dynamics intensify


Contributes to: Governance Capacity



| Metric | Current Value | Comparison | Trend | Source |
| --- | --- | --- | --- | --- |
| Combined AISI budgets | ~$150M annually | 0.15% of industry R&D | Constrained | UK/US/EU AISI budgets |
| Industry AI investment | $100B+ annually (US alone) | 600:1 vs. regulators | Growing rapidly | Industry reports |
| NIST AI RMF adoption | 40-60% of Fortune 500 | Voluntary framework | Growing | NIST |
| Federal AI regulations | 59 (2024) | 25 (2023) | +136% YoY | Stanford HAI |
| State AI bills passed | 131 (2024) | ~50 (2023) | +162% YoY | State legislatures |
| Federal AI talent hired | 200+ (2024) | Target: 500 by FY2025 | +100% YoY | White House AI Task Force |
| Government AI readiness | US: #1, China: #2 (2025) | 195 countries assessed | Bipolar leadership | Oxford Insights Index |
| AISI network size | 11 countries + EU | Nov 2023: 1 (UK) | +1100% growth | International AI Safety Report |

| Institution | Annual Budget | Staff | Primary Focus |
| --- | --- | --- | --- |
| UK AI Security Institute | ~$65M (£50M) | ~100+ | Model evaluations, red-teaming |
| US CAISI (formerly AISI) | ~$10M | ~50 | Standards, innovation (refocused 2025) |
| EU AI Office | ~$8M | Growing | AI Act enforcement |
| OpenAI (for comparison) | ~$5B+ | 2,000+ | AI development |
| Anthropic (for comparison) | ~$2B+ | 1,000+ | AI development |

The resource asymmetry is stark: a single frontier AI lab spends 30-50x more than the entire global network of AI Safety Institutes combined.
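As a rough consistency check, the ratio follows directly from the figures in the budget tables above; a minimal sketch using those approximate numbers (not audited budget data):

```python
# Rough check of the lab-vs-regulator spending asymmetry, using the
# approximate figures quoted in the budget comparison table above.
aisi_network_budget = 150e6   # ~$150M combined AISI budgets (approximate)
frontier_lab_budget = 5e9     # ~$5B+ annual spending at a single frontier lab

ratio = frontier_lab_budget / aisi_network_budget
print(f"Single frontier lab vs. entire AISI network: ~{ratio:.0f}x")  # ~33x
# The $5B figure is a lower bound; spending above it pushes the ratio toward
# the upper end of the 30-50x range cited in the text.
```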


What “Healthy Regulatory Capacity” Looks Like


Healthy regulatory capacity would enable governments to understand AI systems at a technical level sufficient to evaluate safety claims, enforce requirements, and adapt frameworks as technology evolves.

  1. Technical expertise: Regulators can evaluate model capabilities, understand training processes, and assess safety measures without relying solely on industry self-reporting
  2. Competitive compensation: Government positions attract top AI talent, not just those unable to secure industry roles
  3. Independent evaluation capability: Regulators can conduct their own assessments rather than relying on company-provided data
  4. Enforcement resources: Violations can be detected and penalties applied, making compliance economically rational
  5. Adaptive processes: Regulatory frameworks can update faster than the 5-10 year cycle typical of traditional rulemaking

| Characteristic | Current Status | Gap |
| --- | --- | --- |
| Technical expertise | Building via AISIs; still limited | Large—industry expertise 10-100x greater |
| Competitive compensation | Government salaries 50-80% below industry | Very large |
| Independent evaluation | First joint evaluations in 2024 | Large—capacity limited to ~2-3 models/year |
| Enforcement resources | Minimal for AI-specific violations | Very large |
| Adaptive processes | EU AI Act: 2-3 year implementation | Medium—improving but still slow |

Factors That Decrease Regulatory Capacity (Threats)

| Threat | Mechanism | Evidence | Probability Range |
| --- | --- | --- | --- |
| Budget disparity | Industry outspends regulators 600:1 | $100B+ vs. $150M | 95-99% likelihood gap persists through 2027 |
| Talent competition | Top AI researchers choose industry salaries | Google pays $1M+; government pays $150-250K; federal hiring surge reached 200/500 target by mid-2024 | 70-85% of top talent chooses industry |
| Information asymmetry | Companies know more about their systems than regulators | Model evaluations require company cooperation; voluntary access agreements with OpenAI, Anthropic, DeepMind | 80-90% of evaluation data comes from labs |
| Expertise gap widening | AI capabilities advance faster than regulatory learning | UK AISI evaluations show models now complete expert-level cyber tasks (10+ years experience equivalent) | 60-75% chance gap widens 2025-2027 |

| Threat | Mechanism | Evidence |
| --- | --- | --- |
| Mission reversal | New administrations can redirect agencies | AISI renamed CAISI; refocused from safety to innovation (June 2025) |
| Leadership turnover | Key officials depart with administration changes | Elizabeth Kelly (AISI director) resigned February 2025 |
| Budget cuts | Regulatory funding depends on political priorities | Congressional appropriators cut AISI funding requests |

| Threat | Mechanism | Evidence |
| --- | --- | --- |
| Capability outpacing | AI advances faster than regulatory adaptation | AI capabilities advance weekly; rules take years |
| Model opacity | Even developers cannot fully explain model behavior | Interpretability covers ~10% of frontier model capacity |
| Evaluation complexity | Assessing safety requires sophisticated technical infrastructure | UK AISI evaluation of o1 took months with dedicated resources |

Factors That Increase Regulatory Capacity (Supports)

| Factor | Mechanism | Status | Growth Trajectory |
| --- | --- | --- | --- |
| AISI network development | Building dedicated evaluation expertise | 11 countries + EU (2024-2025); inaugural network meeting November 2024 | From 1 institute (Nov 2023) to 11+ (Dec 2024); 15-20 institutes projected by 2026 |
| Academic partnerships | Universities provide research capacity | NIST AI RMF community of 6,500+ participants | Growing 30-40% annually |
| Industry cooperation | Voluntary testing agreements expand access | Anthropic, OpenAI, DeepMind signed pre-deployment access agreements (2024) | Fragile—depends on continued voluntary participation |
| Federal talent recruitment | Specialized hiring programs for AI experts | 200+ hired in 2024; target 500 by FY2025 via AI Corps, US Digital Corps | 40-60% of target achieved mid-2024; uncertain post-administration change |

| Factor | Mechanism | Status | Implementation Details |
| --- | --- | --- | --- |
| EU AI Act | Creates mandatory compliance obligations with penalties up to €35M/7% revenue | Entered into force August 2024; GPAI obligations active August 2025; full enforcement August 2026 | Only 3 of 27 member states designated authorities by August 2025 deadline—severe implementation capacity gap |
| NIST AI RMF | Provides structured assessment methodology | 40-60% Fortune 500 adoption; voluntary framework limits enforcement | 70-75% adoption in financial services (existing regulatory culture); 25-35% in retail |
| State legislation | Creates enforcement opportunities | 131 state AI bills passed (2024); over 1,000 bills introduced in the 2025 legislative session | Fragmentation risk—federal preemption efforts may override state capacity building |

| Factor | Mechanism | Status |
| --- | --- | --- |
| Interpretability research | Better understanding of model behavior | 70% of Claude 3 Sonnet features interpretable |
| Evaluation tools | Open-source frameworks for safety assessment | UK AISI Inspect framework released May 2024 |
| Automated auditing | AI-assisted oversight could reduce resource needs | Research stage |

| Domain | Impact | Severity |
| --- | --- | --- |
| Compliance theater | Companies perform safety rituals without substantive risk reduction | High |
| Reactive governance | Regulation only after harms materialize | High |
| Credibility gap | Industry ignores regulations it knows cannot be enforced | Critical |
| Innovation harm | Poorly designed rules burden companies without improving safety | Medium |
| Democratic accountability | Citizens cannot hold companies accountable through government | High |

Regulatory capacity affects existential risk through several mechanisms:

Pre-deployment evaluation: If regulators cannot assess frontier AI systems before deployment, safety depends entirely on company self-governance. The ~$150M combined AISI budget versus $100B+ industry spending suggests current capacity is insufficient for meaningful pre-deployment oversight. The UK AISI’s Frontier AI Trends Report documents evaluation capacity of 2-3 major models per year—insufficient when labs release models quarterly or monthly.

Enforcement credibility: Without enforcement capability, even well-designed rules become voluntary. The EU AI Act establishes penalties up to €35M or 7% of global revenue, but only 3 of 27 member states designated enforcement authorities by the August 2025 deadline. This 11% compliance rate with basic administrative requirements suggests severe capacity constraints for actual enforcement. The US has zero federal AI-specific enforcement actions as of December 2025.

Adaptive governance: Transformative AI may require rapid regulatory response—potentially within weeks of capability emergence. Current regulatory processes operate on multi-year timelines: the EU AI Act took 3 years to pass (2021-2024) and requires 2 more years for full implementation (2024-2026). The OECD’s research on AI in regulatory design finds governments must shift from “regulate-and-forget” to “adapt-and-learn” approaches, but 70% of countries still lack capacity for AI-enhanced policy implementation as of 2023.

Capability-regulation race dynamics: Academic research documents “regulatory inertia” where lack of technical capabilities prevents timely response despite urgent need. Nature’s 2024 analysis identifies information asymmetry, pacing problems, and risk of regulatory capture as fundamental challenges requiring new approaches—yet most jurisdictions continue traditional frameworks. The probability of meaningful catastrophic risk regulation before transformative AI arrival is estimated at 15-30% given current trajectories.


| Timeframe | Key Developments | Capacity Impact |
| --- | --- | --- |
| 2025-2026 | EU AI Act enforcement begins; CAISI mission unclear; state legislation proliferates | Mixed—EU capacity growing; US uncertain |
| 2027-2028 | Next-gen frontier models deployed; AISI network matures | Capacity gap may widen if models advance faster than institutions |
| 2029-2030 | Potential new frameworks; enforcement track record emerges | Depends on political commitments and incident history |

| Scenario | Probability | Outcome | Key Drivers | Timeline |
| --- | --- | --- | --- | --- |
| Capacity catch-up | 15-20% | Major incident or political shift drives significant regulatory investment (5-10x budget increases); capacity begins closing gap with industry | Catastrophic AI incident, bipartisan legislative action, international coordination breakthrough | 2026-2028 window; requires sustained 3-5 year commitment |
| Muddle through | 45-55% | AISI network grows modestly (15-20 institutes by 2027); EU enforcement proceeds with gaps; US capacity stagnates; industry remains 80-90% self-governing | Status quo political dynamics, incremental funding increases, continued voluntary cooperation | 2025-2030; baseline trajectory |
| Capacity decline | 20-25% | Budget cuts (30-50% reductions), talent drain (net negative hiring), and political deprioritization reduce regulatory capability; safety depends 95%+ on industry self-governance | Economic recession, anti-regulation political shift, US-China competition prioritizes speed over safety | 2025-2027; accelerated by administration changes |
| Regulatory innovation | 10-15% | AI-assisted oversight, novel funding models (industry levies), or international pooling dramatically improve capacity efficiency (3-5x multiplier effect) | Technical breakthroughs in automated evaluation, new governance models (e.g., AI Safety Institutes gain enforcement authority) | 2026-2029; requires both technical and political innovation |
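A quick consistency check on the scenario table: the four probability ranges should be able to sum to 100%. A minimal sketch of that arithmetic, using the ranges copied from the table above:

```python
# Check that the scenario probability ranges bracket 100%.
scenarios = {
    "Capacity catch-up":     (0.15, 0.20),
    "Muddle through":        (0.45, 0.55),
    "Capacity decline":      (0.20, 0.25),
    "Regulatory innovation": (0.10, 0.15),
}

low_total = sum(lo for lo, _ in scenarios.values())
high_total = sum(hi for _, hi in scenarios.values())
print(f"Sum of lower bounds: {low_total:.0%}")   # 90%
print(f"Sum of upper bounds: {high_total:.0%}")  # 115%
# 100% falls inside [90%, 115%], so the scenarios are mutually consistent as an
# exhaustive set, though any single assignment must sum to exactly 1.
```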

Quantitative Assessment: Capacity Requirements vs. Reality


To provide meaningful oversight of frontier AI development, regulators would need capacity to evaluate major model releases before deployment. Current capacity falls far short:

| Metric | Current State | Required for Adequate Oversight | Gap Magnitude |
| --- | --- | --- | --- |
| Models evaluated per year | 2-3 (UK AISI, 2024) | 12-24 (quarterly releases from 4-6 frontier labs) | 4-8x shortage |
| Evaluation time per model | 8-12 weeks | 2-4 weeks (to avoid deployment delays) | 2-3x too slow |
| Technical staff per evaluation | 10-15 researchers | 20-30 (to match lab eval teams) | 2x shortage |
| Budget per evaluation | $500K-1M (estimated) | $2-5M (comprehensive red-teaming) | 2-5x underfunded |
| Annual evaluation capacity | $2-3M total | $30-60M (if all frontier labs evaluated) | 10-20x shortfall |

Implication: Current AISI network capacity would need to grow 10-20x to provide pre-deployment evaluation of all frontier models. At current growth rates (doubling every 18-24 months), adequate capacity would require 5-7 years—likely longer than the timeline to transformative AI systems.
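The timeline estimate follows from compounding the assumed doubling time against the 10-20x shortfall; a minimal sketch of that arithmetic (growth-rate and gap figures are the estimates stated above):

```python
import math

def years_to_close(gap_multiple: float, doubling_months: float) -> float:
    """Years of sustained growth needed to multiply capacity by `gap_multiple`,
    assuming capacity doubles every `doubling_months` months."""
    return math.log2(gap_multiple) * doubling_months / 12

for gap in (10, 20):
    for months in (18, 24):
        print(f"{gap}x gap, doubling every {months} months: "
              f"~{years_to_close(gap, months):.1f} years")
# 10x gap: ~5.0-6.6 years; 20x gap: ~6.5-8.6 years, in line with the 5-7 year
# estimate above (and longer at the top of the range).
```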

The salary differential creates structural barriers to regulatory capacity:

| Position Level | Industry Compensation | Government Compensation | Multiplier | Annual Talent Loss Estimate |
| --- | --- | --- | --- | --- |
| Entry-level ML engineer | $180-250K total comp | $80-120K | 1.5-2x | 60-70% choose industry |
| Senior researcher | $400-800K total comp | $150-200K | 2.5-4x | 75-85% choose industry |
| Principal/Staff level | $800K-2M total comp | $180-250K | 3-8x | 85-95% choose industry |
| Top 1% talent | $2-5M+ (equity-heavy) | $200-280K (GS-15 max) | 7-20x | 95-99% choose industry |

The 2024 federal AI hiring initiative offers recruitment incentives up to 25% of base pay (plus relocation, retention bonuses, and $60K student loan repayment). This improves the situation at entry levels but leaves senior/principal gaps unchanged:

  • Entry-level improved: $100K → $125K base plus $60K in loan repayment (spread over four years), an effective package of roughly $185K (competitive with industry entry-level offers)
  • Senior level still inadequate: $180K → $225K + retention ≈ $250K total (vs. $400-800K industry)
  • Principal level hopeless: $250K max vs. $800K-2M (3-8x gap persists)

Implication: Government can potentially hire entry-level talent with aggressive incentives, but acquiring senior expertise required to lead evaluations faces near-insurmountable compensation barriers. Estimates suggest 70-85% of regulatory technical leadership comes from individuals unable to secure equivalent industry positions, not from top-tier talent choosing public service.
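A minimal sketch of the gap arithmetic behind this implication, using rough midpoints of the compensation ranges in the table above (illustrative figures only, not authoritative pay data):

```python
# Approximate compensation-gap multipliers at range midpoints (illustrative).
levels = {
    # level: (approx. industry total comp, approx. government comp)
    "Entry-level ML engineer": (215_000, 100_000),
    "Senior researcher":       (600_000, 175_000),
    "Principal/Staff":         (1_400_000, 215_000),
    "Top 1% talent":           (3_500_000, 240_000),
}

for level, (industry, government) in levels.items():
    print(f"{level}: ~{industry / government:.1f}x gap")
# A 25% recruitment incentive plus $60K loan repayment roughly closes the
# entry-level gap but leaves the senior and principal multipliers largely intact.
```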

The EU AI Act provides a test case for enforcement capacity needs. With 27 member states and an estimated 500-2,000 high-risk AI systems requiring compliance:

| Enforcement Function | Estimated Annual Cost per Member State | Total EU Cost (27 states) | Current Budget Allocation |
| --- | --- | --- | --- |
| Authority setup | $2-5M (one-time) | $54-135M | Unknown—only 3 states compliant |
| Market surveillance | $5-10M annually | $135-270M | Severely underfunded |
| Conformity assessment | $10-20M annually | $270-540M | Mostly delegated to private notified bodies |
| Incident investigation | $3-8M annually | $81-216M | Not yet established |
| Penalty enforcement | $2-5M annually | $54-135M | Zero enforcement actions to date |
| Total annual requirement | $20-43M | $540-1,160M | $8M EU AI Office (2024) |

Gap assessment: The EU AI Office budget of ~$8M represents 0.7-1.5% of estimated enforcement requirements. Even if member states collectively spend 10x the EU Office budget ($80M total), this reaches only 7-15% of required capacity. The 11% compliance rate (3 of 27 states designated authorities by deadline) suggests many states lack resources for even basic administrative setup.
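The percentages in the gap assessment follow directly from the table totals; a minimal sketch of the calculation (the 10x member-state spending multiplier is the assumption stated above):

```python
# EU AI Act enforcement funding gap, using the estimates in the table above.
eu_ai_office_budget = 8e6                      # ~$8M EU AI Office (2024)
required_low, required_high = 540e6, 1_160e6   # estimated annual requirement

print(f"EU AI Office alone: {eu_ai_office_budget / required_high:.1%} "
      f"to {eu_ai_office_budget / required_low:.1%} of the requirement")
# ~0.7% to ~1.5%

# Assume member states collectively spend 10x the EU Office budget ($80M total):
combined = 10 * eu_ai_office_budget
print(f"With 10x member-state spending: {combined / required_high:.1%} "
      f"to {combined / required_low:.1%}")
# ~6.9% to ~14.8%, i.e. the 7-15% figure cited above
```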


Arguments for feasibility:

  • Nuclear and pharmaceutical regulation achieved effective oversight of complex technologies
  • AI Safety Institutes are building real technical capacity, demonstrated through joint model evaluations
  • NIST AI RMF shows government can develop sophisticated technical frameworks
  • Industry cooperation (voluntary testing agreements) extends government capacity

Arguments against:

  • AI advances faster than any previous technology; traditional regulatory timelines are fundamentally inadequate
  • Resource asymmetry (600:1) is unprecedented; no previous industry-regulator gap was this large
  • AI capabilities are intangible and opaque; physical inspection models from nuclear/pharma don’t apply
  • Top AI talent strongly prefers industry; government cannot compete on compensation

Arguments for voluntary (NIST AI RMF approach):

  • Flexibility allows adaptation to different contexts and company sizes
  • Industry buy-in produces genuine implementation rather than compliance theater
  • 40-60% Fortune 500 adoption shows voluntary frameworks can achieve scale
  • Avoids innovation-stifling rules that don’t match actual risks

Arguments against:

  • Voluntary compliance is selective; highest-risk actors may opt out
  • No enforcement mechanism means violations go unaddressed
  • EO 14110 revocation shows voluntary frameworks can be eliminated overnight
  • “Affirmative defense” approach (Colorado AI Act) may incentivize minimal compliance

US AI Safety Institute to CAISI (2023-2025)


The trajectory of the US AI Safety Institute illustrates both the potential and fragility of regulatory capacity:

| Phase | Date | Development |
| --- | --- | --- |
| Founding | November 2023 | AISI established at NIST; $10M initial budget |
| Momentum | 2024 | Director appointed; agreements signed with Anthropic, OpenAI |
| Demonstrated value | November 2024 | Joint evaluation of Claude 3.5 Sonnet published |
| Political shift | January 2025 | EO 14110 revoked; AISI future uncertain |
| Transformation | June 2025 | Renamed CAISI; mission shifted from safety to innovation |

Key lesson: Regulatory capacity built over 18 months was effectively redirected in weeks, demonstrating the fragility of government capacity without legislative foundation.

NIST AI RMF adoption shows uneven capacity effects across sectors:

| Sector | Adoption Rate | Implementation Depth | Capacity Effect |
| --- | --- | --- | --- |
| Financial services | 70-75% | High (full four-function) | Significant |
| Healthcare | 60-65% | Medium-High | Moderate |
| Technology | 45-70% | Variable | Mixed |
| Government | 30-40% (rising) | Growing | Building |
| Retail | 25-35% | Low | Minimal |

Key lesson: Voluntary frameworks achieve highest adoption where existing regulatory culture (finance, healthcare) creates implementation incentives.



Recent Academic & Government Research (2024-2025)


Regulatory Challenges and Academic Analysis
