
Institutional Quality

Parameter: Institutional Quality
Importance: 72
Direction: Higher is better
Current Trend: Under pressure (regulatory capture concerns, expertise gaps, rapid policy shifts)
Key Measurement: Independence from industry, expertise retention, decision quality metrics

Prioritization: Importance 72, Tractability 40, Neglectedness 40, Uncertainty 50
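
The prioritization scores follow an importance-tractability-neglectedness pattern. As a rough sketch only, here is one way such scores could be folded into a single figure, assuming a plain ITN average discounted by uncertainty; the weighting is an assumption for illustration, not this site's actual methodology.

```python
# Illustrative aggregation of the prioritization scores above.
# The weighting scheme is an assumption, not the site's methodology.

scores = {
    "importance": 72,
    "tractability": 40,
    "neglectedness": 40,
    "uncertainty": 50,
}

# Average the three ITN dimensions, then discount by uncertainty
# (higher uncertainty pulls the actionable score down).
base = (scores["importance"] + scores["tractability"] + scores["neglectedness"]) / 3
adjusted = base * (1 - scores["uncertainty"] / 200)  # assumed discount: up to 50%

print(f"ITN average: {base:.1f}")               # 50.7
print(f"Uncertainty-adjusted: {adjusted:.1f}")  # 38.0
```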

Institutional Quality measures the health and effectiveness of institutions involved in AI governance—their independence from capture, ability to retain expertise, and quality of decision-making processes. Higher institutional quality is better—it determines whether AI governance serves the public interest or narrow constituencies. While regulatory capacity asks whether governments can regulate, institutional quality asks whether they will do so effectively.

Funding structures, personnel practices, transparency norms, and the balance of power between regulated industries and oversight bodies all shape whether institutional quality improves or degrades. High quality enables governance that genuinely serves public interest; low quality results in capture where institutions nominally serving the public actually advance industry interests.

This parameter underpins:

  • Governance legitimacy: Institutions perceived as captured lose public trust and political support
  • Decision quality: Independent institutions make better decisions based on evidence rather than influence
  • Long-term thinking: High-quality institutions can prioritize long-term safety over short-term political pressures
  • Adaptive capacity: Healthy institutions can evolve as AI technology and risks change


Contributes to: Governance Capacity

Primary outcomes affected:

  • Steady State ↓↓ — Quality institutions preserve democratic governance in the long term
  • Transition Smoothness ↓ — Effective institutions manage disruption and maintain legitimacy

| Metric | Current Value | Baseline/Comparison | Trend |
| --- | --- | --- | --- |
| Industry-academic co-authorship | 85% of AI papers (2024) | 50% (2010) | Increasing |
| AI PhD graduates entering industry | 70% (2024) | 20% (two decades ago) | Strongly increasing |
| Largest AI models from industry | 96% (current) | Unknown (2010) | Dominant |
| Regulatory-industry resource ratio | 600:1 (~$100B vs. $150M) | N/A for previous technologies | Unprecedented |
| US AISI budget request vs. received | $47.7M requested, ~$10M received | N/A (new institution) | Underfunded |
| OpenAI lobbyist count | 18 (2024) | 3 (2023) | 6x increase |
| AISI direction reversals | 1 major (AISI to CAISI, 2025) | 0 (new institutions) | Concerning |
| Revolving door in AI-related sectors | 53% of electronics manufacturing sector lobbyists are ex-government | Unknown baseline | Accelerating |

Sources: MIT Sloan AI research study, OpenSecrets lobbying data, CSIS AISI analysis, Stanford HAI Tracker
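
The 600:1 headline ratio in the table is a round figure; a quick arithmetic check against the stated dollar amounts:

```python
# Arithmetic check on the resource-asymmetry ratio, using the table's
# rounded figures (~$100B industry R&D vs. ~$150M regulatory budgets).

industry_rd = 100e9         # ~$100B annual industry AI R&D
regulatory_budgets = 150e6  # ~$150M total regulatory budgets

print(f"Ratio: {industry_rd / regulatory_budgets:.0f}:1")  # ~667:1, reported as 600:1
```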

| Institution | Funding Source | Industry Ties | Independence Rating | 2025 Budget |
| --- | --- | --- | --- | --- |
| UK AI Security Institute | Government | Voluntary lab cooperation | Medium-High | £50M (~$65M) annually |
| US CAISI (formerly AISI) | Government | Refocused toward innovation (2025) | Medium (declining) | ~$10M received ($47.7M requested) |
| EU AI Office | EU budget | Enforcement mandate | High | ~€10M (estimated) |
| Academic AI safety research | 60-70%+ industry-funded | Strong | Low-Medium | Variable |
| Think tanks | Mixed (industry, philanthropy) | Variable | Variable | Variable |

Note: The UK AISI has the largest national AI safety budget globally; US underfunding creates an expertise gap. Sources: CSIS AISI Network analysis, All Tech Is Human landscape report


What “Healthy Institutional Quality” Looks Like


Healthy institutional quality in AI governance would exhibit characteristics that enable independent, expert, and accountable decision-making in the public interest.

Key Characteristics of Healthy Institutions

  1. Independence from capture: Decisions based on evidence and public interest, not industry influence or political pressure
  2. Expertise retention: Institutions can attract and keep technical talent despite industry competition
  3. Transparent processes: Decision-making is visible to the public and open to scrutiny
  4. Long-term orientation: Institutions can prioritize future risks over immediate political considerations
  5. Adaptive capacity: Structures and processes can evolve as AI technology changes
  6. Accountability mechanisms: Clear processes for identifying and correcting institutional failures

| Characteristic | Current Status | Gap |
| --- | --- | --- |
| Independence from capture | Resource asymmetry enables industry influence | Large |
| Expertise retention | Compensation gaps of 50-80% vs. industry | Very large |
| Transparent processes | Variable; some institutions opaque | Medium |
| Long-term orientation | Political volatility undermines planning | Large |
| Adaptive capacity | Multi-year regulatory timelines | Large |
| Accountability mechanisms | Limited for AI-specific governance | Medium-Large |

Factors That Decrease Institutional Quality (Threats)


The 2024 RAND/AAAI study “How Do AI Companies ‘Fine-Tune’ Policy?” interviewed 17 AI policy experts to identify key capture mechanisms. It identified agenda-setting (mentioned by 15 of 17 experts), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as the primary channels of industry influence.

| Capture Mechanism | How It Works | Current Evidence | Impact on Quality |
| --- | --- | --- | --- |
| Agenda-setting | Industry shapes which issues receive attention | Framing AI policy as “innovation vs. regulation”; capture of policy discourse | High: determines what gets regulated |
| Advocacy and lobbying | Direct influence through campaign contributions, meetings | OpenAI: 3→18 lobbyists (2023-2024); 53% of sector lobbyists are ex-government | High: direct policy influence |
| Academic capture | Industry funding shapes research priorities and findings | 85% of AI papers have industry co-authors; 70% of PhDs enter industry | Very High: captures expertise production |
| Information management | Industry controls access to data needed for regulation | Voluntary model evaluations; proprietary benchmarks; 29x compute advantage | Critical: regulators depend on industry data |
| Cultural capture | Industry norms become regulatory norms | “Move fast” culture; “innovation-first” mindset in agencies | Medium-High: shapes institutional values |
| Media capture | Industry shapes public discourse through PR and funding | Tech media dependence on company access; sponsored content | Medium: affects public pressure on regulators |
| Resource asymmetry | Industry outspends regulators 600:1 | $100B+ industry R&D vs. $150M total regulatory budgets | Critical: enables all other mechanisms |

Sources: RAND regulatory capture study, MIT Sloan industry dominance analysis, OpenSecrets lobbying data

| Threat | Mechanism | Evidence |
| --- | --- | --- |
| Mission reversal | New administrations redirect institutional priorities | AISI to CAISI (2025): safety evaluation to innovation promotion; EO 14110 revoked |
| Budget manipulation | Funding cuts undermine institutional capacity | US AISI requested $47.7M; received ~$10M (21% of request); NIST forced to “cut to the bone” |
| Leadership churn | Political appointees depart with administrations | Elizabeth Kelly (AISI director) resigned February 2025; typical 18-24 month tenure for political appointees |

Sources: FedScoop NIST budget analysis, CSIS AISI recommendations

| Threat | Mechanism | Evidence |
| --- | --- | --- |
| Compensation gap | Government cannot compete with industry salaries | 50-80% salary differential (estimated); AI researchers can earn 5-10x more in industry than government |
| Career incentives | Best career path is government-to-industry transition | 70% of AI PhDs now enter industry; revolving door provides lucrative exit opportunities |
| Capability gap | Industry technical capacity exceeds regulators | Industry invests $100B+ in AI R&D annually; industry models 29x larger than academic models on average; 96% of largest models now from industry |
| Computing resource asymmetry | Academic institutions lack large-scale compute for frontier research | Forces academic researchers into industry collaborations; creates dependence on company resources |

Sources: MIT Sloan AI research dominance, RAND regulatory capture mechanisms


Factors That Increase Institutional Quality (Supports)

| Factor | Mechanism | Status |
| --- | --- | --- |
| Independent funding | Insulate budgets from political interference | Limited: most AI governance dependent on annual appropriations |
| Cooling-off periods | Limit revolving door with waiting periods | Varies by jurisdiction; often weakly enforced |
| Transparency requirements | Public disclosure of industry contacts and influence | Increasing but inconsistent |

| Factor | Mechanism | Status |
| --- | --- | --- |
| Academic partnerships | Universities supplement government expertise | Growing: NIST AI RMF community of 6,500+ |
| Technical fellowship programs | Bring industry expertise into government | Limited scale |
| International cooperation | Share evaluation methods across AISI network | Building: first joint evaluations completed |

| Factor | Mechanism | Status |
| --- | --- | --- |
| Congressional oversight | Legislative review of agency actions | Inconsistent for AI-specific issues |
| Civil society monitoring | NGOs track and publicize capture | Active: AI Now, Future of Life, etc. |
| Judicial review | Courts can overturn captured decisions | Available but rarely invoked for AI |

Recommended Mitigations from Expert Analysis

The 2024 RAND/AAAI study on regulatory capture identified systemic changes needed to improve institutional quality. Based on interviews with 17 AI policy experts, the study recommends:

| Mitigation Strategy | Mechanism | Implementation Difficulty | Estimated Effectiveness |
| --- | --- | --- | --- |
| Develop technical expertise in government | Competitive salaries, fellowship programs, training | High: requires sustained funding | High (20-40% improvement) |
| Develop technical expertise in civil society | Fund independent research organizations and watchdogs | Medium: philanthropic support available | Medium-High (15-30% improvement) |
| Create independent funding streams | Insulate AI ecosystem from industry dependence | Very High: requires new institutions | Very High (30-50% improvement) |
| Increase transparency and ethics requirements | Disclosure of industry funding, conflicts of interest | Medium: can be legislated | Medium (10-25% improvement) |
| Enable greater civil society access to policy | Open comment periods, public advisory boards | Low-Medium: procedural changes | Medium (15-25% improvement) |
| Implement procedural safeguards | Cooling-off periods, recusal requirements, lobbying limits | Medium: political resistance | Medium-High (20-35% improvement) |
| Diversify academic funding | Government and philanthropic grants for AI safety research | High: requires hundreds of millions annually | High (25-40% improvement) |

Effectiveness estimates represent expert judgment on potential reduction in capture influence if fully implemented. Most strategies show compound effects when combined. Source: RAND regulatory capture study
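
The compounding noted above can be made concrete. A minimal sketch, assuming each strategy independently removes a fraction of capture influence (an independence assumption of ours, not the study's), using midpoints of three of the table's ranges:

```python
# Sketch of compound mitigation effects under an independence assumption.
# Effectiveness values are midpoints of ranges from the table above.

def combined_reduction(effects):
    """Total fraction of capture influence removed if each effect
    independently removes its own fraction of what remains."""
    residual = 1.0
    for e in effects:
        residual *= 1 - e
    return 1 - residual

# Independent funding (30-50% -> 0.40), procedural safeguards (20-35% -> 0.275),
# transparency requirements (10-25% -> 0.175).
print(f"Combined: {combined_reduction([0.40, 0.275, 0.175]):.0%}")  # ~64%
```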


When institutional quality is low, the consequences cut across several domains:

| Domain | Impact | Severity |
| --- | --- | --- |
| Regulatory capture | Rules serve industry interests, not public safety | Critical |
| Governance legitimacy | Public loses trust in AI oversight | High |
| Safety theater | Appearance of oversight without substance | Critical |
| Democratic accountability | Citizens cannot influence AI governance through normal channels | High |
| Long-term blindness | Short-term political pressures override safety concerns | Critical |

Institutional Quality and Existential Risk


Institutional quality affects existential risk through several mechanisms:

Capture prevents intervention: If AI governance institutions are captured by industry, they cannot take action against industry interests—even when safety requires it. The ~$100B industry spending versus ~$150M regulatory budget creates unprecedented capture potential.

Political volatility undermines continuity: Long-term AI safety requires sustained institutional commitment across political cycles. The AISI-to-CAISI transformation shows how quickly institutional direction can reverse, undermining multi-year safety efforts.

Expertise asymmetry prevents evaluation: Without independent technical expertise, regulators cannot assess industry safety claims. This forces reliance on self-reporting, which becomes unreliable precisely when stakes are highest.

Trust deficit undermines legitimacy: If the public perceives AI governance as captured, political support for stronger oversight erodes, creating a vicious cycle of weakening institutions.


| Timeframe | Key Developments | Quality Impact |
| --- | --- | --- |
| 2025-2026 | CAISI direction stabilizes; EU AI Act enforcement begins; state legislation proliferates | Mixed: EU institutions strengthen; US uncertain |
| 2027-2028 | Next-gen AI deployed; first major enforcement actions | Critical test: will institutions act independently? |
| 2029-2030 | Institutional track record emerges; capture patterns become visible | Determines whether quality improves or declines |

| Scenario | Probability | Outcome | Key Indicators | Timeline |
| --- | --- | --- | --- | --- |
| Quality improvement | 15-20% | Major incident or reform movement drives institutional strengthening; independent funding, expertise programs, and transparency measures implemented | Statutory funding protections; cooling-off periods enforced; academic funding diversified | 2026-2028 |
| Muddle through | 45-55% (baseline) | Institutions maintain partial independence; some capture but also some genuine oversight; quality varies by jurisdiction | Mixed enforcement record; continued resource gaps; some effective interventions | 2025-2030+ |
| Gradual capture | 25-35% | Industry influence increases over time; institutions provide appearance of oversight without substance; safety depends on industry self-governance | Increasing revolving door; weakening enforcement; industry-friendly rule changes | 2025-2027 |
| Rapid deterioration | 5-10% | Political crisis or budget cuts severely weaken institutions; AI governance effectively collapses | Major budget cuts (>50%); mass departures of technical staff; regulatory rollbacks | 2025-2026 |

Note on probabilities: These estimates reflect expert judgment based on historical regulatory patterns, current trends, and political economy dynamics. Actual outcomes depend heavily on near-term developments including major AI incidents, election outcomes, and civil society mobilization. The “muddle through” scenario receives highest probability as institutional capture rarely reaches extremes—most regulatory systems maintain some independence while also exhibiting capture dynamics.
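
One consistency note on the ranges: their midpoints sum to roughly 105%, so reading the table as a single distribution requires light normalization. A small sketch (the midpoints are our reading of the stated ranges):

```python
# Normalize the scenario-range midpoints into a proper distribution.
# Midpoints are our reading of the table's probability ranges.

midpoints = {
    "Quality improvement": 0.175,   # 15-20%
    "Muddle through": 0.50,         # 45-55%
    "Gradual capture": 0.30,        # 25-35%
    "Rapid deterioration": 0.075,   # 5-10%
}

total = sum(midpoints.values())  # 1.05
for scenario, p in midpoints.items():
    print(f"{scenario}: {p / total:.1%}")
```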


Arguments that capture is inevitable:

  • Resource asymmetry (600:1) is unprecedented in regulatory history
  • AI companies can offer government officials 5-10x salaries
  • Technical complexity forces dependence on industry expertise
  • Political economy: industry has concentrated interests; public has diffuse interests
  • Historical pattern: most industries eventually capture their regulators

Arguments that capture can be resisted:

  • EU AI Office demonstrates that well-designed institutions can maintain independence
  • Civil society organizations provide counterweight to industry influence
  • Public concern about AI creates political space for independent action
  • Transparency requirements and cooling-off periods can limit capture mechanisms
  • Crisis events (like major AI harms) can reset institutional dynamics

Should AI Governance Be Technocratic or Democratic?


Arguments for technocratic governance:

  • AI is too complex for democratic deliberation; experts must lead
  • Speed of AI development requires rapid institutional response
  • Technical decisions should be made by those who understand technology
  • Democratic processes are vulnerable to misinformation and manipulation

Arguments for democratic governance:

  • Technocratic institutions are more vulnerable to capture
  • Democratic legitimacy is essential for public acceptance of AI governance
  • Citizens should have voice in decisions affecting their lives
  • Diverse perspectives catch blind spots that homogeneous expert groups miss

AI Safety Institute Direction Reversal (2023-2025)


The US AI Safety Institute’s transformation illustrates institutional quality challenges:

| Phase | Development | Quality Implication |
| --- | --- | --- |
| Founding (Nov 2023) | Mission: pre-deployment safety testing | High: independent safety mandate |
| Building (2024) | Signed voluntary agreements with labs; conducted evaluations | Medium: relied on industry cooperation |
| Transition (Jan 2025) | EO 14110 revoked; leadership departed | Declining: political vulnerability exposed |
| Transformation (Jun 2025) | Renamed CAISI; mission: innovation promotion | Low: safety mission replaced |

Key lesson: Institutions without legislative foundation are vulnerable to rapid capture through political channels, even when initially designed for independence.

The evolution of academic AI research demonstrates gradual capture dynamics:

| Metric | 2010 | 2020 | 2024 | Trend |
| --- | --- | --- | --- | --- |
| Industry co-authorship | ~50% | ~75% | ~85% | Increasing |
| Industry funding share | ~30% | ~50% | ~60%+ | Increasing |
| Industry publication venues | Limited | Growing | Dominant | Increasing |
| Research critical of industry | Common | Declining | Rare | Decreasing |

Key lesson: Gradual financial dependence shifts research priorities even without explicit directives, creating “soft capture” that maintains appearance of independence while substantively serving industry interests.
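
The pace of this drift is easy to read off the table's endpoints; a quick calculation of the average annual change implied by the 2010 and 2024 values:

```python
# Average annual drift implied by the academic-capture table (2010 -> 2024).

years = 2024 - 2010  # 14 years

co_authorship_rate = (85 - 50) / years   # ~2.5 percentage points/year
funding_share_rate = (60 - 30) / years   # ~2.1 percentage points/year

print(f"Industry co-authorship: +{co_authorship_rate:.1f} pp/year")
print(f"Industry funding share: +{funding_share_rate:.1f} pp/year")
```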


| Dimension | Metric | Current Status |
| --- | --- | --- |
| Independence | % budget from independent sources | Low (most dependent on appropriations) |
| Expertise | Technical staff credentials vs. industry | Low (significant gap) |
| Transparency | Public disclosure of industry contacts | Medium (inconsistent) |
| Decision quality | Rate of decisions later reversed or criticized | Unknown (too new) |
| Enforcement | Violations detected and penalized | Very low (minimal enforcement) |
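
As a purely hypothetical illustration of how these dimensions might be tracked over time, the qualitative ratings could be mapped onto a crude numeric composite; the 0-4 scale below is our invention, not an established index:

```python
# Hypothetical composite from the qualitative ratings above.
# The 0-4 numeric mapping is illustrative, not an established index.

RATING = {"Very low": 0, "Low": 1, "Medium": 2, "High": 3, "Very high": 4}

dimensions = {
    "Independence": "Low",
    "Expertise": "Low",
    "Transparency": "Medium",
    "Decision quality": None,    # "Unknown (too new)": excluded from the average
    "Enforcement": "Very low",
}

known = [RATING[v] for v in dimensions.values() if v is not None]
print(f"Composite quality score: {sum(known) / len(known):.1f} / 4")  # 1.0
```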
Warning signs that an institution is being captured include:

  • Institutions adopt industry framing of issues (“innovation vs. regulation”)
  • Leadership recruited primarily from regulated industry
  • Technical assessments consistently favor industry positions
  • Enforcement actions rare despite documented violations
  • Public communications emphasize industry partnership over accountability


AI Safety Institute Resources and Governance
