
Model Registries


Model registries represent a foundational governance tool for managing risks from advanced AI systems. Like drug registries that enable pharmaceutical regulation or aircraft registries that support aviation safety, AI model registries would create centralized databases containing information about frontier AI systems—their capabilities, training details, deployment contexts, and safety evaluations. This infrastructure provides governments with the visibility necessary to implement more sophisticated AI governance measures.

The policy momentum is significant. The U.S. Executive Order on AI (October 2023) required developers to report models trained above 10^26 FLOP, with quarterly reporting specified in BIS's proposed implementing rule. The EU AI Act requires registration of high-risk AI systems and general-purpose AI models. California's Transparency in Frontier Artificial Intelligence Act (TFAIA) mandates annual publication of comprehensive "Frontier AI Frameworks" by large frontier developers. New York's RAISE Act requires incident reporting within 72 hours. Together, these requirements form the skeleton of a registry system, though implementation remains fragmented and early-stage.

The strategic value of model registries lies in their enabling function. A registry alone doesn’t prevent harm—but it provides the information foundation for safety requirements, pre-deployment review, incident tracking, and international coordination. Without knowing what models exist and what capabilities they possess, governments cannot effectively regulate AI development. Model registries transform AI governance from reactive to proactive by creating visibility into the development pipeline before deployment.

Effective model registries must balance multiple objectives: providing sufficient information for governance while minimizing regulatory burden on developers and avoiding competitive disclosure concerns. The table below summarizes the core design questions; a short sketch of tiered access follows it.

| Question | Considerations | Current Approaches |
|---|---|---|
| What triggers registration? | Compute thresholds vs. capabilities vs. use cases | US: 10^26 FLOP; EU: 10^25 FLOP + capability criteria |
| What information required? | Training data, capabilities, safety evals, incidents | Varies; usually training details + safety documentation |
| Who has access? | Public, regulators, international partners | Tiered access common; sensitive info restricted |
| When to register? | Pre-training, pre-deployment, post-deployment | Trend toward pre-deployment notification |
| Enforcement mechanisms? | Penalties for non-compliance | Fines up to $1-3M (NY RAISE Act) |
| Update requirements? | Material changes, incidents, periodic review | Annual updates + incident reporting |
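
One recurring design choice above, tiered access, can be made concrete with a small sketch: every field in a registry record carries a sensitivity tier, and each audience sees only the projection it is cleared for. This is a minimal illustration; the field names, tiers, and audience categories are assumptions, not drawn from any existing registry.

```python
# Illustrative sketch of tiered access: one registry record, different
# projections for different audiences. Field names, sensitivity tiers, and
# audience categories are assumptions, not taken from any real registry.

SENSITIVITY = {
    "model_name": "low",
    "developer": "low",
    "training_flop": "medium",
    "benchmark_results": "medium",
    "red_team_findings": "high",
    "incident_log": "high",
}

# Tiers each audience is cleared to see.
ACCESS_POLICY = {
    "public": {"low"},
    "international_partner": {"low", "medium"},
    "regulator": {"low", "medium", "high"},
}


def project_record(record: dict, audience: str) -> dict:
    """Return only the fields the given audience may see; unknown fields default to 'high'."""
    allowed = ACCESS_POLICY[audience]
    return {k: v for k, v in record.items() if SENSITIVITY.get(k, "high") in allowed}
```

Under this scheme a public query returns only identity-level fields, while a regulator query returns the full record, including red-team findings and incident logs.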

A comprehensive model registry would include the following categories of information (a schematic record sketch follows the table):

| Category | Information | Sensitivity | Governance Use |
|---|---|---|---|
| Identity | Model name, version, developer, release date | Low | Tracking, accountability |
| Training | Compute used, data sources, training methods | Medium-High | Threshold triggers, capability inference |
| Capabilities | Benchmark results, evaluated risks, known limitations | Medium | Risk assessment, deployment decisions |
| Safety | Red team findings, mitigations, known failures | High | Safety requirements, best practices |
| Deployment | APIs, user counts, use cases, geographic reach | Medium | Impact assessment, enforcement |
| Incidents | Failures, harms, near-misses | High | Learning, accountability |
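
To make this information model concrete, here is a minimal sketch of what a single registry record covering these six categories might look like as a data structure. It is illustrative only: the field names, types, and sensitivity annotations are assumptions rather than any statutory schema.

```python
# Schematic registry record covering the six information categories above.
# Field names, types, and sensitivity comments are illustrative assumptions,
# not a statutory schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Incident:
    occurred: date
    description: str
    severity: str  # e.g. "near-miss", "harm", "critical"


@dataclass
class RegistryEntry:
    # Identity (low sensitivity): tracking, accountability
    model_name: str
    version: str
    developer: str
    release_date: date
    # Training (medium-high): threshold triggers, capability inference
    training_flop: float
    data_sources: list[str]
    # Capabilities (medium): risk assessment, deployment decisions
    benchmark_results: dict[str, float]
    known_limitations: list[str]
    # Safety (high): safety requirements, best practices
    red_team_findings: list[str]
    mitigations: list[str]
    # Deployment (medium): impact assessment, enforcement
    deployment_channels: list[str]
    approximate_user_count: int
    # Incidents (high): learning, accountability
    incidents: list[Incident] = field(default_factory=list)
```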

Federal Level: The October 2023 Executive Order directed the Bureau of Industry and Security (BIS) to establish reporting requirements for advanced AI models. Under the proposed rule:

  • Entities must report models trained with >10^26 FLOP
  • Quarterly reporting on training activities
  • Six-month forward-looking projections required
  • Information includes ownership, compute access, safety testing

State Level:

| State | Legislation | Key Requirements | Status |
|---|---|---|---|
| California | TFAIA (SB 53) | Annual Frontier AI Framework publication; developer accountability | Enacted; effective Jan 1, 2026 |
| New York | RAISE Act | 72-hour incident reporting; safety protocol publication; civil penalties up to $1M | Enacted 2025 |
| Colorado | SB 24-205 | High-risk AI system registration; algorithmic impact assessments | Enacted May 2024 |

The EU AI Act establishes the most comprehensive registry requirements to date:

  • General-Purpose AI Models: Notification to the EU AI Office when training compute exceeds 10^25 FLOP (the presumption threshold for systemic risk)
  • High-Risk AI Systems: Registration in EU database before market placement
  • Systemic Risk Models: Additional transparency and safety requirements
  • Required Information: Technical documentation, compliance evidence, intended use

The EU database will be publicly accessible for high-risk AI systems, with confidential technical documentation available to regulators.

China has implemented algorithm registration requirements since 2022:

  • Deep synthesis (deepfake) algorithms must register with CAC
  • Generative AI services require registration before public offering
  • Algorithmic recommendation services subject to separate registry
  • Focus on content moderation and political sensitivity

| Jurisdiction | Compute Threshold | Pre/Post Deployment | Public Access | Penalties |
|---|---|---|---|---|
| US Federal | 10^26 FLOP | Pre + ongoing | Limited (security) | TBD |
| California | Capability-based | Pre-deployment | Framework public | Civil liability |
| New York | Scale-based | Pre + incidents | Protocols public | Up to $1M |
| EU | 10^25 FLOP | Pre-market | Partial | Up to 7% revenue |
| China | Any public AI | Pre-deployment | Limited | Service suspension |
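
The compute thresholds in the table can be grounded with simple arithmetic. A common heuristic estimates dense-transformer training compute as roughly 6 FLOP per parameter per training token (6·N·D). The sketch below applies that heuristic to a hypothetical run and checks which registry thresholds it would cross; the model size and token count are invented, and capability- or scale-based triggers cannot be reduced to this kind of calculation.

```python
# Rough sketch: estimate training compute with the common 6*N*D heuristic
# (about 6 FLOP per parameter per training token) and compare it against the
# compute thresholds in the table above. The model size and token count below
# are hypothetical.

THRESHOLDS_FLOP = {
    "EU AI Act (systemic-risk presumption)": 1e25,
    "US EO 14110 / BIS proposed rule": 1e26,
}


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6.0 * parameters * training_tokens


def thresholds_crossed(flop: float) -> list[str]:
    """Return the registry thresholds that the estimated run meets or exceeds."""
    return [name for name, limit in THRESHOLDS_FLOP.items() if flop >= limit]


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 400B parameters, 15T training tokens.
    flop = estimated_training_flop(parameters=4e11, training_tokens=1.5e13)
    print(f"Estimated training compute: {flop:.2e} FLOP")   # ~3.6e25
    print("Thresholds crossed:", thresholds_crossed(flop))  # EU threshold only
```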

| Benefit | Mechanism | Confidence |
|---|---|---|
| Visibility for governance | Know what exists before regulating | High |
| Incident learning | Track failures across the ecosystem | High |
| Pre-deployment review | Enable safety checks before release | Medium-High |
| International coordination | Common information standards | Medium |
| Enforcement foundation | Can't enforce rules without knowing who to apply them to | High |
| Research ecosystem support | Aggregate data for policy research | Medium |

| Challenge | Description | Mitigation |
|---|---|---|
| Threshold gaming | Developers structure training to avoid thresholds | Multiple thresholds; capability-based triggers |
| Dual-use concerns | Registry information could advantage competitors/adversaries | Tiered access; confidentiality provisions |
| Open-source gap | Registries focus on centralized developers | Post-release monitoring; community registries |
| Enforcement difficulty | Verifying submitted information is accurate | Auditing; whistleblower protections |
| Rapid obsolescence | Thresholds outdated as technology advances | Automatic update mechanisms; sunset provisions |
| International gaps | No global registry; jurisdiction shopping | International coordination (nascent) |

Model registries are necessary but not sufficient for AI governance. They enable, but do not replace, safety requirements, pre-deployment review, incident tracking, and enforcement mechanisms.

For jurisdictions establishing an initial AI model registry, the following baseline is recommended (a small compliance-check sketch follows the list):

  1. Compute-based threshold: 10^25-10^26 FLOP (adjustable)
  2. Pre-deployment notification: 30-90 days before public release
  3. Required information:
    • Developer identity and contact
    • Training compute and data sources (categorical)
    • Intended use cases and deployment scope
    • Safety evaluation summary
    • Known risks and mitigations
  4. Incident reporting: 72 hours for critical harms
  5. Annual updates: Mandatory refresh of all information
  6. Tiered access: Public summary + confidential technical details
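
As an illustration of how the mechanical parts of this baseline could be operationalized, the sketch below encodes the notification window, the 72-hour incident deadline, and the annual refresh as simple date checks. The 30-day, 72-hour, and 365-day figures come from the recommendation itself; the structure and function names are hypothetical.

```python
# Minimal sketch of the mechanical checks in the baseline above: the
# pre-deployment notification window, the 72-hour incident deadline, and the
# annual refresh. The 30-day, 72-hour, and 365-day figures come from the
# recommendation itself; names and structure are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Registration:
    notified_at: datetime      # when the regulator was notified
    planned_release: datetime  # planned public release date
    last_updated: datetime     # last full information refresh


def notification_window_ok(reg: Registration, min_days: int = 30) -> bool:
    """Item 2: notification at least 30-90 days before public release."""
    return reg.planned_release - reg.notified_at >= timedelta(days=min_days)


def incident_reported_on_time(occurred: datetime, reported: datetime) -> bool:
    """Item 4: critical harms reported within 72 hours."""
    return reported - occurred <= timedelta(hours=72)


def annual_update_due(reg: Registration, now: datetime) -> bool:
    """Item 5: mandatory refresh of all information at least once a year."""
    return now - reg.last_updated > timedelta(days=365)
```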

Based on analysis by Convergence Analysis and the Institute for Law & AI:

| Principle | Rationale | Implementation |
|---|---|---|
| Minimal burden | Encourage compliance, reduce resistance | Require only information developers already track |
| Interoperable | Enable international coordination | Align with emerging international standards |
| Updatable | Technology changes faster than regulation | Built-in mechanism for threshold adjustment |
| Complementary | Registry enables other tools, doesn't replace them | Design for integration with safety requirements |
| Proportionate | Different requirements for different risk levels | Tiered obligations based on capability/deployment |

Don’t:

  • Set thresholds so high only 2-3 models qualify (too narrow)
  • Require disclosure of trade secrets unnecessarily (industry opposition)
  • Create registry without enforcement mechanism (toothless)
  • Assume static thresholds will remain appropriate (obsolescence)
  • Ignore international coordination from the start (jurisdiction shopping)

Near term (2025-2026):

  • US federal registry rules finalized
  • EU database operational for high-risk AI
  • California TFAIA implementation
  • 5-10 jurisdictions with some form of registry
  • Initial international coordination discussions

Longer term:

  • Potential international registry framework
  • Capability-based triggers supplement compute thresholds
  • Integration with compute monitoring
  • Real-time incident reporting systems
  • Cross-border data sharing agreements

| Question | Optimistic Scenario | Pessimistic Scenario |
|---|---|---|
| International coordination | Common standards, shared database | Fragmented, incompatible systems |
| Enforcement effectiveness | High compliance, meaningful oversight | Widespread evasion, symbolic only |
| Open-source coverage | Community registries, post-release tracking | Unmonitored proliferation |
| Threshold relevance | Adaptive thresholds track real risks | Outdated, easily gamed |

| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Active legislation in multiple jurisdictions |
| If AI risk high | High | Essential infrastructure for any governance |
| If AI risk low | Medium | Still useful for transparency and accountability |
| Neglectedness | Low-Medium | Active policy area but implementation gaps |
| Timeline to impact | 1-3 years | Requirements taking effect 2025-2026 |
| Grade | B+ | Foundational but not transformative alone |

| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing dynamics | Visibility into development timelines | Low-Medium |
| Misuse risks | Know what capabilities exist | Medium |
| Regulatory arbitrage | Harmonized international requirements | Low (currently) |
| Incident learning gaps | Mandatory reporting creates database | Medium-High |

  • Convergence Analysis (2024): “AI Model Registries: A Foundational Tool for AI Governance” - Comprehensive design framework
  • Institute for Law & AI (2024): “The Role of Compute Thresholds for AI Governance” - Threshold design considerations
  • Carnegie Endowment (2025): “Entity-Based Regulation in Frontier AI Governance” - Alternative regulatory approaches
  • US Executive Order 14110 (October 2023): “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”
  • EU AI Act (2024): Regulation establishing harmonized rules on artificial intelligence
  • California SB 53 (2025): Transparency in Frontier Artificial Intelligence Act (TFAIA)
  • New York RAISE Act (2025): Responsible AI Safety and Education Act
  • NIST: AI Risk Management Framework integration guidance
  • EU AI Office: High-risk AI database specifications
  • BIS: Proposed rule on AI model reporting requirements (2024)

Model registries improve the AI Transition Model through Civilizational Competence:

| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | Provides information foundation for any governance interventions |
| Civilizational Competence | Institutional Quality | Enables pre-deployment review and incident learning |
| Civilizational Competence | International Coordination | Common standards facilitate cross-border coordination |

Registries are necessary but not sufficient infrastructure; they enable rather than replace safety requirements, evaluations, and enforcement mechanisms.