
California SB 1047

LLM Summary: California SB 1047 was the most significant US AI safety legislation attempted to date, passing the state legislature before being vetoed in September 2024. It would have required frontier AI models (>10^26 FLOP or >$100M training cost) to implement safety testing, shutdown capabilities, and third-party auditing, with civil penalties of up to 10% of training costs for non-compliance.
Policy

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

Importance: 82
Introduced: February 2024
Passed Legislature: August 29, 2024
Vetoed: September 29, 2024
Author: Senator Scott Wiener

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models.

The bill passed the California legislature but was vetoed by Governor Gavin Newsom on September 29, 2024.

SB 1047 was the most significant AI safety legislation attempted in the United States to date. Its passage through the legislature demonstrated growing political willingness to regulate frontier AI, while its veto illustrated the political challenges such regulation faces.

Dimension | Assessment | Notes
Tractability | Medium | Passed legislature (Assembly 45-11, Senate 32-1) but vetoed; demonstrated political feasibility with refinement
Effectiveness | Medium-High (if enacted) | Would have created enforceable requirements for frontier AI safety testing, shutdown capabilities, and incident reporting
Political Viability | Low-Medium | Strong industry opposition including safety-focused labs; governor cited innovation concerns; federal approach preferred
Enforcement Mechanism | Strong | Attorney General enforcement with civil penalties up to 10% of training costs; whistleblower protections; mandatory auditing
Coverage | Narrow | Only frontier models >10^26 FLOP or >$100M training cost; exempted open-source and academic research
Status | Vetoed (Sept 29, 2024) | Legislative success followed by executive veto; precedent for future state/federal legislation

The bill would have applied to AI models meeting any of these criteria:

Training Compute:

  • Trained using more than 10^26 integer or floating-point operations (FLOP)

Training Cost:

  • Cost >$100 million to train
  • Adjusted annually for inflation
  • At current cloud compute prices, 10^26 FLOP costs approximately $70-100 million (Anthropic estimate)

Fine-tuned Models:

  • Fine-tuning cost >$10 million
  • Based on a covered model

Why these thresholds?

  • Target only frontier models from well-resourced labs
  • Exclude open-source models and academic research
  • Align with international compute governance efforts (US EO, EU AI Act)
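
Taken together, the thresholds amount to a simple decision rule. The sketch below is illustrative only, based on the figures described in this section rather than the bill's statutory text; the function and variable names are hypothetical, and the actual definitions included annual inflation adjustments and additional conditions.

```python
# Illustrative covered-model test using the thresholds described above.
# Names and structure are hypothetical, not taken from SB 1047's text.

COMPUTE_THRESHOLD_FLOP = 1e26             # training compute threshold
COST_THRESHOLD_USD = 100_000_000          # training cost threshold (inflation-adjusted annually)
FINETUNE_COST_THRESHOLD_USD = 10_000_000  # fine-tuning cost threshold

def is_covered_base_model(training_flop: float, training_cost_usd: float) -> bool:
    """Covered if the model crosses either threshold, per this article's summary."""
    return training_flop > COMPUTE_THRESHOLD_FLOP or training_cost_usd > COST_THRESHOLD_USD

def is_covered_finetune(base_model_is_covered: bool, finetune_cost_usd: float) -> bool:
    """Covered if derived from a covered model and fine-tuning cost exceeds $10M."""
    return base_model_is_covered and finetune_cost_usd > FINETUNE_COST_THRESHOLD_USD

# Hypothetical examples
print(is_covered_base_model(training_flop=2e26, training_cost_usd=150e6))      # True
print(is_covered_finetune(base_model_is_covered=True, finetune_cost_usd=5e6))  # False
```
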
Requirement Category | Specific Provision | Timing | Penalty for Non-Compliance | Comparison to Alternatives
Safety Testing | Red-team testing for CBRN weapons, cyber attacks >$500M damage, autonomous operation | Before deployment or third-party access | Civil penalties up to 10% of training costs | Stricter than voluntary commitments (Anthropic RSP, OpenAI Preparedness); similar scope to US EO requirements
Shutdown Capability | Full shutdown of all instances, including during training | Before beginning training | AG enforcement + injunctive relief | Unique requirement; not in US EO, EU AI Act, or industry frameworks
Cybersecurity | Protection of model weights from theft; secure infrastructure; incident response | Before training begins | Civil liability for security breaches | Similar to US EO reporting but with enforcement teeth
Third-Party Auditing | Annual independent audits starting Jan 1, 2026; 5-year record retention | Annually after Jan 1, 2026 | Civil penalties for audit failures | More stringent than US EO (voluntary); weaker than EU AI Act (ongoing)
Incident Reporting | Report AI safety incidents to AG within 72 hours | Within 72 hours of incident | Civil penalties + potential criminal referral | Faster timeline than US EO (unspecified); AG enforcement vs. federal agencies
Whistleblower Protection | Prohibit retaliation; anonymous reporting process; 7-year complaint retention | Immediate; ongoing | Labor Commissioner enforcement + civil damages | Stronger than industry standards; similar to federal whistleblower laws
Compute Cluster Reporting | Clusters >10^26 ops/second must report to state; customer information required | Ongoing for CA-based clusters | Civil penalties for non-reporting | Similar to US EO compute reporting but state-level jurisdiction
Liability Framework | Affirmative defense for compliance; AG can sue for violations causing harm | Post-deployment if harm occurs | Up to 10% of training costs + damages | Softer than strict liability (original draft); stronger than status quo

The following diagram illustrates how SB 1047 would have regulated frontier AI development, from initial determination through deployment and enforcement:

[Diagram: SB 1047 compliance flow, from covered-model determination through training, deployment, auditing, and enforcement]

Key Enforcement Mechanisms:

The diagram shows three primary enforcement pathways in SB 1047:

  1. Preventive Compliance Path (top): Developers who implement all requirements and pass audits receive affirmative defense from liability
  2. Incident Response Path (bottom): Safety incidents trigger mandatory 72-hour reporting and Attorney General investigation
  3. Penalty Path (right): Non-compliance or violation results in civil penalties up to 10% of training costs plus potential injunctive relief

The bill created overlapping accountability through third-party auditing (annual), whistleblower protections (continuous), and incident reporting (reactive), ensuring multiple mechanisms to detect non-compliance.

Pre-Training Requirements:

Developers must:

  • Determine whether model will be a “covered model”
  • Implement safety protocols before beginning training
  • Establish shutdown procedures

Covered Model Determination:

If expected to meet thresholds:

  • Document safety plan
  • Prepare for testing requirements
  • Establish compliance measures

Required Testing:

Before deployment or making available to third parties, test for:

Critical Harm Capabilities:

  • Creation of chemical, biological, radiological, or nuclear weapons (CBRN)
  • Mass casualty cyber attacks (>$500M damage or mass casualties)
  • Autonomous operation and self-exfiltration
  • Self-improvement and recursive self-modification

Testing Methods:

  • Red-team testing
  • Adversarial probing
  • Capability evaluations
  • Third-party auditing

Threshold: Model enables non-expert to cause mass casualties or >$500M in damage.

Required Measures:

Developers must implement:

Cybersecurity:

  • Protection of model weights from theft
  • Secure infrastructure
  • Incident response plans

Shutdown Capability:

  • Full model shutdown ability
  • Separate from safety fine-tuning
  • Effective on all deployed instances

Ongoing Monitoring:

  • Detection of hazardous use
  • Capability creep tracking
  • Post-deployment evaluation

Documentation:

  • Written safety protocol
  • Regular updates
  • Public summary (redacted for security)

Employee Rights:

Protected disclosures about:

  • Safety violations
  • Unreasonable risk to public
  • Non-compliance with the act

Prohibitions:

  • Cannot retaliate against whistleblowers
  • Cannot require non-disclosure preventing safety reports
  • Civil penalties for violations

New State Agency:

Created within California Government Operations Agency:

  • Oversee compliance
  • Receive safety protocols
  • Investigate violations
  • Issue guidance

Powers:

  • Subpoena authority
  • Civil penalty assessment
  • Emergency orders

Affirmative Defense:

Developers protected from liability if:

  • Complied with all safety requirements
  • Conducted reasonable testing
  • Implemented safety protocols
  • Acted in good faith

Strict Liability Removed:

Does NOT create automatic liability for harms; must prove negligence or non-compliance.

Attorney General Enforcement:

California AG can sue for:

  • Violations of safety requirements
  • Civil penalties up to 10% of training costs
  • Injunctive relief
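
As a rough illustration of the penalty ceiling (hypothetical figures, not drawn from the bill or any enforcement action), the 10%-of-training-cost cap scales with the size of the training run:

```python
# Hypothetical illustration of the civil penalty cap described above.
training_cost_usd = 150_000_000      # assumed training cost for a covered model
PENALTY_CAP_FRACTION = 0.10          # cap of 10% of training costs, per this article
max_civil_penalty = PENALTY_CAP_FRACTION * training_cost_usd
print(f"${max_civil_penalty:,.0f}")  # $15,000,000
```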

Reporting Requirement:

Owners of computing clusters with:

  • Capacity of 10^26 integer or floating-point operations per second or more
  • Located in California

Must report to:

  • Frontier Model Division
  • Information about cluster
  • Customers using cluster

Purpose: Track who has capability to train covered models.

Explicitly Exempted:

  • Open-source models (unless developer makes $50M+/year from derivatives)
  • Academic research
  • Models below thresholds
  • Government use

Safe Harbor:

  • Compliance with safety requirements provides affirmative defense
  • Good faith efforts protected

Introduction and Original Sponsors:

  • Senator Scott Wiener (D-San Francisco), representing District 11 (San Francisco tech corridor)
  • Co-sponsored by AI safety organizations including Center for AI Safety
  • Support from AI safety advocates and researchers
  • Immediately opposed by major AI companies and some researchers
  • Official bill text introduced February 7, 2024

Major Changes:

  • Narrowed scope to truly frontier models (>10^26 FLOP or >$100M)
  • Added safe harbors and affirmative defenses for compliant developers
  • Reduced liability provisions (removed strict liability; kept negligence standard)
  • Clarified open-source exemptions (unless developer earns >$50M/year from derivatives)
  • Specified hazardous capabilities more precisely (CBRN, >$500M cyber damage)
  • Removed Frontier Model Division and criminal penalties in August 2024 amendments

Purpose of Amendments:

  • Address industry concerns about overbreadth and compliance costs
  • Balance innovation incentives with safety requirements
  • Build bipartisan coalition for passage
  • Respond to >50 stakeholder comments during committee process

August 29, 2024: Passed California Legislature

  • Assembly: 45-11 (80% approval)
  • Senate: 32-1 (97% approval)
  • Bipartisan support across party lines
  • Most significant AI legislation to pass any US state legislature
  • Represented months of amendments responding to >50 industry comments
  • Final version removed criminal penalties and Frontier Model Division creation

Governor Newsom’s Rationale:

From Newsom’s official veto message:

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”

Additional concerns: “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”

Specific Concerns:

  • Focus on model size rather than deployment context
  • Could stifle innovation in California’s tech sector
  • Regulatory approach not nuanced enough (described as not “informed by an empirical trajectory analysis”)
  • Preferred federal regulation given interstate nature of AI

Accompanying Actions:

Newsom simultaneously:

  • Signed 18 other AI bills on narrower topics (deepfakes, discrimination, transparency)
  • Called for federal AI legislation to address interstate nature of technology
  • Committed to working with legislature on alternative approaches
  • Convened expert panel including Fei-Fei Li (Stanford), Tino Cuéllar (Carnegie Endowment), and Jennifer Tour Chayes (UC Berkeley) to develop “empirical, science-based trajectory analysis”

AI Safety Organizations:

  • Center for AI Safety
  • Future of Life Institute
  • AI safety researchers

Arguments:

  • Frontier models pose catastrophic risks
  • Industry self-regulation insufficient
  • California can lead on AI safety
  • Requirements are reasonable and achievable

Notable Individual Supporters:

  • Yoshua Bengio (Turing Award winner, 2018)
  • Geoffrey Hinton (Turing Award winner 2018, “Godfather of AI”)
  • Stuart Russell (UC Berkeley professor, author of leading AI textbook)
  • Max Tegmark (MIT professor, founder of Future of Life Institute)
  • Elon Musk (xAI CEO, publicly endorsed the bill)
  • 113+ current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI (September 9, 2024 letter to Governor Newsom)

Major AI Companies:

  • OpenAI (initially opposed; later neutral)
  • Anthropic (opposed earlier drafts; said of the final version that its "benefits likely outweigh its costs" but remained uncertain)
  • Google/DeepMind (opposed)
  • Meta (strongly opposed)
  • Combined market value of opposing companies: >$3 trillion

Arguments:

  • Stifles innovation in California’s $200+ billion AI industry
  • Drives development out of California (threatening 300,000+ tech jobs)
  • Premature to regulate models that don’t yet exist
  • Better to focus on use cases than model capabilities (size-based vs. risk-based regulation)
  • Federal regulation more appropriate for interstate technology

Venture Capital:

  • Y Combinator
  • Andreessen Horowitz
  • Others concerned about startup ecosystem impact

Some Researchers:

  • Yann LeCun (Meta, Turing Award winner)
  • Andrew Ng (Stanford, Google Brain co-founder)
  • Fei-Fei Li (Stanford)

Concerns:

  • Open-source implications despite exemptions
  • Compliance costs for startups
  • Regulatory overreach
  • Vague standards

Labor and Progressive Groups:

  • Some supported
  • Some concerned it didn’t address labor impacts enough

Size-Based vs. Risk-Based:

  • Bill focuses on model size (compute/cost) not deployment risks
  • Small models in high-risk contexts not covered
  • Large models in benign contexts over-regulated

Innovation Concerns:

  • California is hub of AI development
  • Regulation could drive companies elsewhere
  • Startups face compliance burdens

Federal Action Preferable:

  • AI transcends state borders
  • National framework more appropriate
  • International coordination needed

Industry Pressure:

  • Major AI companies lobbied heavily against
  • Economic arguments about California’s AI ecosystem
  • Threat of relocation

Presidential Politics:

  • Biden administration developing AI policy
  • Harris (VP, former CA Senator) in presidential race
  • National Democratic messaging on tech

Tactical Considerations:

  • Newsom signed 18 other AI bills simultaneously
  • Positioned as pro-innovation, pro-safety balance
  • Left door open for future iteration

Lack of Coalition:

  • Many Democrats skeptical
  • Republicans opposed
  • Labor not fully engaged
  • Insufficient grassroots pressure

Economic:

  • California tech industry contributes $200+ billion annually to state GDP
  • AI-focused companies employ 300,000+ workers in California
  • Competing jurisdictions (Texas, Florida, international) actively recruiting AI companies
  • Tech industry contributes 15-20% of California’s general fund revenue
  • Estimated compliance costs for SB 1047: $10-50M per covered model annually (industry estimates)

Policy:

  • Precedent-setting implications
  • Uncertainty about effectiveness
  • Implementation challenges

Political:

  • Presidential election dynamics
  • Tech industry relationships
  • Future political ambitions

Political Will Exists:

  • Bipartisan legislative passage showed AI safety resonates
  • Not just fringe concern but mainstream political issue
  • Legislators willing to regulate despite industry opposition

Industry Opposition is Formidable:

  • Even safety-focused companies (Anthropic) opposed
  • Economic arguments effective
  • Innovation framing powerful

Federal vs. State Tension:

  • AI is inherently interstate and international
  • State-level regulation faces jurisdictional limits
  • But federal action is slow

Details Matter:

  • Size-based vs. risk-based framing was central
  • Specific thresholds and requirements heavily debated
  • Implementation details crucial to political viability

Focused Scope:

  • Targeting only frontier models built support
  • Exemptions for open-source and research
  • Concrete thresholds (compute, cost)

Safety Framing:

  • Catastrophic risk resonated
  • Whistleblower protections popular
  • Bipartisan appeal

Expert Endorsement:

  • Turing Award winners lending credibility
  • Technical community engagement

Industry Consensus:

  • Even safety-concerned labs opposed
  • Economic arguments effective
  • Innovation framing won

Implementation Clarity:

  • Vague enforcement mechanisms
  • Uncertainty about compliance costs
  • Questions about Frontier Model Division capacity

Coalition Building:

  • Labor not fully engaged
  • Grassroots support limited
  • Competing priorities on left

Narrower Bills:

  • Focus on specific harms (deepfakes, discrimination)
  • Deployment context rather than model capabilities
  • Procurement standards

Coordination:

  • Multi-state coordination
  • Uniform standards
  • Regional compacts

California Iteration:

  • Newsom committed to continued dialogue
  • Future versions possible
  • Refined approach incorporating feedback

Legislation:

  • Comprehensive AI safety bill
  • Build on Executive Order
  • Bipartisan framework

Challenges:

  • Congressional gridlock
  • Lobbying pressure
  • Competing priorities

Coordination Imperative:

  • AI development global
  • Race to the bottom risk
  • Need for international standards

Precedents:

  • EU AI Act as model
  • UK approach
  • Multilateral frameworks

Mainstream Attention:

  • SB 1047 brought frontier AI risk into public discourse
  • Media coverage extensive
  • Political engagement increased

Overton Window:

  • Made AI regulation thinkable
  • Future efforts less radical by comparison
  • Normalized safety concerns

Community Building:

  • Coalition formation
  • Political skills development
  • Lessons learned

Backlash:

  • Some researchers now more skeptical of regulation
  • “Regulatory capture” accusations
  • Polarization on safety issues

Movement Division:

  • Some AI safety researchers opposed bill
  • Tensions over strategy
  • Open-source community alienation

Political Capital:

  • Loss might discourage future efforts
  • Industry emboldened
  • Harder to argue regulations are inevitable

Arguments For:

  • Only way to test political viability
  • Built coalition and momentum
  • Shifted discourse even in defeat

Arguments Against:

  • Premature; should have built more support first
  • Better to focus on federal action
  • Antagonized potential allies

Double Down:

  • Refine and reintroduce
  • Build broader coalition
  • Address veto concerns

Pivot to Federal:

  • Focus energy on Congress
  • Support Executive Order implementation
  • International coordination

Focus on Narrower Wins:

  • Procurement standards
  • Use-case specific regulation
  • Voluntary frameworks

Build Power:

  • Grassroots organizing
  • Labor coalition
  • Public education

Size-Based (SB 1047 Approach):

Pros:

  • Objective, measurable thresholds
  • Targets most capable models
  • Easier to enforce
  • Aligns with international compute governance

Cons:

  • Doesn’t capture deployment context
  • Could miss dangerous applications of smaller models
  • Algorithmic efficiency gains can make fixed compute thresholds obsolete over time

Risk-Based (Newsom’s Preference):

Pros:

  • Focuses on actual harm potential
  • Context-appropriate
  • Adapts to changing technology

Cons:

  • Harder to define and measure
  • Enforcement challenges
  • Potentially broader scope (privacy, fairness, etc.)
  • Risk assessment subjective

Synthesis Possible:

  • Combination of both approaches
  • Size thresholds trigger risk assessments
  • Deployment context determines requirements

SB 1047 Approach:

  • Affirmative defense for compliance
  • Attorney General enforcement
  • Civil penalties

Debate:

  • Too much liability deters innovation?
  • Too little fails to ensure safety?
  • Who should bear costs of AI harms?

Alternative Approaches:

  • Strict liability with caps
  • Insurance requirements
  • Tiered liability based on precautions
  • No-fault compensation schemes

SB 1047 Exemption:

  • Open-source models exempt unless developer profits >$50M from derivatives

Concerns Raised:

  • Could still chill open-source development
  • Uncertainty about liability
  • Derivative work tracking difficult

Counter-Arguments:

  • Exemption was broad
  • Open-source not inherently safe
  • Need some oversight of powerful models

Ongoing Debate:

  • How to encourage open research while managing risks
  • Different models for different risk levels
  • Role of open-source in AI safety ecosystem

The compute thresholds in SB 1047 were deliberately aligned with Biden’s Executive Order 14110.

Similarities:

  • Compute thresholds (10^26 FLOP for training)
  • Safety testing requirements for CBRN risks
  • Focus on frontier models only
  • Developer reporting obligations

Differences:

  • SB 1047 had enforcement teeth (civil penalties up to 10% of training costs, AG lawsuits)
  • EO has broader scope (government use, competition policy, immigration for AI talent)
  • SB 1047 state-level mandatory law; EO federal executive action (can be rescinded)
  • SB 1047 required shutdown capability (unique provision)
  • SB 1047 included third-party auditing requirement (EO relies on voluntary compliance)

Relationship:

  • SB 1047 would have complemented EO with state-level enforcement
  • State enforcement of federal principles with local adaptation
  • Potential model for other states considering AI legislation
  • Analysis from legal firms noted SB 1047 went further than EO on liability

EU AI Act:

  • Risk categories for deployed systems
  • Broader scope (not just frontier models)
  • Binding regulation with large fines

SB 1047:

  • Narrower focus on frontier models
  • More specific technical requirements (shutdown, testing)
  • State-level vs. EU-wide

Lessons:

  • EU’s comprehensiveness politically difficult in US
  • SB 1047’s focused approach still failed
  • Suggests US regulation will be patchwork

Industry Commitments:

  • No enforcement
  • Self-defined standards
  • Flexible and adaptive

SB 1047:

  • Mandatory requirements
  • State enforcement
  • Specific standards

Debate:

  • Is voluntary compliance sufficient?
  • Does regulation stifle beneficial innovation?
  • Can industry self-regulate emerging risks?

Lessons Learned:

  • Understanding legislative process crucial
  • Coalition building essential
  • Technical expertise must translate to policy

Opportunities:

  • State-level AI policy growing
  • Need for policy entrepreneurs
  • Legislative staff positions

Regulatory Design:

  • How to balance innovation and safety?
  • What thresholds are appropriate?
  • How to make regulation adaptive?

Political Economy:

  • Industry influence on regulation
  • Public opinion on AI risk
  • Coalition formation strategies

Technical:

  • Measuring model capabilities
  • Shutdown mechanisms
  • Audit methodologies

Strategic Questions:

  • When to push for regulation vs. build support?
  • How to engage industry productively?
  • Building public constituency

Skills Needed:

  • Political strategy
  • Coalition management
  • Communications
  • Policy design


SB 1047 (though vetoed) represented a template for how legislation could affect the AI Transition Model:

Factor | Parameter | Impact
Civilizational Competence | Regulatory Capacity | Would have required safety testing and shutdown capabilities for frontier models
Misalignment Potential | Safety Culture Strength | Mandatory third-party auditing would have raised safety standards
Transition Turbulence | Racing Intensity | Compute thresholds (10^26 FLOP) target models posing systemic risk

The bill’s veto demonstrated the political difficulty of frontier AI regulation; Governor Newsom objected that it applied stringent standards to even “the most basic functions” of large AI systems rather than focusing on high-risk deployments.