California SB 1047
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Summary
SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models.
The bill passed the California legislature but was vetoed by Governor Gavin Newsom on September 29, 2024.
SB 1047 was the most significant AI safety legislation attempted in the United States to date. Its passage through the legislature demonstrated growing political willingness to regulate frontier AI, while its veto illustrated the political challenges such regulation faces.
Quick Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | Medium | Passed legislature (Assembly 45-11, Senate 32-1) but vetoed; demonstrated political feasibility with refinement |
| Effectiveness | Medium-High (if enacted) | Would have created enforceable requirements for frontier AI safety testing, shutdown capabilities, and incident reporting |
| Political Viability | Low-Medium | Strong industry opposition including safety-focused labs; governor cited innovation concerns; federal approach preferred |
| Enforcement Mechanism | Strong | Attorney General enforcement with civil penalties up to 10% of training costs; whistleblower protections; mandatory auditing |
| Coverage | Narrow | Only frontier models >10^26 FLOP or >$100M training cost; exempted open-source and academic research |
| Status | Vetoed (Sept 29, 2024) | Legislative success followed by executive veto; precedent for future state/federal legislation |
What the Bill Proposed
Scope: “Covered Models”
The bill would have applied to AI models meeting any of these criteria (an illustrative threshold check is sketched after the rationale below):
Training Compute:
- Trained using >10^26 FLOP (floating-point operations)
- Approximately GPT-4.5/Claude 3 Opus scale or larger
- Threshold aligned with US Executive Order 14110↗ on AI safety
Training Cost:
- Cost >$100 million to train
- Adjusted annually for inflation
- At current cloud compute prices, 10^26 FLOP costs approximately $70-100 million↗ (Anthropic estimate)
Fine-tuned Models:
- Fine-tuning cost >$10 million
- Based on a covered model
Why these thresholds?
- Target only frontier models from well-resourced labs
- Exclude open-source models and academic research
- Align with international compute governance efforts (US EO, EU AI Act)
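To make the scope test concrete, here is a minimal sketch (Python) of how the covered-model determination could be expressed. The thresholds mirror the bill's description above; the data structure and function names are purely illustrative, not drawn from the statute.

```python
from dataclasses import dataclass

# Thresholds as described in SB 1047 (the $100M figure was to be inflation-adjusted).
COMPUTE_THRESHOLD_FLOP = 1e26
TRAINING_COST_THRESHOLD_USD = 100_000_000
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000

@dataclass
class ModelTrainingRun:
    """Hypothetical record of a training run; field names are illustrative."""
    training_flop: float          # total floating-point operations used in training
    training_cost_usd: float      # cost of the training compute
    is_fine_tune: bool = False    # True if this run fine-tunes an existing covered model
    base_model_is_covered: bool = False

def is_covered_model(run: ModelTrainingRun) -> bool:
    """Rough sketch of the 'covered model' test: compute OR cost threshold,
    plus the separate fine-tuning criterion for derivatives of covered models."""
    if run.is_fine_tune:
        return run.base_model_is_covered and run.training_cost_usd > FINE_TUNE_COST_THRESHOLD_USD
    return (run.training_flop > COMPUTE_THRESHOLD_FLOP
            or run.training_cost_usd > TRAINING_COST_THRESHOLD_USD)

# Example: a frontier-scale run trips the compute threshold even below the cost threshold.
print(is_covered_model(ModelTrainingRun(training_flop=2e26, training_cost_usd=90_000_000)))  # True
```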
Provisions Comparison Table
| Requirement Category | Specific Provision | Timing | Penalty for Non-Compliance | Comparison to Alternatives |
|---|---|---|---|---|
| Safety Testing | Red-team testing for CBRN weapons, cyber attacks >$500M damage, autonomous operation | Before deployment or third-party access | Civil penalties up to 10% of training costs | Stricter than voluntary commitments (Anthropic RSP, OpenAI Preparedness); similar scope to US EO requirements |
| Shutdown Capability | Full shutdown of all instances including during training | Before beginning training | AG enforcement + injunctive relief | Unique requirement; not in US EO, EU AI Act, or industry frameworks |
| Cybersecurity | Protection of model weights from theft; secure infrastructure; incident response | Before training begins | Civil liability for security breaches | Similar to US EO reporting but with enforcement teeth |
| Third-Party Auditing | Annual independent audits starting Jan 1, 2026; 5-year record retention | Annually after Jan 1, 2026 | Civil penalties for audit failures | More stringent than US EO (voluntary); weaker than EU AI Act (ongoing) |
| Incident Reporting | Report AI safety incidents to AG within 72 hours | Within 72 hours of incident | Civil penalties + potential criminal referral | Faster timeline than US EO (unspecified); AG enforcement vs. federal agencies |
| Whistleblower Protection | Prohibit retaliation; anonymous reporting process; 7-year complaint retention | Immediate; ongoing | Labor Commissioner enforcement + civil damages | Stronger than industry standards; similar to federal whistleblower laws |
| Compute Cluster Reporting | Clusters >10^26 ops/second must report to state; customer information required | Ongoing for CA-based clusters | Civil penalties for non-reporting | Similar to US EO compute reporting but state-level jurisdiction |
| Liability Framework | Affirmative defense for compliance; AG can sue for violations causing harm | Post-deployment if harm occurs | Up to 10% of training costs + damages | Softer than strict liability (original draft); stronger than status quo |
Bill Structure and Enforcement Framework
SB 1047 would have regulated frontier AI development from the initial covered-model determination through deployment and enforcement.
Key Enforcement Mechanisms:
The bill established three primary enforcement pathways:
- Preventive Compliance Path: Developers who implement all requirements and pass audits receive an affirmative defense from liability
- Incident Response Path: Safety incidents trigger mandatory 72-hour reporting and Attorney General investigation
- Penalty Path: Non-compliance or violation results in civil penalties up to 10% of training costs plus potential injunctive relief
The bill created overlapping accountability through third-party auditing (annual), whistleblower protections (continuous), and incident reporting (reactive), ensuring multiple mechanisms to detect non-compliance.
Core Requirements
1. Safety Testing Before Training
Pre-Training Requirements:
Developers must:
- Determine whether model will be a “covered model”
- Implement safety protocols before beginning training
- Establish shutdown procedures
Covered Model Determination:
If expected to meet thresholds:
- Document safety plan
- Prepare for testing requirements
- Establish compliance measures
2. Hazardous Capability Testing
Required Testing:
Before deployment or making available to third parties, test for:
Critical Harm Capabilities:
- Creation of chemical, biological, radiological, or nuclear weapons (CBRN)
- Mass casualty cyber attacks (>$500M damage or mass casualties)
- Autonomous operation and self-exfiltration
- Self-improvement and recursive self-modification
Testing Methods:
- Red-team testing
- Adversarial probing
- Capability evaluations
- Third-party auditing
Threshold: Model enables non-expert to cause mass casualties or >$500M in damage.
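The bill names the harm categories but leaves evaluation design to developers. Below is a minimal, hypothetical sketch of how an evaluation harness might track red-team results against those categories before deployment; the category identifiers paraphrase the bill's list, and everything else is an assumption for illustration.

```python
# Illustrative only: a minimal structure for tracking pre-deployment capability
# evaluations against SB 1047's critical-harm categories. The categories come from
# the bill; the pass/fail logic and data shapes are assumptions for illustration.

CRITICAL_HARM_CATEGORIES = [
    "cbrn_weapons_uplift",        # chemical, biological, radiological, nuclear
    "mass_casualty_cyberattack",  # attacks causing >$500M damage or mass casualties
    "autonomous_self_exfiltration",
    "recursive_self_improvement",
]

def deployment_blocked(eval_results: dict[str, bool]) -> bool:
    """Return True if any red-team evaluation found a critical-harm capability,
    i.e. the model meaningfully enables a non-expert to cause the listed harms."""
    missing = [c for c in CRITICAL_HARM_CATEGORIES if c not in eval_results]
    if missing:
        raise ValueError(f"Evaluations not yet run for: {missing}")
    return any(eval_results[c] for c in CRITICAL_HARM_CATEGORIES)

results = {c: False for c in CRITICAL_HARM_CATEGORIES}
results["mass_casualty_cyberattack"] = True  # a hypothetical red-team finding
print(deployment_blocked(results))  # True -> further mitigation required before deployment
```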
3. Safety and Security Protocol
Required Measures:
Developers must implement:
Cybersecurity:
- Protection of model weights from theft
- Secure infrastructure
- Incident response plans
Shutdown Capability (see the sketch after this section):
- Full model shutdown ability
- Separate from safety fine-tuning
- Effective on all deployed instances
Ongoing Monitoring:
- Detection of hazardous use
- Capability creep tracking
- Post-deployment evaluation
Documentation:
- Written safety protocol
- Regular updates
- Public summary (redacted for security)
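Of these measures, the full-shutdown requirement is the most operationally specific: developers would need to be able to halt every instance they control, including in-progress training runs. A minimal, hypothetical sketch of such a control surface follows; the bill mandates the capability, not any particular implementation, and all names here are invented.

```python
# Hypothetical sketch of a "full shutdown" control covering every copy of a covered
# model within the developer's control, including in-progress training runs.
# SB 1047 required the capability; this interface and its names are illustrative.

class CoveredModelController:
    def __init__(self) -> None:
        self.deployed_instances: set[str] = set()   # e.g. inference endpoints
        self.training_runs: set[str] = set()        # in-progress training jobs

    def register_instance(self, instance_id: str) -> None:
        self.deployed_instances.add(instance_id)

    def register_training_run(self, run_id: str) -> None:
        self.training_runs.add(run_id)

    def full_shutdown(self) -> list[str]:
        """Halt every deployed instance and training run the developer controls,
        returning the identifiers that were stopped (for incident documentation)."""
        stopped = sorted(self.deployed_instances | self.training_runs)
        self.deployed_instances.clear()
        self.training_runs.clear()
        return stopped

controller = CoveredModelController()
controller.register_instance("inference-us-west-1")
controller.register_training_run("pretrain-run-007")
print(controller.full_shutdown())  # ['inference-us-west-1', 'pretrain-run-007']
```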
4. Whistleblower Protections
Employee Rights:
Protected disclosures about:
- Safety violations
- Unreasonable risk to public
- Non-compliance with the act
Prohibitions:
- Cannot retaliate against whistleblowers
- Cannot require non-disclosure preventing safety reports
- Civil penalties for violations
5. Frontier Model Division
New State Agency:
A new division created within the California Government Operations Agency (removed in the August 2024 amendments; see Amendment Process below) to:
- Oversee compliance
- Receive safety protocols
- Investigate violations
- Issue guidance
Powers:
- Subpoena authority
- Civil penalty assessment
- Emergency orders
6. Liability Framework
Affirmative Defense:
Developers protected from liability if:
- Complied with all safety requirements
- Conducted reasonable testing
- Implemented safety protocols
- Acted in good faith
Strict Liability Removed:
Does NOT create automatic liability for harms; the state must prove negligence or non-compliance.
Attorney General Enforcement:
California AG can sue for:
- Violations of safety requirements
- Civil penalties up to 10% of training costs
- Injunctive relief
7. Compute Cluster Reporting
Reporting Requirement:
Owners of computing clusters with:
- Capacity >10^26 integer or floating-point operations per second
- Located in California
Must report to the Frontier Model Division:
- Information about the cluster
- Customers using the cluster
Purpose: Track who has capability to train covered models.
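As a rough illustration of what the reporting duty implies in practice, the sketch below models a cluster operator's reporting record. The fields track the provision's intent (capacity, location, customers), but the schema and names are assumptions, not statutory text.

```python
from dataclasses import dataclass, field

CLUSTER_CAPACITY_THRESHOLD_OPS_PER_SEC = 1e26  # threshold as described above

@dataclass
class ClusterReport:
    """Hypothetical reporting record; SB 1047 specifies what must be reported,
    not this schema."""
    operator: str
    located_in_california: bool
    peak_ops_per_second: float
    customers: list[str] = field(default_factory=list)

    def must_report(self) -> bool:
        return (self.located_in_california
                and self.peak_ops_per_second >= CLUSTER_CAPACITY_THRESHOLD_OPS_PER_SEC)

report = ClusterReport(
    operator="Example Cloud Co.",
    located_in_california=True,
    peak_ops_per_second=3e26,
    customers=["Frontier Lab A", "Frontier Lab B"],
)
print(report.must_report())  # True -> report cluster and customer information
```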
Exemptions and Safe Harbors
Explicitly Exempted:
- Open-source models (unless developer makes $50M+/year from derivatives)
- Academic research
- Models below thresholds
- Government use
Safe Harbor:
- Compliance with safety requirements provides affirmative defense
- Good faith efforts protected
Path Through Legislature
Initial Introduction (February 2024)
Original Sponsors and Initial Reception:
- Senator Scott Wiener↗ (D-San Francisco), representing District 11 (San Francisco tech corridor)
- Co-sponsored by AI safety organizations including Center for AI Safety
- Support from AI safety advocates and researchers
- Immediately opposed by major AI companies and some researchers
- Official bill text↗ introduced February 7, 2024
Amendment Process
Major Changes:
- Narrowed scope to truly frontier models (>10^26 FLOP or >$100M)
- Added safe harbors and affirmative defenses for compliant developers
- Reduced liability provisions (removed strict liability; kept negligence standard)
- Clarified open-source exemptions (unless developer earns >$50M/year from derivatives)
- Specified hazardous capabilities more precisely (CBRN, >$500M cyber damage)
- Removed Frontier Model Division↗ and criminal penalties in August 2024 amendments
Purpose of Amendments:
- Address industry concerns about overbreadth and compliance costs
- Balance innovation incentives with safety requirements
- Build bipartisan coalition for passage
- Respond to >50 stakeholder comments during committee process
Legislative Passage
August 29, 2024: Passed California Legislature
- Assembly: 45-11 (80% approval)
- Senate: 32-1 (97% approval)
- Bipartisan support across party lines
- Most significant AI legislation to pass any US state legislature
- Represented months of amendments responding to >50 industry comments
- Final version removed criminal penalties and Frontier Model Division creation
Veto (September 29, 2024)
Governor Newsom’s Rationale:
From Newsom’s official veto message↗:
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
Additional concerns: “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”
Specific Concerns:
- Focus on model size rather than deployment context
- Could stifle innovation in California’s tech sector
- Regulatory approach not nuanced enough (described as not “informed by an empirical trajectory analysis”)
- Preferred federal regulation given interstate nature of AI
Accompanying Actions:
Newsom simultaneously:
- Signed 18 other AI bills on narrower topics (deepfakes, discrimination, transparency)
- Called for federal AI legislation to address interstate nature of technology
- Committed to working with legislature on alternative approaches
- Convened expert panel including Fei-Fei Li (Stanford), Tino Cuéllar (Carnegie Endowment), and Jennifer Tour Chayes (UC Berkeley) to develop “empirical, science-based trajectory analysis”
Support and Opposition
Section titled “Support and Opposition”Supporters
AI Safety Organizations:
- Center for AI Safety
- Future of Life Institute
- AI safety researchers
Arguments:
- Frontier models pose catastrophic risks
- Industry self-regulation insufficient
- California can lead on AI safety
- Requirements are reasonable and achievable
Notable Individual Supporters:
- Yoshua Bengio (Turing Award winner, 2018)
- Geoffrey Hinton (Turing Award winner 2018, “Godfather of AI”)
- Stuart Russell (UC Berkeley professor, author of leading AI textbook)
- Max Tegmark (MIT professor, founder of Future of Life Institute)
- Elon Musk (xAI CEO, publicly endorsed the bill)
- 113+ current and former employees↗ of OpenAI, Google DeepMind, Anthropic, Meta, and xAI (September 9, 2024 letter to Governor Newsom)
Opponents
Major AI Companies:
- OpenAI (opposed; argued frontier AI regulation should come from the federal government rather than the states)
- Anthropic (sought major amendments to the original bill; after many were adopted, told Governor Newsom the final version’s “benefits likely outweigh its costs” but stopped short of endorsement)
- Google/DeepMind (opposed)
- Meta (strongly opposed)
- Combined market value of opposing companies: >$3 trillion
Arguments:
- Stifles innovation in California’s $200+ billion AI industry
- Drives development out of California (threatening 300,000+ tech jobs)
- Premature to regulate models that don’t yet exist
- Better to focus on use cases than model capabilities (size-based vs. risk-based regulation)
- Federal regulation more appropriate for interstate technology
Venture Capital:
- Y Combinator
- Andreessen Horowitz
- Others concerned about startup ecosystem impact
Some Researchers:
- Yann LeCun (Meta, Turing Award winner)
- Andrew Ng (Stanford, Google Brain co-founder)
- Fei-Fei Li (Stanford)
Concerns:
- Open-source implications despite exemptions
- Compliance costs for startups
- Regulatory overreach
- Vague standards
Labor and Progressive Groups:
- Some supported
- Some concerned it didn’t address labor impacts enough
Why It Was Vetoed
Section titled “Why It Was Vetoed”Stated Reasons (Governor Newsom)
Size-Based vs. Risk-Based:
- Bill focuses on model size (compute/cost) not deployment risks
- Small models in high-risk contexts not covered
- Large models in benign contexts over-regulated
Innovation Concerns:
- California is hub of AI development
- Regulation could drive companies elsewhere
- Startups face compliance burdens
Federal Action Preferable:
- AI transcends state borders
- National framework more appropriate
- International coordination needed
Political Analysis
Industry Pressure:
- Major AI companies lobbied heavily against
- Economic arguments about California’s AI ecosystem
- Threat of relocation
Presidential Politics:
- Biden administration developing AI policy
- Harris (VP, former CA Senator) in presidential race
- National Democratic messaging on tech
Tactical Considerations:
- Newsom signed 18 other AI bills simultaneously
- Positioned as pro-innovation, pro-safety balance
- Left door open for future iteration
Lack of Coalition:
- Many Democrats skeptical
- Republicans opposed
- Labor not fully engaged
- Insufficient grassroots pressure
Unstated Factors (Analysis)
Economic:
- California tech industry contributes $200+ billion annually to state GDP
- AI-focused companies employ 300,000+ workers in California
- Competing jurisdictions (Texas, Florida, international) actively recruiting AI companies
- Tech industry contributes 15-20% of California’s general fund revenue
- Estimated compliance costs for SB 1047: $10-50M per covered model annually (industry estimates)
Policy:
- Precedent-setting implications
- Uncertainty about effectiveness
- Implementation challenges
Political:
- Presidential election dynamics
- Tech industry relationships
- Future political ambitions
Implications for AI Safety Regulation
Section titled “Implications for AI Safety Regulation”What SB 1047 Demonstrated
Political Will Exists:
- Bipartisan legislative passage showed AI safety resonates
- Not just fringe concern but mainstream political issue
- Legislators willing to regulate despite industry opposition
Industry Opposition is Formidable:
- Even safety-focused companies (Anthropic) declined to endorse the bill
- Economic arguments effective
- Innovation framing powerful
Federal vs. State Tension:
- AI is inherently interstate and international
- State-level regulation faces jurisdictional limits
- But federal action is slow
Details Matter:
- Size-based vs. risk-based framing was central
- Specific thresholds and requirements heavily debated
- Implementation details crucial to political viability
Lessons for Future Efforts
Section titled “Lessons for Future Efforts”What Worked
Focused Scope:
- Targeting only frontier models built support
- Exemptions for open-source and research
- Concrete thresholds (compute, cost)
Safety Framing:
- Catastrophic risk resonated
- Whistleblower protections popular
- Bipartisan appeal
Expert Endorsement:
- Turing Award winners lending credibility
- Technical community engagement
What Didn’t Work
Industry Consensus:
- Even safety-concerned labs opposed
- Economic arguments effective
- Innovation framing won
Implementation Clarity:
- Vague enforcement mechanisms
- Uncertainty about compliance costs
- Questions about Frontier Model Division capacity
Coalition Building:
- Labor not fully engaged
- Grassroots support limited
- Competing priorities on left
Future Regulatory Approaches
Section titled “Future Regulatory Approaches”State Level
Narrower Bills:
- Focus on specific harms (deepfakes, discrimination)
- Deployment context rather than model capabilities
- Procurement standards
Coordination:
- Multi-state coordination
- Uniform standards
- Regional compacts
California Iteration:
- Newsom committed to continued dialogue
- Future versions possible
- Refined approach incorporating feedback
Federal Level
Legislation:
- Comprehensive AI safety bill
- Build on Executive Order
- Bipartisan framework
Challenges:
- Congressional gridlock
- Lobbying pressure
- Competing priorities
International
Coordination Imperative:
- AI development global
- Race to the bottom risk
- Need for international standards
Precedents:
- EU AI Act as model
- UK approach
- Multilateral frameworks
Impact on AI Safety Movement
Section titled “Impact on AI Safety Movement”Positive Effects
Mainstream Attention:
- SB 1047 brought frontier AI risk into public discourse
- Media coverage extensive
- Political engagement increased
Overton Window:
- Made AI regulation thinkable
- Future efforts less radical by comparison
- Normalized safety concerns
Community Building:
- Coalition formation
- Political skills development
- Lessons learned
Negative Effects
Backlash:
- Some researchers now more skeptical of regulation
- “Regulatory capture” accusations
- Polarization on safety issues
Movement Division:
- Some AI safety researchers opposed bill
- Tensions over strategy
- Open-source community alienation
Political Capital:
- Loss might discourage future efforts
- Industry emboldened
- Harder to argue regulations are inevitable
Strategic Debates
Section titled “Strategic Debates”Should SB 1047 Have Been Pursued?
Arguments For:
- Only way to test political viability
- Built coalition and momentum
- Shifted discourse even in defeat
Arguments Against:
- Premature; should have built more support first
- Better to focus on federal action
- Antagonized potential allies
What Should Come Next?
Double Down:
- Refine and reintroduce
- Build broader coalition
- Address veto concerns
Pivot to Federal:
- Focus energy on Congress
- Support Executive Order implementation
- International coordination
Focus on Narrower Wins:
- Procurement standards
- Use-case specific regulation
- Voluntary frameworks
Build Power:
- Grassroots organizing
- Labor coalition
- Public education
Technical and Policy Debates
Section titled “Technical and Policy Debates”Size-Based vs. Risk-Based Regulation
Size-Based (SB 1047 Approach):
Pros:
- Objective, measurable thresholds
- Targets most capable models
- Easier to enforce
- Aligns with international compute governance
Cons:
- Doesn’t capture deployment context
- Could miss dangerous applications of smaller models
- Algorithmic efficiency gains can render fixed compute thresholds obsolete over time
Risk-Based (Newsom’s Preference):
Pros:
- Focuses on actual harm potential
- Context-appropriate
- Adapts to changing technology
Cons:
- Harder to define and measure
- Enforcement challenges
- Potentially broader scope (privacy, fairness, etc.)
- Risk assessment subjective
Synthesis Possible (a toy sketch follows this list):
- Combination of both approaches
- Size thresholds trigger risk assessments
- Deployment context determines requirements
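To make the proposed synthesis concrete, here is a toy sketch in which a compute threshold triggers frontier obligations and deployment context then scales them. The 10^26 FLOP trigger echoes SB 1047 and EO 14110, but the context tiers and obligation lists are invented purely for illustration.

```python
# Toy sketch of a hybrid "size triggers assessment, context sets requirements" scheme.
# The 1e26 FLOP trigger echoes SB 1047 / EO 14110; the context tiers and obligation
# lists are hypothetical, invented to illustrate the synthesis described above.

SIZE_TRIGGER_FLOP = 1e26

CONTEXT_OBLIGATIONS = {
    "high_risk": ["pre-deployment audit", "incident reporting", "shutdown capability"],
    "limited":   ["safety protocol", "incident reporting"],
    "minimal":   ["documentation"],
}

def applicable_obligations(training_flop: float, deployment_context: str) -> list[str]:
    """Size threshold decides whether frontier obligations apply at all;
    deployment context then scales the requirements."""
    if training_flop < SIZE_TRIGGER_FLOP:
        return []  # below the size trigger: no frontier-model obligations
    return CONTEXT_OBLIGATIONS.get(deployment_context, CONTEXT_OBLIGATIONS["limited"])

print(applicable_obligations(2e26, "high_risk"))
# ['pre-deployment audit', 'incident reporting', 'shutdown capability']
print(applicable_obligations(5e24, "high_risk"))  # [] -> below the size trigger
```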
Liability Questions
SB 1047 Approach:
- Affirmative defense for compliance
- Attorney General enforcement
- Civil penalties
Debate:
- Too much liability deters innovation?
- Too little fails to ensure safety?
- Who should bear costs of AI harms?
Alternative Approaches:
- Strict liability with caps
- Insurance requirements
- Tiered liability based on precautions
- No-fault compensation schemes
Open Source Implications
SB 1047 Exemption:
- Open-source models exempt unless developer profits >$50M from derivatives
Concerns Raised:
- Could still chill open-source development
- Uncertainty about liability
- Derivative work tracking difficult
Counter-Arguments:
- Exemption was broad
- Open-source not inherently safe
- Need some oversight of powerful models
Ongoing Debate:
- How to encourage open research while managing risks
- Different models for different risk levels
- Role of open-source in AI safety ecosystem
Comparison to Other Policies
Section titled “Comparison to Other Policies”vs. US Executive Order
The compute thresholds in SB 1047 were deliberately aligned↗ with Biden’s Executive Order 14110.
Similarities:
- Compute thresholds (10^26 FLOP for training)
- Safety testing requirements for CBRN risks
- Focus on frontier models only
- Developer reporting obligations
Differences:
- SB 1047 had enforcement teeth (civil penalties up to 10% of training costs, AG lawsuits)
- EO has broader scope (government use, competition policy, immigration for AI talent)
- SB 1047 state-level mandatory law; EO federal executive action (can be rescinded)
- SB 1047 required shutdown capability (unique provision)
- SB 1047 included third-party auditing requirement (EO relies on voluntary compliance)
Relationship:
- SB 1047 would have complemented EO with state-level enforcement
- State enforcement of federal principles with local adaptation
- Potential model for other states considering AI legislation
- Analysis from legal firms↗ noted SB 1047 went further than EO on liability
vs. EU AI Act
EU Act:
- Risk categories for deployed systems
- Broader scope (not just frontier models)
- Binding regulation with large fines
SB 1047:
- Narrower focus on frontier models
- More specific technical requirements (shutdown, testing)
- State-level vs. EU-wide
Lessons:
- EU’s comprehensiveness politically difficult in US
- SB 1047’s focused approach still failed
- Suggests US regulation will be patchwork
vs. Voluntary Commitments
Industry Commitments:
- No enforcement
- Self-defined standards
- Flexible and adaptive
SB 1047:
- Mandatory requirements
- State enforcement
- Specific standards
Debate:
- Is voluntary compliance sufficient?
- Does regulation stifle beneficial innovation?
- Can industry self-regulate emerging risks?
Career and Research Implications
Section titled “Career and Research Implications”Policy Careers
Lessons Learned:
- Understanding legislative process crucial
- Coalition building essential
- Technical expertise must translate to policy
Opportunities:
- State-level AI policy growing
- Need for policy entrepreneurs
- Legislative staff positions
Research Questions
Regulatory Design:
- How to balance innovation and safety?
- What thresholds are appropriate?
- How to make regulation adaptive?
Political Economy:
- Industry influence on regulation
- Public opinion on AI risk
- Coalition formation strategies
Technical:
- Measuring model capabilities
- Shutdown mechanisms
- Audit methodologies
Movement Building
Strategic Questions:
- When to push for regulation vs. build support?
- How to engage industry productively?
- Building public constituency
Skills Needed:
- Political strategy
- Coalition management
- Communications
- Policy design
Sources
Section titled “Sources”Primary Documents
- California Legislature: SB-1047 Bill Text↗ - Official bill text and legislative history
- Governor Newsom’s Veto Message (PDF)↗ - Official veto statement, September 29, 2024
- California Assembly Privacy and Consumer Protection Committee Analysis↗ - Detailed bill analysis, June 18, 2024
News Coverage and Analysis
- CalMatters: Newsom vetoes major California artificial intelligence bill↗ - Comprehensive coverage of veto decision
- NPR: California Gov. Newsom vetoes AI safety bill that divided Silicon Valley↗ - Context on industry division
- TechCrunch: California’s legislature just passed AI bill SB 1047↗ - Coverage of legislative passage
- Carnegie Endowment: All Eyes on Sacramento: SB 1047 and the AI Safety Debate↗ - Policy analysis
Legal and Technical Analysis
- Morgan Lewis: California’s SB 1047 Would Impose New Safety Requirements↗ - Legal analysis of requirements
- Gibson Dunn: Regulating the Future: Eight Key Takeaways from California’s SB 1047↗ - Post-veto analysis
- Orrick: California Looks to Regulate Cutting-Edge Frontier AI Models: 5 Things to Know↗ - Technical requirements breakdown
- DLA Piper: California’s SB-1047: Understanding the Safe and Secure Innovation for Frontier AI Act↗ - Early analysis
- Fenwick: Technological Challenges for Regulatory Thresholds of AI Compute↗ - Analysis of compute thresholds
Senator Wiener’s Office
- Senator Wiener: Groundbreaking AI Bill Advances to Assembly Floor↗ - Official statement on amendments
- Senator Wiener: Bipartisan Vote, Senate Passes Landmark AI Safety Bill↗ - Official statement on passage
- Lawfare Daily Podcast: State Senator Scott Wiener on SB 1047↗ - In-depth interview
Industry Perspectives
- Andreessen Horowitz: What You Need to Know About SB 1047↗ - Venture capital perspective
- Safe and Secure AI: Letter to YC & a16z↗ - Response from supporters
- Brookings: Misrepresentations of California’s AI safety bill↗ - Defense of bill against criticism
Reference
- Wikipedia: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act↗ - Overview and timeline
AI Transition Model Context
SB 1047 (though vetoed) represented a template for how legislation could affect the AI Transition Model:
| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | Would have required safety testing and shutdown capabilities for frontier models |
| Misalignment Potential | Safety Culture Strength | Mandatory third-party auditing would have raised safety standards |
| Transition Turbulence | Racing Intensity | Compute thresholds (10^26 FLOP) target models posing systemic risk |
The bill’s veto demonstrated the political difficulty of frontier AI regulation; Governor Newsom cited concerns about targeting “the most basic functions” of AI systems.