
Failed and Stalled AI Policy Proposals

LLM Summary: Analysis examines failed AI legislation including California's SB 1047 veto and 150+ stalled federal bills in the 118th Congress, identifying industry opposition ($61.5M from Big Tech in 2024; 648 companies lobbying on AI, up roughly 41% YoY), definitional challenges, and speed mismatches between technology and legislation as primary failure patterns. OpenAI increased lobbying spend roughly 7x to $1.76M in 2024. Evidence suggests incremental approaches with industry support succeed more often than comprehensive frameworks.
Policy

Failed and Stalled AI Proposals

Importance: 72
Purpose: Learning from unsuccessful efforts
Coverage: US, International

Failed and stalled AI policy proposals provide critical insights into the political economy of AI governance, revealing systematic patterns that explain why comprehensive regulation remains elusive despite widespread concern about AI risks. The failure rate for ambitious AI legislation is remarkably high: during the 118th Congress, lawmakers introduced over 150 AI-related bills, none of which became law. Meanwhile, industry opposition has intensified dramatically, with 648 companies lobbying on AI in 2024 versus 458 in 2023 (a roughly 41% year-over-year increase), while Big Tech firms spent $61.5 million on lobbying in 2024, up 13% from the previous year.

These failures illuminate fundamental tensions in AI governance: the speed mismatch between rapid technological development and deliberative legislative processes, the challenge of defining “artificial intelligence” in legally precise terms, and the complex jurisdictional landscape where multiple agencies and levels of government claim regulatory authority. Perhaps most significantly, failed proposals demonstrate how industry opposition mobilizes around specific regulatory mechanisms, particularly liability provisions and mandatory compliance requirements, while showing greater tolerance for disclosure obligations and voluntary frameworks.

The pattern of failures suggests that successful AI governance may require accepting incremental progress rather than comprehensive solutions, with voluntary industry commitments serving as necessary stepping stones to eventual binding regulation. This dynamic has profound implications for AI safety, as it may mean that meaningful oversight emerges only after significant harms occur, rather than through proactive prevention.

Major Failed and Stalled AI Proposals (2019-2025)

| Proposal | Jurisdiction | Year | Key Provisions | Primary Failure Reason | Status |
|---|---|---|---|---|---|
| California SB 1047 | California | 2024 | Safety testing for models >$100M training compute, shutdown requirements, liability | Governor veto citing federal preemption, industry opposition | Vetoed Sep 2024 |
| Algorithmic Accountability Act | US Federal | 2019, 2022, 2023 | Impact assessments for automated decision systems | Failed to exit committee in three consecutive Congresses | Stalled |
| SAFE Innovation Framework | US Federal | 2024 | Regulatory sandboxes for AI development | Jurisdictional conflicts, partisan disagreement | Stalled in committee |
| AI Labeling Act | US Federal | 2024 | Mandatory disclosure for AI-generated content | Industry lobbying, definitional challenges | Stalled in committee |
| National AI Commission Act | US Federal | 2023-2024 | Independent AI oversight body | Diluted to advisory function, opposition to new authorities | Weakened/Stalled |
| UN AI Treaty | International | 2024 | Binding international AI governance | US-China competition, verification challenges, sovereignty concerns | No progress |
| 10-Year State AI Moratorium | US Federal | 2025 | Preempt all state AI regulations | Stripped by 99-1 Senate vote in budget reconciliation | Rejected |

California SB 1047: A Case Study in State-Level Challenges


California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act represented the most ambitious state-level AI regulation attempted in the United States. Introduced by Senator Scott Wiener in February 2024, the bill established safety testing requirements for AI models trained using more than 10^26 floating-point operations at a cost exceeding $100 million, required developers to implement shutdown capabilities and conduct red-team evaluations before deployment, and created potential liability for developers whose models caused critical harms, defined to include mass casualties, damage to critical infrastructure, and economic losses exceeding $500 million. Governor Newsom vetoed the bill on September 29, 2024, stating that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”

The bill passed both houses of the California legislature with bipartisan support, demonstrating that concerns about AI risks transcend traditional partisan divides. The opposition coalition proved formidable: Meta and OpenAI opposed the bill, and members of California’s congressional delegation, including Speaker Emerita Nancy Pelosi and Representatives Ro Khanna, Anna Eshoo, Zoe Lofgren, and Jay Obernolte, publicly urged the Governor to reject it. Meanwhile, the Center for AI Safety, Elon Musk, and the L.A. Times editorial board supported the bill, revealing unusual cross-cutting alliances on AI safety.

The veto message highlighted several critical vulnerabilities in state-level AI regulation. Newsom cited concerns that the bill’s focus on high-cost, large-scale models would provide a “false sense of security,” emphasizing that smaller, specialized models could pose equally significant risks. Senator Scott Wiener criticized the veto as “a setback for artificial intelligence accountability,” noting that “this veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers.”

The SB 1047 failure revealed how definitional challenges plague AI legislation. The bill’s reliance on compute thresholds and cost metrics created apparent precision but masked underlying questions about what constitutes a “frontier” AI model. As capabilities improve and costs decline, static definitions risk becoming either overly broad or quickly obsolete. Industry critics successfully argued that such rigid criteria could not anticipate future technological developments or distinguish between beneficial and harmful applications.
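The obsolescence concern is, at bottom, arithmetic: a fixed dollar trigger covers a shrinking share of capable training runs as compute prices fall. The sketch below is purely illustrative and not drawn from the bill's text or from market data; the dollars-per-FLOP prices are hypothetical round numbers, and training compute uses the common rough approximation of 6 × parameters × training tokens.

```python
# Illustrative only: how a static $100M cost trigger drifts as compute gets cheaper.
# Prices are hypothetical; training FLOPs use the rough 6 * params * tokens estimate.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute (FLOPs) for a dense model."""
    return 6 * params * tokens

THRESHOLD_DOLLARS = 100e6  # SB 1047-style cost trigger

# A fixed, hypothetical frontier-scale run: 1T parameters, 15T training tokens.
run_flops = training_flops(params=1e12, tokens=15e12)  # ~9e25 FLOPs

for year, dollars_per_flop in [(2024, 2e-18), (2026, 5e-19), (2028, 1e-19)]:  # assumed prices
    cost = run_flops * dollars_per_flop
    status = "covered" if cost > THRESHOLD_DOLLARS else "below threshold"
    print(f"{year}: ~${cost / 1e6:,.0f}M training cost -> {status}")
```

Under these assumed prices, an identical training run falls out of scope within a few years, which is precisely the dynamic critics of static thresholds pointed to.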

Federal AI Legislation: Congressional Gridlock and Competing Priorities


The 118th Congress saw an unprecedented volume of AI-related legislation, with over 150 bills addressing various aspects of artificial intelligence governance, yet none passed into law. Major proposals included the Algorithmic Accountability Act, reintroduced for the third time by Senators Ron Wyden and Cory Booker and Representative Yvette Clarke, which would have required large companies to conduct impact assessments of automated decision systems; the Protect Elections from Deceptive AI Act, which would have prohibited AI-generated political deepfakes; and the ASSESS AI Act, which would have directed a task force to examine AI’s privacy and civil liberties implications.

Congressional dysfunction played a significant role in these failures, reflecting broader challenges in American legislative processes rather than AI-specific issues. The vacuum has driven state-level action: nearly 700 AI-related state bills were introduced in 2024, with nearly 100 passing into law. This proliferation prompted a counter-reaction at the federal level: the House attempted to include a 10-year moratorium on state and local AI laws in the 2025 budget reconciliation bill, which the Senate stripped by a near-unanimous 99-1 vote.

Industry lobbying proved particularly effective at the federal level. OpenAI increased its lobbying spend nearly seven-fold to $1.76 million in 2024, while Anthropic more than doubled its spending, from roughly $280,000 to $720,000. Together, Big Tech companies employed nearly 300 lobbyists in 2024, approximately one for every two members of Congress. The narrative that regulation would harm American competitiveness against China proved especially potent, with AI companies increasingly positioning the technology as pivotal to national security in order to argue for government support rather than regulation.

The stalled federal legislation also reflected deeper partisan divisions about the appropriate role of government in technology regulation. Republican lawmakers generally favored market-driven approaches and expressed skepticism about new regulatory authorities, while Democrats emphasized civil rights protections and algorithmic bias concerns. These different priorities made comprehensive legislation difficult to construct and sustain.

International Treaty Efforts: Great Power Competition and Sovereignty Concerns


International efforts to establish binding AI governance agreements have consistently failed to achieve meaningful progress, despite widespread recognition that AI’s global nature requires coordinated responses. The UN Secretary-General’s High-level Advisory Body on AI released its final report “Governing AI for Humanity” in September 2024, recommending seven initiatives including an international scientific panel and global AI data framework, but no binding treaty or enforcement mechanism has emerged. A UN report found that 118 countries were not parties to any significant international AI governance initiatives, with only seven developed nations participating in all major frameworks.

Great power competition, particularly US-China tensions, represents the primary obstacle to international AI agreements. The US explicitly rejected “centralized control and global governance” of AI at the UN Global Dialogue, signaling skepticism of UN-anchored rule-setting. The US also declined to sign the nonbinding Paris AI summit declaration in 2025. Both superpowers view AI capabilities as strategic assets essential to military and economic competitiveness, making them reluctant to accept external constraints. Western nations worry that UN involvement could open the door to Chinese and autocratic influence over AI governance.

Verification challenges compound these political obstacles. Unlike nuclear or chemical weapons, AI capabilities are largely software-based and can be rapidly modified or concealed. International monitoring would require unprecedented access to corporate research facilities and source code, raising both security and intellectual property concerns. The dual-use nature of most AI research makes it difficult to identify which activities warrant international oversight.

The G7’s Hiroshima AI Process exemplifies the limitations of voluntary international approaches. The October 2023 code of conduct received significant attention but is explicitly designed as a transitional measure—a “voluntary guidance for actions by organizations developing the most advanced AI systems”—while governments develop binding regulation. The framework’s effectiveness depends entirely on corporate self-interest rather than binding obligations, and it remains to be seen whether it will spread beyond G7 nations to the Global South countries not represented at the negotiating table.


Analysis of failed AI proposals reveals sophisticated industry opposition strategies that go beyond traditional lobbying. Technology companies have invested heavily in policy expertise, hiring former government officials and establishing dedicated government relations teams. This investment has created an asymmetric information advantage, where industry representatives often possess deeper technical knowledge than legislative staff, allowing them to shape debates around implementation feasibility and unintended consequences.

| Company/Category | 2024 Lobbying Spend | YoY Change | Lobbyists | Key Focus Areas |
|---|---|---|---|---|
| Meta | $24.4M | +27% | | AI regulation, content moderation |
| Microsoft | $10.35M | | | CHIPS Act, AI, facial recognition |
| ByteDance (TikTok) | $10.4M | +19% | | TikTok ban, AI policy |
| OpenAI | $1.76M | +577% | | AI research, benchmarking, safety |
| Anthropic | $720K | +157% | | AI safety, frontier model policy |
| Cohere | $230K | +229% | | Enterprise AI, safety standards |
| Big Tech Total | $61.5M | +13% | ~300 | National security, competitiveness |
| Companies lobbying on AI | | +41% | 648 total | Across all AI policy areas |

Sources: OpenSecrets, Issue One, TechCrunch, MIT Technology Review
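As a rough consistency check, the 2023 baselines implied by these figures can be backed out from each company's 2024 total and reported year-over-year change. The sketch below performs only that derivation; the resulting baselines (about $260K for OpenAI, $280K for Anthropic, and $70K for Cohere) are implied values, not independently sourced numbers.

```python
# Back out the implied 2023 baseline from each 2024 total and reported YoY change.
# The 2024 figures and percentages come from the table above; only the derivation is new.

figures_2024 = {
    "OpenAI":    (1_760_000, 5.77),  # +577%
    "Anthropic": (  720_000, 1.57),  # +157%
    "Cohere":    (  230_000, 2.29),  # +229%
}

for company, (spend_2024, yoy_change) in figures_2024.items():
    implied_2023 = spend_2024 / (1 + yoy_change)
    print(f"{company}: 2024 ${spend_2024:,} -> implied 2023 ~${implied_2023:,.0f}")
```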

The “innovation flight” narrative has proven particularly effective, warning that strict regulation will drive AI development to more permissive jurisdictions. This argument resonates with policymakers concerned about economic competitiveness, particularly in states like California that depend heavily on technology sector employment. However, empirical evidence for this claim remains limited—financial services and pharmaceutical companies continue to invest heavily in highly regulated jurisdictions when market opportunities justify compliance costs.

Astroturfing efforts have supplemented direct lobbying, with industry-funded organizations presenting themselves as grassroots voices for innovation or consumer interests. These groups amplify concerns about regulatory overreach while obscuring their corporate funding sources. The proliferation of such organizations makes it difficult for policymakers to assess genuine public opinion versus manufactured opposition.

Industry opposition also exploits the collective action problem in AI safety. Even companies that privately acknowledge safety risks may oppose regulation if competitors could gain advantages by avoiding compliance costs. This dynamic suggests that voluntary industry initiatives, while valuable for norm-setting, may be insufficient for addressing systemic risks that require universal participation.

The failure to establish precise, legally enforceable definitions of artificial intelligence has undermined numerous regulatory proposals. Traditional legal frameworks assume clear categorical boundaries, but AI exists on a spectrum of capabilities that resist binary classification. The European Union’s AI Act attempted to address this challenge through risk-based categorization, but implementation guidance reveals ongoing struggles with edge cases and technological evolution.

Technical complexity creates additional barriers to effective regulation. Many AI governance proposals rely on metrics like computational resources or model parameters that may not correlate with actual capabilities or risks. The focus on “frontier” or “advanced” AI systems often assumes linear progression in capabilities, when breakthrough developments may emerge from unexpected research directions or architectural innovations.

The dual-use nature of AI technology complicates regulatory design, as the same underlying capabilities can enable both beneficial and harmful applications. Unlike nuclear technology, where weapons applications are clearly distinguishable from civilian uses, AI systems designed for legitimate purposes can be adapted for malicious ends with minimal modification. This reality makes it difficult to craft regulations that prevent harm without stifling beneficial innovation.

The speed of technological development outpaces regulatory comprehension, creating a persistent knowledge gap between cutting-edge AI capabilities and policymaker understanding. By the time comprehensive legislation addresses current AI systems, the technology has evolved in ways that make existing frameworks obsolete. This dynamic suggests a need for more adaptive regulatory approaches, but such flexibility conflicts with legal requirements for predictability and due process.

Jurisdictional Complexity and Coordination Failures


Failed AI proposals often founder on jurisdictional complexity, with multiple agencies and levels of government claiming overlapping authority. In the United States, the Federal Trade Commission, Securities and Exchange Commission, Food and Drug Administration, and various other agencies all assert relevance to AI governance within their respective domains. This fragmentation creates regulatory gaps where harmful activities fall between jurisdictions, while also enabling forum shopping by companies seeking the most permissive oversight.

State versus federal tensions have proven particularly problematic for comprehensive AI regulation. Federal preemption arguments against state initiatives like SB 1047 assume that uniform national standards are preferable to regulatory experimentation, but federal inaction leaves this assumption untested. The resulting stalemate benefits companies that prefer regulatory uncertainty to clear but demanding requirements.

International coordination failures reflect deeper structural problems in global governance systems designed for a world of discrete nation-states rather than borderless digital technologies. Existing international institutions lack both technical expertise and enforcement authority to address AI governance effectively. Treaty-making processes that require consensus among hundreds of nations are poorly suited to rapidly evolving technologies that demand quick responses to emerging risks.

The failure of international coordination also stems from asymmetric capabilities and interests among nations. Countries with advanced AI industries have different priorities from those primarily concerned about being subject to others’ AI systems. This dynamic creates resistance to governance frameworks that might constrain technological leaders while providing insufficient protection for other nations.

| Approach Type | Examples | Passage Rate | Industry Opposition | Key Success Factors |
|---|---|---|---|---|
| Comprehensive Frameworks | SB 1047, Algorithmic Accountability Act | Very Low (~5%) | High ($10M+) | Rare; typically requires crisis |
| Sectoral/Use-Case Specific | Colorado AI Act, AI in hiring laws | Medium (30-40%) | Moderate | Narrow scope, clear harms |
| Disclosure Requirements | AB 2013 (CA), AI labeling | Higher (50-60%) | Low-Moderate | No liability, transparency focus |
| Executive Orders | Biden AI EO (Oct 2023) | N/A (executive) | Low | Uses existing authority |
| Voluntary Frameworks | G7 Hiroshima Process, NIST RMF | High (non-binding) | Minimal | Industry-friendly, no enforcement |
| International Treaties | UN binding AI treaty | Near Zero | N/A | US-China competition blocks |

Incremental Progress Over Comprehensive Frameworks


Analysis of successful AI governance initiatives reveals that incremental approaches with limited scope achieve higher passage rates than comprehensive frameworks attempting to address multiple AI governance challenges simultaneously. Executive orders like President Biden’s October 2023 AI Executive Order succeed by building on existing regulatory authorities rather than creating new ones, while narrow sectoral regulations addressing specific applications (like AI in hiring or medical devices) face less opposition than broad technology mandates.

Disclosure requirements prove more politically viable than liability provisions or performance mandates. Requirements for algorithmic transparency or AI-generated content labeling typically generate less industry opposition than rules imposing legal responsibility for harmful outcomes. This pattern suggests that information-based interventions may be necessary precursors to more substantive regulatory obligations.

Voluntary frameworks often serve as stepping stones to mandatory requirements, allowing industry to demonstrate either compliance or inadequacy of self-regulation. The development of technical standards through organizations like NIST provides foundations for future regulatory requirements while building consensus around best practices. However, the timeline for this progression remains uncertain and may depend on catalyzing events that shift political incentives.

Multi-stakeholder approaches that include industry participation from early stages show higher success rates than adversarial regulatory processes. The UK’s AI Safety Institute model of collaborative risk assessment and the EU’s approach of extensive industry consultation during AI Act development both demonstrate how procedural inclusion can build legitimacy for substantive requirements. However, such approaches risk capture by well-resourced industry participants unless carefully designed to include diverse perspectives.

Building Coalitions and Managing Opposition


Successful AI governance initiatives typically build broad coalitions that include both technology industry participants and civil rights advocates, rather than relying solely on safety-focused arguments. Bipartisan framing that emphasizes economic competitiveness, national security, and innovation leadership alongside safety concerns proves more durable than approaches that position regulation as primarily about constraining industry.

The role of catalyzing events in overcoming political resistance cannot be overstated. The financial crisis prompted financial services regulation, data breaches enabled privacy legislation, and algorithmic bias scandals facilitated AI transparency requirements. However, relying on such events for regulatory progress means that governance often lags behind harm, making proactive approaches more desirable but politically more difficult.

Technical communities within industry can serve as allies for safety-focused regulation when their concerns align with external advocacy. AI safety researchers within major technology companies often share external researchers’ concerns about risks, though their ability to influence corporate positions may be limited. Building relationships with these internal allies can provide valuable intelligence about industry positions and potentially moderate opposition to reasonable regulatory proposals.

International coordination can strengthen domestic regulatory efforts by reducing concerns about competitive disadvantage. The EU AI Act’s passage made it easier for other jurisdictions to consider similar requirements, as companies were already adapting to European standards. However, this dynamic depends on major markets taking initial regulatory steps, creating first-mover disadvantages that must be overcome through political leadership.

The immediate trajectory of AI governance depends heavily on several key variables that will largely determine whether the pattern of failure continues or shifts toward more successful regulatory outcomes. The 2024 US elections and subsequent congressional composition will significantly influence federal AI legislation prospects, with different electoral outcomes suggesting vastly different regulatory approaches. A continuation of divided government likely means persistent gridlock on comprehensive AI legislation, while unified party control could enable either aggressive regulation or systematic deregulation depending on which party prevails.

State-level initiatives appear more likely to succeed in the near term, particularly because SB 1047’s passage through the legislature demonstrated the political viability of AI safety concerns even though the bill itself failed. Colorado’s AI bias law and similar proposals in New York and Illinois suggest that narrower, use-case-specific regulation may achieve passage where comprehensive frameworks fail. However, the federal preemption argument that contributed to SB 1047’s veto remains a significant challenge for ambitious state-level initiatives.

Industry positions show signs of evolution, with some major AI developers acknowledging the inevitability of regulation and seeking to influence its form rather than prevent it entirely. This shift from opposition to engagement could reduce the systematic resistance that has characterized failed proposals, though it may also lead to industry capture of regulatory processes. The key question is whether this engagement represents genuine acceptance of safety constraints or strategic positioning to minimize regulatory burden.

International coordination faces continued challenges from great power competition, but the proliferation of national AI strategies and governance initiatives may create opportunities for bottom-up coordination around technical standards and best practices. The AI Safety Institutes network, initiated by the UK and now including multiple countries, represents a promising model for technical cooperation that avoids the political challenges of binding agreements.

The medium-term trajectory of AI governance will likely be shaped by whether voluntary industry commitments prove adequate to address emerging risks or whether their limitations become apparent through failure to prevent harmful incidents. Current voluntary frameworks rely heavily on corporate self-assessment and public commitments without independent verification mechanisms. If these approaches prove inadequate—either through obvious failures or more subtle erosion of safety practices under competitive pressure—political support for mandatory regulation may increase substantially.

Technological developments over this period will significantly influence regulatory approaches, potentially making current debates obsolete. The emergence of artificial general intelligence or highly capable autonomous systems could create risks that dwarf current concerns, shifting political calculations about acceptable regulatory costs. Conversely, if AI capabilities plateau or prove more manageable than current concerns suggest, the urgency driving regulatory efforts may diminish.

The international landscape may evolve toward greater fragmentation or coordination depending on geopolitical developments. Continued US-China competition suggests persistent barriers to comprehensive international agreements, but shared interests in preventing catastrophic risks could enable limited cooperation. The development of technical standards and safety practices through multilateral institutions may provide foundations for future coordination even without binding treaties.

Liability frameworks represent a critical uncertainty that could fundamentally alter the regulatory landscape. Current opposition to liability provisions reflects uncertainty about AI capabilities and appropriate responsibility allocation, but major AI-caused harms could shift both public opinion and legal precedent toward stricter accountability. The development of case law around AI liability through tort litigation may provide de facto regulation even without comprehensive legislation.

The long-term trajectory of AI governance depends on fundamental questions about technological development, international order, and democratic governance that remain deeply uncertain. If AI development continues its current trajectory toward increasingly capable and general systems, the stakes for governance failures may become existentially high, potentially overcoming political obstacles that currently prevent ambitious regulation. However, if AI capabilities stabilize or develop along different trajectories than currently anticipated, existing governance approaches may prove adequate or require fundamental reconceptualization.

The relationship between democratic governance and AI oversight presents particular challenges that may reshape political systems themselves. AI’s speed and complexity may exceed the capacity of traditional democratic deliberation, potentially requiring new institutions or decision-making processes. The failure of current governance approaches may reflect not just political obstacles but fundamental limitations of existing democratic institutions when confronting rapidly evolving technological risks.

International governance may require new institutions and approaches that go beyond current state-centric models. The failure of traditional treaty-making processes for AI governance suggests a need for more adaptive and technically informed international coordination mechanisms. Whether existing international institutions can evolve to meet these challenges or whether new forms of global governance will emerge remains an open question with profound implications for AI safety and human welfare.

The ultimate success or failure of AI governance may depend on whether humanity can develop institutional innovations that match the pace and complexity of technological development while preserving democratic accountability and human agency. The pattern of current failures suggests that existing approaches are inadequate, but whether better alternatives will emerge before they become urgently necessary remains uncertain.


Failed AI policy proposals reveal constraints on improving the AI Transition Model:

| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | $61.5M in Big Tech lobbying (648 companies lobbying on AI, up ~41% YoY) systematically opposes comprehensive regulation |
| Civilizational Competence | Institutional Quality | 150+ bills in the 118th Congress with zero becoming law shows a governance speed mismatch |
| Transition Turbulence | Racing Intensity | Industry preference for minimal regulation allows competitive pressure to override safety |

The pattern suggests that incremental approaches with industry support succeed more often than comprehensive frameworks; definitional challenges remain a persistent barrier.