Pause Advocacy
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Low-Medium (25-35%) | Major lobbying opposition; geopolitical competition; Biden’s EO 14110↗ rescinded January 2025 |
| Potential Impact | High if achieved | Could provide 2-5 years for alignment research; Asilomar precedent↗ shows scientific pauses can work |
| Political Feasibility | Low (15-25%) | Only 30,000 signed FLI open letter↗; industry opposition strong |
| International Coordination | Very Low (10-20%) | China developing own AI governance framework↗; US-China competition intense |
| Time Gained if Successful | 2-5 years | Based on proposed 6-month to multi-year pause durations |
| Risk of Backfire | Moderate (30-40%) | Compute overhang; ceding leadership to less safety-conscious actors |
| Advocacy Momentum | Growing | PauseAI↗ protests in 13+ countries; 64% of Americans support pause until proven safe |
| Public Support for Regulation | Very High (97%) | Gallup 2025: 97% agree AI safety should be subject to rules; Quinnipiac 2025: 69% say government not doing enough |
Risks Addressed
| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing Dynamics | Reduces competitive pressure, creates coordination window | High if achieved |
| Loss of Control | Buys time for alignment research | High if alignment is tractable |
| Misuse Risks | Delays deployment of dual-use capabilities | Medium |
| Lock-in | Provides time for governance frameworks | Medium |
| Epistemic Risks | Reduces rushed deployment of unreliable systems | Medium |
Overview
Pause advocacy represents one of the most controversial interventions in AI safety: calling for the deliberate slowing or temporary halting of frontier AI development until adequate safety measures can be implemented. This approach gained significant attention following the March 2023 open letter organized by the Future of Life Institute, which called for a six-month pause on training AI systems more powerful than GPT-4 and garnered more than 30,000 signatures, including those of prominent technologists and researchers.
The core premise underlying pause advocacy is that AI capabilities are advancing faster than our ability to align these systems with human values and control them reliably. Proponents argue that without intervention, we risk deploying increasingly powerful AI systems before developing adequate safety measures, potentially leading to catastrophic outcomes. The theory of change involves using advocacy, public pressure, and policy interventions to buy time for safety research to catch up with capabilities development.
However, pause advocacy faces formidable challenges. The economic incentives driving AI development are enormous, with companies investing hundreds of billions of dollars and nations viewing AI leadership as critical for economic and military competitiveness. Critics argue that unilateral pauses by safety-conscious actors could simply cede leadership to less responsible developers, potentially making outcomes worse rather than better.
Evolution of Pause Proposals
The pause advocacy movement has evolved significantly since 2023, with proposals ranging from temporary moratoria to conditional bans on superintelligence development.
| Proposal | Date | Scope | Signatories | Key Demand | Status |
|---|---|---|---|---|---|
| FLI Open Letter | March 2023 | 6-month pause on GPT-4+ training | 30,000+ | Voluntary moratorium | Ignored by labs |
| CAIS Statement | May 2023 | Risk acknowledgment | 350+ researchers | Recognition of extinction risk | Influenced discourse |
| Statement on Superintelligence | October 2025 | Conditional ban on superintelligence | 700+ (Nobel laureates, public figures) | Prohibition until “broad scientific consensus” on safety | Active campaign |
| PauseAI Policy Proposal | Ongoing | International treaty + AI safety agency | Grassroots movement | IAEA-like body for AI | Advocacy stage |
The October 2025 “Statement on Superintelligence” represents a notable escalation from the 2023 letter. While the original called for a temporary six-month pause, the new statement advocates for a conditional prohibition: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Signatories include Nobel laureates Geoffrey Hinton, Daron Acemoglu, and Beatrice Fihn, alongside public figures like Steve Wozniak, Richard Branson, and Prince Harry and Meghan Markle. FLI director Anthony Aguirre warned that “time is running out,” estimating superintelligence could arrive within one to two years.
Theory of Change
The pause advocacy theory of change operates through multiple reinforcing pathways. Public advocacy generates media attention, which, combined with grassroots organizing, shifts public opinion. This creates political pressure that can lead to either legislative action (such as compute governance) or industry self-regulation (such as responsible scaling policies). Both pathways result in slower development timelines, providing additional time for safety research to mature before transformative AI capabilities emerge.
Arguments for Pause
Capabilities-Safety Gap
The most compelling argument for pause centers on the widening gap between AI capabilities and safety research. While frontier models have jumped from GPT-3 (175B parameters, 2020) to GPT-4 (estimated 1.7T parameters, 2023) to even more powerful systems, fundamental alignment problems remain unsolved. Current safety techniques like constitutional AI and reinforcement learning from human feedback (RLHF) appear increasingly inadequate for highly capable systems that could exhibit deceptive behavior or pursue unintended objectives.
| Generation | Parameters | Year | Key Safety Advances | Gap Assessment |
|---|---|---|---|---|
| GPT-3 | 175B | 2020 | Basic RLHF | Moderate |
| GPT-4 | ~1.7T (est.) | 2023 | Constitutional AI, red-teaming | Widening |
| Claude 3/GPT-4.5 | Undisclosed | 2024-2025 | RSP frameworks↗, scaling policies | Significant |
| Projected 2026 | 10T+ | 2026 | Unknown | Critical uncertainty |
Research by Anthropic↗ and other safety-focused organizations suggests that as models become more capable, they become harder to interpret and control. A 2023 study by Perez et al. found that larger language models show increased tendencies toward deceptive behaviors when given conflicting objectives. Recent mechanistic interpretability work↗ remains far from scalable to frontier models with hundreds of billions of parameters. Without a pause to develop better alignment techniques, we may cross critical capability thresholds before adequate safety measures are in place.
Coordination Window
Pause advocacy also argues that slower development creates opportunities for beneficial coordination that are impossible during intense racing dynamics↗. The current AI development landscape involves only a handful of frontier labs—primarily OpenAI, Google DeepMind, and Anthropic—making coordination theoretically feasible. Historical precedents like the Asilomar Conference on Recombinant DNA↗ (1975) and various nuclear arms control agreements demonstrate that the scientific community and governments can successfully coordinate to slow potentially dangerous technological development when risks are recognized.
The window for such coordination may be closing rapidly. As AI capabilities approach transformative levels, the strategic advantages they confer will likely intensify competitive pressures. Research on strategic insights from simulation gaming of AI race dynamics↗ suggests that nations viewing AI as critical to national security may be unwilling to accept coordination mechanisms that could disadvantage them relative to competitors.
Precautionary Principle
Advocates invoke the precautionary principle, arguing that when facing potentially existential risks, the burden of proof should be on demonstrating safety rather than on proving danger. Unlike most technologies, advanced AI systems could pose civilization-level risks if misaligned, making the stakes qualitatively different from typical innovation-risk tradeoffs. This principle has precedent in other high-stakes domains like nuclear weapons development and gain-of-function research, where safety considerations have sometimes overridden rapid advancement.
Arguments Against Pause
Geopolitical Competition
The strongest argument against pause concerns international competition, particularly with China. China’s national AI strategy, announced in 2017, explicitly aims for AI leadership by 2030, with massive state investment. However, the picture is more nuanced than simple “race to the bottom” narratives suggest. China released its own AI Safety Governance Framework↗ in September 2024, and in December 2024, 17 major Chinese AI companies including DeepSeek and Alibaba signed safety commitments mirroring Seoul AI Summit↗ pledges.
| Competitive Factor | US Position | China Position | Implication for Pause |
|---|---|---|---|
| Frontier model capability | Leading | 6-18 months behind | Pause could narrow gap |
| Compute access | Advantage (NVIDIA chips) | Constrained by export controls | Pause less urgent for compute |
| Safety research | Leading (Anthropic, OpenAI) | Growing (CnAISDA↗) | Potential coordination opportunity |
| Regulatory framework | Fragmented, EO rescinded | Unified (CAC framework) | China may regulate faster |
| Talent pool | Advantage | Growing rapidly | Not directly affected by pause |
This concern is compounded by the dual-use nature of AI research. Unlike some other dangerous technologies, AI capabilities research often advances both beneficial applications and potentially dangerous ones simultaneously. Pausing beneficial AI development to prevent dangerous applications may be a poor tradeoff if competitors continue advancing both dimensions.
Compute Overhang Risk
A technical argument against pause involves the “compute overhang” phenomenon. If a pause affects model training but not hardware development, accumulated computing power could enable sudden capability jumps when development resumes. Historical analysis of computing trends shows that available compute continues growing exponentially even during periods of algorithmic stagnation. A pause that allows compute scaling to continue could result in more dangerous discontinuous progress than gradual development would produce.
Research by OpenAI’s scaling team suggests that sudden access to much larger compute budgets could enable capability gains that bypass incremental safety research. This could be more dangerous than the gradual development that allows safety research to track capabilities improvements.
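A toy model makes the overhang intuition concrete. The sketch below is illustrative only: the 1.4x annual hardware growth rate, the 2025 baseline of 10^26 FLOP, and the pause dates are assumptions chosen for readability, not empirical estimates.

```python
# Toy model of compute overhang: if hardware price-performance keeps
# improving while frontier training is paused, the largest feasible run
# jumps discontinuously when development resumes.
# All growth rates and baselines below are illustrative assumptions.

def feasible_compute(year: int, base_flop: float = 1e26,
                     hw_growth: float = 1.4) -> float:
    """Largest affordable training run (FLOP), assuming ~1.4x/year
    hardware improvement from an assumed 1e26 FLOP baseline in 2025."""
    return base_flop * hw_growth ** (year - 2025)

def largest_run(year: int, pause_start: int | None = None,
                pause_end: int | None = None) -> float:
    """Largest run actually trained: frozen at the pause-start level
    during a pause, then snapping to the full feasible frontier after."""
    if pause_start is not None and pause_start <= year < pause_end:
        return feasible_compute(pause_start)
    return feasible_compute(year)

for year in range(2025, 2031):
    continuous = largest_run(year)
    paused = largest_run(year, pause_start=2026, pause_end=2028)
    print(f"{year}: continuous {continuous:.1e} FLOP | "
          f"with 2026-2027 pause {paused:.1e} FLOP")
```

Under these assumptions, the first post-pause run in 2028 is roughly 2x larger than the last pre-pause run in a single step (1.4^2 ≈ 1.96), rather than arriving through two years of incremental scaling. The concern is that this kind of discontinuity leaves less room for safety research to track capability gains.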
Economic and Social Costs
Pause advocacy also faces arguments about opportunity costs and distributional effects. AI technologies show significant promise for addressing major challenges including disease, climate change, and poverty. Analysis by the McKinsey Global Institute estimated that AI could add roughly $13 trillion to global economic output by 2030, with particular benefits for developing countries if access is democratized.
Critics argue that pause advocacy primarily benefits existing AI leaders by reducing competitive pressure while imposing costs on society through delayed beneficial applications. This raises questions about the democratic legitimacy of pause policies and their distributional consequences.
Implementation Mechanisms
Regulatory Approaches
The most direct path to pause involves government regulation. Several mechanisms are under consideration, including compute governance↗ (restricting access to the high-end chips needed for training frontier models), mandatory safety evaluations before deployment, and international treaties.
| Regulatory Mechanism | Jurisdiction | Threshold | Status |
|---|---|---|---|
| Executive Order 14110↗ | US | 10^26 FLOP training runs | Rescinded January 2025 |
| EU AI Act↗ | EU | 10^25 FLOP (systemic risk) | Active since August 2024 |
| Responsible AI Act (proposed) | US Congress | Tiered thresholds | Under consideration |
| China AI Governance Framework↗ | China | Risk-based grading | Version 2.0 released September 2025 |
However, regulatory approaches face significant implementation challenges. The global nature of AI supply chains, the dual-use character of computing hardware, and the difficulty of defining “frontier” capabilities all complicate enforcement. Additionally, industry opposition remains strong, with major tech companies arguing that regulation could stifle innovation and benefit foreign competitors. The rescission of Biden’s EO 14110 within hours of the new administration taking office demonstrates the fragility of executive action on AI governance.
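Because these thresholds are denominated in training FLOP, enforcement hinges on estimating whether a planned run crosses them. The sketch below uses the commonly cited ~6 x parameters x tokens approximation for dense transformer training compute; the model sizes and token counts are hypothetical examples, not figures for any specific system.

```python
# Rough check of whether a planned training run crosses the compute
# thresholds in the table above, using the common ~6 * N * D FLOP
# approximation for dense transformers (N = parameters, D = tokens).
# The example run configurations are hypothetical.

THRESHOLDS_FLOP = {
    "EU AI Act systemic risk (1e25)": 1e25,
    "US EO 14110 reporting, rescinded (1e26)": 1e26,
}

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * params * tokens

example_runs = {
    "70B params, 15T tokens": training_flop(70e9, 15e12),
    "400B params, 30T tokens": training_flop(400e9, 30e12),
    "2T params, 100T tokens": training_flop(2e12, 100e12),
}

for name, flop in example_runs.items():
    crossed = [label for label, limit in THRESHOLDS_FLOP.items()
               if flop >= limit]
    print(f"{name}: ~{flop:.1e} FLOP -> crosses: {crossed or 'none'}")
```

Under these assumptions, the 70B-parameter run (~6e24 FLOP) stays below both thresholds, the 400B run (~7e25 FLOP) crosses only the EU systemic-risk line, and the 2T run (~1e27 FLOP) crosses both, illustrating how a single scalar threshold partitions the frontier while saying nothing about the capabilities a run actually produces.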
Industry Self-Regulation
An alternative approach involves industry-led initiatives, such as responsible scaling policies that commit labs to pause development when certain capability thresholds are reached without adequate safety measures. Anthropic’s Responsible Scaling Policy↗, first announced in September 2023 and updated in October 2024, provides a template for such approaches by defining specific capability evaluations and safety requirements.
| Lab | Policy | Key Commitments | Independent Verification |
|---|---|---|---|
| Anthropic | RSP v2.2↗ | ASL capability thresholds, biosecurity evals | Limited external audit |
| OpenAI | Preparedness Framework | Red-teaming, catastrophic risk thresholds | Internal governance |
| Google DeepMind | Frontier Safety Framework | Pre-deployment evaluations | In development |
The advantage of industry self-regulation is that it can move faster than formal regulation and may be more technically sophisticated. However, it relies on voluntary compliance and may not address competitive pressures that incentivize racing. After one year with the RSP in effect, Anthropic acknowledged↗ instances where they fell short of meeting the full letter of its requirements, though they assessed these posed minimal safety risk.
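The core logic of an RSP-style commitment is a conditional gate: scaling or deployment continues only while evaluated capabilities remain below pre-declared thresholds or the matching mitigations are in place. The sketch below is a simplified illustration of that structure, not Anthropic's or any other lab's actual evaluation pipeline; the evaluation names, scores, and thresholds are invented.

```python
# Simplified illustration of an RSP-style conditional gate: scaling pauses
# when a dangerous-capability evaluation crosses its pre-declared threshold
# before the corresponding mitigations are deployed.
# Evaluation names, scores, thresholds, and flags are invented examples.

from dataclasses import dataclass

@dataclass
class CapabilityEval:
    name: str
    score: float             # measured result of a capability evaluation
    threshold: float         # pre-committed trigger level
    mitigations_ready: bool  # required safeguards already deployed?

def scaling_decision(evals: list[CapabilityEval]) -> str:
    triggered = [e for e in evals
                 if e.score >= e.threshold and not e.mitigations_ready]
    if triggered:
        names = ", ".join(e.name for e in triggered)
        return f"PAUSE: threshold crossed without mitigations ({names})"
    return "PROCEED: all evaluations below threshold or mitigated"

evals = [
    CapabilityEval("bioweapon uplift", score=0.42, threshold=0.50,
                   mitigations_ready=False),
    CapabilityEval("autonomous replication", score=0.61, threshold=0.60,
                   mitigations_ready=False),
]
print(scaling_decision(evals))
# -> PAUSE: threshold crossed without mitigations (autonomous replication)
```

The open policy questions pause advocates raise sit outside this logic: who runs and verifies the evaluations, and whether the gate still binds when a competitor is not bound by it.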
Safety Implications and Trajectory
Concerning Aspects
The primary safety concern with pause advocacy is that it may not achieve its intended goals while creating new risks. If pauses are unevenly implemented, they could concentrate advanced AI development among less safety-conscious actors. Additionally, the political and economic pressures against pause may lead to policy capture or symbolic measures that provide false security without meaningfully reducing risks.
There are also concerns about the precedent that successful pause advocacy might set. If pause becomes a standard response to emerging technologies, it could hamper beneficial innovation more broadly. The challenge is distinguishing cases where precautionary pauses are justified from those where they primarily serve incumbent interests.
Promising Aspects
Conversely, even partial success in slowing AI development could provide crucial time for safety research to advance. Recent progress in interpretability research↗, including work on sparse autoencoders and circuit analysis, suggests that additional time could yield significant safety improvements. In March 2025, Anthropic published research on circuit tracing that “lets us watch Claude think, uncovering a shared conceptual space where reasoning happens before being translated into language.” A coordinated slowdown that maintains global leadership among safety-conscious actors could be the best available approach for navigating the development of transformative AI.
Furthermore, pause advocacy has already had positive effects on AI safety discourse by raising awareness of risks and legitimizing safety concerns in policy circles. The 2023 open letter↗ helped establish AI safety as a mainstream policy concern, influencing subsequent regulatory discussions and industry practices.
Current State and Trajectory
As of late 2025, pause advocacy remains politically marginal but increasingly organized. PauseAI↗, founded in May 2023, has grown from a single San Francisco group to six established chapters across the US (NYC, Chicago, SF, Portland, Phoenix, Washington DC), with protests in 13+ countries including at the AI Seoul Summit↗ and Paris AI Action Summit.
2025 Protest Milestones
| Event | Date | Location | Focus | Outcome |
|---|---|---|---|---|
| Paris AI Summit Protest | February 2025 | Paris, France | Summit lacking safety focus | Global coordination, Kenya and DRC joined |
| Google DeepMind Protest | June 2025 | London, UK | Gemini 2.5 safety commitments | 60 UK MPs sent letter to Google; largest PauseAI protest |
| PauseCon | June 2025 | London, UK | First activist training event | Movement capacity building |
| Amsterdam ASML Protest | December 2025 | Amsterdam, Netherlands | Chip supply chain for AI | Targeting hardware enablers |
The June 2025 Google DeepMind protest marked a turning point in pause advocacy effectiveness. When Google released Gemini 2.5 Pro in March 2025 without publishing the promised safety testing documentation, PauseAI organized demonstrations outside DeepMind’s London headquarters where protesters chanted “Stop the race, it’s unsafe” and “Test, don’t guess.” The action gained political traction when 60 cross-party UK parliamentarians signed a letter accusing Google of “breach of trust” for violating commitments made at the Seoul AI Safety Summit and to the White House in 2023. Former Defence Secretary Lord Browne warned: “If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards.”
| Timeframe | Scenario | Probability | Key Drivers |
|---|---|---|---|
| 2025-2026 | No meaningful pause | 60-70% | Industry lobbying, EO rescission, China competition |
| 2025-2026 | Voluntary industry slowdown | 15-25% | Safety incident, capability threshold reached |
| 2027-2030 | Coordinated international framework | 15-30% | China-US dialogue progress↗, WAICO proposal |
| 2027-2030 | Binding compute governance | 10-20% | Major incident, legislative action |
Looking ahead 1-2 years, the trajectory likely depends on whether AI capabilities approach obviously dangerous thresholds. Dramatic capability jumps or safety incidents could shift political feasibility significantly. Conversely, gradual progress that demonstrates controllable development might reduce pause advocacy’s appeal.
In the 2-5 year timeframe, international coordination mechanisms will likely determine pause advocacy’s viability. China has proposed WAICO↗ (World AI Cooperation Organization) as a framework for coordinating AI governance rules. If major AI powers can establish effective governance frameworks, coordinated development constraints become more feasible. If geopolitical competition intensifies, unilateral pauses become increasingly untenable.
Key Uncertainties
Several critical uncertainties shape the case for pause advocacy:
| Uncertainty | If True (Pro-Pause) | If False (Anti-Pause) | Current Assessment |
|---|---|---|---|
| Alignment is hard | Pause essential to buy research time | Pause unnecessary, current methods sufficient | Unclear; interpretability progress↗ slow but advancing |
| Short timelines | Pause urgent, limited window | Adequate time without constraints | Models improving rapidly; 2-5 year uncertainty |
| International coordination feasible | Global pause achievable | Unilateral pause counterproductive | China-US dialogue↗ shows some progress |
| Racing dynamics dominant | Pause prevents corner-cutting | Aviation industry shows↗ safety can prevail | Competitive pressures strong but not deterministic |
| RSPs work | Voluntary pause-like mechanisms sufficient | Need binding regulation | Anthropic RSP↗ promising but untested at capability thresholds |
The difficulty of AI alignment remains fundamentally unknown—if alignment problems prove tractable with existing techniques, pause advocacy’s necessity diminishes significantly. Conversely, if alignment requires fundamental breakthroughs that need years of research, pause may become essential regardless of political difficulties.
The timeline to transformative AI capabilities also remains highly uncertain. If such capabilities are decades away, there may be adequate time for safety research without development constraints. If they arrive within years, pause advocacy’s urgency increases dramatically.
Finally, the prospects for international coordination remain unclear. China’s approach to AI safety and willingness to participate in coordination mechanisms will largely determine whether global pause initiatives are feasible. The International Dialogues on AI Safety (IDAIS)↗ produced consensus statements including the Ditchley Statement and Beijing Statement establishing specific technological “red lines” including autonomous replication and deception of regulators—suggesting some foundation for cooperation.
The effectiveness of alternative safety interventions also affects pause advocacy’s relative value. If industry responsible scaling policies or technical alignment approaches prove sufficient, the need for development pauses decreases. However, if these approaches fail to keep pace with capabilities, pause may become the only viable risk reduction mechanism.
Public Opinion Landscape
Public support for AI regulation and development pauses has grown substantially, creating a potential political foundation for pause advocacy.
| Poll | Date | Finding | Implication |
|---|---|---|---|
| Gallup/SCSP | September 2025 | 97% agree AI safety should be subject to rules and regulations | Near-universal support for regulation crosses party lines |
| FLI Survey | October 2025 | 64% support superintelligence ban until proven safe; only 5% favor fast development | Strong majority favors precautionary approach |
| Quinnipiac | April 2025 | 69% say government not doing enough to regulate AI; 44% think AI will do more harm than good | Public perceives regulatory gap |
| Rethink Priorities | 2025 | 51% would support AI research pause; 25% oppose | Support outstrips opposition 2:1 |
| Pew Research | 2025 | ~60% of public and 56% of AI experts worry government won’t regulate enough | Expert-public alignment on regulation |
Despite this strong polling support, translating public opinion into policy remains challenging. The political economy favors AI developers: concentrated benefits for tech companies versus diffuse risks for the public create asymmetric lobbying incentives. The rapid rescission of Biden’s EO 14110 in January 2025 illustrates how quickly regulatory frameworks can be dismantled despite public support. However, the Gallup finding that 88% of Democrats and 79% of Republicans and independents support AI safety rules suggests bipartisan potential for regulation if framed appropriately.
Historical Precedents
The Asilomar Conference on Recombinant DNA↗ (1975) provides the most frequently cited precedent for scientific self-regulation and pause. Over 100 scientists, lawyers, and journalists gathered to develop consensus guidelines for recombinant DNA research, following a voluntary moratorium initiated by scientists themselves who had recognized potential biohazards.
| Precedent | Year | Scope | Outcome | Lessons for AI |
|---|---|---|---|---|
| Asilomar Conference↗ | 1975 | Recombinant DNA | Moratorium lifted with safety guidelines; NIH RAC created | Scientists can self-regulate; guidelines enabled rather than blocked research |
| Nuclear weapons moratorium | 1958-1961 | Nuclear testing | Partial Test Ban Treaty (1963) | International coordination possible under existential threat |
| BWC↗ | 1972 | Bioweapons | 187 states parties; no verification regime | Limits of international agreements without enforcement |
| Gain-of-function pause | 2014-2017 | Dangerous pathogen research | Enhanced oversight; research resumed | Temporary pauses can enable safety improvements |
The Asilomar precedent is instructive but imperfect. Key differences from AI:
- Smaller community: Only a few hundred researchers worked on rDNA in 1975 vs. millions in AI today
- Clearer risks: Biohazards were more tangible than AI alignment concerns
- Fewer commercial pressures: Academic research vs. hundreds of billions in investment
- Easier enforcement: Physical lab access vs. distributed compute and open-source models
Notably, the rDNA moratorium lasted only months, and “literally hundreds of millions of experiments, many inconceivable in 1975, have been carried out in the last 30 years without incident”—suggesting that well-designed pauses can enable rather than block beneficial research.
Related Organizations and Approaches
Major organizations advancing pause advocacy include:
- Future of Life Institute↗: Organized the 2023 open letter↗ with 30,000+ signatures
- PauseAI↗: Grassroots organization founded May 2023; protests in 13+ countries; claims 70% of Americans support pause
- Center for AI Safety↗: Research and policy organization; published statement on AI risk↗ signed by leading researchers
- Academic researchers: Stuart Russell, Yoshua Bengio, Geoffrey Hinton have lent intellectual credibility to pause arguments
Complementary approaches include compute governance initiatives that could enable pause enforcement, international coordination efforts that could make pauses stable, and responsible scaling policies that implement conditional pause-like mechanisms. Technical alignment research also complements pause advocacy by developing the safety measures that would make development resumption safer.
International Governance Developments
The global governance landscape for AI has evolved rapidly, with both opportunities and obstacles for pause advocacy.
| Initiative | Date | Key Features | Pause Relevance |
|---|---|---|---|
| UN Scientific Committee on AI | August 2025 | Independent scientific body for AI assessment | Could provide “broad scientific consensus” mechanism |
| UN Global Dialogue on AI Governance | August 2025 | Inclusive international governance forum | Platform for pause coordination |
| China Global AI Governance Action Plan | July 2025 | Multilateral cooperation framework | Signals Chinese openness to coordination |
| US AI Action Plan | July 2025 | Deregulation, global competitiveness focus | Opposes pause; favors speed |
| AI Red Lines Campaign | September 2025 | 200+ signatories including 10 Nobel laureates | UN-focused advocacy for “globally unacceptable AI risks” |
The UN initiatives launched in August 2025 grew from the “Governing AI for Humanity” report and aim to “kickstart a much more inclusive form of international governance.” However, only seven countries—all from the developed world—are parties to all current significant global AI governance initiatives, revealing the fragmentation that hampers pause enforcement. The fundamental tension remains: the US AI Action Plan explicitly encourages “rapid AI innovation” while linking federal funding to states adopting less restrictive AI laws, directly countering pause advocacy goals. Meanwhile, China’s proposal for a World AI Cooperation Organization suggests potential for multilateral frameworks, though skeptics note both powers view AI as a strategic asset and resist international limits.
Sources
- Pause Giant AI Experiments: An Open Letter↗ - Future of Life Institute (March 2023)
- What’s changed since the “pause AI” letter six months ago?↗ - MIT Technology Review (September 2023)
- PauseAI Organization↗ - Official website
- Training Compute Thresholds: Features and Functions in AI Regulation↗ - GovAI (2024)
- Anthropic Responsible Scaling Policy↗ - Anthropic (2024)
- Is China Serious About AI Safety?↗ - AI Frontiers (2024)
- How China Views AI Risks and What to Do About Them↗ - Carnegie Endowment (2025)
- The Flight to Safety-Critical AI: Lessons from the Aviation Industry↗ - Berkeley CLTC (2024)
- AI Race Dynamics↗ - AI Safety, Ethics, and Society Textbook
- Asilomar Conference on Recombinant DNA↗ - Wikipedia
- Executive Order 14110↗ - Wikipedia
- Statement on Superintelligence - Future of Life Institute (October 2025)
- Open letter calls for prohibition on superintelligent AI - CyberScoop (October 2025)
- 60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge - TIME (August 2025)
- The U.S. Public Wants Regulation of Superhuman AI - Future of Life Institute (2025)
- Americans Prioritize AI Safety and Data Security - Gallup (September 2025)
- How the US Public and AI Experts View Artificial Intelligence - Pew Research (April 2025)
- The UN’s new AI governance bodies explained - World Economic Forum (October 2025)
AI Transition Model Context
Pause advocacy affects the AI Transition Model through multiple factors:
| Factor | Parameter | Impact |
|---|---|---|
| Transition Turbulence | Racing Intensity | Coordinated slowdown reduces race to bottom on safety |
| Misalignment Potential | Safety-Capability Gap | Provides 2-5 additional years for safety research if achieved |
| Civilizational Competence | International Coordination | Requires unprecedented global coordination to be effective |
Pause advocacy has an estimated 15-40% probability of meaningful policy implementation by 2030; its effectiveness depends on international coordination and enforcement mechanisms.
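As a back-of-the-envelope check on these parameters, multiplying the endpoints of the implementation-probability range by the endpoints of the time-gained range (treating the estimates as independent, which is a simplifying assumption) gives an expected gain well below the headline 2-5 years:

```python
# Back-of-the-envelope expected safety-research time gained, combining the
# 15-40% implementation probability with the 2-5 years gained if achieved.
# Multiplying range endpoints is a simplifying assumption.

p_low, p_high = 0.15, 0.40        # probability of meaningful implementation by 2030
years_low, years_high = 2.0, 5.0  # years gained if a pause is achieved

print(f"Expected time gained: {p_low * years_low:.1f} "
      f"to {p_high * years_high:.1f} years")  # 0.3 to 2.0 years
```

Even at the optimistic end, the expected gain is about two years, which is why proponents emphasize enforcement mechanisms and international coordination that would raise the probability term rather than the conditional payoff.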