When Will AGI Arrive?
AGI Timeline Debate
Perhaps the most consequential forecasting question in history: When will we develop AI systems that match or exceed human-level intelligence across virtually all domains?
The answer determines how much time we have to solve alignment, whether to prioritize AI safety over other causes, and how urgently we need governance frameworks.
Defining AGI
The challenge: there is no consensus definition of AGI.
Common criteria:
- Can perform any intellectual task humans can
- Can learn new tasks quickly with minimal data
- Generalizes broadly across domains
- Autonomous planning and goal-pursuit
- Economic productivity matching human workers
Proxy metrics:
- Pass rigorous expert-level tests across domains
- Outperform median human on most economically valuable tasks
- Can do the job of an AI researcher (enabling recursive self-improvement)
- $100B+ annual economic value
Note: Debate conflates different concepts:
- Human-level AI (matches median human)
- Transformative AI (drastically changes world)
- Artificial General Intelligence (truly general intelligence)
- Superintelligence (exceeds all humans)
Timeline Camps
Key Forecasts and Positions
When different people and organizations expect AGI.
Key Cruxes
Key questions on which timeline forecasts hinge.
What Would Update Timelines?
Evidence for shorter timelines:
- GPT-5/6 showing qualitative leap in reasoning and planning
- Successful scaling past data limits
- AI substantially accelerating AI research
- Solving ARC benchmark or similar generalization tests
- Continued exponential capability gains
Evidence for longer timelines:
- Scaling 100x with only incremental improvements
- Hitting hard data or compute walls
- Persistent failures on key capabilities despite scale
- Need for architectural breakthroughs that don’t arrive
- Progress slowing on key benchmarks
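In Bayesian terms, each item above is evidence that should shift the odds on short timelines. A minimal sketch of a single such update, where the prior and the likelihood ratio are purely illustrative choices, not anyone's actual forecast:

```python
# Minimal Bayesian update sketch. The prior and likelihood ratio are
# illustrative assumptions, not published forecasts.
def update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.20  # assumed P(AGI by 2030) before the observation
# Suppose "AI substantially accelerating AI research" is judged 3x more
# likely in short-timeline worlds than in long-timeline worlds.
posterior = update(prior, likelihood_ratio=3.0)
print(f"P(AGI by 2030) after observing: {posterior:.2f}")  # ~0.43
```

The same machinery runs in reverse: an observation judged 3x more likely under long timelines (say, scaling 100x with only incremental gains) would push the probability down to about 0.08.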
Historical Track Record
Past AGI predictions:
- 1958: “Within ten years a digital computer will be the world’s chess champion” - Herbert Simon and Allen Newell
- 1965: “Machines will be capable, within twenty years, of doing any work a man can do” - Herbert Simon
- 1970: “In from three to eight years we will have a machine with the general intelligence of an average human being” - Marvin Minsky
- 1980s: Expert systems will lead to AGI by 2000
- 2000s: AGI by 2020
Pattern: Always 20-30 years away. Should we believe this time is different?
Arguments it’s different now:
- Have empirical scaling laws, not just speculation
- Concrete progress on benchmarks and capabilities
- Massive investment and resources
- Clear path forward (scaling) rather than reliance on unknown breakthroughs
Arguments it’s the same:
- Still don’t understand intelligence
- Benchmarks may not capture true intelligence
- Economic and technical obstacles remain
- Same overconfidence as past predictions
The Distribution Shape
Most forecasters have heavy-tailed distributions over AGI arrival dates:
Short tail (optimistic):
- 5-10% chance: AGI by 2027
- 20-25% chance: AGI by 2030
- Driven by: Scaling working, rapid progress, no blockers
Central mass:
- 50% chance: AGI by 2035-2040
- Most likely scenario: Continued progress with some obstacles
Long tail (pessimistic):
- 20-30% chance: AGI after 2050
- 5-10% chance: Never with current paradigms
- Driven by: Fundamental limits, need for new paradigms
Wide uncertainty is rational given:
- Deep uncertainty about scaling limits
- Unknown unknowns
- Dependence on definition
- Poor historical track record of AI forecasts
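One way to make a forecast like this concrete is to sample from it. The sketch below encodes the percentiles above as a lognormal over years-until-AGI plus a point mass for “never”; the specific parameters are illustrative, tuned roughly to the numbers listed:

```python
# Monte Carlo sketch of the heavy-tailed forecast described above:
# a lognormal over "years until AGI" plus a point mass for "never
# under current paradigms". All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
P_NEVER = 0.07  # ~5-10% "never" mass from the long tail

years = rng.lognormal(mean=np.log(11), sigma=1.0, size=N)  # median ~11 years
never = rng.random(N) < P_NEVER
arrival = np.where(never, np.inf, 2025 + years)

for y in (2027, 2030, 2040, 2050):
    print(f"P(AGI by {y}) ~ {(arrival <= y).mean():.2f}")
```

Writing the distribution down this way forces the tails to be explicit: the same parameters that put roughly 20% on AGI by 2030 also leave roughly a quarter of the mass after 2050.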
Implications for Different Timelines
If AGI by 2027-2030:
- Extremely urgent to solve alignment NOW
- Current safety research may be too slow
- Need immediate governance action
- Race dynamics critical concern
- May not get warning signs
If AGI by 2030-2040:
- Time to iterate on safety
- Can learn from weaker systems
- Governance frameworks can develop
- Safety research can mature
- More coordination opportunities
If AGI after 2050:
- Safety research can be thorough
- Governance can be careful and democratic
- Current hype may be overblown
- Other causes may be higher priority
- Different paradigms may emerge
Economic vs Philosophical AGI
An important distinction that is often blurred:
Economically transformative AI:
- Automates most jobs
- Generates trillions in value
- Fundamentally changes society
- Might come soon (2027-2035)
- Doesn’t require “general” intelligence
Philosophically general intelligence:
- True understanding across all domains
- Quick learning like humans
- Causal reasoning and abstraction
- Might require paradigm shifts
- Could be much further (2040+)
Why it matters:
- Economic transformation could happen without “AGI”
- Most impacts come from economic transformation
- But existential risk might require true AGI
- Definitions determine timeline estimates
The Compute Bottleneck
Different views on compute as the limiting factor:
Optimistic: Compute is abundant
- Moore’s law continues
- Efficiency improvements ongoing
- Cloud compute scales easily
- No physical limits in the near term
Pessimistic: Compute limits soon
- Training costs becoming prohibitive ($1B+)
- Energy and chip constraints
- Economic feasibility limits
- Can’t scale 1000x more
Resolution matters:
- If compute limits: Longer timelines, regulated by economics
- If compute abundant: Timelines depend on algorithmic progress
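The pessimistic case is ultimately arithmetic. A back-of-the-envelope sketch, where the frontier cost and the per-generation hardware efficiency gain are both illustrative assumptions:

```python
# Back-of-the-envelope sketch of the "can't scale 1000x" argument.
# Both figures below are illustrative assumptions, not measured values.
frontier_cost_usd = 1e9      # assumed cost of a frontier training run
gain_per_generation = 2.0    # assumed cost-efficiency gain per ~2-year hardware generation

for scale in (10, 100, 1000):
    naive = frontier_cost_usd * scale            # scaling at today's prices
    discounted = naive / gain_per_generation**2  # after two hardware generations (~4 years)
    print(f"{scale:>4}x compute: ~${naive:,.0f} today, ~${discounted:,.0f} in ~4 years")
```

Under these assumptions a 1000x scale-up still costs hundreds of billions of dollars even after four years of hardware progress, which is why the pessimists treat economics, not physics, as the binding constraint.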
The China Factor
How does China affect timelines?
Arguments China accelerates:
- Competition drives urgency
- Massive investment
- Less safety caution
- Different approaches might work
Arguments China doesn’t change much:
- US still ahead on capabilities
- Chinese models lag 1-2 years
- Limited to similar approaches
- US export controls constrain compute access
Strategic implications:
- If China racing: Pressure for short timelines
- If US leads comfortably: Can afford to be cautious
- Matters for regulation and safety investment
Recursive Self-Improvement
Wild card: AI accelerating its own development
If happens soon:
- Could dramatically shorten timelines
- “Singularity” scenario
- Hard to predict outcomes
- Very fast takeoff possible
If doesn’t happen:
- Progress continues at current pace
- More time to prepare
- Gradual development allows adjustment
Current status:
- AI assists with coding and research
- But not yet transformative acceleration
- Unclear if/when recursive improvement kicks in
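The gap between these scenarios can be captured in a toy growth model: let research speed scale with current capability, dC/dt = r * C^alpha. If alpha > 1, growth runs away in finite time (fast takeoff); alpha = 1 gives ordinary exponential growth; alpha < 1 gives diminishing returns. A sketch with purely illustrative parameters:

```python
# Toy takeoff model: research speed scales with current capability,
# dC/dt = r * C**alpha. All parameters are illustrative.
def simulate(alpha, r=0.3, c0=1.0, dt=0.01, horizon=30.0, cap=1e6):
    c, t = c0, 0.0
    while t < horizon and c < cap:
        c += r * c**alpha * dt  # forward-Euler step
        t += dt
    return t, c

for alpha in (0.7, 1.0, 1.3):
    t, c = simulate(alpha)
    print(f"alpha={alpha}: capability {c:.3g} at t={t:.1f} years")
```

With alpha = 1.3 the model hits the cap in roughly a decade; with alpha = 0.7 it grows less than a hundredfold in thirty years. Everything hinges on a parameter we cannot currently measure.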
Base Rates and Reference Classes
What should we compare AGI development to?
Reference class: Major technologies
- Electricity: 50 years from invention to transformation
- Computers: 40 years from invention to ubiquity
- Internet: 20 years from invention to transformation
- Suggests: Long timelines (decades)
Reference class: Exponential technologies
- Semiconductors: Exponential for 50+ years
- Genomics: Exponential progress continues
- Suggests: Continued rapid progress possible
Reference class: Breakthroughs
- Manhattan Project: 3 years when focused
- Apollo Program: 8 years with resources
- Suggests: Massive resources can compress timelines
Problem: AGI is unique, and it is unclear which reference class applies.
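One response is to blend the reference classes rather than pick one. A sketch of a weighted mixture, where both the per-class estimates (read roughly off the lists above) and the weights are subjective assumptions:

```python
# Sketch: blend reference-class estimates into a single prior.
# "Decades to transformation" values are rough readings of the lists
# above; the weights are subjective assumptions.
classes = {
    "major technologies":       (4.0, 0.4),  # electricity, computers: ~40-50 years
    "exponential technologies": (2.0, 0.3),  # sustained rapid progress
    "crash programs":           (0.5, 0.3),  # Manhattan, Apollo: focused effort
}

total_weight = sum(w for _, w in classes.values())
decades = sum(d * w for d, w in classes.values()) / total_weight
print(f"Weighted point estimate: ~{decades * 10:.0f} years to transformation")
```

The point estimate matters less than the exercise: shifting weight between classes moves the answer by decades, which is exactly the sensitivity the "AGI is unique" objection predicts.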