Long-Timelines Technical Worldview
Core belief: Transformative AI is further away than many think. This gives us time for careful, foundational research rather than rushed solutions.
Timeline estimate: Significantly longer than short-timelines views, enabling different strategic approaches.
| Source | Estimate | Date |
|---|---|---|
| Long-timelines view | 20-40+ years | — |
Long-timelines view: Based on skepticism that current progress will continue at its present rate
P(doom) estimate: Lower because there is time to develop solutions, while still taking the risk seriously.
| Source | Estimate | Date |
|---|---|---|
| Long-timelines view | 5-20% | — |
Long-timelines view: Time for careful research reduces risk
Overview
The long-timelines technical worldview holds that transformative AI is decades away rather than years. This isn’t mere optimism or wishful thinking - it’s based on specific views about the difficulty of achieving human-level intelligence, skepticism about current paradigms, and historical patterns in AI progress.
This extended timeline fundamentally changes strategic priorities. Instead of rushing to patch current systems or advocating for immediate pause, long-timelines researchers can pursue deep, foundational work that might take decades to bear fruit.
Key distinction: This is not the same as the optimistic worldview. Long-timelines researchers take alignment seriously and don’t trust current techniques to scale. They’re not optimistic that alignment is easy - they’re skeptical that timelines are short.
Characteristic Beliefs
| Crux | Typical Long-Timelines Position |
|---|---|
| Timelines | AGI 20-40+ years away |
| Paradigm | May need new paradigms (not just scaling) |
| Takeoff | Slow and observable |
| Alignment difficulty | Hard, but we have time |
| Current research relevance | Uncertain whether current LLMs inform future AI |
| Deceptive alignment | Relevant but not imminent |
| Coordination | More feasible with longer timelines |
| P(doom) | 5-20% |
Timeline Arguments
Several independent arguments support longer timelines:
1. Intelligence is harder than it looks
Current AI systems are impressive but lack:
- Robust generalization across domains
- Abstract reasoning and planning
- Common sense understanding
- Ability to learn from limited data like humans
- Consciousness or genuine understanding (debated whether necessary)
Each of these might require fundamental breakthroughs.
2. Historical track record
AI predictions have consistently been overoptimistic:
- 1960s: “AI in 20 years”
- 1980s: Expert systems would lead to AGI
- 2000s: Assorted approaches to AGI that failed to scale
- Each paradigm hit barriers
Current deep learning might hit similar walls.
3. Scaling might not be enough
While scaling has driven recent progress:
- May hit diminishing returns
- Compute growth might slow (Moore’s law slowing)
- Data availability has limits
- Algorithmic efficiency gains are uncertain
- Qualitative jumps might require new ideas
4. Economic and institutional barriers
Even if technically feasible:
- Compute costs are enormous and growing
- Energy requirements may be prohibitive
- Regulatory barriers might emerge
- Social and political pushback
- Economic incentives might shift
Takeoff Speed
Long-timelines researchers typically expect slow takeoff:
Gradual progress: Incremental improvements across many years
- Can observe AI getting more capable
- Time to respond to warning signs
- Opportunities to iterate on alignment
Multiple bottlenecks: Progress limited by many factors
- Hardware constraints
- Data availability
- Algorithmic insights
- Integration challenges
- Social and regulatory adaptation
Continuous deployment: AI capabilities integrated gradually
- Society adapts incrementally
- Institutions evolve alongside AI
- Norms and regulations co-develop
This contrasts sharply with fast takeoff scenarios where recursive self-improvement leads to rapid capability explosion.
Key Proponents and Perspectives
Academic Researchers
Rodney Brooks (former MIT, iRobot co-founder)
“We’re not going to see human-level AI in my lifetime, and I’m not that old.”
Arguments:
- Intelligence requires embodiment and interaction
- Current systems lack common sense
- Fundamental architectural changes needed
- Capabilities are overestimated based on narrow-task performance
Gary Marcus (NYU, AI researcher)
Emphasizes limitations of current deep learning:
- Brittleness and lack of robustness
- Need for hybrid approaches
- Generalization failures
- Understanding vs. pattern matching
Melanie Mitchell (Santa Fe Institute)
Research on analogical reasoning and abstraction:
- Current AI lacks abstract reasoning
- Analogy-making is fundamental to intelligence
- Deep learning alone insufficient
Technical Skeptics
Many researchers in traditional AI, cognitive science, and neuroscience hold longer timelines based on:
- Complexity of biological intelligence
- Gap between narrow and general intelligence
- Unsolved problems in cognition
Some Alignment Researchers
Not all alignment researchers believe in short timelines:
- Focus on foundational theory that takes time
- Skeptical of current LLM relevance to AGI
- Prefer deep solutions over quick patches
Priority Approaches
Given long-timelines beliefs, research priorities differ from short-timelines views:
1. Agent Foundations
Deep theoretical work on fundamental questions:
Decision theory (a toy sketch follows this list):
- How should rational agents behave?
- Logical uncertainty
- Updateless decision theory
- Embedded agency
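As a flavor of what this work formalizes, here is a minimal sketch of Newcomb’s problem, a standard decision-theory test case. The payoffs and predictor accuracy are the conventional illustrative numbers, and the two functions are toy renderings of evidential and causal decision theory, not any researcher’s actual formalism:

```python
# Toy Newcomb's problem (standard illustrative numbers). Evidential decision
# theory (EDT) conditions on the action; causal decision theory (CDT) holds
# the already-made prediction fixed. They disagree, which is one reason
# agent foundations research treats decision theory as unsolved.

ACCURACY = 0.99  # hypothetical predictor accuracy

PAYOFFS = {
    ("one-box", "predicted-one-box"): 1_000_000,
    ("one-box", "predicted-two-box"): 0,
    ("two-box", "predicted-one-box"): 1_001_000,
    ("two-box", "predicted-two-box"): 1_000,
}

def evidential_eu(action):
    # Choosing an action is evidence the predictor foresaw that action.
    other = "two-box" if action == "one-box" else "one-box"
    return (ACCURACY * PAYOFFS[(action, f"predicted-{action}")]
            + (1 - ACCURACY) * PAYOFFS[(action, f"predicted-{other}")])

def causal_eu(action, p_predicted_one_box):
    # The action cannot cause the past prediction, so hold it fixed.
    return (p_predicted_one_box * PAYOFFS[(action, "predicted-one-box")]
            + (1 - p_predicted_one_box) * PAYOFFS[(action, "predicted-two-box")])

for a in ("one-box", "two-box"):
    print(f"{a}: EDT {evidential_eu(a):,.0f}  CDT {causal_eu(a, 0.5):,.0f}")
# EDT favors one-boxing (~990,000 vs ~11,000); CDT favors two-boxing
# for any fixed prediction probability.
```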
Value alignment theory:
- What does it mean for an agent to have values?
- How can values be specified?
- Corrigibility and interruptibility
- Utility function construction
Ontological crises:
- How do agents update when their world model changes fundamentally?
- Preserving values across paradigm shifts
Advantage of long timelines: This work might take 10-20 years to mature, which is fine if AGI is 30+ years away.
2. Interpretability and Understanding
Deep understanding of how AI systems work:
Mechanistic interpretability (a toy sketch follows this list):
- Reverse-engineer neural networks
- Understand individual neurons and circuits
- Build comprehensive models of model internals
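For flavor, here is a toy sketch of the most elementary version of this question - which input most excites a single neuron? The weights are random and the setup is invented; real mechanistic interpretability works on trained models, where the answer is not analytic:

```python
import numpy as np

# Toy illustration (random weights, not a real model): one elementary
# interpretability move is asking "which input most activates this neuron?"
# For a linear unit with ReLU and zero bias the answer is analytic: the
# unit-norm input aligned with the neuron's weight vector.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # 8 input features -> 4 neurons

def activations(x):
    return np.maximum(0.0, x @ W)    # ReLU layer, zero bias

neuron = 2
w = W[:, neuron]
preferred = w / np.linalg.norm(w)    # max-activating unit-norm input
random_x = rng.normal(size=8)
random_x /= np.linalg.norm(random_x)

print("preferred direction:", activations(preferred)[neuron])  # == ||w||
print("random direction:   ", activations(random_x)[neuron])   # <= ||w||
```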
Theoretical foundations:
- Why do neural networks generalize?
- What are the fundamental limits?
- Mathematical theory of deep learning
Conceptual understanding:
- What are models actually learning?
- Representations and abstractions
- Transfer and generalization
Advantage of long timelines: Can build interpretability tools gradually, improving them over decades.
3. Foundational Research
First-principles approaches without time pressure:
Alternative paradigms:
- Explore architectures beyond current deep learning
- Investigate hybrid systems
- Study biological intelligence for insights
Robustness and verification:
- Formal methods for AI
- Provable safety properties
- Mathematical guarantees
Comprehensive testing:
- Extensive empirical research
- Long-term studies of AI behavior
- Edge case exploration
Advantage of long timelines: Can pursue high-risk, high-reward research without urgency.
4. Field Building
Growing the community for long-term impact:
Academic infrastructure:
- University departments and programs
- Curriculum development
- Textbooks and educational materials
Talent pipeline:
- Undergraduate and graduate training
- Interdisciplinary programs
- Career paths in alignment
Research ecosystem:
- Conferences and workshops
- Journals and publications
- Collaboration networks
Advantage of long timelines: Field-building pays off over decades.
5. Careful Empirical Work
Thorough investigation of current systems:
Understanding limitations:
- Where do current approaches fail?
- What are fundamental vs. contingent limits?
- Generalization studies
Alignment properties:
- How do current alignment techniques work?
- What are their scaling properties?
- When do they break down?
Transfer studies:
- Will current insights transfer to future AI?
- What’s paradigm-specific vs. general?
Advantage of long timelines: Can be thorough rather than rushed.
Deprioritized Approaches
Given long-timelines beliefs, some approaches are less urgent:
| Approach | Why Less Urgent |
|---|---|
| Pause advocacy | Less immediate urgency |
| RLHF improvements | May not transfer to future paradigms |
| Current-system safety | Systems may not be path to AGI |
| Race dynamics | More time reduces racing pressure |
| Quick fixes | Can pursue robust solutions instead |
Note: “Less urgent” doesn’t mean “useless” - it reflects a different prioritization given these beliefs.
Strongest Arguments
1. Historical Overoptimism
AI predictions have been wrong for 60+ years:
1960s predictions: “Human-level AI in 20 years”
- Reality: Hit combinatorial explosion, “AI winter”
1980s expert systems: “Expert systems will transform economy”
- Reality: Brittleness, maintenance costs, another AI winter
Early 2000s: Various approaches to AGI
- Reality: Most failed to scale
Pattern: Each generation thinks it is on the path to AGI. Each has been wrong so far.
Implication: Current optimism about LLMs scaling to AGI might be similarly misplaced.
2. Current Systems’ Fundamental Limitations
Despite impressive performance, current AI lacks:
Robust generalization (a toy sketch follows this list):
- Adversarial examples fool vision systems
- Out-of-distribution failures
- Brittle in novel situations
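A toy version of the adversarial-examples point, following the well-known “linearity” explanation of adversarial brittleness (random weights and invented numbers, not a real vision system):

```python
import numpy as np

# Toy illustration of adversarial brittleness: for a linear scorer, a
# perturbation of size eps per feature in the direction sign(w) shifts the
# score by eps * ||w||_1, which in high dimensions can flip the decision
# even though eps is tiny for each individual feature.

rng = np.random.default_rng(1)
d = 1000                                   # high-dimensional input
w = rng.choice([-1.0, 1.0], size=d) * 0.1  # invented weights
x = rng.normal(size=d)

score = w @ x
eps = 0.05                                 # small per-feature perturbation
x_adv = x - eps * np.sign(w) * np.sign(score)  # push against the decision

print("clean score:", round(score, 3))
print("adv score:  ", round(w @ x_adv, 3))  # shifted by eps * sum|w| = 5.0
```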
True understanding:
- Pattern matching vs. comprehension
- Lack of world models
- No common sense reasoning
Efficient learning:
- Require massive data (humans learn from few examples)
- Don’t transfer knowledge well across domains
- Can’t explain their reasoning reliably
Abstract reasoning:
- Struggle with novel problems requiring insight
- Limited analogical reasoning
- Poor at systematic generalization
These might require fundamental breakthroughs, not just scaling.
3. Scaling Has Limits
Current progress relies on scaling, but:
Compute constraints:
- Energy costs grow steeply with scale
- Chip production has physical limits
- Economic viability uncertain at extreme scales
Data constraints:
- Frontier models already train on much of the public internet
- Synthetic data has quality issues
- Diminishing returns from more data
Algorithmic efficiency:
- Gains are uncertain and irregular
- May hit fundamental limits
- Efficiency improvements are hard to predict
Returns diminishing (a toy calculation follows this list):
- Each order of magnitude improvement costs more
- Performance gains may be slowing
- Knee of the curve might be near
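To make “diminishing returns” concrete, here is a toy calculation in the shape of published scaling laws. The constants are invented, not fitted to any real model; whether real curves keep this shape is exactly what the scaling debate is about:

```python
# Toy power-law scaling curve. Published scaling-law fits have the form
# loss ~ A * C**(-B) + L_inf; the constants below are invented for
# illustration. The point: each additional 10x of compute buys a
# geometrically shrinking loss improvement while costing 10x more.

A, B, L_INF = 10.0, 0.05, 1.7

def loss(compute_flop):
    return A * compute_flop ** (-B) + L_INF

prev = None
for exp in range(20, 29):          # 1e20 .. 1e28 FLOP, hypothetical range
    l = loss(10.0 ** exp)
    gain = "" if prev is None else f"  gain vs last 10x: {prev - l:.3f}"
    print(f"compute 1e{exp}: loss {l:.3f}{gain}")
    prev = l
```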
4. Intelligence Requires More Than Current Approaches
Cognitive science and neuroscience suggest:
Embodiment: Intelligence might require physical interaction with world
Development: Human intelligence develops through years of experience
Architecture: Brain has specialized structures deep learning lacks
Mechanisms: Biological learning uses mechanisms we don’t understand
Consciousness: Role of consciousness in intelligence unclear
If any of these are necessary, current approaches are missing key ingredients.
5. Slow Takeoff Is Likely
Multiple bottlenecks slow progress:
Integration challenges: Deploying AI into real systems takes time
Social adaptation: Society needs to adapt to new capabilities
Institutional barriers: Regulation, cultural resistance, coordination
Economic constraints: Funding and resources are limited
Technical obstacles: Each capability advance requires solving multiple problems
On this view there is no strong reason to expect rapid discontinuities - smooth progress is the default.
6. Time for Solutions Reduces Risk
Longer timelines mean:
Iterative improvement: Can refine alignment techniques over decades
Warning signs: Early systems give us data about problems
Coordination: More time for international cooperation
Institution building: Governance can develop alongside technology
Research maturation: Alignment solutions can be thoroughly tested
P(doom) is lower because we have time to get it right.
Main Criticisms and Counterarguments
“This Is Just Wishful Thinking”
Critique: Long-timelines view is motivated by hoping for more time, not actual evidence.
Response:
- Based on specific technical arguments, not hope
- Historical track record supports skepticism
- Many long-timelines people still take risk seriously
- If anything, short timelines might be motivated by excitement/fear
“Might Miss Critical Window”
Critique: If wrong about timelines, the current window to shape AI development is missed.
Response:
- Can have uncertainty and hedge bets
- Foundational work pays off even in shorter timelines
- Better to have robust solutions late than rushed solutions now
- Can shift priorities if evidence changes
“Current Progress Is Different”
Critique: Unlike past failed approaches, deep learning and scaling are actually working. This time is different.
Response:
- Every generation thinks “this time is different”
- Deep learning has made progress but also has clear limits
- Scaling can’t continue indefinitely
- Path from current systems to AGI remains unclear
“LLMs Show Emergent Capabilities”
Critique: Large language models show unexpected emergent abilities, suggesting scaling might reach AGI.
Response:
- “Emergent” capabilities are often smooth underlying trends that a discontinuous metric makes look sudden (see the toy sketch after this list)
- Still lack robust reasoning, planning, and understanding
- Emergence in narrow tasks doesn’t imply general intelligence
- May hit ceiling well below human-level
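A minimal sketch of the metric-artifact point (all numbers invented): if per-token accuracy improves smoothly with scale, exact-match accuracy on multi-token answers can still appear to jump discontinuously:

```python
# Toy model (invented numbers): smooth per-token accuracy produces an
# apparently discontinuous "emergent" jump under an exact-match metric,
# because exact match on an L-token answer needs every token correct.

L = 10  # answer length in tokens

print("per-token acc -> exact-match acc (p**L)")
for p in [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]:
    print(f"    {p:.2f}      ->      {p ** L:.4f}")
# Per-token accuracy climbs steadily, but exact match sits near zero until
# late and then shoots up - "emergence" created by the choice of metric.
```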
“Moravec’s Paradox Resolved”
Critique: Deep learning solved perception problems thought to be hardest (vision, language). The rest will follow.
Response:
- Perception was hard for symbolic AI, not necessarily hardest overall
- Reasoning and planning might be fundamentally harder
- “Harder” tasks (like abstract reasoning) remain difficult for current AI
- Different problems might require different solutions
“Missing Urgency”
Critique: Even if timelines are long, we should work urgently to be safe.
Response:
- Urgency doesn’t mean rushing to bad solutions
- Careful work is more valuable than hasty work
- Can be thorough without being complacent
- False urgency leads to wasted effort
“Paradigm Shifts Can Be Rapid”
Critique: Even if deep learning isn’t enough, sudden breakthroughs could change timelines overnight.
Response:
- Breakthroughs still require years to commercialize
- Integration takes time even if insight is sudden
- Most progress is gradual, not revolutionary
- Can update if breakthrough occurs
What Evidence Would Change This View?
Long-timelines researchers would update toward shorter timelines given evidence like the following (a toy Bayesian calculation at the end of this section makes “updating” concrete):
Empirical Evidence
Dramatic capability jumps:
- Sudden broad improvements in reasoning, planning, generalization
- Systems achieving human-level performance across many domains
- Robust transfer across very different tasks
Continued scaling success:
- Scaling continuing to yield improvements for many more orders of magnitude
- No signs of diminishing returns
- Compute becoming dramatically cheaper
Novel architectures:
- New approaches that clearly address current limitations
- Hybrid systems combining strengths
- Brain-inspired architectures that work
Theoretical Breakthroughs
Understanding intelligence:
- Clear theory of general intelligence
- Demonstrations that current approach can reach AGI
- Proofs that remaining gaps are small
Solving key problems:
- Robust generalization achieved
- Common sense reasoning solved
- Transfer learning fully working
Economic and Social Factors
Massive investment:
- 10x or 100x increase in AI investment
- Manhattan Project-scale efforts
- International cooperation to accelerate
Reduced barriers:
- Energy costs plummeting
- Chip production scaling dramatically
- Regulatory barriers cleared
Leading Indicators
AI researchers converging:
- Expert consensus shifting to shorter timelines
- Specific path to AGI becoming clear
- Prediction markets moving significantly
Internal evidence:
- Labs demonstrating clear progress toward AGI
- Roadmaps becoming clearer and more credible
- Intermediate milestones being hit faster than expected
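To make “updating” concrete, here is a toy Bayesian calculation; every probability is invented for illustration:

```python
# Toy Bayes update over timelines (all numbers invented). Condition a prior
# over two hypotheses on one piece of evidence, e.g. observing a dramatic,
# broad capability jump.

prior = {"short timelines": 0.15, "long timelines": 0.85}
# Invented likelihoods: P(observe a broad capability jump | hypothesis).
likelihood = {"short timelines": 0.40, "long timelines": 0.05}

p_evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / p_evidence for h in prior}

for h in prior:
    print(f"{h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.2f}")
# 0.15 -> ~0.59 credence in short timelines after one strong observation:
# large swings are possible, which is why it pays to specify in advance
# what evidence would move you.
```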
Implications for Action and Career
If you hold long-timelines beliefs, strategic implications include:
Research Career Paths
Academic research:
- PhD programs in AI alignment
- Theoretical research with long time horizons
- Building foundational knowledge
Deep technical work:
- Agent foundations
- Interpretability theory
- Formal verification
- Mathematical approaches
Interdisciplinary work:
- Cognitive science and AI
- Neuroscience-inspired AI
- Philosophy of mind and AI
Advantage: Can pursue questions requiring 5-10 year research programs
Field Building
Education and training:
- Develop curricula
- Write textbooks
- Train next generation
Community building:
- Organize conferences
- Build research networks
- Create institutions
Public scholarship:
- Explain AI alignment to broader audiences
- Attract talent to the field
- Build prestige and legitimacy
Advantage: Field-building investments pay off over decades
Careful Empirical Work
Current systems research:
- Thorough investigation of limitations
- Understanding what transfers to future systems
- Building tools and methodologies
Comprehensive testing:
- Long-term studies
- Edge case exploration
- Robustness analysis
Advantage: Can be thorough rather than rushed
Strategic Positioning
Flexibility:
- Build skills that remain valuable across scenarios
- Create options for different timeline outcomes
- Hedge uncertainty
Sustainable pace:
- Marathon, not sprint
- Avoid burnout from false urgency
- Build career that lasts decades
Leverage points:
- Focus on work with long-term impact
- Build infrastructure others can use
- Create knowledge that persists
Internal Diversity
The long-timelines worldview includes significant variation:
Timeline Estimates
Medium (20-30 years): More cautious, still somewhat urgent
Long (30-50 years): Standard long-timelines position
Very long (50+ years): Highly skeptical of current approaches
Risk Assessment
Moderate risk, long timelines: Still concerned but have time
Low risk, long timelines: Technical problem is tractable with time
High risk, long timelines: Hard problem, fortunately have time
Research Focus
Pure theory: Agent foundations, decision theory
Applied theory: Interpretability, verification
Empirical: Understanding current systems
Hybrid: Combination of approaches
Attitude Toward Current Work
Skeptical: Current LLM work likely irrelevant to AGI
Uncertain: Might be relevant, worth studying
Engaged: Working on current systems while believing AGI is far
Relationship to Other Worldviews
vs. Doomer
Disagreements:
- Fundamental disagreement on timelines
- Different urgency levels
- Different research priorities
Agreements:
- Alignment is hard
- Current techniques may not scale
- Take risk seriously
vs. Optimistic
Disagreements:
- Long-timelines folks more worried about alignment difficulty
- Don’t trust market to provide safety
- More skeptical of current approaches
Agreements:
- Have time for solutions
- Catastrophe is not inevitable
- Research can make progress
vs. Governance-Focused
Disagreements:
- Less urgency about policy
- More focus on technical foundations
- Different time horizons
Agreements:
- Multiple approaches needed
- Coordination is valuable
- Institutions matter
Practical Considerations
Career Planning
Skill development: Can pursue deep expertise
Network building: Relationships develop over years
Institution building: Create enduring organizations
Work-life balance: Sustainable pace over decades
Research Strategy
Patient capital: Pursue high-risk, long-horizon research
Foundational work: Build knowledge infrastructure
Replication and verification: Be thorough
Documentation: Create resources for future researchers
Community Norms
Thorough review: Take time for peer review
Replication: Verify important results
Education: Train people properly
Standards: Build quality norms
Representative Quotes
“Every decade, people think AGI is 20 years away. It’s been this way for 60 years. Maybe we should update on that.” - Rodney Brooks
“Current AI is like a high school student who crammed for the test - impressive performance on specific tasks, but lacking deep understanding.” - Gary Marcus
“The gap between narrow AI and general intelligence is not about scale - it’s about fundamental architecture and learning mechanisms we don’t yet understand.” - Melanie Mitchell
“I’d rather solve alignment properly over 20 years than rush to a solution in 5 years that fails catastrophically.” - Long-timelines researcher
“The best research takes time. If we have that time, we should use it wisely rather than pretending we don’t.” - Academic alignment researcher
Common Misconceptions
“Long-timelines people aren’t worried about AI risk”: False - they take it seriously but believe we have time
“It’s just procrastination”: No - it’s a belief about technology development pace
“They’re not working on alignment”: Many do foundational alignment work
“They think alignment is easy”: No - they think it’s hard but we have time to solve it
“They’re out of touch with recent progress”: Many are deep in the technical details
Strategic Implications
If Long Timelines Are Correct
Good news:
- Time for careful research
- Can build robust solutions
- Opportunity for coordination
- Field can mature properly
Challenges:
- Maintaining focus over decades
- Avoiding complacency
- Sustaining funding and interest
- Adapting as technology evolves
If Wrong (Timelines Are Short)
Risks:
- Missing critical window
- Foundational work not finished
- Solutions not ready
- Institutions not built
Mitigations:
- Maintain some urgency even with long-timelines belief
- Monitor leading indicators
- Be prepared to shift priorities
- Hedge with faster-payoff work (a toy expected-value sketch follows this list)
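One way to see what hedging buys, as a toy expected-value sketch (scenario probabilities and impact scores are invented):

```python
# Toy portfolio comparison under timeline uncertainty (all numbers
# invented). "Impact" is a made-up score for how much a research strategy
# helps if a given scenario turns out to be true.

scenarios = {"short": 0.25, "medium": 0.35, "long": 0.40}  # P(scenario)

portfolios = {
    "all foundational": {"short": 1, "medium": 6, "long": 10},
    "all near-term":    {"short": 8, "medium": 4, "long": 2},
    "70/30 mix":        {"short": 3, "medium": 5, "long": 8},
}

for name, impact in portfolios.items():
    ev = sum(scenarios[s] * impact[s] for s in scenarios)
    worst = min(impact.values())
    print(f"{name:>16}: expected impact {ev:.2f}, worst case {worst}")
# With these invented numbers the pure foundational bet maximizes expected
# impact, but the mixed portfolio trades a little expected value for a much
# better worst case - the quantitative shape of "hedge with faster-payoff
# work."
```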
Recommended Reading
Arguments for Longer Timelines
- AI Impacts: Likelihood of Discontinuous Progress↗
- Gary Marcus: Deep Learning Alone Won’t Get Us to AGI↗
- Melanie Mitchell: Why AI Is Harder Than We Think↗
Technical Limitations
- On the Measure of Intelligence↗ - François Chollet
- Shortcut Learning in Deep Neural Networks↗
- Underspecification in Machine Learning↗
Cognitive Science Perspectives
- Building Machines That Learn and Think Like People↗
- Research on analogical reasoning and abstraction
- Studies on human vs. AI learning
Foundational Research
- Agent Foundations for Aligning Machine Intelligence↗
- Embedded Agency↗
- Interpretability research programs
Field Building
- A Guide to Writing High-Quality LessWrong Posts↗
- Academic AI safety syllabi
- Career guides for alignment research