Long-Term Trajectory: Research Report

| Finding | Key Data | Implication |
|---|---|---|
| Pivotal period | Next 10-50 years | Decisions matter enormously |
| Trajectory variance | From flourishing to extinction | Stakes are maximal |
| Lock-in risk | Early AI choices may be permanent | Reversibility crucial |
| Value embedding | AI encodes values that persist | Value alignment critical |
| Cosmic stakes | Billions of future people affected | Long-term thinking essential |

Advanced AI will likely be one of the most important factors determining humanity’s long-term trajectory—whether human civilization flourishes for millions of years, achieves its potential across the cosmos, or faces permanent catastrophe or stagnation. The decisions made during the development and deployment of transformative AI systems may be among the most consequential in human history, with effects that persist indefinitely.

The key insight of long-termism is that future generations matter morally, and there may be vastly more of them than people alive today. If humanity has a good trajectory, trillions of people could live flourishing lives over cosmic timescales. If AI development goes badly, this entire future could be foreclosed. This asymmetry suggests that ensuring good long-term outcomes should be a major priority, even if the probability of influencing them seems small.
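This asymmetry is, at bottom, an expected-value argument, and a small calculation makes it concrete. The numbers below (probability of influence, lives at stake) are purely illustrative assumptions, not figures from this report:

```python
def expected_future_lives(p_influence: float, lives_at_stake: float) -> float:
    """Expected future lives preserved by an intervention that
    succeeds with probability p_influence (illustrative model only)."""
    return p_influence * lives_at_stake

# Hypothetical: even a one-in-a-million chance of averting an outcome
# that forecloses a future of a trillion flourishing lives is worth
# a million expected lives.
print(expected_future_lives(1e-6, 1e12))  # prints 1000000.0
```

The point of the sketch is only that a very small probability multiplied by astronomically large stakes can still dominate the calculation, which is why long-termists argue the expected value of influencing AI's trajectory remains high even under deep uncertainty.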

Several features of AI make it particularly relevant to long-term trajectory. AI systems may become more capable than humans across all domains, giving AI-controlling entities enormous power. AI values and goals, once embedded, may be difficult to change. AI could enable lock-in of particular systems or values. And AI development is happening now, during a period when humanity still has significant agency over AI’s direction.


| Scenario | Description | Long-Term Population |
|---|---|---|
| Flourishing | Humanity expands, solves problems | Trillions+ |
| Stagnation | Civilization persists, doesn't advance | Billions |
| Lock-in | Suboptimal state becomes permanent | Unknown |
| Collapse | Civilization destroyed, may recover | Uncertain |
| Extinction | No future humans | Zero |

| Factor | Mechanism |
|---|---|
| Capability | AI may exceed human capability |
| Speed | AI development faster than adaptation |
| Persistence | AI systems may be hard to modify |
| Scale | AI enables coordination/control at scale |
| Timing | Choices made now may lock in |

| Domain | Potential | Timeline |
|---|---|---|
| Scientific discovery | Solve remaining problems | Medium-term |
| Economic abundance | Eliminate material scarcity | Long-term |
| Health | Cure diseases, extend lifespan | Medium-term |
| Space | Enable cosmic expansion | Long-term |
| Governance | Better coordination, less conflict | Uncertain |

| Domain | Risk | Permanence |
|---|---|---|
| Extinction | Unaligned AI eliminates humanity | Permanent |
| Lock-in | Wrong values embedded permanently | Very long |
| Totalitarianism | AI enables permanent oppression | Very long |
| Stagnation | AI doesn't help, may hurt | Long |
| Conflict | AI-enabled warfare | Potentially permanent |

| Uncertainty | Range of Views | Importance |
|---|---|---|
| AI timelines | Years to decades | High |
| Alignment difficulty | Tractable to impossible | Critical |
| Governance feasibility | Achievable to hopeless | High |
| Human relevance | Continued to obsolete | High |
| Lock-in risk | Low to inevitable | Critical |

| Technology | Long-Term Impact | AI Difference |
|---|---|---|
| Agriculture | Enabled civilization | AI more general |
| Writing | Enabled knowledge accumulation | AI more powerful |
| Science | Enabled technology | AI faster |
| Nuclear | Could destroy civilization | AI more controllable? |
| Computing | Transformed economy | AI more autonomous |

| Factor | Mechanism | Status |
|---|---|---|
| Alignment research | Ensure AI shares human values | Underfunded |
| Governance capacity | Control AI development | Weak |
| Coordination | Global agreement on AI | Very limited |
| Safety culture | Labs prioritize safety | Variable |
| Long-term thinking | Consider future generations | Growing but minority |

| Factor | Mechanism | Status |
|---|---|---|
| Racing dynamics | Safety sacrificed for speed | Strong |
| Short-termism | Focus on near-term over long-term | Dominant |
| Coordination failure | Can't agree on constraints | Persistent |
| Capability outpacing safety | AI too capable to control | Growing concern |
| Power concentration | Few actors control AI | Increasing |

| Improvement | Mechanism | Tractability |
|---|---|---|
| Aligned AI | AI that shares human values | Uncertain but researched |
| Good governance | Effective AI oversight | Difficult but possible |
| Slow takeoff | Time for adaptation | Somewhat influential |
| Value preservation | Human values persist | Requires explicit effort |
| Reversibility | Can correct mistakes | Requires design |

| Worsening | Mechanism | Current Risk |
|---|---|---|
| Misaligned AI | AI pursues wrong goals | Significant |
| Value lock-in | Wrong values embedded | Significant |
| Power concentration | Few control AI | High |
| Governance failure | Can't control AI | High |
| Short-term focus | Ignore long-term | Very high |

| Reason | Explanation |
|---|---|
| Influence window | Can still shape AI development |
| Trajectory sensitivity | Small changes now have large long-term effects |
| Lock-in prevention | Prevent irreversible bad outcomes |
| Option preservation | Keep possibilities open |

| Lever | Actors | Impact Potential |
|---|---|---|
| Alignment research | Researchers | High if successful |
| Governance | Governments, international bodies | High if implemented |
| Lab practices | AI companies | High but concentrated |
| Public awareness | Media, advocates | Medium-High |
| Funding allocation | Funders, governments | High |

| Related Factor | Connection |
|---|---|
| Existential Catastrophe | Worst-case trajectory |
| Lock-in Scenarios | Lock-in determines trajectory |
| Technical AI Safety | Safety enables good trajectory |
| AI Governance | Governance shapes trajectory |