Optimistic Alignment Worldview
Core belief: Alignment is a hard but tractable engineering problem. Current progress is real, and with continued effort, we can develop AI safely.
Lower than other worldviews due to belief in tractability of alignment and iteration
| Source | Estimate | Date |
|---|---|---|
| Optimistic view | Under 5% | — |
Optimistic view: Alignment is tractable, iteration will work
Overview
Section titled “Overview”The optimistic alignment worldview holds that while AI safety is important and requires serious work, the problem is solvable through continued research, iteration, and engineering. This isn’t naive optimism or wishful thinking - it’s based on specific beliefs about the nature of alignment, empirical progress to date, and analogies to other technological challenges.
Optimists believe we’re making real progress on alignment, that progress will continue, and that we’ll have opportunities to iterate and improve as AI capabilities advance. They see alignment as fundamentally an engineering challenge rather than an unsolvable theoretical problem.
Key distinction: Optimistic doesn’t mean “unconcerned.” Many optimists work hard on alignment. The difference is in their assessment of tractability and default outcomes.
Characteristic Beliefs
Section titled “Characteristic Beliefs”| Crux | Typical Optimist Position |
|---|---|
| Timelines | Variable (not the key crux) |
| Paradigm | Either way, alignment scales |
| Takeoff | Slow enough to iterate |
| Alignment difficulty | Engineering problem, not fundamental |
| Instrumental convergence | Weak or avoidable through training |
| Deceptive alignment | Unlikely in practice |
| Current techniques | Show real progress, will improve |
| Iteration | Can learn from deploying systems |
| Coordination | Achievable with effort |
| P(doom) | under 5% |
Core Assumptions
Section titled “Core Assumptions”1. Alignment and Capability Are Linked
Optimists often believe that making AI more capable naturally makes it more aligned:
- Better models understand instructions better
- Improved reasoning helps models follow intent
- Enhanced understanding reduces accidental misalignment
- Capability to understand human values is itself a capability
2. We Can Iterate
Unlike one-shot scenarios:
- Deploy systems incrementally
- Learn from each generation
- Fix problems as they arise
- Gradual improvement over time
3. Current Progress Is Real
Success with RLHF, Constitutional AI, etc. demonstrates:
- Alignment techniques work in practice
- We can measure and improve alignment
- Problems are empirically solvable
- Continuous improvement is happening
4. Default Outcomes Aren’t Catastrophic
Without specific malign intent or extreme scenarios:
- Systems follow training objectives
- Misalignment is local and fixable
- Humans maintain oversight
- Society adapts and responds
Key Proponents
Section titled “Key Proponents”Industry Researchers
Section titled “Industry Researchers”Many researchers at AI labs hold optimistic views:
Jan Leike (OpenAI Alignment Lead, formerly DeepMind)
Leads work on:
- Scalable oversight
- RLHF improvements
- Empirical alignment research
While serious about safety, optimistic that approaches scale.
Dario Amodei (Anthropic CEO)
“I think the alignment problem is solvable. It’s hard, but it’s the kind of hard that yields to concentrated effort.”
Founded Anthropic specifically to work on alignment, but from tractability perspective.
Paul Christiano (OpenAI, now independent)
More nuanced than pure optimism, but:
- Works on empirical alignment techniques
- Believes in scalable oversight
- Thinks iteration can work
Academic Perspectives
Section titled “Academic Perspectives”Andrew Ng (Stanford)
“Worrying about AI safety is like worrying about overpopulation on Mars.”
Represents extreme end - thinks risk is overblown.
Yann LeCun (Meta, NYU)
Skeptical of existential risk:
- Current approaches won’t lead to dangerous AGI
- Intelligence and goals are not orthogonal in practice
- Society will adapt as technology develops
Stuart Russell (UC Berkeley)
Nuanced position:
- Takes risk seriously
- But believes provably beneficial AI is achievable
- Research program assumes tractability
Effective Accelerationists (e/acc)
Section titled “Effective Accelerationists (e/acc)”More extreme optimistic position:
- AI development should be accelerated
- Benefits vastly outweigh risks
- Slowing down is harmful
- Market will handle safety
Note: e/acc is more extreme than typical optimistic alignment view.
Priority Approaches
Section titled “Priority Approaches”Given optimistic beliefs, research priorities emphasize empirical iteration:
1. RLHF and Preference Learning
Section titled “1. RLHF and Preference Learning”Continue improving what’s working:
Reinforcement Learning from Human Feedback:
- Scales to larger models
- Improves with more data
- Can be refined iteratively
- Shows measurable progress
Constitutional AI:
- AI helps with its own alignment
- Scalable to superhuman systems
- Reduces need for human feedback
- Self-improving safety
Preference learning:
- Better models of human preferences
- Handling uncertainty and disagreement
- Robust aggregation methods
Why prioritize: These techniques work now and can improve continuously.
2. Empirical Evals and Red Teaming
Section titled “2. Empirical Evals and Red Teaming”Catch problems through testing:
Dangerous capability evals:
- Test for specific risks
- Measure progress and regression
- Inform deployment decisions
- Build confidence in safety
Red teaming:
- Adversarial testing
- Find failures before deployment
- Iterate based on findings
- Continuous improvement
Benchmarking:
- Standardized safety metrics
- Track progress over time
- Compare approaches
- Accountability
Why prioritize: Empirical evidence beats theoretical speculation.
3. Scalable Oversight
Section titled “3. Scalable Oversight”Extend human judgment to superhuman systems:
Iterated amplification:
- Break hard tasks into easier subtasks
- Recursively apply oversight
- Scale to complex problems
- Maintain human values
Debate:
- Models argue both sides
- Humans judge between arguments
- Adversarial setup catches errors
- Scales to superhuman reasoning
Recursive reward modeling:
- Models help evaluate their own outputs
- Bootstrap to higher capability levels
- Maintain alignment through scaling
Why prioritize: Provides path to aligning superhuman AI.
4. AI-Assisted Alignment
Section titled “4. AI-Assisted Alignment”Use AI to help solve alignment:
Automated interpretability:
- Models explain their own reasoning
- Scale interpretation to large models
- Continuous monitoring
Automated red teaming:
- Models find their own failures
- Exhaustive testing
- Faster iteration
Alignment research assistance:
- Models help solve alignment problems
- Accelerate research
- Leverage AI capabilities for safety
Why prioritize: Powerful tool that improves with AI capability.
5. Lab Safety Culture
Section titled “5. Lab Safety Culture”Get practices right inside organizations:
Internal processes:
- Safety reviews before deployment
- Clear escalation paths
- Whistleblower protections
- Safety budgets and teams
Culture and norms:
- Reward safety work
- Value responsible deployment
- Share safety techniques
- Transparency about risks
Voluntary standards:
- Industry best practices
- Pre-deployment testing
- Incident reporting
- Continuous improvement
Why prioritize: Good practices reduce risk regardless of technical solutions.
Deprioritized Approaches
Section titled “Deprioritized Approaches”From optimistic perspective, some approaches seem less valuable:
| Approach | Why Less Important |
|---|---|
| Pause advocacy | Unnecessary and potentially harmful |
| Agent foundations | Too theoretical, unlikely to help |
| Compute governance | Overreach, centralization risks |
| Fast takeoff scenarios | Unlikely, not worth optimizing for |
| Deceptive alignment research | Solving problems that won’t arise |
Note: “Less important” reflects beliefs about likelihood and tractability, not dismissiveness.
Strongest Arguments
Section titled “Strongest Arguments”1. Empirical Progress Is Real
Section titled “1. Empirical Progress Is Real”We’ve made measurable progress on alignment:
RLHF success:
- GPT-3 vs. GPT-3.5 (ChatGPT) - dramatic improvement in following instructions
- Models are more helpful, harmless, and honest
- Techniques generalize across scales and domains
- Continuous refinement works
Constitutional AI:
- Models can evaluate their own outputs
- Self-improvement on safety metrics
- Reduced need for human oversight
Jailbreak resistance:
- Early models were trivially jailbroken
- Modern models much more robust
- Adversarial training improves defenses
- Red team, patch, repeat
This demonstrates: Alignment is empirically tractable, not theoretically impossible.
2. Each Generation Provides Data
Section titled “2. Each Generation Provides Data”Unlike one-shot scenarios, we get feedback:
Continuous deployment:
- GPT-3 → GPT-3.5 → GPT-4 → future models
- Learn from each generation
- Identify problems early
- Fix issues before scaling further
Real-world testing:
- Millions of users find edge cases
- Diverse applications reveal failures
- Empirical data beats speculation
- Iterate based on evidence
Gradual scaling:
- Don’t jump from weak to superhuman
- Intermediate stages are testable
- Warning signs will appear
- Time to respond
This enables: Continuous improvement rather than betting everything on first attempt.
3. Humans Have Solved Hard Problems Before
Section titled “3. Humans Have Solved Hard Problems Before”Historical precedent for managing powerful technologies:
Nuclear weapons:
- Existentially dangerous
- Yet no nuclear war in 80 years
- Treaties, norms, and institutions work
- Careful management succeeds
Biotechnology:
- Gain-of-function research
- Asilomar conference
- Self-governance by scientists
- Regulation and oversight
Aviation safety:
- Inherently dangerous technology
- Through iteration, became extremely safe
- Continuous improvement works
- Engineering and regulation together
Pharmaceuticals:
- Potentially deadly if misused
- Extensive testing and regulation
- Mostly prevent harmful drugs
- Balance benefit and risk
This suggests: We can manage AI similarly - not perfectly, but well enough.
4. Alignment and Capability May Be Linked
Section titled “4. Alignment and Capability May Be Linked”Contrary to orthogonality thesis:
Understanding human values requires capability:
- Must understand humans to align with them
- Better models of human preferences need intelligence
- Reasoning about values is itself reasoning
Training dynamics favor alignment:
- Deception is complex and difficult
- Direct pursuit of goals is simpler
- Training selects for simplicity
- Aligned behavior is more robust
Instrumental value of cooperation:
- Cooperating with humans is instrumentally useful
- Deception has costs and risks
- Working with humans leverages human capabilities
- Partnership is mutually beneficial
Empirical evidence:
- More capable models tend to be more aligned
- GPT-4 more aligned than GPT-3
- Larger models follow instructions better
This implies: Capability advances help with alignment, not just make it harder.
5. Catastrophic Scenarios Require Specific Failures
Section titled “5. Catastrophic Scenarios Require Specific Failures”Existential risk requires:
- Creating superintelligent AI
- That is misaligned in specific ways
- That we can’t detect or correct
- That takes catastrophic action
- That we can’t stop
- All before we fix any of these problems
Each is a conjunction: Probability multiplies
We have chances to intervene: At each step
This suggests: P(doom) is low, not high.
6. Incentives Support Safety
Section titled “6. Incentives Support Safety”Unlike doomer view, optimists see aligned incentives:
Reputational costs:
- Labs that deploy unsafe AI face backlash
- Negative publicity hurts business
- Safety sells
Liability:
- Companies can be sued for harms
- Legal system provides incentives
- Insurance requires safety measures
User preferences:
- People prefer safe, aligned AI
- Market rewards trustworthy systems
- Aligned AI is better product
Employee values:
- Researchers care about safety
- Internal pressure for responsible development
- Whistleblowers can expose problems
Regulatory pressure:
- Governments will regulate if needed
- Public concern drives policy
- International cooperation possible
This means: Default isn’t “race to the bottom” but “race to safe and beneficial.”
7. Deceptive Alignment Is Unlikely
Section titled “7. Deceptive Alignment Is Unlikely”While theoretically possible, practically improbable:
Training dynamics:
- Deception is complex to learn
- Direct goal pursuit is simpler
- Simplicity bias favors non-deception
Detection opportunities:
- Models must show aligned behavior during training
- Hard to maintain perfect deception
- Interpretability catches inconsistencies
Instrumental convergence is weak:
- Most goals don’t require human extinction
- Cooperation often more effective than conflict
- Paperclip maximizer scenarios are contrived
No reason to expect it:
- Pure speculation without empirical evidence
- Based on specific assumed architectures
- May not apply to actual systems we build
8. Society Will Adapt
Section titled “8. Society Will Adapt”Humans and institutions are adaptive:
Regulatory response:
- Governments react to problems
- Can slow or stop development if needed
- Public pressure drives action
Cultural evolution:
- Norms develop around new technology
- Education and awareness spread
- Best practices emerge
Technical countermeasures:
- Security research advances
- Defenses improve
- Tools for oversight develop
This provides: Additional layers of safety beyond pure technical alignment.
Main Criticisms and Counterarguments
Section titled “Main Criticisms and Counterarguments””Success on Weak Systems Doesn’t Predict Success on Strong Ones”
Section titled “”Success on Weak Systems Doesn’t Predict Success on Strong Ones””Critique: RLHF works on GPT-4, but will it work on superintelligent AI?
Optimistic response:
- Every generation has been more capable and more aligned
- Techniques improve as we scale
- Can test at each level before scaling further
- No evidence of fundamental barrier
- Burden of proof is on those claiming discontinuity
”Underrates Qualitative Shifts”
Section titled “”Underrates Qualitative Shifts””Critique: Human-level to superhuman is a qualitative shift. All bets are off.
Optimistic response:
- We’ve seen many “qualitative shifts” in AI already
- Each time, techniques adapted
- Gradual scaling means incremental shifts
- We’ll see warning signs before catastrophic shift
- Can stop if we’re not ready
”Optimism Motivated by Industry Incentives”
Section titled “”Optimism Motivated by Industry Incentives””Critique: Researchers at labs have incentive to downplay risk.
Optimistic response:
- Ad hominem doesn’t address arguments
- Many optimistic academics have no industry ties
- Some pessimists also work at labs
- Arguments should be evaluated on merits
- Many optimists take safety seriously and work hard on it
“‘We’ll Figure It Out’ Isn’t a Plan”
Section titled ““‘We’ll Figure It Out’ Isn’t a Plan””Critique: Vague optimism that iteration will work isn’t sufficient.
Optimistic response:
- Not just vague hope - specific technical approaches
- Empirical evidence that iteration works
- Concrete research programs with measurable progress
- Historical precedent for solving hard problems
- Better than paralysis from overconfidence in doom
”One Mistake Could Be Fatal”
Section titled “”One Mistake Could Be Fatal””Critique: Can’t iterate on existential failures.
Optimistic response:
- True, but risk per deployment is low
- Multiple chances to course-correct before catastrophe
- Warning signs will appear
- Can build in safety margins
- Defense in depth provides redundancy
”Ignores Theoretical Arguments”
Section titled “”Ignores Theoretical Arguments””Critique: Dismisses solid theoretical work on inner alignment, deceptive alignment, etc.
Optimistic response:
- Not dismissing - questioning applicability
- Theory makes specific assumptions that may not hold
- Empirical work is more reliable than speculation
- Can address theoretical concerns if they arise in practice
- Balance theory and empirics
”Overconfident in Slow Takeoff”
Section titled “”Overconfident in Slow Takeoff””Critique: Fast takeoff is possible, leaving no time to iterate.
Optimistic response:
- Multiple bottlenecks slow progress
- Recursive self-improvement faces barriers
- No empirical evidence for fast takeoff
- Can monitor for warning signs
- Adjust if evidence changes
What Evidence Would Change This View?
Section titled “What Evidence Would Change This View?”Optimists would update toward pessimism given:
Empirical Failures
Section titled “Empirical Failures”Alignment techniques stop working:
- RLHF and similar approaches fail to scale
- Techniques that worked on GPT-4 fail on GPT-5
- Clear ceiling on current approaches
- Fundamental barriers emerge
Deceptive behavior observed:
- Models demonstrably hiding true capabilities or goals
- Systematic deception that’s hard to detect
- Alignment appearing to work then catastrophically failing
- Evidence of mesa-optimization with misaligned goals
Inability to detect misalignment:
- Interpretability research hitting walls
- Can’t distinguish aligned from misaligned systems
- Red teaming consistently missing problems
- No reliable safety signals
Theoretical Developments
Section titled “Theoretical Developments”Proofs of fundamental difficulty:
- Mathematical proofs that alignment can’t scale
- Demonstrations that orthogonality thesis has teeth
- Clear arguments that iteration must fail
- Showing that current approaches are doomed
Clear paths to catastrophe:
- Specific, plausible scenarios for x-risk
- Demonstrations that defenses won’t work
- Evidence that safeguards can be bypassed
- Showing multiple failure modes converge
Capability Developments
Section titled “Capability Developments”Very fast progress:
- Sudden, discontinuous capability jumps
- Evidence of potential for explosive recursive self-improvement
- Timelines much shorter than expected
- Window for iteration closing
Misalignment scales with capability:
- More capable models are harder to align
- Negative relationship between capability and alignment
- Emerging misalignment in frontier systems
Institutional Failures
Section titled “Institutional Failures”Racing dynamics worsen:
- Clear evidence that competition overrides safety
- Labs cutting safety corners under pressure
- International race to the bottom
- Coordination proving impossible
Safety work deprioritized:
- Labs systematically underinvesting in safety
- Safety researchers marginalized
- Deployment decisions ignoring safety
Implications for Action and Career
Section titled “Implications for Action and Career”If you hold optimistic beliefs, strategic implications include:
Technical Research
Section titled “Technical Research”Empirical alignment work:
- RLHF and successors
- Scalable oversight
- Preference learning
- Constitutional AI
Interpretability:
- Understanding current models
- Automated interpretation
- Mechanistic interpretability
Evaluation:
- Safety benchmarks
- Red teaming
- Dangerous capability detection
Why: These have near-term payoff and compound over time.
Lab Engagement
Section titled “Lab Engagement”Work at AI labs:
- Influence from inside
- Implement safety practices
- Build safety culture
- Deploy responsibly
Industry positions:
- Safety engineering roles
- Evaluation and testing
- Policy and governance
- Product safety
Why: Where the work happens is where you can have impact.
Deployment and Applications
Section titled “Deployment and Applications”Beneficial applications:
- Using AI to solve important problems
- Accelerating beneficial research
- Improving human welfare
- Demonstrating positive uses
Careful deployment:
- Responsible release strategies
- Monitoring and feedback
- Iterative improvement
- Learning from real use
Why: Beneficial AI has value and provides data for improvement.
Measured Communication
Section titled “Measured Communication”Avoid hype:
- Realistic about both capabilities and risks
- Neither minimize nor exaggerate
- Evidence-based claims
- Nuanced discussion
Public education:
- Help people understand AI
- Discuss safety productively
- Build informed public
- Support good policy
Why: Balanced communication supports good decision-making.
Internal Diversity
Section titled “Internal Diversity”The optimistic worldview has significant variation:
Degree of Optimism
Section titled “Degree of Optimism”Moderate optimism: Takes risks seriously, believes they’re manageable
Strong optimism: Confident in tractability, low P(doom)
Extreme optimism (e/acc): Risks overblown, acceleration is good
Technical Basis
Section titled “Technical Basis”Empirical optimists: Based on observed progress
Theoretical optimists: Based on beliefs about intelligence and goals
Historical optimists: Based on precedent of solving hard problems
Motivation
Section titled “Motivation”Safety-focused: Work hard on alignment from optimistic perspective
Capability-focused: Prioritize beneficial applications
Acceleration-focused: Believe speed is good
Engagement with Risk Arguments
Section titled “Engagement with Risk Arguments”Engaged optimists: Seriously engage with doomer arguments, still conclude optimism
Dismissive: Don’t take risk arguments seriously
Unaware: Haven’t deeply considered arguments
Relationship to Other Worldviews
Section titled “Relationship to Other Worldviews”vs. Doomer
Section titled “vs. Doomer”Fundamental disagreements:
- Nature of alignment difficulty
- Whether iteration is possible
- Default outcomes
- Tractability of solutions
Some agreements:
- AI is transformative
- Alignment requires work
- Some risks exist
vs. Governance-Focused
Section titled “vs. Governance-Focused”Agreements:
- Institutions matter
- Need good practices
- Coordination is valuable
Disagreements:
- Optimists think market provides more safety
- Less emphasis on regulation
- More trust in voluntary action
vs. Long-Timelines
Section titled “vs. Long-Timelines”Agreements on some points:
- Can iterate and improve
- Not emergency panic mode
Disagreements:
- Optimists think alignment is easier
- Different regardless of timelines
- Optimists more engaged with current systems
Practical Considerations
Section titled “Practical Considerations”Working in Industry
Section titled “Working in Industry”Advantages:
- Access to frontier models
- Resources for research
- Real-world impact
- Competitive compensation
Challenges:
- Pressure to deploy
- Competitive dynamics
- Potential incentive misalignment
- Public perception
Research Priorities
Section titled “Research Priorities”Focus on:
- High-feedback work (learn quickly)
- Practical applications (deployable)
- Measurable progress (know if working)
- Collaborative approaches (leverage resources)
Communication Strategy
Section titled “Communication Strategy”With pessimists:
- Acknowledge valid concerns
- Engage seriously with arguments
- Find common ground
- Collaborate where possible
With public:
- Balanced messaging
- Neither panic nor complacency
- Evidence-based
- Actionable
With policymakers:
- Support sensible regulation
- Oppose harmful overreach
- Provide technical expertise
- Build trust
Representative Quotes
Section titled “Representative Quotes”“The alignment problem is real and important. It’s also solvable through continued research and iteration. We’re making measurable progress.” - Jan Leike
“Every generation of AI has been both more capable and more aligned than the previous one. That trend is likely to continue.” - Optimistic researcher
“We should be thoughtful about AI safety, but we shouldn’t let speculative fears prevent us from realizing enormous benefits.” - Andrew Ng
“The same capabilities that make AI powerful also make it easier to align. Understanding human values is itself a capability that improves with intelligence.” - Capability-alignment linking argument
“Look at the actual empirical results: GPT-4 is dramatically safer than GPT-2. RLHF works. Constitutional AI works. We’re getting better at this.” - Empirically-focused optimist
“The key question isn’t whether we’ll face challenges, but whether we’ll rise to meet them. History suggests we will.” - Historical optimist
Common Misconceptions
Section titled “Common Misconceptions”“Optimists don’t care about safety”: False - many work hard on alignment
“It’s just wishful thinking”: No - based on specific technical and empirical arguments
“Optimists think AI is risk-free”: No - they think risks are manageable
“They’re captured by industry”: Many optimistic academics have no industry ties
“They haven’t thought about the arguments”: Many have deeply engaged with pessimistic views
“Optimism means acceleration”: Not necessarily - can be optimistic about alignment while being careful about deployment
Strategic Implications
Section titled “Strategic Implications”If Optimists Are Correct
Section titled “If Optimists Are Correct”Good news:
- AI can be developed safely
- Enormous benefits are achievable
- Iteration and improvement work
- Catastrophic risk is low
Priorities:
- Continue empirical research
- Deploy carefully and learn
- Build beneficial applications
- Support good governance
If Wrong (Risk Is Higher)
Section titled “If Wrong (Risk Is Higher)”Dangers:
- Insufficient preparation
- Overconfidence
- Missing warning signs
- Inadequate safety margins
Mitigations:
- Take safety seriously even with optimism
- Build in margins
- Monitor for warning signs
- Update on evidence
Spectrum of Optimism
Section titled “Spectrum of Optimism”Conservative Optimism
Section titled “Conservative Optimism”- P(doom) ~5%
- Takes safety very seriously
- Works hard on alignment
- Careful deployment
- Engaged with risk arguments
Example: Many industry safety researchers
Moderate Optimism
Section titled “Moderate Optimism”- P(doom) ~1-2%
- Important to work on safety
- Confident in tractability
- Balance benefits and risks
- Evidence-based
Example: Many academic researchers
Strong Optimism
Section titled “Strong Optimism”- P(doom) under 1%
- Risk is overblown
- Focus on benefits
- Market and iteration will solve it
- Skeptical of doom arguments
Example: Some senior researchers
Extreme Optimism (e/acc)
Section titled “Extreme Optimism (e/acc)”- P(doom) ~0%
- Risk is FUD
- Accelerate development
- Slowing down is harmful
- Dismissive of safety concerns
Example: Effective accelerationists
Recommended Reading
Section titled “Recommended Reading”Optimistic Perspectives
Section titled “Optimistic Perspectives”- AI Safety Seems Hard to Measure↗ - Anthropic
- Constitutional AI: Harmlessness from AI Feedback↗
- Scalable Oversight Approaches↗
Empirical Progress
Section titled “Empirical Progress”- Training Language Models to Follow Instructions with Human Feedback↗ - InstructGPT paper
- Anthropic’s Work on AI Safety↗
- OpenAI alignment research
Debate and Discussion
Section titled “Debate and Discussion”- Against AI Doomerism↗ - Yann LeCun
- Response to Concerns About AI↗
- Debates between optimists and pessimists
Nuanced Positions
Section titled “Nuanced Positions”Critiques of Pessimism
Section titled “Critiques of Pessimism”- Against AI Doom↗
- Why AI X-Risk Skepticism?↗
- Rebuttals to specific doom arguments
Historical Analogies
Section titled “Historical Analogies”- Nuclear safety and governance
- Aviation safety improvements
- Pharmaceutical regulation
- Biotechnology self-governance