Research Reports
Research reports are deep-dive investigations into specific AI safety topics. They serve as:
- Comprehensive documentation of research findings with citations
- Input for diagram creation: the Causal Factors section informs the cause-effect diagrams described below
Available Reports
AI Transition Model - Factors
| Report | Topic | Quality (1-5) | Created |
|---|---|---|---|
| Adaptability | Civilizational Competence | 3 | 2025-01-08 |
| Adoption | AI Capabilities | 3 | 2025-01-08 |
| AI Governance | Misalignment Potential | 3 | 2025-01-08 |
| AI Ownership - Companies | AI Ownership | 3 | 2025-01-08 |
| AI Ownership - Countries | AI Ownership | 3 | 2025-01-08 |
| AI Ownership - Shareholders | AI Ownership | 3 | 2025-01-08 |
| AI Talent Concentration | AI Capabilities | 3 | 2025-01-07 |
| AI Uses - Coordination | AI Uses | 3 | 2025-01-08 |
| AI Uses - Governments | AI Uses | 3 | 2025-01-08 |
| AI Uses - Industries | AI Uses | 3 | 2025-01-08 |
| Algorithms (AI Capabilities) | AI Capabilities | 3 | 2025-01-08 |
| Biological Threat Exposure | Misuse Potential | 3 | 2025-01-08 |
| Civilizational Epistemics | Civilizational Competence | 3 | 2025-01-08 |
| Civilizational Governance | Civilizational Competence | 3 | 2025-01-08 |
| Compute (AI Capabilities) | AI Capabilities | 3 | 2025-01-08 |
| Cyber Threat Exposure | Misuse Potential | 3 | 2025-01-08 |
| Economic Stability | Transition Turbulence | 3 | 2025-01-08 |
| Lab Safety Practices | Misalignment Potential | 3 | 2025-01-08 |
| Racing Intensity | Transition Turbulence | 3 | 2025-01-08 |
| Recursive AI Capabilities | AI Uses | 3 | 2025-01-08 |
| Robot Threat Exposure | Misuse Potential | 3 | 2025-01-08 |
| Surprise Threat Exposure | Misuse Potential | 3 | 2025-01-08 |
| Technical AI Safety | Misalignment Potential | 3 | 2025-01-08 |
AI Transition Model - Scenarios & Outcomes
| Report | Topic | Quality (1-5) | Created |
|---|---|---|---|
| Economic Power Lock-in | Long-term Lock-in Scenarios | 3 | 2025-01-08 |
| Epistemic Lock-in | Long-term Lock-in Scenarios | 3 | 2025-01-08 |
| Existential Catastrophe | Outcomes | 3 | 2025-01-08 |
| Gradual AI Takeover | AI Takeover Scenarios | 3 | 2025-01-07 |
| Long-Term Trajectory | Outcomes | 3 | 2025-01-08 |
| Political Power Lock-in | Long-term Lock-in Scenarios | 3 | 2025-01-08 |
| Rapid AI Takeover | AI Takeover Scenarios | 3 | 2025-01-08 |
| Rogue Actor | Human Catastrophe Scenarios | 3 | 2025-01-08 |
| State Actor | Human Catastrophe Scenarios | 3 | 2025-01-08 |
| Suffering Lock-in | Long-term Lock-in Scenarios | 3 | 2025-01-08 |
| Values Lock-in | Long-term Lock-in Scenarios | 3 | 2025-01-08 |
AI Transition Model - Parameters
| Report | Topic | Quality (1-5) | Created |
|---|---|---|---|
| AI Control Concentration | Power & Control | 3 | 2025-01-08 |
| Alignment Robustness | Safety Metrics | 3 | 2025-01-08 |
| Coordination Capacity | Governance | 3 | 2025-01-08 |
| Epistemic Health | Society | 3 | 2025-01-08 |
| Epistemics (Parameter) | Society | 3 | 2025-01-08 |
| Governance (Parameter) | Governance | 3 | 2025-01-08 |
| Human Agency | Society | 3 | 2025-01-08 |
| Human Expertise | Society | 3 | 2025-01-08 |
| Human Oversight Quality | Safety Metrics | 3 | 2025-01-08 |
| Information Authenticity | Society | 3 | 2025-01-08 |
| Institutional Quality | Governance | 3 | 2025-01-08 |
| International Coordination | Governance | 3 | 2025-01-08 |
| Interpretability Coverage | Safety Metrics | 3 | 2025-01-08 |
| Preference Authenticity | Society | 3 | 2025-01-08 |
| Reality Coherence | Society | 3 | 2025-01-08 |
| Regulatory Capacity | Governance | 3 | 2025-01-08 |
| Safety Capability Gap | Safety Metrics | 3 | 2025-01-08 |
| Safety Culture Strength | Safety Metrics | 3 | 2025-01-08 |
| Societal Resilience | Society | 3 | 2025-01-08 |
| Societal Trust | Society | 3 | 2025-01-08 |
Risk Topics
| Report | Topic | Quality (1-5) | Created |
|---|---|---|---|
| Autonomous Weapons | Misuse Risks | 3 | 2025-01-08 |
| Bioweapons | Misuse Risks | 4 | 2025-01-08 |
| Concentration of Power | Structural Risks | 3 | 2025-01-08 |
| Corrigibility Failure | Accident Risks | 3 | 2025-01-08 |
| Deceptive Alignment | Accident Risks | 3 | 2025-01-08 |
| Disinformation | Misuse Risks | 3 | 2025-01-08 |
| Emergent Capabilities | Accident Risks | 3 | 2025-01-08 |
| Goal Misgeneralization | Accident Risks | 3 | 2025-01-08 |
| Instrumental Convergence | Accident Risks | 3 | 2025-01-08 |
| Lock-in | Structural Risks | 3 | 2025-01-08 |
| Mesa-Optimization | Accident Risks | 3 | 2025-01-08 |
| Multipolar Trap | Structural Risks | 3 | 2025-01-08 |
| Power-Seeking AI | Accident Risks | 3 | 2025-01-08 |
| Racing Dynamics | Structural Risks | 3 | 2025-01-08 |
| Reward Hacking | Accident Risks | 3 | 2025-01-08 |
| Scheming | Accident Risks | 3 | 2025-01-08 |
| Sharp Left Turn | Accident Risks | 3 | 2025-01-08 |
| Sycophancy | Accident Risks | 3 | 2025-01-08 |
| Treacherous Turn | Accident Risks | 3 | 2025-01-08 |
Creating New Reports
Use Claude Code with the research-report skill:
"Create a research report on [topic]"Or manually follow the Research Report Style Guide.
Report Quality Levels
| Level | Criteria |
|---|---|
| 1 | Basic outline, fewer than 10 sources, major gaps |
| 2 | Main points covered, 10-15 sources, some gaps |
| 3 | Solid coverage, 15-25 sources, minor gaps |
| 4 | Comprehensive, 25-35 sources, well-structured |
| 5 | Authoritative, 35+ sources, original synthesis |
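The source counts in the rubric can be read as per-level minimums: the source count caps the level a report can earn, and the structure and gap criteria then decide within that cap. A minimal sketch of that reading (this helper is hypothetical, not part of the site's tooling):

```python
# Hypothetical illustration of the rubric above; not part of the site's tooling.
# Source count alone only caps the achievable level; structure and gap
# criteria determine the final rating within that cap.

SOURCE_MINIMUMS = {1: 0, 2: 10, 3: 15, 4: 25, 5: 35}

def max_level_for_sources(source_count: int) -> int:
    """Highest quality level the source count alone permits."""
    return max(level for level, minimum in SOURCE_MINIMUMS.items()
               if source_count >= minimum)

assert max_level_for_sources(12) == 2   # 10-15 sources -> at most level 2
assert max_level_for_sources(28) == 4   # 25-35 sources -> at most level 4
```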
Integration with Diagrams
After a report is complete, use the cause-effect-diagram skill to convert the Causal Factors section into a visual diagram:
"Create a cause-effect diagram based on the [topic] research report"