Browse All Pages
Browse all pages in the knowledge base. Use quality ratings to find the most developed content, or filter by category.
351 total pages · 302 with importance ratings · 67.0 average importance · 307 with quality ratings
Columns:
- Imp = Importance (0-100)
- Qual = Quality (0-100)
- Struct = Structural score (tables, diagrams, sections)
- Words = Word count
- Links = Backlinks from other pages
- Gap = Priority score (Importance minus Quality; flags high-importance, low-quality pages)
- Age = Days since last edit
- Refs = Resource references with hover tooltips
- Unconv = Unconverted links (could have hover tooltips)
- Dup = Max similarity to other pages (hover for list)
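The Gap column above appears to be a simple difference: Importance minus Quality, so pages that matter a lot but are under-developed rise to the top of a triage list. A minimal sketch of that scoring, assuming this derivation (the page records below are sample rows from this table, not a real data source):

```python
# Sketch of the apparent Gap derivation: Gap = Importance - Quality.
# Sample records taken from rows of this table for illustration.
pages = [
    {"title": "Carlsmith's Six-Premise Argument", "importance": 90, "quality": 4},
    {"title": "Safety-Capability Gap", "importance": 90, "quality": 91},
    {"title": "Scheming & Deception Detection", "importance": 95, "quality": 88},
]

def gap(page):
    """Priority score: positive when importance outstrips quality."""
    return page["importance"] - page["quality"]

# Sort so the most under-developed high-importance pages come first.
for page in sorted(pages, key=gap, reverse=True):
    print(f'{gap(page):>4}  {page["title"]}')
```

This matches the table's values (e.g. importance 90 with quality 4 yields a Gap of 86), but the exact formula is an assumption inferred from the data.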
351 of 351 results
| Imp | Qual | Struct | Title | Category | Words | Links | Gap | Age | Refs | Unconv | Dup |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 95 | 88 | 12/15 | Scheming & Deception Detection | Responses | 1.9k | 0 | 7 | 10d | — | — | 21% |
| 92 | 88 | 11/15 | Solution Cruxes | Cruxes | 3.5k | 0 | 4 | 10d | 62 | — | 20% |
| 92 | 88 | 15/15 | AI Risk Portfolio Analysis | Models | 2.2k | 2 | 4 | 10d | 19 | 10 | 16% |
| 92 | 88 | 15/15 | Intervention Effectiveness Matrix | Models | 4.3k | 1 | 4 | 10d | 55 | 10 | 19% |
| 92 | 88 | 15/15 | Instrumental Convergence | Risks | 4.2k | 10 | 4 | 10d | 63 | 7 | 24% |
| 90 | 4 | 13/15 | Carlsmith's Six-Premise Argument | Models | 1.9k | 0 | 86 | 5d | 7 | — | 17% |
| 90 | 91 | 12/15 | Safety-Capability Gap | Parameters | 2.6k | 10 | -1 | 9d | 44 | — | 19% |
| 88 | 92 | 10/15 | AI Control | Responses | 2.6k | 8 | -4 | 11d | 15 | — | 20% |
| 88 | 91 | 15/15 | Alignment Robustness | Parameters | 2.9k | 12 | -3 | 9d | 24 | 4 | 21% |
| 87 | 88 | 11/15 | Warning Signs Model | Models | 3.5k | 2 | -1 | 12d | 60 | — | 18% |
| 87 | 88 | 9/15 | Safety Research Allocation Model | Models | 1.8k | 0 | -1 | 11d | 26 | — | 15% |
| 87 | 89 | 15/15 | Intervention Timing Windows | Models | 4.4k | 0 | -2 | 10d | 41 | 7 | 18% |
| 87 | 88 | 9/15 | AI Governance and Policy | Responses | 2.6k | 1 | -1 | 11d | 71 | — | 21% |
| 85 | 87 | 10/15 | Large Language Models | Capabilities | 1.5k | 2 | -2 | 14d | 22 | — | 16% |
| 85 | 82 | 10/15 | Long-Horizon Autonomous Tasks | Capabilities | 1.6k | 0 | 3 | 14d | 39 | — | 16% |
| 85 | 82 | 12/15 | Self-Improvement and Recursive Enhancement | Capabilities | 4.0k | 5 | 3 | 10d | 27 | — | 22% |
| 85 | 88 | 10/15 | AI Capabilities Metrics | Metrics | 3.0k | 1 | -3 | — | 52 | — | 17% |
| 85 | 88 | 12/15 | AI Proliferation Risk Model | Models | 1.9k | 1 | -3 | 12d | 23 | — | 16% |
| 85 | 88 | 14/15 | AI Uplift Assessment Model | Models | 4.5k | 1 | -3 | 10d | 12 | 4 | 21% |
| 85 | 82 | 11/15 | Capability Threshold Model | Models | 2.9k | 2 | 3 | 10d | 78 | — | 20% |
| 85 | 88 | 11/15 | Corrigibility Failure Pathways | Models | 2.0k | 2 | -3 | — | 25 | — | 20% |
| 85 | 88 | 9/15 | Power-Seeking Emergence Conditions Model | Models | 2.3k | 1 | -3 | — | 23 | — | 20% |
| 85 | 89 | 11/15 | AI-Assisted Alignment | Responses | 1.4k | 0 | -4 | 10d | 29 | — | 16% |
| 85 | 88 | 11/15 | AI Alignment | Responses | 2.6k | 0 | -3 | 10d | 47 | — | 19% |
| 85 | 88 | 11/15 | Corrigibility Research | Responses | 2.0k | 4 | -3 | 10d | 14 | — | 21% |
| 85 | 82 | 11/15 | Evals & Red-teaming | Responses | 2.4k | 3 | 3 | 10d | 15 | — | 16% |
| 85 | 82 | 11/15 | Mechanistic Interpretability | Responses | 3.2k | 15 | 3 | 10d | 24 | — | 23% |
| 85 | 82 | 11/15 | Research Agenda Comparison | Responses | 4.4k | 0 | 3 | 10d | 39 | — | 22% |
| 85 | 82 | 11/15 | Scalable Oversight | Responses | 1.3k | 13 | 3 | 10d | 31 | — | 23% |
| 85 | 88 | 10/15 | Technical AI Safety Research | Responses | 2.6k | 0 | -3 | 10d | 56 | — | 20% |
| 85 | 89 | 10/15 | Coordination Technologies | Responses | 2.2k | 0 | -4 | 14d | 40 | — | 17% |
| 85 | 82 | 9/15 | AI Evaluation | Responses | 1.2k | 0 | 3 | 11d | 35 | — | 19% |
| 85 | 82 | 11/15 | Policy Effectiveness Assessment | Responses | 3.4k | 0 | 3 | 10d | 27 | — | 23% |
| 85 | 88 | 11/15 | Responsible Scaling Policies | Responses | 4.0k | 0 | -3 | 10d | 18 | — | 24% |
| 85 | 82 | 11/15 | Corrigibility Failure | Risks | 2.9k | 8 | 3 | 10d | 55 | — | 24% |
| 85 | 88 | 10/15 | Deceptive Alignment | Risks | 1.4k | 20 | -3 | 14d | 21 | — | 16% |
| 85 | 82 | 11/15 | Power-Seeking AI | Risks | 2.2k | 11 | 3 | 10d | 25 | — | 23% |
| 85 | 82 | 11/15 | Scheming | Risks | 3.1k | 8 | 3 | 10d | 17 | — | 23% |
| 85 | 82 | 11/15 | Sharp Left Turn | Risks | 3.3k | 3 | 3 | 10d | 34 | — | 22% |
| 85 | 91 | 15/15 | Interpretability Coverage | Parameters | 3.3k | 4 | -6 | 9d | 28 | 9 | 23% |
| 85 | 91 | 12/15 | Racing Intensity | Parameters | 3.4k | 13 | -6 | 9d | 65 | — | 21% |
| 84 | 82 | 9/15 | Autonomous Coding | Capabilities | 1.5k | 2 | 2 | 14d | 18 | — | 16% |
| 84 | 82 | 11/15 | Situational Awareness | Capabilities | 2.9k | 6 | 2 | 10d | 11 | — | 22% |
| 84 | 82 | 11/15 | Accident Risk Cruxes | Cruxes | 3.0k | 0 | 2 | 10d | 46 | — | 20% |
| 84 | 88 | 12/15 | Risk Cascade Pathways | Models | 1.8k | 2 | -4 | 12d | 19 | — | 15% |
| 84 | 82 | 11/15 | Expected Value of AI Safety Research | Models | 1.3k | 1 | 2 | 12d | 31 | — | 16% |
| 84 | 88 | 10/15 | Scheming Likelihood Assessment | Models | 1.5k | 2 | -4 | — | 15 | — | 20% |
| 84 | 82 | 10/15 | AI-Bioweapons Timeline Model | Models | 2.6k | 1 | 2 | 12d | — | — | 20% |
| 84 | 88 | 10/15 | Risk Activation Timeline Model | Models | 2.9k | 2 | -4 | 12d | 24 | — | 15% |
| 84 | 88 | 13/15 | Lab Safety Culture | Responses | 3.6k | 0 | -4 | 10d | 36 | 9 | 19% |
| 83 | 88 | 12/15 | Defense in Depth Model | Models | 1.7k | 1 | -5 | 12d | 23 | — | 14% |
| 82 | 85 | 9/15 | Persuasion and Social Manipulation | Capabilities | 1.8k | 0 | -3 | 14d | 22 | — | 16% |
| 82 | 88 | 14/15 | Reasoning and Planning | Capabilities | 4.0k | 1 | -6 | 10d | 43 | 2 | 22% |
| 82 | 88 | 14/15 | Tool Use and Computer Use | Capabilities | 3.3k | 1 | -6 | 10d | 28 | 1 | 22% |
| 82 | 92 | 13/15 | The Case AGAINST AI Existential Risk | Debates | 1.8k | 0 | -10 | 10d | 21 | 2 | 23% |
| 82 | 80 | 9/15 | Why Alignment Might Be Hard | Debates | 4.2k | 0 | 2 | 10d | 25 | — | 24% |
| 82 | 88 | 10/15 | AGI Timeline | Forecasting | 1.1k | 0 | -6 | 11d | 19 | — | 17% |
| 82 | 91 | 12/15 | Alignment Progress | Metrics | 4.6k | 3 | -9 | 10d | 63 | — | 19% |
| 82 | 87 | 11/15 | Lab Behavior & Industry | Metrics | 4.4k | 3 | -5 | 10d | 50 | — | 19% |
| 82 | 80 | 11/15 | Critical Uncertainties Model | Models | 2.6k | 0 | 2 | 10d | 30 | — | 17% |
| 82 | 88 | 11/15 | Multipolar Trap Dynamics Model | Models | 1.8k | 2 | -6 | 12d | 25 | — | 17% |
| 82 | 88 | 10/15 | Racing Dynamics Impact Model | Models | 1.7k | 3 | -6 | 13d | 23 | — | 18% |
| 82 | 88 | 14/15 | Risk Interaction Matrix Model | Models | 2.7k | 1 | -6 | 10d | 13 | 11 | 16% |
| 82 | 88 | 11/15 | Risk Interaction Network | Models | 2.0k | 2 | -6 | 11d | 25 | — | 16% |
| 82 | 88 | 11/15 | International AI Coordination Game | Models | 1.9k | 3 | -6 | 11d | 36 | — | 18% |
| 82 | 88 | 12/15 | Worldview-Intervention Mapping | Models | 2.3k | 1 | -6 | 12d | 25 | — | 15% |
| 82 | 88 | 10/15 | Capability-Alignment Race Model | Models | 1.1k | 0 | -6 | 10d | 14 | — | 17% |
| 82 | 87 | 11/15 | Deceptive Alignment Decomposition Model | Models | 2.2k | 4 | -5 | 12d | 19 | — | 18% |
| 82 | 88 | 12/15 | Goal Misgeneralization Probability Model | Models | 1.8k | 0 | -6 | 12d | 23 | — | 17% |
| 82 | 88 | 12/15 | Mesa-Optimization Risk Analysis | Models | 1.7k | 0 | -6 | 12d | 35 | — | 19% |
| 82 | 88 | 12/15 | Capabilities-to-Safety Pipeline Model | Models | 2.5k | 1 | -6 | 11d | 23 | — | 15% |
| 82 | 88 | 10/15 | AI Safety Talent Supply/Demand Gap Model | Models | 2.6k | 1 | -6 | 11d | 17 | — | 15% |
| 82 | 88 | 14/15 | Multi-Agent Safety | Responses | 3.1k | 0 | -6 | 10d | 14 | — | 17% |
| 82 | 80 | 11/15 | Compute Monitoring | Responses | 4.5k | 0 | 2 | 10d | 24 | — | 23% |
| 82 | 88 | 10/15 | Voluntary Industry Commitments | Responses | 3.6k | 3 | -6 | 11d | 8 | — | 24% |
| 82 | 85 | 13/15 | International Coordination Mechanisms | Responses | 3.8k | 0 | -3 | 10d | 42 | 1 | 25% |
| 82 | 88 | 8/15 | California SB 1047 | Responses | 3.5k | 0 | -6 | 10d | 28 | — | 19% |
| 82 | 88 | 10/15 | EU AI Act | Responses | 1.5k | 5 | -6 | 11d | 19 | — | 16% |
| 82 | 85 | 11/15 | Open Source Safety | Responses | 2.1k | 0 | -3 | 10d | 56 | — | 13% |
| 82 | 88 | 14/15 | Pause Advocacy | Responses | 4.0k | 0 | -6 | 10d | 51 | 2 | 20% |
| 82 | 88 | 11/15 | Emergent Capabilities | Risks | 2.2k | 1 | -6 | 10d | 48 | — | 17% |
| 82 | 82 | 11/15 | Goal Misgeneralization | Risks | 2.5k | 8 | 0 | 10d | 24 | — | 22% |
| 82 | 82 | 11/15 | Mesa-Optimization | Risks | 2.9k | 9 | 0 | 10d | 25 | — | 23% |
| 82 | 82 | 11/15 | Treacherous Turn | Risks | 2.7k | 4 | 0 | 10d | 20 | — | 23% |
| 82 | 78 | 13/15 | Bioweapons | Risks | 9.9k | 8 | 4 | 10d | 72 | — | 21% |
| 82 | 78 | 10/15 | Lock-in | Risks | 3.5k | 9 | 4 | 10d | 107 | — | 23% |
| 82 | 78 | 11/15 | Multipolar Trap | Risks | 3.3k | 8 | 4 | 10d | 16 | — | 20% |
| 82 | 88 | 10/15 | Racing Dynamics | Risks | 1.9k | 30 | -6 | 14d | 53 | — | 21% |
| 82 | 91 | 15/15 | Biological Threat Exposure | Parameters | 3.1k | 4 | -9 | 9d | 5 | 2 | 19% |
| 82 | 91 | 15/15 | Human Oversight Quality | Parameters | 3.5k | 10 | -9 | 9d | 25 | 11 | 19% |
| 81 | 88 | 14/15 | Large Language Models | Foundation models | 2.4k | 0 | -7 | 10d | 14 | 6 | 20% |
| 81 | 80 | 11/15 | Reward Hacking | Risks | 3.1k | 12 | 1 | 10d | 31 | — | 21% |
| 80 | 78 | 12/15 | Agentic AI | Capabilities | 4.4k | 5 | 2 | 10d | 42 | — | 22% |
| 80 | 68 | 12/15 | Alignment Robustness Trajectory | Models | 1.5k | 0 | 12 | 9d | — | — | 15% |
| 80 | 91 | 15/15 | International Coordination | Parameters | 3.0k | 9 | -11 | 9d | 14 | 6 | 23% |
| 80 | 91 | 15/15 | Safety Culture Strength | Parameters | 2.4k | 7 | -11 | 9d | 7 | 9 | 16% |
| 79 | 82 | 9/15 | The Case FOR AI Existential Risk | Debates | 4.9k | 0 | -3 | 10d | 37 | — | 24% |
| 79 | 82 | 10/15 | AGI Development | Forecasting | 1.2k | 0 | -3 | 11d | 18 | — | 17% |
| 79 | 87 | 10/15 | AI-Human Hybrid Systems | Responses | 1.9k | 0 | -8 | 11d | 34 | — | 16% |
| 79 | 87 | 12/15 | Model Registries | Responses | 1.8k | 0 | -8 | 10d | — | — | 17% |
| 79 | 78 | 10/15 | Governance-Focused Worldview | Worldviews | 3.8k | 0 | 1 | 10d | 39 | — | 21% |
| 78 | 87 | 13/15 | Misuse Risk Cruxes | Cruxes | 1.8k | 0 | -9 | 10d | 15 | 14 | 14% |
| 78 | 82 | 10/15 | Why Alignment Might Be Easy | Debates | 3.9k | 0 | -4 | 10d | 41 | — | 22% |
| 78 | 82 | 8/15 | Pause and Redirect - The Deliberate Path | Future projections | 5.0k | 0 | -4 | 10d | 46 | — | 25% |
| 78 | 85 | 10/15 | Compute & Hardware | Metrics | 3.7k | 2 | -7 | 10d | 78 | — | 14% |
| 78 | 82 | 12/15 | Compounding Risks Analysis | Models | 1.8k | 3 | -4 | 12d | 28 | — | 16% |
| 78 | 88 | 11/15 | Autonomous Weapons Escalation Model | Models | 2.6k | 0 | -10 | 11d | 23 | — | 14% |
| 78 | 87 | 11/15 | Bioweapons Attack Chain Model | Models | 2.0k | 1 | -9 | 12d | 21 | — | 15% |
| 78 | 82 | 9/15 | Autonomous Cyber Attack Timeline | Models | 1.7k | 1 | -4 | 11d | 35 | — | 16% |
| 78 | 82 | 10/15 | Instrumental Convergence Framework | Models | 2.4k | 0 | -4 | 12d | 21 | — | 18% |
| 78 | 80 | 11/15 | Multi-Actor Strategic Landscape | Models | 1.9k | 0 | -2 | 10d | 26 | — | 16% |
| 78 | 82 | 12/15 | Reward Hacking Taxonomy and Severity Model | Models | 6.6k | 0 | -4 | 10d | 22 | — | 23% |
| 78 | 82 | 10/15 | METR | Organizations | 3.7k | 7 | -4 | 10d | 23 | — | 24% |
| 78 | 82 | 11/15 | Anthropic Core Views | Responses | 3.1k | 1 | -4 | 10d | 57 | — | 20% |
| 78 | 82 | 10/15 | Constitutional AI | Responses | 1.1k | 0 | -4 | 11d | 18 | — | 14% |
| 78 | 82 | 10/15 | Red Teaming | Responses | 967 | 0 | -4 | 11d | 12 | — | 18% |
| 78 | 82 | 11/15 | Representation Engineering | Responses | 1.7k | 0 | -4 | 10d | — | — | 17% |
| 78 | 82 | 11/15 | Influencing AI Labs Directly | Responses | 3.4k | 0 | -4 | 10d | 25 | — | 19% |
| 78 | 82 | 9/15 | Field Building Analysis | Responses | 3.4k | 0 | -4 | 10d | 46 | — | 16% |
| 78 | 82 | 11/15 | AI Chip Export Controls | Responses | 4.2k | 0 | -4 | 10d | 34 | — | 23% |
| 78 | 82 | 12/15 | Hardware-Enabled Governance | Responses | 1.8k | 0 | -4 | 10d | — | — | 17% |
| 78 | 82 | 11/15 | International Compute Regimes | Responses | 5.5k | 0 | -4 | 10d | 29 | — | 23% |
| 78 | 82 | 14/15 | Compute Thresholds | Responses | 3.4k | 0 | -4 | 10d | — | 13 | 23% |
| 78 | 82 | 10/15 | International AI Safety Summits | Responses | 4.0k | 3 | -4 | 10d | 13 | — | 25% |
| 78 | 82 | 11/15 | Seoul AI Safety Summit Declaration | Responses | 2.9k | 0 | -4 | 10d | 31 | — | 21% |
| 78 | 82 | 11/15 | Colorado AI Act (SB 205) | Responses | 3.4k | 0 | -4 | 10d | 48 | — | 21% |
| 78 | 87 | 15/15 | US Executive Order on AI | Responses | 3.3k | 5 | -9 | 10d | 30 | 2 | 21% |
| 78 | 82 | 11/15 | AI Safety Institutes | Responses | 4.3k | 0 | -4 | 10d | 37 | — | 25% |
| 78 | 82 | 11/15 | AI Whistleblower Protections | Responses | 1.8k | 0 | -4 | 10d | — | — | 14% |
| 78 | 82 | 11/15 | Distributional Shift | Risks | 2.6k | 1 | -4 | 10d | 14 | — | 18% |
| 78 | 81 | 9/15 | Sandbagging | Risks | 2.0k | 7 | -3 | 10d | 27 | — | 20% |
| 78 | 88 | 11/15 | Authoritarian Takeover | Risks | 2.6k | 2 | -10 | 10d | 31 | — | 18% |
| 78 | 91 | 15/15 | Epistemic Health | Parameters | 2.7k | 11 | -13 | 9d | 15 | 3 | 25% |
| 77 | 80 | 11/15 | Scientific Research Capabilities | Capabilities | 7.0k | 0 | -3 | 10d | 28 | — | 21% |
| 77 | 78 | 12/15 | Flash Dynamics Threshold Model | Models | 2.9k | 1 | -1 | 12d | — | — | 16% |
| 75 | 70 | 12/15 | Parameter Interaction Network | Models | 1.4k | 0 | 5 | 9d | — | — | 14% |
| 75 | 82 | 7/15 | Expertise Atrophy Progression Model | Models | 2.6k | 2 | -7 | 13d | — | — | 19% |
| 75 | 82 | 11/15 | AI Safety Training Programs | Responses | 1.7k | 0 | -7 | 10d | — | — | 14% |
| 75 | 87 | 11/15 | Epistemic Security | Responses | 3.5k | 1 | -12 | 10d | 47 | — | 23% |
| 75 | 82 | 11/15 | Institutional Decision Capture | Risks | 7.7k | 1 | -7 | 10d | 39 | — | 21% |
| 75 | 91 | 12/15 | AI Control Concentration | Parameters | 3.1k | 8 | -16 | 9d | 58 | — | 18% |
| 75 | — | 5/15 | Coordination Capacity | Parameters | 251 | 3 | — | 9d | — | — | 13% |
| 75 | 91 | 15/15 | Cyber Threat Exposure | Parameters | 3.5k | 4 | -16 | 9d | 22 | 3 | 19% |
| 75 | 91 | 15/15 | Regulatory Capacity | Parameters | 3.4k | 4 | -16 | 9d | 14 | 11 | 22% |
| 74 | 82 | 9/15 | Safety Research & Resources | Metrics | 1.6k | 3 | -8 | — | 27 | — | 15% |
| 74 | 71 | 9/15 | Irreversibility Threshold Model | Models | 3.1k | 0 | 3 | 12d | — | — | 19% |
| 74 | 82 | 11/15 | Authentication Collapse Timeline Model | Models | 6.3k | 2 | -8 | 12d | 5 | — | 23% |
| 74 | 78 | 11/15 | RLHF / Constitutional AI | Responses | 2.3k | 3 | -4 | 10d | 27 | — | 19% |
| 74 | 82 | 10/15 | Corporate Responses | Responses | 1.0k | 0 | -8 | 11d | 12 | — | 15% |
| 74 | 82 | 6/15 | Authoritarian Tools | Risks | 1.7k | 5 | -8 | 14d | 42 | — | 20% |
| 73 | 82 | 10/15 | NIST AI Risk Management Framework | Responses | 2.9k | 1 | -9 | 10d | 10 | — | 24% |
| 73 | 82 | 11/15 | AI Standards Bodies | Responses | 3.6k | 0 | -9 | 10d | 27 | — | 24% |
| 73 | 82 | 7/15 | Proliferation | Risks | 1.3k | 4 | -9 | 14d | 64 | — | 17% |
| 72 | 76 | 10/15 | Structural Risk Cruxes | Cruxes | 1.9k | 0 | -4 | 10d | 34 | — | 21% |
| 72 | 78 | 9/15 | Misaligned Catastrophe - The Bad Ending | Future projections | 4.3k | 0 | -6 | 10d | 33 | — | 24% |
| 72 | 82 | 11/15 | Technical Pathway Decomposition | Models | 2.3k | 0 | -10 | 10d | 24 | — | 19% |
| 72 | 78 | 11/15 | Automation Bias Cascade Model | Models | 3.7k | 2 | -6 | 11d | — | — | 20% |
| 72 | 82 | 10/15 | Cyber Offense-Defense Balance Model | Models | 2.7k | 1 | -10 | 12d | — | — | 19% |
| 72 | 78 | 8/15 | Feedback Loop & Cascade Model | Models | 1.2k | 0 | -6 | 11d | — | — | 16% |
| 72 | 78 | 11/15 | Safety-Capability Tradeoff Model | Models | 5.0k | 3 | -6 | 12d | — | — | 21% |
| 72 | 70 | 12/15 | Safety Culture Equilibrium | Models | 1.4k | 0 | 2 | 9d | — | — | 14% |
| 72 | 78 | 9/15 | Lock-in Probability Model | Models | 467 | 0 | -6 | 9d | 11 | — | — |
| 72 | 82 | 8/15 | Societal Response & Adaptation Model | Models | 992 | 0 | -10 | 11d | — | — | 16% |
| 72 | 82 | 10/15 | Anthropic | Organizations | 1.5k | 31 | -10 | 14d | 13 | — | 19% |
| 72 | 78 | 11/15 | Agent Foundations | Responses | 2.3k | 0 | -6 | 10d | 20 | — | 16% |
| 72 | 82 | 11/15 | Preference Optimization Methods | Responses | 1.9k | 0 | -10 | 10d | — | — | 19% |
| 72 | 83 | 10/15 | Content Authentication & Provenance | Responses | 2.5k | 1 | -11 | 10d | 29 | — | 13% |
| 72 | 82 | 11/15 | AI-Assisted Deliberation Platforms | Responses | 3.6k | 0 | -10 | 10d | 68 | — | 19% |
| 72 | 80 | 11/15 | Failed and Stalled AI Policy Proposals | Responses | 3.7k | 0 | -8 | 10d | 28 | — | 22% |
| 72 | 82 | 11/15 | Cyberweapons | Risks | 3.2k | 8 | -10 | 10d | 61 | — | 18% |
| 72 | 91 | 12/15 | Information Authenticity | Parameters | 2.6k | 6 | -19 | 9d | 50 | — | 22% |
| 72 | 91 | 15/15 | Institutional Quality | Parameters | 3.1k | 6 | -19 | 9d | 9 | 7 | 22% |
| 70 | 68 | 12/15 | Regulatory Capacity Threshold Model | Models | 1.4k | 0 | 2 | 9d | — | — | 14% |
| 70 | 91 | 15/15 | Human Expertise | Parameters | 3.5k | 6 | -21 | 9d | 23 | 3 | 19% |
| 70 | 91 | 15/15 | Reality Coherence | Parameters | 3.3k | 2 | -21 | 9d | 15 | 2 | 25% |
| 70 | 91 | 15/15 | Societal Resilience | Parameters | 2.8k | 3 | -21 | 9d | 4 | 3 | 18% |
| 68 | 72 | 8/15 | Expert Opinion | Metrics | 2.3k | 2 | -4 | 14d | — | — | 19% |
| 68 | 80 | 11/15 | Canada AIDA | Responses | 4.1k | 0 | -12 | 10d | 19 | — | 20% |
| 68 | 80 | 11/15 | US State AI Legislation | Responses | 3.8k | 0 | -12 | 10d | 21 | — | 21% |
| 68 | 82 | 10/15 | Public Education | Responses | 917 | 0 | -14 | 11d | 31 | — | 11% |
| 68 | 82 | 10/15 | Flash Dynamics | Risks | 3.3k | 4 | -14 | 10d | 27 | — | 17% |
| 67 | 72 | 3/15 | Open vs Closed Source AI | Debates | 407 | 0 | -5 | — | — | — | — |
| 67 | 72 | 9/15 | AI-Augmented Forecasting | Responses | 2.6k | 0 | -5 | 11d | 10 | — | 20% |
| 67 | 80 | 12/15 | Autonomous Weapons | Risks | 2.9k | 3 | -13 | 10d | 33 | — | 17% |
| 67 | 78 | 10/15 | Enfeeblement | Risks | 1.3k | 3 | -11 | 14d | 21 | — | 14% |
| 66 | 72 | 9/15 | AI Lab Incentives Model | Models | 1.3k | 4 | -6 | 12d | — | — | 16% |
| 65 | 72 | 3/15 | Epistemic Cruxes | Cruxes | 600 | 0 | -7 | 14d | — | — | 13% |
| 65 | 91 | 15/15 | Societal Trust | Parameters | 2.8k | 10 | -26 | 9d | 12 | 10 | 23% |
| 64 | 82 | 10/15 | Multipolar Competition - The Fragmented World | Future projections | 4.6k | 0 | -18 | 10d | 25 | — | 24% |
| 64 | 78 | 10/15 | Whistleblower Dynamics Model | Models | 6.4k | 0 | -14 | 11d | — | — | 22% |
| 64 | 72 | 6/15 | Economic Disruption Impact Model | Models | 2.1k | 2 | -8 | 13d | — | — | 17% |
| 64 | 72 | 11/15 | Winner-Take-All Concentration Model | Models | 3.0k | 2 | -8 | 12d | — | — | 17% |
| 64 | 72 | 8/15 | Consensus Manufacturing Dynamics Model | Models | 1.5k | 0 | -8 | 12d | — | — | 15% |
| 64 | 82 | 10/15 | AI Surveillance and Regime Durability Model | Models | 3.3k | 0 | -18 | 10d | 29 | — | 18% |
| 64 | 78 | 10/15 | UK AI Safety Institute | Organizations | 3.6k | 9 | -14 | 10d | 20 | — | 23% |
| 64 | 72 | 10/15 | Apollo Research | Organizations | 1.7k | 4 | -8 | 14d | 4 | — | 20% |
| 64 | 82 | 10/15 | Prediction Markets | Responses | 1.6k | 0 | -18 | 11d | 34 | — | 10% |
| 64 | 82 | 9/15 | Steganography | Risks | 1.1k | 0 | -18 | 11d | 19 | — | 18% |
| 64 | 65 | 8/15 | Sycophancy | Risks | 338 | 7 | -1 | 9d | 4 | — | — |
| 64 | 72 | 9/15 | Authentication Collapse | Risks | 963 | 2 | -8 | 14d | 15 | — | 12% |
| 64 | 78 | 10/15 | Mass Surveillance | Risks | 3.1k | 5 | -14 | 10d | — | — | 21% |
| 64 | 82 | 10/15 | Winner-Take-All Dynamics | Risks | 1.5k | 5 | -18 | 14d | 30 | — | 15% |
| 63 | 72 | 12/15 | Trust Cascade Failure Model | Models | 3.5k | 4 | -9 | 12d | 3 | — | 22% |
| 62 | 78 | 10/15 | Is Interpretability Sufficient for Safety? | Debates | 1.8k | 0 | -16 | 10d | 16 | — | 21% |
| 62 | 72 | 2/15 | Should We Pause AI Development? | Debates | 673 | 0 | -10 | — | — | — | 13% |
| 62 | 78 | 8/15 | Slow Takeoff Muddle - Muddling Through | Future projections | 4.9k | 0 | -16 | 10d | 33 | — | 26% |
| 62 | 78 | 9/15 | Geopolitics & Coordination | Metrics | 3.3k | 2 | -16 | — | 27 | — | 19% |
| 62 | 72 | 6/15 | Meta & Structural Indicators | Metrics | 3.1k | 0 | -10 | — | 56 | — | 18% |
| 62 | 78 | 10/15 | LAWS Proliferation Model | Models | 5.4k | 0 | -16 | 12d | — | — | 21% |
| 62 | 72 | 10/15 | Deepfakes Authentication Crisis Model | Models | 4.7k | 2 | -10 | 11d | — | — | 23% |
| 62 | 72 | 8/15 | Institutional Adaptation Speed Model | Models | 2.4k | 3 | -10 | 11d | — | — | 16% |
| 62 | 72 | 9/15 | Electoral Impact Assessment Model | Models | 2.5k | 0 | -10 | 11d | — | — | 15% |
| 62 | 78 | 10/15 | Authoritarian Tools Diffusion Model | Models | 7.0k | 0 | -16 | 12d | — | — | 22% |
| 62 | 78 | 10/15 | Sycophancy Feedback Loop Model | Models | 3.3k | 3 | -16 | 13d | 4 | — | 20% |
| 62 | 72 | 12/15 | Epistemic Collapse Threshold Model | Models | 1.4k | 3 | -10 | 12d | — | — | 22% |
| 62 | 78 | 10/15 | ARC (Alignment Research Center) | Organizations | 1.6k | 8 | -16 | 14d | 12 | — | 20% |
| 62 | 82 | 10/15 | Epoch AI | Organizations | 1.5k | 2 | -20 | 14d | 28 | — | 15% |
| 62 | 82 | 12/15 | GovAI | Organizations | 1.7k | 5 | -20 | 10d | — | 1 | 12% |
| 62 | 78 | 10/15 | MIRI | Organizations | 2.0k | 9 | -16 | 14d | 16 | — | 16% |
| 62 | 78 | 11/15 | China AI Regulations | Responses | 3.6k | 1 | -16 | 10d | 44 | — | 23% |
| 62 | 82 | 11/15 | Labor Transition & Economic Resilience | Responses | 1.7k | 0 | -20 | 10d | — | — | 15% |
| 62 | 72 | 4/15 | Automation Bias | Risks | 1.5k | 3 | -10 | 14d | — | — | 17% |
| 62 | 78 | 11/15 | Consensus Manufacturing | Risks | 3.5k | 4 | -16 | 10d | 30 | — | 20% |
| 62 | 78 | 10/15 | Epistemic Sycophancy | Risks | 3.5k | 3 | -16 | 10d | 28 | — | 19% |
| 62 | 78 | 10/15 | AI Knowledge Monopoly | Risks | 1.9k | 1 | -16 | 14d | 38 | — | 14% |
| 62 | 78 | 10/15 | Epistemic Learned Helplessness | Risks | 1.5k | 4 | -16 | 5d | 21 | — | 12% |
| 62 | 82 | 10/15 | Scientific Knowledge Corruption | Risks | 1.2k | 1 | -20 | 14d | 30 | — | 11% |
| 62 | 68 | 6/15 | Trust Cascade Failure | Risks | 1.8k | 2 | -6 | 14d | 9 | — | 22% |
| 62 | 72 | 8/15 | Disinformation | Risks | 3.0k | 12 | -10 | 14d | 107 | — | 23% |
| 62 | 72 | 10/15 | Irreversibility | Risks | 3.5k | 4 | -10 | 10d | 36 | — | 23% |
| 60 | 91 | 15/15 | Human Agency | Parameters | 3.0k | 9 | -31 | 9d | 19 | 3 | 21% |
| 60 | 91 | 15/15 | Preference Authenticity | Parameters | 3.2k | 6 | -31 | 9d | 20 | — | 21% |
| 58 | 72 | 9/15 | Media-Policy Feedback Loop Model | Models | 2.8k | 1 | -14 | 11d | — | — | 21% |
| 58 | 72 | 10/15 | Redwood Research | Organizations | 1.5k | 6 | -14 | 14d | 16 | — | 18% |
| 58 | 82 | 11/15 | Deepfake Detection | Responses | 1.7k | 0 | -24 | 10d | — | — | 13% |
| 55 | 78 | 11/15 | US AI Safety Institute | Organizations | 4.1k | 2 | -23 | 10d | 26 | — | 25% |
| 55 | 62 | 7/15 | Optimistic Alignment Worldview | Worldviews | 3.6k | 0 | -7 | — | 12 | — | 22% |
| 55 | 91 | 12/15 | Economic Stability | Parameters | 2.4k | 6 | -36 | 9d | 50 | — | 17% |
| 54 | 78 | 11/15 | Mainstream Era (2020-Present) | History | 4.3k | 0 | -24 | 10d | 14 | — | 18% |
| 54 | 72 | 5/15 | Economic & Labor Metrics | Metrics | 2.9k | 3 | -18 | — | 89 | — | 14% |
| 54 | 78 | 7/15 | Preference Manipulation Drift Model | Models | 2.0k | 2 | -24 | 12d | — | — | 16% |
| 52 | 72 | 11/15 | Disinformation Detection Arms Race Model | Models | 2.7k | 1 | -20 | 11d | — | — | 18% |
| 52 | 68 | 7/15 | Trust Erosion Dynamics Model | Models | 1.7k | 2 | -16 | 12d | — | — | 16% |
| 52 | 72 | 10/15 | OpenAI | Organizations | 2.1k | 16 | -20 | 14d | 12 | — | 19% |
| 52 | 72 | 9/15 | Legal Evidence Crisis | Risks | 1.1k | 1 | -20 | 14d | 21 | — | 12% |
| 52 | 78 | 10/15 | Deepfakes | Risks | 1.5k | 11 | -26 | 14d | 35 | — | 18% |
| 52 | 68 | 8/15 | AI Doomer Worldview | Worldviews | 2.1k | 0 | -16 | — | 10 | — | 21% |
| 48 | 72 | 3/15 | Is Scaling All You Need? | Debates | 333 | 0 | -24 | — | — | — | 13% |
| 48 | 52 | 8/15 | Long-Timelines Technical Worldview | Worldviews | 3.1k | 0 | -4 | — | 10 | — | 21% |
| 45 | 72 | 11/15 | Cyber Psychosis Cascade Model | Models | 2.6k | 0 | -27 | 11d | — | — | 17% |
| 45 | 72 | 8/15 | Surveillance Chilling Effects Model | Models | 2.3k | 0 | -27 | 11d | — | — | 16% |
| 44 | 42 | 5/15 | Deep Learning Revolution (2012-2020) | History | 2.2k | 0 | 2 | 14d | — | — | 16% |
| 44 | 72 | 10/15 | Post-Incident Recovery Model | Models | 1.9k | 0 | -28 | 11d | — | — | 13% |
| 44 | 72 | 10/15 | Reality Fragmentation Network Model | Models | 1.8k | 3 | -28 | 11d | 1 | — | 16% |
| 44 | 72 | 10/15 | CHAI (Center for Human-Compatible AI) | Organizations | 1.3k | 1 | -28 | 14d | 10 | — | 17% |
| 43 | 52 | 3/15 | Government Regulation vs Industry Self-Governance | Debates | 808 | 0 | -9 | — | — | — | 15% |
| 43 | 72 | 10/15 | Public Opinion Evolution Model | Models | 2.9k | 0 | -29 | 11d | — | — | 21% |
| 42 | 52 | 2/15 | Is AI Existential Risk Real? | Debates | 32 | 0 | -10 | — | — | — | — |
| 42 | 48 | 2/15 | Aligned AGI - The Good Ending | Future projections | 3.6k | 0 | -6 | — | — | — | 26% |
| 42 | 48 | 5/15 | Early Warnings (1950s-2000) | History | 2.6k | 0 | -6 | 14d | — | — | 17% |
| 42 | 72 | 8/15 | Public Opinion & Awareness | Metrics | 2.6k | 2 | -30 | — | 10 | — | 16% |
| 42 | 78 | 11/15 | Expertise Atrophy Cascade Model | Models | 4.2k | 2 | -36 | 12d | — | — | 20% |
| 42 | 71 | 10/15 | Fraud Sophistication Curve Model | Models | 3.6k | 0 | -29 | 11d | — | — | 22% |
| 42 | 72 | 10/15 | FAR AI | Organizations | 1.4k | 0 | -30 | 5d | 11 | — | 16% |
| 42 | 78 | 11/15 | Epistemic Infrastructure | Responses | 2.8k | 0 | -36 | 10d | 59 | — | 20% |
| 42 | 72 | 9/15 | Historical Revisionism | Risks | 1.3k | 2 | -30 | 14d | 19 | — | 12% |
| 42 | 78 | 10/15 | AI-Powered Fraud | Risks | 1.4k | 1 | -36 | 14d | 28 | — | 18% |
| 38 | 45 | 4/15 | The MIRI Era (2000-2015) | History | 2.5k | 0 | -7 | 14d | — | — | 20% |
| 38 | 72 | 10/15 | Conjecture | Organizations | 1.4k | 0 | -34 | 14d | 16 | — | 15% |
| 38 | 72 | 7/15 | Cyber Psychosis & AI-Induced Psychological Harm | Risks | 938 | 0 | -34 | 14d | 47 | — | 11% |
| 35 | 72 | 3/15 | When Will AGI Arrive? | Debates | 1.0k | 0 | -37 | — | — | — | 15% |
| 35 | 42 | 5/15 | Key Publications | History | 2.6k | 0 | -7 | 14d | — | — | 20% |
| 35 | 78 | 10/15 | CAIS (Center for AI Safety) | Organizations | 847 | 2 | -43 | 14d | 20 | — | 15% |
| 35 | 78 | 13/15 | Demis Hassabis | People | 3.2k | 1 | -43 | 10d | 20 | — | 14% |
| 32 | 48 | 2/15 | xAI | Organizations | 2.2k | 1 | -16 | 14d | — | — | 14% |
| 32 | 48 | 2/15 | Ilya Sutskever | People | 1.1k | 1 | -16 | 14d | — | — | 15% |
| 25 | 72 | 10/15 | Google DeepMind | Organizations | 1.9k | 8 | -47 | 5d | 14 | — | 16% |
| 25 | 42 | 2/15 | Chris Olah | People | 1.1k | 4 | -17 | 14d | — | — | 16% |
| 25 | 42 | 2/15 | Dan Hendrycks | People | 994 | 1 | -17 | 14d | — | — | 15% |
| 25 | 72 | 9/15 | Geoffrey Hinton | People | 1.8k | 2 | -47 | 14d | 20 | — | 17% |
| 25 | 78 | 10/15 | Holden Karnofsky | People | 1.5k | 1 | -53 | 14d | 24 | — | 17% |
| 25 | 42 | 2/15 | Jan Leike | People | 893 | 5 | -17 | 14d | — | — | 15% |
| 25 | 52 | 2/15 | Stuart Russell | People | 977 | 0 | -27 | 14d | — | — | 17% |
| 25 | 78 | 9/15 | Yoshua Bengio | People | 1.5k | 2 | -53 | 14d | 15 | — | 17% |
| 25 | 78 | 5/15 | Epistemic Collapse | Risks | 147 | 9 | -53 | 9d | — | — | 14% |
| 25 | 91 | 9/15 | Expertise Atrophy | Risks | 610 | 4 | -66 | 9d | — | — | 17% |
| 25 | 91 | 9/15 | Preference Manipulation | Risks | 540 | 2 | -66 | 9d | 7 | — | 19% |
| 25 | 78 | 4/15 | Reality Fragmentation | Risks | 154 | 5 | -53 | 9d | — | — | 14% |
| 25 | 91 | 9/15 | Trust Decline | Risks | 583 | 7 | -66 | 9d | 7 | — | 16% |
| 25 | 91 | 9/15 | Concentration of Power | Risks | 479 | 17 | -66 | 9d | 9 | — | 16% |
| 25 | 91 | 9/15 | Economic Disruption | Risks | 466 | 7 | -66 | 9d | 13 | — | 16% |
| 25 | 91 | 9/15 | Erosion of Human Agency | Risks | 572 | 8 | -66 | 9d | 17 | — | 19% |
| 24 | 72 | 10/15 | Toby Ord | People | 1.9k | 2 | -48 | 14d | 25 | — | 15% |
| 23 | 82 | 10/15 | Paul Christiano | People | 1.2k | 6 | -59 | 5d | 18 | — | 15% |
| 22 | 35 | 2/15 | Connor Leahy | People | 1.0k | 1 | -13 | 14d | — | — | 15% |
| 22 | 72 | 10/15 | Daniela Amodei | People | 858 | 0 | -50 | 11d | 6 | — | 15% |
| 22 | 72 | 10/15 | Dario Amodei | People | 1.6k | 3 | -50 | 14d | 21 | — | 17% |
| 22 | 48 | 2/15 | Eliezer Yudkowsky | People | 703 | 2 | -26 | 14d | — | — | 16% |
| 15 | 35 | 2/15 | Neel Nanda | People | 944 | 1 | -20 | 14d | — | — | 16% |
| 15 | 42 | 2/15 | Nick Bostrom | People | 960 | 2 | -27 | 14d | — | — | 16% |
| 15 | 25 | 2/15 | External Resources | Other | 27 | 0 | -10 | — | — | — | — |
| 5 | 45 | 10/15 | Model Style Guide | Other | 2.4k | 0 | -40 | — | — | — | 15% |
| — | — | 2/15 | Concepts Directory | Other | 25 | 0 | — | — | — | — | — |
| — | — | 1/15 | _ENHANCEMENT_TODO | Models | 479 | 0 | — | — | — | — | — |
| — | — | 10/15 | _STYLE_GUIDE | Models | 887 | 0 | — | — | — | — | 15% |
| — | — | 11/15 | Intervention Portfolio | Responses | 1.7k | 0 | — | 4d | — | — | 16% |
| — | — | 4/15 | Adoption (AI Capabilities) | Factors | 249 | 0 | — | 1d | — | — | 11% |
| — | — | 4/15 | Algorithms (AI Capabilities) | Factors | 220 | 0 | — | 1d | — | — | — |
| — | — | 4/15 | Compute (AI Capabilities) | Factors | 144 | 0 | — | 1d | — | — | — |
| — | — | 10/15 | Companies (AI Ownership) | Factors | 557 | 0 | — | 2d | — | — | 15% |
| — | — | 10/15 | Countries (AI Ownership) | Factors | 470 | 0 | — | 2d | — | — | 15% |
| — | — | 10/15 | Shareholders (AI Ownership) | Factors | 477 | 0 | — | 2d | — | — | 15% |
| — | — | 10/15 | Coordination (AI Uses) | Factors | 477 | 0 | — | 2d | — | — | 15% |
| — | — | 10/15 | Governments (AI Uses) | Factors | 516 | 0 | — | 2d | — | — | 15% |
| — | — | 9/15 | Industries (AI Uses) | Factors | 433 | 0 | — | 2d | — | — | 14% |
| — | — | 10/15 | Recursive AI Capabilities | Factors | 691 | 0 | — | 2d | — | — | 13% |
| — | — | 4/15 | Adaptability (Civ. Competence) | Factors | 235 | 0 | — | 1d | — | — | 18% |
| — | — | 4/15 | Epistemics (Civ. Competence) | Factors | 270 | 0 | — | 1d | — | — | 18% |
| — | — | 4/15 | Governance (Civ. Competence) | Factors | 236 | 0 | — | 1d | — | — | 17% |
| — | — | 4/15 | AI Governance | Factors | 215 | 0 | — | 1d | — | — | 17% |
| — | — | 4/15 | Lab Safety Practices | Factors | 227 | 0 | — | 1d | — | — | 11% |
| — | — | 4/15 | Technical AI Safety | Factors | 213 | 0 | — | 1d | — | — | — |
| — | — | 4/15 | Biological Threat Exposure | Factors | 208 | 4 | — | 1d | — | — | 19% |
| — | — | 3/15 | Cyber Threat Exposure | Factors | 200 | 4 | — | 1d | — | — | 19% |
| — | — | 4/15 | Robot Threat Exposure | Factors | 214 | 0 | — | 1d | — | — | 23% |
| — | — | 4/15 | Surprise Threat Exposure | Factors | 214 | 0 | — | 1d | — | — | 19% |
| — | — | 4/15 | Economic Stability | Factors | 205 | 6 | — | 1d | — | — | 17% |
| — | — | 4/15 | Racing Intensity | Factors | 239 | 13 | — | 1d | — | — | 21% |
| — | — | 9/15 | Existential Catastrophe | Outcomes | 491 | 4 | — | 9d | — | — | 17% |
| — | — | 9/15 | Long-term Trajectory | Outcomes | 717 | 3 | — | 5d | — | — | 17% |
| — | — | 10/15 | Societal Adaptability | Parameters | 380 | 0 | — | 9d | — | — | 18% |
| — | — | 9/15 | Epistemic Foundation | Parameters | 376 | 0 | — | 9d | — | — | 18% |
| — | — | 9/15 | Governance Capacity | Parameters | 355 | 0 | — | 9d | — | — | 17% |
| — | — | 10/15 | Robot Threat Exposure | Parameters | 782 | 0 | — | 2d | — | — | 23% |
| — | — | 11/15 | Surprise Threat Exposure | Parameters | 816 | 0 | — | 2d | — | — | 19% |
| — | — | 13/15 | Gradual AI Takeover | Scenarios | 841 | 0 | — | 2d | — | 3 | 16% |
| — | — | 10/15 | Rapid AI Takeover | Scenarios | 757 | 0 | — | 2d | — | 2 | 16% |
| — | — | 10/15 | Rogue Actor Catastrophe | Scenarios | 811 | 0 | — | 2d | — | — | 18% |
| — | — | 11/15 | State-Caused Catastrophe | Scenarios | 831 | 0 | — | 2d | — | — | 18% |
| — | — | 9/15 | Epistemic Lock-in | Scenarios | 785 | 0 | — | 2d | — | — | 18% |
| — | — | 10/15 | Power Lock-in | Scenarios | 948 | 0 | — | 5d | — | — | 14% |
| — | — | 10/15 | Suffering Lock-in | Scenarios | 731 | 0 | — | 2d | — | — | 12% |
| — | — | 11/15 | Value Lock-in | Scenarios | 925 | 0 | — | 2d | — | — | 16% |
| — | — | 2/15 | Parameter Table | Other | 0 | 0 | — | — | — | — | — |
| — | 15 | 2/15 | Browse by Tag | Other | 25 | 0 | — | — | — | — | — |
| — | — | 6/15 | Automation Tools | Other | 797 | 0 | — | — | — | — | 14% |
| — | 72 | 8/15 | Content Database System | Other | 928 | 0 | — | 11d | — | — | 14% |
| — | 45 | 7/15 | Enhancement Queue | Other | 276 | 0 | — | 11d | — | — | — |
| — | 72 | 7/15 | Knowledge Base Style Guide | Other | 777 | 0 | — | 12d | — | — | 10% |
| — | 45 | 8/15 | Mermaid Diagram Style Guide | Other | 422 | 0 | — | — | — | — | — |
| — | 25 | 3/15 | Project Roadmap | Other | 438 | 0 | — | 5d | — | — | 10% |