People
Overview
This section profiles key individuals shaping AI safety research, policy, and public discourse. Profiles include their contributions, positions on key debates, and organizational affiliations.
Featured Researchers
AI Safety Pioneers
- Eliezer Yudkowsky - MIRI founder, early AI risk advocate
- Nick Bostrom - FHI founder, Superintelligence author
- Stuart Russell - UC Berkeley, Human Compatible author
Alignment Researchers
- Paul Christiano - ARC founder, iterated amplification
- Jan Leike - Former OpenAI alignment lead
- Chris Olah - Anthropic, interpretability pioneer
- Neel Nanda - DeepMind, mechanistic interpretability
Lab Leaders
- Dario Amodei - Anthropic CEO
- Daniela Amodei - Anthropic President
- Demis Hassabis - DeepMind CEO
- Ilya Sutskever - Former OpenAI Chief Scientist
Public Voices
- Geoffrey Hinton - “Godfather of AI”, recent safety advocate
- Yoshua Bengio - Turing Award winner, safety advocate
- Connor Leahy - Conjecture CEO, public communicator
Effective Altruism & Policy
- Holden Karnofsky - Coefficient Giving, key funder
- Toby Ord - FHI, The Precipice author
- Dan Hendrycks - CAIS director
Profile Contents
Each profile includes:
- Background - Career history and key contributions
- Positions - Views on key AI safety debates
- Affiliations - Organizations and collaborations
- Key publications - Influential papers and writings