
People

This section profiles key individuals shaping AI safety research, policy, and public discourse.

  • Eliezer Yudkowsky - MIRI co-founder, early AI risk advocate
  • Nick Bostrom - FHI founder, Superintelligence author
  • Stuart Russell - UC Berkeley, Human Compatible author
  • Paul Christiano - ARC founder, originator of iterated amplification
  • Jan Leike - Former OpenAI alignment lead
  • Chris Olah - Anthropic, interpretability pioneer
  • Neel Nanda - DeepMind, mechanistic interpretability
  • Geoffrey Hinton - “Godfather of AI”, recent safety advocate
  • Yoshua Bengio - Turing Award winner, safety advocate
  • Connor Leahy - Conjecture CEO, public communicator
  • Holden Karnofsky - Coefficient Giving (formerly Open Philanthropy), key funder
  • Toby Ord - FHI, The Precipice author
  • Dan Hendrycks - CAIS director

Each profile includes:

  • Background - Career history and key contributions
  • Positions - Views on key AI safety debates
  • Affiliations - Organizations and collaborations
  • Key publications - Influential papers and writings