Skip to content

Government AI Safety Institutes

Government AI safety institutes evaluate AI systems, develop safety standards, and inform policy. They represent a new approach to AI governance—technical safety evaluation as a government function.

Government Institutes
government

UK AI Safety Institute

First government AI safety institute. Focuses on frontier model evaluation.

government

US AI Safety Institute

Part of NIST. Develops AI safety standards and evaluation frameworks.

FunctionDescription
Pre-deployment evaluationTesting frontier models before release
Standards developmentCreating safety benchmarks and requirements
Incident monitoringTracking AI-related incidents and near-misses
International coordinationWorking with other governments on AI safety
Policy adviceInforming legislation and regulation

Multiple countries are establishing AI safety institutes, potentially forming an international network for:

  • Shared evaluation frameworks
  • Coordinated safety standards
  • Information sharing about risks
  • Joint research initiatives