Government AI Safety Institutes
Summary
Government AI safety institutes evaluate AI systems, develop safety standards, and inform policy. They represent a new approach to AI governance: technical safety evaluation as a government function.
AI Safety Institutes
Government Institutes
UK AI Safety Institute
The first government AI safety institute, established by the UK in 2023. Focuses on evaluating frontier models.
US AI Safety Institute
Part of NIST, the US National Institute of Standards and Technology. Develops AI safety standards and evaluation frameworks.
Key Functions
| Function | Description |
|---|---|
| Pre-deployment evaluation | Testing frontier models before release |
| Standards development | Creating safety benchmarks and requirements |
| Incident monitoring | Tracking AI-related incidents and near-misses |
| International coordination | Working with other governments on AI safety |
| Policy advice | Informing legislation and regulation |
Emerging International Network
Multiple countries are establishing AI safety institutes, potentially forming an international network for:
- Shared evaluation frameworks
- Coordinated safety standards
- Information sharing about risks
- Joint research initiatives