Key Metrics & Estimates
Overview
Understanding AI risk requires tracking concrete, measurable indicators. This section catalogs key metrics across domains, from compute growth to public opinion, that help assess where we are and where we’re heading.
These metrics serve multiple purposes:
- Situational awareness: Understanding the current state of AI development
- Forecasting: Inputs for predicting future trajectories
- Evaluation: Measuring the effectiveness of safety interventions
- Communication: Grounding abstract discussions in concrete numbers
Metric Categories
| Category | Description |
|---|---|
| Compute & Hardware | GPU production, training compute, efficiency trends (see the extrapolation sketch after this table) |
| AI Capabilities | Benchmarks, task performance, capability trajectories |
| Economic & Labor | Investment, automation, productivity impacts |
| Safety Research | Researcher headcount, funding, publication rates |
| Alignment Progress | Interpretability, robustness, alignment tax |
| Governance & Policy | Regulations, enforcement, international agreements |
| Lab Behavior | RSP compliance, safety practices, transparency |
| Public Opinion | Awareness, concern, trust levels |
| Expert Opinion | P(doom), timelines, researcher surveys |
| Geopolitics | US-China dynamics, talent flows, coordination |
| Structural Indicators | Information quality, institutional capacity, resilience |
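The compute rows above lend themselves to a worked example. The sketch below extrapolates frontier training compute under assumed parameters: the ~2e25 FLOP baseline (roughly Epoch AI’s published GPT-4 estimate) and a ~4x/year growth factor are illustrative placeholders, not authoritative values.

```python
import math

# Assumed inputs (illustrative placeholders, not authoritative figures):
BASELINE_FLOP = 2e25      # assumed frontier training compute in the baseline year
GROWTH_PER_YEAR = 4.0     # assumed annual multiplicative growth in frontier compute
BASELINE_YEAR = 2023

def projected_training_compute(year: int) -> float:
    """Extrapolate frontier training compute to a given year."""
    return BASELINE_FLOP * GROWTH_PER_YEAR ** (year - BASELINE_YEAR)

def doubling_time_months(growth_per_year: float) -> float:
    """Convert an annual growth factor into a doubling time in months."""
    return 12 * math.log(2) / math.log(growth_per_year)

for year in (2025, 2027, 2030):
    print(f"{year}: ~{projected_training_compute(year):.1e} FLOP")
print(f"Implied doubling time: ~{doubling_time_months(GROWTH_PER_YEAR):.1f} months")
```

Varying the growth factor shows how sensitive multi-year projections are to the assumed rate.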
How to Use This Section
For Researchers
- Find current best estimates for key parameters
- Identify data gaps and measurement challenges
- Track changes over time
For Forecasters
- Input variables for models and predictions
- Base rates and reference classes
- Uncertainty ranges and confidence intervals (see the aggregation sketch after this list)
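Pooling individual probability estimates, such as P(doom) survey responses, into a single number is a recurring forecasting task. A minimal sketch using the geometric mean of odds, one common aggregation rule; the function name and sample values are hypothetical:

```python
import math

def pool_probabilities(probs: list[float]) -> float:
    """Pool probability forecasts via the geometric mean of odds."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    pooled_odds = math.exp(mean_log_odds)
    return pooled_odds / (1 + pooled_odds)

# Hypothetical survey responses (illustrative only, not real data):
estimates = [0.02, 0.05, 0.10, 0.30]
print(f"Pooled estimate: {pool_probabilities(estimates):.3f}")
```

Arithmetic averaging of the raw probabilities is a common alternative; the two rules can differ noticeably when estimates span orders of magnitude.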
For Policymakers
- Evidence base for regulatory decisions
- Monitoring indicators for AI governance
- International comparison data
Data Quality Notes
Metrics vary significantly in:
- Availability: Some are publicly tracked; others require inference
- Reliability: Some come from rigorous measurement; others from surveys or estimates
- Timeliness: Some update continuously; others are snapshots
- Comparability: Definitions and methodologies may differ across sources
Each page notes data quality and limitations for specific metrics.
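One way to make these caveats machine-readable is to attach quality metadata to each metric entry. The schema below is a hypothetical sketch of how the four dimensions might be encoded; none of the sources listed next publish data in this format.

```python
from dataclasses import dataclass
from enum import Enum

class Reliability(Enum):
    MEASURED = "rigorous measurement"
    SURVEYED = "survey or self-report"
    ESTIMATED = "inference or estimate"

@dataclass
class MetricRecord:
    """Hypothetical schema tagging a metric with the four quality dimensions."""
    name: str
    value: float
    unit: str
    source: str
    publicly_available: bool   # Availability: tracked publicly vs. inferred
    reliability: Reliability   # Reliability: measurement vs. survey vs. estimate
    as_of: str                 # Timeliness: date of last update or snapshot
    methodology_note: str      # Comparability: definition and methodology caveats

# Illustrative entry; all values are placeholders, not real data:
example = MetricRecord(
    name="frontier training compute",
    value=2e25,
    unit="FLOP",
    source="Epoch AI (assumed)",
    publicly_available=True,
    reliability=Reliability.ESTIMATED,
    as_of="2023-03",
    methodology_note="Inferred from hardware counts and training duration.",
)
print(example.name, "-", example.reliability.value)
```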
Key Sources
| Source | Coverage |
|---|---|
| Epoch AI | Compute trends, notable models database |
| AI Index (Stanford HAI) | Comprehensive annual report |
| State of AI Report | Industry trends, research progress |
| CAIS Surveys | Expert opinion on AI risk |
| Our World in Data | Long-term trends, public data |
| Metaculus | Forecasts on AI milestones |
| AI Safety Papers Database | Safety research tracking |