Epistemic & Forecasting Organizations
Last edited: 2026-01-29
Overview
This section covers organizations focused on improving forecasting, epistemic tools, and quantitative reasoning—particularly as applied to AI safety and existential risk assessment. These organizations provide critical infrastructure for understanding AI timelines, evaluating interventions, and making better decisions under uncertainty.
Key Organizations
| Organization | Focus | Key Products/Projects |
|---|---|---|
| Epoch AI | AI trends research & compute tracking | ML Trends Database, Parameter Counts, Training Compute Estimates |
| Metaculus | Prediction aggregation platform | AI Forecasting, AGI Timeline Questions, Tournaments |
| Forecasting Research Institute | Forecasting methodology research | XPT Tournament, ForecastBench, Superforecaster Studies |
| QURI | Epistemic tools development | Squiggle Language, Squiggle Hub, Metaforecast |
| Manifold | Prediction markets platform | AI Markets, Manifest Conference, Manifund |
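Forecasting accuracy on these platforms is commonly compared with the Brier score: the mean squared error between probability forecasts and binary outcomes, where lower is better. A minimal sketch in Python (the example forecasts are hypothetical):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; always guessing 50% yields 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts for three resolved yes/no questions.
score = brier_score([0.9, 0.1, 0.8], [1, 0, 1])
print(f"Brier score: {score:.3f}")
```

Because the score rewards calibration as well as resolution, it is the standard yardstick for comparing forecasters, platforms, and models against one another.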
Why Epistemic Infrastructure Matters for AI Safety
Forecasting and epistemic tools are essential for AI safety because:
- Timeline Uncertainty: AI development trajectories are highly uncertain; better forecasting helps allocate resources appropriately
- Intervention Evaluation: Quantifying the expected impact of safety interventions requires probabilistic reasoning tools
- Early Warning: Prediction markets and forecasting platforms can provide early signals about concerning developments
- Decision Support: Policymakers and researchers need calibrated uncertainty estimates, not false precision
- Accountability: Track records create feedback loops that improve institutional decision-making
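The intervention-evaluation point above is what tools like Squiggle operationalize: propagating uncertainty through an estimate rather than multiplying point values. A minimal Monte Carlo sketch in Python; every distribution and parameter below is an illustrative placeholder, not a real estimate:

```python
import random


def expected_impact(n=100_000, seed=0):
    """Monte Carlo estimate of a hypothetical intervention's expected impact.

    All parameters are illustrative placeholders: success probability is
    uncertain (uniform on 0.1-0.4), and impact given success is heavy-tailed
    (lognormal), as impact estimates often are.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p_success = rng.uniform(0.1, 0.4)
        impact_if_success = rng.lognormvariate(0.0, 1.0)
        total += p_success * impact_if_success
    return total / n


print(f"expected impact (arbitrary units): {expected_impact():.2f}")
```

The point of the exercise is the shape of the output, not the number: sampling makes tail risk and parameter uncertainty visible, which a single point estimate hides.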
Related Resources
- AGI Timeline - Forecasts on when transformative AI may arrive
- Alignment Evaluations - Methods for assessing AI safety