Tools & Platforms
Specific tools and platforms in the epistemic infrastructure space:
Forecasting & Prediction:
- Squiggle: Probabilistic programming language for uncertainty quantification, optimized for intuition-driven estimation rather than data-driven inference; developed by QURI
- SquiggleAI: LLM-powered generation of probabilistic Squiggle models from natural-language descriptions
- Metaforecast: Cross-platform forecast aggregation combining 2,100+ questions from 10+ sources (Metaculus, Manifold, Polymarket, etc.) with daily automated updates; created by QURI
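Squiggle's core move is to compose distributions instead of point estimates. The same idea can be sketched in plain Python via Monte Carlo sampling; the model below is a hypothetical illustration (the quantities and distribution parameters are assumptions, not taken from Squiggle's documentation):

```python
import math
import random

random.seed(0)
N = 100_000

# Hypothetical estimate: annual verification cost = claims_checked * cost_per_claim,
# with uncertainty on both inputs. Both distributions are illustrative choices.
samples = []
for _ in range(N):
    claims = random.lognormvariate(math.log(10_000), 0.5)  # roughly 10k claims/yr, skewed
    cost = random.uniform(0.10, 1.00)                      # $ per claim, uniform guess
    samples.append(claims * cost)

samples.sort()
p5, p50, p95 = (samples[int(N * q)] for q in (0.05, 0.50, 0.95))
print(f"5th: ${p5:,.0f}  median: ${p50:,.0f}  95th: ${p95:,.0f}")
```

The output is a full distribution summarized by percentiles rather than a single number, which is the workflow Squiggle packages into a dedicated language.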
Benchmarking:
- ForecastBench: Dynamic, contamination-free benchmark of 1,000 continuously updated questions comparing LLM forecasting to superforecasters; GPT-4.5 achieves a 0.101 Brier score versus 0.081 for superforecasters
- AI Forecasting Benchmark Tournament: Quarterly human-vs-AI forecasting competition (Q2 2025: 348 questions, 54 bot-makers, $30K in prizes) in which human Pro Forecasters maintain a statistically significant lead
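The Brier scores cited above are the mean squared error between probabilistic forecasts and binary outcomes, so lower is better and 0.081 versus 0.101 is a meaningful gap. A minimal implementation with made-up toy forecasts (not ForecastBench data):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0.0 is perfect; always answering 0.5 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Toy example: four binary questions, illustrative probabilities.
probs = [0.9, 0.2, 0.7, 0.4]
actual = [1, 0, 1, 0]
print(brier_score(probs, actual))  # → 0.075
```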
Research Coordination:
- XPT (Existential Risk Persuasion Tournament): Structured adversarial collaboration on existential risks; the 2022 tournament with 169 participants found that superforecasters severely underestimated AI progress (e.g., a 2.3% probability assigned to an IMO gold medal, achieved in 2025)
Verification:
- X Community Notes: Crowdsourced fact-checking at scale, using a bridging algorithm that requires cross-partisan consensus before a note is displayed; notes reduce retweets 25-50% when shown, but only 8.3% of notes achieve visibility
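The "bridging" requirement can be illustrated with a toy rule: a note displays only if raters from every viewpoint group independently find it helpful. This is a deliberately simplified proxy; the production Community Notes system uses matrix factorization over the full rating matrix, not group averages:

```python
def note_is_shown(ratings, threshold=0.6):
    """Toy bridging rule, not X's production algorithm.

    ratings: list of (viewpoint_group, helpful) pairs, helpful in {0, 1}.
    The note displays only if *each* group's mean helpfulness >= threshold,
    so one-sided support is never sufficient.
    """
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    if len(by_group) < 2:
        return False  # no cross-partisan signal yet
    return all(sum(v) / len(v) >= threshold for v in by_group.values())

# Strong support from one side only: not shown.
print(note_is_shown([("left", 1), ("left", 1), ("right", 0)]))   # → False
# Agreement across groups: shown.
print(note_is_shown([("left", 1), ("right", 1), ("right", 1)]))  # → True
```

The design point this captures is that raw helpfulness votes are not enough; visibility is gated on agreement across otherwise-disagreeing raters, which is why so few notes clear the bar.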
Knowledge Coordination:
- Longterm Wiki: Strategic intelligence platform for AI safety prioritization, with ~550 pages, crux mapping of ~50 uncertainties, and quality scoring across 6 dimensions
- Stampy / AISafety.info: Volunteer-maintained AI safety Q&A wiki with 280+ answers on AI existential risk, paired with Stampy, an LLM chatbot that searches 10K-100K alignment documents via RAG
- MIT AI Risk Repository: Searchable database cataloging 1,700+ AI risks from 65+ frameworks under dual taxonomies (causal and domain-based); updated quarterly since August 2024
These tools aim to improve decision-making on high-stakes questions, particularly around AI development and existential risk.