# Epistemic & Coordination Solutions
## The Opportunity

The same AI capabilities that create epistemic risks can potentially be harnessed for defense. AI can:
- Verify claims faster than humans can
- Aggregate distributed knowledge effectively
- Coordinate large groups around shared goals
- Detect manipulation and deception
- Preserve and organize knowledge
This section catalogs concrete strategies and technologies that could help societies maintain epistemic integrity and coordinate effectively in an AI-saturated environment.
## Core Strategies

### AI-Enhanced Verification

Using AI to verify claims, detect manipulation, and authenticate content:
- AI-Assisted Fact-Checking — AI systems that help verify claims at scale
- Content Authentication — Provenance, watermarking, and verification systems
- Deepfake Detection — AI systems to detect synthetic media
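Content authentication rests on binding a piece of media to its origin so that any later edit is detectable. Below is a minimal sketch of that idea using a keyed hash over the content; this is a simplification with a shared secret key and illustrative function names, whereas real provenance systems such as C2PA use public-key signatures and embed a manifest in the file itself.

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    """Produce a provenance tag binding the content to the signer's key."""
    return hmac.new(key, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes) -> bool:
    """Check that the content is unchanged since it was signed."""
    return hmac.compare_digest(sign_content(content, key), tag)

key = b"publisher-secret"
original = b"Authentic press photo, 2025-01-10"
tag = sign_content(original, key)

assert verify_content(original, tag, key)             # untouched content verifies
assert not verify_content(b"edited photo", tag, key)  # any edit breaks the tag
```

The point of the design is that verification is cheap and local: anyone holding the key (or, in public-key systems, the signer's certificate) can check authenticity without trusting the distribution channel.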
### Collective Intelligence

Harnessing distributed human knowledge and judgment with AI augmentation:
- Prediction Markets — Markets for aggregating probabilistic beliefs
- AI-Augmented Forecasting — Combining AI and human judgment for better predictions
- Deliberation Platforms — AI-assisted democratic deliberation
- Wisdom of Crowds — Mechanisms for aggregating distributed knowledge
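A common thread across these mechanisms is pooling many probabilistic judgments into one estimate. As a sketch, here is one standard baseline from the forecasting literature, the geometric mean of odds; the forecast values are illustrative, and real aggregators (prediction markets, forecasting platforms) use more sophisticated weighting.

```python
import math

def pool_forecasts(probs: list[float]) -> float:
    """Pool probability estimates via the geometric mean of their odds."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))  # back to a probability

forecasts = [0.6, 0.7, 0.8]        # three forecasters' probabilities
pooled = pool_forecasts(forecasts)  # a single aggregate estimate
```

Averaging in log-odds space rather than probability space keeps confident forecasts from being washed out by the arithmetic mean, which is one reason crowd aggregates often beat any individual forecaster.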
### Coordination Technologies

Tools for enabling large-scale cooperation and coordination:
- Commitment Mechanisms — Credible commitment devices for AI governance
- Multi-Stakeholder Platforms — Coordination across governments, labs, and civil society
- AI-Assisted Negotiation — AI systems that help parties find agreements
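One building block for credible commitment is the cryptographic commit-reveal pattern: a party publishes a hash of its pledge (plus a random nonce) now, and reveals both later, proving the pledge was fixed in advance. The sketch below shows the pattern with illustrative names; governance-grade commitment devices would layer verification and enforcement on top.

```python
import hashlib
import secrets

def commit(pledge: str) -> tuple[str, bytes]:
    """Return (public digest, private nonce) committing to a pledge."""
    nonce = secrets.token_bytes(16)  # random nonce hides the pledge until reveal
    digest = hashlib.sha256(nonce + pledge.encode()).hexdigest()
    return digest, nonce

def reveal_ok(digest: str, pledge: str, nonce: bytes) -> bool:
    """Check a revealed pledge against the earlier public digest."""
    return hashlib.sha256(nonce + pledge.encode()).hexdigest() == digest

digest, nonce = commit("We will pause deployment above capability threshold X")
assert reveal_ok(digest, "We will pause deployment above capability threshold X", nonce)
assert not reveal_ok(digest, "We never made that pledge", nonce)
```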
### Epistemic Infrastructure

Foundational systems for maintaining shared knowledge:
- Knowledge Graphs — Structured knowledge representation with provenance
- Epistemic Auditing — Tools for evaluating information quality
- AI-Human Hybrid Systems — Designs that combine AI and human strengths
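To make "structured knowledge with provenance" concrete: the sketch below stores facts as triples, each tagged with its source, so consumers can filter claims by how they were established. The schema and example facts are illustrative, not taken from any existing knowledge graph.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: where this claim came from

store: list[Triple] = [
    Triple("claim-42", "supported_by", "peer-reviewed study", "journal"),
    Triple("claim-42", "disputed_by", "blog post", "anonymous blog"),
]

def facts_about(subject: str, trusted_sources: set[str]) -> list[Triple]:
    """Return only facts whose provenance is in the trusted set."""
    return [t for t in store
            if t.subject == subject and t.source in trusted_sources]

trusted = facts_about("claim-42", {"journal"})
```

Carrying provenance on every edge is what lets downstream tools apply their own trust policies instead of inheriting the curator's.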
## Why AI Can Help

### The Defense Case

| AI Threat | Potential AI Defense |
|---|---|
| AI-generated misinformation | AI detection, provenance verification |
| Personalized manipulation | AI that explains manipulation attempts |
| Scale of fake content | AI verification at matching scale |
| Coordination attacks | AI-assisted collective defense |
| Expertise erosion | AI-augmented human expertise |
### Key Insight: Asymmetry Can Cut Both Ways

The offense-defense asymmetry (generation is easier than verification) is real, but:
- Verification compounds: Once verified, information stays verified
- Trust is valuable: Verified sources become focal points
- Defense can coordinate: Attackers often can’t coordinate as well
- AI advantages scale: Defensive AI improves with resources
## Design Principles

### 1. Human-AI Complementarity

Neither humans nor AI alone is sufficient:
- AI: Speed, scale, consistency
- Humans: Judgment, values, accountability
- Together: Combine strengths, check weaknesses
### 2. Adversarial Robustness

Systems must work despite:
- Active attackers trying to subvert them
- Edge cases designed to fool AI
- Coordination among bad actors
- Evolving attack techniques
### 3. Incentive Alignment

Systems work when:
- Participants benefit from honest behavior
- Gaming is harder than sincere participation
- Costs fall on bad actors, not good ones
- Long-term reputation matters
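One concrete mechanism for making honesty the best policy is a proper scoring rule. Under the Brier score, a forecaster who believes an event has probability 0.7 minimizes their expected penalty by reporting exactly 0.7; exaggerating or hedging raises it. The sketch below checks this numerically with illustrative values.

```python
def brier(report: float, outcome: int) -> float:
    """Squared-error penalty for a probability report against a 0/1 outcome."""
    return (report - outcome) ** 2

def expected_brier(report: float, belief: float) -> float:
    """Expected penalty when the event truly occurs with probability `belief`."""
    return belief * brier(report, 1) + (1 - belief) * brier(report, 0)

belief = 0.7
honest_penalty = expected_brier(belief, belief)

# Any dishonest report does strictly worse in expectation.
for dishonest in (0.5, 0.9, 1.0):
    assert expected_brier(dishonest, belief) > honest_penalty
```

This is the sense in which "gaming is harder than sincere participation" can be engineered rather than hoped for: the payoff structure itself rewards truthful reports.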
### 4. Gradual Trust Building

Start small and build:
- Begin with low-stakes applications
- Demonstrate reliability before scaling
- Build track record over time
- Allow for correction and improvement
### 5. Decentralization and Redundancy

Avoid single points of failure:
- Multiple independent verifiers
- Diverse approaches and methods
- Open protocols others can implement
- No single entity controls the system
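The case for multiple independent verifiers can be made quantitative with a Condorcet-style calculation: if each verifier errs independently with probability p, a majority of n verifiers errs far less often. The numbers below are illustrative, and the independence assumption is doing real work, which is why the list above stresses diverse approaches and methods.

```python
from math import comb

def majority_error(n: int, p: float) -> float:
    """Probability that a strict majority of n independent verifiers is wrong."""
    need = n // 2 + 1  # smallest number of wrong verifiers that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

single = 0.1                       # one verifier errs 10% of the time
panel = majority_error(5, single)  # a panel of five independent verifiers
assert panel < single              # redundancy drives the error rate down
```

With five verifiers at a 10% individual error rate, the majority errs under 1% of the time; correlated failures (shared training data, shared blind spots) erode this gain, which is the argument for genuinely diverse methods rather than five copies of one system.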
## Current State

### What Exists Now

| Category | Examples | Maturity |
|---|---|---|
| Prediction markets | Polymarket, Metaculus, Manifold | Growing |
| Fact-checking AI | ClaimBuster, Full Fact AI | Early |
| Content authentication | C2PA, Content Credentials | Emerging |
| Deliberation platforms | Polis, All Our Ideas | Niche use |
| Knowledge platforms | Wikipedia, Semantic Scholar | Established |
### What’s Missing

| Gap | Importance |
|---|---|
| Cross-platform verification | High |
| Real-time deepfake detection | High |
| AI-assisted diplomatic negotiation | Medium |
| Global epistemic infrastructure | High |
| Incentive-aligned knowledge curation | High |
## Key Questions

- Can AI-enabled defense keep pace with AI-enabled offense?
- What institutional structures are needed to deploy these solutions?
- How do we bootstrap trust in new verification systems?
- Can coordination solutions work across geopolitical divides?
- What research is most neglected relative to importance?