Funders
AI Safety Funding Overview
- Total field: ~$100-500M/year
- Largest funder: Open Philanthropy (~$50-100M/year)
- Grant range: $5K to $5M+
- Note: growing, but still under 1% of AI capabilities funding
Summary
AI safety funding has grown dramatically since 2015 but remains a small fraction of overall AI investment. Understanding the funding landscape is critical for anyone seeking to work on AI safety.
Growth Trends
AI safety funding has increased approximately 10-20x since 2019, driven by:
- Growing awareness of AI risks among major philanthropists
- Formation of dedicated safety teams at frontier AI labs
- Increased government interest (AISI, NIST, EU AI Office)
- New funders entering the space (Anthropic LTBT, Schmidt Futures)
Comparison to Capabilities Funding
AI safety funding remains ~0.1-0.5% of total AI investment (a quick back-of-the-envelope check follows the list below):
- Total AI investment: ~$100-200B/year (venture capital + corporate R&D)
- AI capabilities funding vastly outpaces safety work
- Some argue this ratio should be inverted given the stakes
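To make the ratio concrete, here is a quick back-of-the-envelope check using only the rough ranges quoted above; the figures are this page's own estimates, not measured data.

```python
# Back-of-the-envelope check of the safety share of AI investment,
# using the rough ranges quoted on this page (estimates, not measured data).
safety_low, safety_high = 100e6, 500e6   # ~$100-500M/year on AI safety
total_low, total_high = 100e9, 200e9     # ~$100-200B/year total AI investment

worst_case = safety_low / total_high     # smallest plausible share
best_case = safety_high / total_low      # largest plausible share

print(f"Safety share of AI investment: {worst_case:.2%} to {best_case:.2%}")
# Prints: Safety share of AI investment: 0.05% to 0.50%
```

The computed range (0.05-0.5%) is consistent with the ~0.1-0.5% headline figure, given how rough the underlying estimates are.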
Major Funders
Funder Comparison
| Funder | Annual Amount | Grant Size | Speed | Application | Best For |
|---|---|---|---|---|---|
| Open Philanthropy | $50-100M | $50K-$20M | 3-6 months | Relationship-driven | Large orgs, established researchers |
| SFF | $20-60M | $50K-$2M | 6-8 weeks | Open rounds | Speculative research, new approaches |
| LTFF | $5-15M | $5K-$500K | 2-6 weeks | Rolling | Individuals, small projects, upskilling |
| Anthropic LTBT | TBD | $100K-$5M+ | TBD | Developing | High-quality research, complementary to Anthropic |
| Government (AISI/NSF) | $10-50M | $100K-$5M | 6-12 months | RFPs, standard process | Academic researchers, standards work |
Funding by Category
Technical Research
Well-funded: Mechanistic interpretability, evaluations and benchmarks, scalable oversight, RLHF and training methods
Underfunded: Agent foundations, formal verification, novel training paradigms, worst-case safety
Major Funders: Open Phil, SFF, LTFF, Anthropic LTBT
Typical Grants: $100K-$2M
Governance and Policy
Well-funded: Think tank research, government engagement, international coordination
Underfunded: Corporate governance, enforcement mechanisms, non-US policy work, subnational governance
Major Funders: Open Phil, Schmidt Futures, government sources
Typical Grants: $200K-$5M
Field-Building
Activities: AI safety courses and programs, conferences and workshops, career advising and placement, community infrastructure
Major Funders: Open Phil, LTFF, SFF
Typical Grants: $50K-$1M
Communications and Education
Activities: Public outreach, educational content, media and journalism, advocacy
Major Funders: Open Phil (selective), LTFF (small grants)
Typical Grants: $20K-$300K
Grant Size Breakdown
Small Grants: $5K-$50K
Typical Uses:
- Upskilling (3-6 months)
- Pilot projects
- Travel and conferences
- Course development
- Part-time research
Primary Funders: LTFF, Manifund, University AI safety groups
Easiest tier - lower bar, faster turnaround
Medium Grants: $50K-$500K
Typical Uses:
- Independent research (1-2 years)
- Small organization operations
- Specific research projects
- Field-building initiatives
Primary Funders: LTFF (upper range), SFF, Open Phil (lower range)
Moderate difficulty - need a track record or a strong plan
Large Grants: $500K-$5M+
Typical Uses:
- Multi-year research programs
- Organization operations
- Major initiatives
- Team funding
Primary Funders: Open Phil, Anthropic LTBT, Schmidt Futures, Government contracts
Hardest tier - need a strong track record and institutional credibility
How to Get Funded
Before Applying
- Research the funder: Understand their priorities and past grants
- Check fit: Does your project align with their focus?
- Build track record: Create public work showing your ability
- Get feedback: Talk to others who've received grants
Writing Strong Applications
- Lead with impact: How does this reduce AI risk?
- Be specific: Concrete plans, not vague aspirations
- Show capability: Evidence you can deliver
- Right-size budget: Justify costs; don't over- or undershoot (see the illustrative budget sketch after this list)
- Timeline: Realistic milestones
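As an illustration of right-sizing, here is a hypothetical 12-month independent-research budget; every figure below is made up for the example and should be replaced with your actual costs.

```python
# Hypothetical 12-month independent research budget.
# All numbers are illustrative; substitute your own costs.
months = 12
stipend = 7_000    # monthly living costs / salary equivalent
compute = 1_500    # monthly cloud compute for experiments
travel = 4_000     # two conference trips
misc = 2_000       # software, books, incidentals

total = months * (stipend + compute) + travel + misc
print(f"Requested amount: ${total:,}")  # -> Requested amount: $108,000
```

A total in this range lands in the medium-grant tier above ($50K-$500K), which helps you target the right funders (e.g., LTFF's upper range or SFF).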
Common Mistakes
- Too vague: 'I want to work on AI safety' without specifics
- No track record: Asking for funding without demonstrated ability
- Wrong funder: Applying to funders focused on different areas
- Unrealistic scope: Proposing to solve alignment in 6 months
- Poor communication: Unclear writing or logic
Timeline Expectations
| Funder | Expected Timeline |
|---|---|
| LTFF | 2-6 weeks |
| SFF | 6-8 weeks (during grant rounds) |
| Open Phil | 3-6 months |
| Government | 6-12 months |
Alternative Funding Routes
Employment vs. Grants
Consider employment if:
- You want to work on specific problems organizations are tackling
- You value mentorship and collaboration
- You prefer stable, long-term funding
- You’re early in your career
Consider grants if:
- You have a specific research agenda
- You want independence
- You have experience and track record
- You need flexibility
Regranting Programs
Some organizations accept donations and regrant them:
- Manifund: Platform for small AI safety grants
- Effective Altruism Infrastructure Fund: For community infrastructure
- University groups: Some have small grant budgets
Fellowships and Programs
Funded programs offer an alternative to direct grants:
- MATS: ML Alignment & Theory Scholars (stipended)
- AI Safety Camp: Short programs (volunteer, some travel funding)
- BlueDot Impact: Courses with some fellowships
- Apart Research: Hackathon-style programs
Resources
Application Resources
- EA Forum: Search for “grant report” to see what gets funded
- Alignment Forum: Technical research discussions
- LessWrong: AI safety community
- 80,000 Hours: Career advice including funding options
Finding Opportunities
- Subscribe to funder newsletters
- Follow AI safety organizations on social media
- Join AI safety Slack/Discord communities
- Attend conferences and workshops