
Funders

AI Safety Funding Overview
Total Field: ~$100-500M/year
Largest Funder: Open Philanthropy (~$50-100M/year)
Grant Range: $5K to $5M+
Note: Growing but still under 1% of AI capabilities funding

AI safety funding has grown dramatically since 2015 but remains a small fraction of overall AI investment. Understanding the funding landscape is critical for anyone seeking to work on AI safety.

AI safety funding has increased approximately 10-20x since 2019, driven by:

  • Growing awareness of AI risks among major philanthropists
  • Formation of dedicated safety teams at frontier AI labs
  • Increased government interest (AISI, NIST, EU AI Office)
  • New funders entering the space (Anthropic LTBT, Schmidt Futures)

AI safety funding remains ~0.1-0.5% of total AI investment:

  • Total AI investment: ~$100-200B/year (venture capital + corporate R&D)
  • AI capabilities funding vastly outpaces safety work
  • Some argue this ratio should be inverted given the stakes
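
As a rough back-of-the-envelope check, the ~0.1-0.5% figure follows from dividing the headline ranges above; which endpoint you get depends on how you pair them (the midpoint pairing below is an illustration, not a figure reported by any funder):

```latex
% Ratio of AI safety funding to total AI investment,
% using the headline ranges quoted in this section.
\[
  \frac{\$100\text{M/yr}}{\$200\text{B/yr}} = 0.05\%
  \qquad\qquad
  \frac{\$500\text{M/yr}}{\$100\text{B/yr}} = 0.5\%
\]
% Pairing the midpoints gives \$300\text{M} / \$150\text{B} \approx 0.2\%,
% consistent with the ~0.1--0.5\% range cited above.
```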

The major dedicated funders are Open Philanthropy, the Survival and Flourishing Fund (SFF), the Long-Term Future Fund (LTFF), and the Anthropic Long-Term Benefit Trust (LTBT). The table below compares them alongside government sources.

| Funder | Annual Amount | Grant Size | Speed | Application | Best For |
|---|---|---|---|---|---|
| Open Philanthropy | $50-100M | $50K-$20M | 3-6 months | Relationship-driven | Large orgs, established researchers |
| SFF | $20-60M | $50K-$2M | 6-8 weeks | Open rounds | Speculative research, new approaches |
| LTFF | $5-15M | $5K-$500K | 2-6 weeks | Rolling | Individuals, small projects, upskilling |
| Anthropic LTBT | TBD | $100K-$5M+ | TBD | Developing | High-quality research, complementary to Anthropic |
| Government (AISI/NSF) | $10-50M | $100K-$5M | 6-12 months | RFPs, standard process | Academic researchers, standards work |

Technical Research
Well-funded: mechanistic interpretability, evaluations and benchmarks, scalable oversight, RLHF and training methods
Underfunded: agent foundations, formal verification, novel training paradigms, worst-case safety
Major Funders: Open Phil, SFF, LTFF, Anthropic LTBT
Typical Grants: $100K-$2M

Governance and Policy
Well-funded: think tank research, government engagement, international coordination
Underfunded: corporate governance, enforcement mechanisms, non-US policy work, subnational governance
Major Funders: Open Phil, Schmidt Futures, government sources
Typical Grants: $200K-$5M

Field-Building
Activities: AI safety courses and programs, conferences and workshops, career advising and placement, community infrastructure
Major Funders: Open Phil, LTFF, SFF
Typical Grants: $50K-$1M

Communications and Education
Activities: public outreach, educational content, media and journalism, advocacy
Major Funders: Open Phil (selective), LTFF (small grants)
Typical Grants: $20K-$300K

Small Grants: $5K-$50K
Typical Uses:
  • Upskilling (3-6 months)
  • Pilot projects
  • Travel and conferences
  • Course development
  • Part-time research
Primary Funders: LTFF, Manifund, University AI safety groups
Difficulty: Easiest tier; lower bar and faster turnaround

Medium Grants: $50K-$500K
Typical Uses:
  • Independent research (1-2 years)
  • Small organization operations
  • Specific research projects
  • Field-building initiatives
Primary Funders: LTFF (upper range), SFF, Open Phil (lower range)
Difficulty: Moderate; you need a track record or a strong plan

Large Grants: $500K-$5M+
Typical Uses:
  • Multi-year research programs
  • Organization operations
  • Major initiatives
  • Team funding
Primary Funders: Open Phil, Anthropic LTBT, Schmidt Futures, Government contracts
Difficulty: Hard; you need a strong track record and institutional credibility

Before Applying
  • Research the funder: Understand their priorities and past grants
  • Check fit: Does your project align with their focus?
  • Build track record: Create public work showing your ability
  • Get feedback: Talk to others who've received grants
Writing Strong Applications
  • Lead with impact: How does this reduce AI risk?
  • Be specific: Concrete plans, not vague aspirations
  • Show capability: Evidence you can deliver
  • Right-size the budget: Justify costs; don't over- or undershoot
  • Set a realistic timeline: Milestones you can actually hit
Common Mistakes
  • Too vague: 'I want to work on AI safety' without specifics
  • No track record: Asking for funding without demonstrated ability
  • Wrong funder: Applying to funders focused on different areas
  • Unrealistic scope: Proposing to solve alignment in 6 months
  • Poor communication: Unclear writing or logic
Decision Timelines

| Funder | Expected Timeline |
|---|---|
| LTFF | 2-6 weeks |
| SFF | 6-8 weeks (during grant rounds) |
| Open Phil | 3-6 months |
| Government | 6-12 months |

Grants vs. Employment

Consider employment if:

  • You want to work on specific problems organizations are tackling
  • You value mentorship and collaboration
  • You prefer stable, long-term funding
  • You’re early in your career

Consider grants if:

  • You have a specific research agenda
  • You want independence
  • You have experience and track record
  • You need flexibility

Some organizations accept donations and regrant:

  • Manifund: Platform for small AI safety grants
  • Effective Altruism Infrastructure Fund: For community infrastructure
  • University groups: Some have small grant budgets

Funded programs as an alternative to direct grants:

  • MATS: ML Alignment & Theory Scholars (stipended)
  • AI Safety Camp: Short programs (volunteer, some travel funding)
  • BlueDot Impact: Courses with some fellowships
  • Apart Research: Hackathon-style programs

Staying Informed

  • EA Forum: Search for “grant report” to see what gets funded
  • Alignment Forum: Technical research discussions
  • LessWrong: AI safety community
  • 80,000 Hours: Career advice including funding options
  • Subscribe to funder newsletters
  • Follow AI safety organizations on social media
  • Join AI safety Slack/Discord communities
  • Attend conferences and workshops