Longtermist Funders
Overview
Longtermist funders provide critical financial support for organizations working on AI safety, existential risk reduction, and related cause areas. The funding landscape is characterized by a relatively small number of major philanthropists and foundations that provide the majority of resources, with additional support from regranting programs and smaller donors.
The field has experienced significant growth in funding over the past decade, though it remains small relative to overall AI development spending. Major shifts occurred in 2022-2023 with the FTX collapse eliminating a significant planned funding source, though other funders have partially filled the gap.
Comprehensive Funder Comparison
By Annual Giving and Focus Area
| Funder | Annual Giving | AI Safety | Global Health | Science | Education | Other |
|---|---|---|---|---|---|---|
| Gates Foundation | ≈$7B | Minimal | $4B | $1B | $500M | $1B |
| Wellcome Trust | ≈$1.5B | Minimal | $500M | $800M | — | $200M |
| Chan Zuckerberg Initiative | ≈$1B | $0 | $200M | $800M | $30M | — |
| Howard Hughes Medical Institute | ≈$1B | $0 | Minimal | $1B | — | — |
| Coefficient Giving | ≈$700M | $65M | $300M | $50M | — | $285M |
| MacArthur Foundation | ≈$260M | Minimal | — | $50M | — | $200M |
| Hewlett Foundation | ≈$473M | $8M | — | — | $100M | $365M |
| Survival and Flourishing Fund | ≈$35M | $30M | — | — | — | $5M |
| Schmidt Futures | ≈$200M | $5M | — | $100M | $50M | $45M |
| Long-Term Future Fund | ≈$5-10M | $5-10M | — | — | — | — |
| Manifund | ≈$2-5M | $1-3M | — | — | — | $1-2M |
Key Individual Philanthropists
| Person | Net Worth | Annual Giving | AI Safety | Lifetime Total | Primary Vehicle |
|---|---|---|---|---|---|
| Bill Gates | ≈$130B | ≈$5B | Minimal | $50B+ | Gates Foundation |
| Elon Musk | ≈$400B | ≈$250M | Minimal | ≈$8B | Musk Foundation |
| Mark Zuckerberg | ≈$200B | ≈$1B | $0 | ≈$8B | CZI |
| Dustin Moskovitz | ≈$17B | ≈$700M | $65M | $4B+ | Coefficient Giving |
| MacKenzie Scott | ≈$35B | ≈$3-4B | Unknown | $17B+ | Direct giving |
| Jaan Tallinn | ≈$500M | ≈$50M | $40M+ | $100M+ | SFF, direct |
| Vitalik Buterin | ≈$500M | ≈$50M | $15M+ | $800M+ | FLI ($665M), MIRI, Balvi |
| Eric Schmidt | ≈$25B | ≈$200M | $5M | $1B+ | Schmidt Futures |
AI Safety Funding Concentration
The AI safety funding landscape is highly concentrated among a few donors:
| Funder | AI Safety (Annual) | % of Total AI Safety Funding |
|---|---|---|
| Coefficient Giving | $65M | ≈55% |
| Survival and Flourishing Fund | $30M | ≈25% |
| Jaan Tallinn (direct) | $10M | ≈8% |
| Vitalik Buterin | $5-15M | ≈5-10% |
| Long-Term Future Fund | $5-10M | ≈5% |
| Other sources | $5-10M | ≈5% |
| Total estimated | ≈$120-150M/year | 100% |
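As a rough cross-check, the shares above can be recomputed from the table's dollar figures; the midpoints used here for the ranged entries are my assumption, not from the source:

```python
# Approximate annual AI safety giving ($M) from the table above;
# ranged entries are replaced with their midpoints (an assumption).
funders = {
    "Coefficient Giving": 65,
    "Survival and Flourishing Fund": 30,
    "Jaan Tallinn (direct)": 10,
    "Vitalik Buterin": 10.0,        # midpoint of $5-15M
    "Long-Term Future Fund": 7.5,   # midpoint of $5-10M
    "Other sources": 7.5,           # midpoint of $5-10M
}
total = sum(funders.values())  # 130.0, inside the ~$120-150M estimate
for name, amount in funders.items():
    print(f"{name}: ${amount}M ({amount / total:.0%})")
```

With these midpoints the top two funders account for roughly 73% of the total; exact percentages differ slightly from the table because they depend on which point estimates are chosen.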
Untapped Philanthropic Potential
Several major philanthropists have significant resources but minimal AI safety engagement:
| Person | Net Worth | Current AI Safety | Potential (1% of net worth) |
|---|---|---|---|
| Elon Musk | $400B | ≈$0 | $4B/year |
| Mark Zuckerberg | $200B | $0 | $2B/year |
| Bill Gates | $130B | Minimal | $1.3B/year |
| Larry Ellison | $230B | $0 | $2.3B/year |
| Jeff Bezos | $200B | $0 | $2B/year |
If these five individuals allocated just 1% of their net worth annually to AI safety, it would represent $11.6B/year — roughly 80x current total funding.
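The arithmetic behind this claim can be sketched directly from the table; the ~$135M baseline (midpoint of the $120-150M estimate above) is an assumed point estimate:

```python
# Net worth estimates ($B) from the table above.
net_worth_bn = {
    "Elon Musk": 400,
    "Mark Zuckerberg": 200,
    "Bill Gates": 130,
    "Larry Ellison": 230,
    "Jeff Bezos": 200,
}
potential_bn = 0.01 * sum(net_worth_bn.values())  # 1% of combined net worth
current_bn = 0.135  # midpoint of the ~$120-150M/year estimate, in $B
print(f"potential: ${potential_bn:.1f}B/year")         # $11.6B/year
print(f"multiple: ~{potential_bn / current_bn:.0f}x")  # ~86x current funding
```

Using the low end of the current-funding estimate ($120M) pushes the multiple closer to 97x, so "roughly 80x" is a conservative rounding.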
AI Safety Funders (Detailed)
| Organization | Type | Annual Giving (Est.) | Primary Focus | Key Grantees |
|---|---|---|---|---|
| Coefficient Giving | Foundation | $65M (AI safety) | Technical alignment, governance, evals | MIRI, Redwood Research, METR, GovAI |
| Survival and Flourishing Fund | Grantmaking fund (S-process) | $30M | AI safety, x-risk | MIRI, ARC Evals, SERI, CAIS |
| Long-Term Future Fund | Regranting | $5-10M | AI safety, x-risk research | Individual researchers, small orgs |
| Manifund | Regranting Platform | $2-5M | EA causes broadly | Community projects |
Non-AI-Safety Major Funders
| Organization | Type | Annual Giving | Focus Areas | AI Safety |
|---|---|---|---|---|
| Gates Foundation | Foundation | $7B | Global health, poverty, education | Minimal |
| Wellcome Trust | Foundation | $1.5B | Health research, science | Minimal |
| Chan Zuckerberg Initiative | LLC | $1B | AI-biology, disease cures | $0 |
| Hewlett Foundation | Foundation | $473M | Environment, democracy, education | $8M (cybersecurity) |
| MacArthur Foundation | Foundation | $260M | Climate, justice, nuclear risk | Minimal |
| Schmidt Futures | LLC | $200M | Science, AI applications, talent | $5M |
AI Safety Funding Landscape
Broader Philanthropy Landscape (For Context)
The Scale Gap
| Category | Annual Funding | Notes |
|---|---|---|
| AI Safety (total) | ≈$120-150M | Highly concentrated |
| Gates Foundation alone | ≈$7,000M | 50x AI safety total |
| AI capabilities (industry) | ≈$50,000M+ | 400x AI safety total |
| Global philanthropy | ≈$500,000M | 4,000x AI safety total |
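The multipliers follow from dividing each category by the AI safety total; taking the ~$135M midpoint as the baseline (an assumed point estimate) gives figures consistent with the table's rounded values:

```python
ai_safety_m = 135  # $M/year, midpoint of the $120-150M estimate (assumption)
comparisons_m = {
    "Gates Foundation alone": 7_000,
    "AI capabilities (industry)": 50_000,
    "Global philanthropy": 500_000,
}
for label, amount in comparisons_m.items():
    print(f"{label}: ~{amount / ai_safety_m:,.0f}x AI safety funding")
```

The unrounded results (~52x, ~370x, ~3,700x) are broadly consistent with the table's 50x / 400x / 4,000x.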
Pending Major Funding Sources
Anthropic-Derived Capital
Anthropic represents potentially the largest future source of longtermist philanthropic capital. At Anthropic’s current $350B valuation:
| Source | Estimated Value | EA Likelihood | Notes |
|---|---|---|---|
| Founder pledges (7 founders, 80%) | $39-59B | 2/7 strongly EA-aligned | Only Dario & Daniela have documented EA connections |
| Jaan Tallinn stake | $2-6B (conservative) | Very high | Series A lead investor |
| Dustin Moskovitz stake | $3-9B | Certain | $500M+ already in nonprofit |
| Employee pledges + matching | $20-40B | High (in DAFs) | Historical 3:1 matching reduced to 1:1 for new hires |
| Total risk-adjusted | $25-70B | — | Wide range reflects cause allocation uncertainty |
Key uncertainties:
- Only 2/7 founders have documented strong EA connections—71% of founder equity may go to non-EA causes
- Matching program reduced from 3:1 at 50% to 1:1 at 25% for new employees
- IPO timeline: 2026-2027 expected; capital deployment likely 2027-2035
For comparison, this $25-70B range represents 170-470x current annual AI safety funding of ≈$150M. Even if only 10% ultimately reaches EA causes, it would still be transformative.
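The risk-adjusted total in the table can be sketched as an expected-value calculation. The probabilities below are illustrative assumptions that map the table's qualitative likelihood labels onto numbers; they are not from the source:

```python
# (source, low $B, high $B, assumed probability the capital reaches EA causes)
sources = [
    ("Founder pledges (7 founders, 80%)", 39, 59, 0.3),  # only 2/7 documented EA-aligned
    ("Jaan Tallinn stake",                 2,  6, 0.9),  # "very high"
    ("Dustin Moskovitz stake",             3,  9, 1.0),  # "certain"
    ("Employee pledges + matching",       20, 40, 0.7),  # "high (in DAFs)"
]
low = sum(lo * p for _, lo, _hi, p in sources)
high = sum(hi * p for _, _lo, hi, p in sources)
print(f"risk-adjusted range: ${low:.0f}-{high:.0f}B")
```

With these assumed probabilities the range comes out to roughly $30-60B, inside the $25-70B the table reports; different probability assignments widen or narrow it accordingly.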
See Anthropic (Funder) for comprehensive analysis.
OpenAI Foundation
The OpenAI Foundation holds 26% of OpenAI, worth approximately $130B at current valuations. Unlike Anthropic’s pledge-based model, the Foundation has direct legal control over these assets. Cause allocation is uncertain—the Foundation’s stated mission focuses on “safe AGI” but specific philanthropic priorities are undisclosed.
Recent Trends
2024-2026 Developments:
- Coefficient Giving launched $40M AI Safety Request for Proposals (January 2025)
- SFF allocated $34.33M, with 86% going to AI-related projects
- Coefficient Giving (formerly Open Philanthropy) rebranded in November 2025
- LTFF continued steady grantmaking at ≈$5M annually
- Anthropic founders announced 80% donation pledges (January 2026)
Post-FTX Landscape:
- Future Fund’s collapse eliminated ≈$160M in committed grants
- Some organizations faced funding crises; others found alternative support
- Field-wide diversification of funding sources