FLI AI Safety Index Summer 2025
Summary
The FLI AI Safety Index Summer 2025 assesses leading AI companies' safety efforts, finding widespread inadequacies in risk management and existential safety planning. Anthropic leads with a C+ grade, while most companies score poorly across critical safety domains.
Review
The Future of Life Institute's AI Safety Index evaluates the safety practices of seven leading AI companies, revealing systemic weaknesses in responsible AI development. The assessment spans six domains: Risk Assessment, Current Harms, Safety Frameworks, Existential Safety, Governance & Accountability, and Information Sharing, with grades based on rigorous evaluations by independent expert reviewers.

The report's most alarming finding is the disconnect between companies' ambitious development goals and their minimal safety preparations. Although these companies claim they will reach artificial general intelligence (AGI) within the decade, none scored above a D in Existential Safety planning. This points to a profound lack of coherent risk management, with companies racing toward potentially transformative technologies without adequate safeguards.

The index underscores the urgent need for external regulation, independent oversight, and a more systematic approach to identifying and mitigating catastrophic risks.
Key Points
- Anthropic leads with a C+ grade, but no company demonstrates comprehensive AI safety practices
- Companies claim AGI is within reach yet lack substantive existential safety planning
- Capability development is outpacing risk management efforts across the industry