| Finding | Key Data | Implication |
|---|---|---|
| Extreme concentration | 4-5 labs: 95%+ of frontier development | Few companies decide AI's future |
| Corporate structure varies | Nonprofit, PBC, public division | Different incentives |
| Talent concentration | Top labs employ most key researchers | Knowledge concentrated |
| Strategic divergence | Different approaches to safety/openness | Varied risk profiles |
| Governance gaps | Limited oversight of corporate AI decisions | Accountability weak |
The development of frontier AI is concentrated among a remarkably small number of companies. OpenAI, Anthropic, Google DeepMind, and Meta account for virtually all frontier model development in the West, with a handful of Chinese companies (Baidu, Alibaba, ByteDance) filling a similar role in China. These companies' decisions about what to build, how to build it, and when to deploy it shape the trajectory of AI more than any government policy or international agreement.
The companies differ significantly in structure and approach. OpenAI began as a nonprofit but transitioned to a "capped-profit" structure with Microsoft as a major partner. Anthropic is a public benefit corporation explicitly focused on AI safety. Google DeepMind is a division of Alphabet, subject to public company pressures. Meta AI operates within Meta's social media business context. These different structures create different incentive patterns and strategic approaches.
Corporate governance of AI is a critical but underexplored issue. The executives and boards of these companies make decisions with global implications, but with limited accountability to affected populations. Traditional corporate governance mechanisms (shareholder voting, board oversight, market competition) may be poorly suited to governing technology with existential implications.
Corporate Power Over AI
In most domains, we expect markets, governments, and civil society to balance corporate power. In frontier AI, the technology is so concentrated and fast-moving that a handful of corporate leaders have unprecedented influence over humanityâs future.
| Company | Structure | Parent/Partners | Founded |
|---|---|---|---|
| OpenAI | Capped-profit + nonprofit | Microsoft partnership | 2015 |
| Anthropic | Public Benefit Corporation | Amazon, Google investments | 2021 |
| Google DeepMind | Division of Alphabet | Public company subsidiary | 2010 (merged 2023) |
| Meta AI | Division of Meta | Public company | Various |
| xAI | Private company | Elon Musk | 2023 |
| Channel | Description |
|---|---|
| Model development | What capabilities exist |
| Deployment decisions | Who has access, and when |
| Safety investment | How much risk mitigation |
| API policies | Rules for downstream use |
| Open/closed decisions | Model accessibility |
| Company | Estimated Market Share (Frontier) | Key Products |
|---|---|---|
| OpenAI | 35-40% | GPT-4, GPT-4o, o1 |
| Google DeepMind | 25-30% | Gemini family |
| Anthropic | 15-20% | Claude 3, Claude 3.5 |
| Meta | 10-15% | Llama 3 (open weights) |
| Others | <10% | Various |
Market Share vs Capability
Market share by revenue differs from capability leadership: a company with smaller revenue may nonetheless have more capable models. Measurement is further complicated by inconsistent evaluation methods across labs.
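The degree of concentration in the market share table above can be summarized with a standard Herfindahl-Hirschman Index (HHI). A minimal sketch, using the midpoints of the estimated share ranges (the midpoint values, and treating "Others" as 5%, are illustrative assumptions, not measured data):

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).
# By convention, values above 2500 indicate a highly concentrated market.
shares = {
    "OpenAI": 37.5,          # midpoint of 35-40% (assumed)
    "Google DeepMind": 27.5, # midpoint of 25-30% (assumed)
    "Anthropic": 17.5,       # midpoint of 15-20% (assumed)
    "Meta": 12.5,            # midpoint of 10-15% (assumed)
    "Others": 5.0,           # "<10%" taken as 5% (assumed)
}

hhi = sum(s ** 2 for s in shares.values())
print(f"HHI: {hhi:.0f}")  # prints "HHI: 2650"
```

Under these assumed midpoints the index lands well above the 2500 threshold that antitrust guidelines treat as highly concentrated, which gives the prose claim a rough quantitative footing.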
| Company | Safety Team Size | RSP/Safety Framework | Openness Approach |
|---|---|---|---|
| OpenAI | 50+ (reduced) | Preparedness Framework | Closed, API |
| Anthropic | 100+ | Responsible Scaling Policy | Closed, API |
| Google DeepMind | 100+ | Frontier Safety Framework | Mostly closed |
| Meta | 30+ | Responsible AI | Open weights |
| xAI | Unknown | Limited public info | Open weights |
| Company | Ownership | Profit Motive | Accountability |
|---|---|---|---|
| OpenAI | Microsoft 49%, nonprofit rest | Capped, complex | Board, Microsoft |
| Anthropic | Investors | PBC mission constraint | Board, mission |
| Google DeepMind | Alphabet (public) | Strong | Public markets |
| Meta | Public (Zuckerberg control) | Strong | Zuckerberg |
| xAI | Private (Musk) | Unknown | Musk |
| Company | Estimated Top Researchers | % of Global Top Talent |
|---|---|---|
| Google/DeepMind | 200+ | 25%+ |
| OpenAI | 100+ | 15%+ |
| Anthropic | 75+ | 10%+ |
| Meta | 75+ | 10%+ |
| Top 5 Chinese companies | 150+ | 15%+ |
| All others | Remaining | ~25% |
| Factor | Mechanism | Trend |
|---|---|---|
| Capital requirements | $1B+ per frontier training run | Increasing |
| Talent scarcity | Few top researchers | Slowly improving |
| Data advantages | Proprietary data matters | Moderate |
| Compute access | Partnerships with cloud providers | Concentrated |
| First-mover advantage | Early leads compound | Strong |
| Factor | Mechanism | Status |
|---|---|---|
| Open weights | Meta and others release models | Active but contested |
| Algorithmic efficiency | Reduced compute needs | Progressing |
| New entrants | Startups, national labs | Some emergence |
| Antitrust action | Break up concentrations | Limited |
| Issue | Description | Example |
|---|---|---|
| Board-management tension | Boards struggle to oversee technical decisions | OpenAI 2023 crisis |
| Safety-product tension | Safety teams vs. deployment pressure | Reported at multiple labs |
| Founder power | Individual founders have outsized influence | Multiple companies |
| Transparency | Limited visibility into decisions | Universal |
| Gap | Description | Risk |
|---|---|---|
| Regulatory lag | No comprehensive AI company regulation | High |
| Accountability vacuum | Unclear responsibility for AI harms | High |
| Democratic input | No public say in AI strategy | High |
| International coordination | No global corporate AI governance | High |
Corporate Decisions, Global Consequences
A handful of corporate executives make decisions about what AI capabilities to develop, what safeguards to implement, and when to deploy. These decisions affect all of humanity but are made with minimal public input.
| Dynamic | Description | Effect on Safety |
|---|---|---|
| Capability racing | Labs race to release the best models | Negative |
| Talent poaching | Competition for researchers | Mixed |
| Partnership competition | Cloud/compute deals | Mixed |
| API competition | Price and feature competition | Neutral |
| Safety positioning | Some labs compete on safety | Positive |
| Initiative | Participants | Status |
|---|---|---|
| Frontier Model Forum | OpenAI, Anthropic, Google, Microsoft | Active |
| Safety information sharing | Some labs | Limited |
| Standards development | Various | Early |
| Joint RSP development | Coordinated commitments | Some progress |