Concentration of Power
Summary: Concentration of power is the risk of AI enabling unprecedented control by a few organizations. This is a short reference page; see the AI Control Concentration parameter page for comprehensive analysis.
Risk
Importance: 25
Category: Structural Risk
Severity: High
Likelihood: Medium-High
Timeframe: 2030
Maturity: Growing
Type: Structural/Systemic
Overview
AI is enabling unprecedented concentration of power in the hands of a few organizations, fundamentally altering traditional power structures across economic, political, and military domains. Unlike previous technologies that affected specific sectors, AI’s general-purpose nature creates advantages that compound across all areas of human activity.
For comprehensive analysis, see AI Control Concentration, which covers:
- Current power distribution metrics across actors
- Concentration mechanisms (compute, data, talent, capital)
- Factors that increase and decrease concentration
- Intervention effectiveness and policy options
- Trajectory scenarios through 2035
Risk Assessment
| Dimension | Current Status | 5-10 Year Likelihood | Severity |
|---|---|---|---|
| Economic concentration | 5 firms control 80%+ AI cloud | Very High (85%+) | Extreme |
| Compute barriers | $100M+ for frontier training | Very High (90%+) | High |
| Talent concentration | Top 50 researchers at 6 labs | High (75%) | High |
| Regulatory capture risk | Early lobbying influence | High (70%) | High |
| Geopolitical concentration | US-China duopoly emerging | Very High (90%+) | Extreme |
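
The likelihood and severity columns can be combined into a rough ordinal ranking. The sketch below is an illustration only: the severity weights and likelihood midpoints are assumptions introduced here, not values from this page.

```python
# Illustrative only: combine the table's likelihood and severity columns into a
# rough expected-impact score. Severity weights and likelihood midpoints are
# assumptions introduced for this sketch, not values from the source page.

SEVERITY_WEIGHT = {"High": 2, "Extreme": 3}

dimensions = [
    # (dimension, assumed likelihood midpoint, severity)
    ("Economic concentration", 0.85, "Extreme"),
    ("Compute barriers", 0.90, "High"),
    ("Talent concentration", 0.75, "High"),
    ("Regulatory capture risk", 0.70, "High"),
    ("Geopolitical concentration", 0.90, "Extreme"),
]

for name, likelihood, severity in sorted(
    dimensions, key=lambda d: d[1] * SEVERITY_WEIGHT[d[2]], reverse=True
):
    score = likelihood * SEVERITY_WEIGHT[severity]
    print(f"{name:<28} score: {score:.2f}")
```

Under these assumed weights, geopolitical and economic concentration rank highest, consistent with the "Extreme" severities in the table.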
Key Concentration Mechanisms
| Mechanism | Current State | Barrier Effect |
|---|---|---|
| Compute requirements | $100M+, 25,000+ GPUs for frontier models | Only ~20 organizations can train frontier models |
| Cloud infrastructure | AWS, Azure, GCP control 68% of the cloud market | Essential gatekeepers for AI development |
| AI accelerators | NVIDIA holds 95%+ of the AI accelerator market | Critical supply chokepoint |
| Capital requirements | Microsoft has invested $13B+ in OpenAI | Only the largest tech firms can compete |
| Projected cost by 2030 | $1-10B per frontier model | Likely fewer than 10 organizations capable |
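
A standard way to quantify concentration in the markets above is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; values above roughly 2,500 are conventionally treated as highly concentrated. The sketch below is illustrative only: the combined 68% AWS/Azure/GCP figure and the 95%+ NVIDIA figure come from the table, but the per-firm splits and the long tail of smaller providers are assumptions.

```python
# Herfindahl-Hirschman Index (HHI): sum of squared market shares, expressed in
# percentage points. Roughly, HHI above ~2,500 is treated as highly concentrated.

def hhi(shares_percent: list[float]) -> float:
    return sum(s ** 2 for s in shares_percent)

# Assumed per-firm split of the ~68% combined AWS/Azure/GCP share cited above,
# with the remaining ~32% spread across smaller providers (illustrative numbers).
cloud_shares = [31, 24, 13] + [4] * 8

# AI accelerator market, assuming the ~95% NVIDIA share cited above plus a small
# remainder for other vendors (illustrative split).
accelerator_shares = [95, 3, 2]

print(f"Cloud infrastructure HHI: {hhi(cloud_shares):,.0f}")      # ~1,800: concentrated
print(f"AI accelerator HHI:       {hhi(accelerator_shares):,.0f}")  # >9,000: near-monopoly
```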
Why Concentration Matters for AI Safety
| Concern | Mechanism |
|---|---|
| Democratic accountability | Small groups make decisions affecting billions without representation |
| Single points of failure | Concentration creates systemic risk if key actors fail |
| Regulatory capture | Concentrated interests shape rules in their favor |
| Values alignment | Whose values get embedded when few control development? |
| Geopolitical instability | AI advantage could upset international balance |
Responses That Address This Risk
| Response | Mechanism | Status |
|---|---|---|
| Compute Governance | Control access to training resources | Emerging |
| Antitrust enforcement | Break up concentrated power | Limited application |
| Open-source AI | Distribute capabilities broadly | Active but contested |
| International coordination | Prevent winner-take-all dynamics | Early stage |
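
Of these responses, compute governance is the most mechanically concrete: reporting regimes key on estimated training compute, and the 2023 US executive order on AI used a 10^26 FLOP reporting threshold. The sketch below is an illustration of that kind of check, not any regulator's actual tooling; it uses the common approximation that training compute is about 6 × parameters × training tokens, and the example model size is hypothetical.

```python
# Illustrative compute-governance check (not real regulatory tooling).
# Training compute is estimated with the common rule of thumb:
#   FLOP ~= 6 * parameter_count * training_tokens

REPORTING_THRESHOLD_FLOP = 1e26  # threshold used in the 2023 US executive order on AI

def estimated_training_flop(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

def requires_report(parameters: float, tokens: float) -> bool:
    return estimated_training_flop(parameters, tokens) >= REPORTING_THRESHOLD_FLOP

# Hypothetical frontier-scale run: 1.8 trillion parameters, 15 trillion tokens.
flop = estimated_training_flop(1.8e12, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")  # ~1.6e26
print("Above reporting threshold:", requires_report(1.8e12, 15e12))
```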
See AI Control Concentration for detailed analysis.
Related Pages
Primary Reference
- AI Control Concentration — Comprehensive parameter page with mechanisms, measurement, and interventions
Related Risks
- Lock-in — Path dependencies reinforcing concentration
- Racing Dynamics — Competition accelerating unsafe development
- Authoritarian Takeover — Concentrated power enabling authoritarianism
Related Parameters
- Regulatory Capacity — Government ability to constrain concentration
- Coordination Capacity — Multi-actor cooperation on governance
- Institutional Quality — Checks and balances strength
Sources
- Microsoft-OpenAI partnership
- GPT-4 training requirements
- AI Now Institute: Compute sovereignty
- RAND: AI-enabled authoritarianism