Open vs Closed Source AI
One of the most heated debates in AI: Should powerful AI models be released as open source (weights publicly available), or kept closed to prevent misuse?
What’s At Stake
Open source means releasing the model weights so anyone can download, modify, and run the model locally (a minimal sketch follows the list):
- Examples: Llama 2, Mistral, Falcon
- Can’t be recalled or controlled after release
- Anyone can fine-tune for any purpose
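In practice, "download and run locally" looks something like the following minimal sketch using the Hugging Face transformers library. The model id is just one example of an open-weights release; the sketch assumes transformers and torch are installed, sufficient GPU memory, and acceptance of whatever license the model carries on the Hub:

```python
# Minimal sketch: running an open-weights model locally.
# Once the weights are downloaded, nothing here requires the
# original developer's permission or infrastructure.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # example open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The main argument for open weights is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```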
Closed source means keeping the weights proprietary and providing access only via an API (also sketched below):
- Examples: GPT-4, Claude, Gemini
- Lab maintains control and can monitor usage
- Can update, revoke access, refuse harmful requests
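Closed-source access, by contrast, routes every request through infrastructure the lab controls. A minimal sketch with the OpenAI Python client (assumes the openai package is installed and an API key is set in the environment):

```python
# Minimal sketch: closed-source access goes through the lab's API.
# The lab sees each request, applies guardrails server-side, and can
# revoke this key at any time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the open-weights debate."}],
)
print(response.choices[0].message.content)
```

The asymmetry is the debate in miniature: the first script keeps working even if the developer changes its mind; the second stops working the moment the lab revokes the key.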
Current Landscape
| Name | Openness | Access | Safety | Customization | Cost | Control |
|---|---|---|---|---|---|---|
| GPT-4 | Closed | API only | Strong guardrails, monitored | Limited | Pay per token | OpenAI maintains full control |
| Claude 3 | Closed | API only | Constitutional AI, monitored | Limited | Pay per token | Anthropic maintains full control |
| Llama 2 70B | Open weights | Download and run locally | Basic guardrails, easily removed | Full fine-tuning possible | Free (need own compute) | No control after release |
| Mistral 7B/8x7B | Open weights | Download and run locally | Minimal restrictions | Full fine-tuning possible | Free (need own compute) | No control after release |
Key Positions
Where different actors stand on releasing model weights: Meta and Mistral have released open weights, while OpenAI, Anthropic, and Google keep their frontier models closed (see the table above).
Possible Middle Grounds
Several proposals try to capture the benefits of both approaches:
Staged Release
- Release with 6-12 month delay after initial deployment
- Allows monitoring for risks before open release
- Example: Proposed, but not yet tried in practice
Structured Access
- Provide weights to vetted researchers
- More access than API, less than fully public
- Example: GPT-2’s largest (XL) model, initially shared only with vetted partners in 2019
Differential Access
- Smaller models open, frontier models closed
- Balance innovation with safety
- Example: Current status quo
Safety-Contingent Release
- Release if safety evaluations pass thresholds
- Create clear criteria for release decisions (a hypothetical gate is sketched below)
- Example: Anthropic’s RSP, which applies this logic to deployment rather than open release
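As a purely hypothetical illustration (the evaluation names and thresholds below are invented for the sketch, not any lab’s actual criteria), a safety-contingent release rule could be a simple gate over evaluation scores:

```python
# Hypothetical safety-contingent release gate. Evaluation names and
# thresholds are illustrative only, not any lab's real policy.
RELEASE_THRESHOLDS = {
    "bioweapons_uplift": 0.05,  # max acceptable uplift over a search-engine baseline
    "cyber_autonomy": 0.10,     # max acceptable autonomous-exploit success rate
    "jailbreak_rate": 0.01,     # max guardrail-bypass rate under red-teaming
}

def release_decision(eval_scores: dict[str, float]) -> bool:
    """Approve open release only if every evaluation is under its threshold."""
    return all(
        eval_scores.get(name, float("inf")) <= limit
        for name, limit in RELEASE_THRESHOLDS.items()
    )

# Example: all scores under threshold, so the gate approves release.
print(release_decision({
    "bioweapons_uplift": 0.02,
    "cyber_autonomy": 0.04,
    "jailbreak_rate": 0.005,
}))  # True
```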
Open Source with Hardware Controls
- Release weights but require specialized hardware to run
- Harder to misuse, though not a perfect control
- Example: Not implemented today (a toy sketch of the idea follows)
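No such scheme exists today, but a toy sketch can show the mechanism: ship the weights encrypted and release the decryption key only to hardware that passes an attestation check. Here attest_hardware() is a hypothetical stand-in for a real protocol (e.g. a trusted-execution-environment quote), and the cryptography package supplies the encryption:

```python
# Toy sketch of hardware-gated weight release (not implemented anywhere).
# Weights ship encrypted; the key is released only after attestation.
from cryptography.fernet import Fernet

def attest_hardware() -> bool:
    """Hypothetical stand-in for remote attestation of approved hardware."""
    return False  # this sketch has no real attestation to perform

key = Fernet.generate_key()  # in reality, held by the lab's key server
encrypted_weights = Fernet(key).encrypt(b"<model weights>")

if attest_hardware():
    weights = Fernet(key).decrypt(encrypted_weights)
    print("attestation passed: weights decrypted")
else:
    print("attestation failed: weights stay encrypted")
```

Even this is leaky: once any attested device holds decrypted weights, they can be exfiltrated, which is why the item above describes this as harder, but not perfect, control.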
The International Dimension
This debate has geopolitical implications:
If US/Western labs stay closed:
- May slow dangerous capabilities
- But China may open source strategically
- Could lose innovation race
If US/Western labs open source:
- Loses monitoring capability
- But levels the playing field globally
- Benefits developing world
Coordination problem:
- Optimal if all major powers coordinate
- But unilateral restraint may not work
- Race dynamics push toward openness
Implications for Different Risks
The open vs closed question has different implications for different risks:
Misuse risks (bioweapons, cyberattacks):
- Clear case for closed: irreversibility, removal of guardrails
- Open source dramatically increases risk
Accident risks (unintended behavior):
- Mixed: open source enables broader safety research but also unmonitored deployment
- Depends on whether external scrutiny or proliferation dominates
Structural risks (power concentration):
- Clear case for open: prevents monopoly
- But only if open source is actually accessible (requires compute)
Race dynamics:
- Open source may accelerate race (lower barriers)
- But may also reduce racing pressure (labs can build on a shared base)