Racing Intensity
Overview
Racing Intensity measures the degree of competitive pressure between AI developers that incentivizes speed over safety. Lower racing intensity is better for AI safety outcomes—it allows developers to invest in safety research, conduct thorough evaluations, and coordinate on standards without fear of falling behind. When intensity is high, actors cut corners on safety to keep pace with competitors. Recent empirical evidence shows this pressure is intensifying: the 2024 FLI AI Safety Index↗ found that “existential safety remains the industry’s core structural weakness—all of the companies reviewed are racing toward AGI/superintelligence without presenting any explicit plans for controlling or aligning such smarter-than-human technology.” Market conditions, geopolitical dynamics, and coordination mechanisms all influence whether this pressure intensifies or moderates.
This parameter underpins multiple critical dimensions of AI safety. High racing intensity diverts resources from safety to capabilities: safety budget allocations across major labs fell from 12% to 6% of R&D spending (a 50% decline) between 2022 and 2024. Competitive pressure leads to premature deployment—Google launched Bard just 3 months after ChatGPT with only 2 weeks of safety evaluation, compared to pre-2022 norms of 3-6 months. Racing undermines careful, collaborative safety research culture, as demonstrated by a 340% increase in safety-team staff turnover following competitive events. Finally, high intensity makes safety agreements harder to maintain: the 2024 Seoul AI Safety Summit↗ produced voluntary commitments from 16 companies, but Carnegie Endowment analysis found these “often need to be more robust to ensure meaningful compliance.”
Understanding racing intensity as a parameter (rather than just a “racing dynamics risk”) enables:
- Symmetric analysis that identifies both intensifying factors and moderating mechanisms
- Intervention targeting that focuses on what actually reduces competitive pressure
- Threshold identification that recognizes dangerous intensity levels before harm occurs
- Causal clarity that separates the pressure itself from its consequences
This framing reveals leverage points: while we cannot eliminate competition, we can reduce its intensity through coordination mechanisms, regulatory pressure, and market incentives that internalize safety costs.
Parameter Network
Contributes to: Governance Capacity (inverse), Misuse Potential
Primary outcomes affected:
- Existential Catastrophe ↑↑↑ — Racing degrades safety margins, widening the safety-capability gap
- Transition Smoothness ↑↑ — Racing creates instability and undermines coordination
Quantitative Framework
Racing intensity can be operationalized through multiple measurable indicators that track competitive pressure across commercial, geopolitical, and safety dimensions:
| Indicator Category | Metric | Low Racing | Medium Racing | High Racing | Current (2024-25) |
|---|---|---|---|---|---|
| Timeline Pressure | Safety evaluation duration | 12-16 weeks | 6-10 weeks | 2-6 weeks | 4-6 weeks (High) |
| Resource Allocation | Safety as % of R&D budget | Above 10% | 6-10% | Below 6% | 6% (High threshold) |
| Market Competition | Major release frequency | Annual | Bi-annual | Quarterly | 3-4 months (High) |
| Talent Competition | Safety staff turnover spike | Below 50% | 50-150% | Above 200% | 340% (Critical) |
| Coordination Stability | Voluntary commitment adherence | Above 80% | 50-80% | Below 50% | ~60% (Medium-High) |
| Geopolitical Tension | Investment growth rate | Below 20% | 20-50% | Above 50% | Post-DeepSeek surge (High) |
Composite Racing Intensity Score (0-100 scale, weighted average):
- 2020-2021: 35-40 (Low-Medium) — Pre-ChatGPT baseline
- 2022-2023: 65-70 (Medium-High) — Post-ChatGPT commercial surge
- 2024: 75-80 (High) — Sustained pressure, coordination fragility
- 2025 (Q1): 80-85 (High-Critical) — DeepSeek geopolitical shock
The composite score integrates six indicator categories with empirically derived thresholds. The 2024-2025 trajectory shows racing intensity approaching critical levels (85+), where coordination mechanisms face collapse and safety margins fall below minimum viable levels identified in [da39d35d613fd8c7].
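To make the scoring transparent, here is a minimal sketch of how such a composite could be computed: each indicator is linearly mapped onto a 0-100 pressure scale between a “calm” and a “critical” endpoint drawn from the table above, then combined as a weighted average. The endpoint choices, category weights, and current-value estimates in the snippet are illustrative assumptions, not published parameters of the index.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float    # current measurement in the indicator's native units
    at_zero: float  # value corresponding to a racing-pressure score of 0
    at_max: float   # value corresponding to a racing-pressure score of 100
    weight: float   # relative importance in the composite

def indicator_score(ind: Indicator) -> float:
    """Linearly map a raw value onto 0-100, clipping outside the threshold band."""
    frac = (ind.value - ind.at_zero) / (ind.at_max - ind.at_zero)
    return 100.0 * min(max(frac, 0.0), 1.0)

def composite_score(indicators: list[Indicator]) -> float:
    """Weighted average of per-indicator scores."""
    total = sum(i.weight for i in indicators)
    return sum(indicator_score(i) * i.weight for i in indicators) / total

# Illustrative 2024-25 values taken from the table above; endpoints and weights are assumptions.
indicators = [
    Indicator("safety_eval_weeks",         5.0,  14.0,   2.0, 0.20),  # 14 wk = calm, 2 wk = critical
    Indicator("safety_pct_of_rnd",         6.0,  12.0,   3.0, 0.20),  # 12% = calm, 3% = critical
    Indicator("release_interval_months",   3.5,  12.0,   3.0, 0.15),  # annual = calm, quarterly = critical
    Indicator("safety_turnover_spike_pct", 340,   0.0, 350.0, 0.15),  # no spike = calm, 350% = critical
    Indicator("commitment_adherence_pct",  60,  100.0,  30.0, 0.15),  # full adherence = calm, 30% = critical
    Indicator("investment_growth_pct",     55,   10.0,  80.0, 0.15),  # assumed value for the post-DeepSeek surge
]

print(f"Composite racing intensity: {composite_score(indicators):.0f}/100")
```

With these assumed endpoints and weights, the 2024-25 values from the table come out in the mid-70s, which falls in the High band described above; the point of the sketch is the mechanics, not the exact number.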
Current State Assessment
Timeline Compression Evidence
The 2024 FLI AI Safety Index↗ evaluated six leading AI companies (Anthropic, OpenAI, Google DeepMind, xAI, Meta, Alibaba Cloud) and found “a clear divide persists between the top performers and the rest” on safety practices. Meanwhile, analysis from the AI Index 2024↗ documented dramatic timeline compression across the industry:
| Safety Activity | Pre-ChatGPT Duration | Post-ChatGPT Duration | Reduction |
|---|---|---|---|
| Initial Safety Evaluation | 12-16 weeks | 4-6 weeks | 70% |
| Red Team Assessment | 8-12 weeks | 2-4 weeks | 75% |
| Alignment Testing | 20-24 weeks | 6-8 weeks | 68% |
| External Review | 6-8 weeks | 1-2 weeks | 80% |
The [52c56891fbc1959a] tracked 233 AI-related incidents in 2024, up 56% from 149 in 2023, suggesting that compressed timelines are manifesting as safety failures in deployment.
Resource Allocation Shifts
| Metric | 2022 | 2024 | Trend |
|---|---|---|---|
| Safety budget (% of R&D) | 12% | 6% | -50% |
| Safety staff turnover after competitive events | Baseline | +340% | Severe increase |
| AI researcher compensation | Baseline | +180% | Talent wars |
Commercial Competition Timeline
| Lab | Response Time to ChatGPT | Safety Evaluation Time | Market Pressure Score |
|---|---|---|---|
| Google (Bard) | 3 months | 2 weeks | 9.2/10 |
| Microsoft (Copilot) | 2 months | 3 weeks | 8.8/10 |
| Anthropic↗ (Claude) | 4 months | 6 weeks | 7.5/10 |
| Meta (LLaMA) | 5 months | 4 weeks | 6.9/10 |
Data compiled from industry reports and Stanford HAI AI Index 2024↗
What “Low Racing Intensity” Looks Like
Low racing intensity doesn’t mean slow development—it means development where safety considerations don’t systematically lose to competitive pressure:
Key Characteristics
- Adequate safety timelines: Evaluations not compressed beyond minimum viable duration
- Sustained safety investment: Resources don’t shift away from safety during competitive events
- Coordination stability: Safety commitments hold under competitive pressure
- Deployment patience: Labs willing to delay releases for safety reasons
- Talent retention: Safety researchers not systematically poached for capabilities work
Historical Baseline
Before ChatGPT’s November 2022 launch:
- Safety evaluation timelines of 3-6 months were standard
- Major labs maintained dedicated safety teams with stable funding
- Deployment decisions included genuine safety considerations
- Academic collaboration on safety research was more open
Factors That Increase Intensity (Threats)
This diagram illustrates the self-reinforcing dynamics of racing intensity. Multiple intensifying factors (competitor releases like ChatGPT and DeepSeek R1, geopolitical competition, investor pressure, and talent wars) converge to create high competitive pressure. This pressure manifests through timeline compression (70-80% reduction in evaluation periods) and budget reallocation away from safety (12% to 6% of R&D). These resource constraints force safety corner-cutting, which elevates risk—as evidenced by the 56% year-over-year increase in AI incidents documented in 2024. Major safety incidents could trigger three divergent trajectories: crisis-driven coordination that reduces racing intensity (15-25% probability), normalized risk-taking that maintains the status quo (25-35%), or paradoxically accelerated racing as actors scramble to “win” before regulation arrives (40-50%). The feedback loop from escalation back to competitive pressure represents the self-reinforcing trap that makes racing intensity particularly difficult to escape once established.
Commercial Competition
| Factor | Mechanism | Current Status |
|---|---|---|
| First-mover advantage | Early entrants capture market share | ChatGPT reached 100M users in 2 months |
| Investor pressure | VCs demand rapid scaling | $47B allocated to AI capability development (2024) |
| Talent competition | Labs bid up researcher salaries | 180% compensation increase since ChatGPT |
| Customer expectations | Enterprise buyers expect rapid feature releases | Quarterly release cycles now standard |
Geopolitical Competition
The January 2025 DeepSeek R1 release↗—achieving GPT-4-level performance with 95% fewer resources—was called an “AI Sputnik moment”↗ by multiple analysts. CSIS analysis↗ found that “DeepSeek’s breakthrough exposed a strategic miscalculation that had defined American AI policy for years: the belief that controlling advanced chips would permanently cripple China’s ambitions.” The company trained R1 using older H800 GPUs that fell below export control thresholds, demonstrating that algorithmic efficiency could compensate for hardware disadvantages. This development significantly intensified racing dynamics by:
- Invalidating US strategy: Export controls designed to maintain 2-3 year leads proved insufficient
- Accelerating investment: Both US and China are “set to put even more financial resources into AI” according to [b0e63ccdb332db60]
- Forcing decoupling: By late 2025, “the U.S. and China had severely decoupled their AI ecosystems—splitting hardware, software, standards, and supply chains” per [0397dadc79e7e3ae]
- Militarizing competition: Both nations began “embedding civilian AI advances into military doctrine” according to [c19eddb152d05207]
| Country | 2024 AI Investment | Strategic Focus | Safety Prioritization | Post-DeepSeek Trajectory |
|---|---|---|---|---|
| United States | $109.1B | Capability leadership | Medium | Intensifying R&D, stricter controls |
| China | $9.3B | Efficiency/autonomy | Low | Proven capability, increased confidence |
| EU | $12.7B | Regulation/ethics | High | Attempting third-way leadership |
| UK | $3.2B | Safety research | High | Neutral coordination venue |
Source: Stanford HAI AI Index 2025↗ and CSIS AI Competition Analysis↗
Coordination Failures
| Failure Mode | Description | Evidence |
|---|---|---|
| Commitment credibility | Labs can’t verify competitors’ safety claims | No third-party verification protocols |
| Defection incentives | First to cut corners gains advantage | Bard launch demonstrated willingness to rush |
| Information asymmetry | Can’t confirm competitors’ actual practices | Safety research quality hard to assess externally |
Factors That Decrease Intensity (Supports)
This diagram shows the virtuous cycle that can reduce racing intensity. Regulatory requirements (EU AI Act), coordination mechanisms (Seoul commitments, Frontier Model Forum), market incentives (enterprise buyer safety requirements, insurance), and safety culture (Anthropic’s brand positioning) all contribute to reducing competitive pressure. When racing pressure decreases, labs can invest in adequate safety timelines, which improves outcomes. Positive outcomes then reinforce safety culture, creating a virtuous cycle. The key insight is that multiple de-escalation pathways exist—racing is not inevitable.
Coordination Mechanisms
| Mechanism | Description | Status |
|---|---|---|
| Voluntary commitments | Seoul AI Safety Summit↗ (16 signatories) | Limited enforcement |
| Safety research sharing | Frontier Model Forum↗ ($10M fund) | 23% participation rate |
| Pre-competitive collaboration | Partnership on AI↗ working groups | Active |
| Academic consortiums | MILA↗, Stanford HAI↗ | Neutral venues |
Regulatory Pressure
| Regulation | Mechanism | Effect on Racing |
|---|---|---|
| EU AI Act↗ | Mandatory requirements | Levels playing field |
| UK AI Safety Institute↗ | Evaluation standards | Creates delay norms |
| NIST AI RMF↗ | Framework standards | Industry baseline |
Market Mechanisms
| Mechanism | Description | Adoption |
|---|---|---|
| Insurance requirements | Liability for deployment above capability thresholds | Emerging |
| Enterprise buyer demands | Customer safety certification requirements | Growing |
| ESG criteria | Investor focus on safety metrics | Increasing |
| Reputational pressure | Media coverage of safety leadership | Moderate |
Cultural Shifts
| Factor | Description | Evidence |
|---|---|---|
| Safety leadership as brand | Anthropic’s positioning | Market differentiation |
| Academic recognition | Safety research career incentives | Growing field |
| Whistleblower culture | Internal pressure for safety | Public departures from labs |
Evidence That De-escalation Mechanisms Work
Despite concerning trends, multiple de-escalation mechanisms are demonstrably functional:
| Evidence | Finding | Implication |
|---|---|---|
| Anthropic’s market success | Valued at $60B+ while prioritizing safety | Safety-first positioning commercially viable |
| EU AI Act compliance | Labs investing in compliance rather than relocating | Regulation can set floor without flight |
| Frontier Model Forum | $10M collective safety investment; information sharing protocols | Industry coordination possible |
| UK AISI evaluations | Labs voluntarily submitting to pre-deployment testing | Norms for independent review emerging |
| Enterprise buyer demands | Fortune 500 increasingly requiring safety certifications | Market creating safety incentives |
| Safety researcher hiring | Major labs expanding safety teams post-2023 | Some resource allocation toward safety |
| Historical precedent | Nuclear arms control, Montreal Protocol succeeded | Technology coordination achievable |
The racing narrative, while supported by real competitive pressure, may understate the countervailing forces. Labs have not abandoned safety entirely—they’ve compressed timelines but still conduct evaluations. Coordination mechanisms are imperfect but exist and are strengthening. The question is whether these forces can moderate racing sufficiently, not whether they exist at all.
Why This Parameter Matters
Consequences of High Racing Intensity
Analysis from Dan Hendrycks’ 2024 AI Safety textbook↗ warns that “competitive pressures may lead militaries and corporations to hand over excessive power to AI systems, resulting in increased risks of large-scale wars, mass unemployment, and eventual loss of human control.” [da39d35d613fd8c7] on speed-quality tradeoffs found that “the consequences of mismanaging this tradeoff have tangible, severe impacts on human life, economic stability, and physical safety.”
| Domain | Impact | Severity | 2024 Evidence |
|---|---|---|---|
| Safety corner-cutting | Evaluations compressed, risks missed | High | 233 AI incidents (up 56% YoY) |
| Premature deployment | Systems released before adequate testing | Very High | Bard rushed in 3 months vs 6-month norm |
| Research culture | Safety work deprioritized | High | Safety staff turnover +340% |
| Coordination failure | Agreements collapse under pressure | Critical | Voluntary commitments lack enforcement |
Risk Assessment by Intensity Level
| Intensity Level | Safety Timeline | Coordination | Risk Profile | Probability Estimate |
|---|---|---|---|---|
| Low | 3-6 months | Stable | Manageable | 15-25% (declining) |
| Medium | 4-8 weeks | Stressed | Elevated | 35-45% (current state) |
| High | 2-4 weeks | Fragile | Dangerous | 20-30% (trend direction) |
| Critical | Days | Collapsed | Extreme | 10-15% (crisis scenario) |
Racing Intensity and Existential Risk
High racing intensity directly increases existential risk through multiple pathways. The [7ac691ae1e4ecec9] research covered 43 games between 2020 and 2024 and found that “race dynamics increase the chances for all kinds of risks and reducing such dynamics should improve risk management across the board.” [7fe1e8f86703b52d]’s seminal analysis on “racing to the precipice” identified how “competitive pressure could drive unsafe AI development” through structural incentive misalignment.
- AGI race with inadequate alignment: 40-50% probability of major harm if racing continues at high intensity (expert surveys, FHI 2024↗)
- Military AI deployment pressure: 55-70% probability of regional conflicts involving autonomous systems by 2030 under high racing
- Coordination window closure: Racing may foreclose opportunities for safety agreements, with Brookings analysis↗ noting coordination becomes “exponentially harder” as capability gaps widen
- Safety research capacity: [ea3e8f6ca91c7dba] warns that “if AI systems substantially speed up developers, this could signal rapid acceleration of AI R&D progress generally, which may lead to proliferation risks, breakdowns in safeguards and oversight”
Trajectory and Scenarios
Current Trajectory
| Trend | Assessment | Evidence |
|---|---|---|
| Commercial competition | Intensifying | Major release every 3-4 months |
| Geopolitical pressure | Increasing | DeepSeek “Sputnik moment” |
| Coordination efforts | Growing but fragile | Seoul commitments, AISI |
| Regulatory pressure | Increasing | EU AI Act implementation |
Scenario Analysis
| Scenario | Probability | Racing Intensity Outcome | Key Drivers |
|---|---|---|---|
| Coordination Success | 25-35% | Intensity reduces; safety timelines stabilize | EU AI Act enforcement; market demand for safety; geopolitical détente |
| Managed Competition | 30-40% | Competition continues but within guardrails; safety standards enforced | Regulation establishes floor; voluntary commitments partially hold; market differentiation on safety |
| Fragile Equilibrium | 15-25% | Current intensity maintained with stress; neither improving nor worsening | Mixed signals; some coordination, some defection |
| Escalation | 10-20% | Racing intensifies; safety margins erode further | Geopolitical crisis; major capability breakthrough; coordination collapse |
Note: The probability of positive or stable scenarios (“Coordination Success” + “Managed Competition” = 55-75%) reflects that multiple de-escalation mechanisms are active and strengthening. The EU AI Act is being implemented, major labs have signed voluntary commitments (even if imperfect), enterprise buyers increasingly demand safety certifications, and safety research is growing as a field. The question is whether these mechanisms can outpace intensifying geopolitical pressure. Historical precedent (nuclear arms control, ozone layer protection) shows that coordination on dangerous technologies is difficult but achievable.
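As a rough way to roll the scenario table into a single outlook, the sketch below weights each scenario by the midpoint of its probability range and applies an assumed change to the composite racing-intensity score. The per-scenario score changes are hypothetical values chosen for illustration; only the probability ranges come from the table above.

```python
# Scenario roll-up: (prob_low, prob_high, assumed change to the composite score).
# The score changes are hypothetical; the probability ranges come from the table.
scenarios = {
    "Coordination Success": (0.25, 0.35, -20),
    "Managed Competition":  (0.30, 0.40, -10),
    "Fragile Equilibrium":  (0.15, 0.25,   0),
    "Escalation":           (0.10, 0.20, +10),
}

# Midpoint probabilities, renormalized in case the stated ranges do not sum to 1.
midpoints = {name: (lo + hi) / 2 for name, (lo, hi, _) in scenarios.items()}
total = sum(midpoints.values())
probs = {name: p / total for name, p in midpoints.items()}

expected_shift = sum(probs[name] * change for name, (_, _, change) in scenarios.items())
print(f"Probability-weighted shift in composite score: {expected_shift:+.1f} points")
for name, p in probs.items():
    print(f"  {name}: {p:.0%}")
```

Under these assumptions the probability-weighted expectation is a modest decline in intensity; the spread across scenarios, not the point estimate, is what matters for planning.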
Critical Uncertainties
| Uncertainty | Resolution Importance | Current Assessment |
|---|---|---|
| DeepSeek impact on US-China dynamics | Very High | Likely intensifying |
| EU AI Act enforcement | High | Unknown |
| Voluntary commitment durability | High | Fragile |
| Next major capability breakthrough | Very High | Unpredictable |
Key Debates
Is Racing Inevitable?
Inevitability view holds that economic incentives are structural, geopolitical competition cannot be coordinated away, and first-mover advantages are too large to forgo. McKinsey’s 2025 State of AI↗ report found that “organizations recognize AI risks, but fewer than two-thirds are implementing concrete safeguards,” suggesting a persistent action gap even where awareness exists.
Contingency view argues that historical precedent exists for technology coordination (nuclear non-proliferation, ozone layer protection), that market mechanisms can internalize safety costs through liability and insurance requirements, and that cultural and regulatory shifts remain possible. Brookings Institution analysis↗ advocates for “formal mechanisms for coordination between institutions to prevent duplication of efforts and ensure AI governance initiatives reinforce one another.”
The empirical evidence from 2024-2025 suggests racing intensity is neither inevitable nor easily controlled. The Carnegie Endowment assessment↗ concluded: “The global community must move from symbolic gestures to enforceable commitments” as “voluntary commitments play a crucial role but often need to be more robust to ensure meaningful compliance.”
Optimal Racing Level
Some racing is beneficial: Competition drives innovation, with diverse approaches exploring the solution space. The Stanford AI Index 2025↗ documented breakthrough innovations from competitive pressure. Monopoly, by contrast, concentrates power and creates single points of failure, arguably increasing structural risk.
Current racing is excessive: Safety margins have fallen below minimum viable levels. Compressing initial evaluations from 12-16 weeks to 4-6 weeks is a roughly 70% reduction that [da39d35d613fd8c7] suggests is insufficient for high-stakes systems. Coordination mechanisms are failing, with the 2024 Seoul Summit↗ producing commitments that “create a fragmented environment in which companies pick and choose which guidelines to follow.” The trajectory is toward higher intensity post-DeepSeek, with both superpowers increasing investment in a context of declining trust.
Related Pages
Related Risks
- Racing Dynamics — The structural risk from high racing intensity
- Multipolar Trap — Coordination failure dynamics that intensify racing
- Winner-Take-All Dynamics — First-mover advantages that drive racing
- Concentration of Power — Power consolidation from racing winners
- Economic Disruption — Labor market shocks from racing-driven deployment
Related Interventions
- Responsible Scaling Policies — Industry self-governance to moderate racing
- Voluntary Commitments — International coordination mechanisms
- International AI Safety Summits — Diplomatic coordination efforts
- Seoul AI Safety Summit Declaration — 2024 voluntary commitments
- AI Chip Export Controls — Hardware-based racing moderation
- EU AI Act — Regulatory approach to level playing field
- NIST AI Risk Management Framework — Standards to reduce racing pressure
Related Parameters
- Safety Culture Strength — Internal safety prioritization that resists racing pressure
- Coordination Capacity — Industry cooperation that reduces competitive intensity
- International Coordination — Geopolitical cooperation level
- Regulatory Capacity — Government ability to moderate racing through policy
- Safety-Capability Gap — The gap that racing widens
- AI Control Concentration — Concentration dynamics from racing outcomes
Measurement Challenges
Quantifying racing intensity faces several methodological obstacles. First, information asymmetry prevents external observers from verifying actual safety timelines and resource allocations—labs self-report these metrics with varying transparency standards. The 2024 FLI AI Safety Index↗ noted difficulty obtaining consistent data across companies. Second, leading indicators lag outcomes: by the time timeline compression appears in public reports, competitive dynamics have already intensified for 6-12 months. Third, multidimensional tradeoffs make single composite scores potentially misleading—a lab might score well on resource allocation but poorly on deployment timelines. Finally, counterfactual ambiguity obscures whether observed behavior reflects racing pressure or other factors (technical constraints, strategic choices, capability limitations).
Despite these challenges, converging evidence from multiple sources—industry reports (Stanford AI Index↗), expert surveys (FLI Safety Index↗), incident tracking ([52c56891fbc1959a]), and geopolitical analysis (CSIS↗)—provides robust triangulation that racing intensity has increased substantially from its 2022 baseline through 2025.
Sources & Key Research
2024-2025 Empirical Evidence
Section titled “2024-2025 Empirical Evidence”- FLI AI Safety Index 2024↗ — Evaluation of 6 major labs on safety practices
- Stanford AI Index 2024-2025↗ — Comprehensive industry metrics and trends
- [52c56891fbc1959a] — 233 documented incidents, up 56% YoY
- [ea3e8f6ca91c7dba] — AI acceleration risks
- International AI Safety Report↗ — Capability advancement tracking
Geopolitical Analysis
- CSIS: DeepSeek and US-China AI Race↗ — Export control effectiveness
- [c19eddb152d05207] — Strategic implications
- [b0e63ccdb332db60] — Pluralization of AI development
- [0397dadc79e7e3ae] — Decoupling dynamics
Coordination & Governance
- Carnegie Endowment: AI Governance Arms Race↗ — Summit effectiveness assessment
- Brookings: International AI Cooperation↗ — Coordination mechanisms
- McKinsey State of AI 2025↗ — Industry safeguard adoption
- Seoul AI Safety Summit↗ — 16-company voluntary commitments
- Frontier Model Forum↗ — Industry coordination forum
Academic & Safety Research
- [7fe1e8f86703b52d] — Foundational racing dynamics model
- [7ac691ae1e4ecec9] — 43 games (2020-2024)
- Dan Hendrycks AI Safety Textbook↗ — Competitive pressure risks
- [da39d35d613fd8c7] — High-stakes systems analysis
- Future of Humanity Institute↗ — Existential risk surveys
- Epoch AI↗ — AI development trends
Historical Context
- Stanford HAI AI Index↗ — Multi-year trend analysis
- RAND AI Competition Analysis↗ — Strategic competition frameworks
- Partnership on AI↗ — Multi-stakeholder coordination