
Expertise Atrophy Cascade Model

Importance: 42
Model Type: Cascade Analysis
Target Risk: Expertise Atrophy
Key Finding: Complete knowledge loss within 15-30 years with high AI use
Model Quality: Novelty 4, Rigor 4, Actionability 5, Completeness 5

This model analyzes expertise atrophy as a cascading process where AI assistance in one domain triggers skill degradation in dependent domains, creating multi-generation feedback loops that compound over time. Unlike previous automation waves that displaced discrete manual tasks, AI assistance pervades cognitive domains—writing, reasoning, analysis, diagnosis, design—meaning atrophy cascades can propagate through entire professional ecosystems and ultimately affect civilizational knowledge preservation.

The central insight is that skill dependencies create multiplicative vulnerability: when foundational skill A atrophies due to AI assistance, dependent skills B, C, and D become unreliable even if they are independently practiced. A programmer who loses debugging capability cannot effectively design systems, evaluate solutions, or train junior developers—each capability degrades the others in a reinforcing spiral. The model quantifies these cascade dynamics and identifies intervention points where preservation efforts offer the highest leverage.

Central Question: At what rate does AI-assisted work degrade human expertise, through what mechanisms do these effects cascade across skill dependencies and generations, and which interventions can preserve critical capabilities before irreversibility thresholds are crossed?

The policy implications are significant because atrophy operates on generational timescales that exceed typical planning horizons. If expertise loss becomes visible only through system failures, recovery may be more difficult. This creates a potential “atrophy trap”—where recognizing and remediating capability loss becomes harder as expertise declines. However, as discussed in the Counter-Arguments section below, market incentives and institutional adaptation may prevent this worst-case outcome.

Expertise atrophy cascades operate across three interconnected levels, each with distinct dynamics and timescales. Individual skill cascades occur within years, institutional cascades within decades, and generational cascades across 15-40 years. The interaction between levels creates feedback loops that accelerate degradation beyond what any single-level analysis would predict.

| Level | Timescale | Cascade Sequence |
|---|---|---|
| 1. Individual | 1-5 years | AI assists → Practice declines → Proficiency drops → Dependent skills atrophy → Increased AI dependence (loop) |
| 2. Institutional | 5-15 years | Individual expertise declines → Training quality degrades → New hires less capable → Knowledge gaps → Recovery becomes impossible |
| 3. Generational | 15-40 years | Gen 1 has skills, uses AI → Gen 2 learns with AI → Gen 2 cannot train Gen 3 → Knowledge effectively lost |

The core skill degradation model captures the interaction between practice, natural decay, and AI-induced atrophy. For a skill $S$ with proficiency $P(t)$ at time $t$:

$$\frac{dP}{dt} = \alpha \cdot \text{Practice}(t) - \beta \cdot P(t) - \gamma \cdot \text{AI\_Use}(t) \cdot P(t)$$

Where:

  • $\alpha$ = Learning rate from deliberate practice (0.05-0.15 per practice hour)
  • $\beta$ = Natural decay rate (0.01-0.03 per month without active use)
  • $\gamma$ = AI-induced degradation multiplier (0.02-0.08 per month of AI reliance)

The critical insight is that AI use simultaneously reduces practice (substitution effect) and accelerates decay (offloading effect). Practice displacement follows a power law:

$$\text{Practice}(t) = \text{Practice}_0 \cdot (1 - \text{AI\_Use}(t))^\delta$$

Where $\delta$ ranges from 1.2 to 1.8, capturing the observation that AI use disproportionately reduces the most challenging practice opportunities—precisely those that build deep expertise.
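
As a sanity check on these dynamics, here is a minimal numerical sketch of the two equations above. The published units are mixed ($\alpha$ is quoted per practice hour, $\beta$ and $\gamma$ per month), so the sketch assumes Practice is a normalized 0-1 monthly intensity with $\alpha$ applied monthly, and clips $P$ to $[0, 1]$; both are assumptions, not part of the model as stated.

```python
import numpy as np

def simulate_proficiency(p0=0.95, ai_use=0.8, months=240,
                         alpha=0.10, beta=0.02, gamma=0.05,
                         delta=1.5, practice0=0.5):
    """Forward-Euler integration of dP/dt with power-law practice displacement.

    Assumptions beyond the text: practice0 is a normalized monthly practice
    intensity (not hours), alpha applies per month, and P is clipped to
    [0, 1] because the bare ODE is unbounded above.
    """
    practice = practice0 * (1.0 - ai_use) ** delta  # displacement power law
    p = np.empty(months + 1)
    p[0] = p0
    for t in range(months):
        dp = alpha * practice - (beta + gamma * ai_use) * p[t]
        p[t + 1] = min(1.0, max(0.0, p[t] + dp))
    return p

# P(t) settles toward alpha*practice / (beta + gamma*ai_use); at 80% AI use
# with central parameters that equilibrium sits far below the 0.70 competence bar.
print(simulate_proficiency()[::60])  # proficiency sampled every five years
```

Under these placeholder units the decay runs faster than the scenario tables below suggest, so the sketch illustrates the mechanism rather than the calibration.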

Combining these effects yields the cascade acceleration factor:

$$\text{Cascade Factor} = \sum_{i=1}^{n} w_i \cdot \prod_{j \in \text{deps}(i)} \left(1 - \frac{P_j(t)}{P_j(0)}\right)$$

This formula captures how degradation in foundational skills (high $w_i$) propagates through dependency chains, with each dependent skill’s atrophy multiplied by the degradation of its prerequisites.
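
A direct transcription of the cascade factor, with a hypothetical three-skill dependency chain (debugging, code reading, architecture) standing in for real weights and prerequisite sets:

```python
def cascade_factor(weights, deps, p_now, p_init):
    """Sum over skills i of w_i times the product, over i's prerequisites j,
    of the fractional degradation 1 - P_j(t)/P_j(0)."""
    total = 0.0
    for i, w_i in enumerate(weights):
        degradation = 1.0
        for j in deps[i]:
            degradation *= 1.0 - p_now[j] / p_init[j]
        total += w_i * degradation
    return total

# Hypothetical example: each skill's prerequisite set includes itself and all
# lower skills; debugging (index 0) has degraded the most.
weights = [0.5, 0.3, 0.2]        # foundational skills weighted highest
deps = [[0], [0, 1], [0, 1, 2]]  # prerequisite indices per skill
p_init = [0.90, 0.90, 0.90]
p_now = [0.60, 0.70, 0.85]
print(round(cascade_factor(weights, deps, p_now, p_init), 3))  # ≈ 0.19
```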

| Parameter | Symbol | Low Estimate | Central | High Estimate | Confidence | Key Uncertainty |
|---|---|---|---|---|---|---|
| Learning rate per practice hour | $\alpha$ | 0.05 | 0.10 | 0.15 | Medium | Individual variation |
| Natural monthly decay | $\beta$ | 0.01 | 0.02 | 0.03 | High | Skill type dependency |
| AI-induced monthly degradation | $\gamma$ | 0.02 | 0.05 | 0.08 | Low | Limited longitudinal data |
| Practice displacement exponent | $\delta$ | 1.2 | 1.5 | 1.8 | Medium | Varies by domain |
| Competence threshold | $P_c$ | 0.65 | 0.70 | 0.75 | High | Task complexity |
| Functionality threshold | $P_f$ | 0.35 | 0.40 | 0.45 | High | Error tolerance |
| Dependence threshold | $P_d$ | 0.15 | 0.20 | 0.25 | Medium | Recovery feasibility |
| Generational transmission efficiency (without AI) | $\tau_0$ | 0.70 | 0.80 | 0.90 | Medium | Training quality |
| Generational transmission efficiency (with AI) | $\tau_{AI}$ | 0.30 | 0.45 | 0.60 | Low | Emerging phenomenon |

Expertise exists along a continuum, but three thresholds mark qualitatively different states with distinct implications for system resilience and recovery potential.

The competence threshold ($P = 0.70$) represents the minimum proficiency for independent, high-quality work. Above this level, practitioners can perform complex tasks without assistance, reliably detect errors in their own work and others’, evaluate novel approaches, and effectively train the next generation of practitioners. This is the threshold required for knowledge transmission and institutional resilience.

The functionality threshold ($P = 0.40$) marks the minimum for assisted performance. Practitioners can complete routine tasks with AI support and recognize obvious errors when flagged, but cannot reliably detect subtle mistakes, handle novel situations, or train others effectively. Systems staffed at this level operate normally under routine conditions but fail unpredictably under stress.

The dependence threshold ($P = 0.20$) represents effective capability loss. Even with AI assistance, practitioners cannot reliably complete tasks or detect AI errors. They cannot evaluate AI suggestions or recognize when AI outputs are inappropriate. Knowledge transmission is impossible, and recovery requires external intervention—if recovery remains possible at all.

| Initial State | AI Use Level | Time to Competence Loss | Time to Functionality Loss | Time to Dependence | Recovery Feasibility |
|---|---|---|---|---|---|
| Expert ($P_0 = 0.95$) | Low (20%) | 10-15 years | 25-35 years | 40+ years | Full |
| Expert ($P_0 = 0.95$) | Medium (50%) | 4-7 years | 10-15 years | 20-25 years | Partial |
| Expert ($P_0 = 0.95$) | High (80%) | 1.5-3 years | 4-7 years | 10-15 years | Difficult |
| Expert ($P_0 = 0.95$) | Complete (95%) | 6-12 months | 2-4 years | 6-10 years | Very difficult |
| Journeyman ($P_0 = 0.75$) | Medium (50%) | 2-4 years | 6-10 years | 15-20 years | Partial |
| Journeyman ($P_0 = 0.75$) | High (80%) | 8-18 months | 3-5 years | 8-12 years | Difficult |
| Novice ($P_0 = 0.50$) | High (80%) | Immediate | 1-2 years | 4-7 years | May never develop |

The table reveals a critical asymmetry: skill acquisition requires sustained deliberate practice over years, but atrophy can occur within months under high AI dependency. This asymmetry means that brief periods of intense AI use can create deficits requiring years to remediate—and remediation itself requires the expertise that may have been lost.
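
Connecting the ODE to these thresholds, the following sketch (reusing simulate_proficiency from above, with the same unit assumptions) reports the first month each threshold is crossed. The crossings arrive faster than the table's timelines under these placeholder units, so the point is the ordering and the acceleration with AI use, not the absolute dates.

```python
def months_to_cross(threshold, ai_use):
    """First month P(t) falls below `threshold`, or None within the horizon."""
    p = simulate_proficiency(ai_use=ai_use, months=600)
    below = np.nonzero(p < threshold)[0]
    return int(below[0]) if below.size else None

thresholds = {"competence": 0.70, "functionality": 0.40, "dependence": 0.20}
for ai_use in (0.20, 0.50, 0.80, 0.95):
    crossings = {name: months_to_cross(th, ai_use)
                 for name, th in thresholds.items()}
    print(f"AI use {ai_use:.0%}: {crossings}")
```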

Medical diagnosis exemplifies how AI assistance can trigger capability loss across interdependent clinical skills. The cascade begins with AI-assisted image interpretation, where radiologists rely on algorithms to flag abnormalities. As manual interpretation practice declines, the ability to recognize subtle or unusual patterns atrophies. This undermines the development of clinical intuition that comes from correlating imaging findings with patient outcomes.


The quantitative progression follows documented patterns from aviation automation and early medical AI deployment. Initial efficiency gains mask developing fragility as the expertise required to catch AI errors progressively erodes.

| Cascade Stage | Timeline | Skill Impact | Detection Indicator | Intervention Point |
|---|---|---|---|---|
| AI adoption | Years 0-2 | Minimal (-5 to -10%) | Increased throughput | Mandate manual practice quotas |
| Automation complacency | Years 2-5 | Moderate (-15 to -25%) | AI confirmation without review | Require independent diagnosis before AI |
| Pattern recognition loss | Years 5-8 | Significant (-30 to -45%) | Rising miss rates on unusual cases | Novel case rotations |
| Clinical intuition atrophy | Years 8-12 | Severe (-40 to -60%) | Cannot teach clinical reasoning | Preserve expert cadres |
| Knowledge transmission failure | Years 12-20 | Critical (-50 to -75%) | Training program quality decline | Documentation and simulation |
| System dependence | Years 20+ | Catastrophic (-70 to -90%) | System failures during AI downtime | May be irreversible |

Software development expertise degrades through a similar cascade, with AI code generation undermining the skill chain from basic syntax fluency through system architecture. The progression is particularly concerning because each level of programming expertise builds on the levels below it—architectural thinking requires deep code reading experience, which requires debugging skill, which requires syntactic fluency.

Current evidence from GitHub Copilot adoption suggests this cascade is already underway. Stack Overflow traffic declined approximately 40% from 2022 to 2024, correlating with coding assistant adoption. Junior developers report decreased confidence in reading unfamiliar code and reduced debugging capability when AI assistance is unavailable. Senior developers observe that code review quality has declined as reviewers increasingly trust AI-generated code without deep examination.

| Skill Level | AI Impact Mechanism | Current Status (2024) | 3-Year Projection | 5-Year Projection | 10-Year Projection |
|---|---|---|---|---|---|
| Syntax fluency | AI writes boilerplate, reducing exposure | -15 to -25% for AI users | -25 to -35% | -40 to -55% | -60 to -75% |
| Code reading | Less need to read others’ code | -10 to -20% | -20 to -35% | -35 to -50% | -50 to -70% |
| Debugging | AI suggests fixes directly | -15 to -25% | -30 to -45% | -45 to -60% | -65 to -80% |
| Algorithm design | AI generates solutions | -10 to -15% | -20 to -30% | -35 to -50% | -55 to -70% |
| System architecture | Requires all lower skills | -5 to -10% | -15 to -25% | -30 to -45% | -50 to -65% |
| Problem decomposition | Meta-skill over all above | -5 to -10% | -15 to -25% | -25 to -40% | -45 to -60% |

The most consequential aspect of expertise atrophy operates across generational timescales. Each generation can only transmit knowledge and skills it actually possesses, creating a ratchet effect where capability loss in one generation permanently constrains the next.

The generational skill transmission model captures this dynamic:

$$S_{n+1} = \tau(AI_n) \cdot S_n \cdot \left(1 - \frac{\text{AI Dependency}_{n+1}}{1 + \text{AI Dependency}_{n+1}}\right)$$

Where:

  • $S_n$ = Skill level of generation $n$ (normalized to $S_0 = 1.0$)
  • $\tau(AI_n)$ = Transmission efficiency, declining with AI use (0.80 baseline → 0.45 with heavy AI)
  • $\text{AI Dependency}$ = Fraction of work performed with AI assistance
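
A back-of-envelope iteration of the recurrence shows how quickly multiplicative losses compound (compare the table below). The per-generation $\tau$ and dependency values here are interpolations between the 0.80 baseline and 0.45 heavy-AI transmission efficiencies, not published point estimates.

```python
def next_generation_skill(s_n, tau, ai_dependency):
    """One step of S_{n+1} = tau * S_n * (1 - d/(1+d)), where d is the
    next generation's AI dependency."""
    return tau * s_n * (1.0 - ai_dependency / (1.0 + ai_dependency))

# Assumed (tau, dependency) pairs per generation, interpolated between the
# no-AI (0.80) and heavy-AI (0.45) transmission efficiencies.
s, trajectory = 1.0, [1.0]
for tau, dep in [(0.80, 0.50), (0.60, 0.80), (0.45, 0.90), (0.45, 0.95)]:
    s = next_generation_skill(s, tau, dep)
    trajectory.append(round(s, 3))
print(trajectory)  # roughly [1.0, 0.533, 0.142, 0.034, 0.008]
```

The bare recurrence decays somewhat faster than the ranges in the table, which embed wider uncertainty bands.
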
| Generation | Period | Baseline Skill | AI Dependency | Transmission to Next | Skill Level | Can Train Next Gen | Knowledge Status |
|---|---|---|---|---|---|---|---|
| Gen 0 (Pre-AI) | Pre-2020 | 1.00 | 0% | 0.80-0.90 | 100% | Yes | Intact |
| Gen 1 (AI-Assisted) | 2020-2035 | 0.80-0.90 | 40-60% | 0.50-0.70 | 65-85% | Partially | Degrading |
| Gen 2 (AI-Native) | 2035-2050 | 0.50-0.70 | 70-85% | 0.30-0.50 | 35-55% | Marginally | Critical |
| Gen 3 (AI-Dependent) | 2050-2065 | 0.30-0.50 | 85-95% | 0.20-0.35 | 15-35% | No | Effectively lost |
| Gen 4+ (Post-Knowledge) | 2065+ | 0.20-0.35 | 95%+ | Unknown | 5-20% | No | Lost |

The table reveals that irreversibility likely occurs between Generation 1 and Generation 2—during the 2030-2045 period. By the time Generation 2 reaches professional maturity, they may lack sufficient expertise to recognize what has been lost or to train Generation 3 effectively. This creates the “atrophy trap” where society cannot diagnose its own capability loss because the diagnostic capability has itself atrophied.

Loop 1: Individual Skill-Dependency Spiral

The individual skill-dependency loop operates on annual timescales and represents the fundamental mechanism of atrophy. AI use reduces practice opportunities, causing skill decay, which increases perceived AI necessity, leading to more AI use. This loop has an amplification factor of 1.3-1.7x per cycle, meaning that without intervention, AI dependency approximately doubles every 2-3 years.
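
The doubling claim follows from compounding: dependency growing by amplification factor $A$ per cycle doubles after $\ln 2 / \ln A$ cycles. Assuming roughly one feedback cycle per year (an assumption; the text does not fix the cycle length):

$$n_{\text{double}} = \frac{\ln 2}{\ln A}, \qquad n_{\text{double}}(1.3) \approx 2.6 \text{ cycles}, \qquad n_{\text{double}}(1.7) \approx 1.3 \text{ cycles}$$

which is consistent in order of magnitude with the stated 2-3 year doubling time.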

The loop exhibits two stable equilibria and one unstable equilibrium. The low-AI-use, high-skill equilibrium is stable but economically disadvantaged in competitive contexts. The high-AI-use, low-skill equilibrium is stable but fragile to AI system failures. The intermediate state of medium AI use with medium skill is unstable and tends to tip toward high AI use due to competitive and convenience pressures.

Economic dynamics amplify individual atrophy into collective lock-in. AI-assisted workers demonstrate higher productivity on measurable metrics, leading organizations to prefer AI-using employees. Non-users face career disadvantages, creating pressure for universal adoption. As AI use becomes mandatory, skills atrophy universally, and the economy becomes structurally dependent on AI systems.

This loop has an amplification factor of 1.5-2.5x, driven by market dynamics. Once a majority of workers are AI-dependent, returning to AI-free operation becomes economically impossible regardless of whether AI remains available. The loop is particularly difficult to escape because the costs of AI dependency (fragility, skill loss) are diffuse and long-term, while the benefits (productivity, convenience) are immediate and measurable.

The training loop operates across generational timescales and represents the pathway to irreversibility. Experts who use AI extensively cannot effectively train juniors in fundamental skills they no longer practice. Juniors learn AI-mediated approaches from the start, becoming more AI-dependent than their teachers. When these juniors become the senior generation, they cannot train the next generation at all.

This loop has an amplification factor of 1.2-1.4x per generation, but because generations span 15-25 years, the effects compound dramatically over time. The critical intervention window is before Generation 1 (current AI-assisted workers) loses the ability to train Generation 2 effectively—roughly 2030-2040 for most domains.

| Scenario | Probability | AI Adoption Trajectory | Intervention Level | 2035 Skill Level | 2050 Skill Level | Recovery Feasibility |
|---|---|---|---|---|---|---|
| Unmanaged cascade | 35% | Rapid, uncontrolled | Minimal | 45-55% | 20-35% | Very low |
| Partial intervention | 40% | Rapid with some preservation | Moderate | 55-70% | 35-50% | Partial |
| Active preservation | 15% | Managed with skill quotas | High | 70-85% | 55-70% | Good |
| Market correction | 8% | High adoption followed by backlash | Variable | 50-65% | 45-60% | Moderate |
| Technological plateau | 2% | AI capabilities stall | Low (not needed) | 80-90% | 70-85% | Full |

| Domain | Current AI Penetration | Cascade Velocity | Severity if Lost | Preservation Priority | Intervention Tractability |
|---|---|---|---|---|---|
| Medical diagnosis | Medium (30-40%) | Fast | Critical | Very High | Medium |
| Aviation piloting | High (60-70%) | Medium | Critical | Very High | High |
| Software development | High (50-70%) | Very Fast | High | High | Medium |
| Legal research/reasoning | Medium (40-50%) | Fast | Medium-High | Medium | Medium |
| Scientific research | Medium (30-50%) | Medium | Very High | Very High | Low |
| Engineering analysis | Low-Medium (20-35%) | Medium | Critical | Very High | High |
| Financial analysis | Medium (40-50%) | Fast | Medium | Medium | Medium |
| Emergency response | Low (15-25%) | Slow | Critical | High | High |
| Air traffic control | Medium (35-45%) | Medium | Critical | Very High | High |
| Nuclear operations | Low (10-20%) | Slow | Catastrophic | Extreme | Very High |

Counter-Arguments: Why Severe Atrophy May Not Occur

The analysis above presents expertise atrophy as a likely cascading risk, but several factors could prevent or substantially mitigate this outcome. A balanced assessment requires engaging with reasons for skepticism.

If skill atrophy starts causing significant problems (system failures, quality declines, liability issues), powerful economic incentives emerge:

| Signal | Likely Market Response | Historical Parallel |
|---|---|---|
| AI-dependent workers perform worse in novel situations | Premium salaries for verified skilled workers | Craft labor premiums in manufacturing |
| System failures during AI downtime | Investment in backup human capability | Disaster recovery planning |
| Quality problems trace to skill gaps | Certification and training requirements | Professional licensure |
| Liability for AI-assisted errors | Insurance requirements for human oversight | Medical malpractice standards |

Organizations that experience skill-related failures have strong incentives to correct course. The aviation industry’s response to automation complacency—adding manual flying requirements—demonstrates that this dynamic already operates.

Several factors suggest the model may overestimate atrophy velocity:

  • Deep expertise is durable: Experts with 10+ years of practice retain substantial capability even with reduced practice. The “use it or lose it” dynamic is weaker for deeply encoded skills.
  • AI as training tool: Well-designed AI systems could enhance rather than replace skill development—immediate feedback, personalized difficulty, unlimited practice opportunities.
  • Meta-skills persist: Higher-order skills (critical thinking, problem decomposition, judgment) may be less vulnerable to atrophy than domain-specific techniques.
  • Generational adaptation: New generations may develop different but equally valuable cognitive capabilities adapted to AI-augmented work.

Previous automation waves triggered concerns about skill loss that proved partially unfounded:

| Automation | Predicted Outcome | Actual Outcome |
|---|---|---|
| Calculators (1970s) | “Students won’t learn math” | Math education adapted; conceptual understanding emphasized |
| Spell-checkers (1990s) | “Writing skills will collapse” | Writing quality maintained; focus shifted to higher-level composition |
| GPS navigation | “Spatial cognition will atrophy” | Limited evidence of broad cognitive impact despite widespread adoption |
| Industrial automation | “Manufacturing expertise will disappear” | Expertise shifted to maintenance, programming, quality control |

In each case, skills evolved rather than simply degrading. New forms of expertise emerged that complemented rather than competed with automation.

The Model Assumes Worst-Case Adoption Patterns

The projections assume:

  • Near-universal high-level AI adoption without preservation efforts
  • No significant market correction or regulatory response
  • AI capabilities continue improving smoothly
  • No development of “skill-preserving AI” design patterns

If any of these assumptions fail, outcomes improve substantially. Together, the “partial intervention” scenario (40% probability) and “market correction” scenario (8%) are more likely than the “unmanaged cascade” (35%).

Counter-arguments are strongest if:

  • Skill degradation becomes visible through failures before irreversibility
  • Professional communities establish effective preservation norms
  • AI tools are designed to augment rather than replace skill development
  • Regulatory requirements mandate human capability maintenance

They’re weakest if:

  • Competitive pressure drives universal AI adoption before problems emerge
  • Atrophy operates too slowly to trigger timely market correction
  • The “atrophy trap” genuinely prevents recognition of capability loss
  • AI capabilities improve faster than institutions can adapt

Revised assessment: Given adaptive capacity, the “unmanaged cascade” scenario probability may be closer to 20-25% rather than 35%, while “partial intervention” and “market correction” together may reach 55-60%. The overall trajectory remains concerning but less deterministic than the base model suggests.
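
A quick expected-value check makes the revision concrete. Scenario midpoints for 2050 skill level come from the scenario table above; the revised probability split is only partially specified in the text, so the allocation below is an assumption.

```python
# Midpoints of the 2050 skill-level ranges from the scenario table.
midpoints = {"unmanaged": 0.275, "partial": 0.425, "active": 0.625,
             "market": 0.525, "plateau": 0.775}

base = {"unmanaged": 0.35, "partial": 0.40, "active": 0.15,
        "market": 0.08, "plateau": 0.02}
# Assumed split consistent with the revised assessment: unmanaged ~22.5%,
# partial + market ~57.5%, remainder to active preservation and plateau.
revised = {"unmanaged": 0.225, "partial": 0.45, "active": 0.175,
           "market": 0.125, "plateau": 0.025}

for name, probs in (("base", base), ("revised", revised)):
    expected = sum(probs[s] * midpoints[s] for s in midpoints)
    print(f"{name}: expected 2050 skill level ≈ {expected:.0%}")
# base ≈ 42%, revised ≈ 45%: the revision shifts the mean only modestly,
# but substantially cuts the probability of the worst tail.
```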

Prevention is the highest-leverage intervention, but the window is closing for many domains. Effective prevention requires institutional commitment to skill preservation before AI benefits become apparent enough to drive adoption pressure.

| Intervention | Mechanism | Effectiveness | Cost | Implementation Difficulty | Time to Impact |
|---|---|---|---|---|---|
| Mandatory manual practice quotas | Maintains practice volume | 70-90% skill preservation | 10-25% productivity cost | High (regulatory) | 1-2 years |
| AI-free certification pathways | Creates skill signals | 60-80% for certified cohort | Moderate | Medium | 2-4 years |
| Foundational curriculum requirements | Ensures pre-AI learning | 50-70% for new entrants | Low | Medium-High | 5-10 years |
| Practice-before-AI protocols | Sequences skill development | 55-75% for adopters | Low | Medium | 1-3 years |

Mitigation Strategies (AI Adoption 30-70%)

Once AI adoption exceeds prevention thresholds, mitigation focuses on slowing cascade velocity and preserving critical capabilities within subset populations.

| Intervention | Mechanism | Effectiveness | Cost | Implementation Difficulty | Time to Impact |
|---|---|---|---|---|---|
| Expert preservation cadres | Dedicated AI-free practitioners | 60-80% in preserved group | 15-25% overhead | High | 3-5 years |
| Skill recovery programs | Intensive retraining | 40-60% partial recovery | High per person | Very High | 2-5 years |
| Degraded-mode training | Prepares for AI unavailability | 30-50% emergency capability | Moderate | Medium | 2-4 years |
| Knowledge documentation | Captures expertise before loss | 20-40% of tacit knowledge | Moderate (one-time) | Medium | 1-3 years |
| Rotational AI-free periods | Scheduled manual practice | 45-65% skill maintenance | 20-30% productivity cost | Medium-High | 1-2 years |

Recovery after high AI penetration is extremely difficult and may be impossible for some domains. The expertise required to design and implement recovery programs may itself have atrophied.

| Intervention | Mechanism | Effectiveness | Cost | Implementation Difficulty | Time to Impact |
|---|---|---|---|---|---|
| Knowledge archaeology | Reconstruct from documentation | 20-40% recovery | Very High | Extreme | 10-20 years |
| Expert importation | Recruit from less-affected regions | 30-50% for specific domains | High | High | 5-10 years |
| Simulation-based rebuilding | Practice on synthetic tasks | 25-45% skill reconstruction | Very High | Very High | 10-15 years |
| Apprenticeship revival | Intensive human-to-human transfer | 35-55% if experts available | Very High | Extreme | 15-25 years |

Several historical episodes of expertise loss provide partial analogies to AI-induced atrophy, though none match the scope and velocity of potential AI-driven cascades.

The post-Roman engineering collapse saw sophisticated techniques for concrete, aqueduct construction, and architectural design lost across Europe within two centuries. Knowledge preserved in texts could not substitute for apprenticeship traditions that were disrupted by political collapse. Recovery required roughly a millennium, and some techniques (Roman marine concrete) were only recently reverse-engineered.

GPS navigation’s impact on spatial cognition provides a more recent and directly relevant precedent. Studies of London taxi drivers showed measurable hippocampal changes correlated with navigational expertise, and subsequent research documented cognitive decline with GPS dependence. While not safety-critical for most users, this demonstrates that cognitive offloading to technology produces measurable neurological effects on relevant capabilities.

Aviation automation provides the clearest current evidence. Pilot manual flying hours have declined approximately 60% since 1990, while automation-related incidents have increased substantially. Multiple accident investigations (Air France 447, Asiana 214) identified skill degradation as contributing factors. The aviation industry has implemented partial countermeasures (manual flying requirements), but faces ongoing pressure to further automate.

The key difference with AI is scope. Previous automation affected discrete tasks (navigation, aircraft control) while AI assistance pervades all cognitive domains simultaneously. This means atrophy cascades can propagate across the entire economy rather than being contained within specific sectors.

This model incorporates several simplifying assumptions that may not hold in practice. Individual variation in atrophy rates is substantial—some practitioners maintain skills despite extensive AI use, while others degrade rapidly. The parameter estimates carry uncertainty ranges of 40-60% for atrophy rates and 30-50% for generational transmission, compounding to order-of-magnitude uncertainty in long-term projections.

The model assumes AI capabilities remain stable or improve, which may not hold. A technological plateau or capability regression would substantially alter projections. Similarly, the model does not account for potential emergence of new human expertise categories that complement AI rather than competing with it.

Intervention effectiveness estimates are largely theoretical, as no jurisdiction has implemented systematic skill preservation at scale. The historical analogies provide partial validation, but AI-induced atrophy may differ in important ways from previous automation effects.

Key Questions

  • At what proficiency level does expertise atrophy become practically irreversible?
  • Can AI-assisted training methods be designed that build rather than undermine foundational skills?
  • Which cognitive capabilities are most resistant to AI-induced atrophy?
  • How can organizations detect expertise atrophy before system failures reveal it?
  • Will economic incentives naturally select for skill preservation, or is regulatory intervention required?
  • Can documented knowledge substitute for tacit expertise when human practitioners are unavailable?

The model suggests that effective response requires immediate action across multiple time horizons. In the immediate term (2025-2028), organizations and regulators should conduct critical skill audits to identify preservation priorities, establish baseline measurements before further degradation obscures the reference point, and implement practice mandates in safety-critical domains where the intervention window remains open.

In the medium term (2028-2035), focus should shift to ensuring Generation 1 can effectively train Generation 2 while expertise remains sufficient, building preserved expert cadres who maintain AI-independent capability, and developing documentation and simulation resources that capture tacit knowledge.

In the long term (2035-2050), resilient institutional structures that maintain capability diversity will be essential, along with international coordination to prevent race-to-the-bottom dynamics where competitive pressure drives universal atrophy, and investment in recovery capabilities for domains where atrophy has already progressed.

The fundamental challenge is that potential atrophy costs are diffuse and long-term while AI adoption benefits are immediate and concentrated. This creates a collective action problem where individually rational choices could lead to collectively suboptimal outcomes. However, market signals from skill-related failures, professional community responses, and regulatory adaptations may trigger corrective actions before worst-case scenarios materialize. The “unmanaged cascade” scenario (now estimated at 20-25% probability) represents a risk worth monitoring and preparing for, rather than an inevitable outcome.

The model draws on established research in automation effects, cognitive offloading, and skill acquisition/decay:

  • Carr, N. (2014). The Glass Cage: Automation and Us. W.W. Norton. Comprehensive analysis of automation’s cognitive effects.
  • Parasuraman, R., & Manzey, D. (2010). Complacency and bias in human use of automation. Human Factors, 52(3), 381-410.
  • Casner, S. M., & Schooler, J. W. (2014). Thoughts in flight: Automation use and pilots’ task-related and task-unrelated thought. Human Factors, 56(3), 433-442.
  • FAA (2013). Operational Use of Flight Path Management Systems. Federal Aviation Administration.
  • BEA and NTSB accident reports: Air France 447 (BEA, 2012), Asiana 214 (NTSB, 2014), and related automation-factor incidents.
  • Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676-688.
  • Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory. Science, 333(6043), 776-778.
  • Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140-154.
  • Maguire, E. A., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. PNAS, 97(8), 4398-4403.