
# Automation Tools

This page documents all automation tools available for maintaining and improving the knowledge base.

| Tool | Purpose | Command |
|---|---|---|
| Page Improver | Improve pages to Q5 quality | `node scripts/page-improver.mjs <page-id>` |
| Content Grader | Auto-grade pages via AI | `node scripts/grade-content.mjs` |
| Validators | Check content quality | `npm run validate` |
| Knowledge Base | Scan content, generate summaries | `npm run kb:scan` |
| Data Builder | Regenerate entity data | `npm run build:data` |
| Document Enhancer | Unified content management CLI | `node scripts/document-enhancer.mjs` |

## Page Improver

The recommended way to improve wiki pages to quality 5.

```sh
# List pages that need improvement (sorted by priority)
node scripts/page-improver.mjs --list

# Get improvement prompt for a specific page
node scripts/page-improver.mjs economic-disruption

# Show page info only (no prompt)
node scripts/page-improver.mjs racing-dynamics --info

# Filter by quality and importance
node scripts/page-improver.mjs --list --max-qual 3 --min-imp 50
```
A quality-5 page must include:

| Element | Requirement |
|---|---|
| Quick Assessment Table | 5+ rows, 3 columns (Dimension, Assessment, Evidence) |
| Substantive Tables | 2+ additional tables with real data |
| Mermaid Diagram | 1+ showing key relationships |
| Citations | 10+ real URLs from authoritative sources |
| Quantified Claims | Replace "significant" with "25-40%", etc. |
| Word Count | 800+ words of substantive content |
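Two of the requirements above (citations and word count) lend themselves to a mechanical check. A minimal sketch, not part of the actual toolchain — the function names and the URL regex are my own:

```javascript
// Hypothetical helpers mirroring the citation and word-count thresholds.
function countCitations(markdown) {
  // Count distinct http(s) URLs in the page body.
  const urls = markdown.match(/https?:\/\/[^\s)>\]"]+/g) ?? [];
  return new Set(urls).size;
}

function countWords(markdown) {
  return markdown.split(/\s+/).filter(Boolean).length;
}

function meetsThresholds(markdown) {
  return countCitations(markdown) >= 10 && countWords(markdown) >= 800;
}
```

The real validators (`npm run validate:quality`) may use different heuristics; this only illustrates the thresholds in the table.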
Approximate API cost per improved page:

| Model | Cost per Page |
|---|---|
| Opus 4.5 | $3-5 |
| Sonnet 4.5 | $0.50-1.00 |
- Gold standard: `src/content/docs/knowledge-base/risks/misuse/bioweapons.mdx`
- Good example: `src/content/docs/knowledge-base/risks/structural/racing-dynamics.mdx`

## Content Grader

Uses the Claude Sonnet API to automatically grade pages, assigning importance and quality scores plus AI-generated summaries.

```sh
# Preview what would be graded (no API calls)
node scripts/grade-content.mjs --dry-run

# Grade a specific page
node scripts/grade-content.mjs --page scheming

# Grade pages and apply to frontmatter
node scripts/grade-content.mjs --limit 10 --apply

# Grade a category with parallel processing
node scripts/grade-content.mjs --category responses --parallel 3

# Skip already-graded pages
node scripts/grade-content.mjs --skip-graded --limit 50
```
| Option | Description |
|---|---|
| `--page ID` | Grade a single page |
| `--dry-run` | Preview without API calls |
| `--limit N` | Only process N pages |
| `--parallel N` | Process N pages concurrently (default: 1) |
| `--category X` | Only process pages in a category |
| `--skip-graded` | Skip pages with an existing importance score |
| `--apply` | Write grades to frontmatter (use with caution) |
| `--output FILE` | Write results to a JSON file |

**Importance (0-100):**

- 90-100: Essential for prioritization (core interventions, key risk mechanisms)
- 70-89: High value (concrete responses, major risk categories)
- 50-69: Useful context (supporting analysis, secondary risks)
- 30-49: Reference material (historical, profiles, niche)
- 0-29: Peripheral (internal docs, stubs)

**Quality (0-100):**

- 80-100: Comprehensive (2+ tables, 1+ diagram, 5+ citations, quantified claims)
- 60-79: Good (1+ table, 3+ citations, mostly prose)
- 40-59: Adequate (structure but lacks tables/citations)
- 20-39: Draft (poorly structured, heavy bullets, no evidence)
- 0-19: Stub (minimal content)
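The importance bands above read as a simple threshold lookup. A sketch for illustration — the band labels follow the rubric, but the function itself is hypothetical, not part of the grader:

```javascript
// Map an importance score (0-100) to its rubric band.
function importanceBand(score) {
  if (score >= 90) return 'essential';
  if (score >= 70) return 'high value';
  if (score >= 50) return 'useful context';
  if (score >= 30) return 'reference material';
  return 'peripheral';
}
```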

~$0.02 per page, ~$6 for all 329 pages


## Validators

Run all content quality checks:

```sh
npm run validate     # Run all validators
npm run validate:ci  # CI mode (JSON output)
```
| Command | Description |
|---|---|
| `npm run validate:style` | Style guide compliance (sections, structure) |
| `npm run validate:staleness` | Content freshness (review dates, age) |
| `npm run validate:consistency` | Cross-page consistency (estimates, terminology) |
| `npm run validate:data` | Entity data integrity (references, required fields) |
| `npm run validate:links` | Internal link validation |
| `npm run validate:mdx` | MDX syntax errors |
| `npm run validate:sidebar` | Sidebar configuration (index pages) |
| `npm run validate:types` | UI components handle all entity types |
| `npm run validate:quality` | Content quality metrics |
```sh
# Skip specific checks
node scripts/validate-all.mjs --skip=orphans,staleness

# Stop on first failure
node scripts/validate-all.mjs --fail-fast
```

## Knowledge Base

SQLite-based system for managing content, sources, and AI summaries.

Requires a `.env` file:

```sh
ANTHROPIC_API_KEY=sk-ant-...
```
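A script depending on this key might fail fast when it is missing. A minimal sketch — the actual scripts' error handling (and the `sk-ant-` prefix check) are assumptions, not confirmed behavior:

```javascript
// Fail fast if the Anthropic API key is absent or malformed.
function requireApiKey(env = process.env) {
  const key = env.ANTHROPIC_API_KEY;
  if (!key || !key.startsWith('sk-ant-')) {
    throw new Error('Set ANTHROPIC_API_KEY in .env (format: sk-ant-...)');
  }
  return key;
}
```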
```sh
npm run kb:scan       # Scan MDX files, extract sources, populate database
npm run kb:summarize  # Generate AI summaries
npm run kb:stats      # Show database statistics
```

```sh
# Scan content (run after editing MDX files)
node scripts/scan-content.mjs
node scripts/scan-content.mjs --force    # Rescan all files
node scripts/scan-content.mjs --verbose  # Show per-file progress

# Generate summaries
node scripts/generate-summaries.mjs --batch 50                # Summarize 50 articles
node scripts/generate-summaries.mjs --type sources            # Summarize sources
node scripts/generate-summaries.mjs --model sonnet            # Use Sonnet
node scripts/generate-summaries.mjs --id deceptive-alignment  # Specific article
node scripts/generate-summaries.mjs --dry-run                 # Preview only
```

All cached data lives in `.cache/` (gitignored):

- `.cache/knowledge.db` - SQLite database
- `.cache/sources/` - Fetched source documents
| Task | Model | Cost |
|---|---|---|
| Summarize all 311 articles | Haiku | ~$2-3 |
| Summarize all 793 sources | Haiku | ~$10-15 |

## Data Builder

**Important:** The data build must run before the site build.

```sh
npm run build:data  # Regenerate all data files
npm run dev         # Auto-runs build:data first
npm run build       # Auto-runs build:data first
```

Running `build:data` generates:

- `src/data/database.json` - Main entity database
- `src/data/entities.json` - Entity definitions
- `src/data/backlinks.json` - Cross-references
- `src/data/tagIndex.json` - Tag index
- `src/data/pathRegistry.json` - URL path mappings
- `src/data/pages.json` - Page metadata for scripts
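As an illustration of how a script might consume `src/data/pages.json`, here is a sketch that selects improvement candidates. The `importance` and `quality` field names are assumptions about the schema, not confirmed:

```javascript
// Pick high-importance, low-quality pages (e.g. to feed into the enhancer).
// Field names are assumed; the real schema may differ.
function improvementCandidates(pages, { minImp = 70, maxQual = 2 } = {}) {
  return pages
    .filter((p) => p.importance >= minImp && p.quality <= maxQual)
    .sort((a, b) => b.importance - a.importance);
}
```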
```sh
npm run sync:descriptions  # Sync model descriptions from files
npm run extract            # Extract data from pages
npm run generate-yaml      # Generate YAML from data
npm run cleanup-data       # Clean up data files
```

## Document Enhancer

Unified tool for managing and improving content quality.

```sh
# List pages by priority (gap = importance - quality × 20)
node scripts/document-enhancer.mjs list --sort gap --limit 20

# Show details for a specific page
node scripts/document-enhancer.mjs show scheming

# Grade pages using Claude API
node scripts/document-enhancer.mjs grade --limit 5 --dry-run
node scripts/document-enhancer.mjs grade --apply

# Enhance low-quality pages
node scripts/document-enhancer.mjs enhance --min-imp 70 --max-qual 2 --dry-run
node scripts/document-enhancer.mjs enhance --page language-models
```
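The gap formula (`importance - quality × 20`) ranks pages where importance most outstrips quality; the ×20 factor suggests quality is on a 0-5 scale, scaled to match importance's 0-100 range. That inference, and the field names, are assumptions — a sketch of the ordering:

```javascript
// Priority gap: high-importance, low-quality pages sort first.
// Assumes quality on 0-5 (×20 puts it on importance's 0-100 scale).
const gap = (page) => page.importance - page.quality * 20;

function byGap(pages) {
  return [...pages].sort((a, b) => gap(b) - gap(a));
}
```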
| Option | Description |
|---|---|
| `--dry-run` | Preview without API calls |
| `--limit N` | Process only N pages |
| `--apply` | Apply changes directly to files |
| `--model X` | Use a specific Claude model |
| `--min-imp N` | Minimum importance (enhance) |
| `--max-qual N` | Maximum quality (enhance) |
| `--page ID` | Target a specific page |

```sh
# Find URLs that can be converted to <R> components
node scripts/map-urls-to-resources.mjs expertise-atrophy  # Specific file
node scripts/map-urls-to-resources.mjs                    # All files
node scripts/map-urls-to-resources.mjs --stats            # Statistics only

# Auto-convert markdown links to R components
node scripts/convert-links-to-r.mjs --dry-run  # Preview
node scripts/convert-links-to-r.mjs --apply    # Apply changes
```
```sh
node scripts/export-resources.mjs        # Export resource data
node scripts/fix-resource-summaries.mjs  # Fix summary issues
```

```sh
# Generate a model page from YAML input
node scripts/generate-content.mjs --type model --file input.yaml

# Generate a risk page
node scripts/generate-content.mjs --type risk --file input.yaml

# Generate a response page
node scripts/generate-content.mjs --type response --file input.yaml
```
```sh
node scripts/batch-summaries.mjs  # Generate summaries for multiple pages
```

```sh
npm run test             # Run all tests
npm run test:lib         # Test library functions
npm run test:validators  # Test validator functions
```

```sh
npm run lint          # Check for linting issues
npm run lint:fix      # Fix linting issues
npm run format        # Format all files
npm run format:check  # Check formatting without changing files
```

Convention: All temporary and intermediate files go in `.claude/temp/` (gitignored).

Scripts that generate intermediate output (like grading results) write here by default. This keeps the project root clean and prevents accidental commits.


To improve a page:

1. Find candidates:

   ```sh
   node scripts/page-improver.mjs --list --max-qual 3
   ```

2. Get the improvement prompt:

   ```sh
   node scripts/page-improver.mjs economic-disruption
   ```

3. Run the generated prompt in Claude Code.

4. Validate the result:

   ```sh
   npm run validate:mdx
   npm run validate:style
   ```
To grade pages in batch:

1. Preview:

   ```sh
   node scripts/grade-content.mjs --skip-graded --dry-run
   ```

2. Grade and apply:

   ```sh
   node scripts/grade-content.mjs --skip-graded --apply --parallel 3
   ```

3. Review the results:

   ```sh
   cat .claude/temp/grades-output.json
   ```
```sh
npm run validate
```

```sh
npm run build:data
npm run validate:data
```