2 Commits

74870f7c0d Add Ollama backend + Qwen3 local inference support
- Extractor now supports two backends: ollama (local) and anthropic (cloud)
- Default is ollama with qwen3:14b (fits in 16 GB VRAM)
- Set num_ctx to 32768 for full-script processing
- Added --backend and --ollama-url CLI flags (see the sketch after this entry)
- Added The Last Backup test script
- Tested: 12/12 scenes valid on dialogue_heavy, 12/13 on the_last_backup

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 16:28:53 -07:00
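
A minimal sketch of what the two-backend extractor call could look like, based only on what the commit message states: Ollama's standard /api/generate endpoint, the qwen3:14b default, num_ctx 32768, and the --backend / --ollama-url flags. The function names, prompt text, and CLI wiring below are illustrative, not the repository's actual code, and the Anthropic path is stubbed out.

```python
"""Hypothetical sketch of the ollama/anthropic backend split described above."""
import argparse
import json

import requests  # assumed HTTP client for the local call

DEFAULT_OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address
DEFAULT_MODEL = "qwen3:14b"                    # fits 16 GB VRAM per the commit


def extract_scene_ollama(scene_text: str, url: str = DEFAULT_OLLAMA_URL) -> dict:
    """Send one scene to a local Ollama model and parse its JSON reply."""
    resp = requests.post(
        f"{url}/api/generate",
        json={
            "model": DEFAULT_MODEL,
            "prompt": f"Extract structured scene data as JSON:\n\n{scene_text}",
            "format": "json",               # constrain the model output to JSON
            "stream": False,                # return a single complete response
            "options": {"num_ctx": 32768},  # full-script context window
        },
        timeout=600,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--backend", choices=["ollama", "anthropic"], default="ollama")
    parser.add_argument("--ollama-url", default=DEFAULT_OLLAMA_URL)
    parser.add_argument("script_path")
    args = parser.parse_args()

    scene_text = open(args.script_path, encoding="utf-8").read()
    if args.backend == "ollama":
        print(extract_scene_ollama(scene_text, args.ollama_url))
    else:
        raise NotImplementedError("anthropic (cloud) backend omitted from this sketch")


if __name__ == "__main__":
    main()
```

Defaulting to the local backend keeps extraction offline and free per run, while num_ctx of 32768 lets a whole script fit in one prompt instead of being chunked.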
87d0af0748 Phase 1 implementation: script ingestion + AI extraction pipeline
Complete working pipeline from Fountain script to validated scene JSON:
- Schemas (Pydantic): all 7 layers defined upfront
- Fountain parser + normalizer (Layer 1)
- AI scene extractor with prompt contracts (Layer 2)
- Schema validator + scene-specific semantic validator
- Structured JSON logging per layer/scene execution
- Versioned output writer (never overwrites)
- Retry engine with 4-level failure escalation (see the sketch after this entry)
- Stop condition evaluator (per-unit + global halts)
- Diff/drift detector for re-run comparison
- CLI entry point with --dry-run, --scene, --test, --force
- 3 test scripts (dialogue-heavy, action-heavy, nonstandard)
- Expected output files for regression testing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 15:49:43 -07:00
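
An illustrative sketch of how the Pydantic scene contract, the scene-specific semantic validator, and the 4-level retry escalation could fit together. Only the use of Pydantic, per-scene validation, and a four-step escalation ladder come from the commit message; every field name, level label, and check below is a placeholder, not the repository's actual schema.

```python
"""Hypothetical sketch of the Layer 2 scene contract plus retry escalation."""
from pydantic import BaseModel, ValidationError, field_validator


class Scene(BaseModel):
    """One extracted scene (placeholder fields, not the real schema)."""
    scene_number: int
    slugline: str            # e.g. "INT. SERVER ROOM - NIGHT"
    characters: list[str]
    summary: str

    @field_validator("slugline")
    @classmethod
    def slugline_has_location(cls, v: str) -> str:
        # Scene-specific semantic check: a slugline must start with INT./EXT.
        if not v.upper().startswith(("INT.", "EXT.", "INT./EXT.")):
            raise ValueError(f"not a valid slugline: {v!r}")
        return v


# Placeholder 4-level escalation: each failed attempt gets a stronger remedy.
ESCALATION = [
    "retry_as_is",          # level 1: transient failure, resend the same prompt
    "retry_with_errors",    # level 2: feed validation errors back into the prompt
    "retry_simplified",     # level 3: drop optional fields, ask for minimal JSON
    "halt_and_flag",        # level 4: stop this unit, record it for manual review
]


def extract_with_retries(scene_text: str, call_model) -> Scene | None:
    """Run the model, validate, and escalate one level per failed attempt."""
    for level, strategy in enumerate(ESCALATION, start=1):
        if strategy == "halt_and_flag":
            return None  # per-unit halt; the stop condition evaluator takes over
        raw = call_model(scene_text, strategy=strategy)
        try:
            return Scene.model_validate(raw)
        except ValidationError as err:
            print(f"level {level} ({strategy}) failed: {err.error_count()} errors")
    return None
```

Escalating the remedy rather than blindly resending keeps retries cheap for transient failures while still surfacing scenes that never validate, which is also where a per-unit or global stop condition would kick in.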