5 Commits

3b5ef3596a Phase 3: Department Interpretation — Directing + Cinematography per scene
Layer 4 implementation:
- Per-scene directing: scene_objective, audience_takeaway, pacing, dramatic
  beats, subtext, continuity considerations
- Per-scene cinematography: camera style, lens, movement, framing, DoF,
  color palette, visual emphasis, continuity considerations
- All interpretation grounded in L2 scene data + L3 Production Bible
- Bible entries passed as context per scene (matching characters + location)
- Validator: empty fields, broken refs, bible ref checks, uncertain values
- Per-scene versioned output + combined department_interpretation_v1.json
- CLI: --phase 3, --scene N for single-scene re-run

Tested on the_last_backup: 12/12 scenes valid, 0 failures, 5 warnings
(false positives from prop names in all-caps like BLACK PORTABLE SSD)
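The "bible entries passed as context per scene" step could look roughly like this — a minimal sketch, assuming a scene carries a `characters` list and a canonical `location` name, and the Production Bible keys entries by canonical name (field names here are assumptions, not the repo's actual schema):

```python
def bible_context_for_scene(scene: dict, production_bible: dict) -> dict:
    """Select only the bible entries relevant to one scene.

    Hypothetical shapes: scene = {"characters": [...], "location": "..."},
    bible = {"characters": {name: entry}, "locations": {name: entry}}.
    """
    # Keep only characters that actually appear in this scene.
    characters = {
        name: entry
        for name, entry in production_bible.get("characters", {}).items()
        if name in scene.get("characters", [])
    }
    # Keep only the scene's own location entry.
    locations = {
        name: entry
        for name, entry in production_bible.get("locations", {}).items()
        if name == scene.get("location")
    }
    return {"characters": characters, "locations": locations}
```

Passing a filtered context like this keeps per-scene prompts small and reduces the chance the model leans on entries for characters who are not in the scene.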

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 17:31:23 -07:00
17e410751c Phase 2: Production Bible — Character + Location bibles from scene data
Layer 3 implementation:
- Character Bible: canonical names, aliases, arcs, relationships, wardrobe
  states, emotional arcs, reference prompts — all grounded in scene evidence
- Location Bible: canonical names, variants, descriptions, types, features,
  mood associations, reference prompts — all grounded in scene evidence
- Combined Production Bible output for downstream layers
- Bible validator: duplicate detection, scene reference checks, hallucination
  detection, UNKNOWN field flagging
- Prompt contracts: L3_character_bible_v1, L3_location_bible_v1
- Named versioned output: character_bible_v1.json, location_bible_v1.json,
  production_bible_v1.json
- CLI: --phase 2 runs bible only, --phase omitted runs both phases
- OutputWriter: added write_named/write_named_raw for non-scene outputs

Tested on the_last_backup: 3 characters, 5 locations, 0 hallucinations,
3 warnings (UNKNOWN physical_description — correct behavior)
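A validator pass of the kind described above (UNKNOWN flagging plus scene-reference checks) could be sketched like this — illustrative only; `scene_refs` and the UNKNOWN sentinel string are assumptions about the entry shape, not the repo's actual fields:

```python
def validate_bible_entry(entry: dict, known_scene_ids: set) -> list[str]:
    """Return warnings for one bible entry.

    Flags fields left as the literal string "UNKNOWN" and scene
    references that don't exist in the extracted scene data.
    """
    warnings = []
    for field, value in entry.items():
        if value == "UNKNOWN":
            warnings.append(f"UNKNOWN field: {field}")
    for ref in entry.get("scene_refs", []):
        if ref not in known_scene_ids:
            warnings.append(f"broken scene ref: {ref}")
    return warnings
```

Note that an UNKNOWN flag is a warning, not a failure: as the test run above shows, leaving `physical_description` as UNKNOWN when the script never describes a character is the correct, non-hallucinating behavior.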

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 16:51:55 -07:00
74870f7c0d Add Ollama backend + Qwen3 local inference support
- Extractor now supports two backends: ollama (local) and anthropic (cloud)
- Default is ollama with qwen3:14b (fits 16GB VRAM)
- Set num_ctx to 32768 for full-script processing
- Added --backend and --ollama-url CLI flags
- Added The Last Backup test script
- Tested: 12/12 scenes valid on dialogue_heavy, 12/13 on the_last_backup
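The Ollama side of the backend can be reached with nothing but the standard library; a minimal sketch of the `/api/generate` call with the enlarged context window (function names and the error handling are mine, not the extractor's actual code):

```python
import json
import urllib.request


def build_ollama_request(prompt: str, model: str = "qwen3:14b",
                         num_ctx: int = 32768) -> dict:
    # num_ctx must be raised explicitly: Ollama's default context
    # window is far smaller than a full script needs.
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    }


def generate(prompt: str, url: str = "http://localhost:11434") -> str:
    """Send one non-streaming generate request and return the text."""
    payload = json.dumps(build_ollama_request(prompt)).encode()
    req = urllib.request.Request(
        f"{url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream: false`, Ollama returns a single JSON object whose `response` field holds the full completion, which keeps the client trivial at the cost of no incremental output.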

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 16:28:53 -07:00
87d0af0748 Phase 1 implementation: script ingestion + AI extraction pipeline
Complete working pipeline from Fountain script to validated scene JSON:
- Schemas (Pydantic): all 7 layers defined upfront
- Fountain parser + normalizer (Layer 1)
- AI scene extractor with prompt contracts (Layer 2)
- Schema validator + scene-specific semantic validator
- Structured JSON logging per layer/scene execution
- Versioned output writer (never overwrites)
- Retry engine with 4-level failure escalation
- Stop condition evaluator (per-unit + global halts)
- Diff/drift detector for re-run comparison
- CLI entry point with --dry-run, --scene, --test, --force
- 3 test scripts (dialogue-heavy, action-heavy, nonstandard)
- Expected output files for regression testing
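The never-overwrite behavior of the versioned output writer reduces to finding the next free `_vN` path; a minimal sketch (the function name and naming convention are assumptions for illustration, not the OutputWriter's actual API):

```python
from pathlib import Path


def next_versioned_path(out_dir: Path, stem: str, ext: str = ".json") -> Path:
    """Return the first path stem_vN.ext that does not exist yet.

    Existing versions are never touched, so every run's output
    survives for later diff/drift comparison.
    """
    n = 1
    while (out_dir / f"{stem}_v{n}{ext}").exists():
        n += 1
    return out_dir / f"{stem}_v{n}{ext}"
```

Keeping every version on disk is what makes the diff/drift detector possible: a re-run writes `_v2`, and the detector compares it field-by-field against `_v1` instead of silently replacing it.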

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 15:49:43 -07:00
2218e47c1f Initial commit 2026-04-06 22:34:26 +00:00