playbook_lift: LLM-based role extractor closes shorthand bleed (real_004)
real_003 left a known-weak hole: shorthand-style queries
("{count} {role} {city} {state} ...") have no separator between
role and city, so a regex can't reliably extract — leaving the
cross-role gate disabled when both record AND query are shorthand.

This commit adds a roleExtractor with regex-first + LLM fallback:

- Regex first (fast, deterministic) — handles need + client_first +
  looking from real_003b. ~75% of styles, no LLM cost paid.
- LLM fallback when regex returns empty AND model is configured —
  Ollama-shape /api/chat with format=json, schema-tight prompt,
  temperature 0. ~1-3s on local qwen2.5.
- Per-process cache — paraphrase + rejudge passes reuse the same
  query 4× per run; cache prevents 4× LLM cost.
- Off-by-default — opt-in via -llm-role-extract flag (CLI) and
  LLM_ROLE_EXTRACT=1 env var (harness wrapper). real_003b shipping
  config unchanged unless explicitly enabled.

8 new tests in scripts/playbook_lift/main_test.go:
- TestRoleExtractor_RegexFirst: LLM not called when regex matches
- TestRoleExtractor_LLMFallback: shorthand goes to LLM
- TestRoleExtractor_LLMOffLeavesEmpty: opt-in default preserved
- TestRoleExtractor_Cache: 3 calls = 1 LLM hit
- TestRoleExtractor_NilSafe: nil receiver runs regex only
- TestExtractRoleViaLLM_HTTPError + _BadJSON: failure paths
- TestRoleExtractor_ClosesCrossRoleShorthandBleed: synthetic
  witness for the real_003 scenario — both record + query are
  shorthand, regex returns "" for both, LLM produces DIFFERENT
  role tokens for CNC vs Forklift, so matrix gate's cross-role
  rejection (locked separately in
  TestInjectPlaybookMisses_RoleGateRejectsCrossRole) fires
  correctly. This is the load-bearing verification.

Reality test real_004 ran the same 40-query stress as real_003 with
LLM extraction on. Cross-style same-role boosts fired correctly
across all 4 styles for Loaders + Packers + Shipping Clerk clusters
(including shorthand → other-style transfer). No cross-role bleed
observed. The reality test alone can't be a clean "with vs without"
comparison (HNSW build is non-deterministic across runs, and
real_004 stochastics didn't trigger a shorthand recording at all),
which is why the unit-test witness exists.

Production note (in real_004_findings.md): LLM extraction is for
reality-test coverage of arbitrary query shapes. Production should
extract role at INGEST time (when the inbox parser already runs an
LLM) and pass already-resolved role through requests — same shape
as multi_coord_stress's existing Demand{Role: ...} model. The hot
path should never need the harness extractor's per-query LLM cost.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 22:51:27 -05:00


Playbook-Lift Reality Test — Run real_004

Generated: 2026-05-01T03:48:39.932829485Z
Judge: qwen2.5:latest (Ollama, resolved from config [models].local_judge)
Corpora: workers,ethereal_workers
Workers limit: 5000
Queries: tests/reality/real_coord_queries_v2.txt (40 executed)
K per pass: 10
Paraphrase pass: disabled
Re-judge pass: disabled
Evidence: reports/reality-tests/playbook_lift_real_004.json


Headline

| Metric | Value |
|---|---|
| Total queries run | 40 |
| Cold-pass discoveries (judge-best ≠ top-1) | 5 |
| Warm-pass lifts (recorded playbook → top-1) | 4 |
| No change (judge-best already top-1, no playbook needed) | 36 |
| Playbook boosts triggered (warm pass) | 16 |
| Mean Δ top-1 distance (warm − cold) | -0.11025716 |

Verbatim lift rate: 4 of 5 discoveries became top-1 after warm pass.


Per-query results

| # | Query | Cold top-1 | Cold judge-best (rank/rating) | Recorded? | Warm top-1 | Judge-best warm rank | Lift |
|---|---|---|---|---|---|---|---|
| 1 | Need 5 Warehouse Associates in Kansas City MO starting at 09 | w-3288 | 0/4 | | w-3288 | 0 | no |
| 2 | Parallel Machining needs 5 Warehouse Associates in Kansas Ci | w-3288 | 0/4 | | w-3288 | 0 | no |
| 3 | Looking for 5 Warehouse Associates at Parallel Machining in | w-3288 | 0/4 | | w-3288 | 0 | no |
| 4 | 5 Warehouse Associates Kansas City MO 09:00 Parallel Machini | w-3288 | 0/4 | | w-3288 | 0 | no |
| 5 | Need 1 Forklift Operator in Detroit MI starting at 15:00 for | w-2136 | 0/5 | | w-2136 | 0 | no |
| 6 | Beacon Freight needs 1 Forklift Operator in Detroit MI at 15 | e-6723 | 2/2 | | e-6723 | 2 | no |
| 7 | Looking for 1 Forklift Operator at Beacon Freight in Detroit | w-4766 | 0/5 | | w-4766 | 0 | no |
| 8 | 1 Forklift Operator Detroit MI 15:00 Beacon Freight | e-5818 | 0/4 | | e-5818 | 0 | no |
| 9 | Need 4 Loaders in Indianapolis IN starting at 12:00 for Midw | e-9877 | 0/4 | | w-398 | 2 | no |
| 10 | Midway Distribution needs 4 Loaders in Indianapolis IN at 12 | e-4537 | 4/5 | ✓ w-2148 | w-398 | 1 | no |
| 11 | Looking for 4 Loaders at Midway Distribution in Indianapolis | w-962 | 6/4 | ✓ w-398 | w-398 | 0 | YES |
| 12 | 4 Loaders Indianapolis IN 12:00 Midway Distribution | e-9877 | 0/4 | | w-2148 | 2 | no |
| 13 | Need 3 Warehouse Associates in Fort Wayne IN starting at 17: | w-2857 | 1/4 | ✓ w-1063 | w-1063 | 0 | YES |
| 14 | Cornerstone Fabrication needs 3 Warehouse Associates in Fort | w-4703 | 0/2 | | w-1063 | 1 | no |
| 15 | Looking for 3 Warehouse Associates at Cornerstone Fabricatio | w-1784 | 0/3 | | w-1063 | 1 | no |
| 16 | 3 Warehouse Associates Fort Wayne IN 17:30 Cornerstone Fabri | w-2754 | 2/3 | | w-1063 | 3 | no |
| 17 | Need 4 Pickers in Detroit MI starting at 13:30 for Beacon Fr | e-2247 | 1/2 | | e-2247 | 1 | no |
| 18 | Beacon Freight needs 4 Pickers in Detroit MI at 13:30 | e-2247 | 2/2 | | e-2247 | 2 | no |
| 19 | Looking for 4 Pickers at Beacon Freight in Detroit MI for 13 | e-2247 | 1/2 | | e-2247 | 1 | no |
| 20 | 4 Pickers Detroit MI 13:30 Beacon Freight | e-2247 | 0/3 | | e-2247 | 0 | no |
| 21 | Need 2 Packers in Joliet IL starting at 09:30 for Parallel M | e-846 | 2/3 | | e-2120 | 0 | no |
| 22 | Parallel Machining needs 2 Packers in Joliet IL at 09:30 | e-846 | 2/4 | ✓ e-2120 | e-2120 | 0 | YES |
| 23 | Looking for 2 Packers at Parallel Machining in Joliet IL for | e-846 | 1/2 | | e-2120 | 2 | no |
| 24 | 2 Packers Joliet IL 09:30 Parallel Machining | e-846 | 2/3 | | e-2120 | 0 | no |
| 25 | Need 3 Assemblers in Flint MI starting at 08:30 for Heritage | w-4124 | 9/3 | | w-4124 | 9 | no |
| 26 | Heritage Foods needs 3 Assemblers in Flint MI at 08:30 | w-4124 | 3/3 | | w-4124 | 3 | no |
| 27 | Looking for 3 Assemblers at Heritage Foods in Flint MI for 0 | w-4124 | 2/2 | | w-4124 | 2 | no |
| 28 | 3 Assemblers Flint MI 08:30 Heritage Foods | w-4124 | 3/3 | | w-4124 | 3 | no |
| 29 | Need 3 Packers in Flint MI starting at 12:30 for Parallel Ma | e-6019 | 8/2 | | e-6019 | 8 | no |
| 30 | Parallel Machining needs 3 Packers in Flint MI at 12:30 | e-6019 | 9/2 | | e-6019 | 9 | no |
| 31 | Looking for 3 Packers at Parallel Machining in Flint MI for | e-6019 | 0/1 | | e-6019 | 0 | no |
| 32 | 3 Packers Flint MI 12:30 Parallel Machining | e-2006 | 0/3 | | e-2006 | 0 | no |
| 33 | Need 1 Shipping Clerk in Flint MI starting at 17:00 for Pion | w-3988 | 2/4 | ✓ w-1367 | w-1367 | 0 | YES |
| 34 | Pioneer Assembly needs 1 Shipping Clerk in Flint MI at 17:00 | w-3988 | 1/3 | | w-1367 | 2 | no |
| 35 | Looking for 1 Shipping Clerk at Pioneer Assembly in Flint MI | w-3988 | 0/2 | | w-1367 | 1 | no |
| 36 | 1 Shipping Clerk Flint MI 17:00 Pioneer Assembly | e-4849 | 0/2 | | w-1367 | 1 | no |
| 37 | Need 1 CNC Operator in Detroit MI starting at 17:30 for Beac | e-489 | 5/3 | | e-489 | 5 | no |
| 38 | Beacon Freight needs 1 CNC Operator in Detroit MI at 17:30 | e-489 | 0/2 | | e-489 | 0 | no |
| 39 | Looking for 1 CNC Operator at Beacon Freight in Detroit MI f | e-489 | 0/2 | | e-489 | 0 | no |
| 40 | 1 CNC Operator Detroit MI 17:30 Beacon Freight | e-489 | 3/3 | | e-489 | 3 | no |

Honesty caveats

  1. Judge IS the ground-truth proxy. Without human-labeled relevance, the LLM judge's verdict is what defines "best." If the judge rates badly, the lift number is meaningless. To validate the judge itself, sample 5–10 verdicts manually and check agreement.
  2. Score-1.0 boost = distance halved. Playbook math is distance' = distance × (1 - 0.5 × score). Lift requires the judge-best result's pre-boost distance to be ≤ 2× the cold top-1's distance, otherwise even halving doesn't promote it. Tight clusters → little visible lift.
  3. Verbatim vs paraphrase. The verbatim lift rate (above) is the cheap case — same query, recorded playbook, expected boost. The paraphrase pass (when enabled) is the actual learning property: similar-but-different queries hitting a recorded playbook. Compare verbatim and paraphrase lift rates — paraphrase should be lower (semantic-distance gates some playbook hits) but non-zero is the meaningful signal.
  4. Multi-corpus skew. Default corpora=workers,ethereal_workers — if all judge-best results land in one corpus, the matrix layer's purpose isn't being tested. Check per-corpus distribution in the JSON.
  5. Judge resolution. This run used qwen2.5:latest from config [models].local_judge. Bumping the judge for run #N+1 means editing one line in lakehouse.toml.
  6. Paraphrase generation also uses the judge. The same model that rates relevance also rephrases queries. A judge that's bad at rating staffing queries is probably also bad at rephrasing them. Worth sanity-checking a sample of paraphrase_query values in the JSON before trusting the paraphrase lift number.

Next moves

  • If lift rate ≥ 50% of discoveries: matrix layer + playbook is doing real work. Move to paraphrase queries + tag-based boost (currently ignored).
  • If lift rate < 20%: investigate why — judge variance, distance gap too wide, or playbook math too gentle. The score=1.0 / 0.5× formula may need retuning.
  • If discovery rate (cold judge-best ≠ top-1) is itself low: cosine is already close to optimal on this query distribution. Either the corpus is too narrow or the queries are too easy.