golangLAKEHOUSE/reports/reality-tests/playbook_lift_real_005.md
root cca32344f3 reality_test real_005: negation probe — substrate gap is correctly out-of-scope
Ran 5 explicit-negation queries ("Need Forklift Operators in Aurora IL,
NOT in Detroit", "excluding Cornerstone Fabrication roster", etc.)
through the standard playbook_lift harness. Goal: characterize whether
the substrate has any negation handling or silently treats "NOT X"
as "X".

Headline: the substrate has zero negation handling. Cosine over dense
embeddings scores "NOT in Detroit" as "in Detroit" plus noise — there
is no logical-quantifier representation in the embedding space. This
is a structural property of dense embeddings, not a substrate bug.
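
A toy illustration (hand-made vectors, not real embeddings) of why the
negated phrase scores as a near-duplicate of the original under cosine:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Stand-ins for embed("in Detroit") and embed("NOT in Detroit"):
	// the "NOT" token contributes a small perturbation, not a sign
	// flip, so the two phrases stay near-duplicates in vector space.
	inDetroit := []float64{0.81, 0.12, 0.55, 0.09}
	notInDetroit := []float64{0.79, 0.15, 0.54, 0.11}
	fmt.Printf("cos = %.3f\n", cosine(inDetroit, notInDetroit)) // ≈ 0.999
}
```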

Per-query observations:
- Q1 (Aurora IL, NOT Detroit): all top-10 rated 1-2/5 by judge
- Q2 (NOT Beacon Freight): top-1 rated 4/5 — accidentally OK
  because role+city signal pulled non-Beacon worker naturally
- Q3 (excluding Cornerstone): unanimous 1/5 across top-10
- Q4 (NOT Detroit-area): all top-10 rated 1-2/5
- Q5 (exclude Heritage Foods): top-1 rated 4/5 — accidentally OK

The judge IS the safety net: when retrieval can't honor the
constraint, the judge refuses to approve any result. That's the
honesty signal — the run-level `discovery=0` aggregate captures it.

No code change. The architectural answer for production is:
- UI surfaces an "exclude" affordance that populates ExcludeIDs
  (already supported, added in the multi-coord stress 200-worker
  swap) — see the sketch after this list
- Coordinators don't type natural-language negation — they click
- Substrate's role: surface the honesty signal (judge ratings) and
  don't pretend to honor unparseable constraints
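
The ExcludeIDs path is cheap because it's a hard post-filter, not an
embedding trick. A minimal sketch, assuming hypothetical hit shapes —
the substrate's real types may differ:

```go
package substrate

// Hit is an illustrative result shape, not the substrate's actual API.
type Hit struct {
	ID       string
	Distance float64
}

// ApplyExcludes drops excluded workers from a result set before
// ranking is finalized, so the constraint is honored exactly rather
// than approximated in embedding space.
func ApplyExcludes(hits []Hit, excludeIDs []string) []Hit {
	skip := make(map[string]struct{}, len(excludeIDs))
	for _, id := range excludeIDs {
		skip[id] = struct{}{}
	}
	out := make([]Hit, 0, len(hits))
	for _, h := range hits {
		if _, drop := skip[h.ID]; !drop {
			out = append(out, h)
		}
	}
	return out
}
```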

Adding NL-negation handling at the substrate level would be product
debt — it would let coordinators type sloppier queries that
silently fail when the LLM extractor misses a phrasing. Don't ship
until production traffic demonstrates demand for it.

Findings: reports/reality-tests/real_005_findings.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 23:06:06 -05:00


# Playbook-Lift Reality Test — Run real_005

- Generated: 2026-05-01T04:04:14.242729367Z
- Judge: qwen2.5:latest (Ollama, resolved from config `[models].local_judge`)
- Corpora: workers,ethereal_workers
- Workers limit: 5000
- Queries: tests/reality/negation_queries.txt (5 executed)
- K per pass: 10
- Paraphrase pass: disabled
- Re-judge pass: disabled
- Evidence: reports/reality-tests/playbook_lift_real_005.json
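
The judge resolution from `[models].local_judge` (see also caveat 5
below) is a one-line config read. A sketch assuming a
BurntSushi/toml-style decode and a hypothetical Config struct:

```go
package lakehouse

import "github.com/BurntSushi/toml"

// Config mirrors the [models] table in lakehouse.toml — field names
// here are assumptions, not the harness's actual struct.
type Config struct {
	Models struct {
		LocalJudge string `toml:"local_judge"` // e.g. "qwen2.5:latest"
	} `toml:"models"`
}

// LoadJudgeModel resolves the judge model name from the config file,
// so bumping the judge for run #N+1 is a one-line toml edit.
func LoadJudgeModel(path string) (string, error) {
	var cfg Config
	if _, err := toml.DecodeFile(path, &cfg); err != nil {
		return "", err
	}
	return cfg.Models.LocalJudge, nil
}
```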


## Headline

| Metric | Value |
| --- | --- |
| Total queries run | 5 |
| Cold-pass discoveries (judge-best ≠ top-1) | 0 |
| Warm-pass lifts (recorded playbook → top-1) | 0 |
| No change (judge-best already top-1, no playbook needed) | 5 |
| Playbook boosts triggered (warm pass) | 0 |
| Mean Δ top-1 distance (warm − cold) | 0 |

Verbatim lift rate: 0 of 0 discoveries became top-1 after warm pass.
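
For reference, the headline counters reduce to a simple fold over
per-query records. A sketch with assumed field names, not the
harness's actual structs:

```go
package harness

// QueryResult uses assumed field names for illustration.
type QueryResult struct {
	ColdTop1ID, WarmTop1ID, JudgeBestID string
	ColdTop1Dist, WarmTop1Dist          float64
	PlaybookRecorded                    bool
}

// Headline aggregates the run-level metrics shown in the table above.
type Headline struct {
	Discoveries int     // cold judge-best ≠ cold top-1
	Lifts       int     // recorded playbook promoted judge-best to warm top-1
	NoChange    int     // judge-best already top-1, no playbook needed
	MeanDelta   float64 // mean(warm top-1 distance − cold top-1 distance)
}

func Summarize(results []QueryResult) Headline {
	var h Headline
	var deltaSum float64
	for _, r := range results {
		if r.JudgeBestID == r.ColdTop1ID {
			h.NoChange++
		} else {
			h.Discoveries++
			if r.PlaybookRecorded && r.WarmTop1ID == r.JudgeBestID {
				h.Lifts++
			}
		}
		deltaSum += r.WarmTop1Dist - r.ColdTop1Dist
	}
	if len(results) > 0 {
		h.MeanDelta = deltaSum / float64(len(results))
	}
	return h
}
```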


## Per-query results

| # | Query | Cold top-1 | Cold judge-best (rank/rating) | Recorded? | Warm top-1 | Judge-best warm rank | Lift |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Need 5 Forklift Operators in Aurora IL, NOT in Detroit (Detr… | e-1723 | 4/2 | no | e-1723 | 4 | no |
| 2 | Need 3 Warehouse Associates, but NOT anyone from Beacon Frei… | w-2937 | 0/4 | no | w-2937 | 0 | no |
| 3 | Looking for Pickers in Indianapolis, excluding the Cornersto… | e-5033 | 0/1 | no | e-5033 | 0 | no |
| 4 | 1 CNC Operator needed in Flint MI - we cannot use any Detroi… | w-1360 | 0/2 | no | w-1360 | 0 | no |
| 5 | Need 2 Loaders in Joliet IL but exclude all currently-placed… | w-2998 | 0/4 | no | w-2998 | 0 | no |

## Honesty caveats

1. Judge IS the ground truth proxy. Without human-labeled relevance, the LLM judge's verdict is what defines "best." If the judge rates badly, the lift number is meaningless. To validate the judge itself, sample 5–10 verdicts manually and check agreement.
2. Score-1.0 boost = distance halved. Playbook math is distance' = distance × (1 − 0.5 × score). Lift requires the judge-best result's pre-boost distance to be ≤ 2× the cold top-1's distance; otherwise even halving doesn't promote it. Tight clusters → little visible lift. (Sketched in code after this list.)
3. Verbatim vs paraphrase. The verbatim lift rate (above) is the cheap case — same query, recorded playbook, expected boost. The paraphrase pass (when enabled) is the actual learning property: similar-but-different queries hitting a recorded playbook. Compare verbatim and paraphrase lift rates — paraphrase should be lower (semantic-distance gates some playbook hits), but non-zero is the meaningful signal.
4. Multi-corpus skew. Default corpora=workers,ethereal_workers — if all judge-best results land in one corpus, the matrix layer's purpose isn't being tested. Check the per-corpus distribution in the JSON.
5. Judge resolution. This run used qwen2.5:latest from config `[models].local_judge`. Bumping the judge for run #N+1 means editing one line in lakehouse.toml.
6. Paraphrase generation also uses the judge. The same model that rates relevance also rephrases queries. A judge that's bad at rating staffing queries is probably also bad at rephrasing them. Worth sanity-checking a sample of paraphrase_query values in the JSON before trusting the paraphrase lift number.
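
Caveat 2's gate, spelled out in code — a sketch that simply restates
the distance' = distance × (1 − 0.5 × score) formula quoted above:

```go
package playbook

// Boost applies the playbook formula from caveat 2:
// distance' = distance × (1 − 0.5 × score), so score=1.0 halves the distance.
func Boost(distance, score float64) float64 {
	return distance * (1 - 0.5*score)
}

// Promoted reports whether the boosted judge-best result overtakes the
// cold top-1. At score=1.0 this reduces to judgeBestDist < 2 × coldTop1Dist —
// the "≤ 2×" gate described in caveat 2.
func Promoted(judgeBestDist, coldTop1Dist, score float64) bool {
	return Boost(judgeBestDist, score) < coldTop1Dist
}
```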

## Next moves

- If lift rate ≥ 50% of discoveries: matrix layer + playbook is doing real work. Move to paraphrase queries + tag-based boost (currently ignored).
- If lift rate < 20%: investigate why — judge variance, distance gap too wide, or playbook math too gentle. The score=1.0 / 0.5× formula may need retuning.
- If discovery rate (cold judge-best ≠ top-1) is itself low: cosine is already close to optimal on this query distribution. Either the corpus is too narrow or the queries are too easy.