root eb4308d8fd real_006 diagnosis: Q43 leak is cross-city, not cross-role
Traced w-279's path through the substrate. The leak source is Q49
(Packers in Indianapolis IN for Midway Distribution), NOT Q18 as
the initial reading suspected.

Q49 recorded w-279 with role=Packers, client=Midway Distribution,
city=Indianapolis. Q43 (Packer in Chicago IL for Midway Distribution)
ran later. roleEqual("Packer","Packers") → both normalize to
"packer" → role gate passes (correctly, by design — they ARE the
same role under plural-strip). Cosine distance between Q49's
recorded query and Q43's query is small enough to fit inside the
0.20 inject threshold because role + client + count + time-token
dominate the embedding (only the city and singular/plural noun
differ). Inject fires, w-279 surfaces at Q43's warm top-1 in
Chicago, judge correctly rates 2/5 — wrong city.
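
For orientation, a minimal sketch of the two checks the leak slips through. roleEqual's plural-strip and the 0.20 threshold are taken from the diagnosis above; everything else (shouldInject, its signature, the package name) is a stand-in, not the real substrate code.

package sketch

import "strings"

// Same role under plural-strip: "Packer" and "Packers" both normalize
// to "packer", so the role gate passes by design.
func roleEqual(a, b string) bool {
    norm := func(s string) string {
        return strings.TrimSuffix(strings.ToLower(s), "s")
    }
    return norm(a) == norm(b)
}

// Inject fires when the role gate passes and the recorded query sits
// inside the 0.20 cosine-distance threshold. Nothing here looks at the
// city, which is how Q49's Indianapolis record surfaces at Q43 in Chicago.
func shouldInject(recordedRole, queryRole string, cosineDist float64) bool {
    return roleEqual(recordedRole, queryRole) && cosineDist < 0.20
}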

The role gate IS working. What's missing is a CITY gate. real_002's
fix targeted cross-role bleed (Forklift → CNC). real_006 surfaced
cross-city bleed within same role + same client — a hole prior
tests structurally couldn't reach because they all sourced from
rows 0-9 where no such pair existed.

Concrete fix surface documented (1 new field, 2 gate checks, 1
regex, ~5 tests). Half a session of work, same shape as real_002.
Not implementing tonight — diagnosis only.
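
For illustration only, since nothing is being implemented tonight: one possible shape for that fix surface, with the new city field and a city check sitting next to the role gate. Every name below is hypothetical, and the regex that would pull the city token out of the query text is left out.

package sketch

import "strings"

// Hypothetical record shape: City would be the "1 new field".
type record struct {
    Role string
    City string
}

// cityEqual would be one of the "2 gate checks" alongside roleEqual:
// same normalize-and-compare shape, but no plural handling, so
// Indianapolis can never stand in for Chicago.
func cityEqual(a, b string) bool {
    return strings.EqualFold(strings.TrimSpace(a), strings.TrimSpace(b))
}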

The 18 unit-level role-gate tests still pass, confirming the gate
is doing what it was specified to do. The bug is a missing
specification, not a broken implementation.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 05:13:19 -05:00
..

reports/reality-tests — does the 5-loop substrate actually work?

Reality tests measure product outcomes, not substrate health. The 21 smokes prove the system runs; the proof harness proves that the claims the system makes actually hold; reality tests answer: does the small-model pipeline + matrix indexer + playbook give measurably better results than raw cosine?

This is the gate from project_small_model_pipeline_vision.md: "the playbook + matrix indexer must give the results we're looking for." Single load-bearing criterion. Throughput, scaling, code elegance are secondary.


What lives here

Each reality test is a numbered run that produces:

  • <test>_<NNN>.json — raw structured evidence (per-query data, summary metrics)
  • <test>_<NNN>.md — human-readable report with headline metrics, per-query table, honesty caveats, next moves

Runs are append-only. Earlier runs stay in-tree as historical baselines.


Test catalog

playbook_lift_<NNN> — does the playbook actually lift the right answer?

Driver: scripts/playbook_lift.sh → bin/playbook_lift
Queries: tests/reality/playbook_lift_queries.txt
Pipeline: cold pass → LLM judge → playbook record → warm pass → measure ranking shift.

The headline question: when the LLM judge finds a better answer than cosine top-1, can the playbook boost it to top-1 on the next run? If yes, the learning loop closes; if no, the matrix layer + playbook is infrastructure for a thesis that doesn't pay rent.
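
A rough sketch of the per-query bookkeeping that question implies. This is not the actual bin/playbook_lift code; the struct and field names are stand-ins.

package sketch

// One query's worth of evidence from a run.
type queryResult struct {
    coldTop1  string // raw cosine top-1 on the cold pass
    judgeBest string // the LLM judge's pick from the cold top-k
    warmTop1  string // top-1 on the warm pass, after the playbook record
}

// discovered: the judge found a better answer than cosine top-1.
func (r queryResult) discovered() bool { return r.judgeBest != r.coldTop1 }

// lifted: the playbook boosted the judge's pick to top-1 on the next run.
func (r queryResult) lifted() bool { return r.discovered() && r.warmTop1 == r.judgeBest }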

See the run reports for honesty caveats — chiefly that the LLM judge IS the ground-truth proxy.


Running a reality test

# Defaults: judge resolved from lakehouse.toml [models].local_judge,
# workers limit 5000, run id 001
./scripts/playbook_lift.sh

# Re-run with a different judge to check inter-judge agreement
# (env JUDGE_MODEL overrides the config tier)
JUDGE_MODEL=qwen3:latest RUN_ID=002 ./scripts/playbook_lift.sh

# Smaller scale for fast iteration
WORKERS_LIMIT=1000 K=5 RUN_ID=dev ./scripts/playbook_lift.sh

Judge resolution priority (Phase 3, 2026-04-29):

  1. -judge flag on the Go driver (explicit override)
  2. JUDGE_MODEL env var (operator override)
  3. lakehouse.toml [models].local_judge (default)
  4. Hardcoded qwen3.5:latest (last-resort fallback if config missing)

This means model bumps land in lakehouse.toml, not in this script or the Go driver. Bumping local_judge to a stronger local model (e.g. when qwen4 ships) takes one line.
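
The same priority order as a sketch, not the driver's actual code; configJudge stands in for whatever gets read out of lakehouse.toml [models].local_judge.

package sketch

import "os"

// resolveJudge mirrors the four-step priority list above.
func resolveJudge(flagJudge, configJudge string) string {
    if flagJudge != "" { // 1. -judge flag (explicit override)
        return flagJudge
    }
    if env := os.Getenv("JUDGE_MODEL"); env != "" { // 2. operator override
        return env
    }
    if configJudge != "" { // 3. lakehouse.toml [models].local_judge
        return configJudge
    }
    return "qwen3.5:latest" // 4. last-resort fallback if config missing
}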

Requires: Ollama on :11434 with nomic-embed-text + the resolved judge model loaded. Skips cleanly (exit 0) if Ollama is absent.


Interpreting results

Three thresholds matter on the playbook_lift tests:

Lift rate (lifts / discoveries)   Verdict
≥ 50%                             Loop closes — playbook is doing real work, move to paraphrase queries
20-50%                            Lift exists but inconsistent — investigate boost math (score × 0.5) or judge variance
< 20%                             Loop is not pulling its weight — diagnose before adding more components

A separate concern: discovery rate (cold judge-best ≠ cold top-1). If discovery is itself rare (< 30% of queries), cosine is already close to optimal on this query distribution and the matrix+playbook layer has little headroom. That's not necessarily a bug — but it means the value needed to clear the gate has to come from somewhere else (multi-corpus retrieval, domain-specific tags, drift signal).
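
How the two ratios relate, as a sketch; this assumes each query has already been classified as discovered/lifted in the sense used by the test catalog above, and all names are illustrative.

package sketch

// Per-query classification: discovered means the cold judge-best differed
// from cold top-1; lifted means the warm pass then promoted the judge's pick.
type queryOutcome struct {
    discovered bool
    lifted     bool
}

// rates returns lift rate (lifts / discoveries) and discovery rate
// (discoveries / queries), the two numbers the thresholds above apply to.
func rates(outcomes []queryOutcome) (liftRate, discoveryRate float64) {
    var discoveries, lifts int
    for _, o := range outcomes {
        if o.discovered {
            discoveries++
            if o.lifted {
                lifts++
            }
        }
    }
    if discoveries > 0 {
        liftRate = float64(lifts) / float64(discoveries)
    }
    if n := len(outcomes); n > 0 {
        discoveryRate = float64(discoveries) / float64(n)
    }
    return liftRate, discoveryRate
}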


What this is not

  • Not a benchmark. No comparison against external systems; only internal cold-vs-warm.
  • Not a regression gate. Each run is a snapshot. Scores will drift with corpus changes, judge updates, and playbook math tuning. Don't wire just verify to demand a minimum lift.
  • Not human-validated. The LLM judge is the ground-truth proxy. Sample 5-10 verdicts manually per run to sanity-check that the judge isn't pathological.