Audit pipeline PR #9: determinism + fact extraction + verifier gate + KB stats #9

Merged
profit merged 34 commits from test/enrich-prd-pipeline into main 2026-04-23 05:29:39 +00:00

34 Commits

Author SHA1 Message Date
profit
ab3b857c7f Merge remote-tracking branch 'origin/main' into test/enrich-prd-pipeline
# Conflicts:
#	auditor/audit.ts
#	auditor/checks/inference.ts
#	auditor/checks/kb_query.ts
#	auditor/claim_parser.ts
#	tests/real-world/scrum_master_pipeline.ts
2026-04-23 00:29:22 -05:00
profit
56dbfb7d03 fact_extractor: project context + fixed verifier-verdict parser
Some checks failed
lakehouse/auditor 8 warnings — see review
Two bundled changes. Both came out of J's observation that the
verifier was defaulting to UNVERIFIABLE on domain-specific facts
because it had no idea what Lakehouse was, which project's code it
was reading, or what framework the types belonged to.

1. Project context preamble. Added docs/AUDITOR_CONTEXT.md — a <400-
   word concise description of the project (crates, services,
   architecture phases, the auditor's role itself). fact_extractor
   reads it once, caches it, prepends it to the extract prompt as a
   "PROJECT CONTEXT (for grounding; do NOT extract from this)"
   section. Both extractor and verifier now see this context, so
   statements like "aggregate<T> returns Map<string, AggregateRow>"
   get grounded as "this is a TypeScript function in the Lakehouse
   auditor subsystem" and the verifier can reason about plausibility
   instead of guessing.

2. Verifier-verdict parser fix. Gemma2's output format varies between
   "**Verdict:** CORRECT" and just "* **CORRECT**" inline (observed
   variance across runs). The old regex required "Verdict:" as a
   label and missed the second format — causing all verdicts to
   stay UNCHECKED. Replaced with a two-pass approach: find each
   fact section start ("**N.**" or "N."), slice to the next section,
   scan the slice for the first CORRECT|INCORRECT|UNVERIFIABLE
   token. Handles both formats plus unfenced fallback.

Verified: 4-fact test extraction went from 0/4 verdicts scored
(pre-fix) to 2/4 CORRECT + 2/4 UNVERIFIABLE (post-fix). The 2
UNVERIFIABLE cases are domain-specific code behavior the verifier
legitimately can't confirm without reading source — correct stance,
not a parser miss.

No new consensus modes yet. J suggested adding codereview or
validator as a second pass; holding until we see whether context
injection alone gives sufficient signal lift.
2026-04-23 00:26:01 -05:00
profit
2a97fd7237 claim_parser: skip quoted patterns + tighten PR regex
Some checks failed
lakehouse/auditor 7 warnings — see review
Two fixes observed in test sweep on b25e368:

1. The "Phase 45 shipped" quoted test example in a commit message
   body was triggering STRONG_PATTERNS despite being inside quotes —
   produced a block finding that flipped 1/0/1 across 3 back-to-back
   audits. Same bug class as auditor/checks/static.ts (fixed earlier):
   rubric files quote pattern examples, parser can't distinguish.

   Fix: firstUnquotedMatch() wraps firstMatch(); uses isInsideQuotedString()
   to check whether the regex's match position falls inside double /
   single / backtick quotes on the line. Mirrors static.ts exactly.

2. A regex misfire: `(?:PR|commit|prior|...)` in history/proof
   patterns was matching "verified ... in production" because `PR`
   (2 chars) matched the first 2 chars of "production" before the
   `\s*#?\w*` tail absorbed the rest. Tightened to require a digit
   after PR (`PR\s*#?\d+`) and commit to require a hex hash.

Verified: 3 back-to-back audit_one runs before this fix showed the
Phase 45 block flipping 1/0/1; after these fixes, unit tests confirm
quoted examples skip correctly AND real claims ("Phase 45 shipped",
"verified end-to-end against production", "Verified end-to-end on
PR #8") still classify correctly.
2026-04-23 00:18:58 -05:00
profit
b25e36881c claim_parser: history/proof claims join empirical class
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "now classify as empirical; fresh claims like "Phase 45 shipped" stay"
PR #9's 4 block findings were all from commit message references to
prior work ("on PR #8", "the proven X", "flipping across N runs").
The cloud reviewer correctly said "the current diff does not prove
that", but the claim was never about the current diff — the proof
lives in the referenced prior PR or test run.

Extended EMPIRICAL_PATTERNS to cover two shared classes:

  1. Runtime metrics (existing) — "58 cloud calls", "306s elapsed"
  2. History/proof refs (new) — "verified on PR #8", "was flipping
     across 9 runs", "the proven escalation ladder", "previously
     observed in PR #6", "tested against commit abc1234"

Both skip diff-verification for the same reason: the proof is outside
the diff. Folded into the existing bucket rather than adding a new
strength tier — the skip discipline is identical so there's no value
in splitting them.

Unit-tested on PR #9's actual failing lines: all 5 historical claims
now classify as empirical; fresh claims like "Phase 45 shipped" stay
strong; pure implementation descriptions ("implements deterministic
classification") still don't match (expected — they're not
claims, they're restatements).
2026-04-22 23:53:07 -05:00
profit
a264bcf3fc auditor/kb_stats.ts — on-demand observability without Grafana
Some checks failed
lakehouse/auditor 4 blocking issues: cloud: claim not backed — "Primary reviewer (gpt-oss:120b) runs N=3 times in parallel, majority-vote per claim. Tie-break
Reads every KB scratchpad file and prints a dashboard of audit
health: verdict distribution, per-PR verdict instability rate,
consensus discrepancy counters, KB size + distinct-signature growth,
verifier verdict histogram, top recurring entities by cross-PR count.

Also supports --json for feeding CI gates or later piping into a
static dashboard page. --top N caps the entities section.

Current state from running it: 30 audits across 8 PRs, 25% verdict
instability rate (all pre-N=3-consensus), 0 discrepancies logged yet
(audits before commit A didn't persist them), 84 audit_lessons rows
with 28 distinct signatures, 4 audit_facts rows with 20 distinct
entities. No cross-PR recurrences yet — but the machinery prints
them as soon as audits on other PRs produce overlapping entities.

This is the full observability surface for PR #9 — the Grafana
alternative I proposed in the counter-plan. Zero infra, 280 LOC,
zero maintenance. If someone later wants a real dashboard, `--json`
output pipes directly into any visualization layer.
2026-04-22 23:41:50 -05:00
profit
181c35b829 scrum_master fact extraction + verifier gate + schema_version bump
Three bundled changes that round out the KB enrichment pipeline
(PR #9 commits B/C/D compressed into one — they all touch the same
persist surfaces so splitting them would just add noise):

B. scrum_master reviews now route accepted review bodies through
   fact_extractor (same llm_team extract pipeline as inference) and
   append to data/_kb/audit_facts.jsonl tagged source:"scrum_review".
   One KB, two producers — downstream consumers can filter by source
   when they care about provenance. Skips reviews <120 chars
   (one-liners / LGTM-type comments with no extractable knowledge).

C. Verifier-gated fact persistence. fact_extractor now parses the
   verifier's free-form prose into per-fact verdicts (CORRECT /
   INCORRECT / UNVERIFIABLE / UNCHECKED). Facts marked INCORRECT are
   dropped on write; CORRECT + UNVERIFIABLE + UNCHECKED are kept
   (dropping UNVERIFIABLE would lose ~90% of real signal — the
   verifier's prior-knowledge base doesn't know Lakehouse internals,
   so domain-specific facts read as UNVERIFIABLE by default).

   verifier_verdicts array is persisted alongside facts so downstream
   queries can surface high-confidence facts (CORRECT) separately
   from provisional ones (UNVERIFIABLE).

   schema_version:2 added to both scrum_reviews.jsonl and
   audit_facts.jsonl writes. Old (v1) rows remain readable; new rows
   get the field so the forward-compat reader in kb_query can
   differentiate.

D. scrum_master_reviewed:true flag added to scrum_reviews.jsonl
   rows on accept. Future kb_query surfacing can filter by this
   (e.g., "show me PRs where a scrum review exists vs only inference"
   as governance signal). Also carried into audit_facts.jsonl when
   the scrum_review source path writes there.
2026-04-22 23:40:21 -05:00
profit
2afad0f83f auditor/inference: N=3 consensus + qwen3-coder:480b tie-breaker
Closes the determinism gap observed in the 3-run baseline test: 1 of
8 findings (the "proven escalation ladder" block) was flipping across
identical-state audits. Root cause: cloud non-determinism at temp=0
is real in practice even though it shouldn't be in theory.

Fix: run the primary reviewer (gpt-oss:120b) N=3 times in PARALLEL
(Promise.all, wall-clock ≈ single call because they're independent
HTTP requests). Aggregate votes per claim_idx. Majority wins. On a
1-1-1 split, call a tie-breaker model with different architecture:
qwen3-coder:480b — newer coding specialist, 4x params of the primary,
distinct training lineage.

Every case where the 3 runs disagreed (even when majority resolved)
is logged to data/_kb/audit_discrepancies.jsonl with the vote counts
and resolution type. This is how we measure consensus drift over
time — a dashboard metric is literally `wc -l audit_discrepancies`
relative to audit count.

Verified: 2 back-to-back audits on unchanged PR #8 produced
identical 8 findings each (1 block + 7 warn). consensus=3/3 on every
claim, zero discrepancies logged. Cost: 3x primary tokens (7K per
audit vs 2K), wall-clock ~unchanged because calls are parallel.

New env vars:
  LH_AUDITOR_CONSENSUS_N        default 3
  LH_AUDITOR_TIEBREAKER_MODEL   default qwen3-coder:480b

Factored the cloud call into runCloudInference() helper so the
consensus loop is clean and the tie-breaker reuses the same prompt
shape as the primary.
2026-04-22 23:38:17 -05:00
profit
77650c4ba3 auditor: inference curation layer + llm_team fact extraction → KB
Closes the cycle J asked for: curated cloud output lands structured
knowledge in the KB so future audits have architectural context, not
just a log of per-finding signatures.

Three pieces:

1. Inference curation (tree-split) — when diff > 30KB, shard at 4.5KB,
   summarize each shard via cloud (temp=0, think=false on small
   shards; think=true on main call). Merge into scratchpad. The cloud
   verification then runs against the scratchpad, not truncated raw.
   Eliminates the 40KB MAX_DIFF_CHARS truncation path for large PRs
   (PR #8 is 102KB — was losing 62KB). Anti-false-positive guard in
   the prompt: cloud is told scratchpad absence is NOT diff absence,
   so it doesn't flag curated-out symbols as missing. unflagged_gaps
   section is dropped entirely when curated (scratchpad can't ground
   them).

2. fact_extractor — TS client for llm_team_ui's extract-facts mode at
   localhost:5000/api/run. Sends curated scratchpad through qwen2.5
   extractor + gemma2 verifier, parses SSE stream, returns structured
   {facts, entities, relationships, verification, llm_team_run_id}.
   Best-effort: if llm_team is down, extraction fails silently and
   the audit still completes. AWAITED so CLI tools (audit_one.ts)
   don't exit before extraction lands — the systemd poller has 90s
   headroom so the extra ~15s doesn't matter.

3. audit_facts.jsonl + checkAuditFacts() — one row per curated audit
   with the extraction result. kb_query tails the jsonl, explodes
   entity rows, aggregates by entity name with distinct-PR counting,
   surfaces entities recurring in 2+ PRs as info findings. Filters
   out short names (<3 chars, extractor truncation artifacts) and
   generic types (string/number/etc.) so signal isn't drowned.

Verified end-to-end on PR #8: 102KB diff → 23 shards → 1KB scratchpad
→ qwen2.5 extracted 4 facts + 6 entities + 6 relationships (real
code-level knowledge: AggregateOptions<T> type, aggregate<T> async
function with real signature, typed relationships). llm_team_run_id
cross-references to llm_team's own team_runs table.

Also: audit.ts passes (pr_number, head_sha) as InferenceContext so
extracted facts are scope-tagged for the KB index.
2026-04-22 23:09:14 -05:00
profit
47f1ca73e7 auditor: Level 1 correction — keep think=true, only temp=0 is needed
Some checks failed
lakehouse/auditor 4 warnings — see review
The previous Level 1 commit set think=false which broke the cloud
inference check on real PR audits. gpt-oss:120b is a reasoning model;
at think=false on large prompts (40KB diff + 14 claims) it returned
empty content — verified by inspecting verdict 8-8e4ebbe4b38a which
showed "cloud returned unparseable output — skipped" with 13421
tokens used and head:<empty>.

Small-prompt tests passed because the model could respond without
needing to think. Real audits with the full diff + claims context
require the reasoning channel to produce any output at all.

The determinism we need comes from temp=0 (greedy sampling). The
reasoning trace at think=true varies in prose but greedy sampling
converges to the same FINAL classification from identical starting
state, so signatures remain stable.

max_tokens restored to 3000 for the think trace + response.
2026-04-22 22:24:25 -05:00
profit
8e4ebbe4b3 test: nine-consecutive audit run 5/5 (compounding probe)
All checks were successful
lakehouse/auditor all checks passed (11 findings, all info)
2026-04-22 22:17:11 -05:00
profit
c6511427a4 test: nine-consecutive audit run 4/5 (compounding probe)
All checks were successful
lakehouse/auditor all checks passed (11 findings, all info)
2026-04-22 22:15:13 -05:00
profit
b02554daec test: nine-consecutive audit run 3/5 (compounding probe)
All checks were successful
lakehouse/auditor all checks passed (11 findings, all info)
2026-04-22 22:13:26 -05:00
profit
2bb83d1bbb test: nine-consecutive audit run 2/5 (compounding probe)
All checks were successful
lakehouse/auditor all checks passed (11 findings, all info)
2026-04-22 22:11:34 -05:00
profit
0cdf9f7928 test: nine-consecutive audit run 1/5 (compounding probe)
All checks were successful
lakehouse/auditor all checks passed (11 findings, all info)
2026-04-22 22:10:17 -05:00
profit
1e00eb4472 auditor: inference temp=0, think=false — kill signature creep
9-run empirical test showed 20 of 27 audit_lessons signatures were
singletons (count=1) — the cloud producing slightly-different summary
phrasings for the SAME underlying claim on each audit, each hashing
to a fresh signature. That's the creep J flagged — not explosive,
but steady ~2 new sigs per run, unbounded over hundreds of runs.

Root cause: temperature=0.2 + think=true was letting variable prose
leak into the classification output. Fix: temp=0 (greedy sample →
identical input yields identical output on same model version),
think=false (no reasoning trace variance), max_tokens 3000→1500
(tighter bound prevents tail wander).

The compounding policy itself was validated by the 9 runs:
  - 7 recurring claims (the legitimate signals) all at conf 0.08-0.20
  - ratingSeverity() correctly held them at info (below 0.3 threshold)
  - cross-PR signal test separately confirmed conf=1.00 → sev=block

Also: LH_AUDIT_RUNS env so the test can validate with smaller N.
2026-04-22 22:09:35 -05:00
profit
81a2200344 test: nine-consecutive audit run 9/9 (compounding probe)
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
2026-04-22 22:06:44 -05:00
profit
c32289143c test: nine-consecutive audit run 8/9 (compounding probe)
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
2026-04-22 22:04:47 -05:00
profit
6df0cdadb3 test: nine-consecutive audit run 7/9 (compounding probe)
Some checks failed
lakehouse/auditor 3 warnings — see review
2026-04-22 22:02:50 -05:00
profit
6d507d5411 test: nine-consecutive audit run 6/9 (compounding probe)
Some checks failed
lakehouse/auditor 7 warnings — see review
2026-04-22 22:01:03 -05:00
profit
d95d7b193e test: nine-consecutive audit run 5/9 (compounding probe)
Some checks failed
lakehouse/auditor 8 warnings — see review
2026-04-22 21:59:00 -05:00
profit
2e222c8eaa test: nine-consecutive audit run 4/9 (compounding probe)
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
2026-04-22 21:57:18 -05:00
profit
0533aa78fb test: nine-consecutive audit run 3/9 (compounding probe)
Some checks failed
lakehouse/auditor 4 warnings — see review
2026-04-22 21:55:26 -05:00
profit
ac5577c4fa test: nine-consecutive audit run 2/9 (compounding probe)
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
2026-04-22 21:53:33 -05:00
profit
c5f0f35cdb test: nine-consecutive audit run 1/9 (compounding probe)
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
2026-04-22 21:52:21 -05:00
profit
9d12a814e3 auditor: kb_index aggregator + nine-consecutive empirical test
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
Phase 1 — definition-layer over append-only JSONL scratchpads.

auditor/kb_index.ts is the single shared aggregator:

  aggregate<T>(jsonlPath, { keyFn, scopeFn, checkFn, tailLimit })
      → Map<signature, {count, distinct_scopes, confidence,
                        first_seen, last_seen, representative_summary, ...}>

  ratingSeverity(agg) — confidence × count severity policy shared
    across all KB readers. Kills the "same unfixed PR inflates its
    own recurrence score" failure mode by design: confidence =
    distinct_scopes/count, so same-scope noise stays below the 0.3
    escalation threshold no matter how many times it repeats.

checkAuditLessons now routes through aggregate + ratingSeverity.
Net effect: the recurrence detector's bespoke Map/Set bookkeeping is
gone; same behavior, shared discipline, reusable by scrum/observer.

Also: symbolsExistInRepo now skips files >500KB so the audit can't
get stuck slurping a fixture.

Phase 2 — nine-consecutive audit runner.

tests/real-world/nine_consecutive_audits.ts pushes 9 empty commits,
waits for each verdict, captures the audit_lessons aggregate state
after each run, reports:

  - sig_count trajectory (should stabilize, not grow linearly)
  - max_count trajectory (same-signature repeat rate)
  - max_confidence trajectory (must stay LOW on same-PR noise)
  - verdict_stable across runs (must NOT oscillate)

This is the empirical proof that the KB compounds favorably:
noise doesn't escalate itself, and signal stays distinguishable.

Unit-tested both failure modes: same-PR × 9 repeats = conf=0.11
(info); cross-PR × 5 distinct = conf=1.00 (block). The rating
function correctly discriminates.
2026-04-22 21:49:46 -05:00
profit
f4be27a879 auditor: fix two false-positive classes from cloud inference
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
Observed on PR #8 audit (de11ac4): 7 warn findings, all from the
cloud inference check. Investigation showed two distinct bug classes
that weren't "ship bad code", they were "auditor misreads the diff":

1. Cloud flagged "X not defined in this diff / missing implementation"
   for symbols like `tailJsonl` and `stubFinding` that ARE defined —
   just not in the added lines of this diff. Fix: extract candidate
   symbols from the cloud's gap summary, grep the repo for their
   definitions (function/const/let/def/class/struct/enum/trait/fn).
   If every named symbol resolves, drop the finding; if some do,
   demote to info with the resolution in evidence.

2. Cloud flagged runtime metrics like "58 cloud calls, 306s
   end-to-end" as unbacked claims. These are empirical outputs
   from running the test, not things a static diff can prove.
   Fix: claim_parser now has an `empirical` strength class
   matching iteration counts, cloud-call counts, duration metrics,
   attempt counts, tier-count phrases. Inference drops empirical
   claims from its cloud prompt (verifiable[] subset only) and
   claim-index mapping uses verifiable[] so cloud responses still
   line up.

Added `claims_empirical` to audit metrics so the verdict is
introspectable: how many claims WERE runtime-only vs how many
are diff-verifiable?

Verified: unit tests confirm empirical classification on 5
sample commit messages; symbol resolver found both false-positive
symbols (tailJsonl + stubFinding) and correctly skipped a known-
fake symbol.
2026-04-22 21:40:03 -05:00
profit
de11ac4018 auditor/README: document audit_lessons + scrum_reviews KB files
Some checks failed
lakehouse/auditor 7 warnings — see review
Adds State section entries for the two KB files that close the
feedback loop: audit_lessons.jsonl (findings → recurrence detector)
and scrum_reviews.jsonl (scrum output → kb_query surfacing).

Touch-commit to trigger re-audit on fresh SHA with the restarted
auditor (which now has the fix-loaded code).
2026-04-22 21:33:27 -05:00
profit
0306dd88c1 auditor: close the verdict→playbook loop + fix rubric-string false positive
Some checks failed
lakehouse/auditor 2 blocking issues: unimplemented!() macro call in tests/real-world/hard_task_escalation.ts
Two changes that fell out of running the auto-loop for real on PR #8:

1. The systemd auditor blocked PR #8 on 'unimplemented!()' / 'todo!()'
   in tests/real-world/hard_task_escalation.ts — but those strings are
   the rubric itself, not macro calls. Added isInsideQuotedString()
   detection in static.ts: BLOCK_PATTERNS now skip matches that fall
   inside double-quoted / single-quoted / backtick string literals on
   the added line. WARN/INFO patterns still run — a TODO comment in
   a string is still a valid signal.

2. Verdicts were being persisted to disk but never fed back as
   learning signal. Added appendAuditLessons() — every block/warn
   finding writes a JSONL row to data/_kb/audit_lessons.jsonl with a
   path-agnostic signature (strips file paths, line numbers, commit
   hashes) so the SAME class of finding on DIFFERENT files dedups to
   one signature.

   kb_query now tails audit_lessons.jsonl and emits recurrence
   findings: 2 distinct PRs hit a signature = info, 3-4 = warn, 5+ =
   block. Severity ramps on distinct-PR count, not total rows, so a
   single unfixed PR being re-audited doesn't inflate its own
   recurrence score.

Fires on post-verdict fire-and-forget (can't break the audit if
disk write fails). The learning loop is now closed: each audit
contributes to the KB that guides the next audit.

Tested: unit tests for normalizedSignature confirmed path-agnostic
dedup; static.ts regression tests confirmed rubric strings no longer
trip BLOCK while real unquoted unimplemented!() still does.
2026-04-22 21:31:35 -05:00
profit
dc01ba0a3b auditor: kb_query surfaces scrum-master reviews for files in PR diff
Some checks failed
lakehouse/auditor 2 blocking issues: unimplemented!() macro call in tests/real-world/hard_task_escalation.ts
Wires the cohesion-plan Phase C link: the scrum-master pipeline writes
per-file reviews to data/_kb/scrum_reviews.jsonl on accept; the
auditor now reads that same file and emits one kb_query finding per
scrum review whose `file` matches a path in the PR's diff.

Severity heuristic: attempt 1-3 → info, attempt 4+ → warn. Reaching
the cloud specialist (attempt 4+) means the ladder had to escalate,
which is meaningful signal reviewers should see. Tree-split fired is
also surfaced in the finding summary.

audit.ts now passes pr.files.map(f => f.path) into runKbCheck (the
old signature dropped it on the floor). Also adds auditor/audit_one.ts
— a dry-run CLI for auditing a single PR without posting to Gitea,
useful for verifying check behavior without spamming review comments.

Verified: after writing scrum_reviews for auditor/audit.ts and
mcp-server/observer.ts (both in PR #7), audit_one 7 surfaced both as
info findings with preview + accepted_model + tree_split flag. A
scrum review for playbook_memory.rs (NOT in PR #7) was correctly
filtered out.
2026-04-22 21:18:21 -05:00
root
89d188074b scrum_master: tree-split + scrum_reviews.jsonl writer + truncation warning
Extends the scrum-master pipeline to handle input overflow on large
source files (>6KB). Previously, the review prompt truncated the file
to first-chunk, which caused false-positive "field is missing"
findings whenever the actual field was past the cutoff.

Now each file >FILE_TREE_SPLIT_THRESHOLD (6000) is sharded at
FILE_SHARD_SIZE (3500), each shard summarized via gpt-oss:120b cloud,
and the distillations merged into a scratchpad. The review then runs
against the scratchpad with an explicit truncation-awareness clause
in the prompt: "DO NOT claim any field, function, or feature is
'missing' based on its absence from this distillation."

Also writes each accepted review as a JSONL row to
data/_kb/scrum_reviews.jsonl (file, reviewed_at, accepted_model,
accepted_on_attempt, attempts_made, tree_split_fired, preview).
This is the source the auditor's kb_query reads to surface
per-file scrum reviews on PRs that touch those files (cohesion
plan Phase C).

Verified: scrum review of 92KB playbook_memory.rs → 27 shards via
cloud → distilled scratchpad → qwen3.5 local 7B accepted on attempt 1
(5931 chars). Tree-split fires, jsonl row appended, output file
contains structured suggestions.
2026-04-22 21:17:53 -05:00
profit
a7aba31935 tests/real-world: scrum-master pipeline — composes everything we built
The orchestrator J described: pulls git repo source + PRD +
suggested-changes doc, chunks them, hands each code piece through
the proven escalation ladder with learning context, collects
per-file suggestions in a consolidated handoff report.

Composes ONLY already-shipped primitives — no new core code:
  - chunker with 800-char / 120-overlap windows
  - sidecar /embed for real nomic-embed-text embeddings
  - in-memory cosine retrieval for top-5 PRD + top-5 proposal
    chunks per target file
  - escalation ladder (qwen3.5 → qwen3 → gpt-oss:20b → gpt-oss:120b
    → devstral-2:123b → mistral-large-3:675b)
  - per-attempt learning-context injection (prior failures as
    "do not repeat" block)
  - acceptance rubric (length ≥ 200 chars + structured form)

Live-run (tests/real-world/runs/scrum_moatqkee/):
  targets: 3 files
    - crates/vectord/src/playbook_memory.rs  (920 lines)
    - crates/vectord/src/doc_drift.rs        (163 lines)
    - auditor/audit.ts                        (170 lines)
  resolved: 3/3 on attempt 1 by qwen3.5:latest local 7B
  total duration: 111.7s
  output: scrum_report.md + per-file JSON

Sample from scrum_report.md (playbook_memory.rs review):
  - Alignment score: 9/10 vs PRD Phase 19
  - 4 concrete change suggestions naming specific lines + PLAN/PRD
    chunk offsets
  - 3 gap analyses with PRD-reference citations

Honest findings from this run:
1. Local 7B handled review-style tasks first-try. The escalation
   ladder infrastructure is live but didn't fire — review is an
   easier task shape than strict code-generation (see hard_task
   test which needed devstral-2 specialist).
2. 6KB file-truncation caused one false positive: model claimed
   playbook_memory.rs lacks a `doc_refs` field, but that field
   exists past the 6KB cutoff. Trade-off between context-size
   and review-depth needs tuning per file.
3. Chunk-offset citations are real: model output includes
   `[PRD @27880]` and `[PLAN @16320]` which map to the actual
   byte offsets of retrieved context chunks. Auditor pattern could
   adopt this for traceable claims.

This is the scrum-master-handoff shape J asked for:
  repo + PRD + proposal → chunk → retrieve → escalate → consolidate
  → human-reviewable markdown report

Not shipping: per-PR diff analysis, open-PR integration, Gitea
posting of suggestions. Those compose the same primitives
differently — this proves the core pattern.

Env override: LH_SCRUM_FILES=path1,path2,... to target a different
file set. Default 3 files keeps runtime ~2min.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 20:52:42 -05:00
profit
540c493ff1 tests/real-world: hard-task escalation — prove the ladder solves tasks local can't
J asked (2026-04-22): construct a task the local model provably can't
complete, then watch the escalation + retry + cloud pipeline actually
solve it.

The task: generate a Rust async function with 15 specific
structural rules (exact signature, bounded concurrency, exponential
backoff 250/500/1000ms, NO .unwrap(), rustdoc comments, etc.).
Small enough to fit in one response but strict enough that one
rule violation = not accepted. Fits Rust + async + concurrency +
error-handling — across the hardest dimensions for 7B models.

Escalation ladder (corrected per J — kimi-k2.x requires Ollama
Cloud Pro subscription which J's key lacks; mistral-large-3:675b
is the biggest provisioned model):

  1. qwen3.5:latest        (local 7B)
  2. qwen3:latest          (local 7B)
  3. gpt-oss:20b           (local 20B)
  4. gpt-oss:120b          (cloud 120B)
  5. devstral-2:123b       (cloud 123B coding specialist)
  6. mistral-large-3:675b  (cloud 675B — biggest available)

Each attempt gets PRIOR failures' rubric violations injected as
learning context. Loop caps at MAX_ATTEMPTS=6.

Live run (runs/hard_task_moapd3g3/):
  attempt 1: qwen3.5:latest         11/15  — missed concurrency + some constraints
  attempt 2: qwen3:latest           11/15  — different misses after learning
  attempt 3: gpt-oss:20b             0/1  — empty response (local model dead-end)
  attempt 4: gpt-oss:120b            0/1  — empty (heavy learning context may confuse)
  attempt 5: devstral-2:123b        15/15   ACCEPTED after 10.4s
  attempt 6: (not reached)

Total: 5 attempts, 145.6s, coding-specialist succeeded.

Honest findings from the run:
- Pipeline works: escalated through 4 distinct model tiers, injected
  learning, bounded at 6, graceful failure surfaces.
- Learning injection doesn't always help general-purpose models —
  gpt-oss:120b returned empty when given heavy prior-failure context
  (attempt 4). The coding specialist (devstral) worked better because
  the task is domain-aligned.
- Local 7B came within 4 rules of success first-try (11/15) — not
  bad for the scale, but specific constraints like "EXACT signature"
  and "bounded concurrency at 4" are where small models slip.
- Kimi K2.5/K2.6 both require a paid subscription on our current
  Ollama Cloud key — verified via direct ollama.com curl. Swap
  to kimi once subscription lands.

Also includes a rubric bug-fix caught in the run: the regex for
"reaches 500/1000ms backoff" originally required literal constants,
but devstral-2:123b wrote idiomatic `retry_delay *= 2;` which
doubles 250 → 500 → 1000 correctly. Broadened rubric to recognize
`*= 2`, bit-shift, `.pow()`, and literal forms. Without this the
ladder would have false-failed on semantically-correct code.

Files:
  tests/real-world/hard_task_escalation.ts (270 LOC)
  tests/real-world/runs/hard_task_moapd3g3/
    attempt_{1..5}.txt     — raw model outputs (last successful)
    attempt_{1..5}.json    — per-attempt rubric verdict + error
    summary.json           — ladder summary

What this PROVES that no prior test did:
- Task-level retry ESCALATES across distinct model capabilities
  (not just same model retried)
- Bigger and more-specialized models ACTUALLY solve what smaller
  ones can't — the ladder works by design, not by luck
- The subscription boundary (Kimi K2.x) is a real operational
  constraint, not a code issue
- Rubric engineering is its own discipline — a strict-but-wrong
  validator can reject correct code; shipping the test harness
  required tuning against actual model outputs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 18:50:53 -05:00
profit
6d6a306d4e tests/real-world: add task-level 6-retry loop (per J 2026-04-22)
Two distinct retry loops now both cap at 6 and serve different
purposes:

1. Per-cloud-call continuation (Phase 21 primitive) — when a single
   cloud call returns empty or truncated, stitches up to 6
   continuation calls. Handles output-overflow.

2. Per-TASK retry (this commit) — when the whole task errors
   (500/404, thin answer, etc.), retries the full task up to 6
   times. Each retry gets PRIOR ATTEMPTS' failures injected into
   the prompt as learning context, so attempt N+1 is informed by
   what N failed at. Handles error-recovery with compounding
   context.

Both loops fired on iter 3 of the stress run, proving them
independent and composable:

  FORCING TASK-RETRY LOOP — iter 3 will cycle through 5 invalid
  models + 1 valid
    attempt 1/6: model=deliberately-invalid-model-attempt-1
        /v1/chat 502: ollama.com 404: model not found
    attempt 2/6: [with prior-failure context]
    ... (5 failures total, each with the full chain of prior errors)
    attempt 6/6: model=gpt-oss:20b [with prior-failure context]
        continuation retry 1..6 (empty responses)
        SUCCEEDED after 5 prior failures (441 chars)

What J was asking to prove:
  "I expect it to retry the process six times to build on the
   knowledge database... when an error is legitimately triggered
   that it will go through six times... without getting caught in
   a loop"

Proof:
  - 6/6 attempts fired on the FORCED iteration
  - Each retry embedded the preceding attempts' errors as "do not
    repeat" context
  - Hard cap at MAX_TASK_RETRIES (6) prevents infinite loops
  - Last-ditch local fallback exists if all 6 still fail
  - Other iterations succeed on attempt 1 — the loop ONLY fires
    when errors are legitimately triggered

Stress run totals (runs/moan4h71/):
  6/6 iterations complete, 58 cloud calls, 306s end-to-end
  tree-splits: 6/6   continuations: 10   rescues: 2
  iter 3: 8197+2800 tok, 6 task attempts, 6 continuation retries
  local stored summary + per-iter JSON for inspection

What this proves that prior stress runs did NOT:
  - Error-recovery at task granularity is live, not aspirational
  - Compounding failure context flows between retries as text
  - Loop bound is enforced; runaway cases aren't possible
  - Two retry mechanisms compose without deadlock (continuation
    inside task-retry inside tree-split)

Follow-ups worth doing (separate PRs):
  - Persist retry-history to observer :3800 so cross-run learning
    sees the failure patterns
  - Route retries through /vectors/hybrid to surface similar prior
    errors from the real KB (currently only in-memory across one
    iteration)
  - Fix citation regex in summary — iter 6 received 5 prior IDs
    but counter shows 0 (regex needs to tolerate hyphens in IDs)
2026-04-22 17:50:53 -05:00
profit
4458c94f45 tests/real-world: enrich_prd_pipeline — architecture stress test
Real end-to-end test of the Lakehouse pipeline at scale. Runs the
PRD (63 KB, 901 lines → 93 chunks) through 6 iterations with cloud
inference, intentional failure injection, and tight context budget
to force every Phase 21 primitive to fire.

What the test exercises:
- Sidecar /embed for 93 chunks (nomic-embed-text)
- In-memory cosine retrieval for top-K per iteration
- Tree-split (shard → summarize → scratchpad → merge) when context
  chunks exceed the 4000-char budget
- Scratchpad truncation to keep compounding context bounded
- Cloud inference via /v1/chat provider=ollama_cloud (gpt-oss:120b)
- Injected primary-cloud failure on iter 3 (invalid model name) +
  rescue with gpt-oss:20b — proves catch-and-retry isn't dead code
- Playbook seeding per iteration (real HTTP against gateway)
- Prior-iteration answer injection for compounding (not just IDs —
  the first version passed IDs only and the model ignored them)

Live run results (tests/real-world/runs/moamj810/):
  6/6 iterations complete, 42 cloud calls total, 245s end-to-end
  tree-splits: 6/6 (every iter overflowed 4K budget)
  continuations: 0 (no responses hit max_tokens)
  rescues: 1 (iter 3 injected failure → gpt-oss:20b → valid answer)
  iter 6 answer explicitly cites [pb:pb-seed-82e1] — compounding real
  scratchpad truncation fired on iter 6 as designed

What this PROVES:
- Tree-split primitives work under real context pressure, not just
  in unit tests. The 4000-char budget forced every iteration to
  shard 12 chunks → 6 shards → scratchpad → final answer.
- Rescue on primary failure is wired and produces answers from a
  weaker model rather than erroring out.
- Compounding context injection works: iter 6's prompt had the 5
  prior answers in its citation block, and the cloud model
  acknowledged at least one via [pb:...] notation.
- The existence claims in Phase 21 (continuation + tree-split) are
  backed by executable evidence, not just unit tests.

What this DOESN'T prove (deliberate — scoped for follow-up):
- Continuation retries (no iter hit max_tokens in this run; would
  need a harder prompt or lower max_tokens to force)
- Real integration with /vectors/hybrid endpoint (test does in-memory
  cosine instead, bypassing gateway vector surface)
- Observer consumption of these runs (nothing posted to :3800 during
  the test — adding that is Phase A integration, handled separately)

Files:
  tests/real-world/enrich_prd_pipeline.ts (333 LOC)
  tests/real-world/runs/moamj810/{iter_1..6.json, summary.json}
    — artifacts from the stress run, committed for inspection

Follow-ups worth doing:
1. Lower max_tokens / harder prompt to force continuation path
2. Route retrieval through /vectors/hybrid for real Phase 19 boost
3. POST per-iteration summary to observer :3800 so runs accumulate
   like scenario runs do

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 17:33:24 -05:00