Extends the scrum-master pipeline to handle input overflow on large
source files (>6KB). Previously, the review prompt truncated the file
to its first chunk, which caused false-positive "field is missing"
findings whenever the actual field was past the cutoff.
Now each file >FILE_TREE_SPLIT_THRESHOLD (6000) is sharded at
FILE_SHARD_SIZE (3500), each shard summarized via gpt-oss:120b cloud,
and the distillations merged into a scratchpad. The review then runs
against the scratchpad with an explicit truncation-awareness clause
in the prompt: "DO NOT claim any field, function, or feature is
'missing' based on its absence from this distillation."
Also writes each accepted review as a JSONL row to
data/_kb/scrum_reviews.jsonl (file, reviewed_at, accepted_model,
accepted_on_attempt, attempts_made, tree_split_fired, preview).
This is the source the auditor's kb_query reads to surface
per-file scrum reviews on PRs that touch those files (cohesion
plan Phase C).
Verified: scrum review of 92KB playbook_memory.rs → 27 shards via
cloud → distilled scratchpad → qwen3.5 local 7B accepted on attempt 1
(5931 chars). Tree-split fires, jsonl row appended, output file
contains structured suggestions.
The orchestrator J described: pulls git repo source + PRD +
suggested-changes doc, chunks them, hands each code piece through
the proven escalation ladder with learning context, collects
per-file suggestions in a consolidated handoff report.
Composes ONLY already-shipped primitives — no new core code:
- chunker with 800-char / 120-overlap windows (sketch after this list)
- sidecar /embed for real nomic-embed-text embeddings
- in-memory cosine retrieval for top-5 PRD + top-5 proposal
chunks per target file
- escalation ladder (qwen3.5 → qwen3 → gpt-oss:20b → gpt-oss:120b
→ devstral-2:123b → mistral-large-3:675b)
- per-attempt learning-context injection (prior failures as
"do not repeat" block)
- acceptance rubric (length ≥ 200 chars + structured form)
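A minimal sketch of the chunker window shape, assuming hypothetical
names (the shipped chunker is one of the primitives above); offsets are
retained so retrieved chunks can be cited as [PRD @offset]:

    // Hypothetical sketch: fixed windows with overlap, offset kept for citations.
    function chunkText(
      text: string,
      size = 800,     // window size from the list above
      overlap = 120,  // shared context between consecutive windows
    ): { offset: number; body: string }[] {
      const chunks: { offset: number; body: string }[] = [];
      const step = size - overlap; // 680-char stride
      for (let offset = 0; offset < text.length; offset += step) {
        chunks.push({ offset, body: text.slice(offset, offset + size) });
        if (offset + size >= text.length) break; // final window reached the end
      }
      return chunks;
    }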
Live-run (tests/real-world/runs/scrum_moatqkee/):
targets: 3 files
- crates/vectord/src/playbook_memory.rs (920 lines)
- crates/vectord/src/doc_drift.rs (163 lines)
- auditor/audit.ts (170 lines)
resolved: 3/3 on attempt 1 by qwen3.5:latest local 7B
total duration: 111.7s
output: scrum_report.md + per-file JSON
Sample from scrum_report.md (playbook_memory.rs review):
- Alignment score: 9/10 vs PRD Phase 19
- 4 concrete change suggestions naming specific lines + PLAN/PRD
chunk offsets
- 3 gap analyses with PRD-reference citations
Honest findings from this run:
1. Local 7B handled review-style tasks first-try. The escalation
ladder infrastructure is live but didn't fire — review is an
easier task shape than strict code-generation (see hard_task
test which needed devstral-2 specialist).
2. 6KB file-truncation caused one false positive: model claimed
playbook_memory.rs lacks a `doc_refs` field, but that field
exists past the 6KB cutoff. Trade-off between context-size
and review-depth needs tuning per file.
3. Chunk-offset citations are real: model output includes
`[PRD @27880]` and `[PLAN @16320]` which map to the actual
byte offsets of retrieved context chunks. Auditor pattern could
adopt this for traceable claims.
This is the scrum-master-handoff shape J asked for:
repo + PRD + proposal → chunk → retrieve → escalate → consolidate
→ human-reviewable markdown report
Not shipping: per-PR diff analysis, open-PR integration, Gitea
posting of suggestions. Those compose the same primitives
differently — this proves the core pattern.
Env override: LH_SCRUM_FILES=path1,path2,... to target a different
file set. Default 3 files keeps runtime ~2min.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
J asked (2026-04-22): construct a task the local model provably can't
complete, then watch the escalation + retry + cloud pipeline actually
solve it.
The task: generate a Rust async function with 15 specific
structural rules (exact signature, bounded concurrency, exponential
backoff 250/500/1000ms, NO .unwrap(), rustdoc comments, etc.).
Small enough to fit in one response but strict enough that one
rule violation = not accepted. It spans Rust + async + concurrency +
error-handling, the hardest dimensions for 7B models.
Escalation ladder (corrected per J — kimi-k2.x requires Ollama
Cloud Pro subscription which J's key lacks; mistral-large-3:675b
is the biggest provisioned model):
1. qwen3.5:latest (local 7B)
2. qwen3:latest (local 7B)
3. gpt-oss:20b (local 20B)
4. gpt-oss:120b (cloud 120B)
5. devstral-2:123b (cloud 123B coding specialist)
6. mistral-large-3:675b (cloud 675B — biggest available)
Each attempt gets PRIOR failures' rubric violations injected as
learning context. Loop caps at MAX_ATTEMPTS=6.
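A hedged sketch of the ladder loop as described; callModel and rubric
are stand-ins for the shipped pieces, not their real signatures:

    declare function callModel(model: string, prompt: string): Promise<string>;
    declare function rubric(output: string): { passed: boolean; violations: string[] };

    const LADDER = ["qwen3.5:latest", "qwen3:latest", "gpt-oss:20b",
                    "gpt-oss:120b", "devstral-2:123b", "mistral-large-3:675b"];
    const MAX_ATTEMPTS = 6;

    async function escalate(task: string): Promise<string | null> {
      const priorFailures: string[] = [];
      for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
        const learning = priorFailures.length
          ? "\nPrior attempts failed these rules. DO NOT repeat:\n" + priorFailures.join("\n")
          : "";
        const output = await callModel(LADDER[attempt], task + learning);
        const verdict = rubric(output); // e.g. the 15 structural rules
        if (verdict.passed) return output;
        priorFailures.push(
          `attempt ${attempt + 1} (${LADDER[attempt]}): ${verdict.violations.join("; ")}`);
      }
      return null; // all six tiers exhausted; caller surfaces graceful failure
    }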
Live run (runs/hard_task_moapd3g3/):
attempt 1: qwen3.5:latest 11/15 — missed concurrency + some constraints
attempt 2: qwen3:latest 11/15 — different misses after learning
attempt 3: gpt-oss:20b 0/1 — empty response (local model dead-end)
attempt 4: gpt-oss:120b 0/1 — empty (heavy learning context may confuse)
attempt 5: devstral-2:123b 15/15 ✅ ACCEPTED after 10.4s
attempt 6: (not reached)
Total: 5 attempts, 145.6s, coding-specialist succeeded.
Honest findings from the run:
- Pipeline works: escalated through 4 distinct model tiers, injected
learning, bounded at 6, graceful failure surfaces.
- Learning injection doesn't always help general-purpose models —
gpt-oss:120b returned empty when given heavy prior-failure context
(attempt 4). The coding specialist (devstral) worked better because
the task is domain-aligned.
- Local 7B came within 4 rules of success first-try (11/15) — not
bad for the scale, but specific constraints like "EXACT signature"
and "bounded concurrency at 4" are where small models slip.
- Kimi K2.5/K2.6 both require a paid subscription on our current
Ollama Cloud key — verified via direct ollama.com curl. Swap
to kimi once subscription lands.
Also includes a rubric bug-fix caught in the run: the regex for
"reaches 500/1000ms backoff" originally required literal constants,
but devstral-2:123b wrote idiomatic `retry_delay *= 2;` which
doubles 250 → 500 → 1000 correctly. Broadened rubric to recognize
`*= 2`, bit-shift, `.pow()`, and literal forms. Without this the
ladder would have false-failed on semantically-correct code.
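A sketch of what the broadened check might look like (regexes are
illustrative, not the shipped rubric):

    // Accept any idiom that doubles the delay, not only literal constants.
    const backoffIdioms: RegExp[] = [
      /\b250\b[\s\S]*\b500\b[\s\S]*\b1000\b/, // literal 250/500/1000 schedule
      /\w+\s*\*=\s*2\s*;/,                    // retry_delay *= 2;
      /<<\s*1\b/,                             // bit-shift doubling
      /\.pow\(/,                              // base * 2u32.pow(attempt)
    ];
    const reachesBackoff = (code: string): boolean =>
      backoffIdioms.some((re) => re.test(code));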
Files:
tests/real-world/hard_task_escalation.ts (270 LOC)
tests/real-world/runs/hard_task_moapd3g3/
attempt_{1..5}.txt — raw model outputs (last successful)
attempt_{1..5}.json — per-attempt rubric verdict + error
summary.json — ladder summary
What this PROVES that no prior test did:
- Task-level retry ESCALATES across distinct model capabilities
(not just same model retried)
- Bigger and more-specialized models ACTUALLY solve what smaller
ones can't — the ladder works by design, not by luck
- The subscription boundary (Kimi K2.x) is a real operational
constraint, not a code issue
- Rubric engineering is its own discipline — a strict-but-wrong
validator can reject correct code; shipping the test harness
required tuning against actual model outputs
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two distinct retry loops now both cap at 6 and serve different
purposes:
1. Per-cloud-call continuation (Phase 21 primitive) — when a single
cloud call returns empty or truncated, stitches up to 6
continuation calls. Handles output-overflow.
2. Per-TASK retry (this commit) — when the whole task errors
(500/404, thin answer, etc.), retries the full task up to 6
times. Each retry gets PRIOR ATTEMPTS' failures injected into
the prompt as learning context, so attempt N+1 is informed by
what N failed at. Handles error-recovery with compounding
context.
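A hedged sketch of how the two bounded loops nest (names are stand-ins;
cloudCall is assumed to run the Phase 21 continuation loop internally):

    declare function cloudCall(prompt: string, maxContinuations: number): Promise<string>;
    declare function localFallback(prompt: string): Promise<string>;

    const MAX_TASK_RETRIES = 6;   // per-task loop (this commit)
    const MAX_CONTINUATIONS = 6;  // per-cloud-call loop (Phase 21 primitive)

    async function runTask(prompt: string): Promise<string> {
      const priorErrors: string[] = [];
      for (let attempt = 1; attempt <= MAX_TASK_RETRIES; attempt++) {
        try {
          const learning = priorErrors.map((e) => `do not repeat: ${e}`).join("\n");
          // cloudCall stitches up to 6 continuation calls if a single
          // response comes back empty or truncated (output-overflow).
          return await cloudCall(prompt + "\n" + learning, MAX_CONTINUATIONS);
        } catch (err) {
          priorErrors.push(String(err)); // compounding context for attempt N+1
        }
      }
      return localFallback(prompt); // last-ditch local path if all 6 fail
    }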
Both loops fired on iter 3 of the stress run, proving them
independent and composable:
FORCING TASK-RETRY LOOP — iter 3 will cycle through 5 invalid
models + 1 valid
attempt 1/6: model=deliberately-invalid-model-attempt-1
/v1/chat 502: ollama.com 404: model not found
attempt 2/6: [with prior-failure context]
... (5 failures total, each with the full chain of prior errors)
attempt 6/6: model=gpt-oss:20b [with prior-failure context]
continuation retry 1..6 (empty responses)
SUCCEEDED after 5 prior failures (441 chars)
What J was asking to prove:
"I expect it to retry the process six times to build on the
knowledge database... when an error is legitimately triggered
that it will go through six times... without getting caught in
a loop"
Proof:
- 6/6 attempts fired on the FORCED iteration
- Each retry embedded the preceding attempts' errors as "do not
repeat" context
- Hard cap at MAX_TASK_RETRIES (6) prevents infinite loops
- Last-ditch local fallback exists if all 6 still fail
- Other iterations succeed on attempt 1 — the loop ONLY fires
when errors are legitimately triggered
Stress run totals (runs/moan4h71/):
6/6 iterations complete, 58 cloud calls, 306s end-to-end
tree-splits: 6/6 · continuations: 10 · rescues: 2
iter 3: 8197+2800 tok, 6 task attempts, 6 continuation retries
summary + per-iter JSON stored locally for inspection
What this proves that prior stress runs did NOT:
- Error-recovery at task granularity is live, not aspirational
- Compounding failure context flows between retries as text
- Loop bound is enforced; runaway cases aren't possible
- Two retry mechanisms compose without deadlock (continuation
inside task-retry inside tree-split)
Follow-ups worth doing (separate PRs):
- Persist retry-history to observer :3800 so cross-run learning
sees the failure patterns
- Route retries through /vectors/hybrid to surface similar prior
errors from the real KB (currently only in-memory across one
iteration)
- Fix citation regex in summary — iter 6 received 5 prior IDs
but counter shows 0 (regex needs to tolerate hyphens in IDs)
Real end-to-end test of the Lakehouse pipeline at scale. Runs the
PRD (63 KB, 901 lines → 93 chunks) through 6 iterations with cloud
inference, intentional failure injection, and tight context budget
to force every Phase 21 primitive to fire.
What the test exercises:
- Sidecar /embed for 93 chunks (nomic-embed-text)
- In-memory cosine retrieval for top-K per iteration
- Tree-split (shard → summarize → scratchpad → merge) when context
  chunks exceed the 4000-char budget (sketch after this list)
- Scratchpad truncation to keep compounding context bounded
- Cloud inference via /v1/chat provider=ollama_cloud (gpt-oss:120b)
- Injected primary-cloud failure on iter 3 (invalid model name) +
rescue with gpt-oss:20b — proves catch-and-retry isn't dead code
- Playbook seeding per iteration (real HTTP against gateway)
- Prior-iteration answer injection for compounding (not just IDs —
the first version passed IDs only and the model ignored them)
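A hedged sketch of the tree-split shape exercised here (summarize and
truncate are stand-ins for the shipped primitives):

    declare function summarize(text: string): Promise<string>;
    declare function truncate(text: string, max: number): string;

    // shard → summarize → scratchpad → merge, with the scratchpad kept bounded
    async function treeSplit(chunks: string[], budget = 4000): Promise<string> {
      if (chunks.join("\n").length <= budget) return chunks.join("\n"); // no split needed
      let scratchpad = "";
      for (const shard of chunks) {
        const digest = await summarize(shard);                     // map: distill each shard
        scratchpad = truncate(scratchpad + "\n" + digest, budget); // stay bounded
      }
      return summarize(scratchpad);                                // reduce: final synthesis
    }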
Live run results (tests/real-world/runs/moamj810/):
6/6 iterations complete, 42 cloud calls total, 245s end-to-end
tree-splits: 6/6 (every iter overflowed 4K budget)
continuations: 0 (no responses hit max_tokens)
rescues: 1 (iter 3 injected failure → gpt-oss:20b → valid answer)
iter 6 answer explicitly cites [pb:pb-seed-82e1] — compounding is real
scratchpad truncation fired on iter 6 as designed
What this PROVES:
- Tree-split primitives work under real context pressure, not just
in unit tests. The 4000-char budget forced every iteration to
shard 12 chunks → 6 shards → scratchpad → final answer.
- Rescue on primary failure is wired and produces answers from a
weaker model rather than erroring out.
- Compounding context injection works: iter 6's prompt had the 5
prior answers in its citation block, and the cloud model
acknowledged at least one via [pb:...] notation.
- The existence claims in Phase 21 (continuation + tree-split) are
backed by executable evidence, not just unit tests.
What this DOESN'T prove (deliberate — scoped for follow-up):
- Continuation retries (no iter hit max_tokens in this run; would
need a harder prompt or lower max_tokens to force)
- Real integration with /vectors/hybrid endpoint (test does in-memory
cosine instead, bypassing gateway vector surface)
- Observer consumption of these runs (nothing posted to :3800 during
the test — adding that is Phase A integration, handled separately)
Files:
tests/real-world/enrich_prd_pipeline.ts (333 LOC)
tests/real-world/runs/moamj810/{iter_1..6.json, summary.json}
— artifacts from the stress run, committed for inspection
Follow-ups worth doing:
1. Lower max_tokens / harder prompt to force continuation path
2. Route retrieval through /vectors/hybrid for real Phase 19 boost
3. POST per-iteration summary to observer :3800 so runs accumulate
like scenario runs do
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
J asked directly: "did we implement our memory findings so that our
knowledge base and our configuration playbook [work] seamlessly with
whatever input they're given?" Honest answer tonight was "one of five
findings shipped, normalizer is the blocker." This closes that gap.
NORMALIZER (tests/multi-agent/normalize.ts):
Accepts structured JSON, natural language, or mixed. Returns canonical
NormalizedInput { role, city, state, count, client, deadline, intent,
confidence, extraction_method, missing_fields } for any downstream
consumer.
Three-tier path:
1. Structured fast-path — already-shaped input skips LLM
2. Regex path — "need 3 welders in Nashville, TN" parses without an LLM
   call. The city/state parser is tightened to 1-3 capitalized words, an
   "in {city}" anchor preference, and case-exact full-state-name variants,
   so "Forklift Operators in Chicago" can no longer be captured as the
   city name (sketch after this list)
3. LLM fallback — qwen3 local with think:false + 400 max_tokens for
inputs the regex can't handle
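A rough sketch of what the tier-2 regex path could look like, assuming
a hypothetical pattern (the shipped parser also handles full state
names and the anchor-preference rules):

    // "need 3 welders in Nashville, TN" → role/city/state/count, no LLM call.
    // City constrained to 1-3 capitalized words, anchored on "in ".
    const NL_FILL =
      /need\s+(\d+)\s+([A-Za-z &-]+?)\s+in\s+([A-Z][a-zA-Z]+(?:\s[A-Z][a-zA-Z]+){0,2}),?\s*([A-Z]{2})\b/;

    function regexNormalize(input: string) {
      const m = NL_FILL.exec(input);
      if (!m) return null; // fall through to the tier-3 LLM path
      const [, count, role, city, state] = m;
      return { role: role.trim(), city, state, count: Number(count),
               extraction_method: "regex" as const };
    }
    // regexNormalize("need 3 welders in Nashville, TN")
    //   → { role: "welders", city: "Nashville", state: "TN", count: 3, ... }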
Unit tests (tests/multi-agent/normalize.test.ts): 9/9 pass. Covers
structured fast-path, misplacement→rescue intent, state-name→abbrev
conversion, regex extraction from natural language, plural role +
full state name edge case, rescue intent keyword precedence, partial
input reporting missing fields, empty object fallthrough, async/sync
parity on clean inputs.
UNIFIED MEMORY QUERY (tests/multi-agent/memory_query.ts):
One function, six parallel fan-outs, one bundle returned:
- playbook_workers — hybrid_search via gateway with use_playbook_memory
- pathway_recommendation — KB recommender for this sig
- neighbor_signatures — K-NN sigs weighted by staffer competence
- prior_lessons — T3 overseer lessons filtered by city/state
- top_staffers — competence-sorted leaderboard
- discovered_patterns — top workers endorsed across past playbooks
for this (role, city, state)
- latency_ms — per-source + total
Every branch is best-effort: one source down doesn't break the bundle.
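A hedged sketch of the best-effort fan-out; the fetch* helpers are
stand-ins for the six sources above:

    type NormalizedInput = { role?: string; city?: string; state?: string; count?: number };
    declare function fetchPlaybookWorkers(i: NormalizedInput): Promise<unknown>;
    declare function fetchPathwayRec(i: NormalizedInput): Promise<unknown>;
    declare function fetchNeighborSigs(i: NormalizedInput): Promise<unknown>;
    declare function fetchPriorLessons(i: NormalizedInput): Promise<unknown>;
    declare function fetchTopStaffers(): Promise<unknown>;
    declare function fetchDiscoveredPatterns(i: NormalizedInput): Promise<unknown>;

    async function memoryQuery(input: NormalizedInput) {
      const t0 = Date.now();
      const settled = await Promise.allSettled([
        fetchPlaybookWorkers(input), fetchPathwayRec(input), fetchNeighborSigs(input),
        fetchPriorLessons(input), fetchTopStaffers(), fetchDiscoveredPatterns(input),
      ]);
      // one source down doesn't break the bundle: failed branches become null
      const [workers, pathway, neighbors, lessons, staffers, patterns] =
        settled.map((r) => (r.status === "fulfilled" ? r.value : null));
      return { playbook_workers: workers, pathway_recommendation: pathway,
               neighbor_signatures: neighbors, prior_lessons: lessons,
               top_staffers: staffers, discovered_patterns: patterns,
               latency_ms: { total: Date.now() - t0 } };
    }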
HTTP ENDPOINT (mcp-server/index.ts):
POST /memory/query with body {input: <anything>} → MemoryQueryResult
Returns the same shape the TS function does. Typed with types.ts for
future UI consumption.
VERIFIED:
curl POST /memory/query with structured {role,city,state,count}
→ extraction_method=structured, 10 playbook workers, top score 0.878
curl POST /memory/query with "I need 3 welders in Nashville, TN"
→ extraction_method=regex (no LLM call), 319ms total, 8 endorsements
for Lauren Gomez auto-discovered as top Nashville Welder
Honest remaining gaps (documented for next phase):
- Mem0 ADD/UPDATE/DELETE/NOOP — we still only ADD + mark_failed
- Zep validity windows — playbook entries have timestamps but no
retirement semantic
- Letta working-memory / hot cache — every query scans all 1560
playbook entries
- Memory profiles / scoped queries — global pool, no per-staffer
private subsets
2 of 5 findings now shipped (multi-strategy retrieval in Rust, input
normalization + unified query in TS). The remaining 3 are architectural
additions queued as Phase 25 items — validity windows first since it's
the most load-bearing for long-running systems.
Closes the gap J flagged: observer wraps MCP:3700, scenarios hit
gateway:3100 directly, observer idle at 0 ops across 3600+ cycles.
Now scenarios POST per-event outcomes to observer's new HTTP ingest
on :3800, observer consumes them alongside MCP-wrapped ops, and the
ERROR_ANALYZER and PLAYBOOK_BUILDER loops see the full picture.
observer.ts:
- Bun.serve() HTTP listener on OBSERVER_PORT (default 3800; sketch
  after this list):
GET /health — basic + ring depth
GET /stats — total / success / failure / by_source / recent
scenario ops digest
POST /event — accept scenario outcome, shape it into ObservedOp
with source="scenario" + staffer_id + sig_hash +
event_kind + role/city/state + rescue flags
- recordExternalOp() — shared ring-buffer insert so the main analyzer
+ playbook builder don't care where the op came from
- ObservedOp extended with provenance fields
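A minimal sketch of the listener shape (ring and recordExternalOp are
stand-ins for the shared ring-buffer pieces):

    declare const ring: unknown[];
    declare function recordExternalOp(op: Record<string, unknown>): void;

    Bun.serve({
      port: Number(process.env.OBSERVER_PORT ?? 3800),
      async fetch(req) {
        const url = new URL(req.url);
        if (req.method === "GET" && url.pathname === "/health")
          return Response.json({ ok: true, ring_size: ring.length });
        if (req.method === "POST" && url.pathname === "/event") {
          const body = await req.json() as Record<string, unknown>;
          recordExternalOp({ source: "scenario", ...body }); // same ring as MCP ops
          return Response.json({ accepted: true, ring_size: ring.length });
        }
        return new Response("not found", { status: 404 });
      },
    });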
persistOp() FIX — old path POSTed to /ingest/file?name=observed_operations
which REPLACES the dataset (flagged in feedback_ingest_replace_semantics.md).
Every op was silently wiping all prior ops. Replaced with append to
data/_observer/ops.jsonl so the historical trace is durable across
analyzer cycles and process restarts.
scenario.ts:
- OBSERVER_URL env (default http://localhost:3800)
- postObserverEvent() helper with 2s AbortSignal.timeout so the observer
  being down doesn't block scenario flow (sketch after this list)
- Per-event POST after ctx.results.push(result), carrying staffer_id,
sig_hash (via imported computeSignature), event_kind + role + city
+ state + count + rescue_attempted / rescue_succeeded + truncated
output_summary
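A hedged sketch of the helper; the key detail is the 2s
AbortSignal.timeout and the swallowed error:

    const OBSERVER_URL = process.env.OBSERVER_URL ?? "http://localhost:3800";

    async function postObserverEvent(event: Record<string, unknown>): Promise<void> {
      try {
        await fetch(`${OBSERVER_URL}/event`, {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify(event),
          signal: AbortSignal.timeout(2000), // 2s cap
        });
      } catch {
        // observer down or slow: scenario flow continues regardless
      }
    }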
VERIFIED:
curl POST /event → {"accepted":true,"ring_size":1}
curl GET /stats → {"total":1,"successes":1,"by_source":{"scenario":1},
"recent_scenario_ops":[{...staffer_id,kind,role}]}
Final v3 demo leaderboard (9 runs per staffer, cumulative 3 batches):
James (local): 92.9% fill, 36.8 cites, score 0.775 — RANK 1
Maria (full): 81.0% fill, 26.2 cites, score 0.727
Sam (basic): 61.9% fill, 28.2 cites, score 0.640
Alex (minimal): 59.5% fill, 32.2 cites, score 0.631
Honest finding: Alex has MORE citations than Sam despite NO T3 and NO
rescue. Playbook inheritance alone is firing hardest when overseer is
absent. The 59.5% fill rate (up from 0% when qwen2.5 was executor)
proves cloud-exec + playbook inheritance is the floor the architecture
delivers.
Local gpt-oss:20b T3 outperforms cloud gpt-oss:120b T3 by 12pt fill
rate on this workload — the cloud overseer pays latency + variance for
no measurable gain. Worth flagging in the next models.json tune.
J flagged the audit: "make sure everything flows coherently, no
pseudocode or unnecessary patches or ignoring any particular part of
what we built." This is that pass.
PRD.md updates:
- Phase 19 refinement block — geo-filter + role-prefilter WIRED with
citation density numbers (0.32 → 1.38, and 2 → 28 on same scenario).
- Phase 20 rewrite — mistral dropped, qwen3.5 + qwen3 local hot path,
think:false as the key mechanical finding, kimi-k2.6 upgrade path.
- Phase 21 status block — think plumbing + cloud executor routing
added after original commit.
- Phase 22 item B (cloud rescue) — pivot sanitizer, rescue verified
1/3 on stress_01.
- Phase 23 NEW — staffer identity + tool_level + competence-weighted
retrieval + kb_staffer_report. Auto-discovered worker labels called
out with real numbers (Rachel Lewis 12× across 4 staffers).
- Phase 24 NEW — Observer/Autotune integration gap DOCUMENTED, not
fixed. Observer has been idle at 0 ops for 3600+ cycles because
scenarios hit gateway:3100 directly, bypassing MCP:3700 which the
observer wraps. This is the honest "we're not using it in these
tests" signal J surfaced. Fix deferred; gap visible now.
PHASES.md:
- Appended Phases 20-23 as checked, Phase 24 as unchecked gap.
- Updated footer count: 102 unit tests across all layers.
- Latest line updated with 14× citation lift + 46.4pt tool-asymmetry
finding.
scenario.ts:
- snapshotConfig() was defined but never called. Now fires at every
scenario start with a stable sha256 hash over the active model set +
tool_level + cloud flags. config_snapshots.jsonl finally populates,
which the error_corrections diff path needs to work correctly.
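A hedged sketch of the snapshot hash; the exact field set is assumed
from the description:

    import { createHash } from "node:crypto";

    declare const ACTIVE_EXECUTOR: string, ACTIVE_REVIEWER: string,
                  ACTIVE_TOOL_LEVEL: string, ACTIVE_OVERVIEW_CLOUD: boolean;

    // stable sha256 over the active model set + tool_level + cloud flags;
    // same config → same hash, so error_corrections can diff across runs
    function snapshotConfig(): string {
      const active = { executor: ACTIVE_EXECUTOR, reviewer: ACTIVE_REVIEWER,
                       tool_level: ACTIVE_TOOL_LEVEL, cloud: ACTIVE_OVERVIEW_CLOUD };
      return createHash("sha256").update(JSON.stringify(active)).digest("hex");
    }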
kb.test.ts (new): 4 signature invariant tests — stability across
unrelated fields (date, contract, staffer), sensitivity to role/city/
count changes, digest shape. All pass under `bun test`.
service.rs: 6 Rust extractor tests for extract_target_geo +
extract_target_role — basic, missing-state-returns-none, word
boundary (civilian != city), multi-word role, absent role, quoted
value parse. All pass under `cargo test -p vectord --lib extractor_tests`.
Dangling items now honestly documented rather than silently pending:
- Chunking cache (config/models.json SPEC, not wired) — flagged
- Playbook versioning (SPEC, not wired) — flagged
- Observer integration (WIRED but disconnected) — new Phase 24
Two coupled changes from the 2026 agent-memory research + tool
asymmetry findings.
SCENARIO (weak-tier cloud substitute):
qwen2.5 collapsed to 0/14 across the basic/minimal tool_levels.
Replace with cloud kimi-k2.5 on Ollama Cloud — same family as k2.6
(pro-tier locked today, on J's upgrade path). Plumb cloud flag
through ACTIVE_EXECUTOR_CLOUD / ACTIVE_REVIEWER_CLOUD into
generateContinuable so executor/reviewer can route to cloud when
tool_level requires. think:false supported by Kimi family.
Tool level mapping (revised):
full — qwen3.5 local + qwen3 local + cloud gpt-oss:120b T3 + rescue
local — qwen3.5 local + qwen3 local + local gpt-oss:20b T3 + rescue
basic — kimi-k2.5 cloud + qwen3 local + local T3, no rescue
minimal — kimi-k2.5 cloud + qwen3 local, no T3, no rescue.
Playbook inheritance alone on the decision path.
This is the honest version of J's "minimal tools still works via
inheritance" hypothesis — with the executor no longer broken at the
tokenizer level, we can actually measure whether playbook retrieval
substitutes for missing overseers.
PLAYBOOK_MEMORY (multi-strategy retrieval):
Zep / Mem0 research shows multi-strategy rerank (semantic + keyword +
graph + temporal) outperforms single-strategy cosine. Lakehouse now
has a two-tier scheme:
1. Exact (role, city, state) match: skip cosine, assign similarity=1.0,
take up to top_k/2+1 slots. These are identity-class neighbors —
the strongest possible signal.
2. Cosine fallback within the same (city, state) but different role:
fills remaining slots.
Exposed as compute_boost_for_filtered_with_role(target_geo, target_role).
Backwards-compatible: compute_boost_for_filtered forwards with role=None
so existing callers keep their current behavior.
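A TypeScript sketch of the two-tier selection (the real implementation
is Rust in playbook_memory.rs; the entry shape is assumed):

    type Entry = { role: string; city: string; state: string; similarity: number };

    function selectNeighbors(entries: Entry[], role: string | null,
                             city: string, state: string, topK: number): Entry[] {
      // Tier 1: exact (role, city, state) matches are identity-class neighbors;
      // skip cosine and pin similarity to 1.0, up to top_k/2+1 slots.
      const exact = role === null ? [] : entries
        .filter((e) => e.role === role && e.city === city && e.state === state)
        .map((e) => ({ ...e, similarity: 1.0 }))
        .slice(0, Math.floor(topK / 2) + 1);
      // Tier 2: cosine fallback within the same (city, state), different role.
      const fallback = entries
        .filter((e) => e.city === city && e.state === state && e.role !== role)
        .sort((a, b) => b.similarity - a.similarity)
        .slice(0, topK - exact.length);
      return [...exact, ...fallback];
    }

With role=null, tier 1 is empty and tier 2 degrades to plain cosine
within the geo filter, which matches the backwards-compatible
compute_boost_for_filtered path.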
Service.rs wires both: extract_target_geo and extract_target_role pull
from the executor's SQL filter. grab_eq_value is factored out of
extract_target_geo so both lookups share one parser. Diagnostic log
now prints target_role alongside target_geo for every hybrid_search:
playbook_boost: boosts=88 sources=39 parsed=39 matched=5
target_geo=Some(("Nashville", "TN")) target_role=Some("Welder")
Verified: Nashville Welder query returns 5/10 boosted workers in
top_k with clean role+geo provenance.
Research sources: atlan.com Agent Memory Frameworks 2026, Mem0 paper
(arxiv 2504.19413), Zep/Graphiti LongMemEval comparison, ossinsight
Agent Memory Race 2026.
kimi-k2.6 on current key returns 403 — pro-tier upgrade required.
kimi-k2.5 is the substitute today; swap to k2.6 by renaming one line
in applyToolLevel once the subscription lands.
Staffer.tool_level now controls which subsystems a specific run gets:
full — qwen3.5 + qwen3 + cloud T3 + cloud rescue
local — qwen3.5 + qwen3 + local gpt-oss:20b T3 + rescue
basic — qwen2.5 + qwen2.5 + local T3, no rescue
minimal — qwen2.5 + qwen2.5, NO T3, NO rescue. Playbook
inheritance only.
applyToolLevel() mutates module-scoped ACTIVE_* slots each run from the
env defaults, so prior staffer's overrides never leak. Hot-path code
reads ACTIVE_EXECUTOR / ACTIVE_REVIEWER / ACTIVE_T3_DISABLED /
ACTIVE_OVERVIEW_CLOUD / ACTIVE_RETRY_ON_FAIL instead of the baked
constants.
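A hedged sketch of the reset-then-override pattern (slot names from the
commit; model IDs from the mapping above):

    let ACTIVE_EXECUTOR = process.env.LH_EXECUTOR ?? "qwen3.5:latest";
    let ACTIVE_T3_DISABLED = false;
    let ACTIVE_RETRY_ON_FAIL = true;

    function applyToolLevel(level: "full" | "local" | "basic" | "minimal"): void {
      // reset from env defaults first so the prior staffer's overrides never leak
      ACTIVE_EXECUTOR = process.env.LH_EXECUTOR ?? "qwen3.5:latest";
      ACTIVE_T3_DISABLED = false;
      ACTIVE_RETRY_ON_FAIL = true;
      if (level === "basic" || level === "minimal") {
        ACTIVE_EXECUTOR = "qwen2.5";  // weak executor tier
        ACTIVE_RETRY_ON_FAIL = false; // no cloud rescue
      }
      if (level === "minimal") ACTIVE_T3_DISABLED = true; // playbook inheritance only
    }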
The architectural question this answers: does playbook_memory
inheritance carry enough knowledge to let a weakly-tooled coordinator
still produce usable outcomes? "Minimal" Alex runs qwen2.5 exec + no
reviewer overseer + no cloud rescue. If Alex still fills events at a
reasonable rate, the playbook system is the real knowledge carrier —
the senior stack is nice-to-have, not the sine qua non.
Demo personas mapped:
Maria (senior, 48mo, full)
James (mid, 14mo, local)
Sam (junior, 4mo, basic)
Alex (trainee, 1mo, minimal)
Same 3 contracts (Nashville downtown, Joliet warehouse, Indianapolis
assembly) across all four → 12 runs. KB + kb_staffer_report.py
leaderboard already wired; competence_score will now reflect real tool
asymmetry instead of LLM sampling variance.
Matrix-index the "who handled this" dimension so top staffers become
the training signal and juniors inherit their playbooks automatically
via the boost pipeline. Auto-discovered indicators emerge from
comparing trajectories across staffers on similar contracts — that was
always the architectural point; this wires the last piece.
ContractTerms:
- deadline, budget_total_usd, budget_per_hour_max, local_bonus_per_hour,
local_bonus_radius_mi, fill_requirement ("paramount" | "preferred")
- Attached to ScenarioSpec, propagated into T3 checkpoint + cloud
rescue prompts so cloud reasons about trade-offs (pivot within bonus
radius first; respect per-hour cap; split across cities when
fill_requirement=paramount).
Staffer:
- {id, name, tenure_months, role: senior|mid|junior|trainee}
- On ScenarioSpec; logged at scenario start; attached to KB outcome
- Recomputed StafferStats written to data/_kb/staffers.jsonl after
every run: total_runs, fill_rate, avg_turns, avg_citations,
rescue_rate, competence_score.
- Competence formula: 0.45*fill_rate + 0.20*turn_efficiency +
0.20*citation_density + 0.15*rescue_rate. Normalized to 0..1.
findNeighbors now returns weighted_score = cosine × best_staffer_competence
(floored at 0.3 so high-similarity low-competence neighbors still
surface). pathway_recommender prompt shows the top staffer's identity
so cloud knows WHOSE playbook it's synthesizing from.
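The formula and the neighbor weighting, transcribed directly (inputs
assumed pre-normalized to 0..1):

    type StafferStats = { fill_rate: number; turn_efficiency: number;
                          citation_density: number; rescue_rate: number };

    const competenceScore = (s: StafferStats): number =>
      0.45 * s.fill_rate + 0.20 * s.turn_efficiency +
      0.20 * s.citation_density + 0.15 * s.rescue_rate;

    // findNeighbors weighting: competence floored at 0.3 so high-similarity,
    // low-competence neighbors still surface
    const weightedScore = (cosine: number, bestStafferCompetence: number): number =>
      cosine * Math.max(bestStafferCompetence, 0.3);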
Demo infrastructure:
- tests/multi-agent/gen_staffer_demo.ts: 4 personas (Maria senior,
James mid, Sam junior, Alex trainee) × 3 contracts (Nashville Welder,
Joliet Warehouse, Indianapolis Assembly). 12 scenarios total.
- scripts/run_staffer_demo.sh: runs the 12 sequentially with
LH_OVERVIEW_CLOUD=1. Post-run calls kb_staffer_report.py.
- scripts/kb_staffer_report.py: leaderboard + cross-staffer worker
overlap (names endorsed by ≥2 staffers → auto-discovered high-value
workers). Top vs bottom differential.
gen_scenarios.ts (Phase 22 generator) also now emits contract terms
on 70% of generated specs — future KB batches populate with realistic
constraint patterns instead of bare role+city+count.
Stress scenario from item A intentionally NOT the production test.
Real staffing has constraints; Nashville contract + staffer demo is
the honest test of whether the architecture produces measurable
differential between coordinator skill levels.
Demo batch launched — 12 runs × ~3min each ≈ 40min unattended. Report
emitted after batch.
When a scenario event fails (drift abort or other error) and
LH_RETRY_ON_FAIL is on (default when cloud T3 is enabled), ask cloud
for a concrete pivot — new city, role, or count — then re-run the
event with the remediation's fields. Capped at 1 retry per event so a
genuinely-impossible scenario can't burn budget.
requestCloudRemediation(event, result):
- Feeds the same diagnostic bundle T3 checkpoints get (SQL filters,
row counts, SQL errors, reviewer drift reasons, gap signals).
- Prompt demands structured JSON: {retry, new_city, new_role,
new_count, rationale}.
- Cloud is instructed to pivot to NEAREST alternate city when
zero-supply detected, broaden role when uniquely scarce, reduce
count when clearly unachievable, or return retry=false when no
pivot seems viable.
EventResult additions:
- retry_attempt, retry_remediation (with rationale + cloud_model +
duration), retry_result (full inner result shape), original_event.
- If retry succeeded, it becomes the primary result and original_event
preserves what was attempted first. If retry also failed, the
primary stays the failure and retry is recorded alongside.
Sanitizer on cloud output: model sometimes emits "Hammond, IN" in
new_city with "IN" in a non-existent new_state field, producing
"Hammond, IN, IN" downstream. Split new_city on comma, take first
token as city, extract state if present after the comma. Original
event's state is the fallback.
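A minimal sketch of the sanitizer as described:

    function sanitizeCity(newCity: string, fallbackState: string): { city: string; state: string } {
      // split on comma: first token is the city, anything after it is the state
      const [city, maybeState] = newCity.split(",").map((s) => s.trim());
      return { city, state: maybeState || fallbackState }; // original event's state as fallback
    }
    // sanitizeCity("Hammond, IN", "IL") → { city: "Hammond", state: "IN" }
    // sanitizeCity("South Bend", "IN") → { city: "South Bend", state: "IN" }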
VERIFIED on stress_01.json with LH_OVERVIEW_CLOUD=1:
Without rescue (item A baseline): 1/5 events ok
With rescue (item B): 3/5 events ok
Gary IN misplacement: drift → cloud proposed South Bend IN → retry
filled 1/1. Rationale stored in retry_remediation for forensics.
Known limits surfaced (future work):
- City-field mangling failed one rescue before the sanitizer landed;
next run will use the fix.
- Cloud picks alternate cities without knowing ground-truth supply.
Flint → Saginaw pivoted but Saginaw also had sparse Welders.
Future: expose a /vectors/supply-estimate endpoint cloud can consult
before proposing a pivot.
Proves cloud passthrough works end-to-end AND fixes the diagnostic
quality problem that the first run surfaced.
STRESS SCENARIO (tests/multi-agent/scenarios/stress_01.json):
Five genuinely hard events with varied failure modes:
- Gary, IN 5× Electrician: ZERO supply (city not in workers_500k)
- Peoria, IL 8× Safety Coordinator: scarce role, initial pool only 5
- Flint, MI 3× Welder: ZERO supply
- Grand Rapids, MI 4× Tool & Die Maker: scarce but solvable
- Gary, IN 1× Electrician misplacement: repeats event 1's impossibility
FIRST RUN (stress v1) — cloud passthrough works, diagnosis vague:
T3 checkpoint: "Potential drift flags for upcoming role"
Lesson: "Before dispatching, query pool status. Update turn counter..."
Generic tactical advice that doesn't address the real problem.
Root cause: T3 prompt only saw outcome summary, not the raw
SQL/pool/drift signals the executor had in its log.
DIAGNOSTIC FIX:
- Added LogEntry[] `sharedLog` parameter to runAgentFill so the caller
retains the trace even when runAgentFill throws drift-abort.
- EventResult gained `diagnostic_log` field populated on both OK and
FAIL paths.
- extractDiagnostics() pulls SQL filters, hybrid_search row counts,
  SQL errors, and reviewer drift notes from the log (sketch after this
  list).
- Checkpoint prompt now includes FAILURE FORENSICS block for failed
events: SQL filters attempted, row counts, errors, drift reasons,
and an explicit teaching note about zero-supply detection.
- Cross-day lesson prompt flags each event with [ZERO-SUPPLY: pivot
city needed] tag when drift reasons mention "no match"/"no
candidates"/"0 rows". PRIORITY clause in the prompt tells the model
its lesson MUST name alternate cities when that tag appears.
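A hedged sketch of the extraction, assuming a simple kind-tagged log
shape:

    type LogEntry = { kind: string; text: string };

    function extractDiagnostics(log: LogEntry[]) {
      const byKind = (k: string) => log.filter((e) => e.kind === k).map((e) => e.text);
      return {
        sql_filters: byKind("sql_filter"),
        row_counts: byKind("hybrid_rows"),
        sql_errors: byKind("sql_error"),
        drift_reasons: byKind("drift"),
        // drives the [ZERO-SUPPLY: pivot city needed] tag in the lesson prompt
        zero_supply: log.some((e) => /no match|no candidates|0 rows/i.test(e.text)),
      };
    }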
SECOND RUN (stress v2 with enriched prompt) — cloud diagnosis sharp:
T3 after Flint: risk="Zero candidate supply for Welder in Flint"
hint="search Welder×3 in Saginaw, MI (≈30 mi) or
expand role to Metal Fabricator"
T3 after Gary: risk="Zero supply for Electrician in Gary, IN"
hint="Pivot to Chicago, IL (≈40 min); broaden to
Electrical Technician within 60 min radius"
Lesson: specific, per-city, with distances, role-broadening
fallback, and pre-loading strategy — actionable for item B retry.
Cloud 120b call latencies consistent: 4.8-8.0s per prompt. Cloud
passthrough proven under stress.
Fill outcomes unchanged (1/5 — three impossible events correctly
rejected, plus one JSON-emission edge case propagating through the
retry-pivot reasoning). The knowledge to rescue them now exists in the
lesson; item B wires the retry.
ITEM 1 — k CAP + REASON FIELD
The hybrid_search default k was hard-coded to 10. For multi-fill events
(5× expansion, 4× emergency) that's pool=10 → propose 5-of-10, half
the candidates become the answer with no room for rejection. Executor
prompt now instructs k to scale with target_count: k = max(count*5, 20),
cap 80. Default helper bumped 10 → 20.
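The adaptive-k rule as stated:

    const adaptiveK = (count: number): number => Math.min(Math.max(count * 5, 20), 80);
    // adaptiveK(1) → 20, adaptiveK(5) → 25, adaptiveK(20) → 80 (capped)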
Fill.reason dropped from required to optional. Nothing downstream ever
consumed it — resolveWorkerIds, sealSale, retrospective all use
candidate_id and name. Models loved to write 100-150 char justifications
per fill; on 4+ fills that blew the JSON budget before the structure
closed. Test 1 run result after this change: FIRST EVER 5/5 on the
Riverfront Steel scenario, 13 total turns across 5 events. The event
that failed last run (emergency 4×Loader with truncated reason-field
continuation) now clears in 2 turns.
Progression:
mistral baseline: 0/5
qwen3.5 + continuation + think:false: 4/5
qwen3.5 + k=20 + no-reason: 5/5 ✓
ITEM 2 — SCENARIO GENERATOR (NOT YET TESTED E2E)
tests/multi-agent/gen_scenarios.ts emits N deterministic ScenarioSpecs
with varied clients (15 companies), cities (20 Midwest cities known
to exist in workers_500k), role mixes (14 industrial staffing roles,
weighted realistic), and event sequences. Each gets a unique sig_hash
so the KB populates with distinct neighbor signatures.
scripts/run_kb_batch.sh runs all generated specs sequentially against
scenario.ts, logs per-scenario outcomes, and reports KB state at the
end. Each run takes ~2-4min; 20-30 scenarios = 1-2hr unattended.
Next: test the generator+batch on a small N (3-5) to verify KB
populates correctly and pathway recommendations start getting neighbor
signal instead of cold-starts. Then item 3 (Rust re-weighting of
hybrid_search by playbook_memory success).
Meta-layer over Phase 19 playbook_memory. Phase 19 answers "which
WORKERS worked for this event"; KB answers "which CONFIG worked for
this playbook signature" — model choice, budget hints, pathway notes,
error corrections.
tests/multi-agent/kb.ts:
- computeSignature(): stable sha256 hash of the (kind, role, count,
  city, state) tuple sequence. Same scenario shape → same sig (sketch
  after this list).
- indexRun(): extracts sig, embeds spec digest via sidecar, appends
outcome record, upserts signature to data/_kb/signatures.jsonl.
- findNeighbors(): cosine-ranks the k most-similar signatures from
prior runs for a target spec.
- detectErrorCorrections(): scans outcomes for same-sig fail→succeed
pairs, diffs the model set, logs to error_corrections.jsonl.
- recommendFor(): feeds target digest + k-NN neighbors + recent
corrections to the overview model, gets back a structured JSON
recommendation (top_models, budget_hints, pathway_notes), appends
to pathway_recommendations.jsonl. JSON-shape constrained so the
executor can inherit it mechanically.
- loadRecommendation(): at scenario start, pulls newest rec matching
this sig (or nearest).
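A hedged sketch of the signature hash; the tuple fields are from the
description above:

    import { createHash } from "node:crypto";

    type Ev = { kind: string; role: string; count: number; city: string; state: string };

    function computeSignature(events: Ev[]): string {
      // only the shape-defining fields participate, so date/contract/staffer
      // changes leave the sig stable
      const canonical = events
        .map((e) => [e.kind, e.role, e.count, e.city, e.state].join("|"))
        .join(";");
      return createHash("sha256").update(canonical).digest("hex");
    }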
scenario.ts:
- Reads KB recommendation at startup (alongside prior lessons).
- Injects pathway_notes into guidanceFor() executor context.
- After retrospective, indexes the run + synthesizes next rec.
Cold-start behavior: first run with no history writes a low-confidence
"no prior data" rec so the signal that something was attempted is
captured. Second run gets "low confidence, 0 neighbors" until a third
distinct sig gives the embedder something to compare against — hence
the upcoming scenario generator.
VERIFIED:
- data/_kb/ populated after one scenario run: 1 outcome (sig=4674…,
4/5 ok, 16 turns total), 1 signature, 2 recs (cold + post-run).
- Recommendation JSON-parsed cleanly from gpt-oss:20b overview model.
PRD Phase 22 added with file layout, cycle description, and the
rationale for file-based MVP → Rust port progression that matches
how Phase 21 primitives shipped.
What's NOT here yet (batched follow-ups per J's request, tested
between each):
- Lift the k=10 hybrid_search cap to adaptive k=max(count*5, 20)
- Scenario generator to bulk-populate KB with varied signatures
- Rust re-weighting: push playbook_memory success signal INTO
hybrid_search scoring, not just post-hoc boost
Three coupled fixes that together turned the Riverfront Steel scenario
from 0/5 (mistral) to 4/5 (qwen3.5) with T3 flagging real staffing
concerns rather than linter advice.
MODEL SWAP
- Executor: mistral → qwen3.5:latest (9.7B, 262K ctx, thinking).
mistral's decoder emitted malformed JSON on complex SQL filters
regardless of prompt; J called it — stop using mistral.
- Reviewer: qwen2.5 → qwen3:latest (40K ctx)
- Applied to scenario.ts, orchestrator.ts, network_proving.ts,
run_e2e_rated.ts
CONTINUATION PRIMITIVE (agent.ts)
- generateContinuable(): empty-response → geometric backoff retry;
  truncated-JSON → continue from the partial as a scratchpad; bounded by
  budget cap + max_continuations (sketch after this block). No more
  "bump max_tokens until it stops truncating" tourniquet.
- generateTreeSplit(): map-reduce for oversized input corpora with
running scratchpad digest, reduce pass for final synthesis.
- Empty text no longer throws — it's a signal to continuable that
thinking ate the budget.
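A hedged sketch of the continuable loop combining the three behaviors
above (generate and isComplete are stand-ins; the continuation bound is
illustrative):

    declare function generate(prompt: string): Promise<string>;
    declare function isComplete(jsonText: string): boolean;

    async function generateContinuable(prompt: string, maxContinuations = 3): Promise<string> {
      let scratchpad = "";
      for (let i = 0; i <= maxContinuations; i++) {
        const out = await generate(
          scratchpad ? `${prompt}\nContinue from this partial output:\n${scratchpad}` : prompt,
        );
        if (out === "") { // empty = thinking ate the budget, not an error
          await new Promise((r) => setTimeout(r, 250 * 2 ** i)); // geometric backoff
          continue;
        }
        scratchpad += out;
        if (isComplete(scratchpad)) return scratchpad; // truncated JSON → keep looping
      }
      throw new Error(`still incomplete after ${maxContinuations} continuations`);
    }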
think:false FOR HOT PATH
- qwen3.5 burned ~650 tokens of hidden thinking for trivial JSON
emission. For executor/reviewer/draft: think:false. For T3/T4/T5
overseers: thinking stays on (that's the point).
- Sidecar generate endpoint accepts `think` bool, passes through to
Ollama's /api/generate.
VERIFIED OUTCOMES
Riverfront Steel 2026-04-21, qwen3.5+continuable+think:false:
08:00 baseline_fill 3/3 4 turns
10:30 recurring 2/2 3 turns (1 playbook citation)
12:15 expansion 0/5 drift-aborted (5-fill orchestration
problem, separate work)
14:00 emergency 4/4 3 turns (1 citation)
15:45 misplacement 1/1 3 turns
→ T3 caught Patrick Ross double-booking across events
→ T3 flagged forklift cert drift on the event that failed
→ Cross-day lesson proposed "maintain buffer of ≥3 emergency
candidates, pre-fetch certs for expansion, booking system
cross-check" — real staffing advice, not generic linter output
PRD PHASE 21 rewritten to reflect the actual primitive shape (two-
call map-reduce with scratchpad glue) instead of the tourniquet
approach originally documented. Rust port queued for next sprint.
scripts/ab_t3_test.sh: A/B harness that chains B→C→D runs and emits
tests/multi-agent/playbooks/ab_scorecard.json.
PRD: add Phase 20 (model matrix, wired) and Phase 21 (context stability,
partial). Phase 21 exists because LLM Team hit this exact wall — running
multi-model ranking on large context silently truncated, rankings
degraded, no pipeline caught it. The stable answer: every agent call
goes through a budget check against the model's declared context_window
minus safety_margin, with a declared overflow_policy when the check
fails.
config/models.json:
- context_window + context_budget per tier
- overflow_policies block: summarize_oldest_tool_results_via_t3,
chunk_lessons_via_cosine_topk, two_pass_map_reduce,
escalate_to_kimi_k2_1t_or_split_decision
- chunking_cache spec (data/_chunk_cache/, corpus-hash keyed)
agent.ts:
- estimateTokens() — chars/4 with a ~15% safety bias
- CONTEXT_WINDOWS table (fallback; prod reads models.json)
- assertContextBudget() — throws on overflow with exact numbers, can
bypass with bypass_budget:true for callers with their own policy
- Wired into generate() and generateCloud() so EVERY call is checked
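A hedged sketch of the check (window table elided since prod reads
models.json; the safety-margin value is assumed):

    const CONTEXT_WINDOWS: Record<string, number> = { /* fallback table */ };

    // chars/4 with a ~15% safety bias, per the list above
    const estimateTokens = (text: string): number =>
      Math.ceil((text.length / 4) * 1.15);

    function assertContextBudget(model: string, prompt: string, safetyMargin = 1024): void {
      const window = CONTEXT_WINDOWS[model];
      if (!window) return; // no declared window: nothing to enforce
      const est = estimateTokens(prompt);
      const budget = window - safetyMargin;
      if (est > budget)
        throw new Error(
          `context overflow for ${model}: ~${est} tokens > budget ${budget} (${est - budget} over)`);
    }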
scenario.ts:
- T3 lesson archive to data/_playbook_lessons/*.json (the old
/vectors/playbook_memory/seed path was silently failing with HTTP 400
because it requires 'fill: Role xN in City, ST' operation shape)
- loadPriorLessons() at scenario start — filters by city/state match,
date-sorted, takes top-3
- prior_lessons.json archived per-run (honest signal for A/B)
- guidanceFor() injects up to 2 prior lessons (≤500 chars each) into
the executor's per-event context
- Retrospective shows explicit "Prior lessons loaded: N" line
Verified: mistral correctly rejects a 150K-char prompt (7532 tokens
over), gpt-oss:120b accepts it with 90K headroom. The enforcement is
in-band on every call now, not an afterthought.
Full chunking service (Rust) remains deferred to the sprint this feeds:
crates/aibridge/src/budget.rs + chunk.rs + storaged/chunk_cache.rs
config/models.json is the authoritative catalog. Hot path (T1/T2) stays
local; cloud is consulted only for overview (T3), strategic (T4), and
gatekeeper (T5) calls. J named qwen3.5 + newer models (minimax-m2.7,
glm-5, qwen3-next) specifically — all mapped with real reachable IDs
verified against ollama.com/api/tags.
Tier shape:
- t1_hot mistral + qwen2.5 local — 50-200 calls/scenario
- t2_review qwen2.5 + qwen3 local — 5-14 calls/event
- t3_overview gpt-oss:120b cloud — 1-3 calls/scenario
- t4_strategic qwen3.5:397b + glm-4.7 — 1-10 calls/day
- t5_gatekeeper kimi-k2-thinking — 1-5 calls/day, audit-logged
Rate budgets are declared in-config — Ollama Cloud paid tier is generous
but we cap overview/strategic/gatekeeper so no single rogue scenario can
blow the day's quota.
Experimental rotation list wired but disabled by default. When enabled,
T4 randomly routes 10% of calls to a rotating minimax/GLM/qwen-next/
deepseek/nemotron/cogito/mistral-large candidate, logs comparisons, and
auto-promotes after 3 rotations of wins.
Playbook versioning SPEC embedded under `playbook_versioning` key: every
seed gets version + parent_id + retired_at + architecture_snapshot, so
when a schema migration breaks a playbook we can pinpoint which change
retired it. Implementation flagged for next sprint (touches gateway +
catalogd + mcp-server) — not wired here.
- scenario.ts now loads config/models.json at init, env vars still override
- mcp-server exposes /models/matrix read-only so UI can render it
Hot path (T1/T2) stays mistral + qwen2.5. The new T3 tier runs a
thinking model SPARINGLY — after every misplacement, every N-th event
(default N=3), and once post-scenario for the cross-day lesson.
- agent.ts: generateCloud() for Ollama Cloud (gpt-oss:120b etc). Uses
the same /api/generate shape; thinking field is discarded.
- scenario.ts: runOverviewCheckpoint + runCrossDayLesson. Outputs land
in checkpoints.jsonl and lesson.md. Lesson also seeds playbook_memory
under operation "cross-day-lesson-{date}" — future runs pick it up
through the existing similarity boost.
- Env knobs: LH_OVERVIEW_CLOUD=1 routes T3 to cloud, LH_OVERVIEW_MODEL
overrides (default gpt-oss:20b local, gpt-oss:120b cloud),
LH_T3_CHECKPOINT_EVERY controls cadence, LH_T3_DISABLE=1 turns it off.
Why this shape: prior feedback_phase19_seed_text.md warned that verbose
seeds dilute the embedding and silently kill the boost. T3's rich prose
goes to lesson.md; the embedded "approach" + "context" stay terse.
Verified end-to-end: local 20b checkpoint 10.9s, lesson 4.0s; cloud
120b lesson 3.7s. Cloud output is both faster AND more specific than
local (sequenced, tactical, logging advice included).
parseAction now strips stray `)` before `}` and trailing commas —
qwen2.5 emits those regularly on tool_call outputs; soft-fix beats
retry-loops. hybrid_search no longer hard-requires `question`; defaults
to "qualified available workers" when the model drops it (mistral's
most common failure mode on complex events).
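A minimal sketch of the soft-fix (regexes illustrative):

    function softFixJson(raw: string): string {
      return raw
        .replace(/\)\s*}/g, "}")        // stray ')' before a closing brace
        .replace(/,\s*([}\]])/g, "$1"); // trailing commas before } or ]
    }
    // softFixJson('{"tool":"hybrid_search",)}') → '{"tool":"hybrid_search"}'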
Kept original TOOL_CATALOG shape (args examples only, not full
action envelopes). The verbose few-shot version from the prior
iteration confused mistral into wrapping propose_done as tool_call.
Scenario V7 result: expansion (5 Forklift Ops) and emergency
(4 Loaders) — previously-failing complex events — now seal reliably.
Pool sizes: 687 and 380 from 500K corpus. Patterns endpoint produces
real operator-actionable signals:
expansion: "recurring certifications: Forklift (40%), OSHA-10 (40%)
· recurring skills: mill (40%) · archetype mostly: leader
· reliability median 0.83"
Baseline + recurring are now flaky (inverted trade-off, pure
model-reliability variance).
Upgrades to tests/multi-agent/scenario.ts to exercise the full Path 1+2
feature set on a real warehouse-client week (5 events on one client):
- Hard SCHEMA ENFORCEMENT block in every event's guidance. Prior runs
had mistral read narrative words ("shift", "recurring", "expansion")
as SQL column names. Schema is now locked explicitly with valid
columns listed and CAST guidance for availability + reliability.
- playbook_memory_k bumped 10 → 100 to match server default.
- Canonical short seed text (operation + "{kind} fill via hybrid
search" + "{role} fill in {city}, {state}"). Verbose LLM rationales
dilute embeddings and silently kill boost (Pass 1 finding).
- /vectors/playbook_memory/mark_failed fires automatically on
misplacement events — records the no-shower's failure so future
searches for same city+role dampen their boost.
- /vectors/playbook_memory/patterns call per event — surfaces what the
meta-index discovered (recurring certs/skills/archetype/reliability)
for that query into the dispatch log and retrospective.
- Retrospective now includes a workers-touched audit table (every
worker who reached a decision, with outcome column) and a
discovered-patterns-evolution section across events.
Honest limitations this surfaced in the real run:
- mistral's executor prompt-adherence degrades on high-count events
(5+ fills) and scenario-specific language (emergency/misplacement).
3 of 5 events aborted via drift guard. Baseline + recurring sealed
cleanly with real fills + SMS + emails + seeded playbooks.
- worker_id resolution returns "undefined" for some names when name
matching is ambiguous in workers_500k (multiple workers with same
name in same city).
Backend:
- crates/vectord/src/playbook_memory.rs (new): Phase 19 in-memory boost
store with seed/rebuild/snapshot, plus temporal decay (e^-age/30 per
playbook), persist_to_sql endpoint backing successful_playbooks_live,
and discover_patterns endpoint for meta-index pattern aggregation
(recurring certs/skills/archetype/reliability across similar past fills).
- DEFAULT_TOP_K_PLAYBOOKS bumped 5 → 25; old default silently missed
most boosts when memory had > 25 entries.
- service.rs: new routes /vectors/playbook_memory/{seed,rebuild,stats,
persist_sql,patterns}.
Bun staffing co-pilot (mcp-server/):
- /search, /match, /verify, /proof, /simulation/run, MCP tools all
forward use_playbook_memory:true and playbook_memory_k:25 to the
hybrid endpoint. Boost was previously dark across the entire app.
- /log no longer POSTs to /ingest/file — that endpoint REPLACES the
dataset's object list, so single-row CSV writes were wiping all prior
rows in successful_playbooks (sp_rows went 33→1 in one /log call).
/log now seeds playbook_memory with canonical short text and calls
/persist_sql to keep successful_playbooks_live in sync.
- /simulation/run cumulative end-of-week CSV write removed for the same
reason. Per-day per-contract /seed (added in this session) is the
accumulating feedback path now.
- search.html addWorkerInsight renders a green "Endorsed · N playbooks"
chip with playbook citations when boost > 0.
Internal Dioxus UI (crates/ui/):
- Dashboard phase list rewritten through Phase 19 (was stuck at "Phase
16: File Watcher" / "Phase 17: DB Connector" — both wrong).
- Removed fabricated "27ms" stat label.
- Ask tab examples + SQL default replaced with real staffing prompts
against candidates/clients/job_orders (was referencing nonexistent
employees/products/events).
- New Playbook tab exposes /vectors/playbook_memory/{stats,rebuild} and
side-by-side hybrid search (boost OFF vs ON) with citations.
Tests (tests/multi-agent/):
- run_e2e_rated.ts: parallel two-agent (mistral + qwen2.5) build phase
+ verifier rating (geo, auth, persist, boost, speed → /10).
- network_proving.ts: continuous build → verify → repeat with
staffing-recruiter profile hot-swap; geo-discrimination check.
- chain_of_custody.ts: single recruiter operation traced through every
layer (Bun /search, direct /vectors/hybrid parity, /log, SQL,
playbook_memory growth, profile activation, post-op boost lift).
- queryd: SessionContext with custom URL scheme to avoid path doubling with LocalFileSystem
- queryd: ListingTable registration from catalog ObjectRefs with schema inference
- queryd: POST /query/sql returns JSON {columns, rows, row_count}
- queryd→catalogd wiring: reads all datasets, registers as named tables
- gateway: wires QueryEngine with shared store + registry
- e2e verified: SELECT *, WHERE/ORDER BY, COUNT/AVG all correct
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>