## Infrastructure (scrum loop hardening)
crates/gateway/src/v1/openrouter.rs — new OpenRouter provider
Direct HTTPS to openrouter.ai/api/v1/chat/completions with OpenAI-compatible shape.
Key resolution: OPENROUTER_API_KEY env → /home/profit/.env → /root/llm_team_config.json
(shares LLM Team UI's quota). Added after iter 5 hit repeated Ollama Cloud 502s on
kimi-k2:1t — different provider backbone as rescue rung. Unit tests pin the URL
stripping and OpenAI wire shape.
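A minimal TypeScript sketch of the env → .env file → JSON-config fallback chain described above (the committed code is Rust in openrouter.rs; the function name and the JSON field layout here are assumptions):

```typescript
// Illustrative sketch of the OpenRouter key-resolution chain.
// Paths and the providers.openrouter.api_key shape are assumptions.
import { readFileSync, existsSync } from "fs";

function resolveOpenRouterKey(
  env: Record<string, string | undefined>,
  dotenvPath = "/home/profit/.env",
  jsonPath = "/root/llm_team_config.json"
): string | null {
  // 1. explicit environment variable wins
  if (env.OPENROUTER_API_KEY) return env.OPENROUTER_API_KEY;
  // 2. KEY=value lines in a .env file
  if (existsSync(dotenvPath)) {
    for (const line of readFileSync(dotenvPath, "utf8").split("\n")) {
      const m = line.match(/^OPENROUTER_API_KEY=(.+)$/);
      if (m) return m[1].trim();
    }
  }
  // 3. shared LLM Team JSON config (assumed field layout)
  if (existsSync(jsonPath)) {
    const cfg = JSON.parse(readFileSync(jsonPath, "utf8"));
    const key = cfg?.providers?.openrouter?.api_key;
    if (typeof key === "string") return key;
  }
  return null;
}
```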
crates/gateway/src/v1/mod.rs + main.rs
Added `"openrouter" | "openrouter_free"` arm to /v1/chat dispatch.
V1State.openrouter_key loaded at startup via openrouter::resolve_openrouter_key()
mirroring the Ollama Cloud pattern. Startup log:
"v1: OpenRouter key loaded — /v1/chat provider=openrouter enabled"
tests/real-world/scrum_master_pipeline.ts
* 9-rung ladder — kimi-k2:1t → qwen3-coder:480b → deepseek-v3.1:671b →
mistral-large-3:675b → gpt-oss:120b → qwen3.5:397b → openrouter/gpt-oss-120b:free
→ openrouter/gemma-3-27b-it:free → local qwen3.5:latest.
Added qwen3-coder:480b as rung 2 after live probes confirmed it rescues
kimi-k2:1t 502s cleanly (0.9s latency, substantive reviews).
Dropped devstral-2 (displaced by qwen3-coder); dropped kimi-k2.6 (not available);
dropped minimax-m2.7 (returned 0 chars / 400 thinking tokens).
Local fallback promoted qwen3.5:latest per J's direction 2026-04-24.
* MAX_ATTEMPTS bumped 6 → 9 to accommodate the rescue tier.
* Tree-split scratchpad fixed — was concatenating shard markers directly
into the reviewer input, causing kimi-k2:1t to write titles like
"Forensic Audit Report – file.rs (shard 3)". Now uses internal §N§
markers during accumulation and runs a proper reduce step that
collapses per-shard digests into ONE coherent file-level synthesis
with markers stripped. Matches the Phase 21 aibridge::tree_split
map→reduce design. Fallback to stripped scratchpad if reducer returns thin.
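The accumulate → reduce → strip flow above can be sketched as follows (the real reduce step calls the reviewer model, stubbed here as a callback; function names are illustrative):

```typescript
// §N§ markers bound shards during accumulation only and must never
// leak into reviewer input or the final synthesis.
function accumulateShards(digests: string[]): string {
  return digests.map((d, i) => `§${i + 1}§\n${d.trim()}`).join("\n");
}

function stripMarkers(scratchpad: string): string {
  return scratchpad.replace(/§\d+§\n?/g, "").trim();
}

function reduceToSynthesis(
  scratchpad: string,
  reducer: (s: string) => string
): string {
  const reduced = reducer(stripMarkers(scratchpad));
  // fall back to the stripped scratchpad if the reducer returns thin output
  return reduced.trim().length > 0 ? reduced : stripMarkers(scratchpad);
}
```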
tests/real-world/scrum_applier.ts — NEW (737 lines)
The auto-apply pipeline. Reads scrum_reviews.jsonl, filters rows where
gradient_tier ∈ {auto, dry_run} AND confidence_avg ≥ MIN_CONF (default 90),
asks the reviewer model for concrete old_string/new_string patch JSON,
applies via text replacement, runs cargo check after each file, commits
if green and reverts if red. Deny-list: /etc/, config/, ops/, auditor/,
docs/, data/, mcp-server/, ui/, sidecar/, scripts/. Hard caps: per-patch
confidence ≥ MIN_CONF, old_string must be exactly unique, max 20 lines per
patch. Never runs on main without explicit LH_APPLIER_BRANCH override.
Audit trail in data/_kb/auto_apply.jsonl.
Empirical behavior (dry-run over iter 4 reviews):
5 eligible files → 1 green commit-ready, 2 build-red reverts, 2 all-rejected
The build-green gate caught 2 bad patches before they'd have merged.
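The row-eligibility and per-patch guards described above, sketched in TypeScript (MIN_CONF, tiers, and the uniqueness/line-cap rules follow the text; the row and patch shapes are assumptions):

```typescript
// Illustrative applier gating: which review rows qualify, and whether a
// concrete old_string/new_string patch is safe to apply.
interface ReviewRow { gradient_tier: string; confidence_avg: number }
interface Patch { old_string: string; new_string: string; confidence: number }

const MIN_CONF = 90;
const MAX_PATCH_LINES = 20;

function rowEligible(r: ReviewRow): boolean {
  return ["auto", "dry_run"].includes(r.gradient_tier) &&
    r.confidence_avg >= MIN_CONF;
}

function patchAllowed(fileText: string, p: Patch): boolean {
  if (p.confidence < MIN_CONF) return false;
  if (p.old_string.split("\n").length > MAX_PATCH_LINES) return false;
  // old_string must occur exactly once so the replacement is unambiguous
  return fileText.split(p.old_string).length - 1 === 1;
}
```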
mcp-server/observer.ts — LLM Team code_review escalation
When a sig_hash accumulates ≥3 failures (ESCALATION_THRESHOLD), fire-and-forget
POST /api/run?mode=code_review at localhost:5000 with the failure cluster context.
Parses facts/entities/relationships/file_hints from the response. Writes to a
new data/_kb/observer_escalations.jsonl surface. Answers J's vision of the
observer triggering richer LLM Team calls when failures pile up.
Non-blocking: runs parallel to existing qwen2.5 analyzer, never replaces it.
Tracks escalated sig_hashes in a session-local Set to avoid re-hammering
LLM Team when a cluster persists across observer cycles.
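The threshold plus session-local dedupe can be sketched as (the real code also fires the POST to LLM Team, omitted here):

```typescript
// Escalate a sig_hash at most once per session, and only after it
// accumulates enough failures.
const ESCALATION_THRESHOLD = 3;
const escalatedHashes = new Set<string>();

function shouldEscalate(sigHash: string, failureCount: number): boolean {
  if (failureCount < ESCALATION_THRESHOLD) return false;
  if (escalatedHashes.has(sigHash)) return false; // don't re-hammer LLM Team
  escalatedHashes.add(sigHash);
  return true;
}
```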
crates/aibridge/src/context.rs
First auto-applied patch produced by scrum_applier.ts (dry-run path —
applier writes files in dry-run mode but doesn't commit; bug noted for
iter 6 fix). Adds #[deprecated] annotation to the inline estimate_tokens
helper pointing callers to the centralized shared::model_matrix::ModelMatrix
entry point (P21-002 — duplicate token-estimator surfaces). Cargo check
passes with the annotation (verified by applier's own build gate).
## Visual Control Plane (UI)
ui/server.ts — Bun.serve on :3950 with /data/* fan-out:
/data/services, /data/reviews, /data/metrics, /data/trust, /data/overrides,
/data/findings, /data/outcomes, /data/audit_facts, /data/file/:path,
/data/refactor_signals, /data/search?q=, /data/signal_classes,
/data/logs/:svc (journalctl tail per systemd unit), /data/scrum_log.
Bug fix: tryFetch always attempts JSON.parse before falling back to text
— observer's Bun.serve returns JSON without an application/json
content-type, which previously rendered stats as a raw string ("0 ops"
on the map).
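The content-type-agnostic parse described above amounts to (function name illustrative):

```typescript
// Try JSON first regardless of content-type; fall back to the raw text
// only if parsing fails.
function parseBody(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    return text; // not JSON — keep the raw string
  }
}
```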
ui/index.html + ui.css — dark neo-brutalist shell. 6 views:
MAP (D3 force-graph + overlays) / TRACE (per-file iter history) /
TRAJECTORY (signal-class cards + refactor-signals table + reverse-index
search box) / METRICS (every card has SOURCE + GOOD lines explaining
where the number comes from and what target trajectory means) /
KB (card grid with tooltips on every field) / CONSOLE (per-service
journalctl tabs).
ui/ui.js — polling client, D3 wiring, signal-class panel, refactor-signals
table, reverse-index search, per-service console tabs. Bug fix:
renderNodeContext had Object.entries() iterating string characters when
/health returned a plain string — now guarded with a typeof check so
"lakehouse ok" renders as one row instead of "0 l / 1 a / 2 k / ...".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "journal event verified live (total_events_created 0→1 after probe)."
Auditor correctly flagged the '3 → 6' score claim as unbacked by diff
(consensus: 3/3 not-backed). The claim referenced scrum_reviews.jsonl —
an external metric file — which the auditor cannot verify against
source changes alone. Rewrote the PR body to only claim what's
directly verifiable from the diff (committed tests, committed code
paths, committed startup logging). Trajectory data remains in
docs/SCRUM_LOOP_NOTES.md for historical reference but is no longer
asserted as fact in the PR body.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Responds to PR #10 auditor block (2/2 blocking: "claim not backed"):
the auditor's N=3 cloud consensus flagged the "verified live" language
in the description as unbacked by the diff. That was fair — the
verification was a manual curl probe, not committed code.
Committed verification now lives in the diff:
* journal_record_ingest_increments_counter
- mirrors the /ingest/file success path against an in-memory store
- asserts total_events_created: 0 → 1 after record_ingest
- asserts the event is retrievable by entity_id with correct fields
* optional_journal_field_none_is_valid_back_compat
- pins IngestState.journal as Option<Journal>
- forces explicit reconsideration if a refactor makes it mandatory
* journal_record_event_fields_match_adr_012_schema
- pins the 11-field ADR-012 event schema against field-rot
3/3 pass. Resolves block 2. Block 1 ("no changes to ingestd/service.rs
appear in the diff") was a tree-split shard-leakage false positive —
the diff at lines 37-40 + 149-163 clearly adds the journal wiring;
this commit moves those lines into direct test-exercised contact so
the next audit cycle has fewer shards to stitch together.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Apply the highest-confidence findings from the Phase 0→42 forensic sweep
after four scrum-master iterations under the adversarial prompt. Each fix
is independently validated by a later scrum iteration scoring the same
file higher under the same bar.
Code changes
────────────
P5-001 — crates/gateway/src/auth.rs + main.rs
api_key_auth was marked #[allow(dead_code)] and never wrapped around
the router, so `[auth] enabled=true` logged a green message and
enforced nothing. Now wired via from_fn_with_state, with constant-time
header compare and /health exempted for LB probes.
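A TypeScript sketch of the constant-time header compare (the committed version is Rust middleware; this shows the idea with Node's crypto.timingSafeEqual, which requires equal-length buffers, so length is checked first — leaking only the key length, not its content):

```typescript
// Constant-time API-key comparison sketch; names are illustrative.
import { timingSafeEqual } from "crypto";

function apiKeysMatch(presented: string, expected: string): boolean {
  const a = Buffer.from(presented);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```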
P42-001 — crates/truth/src/lib.rs
TruthStore::check() ignored RuleCondition entirely — signature looked
like enforcement, body returned every action unconditionally. Added
evaluate(task_class, ctx) that actually walks FieldEquals / FieldEmpty /
FieldGreater / Always against a serde_json::Value via dot-path lookup.
check() kept for back-compat. Tests 14 → 24 (10 new exercising real
pass/fail semantics). serde_json moved to [dependencies].
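The evaluate(task_class, ctx) semantics above, sketched in TypeScript (the real code is Rust walking a serde_json::Value; variant names mirror the text, exact shapes are assumptions):

```typescript
// RuleCondition evaluation against a context via dot-path lookup.
type RuleCondition =
  | { kind: "Always" }
  | { kind: "FieldEquals"; path: string; value: unknown }
  | { kind: "FieldEmpty"; path: string }
  | { kind: "FieldGreater"; path: string; value: number };

function dotLookup(ctx: unknown, path: string): unknown {
  return path.split(".").reduce<any>((v, k) => (v == null ? undefined : v[k]), ctx);
}

function evaluate(cond: RuleCondition, ctx: unknown): boolean {
  switch (cond.kind) {
    case "Always":
      return true;
    case "FieldEquals":
      return dotLookup(ctx, cond.path) === cond.value;
    case "FieldEmpty": {
      const v = dotLookup(ctx, cond.path);
      return v == null || v === "" || (Array.isArray(v) && v.length === 0);
    }
    case "FieldGreater":
      return Number(dotLookup(ctx, cond.path)) > cond.value;
  }
}
```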
P9-001 (partial) — crates/ingestd/src/service.rs
Added Option<Journal> to IngestState + a journal.record_ingest() call
on /ingest/file success. Gateway wires it with `journal.clone()` before
the /journal nest consumes the original. First-ever internal mutation
journal event verified live (total_events_created 0→1 after probe).
Iter-4 scrum scored these files higher under same prompt:
ingestd/src/service.rs 3 → 6 (P9-001 visible)
truth/src/lib.rs 3 → 4 (P42-001 visible)
gateway/src/auth.rs 3 → 4 (P5-001 visible)
gateway/src/execution_loop 4 → 6 (indirect)
storaged/src/federation 3 → 4 (indirect)
Infrastructure additions
────────────────────────
* tests/real-world/scrum_master_pipeline.ts
- cloud-first ladder: kimi-k2:1t → deepseek-v3.1:671b → mistral-large-3:675b
→ gpt-oss:120b → devstral-2:123b → qwen3.5:397b (deep final thinker)
- LH_SCRUM_FORENSIC env: injects SCRUM_FORENSIC_PROMPT.md as adversarial preamble
- LH_SCRUM_PROPOSAL env: per-iter fix-wave doc override
- Confidence extraction (markdown + JSON), schema v4 KB rows with:
verdict, critical_failures_count, verified_components_count,
missing_components_count, output_format, gradient_tier
- Model trust profile written per file-accept to data/_kb/model_trust.jsonl
- Fire-and-forget POST to observer /event so by_source.scrum appears in /stats
* mcp-server/observer.ts — unchanged in shape, confirmed receiving scrum events
* ui/ — new Visual Control Plane on :3950
- Bun.serve with /data/{services,reviews,metrics,trust,overrides,findings,file,refactor_signals,search,logs/:svc,scrum_log}
- Views: MAP (D3 graph, 5 overlays) / TRACE (per-file iter timeline) /
TRAJECTORY (refactor signals + reverse index search) / METRICS (explainers
with SOURCE + GOOD lines) / KB (card grid with tooltips) / CONSOLE (per-service
journalctl tail, tabs for gateway/sidecar/observer/mcp/ctx7/auditor/langfuse)
- tryFetch always attempts JSON.parse (fix for observer returning JSON without content-type)
- renderNodeContext primitive-vs-object guard (fix for gateway /health string)
* docs/SCRUM_FIX_WAVE.md — iter-specific scope directing the scrum
* docs/SCRUM_FORENSIC_PROMPT.md — adversarial audit prompt (verdict/critical/verified schema)
* docs/SCRUM_LOOP_NOTES.md — iteration observations + fix-next-loop queue
* docs/SYSTEM_EVOLUTION_LAYERS.md — Layers 1-10 roadmap (trust profiling, execution DNA, drift sentinel, etc)
Measurements across iterations
──────────────────────────────
iter 1 (soft prompt, gpt-oss:120b): mean score 5.00/10
iter 3 (forensic, kimi-k2:1t): mean score 3.56/10 (−1.44 — bar raised)
iter 4 (same bar, post fixes): mean score 4.00/10 (+0.44 — fixes landed)
Score movement iter3→iter4: ↑5 ↓1 =12
21/21 first-attempt accept by kimi-k2:1t in iter 4
20/21 emitted forensic JSON (richer signal than markdown)
16 verified_components captured (proof-of-life, new metric)
Permission Gradient distribution: 0 auto · 16 dry_run · 4 sim · 1 block
Observer loop: by_source {scrum: 21, langfuse: 1985, phase24_audit: 1}
v1/usage: 224 requests, 477K tokens, all tracked
Signal classes per file (iter 3 → iter 4):
CONVERGING: 1 (ingestd/service.rs — fix clearly landed)
LOOPING: 4 (catalogd/registry, main, queryd/service, vectord/index_registry)
ORBITING: 1 (truth — novel findings surfacing as surface ones fix)
PLATEAU: 9 (scores flat with high confidence — diminishing returns)
MIXED: 6
Loop thesis status
──────────────────
A file's score rises only when the scrum confirms a real fix landed.
No false positives yet across 3 iterations. Fixes applied to 3 files all
raised their independent scores under the same adversarial prompt. Loop
is measurable, not hand-wavy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
lakehouse/auditor cloud: claim not backed — "Previously the hybrid fixture honestly reported layer 5 as 404/unimplemented. With this PR it flips "
Closes the last open loop of Phase 45. Previously, playbooks could
carry doc_refs (slice 1) and the context7 bridge could report drift
(slice 2) — but nothing tied them together. An operator had no way
to say "check this playbook against its doc sources and flag it if
the docs moved." This slice wires that.
Ships:
- crates/vectord/src/doc_drift.rs — thin context7 bridge client.
No cache (bridge has its own 5-min TTL). No retry (transient
failure = Unknown outcome, caller decides).
- PlaybookMemory::flag_doc_drift(id) — stamps doc_drift_flagged_at
idempotently. Once flagged, compute_boost_for_filtered_with_role
excludes the entry from both the non-geo and geo-indexed boost
paths until resolved.
- PlaybookMemory::resolve_doc_drift(id) — human re-admission.
Stamps doc_drift_reviewed_at which clears the boost exclusion.
- PlaybookMemory::get_entry(id) — new read-only accessor the
handler uses to read doc_refs without exposing the state lock.
- POST /vectors/playbook_memory/doc_drift/check/{id}
- POST /vectors/playbook_memory/doc_drift/resolve/{id}
Design call: Unknown outcomes from the bridge (bridge down, tool
not in context7, no snippet_hash recorded) are NEVER enough to
flag. Only a positive drifted=true from the bridge flips the flag.
A down bridge doesn't silently drift-flag every playbook.
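That design call as a predicate (type names illustrative):

```typescript
// Only a positive drifted=true from the bridge flags; Unknown outcomes
// (bridge down, tool not in context7, no recorded hash) never do.
type DriftOutcome =
  | { status: "checked"; drifted: boolean }
  | { status: "unknown"; reason: string };

function shouldFlagDrift(outcome: DriftOutcome): boolean {
  return outcome.status === "checked" && outcome.drifted;
}
```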
Tests (5 new, in upsert_tests mod):
- flag_doc_drift_stamps_timestamp_and_persists
- flag_doc_drift_is_idempotent_on_already_flagged
- resolve_doc_drift_clears_flag_admission_gate
- boost_excludes_flagged_unreviewed_entries
- boost_re_admits_resolved_entries
14/14 upsert tests pass (9 pre-existing + 5 new).
Live end-to-end — hybrid fixture on auditor/scaffold (merged to
main at b6d69b2) now shows:
overall: PASS
shipped: [38, 40, 45.1, 45.2, 45.3]
placeholder: [—]
✓ Phase 38 /v1/chat 4039ms
✓ Phase 40 Langfuse trace 11ms
✓ Phase 45.1 seed + doc_refs 748ms
✓ Phase 45.2 bridge diff 563ms
✓ Phase 45.3 drift-check endpoint 116ms ← was a 404 before this
First time the fixture reports overall=PASS with zero placeholder
layers. The honest "not built" signal on layer 5 is now honestly
"built and working."
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Promotes two previously manual-start Bun services to systemd
so they survive restarts + run continuously.
- ops/systemd/lakehouse-auditor.service — polls Gitea every 90s,
runs 4 audit checks per PR head SHA, posts commit status + review
comment. Runs as root to match existing lakehouse-* service
conventions on this host; can read /home/profit/.git-credentials
(0600 profit:profit).
- ops/systemd/lakehouse-context7-bridge.service — HTTP wrapper on
:3900 for Phase 45 doc-drift detection. Decoupled from gateway;
runs independently.
- ops/systemd/install.sh — idempotent installer (copy → daemon-reload
→ enable --now). Prints post-install active/enabled status.
- ops/systemd/README.md — run/stop/logs/pause docs.
Pause control stays per-service (bot.paused / auditor.paused files
at repo root). Not wired to branch protection yet — the auditor's
commit status is currently advisory, not enforcing. Flip via Gitea
branch_protections API when confident.
After merging main (with the UpsertOutcome struct-like enum shape
from PR #2), the 4 new upsert tests needed pattern-match updates:
UpsertOutcome::Added(_) → UpsertOutcome::Added { .. }
9/9 upsert tests pass.
Auditor found this via the hybrid fixture on 2026-04-22. Removes the serde-tag-newtype shape by converting to struct-like variants. See the PR #2 body for full context.
Manual merge: auditor commit status was failure due to 1 false-positive inference finding on a commit-message reference; underlying fix is verified (curl against live gateway confirmed all 3 upsert paths return valid JSON). Proceeding per human review.
lakehouse/auditor all checks passed (3 findings, all info)
The auditor's hybrid fixture (branch auditor/scaffold) surfaced this
on 2026-04-22. A re-seed of the same (operation, day) pair with new
endorsed_names merged the names but silently discarded the incoming
doc_refs and valid_until fields. schema_fingerprint was partially
handled (set-if-Some) but doc_refs and valid_until weren't touched.
Root cause: the UPDATE arm of upsert_entry at playbook_memory.rs:609
only covered:
- endorsed_names (union-merge)
- timestamp
- embedding (if Some)
- schema_fingerprint (if Some)
Fix:
- valid_until — refresh if caller provides one
- doc_refs — merge by tool (case-insensitive). Same-tool new entry
supersedes older one; different-tool refs are appended. Empty
incoming doc_refs preserves existing (don't wipe on partial seed).
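The UPDATE-arm merge semantics above, sketched in TypeScript (the real code is Rust; the DocRef shape here is trimmed to the fields the merge keys on):

```typescript
// Merge doc_refs by tool (case-insensitive): same-tool incoming entry
// supersedes, different-tool refs append, empty incoming preserves existing.
interface DocRef { tool: string; snippet_hash: string }

function mergeDocRefs(existing: DocRef[], incoming: DocRef[]): DocRef[] {
  if (incoming.length === 0) return existing; // don't wipe on partial seed
  const merged = [...existing];
  for (const inc of incoming) {
    const i = merged.findIndex(
      r => r.tool.toLowerCase() === inc.tool.toLowerCase()
    );
    if (i >= 0) merged[i] = inc; // same tool: newer entry supersedes
    else merged.push(inc);       // different tool: append
  }
  return merged;
}
```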
4 new regression tests under upsert_tests:
- update_merges_doc_refs_with_existing_ones
- update_same_tool_supersedes_older_version
- update_preserves_existing_doc_refs_when_new_entry_has_none
- update_refreshes_valid_until_when_caller_provides_one
Test result: 9/9 upsert tests pass (4 new + 5 pre-existing).
Branch basis note: this branch is off main, so the UpsertOutcome enum
here still has the newtype variants Added(String) / Noop(String). PR
#2 (fix/upsert-outcome-serde) changes that enum to struct-like. When
PR #2 merges first this branch needs a trivial rebase; the UPDATE
arm logic is untouched by that change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
lakehouse/auditor all checks passed (4 findings, all info)
auditor/index.ts (task #9) — the top-level poller. 90s interval,
dedupes by head SHA via data/_auditor/state.json, supports --once
for CLI testing. Env gates: LH_AUDITOR_RUN_DYNAMIC=1 to include
the hybrid fixture (default off; it mutates live state),
LH_AUDITOR_SKIP_INFERENCE=1 for fast runs without cloud calls.
Single-shot run proof (task #10):
cycle 1: 2 open PRs
audit PR #2 f0a3ed68 "Fix: UpsertOutcome newtype serde panic"
verdict=block, 9 findings (1 block, 5 warn, 3 info)
audit PR #1 039ed324 "Auditor: PR-claim hard-block reviewer"
verdict=approve, 4 findings (0 block, 0 warn, 4 info)
audits_run=2, state persisted
Commit statuses and issue comments posted live to Gitea. PR #2 is
currently hard-blocked (lakehouse/auditor commit status = failure);
PR #1 has a passing status. State survives restart — next cycle
skips already-audited SHAs.
Both PRs now have the audit comment with per-check breakdown.
Operator can read the comment, fix blocking findings (or defend
them with a reply), push a new commit; auditor re-audits on new
SHA, verdict updates, merge gate responds accordingly.
The full loop J asked for is closed:
1. static check caught own Phase 45 placeholder (b933334)
2. hybrid fixture caught UpsertOutcome serde panic (9c893fb)
3. LLM-Team-style codereview caught ternary bug (5bbcaf4)
4. auditor poller now runs on every open PR, block/approve with
evidence, re-audits on new SHAs
Tasks 1-11 done; task 12 (a scoped follow-up fix for the UPDATE
branch dropping doc_refs) remains. The auditor is running, catching
real bugs in its own build, and gating merges.
After the serde fix (PR #2, fix/upsert-outcome-serde) landed on main,
re-running this fixture STILL reported "doc_refs field is empty" —
but with a different root cause than the panic.
Root cause: pre-fix runs panicked on response serialization but had
already added entries to state (panic happened between upsert_entry
returning and the handler's serde_json::json! of the response). So
state.json was polluted with __auditor_test_worker__ entries from
those runs, WITHOUT doc_refs (doc_refs wasn't even wired at the time
those state rows were written).
The fixture's `find(endorsed_names.includes(TEST_WORKER_NAME))` was
picking the oldest polluted entry, not the fresh one.
Compounding: discovered a secondary bug while investigating —
upsert_entry's UPDATE branch only merges endorsed_names. doc_refs,
schema_fingerprint, valid_until on an UPDATE are silently dropped.
Filed as task #12, separate PR to follow.
Fix in this fixture: use a nonce suffix on both TEST_WORKER_NAME and
TEST_OPERATION so every run is guaranteed to hit the ADD path in
upsert_entry, sidestepping the UPDATE bug AND eliminating state
pollution entirely.
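The nonce-suffix fix amounts to (base names are from the fixture; the nonce format here is an assumption):

```typescript
// Unique per-run names guarantee the ADD path in upsert_entry and make
// stale state rows from polluted earlier runs unmatchable.
function withNonce(base: string): string {
  const nonce =
    Date.now().toString(36) + Math.random().toString(36).slice(2, 8);
  return `${base}_${nonce}`;
}

const TEST_WORKER_NAME = withNonce("__auditor_test_worker__");
const TEST_OPERATION = withNonce("auditor_test_op");
```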
Live re-run after this edit:
✓ Phase 38 /v1/chat 449ms, 42 tokens
✓ Phase 40 Langfuse trace 20ms
✓ Phase 45.1 seed + doc_refs 239ms, doc_refs.length=1 persisted
✓ Phase 45.2 bridge diff 2ms, drifted=true
✗ Phase 45.3 drift-check HONEST 404 (endpoint not built)
shipped_phases: [38, 40, 45.1, 45.2] (was [38, 40, 45.2])
placeholder: [45.3] (was [45.1, 45.3])
One fewer placeholder — exactly because the serde fix merged on
fix/upsert-outcome-serde and the fixture now cleanly exercises the
path. The loop is:
fixture finds bug → PR fixes bug → fixture re-run confirms fix →
one fewer placeholder.
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "Verified live after gateway restart:"
playbook_memory.rs:257 — UpsertOutcome had two newtype variants
carrying a bare String:
Added(String)
Noop(String)
under #[serde(tag = "mode")]. serde cannot tag newtype variants of
primitive types, so every serialization threw:
"cannot serialize tagged newtype variant UpsertOutcome::Added
containing a string"
This caused gateway /vectors/playbook_memory/seed to panic the
tokio worker on EVERY call that reached Added or Noop, returning
an empty socket close to the client. The bug was silent from commit
640db8c (Phase 26, 2026-04-21) until 2026-04-22 when the auditor's
hybrid fixture (auditor/fixtures/hybrid_38_40_45.ts on the
auditor/scaffold branch) exercised the endpoint live and gateway
logs showed the panic.
Fix — convert both newtype variants to struct-like:
Added { playbook_id: String }
Noop { playbook_id: String }
Updated all 7 construction + pattern-match sites. Updated rustdoc
on the enum explaining why the shape is what it is.
JSON wire format is now uniform across all three variants:
{"mode":"added","playbook_id":"pb-..."}
{"mode":"updated","playbook_id":"pb-...","merged_names":[...]}
{"mode":"noop","playbook_id":"pb-..."}
Verified live after gateway restart:
curl /seed new payload → mode=added, playbook 860231f5
curl /seed new payload + doc_refs → mode=added, playbook 11d348d9
curl /seed identical re-submit → mode=noop, same id 860231f5,
entries_after unchanged (Mem0 contract intact)
Tests: 51/51 vectord lib tests green. Release build clean.
This is a follow-up bug fix landed in its own branch
(fix/upsert-outcome-serde) rather than commingled with other work.
The auditor's hybrid fixture on the auditor/scaffold branch will
now light up layer 3 (phase45_seed_with_doc_refs) as a pass once
this merges — previously it failed here with an empty socket close.
Caught by running a side-test through LLM Team's run_codereview
flow (gpt-oss:120b reviewer) against this fixture, 2026-04-22.
BEFORE:
const ourStart = Date.parse(
l1.evidence.match(/tokens=/) ? result.ran_at : result.ran_at
);
// Both branches return result.ran_at — the ternary is meaningless.
// result.ran_at is the fixture start time, NOT the moment we fired
// /v1/chat. Any trace created between fixture-start and chat-fetch
// would false-negative.
AFTER:
const chat_request_sent_ms = Date.now(); // captured before layer 1
// ...
const recent = items.filter(t =>
Date.parse(t.timestamp) >= chat_request_sent_ms
);
Re-ran the fixture against the live stack — layers 1,2,4 still pass
(no regression); layer 2 trace matched at age=2494ms which is within
the chat-to-trace propagation window. Layers 3,5 still fail for the
original unrelated reasons (UpsertOutcome serde panic + Phase 45
slice 3 endpoint not built).
First concrete act-on-finding from a code-checker run. The process
works.
auditor/fixtures/hybrid_38_40_45.ts — the never-before-run hybrid
test. Exercises Phase 38 /v1/chat → Phase 40 Langfuse → Phase 45
slice 1 seed+doc_refs → Phase 45 slice 2 bridge drift → (expected-
fail) Phase 45 slice 3 drift-check endpoint.
auditor/fixtures/cli.ts — standalone runner. Human-readable summary
to stderr, machine-readable JSON to stdout, exit code 0/1/2 for
pass / fail / partial_pass.
Live run results — honest measurements, not hand-waved:
✓ Phase 38 /v1/chat returns 9 visible tokens, 6.7s latency
("docker run is a common Docker command.")
✓ Phase 40 Langfuse trace 18a8a0b7 landed in 2.5s
✗ Phase 45.1 seed endpoint returns empty reply — discovered a
PRE-EXISTING BUG unrelated to doc_refs:
playbook_memory.rs:257 UpsertOutcome has newtype
variants Added(String) and Noop(String) under
#[serde(tag="mode")] — serde panics on serialize.
panicked at crates/vectord/src/service.rs:2323:
Error("cannot serialize tagged newtype variant
UpsertOutcome::Added containing a string")
Reproduced: curl /seed with AND without doc_refs
both get "Empty reply from server" (socket closed
mid-response). This bug has existed since Phase 26
shipped (commit 640db8c, 2026-04-21). No test or
caller in the repo exercised the response path live
against the gateway until this fixture did.
✓ Phase 45.2 context7 bridge confirms drift: current hash
475a0396ca436bba vs our stale input, upstream last
updated 2026-04-20
✗ Phase 45.3 /doc_drift/check endpoint — correctly unreachable
because layer 3 blocked us from getting a playbook_id;
endpoint still doesn't exist independent of that
Real numbers published: per-layer latency_ms, token counts,
trace_age_ms, library_id, current_hash_length. All stored in the
JSON output for downstream audit.
Value delivered: the fixture's first live run found a bug that
unit tests, compile checks, and my own "phase shipped" commits all
missed. Exactly the gap J called out — the auditor is doing what
it's supposed to do.
Bug fix is a SEPARATE concern: new task #11 tracks a separate PR
(fix/upsert-outcome-serde) so the audit finding and the fix stay
cleanly attributed.
auditor/checks/static.ts — grep-style scan of PR diffs, no AST,
no LLM. High-signal patterns only.
Severity grading:
- BLOCK — unimplemented!(), todo!(), panic!("not implemented"),
throw new Error("not implemented")
- WARN — TODO/FIXME/XXX/HACK in added lines;
new pub struct fields with <2 mentions in the diff
(added but nobody reads it — placeholder state)
- INFO — hardcoded "placeholder"/"dummy"/"foobar"/"changeme"/"xxx"
strings in added lines
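The severity grading above as a grep-style sketch (the pattern lists mirror the text; these regexes are an approximation of the real check, and the unread-field WARN heuristic is omitted):

```typescript
// Grade a single added diff line. No AST, no LLM — high-signal patterns only.
type Severity = "block" | "warn" | "info" | null;

const BLOCK =
  /unimplemented!\(\)|todo!\(\)|panic!\("not implemented"\)|throw new Error\("not implemented"\)/;
const WARN = /\b(TODO|FIXME|XXX|HACK)\b/;
const INFO = /"(placeholder|dummy|foobar|changeme|xxx)"/i;

function gradeAddedLine(line: string): Severity {
  if (BLOCK.test(line)) return "block";
  if (WARN.test(line)) return "warn";
  if (INFO.test(line)) return "info";
  return null;
}
```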
Live-proven — the existential test J asked for:
vs PR #1 (scaffold): 0 findings (all scaffold fields
  cross-reference within the diff)
vs commit 2a4b81b (Phase 45 first slice — the one I
  half-admitted was a placeholder): WARN — every DocRef
  field (tool, version_seen, snippet_hash, source_url,
  seen_at) added with 0 read-sites in the diff
That's the auditor flagging my own "Phase 45 first slice" commit as
state-without-consumer, which is exactly what I half-admitted it
was. If PR #1 had required auditor-pass (branch protection), the
DocRef commit would have been blocked pre-merge. The auditor works
because it agreed with the honest read.
Next: dynamic hybrid test fixture (task #4) — the never-run multi-
layer pipeline test.
auditor/claim_parser.ts — reads PR body + commit messages, extracts
ship-claims. Regex-based, intentionally not LLM-driven: the parser's
job is to surface claim substrates, not to judge them (that's the
inference check's job, runs later with cloud model).
Three strength tiers:
- strong — "verified end-to-end", "live-proven", "production-ready",
"phase N shipped", "proven"
- moderate — "shipped", "landed", "green", "passing", "works",
"complete", "done"
- weak — "should work", "expected to", "probably"
Live-proven against PR #1 (this PR): 4 claims extracted from
1 commit (2 strong, 2 moderate). "live-proven" correctly tagged as
strong (it IS a stronger claim than "shipped").
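The three tiers can be sketched as ordered regex buckets — strong first so "live-proven" never downgrades, and weak hedges before moderate so "probably works" reads as weak (the phrase lists mirror the text; the regexes approximate the real parser):

```typescript
// Claim-strength classification: strongest signal wins; hedges trump
// moderate ship-words.
const STRONG =
  /\b(verified end-to-end|live-proven|production-ready|phase \d+ shipped|proven)\b/i;
const WEAK = /\b(should work|expected to|probably)\b/i;
const MODERATE = /\b(shipped|landed|green|passing|works|complete|done)\b/i;

function claimStrength(text: string): "strong" | "moderate" | "weak" | null {
  if (STRONG.test(text)) return "strong";
  if (WEAK.test(text)) return "weak";
  if (MODERATE.test(text)) return "moderate";
  return null;
}
```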
Next: static diff check consumes these claims + the PR diff to find
placeholder patterns — empty fns, TODO, unwired fields, etc.
All-Bun sub-agent that watches open PRs on Gitea, reads ship-claims,
and hard-blocks merges when the code doesn't back the claim. First
commit of N; this is the skeleton. Dynamic/static/inference/kb checks
+ poller land in follow-up commits on this same branch.
- auditor/types.ts — Claim, Finding, Verdict, PrSnapshot shapes
- auditor/gitea.ts — minimal API client (listOpenPrs, getPrDiff,
postCommitStatus, postReview). Live-proven: returned 0 open PRs
against our repo (which IS the current state — every commit today
went to main directly, which is the problem this auditor is meant
to prevent)
- auditor/policy.ts — stub `assembleVerdict` + severity rules.
Intentionally conservative defaults: strong claim + zero evidence
= block, not warn.
- auditor/README.md — how to run + the hard-block mechanism
Workflow discipline change: starting with this branch, no more
direct pushes to main. Every change lands as a PR. When this
auditor is fully built and running, it'll review its own
completion PR — the recursive self-test.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Bun bridge on :3900 that wraps context7's public API and exposes the
surface gateway consumes for Phase 45 drift checks. Own port so a
failure here never tips over mcp-server on :3700.
Endpoints:
GET /health                   status + cache stats
GET /docs/:tool               resolve tool → library_id → fetch docs →
                              return descriptor {snippet_hash,
                              last_updated, source_url, docs_preview, ...}
GET /docs/:tool/diff?since=X  compare current snippet_hash to X; returns
                              {drifted: bool, current, previous,
                              preview if drifted}
GET /cache                    debug dump of cached entries
Implementation notes:
- 5 minute in-memory cache (context7 rate-limits by IP; gateway
drift-checks are the hot caller)
- 1500-token slices from context7 (enough for drift-meaningful
hash, not so much we hammer their API)
- snippet_hash = SHA-256 prefix (16 hex chars) of fetched content
- Library resolution prefers "finalized" state; falls back to top
result if none finalized
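The snippet_hash scheme amounts to (SHA-256 of the fetched content, truncated to a 16-hex-char prefix — stable across runs, cheap to compare for drift):

```typescript
// Drift-detection hash: first 16 hex chars of SHA-256 over the content.
import { createHash } from "crypto";

function snippetHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex").slice(0, 16);
}
```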
Verified live against context7.com:
- /health → ok, 0 cache, 300s TTL
- /docs/docker → library_id /docker/docs, title "Docker",
  hash 475a0396ca436bba, last updated 2026-04-20
- /docs/docker (again) → cache hit, 0.37ms (5400× speedup)
- /docs/docker/diff?since=stale-hash-0000 → drifted=true, preview included
- /docs/docker/diff?since=<current hash> → drifted=false, preview omitted
  (honest: no drift to show)
Not yet wired:
- Gateway consumer (Phase 45 slice 3):
/vectors/playbook_memory/doc_drift/check/{id} calls this bridge
and updates DocRef.snippet_hash + doc_drift_flagged_at
- Systemd unit (bridge is manual-start for now, same as bot/)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The capability J keeps asking for: playbooks know which external docs
they used and get flagged when those docs drift. This commit ships the
data model; the context7 bridge + drift-check endpoints land in
follow-ups.
Added to crates/vectord/src/playbook_memory.rs:
- pub struct DocRef { tool, version_seen, snippet_hash, source_url,
seen_at } — one external doc reference
- PlaybookEntry.doc_refs: Vec<DocRef> — empty on legacy entries,
serde default ensures pre-Phase-45 persisted state loads cleanly
- PlaybookEntry.doc_drift_flagged_at: Option<String> — set by the
(future) drift-check code when context7 reports newer version
- PlaybookEntry.doc_drift_reviewed_at: Option<String> — set by
human via /resolve endpoint after reviewing the diagnosis
- impl Default for PlaybookEntry — collapses most test-helper
constructors from 17 explicit fields to 6-9 fields +
..Default::default()
Updated SeedPlaybookRequest + RevisePlaybookRequest (service.rs) to
accept optional doc_refs: the seed/revise endpoints already take the
field, downstream drift detection (Phase 45.2) consumes it.
Docs: docs/CONTROL_PLANE_PRD.md gains full Phase 45 spec with gate
criteria, non-goals, and risk notes.
Tests: 51/51 vectord lib tests green (same count as before, field
additions are backward-compat).
Memory: project_doc_drift_vision.md written so this keeps coming
back to the front of mind.
Next slices (same phase): context7 HTTP bridge in mcp-server,
/vectors/playbook_memory/doc_drift/check/{id} endpoint, overview-
model drift synthesis writing to data/_kb/doc_drift_corrections.jsonl,
boost exclusion for flagged+unreviewed entries.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
J flagged that a prior version of this stack had Langfuse traces
piping into the observer + Gitea MCP for repo ops — lost. Adding
these as explicit Phase 40 deliverables alongside routing engine
+ Gemini/Claude adapters.
Findings during scope-check:
- Langfuse container is already running (Up 2 days, langfuse:2,
localhost:3001 healthcheck passes)
- mcp-server/tracing.ts + package.json already have SDK wired
- Credentials pk-lf-staffing / sk-lf-staffing-secret (from env)
- Gitea MCP binary still installed at gitea-mcp@0.0.10
So recovery here is mostly re-connecting existing infra:
1. Add Rust-side Langfuse client for /v1/chat tracing (gateway
currently bypasses tracing, mcp-server already has it)
2. Wire Langfuse → observer :3800 pipe
3. Register Gitea MCP in mcp-server/index.ts tool list
Each landing as part of Phase 40 when the routing engine ships.
Second provider wired. /v1/chat now routes by optional `provider`
field: default "ollama" hits local via sidecar, "ollama_cloud"
(or "cloud") hits ollama.com/api/generate directly with Bearer auth.
Key sourced at gateway startup from OLLAMA_CLOUD_KEY env, then
/root/llm_team_config.json (providers.ollama_cloud.api_key), then
OLLAMA_CLOUD_API_KEY env. Config source matches LLM Team convention.
Shape-identical to scenario.ts::generateCloud — same endpoint, same
body, same Bearer auth. Cloud path bypasses sidecar entirely (sidecar
is local-only by design, mirrors TS agent.ts).
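The dispatch on the optional `provider` field can be sketched as below. Enum and function names are illustrative, not the actual gateway API; only the accepted string values ("ollama" default, "ollama_cloud"/"cloud") come from this commit:

```rust
// Hypothetical sketch of the /v1/chat provider dispatch.
// The real match lives in crates/gateway/src/v1/mod.rs.

#[derive(Debug, PartialEq)]
enum Provider {
    Ollama,      // local path, goes through the sidecar
    OllamaCloud, // direct to ollama.com/api/generate with Bearer auth
}

fn resolve_provider(field: Option<&str>) -> Result<Provider, String> {
    match field.unwrap_or("ollama") {
        "ollama" => Ok(Provider::Ollama),
        "ollama_cloud" | "cloud" => Ok(Provider::OllamaCloud),
        other => Err(format!("unknown provider: {other}")),
    }
}
```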
Changes:
- crates/gateway/src/v1/ollama_cloud.rs (new, 130 LOC) — reqwest
client, resolve_cloud_key(), chat() adapter, CloudGenerateBody /
CloudGenerateResponse wire shapes
- crates/gateway/src/v1/ollama.rs — flatten_messages_public()
re-export so sibling adapters reuse the shape collapse
- crates/gateway/src/v1/mod.rs — provider field on ChatRequest,
dispatch match in chat() handler, ollama_cloud_key on V1State
- crates/gateway/src/main.rs — resolves cloud key at startup,
logs which source provided it
- crates/gateway/Cargo.toml — reqwest 0.12 with rustls-tls
Verified end-to-end after restart:
- provider=ollama → qwen3.5:latest local (~400ms, Phase 38 unchanged)
- provider=ollama_cloud + model=gpt-oss:120b → real 225-word
technical response in 5.4s, 313 tokens
Tests: 9/9 green (7 from Phase 38 + 2 new for cloud body serialization
and key resolver shape).
Not in this slice: trait extraction (full Phase 39 scope adds
ProviderAdapter trait + OpenRouter adapter + fallback chain logic).
These land next with Phase 40 routing engine on top.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Live-test caught the Phase 21 thinking-model trap on first call.
qwen3.5 with max_tokens=50 and default think behavior burned all 50
tokens on hidden reasoning; visible content was "". completion_tokens
exactly matching max_tokens was the tell.
Adapter now defaults think: Some(false) matching scenario.ts hot-path
discipline. Callers that want reasoning (overseers, T3+) opt in via
a non-OpenAI `think: true` extension field on the request.
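The trap and its guard can be sketched as follows (illustrative names; the detection condition — empty visible content with completion_tokens exactly at max_tokens — is the tell described above):

```rust
// Sketch of the thinking-model guard: hidden reasoning defaults off,
// and the "all budget burned on reasoning" case is detectable.

struct GenOptions {
    max_tokens: u32,
    think: Option<bool>, // non-OpenAI extension; callers opt in with true
}

impl Default for GenOptions {
    fn default() -> Self {
        // Hot-path discipline: reasoning is opt-in, not default.
        GenOptions { max_tokens: 50, think: Some(false) }
    }
}

fn burned_budget_on_reasoning(content: &str, completion_tokens: u32, max_tokens: u32) -> bool {
    content.is_empty() && completion_tokens == max_tokens
}
```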
Verified end-to-end after restart:
- "Lakehouse supports ACID and raw data." (5 words, 516ms)
- "tokio\nasync-std\nsmol" (3 Rust crates, 391ms)
- /v1/usage accumulates across calls (2 req / 95 total tokens)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First slice of the control-plane pivot. OpenAI-compatible surface
over the existing aibridge → Ollama path. Additive — no existing
routes touched. All 7 unit tests green, release build clean.
What ships:
- crates/gateway/src/v1/mod.rs — router, V1State (ai_client + Usage
counter), ChatRequest/ChatResponse/Message/UsageBlock types, handlers
for /chat, /usage, /sessions. OpenAI-compatible field shapes:
{model, messages[{role,content}], temperature?, max_tokens?, stream?}
- crates/gateway/src/v1/ollama.rs — shape adapter. Flattens messages
into (system, prompt), calls aibridge.generate, unwraps response
back into OpenAI /v1/chat shape. Prefers sidecar-reported tokens;
falls back to chars/4 ceiling estimate matching Phase 21 convention.
- crates/gateway/src/main.rs — one new mod, one .nest("/v1", ...)
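The two shape helpers can be sketched like this (illustrative signatures; the real code lives in crates/gateway/src/v1/ollama.rs):

```rust
// Sketch of the adapter helpers: collapse OpenAI-style messages into
// (system, prompt) for the generate API, and the chars/4 ceiling
// token estimate used when the sidecar doesn't report counts.

struct Message { role: String, content: String }

fn flatten_messages(msgs: &[Message]) -> (String, String) {
    let system: Vec<&str> = msgs.iter()
        .filter(|m| m.role == "system")
        .map(|m| m.content.as_str())
        .collect();
    let prompt: Vec<&str> = msgs.iter()
        .filter(|m| m.role != "system")
        .map(|m| m.content.as_str())
        .collect();
    // Multiple system messages concatenate; turns concatenate in order.
    (system.join("\n"), prompt.join("\n"))
}

fn estimate_tokens(text: &str) -> usize {
    // chars/4, rounded up — the Phase 21 convention.
    text.chars().count().div_ceil(4)
}
```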
Tests (7/7):
- chat_request_parses_openai_shape
- chat_request_accepts_minimal
- usage_counter_default_is_zero
- flatten_separates_system_from_turns
- flatten_concatenates_multiple_system_messages
- flatten_with_no_system_returns_empty_system
- estimate_tokens_chars_div_4_ceiling
Not in this phase (per CONTROL_PLANE_PRD.md): streaming, tool calls,
session state, multi-provider, fallback chain, cost gating. All
land in Phases 39-44.
Next: live-test POST /v1/chat after gateway restart, then migrate
bot/propose.ts off direct sidecar calls to prove the loop end-to-end.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 21 — Rust port of scratchpad + tree-split primitives (companion to
the 2026-04-21 TS shipment). New crates/aibridge modules:
context.rs — estimate_tokens (chars/4 ceil), context_window_for,
assert_context_budget returning a BudgetCheck with
numeric diagnostics on both success and overflow.
Windows table mirrors config/models.json.
continuation.rs — generate_continuable<G: TextGenerator>. Handles the
two failure modes: empty-response from thinking
models (geometric 2x budget backoff up to budget_cap)
and truncated-non-empty (continuation with partial
as scratchpad). is_structurally_complete balances
braces, then checks the result parses as JSON. Guards the
degenerate case "all retries empty, don't loop on an
empty partial".
tree_split.rs — generate_tree_split map->reduce with running
scratchpad. Per-shard + reduce-prompt go through
assert_context_budget first; loud-fails rather than
silently truncating. Oldest-digest-first scratchpad
truncation at scratchpad_budget (default 6000 t).
TextGenerator trait (native async-fn-in-trait, edition 2024). AiClient
implements it; ScriptedGenerator test double lets tests inject canned
sequences without a live Ollama.
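The seam can be sketched as below. The real trait is async (native async-fn-in-trait, edition 2024); it is shown synchronously here so the sketch stays runtime-free, and ScriptedGenerator's shape is illustrative:

```rust
// Sketch of the TextGenerator seam + ScriptedGenerator test double.
// The double replays canned responses front-to-back so continuation
// and tree-split logic can be exercised without a live Ollama.

use std::cell::RefCell;

trait TextGenerator {
    fn generate(&self, prompt: &str) -> String;
}

struct ScriptedGenerator {
    responses: RefCell<Vec<String>>, // popped front-to-back
}

impl TextGenerator for ScriptedGenerator {
    fn generate(&self, _prompt: &str) -> String {
        let mut rs = self.responses.borrow_mut();
        // Empty string once the script is exhausted — lets tests hit
        // the "empty response" retry path deliberately.
        if rs.is_empty() { String::new() } else { rs.remove(0) }
    }
}
```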
GenerateRequest gained think: Option<bool> — forwards to sidecar for
per-call hidden-reasoning opt-out on hot-path JSON emitters. Three
existing callsites updated (rag.rs x2, service.rs hybrid answer).
Phase 27 — Playbook versioning. PlaybookEntry gained four optional
fields (all #[serde(default)] so pre-Phase-27 state loads as roots):
version u32, default 1
parent_id Option<String>, previous version's playbook_id
superseded_at Option<String>, set when a newer version replaces it
superseded_by Option<String>, the playbook_id that replaced it
New methods:
revise_entry(parent_id, new_entry) — appends new version, stamps
superseded_at+superseded_by on parent, inherits parent_id and sets
version = parent + 1 on the new entry. Rejects revising a retired
or already-superseded parent (tip-of-chain is the only valid
revise target).
history(playbook_id) — returns full chain root->tip from any node.
Walks parent_id back to root, then superseded_by forward to tip.
Cycle-safe.
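The walk can be sketched as follows (illustrative types; the real method lives on PlaybookMemory and the cycle guards match the "cycle-safe" claim above):

```rust
// Sketch of history(): from any node, walk parent_id back to the
// root, then superseded_by forward to the tip, guarding both walks
// against cycles in corrupted chains.

use std::collections::{HashMap, HashSet};

struct Entry {
    parent_id: Option<String>,
    superseded_by: Option<String>,
}

fn history(entries: &HashMap<String, Entry>, id: &str) -> Vec<String> {
    // Backward to the root.
    let mut seen = HashSet::new();
    let mut root = id.to_string();
    while let Some(p) = entries.get(&root).and_then(|e| e.parent_id.clone()) {
        if !seen.insert(root.clone()) { break; } // cycle guard
        root = p;
    }
    // Forward to the tip, collecting the chain.
    let mut chain = vec![root.clone()];
    let mut seen_fwd = HashSet::new();
    let mut cur = root;
    while let Some(n) = entries.get(&cur).and_then(|e| e.superseded_by.clone()) {
        if !seen_fwd.insert(cur.clone()) { break; } // cycle guard
        chain.push(n.clone());
        cur = n;
    }
    chain
}
```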
Superseded entries excluded from boost (same rule as retired): filter
in compute_boost_for_filtered_with_role (both active-entries prefilter
and geo-filtered path), rebuild_geo_index, and upsert_entry's existing-
idx search. status_counts returns (total, retired, superseded, failures);
/status JSON reports active = total - retired - superseded.
Endpoints:
POST /vectors/playbook_memory/revise
GET /vectors/playbook_memory/history/{id}
Doc-sync — PHASES.md + PRD.md drifted from git after Phases 24-26
shipped. Fixes applied:
- Phase 24 marked shipped (commit b95dd86) with detail of observer
HTTP ingest + scenario outcome streaming. PRD "NOT YET WIRED"
rewritten to reflect shipped state.
- Phase 25 (validity windows, commit e0a843d) added to PHASES +
PRD.
- Phase 26 (Mem0 upsert + Letta hot cache, commit 640db8c) added.
- Phase 27 entry added to both docs.
- Phase 19.6 time decay corrected: was documented as "deferred",
actually wired via BOOST_HALF_LIFE_DAYS = 30.0 in playbook_memory.rs.
- Phase E/Phase 8 tombstone-at-compaction limit note updated —
Phase E.2 closed it.
Tests: 8 new version_tests in vectord (chain-metadata stamping,
retired/superseded parent rejection, boost exclusion, history from
root/tip/middle, legacy default round-trip, status counts). 25 new
aibridge tests (context/continuation/tree_split). Workspace total
145 green (was 120).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the two remaining 2026-era memory findings. Both are
optimizations per J's framing — not load-bearing, but good data
hygiene + future-proofing at scale.
MEM0 UPSERT (data hygiene):
Before: /seed always appended. A scenario re-running the same
operation on the same day wrote duplicate entries, inflating the
playbook corpus with near-identical rows.
Now: upsert_entry(new) inspects existing non-retired entries and
decides ADD / UPDATE / NOOP:
ADD → no matching (operation, day, city, state) tuple, append
UPDATE → match exists with different names → merge (union, stable
order), refresh timestamp, keep original playbook_id so
citations stay valid
NOOP → match exists with identical names → skip, return id
Day-granularity keying on timestamp YYYY-MM-DD means intraday
re-seeds dedup but tomorrow's same-operation is a fresh ADD. Retired
entries don't block new seeds — they're out of scope anyway.
Seed endpoint returns {outcome: {mode, playbook_id, merged_names?},
entries_after}. Append=false retains old replace-all semantics.
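The decision rule can be sketched like this (illustrative types; only the (operation, day, city, state) keying and the three outcomes come from this commit):

```rust
// Sketch of the upsert ADD / UPDATE / NOOP decision, keyed on the
// (operation, YYYY-MM-DD, city, state) tuple.

#[derive(Debug, PartialEq)]
enum UpsertMode { Add, Update, Noop }

struct Seed {
    operation: String,
    timestamp: String, // RFC3339; only the YYYY-MM-DD prefix keys
    city: String,
    state: String,
    names: Vec<String>,
}

fn decide(existing: &[Seed], new: &Seed) -> UpsertMode {
    let day = |s: &Seed| s.timestamp.get(..10).unwrap_or("").to_string();
    match existing.iter().find(|e| {
        e.operation == new.operation
            && day(e) == day(new)
            && e.city == new.city
            && e.state == new.state
    }) {
        None => UpsertMode::Add,                           // fresh tuple
        Some(e) if e.names == new.names => UpsertMode::Noop, // identical reseed
        Some(_) => UpsertMode::Update, // merge names, keep playbook_id
    }
}
```

Day-granularity keying falls out of comparing only the date prefix: an intraday reseed matches, tomorrow's does not.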
5 unit tests pass: first_seed_is_add, identical_reseed_is_noop,
same_day_different_names_updates_and_merges, different_day_same_op_is_add,
retired_entry_doesnt_block_new_seed.
Live verified: three successive seeds with (Alice), (Alice),
(Alice, Bob) left entry count unchanged at 1936 with merged names
{Alejandro, Lauren, Alice, Bob}. Previously would have been 3
appends.
LETTA GEO HOT CACHE (scale primitive):
Added geo_index: HashMap<(city_lower, state_upper), Vec<usize>>
alongside PlaybookMemoryState. Rebuilt on every mutation: set_entries,
retire_one, retire_on_schema_drift, upsert_entry, load_from_storage.
compute_boost_for_filtered_with_role now uses the index for O(1) geo
lookup instead of scanning all entries. At current scale (1.9K) the
scan was sub-ms; at 100K+ the scan becomes the dominant cost. The
hot cache future-proofs without adding an LRU abstraction.
Retired entries excluded from index; valid_until still checked on the
hot path since it can elapse between rebuilds.
Owns cloned PlaybookEntries in the geo_filtered vector so the state
read-lock is released before cosine scoring — avoids lock contention
on the scoring path.
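The index shape and rebuild can be sketched as below (illustrative entry type; the key normalization and retired-exclusion rules come from this commit):

```rust
// Sketch of the geo hot cache: (lowercased city, uppercased state)
// keys mapping to entry indices, rebuilt on every mutation so geo
// lookup is a hash probe instead of a full scan.

use std::collections::HashMap;

struct Entry { city: String, state: String, retired: bool }

type GeoIndex = HashMap<(String, String), Vec<usize>>;

fn rebuild_geo_index(entries: &[Entry]) -> GeoIndex {
    let mut idx: GeoIndex = HashMap::new();
    for (i, e) in entries.iter().enumerate() {
        if e.retired { continue; } // retired entries never indexed
        idx.entry((e.city.to_lowercase(), e.state.to_uppercase()))
            .or_default()
            .push(i);
    }
    idx
}
```

valid_until is deliberately not baked into the index — it can elapse between rebuilds, so the hot path still checks it per lookup.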
Memory-findings progress: 5 of 5 shipped.
✓ Multi-strategy parallel retrieval (Phase 19 refinement)
✓ Input normalization + unified /memory/query (Phase 24 TS)
✓ Zep validity windows (Phase 25)
✓ Mem0 UPSERT (Phase 26 today)
✓ Letta geo hot cache (Phase 26 today)
All 18 playbook_memory tests pass.
Full audit pass on devop.live/lakehouse/spec. Five chapters were
stale, one had an outright incorrect line. Scope was bigger than
ch6 alone — J asked "you want to update all" and the honest answer
was yes.
Ch 1 (Repository layout):
- mcp-server row gains /memory/query, /models/matrix, /system/summary,
observer.ts with :3800 listener
- tests/multi-agent/ row lists all new files: kb.ts, normalize.ts,
memory_query.ts, gen_scenarios.ts, gen_staffer_demo.ts, and the
colocated unit tests (kb.test.ts, normalize.test.ts)
- NEW config/ row documents models.json as the 5-tier matrix
- data/ row enumerates the four learning-loop directories:
_kb/, _playbook_lessons/, _observer/, _chunk_cache/
Ch 3 (Measurement & indexing):
- NEW "Model matrix (Phase 20)" subsection — 5-tier table (T1 hot /
T2 review / T3 overview / T4 strategic / T5 gatekeeper), per-tier
primary model, frequency, the think:false mechanical finding
called out with the 650-token reasoning-budget example
- NEW "Continuation primitive (Phase 21)" paragraph
- NEW "Per-staffer tool_level (Phase 23)" section with full/local/
basic/minimal mapping and the 46pt fill-rate delta from the 36-run
demo
Ch 7 (Scale story):
- FIX: playbook_memory growth bullet was claiming "No TTL or merge
policy" — Phase 25 added retirement via valid_until +
schema_fingerprint + /retire endpoint. Rewritten to name current
state (1936 entries, active vs retired split exposed).
Ch 8 (Error surfaces):
- New rows added to the failure-mode table:
* Zero-supply city → cloud rescue (Phase 22 item B) with the
Gary IN → South Bend IN concrete example
* LLM truncation → generateContinuable (Phase 21)
* Schema migration → /vectors/playbook_memory/retire (Phase 25)
* Observer unreachable → scenario silent-skip + append journal
survivability
Ch 9 (Per-staffer context):
- NEW "Staffer identity + competence-weighted retrieval (Phase 23)"
section with the competence_score formula and findNeighbors
weighted_score
- NEW "Auto-discovered reliable-performer labels" section naming
Rachel D. Lewis (18 endorsements) and Angela U. Ward (19) as
concrete output of 36-run demo
Ch 10 (A day in the life):
- Added 17:15 timeline entry — Kim using /memory/query with natural
language, regex normalizer extracting role/city/count in 0ms
- 17:00 entry updated to mention KB indexing + pathway recommendation
+ observer stream
- 22:00 entry updated to mention detectErrorCorrections nightly scan
Ch 11 (Known limits & non-goals):
- FIX: "playbook_memory compaction" bullet rewritten since retirement
is now wired; reframed as the honest Mem0 UPDATE/NOOP gap
- Added Letta hot cache deferred item with honest "cheap at 1.9K,
will bite at 100K" framing
- Added Chunking cache (Phase 21 Rust port) deferred item
- Added Observer → autotune feedback wire deferred item (Phase 26+)
Footer bumped v1 2026-04-20 → v2 2026-04-21 with Phase list.
Verified all updates live on devop.live/lakehouse/spec.
Addresses the load-bearing memory gap J flagged: playbook entries
had timestamps but no retirement semantic. When a schema migration
changed a column or a seasonal contract ended, stale playbooks kept
boosting candidates silently. Zep 2026-era finding — temporal
validity is the single highest-value memory-hygiene primitive.
SCHEMA (PlaybookEntry gains four optional fields, serde default):
schema_fingerprint — SHA-256 over dataset (column, type) tuples at
seed time. Missing = legacy entry, never
auto-retired on drift.
valid_until — RFC3339 hard expiry. compute_boost skips
entries past this moment.
retired_at — Set by retire_one or retire_on_schema_drift.
Retired entries excluded from all boost
calculations but kept in journal.
retirement_reason — Human-readable: "schema_drift: ...",
"expired: ...", "manual: ..."
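The fingerprint idea can be sketched as below. The real code hashes with SHA-256; std's DefaultHasher stands in here so the sketch stays dependency-free, and the function name is illustrative. What matters is that equal (column, type) tuples produce equal fingerprints and any column or type change produces a different one:

```rust
// Sketch of schema fingerprinting over (column, type) tuples at seed
// time. NOT the real hash (that's SHA-256); DefaultHasher is a
// dependency-free stand-in for illustration.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn schema_fingerprint(columns: &[(String, String)]) -> String {
    let mut h = DefaultHasher::new();
    for (name, ty) in columns {
        name.hash(&mut h);
        ty.hash(&mut h);
    }
    format!("{:016x}", h.finish())
}
```

retire_on_schema_drift then compares a stored fingerprint against the current one and retires only on mismatch, skipping legacy entries that never recorded a fingerprint.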
RETRIEVAL PATH (compute_boost_for_filtered_with_role):
Before geo+cosine, active_entries filter removes anything retired
OR past valid_until. Uses chrono::Utc::now() once per call, no per-
entry clock queries.
NEW METHODS on PlaybookMemory:
retire_one(playbook_id, reason)
retire_on_schema_drift(city, state, current_fp, reason) — idempotent,
scopes by (city, state) so a Nashville migration doesn't touch
Chicago. Skips legacy entries with no fingerprint.
status_counts() -> (total, retired, failures)
HTTP ENDPOINTS:
POST /vectors/playbook_memory/retire
{playbook_id, reason} → retire by id
{city, state, current_schema_fingerprint, reason} → schema drift
GET /vectors/playbook_memory/status
{total, active, retired, failures}
SEED REQUEST extended with optional schema_fingerprint + valid_until
so the orchestrator (scenario.ts) can pass the current schema hash
when seeding, without a round trip through catalogd.
UNIT TESTS (5/5 pass): retire_one_marks_entry_and_persists,
retired_entries_do_not_boost, expired_valid_until_is_skipped,
schema_drift_retires_mismatched_fingerprints_only,
schema_drift_skips_other_cities.
LIVE VERIFIED: /status on current state = 1936 entries, 43 failures.
POST /retire with a sample playbook_id → "retired":1, /status now
reports active=1935, retired=1.
Memory-findings progress: 3 of 5 shipped.
✓ Multi-strategy parallel retrieval (Phase 19 refinement)
✓ Input normalization + unified /memory/query (Phase 24 TS)
✓ Zep-style validity windows (Phase 25, tonight)
⏳ Mem0 UPDATE / DELETE / NOOP ops (dedup same-(op,date) seeds)
⏳ Letta working-memory hot cache (not biting at 1.5K entries)
J asked directly: "did we implement our memory findings so that our
knowledge base and our configuration playbook [work] seamlessly with
whatever input they're given?" Honest answer tonight was "one of five
findings shipped, normalizer is the blocker." This closes that gap.
NORMALIZER (tests/multi-agent/normalize.ts):
Accepts structured JSON, natural language, or mixed. Returns canonical
NormalizedInput { role, city, state, count, client, deadline, intent,
confidence, extraction_method, missing_fields } for any downstream
consumer.
Three-tier path:
1. Structured fast-path — already-shaped input skips LLM
2. Regex path — "need 3 welders in Nashville, TN" parses without LLM.
City/state parser tightened to 1-3 capitalized words + "in {city}"
anchor preference + case-exact full-state-name variants to prevent
"Forklift Operators in Chicago" being captured as the city name
3. LLM fallback — qwen3 local with think:false + 400 max_tokens for
inputs the regex can't handle
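The regex tier's job on an input like "need 3 welders in Nashville, TN" can be sketched as below. This is hand-rolled string parsing standing in for the real regexes (the actual parser lives in normalize.ts and is considerably more robust, e.g. the naive plural strip here would mangle roles ending in "ss"):

```rust
// Dependency-free sketch of the regex-tier extraction:
// count, role, city, state from "need N <role>s in <City>, <ST>".

fn extract(input: &str) -> Option<(u32, String, String, String)> {
    // count: first integer-looking token
    let count: u32 = input.split_whitespace().find_map(|t| t.parse().ok())?;
    // "... in {City}, {ST}" — anchor on the last " in "
    let (before, loc) = input.rsplit_once(" in ")?;
    let (city, state) = loc.rsplit_once(", ")?;
    // role: the word right before " in ", with a naive plural strip
    let role = before.split_whitespace().last()?.trim_end_matches('s');
    Some((count, role.to_string(), city.trim().to_string(), state.trim().to_string()))
}
```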
Unit tests (tests/multi-agent/normalize.test.ts): 9/9 pass. Covers
structured fast-path, misplacement→rescue intent, state-name→abbrev
conversion, regex extraction from natural language, plural role +
full state name edge case, rescue intent keyword precedence, partial
input reporting missing fields, empty object fallthrough, async/sync
parity on clean inputs.
UNIFIED MEMORY QUERY (tests/multi-agent/memory_query.ts):
One function, six parallel fan-outs, one bundle returned:
- playbook_workers — hybrid_search via gateway with use_playbook_memory
- pathway_recommendation — KB recommender for this sig
- neighbor_signatures — K-NN sigs weighted by staffer competence
- prior_lessons — T3 overseer lessons filtered by city/state
- top_staffers — competence-sorted leaderboard
- discovered_patterns — top workers endorsed across past playbooks
for this (role, city, state)
- latency_ms — per-source + total
Every branch is best-effort: one source down doesn't break the bundle.
HTTP ENDPOINT (mcp-server/index.ts):
POST /memory/query with body {input: <anything>} → MemoryQueryResult
Returns the same shape the TS function does. Typed with types.ts for
future UI consumption.
VERIFIED:
curl POST /memory/query with structured {role,city,state,count}
→ extraction_method=structured, 10 playbook workers, top score 0.878
curl POST /memory/query with "I need 3 welders in Nashville, TN"
→ extraction_method=regex (no LLM call), 319ms total, 8 endorsements
for Lauren Gomez auto-discovered as top Nashville Welder
Honest remaining gaps (documented for next phase):
- Mem0 ADD/UPDATE/DELETE/NOOP — we still only ADD + mark_failed
- Zep validity windows — playbook entries have timestamps but no
retirement semantic
- Letta working-memory / hot cache — every query scans all 1560
playbook entries
- Memory profiles / scoped queries — global pool, no per-staffer
private subsets
2 of 5 findings now shipped (multi-strategy retrieval in Rust, input
normalization + unified query in TS). The remaining 3 are architectural
additions queued as Phase 25 items — validity windows first since it's
the most load-bearing for long-running systems.