Phase 45 slice 4: batch scan + T3 drift-correction synthesis #6
Closes Phase 45 per the PRD spec
PRD Phase 45 deliverables are now:
- `DocRef` + `doc_refs` field on `PlaybookEntry` (slice 1)
- `context7_bridge.ts` (slice 2)
- `/doc_drift/check` + `/resolve` (slice 3)
- `/doc_drift/scan` + T3 synthesis → `data/_kb/doc_drift_corrections.jsonl` (this PR)

What's in this PR

- `crates/vectord/src/drift_synth.rs` — 240 LOC, 5 unit tests
- `/doc_drift/scan` handler in `service.rs`
- `/doc_drift/check` now spawns synthesis (fire-and-forget per drifted tool)
- unit tests for `context7_bridge.ts` (slice 2 had none)

Live-verified
Seed → scan Toledo, OH → `drifted=1 flagged=1 synthesis_spawned=1` → `cat doc_drift_corrections.jsonl` shows a record with a real `diff_summary` + `recommended_action` from gpt-oss:120b (the model honestly noted the preview was unavailable rather than fabricating).
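The synthesized record comes back as free-form model text, so the first JSON object has to be pulled out of it; the PR's unit tests are described as covering exactly this (first object, code fences, unclosed). A rough sketch of that behaviour, not the crate's actual code:

```rust
/// Extract the first balanced top-level JSON object from free-form model
/// output, tolerating markdown code fences (closed or unclosed) and
/// trailing prose. Illustrative sketch of the behaviour the drift_synth
/// tests are described as covering; signature and body are assumptions.
fn extract_first_json(text: &str) -> Option<&str> {
    let start = text.find('{')?;
    let (mut depth, mut in_str, mut escaped) = (0usize, false, false);
    for (i, c) in text[start..].char_indices() {
        if in_str {
            // Ignore braces inside string literals; honor \" escapes.
            if escaped { escaped = false; }
            else if c == '\\' { escaped = true; }
            else if c == '"' { in_str = false; }
            continue;
        }
        match c {
            '"' => in_str = true,
            '{' => depth += 1,
            '}' => {
                depth -= 1;
                if depth == 0 {
                    return Some(&text[start..start + i + 1]);
                }
            }
            _ => {}
        }
    }
    None // unclosed object (e.g. a truncated fence): treated as a parse failure
}
```

Brace counting that skips string literals is enough here because the caller only needs the first object; a fabricated or truncated reply simply yields `None`.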
What this PR does NOT ship
The cohesion concern J flagged: auditor's kb_query + inference checks don't yet consult hybrid_search / context7 / KB neighbors as context. That's the integration-test work on a separate branch. This PR closes Phase 45's stated deliverables cleanly; the meta-index-gets-smarter loop is the next effort.
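The fire-and-forget dispatch mentioned above can be sketched roughly like this. This is a simplified illustration, not the PR's API: it uses `std::thread` (the service may well use an async task instead), takes the cloud key as a parameter rather than resolving it, and reduces the whole synthesis step to a caller-supplied closure:

```rust
use std::thread;

/// Fire-and-forget contract: the HTTP handler never blocks on synthesis.
/// Signature and closure parameter are illustrative assumptions.
fn spawn_synthesize_and_append<F>(cloud_key: Option<String>, synthesize: F)
where
    F: FnOnce(&str) + Send + 'static,
{
    // No cloud key → skip; the PR notes this is a tracing::warn, not an error.
    let Some(key) = cloud_key else { return; };
    thread::spawn(move || {
        // Any cloud failure inside `synthesize` is logged and dropped;
        // the handler that spawned this has already returned its response.
        synthesize(&key);
    });
}
```

The key property is that the spawned work is never joined: with no key the closure is dropped without running, and with a key the handler returns immediately while synthesis proceeds in the background.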
Closes the PRD-listed remaining deliverables for Phase 45:

- POST `/vectors/playbook_memory/doc_drift/scan`
- T3 synthesis writing `data/_kb/doc_drift_corrections.jsonl`
- Backfill: unit tests for `mcp-server/context7_bridge.ts` (slice 2 never had any)

crates/vectord/src/drift_synth.rs (NEW, 240 LOC)

- `DriftCorrection` shape matching the PRD spec exactly
- `synthesize()`: HTTPS POST to `ollama.com/api/generate` with gpt-oss:120b. The prompt explicitly instructs the model to admit "preview insufficient" rather than fabricate a diff.
- `append_correction()`: JSONL append under `data/_kb/` with `mkdir -p` on the parent; atomic at line level on Linux for typical record sizes.
- `spawn_synthesize_and_append()`: fire-and-forget wrapper that never blocks the handler. No cloud key → skipped with only a `tracing::warn`. Cloud failure → logged and dropped.
- `resolve_cloud_key()`: same sources `v1/ollama_cloud.rs` uses (env `OLLAMA_CLOUD_KEY` → `/root/llm_team_config.json` → env `OLLAMA_CLOUD_API_KEY`).
- 5 unit tests: JSON extraction (first object, code fences, unclosed), prompt composition, JSONL append shape.

crates/vectord/src/service.rs

- `/playbook_memory/doc_drift/scan` — iterates active entries with `doc_refs`, with an optional (city, state, max_entries) filter. Per entry: bridge check → flag if drifted → spawn synthesis per drifted tool. Honest response: `scanned`, `eligible`, `drifted`, `newly_flagged`, `unknown`, `synthesis_spawned`, `details[]`.
- `/playbook_memory/doc_drift/check/{id}` — the slice 3 handler now also spawns synthesis per drifted tool. The response adds `synthesis_spawned: bool`.

mcp-server/context7_bridge.ts

- Export `normalizeTool` + `hashContent` for testing.
- Guard `Bun.serve()` behind `if (import.meta.main)` so imports don't double-bind :3900 (which collides with the systemd service).
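The JSONL-append behaviour described above (parent `mkdir -p`, then a single-line append) can be sketched as follows. The function name matches the PR text, but the body is an assumption and the `DriftCorrection` record is simplified to a pre-serialized JSON string:

```rust
use std::fs::{create_dir_all, OpenOptions};
use std::io::Write;
use std::path::Path;

/// Append one pre-serialized JSON record as a single line of a JSONL file,
/// creating parent directories first (the "mkdir -p on the parent" step).
/// Sketch only: the real append_correction takes a DriftCorrection struct.
fn append_correction(path: &Path, json_line: &str) -> std::io::Result<()> {
    if let Some(parent) = path.parent() {
        create_dir_all(parent)?; // mkdir -p on the parent
    }
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    // One write of "<json>\n" per record; with O_APPEND on Linux, writes of
    // typical record sizes land as one contiguous line.
    writeln!(file, "{}", json_line)
}
```

Opening in append mode per call (rather than holding a file handle) keeps concurrent fire-and-forget writers from interleaving partial records, at the cost of one open per correction.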
mcp-server/context7_bridge.test.ts (NEW, 6 tests)

- `normalizeTool`: lowercases + trims, preserves internal chars
- `hashContent`: deterministic, sensitive to a 1-char change, 16 hex chars, differs for empty vs whitespace

Live verification (after gateway restart):

- seed playbook pb-seed-88abc7d1 with `doc_refs` [docker v23.0.0, stale hash]
- POST `/doc_drift/scan` `{city:"Toledo", state:"OH", max_entries:5}` → `scanned=1 drifted=1 newly_flagged=1 synthesis_spawned=1 unknown=0`
- wait 30 s
- `cat data/_kb/doc_drift_corrections.jsonl` → 1 record (603 bytes) with `diff_summary` + `recommended_action` from gpt-oss:120b. The model correctly noted "preview unavailable" rather than fabricating.

Tests: 6 bridge tests + 6 drift_synth tests + 51 pre-existing vectord lib tests. All green. Release build clean.

NOT in this PR (deliberately — cohesion review pending):

- Auditor's kb_query check consulting hybrid_search + context7
- Auditor's inference check consuming KB neighbors + drift corrections as context
- Observer → KB → auditor feedback loop beyond the append
- Integration test exercising the full smarter-DB loop
- Python script (sidecar/*, scripts/*) inventory

Those are the cohesion items J flagged — handled on a separate branch after this merges.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Auditor verdict: ✅
approve

One-liner: all checks passed (3 findings, all info)
Head SHA: 7fe47babd92a
Audited at: 2026-04-22T22:17:00.227Z

dynamic — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — dynamic check skipped (skipped by options)

inference — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — cloud review completed (model=gpt-oss:120b, tokens=7313)
  claim_verdicts: 1, unflagged_gaps: 0

kb_query — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — KB: 69 recent scenario runs, 209/289 events ok (fail rate 27.7%)
  most recent: scenario-2026-04-21T05-29-34
  recent failing sigs: 5745bcd5e4c68591, 5745bcd5e4c68591, caeeeffc69d36009

Lakehouse auditor · SHA 7fe47bab · re-audit on a new commit flips the status automatically.

Auditor verdict: ✅
approve

One-liner: all checks passed (3 findings, all info)
Head SHA: 7fe47babd92a
Audited at: 2026-04-22T22:20:40.076Z

dynamic — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — dynamic check skipped (skipped by options)

inference — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — cloud review completed (model=gpt-oss:120b, tokens=7103)
  claim_verdicts: 1, unflagged_gaps: 0

kb_query — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — KB: 69 recent scenario runs, 209/289 events ok (fail rate 27.7%)
  most recent: scenario-2026-04-21T05-29-34
  recent failing sigs: 5745bcd5e4c68591, 5745bcd5e4c68591, caeeeffc69d36009

Lakehouse auditor · SHA 7fe47bab · re-audit on a new commit flips the status automatically.

Auditor verdict: ⚠️
request_changes

One-liner: cloud-flagged gap not in any claim: append_correction test writes a temp file directly instead of invoking the append_correction function, leaving the actua…
Head SHA: 7fe47babd92a
Audited at: 2026-04-22T22:22:01.111Z

dynamic — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — dynamic check skipped (skipped by options)

inference — 2 findings (0 block, 1 warn, 1 info)
- ℹ️ info — cloud review completed (model=gpt-oss:120b, tokens=7482)
  claim_verdicts: 1, unflagged_gaps: 1
- ⚠️ warn — cloud-flagged gap not in any claim: append_correction test writes a temp file directly instead of invoking the append_correction function, leaving the actua…
  location: crates/vectord/src/drift_synth.rs:~150

kb_query — 1 finding (0 block, 0 warn, 1 info)
- ℹ️ info — KB: 70 recent scenario runs, 210/290 events ok (fail rate 27.6%)
  most recent: ?
  recent failing sigs: 5745bcd5e4c68591, 5745bcd5e4c68591, caeeeffc69d36009

Lakehouse auditor · SHA 7fe47bab · re-audit on a new commit flips the status automatically.

Closing — superseded. The `/playbook_memory/doc_drift/scan` endpoint shipped on `main` via an inline implementation in commit `6cafa7e` (Phase 45 closure, 2026-04-27). The `crates/vectord/src/drift_synth.rs` extraction in this PR was not the path that landed; the inline `scan_doc_drift` handler at `service.rs:2608` is the canonical version. Different code, same surface — close.

Pull request closed