J said "let's go" → "next" (option 3): actual flip via Bun
mcp-server. Done. Real Bun-frontend traffic now reaches the Go
substrate via /_go/* on Bun :3700, routed to the persistent Go
gateway at :4110.
Companion change in /home/profit/lakehouse (Rust legacy):
mcp-server/index.ts: new /_go/* pass-through, opt-in via
GO_LAKEHOUSE_URL env var. Off-by-default (returns 503 on
/_go/* with rationale). Existing /api/* (Rust gateway) path
unchanged. Committed locally on the demo/post-pr11 branch.
System config:
/etc/systemd/system/lakehouse-agent.service.d/go-cutover.conf
adds Environment=GO_LAKEHOUSE_URL=http://127.0.0.1:4110 to
the systemd-managed Bun service. Reversible via systemctl
revert lakehouse-agent.
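The drop-in itself is small; assuming exactly the path and variable
named above, it looks roughly like:

```ini
# /etc/systemd/system/lakehouse-agent.service.d/go-cutover.conf
# Opt the systemd-managed Bun frontend into the /_go/* pass-through.
[Service]
Environment=GO_LAKEHOUSE_URL=http://127.0.0.1:4110
```

After a daemon-reload and service restart, removing the drop-in (via
systemctl revert) unsets the variable and returns /_go/* to its
off-by-default 503.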
Live verification (operator curl through Bun frontend):
- /_go/health: gateway responds {"status":"ok","service":"gateway"}
- /_go/v1/embed: nomic-embed-text-v2-moe vectors, dim=768
- /_go/v1/matrix/search vs persistent 200-worker corpus:
rank=0 id=w-43 Brian Ramirez (Forklift Operator, Springfield IL)
rank=1 id=w-102 Laura Long (Forklift Operator, Cleveland OH)
rank=2 id=w-101 Terrence Gray (Forklift Operator, Champaign IL)
3/3 role match, top-1 in IL exactly
- /api/health: lakehouse ok (Rust path unchanged — control verified)
What this is NOT:
- Not an nginx flip — devop.live/lakehouse/* still goes through
/api/* → Rust :3100. /_go/* is parallel slice for opt-in.
- Not a tool-level cutover — each /_go/<path> is a manual choice;
no automatic mapping of Rust paths to Go equivalents.
- Not a transformation layer — caller sends Go-shaped requests
(e.g. /_go/v1/embed expects {texts, model}, not {text}).
Three cutover unit properties verified:
- ADDITIVE: zero modification to any existing Bun tool
- REVERSIBLE: unset GO_LAKEHOUSE_URL → /_go/* → 503
- ISOLATED: Rust gateway state unaffected (different port,
different binary, different MinIO bucket)
This is the cutover slice operators can use to validate Go-side
handlers under realistic frontend conditions before any
production-traffic flip. Next step (deferred): pick a specific
mcp-server tool to optionally route through Go with response-
shape adapter — that's a product-visible flip rather than this
infrastructure-visible slice.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three real-shape demand queries against the long-running 11-daemon
stack with 500 workers ingested from workers_500k.parquet (real
production data). Substrate is producing useful answers:
Q1 (Forklift @ Aurora IL): 5/5 role match, top 3 in IL, dist 0.44-0.46
Q2 (CNC @ Detroit MI): top-1 in Detroit MI exactly, role pulls
Machine Operator (semantic neighbor)
Q3 (Warehouse @ Indianapolis IN): top-1 in Indianapolis IN, 5/5 role
match, dist 0.42-0.54
This is the FIRST end-to-end coordinator-shape query against the
persistent Go stack — every prior reality test (real_001..real_005)
ran through harness-transient stacks that died on exit. This one
ran against daemons that have been up for minutes and stayed up
through retrieval.
Geo is load-bearing: top-1 city/state matched in 3/3 queries.
Embedder treats geography as a primary feature.
Q2's CNC→Machine Operator gap exposes the playbook learning loop's
purpose: the judge would rate this ~3/5; the first time a coordinator
approves a Machine Operator for a CNC Operator query, that
recording starts shifting substrate behavior. That's the loop
we've been building toward — the persistent stack is now the
substrate that loop will run on.
Evidence: reports/cutover/persistent_stack_first_query.md (full
top-K tables + read on each query).
What this does NOT prove:
- Production-volume load (3 queries, 500 workers)
- Concurrent latency
- Full 5-loop substrate (this exercised retrieval only; no
playbook recordings exist on the persistent stack yet)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
J's "let's go" instruction: leave OPEN list behind, push the Go
substrate forward into actual deployment shape. This commit marks
the first time the Go side has run as long-running daemons rather
than per-harness transient processes, and the first time the
shared cross-runtime longitudinal log has carried a Go-emitted
entry alongside the Rust ones.
What landed:
scripts/cutover/start_go_stack.sh — the persistent-stack runbook.
Brings up all 11 daemons (storaged → catalogd → ingestd → queryd
→ embedd → vectord → pathwayd → observerd → matrixd → gateway,
plus chatd-if-not-already-up) in dependency order via nohup +
disown. Anchored pkill per feedback_pkill_scope (never bare
"bin/"). Logs land in /tmp/gostack-logs/<bin>.log, one per daemon.
Verified live state:
- All 11 services healthy on :3110 + :3211-:3220
- gateway → embedd proxy returns nomic-embed-text-v2-moe vectors
- chatd reports 5/5 providers loaded
- No port collision with Rust gateway on :3100
- Daemons stay up after exit of the start script (production shape,
not harness-transient)
audit_baselines.jsonl crosses the runtime boundary:
- 7 Rust-emitted entries (last: ca7375ea 2026-04-27)
- 1 Go-emitted entry (ee2a40c 2026-05-01T07:53:54Z) appended via
./bin/audit_full -append-baseline
- Same envelope shape, same metric set, same drift comparator
semantics — operators running either runtime grow the same log
What this DOES prove:
- Substrate parity at deployment shape (not just unit tests)
- Cross-runtime artifact write-side compatibility (was previously
proven on read side via audit_baselines roundtrip)
- The deploy machinery works end-to-end for the persistent case
What this does NOT prove (still ahead):
- Real coordinator traffic against the Go stack (no nginx flip yet;
devop.live/lakehouse/ still serves through Rust)
- Go-side production materializer (Phase 2 is observer-only)
- Replay tool parity (Phase 7 is observer-only)
- The 5-loop product gate against actual humans
reports/cutover/SUMMARY.md now logs three new rows:
- audit-FULL with 12/12 required checks passing
- First Go-emitted audit_baselines entry
- Persistent Go stack live
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes 4 of the 5 phases the initial audit-FULL port left as
deferred. The pattern: most "deferred" phases didn't actually need
the un-ported Rust pieces — they were observer-mode by design and
just needed to read existing on-disk artifacts.
Phase 1 (schema validators) → ported via exec.Command:
Invokes `go test ./internal/distillation/...` — the Go equivalent
of Rust's `bun test auditor/schemas/distillation/`. New
GoTestModule field on AuditFullOptions controls the package
pattern; empty disables the invocation (test mode, prevents
recursion when audit-full is invoked from inside `go test`).
Phase 2 (evidence materialization) → ported as observer:
Reads data/evidence/ directly and tallies rows + tier-1 source
hits. Doesn't re-run the materializer (which is Rust-side TS).
Emits p2_evidence_rows + p2_evidence_skips metrics matching
Rust shape — drop-in audit_baselines.jsonl entries possible.
Phase 5 (run summary) → ported as observer:
Reads reports/distillation/{run_id}/summary.json + 5 stage
receipts. Validates schema_version=1, run_hash sha256, git_commit
40-char hex, all stage receipts decode as JSON. Full schema
validation (StageReceipt schema) is intentionally NOT ported —
it would require porting the TS schemas/distillation/ validators
in full; basic shape checks catch the load-bearing invariants.
Phase 7 (replay log) → ported as observer:
Reads data/_kb/replay_runs.jsonl, validates last 50 rows parse
as JSON. Skips the live-replay invocation that Rust's phase 7
also does — porting Rust replay.ts is substantial and not in
scope. The "log shape sanity" check is what audit-full actually
needs; the live invocation is a separate concern.
Phase 6 (acceptance gate) — STILL SKIPPED:
Rust acceptance.ts is a TS-only fixture harness with bun-specific
deps. Porting the fixtures (tests/fixtures/distillation/acceptance/)
+ the 22-invariant runner to Go is an ADR-worthy undertaking.
Documented in the header comment.
Live-data probe (against /home/profit/lakehouse):
Skips count: 4 → 1 (only phase 6).
Required checks: 6/6 → 12/12 PASS.
New metric: p2_evidence_rows=1055, BYTE-EQUAL to the Rust
pipeline's collect.records_out from the latest summary.json.
Cross-runtime parity now extends across phases 0/1/2/3/4/5/7.
Test changes (5 new, 1 updated):
- TestPhase2_EvidenceTallyFromOnDisk: row + tier-1-hit tallying
- TestPhase5_FullSummaryFlow: complete run-summary fixture passes
- TestPhase5_ShortRunHashCaught: bad run_hash fails required check
- TestPhase7_ReplayLogReadsFromDisk: row-count reporting
- TestPhase7_MalformedTailRowsCaught: structural parse failure
- TestRunAuditFull_FullFixtureFlow updated to seed evidence/ +
reports/distillation/ for the phases now wired.
Cleanup: removed local sortStrings helper (replaced with sort.Strings
now that `sort` is imported for phase 5's mtime-sort).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Same shape of proof as embed_parity.sh for the embed endpoint:
take the just-shipped Go port (ca142b9) and validate it against
the actual production data the Rust legacy emits, not just unit-
test fixtures. Locks the cross-runtime parity that operators
running mixed pipelines depend on.
scripts/cutover/audit_baselines_validate.go:
- Reads /home/profit/lakehouse/data/_kb/audit_baselines.jsonl
- Parses every entry via the Go AuditBaseline struct
- Round-trips the last entry: encode → decode → field-by-field
equality check (catches any silently-dropped JSON keys)
- Calls LoadLastBaseline against the live file (proves the public
API works on real shapes, not just inline parsing)
- Computes BuildAuditDriftTable(first → last) — full-window
lineage drift over the captured baselines
Live-data probe results (reports/cutover/audit_baselines_roundtrip.md):
- 7 entries parse without error
- Round-trip is byte-equal on every metric + every header field
- Drift table fires the expected verdicts:
- p2_evidence_rows 12→82 (+583%) → warn (above 20% threshold)
- p3_accepted/partial/rejected/human 0→non-zero → warn (the
zero-baseline edge case TestBuildAuditDriftTable_ZeroBaseline
was designed to lock — verified now firing on real history)
- p4_* metrics +0% → ok (stable across the window)
What this does NOT prove (documented in the report): the Go-side
audit-FULL pipeline that PRODUCES baselines doesn't exist yet.
Only the load/append/drift substrate is ported. Operators running
audit-full from Go would still need a metric-collection pass —
that's a separate port deliberately not in this wave.
reports/cutover/SUMMARY.md gains a new row alongside the embed
parity entries; the cutover-prep verification log keeps the
discipline of "verified against real data, not just fixtures."
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First concrete cutover artifact: scripts/cutover/embed_parity.sh
brings up Go embedd + gateway alongside the live Rust gateway,
hits both /ai/embed and /v1/embed with the same forced model, and
emits a per-date verdict report under reports/cutover/.
Why embed first: the parity invariant is one math identity (cosine
sim of vectors against same input). Retrieve has thousands of edge
cases. If embed parity holds, all downstream vector consumers
inherit confidence; if it doesn't, we catch it in 30s instead of
after a flip.
Verdict 2026-04-30: 5/5 samples cosine=1.000000 with model forced
to nomic-embed-text (v1). Same with nomic-embed-text-v2-moe (both
Ollamas have it loaded). Math is provably equivalent across the
gateway plumbing.
Drift catalog (reports/cutover/SUMMARY.md):
- URL: Rust /ai/embed vs Go /v1/embed
- Wire: Rust {embeddings, dimensions} (plural) vs Go {vectors,
dimension} (singular). Wire-format adapter is the only real
cutover work for this endpoint.
- L2 norm: Rust unit vectors (~1.0); Go raw Ollama (~20-23). Same
direction (cos=1.0); harmless under cosine-distance HNSW (which
is Go vectord's default), but worth fixing in internal/embed/
before extending to euclidean indexes.
reports/cutover/ now tracked (joined the scrum/ + reality-tests/
exemptions in .gitignore).
Next probe: /v1/matrix/retrieve ↔ Rust /vectors/hybrid for the
real user-facing retrieve path. Embed parity gives that probe a
clean foundation.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>