Closes J's 2026-05-02 multi-call observability gap: a single
/v1/iterate session with N retries used to surface in Langfuse as
N+1 disconnected traces (one per /v1/chat hop plus one for the
iterate request itself), with no parent/child linkage. Operators
couldn't follow the retry chain in a single trace tree to spot where
grounding failed.
## Wire-level change
- New header constant `shared.TraceIDHeader = "X-Lakehouse-Trace-Id"`.
- `langfuseMiddleware` honors the header on inbound requests: if set,
  it reuses that trace id instead of minting a new one, and stashes the
  trace id on the request context so handlers can attach
  application-level child spans (see the sketch below).
- `validatord.chatCaller` forwards the header to chatd, so every chat
  hop in an iterate session lands as a child of the parent trace.
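A minimal sketch of the header-reuse contract as a plain `net/http` middleware; the package layout, context-key type, and id-minting strategy here are illustrative, not the real `internal/shared` implementation:

```go
// Illustrative sketch of the X-Lakehouse-Trace-Id contract. The real
// langfuseMiddleware lives elsewhere; only the header behavior is the point.
package shared

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"net/http"
)

// TraceIDHeader is the cross-service trace correlation header.
const TraceIDHeader = "X-Lakehouse-Trace-Id"

type traceIDKey struct{}

// TraceIDFromContext lets handlers attach application-level child spans.
func TraceIDFromContext(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(traceIDKey{}).(string)
	return id, ok
}

// LangfuseMiddleware reuses an inbound trace id when the caller supplies
// one and only mints a fresh id for externally-originated requests.
func LangfuseMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		traceID := r.Header.Get(TraceIDHeader)
		if traceID == "" {
			buf := make([]byte, 16) // no parent trace: mint a fresh id
			_, _ = rand.Read(buf)
			traceID = hex.EncodeToString(buf)
		}
		ctx := context.WithValue(r.Context(), traceIDKey{}, traceID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```

An outbound caller such as `validatord.chatCaller` then only has to copy the id onto its own request (`req.Header.Set(shared.TraceIDHeader, traceID)`) for the chatd hop to join the same trace.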
## Application-level spans
- `validator.IterateConfig` gains `Tracer` (optional callback). When
  wired, each iteration attempt emits one Langfuse span via
  `validator.AttemptSpan` (sketched after this list):
  - Name: `iterate.attempt[N]`
  - Input: `{ iteration, model, provider, prompt }`
  - Output: `{ verdict, raw, error }`
  - Level: WARNING when verdict != accepted
- `validatord.iterTracer` is the production hook — it bridges
  `validator.Tracer` → `langfuse.Client.Span`.
- `IterateRequest`/`IterateResponse`/`IterateFailure` gain
`TraceID`; each `IterateAttempt` gains `SpanID`. The /v1/iterate
caller can pivot from the JSON response straight into the
Langfuse trace tree.
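A sketch of the shapes involved, with illustrative field sets that follow the span spec above rather than a verified copy of the real `validator` package:

```go
// Illustrative shapes only; field names mirror the span spec above and
// are not a verified copy of the real validator / langfuse packages.
package validator

import "fmt"

// AttemptSpan carries one iteration attempt's observability payload.
type AttemptSpan struct {
	Iteration int    // 0-based attempt index
	Model     string
	Provider  string
	Prompt    string
	Verdict   string // e.g. "accepted", "validation_failed"
	Raw       string // raw model output
	Err       string // empty when the attempt raised no error
	SpanID    string // echoed back on IterateAttempt for response linkage
}

// Tracer is the optional per-attempt hook carried on IterateConfig.
type Tracer func(traceID string, span AttemptSpan)

// SpanName builds the iterate.attempt[N] span name.
func SpanName(iteration int) string {
	return fmt.Sprintf("iterate.attempt[%d]", iteration)
}

// SpanLevel maps a verdict to the Langfuse level described above.
func SpanLevel(verdict string) string {
	if verdict != "accepted" {
		return "WARNING"
	}
	return "DEFAULT"
}
```

`validatord.iterTracer` is then a closure of this `Tracer` shape around the Langfuse client, and the `TraceID`/`SpanID` fields on the iterate JSON are what let a caller jump from a response straight to the matching node in the trace tree.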
## What an operator sees post-cutover
GET /v1/iterate {kind=fill, prompt=...} → Trace TR-1
├─ http.request span (from middleware)
├─ iterate.attempt[0] span (validator.Iterate emit)
│ input: prompt+model
│ output: { verdict: validation_failed, error: ..., raw }
├─ chatd /v1/chat call (X-Lakehouse-Trace-Id: TR-1)
│ ├─ http.request span (chatd middleware)
│ └─ chatd-internal spans (existing)
├─ iterate.attempt[1] span
└─ ...
All in one Langfuse trace tree, not N+1 separate traces.
## Hallucinated-worker safety net is unchanged
The /v1/iterate flow's hard correctness gate is still
FillValidator + WorkerLookup. Phantom candidate IDs raise
ValidationError::Consistency, which returns a 422 and forces the
iteration loop to retry. Trace-id propagation is the OBSERVABILITY
layer on top — it makes the existing safety net's outcomes visible
per call; it is not a replacement for the net.
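Schematically, with hypothetical names (`attemptFill`, `emit`, `ErrConsistency`) standing in for the existing gate and the Tracer hook described above:

```go
// Schematic only: attemptFill, emit and ErrConsistency are hypothetical
// stand-ins for the FillValidator / WorkerLookup gate and the Tracer hook;
// the real loop lives inside validator.Iterate.
package iterdemo

import (
	"context"
	"errors"
)

// ErrConsistency stands in for the consistency violation that 422s.
var ErrConsistency = errors.New("consistency: phantom candidate id")

func iterateWithGate(
	ctx context.Context,
	traceID, prompt string,
	maxAttempts int,
	attemptFill func(context.Context, string) (string, error),
	emit func(traceID, verdict string, iteration int),
) (string, error) {
	var lastErr error
	for i := 0; i < maxAttempts; i++ {
		out, err := attemptFill(ctx, prompt) // one /v1/chat hop
		if errors.Is(err, ErrConsistency) {
			emit(traceID, "validation_failed", i) // visible as a WARNING span
			lastErr = err
			continue // the gate forces another attempt
		}
		if err != nil {
			return "", err // non-consistency failures abort the loop
		}
		emit(traceID, "accepted", i)
		return out, nil
	}
	return "", lastErr
}
```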
## Verification
- internal/validator: 4 new tests
  - TestIterate_TracerEmitsSpanPerAttempt — span/attempt count + SpanID
  - TestIterate_NoTraceIDSkipsTracer — no orphan spans without trace_id
  - TestIterate_ChatCallerReceivesTraceID — propagation contract
  - (existing iterate tests updated for the new ChatCaller signature)
- internal/shared: 1 new test
  - TestLangfuseMiddleware_HonorsTraceIDHeader — cross-service linkage
- cmd/validatord: existing HTTP tests still PASS via the dual-shape
  UnmarshalJSON contract.
- validatord_smoke.sh: 5/5 PASS through gateway :3110 (unchanged).
- Full `go test ./...` green across 33 packages.
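As an illustration of the cross-service linkage check, a sketch of the assertion written against the illustrative middleware above (the real test in internal/shared may be structured differently):

```go
// Sketch of the header-reuse assertion, targeting the illustrative
// LangfuseMiddleware above rather than the real internal/shared package.
package shared

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestLangfuseMiddleware_HonorsTraceIDHeader(t *testing.T) {
	var seen string
	h := LangfuseMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		seen, _ = TraceIDFromContext(r.Context())
	}))

	req := httptest.NewRequest(http.MethodGet, "/v1/iterate", nil)
	req.Header.Set(TraceIDHeader, "TR-1") // upstream-supplied trace id

	h.ServeHTTP(httptest.NewRecorder(), req)

	if seen != "TR-1" {
		t.Fatalf("trace id not reused: want TR-1, got %q", seen)
	}
}
```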
## Architecture invariant added
STATE_OF_PLAY "DO NOT RELITIGATE" gains a paragraph documenting
the X-Lakehouse-Trace-Id header contract + the iterate.attempt[N]
span emission. Future-Claude won't re-propose "wire trace-id
propagation" — the header IS the wiring.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# golangLAKEHOUSE
Go reimplementation of the Lakehouse — a versioned knowledge substrate for staffing analytics + local AI workloads.
## Status

Phase G0 complete, with G1/G1P/G2 shipped on top. Six binaries, plus a
seventh (vectord) and an eighth (embedd), fronted by a single gateway.
Acceptance smokes are green for D1-D6 + G1 + G1P + G2.
The end-to-end staffing co-pilot pipeline is functional through the
gateway (the vector path is sketched in Go below):

- text → /v1/embed → /v1/vectors/index/<name>/add
- text → /v1/embed → /v1/vectors/index/<name>/search → top-K hits

Plus the SQL path:

- CSV → /v1/ingest (parses, writes Parquet via storaged, registers the
  manifest with catalogd)
- SQL → /v1/sql (DuckDB over the registered Parquets via httpfs)
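A sketch of driving the vector path through the gateway from Go; the index name `workers` and the request/response JSON shapes are assumptions for illustration, not the documented wire contracts (see docs/SPEC.md for those):

```go
// Assumed request/response shapes for illustration only; consult
// docs/SPEC.md for the real /v1/embed and /v1/vectors contracts.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const gateway = "http://127.0.0.1:3110"

// postJSON sends a JSON body through the gateway and decodes the reply.
func postJSON(path string, in, out any) error {
	body, err := json.Marshal(in)
	if err != nil {
		return err
	}
	resp, err := http.Post(gateway+path, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("%s: %s", path, resp.Status)
	}
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// text -> /v1/embed (assumed shape: {"text": ...} -> {"vector": [...]})
	var emb struct {
		Vector []float32 `json:"vector"`
	}
	if err := postJSON("/v1/embed", map[string]string{"text": "senior data engineer, Utrecht"}, &emb); err != nil {
		panic(err)
	}

	// vector -> /v1/vectors/index/<name>/search (assumed shape)
	var hits struct {
		Hits []struct {
			ID    string  `json:"id"`
			Score float32 `json:"score"`
		} `json:"hits"`
	}
	req := map[string]any{"vector": emb.Vector, "k": 5}
	if err := postJSON("/v1/vectors/index/workers/search", req, &hits); err != nil {
		panic(err)
	}
	fmt.Println("top-K:", hits.Hits)
}
```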
See docs/PHASE_G0_KICKOFF.md for the day-by-day record (D1-D6 +
real-scale validation + G1/G1P/G2 pointer at the bottom).
## Service inventory

| Bin | Port | Role |
|---|---|---|
| gateway | 3110 | Reverse proxy fronting all backing services |
| storaged | 3211 | Object I/O over S3 (MinIO in dev) |
| catalogd | 3212 | Parquet manifest registry, ADR-020 idempotency |
| ingestd | 3213 | CSV → Parquet → register loop |
| queryd | 3214 | DuckDB SELECT over registered Parquets via httpfs |
| vectord | 3215 | HNSW vector search (+ optional persistence to storaged) |
| embedd | 3216 | Text → vector via Ollama (default nomic-embed-text 768-d) |
| mcpd | stdio | Model Context Protocol server (Claude Desktop / Code consumers) |
## MCP server

`bin/mcpd` exposes Lakehouse capabilities as MCP tools over stdio:
`list_datasets`, `get_manifest`, `query_sql`, `embed_text`, `search_vectors`.
All tools proxy to the gateway, so the gateway must be up first.
Wire into Claude Desktop / Claude Code by adding to the MCP config:
    {
      "mcpServers": {
        "lakehouse": {
          "command": "/path/to/golangLAKEHOUSE/bin/mcpd",
          "args": ["--gateway", "http://127.0.0.1:3110"]
        }
      }
    }
Replaces the Bun mcp-server.ts MCP-tool surface from the Rust system.
HTTP demo routes (the staffing co-pilot UI) stay Bun until G5.
## Acceptance smokes

    scripts/d1_smoke.sh   # 5-binary skeleton + chi /health + gateway proxy probes
    scripts/d2_smoke.sh   # storaged GET/PUT/LIST/DELETE + 256 MiB cap + concurrency cap
    scripts/d3_smoke.sh   # catalogd register/manifest/list + rehydrate-across-restart
    scripts/d4_smoke.sh   # ingestd CSV → Parquet round-trip + schema-drift 409
    scripts/d5_smoke.sh   # queryd DuckDB SELECT through httpfs over MinIO
    scripts/d6_smoke.sh   # full ingest → query through gateway only
    scripts/g1_smoke.sh   # vectord HNSW recall + dim mismatch + duplicate-create 409
    scripts/g1p_smoke.sh  # vectord state survives kill+restart via storaged
    scripts/g2_smoke.sh   # embed → vectord add → search round-trip

Or run the full gate via the task runner (see below):

    just verify  # vet + tests + 9 smokes; ~33s wall
## Task runner

    just                # show available recipes
    just verify         # full Sprint 0 gate (vet + tests + 9 smokes)
    just smoke <day>    # single smoke (d1..d6, g1, g1p, g2)
    just doctor         # check cold-start deps; --json for CI
    just install-hooks  # install pre-push hook that runs just verify

After a fresh clone, run `just install-hooks` once so `git push` is
gated on the same green chain that ran here. The hook lives in
`.git/hooks/pre-push` (not tracked; recreated by the recipe).
## Cold-start dependencies

- Go 1.25+ at `/usr/local/go/bin` (arrow-go pulled the 1.25 floor)
- `gcc` + `libc-dev` for the DuckDB cgo binding (ADR-001 §1.1)
- `just` task runner (`apt install just` on Debian 13+)
- MinIO running on `:9000` with bucket `lakehouse-go-primary`
- Ollama running on `:11434` with `nomic-embed-text` loaded (G2)
- `/etc/lakehouse/secrets-go.toml` with `[s3.primary]` credentials
  (storaged + queryd both read this)

`just doctor` probes all of the above and reports the fix command
for each missing dep. CI / scripts can use `just doctor --json`.
## Layout

    docs/      Direction + spec + ADRs + day-by-day
    cmd/       One main package per binary
    internal/  Shared packages — storeclient, catalogclient, secrets,
               shared, embed, gateway, plus per-service implementation
               packages
    scripts/   Smokes + ancillary tooling
## Reading order

1. `docs/PRD.md` — what we're building and why
2. `docs/SPEC.md` — how, per-component
3. `docs/DECISIONS.md` — ADRs (ADR-001 foundational)
4. `docs/PHASE_G0_KICKOFF.md` — day-by-day from D1 through G2
5. `docs/RUST_PATHWAY_MEMORY_NOTE.md` — historical reference for the Rust era's pathway memory (not migrated, by ADR-001 #5)
## Predecessor
The Rust Lakehouse this rewrite supersedes lives at
git.agentview.dev/profit/lakehouse. It remains the live system
serving devop.live/lakehouse/ until this Go implementation reaches
feature parity per docs/SPEC.md §7. Then Rust enters
maintenance-only mode.