11 Commits

Author SHA1 Message Date
root
f9e72412c1 validatord: /v1/validate + /v1/iterate HTTP surface (port 3221)
Closes the last "Go primary" backlog item in
docs/ARCHITECTURE_COMPARISON.md. Go now owns the entire validator
path end-to-end — no Rust dependency remains in the staffing safety net.

Architecture: cmd/validatord on :3221 hosts both endpoints. Calls
chatd directly for the iterate loop's LLM hop (avoiding the gateway
self-loopback the Rust shape used). Gateway proxies /v1/validate +
/v1/iterate to validatord.

What's in:
- internal/validator/playbook.go — 3rd validator kind (PRD checks:
  fill: prefix, endorsed_names ≤ target_count×2, fingerprint required)
- internal/validator/lookup_jsonl.go — JSONL roster loader (Parquet
  deferred; producer one-liner documented in package comment)
- internal/validator/iterate.go — ExtractJSON helper + Iterate
  orchestrator with ChatCaller seam for unit tests
- cmd/validatord/main.go — HTTP routes, roster load, chat client
- internal/shared/config.go — ValidatordConfig + gateway URL field
- lakehouse.toml — [validatord] section
- cmd/gateway/main.go — proxy routes for /v1/validate + /v1/iterate

Smoke: 5/5 PASS through gateway :3110:
  ✓ playbook happy path
  ✓ playbook missing fingerprint → 422 schema/fingerprint
  ✓ phantom candidate W-PHANTOM → 422 consistency
  ✓ unknown kind → 400
  ✓ roster loaded with 3 records

go test ./... green across 33 packages.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 03:53:20 -05:00
root
05273ac06b phase 4: chatd — multi-provider LLM dispatcher (ollama / cloud / openrouter / opencode / kimi)
new cmd/chatd on :3220 routes /v1/chat to the right provider based
on model-name prefix or :cloud suffix. closes the architectural gap
named in lakehouse.toml [models]: tiers map to model IDs, but until
phase 4 there was no service that could actually CALL those models
from go.

routing rules (registry.Resolve):
  ollama/<m>          → local Ollama (prefix stripped)
  ollama_cloud/<m>    → Ollama Cloud
  <m>:cloud           → Ollama Cloud (suffix variant — kimi-k2.6:cloud)
  openrouter/<v>/<m>  → OpenRouter (prefix stripped, OpenAI-compat)
  opencode/<m>        → OpenCode unified Zen+Go
  kimi/<m>            → Kimi For Coding (api.kimi.com/coding/v1)
  bare names          → local Ollama (default)

provider implementations:
- internal/chat/types.go      Provider interface, Request/Response, errors
- internal/chat/registry.go   prefix + :cloud suffix dispatch
- internal/chat/ollama.go     local Ollama via /api/chat (think=false default)
- internal/chat/ollama_cloud.go  Ollama Cloud via /api/generate (Bearer auth)
- internal/chat/openai_compat.go shared OpenAI Chat Completions for the
                                 OpenRouter/OpenCode/Kimi family
- internal/chat/builder.go    BuildRegistry from BuilderInput;
                              ResolveKey reads env then .env file fallback

config:
- ChatdConfig in internal/shared/config.go with bind, ollama_url,
  per-provider key env names + .env fallback paths, timeout
- Gateway gains chatd_url + /v1/chat + /v1/chat/* routes
- lakehouse.toml [chatd] block with /etc/lakehouse/<provider>.env defaults

tests (19 in internal/chat):
- registry: prefix + :cloud + errors + telemetry + provider listing
- ollama: happy path + prefix strip + format=json + 500 mapping +
  flatten_messages
- openai_compat: happy path + format=json + 429 mapping + zero-choices

think=false default in ollama + ollama_cloud — local hot path skips
reasoning, low-budget callers (the playbook_lift judge at max_tokens=10)
get direct answers instead of empty content + done_reason=length.
proven via chatd_smoke acceptance.

acceptance gate: scripts/chatd_smoke.sh — 6/6 PASS:
1. /v1/chat/providers lists exactly registered providers (1 in dev mode)
2. bare model → ollama default with content + token counts + latency
3. explicit ollama/<m> → prefix stripped at upstream
4. <m>:cloud without ollama_cloud registered → 404 (no silent fall-through)
5. unknown/<m> → falls through to default → upstream 502 (no prefix rewrite)
6. missing model field → 400

just verify: PASS (vet + 30 packages × short tests + 9 smokes).
chatd_smoke is a domain smoke (not in just verify, mirrors matrix /
observer / pathway pattern).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:08:29 -05:00
root
bc9ab93afe H: observerd — autonomous-iteration witness loop (SPEC §2 port)
Port of the load-bearing pieces of mcp-server/observer.ts (852 lines
of TS in the Rust system) per SPEC §2's named target. Implements PRD
loop 3 ("Observer loop — watches each run, refines configs").

Routes (all under /v1/observer/* via gateway):
  GET  /observer/health   — liveness
  GET  /observer/stats    — total / successes / failures /
                             by_source / recent_scenario_ops
                             (matches Rust JSON shape exactly)
  POST /observer/event    — record one ObservedOp; auto-defaults
                             timestamp + source, validates required
                             fields (endpoint), persists to JSONL,
                             appends to ring buffer

Architecture:
  - internal/observer/types.go — ObservedOp model + Source taxonomy
    (mcp / scenario / langfuse / overseer_correction). Mirrors the
    Rust shape so JSON round-trips during cutover.
  - internal/observer/store.go — Store + Persistor. Ring buffer cap
    matches Rust's 2000; recent_scenarios cap matches Rust's 10.
    Same persist-then-apply order as pathwayd; same corruption-
    tolerant replay (skip malformed lines + warn).
  - cmd/observerd — :3219 HTTP service, fronted by gateway as
    /v1/observer/*.
  - lakehouse.toml + DefaultConfig — [observerd] block matches the
    pathwayd pattern (Bind + PersistPath; empty path = ephemeral).

Tests + smoke (all PASS):
  - 7 unit tests in store_test.go: validation, default fields,
    stats aggregation, recent-scenarios cap + ordering, ring-buffer
    rollover at cap, JSONL round-trip persistence, corruption-
    tolerant replay (1 valid + 1 corrupt + 1 valid → 2 applied)
  - scripts/observer_smoke.sh: 4 assertions through gateway —
    record 5 events (3 ok / 2 fail across 2 sources), stats
    aggregates correctly, empty-endpoint→400, kill+restart preserves
    via JSONL replay (5 ops, 3 ok, 2 err survive)

Deferred (named in package + cmd doc, not in this commit):
  - POST /observer/review (cloud-LLM hand-review fall-back). The
    heuristic-only path could land cheaply but the productized
    cloud path (qwen3-coder fall-back) is multi-day port.
  - Background loops: analyzeErrors, consolidatePlaybooks,
    tailOverseerCorrections (read overseer_corrections.jsonl into
    the ring buffer once per cycle).
  - escalateFailureClusterToLLMTeam (failure clustering trigger
    that posts to LLM Team's /api/run with code_review mode).

/relevance is NOT duplicated — already ported in 9588bd8 to
internal/matrix/relevance.go (component 3 of SPEC §3.4).

16-smoke regression all green (D1-D6, G1, G1P, G2, storaged_cap,
pathway, matrix, relevance, downgrade, playbook, observer).
13 binaries now: gateway, storaged, catalogd, ingestd, queryd,
vectord, embedd, pathwayd, matrixd, observerd, mcpd, fake_ollama
(plus catalogd-only test build).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:18:02 -05:00
root
c1d96b7b60 matrixd: multi-corpus retrieve+merge — SPEC §3.4 component 2 of 5
Lands the matrix indexer's first piece per docs/SPEC.md §3.4:
multi-corpus retrieve+merge with corpus attribution per result.
Future components (relevance filter, downgrade gate, learning-loop
integration) layer on top of this surface.

Architecture:
  - internal/matrix/retrieve.go — Retriever takes (query, corpora,
    k, per_corpus_k), parallel-fans across vectord indexes, merges
    by distance ascending, preserves corpus origin per hit
  - cmd/matrixd — HTTP service on :3217, fronts /v1/matrix/*
  - gateway proxy + [matrixd] config + lakehouse.toml entry
  - Either query_text (matrix calls embedd) or query_vector
    (caller pre-embedded) — vector takes precedence if both set

Error policy: fail-loud on any corpus error. Silent partial returns
would lie about coverage, defeating the matrix's whole purpose.
Bubbles vectord errors as 502 (upstream), validation as 400.

Smoke (scripts/matrix_smoke.sh, 6 assertions PASS first try):
  - /matrix/corpora lists indexes
  - Multi-corpus search returns hits from BOTH corpora
  - Top hit is the globally-closest across all corpora
    (b-near beats a-near at distance 0.05 vs 0.1 — proves merge)
  - Metadata round-trips through the merge
  - Distances ascending in result list
  - Negative paths: empty corpora → 400, missing corpus → 502,
    no query → 400

12-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:39:17 -05:00
root
afbb506dbc pathwayd: HTTP service over internal/pathway · 11/11 smoke gate
Network-callable Mem0-style trace memory at :3217, fronted by gateway
/v1/pathway/*. Closes the ADR-004 wire-up: store substrate landed in
2a6234f, this lands the HTTP surface + [pathwayd] config + acceptance
gate.

Smoke proves the architecturally distinctive properties: Revise →
History walks the predecessor chain backward (audit trail), Retire
excludes from Search default but stays Get-able, AddIdempotent bumps
replay_count without replacing — and all survive kill+restart via
JSONL log replay.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 17:49:42 -05:00
root
fa56134b90 ADR-003 wiring: Bearer token + IP allowlist middleware
Implements the auth posture from ADR-003 (commit 0d18ffa). Two
independent layers — Bearer token (constant-time compare via
crypto/subtle) and IP allowlist (CIDR set) — composed in shared.Run
so every binary inherits the same gate without per-binary wiring.

Together with the bind-gate from commit 6af0520, this mechanically
closes audit risks R-001 + R-007:
  - non-loopback bind without auth.token = startup refuse
  - non-loopback bind WITH auth.token + override env = allowed
  - loopback bind = all gates open (G0 dev unchanged)

internal/shared/auth.go (NEW)
  RequireAuth(cfg AuthConfig) returns chi-compatible middleware.
  Empty Token + empty AllowedIPs → pass-through (G0 dev mode).
  Token-only → 401 Bearer mismatch.
  AllowedIPs-only → 403 source IP not in CIDR set.
  Both → both gates apply.
  /health bypasses both layers (load-balancer / liveness probes
  shouldn't carry tokens).

  CIDR parsing pre-runs at boot; a bare IP (no /N) is treated as /32
  (or /128 for IPv6). Invalid entries log a warning and are dropped —
  fail-loud-but-not-fatal, so a typo doesn't kill the binary.

  Token comparison: subtle.ConstantTimeCompare on the full
  "Bearer <token>" wire-format string. Length-mismatch returns 0
  (per stdlib spec), so wrong-length tokens reject without timing
  leak. Pre-encoded comparison slice stored in the middleware
  closure — one allocation per request.

  Source-IP extraction prefers net.SplitHostPort, falling back to
  RemoteAddr as-is for httptest compatibility. X-Forwarded-For
  support is a follow-up for when a trusted proxy fronts the gateway
  (config knob TBD per ADR-003 §"Future").

internal/shared/server.go
  Run signature: gained AuthConfig parameter (4th arg).
  /health stays mounted on the outer router (public).
  Registered routes go inside chi.Group with RequireAuth applied —
  empty config = transparent group.
  Added requireAuthOnNonLoopback startup check: non-loopback bind
  with empty Token = refuse to start (cites R-001 + R-007 by name).

internal/shared/config.go
  AuthConfig type added with TOML tags. Fields: Token, AllowedIPs.
  Composed into Config under [auth].

cmd/<svc>/main.go × 7 (catalogd, embedd, gateway, ingestd, queryd,
storaged, vectord; mcpd is unaffected — stdio doesn't bind a port)
  Each call site adds cfg.Auth as the 4th arg to shared.Run. No
  other changes — middleware applies via shared.Run uniformly.

internal/shared/auth_test.go (12 test funcs)
  Empty config pass-through, missing-token 401, wrong-token 401,
  correct-token 200, raw-token-without-Bearer-prefix 401, /health
  always public, IP allowlist allow + reject, bare IP /32, both
  layers when both configured, invalid CIDR drop-with-warn, RemoteAddr
  shape extraction. The constant-time comparison is verified by
  inspection (comments in auth.go) plus the existence of the
  passthrough test (length-mismatch case).

Verified:
  go test -count=1 ./internal/shared/  — all green (was 21, now 33 funcs)
  just verify                            — vet + test + 9 smokes 33s
  just proof contract                    — 53/0/1 unchanged

Smokes + proof harness keep working without any token configuration:
default Auth is empty struct → middleware is no-op → existing tests
pass unchanged. To exercise the gate, operators set [auth].token in
lakehouse.toml (or, per the "future" note in the ADR, via env var).

Closes audit findings:
  R-001 HIGH — fully mechanically closed (was: partial via bind gate)
  R-007 MED  — fully mechanically closed (was: design-only ADR-003)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:11:34 -05:00
root
0f79bce948 Batch 3: cmd/<bin>/main_test.go × 6 — closes R-005
Adds main_test.go for each of the 6 cmd binaries that lacked them
(storaged already had main_test.go; that's where the pattern came
from). Each test file focuses on the cmd-specific surface — route
mounts, body caps, decode/validation paths — without re-testing
internal package logic that's covered elsewhere.

cmd/catalogd/main_test.go — 6 funcs
  TestRoutesMounted: chi.Walk asserts /catalog/{register,manifest/*,list}
  TestHandleRegister_BodyTooLarge: 5 MiB body → 4xx
  TestHandleRegister_MalformedJSON: 400
  TestHandleRegister_EmptyName_400: ErrEmptyName surfaces as 400
  TestHandleGetManifest_404 + TestHandleList_EmptyShape

cmd/embedd/main_test.go — 8 funcs
  stubProvider implements embed.Provider deterministically
  TestRoutesMounted, MalformedJSON_400, EmptyTextRejected_400 (per
    scrum O-W3), UpstreamError_502 (provider error → 502, not 500),
    HappyPath_ProviderEcho, BodyTooLarge (4xx range), TestItoa
    (covers the no-strconv helper)

cmd/gateway/main_test.go — 4 funcs
  TestMustParseUpstream_HappyPaths: 3 valid URLs
  TestMustParseUpstream_FailureExits: re-execs the test binary in a
    subprocess with env flag (standard pattern for testing os.Exit
    callers); subprocess invokes mustParseUpstream("127.0.0.1:3211")
    [missing scheme]; expects exit non-zero. Same pattern for garbage.
  TestUpstreamConfigKeys_DocumentedShape: locks the 6 _url keys

cmd/ingestd/main_test.go — 7 funcs
  Stubs both storaged and catalogd via httptest.Server so the cmd
  layer can be exercised without bringing the full chain up.
  TestHandleIngest_MissingNameQueryParam: 400 with "name" in body
  TestHandleIngest_MalformedMultipart: 400
  TestHandleIngest_MissingFormFile: 400 (valid multipart, wrong field)
  TestHandleIngest_BodyTooLarge: 4xx
  TestEscapeKeyPath: 6-case URL-escape table (apostrophe, space, etc.)
  TestParquetKeyPath_Format: locks the datasets/<n>/<fp>.parquet shape
    per scrum C-DRIFT (any rename breaks idempotent re-ingest)

cmd/queryd/main_test.go — 6 funcs
  Tests pre-DB paths (decode, body cap, empty SQL); db.QueryContext
  itself needs DuckDB so it's covered by GOLAKE-040 in the proof
  harness, not unit tests. handlers.db = nil here is intentional.
  TestHandleSQL_EmptySQL_400: 3 cases (empty, whitespace, mixed-WS)
  TestMaxSQLBodyBytes_Reasonable: locks the 64 KiB constant in a
    sane range so a refactor can't blow it open
  TestPrimaryBucket_Constant: locks "primary" — secrets lookup uses
    this; rename = silent secret-resolution failure at boot

cmd/vectord/main_test.go — 14 funcs
  All 6 routes verified mounted. handlers.persist = nil = pure
  in-memory mode; persistence is GOLAKE-070 in the proof harness.
  Coverage of every error branch in handleCreate/Add/Search/Delete:
    missing index → 404, dim mismatch → 400, empty items → 400,
    empty id → 400, malformed JSON → 400, body too large → 4xx,
    happy create → 201, happy list → 200.

One real finding caught during writing:
  Body-cap rejection is sometimes 413 (typed MaxBytesError survives
  unwrap) and sometimes 400 (decoder wraps it as a generic decode
  error). Both are valid client-error contracts; the contract isn't
  "exactly 413" but "fails loud as 4xx, never silent 200 or 5xx."
  Tests assert 4xx range. The proof harness's
  proof_assert_status_4xx already had this shape — just bringing
  the unit tests in line with it.

Verified:
  go test -count=1 -short ./cmd/...  — all 7 packages green
  just verify                         — vet + test + 9 smokes 35s

Closes audit risk R-005 (6/7 cmd/main.go untested). Combined with
the proof harness's wiring coverage, every cmd-level handler now
has both unit-test and integration-test coverage of the wiring
layer. R-005 → CLOSED.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:18:46 -05:00
root
9ee7fc5550 G2: embedd — text → vector via Ollama · 2 scrum fixes
Bridges the missing piece for the staffing co-pilot: text inputs to
vectord-shaped vectors. Standalone cmd/embedd on :3216 fronted by
gateway at /v1/embed. Pluggable embed.Provider interface (G2 ships
Ollama; OpenAI/Voyage swap in via the same interface in G3+).

Wire format:
  POST /v1/embed {"texts":[...], "model":"..."}  // model optional
  → 200 {"model","dimension","vectors":[[...]]}

Default model: nomic-embed-text (768-d). Ollama returns float64;
provider converts to float32 at the boundary so vectors flow through
vectord/HNSW without re-conversion.

Acceptance smoke 5/5 PASS — including the architectural payoff:
end-to-end embed → vectord add → search by re-embedded text returns
recall=1 at distance 5.96e-8 (float32 precision noise on identical
unit vectors). The staffing co-pilot pipeline (text → vector →
similarity search) is now functional end-to-end.

All 9 smokes (D1-D6 + G1 + G1P + G2) PASS deterministically.

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                    0 BLOCK + 4 WARN + 3 INFO
  - Kimi K2-0905 (openrouter):              0 BLOCK + 2 WARN + 1 INFO
  - Qwen3-coder (openrouter):               "No BLOCKs" (3 tokens)

Fixed (2 — 1 convergent + 1 single-reviewer):
  C1 (Opus + Kimi convergent WARN): per-text 60s timeout × N-text
    batch was up to N×60s with no batch-level cap. One stuck Ollama
    call would stall the whole handler indefinitely. Fix:
    context.WithTimeout(r.Context(), 60s) wraps the entire batch.
  O-W3 (Opus WARN): empty strings in texts went to Ollama unchecked,
    producing version-dependent garbage. Fix: reject "" with 400 at
    the handler boundary so callers get a deterministic answer
    instead of an upstream-conditional 502.

Deferred (4): drainAndClose 64KiB cap (matches G0 pattern), no
concurrency limit on /embed (single-tenant G2), missing Accept
header (exotic-proxy concern), MaxBytesError string-match
redundancy (paranoia layer kept consistent across codebase).

Zero false positives this round — Qwen returned 3 tokens "No BLOCKs"
and the other two reviewers' findings were all real.

Setup confirmed: Ollama 0.21.0 on :11434 with nomic-embed-text loaded.
Per-text /api/embeddings used (forward-compat with 0.21+); newer
0.4+ /api/embed batch endpoint can swap in via the Provider interface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 01:42:27 -05:00
root
b8c072cf0b G1: vectord — HNSW vector search via coder/hnsw · 6 scrum fixes applied
First G1+ piece. Standalone vectord service with in-memory HNSW
indexes keyed by string IDs and optional opaque JSON metadata.
Wraps github.com/coder/hnsw v0.6.1 (pure Go, no cgo). New port
:3215 with /v1/vectors/* routed through gateway.

API:
  POST   /v1/vectors/index            create
  GET    /v1/vectors/index            list
  GET    /v1/vectors/index/{name}     get info
  DELETE /v1/vectors/index/{name}
  POST   /v1/vectors/index/{name}/add (batch)
  POST   /v1/vectors/index/{name}/search

Acceptance smoke 7/7 PASS — including recall=1 on inserted vector
w-042 (cosine distance 5.96e-8, float32 precision noise), 200-
vector batch round-trip, dim mismatch → 400, missing index → 404,
duplicate create → 409.

Two upstream library quirks worked around in the wrapper:
  1. coder/hnsw.Add panics with "node not added" on re-adding an
     existing key (the length invariant fires because the internal
     delete+re-add doesn't change Len). A pre-Delete fixes it for n>1.
  2. Delete of the LAST node leaves layers[0] non-empty but
     entryless; next Add SIGSEGVs in Dims(). Workaround: when
     re-adding to a 1-node graph, recreate the underlying graph
     fresh via resetGraphLocked().

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                 0 BLOCK + 4 WARN + 3 INFO
  - Kimi K2-0905 (openrouter):           2 BLOCK + 2 WARN + 1 INFO
  - Qwen3-coder (openrouter):            "No BLOCKs" (4 tokens)

Fixed (4 real + 2 cleanup):
  O-W1: Lookup returned the raw []float32 from coder/hnsw — caller
    mutation would corrupt the index. Now copies before return.
  O-W3: NaN/Inf vectors poison HNSW (distance comparisons return
    false for both < and >, breaking heap invariants). Zero-norm
    under cosine produces NaN. Now validated at Add time.
  K-B1: Re-adding with nil metadata silently cleared the existing
    entry — JSON-omitted "metadata" field deserializes as nil,
    making upsert non-idempotent. Now nil = "leave alone"; explicit
    {} or Delete to clear.
  O-W4: Batch Add with mid-batch failure left items 0..N-1
    committed and item N rejected. Now pre-validates all IDs+dims
    before any Add.
  O-I1: jsonItoa hand-roll replaced with strconv.Itoa — no
    measured allocation win.
  O-I2: distanceFn re-resolved per Search → use stored i.g.Distance.

Dismissed (2 false positives):
  K-B2 "MaxBytesReader applied after full read" — false, applied
    BEFORE Decode in decodeJSON
  K-W1 "Search distances under read lock might see invalidated
    slices from concurrent Add" — false, RWMutex serializes
    write-lock during Add against read-lock during Search

Deferred (3): HTTP server timeouts (consistent G0 punt),
Content-Type validation (internal service behind gateway), Lookup
dim assertion (in-memory state can't drift).

The K-B1 finding is worth pausing on: nil metadata on re-add is
the kind of API ergonomics bug only a code-reading reviewer
catches — smoke would never detect it because the smoke always
sends explicit metadata. Three lines changed in Add; the resulting
API matches what callers actually expect.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:50:28 -05:00
root
b1d52306ad G0 D6: gateway reverse proxy fronting all 4 backing services · 2 scrum fixes · G0 COMPLETE
Last day of Phase G0. Gateway promotes the D1 stub endpoints into
real reverse-proxies on :3110 fronting storaged + catalogd + ingestd
+ queryd. /v1 prefix lives at the edge — internal services route on
/storage, /catalog, /ingest, /sql, with the prefix stripped by a
custom Director per Kimi K2's D1-plan finding.

Routes:
  /v1/storage/*  → storaged
  /v1/catalog/*  → catalogd
  /v1/ingest     → ingestd
  /v1/sql        → queryd

Acceptance smoke 6/6 PASS — every assertion goes through :3110, none
direct to backing services. Full ingest → storage → catalog → query
round-trip verified end-to-end. The smoke's "rows[0].name=Alice"
assertion is the architectural payoff: five binaries, six HTTP
routes, one round-trip through one edge.

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                      1 BLOCK + 2 WARN + 2 INFO
  - Kimi K2-0905 (openrouter):                1 BLOCK + 3 WARN + 1 INFO (3 false positives, all from one wrong TrimPrefix theory)
  - Qwen3-coder (openrouter):                 5 completion tokens — "No BLOCKs."

Fixed (2, both Opus single-reviewer):
  O-BLOCK: Director path stripping fails if upstream URL has a
    non-empty path. The default Director's singleJoiningSlash runs
    BEFORE the custom code, so an upstream like http://host/api
    produces /api/v1/storage/... after the join — then TrimPrefix("/v1")
    is a no-op because the string starts with /api. Fix: strip /v1
    BEFORE calling origDirector. New TestProxy_SubPathUpstream regression
    locks this in. Today: bare-host URLs only, dormant — but moving
    gateway behind a sub-path in prod would have silently 404'd.
  O-WARN2: url.Parse is permissive — typo "127.0.0.1:3211" (no scheme)
    parses fine, produces empty Host, every request 502s. mustParseUpstream
    fail-fast at startup with a clear message naming the offending
    config field.

Dismissed (3, all Kimi, same false TrimPrefix theory):
  K-BLOCK "TrimPrefix loops forever on //v1storage" — false, single
    check-and-trim, no loop
  K-WARN "no upper bound on repeated // removal" — same false theory
  K-WARN "goroutines leak if upstream parse fails while binaries
    running" — confused scope; binaries are separate OS processes
    launched by the smoke script

D1 smoke updated (post-D6): the 501 stub probes are gone (gateway no
longer stubs /v1/ingest and /v1/sql). Replaced with proxy probes that
verify gateway forwards malformed requests to ingestd and queryd. Launch
order changed from parallel to dep-ordered (storaged → catalogd →
ingestd → queryd → gateway) since catalogd's rehydrate now needs
storaged, queryd's initial Refresh needs catalogd.

All six G0 smokes (D1 through D6) PASS end-to-end after every fix
round. Phase G0 substrate is complete: 5 binaries, 6 routes, 25 fixes
applied across 6 days from cross-lineage review.

G1+ next: gRPC adapters, Lance/HNSW vector indices, Go MCP SDK port,
distillation rebuild, observer + Langfuse integration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:21:54 -05:00
Claw
1142f54f23 G0 D1 ships: skeleton + chi + /health × 5 binaries · acceptance gate PASSED
Phase G0 Day 1 executed end-to-end after a third-pass review by
qwen3-coder:480b consolidated all findings across Opus/Kimi/Qwen
lineages.

Cross-lineage review consolidation (3 model passes + 1 runtime pass):
- Opus 4.7: 9 findings · 7 fixed inline · 2 deferred
- Kimi K2.6: 2 BLOCKs (introduced by Opus fixes) · 2 fixed
- Qwen3-coder:480b: 2 WARNs · 1 fixed (D2.4 256 MiB cap + 4-slot
  semaphore on PUTs) · 1 deferred (Q2 view refresh batching)
- Runtime smoke: 1 finding (port 3100 collision with live Rust
  lakehouse) · fixed (Go dev ports shifted to 3110+)
- Total: 14 findings · 11 fixed · 3 deferred to G2

What landed in code:
- internal/shared/server.go — chi factory, slog JSON, /health,
  graceful shutdown via signal.NotifyContext
- internal/shared/config.go — TOML loader, DefaultConfig, -config flag
- cmd/{gateway,storaged,catalogd,ingestd,queryd}/main.go — five
  binaries, each ~30 lines using the shared factory
- lakehouse.toml — G0 dev defaults (3110-3214)
- scripts/d1_smoke.sh — repeatable smoke that exits 0 on PASS
- go.mod / go.sum — chi v5.2.5, pelletier/go-toml/v2 v2.3.0

Verified end-to-end via scripts/d1_smoke.sh:
- All 5 /health endpoints return 200 with correct service name
- Gateway /v1/ingest + /v1/sql stubs return 501 with X-Lakehouse-Stub
- Graceful shutdown logs cleanly on SIGTERM
- DuckDB cgo path verified separately (sql.Open("duckdb","") + ping)

D1 ACCEPTANCE GATE: PASSED.

Next: D2 — storaged S3 GET/PUT/LIST against MinIO.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 07:00:37 -05:00