root fa56134b90 ADR-003 wiring: Bearer token + IP allowlist middleware
Implements the auth posture from ADR-003 (commit 0d18ffa). Two
independent layers — Bearer token (constant-time compare via
crypto/subtle) and IP allowlist (CIDR set) — composed in shared.Run
so every binary inherits the same gate without per-binary wiring.

Together with the bind-gate from commit 6af0520, this mechanically
closes audit risks R-001 + R-007:
  - non-loopback bind without auth.token = startup refuse
  - non-loopback bind WITH auth.token + override env = allowed
  - loopback bind = all gates open (G0 dev unchanged)

internal/shared/auth.go (NEW)
  RequireAuth(cfg AuthConfig) returns chi-compatible middleware.
  Empty Token + empty AllowedIPs → pass-through (G0 dev mode).
  Token-only → 401 Bearer mismatch.
  AllowedIPs-only → 403 source IP not in CIDR set.
  Both → both gates apply.
  /health bypasses both layers (load-balancer / liveness probes
  shouldn't carry tokens).

  CIDR parsing pre-runs at boot; a bare IP (no /N) is treated as /32
  (or /128 for IPv6). Invalid entries log a warning and are dropped:
  fail loud but not fatal, so a typo doesn't kill the binary.
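The boot-time normalization could look roughly like this — a sketch only; the function name and log wording are illustrative, not the actual auth.go identifiers:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

// parseAllowlist sketches the boot-time pre-parse: bare IPs are widened
// to single-host CIDRs (/32 IPv4, /128 IPv6); invalid entries warn and
// drop instead of aborting startup.
func parseAllowlist(entries []string) []*net.IPNet {
	var nets []*net.IPNet
	for _, e := range entries {
		s := e
		if !strings.Contains(s, "/") {
			if strings.Contains(s, ":") {
				s += "/128" // bare IPv6
			} else {
				s += "/32" // bare IPv4
			}
		}
		_, n, err := net.ParseCIDR(s)
		if err != nil {
			log.Printf("warn: dropping invalid allowlist entry %q: %v", e, err)
			continue
		}
		nets = append(nets, n)
	}
	return nets
}

func main() {
	for _, n := range parseAllowlist([]string{"10.0.0.0/8", "192.168.1.7", "::1", "not-an-ip"}) {
		fmt.Println(n.String())
	}
}
```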

  Token comparison: subtle.ConstantTimeCompare on the full
  "Bearer <token>" wire-format string. A length mismatch returns 0
  (per the stdlib spec), so wrong-length tokens reject without a
  timing leak. The pre-encoded comparison slice is stored in the
  middleware closure, leaving one allocation per request.
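A minimal sketch of the wire-format comparison, assuming the expected "Bearer <token>" bytes are encoded once up front (in the real code that slice lives in the middleware closure):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// authorized compares the raw Authorization header against the
// pre-encoded "Bearer <token>" bytes. ConstantTimeCompare returns 0
// on length mismatch, so wrong-length inputs reject uniformly.
func authorized(header string, want []byte) bool {
	return subtle.ConstantTimeCompare([]byte(header), want) == 1
}

func main() {
	want := []byte("Bearer s3cr3t") // encoded once, not per request
	fmt.Println(authorized("Bearer s3cr3t", want)) // true
	fmt.Println(authorized("Bearer wrong", want))  // false: length differs, still 0
	fmt.Println(authorized("s3cr3t", want))        // false: raw token, no Bearer prefix
}
```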

  Source-IP extraction prefers net.SplitHostPort, falling back to
  RemoteAddr as-is for httptest compatibility. X-Forwarded-For
  support is a follow-up for when a trusted proxy fronts the gateway
  (config knob TBD per ADR-003 §"Future").
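The extraction order could be sketched as follows (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// sourceIP prefers net.SplitHostPort, falling back to the raw
// RemoteAddr so a bare-IP value (as some httptest setups produce)
// still resolves.
func sourceIP(remoteAddr string) string {
	if host, _, err := net.SplitHostPort(remoteAddr); err == nil {
		return host
	}
	return remoteAddr
}

func main() {
	fmt.Println(sourceIP("192.0.2.10:54321")) // host:port → host
	fmt.Println(sourceIP("[::1]:8080"))       // bracketed IPv6 → ::1
	fmt.Println(sourceIP("192.0.2.10"))       // no port → fallback as-is
}
```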

internal/shared/server.go
  Run's signature gained an AuthConfig parameter (4th arg).
  /health stays mounted on the outer router (public).
  Registered routes go inside chi.Group with RequireAuth applied —
  empty config = transparent group.
  Added requireAuthOnNonLoopback startup check: non-loopback bind
  with empty Token = refuse to start (cites R-001 + R-007 by name).
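A sketch of the startup refusal, under the assumption that "loopback" means a loopback IP or the literal localhost (the helper name is illustrative, not the real requireAuthOnNonLoopback):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// checkBind refuses a non-loopback bind when no auth token is set,
// before any listener opens. Loopback binds stay gate-free (G0 dev).
func checkBind(bind, token string) error {
	host := bind
	if h, _, err := net.SplitHostPort(bind); err == nil {
		host = h
	}
	ip := net.ParseIP(host)
	loopback := host == "localhost" || (ip != nil && ip.IsLoopback())
	if !loopback && strings.TrimSpace(token) == "" {
		return fmt.Errorf("refusing non-loopback bind %s without auth.token (R-001, R-007)", bind)
	}
	return nil
}

func main() {
	fmt.Println(checkBind("127.0.0.1:3110", ""))      // <nil>: loopback, gates open
	fmt.Println(checkBind("0.0.0.0:3110", "") != nil) // true: refused at startup
	fmt.Println(checkBind("0.0.0.0:3110", "s3cr3t"))  // <nil>: token present
}
```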

internal/shared/config.go
  AuthConfig type added with TOML tags. Fields: Token, AllowedIPs.
  Composed into Config under [auth].
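The shape described could look like this; the toml tag spellings here are assumptions, not copied from config.go:

```go
package main

import "fmt"

// AuthConfig sketched from the description above. Tag spellings
// (token / allowed_ips) are illustrative assumptions.
type AuthConfig struct {
	Token      string   `toml:"token"`
	AllowedIPs []string `toml:"allowed_ips"`
}

func main() {
	cfg := AuthConfig{} // zero value = empty struct = middleware no-op
	fmt.Println(cfg.Token == "" && len(cfg.AllowedIPs) == 0)
}
```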

cmd/<svc>/main.go × 7 (catalogd, embedd, gateway, ingestd, queryd,
storaged, vectord; mcpd is unaffected — stdio doesn't bind a port)
  Each call site adds cfg.Auth as the 4th arg to shared.Run. No
  other changes — middleware applies via shared.Run uniformly.

internal/shared/auth_test.go (12 test funcs)
  Empty config pass-through, missing-token 401, wrong-token 401,
  correct-token 200, raw-token-without-Bearer-prefix 401, /health
  always public, IP allowlist allow + reject, bare IP /32, both
  layers when both configured, invalid CIDR drop-with-warn, RemoteAddr
  shape extraction. The constant-time comparison is verified by
  inspection (comments in auth.go) plus the existence of the
  passthrough test (length-mismatch case).

Verified:
  go test -count=1 ./internal/shared/   — all green (was 21, now 33 funcs)
  just verify                           — vet + test + 9 smokes, ~33s
  just proof contract                   — 53/0/1 unchanged

Smokes + proof harness keep working without any token configuration:
default Auth is empty struct → middleware is no-op → existing tests
pass unchanged. To exercise the gate, operators set [auth].token in
lakehouse.toml (or, per the "future" note in the ADR, via env var).
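An illustrative [auth] block for lakehouse.toml; the key names assume the TOML tags are token and allowed_ips, which should be checked against config.go:

```toml
[auth]
token       = "s3cr3t"                     # enables the Bearer gate
allowed_ips = ["127.0.0.1", "10.0.0.0/8"]  # enables the CIDR gate; bare IPs become /32
```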

Closes audit findings:
  R-001 HIGH — fully mechanically closed (was: partial via bind gate)
  R-007 MED  — fully mechanically closed (was: design-only ADR-003)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:11:34 -05:00

golangLAKEHOUSE

Go reimplementation of the Lakehouse — a versioned knowledge substrate for staffing analytics + local AI workloads.

Status

Phase G0 complete, with G1/G1P/G2 shipped on top. Eight binaries in total (the original six plus vectord and embedd), fronted by a single gateway. Acceptance smokes green for D1-D6 + G1 + G1P + G2.

End-to-end staffing co-pilot pipeline functional through the gateway:

text → /v1/embed → /v1/vectors/index/<name>/add
text → /v1/embed → /v1/vectors/index/<name>/search → top-K hits

Plus the SQL path:

CSV  → /v1/ingest    (parses, writes Parquet via storaged, registers
                      manifest with catalogd)
SQL  → /v1/sql       (DuckDB over the registered Parquets via httpfs)

See docs/PHASE_G0_KICKOFF.md for the day-by-day record (D1-D6 + real-scale validation + G1/G1P/G2 pointer at the bottom).

Service inventory

Bin       Port    Role
gateway   3110    Reverse proxy fronting all backing services
storaged  3211    Object I/O over S3 (MinIO in dev)
catalogd  3212    Parquet manifest registry, ADR-020 idempotency
ingestd   3213    CSV → Parquet → register loop
queryd    3214    DuckDB SELECT over registered Parquets via httpfs
vectord   3215    HNSW vector search (+ optional persistence to storaged)
embedd    3216    Text → vector via Ollama (default nomic-embed-text 768-d)
mcpd      stdio   Model Context Protocol server (Claude Desktop / Code consumers)

MCP server

bin/mcpd exposes Lakehouse capabilities as MCP tools over stdio: list_datasets, get_manifest, query_sql, embed_text, search_vectors. All tools proxy to the gateway, so the gateway must be up first.

Wire into Claude Desktop / Claude Code by adding to the MCP config:

{
  "mcpServers": {
    "lakehouse": {
      "command": "/path/to/golangLAKEHOUSE/bin/mcpd",
      "args": ["--gateway", "http://127.0.0.1:3110"]
    }
  }
}

Replaces the Bun mcp-server.ts MCP-tool surface from the Rust system. HTTP demo routes (the staffing co-pilot UI) stay Bun until G5.

Acceptance smokes

scripts/d1_smoke.sh   # 5-binary skeleton + chi /health + gateway proxy probes
scripts/d2_smoke.sh   # storaged GET/PUT/LIST/DELETE + 256 MiB cap + concurrency cap
scripts/d3_smoke.sh   # catalogd register/manifest/list + rehydrate-across-restart
scripts/d4_smoke.sh   # ingestd CSV → Parquet round-trip + schema-drift 409
scripts/d5_smoke.sh   # queryd DuckDB SELECT through httpfs over MinIO
scripts/d6_smoke.sh   # full ingest → query through gateway only
scripts/g1_smoke.sh   # vectord HNSW recall + dim mismatch + duplicate-create 409
scripts/g1p_smoke.sh  # vectord state survives kill+restart via storaged
scripts/g2_smoke.sh   # embed → vectord add → search round-trip

Or run the full gate via the task runner (see below):

just verify     # vet + tests + 9 smokes; ~33s wall

Task runner

just                 # show available recipes
just verify          # full Sprint 0 gate (vet + tests + 9 smokes)
just smoke <day>     # single smoke (d1..d6, g1, g1p, g2)
just doctor          # check cold-start deps; --json for CI
just install-hooks   # install pre-push hook that runs just verify

After a fresh clone, run just install-hooks once so git push is gated on the same green chain that ran here. Hook lives in .git/hooks/pre-push (not tracked; recreated by the recipe).

Cold-start dependencies

  • Go 1.25+ at /usr/local/go/bin (arrow-go pulled the 1.25 floor)
  • gcc + libc-dev for the DuckDB cgo binding (ADR-001 §1.1)
  • just task runner (apt install just on Debian 13+)
  • MinIO running on :9000 with bucket lakehouse-go-primary
  • Ollama running on :11434 with nomic-embed-text loaded (G2)
  • /etc/lakehouse/secrets-go.toml with [s3.primary] credentials (storaged + queryd both read this)

just doctor probes all of the above and reports the fix command for each missing dep. CI / scripts can use just doctor --json.

Layout

docs/                         Direction + spec + ADRs + day-by-day
cmd/                          One main package per binary
internal/                     Shared packages — storeclient, catalogclient,
                                secrets, shared, embed, gateway, plus
                                per-service implementation packages
scripts/                      Smokes + ancillary tooling

Reading order

  1. docs/PRD.md — what we're building and why
  2. docs/SPEC.md — how, per-component
  3. docs/DECISIONS.md — ADRs (ADR-001 foundational)
  4. docs/PHASE_G0_KICKOFF.md — day-by-day from D1 through G2
  5. docs/RUST_PATHWAY_MEMORY_NOTE.md — historical reference for the Rust era's pathway memory (not migrated, by ADR-001 #5)

Predecessor

The Rust Lakehouse this rewrite supersedes lives at git.agentview.dev/profit/lakehouse. It remains the live system serving devop.live/lakehouse/ until this Go implementation reaches feature parity per docs/SPEC.md §7. Then Rust enters maintenance-only mode.
