16 Commits

root
a81291e38c proof harness Phase A: scaffolding + canary case green
Per docs/TEST_PROOF_SCOPE.md, building the claims-verification tier
above the smoke chain. This commit lays the scaffolding and proves
the orchestrator end-to-end with one canary case (00_health).

What landed:

  tests/proof/
    README.md             how to read a report, layout, modes
    claims.yaml           24 claims enumerated (GOLAKE-001..100)
    run_proof.sh          orchestrator with --mode {contract|integration|performance}
                          and --no-bootstrap / --regenerate-{rankings,baseline}
    lib/
      env.sh              service URLs, report dir, mode, git context
      http.sh             curl wrappers writing per-probe JSON + body + headers
      assert.sh           proof_assert_{eq,ne,contains,lt,gt,status,json_eq} +
                          proof_skip — each emits one JSONL record per call
      metrics.sh          start/stop timers, value capture, RSS sampling,
                          percentile compute (for Phase D)
    cases/
      00_health.sh        canary — gateway + 6 services /health → 200,
                          body identifies service, latency < 500ms (21 assertions)
    fixtures/
      csv/workers.csv     spec's 5-row deterministic CSV
      text/docs.txt       4 deterministic vector docs
      expected/queries.json  expected results for the 5 SQL assertions

Wired into the task runner:

  just proof contract       # canary only (this commit)
  just proof integration    # Phase C
  just proof performance    # Phase D

.gitignore: /tests/proof/reports/* with !.gitkeep — same pattern as
reports/scrum/_evidence/. Per-run output is a runtime artifact.

Specs landed alongside (J's drops):
  docs/TEST_PROOF_SCOPE.md           the harness contract this implements
  docs/CLAUDE_REFACTOR_GUARDRAILS.md process discipline this harness obeys

Verified end-to-end (cached binaries):
  just proof contract        wall < 2s, 21 pass / 0 fail / 0 skip
  just verify                wall 31s, vet + test + 9 smokes still green

Two bugs fixed during canary run, both in run_proof.sh aggregation:
- grep -c exits 1 on zero matches; the `|| echo 0` form concatenated
  "0\n0" and broke jq --argjson + integer comparison. Fixed via a
  _count helper that captures count-or-zero cleanly.
- per-case table iterated case scripts (filename-based) but cases
  write evidence under CASE_ID. Switched to JSONL-file iteration so
  multi-case scripts work and the mapping is faithful.

Phase B (contract cases) lands next: 05_embedding, 06_vector_add,
08_gateway_contracts, 09_failure_modes. Each sourcing the same lib
helpers and writing to the same report shape.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:08:51 -05:00
root
91edd43164 scrum audit: 5 reports under reports/scrum/ · score 35/60
Adapts docs/SCRUM.md framework (originally written for the
matrix-agent-validated repo) to the Go rewrite. Five deliverables:

  golang-lakehouse-scrum-test.md  top-line + scoring + verdict
  risk-register.md                12 findings, R-001..R-012
  claim-coverage-table.md         claim/test/risk for Sprint 2
  sprint-backlog.md               5 sprints, ~2 weeks of work
  acceptance-gates.md             DoD as runnable commands

Every claim cites file:line, command output, or "missing evidence."
Smoke chain ran clean (33s wall, all 9 PASS) and is captured in
reports/scrum/_evidence/smoke_chain.log (gitignored — runtime artifact).

Scoring:
  Reproducibility       7/10  9 smokes deterministic, no just/CI gate
  Test Coverage         6/10  internal/ packages tested, 6/7 cmd/ aren't
  Trust Boundary        7/10  escapes ok, zero auth, /sql is RCE-eq off-loopback
  Memory Correctness    3/10  pathway/playbook/observer not yet ported
  Deployment Readiness  4/10  no REPLICATION, no env template, no systemd
  Maintainability       8/10  no god-files, 7 lean binaries, ADRs current

Top risks:
  R-001 HIGH  queryd /sql + DuckDB + non-loopback bind = RCE-equivalent
  R-002 HIGH  internal/shared (server.go + config.go) zero tests
  R-003 HIGH  internal/storeclient zero tests, used by 2 services
  R-004 MED   9-smoke chain green but not gated (no justfile/hook)

The audit is the work; refactors come after. Sprint 0 owns coverage
+ CI gating; Sprint 1 owns trust-boundary decisions; Sprints 2-3 are
mostly design-bar work for unbuilt agent components.

.gitignore exception: /reports/* + !/reports/scrum/ keeps reports/
a runtime-artifact directory while exposing reports/scrum/ as
tracked documentation. Future audit passes will follow the same
pattern.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 04:51:47 -05:00
root
1f700e731d Staffing scale test: full 500K through gateway → embedd → vectord pipeline
scripts/staffing_500k/main.go: driver that reads workers_500k.csv,
embeds combined-text per worker via /v1/embed, adds to vectord index
"workers_500k", runs canonical staffing queries against the populated
index. Reproducible end-to-end test of the staffing co-pilot pipeline
at production scale.
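
For orientation, a minimal sketch of the driver's ingest loop, assuming
the gateway edge on :3110, an embed response shaped {"embedding":[...]},
and a vectord add route at /v1/vector/add; the add route, JSON field
names, and column choices are illustrative, not the shipped API:

  package main

  import (
      "bytes"
      "encoding/csv"
      "encoding/json"
      "fmt"
      "io"
      "log"
      "net/http"
      "os"
  )

  const gateway = "http://127.0.0.1:3110" // edge port from the G0 smokes

  // post sends a JSON body through the gateway and decodes a JSON reply.
  func post(path string, in, out any) error {
      body, _ := json.Marshal(in)
      resp, err := http.Post(gateway+path, "application/json", bytes.NewReader(body))
      if err != nil {
          return err
      }
      defer resp.Body.Close()
      if resp.StatusCode != http.StatusOK {
          return fmt.Errorf("%s: %s", path, resp.Status)
      }
      return json.NewDecoder(resp.Body).Decode(out)
  }

  func main() {
      f, err := os.Open("workers_500k.csv")
      if err != nil {
          log.Fatal(err)
      }
      defer f.Close()

      r := csv.NewReader(f)
      if _, err := r.Read(); err != nil { // skip the header row
          log.Fatal(err)
      }
      for {
          row, err := r.Read()
          if err == io.EOF {
              break
          }
          if err != nil {
              log.Fatal(err)
          }
          var emb struct {
              Embedding []float32 `json:"embedding"`
          }
          // Combined text per worker; which columns feed it is illustrative.
          text := map[string]string{"text": row[1] + " " + row[2]}
          if err := post("/v1/embed", text, &emb); err != nil {
              log.Printf("embed %s: %v", row[0], err)
              continue
          }
          add := map[string]any{"index": "workers_500k", "id": row[0], "vector": emb.Embedding}
          if err := post("/v1/vector/add", add, &map[string]any{}); err != nil {
              log.Printf("add %s: %v", row[0], err)
          }
      }
  }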

Run results (2026-04-29 ~02:30):
  500,000 vectors ingested in 35m 36s (~234/sec avg)
  vectord peak RSS 4.5 GB (~9 KB/vector incl. HNSW graph)
  Query latency: embed 40-59ms + search 1-3ms = ~50ms end-to-end
  GPU avg ~65% (Ollama not the bottleneck — vectord Add is)

Semantic recall on canonical queries:
  "electrician with industrial wiring": top 2 are literal Electricians (d=0.30)
  "CNC operator with first article": Assembler / Quality Techs (adjacent, d=0.24)
  "forklift driver OSHA-30": warehouse roles (d=0.33)
  "warehouse picker night shift bilingual": Material Handlers (d=0.31)
  "dental hygienist": Production Workers at d=0.49+ — correctly
    LOW-similarity, signals "no dental hygienists in this manufacturing
    dataset" rather than hallucinating a fake match.

Documented gaps:
  - storaged's 256 MiB PUT cap blocks single-file LHV1 persistence
    above ~150K vectors at d=768. Test ran with persistence disabled.
  - vectord Add is RWMutex-serialized — with GPU at 65% util this is
    the throughput cap. Concurrent Adds would be 2-3x faster but
    require careful audit of coder/hnsw thread-safety (G1 scrum
    documented two known quirks).

PHASE_G0_KICKOFF.md gains a "Staffing scale test" section with full
metrics + the list of gaps surfaced. The architectural payoff is real:
six binaries, one HTTP route, ~50ms from text query to top-K
semantically-relevant workers across 500K records.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 02:31:30 -05:00
root
0cb29cda15 docs: README + PHASE_G0_KICKOFF reflect post-G0 state (G1, G1P, G2)
README was stuck on "Pre-Phase G0, implementation has not started"
while we shipped through G2. Updated to reflect the current 7-binary
service inventory, the 9 acceptance smokes, the cold-start deps
(MinIO bucket, Ollama with nomic-embed-text, secrets-go.toml).

PHASE_G0_KICKOFF gains a "Post-G0 work" pointer at the end — a
brief table mapping each G1+/G2 commit to its smoke + scrum-fix
count. Full per-day detail stays in commit messages and the
project memory file.

No code changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 01:45:59 -05:00
root
d023b07b30 Real-scale validation post-G0: configurable ingest cap + workers_500k metrics
Validated G0 substrate against the production workers_500k.parquet
dataset (18 cols × 500,000 rows). Findings + one applied fix:

Finding #1 (FIXED): ingestd's hardcoded 256 MiB cap rejected the 500K
CSV (344 MiB) with 413. Cap fired correctly, no OOM. Extracted to
[ingestd].max_ingest_bytes config field; default 256 MiB, override
per deployment for known-large workloads. With cap bumped to 512 MiB,
500K ingest succeeds in 3.12s with ingestd peak RSS 209 MiB.
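
A sketch of how the knob plugs in; only the [ingestd].max_ingest_bytes
key is from this commit, the struct and middleware names are
illustrative:

  package ingestd

  import "net/http"

  type Config struct {
      MaxIngestBytes int64 `toml:"max_ingest_bytes"` // 0 means "use the default"
  }

  func (c Config) ingestCap() int64 {
      if c.MaxIngestBytes > 0 {
          return c.MaxIngestBytes
      }
      return 256 << 20 // default 256 MiB
  }

  // withIngestCap rejects oversized uploads: 413 up front when
  // Content-Length is known, MaxBytesReader as the guard for the rest.
  func withIngestCap(cfg Config, next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
          limit := cfg.ingestCap()
          if r.ContentLength > limit {
              http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
              return
          }
          r.Body = http.MaxBytesReader(w, r.Body, limit)
          next.ServeHTTP(w, r)
      })
  }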

Finding #2 (deferred): ingestd doesn't release memory between
ingests. The Go runtime is conservative about returning heap to the
OS; acceptable for a long-running daemon.

Finding #3: DuckDB-via-httpfs is healthy at 500K. GROUP BY 45ms,
count(*) 24ms, AVG 47ms, schema introspection 25ms. Sub-linear
scaling vs 100K — the s3:// read path is not a bottleneck.

Finding #4: ADR-010 type inference correctly handled real staffing
data. worker_id → BIGINT, numeric scores → DOUBLE, multi-line
resume_text → VARCHAR. 1000-row sample sufficient.

Finding #5: Go's encoding/csv handles RFC 4180 quoted-comma fields
and multi-line quoted text without LazyQuotes — confirming the D4
scrum's dismissal of Qwen's BLOCK on this point.

Net: substrate handles production-scale data with one config knob.
No correctness issues, no OOMs, no silent type errors.
All 6 G0 smokes still PASS after the cap-config change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:32:08 -05:00
root
b1d52306ad G0 D6: gateway reverse proxy fronting all 4 backing services · 2 scrum fixes · G0 COMPLETE
Last day of Phase G0. Gateway promotes the D1 stub endpoints into
real reverse-proxies on :3110 fronting storaged + catalogd + ingestd
+ queryd. /v1 prefix lives at the edge — internal services route on
/storage, /catalog, /ingest, /sql, with the prefix stripped by a
custom Director per Kimi K2's D1-plan finding.

Routes:
  /v1/storage/*  → storaged
  /v1/catalog/*  → catalogd
  /v1/ingest     → ingestd
  /v1/sql        → queryd
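
A minimal sketch of one proxy mount, assuming
httputil.NewSingleHostReverseProxy; function and config-field names are
illustrative, and the /v1 strip happens before the default Director
runs (the ordering fixed under O-BLOCK below):

  package main

  import (
      "log"
      "net/http"
      "net/http/httputil"
      "net/url"
      "strings"
  )

  // mustParseUpstream rejects scheme-less or host-less upstream URLs at
  // startup, naming the config field (see O-WARN2 below).
  func mustParseUpstream(raw, field string) *url.URL {
      u, err := url.Parse(raw)
      if err != nil || u.Scheme == "" || u.Host == "" {
          log.Fatalf("config %s: %q is not a usable upstream URL", field, raw)
      }
      return u
  }

  // newProxy forwards /v1/<svc>/* to the upstream with the /v1 prefix
  // stripped BEFORE the default Director joins paths, so an upstream
  // that carries its own base path still resolves correctly.
  func newProxy(upstream *url.URL) *httputil.ReverseProxy {
      p := httputil.NewSingleHostReverseProxy(upstream)
      orig := p.Director
      p.Director = func(r *http.Request) {
          r.URL.Path = strings.TrimPrefix(r.URL.Path, "/v1")
          orig(r)
      }
      return p
  }

  func main() {
      storaged := mustParseUpstream("http://127.0.0.1:3211", "[gateway].storaged_upstream")
      http.Handle("/v1/storage/", newProxy(storaged))
      log.Fatal(http.ListenAndServe("127.0.0.1:3110", nil))
  }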

Acceptance smoke 6/6 PASS — every assertion goes through :3110, none
direct to backing services. Full ingest → storage → catalog → query
round-trip verified end-to-end. The smoke's "rows[0].name=Alice"
assertion is the architectural payoff: five binaries, six HTTP
routes, one round-trip through one edge.

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                      1 BLOCK + 2 WARN + 2 INFO
  - Kimi K2-0905 (openrouter):                1 BLOCK + 3 WARN + 1 INFO (3 false positives, all from one wrong TrimPrefix theory)
  - Qwen3-coder (openrouter):                 5 completion tokens — "No BLOCKs."

Fixed (2, both Opus single-reviewer):
  O-BLOCK: Director path stripping fails if upstream URL has a
    non-empty path. The default Director's singleJoiningSlash runs
    BEFORE the custom code, so an upstream like http://host/api
    produces /api/v1/storage/... after the join — then TrimPrefix("/v1")
    is a no-op because the string starts with /api. Fix: strip /v1
    BEFORE calling origDirector. New TestProxy_SubPathUpstream regression
    locks this in. Today: bare-host URLs only, dormant — but moving
    gateway behind a sub-path in prod would have silently 404'd.
  O-WARN2: url.Parse is permissive — typo "127.0.0.1:3211" (no scheme)
    parses fine, produces empty Host, and every request 502s. Fix:
    mustParseUpstream fails fast at startup with a clear message naming
    the offending config field.

Dismissed (3, all Kimi, same false TrimPrefix theory):
  K-BLOCK "TrimPrefix loops forever on //v1storage" — false, single
    check-and-trim, no loop
  K-WARN "no upper bound on repeated // removal" — same false theory
  K-WARN "goroutines leak if upstream parse fails while binaries
    running" — confused scope; binaries are separate OS processes
    launched by the smoke script

D1 smoke updated (post-D6): the 501 stub probes are gone (gateway no
longer stubs /v1/ingest and /v1/sql). Replaced with proxy probes that
verify gateway forwards malformed requests to ingestd and queryd. Launch
order changed from parallel to dep-ordered (storaged → catalogd →
ingestd → queryd → gateway) since catalogd's rehydrate now needs
storaged, queryd's initial Refresh needs catalogd.

All six G0 smokes (D1 through D6) PASS end-to-end after every fix
round. Phase G0 substrate is complete: 5 binaries, 6 routes, 25 fixes
applied across 6 days from cross-lineage review.

G1+ next: gRPC adapters, Lance/HNSW vector indices, Go MCP SDK port,
distillation rebuild, observer + Langfuse integration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:21:54 -05:00
root
9e9e4c26a4 G0 D5: queryd DuckDB SELECT over Parquet via httpfs · 4 scrum fixes
Phase G0 Day 5 ships queryd: in-memory DuckDB with custom Connector
that runs INSTALL httpfs / LOAD httpfs / CREATE OR REPLACE SECRET
(TYPE S3) on every new connection, sourced from SecretsProvider +
shared.S3Config. SetMaxOpenConns(1) so registrar's CREATE VIEWs and
handler's SELECTs serialize through one connection (avoids cross-
connection MVCC visibility edge cases).

Registrar.Refresh reads catalogd /catalog/list, runs CREATE OR
REPLACE VIEW "name" AS SELECT * FROM read_parquet('s3://bucket/key')
per manifest, drops views for removed manifests, skips on unchanged
updated_at (the implicit etag). Drop pass runs BEFORE create pass so
a poison manifest can't block other manifest refreshes (post-scrum
C1 fix).
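
A sketch of the refresh ordering; the struct shapes and method body are
illustrative, but the drop-before-create order and the errors.Join
collection match the C1 fix described below:

  package queryd

  import (
      "context"
      "database/sql"
      "errors"
      "fmt"
  )

  // Illustrative shapes; the shipped registrar tracks more state.
  type manifest struct {
      Name, S3Path, UpdatedAt string // S3Path is "bucket/key"
  }

  type registrar struct {
      db    *sql.DB
      known map[string]string // view name -> last-seen updated_at ("etag")
  }

  func (r *registrar) Refresh(ctx context.Context, manifests []manifest) error {
      live := make(map[string]bool, len(manifests))
      for _, m := range manifests {
          live[m.Name] = true
      }
      var errs []error

      // Drop pass runs first so a poison manifest in the create pass
      // cannot keep views for removed manifests alive.
      for name := range r.known {
          if live[name] {
              continue
          }
          if _, err := r.db.ExecContext(ctx, fmt.Sprintf(`DROP VIEW IF EXISTS %q`, name)); err != nil {
              errs = append(errs, fmt.Errorf("drop %s: %w", name, err))
              continue
          }
          delete(r.known, name)
      }

      // Create pass: skip unchanged manifests, collect per-view errors
      // instead of aborting on the first one.
      for _, m := range manifests {
          if r.known[m.Name] == m.UpdatedAt {
              continue
          }
          stmt := fmt.Sprintf(
              `CREATE OR REPLACE VIEW %q AS SELECT * FROM read_parquet('s3://%s')`,
              m.Name, m.S3Path)
          if _, err := r.db.ExecContext(ctx, stmt); err != nil {
              errs = append(errs, fmt.Errorf("create %s: %w", m.Name, err))
              continue
          }
          r.known[m.Name] = m.UpdatedAt
      }
      return errors.Join(errs...)
  }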

POST /sql with JSON body {"sql":"…"} returns
{"columns":[{"name":"id","type":"BIGINT"},…], "rows":[[…]],
"row_count":N}. []byte → string conversion so VARCHAR rows
JSON-encode as text. 30s default refresh ticker, configurable via
[queryd].refresh_every.

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                      1 BLOCK + 4 WARN + 4 INFO
  - Kimi K2-0905 (openrouter):                2 BLOCK + 2 WARN + 1 INFO
  - Qwen3-coder (openrouter):                 2 BLOCK + 1 WARN + 1 INFO

Fixed (4):
  C1 (Opus + Kimi convergent): Refresh aborts on first per-view error
    → drop pass first, collect errors, errors.Join. Poison manifest
    no longer blocks the rest of the catalog from re-syncing.
  B-CTX (Opus BLOCK): bootstrap closure captured OpenDB's ctx →
    cancelled-ctx silently fails every reconnect. context.Background()
    inside closure; passed ctx only for initial Ping.
  B-LEAK (Kimi BLOCK): firstLine(stmt) truncated CREATE SECRET to 80
    chars but those 80 chars contained KEY_ID + SECRET prefix → log
    aggregator captures credentials. Stable per-statement labels +
    redactCreds() filter on wrapped DuckDB errors.
  JSON-ERR (Opus WARN): swallowed json.Encode error → silent
    truncated 200 on unsupported column types. Fix: log the failure
    via slog.Warn.

Dismissed (4 false positives):
  Qwen BLOCK "bootstrap not transactional" — DuckDB DDL is auto-commit
  Qwen BLOCK "MaxBytesReader after Decode" — false, applied before
  Kimi BLOCK "concurrent Refresh + user SELECT deadlock" — not a
    deadlock, just serialization, by design with 10s timeout retry
  Kimi WARN "dropView leaves r.known inconsistent" — current code
    returns before the delete; the entry persists for retry

Critical reviewer behavior: 1 convergent BLOCK between Opus + Kimi
on the per-view error blocking, plus two independent single-reviewer
BLOCKs (B-CTX, B-LEAK) that smoke could never have caught. The
B-LEAK fix uses defense-in-depth: never pass SQL into the error
path AND redact known cred values from DuckDB's own error message.

DuckDB cgo path: github.com/duckdb/duckdb-go/v2 v2.10502.0 (per
ADR-001 §1) on Go 1.25 + arrow-go. Smoke 6/6 PASS after every
fix round.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:10:55 -05:00
root
c1e411347a G0 D4: ingestd CSV → Parquet → catalogd register · 2 scrum fixes
Phase G0 Day 4 ships ingestd: multipart CSV upload, Arrow schema
inference per ADR-010 (default-to-string on ambiguity), single-pass
streaming CSV → Parquet via pqarrow batched writer (Snappy compressed,
8192 rows per batch), PUT to storaged at content-addressed key
datasets/<name>/<fp_hex>.parquet, register manifest with catalogd.
Acceptance smoke 6/6 PASS including idempotent re-ingest (proves
inference is deterministic — same CSV always produces same fingerprint)
and schema-drift → 409 (proves catalogd's gate fires on ingest traffic).

Schema fingerprint is SHA-256 over (name, type) tuples in header order
using ASCII record/unit separators (0x1e/0x1f) so column names with
commas can't collide. Nullability intentionally NOT in the fingerprint
— a column gaining nulls isn't a schema change.
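
A sketch of the fingerprint and the content-addressed key it feeds; the
function and type names are illustrative:

  package main

  import (
      "crypto/sha256"
      "encoding/hex"
      "fmt"
  )

  type column struct {
      Name string
      Type string // Arrow type name, e.g. "int64", "utf8"
  }

  // schemaFingerprint hashes (name, type) pairs in header order. ASCII
  // record (0x1e) and unit (0x1f) separators keep column names that
  // contain commas from colliding. Nullability is deliberately excluded.
  func schemaFingerprint(cols []column) string {
      h := sha256.New()
      for i, c := range cols {
          if i > 0 {
              h.Write([]byte{0x1e})
          }
          h.Write([]byte(c.Name))
          h.Write([]byte{0x1f})
          h.Write([]byte(c.Type))
      }
      return hex.EncodeToString(h.Sum(nil))
  }

  func main() {
      fp := schemaFingerprint([]column{{"worker_id", "int64"}, {"resume_text", "utf8"}})
      fmt.Println("datasets/workers/" + fp + ".parquet") // content-addressed key
  }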

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                       4 WARN + 3 INFO (after 2 self-retracted BLOCKs)
  - Kimi K2-0905 (openrouter):                 1 BLOCK + 2 WARN + 1 INFO
  - Qwen3-coder (openrouter):                  2 BLOCK + 2 WARN + 2 INFO

Fixed (2, both Opus single-reviewer):
  C-DRIFT: PUT-then-register on fixed datasets/<name>/data.parquet
    meant a schema-drift ingest overwrote the live parquet BEFORE
    catalogd's 409 fired → storaged inconsistent with manifest.
    Fix: content-addressed key datasets/<name>/<fp_hex>.parquet.
    Drift writes to a different file (orphan in G2 GC scope); the
    live data is never corrupted.
  C-WCLOSE: pqarrow.NewFileWriter not Closed on error paths leaks
    buffered column data + OS resources per failed ingest.
    Fix: deferred guarded close with wClosed flag.

Dismissed (5, all false positives):
  Qwen BLOCK "csv.Reader needs LazyQuotes=true for multi-line" — false,
    Go csv handles RFC 4180 multi-line quoted fields by default
  Qwen BLOCK "row[i] OOB" — already bounds-checked at schema.go:73
    and csv.go:201
  Kimi BLOCK "type assertion panic if pqarrow reorders fields" —
    speculative, no real path
  Kimi WARN + Qwen WARN×2 "RecordBuilder leak on early error" —
    convergent but false. The outer defer rb.Release() captures the
    current builder; the in-loop release runs before reassignment. No leak.

Deferred (6 INFO + accepted-with-rationale on 3 WARN): sample
boundary type mismatch (G0 cap bounds peak), string-match
paranoia on http.MaxBytesError, multipart double-buffer (G2 spool-
to-disk), separator validation, body close ordering, etc.

The D4 scrum produced fewer real findings than D3 (2 vs 6) — both
were architectural hazards smoke wouldn't catch because the smoke's
"schema drift → 409" assertion was passing even in the corrupted-
state world. The 409 fires correctly; what was wrong was the PUT
having already mutated the live parquet before the validation check.
Opus's PUT-then-register read of the order is exactly the kind of
architectural insight the cross-lineage scrum is designed to surface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 23:50:10 -05:00
root
66a704ca3e G0 D3: catalogd Parquet manifests + ADR-020 idempotent register · 6 scrum fixes
Phase G0 Day 3 ships catalogd: Arrow Parquet manifest codec, in-memory
registry with the ADR-020 idempotency contract (same name+fingerprint
reuses dataset_id; different fingerprint → 409 Conflict), HTTP client
to storaged for persistence, and rehydration on startup. Acceptance
smoke 6/6 PASSES end-to-end including rehydrate-across-restart — the
load-bearing test that the catalog/storaged service split actually
preserves state.

dataset_id derivation diverges from Rust: UUIDv5(namespace, name)
instead of v4 surrogate. Same name on any box generates the same
dataset_id; rehydrate after disk loss converges to the same identity
rather than silently re-issuing. Namespace pinned at
a8f3c1d2-4e5b-5a6c-9d8e-7f0a1b2c3d4e — every dataset_id ever issued
depends on these bytes.
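
A sketch of the derivation with google/uuid (added in this commit); the
function name is illustrative, and uuid.NewSHA1 is that package's
version-5 constructor:

  package main

  import (
      "fmt"

      "github.com/google/uuid"
  )

  // The pinned namespace every dataset_id depends on.
  var datasetNamespace = uuid.MustParse("a8f3c1d2-4e5b-5a6c-9d8e-7f0a1b2c3d4e")

  // datasetID derives a deterministic UUIDv5 from the dataset name, so
  // the same name converges to the same identity on any box and after
  // any rehydrate.
  func datasetID(name string) uuid.UUID {
      return uuid.NewSHA1(datasetNamespace, []byte(name))
  }

  func main() {
      fmt.Println(datasetID("workers")) // stable across restarts and machines
  }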

Cross-lineage scrum on shipped code:
  - Opus 4.7 (opencode):                       1 BLOCK + 5 WARN + 3 INFO
  - Kimi K2-0905 (openrouter, validated D2):   2 BLOCK + 2 WARN + 1 INFO
  - Qwen3-coder (openrouter):                  2 BLOCK + 2 WARN + 2 INFO

Fixed:
  C1 list-offsets BLOCK (3-way convergent) → ValueOffsets(0) + bounds
  C2 Rehydrate mutex held across I/O → swap-under-brief-lock pattern
  S1 split-brain on persist failure → candidate-then-swap
  S2 brittle string-match for 400 vs 500 → ErrEmptyName/ErrEmptyFingerprint sentinels
  S3 Get/List shallow-copy aliasing → cloneManifest deep copy
  S4 keep-alive socket leak on error paths → drainAndClose helper

Dismissed (false positives, all single-reviewer):
  Kimi BLOCK "Decode crashes on empty Parquet" — already handled
  Kimi INFO "safeKey double-escapes" — wrong, splitting before escape is required
  Qwen INFO "rb.NewRecord() error unchecked" — API returns no error

Deferred to G1+: name validation regex, per-call deadlines, Snappy
compression, list pagination continuation tokens (storaged caps at
10k with sentinel for now).

Build clean, vet clean, all tests pass, smoke 6/6 PASS after every
fix round. arrow-go/v18 + google/uuid added; Go 1.24 → 1.25 forced
by arrow-go's minimum.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 23:36:57 -05:00
root
8cfcdb8e5f G0 D2: storaged S3 GET/PUT/LIST/DELETE · 3-lineage scrum · 4 fixes applied
Phase G0 Day 2 ships storaged: aws-sdk-go-v2 wrapper + chi routes
binding 127.0.0.1:3211 with 256 MiB MaxBytesReader, Content-Length
up-front 413, and a 4-slot non-blocking semaphore returning 503 +
Retry-After:5 when full. Acceptance smoke (6/6 probes) PASSES against
the dedicated MinIO bucket lakehouse-go-primary, isolated from the
Rust system's lakehouse bucket during coexistence.

Cross-lineage scrum on the shipped code:
  - Opus 4.7 (opencode): 1 BLOCK + 3 WARN + 3 INFO
  - Qwen3-coder (openrouter): 2 BLOCK + 1 WARN + 1 INFO (3 false positives)
  - Kimi K2-0905 (openrouter, after route-shopping past opencode's 4k
    cap and the direct adapter's empty-content reasoning bug):
    1 BLOCK + 2 WARN + 1 INFO

Fixed:
  C1 buildRegistry ctx cancel footgun → context.Background()
     (Opus + Kimi convergent; future credential refresh chains)
  C2 MaxBytesReader unwrap through manager.Uploader multipart
     goroutines → Content-Length up-front 413 + string-suffix fallback
     (Opus + Kimi convergent; latent 500-instead-of-413 in 5-256 MiB range)
  C3 Bucket.List unbounded accumulation → MaxListResults=10_000 cap
     (Opus + Kimi convergent; OOM guard)
  S1 PUT response Content-Type: application/json (Opus single-reviewer)

Strict validateKey policy (J approved): rejects empty, >1024B, NUL,
leading "/", ".." path components, CR/LF/tab control characters.
DELETE exposed at HTTP layer (J approved option A) for symmetry +
smoke ergonomics.
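
A sketch of the approved policy; error strings and check order are
illustrative:

  package storaged

  import (
      "errors"
      "strings"
  )

  // validateKey: non-empty, at most 1024 bytes, no NUL or CR/LF/tab,
  // no leading "/", no ".." path components.
  func validateKey(key string) error {
      switch {
      case key == "":
          return errors.New("empty key")
      case len(key) > 1024:
          return errors.New("key exceeds 1024 bytes")
      case strings.ContainsAny(key, "\x00\r\n\t"):
          return errors.New("key contains control characters")
      case strings.HasPrefix(key, "/"):
          return errors.New("key must not start with '/'")
      }
      for _, part := range strings.Split(key, "/") {
          if part == ".." {
              return errors.New("key must not contain '..' path components")
          }
      }
      return nil
  }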

Build clean, vet clean, all unit tests pass, smoke 6/6 PASS after
every fix round. go.mod 1.23 → 1.24 (required by aws-sdk-go-v2).

Process finding worth recording: opencode caps non-streaming Kimi at
max_tokens=4096; the direct kimi.com adapter consumed 8192 tokens of
reasoning but surfaced empty content; openrouter/moonshotai/kimi-k2-0905
delivered structured output in ~33s. Future Kimi scrums should default
to that route.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 23:23:03 -05:00
Claw
ad2ec1aca9 G0 D1 hardened: 3-lineage scrum review on shipped code · 7 fixes applied
Code-review pass after D1 shipped, all three model lineages running
in parallel against the actual Go source (not docs):

Convergent findings (≥2 reviewers — high confidence):
- C1 BLOCK · Run() errCh/select race could silently drop fast bind
  errors. Fixed: net.Listen() now runs synchronously before the
  goroutine; bind errors surface as Run()'s return value (see the
  sketch after this list).
- C2 BLOCK · scripts/d1_smoke.sh sleep 0.5 races bind on cold boxes.
  Fixed: replaced with poll_health() loop, 5s/svc budget, 50ms poll.
- C3 WARN · LoadConfig silent fallback when file missing. Fixed:
  emits slog.Warn with path + hint when path given but file absent.
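
A sketch of the post-fix Run() shape (the shipped signature may differ):
the bind happens synchronously, so a bad address returns immediately
from Run() itself; only the serve loop runs in the goroutine.

  package shared

  import (
      "context"
      "net"
      "net/http"
  )

  func Run(ctx context.Context, addr string, h http.Handler) error {
      ln, err := net.Listen("tcp", addr)
      if err != nil {
          return err // fast bind errors surface as Run's return value
      }
      srv := &http.Server{Handler: h}
      errCh := make(chan error, 1) // buffered: the serve goroutine never blocks
      go func() { errCh <- srv.Serve(ln) }()
      select {
      case <-ctx.Done():
          return srv.Shutdown(context.Background()) // graceful shutdown path
      case err := <-errCh:
          return err
      }
  }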

Single-reviewer fixes:
- S1 WARN · slog.SetDefault inside Run() mutated global state from a
  library function. Fixed: Run() no longer calls SetDefault.
- S2 WARN · os.IsNotExist → errors.Is(err, fs.ErrNotExist) idiom.
- S6 WARN · smoke double-curl collapsed to single curl -i parse.

Second-pass Opus review on post-fix code caught one more:
- head -1 on curl -i is fragile against 1xx interim lines. Fixed:
  awk picks the last HTTP/* status line (robust to 100 Continue).

Accepted with rationale (deferred or planned):
- S3 secrets-in-lakehouse.toml: D2.3 SecretsProvider already planned
- S4 5x cmd/*/main.go duplication: defer until D2 reveals real
  per-service config consumption
- S5 /health log volume: defer post-G0, not on k8s yet
- 2nd-pass theoreticals: clean-exit-no-Shutdown path doesn't trigger,
  defensive defer ln.Close() aspirational, etc.

Verification:
- go build ./cmd/...  exit 0
- go vet ./...         clean
- ./scripts/d1_smoke.sh  D1 acceptance gate: PASSED
- 3-lineage code review · 14 findings · 7 fixed · 0 deferred · 5
  accepted with rationale

Total D1 review coverage across the phase:
- 3 doc-review passes (Opus + Kimi + Qwen) — 13 findings, 10 fixed
- 1 runtime smoke — 1 finding (port 3100 collision), fixed
- 1 code-review parallel pass — 14 findings, 7 fixed
- 1 code-review second pass (Opus) — 1 actionable, fixed
- Cumulative: 29 findings · 19 fixed inline · 5 accepted · 5 deferred

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 07:07:50 -05:00
Claw
1142f54f23 G0 D1 ships: skeleton + chi + /health × 5 binaries · acceptance gate PASSED
Phase G0 Day 1 executed end-to-end after a third-pass review by
qwen3-coder:480b consolidated all findings across Opus/Kimi/Qwen
lineages.

Cross-lineage review consolidation (3 model passes + 1 runtime pass):
- Opus 4.7: 9 findings · 7 fixed inline · 2 deferred
- Kimi K2.6: 2 BLOCKs (introduced by Opus fixes) · 2 fixed
- Qwen3-coder:480b: 2 WARNs · 1 fixed (D2.4 256 MiB cap + 4-slot
  semaphore on PUTs) · 1 deferred (Q2 view refresh batching)
- Runtime smoke: 1 finding (port 3100 collision with live Rust
  lakehouse) · fixed (Go dev ports shifted to 3110+)
- Total: 14 findings · 11 fixed · 3 deferred to G2

What landed in code:
- internal/shared/server.go — chi factory, slog JSON, /health,
  graceful shutdown via signal.NotifyContext
- internal/shared/config.go — TOML loader, DefaultConfig, -config flag
- cmd/{gateway,storaged,catalogd,ingestd,queryd}/main.go — five
  binaries, each ~30 lines using the shared factory
- lakehouse.toml — G0 dev defaults (3110-3214)
- scripts/d1_smoke.sh — repeatable smoke that exits 0 on PASS
- go.mod / go.sum — chi v5.2.5, pelletier/go-toml/v2 v2.3.0

Verified end-to-end via scripts/d1_smoke.sh:
- All 5 /health endpoints return 200 with correct service name
- Gateway /v1/ingest + /v1/sql stubs return 501 with X-Lakehouse-Stub
- Graceful shutdown logs cleanly on SIGTERM
- DuckDB cgo path verified separately (sql.Open("duckdb","") + ping)

D1 ACCEPTANCE GATE: PASSED.

Next: D2 — storaged S3 GET/PUT/LIST against MinIO.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 07:00:37 -05:00
Claw
a74fdb1204 docs: Phase G0 kickoff — Kimi K2.6 cross-lineage review pass
Second-pass review via opencode/kimi-k2.6 (different lineage than
Opus 4.7 used in the first pass) caught 2 BLOCKs that Opus missed —
and that the Opus-pass fixes themselves introduced:

- K1: D0.6 used `go install pkg@latest` to verify cgo, but that
  command requires a main package; duckdb-go/v2 is a library, so
  the verification fails BEFORE exercising cgo and could pass on a
  broken-cgo box. Replaced with a real compile-and-run smoke
  (tmp module + 5-line main.go that imports + calls sql.Open; a
  sketch follows this list).
- K2: Gateway stubbed /v1/ingest and /v1/sql in D1.10, but ingestd
  serves /ingest and queryd serves /sql. httputil.NewSingleHostReverseProxy
  preserves the inbound path by default — D6.1 now specifies a
  custom Director that strips the /v1 prefix before forwarding.
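
A sketch of that compile-and-run smoke's main.go, assuming the driver
registers itself under the name "duckdb" on blank import, which is what
the sql.Open call in the 5-line main described above relies on:

  package main

  import (
      "database/sql"
      "log"

      _ "github.com/duckdb/duckdb-go/v2" // blank import forces a real cgo build
  )

  func main() {
      db, err := sql.Open("duckdb", "") // in-memory database
      if err != nil {
          log.Fatal(err)
      }
      if err := db.Ping(); err != nil { // opens an actual connection through cgo
          log.Fatal(err)
      }
      log.Println("duckdb cgo path OK")
  }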

Demonstrates the value of cross-lineage rotation: one model's review
of the original ≠ a different model's review of the post-fix version.
Same dynamic the Rust auditor exploits with Kimi/Haiku/Opus.

Disposition table appended below the Opus pass for full audit trail.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:50:54 -05:00
Claw
ed3ccf7c53 docs: Phase G0 kickoff plan + scrum-style independent review
7-day day-by-day plan for the smallest end-to-end ingest+query path
in Go: D0 ops setup → D1 skeleton + chi + /health × 5 binaries → D2
storaged S3 → D3 catalogd Parquet manifests → D4 ingestd CSV→Parquet
→ D5 queryd DuckDB → D6 gate-day end-to-end → D7 cleanup + retro.

Plan was reviewed by opencode/claude-opus-4-7 via the gateway
(same path the production overseer correction loop uses post-G0).
9 findings (2 BLOCK + 5 WARN + 2 INFO):

- 2 BLOCK fixed inline:
  - cgo build dependency surfaced on D0 not D5
  - DuckDB CREATE SECRET (S3) plumbed from SecretsProvider on D5.1
- 4 of 5 WARN fixed inline:
  - storaged binds 127.0.0.1 only + 2 GiB body cap
  - queryd uses TTL-cached views + etag invalidation, not refresh-per-call
  - gateway reverse-proxy stubbed on D1.10 (501), promoted on D6
  - ADR stubs go in at start of D4/D5, finalized on D7
- 1 WARN deferred (orphan GC on two-phase write — punted to G2)
- 1 WARN accepted with note (shared-server.go refactor — G1+ follow-up)
- 2 INFO fixed inline (go mod tidy timing, ADR-after-fact inversion)

Disposition table appended to the doc itself for auditability —
matches the human_overrides.jsonl pattern from the Rust auditor.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:47:15 -05:00
Claw
29468b1413 docs: 2026-04-28 upstream survey — three SPEC-changing pivots
Pre-Phase-G0 research sweep against current Go ecosystem state. Three
upstream changes that the day-of SPEC missed:

1. DuckDB Go binding ownership transferred. marcboeker/go-duckdb is
   deprecated as of v2.5.0 — official maintainer is now
   github.com/duckdb/duckdb-go/v2 (DuckDB team + Marc Boeker joint
   hand-off). Current v2.10502.0 / DuckDB v1.5.2. SPEC §3.1 +
   component table updated.

2. Official Go MCP SDK exists. Switching from mark3labs/mcp-go
   (community) to github.com/modelcontextprotocol/go-sdk (official,
   Google collaboration, v1.5.0 stable, 4.4k stars, targets MCP spec
   2025-11-25). Component table updated.

3. arrow-go is on v18, not v15. v18.5.2 (March 2026) has parquet
   encryption fixes relevant for PII-masked safe views. PRD locked
   stack + SPEC component table updated.

Validated unchanged: coder/hnsw (220 stars, active), chi (still the
clean-architecture pick over fiber/gin/echo).

Surfaced for future use: anthropics/anthropic-sdk-go (official,
available for direct Claude calls bypassing opencode if ever needed),
duckdb-wasm (browser-side analytics future option), IVF as HNSW
fallback if recall gate fails.

See docs/RESEARCH_LOG_2026-04-28.md for full survey + sources.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:40:26 -05:00
Claw
f07668064e docs: seed PRD + SPEC for the Go-direction rewrite
Two documents only — no Go code yet. PRD restates the problem and
preserves the Rust PRD's invariants verbatim, then maps the locked
stack to Go libraries and surfaces four hard problems (DuckDB-via-cgo
for the query engine, Lance dropped, Dioxus → HTMX, arrow-go maturity).
SPEC walks each Rust crate + TS surface and tags the port with library
choice / effort estimate / risk + a 5-phase migration plan from
skeleton (Phase G0) to demo parity (Phase G5).

Six open questions remain that gate Phase G0:
- DuckDB cgo OK?
- HTMX vs React for the UI?
- Repo location?
- Distillation v1.0.0 port verbatim or rebuild?
- Pathway memory data — port 88 traces or start clean?
- Auditor lineage — port audit_baselines.jsonl or restart?

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:35:23 -05:00