Adds main_test.go for each of the 6 cmd binaries that lacked them
(storaged already had main_test.go; that's where the pattern came
from). Each test file focuses on the cmd-specific surface — route
mounts, body caps, decode/validation paths — without re-testing
internal package logic that's covered elsewhere.
cmd/catalogd/main_test.go — 6 funcs
TestRoutesMounted: chi.Walk asserts /catalog/{register,manifest/*,list}
TestHandleRegister_BodyTooLarge: 5 MiB body → 4xx
TestHandleRegister_MalformedJSON: 400
TestHandleRegister_EmptyName_400: ErrEmptyName surfaces as 400
TestHandleGetManifest_404 + TestHandleList_EmptyShape
cmd/embedd/main_test.go — 8 funcs
stubProvider implements embed.Provider deterministically
TestRoutesMounted, MalformedJSON_400, EmptyTextRejected_400 (per
scrum O-W3), UpstreamError_502 (provider error → 502, not 500),
HappyPath_ProviderEcho, BodyTooLarge (4xx range), TestItoa
(covers the no-strconv helper)
cmd/gateway/main_test.go — 4 funcs
TestMustParseUpstream_HappyPaths: 3 valid URLs
TestMustParseUpstream_FailureExits: re-execs the test binary in a
subprocess with env flag (standard pattern for testing os.Exit
callers); subprocess invokes mustParseUpstream("127.0.0.1:3211")
[missing scheme]; expects exit non-zero. Same pattern for garbage.
TestUpstreamConfigKeys_DocumentedShape: locks the 6 _url keys
cmd/ingestd/main_test.go — 7 funcs
Stubs both storaged and catalogd via httptest.Server so the cmd
layer can be exercised without bringing the full chain up.
TestHandleIngest_MissingNameQueryParam: 400 with "name" in the response body
TestHandleIngest_MalformedMultipart: 400
TestHandleIngest_MissingFormFile: 400 (valid multipart, wrong field)
TestHandleIngest_BodyTooLarge: 4xx
TestEscapeKeyPath: 6-case URL-escape table (apostrophe, space, etc.)
TestParquetKeyPath_Format: locks the datasets/<name>/<fp>.parquet shape
per scrum C-DRIFT (any rename breaks idempotent re-ingest)
cmd/queryd/main_test.go — 6 funcs
Tests pre-DB paths (decode, body cap, empty SQL); db.QueryContext
itself needs DuckDB so it's covered by GOLAKE-040 in the proof
harness, not unit tests. handlers.db = nil here is intentional.
TestHandleSQL_EmptySQL_400: 3 cases (empty, whitespace, mixed-WS)
TestMaxSQLBodyBytes_Reasonable: locks the 64 KiB constant in a
sane range so a refactor can't blow it open
TestPrimaryBucket_Constant: locks "primary" — secrets lookup uses
this; rename = silent secret-resolution failure at boot
cmd/vectord/main_test.go — 14 funcs
All 6 routes verified mounted. handlers.persist = nil puts the handlers
in pure in-memory mode; persistence is GOLAKE-070 in the proof harness.
Coverage of every error branch in handleCreate/Add/Search/Delete:
missing index → 404, dim mismatch → 400, empty items → 400,
empty id → 400, malformed JSON → 400, body too large → 4xx,
happy create → 201, happy list → 200.
One real finding caught during writing:
Body-cap rejection is sometimes 413 (typed MaxBytesError survives
unwrap) and sometimes 400 (decoder wraps it as a generic decode
error). Both are valid client-error contracts; the contract isn't
"exactly 413" but "fails loud as 4xx, never silent 200 or 5xx."
Tests assert 4xx range. The proof harness's
proof_assert_status_4xx already had this shape — just bringing
the unit tests in line with it.
Verified:
go test -count=1 -short ./cmd/... — all 7 packages green
just verify — vet + test + 9 smokes in 35s
Closes audit risk R-005 (6/7 cmd/main.go untested). Combined with
the proof harness's wiring coverage, every cmd-level handler now
has both unit-test and integration-test coverage of the wiring
layer. R-005 → CLOSED.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Validated G0 substrate against the production workers_500k.parquet
dataset (18 cols × 500,000 rows). Findings + one applied fix:
Finding #1 (FIXED): ingestd's hardcoded 256 MiB cap rejected the 500K
CSV (344 MiB) with 413. Cap fired correctly, no OOM. Extracted to
[ingestd].max_ingest_bytes config field; default 256 MiB, override
per deployment for known-large workloads. With cap bumped to 512 MiB,
500K ingest succeeds in 3.12s with ingestd peak RSS 209 MiB.
Finding #2 (deferred): ingestd doesn't release memory between
ingests. The Go runtime is conservative about returning heap to the
OS; acceptable for a long-running daemon.
Finding #3: DuckDB-via-httpfs is healthy at 500K. GROUP BY 45ms,
count(*) 24ms, AVG 47ms, schema introspection 25ms. Sub-linear
scaling vs 100K — the s3:// read path is not a bottleneck.
Finding #4: ADR-010 type inference correctly handled real staffing
data. worker_id → BIGINT, numeric scores → DOUBLE, multi-line
resume_text → VARCHAR. 1000-row sample sufficient.
Finding #5: Go's encoding/csv handles RFC 4180 quoted-comma fields
and multi-line quoted text without LazyQuotes — confirming the D4
scrum's dismissal of Qwen's BLOCK on this point.
Net: substrate handles production-scale data with one config knob.
No correctness issues, no OOMs, no silent type errors.
All 6 G0 smokes still PASS after the cap-config change.
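The knob from Finding #1 might look like this, assuming a TOML config file (the [ingestd].max_ingest_bytes field name and 256 MiB default are from the commit; the surrounding file shape is illustrative):

```toml
[ingestd]
# Default is 256 MiB when unset; raise per deployment for
# known-large workloads (512 MiB was used for the 500K re-test).
max_ingest_bytes = 536870912
```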
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
queryd (D5) needs the same HTTP client to catalogd that ingestd uses,
but the client lived in internal/ingestd — having queryd import from
ingestd would invert the data-flow direction (ingestd is upstream of
queryd; the package dep should not point back). Extract to a shared
internal/catalogclient/ package now, before D5 forces it under
implementation pressure.
Adds the List(ctx) method queryd will need for view registration.
Unit tests cover Register success/conflict and List success/error
paths against an httptest.Server fake.
ingestd's import flips from internal/ingestd → internal/catalogclient;
the wire format and behavior are unchanged. All four smokes (D1/D2/D3/
D4) PASS unchanged. DuckDB cgo path re-verified with the official
github.com/duckdb/duckdb-go/v2 (per ADR-001) on Go 1.25 + arrow-go.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase G0 Day 4 ships ingestd: multipart CSV upload, Arrow schema
inference per ADR-010 (default-to-string on ambiguity), single-pass
streaming CSV → Parquet via pqarrow batched writer (Snappy compressed,
8192 rows per batch), PUT to storaged at content-addressed key
datasets/<name>/<fp_hex>.parquet, register manifest with catalogd.
Acceptance smoke 6/6 PASS including idempotent re-ingest (proves
inference is deterministic — same CSV always produces same fingerprint)
and schema-drift → 409 (proves catalogd's gate fires on ingest traffic).
Schema fingerprint is SHA-256 over (name, type) tuples in header order
using ASCII record/unit separators (0x1e/0x1f) so column names with
commas can't collide. Nullability intentionally NOT in the fingerprint
— a column gaining nulls isn't a schema change.
Cross-lineage scrum on shipped code:
- Opus 4.7 (opencode): 4 WARN + 3 INFO (after 2 self-retracted BLOCKs)
- Kimi K2-0905 (openrouter): 1 BLOCK + 2 WARN + 1 INFO
- Qwen3-coder (openrouter): 2 BLOCK + 2 WARN + 2 INFO
Fixed (2, both Opus single-reviewer):
C-DRIFT: PUT-then-register on fixed datasets/<name>/data.parquet
meant a schema-drift ingest overwrote the live parquet BEFORE
catalogd's 409 fired → storaged inconsistent with manifest.
Fix: content-addressed key datasets/<name>/<fp_hex>.parquet.
Drift writes to a different file (orphan in G2 GC scope); the
live data is never corrupted.
C-WCLOSE: pqarrow.NewFileWriter not Closed on error paths leaks
buffered column data + OS resources per failed ingest.
Fix: deferred guarded close with wClosed flag.
Dismissed (5, all false positives):
Qwen BLOCK "csv.Reader needs LazyQuotes=true for multi-line" — false,
Go csv handles RFC 4180 multi-line quoted fields by default
Qwen BLOCK "row[i] OOB" — already bounds-checked at schema.go:73
and csv.go:201
Kimi BLOCK "type assertion panic if pqarrow reorders fields" —
speculative, no real path
Kimi WARN + Qwen WARN×2 "RecordBuilder leak on early error":
convergent but false. The outer defer rb.Release() captures the
current builder; the in-loop release runs before reassignment. No leak.
Deferred (6 INFO + accepted-with-rationale on 3 WARN): sample
boundary type mismatch (G0 cap bounds peak), string-match
paranoia on http.MaxBytesError, multipart double-buffer (G2 spool-
to-disk), separator validation, body close ordering, etc.
The D4 scrum produced fewer real findings than D3 (2 vs 6) — both
were architectural hazards smoke wouldn't catch because the smoke's
"schema drift → 409" assertion was passing even in the corrupted-
state world. The 409 fires correctly; what was wrong was the PUT
having already mutated the live parquet before the validation check.
Opus's PUT-then-register read of the order is exactly the kind of
architectural insight the cross-lineage scrum is designed to surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>