Closes the documented 500K-test gap (memory project_golang_lakehouse:
"storaged 256 MiB PUT cap blocks single-file LHV1 persistence above
~150K vectors at d=768"). Vectord persistence under "_vectors/" now
gets a 4 GiB cap; everything else (parquets, manifests, ingest)
keeps the 256 MiB default.
Why per-prefix and not "raise globally":
- The 256 MiB cap is real DoS protection: a runaway client can't
exhaust the daemon's memory. Raising it for ALL traffic would
expand the attack surface for routine paths that don't need it.
- Per-prefix preserves existing protection while opening the one
documented production-scale path.
Why not split LHV1 across multiple keys (the alternative):
- G1P shipped a single-Put framed format SPECIFICALLY to eliminate
the torn-write class (memory: "Single Put eliminates the torn-
write class that the 3-way convergent scrum finding identified").
- Multi-key LHV1 would re-introduce the half-saved-state failure
mode we just paid to fix. Streaming via existing manager.Uploader
is the better architectural answer.
Why not bump the cap operationally via env/config:
- Future operator-driven cap can drop in cleanly via the
maxPutBytesFor function. Started with hardcoded 4 GiB to keep
this commit small; config knob is a follow-up if production
workloads diverge from the documented 500K-vector ceiling.
manager.Uploader is already streaming-multipart on the outbound
S3 side; the inbound MaxBytesReader cap is a safety gate, not a
memory bottleneck. So raising it for vectord just lets the
existing streaming path actually flow, without introducing new
memory pressure (4-slot semaphore × 4 GiB worst case = 16 GiB
only if all slots simultaneously max out — vanishingly unlikely).
Implementation:
cmd/storaged/main.go:
new constant maxPutBytesVectors = 4 GiB (covers >700K vectors @ d=768)
new constant vectorsPrefix = "_vectors/" (synced with vectord.VectorPrefix)
new function maxPutBytesFor(key) → cap-by-prefix
handlePut: ContentLength check + MaxBytesReader use the per-key cap
cmd/storaged/main_test.go (3 new test funcs):
TestMaxPutBytesFor: 7 cases incl. nested prefix, substring-but-not-
prefix, empty key, parquet/manifest paths.
TestVectorPrefixSyncWithVectord: regression test that asserts
vectorsPrefix == vectord.VectorPrefix. A future rename surfaces
here instead of silently bypassing the larger cap.
TestVectorCapAccommodates500KStaffingTest: bounds the cap above
the documented production workload (~700 MiB conservative).
Verified:
go test ./cmd/storaged/ — all green (was 1 func, now 4)
just verify — 9 smokes still pass · 32s wall
just proof contract — 53/0/1 unchanged
Out of scope for this commit (deserves its own):
- Heavy integration smoke: 200K dim=768 synthetic vectors → ~700
MiB LHV1 → kill+restart vectord → recall=1. ~5-10 min wall;
follow-up if you want production-scale persistence verified
end-to-end. Unit tests + existing g1p_smoke cover the wiring.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# golang-lakehouse

Go reimplementation of the Lakehouse — a versioned knowledge substrate for staffing analytics + local AI workloads.

## Status

Phase G0 complete; G1/G1P/G2 shipped. Eight binaries (the original six plus vectord and embedd), fronted by a single gateway. Acceptance smokes green for D1-D6 + G1 + G1P + G2.
End-to-end staffing co-pilot pipeline functional through the gateway:

- text → `/v1/embed` → `/v1/vectors/index/<name>/add`
- text → `/v1/embed` → `/v1/vectors/index/<name>/search` → top-K hits

Plus the SQL path:

- CSV → `/v1/ingest` (parses, writes Parquet via storaged, registers manifest with catalogd)
- SQL → `/v1/sql` (DuckDB over the registered Parquets via httpfs)
See docs/PHASE_G0_KICKOFF.md for the day-by-day record (D1-D6 +
real-scale validation + G1/G1P/G2 pointer at the bottom).
## Service inventory

| Bin | Port | Role |
|---|---|---|
| gateway | 3110 | Reverse proxy fronting all backing services |
| storaged | 3211 | Object I/O over S3 (MinIO in dev) |
| catalogd | 3212 | Parquet manifest registry, ADR-020 idempotency |
| ingestd | 3213 | CSV → Parquet → register loop |
| queryd | 3214 | DuckDB SELECT over registered Parquets via httpfs |
| vectord | 3215 | HNSW vector search (+ optional persistence to storaged) |
| embedd | 3216 | Text → vector via Ollama (default nomic-embed-text, 768-d) |
## Acceptance smokes

    scripts/d1_smoke.sh   # 5-binary skeleton + chi /health + gateway proxy probes
    scripts/d2_smoke.sh   # storaged GET/PUT/LIST/DELETE + 256 MiB cap + concurrency cap
    scripts/d3_smoke.sh   # catalogd register/manifest/list + rehydrate-across-restart
    scripts/d4_smoke.sh   # ingestd CSV → Parquet round-trip + schema-drift 409
    scripts/d5_smoke.sh   # queryd DuckDB SELECT through httpfs over MinIO
    scripts/d6_smoke.sh   # full ingest → query through gateway only
    scripts/g1_smoke.sh   # vectord HNSW recall + dim mismatch + duplicate-create 409
    scripts/g1p_smoke.sh  # vectord state survives kill+restart via storaged
    scripts/g2_smoke.sh   # embed → vectord add → search round-trip

Or run the full gate via the task runner (see below):

    just verify   # vet + tests + 9 smokes; ~33s wall
## Task runner

    just                # show available recipes
    just verify         # full Sprint 0 gate (vet + tests + 9 smokes)
    just smoke <day>    # single smoke (d1..d6, g1, g1p, g2)
    just doctor         # check cold-start deps; --json for CI
    just install-hooks  # install pre-push hook that runs just verify
After a fresh clone, run `just install-hooks` once so `git push` is
gated on the same green chain that ran here. The hook lives in
`.git/hooks/pre-push` (not tracked; recreated by the recipe).
## Cold-start dependencies

- Go 1.25+ at `/usr/local/go/bin` (arrow-go pulled the 1.25 floor)
- gcc + libc-dev for the DuckDB cgo binding (ADR-001 §1.1)
- `just` task runner (`apt install just` on Debian 13+)
- MinIO running on `:9000` with bucket `lakehouse-go-primary`
- Ollama running on `:11434` with `nomic-embed-text` loaded (G2)
- `/etc/lakehouse/secrets-go.toml` with `[s3.primary]` credentials (storaged + queryd both read this)
`just doctor` probes all of the above and reports the fix command
for each missing dep. CI / scripts can use `just doctor --json`.
## Layout

    docs/      Direction + spec + ADRs + day-by-day
    cmd/       One main package per binary
    internal/  Shared packages — storeclient, catalogclient,
               secrets, shared, embed, gateway, plus
               per-service implementation packages
    scripts/   Smokes + ancillary tooling
## Reading order

- `docs/PRD.md` — what we're building and why
- `docs/SPEC.md` — how, per-component
- `docs/DECISIONS.md` — ADRs (ADR-001 foundational)
- `docs/PHASE_G0_KICKOFF.md` — day-by-day from D1 through G2
- `docs/RUST_PATHWAY_MEMORY_NOTE.md` — historical reference for the Rust era's pathway memory (not migrated, per ADR-001 #5)
## Predecessor
The Rust Lakehouse this rewrite supersedes lives at
git.agentview.dev/profit/lakehouse. It remains the live system
serving devop.live/lakehouse/ until this Go implementation reaches
feature parity per docs/SPEC.md §7. Then Rust enters
maintenance-only mode.