Phase G0 Day 3 ships catalogd: Arrow Parquet manifest codec, in-memory registry with the ADR-020 idempotency contract (same name+fingerprint reuses dataset_id; different fingerprint → 409 Conflict), HTTP client to storaged for persistence, and rehydration on startup. Acceptance smoke 6/6 PASS end-to-end, including rehydrate-across-restart — the load-bearing test that the catalogd/storaged service split actually preserves state.

dataset_id derivation diverges from Rust: UUIDv5(namespace, name) instead of a v4 surrogate. The same name on any box generates the same dataset_id, so rehydrate after disk loss converges to the same identity rather than silently re-issuing one. Namespace pinned at a8f3c1d2-4e5b-5a6c-9d8e-7f0a1b2c3d4e — every dataset_id ever issued depends on these bytes.

Cross-lineage scrum on shipped code:
- Opus 4.7 (opencode): 1 BLOCK + 5 WARN + 3 INFO
- Kimi K2-0905 (openrouter, validated D2): 2 BLOCK + 2 WARN + 1 INFO
- Qwen3-coder (openrouter): 2 BLOCK + 2 WARN + 2 INFO

Fixed:
- C1 list-offsets BLOCK (3-way convergent) → ValueOffsets(0) + bounds checks
- C2 rehydrate mutex held across I/O → swap-under-brief-lock pattern
- S1 split-brain on persist failure → candidate-then-swap
- S2 brittle string match for 400 vs 500 → ErrEmptyName/ErrEmptyFingerprint sentinels
- S3 Get/List shallow-copy aliasing → cloneManifest deep copy
- S4 keep-alive socket leak on error paths → drainAndClose helper

Dismissed (false positives, all single-reviewer):
- Kimi BLOCK "Decode crashes on empty Parquet" — already handled
- Kimi INFO "safeKey double-escapes" — wrong; splitting before escaping is required
- Qwen INFO "rb.NewRecord() error unchecked" — the API returns no error

Deferred to G1+: name validation regex, per-call deadlines, Snappy compression, list pagination continuation tokens (storaged caps at 10k with a sentinel for now).

Build clean, vet clean, all tests pass, smoke 6/6 PASS after every fix round. arrow-go/v18 + google/uuid added; Go 1.24 → 1.25 forced by arrow-go's minimum.
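The UUIDv5 derivation above can be sketched in stdlib Go. The shipped code presumably uses google/uuid (added this phase), whose uuid.NewSHA1 produces the same version-5 bytes; this spells the RFC 4122 algorithm out directly. The dataset name "staffing_events" is a hypothetical example, but the namespace literal is the pinned value from the changelog:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strings"
)

// mustParseUUID turns a dashed UUID literal into its 16 raw bytes.
func mustParseUUID(s string) [16]byte {
	raw, err := hex.DecodeString(strings.ReplaceAll(s, "-", ""))
	if err != nil || len(raw) != 16 {
		panic("bad uuid literal")
	}
	var u [16]byte
	copy(u[:], raw)
	return u
}

// The pinned namespace from the changelog — every dataset_id ever
// issued depends on these bytes.
var datasetNamespace = mustParseUUID("a8f3c1d2-4e5b-5a6c-9d8e-7f0a1b2c3d4e")

// uuidV5 is the RFC 4122 name-based derivation: SHA-1 over
// namespace||name, first 16 bytes, with version and variant bits forced.
// Deterministic by construction, so the same name converges to the same
// dataset_id on any box, including after disk loss.
func uuidV5(ns [16]byte, name string) [16]byte {
	h := sha1.New()
	h.Write(ns[:])
	h.Write([]byte(name))
	sum := h.Sum(nil)
	var u [16]byte
	copy(u[:], sum[:16])
	u[6] = (u[6] & 0x0f) | 0x50 // set version 5
	u[8] = (u[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return u
}

func format(u [16]byte) string {
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
}

func main() {
	a := uuidV5(datasetNamespace, "staffing_events") // hypothetical dataset name
	b := uuidV5(datasetNamespace, "staffing_events")
	fmt.Println(format(a) == format(b)) // prints true: same name, same id
}
```

With google/uuid the equivalent one-liner would be uuid.NewSHA1(namespace, []byte(name)), which also returns a version-5 UUID.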
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
golang-lakehouse
Go reimplementation of the Lakehouse — a versioned knowledge substrate for staffing analytics + local AI workloads.
Status
Pre-Phase G0. Documents seeded; Go module declared; implementation
has not started. See docs/PRD.md for direction and docs/SPEC.md
for the component-by-component port plan.
Phase G0 prerequisites (must be done before any code lands)
- Install Go 1.23+ on the dev box. Not currently present at
  /usr/local/go or elsewhere on the build machine. Standard install:

      curl -L https://go.dev/dl/go1.23.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
      echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc

- Ensure the cgo toolchain is present (gcc + libc-dev) — required by
  the DuckDB binding per ADR-001 §1.1. apt install build-essential
  on Debian-based systems.
- Initialize the dependency tree with go mod tidy once
  cmd/gateway/main.go declares its first imports.
Layout
docs/ Direction + spec + ADRs
cmd/ (forthcoming) main packages — one per service
internal/ (forthcoming) shared packages
web/ (forthcoming) HTMX templates + static
scripts/ (forthcoming) cold-start, smoke, distill
tests/ (forthcoming) golden files, integration tests
Reading order
- docs/PRD.md — what we're building and why
- docs/SPEC.md — how, per-component
- docs/DECISIONS.md — ADRs, starting with ADR-001 (foundational)
- docs/RUST_PATHWAY_MEMORY_NOTE.md — historical reference for the Rust era's pathway memory state (not migrated)
Predecessor
The Rust Lakehouse this rewrite supersedes lives at
git.agentview.dev/profit/lakehouse. It remains the live system until
this Go implementation reaches feature parity (per docs/SPEC.md §7).