Phase G0 Day 4 ships ingestd: multipart CSV upload, Arrow schema
inference per ADR-010 (default-to-string on ambiguity), single-pass
streaming CSV → Parquet via the pqarrow batched writer
(Snappy-compressed, 8192 rows per batch), a PUT to storaged at the
content-addressed key datasets/<name>/<fp_hex>.parquet, and manifest
registration with catalogd.
Acceptance smoke 6/6 PASS including idempotent re-ingest (proves
inference is deterministic — same CSV always produces same fingerprint)
and schema-drift → 409 (proves catalogd's gate fires on ingest traffic).
Schema fingerprint is SHA-256 over (name, type) tuples in header order
using ASCII record/unit separators (0x1e/0x1f) so column names with
commas can't collide. Nullability intentionally NOT in the fingerprint
— a column gaining nulls isn't a schema change.
Cross-lineage scrum on shipped code:
- Opus 4.7 (opencode): 4 WARN + 3 INFO (after 2 self-retracted BLOCKs)
- Kimi K2-0905 (openrouter): 1 BLOCK + 2 WARN + 1 INFO
- Qwen3-coder (openrouter): 2 BLOCK + 2 WARN + 2 INFO
Fixed (2, both flagged only by Opus):
C-DRIFT: PUT-then-register on fixed datasets/<name>/data.parquet
meant a schema-drift ingest overwrote the live parquet BEFORE
catalogd's 409 fired → storaged inconsistent with manifest.
Fix: content-addressed key datasets/<name>/<fp_hex>.parquet.
Drift writes to a different file (orphan in G2 GC scope); the
live data is never corrupted.
C-WCLOSE: pqarrow.NewFileWriter not Closed on error paths leaks
buffered column data + OS resources per failed ingest.
Fix: deferred guarded close with wClosed flag.
Dismissed (5, all false positives):
Qwen BLOCK "csv.Reader needs LazyQuotes=true for multi-line" — false:
Go's encoding/csv handles RFC 4180 multi-line quoted fields by
default; LazyQuotes only relaxes handling of malformed quotes
Qwen BLOCK "row[i] OOB" — already bounds-checked at schema.go:73
and csv.go:201
Kimi BLOCK "type assertion panic if pqarrow reorders fields" —
speculative, no real path
Kimi WARN + Qwen WARN×2 "RecordBuilder leak on early error" —
convergent but false. The outer deferred release covers the builder
live at function exit, and the in-loop release runs before each
reassignment, so every builder is released exactly once. No leak.
Deferred (6 INFO + 3 WARN accepted with rationale): sample-boundary
type mismatch (the G0 cap bounds the peak), string-match paranoia on
http.MaxBytesError, multipart double-buffering (G2 will spool to
disk), separator validation, body close ordering, etc.
The D4 scrum produced fewer real findings than D3 (2 vs 6), but both
were architectural hazards smoke wouldn't catch: the smoke's
"schema drift → 409" assertion kept passing even in the
corrupted-state world. The 409 fires correctly; what was wrong was
that the PUT had already mutated the live parquet before the
validation check. Opus's reading of the PUT-then-register ordering is
exactly the kind of architectural insight the cross-lineage scrum is
designed to surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
97 lines
3.2 KiB
Go
// catalog_client.go — HTTP client to catalogd. ingestd ships
// manifests through here after writing the Parquet to storaged.
// Symmetric in shape with internal/catalogd/store_client.go: thin
// wrapper, drain-and-close discipline, sentinel errors for 4xx
// classes that the handler maps back to HTTP.
package ingestd

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"

	"git.agentview.dev/profit/golangLAKEHOUSE/internal/catalogd"
)

// CatalogClient talks HTTP to catalogd's /catalog/* routes.
type CatalogClient struct {
	baseURL string
	hc      *http.Client
}

// ErrFingerprintConflict mirrors catalogd's 409 — same name,
// different schema fingerprint.
var ErrFingerprintConflict = errors.New("ingestd: catalogd reports schema fingerprint conflict (409)")

// NewCatalogClient builds a client against catalogd's base URL
// (e.g. "http://127.0.0.1:3212").
func NewCatalogClient(baseURL string) *CatalogClient {
	return &CatalogClient{
		baseURL: strings.TrimRight(baseURL, "/"),
		hc:      &http.Client{Timeout: 30 * time.Second},
	}
}

// RegisterRequest mirrors catalogd's POST /catalog/register body.
type RegisterRequest struct {
	Name              string            `json:"name"`
	SchemaFingerprint string            `json:"schema_fingerprint"`
	Objects           []catalogd.Object `json:"objects"`
	RowCount          *int64            `json:"row_count,omitempty"`
}

// RegisterResponse mirrors catalogd's 200/conflict response.
type RegisterResponse struct {
	Manifest *catalogd.Manifest `json:"manifest"`
	Existing bool               `json:"existing"`
}

// Register POSTs to /catalog/register. Returns ErrFingerprintConflict
// on 409, the decoded response on 200, an error on anything else.
func (c *CatalogClient) Register(ctx context.Context, req *RegisterRequest) (*RegisterResponse, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("marshal register: %w", err)
	}
	httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, c.baseURL+"/catalog/register", bytes.NewReader(body))
	if err != nil {
		return nil, fmt.Errorf("register req: %w", err)
	}
	httpReq.Header.Set("Content-Type", "application/json")
	httpReq.ContentLength = int64(len(body))

	resp, err := c.hc.Do(httpReq)
	if err != nil {
		return nil, fmt.Errorf("register do: %w", err)
	}
	defer drainAndClose(resp.Body)

	if resp.StatusCode == http.StatusConflict {
		preview, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
		return nil, fmt.Errorf("%w: %s", ErrFingerprintConflict, string(preview))
	}
	if resp.StatusCode != http.StatusOK {
		preview, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
		return nil, fmt.Errorf("register status %d: %s", resp.StatusCode, string(preview))
	}
	var out RegisterResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, fmt.Errorf("register decode: %w", err)
	}
	return &out, nil
}

// drainAndClose mirrors the catalogd store_client helper — drain a
// bounded amount of body bytes before close so HTTP/1.1 keep-alive
// pool reuse stays healthy on error paths.
func drainAndClose(body io.ReadCloser) {
	_, _ = io.Copy(io.Discard, io.LimitReader(body, 64<<10))
	_ = body.Close()
}