golangLAKEHOUSE/tests/proof/cases/03_ingest_csv_to_parquet.sh
root 1313eb2173 proof harness Phase C: 6 integration cases · 104/0/1 green
Adds the integration tier: the full CSV→Parquet→SQL chain and the
full text→embed→vector→search chain. All 10 cases (4 contract +
6 integration) run deterministically end-to-end; 8s wall total.

Cases added:
  01_storage_roundtrip.sh
    GOLAKE-010-012. PUT 1KiB → GET sha256-equal → LIST contains key
    → DELETE 200/204 → GET 404. Deterministic key under
    proof/<case_id>/ so concurrent runs don't collide.
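
    A rough sketch of the roundtrip shape in plain curl rather than the
    lib/http.sh wrappers (the /v1/storage/<key> PUT/GET/DELETE paths are
    assumptions; only the list endpoint appears verbatim in the cases;
    GW stands in for PROOF_GATEWAY_URL):

      key="proof/GOLAKE-010/blob.bin"              # deterministic per case
      head -c 1024 /dev/urandom > /tmp/blob        # the 1 KiB payload
      curl -sf -X PUT --data-binary @/tmp/blob "$GW/v1/storage/$key"
      curl -sf "$GW/v1/storage/$key" -o /tmp/got
      [ "$(sha256sum /tmp/blob | awk '{print $1}')" = \
        "$(sha256sum /tmp/got  | awk '{print $1}')" ]  # GET sha256-equal
      curl -sf "$GW/v1/storage/list" | grep -qF "$key" # LIST contains key
      curl -sf -X DELETE "$GW/v1/storage/$key"         # DELETE 200/204
      code=$(curl -s -o /dev/null -w '%{http_code}' "$GW/v1/storage/$key")
      [ "$code" = "404" ]                              # GET → 404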

  02_catalog_manifest.sh
    GOLAKE-020-022. Fresh register existing=false → manifest read
    matches → list contains dataset_id → idempotent re-register
    existing=true with stable dataset_id → schema-drift register
    409 (the ADR-020 contract). Per-run unique name via
    PROOF_RUN_ID so existing=false is meaningful.
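
    A minimal sketch of the register claims (the register path, payload
    shape, and SCHEMA_A/SCHEMA_B are assumptions; existing/dataset_id
    are the response fields named above):

      name="proof_ds_${PROOF_RUN_ID}"
      reg() { curl -s -X POST "$GW/v1/catalog/register" \
                -H 'Content-Type: application/json' -d "$1"; }
      first=$(reg "{\"name\":\"$name\",\"schema\":$SCHEMA_A}")
      [ "$(jq -r .existing <<<"$first")" = "false" ]
      again=$(reg "{\"name\":\"$name\",\"schema\":$SCHEMA_A}")
      [ "$(jq -r .existing <<<"$again")" = "true" ]
      [ "$(jq -r .dataset_id <<<"$again")" = \
        "$(jq -r .dataset_id <<<"$first")" ]
      # Same name, drifted schema must 409 per ADR-020:
      code=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
               -H 'Content-Type: application/json' \
               -d "{\"name\":\"$name\",\"schema\":$SCHEMA_B}" \
               "$GW/v1/catalog/register")
      [ "$code" = "409" ]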

  03_ingest_csv_to_parquet.sh
    GOLAKE-030. workers.csv (5 rows) via /v1/ingest multipart →
    parquet object on storaged → catalog manifest with row_count=5.
    Verifies content-addressed key shape (datasets/<n>/<fp>.parquet).

  04_query_correctness.sh
    GOLAKE-040. The 5 SQL assertions from fixtures/expected/queries.json
    against the workers fixture: count=5, Chicago=2, max=95,
    safety→Barbara, Houston avg=89.5. Iterates the fixture's claims,
    runs each query, and compares response columns to the expected
    values.
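
    The claim loop, roughly (the query endpoint, its parameter name,
    and the queries.json field names are assumptions; comparing against
    response columns is from the case):

      jq -c '.[]' fixtures/expected/queries.json | while read -r claim; do
        sql=$(jq -r '.sql' <<<"$claim")
        want=$(jq -c '.expect' <<<"$claim")
        got=$(curl -s -X POST "$GW/v1/query" \
                --data-urlencode "sql=$sql" | jq -c '.columns')
        [ "$got" = "$want" ]   # recorded via proof_assert_eq in the case
      done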

  06_vector_add_search.sh integration extension
    GOLAKE-051. text → /v1/embed (4 docs from fixtures/text/docs.txt)
    → vectord add → search by query embedding. Top-1 ID per query
    asserted against fixtures/expected/rankings.json. First run (or
    --regenerate-rankings) writes the fixture and emits a skip with
    explicit reason; subsequent runs assert against it.
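
    The write-or-assert gate, roughly (flag plumbing and the $observed
    variable are elided; proof_skip/proof_assert_eq as in the other
    cases):

      RANKINGS="fixtures/expected/rankings.json"
      if [ ! -f "$RANKINGS" ] || [ "$REGENERATE" = "1" ]; then
        printf '%s\n' "$observed" > "$RANKINGS"
        proof_skip "$CASE_ID" "rankings fixture (re)generated" \
          "assertions deferred to the next run"
      else
        proof_assert_eq "$CASE_ID" "top-1 IDs match pinned rankings" \
          "$(jq -c . "$RANKINGS")" "$(jq -c . <<<"$observed")"
      fi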

  07_vector_persistence_restart.sh
    GOLAKE-070. add 4 unit-basis vectors → search → record top-1
    distance → SIGTERM vectord → restart with the same --config →
    poll /health for 8s → search again → top-1 ID and distance match
    bit-identically. Skips with reason if vectord PID can't be found
    or post-restart bind times out.
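
    The restart sequence, roughly (PROOF_CONFIG and VECTORD_URL are
    stand-in names; poll cadence illustrative):

      pid=$(pgrep -f '^bin/vectord' | head -n1)
      if [ -z "$pid" ]; then
        proof_skip "$CASE_ID" "vectord PID can't be found" \
          "restart untestable"
        exit 0
      fi
      kill -TERM "$pid"
      while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
      bin/vectord --config "$PROOF_CONFIG" &
      for _ in $(seq 1 80); do                # poll /health for up to 8s
        curl -sf "$VECTORD_URL/health" >/dev/null && break
        sleep 0.1
      done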

Four harness improvements landed alongside:

  run_proof.sh writes a temp lakehouse_proof.toml with
  refresh_every="500ms" override and passes --config to all booted
  binaries. Production default is 30s; 04_query_correctness needs
  queryd to pick up the new view within a tick. Production config
  unchanged.
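
  Roughly (the production config path and whole-file copy are
  assumptions; only the refresh_every key is part of this change):

    tmp_cfg="$(mktemp -d)/lakehouse_proof.toml"
    sed 's/^refresh_every *=.*/refresh_every = "500ms"/' \
      config/lakehouse.toml > "$tmp_cfg"
    bin/queryd --config "$tmp_cfg" &   # likewise for the other daemons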

  cleanup() now pgreps for any orphan bin/<svc> processes (anchored
  to start-of-argv per memory feedback_pkill_scope.md) so a case
  that restarts a service mid-run still gets cleaned up.
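
  The anchored match, roughly (service list per the cases above); the
  ^ keeps a stray "vim bin/vectord.go" out of the kill set:

    for svc in storaged catalogd queryd vectord; do
      pgrep -f "^bin/${svc}" | while read -r pid; do
        kill -TERM "$pid" 2>/dev/null || true
      done
    done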

  lib/http.sh adds proof_call(case_id, probe, method, url, args...)
  — escape hatch for cases that need raw curl args (multipart -F,
  custom headers). Used by 03_ingest for the multipart upload that
  conflicts with proof_post's --data + Content-Type defaults.
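
  Its shape, inferred from the call sites in 03_ingest (the real
  implementation lives in lib/http.sh; logging details elided):

    proof_call() {
      local case_id=$1 probe=$2 method=$3 url=$4; shift 4
      local dir="${PROOF_REPORT_DIR}/raw/http/${case_id}"
      mkdir -p "$dir"
      curl -s -X "$method" "$url" "$@" \
        -o "${dir}/${probe}.body" \
        -w '%{http_code}' > "${dir}/${probe}.status"
    }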

  lib/env.sh exports PROOF_RUN_ID — short unique id derived from the
  report directory timestamp. Used by 02 and 07 for fresh-each-run
  state isolation.
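
  Roughly (the report-dir name is assumed to embed the run timestamp):

    PROOF_RUN_ID=$(basename "$PROOF_REPORT_DIR" | tr -dc '0-9' | tail -c 8)
    export PROOF_RUN_ID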

Two real findings recorded as evidence (no code changes):
  - rankings.json fixture pinned: 4 queries → 4 distinct top-1 docs
    via nomic-embed-text. A model swap that changes ranking now
    fails the harness loudly; --regenerate-rankings is the override.
  - vectord persistence kill+restart preserves top-1 distance
    bit-identically — the LHV1 single-Put framed format from
    G1P round-trips exactly through Save/Load.

Verified end-to-end:
  just proof contract       — 53 pass (4 cases)
  just proof integration    — 104 pass (10 cases) · 8s wall
  just verify               — 9 smokes still green · 33s wall

Phase D (performance baseline) lands next: 10_perf_baseline measures
rows/sec ingest, vectors/sec add, p50/p95 query+search latency, RSS,
CPU. First run writes tests/proof/baseline.json; later runs diff
against it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:26:00 -05:00

#!/usr/bin/env bash
# 03_ingest_csv_to_parquet.sh — GOLAKE-030.
# Ingests fixtures/csv/workers.csv via /v1/ingest, verifies the parquet
# object lands on storaged and catalogd registers a matching manifest.
# Leaves data in place so 04_query_correctness can SELECT against it.
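# No -e on purpose: proof_assert_* record failures without aborting the case.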
set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"
CASE_ID="GOLAKE-030"
CASE_NAME="Ingest CSV → Parquet → catalog manifest"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi
DATASET="proof_workers"
CSV_FIXTURE="${PROOF_REPO_ROOT}/tests/proof/fixtures/csv/workers.csv"
# Record fixture sha for the evidence chain.
CSV_SHA=$(sha256sum "$CSV_FIXTURE" | awk '{print $1}')
echo "{\"fixture\":\"workers.csv\",\"sha256\":\"$CSV_SHA\"}" \
> "${PROOF_REPORT_DIR}/raw/outputs/${CASE_ID}_fixture.json"
# Idempotent prelude — schema-drift would 409, but identical-fp is fine.
# We can't easily delete a catalog entry; rely on idempotent re-ingest.
# If a prior run with different csv content registered DATASET, this
# would 409 — which would be a real finding worth surfacing.
# Ingest. /v1/ingest takes ?name=<n> in the query and a multipart form
# with the CSV file under any field name (handler reads the first file).
# proof_post / proof_put set Content-Type + --data which conflict with
# multipart -F; use proof_call for direct curl-arg pass-through.
proof_call "$CASE_ID" "ingest" POST \
"${PROOF_GATEWAY_URL}/v1/ingest?name=${DATASET}" \
-F "file=@${CSV_FIXTURE}" >/dev/null
ingest_status=$(proof_status_of "$CASE_ID" "ingest")
proof_assert_eq "$CASE_ID" "ingest → 200" "200" "$ingest_status"
# Halt the rest of the case if ingest didn't succeed — the downstream
# claims would all fail for the same reason, no point recording N
# duplicate failures.
if [ "$ingest_status" != "200" ]; then
proof_skip "$CASE_ID" "downstream claims skipped — ingest failed" \
"see raw/http/${CASE_ID}/ingest.body for upstream error"
return 0 2>/dev/null || exit 0
fi
ingest_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/ingest.body"
# Response shape: {manifest, existing, row_count, parquet_size, parquet_key}.
row_count=$(jq -r '.row_count' "$ingest_body")
proof_assert_eq "$CASE_ID" "ingest reports row_count = 5" "5" "$row_count"
parquet_size=$(jq -r '.parquet_size' "$ingest_body")
proof_assert_gt "$CASE_ID" "parquet_size > 0" "$parquet_size" "0"
parquet_key=$(jq -r '.parquet_key' "$ingest_body")
proof_assert_ne "$CASE_ID" "parquet_key non-empty" "" "$parquet_key"
# Content-addressed keys are datasets/<name>/<fp_hex>.parquet per memory `c1e4113`.
proof_assert_contains "$CASE_ID" "parquet_key is content-addressed under datasets/${DATASET}/" \
  "datasets/${DATASET}/" "$parquet_key"
# Verify the parquet object actually exists on storaged.
proof_get "$CASE_ID" "storage_list" \
"${PROOF_GATEWAY_URL}/v1/storage/list" >/dev/null
list_body=$(proof_body_of "$CASE_ID" "storage_list")
proof_assert_contains "$CASE_ID" "storaged LIST contains parquet_key" \
  "$parquet_key" "$list_body"
# Verify catalogd has a matching manifest.
proof_get "$CASE_ID" "catalog_manifest" \
"${PROOF_GATEWAY_URL}/v1/catalog/manifest/${DATASET}" >/dev/null
proof_assert_eq "$CASE_ID" "catalog manifest GET → 200" "200" \
"$(proof_status_of "$CASE_ID" "catalog_manifest")"
manifest_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/catalog_manifest.body"
manifest_row_count=$(jq -r '.row_count' "$manifest_body")
proof_assert_eq "$CASE_ID" "manifest row_count = 5" "5" "$manifest_row_count"