golangLAKEHOUSE/tests/proof/cases/02_catalog_manifest.sh
root 1313eb2173 proof harness Phase C: 6 integration cases · 104 pass / 0 fail / 1 skip
Adds the integration tier — full chain CSV→Parquet→SQL and full
text→embed→vector→search. All 10 cases (4 contract + 6 integration)
end-to-end deterministic; 8s wall total.

Cases added:
  01_storage_roundtrip.sh
    GOLAKE-010-012. PUT 1KiB → GET sha256-equal → LIST contains key
    → DELETE 200/204 → GET 404. Deterministic key under
    proof/<case_id>/ so concurrent runs don't collide.
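The skeleton of that roundtrip can be sketched locally. This is a stand-in, not the real case: PUT/GET are simulated with file copies into a temp directory in place of storaged, and only the key scheme and the sha256-equality check mirror the description.

```shell
# Local sketch of the GOLAKE-010-012 roundtrip skeleton. A temp directory
# stands in for storaged; the deterministic key lives under proof/<case_id>/.
case_id="GOLAKE-010-012"
key="proof/${case_id}/roundtrip.bin"        # deterministic, case-namespaced key

tmp="$(mktemp -d)"
head -c 1024 /dev/zero > "${tmp}/put.bin"   # 1 KiB payload
sha_put="$(sha256sum "${tmp}/put.bin" | cut -d' ' -f1)"

mkdir -p "${tmp}/store/$(dirname "$key")"   # stand-in for PUT
cp "${tmp}/put.bin" "${tmp}/store/${key}"
cp "${tmp}/store/${key}" "${tmp}/get.bin"   # stand-in for GET
sha_get="$(sha256sum "${tmp}/get.bin" | cut -d' ' -f1)"

[ "$sha_put" = "$sha_get" ] && echo "sha256 equal"
```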

  02_catalog_manifest.sh
    GOLAKE-020-022. Fresh register existing=false → manifest read
    matches → list contains dataset_id → idempotent re-register
    existing=true with stable dataset_id → schema-drift register
    409 (the ADR-020 contract). Per-run unique name via
    PROOF_RUN_ID so existing=false is meaningful.
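Why the fresh name matters: the catalog derives dataset_id deterministically from the name (UUIDv5, per the script's own comment), so existing=false is only assertable against a name nobody has registered. A stand-in using sha256 in place of UUIDv5 illustrates the determinism:

```shell
# Stand-in for the catalog's name -> dataset_id derivation. The real service
# uses UUIDv5; sha256 is used here purely to illustrate the determinism.
derive_id() { printf '%s' "$1" | sha256sum | cut -c1-32; }

id1="$(derive_id "proof_catalog_run42")"
id2="$(derive_id "proof_catalog_run42")"   # same name registered again
id3="$(derive_id "proof_catalog_run43")"   # fresh per-run suffix

[ "$id1" = "$id2" ] && echo "same name, same id"
[ "$id1" != "$id3" ] && echo "fresh name, fresh id"
```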

  03_ingest_csv_to_parquet.sh
    GOLAKE-030. workers.csv (5 rows) via /v1/ingest multipart →
    parquet object on storaged → catalog manifest with row_count=5.
    Verifies content-addressed key shape (datasets/<n>/<fp>.parquet).
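The key-shape check can be as small as a bash regex match; the pattern below is an illustrative equivalent, not the literal one in 03_ingest, and the sample key is made up:

```shell
# Illustrative content-addressed key-shape check: a dataset segment, then a
# fingerprint-named parquet object.
key="datasets/workers/sha256:0123abcd.parquet"   # sample key, not from a run
if [[ "$key" =~ ^datasets/[^/]+/[^/]+\.parquet$ ]]; then
  echo "key shape ok"
fi
```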

  04_query_correctness.sh
    GOLAKE-040. The 5 SQL assertions from fixtures/expected/queries.json
    against the workers fixture: count=5, Chicago=2, max=95,
    safety→Barbara, Houston avg=89.5. Iterates the claims from
    queries.json, runs each query, and compares response columns to
    the expected values.
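The iterate-and-compare loop looks roughly like this. The fixture shape and run_query are stand-ins (the real case POSTs SQL to queryd); the expected values count=5 and max=95 are the real ones from the description.

```shell
# Illustrative claims loop over an expected-queries fixture.
tmp="$(mktemp -d)"
cat > "${tmp}/queries.json" <<'JSON'
[
  {"name": "count_rows", "expected": "5"},
  {"name": "max_score",  "expected": "95"}
]
JSON

run_query() {  # stand-in for running the SQL and extracting the result cell
  case "$1" in
    count_rows) echo 5 ;;
    max_score)  echo 95 ;;
  esac
}

pass=0
while IFS=$'\t' read -r name expected; do
  [ "$(run_query "$name")" = "$expected" ] && pass=$((pass + 1))
done < <(jq -r '.[] | [.name, .expected] | @tsv' "${tmp}/queries.json")
echo "passed ${pass} of 2 claims"
```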

  06_vector_add_search.sh integration extension
    GOLAKE-051. text → /v1/embed (4 docs from fixtures/text/docs.txt)
    → vectord add → search by query embedding. Top-1 ID per query
    asserted against fixtures/expected/rankings.json. First run (or
    --regenerate-rankings) writes the fixture and emits a skip with
    explicit reason; subsequent runs assert against it.
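The write-then-assert fixture lifecycle, sketched with the flag handling and skip reporting simplified (REGENERATE stands in for --regenerate-rankings; the observed rankings here are invented):

```shell
# First run (or explicit regenerate) writes the fixture and skips; later
# runs assert against it.
fixture="$(mktemp -d)/rankings.json"
observed='{"q1":"doc2","q2":"doc0"}'   # invented sample rankings

if [ ! -f "$fixture" ] || [ "${REGENERATE:-0}" = "1" ]; then
  printf '%s\n' "$observed" > "$fixture"
  echo "skip: rankings fixture (re)generated, assertions deferred"
else
  [ "$(cat "$fixture")" = "$observed" ] && echo "rankings match"
fi
```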

  07_vector_persistence_restart.sh
    GOLAKE-070. add 4 unit-basis vectors → search → record top-1
    distance → SIGTERM vectord → restart with the same --config →
    poll /health for 8s → search again → top-1 ID and distance match
    bit-identically. Skips with reason if vectord PID can't be found
    or post-restart bind times out.
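The 8 s /health wait can be a generic poll loop. A minimal sketch, with the probe as a parameter (curl against vectord is the real probe; the usage line in the comment assumes a VECTORD_URL variable not shown in this commit):

```shell
# poll_until <timeout_s> <cmd...> — retry cmd until it succeeds or the
# deadline passes (whole-second granularity). The 07 case's wait would be
# something like:  poll_until 8 curl -fsS "${VECTORD_URL}/health" >/dev/null
poll_until() {
  local deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.2
  done
}

poll_until 2 true && echo "probe up"
```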

Four harness improvements landed alongside:

  run_proof.sh writes a temp lakehouse_proof.toml with
  refresh_every="500ms" override and passes --config to all booted
  binaries. Production default is 30s; 04_query_correctness needs
  queryd to pick up the new view within a tick. Production config
  unchanged.
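A minimal sketch of the temp-config override. Only the refresh_every key and the 500ms/30s values come from the description; the [queryd] table name and file layout are assumptions:

```shell
# Write a throwaway proof config with a fast view-refresh tick; the booted
# binaries receive it via --config, so the production file is untouched.
tmpcfg="$(mktemp)"
cat > "$tmpcfg" <<'TOML'
[queryd]
refresh_every = "500ms"  # production default: 30s
TOML
grep -q 'refresh_every = "500ms"' "$tmpcfg" && echo "override in place"
```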

  cleanup() now pgreps for any orphan bin/<svc> processes (anchored
  to start-of-argv per memory feedback_pkill_scope.md) so a case
  that restarts a service mid-run still gets cleaned up.
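Why the start-of-argv anchor matters, shown against a synthetic listing rather than live pgrep output (with pgrep the equivalent is a ^-anchored pattern plus -f):

```shell
# Unanchored 'bin/storaged' also matches the vim and grep lines below;
# anchoring to start-of-argv keeps cleanup() away from bystanders.
listing=$'bin/storaged --config /tmp/proof.toml\nvim bin/storaged\ngrep bin/storaged run.log'
anchored="$(printf '%s\n' "$listing" | grep -c '^bin/storaged')"
unanchored="$(printf '%s\n' "$listing" | grep -c 'bin/storaged')"
echo "anchored=${anchored} unanchored=${unanchored}"
```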

  lib/http.sh adds proof_call(case_id, probe, method, url, args...)
  — escape hatch for cases that need raw curl args (multipart -F,
  custom headers). Used by 03_ingest for the multipart upload that
  conflicts with proof_post's --data + Content-Type defaults.
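A dry-run sketch of that escape hatch: same positional signature, but printing the assembled curl argv instead of invoking it (the real helper also records status and body under the report dir). The URL and filename in the usage line are hypothetical.

```shell
# proof_call(case_id, probe, method, url, args...) — dry-run variant that
# shows how raw curl args (-F, custom headers) pass straight through.
proof_call() {
  local case_id="$1" probe="$2" method="$3" url="$4"; shift 4
  printf '%q ' curl -sS -X "$method" "$@" "$url"; echo
}

cmd="$(proof_call GOLAKE-030 csv_upload POST http://localhost:8080/v1/ingest \
  -F "file=@workers.csv")"
echo "$cmd"
```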

  lib/env.sh exports PROOF_RUN_ID — short unique id derived from the
  report directory timestamp. Used by 02 and 07 for fresh-each-run
  state isolation.
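One plausible derivation, since the actual lib/env.sh logic isn't shown in this commit: take the trailing timestamp segment of the report directory name. The directory layout below is hypothetical.

```shell
# Hypothetical report-dir layout; the real lib/env.sh may derive differently.
PROOF_REPORT_DIR="/tmp/proof/report_20260429_052600"
PROOF_RUN_ID="${PROOF_REPORT_DIR##*_}"   # trailing segment after last '_'
export PROOF_RUN_ID
echo "PROOF_RUN_ID=${PROOF_RUN_ID}"
```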

Two real findings recorded as evidence (no code changes):
  - rankings.json fixture pinned: 4 queries → 4 distinct top-1 docs
    via nomic-embed-text. A model swap that changes ranking now
    fails the harness loudly; --regenerate-rankings is the override.
  - vectord persistence kill+restart preserves top-1 distance
    bit-identically — the LHV1 single-Put framed format from
    G1P round-trips exactly through Save/Load.

Verified end-to-end:
  just proof contract       — 53 pass (4 cases)
  just proof integration    — 104 pass (10 cases) · 8s wall
  just verify               — 9 smokes still green · 33s wall

Phase D (performance baseline) lands next: 10_perf_baseline measures
rows/sec ingest, vectors/sec add, p50/p95 query+search latency, RSS,
CPU. First run writes tests/proof/baseline.json; later runs diff
against it.
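The planned write-or-diff pattern can be sketched now. The metric names follow the description; measure() and its numbers are made up for illustration:

```shell
# Write-or-diff sketch for the planned 10_perf_baseline.
baseline="$(mktemp -d)/baseline.json"
measure() { echo '{"rows_per_sec": 120000, "p95_query_ms": 12}'; }  # fake metrics

if [ ! -f "$baseline" ]; then
  measure > "$baseline"          # first run pins the baseline
  echo "baseline written"
else                             # later runs diff against it
  diff <(measure | jq -S .) <(jq -S . "$baseline") >/dev/null \
    && echo "no drift from baseline"
fi
```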

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:26:00 -05:00

#!/usr/bin/env bash
# 02_catalog_manifest.sh — GOLAKE-020 + GOLAKE-021 + GOLAKE-022.
# Catalog register idempotency + manifest read + list inclusion +
# schema-drift 409 (the ADR-020 contract). Uses a synthetic manifest
# referencing a fake parquet object so we don't depend on prior ingest.
set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"
CASE_ID="GOLAKE-020-022"
CASE_NAME="Catalog manifest — register idempotent + drift 409"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi
# Fresh-each-run name so the existing=false assertion is meaningful.
# Catalog dataset_id is deterministic UUIDv5 from name; reusing the
# same name across runs would always show existing=true on second run.
NAME="proof_catalog_${PROOF_RUN_ID}"
FP_A="sha256:proof_test_fp_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
FP_B="sha256:proof_test_fp_bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
reg_body() {
  local name="$1" fp="$2"
  cat <<JSON
{
  "name": "${name}",
  "schema_fingerprint": "${fp}",
  "objects": [{"key": "datasets/${name}/${fp}.parquet", "size": 1024}],
  "row_count": 5
}
JSON
}
# Fresh register.
proof_post "$CASE_ID" "register_first" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_A")" >/dev/null
proof_assert_eq "$CASE_ID" "first register → 200" "200" \
  "$(proof_status_of "$CASE_ID" "register_first")"
first_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/register_first.body"
existing_first=$(jq -r '.existing' "$first_body")
proof_assert_eq "$CASE_ID" "first register existing=false" \
  "false" "$existing_first"
dataset_id_first=$(jq -r '.manifest.dataset_id' "$first_body")
proof_assert_ne "$CASE_ID" "first register dataset_id non-empty" "" "$dataset_id_first"
# Manifest read matches what was registered.
proof_get "$CASE_ID" "manifest_read" \
  "${PROOF_GATEWAY_URL}/v1/catalog/manifest/${NAME}" >/dev/null
proof_assert_eq "$CASE_ID" "manifest read → 200" "200" \
  "$(proof_status_of "$CASE_ID" "manifest_read")"
read_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/manifest_read.body"
read_fp=$(jq -r '.schema_fingerprint' "$read_body")
proof_assert_eq "$CASE_ID" "manifest schema_fingerprint matches" \
  "$FP_A" "$read_fp"
read_id=$(jq -r '.dataset_id' "$read_body")
proof_assert_eq "$CASE_ID" "manifest dataset_id matches" \
  "$dataset_id_first" "$read_id"
# List contains the dataset.
proof_get "$CASE_ID" "list" \
  "${PROOF_GATEWAY_URL}/v1/catalog/list" >/dev/null
proof_assert_eq "$CASE_ID" "list → 200" "200" \
  "$(proof_status_of "$CASE_ID" "list")"
list_body=$(proof_body_of "$CASE_ID" "list")
proof_assert_contains "$CASE_ID" "list contains dataset_id" \
  "$dataset_id_first" "$list_body"
# Idempotent re-register with same name+fp → existing=true, dataset_id stable.
proof_post "$CASE_ID" "register_second" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_A")" >/dev/null
proof_assert_eq "$CASE_ID" "second register → 200" "200" \
  "$(proof_status_of "$CASE_ID" "register_second")"
second_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/register_second.body"
existing_second=$(jq -r '.existing' "$second_body")
proof_assert_eq "$CASE_ID" "second register existing=true (idempotent)" \
  "true" "$existing_second"
dataset_id_second=$(jq -r '.manifest.dataset_id' "$second_body")
proof_assert_eq "$CASE_ID" "dataset_id stable across re-register" \
  "$dataset_id_first" "$dataset_id_second"
# Schema drift — different fp on same name → 409 (ADR-020).
proof_post "$CASE_ID" "register_drift" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_B")" >/dev/null
proof_assert_eq "$CASE_ID" "drift register → 409 (ADR-020)" "409" \
  "$(proof_status_of "$CASE_ID" "register_drift")"