proof harness Phase C: 6 integration cases · 104/0/1 green
Adds the integration tier — full chain CSV→Parquet→SQL and full
text→embed→vector→search. All 10 cases (4 contract + 6 integration)
end-to-end deterministic; 8s wall total.
Cases added:
01_storage_roundtrip.sh
GOLAKE-010-012. PUT 1KiB → GET sha256-equal → LIST contains key
→ DELETE 200/204 → GET 404. Deterministic key under
proof/<case_id>/ so concurrent runs don't collide.
02_catalog_manifest.sh
GOLAKE-020-022. Fresh register existing=false → manifest read
matches → list contains dataset_id → idempotent re-register
existing=true with stable dataset_id → schema-drift register
409 (the ADR-020 contract). Per-run unique name via
PROOF_RUN_ID so existing=false is meaningful.
03_ingest_csv_to_parquet.sh
GOLAKE-030. workers.csv (5 rows) via /v1/ingest multipart →
parquet object on storaged → catalog manifest with row_count=5.
Verifies content-addressed key shape (datasets/<name>/<fp>.parquet).
04_query_correctness.sh
GOLAKE-040. The 5 SQL assertions from fixtures/expected/queries.json
against the workers fixture: count=5, Chicago=2, max=95,
safety→Barbara, Houston avg=89.5. Iterates the JSON claims, runs
each query, compares response columns to expected values.
06_vector_add_search.sh integration extension
GOLAKE-051. text → /v1/embed (4 docs from fixtures/text/docs.txt)
→ vectord add → search by query embedding. Top-1 ID per query
asserted against fixtures/expected/rankings.json. First run (or
--regenerate-rankings) writes the fixture and emits a skip with
explicit reason; subsequent runs assert against it.
07_vector_persistence_restart.sh
GOLAKE-070. add 4 unit-basis vectors → search → record top-1
distance → SIGTERM vectord → restart with the same --config →
poll /health for 8s → search again → top-1 ID and distance match
bit-identically. Skips with reason if vectord PID can't be found
or post-restart bind times out.
Two harness improvements landed alongside:
run_proof.sh writes a temp lakehouse_proof.toml with
refresh_every="500ms" override and passes --config to all booted
binaries. Production default is 30s; 04_query_correctness needs
queryd to pick up the new view within a tick. Production config
unchanged.
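The override is a one-line sed rewrite of the config. A minimal standalone sketch — the TOML content and /tmp paths here are illustrative, not the production lakehouse.toml:

```shell
# Sketch of the refresh_every override. The key name matches the
# commit; the surrounding TOML section is an assumption.
cat > /tmp/lakehouse.toml <<'EOF'
[queryd]
refresh_every = "30s"
EOF

sed 's/^refresh_every *=.*/refresh_every = "500ms"/' /tmp/lakehouse.toml \
  > /tmp/lakehouse_proof.toml

grep refresh_every /tmp/lakehouse_proof.toml
```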
cleanup() now pgreps for any orphan bin/<svc> processes (anchored
to start-of-argv per memory feedback_pkill_scope.md) so a case
that restarts a service mid-run still gets cleaned up.
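The anchoring matters: an unanchored `pgrep -f vectord` would also match unrelated processes whose argv merely mentions the name. A quick stand-in check of the pattern using grep -E (pgrep -f uses the same ERE syntax against the full command line); the sample argv strings are made up:

```shell
# Verify the anchored pattern against hypothetical argv strings.
pattern='^[./]*bin/vectord($| )'
match() { printf '%s' "$1" | grep -qE "$pattern" && echo yes || echo no; }

match './bin/vectord --config lakehouse.toml'  # our service, relative path
match 'bin/vectord'                            # bare invocation
match 'vim bin/vectord.go'                     # editor — not at argv start
match './bin/vectord-helper'                   # name must end at word break
```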
lib/http.sh adds proof_call(case_id, probe, method, url, args...)
— escape hatch for cases that need raw curl args (multipart -F,
custom headers). Used by 03_ingest for the multipart upload that
conflicts with proof_post's --data + Content-Type defaults.
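The escape hatch is plain argument pass-through. A sketch with a stubbed _proof_http_run (the real helper invokes curl and records evidence; the stub only echoes) showing that extra curl args like -F survive word-splitting intact:

```shell
# Stub for illustration only — the real _proof_http_run runs curl
# and writes status/body evidence files.
_proof_http_run() { printf '%s\n' "$@"; }

proof_call() {
  local case_id="$1" probe="$2" method="$3" url="$4"; shift 4
  _proof_http_run "$case_id" "$probe" "$method" "$url" "$@"
}

proof_call GOLAKE-030 ingest POST "http://localhost:3110/v1/ingest?name=demo" \
  -F "file=@workers.csv"
```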
lib/env.sh exports PROOF_RUN_ID — short unique id derived from the
report directory timestamp. Used by 02 and 07 for fresh-each-run
state isolation.
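The derivation is pure string surgery on the report dir name. A sketch — the `proof-<date>-<time>Z` directory name format is assumed from the strip rules, not confirmed by the commit:

```shell
# Hypothetical report dir; the real one is created by the orchestrator.
PROOF_REPORT_DIR="reports/proof-20250110-1530Z"

# Strip the "proof-" prefix, the dashes, and the trailing Z; keep the
# last 8 chars.
_run_basename="$(basename "$PROOF_REPORT_DIR" | sed 's/proof-//; s/-//g; s/Z$//')"
PROOF_RUN_ID="${_run_basename: -8}"
echo "$PROOF_RUN_ID"   # prints 01101530
```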
Two real findings recorded as evidence (no code changes):
- rankings.json fixture pinned: 4 queries → 4 distinct top-1 docs
via nomic-embed-text. A model swap that changes ranking now
fails the harness loudly; --regenerate-rankings is the override.
- vectord persistence kill+restart preserves top-1 distance
bit-identically — the LHV1 single-Put framed format from
G1P round-trips exactly through Save/Load.
Verified end-to-end:
just proof contract — 53 pass (4 cases)
just proof integration — 104 pass (10 cases) · 8s wall
just verify — 9 smokes still green · 33s wall
Phase D (performance baseline) lands next: 10_perf_baseline measures
rows/sec ingest, vectors/sec add, p50/p95 query+search latency, RSS,
CPU. First run writes tests/proof/baseline.json; later runs diff
against it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent 6d18394416
commit 1313eb2173
63
tests/proof/cases/01_storage_roundtrip.sh
Executable file
@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# 01_storage_roundtrip.sh — GOLAKE-010 + GOLAKE-011 + GOLAKE-012.
# PUT bytes → GET bytes-equal → LIST contains key → DELETE → GET 404.
# Uses a deterministic key under proof/<case_id>/ so concurrent runs
# don't collide and the bucket stays inspectable post-run.

set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"

CASE_ID="GOLAKE-010-012"
CASE_NAME="Storage round-trip — PUT → GET → LIST → DELETE → 404"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi

KEY="proof/${CASE_ID}/payload.bin"

# Deterministic 1 KiB payload (128 × 8-byte lines) — sha256 must round-trip.
PAYLOAD_FILE="${PROOF_REPORT_DIR}/raw/outputs/${CASE_ID}.payload"
mkdir -p "$(dirname "$PAYLOAD_FILE")"
seq 1 128 | awk '{printf "%07d\n", $1}' > "$PAYLOAD_FILE"
EXPECTED_SHA=$(sha256sum "$PAYLOAD_FILE" | awk '{print $1}')

# Idempotent prelude: clear any leftover from prior run.
proof_delete "$CASE_ID" "pre_clean" \
  "${PROOF_GATEWAY_URL}/v1/storage/delete/${KEY}" >/dev/null

# PUT.
proof_put "$CASE_ID" "put" \
  "${PROOF_GATEWAY_URL}/v1/storage/put/${KEY}" \
  "application/octet-stream" "@${PAYLOAD_FILE}" >/dev/null
proof_assert_status_in "$CASE_ID" "PUT → 200 or 201" "200 201" "put"

# GET — bytes must round-trip.
proof_get "$CASE_ID" "get" \
  "${PROOF_GATEWAY_URL}/v1/storage/get/${KEY}" >/dev/null
proof_assert_eq "$CASE_ID" "GET → 200" "200" \
  "$(proof_status_of "$CASE_ID" "get")"
ACTUAL_SHA=$(sha256sum \
  "${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/get.body" | awk '{print $1}')
proof_assert_eq "$CASE_ID" "GET body sha256 matches PUT input" \
  "$EXPECTED_SHA" "$ACTUAL_SHA"

# LIST — must contain the key. /storage/list returns JSON array of keys.
proof_get "$CASE_ID" "list" \
  "${PROOF_GATEWAY_URL}/v1/storage/list" >/dev/null
proof_assert_eq "$CASE_ID" "LIST → 200" "200" \
  "$(proof_status_of "$CASE_ID" "list")"
list_body=$(proof_body_of "$CASE_ID" "list")
proof_assert_contains "$CASE_ID" "LIST contains the put key" "$KEY" "$list_body"

# DELETE.
proof_delete "$CASE_ID" "del" \
  "${PROOF_GATEWAY_URL}/v1/storage/delete/${KEY}" >/dev/null
proof_assert_status_in "$CASE_ID" "DELETE → 200 or 204" "200 204" "del"

# GET after DELETE → 404.
proof_get "$CASE_ID" "get_after_delete" \
  "${PROOF_GATEWAY_URL}/v1/storage/get/${KEY}" >/dev/null
proof_assert_eq "$CASE_ID" "GET after DELETE → 404" "404" \
  "$(proof_status_of "$CASE_ID" "get_after_delete")"
92
tests/proof/cases/02_catalog_manifest.sh
Executable file
@@ -0,0 +1,92 @@
#!/usr/bin/env bash
# 02_catalog_manifest.sh — GOLAKE-020 + GOLAKE-021 + GOLAKE-022.
# Catalog register idempotency + manifest read + list inclusion +
# schema-drift 409 (the ADR-020 contract). Uses a synthetic manifest
# referencing a fake parquet object so we don't depend on prior ingest.

set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"

CASE_ID="GOLAKE-020-022"
CASE_NAME="Catalog manifest — register idempotent + drift 409"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi

# Fresh-each-run name so the existing=false assertion is meaningful.
# Catalog dataset_id is deterministic UUIDv5 from name; reusing the
# same name across runs would always show existing=true on second run.
NAME="proof_catalog_${PROOF_RUN_ID}"
FP_A="sha256:proof_test_fp_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
FP_B="sha256:proof_test_fp_bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"

reg_body() {
  local name="$1" fp="$2"
  cat <<JSON
{
  "name": "${name}",
  "schema_fingerprint": "${fp}",
  "objects": [{"key": "datasets/${name}/${fp}.parquet", "size": 1024}],
  "row_count": 5
}
JSON
}

# Fresh register.
proof_post "$CASE_ID" "register_first" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_A")" >/dev/null
proof_assert_eq "$CASE_ID" "first register → 200" "200" \
  "$(proof_status_of "$CASE_ID" "register_first")"

first_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/register_first.body"
existing_first=$(jq -r '.existing' "$first_body")
proof_assert_eq "$CASE_ID" "first register existing=false" \
  "false" "$existing_first"
dataset_id_first=$(jq -r '.manifest.dataset_id' "$first_body")
proof_assert_ne "$CASE_ID" "first register dataset_id non-empty" "" "$dataset_id_first"

# Manifest read matches what was registered.
proof_get "$CASE_ID" "manifest_read" \
  "${PROOF_GATEWAY_URL}/v1/catalog/manifest/${NAME}" >/dev/null
proof_assert_eq "$CASE_ID" "manifest read → 200" "200" \
  "$(proof_status_of "$CASE_ID" "manifest_read")"
read_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/manifest_read.body"
read_fp=$(jq -r '.schema_fingerprint' "$read_body")
proof_assert_eq "$CASE_ID" "manifest schema_fingerprint matches" \
  "$FP_A" "$read_fp"
read_id=$(jq -r '.dataset_id' "$read_body")
proof_assert_eq "$CASE_ID" "manifest dataset_id matches" \
  "$dataset_id_first" "$read_id"

# List contains the dataset.
proof_get "$CASE_ID" "list" \
  "${PROOF_GATEWAY_URL}/v1/catalog/list" >/dev/null
proof_assert_eq "$CASE_ID" "list → 200" "200" \
  "$(proof_status_of "$CASE_ID" "list")"
list_body=$(proof_body_of "$CASE_ID" "list")
proof_assert_contains "$CASE_ID" "list contains dataset_id" \
  "$dataset_id_first" "$list_body"

# Idempotent re-register with same name+fp → existing=true, dataset_id stable.
proof_post "$CASE_ID" "register_second" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_A")" >/dev/null
proof_assert_eq "$CASE_ID" "second register → 200" "200" \
  "$(proof_status_of "$CASE_ID" "register_second")"
second_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/register_second.body"
existing_second=$(jq -r '.existing' "$second_body")
proof_assert_eq "$CASE_ID" "second register existing=true (idempotent)" \
  "true" "$existing_second"
dataset_id_second=$(jq -r '.manifest.dataset_id' "$second_body")
proof_assert_eq "$CASE_ID" "dataset_id stable across re-register" \
  "$dataset_id_first" "$dataset_id_second"

# Schema drift — different fp on same name → 409 (ADR-020).
proof_post "$CASE_ID" "register_drift" \
  "${PROOF_GATEWAY_URL}/v1/catalog/register" \
  "application/json" "$(reg_body "$NAME" "$FP_B")" >/dev/null
proof_assert_eq "$CASE_ID" "drift register → 409 (ADR-020)" "409" \
  "$(proof_status_of "$CASE_ID" "register_drift")"
80
tests/proof/cases/03_ingest_csv_to_parquet.sh
Executable file
@@ -0,0 +1,80 @@
#!/usr/bin/env bash
# 03_ingest_csv_to_parquet.sh — GOLAKE-030.
# Ingests fixtures/csv/workers.csv via /v1/ingest, verifies the parquet
# object lands on storaged and catalogd registers a matching manifest.
# Leaves data in place so 04_query_correctness can SELECT against it.

set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"

CASE_ID="GOLAKE-030"
CASE_NAME="Ingest CSV → Parquet → catalog manifest"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi

DATASET="proof_workers"
CSV_FIXTURE="${PROOF_REPO_ROOT}/tests/proof/fixtures/csv/workers.csv"

# Record fixture sha for the evidence chain.
CSV_SHA=$(sha256sum "$CSV_FIXTURE" | awk '{print $1}')
echo "{\"fixture\":\"workers.csv\",\"sha256\":\"$CSV_SHA\"}" \
  > "${PROOF_REPORT_DIR}/raw/outputs/${CASE_ID}_fixture.json"

# Idempotent prelude — schema-drift would 409, but identical-fp is fine.
# We can't easily delete a catalog entry; rely on idempotent re-ingest.
# If a prior run with different csv content registered DATASET, this
# would 409 — which would be a real finding worth surfacing.

# Ingest. /v1/ingest takes ?name=<name> in the query and a multipart form
# with the CSV file under any field name (handler reads the first file).
# proof_post / proof_put set Content-Type + --data which conflict with
# multipart -F; use proof_call for direct curl-arg pass-through.
proof_call "$CASE_ID" "ingest" POST \
  "${PROOF_GATEWAY_URL}/v1/ingest?name=${DATASET}" \
  -F "file=@${CSV_FIXTURE}" >/dev/null

ingest_status=$(proof_status_of "$CASE_ID" "ingest")
proof_assert_eq "$CASE_ID" "ingest → 200" "200" "$ingest_status"

# Halt the rest of the case if ingest didn't succeed — the downstream
# claims would all fail for the same reason, no point recording N
# duplicate failures.
if [ "$ingest_status" != "200" ]; then
  proof_skip "$CASE_ID" "downstream claims skipped — ingest failed" \
    "see raw/http/${CASE_ID}/ingest.body for upstream error"
  return 0 2>/dev/null || exit 0
fi

ingest_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/ingest.body"

# Response shape: {manifest, existing, row_count, parquet_size, parquet_key}.
row_count=$(jq -r '.row_count' "$ingest_body")
proof_assert_eq "$CASE_ID" "ingest reports row_count = 5" "5" "$row_count"

parquet_size=$(jq -r '.parquet_size' "$ingest_body")
proof_assert_gt "$CASE_ID" "parquet_size > 0" "$parquet_size" "0"

parquet_key=$(jq -r '.parquet_key' "$ingest_body")
proof_assert_ne "$CASE_ID" "parquet_key non-empty" "" "$parquet_key"
# Content-addressed keys are datasets/<name>/<fp_hex>.parquet per memory `c1e4113`.
proof_assert_contains "$CASE_ID" "parquet_key is content-addressed under datasets/${DATASET}/" \
  "datasets/${DATASET}/" "$parquet_key"

# Verify the parquet object actually exists on storaged.
proof_get "$CASE_ID" "storage_list" \
  "${PROOF_GATEWAY_URL}/v1/storage/list" >/dev/null
list_body=$(proof_body_of "$CASE_ID" "storage_list")
proof_assert_contains "$CASE_ID" "storaged LIST contains parquet_key" \
  "$parquet_key" "$list_body"

# Verify catalogd has a matching manifest.
proof_get "$CASE_ID" "catalog_manifest" \
  "${PROOF_GATEWAY_URL}/v1/catalog/manifest/${DATASET}" >/dev/null
proof_assert_eq "$CASE_ID" "catalog manifest GET → 200" "200" \
  "$(proof_status_of "$CASE_ID" "catalog_manifest")"
manifest_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/catalog_manifest.body"
manifest_row_count=$(jq -r '.row_count' "$manifest_body")
proof_assert_eq "$CASE_ID" "manifest row_count = 5" "5" "$manifest_row_count"
69
tests/proof/cases/04_query_correctness.sh
Executable file
@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# 04_query_correctness.sh — GOLAKE-040.
# Runs the 5 SQL assertions from fixtures/expected/queries.json against
# the workers dataset ingested by 03_ingest_csv_to_parquet. Each query
# is recorded with full evidence; this case is the canonical "does the
# SQL path return correct results" claim.

set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"

CASE_ID="GOLAKE-040"
CASE_NAME="Query correctness — 5 SQL assertions on workers fixture"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi

DATASET="proof_workers"
EXPECTED_FILE="${PROOF_REPO_ROOT}/tests/proof/fixtures/expected/queries.json"

# Spec's SQL fixtures use unquoted table name "workers" but ingestd
# registers under whatever ?name= we passed in 03 — proof_workers.
# Substitute on the fly so the queries still reference the right view.
substitute_table() {
  sed "s/FROM workers/FROM ${DATASET}/g; s/from workers/from ${DATASET}/g"
}

# Iterate the 5 queries.
n=$(jq '.queries | length' "$EXPECTED_FILE")
for i in $(seq 0 $((n-1))); do
  qid=$(jq -r ".queries[$i].id" "$EXPECTED_FILE")
  qclaim=$(jq -r ".queries[$i].claim" "$EXPECTED_FILE")
  qsql=$(jq -r ".queries[$i].sql" "$EXPECTED_FILE" | substitute_table)
  # Each expected key/value drives one assertion.
  expected_keys=$(jq -r ".queries[$i].expected | keys[]" "$EXPECTED_FILE")

  # Build a minimal JSON body — escape the SQL via jq.
  body=$(jq -nc --arg sql "$qsql" '{sql:$sql}')

  proof_post "$CASE_ID" "${qid}_query" \
    "${PROOF_GATEWAY_URL}/v1/sql" \
    "application/json" "$body" >/dev/null

  qstatus=$(proof_status_of "$CASE_ID" "${qid}_query")
  proof_assert_eq "$CASE_ID" "${qid}: ${qclaim} — query status 200" \
    "200" "$qstatus"

  # Skip the value assertions if the query failed.
  if [ "$qstatus" != "200" ]; then continue; fi

  qbody="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/${qid}_query.body"

  # queryd response shape: {columns: [{name,type}], rows: [[...]], row_count: N}
  # We compare each expected key against the value at column index for
  # that key in row 0.
  for ek in $expected_keys; do
    expected=$(jq -r ".queries[$i].expected.\"$ek\"" "$EXPECTED_FILE")
    # Find the column index for $ek in the response, then read row[0][idx].
    col_idx=$(jq -r --arg n "$ek" '.columns | map(.name) | index($n)' "$qbody")
    if [ "$col_idx" = "null" ]; then
      _proof_record "$CASE_ID" "${qid}: column ${ek} present in response" \
        fail "${ek}" "<missing>" "column not found in response"
      continue
    fi
    actual=$(jq -r ".rows[0][$col_idx]" "$qbody")
    proof_assert_eq "$CASE_ID" "${qid}: ${qclaim}" "$expected" "$actual"
  done
done
@@ -75,3 +75,122 @@ proof_assert_lt "$CASE_ID" "top-1 distance < 0.001 (cosine self ≈ 0)" \
proof_delete "$CASE_ID" "post_clean" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${INDEX_NAME}" >/dev/null
proof_assert_status_in "$CASE_ID" "delete index → 200 or 204" "200 204" "post_clean"

# ── integration tier — text → embed → add → search top-K ──────────
# Skip in contract mode; full pipeline runs only when integration or
# performance is the active mode.
if [ "$PROOF_MODE" = "contract" ]; then return 0 2>/dev/null || exit 0; fi

# Switch CASE_ID for the integration claim — assertions land under
# GOLAKE-051 in their own JSONL so the per-case-id table tracks them
# distinctly from the contract claims above.
CASE_ID="GOLAKE-051"

DOCS_FILE="${PROOF_REPO_ROOT}/tests/proof/fixtures/text/docs.txt"
RANKINGS_FILE="${PROOF_REPO_ROOT}/tests/proof/fixtures/expected/rankings.json"
SEM_INDEX="proof_sem_${PROOF_RUN_ID}"

# Pre-flight: skip the integration block cleanly if Ollama is down so
# we don't get a wall of "502" failures and so spec rule "skipped !=
# passed" stays honest.
proof_post "$CASE_ID" "embed_health" "${PROOF_GATEWAY_URL}/v1/embed" \
  "application/json" '{"texts":["health probe"]}' >/dev/null
embed_status=$(proof_status_of "$CASE_ID" "embed_health")
if [ "$embed_status" != "200" ]; then
  proof_skip "$CASE_ID" "Embedding integration — Ollama unreachable" \
    "POST /v1/embed returned ${embed_status}; cannot exercise top-K ranking"
  return 0 2>/dev/null || exit 0
fi

# Load 4 docs from fixture (tab-separated id<TAB>text).
ids=()
texts=()
while IFS=$'\t' read -r id text; do
  [ -z "$id" ] && continue
  ids+=("$id")
  texts+=("$text")
done < "$DOCS_FILE"

# Embed all 4 docs in one batch — single round trip.
texts_json=$(printf '%s\n' "${texts[@]}" | jq -R . | jq -s .)
embed_body=$(jq -nc --argjson texts "$texts_json" '{texts:$texts}')
proof_post "$CASE_ID" "embed_docs" "${PROOF_GATEWAY_URL}/v1/embed" \
  "application/json" "$embed_body" >/dev/null
embed_resp="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/embed_docs.body"
proof_assert_eq "$CASE_ID" "embed 4 docs → 200" "200" \
  "$(proof_status_of "$CASE_ID" "embed_docs")"

# Create the dim=768 index.
proof_post "$CASE_ID" "sem_create" "${PROOF_GATEWAY_URL}/v1/vectors/index" \
  "application/json" "{\"name\":\"${SEM_INDEX}\",\"dimension\":768}" >/dev/null
proof_assert_eq "$CASE_ID" "create dim=768 index → 201" "201" \
  "$(proof_status_of "$CASE_ID" "sem_create")"

# Build add body: zip ids[i] with vectors[i] from embed response.
ids_json=$(printf '%s\n' "${ids[@]}" | jq -R . | jq -s .)
add_body=$(jq -nc --argjson ids "$ids_json" --slurpfile e "$embed_resp" '
  [range(0; ($ids | length)) | {id: $ids[.], vector: $e[0].vectors[.]}] | {items: .}
')
proof_post "$CASE_ID" "sem_add" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${SEM_INDEX}/add" \
  "application/json" "$add_body" >/dev/null
proof_assert_eq "$CASE_ID" "add 4 docs to index → 200" "200" \
  "$(proof_status_of "$CASE_ID" "sem_add")"

# Test queries. Each must return its corresponding doc as top-1.
declare -a query_keys=("welder_chicago" "warehouse_safety" "detroit_electrical" "houston_pipefitter")
declare -a query_texts=(
  "welder needed in Chicago"
  "warehouse safety crew"
  "Detroit electrical contractor"
  "Houston pipefitter"
)

# Capture top-1 per query.
declare -A actual_top1
for i in "${!query_keys[@]}"; do
  key="${query_keys[$i]}"
  query="${query_texts[$i]}"
  qbody=$(jq -nc --arg q "$query" '{texts:[$q]}')
  proof_post "$CASE_ID" "embed_q_${key}" "${PROOF_GATEWAY_URL}/v1/embed" \
    "application/json" "$qbody" >/dev/null
  qvec=$(jq -c '.vectors[0]' \
    "${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/embed_q_${key}.body")
  sbody=$(jq -nc --argjson v "$qvec" '{vector:$v,k:1}')
  proof_post "$CASE_ID" "search_${key}" \
    "${PROOF_GATEWAY_URL}/v1/vectors/index/${SEM_INDEX}/search" \
    "application/json" "$sbody" >/dev/null
  top1=$(jq -r '.results[0].id' \
    "${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/search_${key}.body")
  actual_top1[$key]="$top1"
done

# Assert against stored rankings — or write fixture on first run /
# explicit --regenerate-rankings.
need_regen=0
[ ! -f "$RANKINGS_FILE" ] && need_regen=1
[ "${PROOF_REGENERATE_RANKINGS:-0}" = "1" ] && need_regen=1

if [ "$need_regen" = "1" ]; then
  # Build JSON object {query_key: top1_id, ...} from the bash assoc array.
  out="{"
  sep=""
  for k in "${query_keys[@]}"; do
    out+="${sep}\"${k}\": \"${actual_top1[$k]}\""
    sep=","
  done
  out+="}"
  echo "$out" | jq . > "$RANKINGS_FILE"
  proof_skip "$CASE_ID" "rankings fixture regenerated — re-run to verify" \
    "wrote ${RANKINGS_FILE} from this run; assertions skipped this turn"
else
  for k in "${query_keys[@]}"; do
    expected=$(jq -r ".${k}" "$RANKINGS_FILE")
    proof_assert_eq "$CASE_ID" "top-1 for query '${k}' matches fixture" \
      "$expected" "${actual_top1[$k]}"
  done
fi

# Cleanup the semantic index.
proof_delete "$CASE_ID" "sem_clean" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${SEM_INDEX}" >/dev/null
130
tests/proof/cases/07_vector_persistence_restart.sh
Executable file
@@ -0,0 +1,130 @@
#!/usr/bin/env bash
# 07_vector_persistence_restart.sh — GOLAKE-070.
# Verifies vectord persistence: add vectors, search, kill vectord,
# restart, search again — top-1 ID and distance must match exactly
# (the reloaded float32 graph is expected to be bit-identical). The
# orchestrator's cleanup uses pgrep so the restarted vectord gets
# cleaned up regardless of PID tracking.

set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../lib/env.sh"
source "${SCRIPT_DIR}/../lib/http.sh"
source "${SCRIPT_DIR}/../lib/assert.sh"

CASE_ID="GOLAKE-070"
CASE_NAME="Vector persistence — kill+restart preserves state"
CASE_TYPE="integration"
if [ "${1:-}" = "--metadata-only" ]; then return 0 2>/dev/null || exit 0; fi

INDEX_NAME="proof_persist_${PROOF_RUN_ID}"
VECTORD_LOG="${PROOF_REPORT_DIR}/raw/logs/vectord_restart.log"

# Pre-flight: vectord must be reachable.
if ! curl -sf -m 1 "${PROOF_VECTORD_URL}/health" >/dev/null 2>&1; then
  proof_skip "$CASE_ID" "Persistence test — vectord unreachable" \
    "vectord not responding on :3215; harness bootstrap may have failed"
  return 0 2>/dev/null || exit 0
fi

# Build deterministic vectors. Unit basis vectors so search is unambiguous.
proof_post "$CASE_ID" "create_index" "${PROOF_GATEWAY_URL}/v1/vectors/index" \
  "application/json" \
  "{\"name\":\"${INDEX_NAME}\",\"dimension\":4}" >/dev/null
proof_assert_eq "$CASE_ID" "create index → 201" "201" \
  "$(proof_status_of "$CASE_ID" "create_index")"

add_body='{"items":[
  {"id":"p1","vector":[1,0,0,0]},
  {"id":"p2","vector":[0,1,0,0]},
  {"id":"p3","vector":[0,0,1,0]},
  {"id":"p4","vector":[0,0,0,1]}
]}'
proof_post "$CASE_ID" "add_vectors" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${INDEX_NAME}/add" \
  "application/json" "$add_body" >/dev/null
proof_assert_eq "$CASE_ID" "add 4 vectors → 200" "200" \
  "$(proof_status_of "$CASE_ID" "add_vectors")"

# Pre-restart search — record top-1 as the canonical reference.
search_body='{"vector":[1,0,0,0],"k":2}'
proof_post "$CASE_ID" "pre_restart_search" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${INDEX_NAME}/search" \
  "application/json" "$search_body" >/dev/null
pre_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/pre_restart_search.body"
pre_top1=$(jq -r '.results[0].id' "$pre_body")
pre_dist=$(jq -r '.results[0].distance' "$pre_body")
proof_assert_eq "$CASE_ID" "pre-restart top-1 = p1" "p1" "$pre_top1"

# ── kill vectord ────────────────────────────────────────────────
echo "[case-07] killing vectord..." >> "$VECTORD_LOG"
old_pid=$(pgrep -f "^[./]*bin/vectord($| )" | head -1)
if [ -z "$old_pid" ]; then
  proof_skip "$CASE_ID" "vectord PID not found — can't test restart" \
    "pgrep returned no match for ^bin/vectord"
  return 0 2>/dev/null || exit 0
fi
kill "$old_pid" 2>/dev/null || true

# Wait for vectord to actually go down (so the restart path is exercised).
deadline=$(($(date +%s) + 5))
while [ "$(date +%s)" -lt "$deadline" ]; do
  if ! curl -sf -m 1 "${PROOF_VECTORD_URL}/health" >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done

# Confirm it's down — if still up, kill -9.
if curl -sf -m 1 "${PROOF_VECTORD_URL}/health" >/dev/null 2>&1; then
  kill -9 "$old_pid" 2>/dev/null || true
  sleep 0.5
fi

# ── restart vectord ─────────────────────────────────────────────
cd "$PROOF_REPO_ROOT"
./bin/vectord --config "$PROOF_LAKEHOUSE_CONFIG" >> "$VECTORD_LOG" 2>&1 &
new_pid=$!

# Poll for readiness — give it 8s like the bootstrap does.
deadline=$(($(date +%s) + 8))
ready=0
while [ "$(date +%s)" -lt "$deadline" ]; do
  if curl -sf -m 1 "${PROOF_VECTORD_URL}/health" >/dev/null 2>&1; then
    ready=1; break
  fi
  sleep 0.1
done

if [ "$ready" -eq 0 ]; then
  _proof_record "$CASE_ID" "vectord restart binds within 8s" \
    fail "ready" "timeout" "vectord did not respond to /health after restart; pid=${new_pid}"
  return 0 2>/dev/null || exit 0
fi
_proof_record "$CASE_ID" "vectord restart binds within 8s" \
  pass "ready" "ready" "old_pid=${old_pid} new_pid=${new_pid}"

# ── post-restart search ─────────────────────────────────────────
proof_post "$CASE_ID" "post_restart_search" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${INDEX_NAME}/search" \
  "application/json" "$search_body" >/dev/null

post_status=$(proof_status_of "$CASE_ID" "post_restart_search")
proof_assert_eq "$CASE_ID" "post-restart search → 200" "200" "$post_status"

if [ "$post_status" != "200" ]; then
  proof_skip "$CASE_ID" "value assertions skipped — search failed" \
    "post-restart search returned ${post_status}; index may not have rehydrated"
else
  post_body="${PROOF_REPORT_DIR}/raw/http/${CASE_ID}/post_restart_search.body"
  post_top1=$(jq -r '.results[0].id' "$post_body")
  post_dist=$(jq -r '.results[0].distance' "$post_body")
  proof_assert_eq "$CASE_ID" "post-restart top-1 ID matches pre-restart" \
    "$pre_top1" "$post_top1"
  # Distances should be bit-identical (same float32 graph reloaded).
  proof_assert_eq "$CASE_ID" "post-restart top-1 distance matches pre-restart" \
    "$pre_dist" "$post_dist"
fi

# Cleanup.
proof_delete "$CASE_ID" "post_clean" \
  "${PROOF_GATEWAY_URL}/v1/vectors/index/${INDEX_NAME}" >/dev/null
6
tests/proof/fixtures/expected/rankings.json
Normal file
@@ -0,0 +1,6 @@
{
  "welder_chicago": "doc-001",
  "warehouse_safety": "doc-002",
  "detroit_electrical": "doc-003",
  "houston_pipefitter": "doc-004"
}
@@ -58,3 +58,11 @@ JSON
fi

export PROOF_GIT_SHA="$(cd "$PROOF_REPO_ROOT" && git rev-parse HEAD 2>/dev/null || echo unknown)"

# A short unique id per orchestrator run, used by cases that need
# fresh-each-run state (e.g. catalog dataset names that should be
# absent on first register). Derived from the report dir basename so
# all cases in one run share the same ID. Strip the "proof-" prefix
# and dashes; use last 8 chars for brevity.
_run_basename="$(basename "$PROOF_REPORT_DIR" | sed 's/proof-//; s/-//g; s/Z$//')"
export PROOF_RUN_ID="${_run_basename: -8}"
@@ -94,6 +94,20 @@ proof_delete() {
  _proof_http_run "$case_id" "$probe" DELETE "$url" "$@"
}

# proof_call: escape hatch for cases that need full control of curl
# args — multipart uploads (-F), custom headers, --form-string, etc.
# proof_post / proof_put add a Content-Type header and --data body
# that conflict with -F multipart, so use this for those cases.
#
# proof_call <case_id> <probe> <method> <url> [curl-args...]
#
# Example multipart POST:
#   proof_call "$CASE_ID" "ingest" POST "$URL" -F "file=@${FILE}"
proof_call() {
  local case_id="$1" probe="$2" method="$3" url="$4"; shift 4
  _proof_http_run "$case_id" "$probe" "$method" "$url" "$@"
}

# Helper accessors — read the per-probe JSON.
proof_status_of() {
  local case_id="$1" probe="$2"
@@ -74,9 +74,20 @@ PIDS=()
 WE_BOOTED=0
 
 cleanup() {
-  if [ "$WE_BOOTED" -eq 1 ] && [ "${#PIDS[@]}" -gt 0 ]; then
-    echo "[run_proof] cleanup: killing ${#PIDS[@]} services we started"
-    kill "${PIDS[@]}" 2>/dev/null || true
+  if [ "$WE_BOOTED" -eq 1 ]; then
+    # Kill the original PIDs we recorded plus any restarts a case
+    # might have done (07_vector_persistence_restart kills+restarts
+    # vectord mid-case, which orphans the original PID and creates
+    # a new one we never tracked). pgrep pattern is anchored to
+    # bin/<name> at start-of-argv per memory feedback_pkill_scope.md.
+    echo "[run_proof] cleanup: stopping services we started (incl. any restarts)"
+    if [ "${#PIDS[@]}" -gt 0 ]; then
+      kill "${PIDS[@]}" 2>/dev/null || true
+    fi
+    for svc in storaged catalogd ingestd queryd vectord embedd gateway; do
+      pgrep -f "^[./]*bin/${svc}($| )" 2>/dev/null \
+        | xargs -r kill 2>/dev/null || true
+    done
+    wait 2>/dev/null || true
   fi
 }
@@ -101,6 +112,13 @@ bootstrap_services() {
     return 1
   fi
 
+  # Override queryd's refresh_every to 500ms so cases see new
+  # manifests within a tick — production default is 30s, which races
+  # against ingest→query cases. Default config left alone for prod.
+  local CFG_OVERRIDE="${PROOF_REPORT_DIR}/raw/lakehouse_proof.toml"
+  sed 's/^refresh_every *=.*/refresh_every = "500ms"/' lakehouse.toml > "$CFG_OVERRIDE"
+  export PROOF_LAKEHOUSE_CONFIG="$CFG_OVERRIDE"
+
   echo "[run_proof] bootstrap: launching services in dep order..."
   for SPEC in "storaged:3211" "catalogd:3212" "ingestd:3213" "queryd:3214" "vectord:3215" "embedd:3216" "gateway:3110"; do
     local NAME="${SPEC%:*}" PORT="${SPEC#*:}"
@@ -109,7 +127,8 @@ bootstrap_services() {
       echo " ✓ ${NAME} (:${PORT}) already up — leaving as-is"
       continue
     fi
-    ./bin/"$NAME" > "${PROOF_REPORT_DIR}/raw/logs/${NAME}.log" 2>&1 &
+    ./bin/"$NAME" --config "$CFG_OVERRIDE" \
+      > "${PROOF_REPORT_DIR}/raw/logs/${NAME}.log" 2>&1 &
     PIDS+=("$!")
     if poll_health "$NAME" "$PORT"; then
       echo " ✓ ${NAME} (:${PORT}) booted"