Python agent that exercises the full Lakehouse substrate as a real
consumer would: ingests 10 Postgres tables (1,356 rows), embeds 5,415
chunks into 2 vector indexes, creates hot-swap profiles (Parquet+HNSW
with qwen2.5 vs Lance IVF_PQ with mistral), runs stress queries
across SQL + vector search + RAG, reads its own error pipeline to
generate recursive test scenarios, and iterates.
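The outer loop described above can be sketched roughly as follows. This is a hypothetical minimal shape, not the actual script: `Lakehouse`, `run_suite`, and the `test_failures` dataset name are illustrative stand-ins for the real substrate APIs.

```python
from dataclasses import dataclass, field

# Illustrative agent loop: run the stress suite, flush failures into the
# lakehouse as a dataset, then derive the next round's scenarios from
# those failures. All names here are assumptions.

@dataclass
class Lakehouse:
    datasets: dict = field(default_factory=dict)

    def write(self, name, rows):
        self.datasets.setdefault(name, []).extend(rows)

    def query(self, name):
        return self.datasets.get(name, [])

def run_suite(scenarios):
    # Placeholder: a real run would exercise SQL, vector search, and RAG.
    return [{"scenario": s, "ok": True} for s in scenarios]

def iterate(lake, scenarios, rounds=2):
    results = []
    for _ in range(rounds):
        results = run_suite(scenarios)
        failures = [r for r in results if not r["ok"]]
        lake.write("test_failures", failures)      # queryable error pipeline
        weak = [f["scenario"] for f in lake.query("test_failures")]
        scenarios = weak or scenarios              # target weak spots next round
    return results

lake = Lakehouse()
results = iterate(lake, ["sql_join", "vector_topk", "rag_answer"])
```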
50/50 tests pass across 2 iterations with zero errors. Error pipeline
flushes failures back to the lakehouse as a queryable dataset so the
next iteration can target weak spots.
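Because failures land in the lakehouse as ordinary rows, "target weak spots" reduces to a query. A sketch using an in-memory SQLite table as a stand-in; the `test_failures` schema here is an assumption, not the actual one:

```python
import sqlite3

# Assumed schema: flushed failures become a plain table the next
# iteration can rank by failure count to pick its targets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_failures (iteration INT, scenario TEXT, error TEXT)")
conn.executemany(
    "INSERT INTO test_failures VALUES (?, ?, ?)",
    [
        (1, "vector_topk", "timeout"),
        (1, "vector_topk", "timeout"),
        (1, "rag_answer", "empty context"),
    ],
)
weak_spots = conn.execute(
    "SELECT scenario, COUNT(*) AS n FROM test_failures"
    " GROUP BY scenario ORDER BY n DESC"
).fetchall()
```

`weak_spots` then seeds the next iteration's scenario list, most-failing first.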
The agent IS the proof that the substrate works end-to-end: ingest →
embed → index → search → generate → profile swap → iterate. Every
capability we built today gets exercised in one script.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- lakehouse.service: release gateway on :3100, auto-restart
- lakehouse-sidecar.service: Python FastAPI on :3200, auto-restart
- lakehouse-ui.service: WASM file server on :3300, auto-restart
- All enabled at boot (multi-user.target)
- scripts/serve_ui.py for systemd-compatible file serving
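For reference, a unit along these lines would cover the auto-restart and boot-enable behavior listed above. This is a hypothetical sketch of one of the three units; the `ExecStart` path, module name, and user are assumptions, not the committed file:

```ini
# Sketch of lakehouse-sidecar.service (paths/names are illustrative)
[Unit]
Description=Lakehouse Python sidecar (FastAPI on :3200)
After=network.target

[Service]
ExecStart=/usr/bin/python3 -m uvicorn sidecar.app:app --port 3200
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

`Restart=always` gives the auto-restart behavior, and `WantedBy=multi-user.target` is what `systemctl enable` hooks for start-at-boot.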
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>