profit ac01fffd9a checkpoint: matrix-agent-validated (2026-04-25)
Architectural snapshot of the lakehouse codebase at the point where the
full matrix-driven agent loop with Mem0 versioning + deletion was
validated end-to-end.

WHAT THIS REPO IS
A clean single-commit snapshot of the lakehouse code. Heavy test data
(.parquet datasets, vector indexes) is excluded — see REPLICATION.md for
the regeneration path. Full lakehouse history lives at
git.agentview.dev/profit/lakehouse.

WHAT WAS PROVEN
- Vector retrieval across the multi-corpus matrix (chicago_permits + entity
  briefs + sec_tickers + distilled procedural + llm_team runs)
- Observer hand-review (cloud + heuristic fallback) gating each candidate
- Local-model agent loop (qwen3.5:latest) with tool use + scratchpad
- Playbook seal on success → next-iter retrieval surfaces it as preamble
- Mem0 versioning + deletion in pathway_memory:
    * UPSERT: ADD on new workflow, UPDATE bumps replay_count on identical
    * REVISE: chains versions, parent.superseded_at + superseded_by stamped
    * RETIRE: marks specific trace retired with reason, excluded from retrieval
    * HISTORY: walks chain root→tip, cycle-safe
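The four pathway_memory operations above can be sketched as follows. This is a minimal TypeScript illustration of the version-chain semantics, not the actual implementation (which is Rust, in crates/vectord/src/pathway_memory.rs); the `Trace` shape and all names here are assumptions.

```typescript
// Hypothetical shape of a stored trace; field names mirror the commit
// message (replay_count, superseded_at, superseded_by) but are illustrative.
interface Trace {
  id: string;
  workflow: string;
  replayCount: number;
  retired?: { at: string; reason: string };
  supersededBy?: string;
  supersededAt?: string;
}

class PathwayMemorySketch {
  private traces = new Map<string, Trace>();

  // UPSERT: ADD on a new workflow; UPDATE bumps replay_count on an identical one.
  upsert(id: string, workflow: string): Trace {
    for (const t of this.traces.values()) {
      if (t.workflow === workflow && !t.supersededBy && !t.retired) {
        t.replayCount += 1;
        return t;
      }
    }
    const fresh: Trace = { id, workflow, replayCount: 1 };
    this.traces.set(id, fresh);
    return fresh;
  }

  // REVISE: chain a new version; parent gets superseded_at + superseded_by stamped.
  revise(parentId: string, newId: string, workflow: string): Trace {
    const parent = this.traces.get(parentId);
    if (!parent) throw new Error(`no trace ${parentId}`);
    parent.supersededBy = newId;
    parent.supersededAt = new Date().toISOString();
    const child: Trace = { id: newId, workflow, replayCount: 1 };
    this.traces.set(newId, child);
    return child;
  }

  // RETIRE: mark a specific trace retired with a reason; retrieval skips it.
  retire(id: string, reason: string): void {
    const t = this.traces.get(id);
    if (t) t.retired = { at: new Date().toISOString(), reason };
  }

  // HISTORY: walk the chain root→tip; the visited set makes it cycle-safe.
  history(rootId: string): string[] {
    const seen = new Set<string>();
    const chain: string[] = [];
    let cur: Trace | undefined = this.traces.get(rootId);
    while (cur && !seen.has(cur.id)) {
      seen.add(cur.id);
      chain.push(cur.id);
      cur = cur.supersededBy ? this.traces.get(cur.supersededBy) : undefined;
    }
    return chain;
  }
}
```

Note the UPSERT/REVISE distinction: UPSERT treats an identical workflow as a replay (counter bump, no new record), while REVISE always creates a new record and links it, so retrieval can prefer chain tips while HISTORY still recovers every version.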

KEY DIRECTORIES
- crates/vectord/src/pathway_memory.rs — Mem0 ops live here
- crates/vectord/src/playbook_memory.rs — original Mem0 reference
- tests/agent_test/ — local-model agent harness + PRD + session archives
- scripts/dump_raw_corpus.sh — MinIO bucket dump (raw test corpus)
- scripts/vectorize_raw_corpus.ts — corpus → vector indexes
- scripts/analyze_chicago_contracts.ts — real inference pipeline
- scripts/seal_agent_playbook.ts — Mem0 upsert from agent traces

Replication: see REPLICATION.md for Debian 13 clean install + cloud-only
adaptation (no local Ollama).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 19:43:27 -05:00

#!/usr/bin/env bash
# Run all generated scenarios sequentially to populate the KB.
# Reads tests/multi-agent/scenarios/manifest.json and feeds each file
# to scenario.ts. Each scenario indexes into data/_kb/ automatically
# via the end-of-run hook. Exit code: 0 if all scenarios completed
# (event failures are NOT failures for the batch — we want the KB to
# record both successes AND failures).
set -e
cd "$(dirname "$0")/.."
export OLLAMA_CLOUD_KEY="$(python3 -c "import json; print(json.load(open('/root/llm_team_config.json'))['providers']['ollama_cloud']['api_key'])" 2>/dev/null || echo '')"
MANIFEST="tests/multi-agent/scenarios/manifest.json"
if [ ! -f "$MANIFEST" ]; then
  echo "✗ no manifest at $MANIFEST — run: bun tests/multi-agent/gen_scenarios.ts <N>"
  exit 1
fi
START_TS=$(date -Iseconds)
LOG_DIR="/tmp/lakehouse_kb_batch_$(date +%s)"
mkdir -p "$LOG_DIR"
echo "▶ KB batch start: $START_TS, logs → $LOG_DIR"
python3 -c "
import json
m = json.load(open('$MANIFEST'))
for s in m['scenarios']:
    print(s['file'])
" | while read -r SCEN; do
  SPEC="tests/multi-agent/scenarios/$SCEN"
  BASE=$(basename "$SPEC" .json)
  LOG="$LOG_DIR/${BASE}.log"
  echo "$SCEN"
  bun tests/multi-agent/scenario.ts "$SPEC" > "$LOG" 2>&1 || true
  # A pipeline's exit status is the last command's, so `grep | tail || echo`
  # never falls back; default the empty result explicitly instead.
  OK=$(grep -oP '\d+/\d+ events succeeded' "$LOG" | tail -1)
  OK=${OK:-no-result}
  SIG=$(grep -oP 'KB indexed: sig=\K[a-f0-9]+' "$LOG" | tail -1)
  SIG=${SIG:--}
  echo "$OK; sig=$SIG"
done
echo "▶ KB batch done: $(date -Iseconds)"
echo "▶ KB state:"
wc -l data/_kb/*.jsonl 2>/dev/null || true
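For reference, the only thing the batch loop reads from manifest.json is a top-level `scenarios` array whose entries carry a `file` name. A minimal TypeScript sketch of that assumed shape (the filenames and any extra fields are hypothetical — gen_scenarios.ts is not shown here):

```typescript
// Assumed manifest shape: the batch script's python3 shim projects
// m['scenarios'][i]['file'] and ignores everything else.
interface ScenarioEntry {
  file: string;
}

interface Manifest {
  scenarios: ScenarioEntry[];
}

const manifest: Manifest = {
  scenarios: [
    { file: "scenario_001.json" }, // hypothetical filenames
    { file: "scenario_002.json" },
  ],
};

// Equivalent of the inline python3 loop: emit one scenario filename per line.
const files = manifest.scenarios.map((s) => s.file);
```

Any extra metadata per entry is safe to add later, since the loop only ever dereferences `file`.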