Architectural snapshot of the lakehouse codebase at the point where the
full matrix-driven agent loop with Mem0 versioning + deletion was
validated end-to-end.
WHAT THIS REPO IS
A clean single-commit snapshot of the lakehouse code. Heavy test data
(.parquet datasets, vector indexes) excluded — see REPLICATION.md for
regen path. Full lakehouse history at git.agentview.dev/profit/lakehouse.
WHAT WAS PROVEN
- Vector retrieval across a multi-corpus matrix (chicago_permits + entity
briefs + sec_tickers + distilled procedural + llm_team runs)
- Observer hand-review (cloud + heuristic fallback) gating each candidate
- Local-model agent loop (qwen3.5:latest) with tool use + scratchpad
- Playbook seal on success → next-iter retrieval surfaces it as preamble
- Mem0 versioning + deletion in pathway_memory:
* UPSERT: ADD on new workflow, UPDATE bumps replay_count on an identical one
* REVISE: chains versions, parent.superseded_at + superseded_by stamped
* RETIRE: marks specific trace retired with reason, excluded from retrieval
* HISTORY: walks chain root→tip, cycle-safe
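The REVISE/HISTORY pair above can be sketched as a minimal model: REVISE links records via superseded_by, and HISTORY walks the chain root→tip with a visited set so a corrupted (cyclic) chain terminates instead of looping forever. All names here are illustrative, not the actual pathway_memory.rs API:

```python
def history(store: dict, root_id: str) -> list[str]:
    """Walk a version chain from root to tip, cycle-safe.

    `store` maps trace id -> record dict; a record's "superseded_by"
    points at the revision that replaced it (None at the tip).
    """
    chain, seen = [], set()
    cur = root_id
    # Stop at the tip (None), at a dangling pointer, or on a revisit.
    while cur is not None and cur in store and cur not in seen:
        seen.add(cur)
        chain.append(cur)
        cur = store[cur].get("superseded_by")
    return chain

store = {
    "v1": {"workflow": "permit-scan", "superseded_by": "v2"},
    "v2": {"workflow": "permit-scan", "superseded_by": None},
}
print(history(store, "v1"))  # → ['v1', 'v2']
```

A cyclic chain (e.g. v1 → v2 → v1) returns each id once rather than hanging, which is the "cycle-safe" property the HISTORY bullet claims.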
KEY DIRECTORIES
- crates/vectord/src/pathway_memory.rs — Mem0 ops live here
- crates/vectord/src/playbook_memory.rs — original Mem0 reference
- tests/agent_test/ — local-model agent harness + PRD + session archives
- scripts/dump_raw_corpus.sh — MinIO bucket dump (raw test corpus)
- scripts/vectorize_raw_corpus.ts — corpus → vector indexes
- scripts/analyze_chicago_contracts.ts — real inference pipeline
- scripts/seal_agent_playbook.ts — Mem0 upsert from agent traces
Replication: see REPLICATION.md for Debian 13 clean install + cloud-only
adaptation (no local Ollama).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
47 lines
1.9 KiB
Bash
Executable File
#!/usr/bin/env bash
# Run the 12 staffer-demo scenarios (4 personas × 3 contracts) with
# cloud T3 enabled. After each run, KB indexes the outcome and
# recomputes that staffer's competence_score. By the end, the top
# staffer's playbooks will be surfacing first in neighbor retrieval.

set -e
cd "$(dirname "$0")/.."

export OLLAMA_CLOUD_KEY="$(python3 -c "import json; print(json.load(open('/root/llm_team_config.json'))['providers']['ollama_cloud']['api_key'])" 2>/dev/null || echo '')"

MANIFEST="tests/multi-agent/scenarios/staffer_demo/manifest.json"
if [ ! -f "$MANIFEST" ]; then
  echo "✗ no manifest — run: bun tests/multi-agent/gen_staffer_demo.ts"
  exit 1
fi

START_TS=$(date -Iseconds)
LOG_DIR="/tmp/lakehouse_staffer_demo_$(date +%s)"
mkdir -p "$LOG_DIR"
echo "▶ Staffer demo start: $START_TS, logs → $LOG_DIR"

python3 -c "
import json
m = json.load(open('$MANIFEST'))
for s in m['scenarios']:
    print(s['file'], '|', s['staffer'], '|', s['contract'])
" | while IFS='|' read -r SCEN STAFFER CONTRACT; do
  # xargs trims the whitespace left around the '|' delimiters
  SCEN=$(echo "$SCEN" | xargs)
  STAFFER=$(echo "$STAFFER" | xargs)
  CONTRACT=$(echo "$CONTRACT" | xargs)
  SPEC="tests/multi-agent/scenarios/staffer_demo/$SCEN"
  BASE=$(basename "$SPEC" .json)
  LOG="$LOG_DIR/${BASE}.log"
  echo "  ▶ $STAFFER × $CONTRACT"
  LH_OVERVIEW_CLOUD=1 bun tests/multi-agent/scenario.ts "$SPEC" > "$LOG" 2>&1 || true
  # Note: `tail -1` exits 0 even when grep matched nothing, so an
  # `|| echo fallback` after the pipeline never fires; default via ${:-}.
  OK=$(grep -oP '\d+/\d+ events succeeded' "$LOG" | tail -1)
  OK=${OK:-no-result}
  RESCUES=$(grep -c "cloud rescue requested" "$LOG" || true)
  RESCUE_OK=$(grep -c "retry outcome: ✓" "$LOG" || true)
  SIG=$(grep -oP 'KB indexed: sig=\K[a-f0-9]+' "$LOG" | tail -1)
  SIG=${SIG:--}
  echo "    → $OK; rescues=$RESCUES (${RESCUE_OK} succeeded); sig=$SIG"
done

echo "▶ Staffer demo done: $(date -Iseconds)"
echo "▶ Staffer competence leaderboard:"
python3 scripts/kb_staffer_report.py 2>/dev/null || echo "(run scripts/kb_staffer_report.py after batch completes)"
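The manifest the script iterates is assumed to have the shape below (field names are taken from the parsing loop's `s['file']`, `s['staffer']`, `s['contract']` accesses; the values and filenames are illustrative, not the real scenario set):

```python
import json, os, tempfile

# Illustrative manifest: each scenario contributes one
# "file | staffer | contract" line to the pipe-delimited stream
# that the bash while-read loop consumes.
manifest = {
    "scenarios": [
        {"file": "alice_contract_a.json", "staffer": "alice", "contract": "contract_a"},
        {"file": "bob_contract_b.json",   "staffer": "bob",   "contract": "contract_b"},
    ]
}
path = os.path.join(tempfile.mkdtemp(), "manifest.json")
with open(path, "w") as f:
    json.dump(manifest, f)

# Reproduce the lines the embedded python3 -c snippet emits:
lines = [f"{s['file']} | {s['staffer']} | {s['contract']}"
         for s in json.load(open(path))["scenarios"]]
print("\n".join(lines))  # → alice_contract_a.json | alice | contract_a ...
```

Because the fields are joined with `' | '`, splitting on `|` in the shell leaves stray spaces around each token, which is why the loop pipes every field through `xargs` to trim it.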