profit ac01fffd9a checkpoint: matrix-agent-validated (2026-04-25)
Architectural snapshot of the lakehouse codebase at the point where the
full matrix-driven agent loop with Mem0 versioning + deletion was
validated end-to-end.

WHAT THIS REPO IS
A clean single-commit snapshot of the lakehouse code. Heavy test data
(.parquet datasets, vector indexes) excluded — see REPLICATION.md for
regen path. Full lakehouse history at git.agentview.dev/profit/lakehouse.

WHAT WAS PROVEN
- Vector retrieval across the multi-corpus matrix (chicago_permits + entity
  briefs + sec_tickers + distilled procedural + llm_team runs)
- Observer hand-review (cloud + heuristic fallback) gating each candidate
- Local-model agent loop (qwen3.5:latest) with tool use + scratchpad
- Playbook seal on success → next-iter retrieval surfaces it as preamble
- Mem0 versioning + deletion in pathway_memory:
    * UPSERT: ADD on a new workflow; UPDATE bumps replay_count on an identical one
    * REVISE: chains versions, parent.superseded_at + superseded_by stamped
    * RETIRE: marks a specific trace retired with a reason; excluded from retrieval
    * HISTORY: walks chain root→tip, cycle-safe

KEY DIRECTORIES
- crates/vectord/src/pathway_memory.rs — Mem0 ops live here
- crates/vectord/src/playbook_memory.rs — original Mem0 reference
- tests/agent_test/ — local-model agent harness + PRD + session archives
- scripts/dump_raw_corpus.sh — MinIO bucket dump (raw test corpus)
- scripts/vectorize_raw_corpus.ts — corpus → vector indexes
- scripts/analyze_chicago_contracts.ts — real inference pipeline
- scripts/seal_agent_playbook.ts — Mem0 upsert from agent traces

Replication: see REPLICATION.md for Debian 13 clean install + cloud-only
adaptation (no local Ollama).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 19:43:27 -05:00


# Truth rules — file-backed policy
Phase 42 PRD: *"truth/ dir at repo root — rule files, versioned in git."*
This directory is the canonical home for TruthStore rules loaded from
disk. Each `*.toml` file holds a set of `TruthRule` records for one
task class. The truth crate's `load_from_dir` walks this
directory, parses every `.toml` file, and registers the rules it finds.
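For orientation, the sketch below shows one way such a loading pass can be written. It is a minimal sketch only: the `RuleFile`/`TruthRule` shapes, the `serde`/`toml` dependencies, and collecting rules into a `Vec` (rather than registering them into the store) are assumptions for illustration, not the truth crate's actual API.
```rust
// Sketch only: shapes and names are illustrative, not the real crate.
// Assumes the `serde` (with derive) and `toml` crates.
use std::{fs, path::Path};

#[derive(serde::Deserialize)]
struct RuleFile {
    #[serde(rename = "rule")]
    rules: Vec<TruthRule>, // matches the [[rule]] array-of-tables
}

#[derive(serde::Deserialize)]
struct TruthRule {
    id: String,
    task_class: String,
    description: String,
    // condition and action elided; serde ignores unknown fields by default
}

fn load_rules(dir: &Path) -> Result<Vec<TruthRule>, Box<dyn std::error::Error>> {
    let mut all = Vec::new();
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        // Only *.toml files are rule files; README.md and friends are skipped.
        if path.extension().and_then(|e| e.to_str()) == Some("toml") {
            let text = fs::read_to_string(&path)?;
            let file: RuleFile = toml::from_str(&text)?;
            all.extend(file.rules);
        }
    }
    Ok(all)
}
```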
## Structure
```
truth/
├── README.md ← this file
├── staffing.fill.toml ← rules for task_class="staffing.fill"
└── staffing.any.toml ← rules for task_class="staffing.any"
```
File naming is informational — `load_from_dir` respects whatever
`task_class` the rule declares internally, NOT the filename. Using
task-class-matching filenames is a convention for humans reading the
git tree.
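For example, a rule like the following registers under `staffing.fill` even if the file it lives in is named `staffing.any.toml` (the rule id here is invented; the full rule shape is defined in the next section):
```toml
# Saved in staffing.any.toml, yet registered under "staffing.fill":
# the declared task_class wins over the filename.
[[rule]]
id = "misfiled-but-registered"
task_class = "staffing.fill"
description = "Declared task_class wins over the filename"
condition = { type = "Always" }
action = { type = "Pass" }
```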
## Rule shape
```toml
[[rule]]
id = "worker-active"
task_class = "staffing.fill"
description = "Worker must be active"
condition = { type = "FieldEquals", field = "worker.status", value = "active" }
action = { type = "Pass" }
```
`condition.type` is one of (see the sketch after this list):
- `Always` — always true
- `FieldEquals { field, value }`
- `FieldMismatch { field, value }`
- `FieldEmpty { field }`
- `FieldGreater { field, threshold }`
- `FieldContainsAny { field, needles }`
`action.type` is one of (see the sketch after this list):
- `Pass` — rule informational; no enforcement
- `Reject { message }` — short-circuit with error
- `Redact { fields }` — mutate the context, strip fields
- `Block { message }` — hard stop, alert
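And a sketch of an enforcing rule (again with invented id and field names), pairing `FieldContainsAny` with `Redact`:
```toml
# Strips the listed fields from the context whenever the note text
# contains any of the needles.
[[rule]]
id = "strip-contact-details"
task_class = "staffing.any"
description = "Redact contact details mentioned in free-text notes"
condition = { type = "FieldContainsAny", field = "notes.body", needles = ["phone", "email"] }
action = { type = "Redact", fields = ["worker.phone", "worker.email"] }
```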
## Composition
The crate's `default_truth_store()` continues to register rules
**in code** for backward-compat. Operators can layer file-backed
rules ON TOP via `load_from_dir`:
```rust
// Built-in rules register first; file-backed rules layer on top.
let store = truth::default_truth_store();
let store = truth::load_from_dir(&store, "/home/profit/lakehouse/truth")?;
```
File-loaded rules are additive — they do NOT replace in-code rules.
This lets the staffing team tune rules at the file level (edit a
threshold, add a new `FieldContainsAny` blocklist) without waiting
for a code deploy.
## Explicit non-goals
- **No hot reload** — per Phase 42 PRD ("Truth reload is explicit
in this phase"). Operators bounce the gateway, or (in a future phase)
POST to a `/v1/context` refresh endpoint, to pick up changes.
- **No inheritance** — each file stands alone; rule IDs must be unique
across all files. Duplicate-ID detection is a load-time error.
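For reference, load-time duplicate detection can be as simple as a pass over the collected rules with a set of seen ids. The sketch below reuses the illustrative `TruthRule` shape from the loading sketch above; it is not the crate's real type or error handling:
```rust
use std::collections::HashSet;

struct TruthRule {
    id: String,
    // other fields elided
}

// Fail the whole load if any two files declare the same rule id.
fn check_unique_ids(rules: &[TruthRule]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for rule in rules {
        if !seen.insert(rule.id.as_str()) {
            return Err(format!("duplicate rule id across truth/ files: {}", rule.id));
        }
    }
    Ok(())
}
```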