matrix-agent-validated/mcp-server/AGENT_INSTRUCTIONS.md
profit ac01fffd9a checkpoint: matrix-agent-validated (2026-04-25)
Architectural snapshot of the lakehouse codebase at the point where the
full matrix-driven agent loop with Mem0 versioning + deletion was
validated end-to-end.

WHAT THIS REPO IS
A clean single-commit snapshot of the lakehouse code. Heavy test data
(.parquet datasets, vector indexes) excluded — see REPLICATION.md for
regen path. Full lakehouse history at git.agentview.dev/profit/lakehouse.

WHAT WAS PROVEN
- Vector retrieval across the multi-corpus matrix (chicago_permits + entity
  briefs + sec_tickers + distilled procedural + llm_team runs)
- Observer hand-review (cloud + heuristic fallback) gating each candidate
- Local-model agent loop (qwen3.5:latest) with tool use + scratchpad
- Playbook seal on success → next-iter retrieval surfaces it as preamble
- Mem0 versioning + deletion in pathway_memory:
    * UPSERT: ADD on new workflow, UPDATE bumps replay_count on identical
    * REVISE: chains versions, parent.superseded_at + superseded_by stamped
    * RETIRE: marks specific trace retired with reason, excluded from retrieval
    * HISTORY: walks chain root→tip, cycle-safe

KEY DIRECTORIES
- crates/vectord/src/pathway_memory.rs — Mem0 ops live here
- crates/vectord/src/playbook_memory.rs — original Mem0 reference
- tests/agent_test/ — local-model agent harness + PRD + session archives
- scripts/dump_raw_corpus.sh — MinIO bucket dump (raw test corpus)
- scripts/vectorize_raw_corpus.ts — corpus → vector indexes
- scripts/analyze_chicago_contracts.ts — real inference pipeline
- scripts/seal_agent_playbook.ts — Mem0 upsert from agent traces

Replication: see REPLICATION.md for Debian 13 clean install + cloud-only
adaptation (no local Ollama).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 19:43:27 -05:00

# Lakehouse Agent Instructions
You are connected to a staffing intelligence system. Your job is to answer staffing questions, match workers to contracts, and track what works.
## Gateway
All tools are at `http://localhost:3700`. POST JSON (or GET for the read-only endpoints), get JSON back.
## Tools
### `/search` — Hybrid SQL+Vector Search (use this most)
Find workers matching structured criteria + semantic meaning.
```json
POST /search
{
"question": "reliable forklift operators with hazmat certification",
"sql_filter": "role = 'Forklift Operator' AND state = 'IL' AND reliability > 0.8",
"top_k": 5
}
```
Every result is SQL-verified against the database. Trust the `sources` array — those workers exist with those exact skills and scores.
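A response looks roughly like the following (illustrative only; the field names `answer`, `sources`, `worker_id`, and the score keys are assumptions, not a documented schema):
```json
{
  "answer": "3 workers match the criteria ...",
  "sources": [
    {
      "worker_id": 4925,
      "role": "Forklift Operator",
      "state": "IL",
      "reliability": 0.91,
      "skills": ["hazmat", "pallet jack"]
    }
  ]
}
```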
### `/sql` — Direct SQL Query
For exact counts, aggregations, and structured lookups.
```json
POST /sql
{ "sql": "SELECT role, COUNT(*) cnt FROM ethereal_workers GROUP BY role ORDER BY cnt DESC LIMIT 10" }
```
Tables (approximate row counts): `ethereal_workers` (10K), `workers_100k` (100K), `candidates` (100K), `timesheets` (1M), `placements` (50K), `call_log` (800K), `email_log` (500K), `job_orders` (15K), `clients` (2K).
### `/match` — Match Workers to a Contract
```json
POST /match
{
"role": "Machine Operator",
"state": "IN",
"min_reliability": 0.8,
"required_certs": ["OSHA-10"],
"headcount": 5
}
```
Returns qualified, SQL-verified workers ranked by semantic fit.
### `/worker/:id` — Get One Worker
```
GET /worker/4925
```
Returns all fields: name, role, city, skills, certifications, scores, communications.
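An illustrative response, using only the fields named above (the exact shape is an assumption):
```json
{
  "id": 4925,
  "name": "Jane Doe",
  "role": "Forklift Operator",
  "city": "Chicago",
  "skills": ["hazmat", "pallet jack"],
  "certifications": ["OSHA-10"],
  "scores": { "reliability": 0.91 },
  "communications": []
}
```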
### `/ask` — RAG Question
For open-ended questions. Embeds your question, searches the vector index, and generates an answer.
```json
POST /ask
{ "question": "What kinds of workers do we have in Ohio?" }
```
### `/log` — Record What Worked
After a successful operation, log it. Future runs can query past successes.
```json
POST /log
{
"operation": "Filled 3 forklift positions in Chicago",
"approach": "hybrid search: sql_filter role+state+reliability, vector rank by skills",
"result": "3/3 filled, all verified, client satisfied"
}
```
### `/playbooks` — Learn From Past Success
Before starting a task, check what worked before.
```
GET /playbooks?keyword=forklift
```
### `/profile/:id` — Swap Model Profile
Switch which Ollama model + data context is active.
```
POST /profile/agent-parquet   # HNSW backend, qwen2.5
POST /profile/agent-lance     # Lance IVF_PQ backend, mistral
```
### `/vram` — GPU Status
```
GET /vram
```
## Rules
1. **Never hallucinate.** Only state facts that appear in tool responses. If the data doesn't support an answer, say so.
2. **SQL first for structured questions.** "How many X in Y?" → use `/sql`. Don't guess counts.
3. **Hybrid for matching.** When finding workers for a contract, use `/search` or `/match` with `sql_filter` so results are verified.
4. **Log success.** After completing a task successfully, call `/log` so future agents can learn from it.
5. **Check playbooks.** Before a complex task, call `/playbooks` to see if a similar task has been done before.
6. **Verify before communicating.** Before drafting a message to a worker, confirm their details via `/worker/:id`.
## Workflow for Contract Filling
1. `GET /playbooks?keyword={role}` — check if this type was filled before
2. `POST /match` with role, state, min_reliability, required_certs
3. For each match: `GET /worker/:id` to confirm details
4. Draft communication using confirmed worker details
5. `POST /log` with outcome
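A minimal end-to-end sketch of the workflow above in TypeScript (Node 18+ `fetch`). Endpoint paths and request fields come from the tool descriptions above; response field names (`matches`, `id`) are assumptions, not a documented schema:
```typescript
// Sketch of the contract-filling workflow against the local gateway.
// Response field names ("matches", "id") are assumptions.
const BASE = "http://localhost:3700";

async function fillContract(role: string, state: string): Promise<void> {
  // 1. Check playbooks for prior successes with this role.
  const playbooks = await (await fetch(`${BASE}/playbooks?keyword=${encodeURIComponent(role)}`)).json();
  console.log("prior playbooks:", playbooks);

  // 2. Ask /match for qualified, SQL-verified candidates.
  const matchRes = await fetch(`${BASE}/match`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      role,
      state,
      min_reliability: 0.8,
      required_certs: ["OSHA-10"],
      headcount: 5,
    }),
  });
  const matches = (await matchRes.json()) as { matches?: Array<{ id: number }> };

  // 3. Confirm each candidate's details via /worker/:id before drafting anything.
  for (const candidate of matches.matches ?? []) {
    const worker = await (await fetch(`${BASE}/worker/${candidate.id}`)).json();
    console.log("confirmed worker:", worker);
  }

  // 4. Draft communications using the confirmed details (omitted here).

  // 5. Log the outcome so future agents can reuse the approach.
  await fetch(`${BASE}/log`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      operation: `Filled ${role} positions in ${state}`,
      approach: "match on role+state+min_reliability+required_certs, confirmed via /worker/:id",
      result: "record the actual outcome here",
    }),
  });
}

fillContract("Machine Operator", "IN").catch(console.error);
```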
## Available Profiles
| Profile | Backend | Model | Best for |
|---|---|---|---|
| `agent-parquet` | HNSW (RAM) | qwen2.5 | Fast precise search, <100K vectors |
| `agent-lance` | IVF_PQ (disk) | mistral | Large scale, append-heavy, random access |
Swap when you need different capabilities. Check `/vram` before swapping.
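A minimal sketch of that check-then-swap step, assuming Node 18+ with top-level `await`; the `/vram` response schema isn't documented here, so it is only printed for inspection:
```typescript
// Check GPU memory, then swap profiles.
const BASE = "http://localhost:3700";

const vram = await (await fetch(`${BASE}/vram`)).json();
console.log("GPU status:", vram);

// Swap to the Lance/mistral profile for large-scale, append-heavy work.
await fetch(`${BASE}/profile/agent-lance`, { method: "POST" });
```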