Three new systemd services:

- lakehouse-agent (:3700) — REST gateway wrapping all lakehouse tools. Clean JSON in/out, no protocol complexity. 9 endpoints: /search, /sql, /match, /worker/:id, /ask, /log, /playbooks, /profile/:id, /vram
- lakehouse-observer — watches operations, logs to the lakehouse, asks a local model to diagnose failure patterns, consolidates successful patterns into playbooks every 5 cycles
- Stdio MCP transport preserved for Claude Code integration

AGENT_INSTRUCTIONS.md: complete operating manual for sub-agents. Rules: never hallucinate, SQL first for structured questions, hybrid for matching, log every success, check playbooks before complex tasks.

Observer loop: observed() wrapper timestamps + persists every gateway call → error analyzer reads failures + asks LLM for diagnosis → playbook consolidator groups successes by endpoint pattern.

All three designed for zero human intervention — agents operate, observer watches, playbooks accumulate, iteration happens internally.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Lakehouse Agent Instructions

You are connected to a staffing intelligence system. Your job is to answer staffing questions, match workers to contracts, and track what works.

## Gateway

All tools are at `http://localhost:3700`. POST JSON, get JSON back.
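
Because every tool is plain JSON over HTTP, a few lines of standard-library Python are enough to talk to the gateway. This is a minimal sketch, not code shipped with the system; the `transport` parameter is an addition here so the helper can be exercised (or dry-run) without a live gateway.

```python
import json
import urllib.request

GATEWAY = "http://localhost:3700"

def _http_post(url, body):
    # Real transport: POST the encoded JSON body to the gateway.
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def call(endpoint, payload, transport=_http_post):
    """POST a JSON payload to a gateway endpoint and decode the JSON reply."""
    body = json.dumps(payload).encode("utf-8")
    return json.loads(transport(GATEWAY + endpoint, body))
```

Usage is then one line per tool, e.g. `call("/sql", {"sql": "SELECT 1"})`.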

## Tools

### `/search` — Hybrid SQL+Vector Search (use this most)

Find workers matching structured criteria + semantic meaning.

```json
POST /search
{
  "question": "reliable forklift operators with hazmat certification",
  "sql_filter": "role = 'Forklift Operator' AND state = 'IL' AND reliability > 0.8",
  "top_k": 5
}
```

Every result is SQL-verified against the database. Trust the `sources` array — those workers exist with those exact skills and scores.

### `/sql` — Direct SQL Query

For exact counts, aggregations, and structured lookups.

```json
POST /sql
{ "sql": "SELECT role, COUNT(*) cnt FROM ethereal_workers GROUP BY role ORDER BY cnt DESC LIMIT 10" }
```

Tables (approximate row counts): `ethereal_workers` (10K), `workers_100k` (100K), `candidates` (100K), `timesheets` (1M), `placements` (50K), `call_log` (800K), `email_log` (500K), `job_orders` (15K), `clients` (2K).

### `/match` — Match Workers to a Contract

```json
POST /match
{
  "role": "Machine Operator",
  "state": "IN",
  "min_reliability": 0.8,
  "required_certs": ["OSHA-10"],
  "headcount": 5
}
```

Returns qualified, SQL-verified workers ranked by semantic fit.

### `/worker/:id` — Get One Worker

```
GET /worker/4925
```

Returns all fields: name, role, city, skills, certifications, scores, communications.

### `/ask` — RAG Question

For open-ended questions. Embeds your question, searches the vector index, and generates an answer.

```json
POST /ask
{ "question": "What kinds of workers do we have in Ohio?" }
```

### `/log` — Record What Worked

After a successful operation, log it. Future runs can query past successes.

```json
POST /log
{
  "operation": "Filled 3 forklift positions in Chicago",
  "approach": "hybrid search: sql_filter role+state+reliability, vector rank by skills",
  "result": "3/3 filled, all verified, client satisfied"
}
```

### `/playbooks` — Learn From Past Success

Before starting a task, check what worked before.

```
GET /playbooks?keyword=forklift
```

### `/profile/:id` — Swap Model Profile

Switch which Ollama model + data context is active.

```
POST /profile/agent-parquet   (HNSW backend, qwen2.5)
POST /profile/agent-lance     (Lance IVF_PQ backend, mistral)
```

### `/vram` — GPU Status

```
GET /vram
```

## Rules

1. **Never hallucinate.** Only state facts that appear in tool responses. If the data doesn't support an answer, say so.
2. **SQL first for structured questions.** "How many X in Y?" → use `/sql`. Don't guess counts.
3. **Hybrid for matching.** When finding workers for a contract, use `/search` or `/match` with `sql_filter` so results are verified.
4. **Log success.** After completing a task successfully, call `/log` so future agents can learn from it.
5. **Check playbooks.** Before a complex task, call `/playbooks` to see if a similar task has been done before.
6. **Verify before communicating.** Before drafting a message to a worker, confirm their details via `/worker/:id`.

## Workflow for Contract Filling

1. `GET /playbooks?keyword={role}` — check whether this type of contract has been filled before
2. `POST /match` with role, state, min_reliability, required_certs
3. For each match: `GET /worker/:id` to confirm details
4. Draft communication using confirmed worker details
5. `POST /log` with outcome
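
The five steps above can be sketched as one function. This is illustrative only, not code from the system: `get` and `post` stand in for HTTP helpers against the gateway, and the `id` field assumed on each `/match` result is a guess at the reply shape, not documented behaviour.

```python
def fill_contract(order, get, post):
    """Sketch of the contract-filling workflow; `get`/`post` are injected
    HTTP helpers so the flow can be exercised without a live gateway."""
    # 1. Check playbooks for this role.
    playbooks = get("/playbooks?keyword=" + order["role"])
    # 2. Ask the gateway for SQL-verified matches.
    matches = post("/match", {
        "role": order["role"],
        "state": order["state"],
        "min_reliability": order["min_reliability"],
        "required_certs": order["required_certs"],
        "headcount": order["headcount"],
    })
    # 3. Confirm each candidate's details before any communication
    #    (the `id` field on a match is an assumption about the reply shape).
    confirmed = [get("/worker/" + str(m["id"])) for m in matches]
    # 4. Drafting messages from the confirmed details is left to the agent.
    # 5. Log the outcome so future agents can learn from it.
    post("/log", {
        "operation": "Fill %d %s positions in %s"
        % (order["headcount"], order["role"], order["state"]),
        "approach": "playbooks -> /match -> per-worker verification",
        "result": "%d candidates confirmed" % len(confirmed),
    })
    return playbooks, confirmed
```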

## Available Profiles

| Profile | Backend | Model | Best for |
|---|---|---|---|
| `agent-parquet` | HNSW (RAM) | qwen2.5 | Fast precise search, <100K vectors |
| `agent-lance` | IVF_PQ (disk) | mistral | Large scale, append-heavy, random access |

Swap when you need different capabilities. Check `/vram` before swapping.
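
A hedged sketch of the "check `/vram` before swapping" guard. The `free_mb` field and the 4000 MB floor are assumptions, not a documented reply shape; adjust to whatever `/vram` actually returns. As above, `get` and `post` are injected HTTP helpers.

```python
def swap_profile(target, get, post, min_free_mb=4000):
    """Swap to `target` only if enough GPU memory is free.

    Illustrative only: the `free_mb` field on the /vram reply and the
    4000 MB default threshold are assumptions, not documented behaviour.
    """
    vram = get("/vram")
    if vram.get("free_mb", 0) < min_free_mb:
        return False  # not enough headroom; keep the current profile
    post("/profile/" + target, {})
    return True
```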