Full audit pass on devop.live/lakehouse/spec. Five chapters were
stale, one had an outright incorrect line. Scope was bigger than
Ch 6 alone — J asked "you want to update all" and the honest answer
was yes.
Ch 1 (Repository layout):
- mcp-server row gains /memory/query, /models/matrix, /system/summary,
observer.ts with :3800 listener
- tests/multi-agent/ row lists all new files: kb.ts, normalize.ts,
memory_query.ts, gen_scenarios.ts, gen_staffer_demo.ts, and the
colocated unit tests (kb.test.ts, normalize.test.ts)
- NEW config/ row documents models.json as the 5-tier matrix
- data/ row enumerates the four learning-loop directories:
_kb/, _playbook_lessons/, _observer/, _chunk_cache/
Ch 3 (Measurement & indexing):
- NEW "Model matrix (Phase 20)" subsection — 5-tier table (T1 hot /
T2 review / T3 overview / T4 strategic / T5 gatekeeper), per-tier
primary model, frequency, the think:false mechanical finding
called out with the 650-token reasoning-budget example
- NEW "Continuation primitive (Phase 21)" paragraph
- NEW "Per-staffer tool_level (Phase 23)" section with full/local/
basic/minimal mapping and the 46pt fill-rate delta from the 36-run
demo
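A guess at the shape of the tool_level gate, since the section above
only names the four levels; the concrete tool sets and the
resolveTools helper below are illustrative, not the shipped code:

// Hypothetical sketch of the per-staffer tool_level gate (Phase 23).
// Level names are from the spec; the tool sets are placeholders.
type ToolLevel = "full" | "local" | "basic" | "minimal";

const TOOLSETS: Record<ToolLevel, string[]> = {
  full:    ["sql_query", "vector_search", "memory_query", "web_fetch"],
  local:   ["sql_query", "vector_search", "memory_query"],
  basic:   ["sql_query", "vector_search"],
  minimal: ["sql_query"],
};

// Resolve the tools a staffer's agent may call; the default is assumed.
function resolveTools(staffer: { tool_level?: ToolLevel }): string[] {
  return TOOLSETS[staffer.tool_level ?? "basic"];
}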
Ch 7 (Scale story):
- FIX: playbook_memory growth bullet was claiming "No TTL or merge
policy" — Phase 25 added retirement via valid_until +
schema_fingerprint + /retire endpoint. Rewritten to name current
state (1936 entries, active vs retired split exposed).
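What the retirement wiring implies at read time, sketched; valid_until
and schema_fingerprint come from the bullet above, the entry shape and
isActive helper are assumptions:

// Assumed shape of a playbook_memory entry after Phase 25.
interface PlaybookEntry {
  id: string;
  schema_fingerprint: string;  // fingerprint the lesson was learned against
  valid_until?: string;        // ISO date; absent = no expiry
  retired?: boolean;           // set via the /retire endpoint
}

// Entries count as "active" only if unretired, unexpired, and
// fingerprint-compatible; everything else lands on the retired side.
function isActive(e: PlaybookEntry, fp: string, now = new Date()): boolean {
  if (e.retired) return false;
  if (e.valid_until && new Date(e.valid_until) < now) return false;
  return e.schema_fingerprint === fp;
}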
Ch 8 (Error surfaces):
- New rows added to the failure-mode table:
* Zero-supply city → cloud rescue (Phase 22 item B) with the
Gary IN → South Bend IN concrete example
* LLM truncation → generateContinuable (Phase 21; sketch below)
* Schema migration → /vectors/playbook_memory/retire (Phase 25)
* Observer unreachable → scenario silent-skip + append journal
survivability
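generateContinuable's internals aren't spelled out anywhere in this
log, so the following is only a guess at the primitive: detect a
truncated completion and re-prompt from the tail. The generate()
signature and the "length" stop reason are assumptions:

// Hypothetical sketch of a continuation primitive like generateContinuable.
async function generateContinuable(
  generate: (prompt: string) => Promise<{ text: string; stopReason: string }>,
  prompt: string,
  maxRounds = 3,
): Promise<string> {
  let out = "";
  for (let round = 0; round < maxRounds; round++) {
    const p = round === 0
      ? prompt
      : `${prompt}\n${out}\nContinue exactly where the text above stops.`;
    const r = await generate(p);
    out += r.text;
    if (r.stopReason !== "length") break; // ended naturally, not truncated
  }
  return out;
}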
Ch 9 (Per-staffer context):
- NEW "Staffer identity + competence-weighted retrieval (Phase 23)"
section with the competence_score formula and findNeighbors
weighted_score (sketch below)
- NEW "Auto-discovered reliable-performer labels" section naming
Rachel D. Lewis (18 endorsements) and Angela U. Ward (19) as
concrete output of the 36-run demo
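The actual competence_score formula lives in the spec chapter, not
here, so this sketch only shows the shape of the blend: vector
similarity weighted against a per-staffer competence prior. The
0.7/0.3 split and field names are placeholders:

// Illustrative re-rank over findNeighbors output; not the spec's exact math.
interface Neighbor { id: string; similarity: number; staffer_id: string }

function weightedScore(
  n: Neighbor,
  competence: Map<string, number>,  // staffer_id -> competence_score in [0,1]
  alpha = 0.7,                      // placeholder blend weight
): number {
  const c = competence.get(n.staffer_id) ?? 0.5; // neutral prior for unknowns
  return alpha * n.similarity + (1 - alpha) * c;
}

const rerank = (ns: Neighbor[], comp: Map<string, number>) =>
  [...ns].sort((a, b) => weightedScore(b, comp) - weightedScore(a, comp));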
Ch 10 (A day in the life):
- Added 17:15 timeline entry — Kim using /memory/query with natural
language, regex normalizer extracting role/city/count in 0ms (sketch
after this list)
- 17:00 entry updated to mention KB indexing + pathway recommendation
+ observer stream
- 22:00 entry updated to mention detectErrorCorrections nightly scan
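The 0ms in the 17:15 entry is the point: the normalizer is plain
regex, no model call. A toy version; the real patterns in normalize.ts
are richer:

// Toy normalizer behind /memory/query; patterns are illustrative only.
function normalizeQuery(q: string): { role?: string; city?: string; count?: number } {
  const count = q.match(/\b(\d+)\b/);
  const city  = q.match(/\bin\s+([A-Z][a-zA-Z]+(?:\s[A-Z][a-zA-Z]+)?)/);
  const role  = q.match(/\b(forklift operator|production worker|electrician|maintenance tech)s?\b/i);
  return {
    role:  role?.[1].toLowerCase(),
    city:  city?.[1],
    count: count ? Number(count[1]) : undefined,
  };
}

// normalizeQuery("need 12 forklift operators in South Bend")
// -> { role: "forklift operator", city: "South Bend", count: 12 }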
Ch 11 (Known limits & non-goals):
- FIX: "playbook_memory compaction" bullet rewritten since retirement
is now wired; reframed as the honest Mem0 UPDATE/NOOP gap
- Added Letta hot cache deferred item with honest "cheap at 1.9K,
will bite at 100K" framing
- Added Chunking cache (Phase 21 Rust port) deferred item
- Added Observer → autotune feedback wire deferred item (Phase 26+)
Footer bumped v1 2026-04-20 → v2 2026-04-21 with Phase list.
Verified all updates live on devop.live/lakehouse/spec.
Phase 8.5 was fully built on the Rust side (WorkspaceManager with
create/handoff/search/shortlist/activity/get/list, persisted to
object storage, zero-copy handoff between agents). Nothing surfaced
it in the recruiter UI. This page closes that gap.
/workspaces — split-pane UI:
Left: scrollable list of all workspaces, sorted by updated_at.
Each card shows name, tier pill (daily/weekly/monthly/pinned),
current owner, count of shortlisted candidates + activity events.
Right: selected workspace detail with five sections:
1. Header — name, tier, owner, created/updated dates, description,
previous-owners audit trail (each handoff is preserved)
2. Actions row — Hand off, Shortlist candidate, Save search, Log activity
3. Shortlist — candidates flagged with dataset + record_id + notes
4. Saved searches — named SQL queries the staffer wants to rerun
5. Activity — chronological (newest first) log of what happened
Five modals for the add/edit actions (create, handoff, shortlist,
save-search, log-activity). All forms POST through the existing
/api/* passthrough to the gateway's /workspaces/* routes.
End-to-end verified live:
1. Sarah creates 'Demo: Toledo Week 17' workspace
2. Shortlists Helen Sanchez (W500K-4661) with notes about prior endorsements
3. Logs activity: 'called — Helen confirmed Tuesday 7am shift'
4. Hands off to Kim with reason 'end of shift'
5. Kim opens the workspace: owner=kim, previous_owners=[{sarah→kim}],
sees all 3 prior events + the shortlisted Helen
— no data copy, pointer swap only (Phase 8.5 design)
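What the pointer swap implies on the record, sketched in TypeScript to
match the UI code (the real WorkspaceManager is Rust, Phase 8.5);
field names beyond owner/previous_owners are assumptions:

// Illustrative workspace record; shortlist/activity live on the same object.
interface Workspace {
  name: string;
  owner: string;
  previous_owners: { from: string; to: string; reason?: string; at: string }[];
  updated_at: string;
}

// Handoff swaps ownership in place: nothing is copied, the new owner just
// points at the same persisted workspace.
function handOff(ws: Workspace, to: string, reason?: string): Workspace {
  ws.previous_owners.push({ from: ws.owner, to, reason, at: new Date().toISOString() });
  ws.owner = to;
  ws.updated_at = new Date().toISOString();
  return ws;
}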
Security: all dynamic content built via el(tag,cls,text) DOM helper.
Zero innerHTML on API-derived strings. Modal close-on-backdrop-click
is guarded to the backdrop element.
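The helper's exact source isn't in this log, but a safe version is
roughly:

// Plausible shape of el(tag, cls, text). textContent (never innerHTML) is
// what keeps API-derived strings inert instead of parsed as markup.
function el(tag: string, cls?: string, text?: string): HTMLElement {
  const node = document.createElement(tag);
  if (cls) node.className = cls;
  if (text !== undefined) node.textContent = text;
  return node;
}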
Nav updated across all 7 pages. Workspaces is the 7th tab.
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts · Workspaces.
New alerts daemon + /alerts page. Converts the app from 'dashboard
you visit' to 'system that finds you.'
Critical for the phone-first staffing shop that won't open a URL —
the system reaches out when something matters.
Daemon:
- Starts once per Bun process (guarded via globalThis sentinel;
sketch below)
- Default interval 15 min (configurable, min 1, max 1440)
- On each cycle, buildDigest() compares current state against prior
snapshot persisted in mcp-server/data/notification_state.json
- Events detected:
- risk_escalation: role moved to tight or critical (was ok/watch)
- deadline_approaching: staffing window falls within warn window
(default 7 days) AND deadline date differs from prior
- memory_growth: playbook_memory entries grew by >= 5 since last run
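The guard and the snapshot diff, sketched; the sentinel-on-globalThis
pattern and the state file path are from the bullets above, the key
name and everything inside runCycle are assumptions:

// Survive Bun module re-evaluation: arm the daemon once per process.
declare global { var __alertsDaemonStarted: boolean | undefined }

export function startAlertsDaemon(intervalMin = 15) {
  if (globalThis.__alertsDaemonStarted) return; // already armed
  globalThis.__alertsDaemonStarted = true;
  const clamped = Math.min(Math.max(intervalMin, 1), 1440);
  setInterval(runCycle, clamped * 60_000);
  runCycle(); // first-run digest on startup
}

async function runCycle() {
  const prev = await Bun.file("mcp-server/data/notification_state.json")
    .json().catch(() => null);
  // buildDigest(prev) -> risk_escalation / deadline_approaching / memory_growth
}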
Channels (all opt-out individually via config):
- console: always on, logged to journalctl -u lakehouse-agent
- file: always on, appends JSONL to mcp-server/data/notifications.jsonl
- webhook: optional, POSTs {text, digest} to configured URL
(Slack incoming-webhook / Discord webhook / any custom endpoint)
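The webhook channel is presumably just a POST; the payload shape
{text, digest} is from the bullet above, error handling is
illustrative:

// Minimal webhook dispatch. Slack incoming webhooks read `text`; custom
// endpoints get the full payload.
async function dispatchWebhook(url: string, text: string, digest: unknown) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, digest }),
  });
  if (!res.ok) console.error(`webhook ${res.status}: ${await res.text()}`);
}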
Digest format (human-readable, fits in a Slack message):
LAKEHOUSE DIGEST — 2026-04-20 23:24
3 staffing deadlines within window:
• Production Worker — 2d to 2026-04-23 · demand 724
• Maintenance Tech — 4d to 2026-04-25 · demand 32
• Electrician — 5d to 2026-04-26 · demand 34
+779 new playbooks (total 779, 2204 endorsed names)
snapshot: 0 critical · 0 tight · $275,599,326 pipeline
/alerts page:
- Current status table (daemon state, interval, webhook, last run)
- Config form: enable toggle, interval, deadline warn window, webhook
URL + label (saved to data/notification_config.json)
- 'Fire a test digest now' button — force a cycle without waiting
- Recent digests panel shows the last 10 dispatches with full text
End-to-end verified live:
- Daemon armed successfully on startup
- First-run digest dispatched to console + file in <1s
- Events detected correctly: 3 deadlines within 7 days from real
Chicago permit data; 779 playbook entries surfaced as memory growth
- Digest text format is Slack-pastable
- Dispatch records appear in /alerts recent list
TDZ caveat: the startAlertsDaemon() invocation moved to the end of the
module so every const/let in the alerts block evaluates before the
daemon reads them. It previously failed with 'Cannot access X before
initialization' when the call lived near the top of the file.
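The failure in miniature (ALERT_CONFIG is a stand-in name):

// Function declarations hoist, but a module-level const's initializer only
// runs when evaluation reaches it. Calling startAlertsDaemon() above the
// const threw "Cannot access 'ALERT_CONFIG' before initialization".
const ALERT_CONFIG = { intervalMin: 15 };

function startAlertsDaemon() {
  console.log(ALERT_CONFIG.intervalMin);
}

startAlertsDaemon(); // safe here: every const/let above has evaluated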
Nav added to all 6 pages:
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts.
New /onboard page. Client-facing wizard for getting real data into
the system without engineering help.
Flow:
1. Drop a CSV (or click 'Use the sample as my data' — ships a 25-row
realistic staffing roster under /samples/staffing_roster_sample.csv)
2. Browser parses client-side. Columns auto-typed (text/int/decimal/
date). PII flagged by name hint AND content regex (emails, phones;
sketch after this list). First rows previewed. Read-only — nothing
written yet.
3. Name the dataset (lowercase+underscores). Commit.
4. Post-commit: dataset is live. Shows 4 next steps the operator can
take (SQL query, vector index, dashboard search, playbook training).
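Step 2's PII flagging in toy form: a column is flagged if its name
looks sensitive or a majority of sampled values do. The specific
heuristics below are assumptions, not the shipped ones:

const NAME_HINT = /(email|phone|ssn|dob|address)/i;
const EMAIL = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
const PHONE = /^\+?[\d\s().-]{7,}$/;

// Flag on column NAME hint or on VALUE content.
function flagPII(column: string, sample: string[]): boolean {
  if (NAME_HINT.test(column)) return true;
  const hits = sample.filter((v) => EMAIL.test(v) || PHONE.test(v)).length;
  return hits / Math.max(sample.length, 1) > 0.5;
}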
Backend:
- /onboard serves onboard.html
- /samples/*.csv serves CSV files from mcp-server/samples/ with
filename validation (only [a-zA-Z0-9._-]+\.csv, prevents path
traversal; sketch below)
- /onboard/ingest forwards multipart/form-data to gateway /ingest/file,
preserving the boundary. The generic /api/* passthrough breaks
multipart because it reads the body as text and forwards it as JSON;
this route uses arrayBuffer + the original Content-Type.
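The two non-obvious routes above, sketched; the gateway URL and
handler wiring are placeholders:

// Filename gate for /samples/*.csv: allowlist characters, no separators,
// so ../ and absolute paths can never reach the filesystem.
const SAFE_CSV = /^[a-zA-Z0-9._-]+\.csv$/;

async function serveSample(name: string): Promise<Response> {
  if (!SAFE_CSV.test(name)) return new Response("bad filename", { status: 400 });
  return new Response(Bun.file(`mcp-server/samples/${name}`));
}

// Multipart forward: the generic /api/* passthrough re-serializes the body
// as JSON and destroys the multipart boundary, so this route passes the raw
// bytes through with the original Content-Type intact.
async function forwardIngest(req: Request, gateway = "http://localhost:8080") {
  return fetch(`${gateway}/ingest/file`, {
    method: "POST",
    headers: { "Content-Type": req.headers.get("Content-Type") ?? "" },
    body: await req.arrayBuffer(),
  });
}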
Verified end-to-end: upload sample roster (25 rows, 12 columns) →
parse in browser → show columns + PII flags + preview → commit →
gateway writes Parquet, registers in catalog → immediately queryable:
SELECT * FROM onboard_demo2 LIMIT 3
→ Sarah Johnson, Forklift Operator, Chicago, IL, 0.92
Round-trip <1 second.
Nav updated on all pages to link Onboard. Shipped with a sample CSV
so the full flow is demonstrable without real client data.
When a real client shows up, same path — they upload their CSV.
No engineering ticket, no code change, no schema pre-definition.
Security: sample filename regex prevents path traversal. CSV parse
is client-side pure JS (no DOM injection). Commit uses existing
/ingest/file validation (schema fingerprint, PII server-side,
content-hash dedup).
J's ask: explain the full architecture so someone reading a README
can dispute it or recreate it. The repo isn't public yet; this page
IS the spec until it is.
Ch1 Repository layout — 13 crates + tests/multi-agent + docs + data,
with owned responsibility and file path per crate.
Ch2 Data ingest pipeline (8 steps) — sources (file/inbox/DB/cron),
parse+normalize with ADR-010 conservative typing, PII auto-tag,
dedup, Parquet write, catalog register with fingerprint gate,
mark embeddings stale, queryable immediately.
Ch3 Measurement & indexing — row count / fingerprint / owner /
sensitivity / freshness / lineage per dataset. HNSW vs Lance
tradeoff table with measured numbers (ADR-019). Autotune loop.
Per-profile scoping (Phase 17).
Ch4 Contract inference from external signal — Chicago permit feed
→ role mapping → worker count heuristic → timeline → hybrid
search with boost → pattern discovery → rendered card. All
pre-computed before staffer opens UI.
Ch5 What a CRM can't do — 11-row comparison table of capabilities.
Ch6 How it gets better over time — three paths:
- Phase 19 playbook boost (full math)
- Pattern discovery meta-index
- Autotune agent
Ch7 Scale story: 20 staffers, 300 contracts, midday +20/+1M surge
- Async gateway + per-staffer profile isolation + client blacklists
- Surge-handling flow (ingest, stale-mark, incremental refresh,
degradation, hot-swap, autotune re-enter)
- Known pain points: Ollama inference serial, RAM ceiling ~5M on
HNSW (mitigated by Lance), VRAM 1-2 models sequential,
playbook_memory unbounded.
Ch8 Error surfaces & recovery — 10-row table covering ingest schema
conflicts, bucket failures, ghost names, dual-agent drift,
empty searches, Ollama down, gateway restart, schema fingerprint
divergence. Every failure has a named surface and recovery path.
Ch9 Per-staffer context — active profile, workspace, client blacklist,
audit trail, daily summary. How 20 staffers don't see the same UI.
Ch10 Day in the life — 07:00 housekeeping → 07:30 refresh → 08:00
staffer opens → 08:15 drill down → 08:30 Call click → 09:00
second staffer shares memory → 12:30 surge → 14:00 no-show →
15:00 new embeddings live → 17:00 retrospective → 22:00
overnight trials.
Ch11 Known limits & non-goals — deferred (rate/margin, push, confidence
calibration, neural re-ranker, playbook_memory compaction, call_log cross-ref)
and explicitly out-of-scope (cloud, ACID, streaming, CRM replace,
proprietary formats, hard multi-tenant).
Also: nav updated on /dashboard, /console, /proof to link /spec.
Every architectural claim in the spec cites either a code path, an
ADR number, or a phase reference so someone skeptical can target
the specific artifact.