Adds /contractor page route plus /intelligence/contractor_profile
endpoint that fans out across OSHA, ticker, history, parent_link,
federal contracts, debarment, NLRB, ILSOS, news, diversity certs,
BLS macro — single per-contractor portfolio view across every
wired source.
search.html: mobile responsive layout, fixed bottom dock with
horizontal scroll-snap, legacy bridge row stacking, viewport
overflow guards.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
lakehouse/auditor: 2 blocking issues — todo!() macro call in tests/real-world/scrum_master_pipeline.ts
J's direction: the dashboard was explanatory but not *actionable* as
a staffing-matrix console. Refactor so the architecture claims from
docs/PRD.md surface as operational signals on every contract card.
Backend (mcp-server/index.ts):
+ GET|POST /intelligence/arch_signals — probes live substrate health
so the dashboard shows instant-search latency, index shape,
playbook-memory entries, and pathway-memory (ADR-021) trace count.
Fires one fresh /vectors/hybrid probe against workers_500k_v1 so
the "instant search" number on screen is live, not cached.
* /intelligence/permit_contracts now times every hybrid call per
contract and returns search_latency_ms, so the card can display
the per-query latency pill (⚡ 342ms).
+ Per-contract computed fields returned from the backend:
    search_latency_ms — real /vectors/hybrid duration
    fill_probability  — base_pct (from the pool_size × count ratio) plus a
                        cumulative fill% curve over [d0, d3, d7, d14, d21, d30]
    economics         — avg_pay_rate, gross_revenue, gross_margin, margin_pct,
                        payout_window_days [30, 45], over_bill_count,
                        over_bill_pool_margin_at_risk
    shifts_needed     — 1st/2nd/3rd/4th inferred from permit work_type +
                        description regex
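A sketch of the enriched shape these fields imply (names from the list above; the exact types are assumptions, not the actual code):

```ts
// Sketch of the per-contract enrichment returned by /intelligence/permit_contracts.
interface ContractEnrichment {
  search_latency_ms: number;              // measured /vectors/hybrid duration
  fill_probability: {
    base_pct: number;                     // from the pool_size × count ratio
    curve: number[];                      // cumulative fill% at d0/d3/d7/d14/d21/d30
  };
  economics: {
    avg_pay_rate: number;
    gross_revenue: number;
    gross_margin: number;
    margin_pct: number;
    payout_window_days: [number, number]; // [30, 45]
    over_bill_count: number;
    over_bill_pool_margin_at_risk: number;
  };
  shifts_needed: Array<1 | 2 | 3 | 4>;    // inferred from work_type + description
}
```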
* Pre-existing dangling-brace bug in api() fixed (the `activeTrace`
logging block had been misplaced at module scope, referencing
variables that only existed inside the function). Restart was
failing with "Unexpected }" at line 76. Moved tracing inside the
try block where parsed/path/body/ms are in scope.
Frontend (mcp-server/search.html):
+ Top "Substrate Signals" section — 4 live tiles (instant search,
index, playbook memory, pathway matrix). Color-codes latency
(green <100ms, amber <500ms, red otherwise).
+ "24/7 Shift Coverage" section — SVG 24-hour clock with 4 colored
shift arcs (1st/2nd/3rd/4th), current-time needle, center label
showing the live shift, per-shift contract count tiles beside.
4th shift assumes weekend/split; handles 3rd-shift wrap across
midnight by splitting into two arcs (sketched after this list).
+ Per-card architecture pills: instant-search latency, SQL-filter
pool-size with k=200 boost note, shift requirements.
+ Per-card fill-probability horizontal stacked bar with day
markers (d0/d3/d7/d14/d21/d30) and per-bucket segment shading
(green → amber → orange → red as time decays).
+ Per-card economics 4-tile grid: Est. Revenue, Est. Margin (with
% colored by health), Payout Window (30–45d standard), Over-Bill
Pool count + margin at risk.
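The midnight wrap mentioned above reduces to a tiny split (a sketch; the function name is illustrative):

```ts
// A shift given as [start, end) in 24h clock hours; when end precedes
// start (3rd shift, e.g. 22→6) it is split into two arcs so neither
// SVG arc crosses midnight.
function shiftArcs(start: number, end: number): Array<[number, number]> {
  return start < end ? [[start, end]] : [[start, 24], [0, end]];
}
```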
Architecture smoke test (tests/architecture_smoke.ts, earlier commit)
still green: 11/11 pass including the new /intelligence/arch_signals
+ permit_contracts enrichments.
J specifically wanted: "shoot for the stars · hyperfocus · our
architecture is better because it self-regulates, uses hot-swap,
pulls from real data, and shows instant searches from clever
indexing." Every one of those is now a specific visible signal on
the page, not prose in the README.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
J asked directly: "did we implement our memory findings so that our
knowledge base and our configuration playbook [work] seamlessly with
whatever input they're given?" Honest answer tonight was "one of five
findings shipped, normalizer is the blocker." This closes that gap.
NORMALIZER (tests/multi-agent/normalize.ts):
Accepts structured JSON, natural language, or mixed. Returns canonical
NormalizedInput { role, city, state, count, client, deadline, intent,
confidence, extraction_method, missing_fields } for any downstream
consumer.
Three-tier path:
1. Structured fast-path — already-shaped input skips LLM
2. Regex path — "need 3 welders in Nashville, TN" parses without LLM.
City/state parser tightened: cities limited to 1-3 capitalized words,
an "in {city}" anchor is preferred, and full state names match
case-exactly, so "Forklift Operators in Chicago" can no longer be
captured as the city name
3. LLM fallback — qwen3 local with think:false + 400 max_tokens for
inputs the regex can't handle
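A sketch of the canonical shape and the first two tiers (the LLM fallback is omitted; the intent values, confidence scores, and the regex itself are simplifications of what this commit describes):

```ts
// Canonical output every downstream consumer receives (field names from
// this commit; intent values and thresholds are assumptions).
interface NormalizedInput {
  role?: string; city?: string; state?: string; count?: number;
  client?: string; deadline?: string;
  intent: string;
  confidence: number;
  extraction_method: "structured" | "regex" | "llm";
  missing_fields: string[];
}

const REQUIRED = ["role", "city", "state", "count"] as const;
const missing = (n: Partial<NormalizedInput>) =>
  REQUIRED.filter((f) => n[f] == null);

// Tier 1 — structured fast-path: already-shaped input skips the LLM.
function tryStructured(input: unknown): NormalizedInput | null {
  if (typeof input !== "object" || input === null) return null;
  const { role, city, state, count } = input as Partial<NormalizedInput>;
  if (typeof role !== "string") return null;
  const n = { role, city, state, count };
  return { ...n, intent: "fill", confidence: 1,
           extraction_method: "structured", missing_fields: missing(n) };
}

// Tier 2 — regex path: "need 3 welders in Nashville, TN" parses with no
// LLM call. City is anchored on "in {City}" (1-3 capitalized words) so a
// capitalized role can't be swallowed as the city.
function tryRegex(text: string): NormalizedInput | null {
  const m = text.match(
    /(\d+)\s+([a-z ]+?)s?\s+in\s+((?:[A-Z][a-z]+ ?){1,3}),?\s*([A-Z]{2})?/);
  if (!m) return null;
  const n = { count: Number(m[1]), role: m[2].trim(),
              city: m[3].trim(), state: m[4] };
  return { ...n, intent: "fill", confidence: 0.8,
           extraction_method: "regex", missing_fields: missing(n) };
}
```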
Unit tests (tests/multi-agent/normalize.test.ts): 9/9 pass. Covers
structured fast-path, misplacement→rescue intent, state-name→abbrev
conversion, regex extraction from natural language, plural role +
full state name edge case, rescue intent keyword precedence, partial
input reporting missing fields, empty object fallthrough, async/sync
parity on clean inputs.
UNIFIED MEMORY QUERY (tests/multi-agent/memory_query.ts):
One function, six parallel fan-outs, one bundle returned:
- playbook_workers — hybrid_search via gateway with use_playbook_memory
- pathway_recommendation — KB recommender for this sig
- neighbor_signatures — K-NN sigs weighted by staffer competence
- prior_lessons — T3 overseer lessons filtered by city/state
- top_staffers — competence-sorted leaderboard
- discovered_patterns — top workers endorsed across past playbooks
for this (role, city, state)
- latency_ms — per-source + total
Every branch is best-effort: one source down doesn't break the bundle.
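A sketch of the best-effort fan-out (queryBundle and the fetcher map are illustrative names):

```ts
// Best-effort parallel fan-out: a failed source nulls its field instead
// of failing the bundle; per-source and total latency are captured.
async function queryBundle(sources: Record<string, () => Promise<unknown>>) {
  const t0 = Date.now();
  const settled = await Promise.all(
    Object.entries(sources).map(async ([name, fn]) => {
      const s = Date.now();
      try { return [name, await fn(), Date.now() - s] as const; }
      catch { return [name, null, Date.now() - s] as const; }
    }),
  );
  const out: Record<string, unknown> = {};
  const latency_ms: Record<string, number> = {};
  for (const [name, value, ms] of settled) {
    out[name] = value;
    latency_ms[name] = ms;
  }
  return { ...out, latency_ms: { ...latency_ms, total: Date.now() - t0 } };
}
```

One source down (say, the KB recommender) nulls pathway_recommendation and still records its latency; every other branch lands.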
HTTP ENDPOINT (mcp-server/index.ts):
POST /memory/query with body {input: <anything>} → MemoryQueryResult
Returns the same shape the TS function does. Typed with types.ts for
future UI consumption.
VERIFIED:
curl POST /memory/query with structured {role,city,state,count}
→ extraction_method=structured, 10 playbook workers, top score 0.878
curl POST /memory/query with "I need 3 welders in Nashville, TN"
→ extraction_method=regex (no LLM call), 319ms total, 8 endorsements
for Lauren Gomez auto-discovered as top Nashville Welder
Honest remaining gaps (documented for next phase):
- Mem0 ADD/UPDATE/DELETE/NOOP — we still only ADD + mark_failed
- Zep validity windows — playbook entries have timestamps but no
retirement semantic
- Letta working-memory / hot cache — every query scans all 1560
playbook entries
- Memory profiles / scoped queries — global pool, no per-staffer
private subsets
2 of 5 findings now shipped (multi-strategy retrieval in Rust, input
normalization + unified query in TS). The remaining 3 are architectural
additions queued as Phase 25 items — validity windows first since it's
the most load-bearing for long-running systems.
config/models.json is the authoritative catalog. Hot path (T1/T2) stays
local; cloud is consulted only for overview (T3), strategic (T4), and
gatekeeper (T5) calls. J named qwen3.5 + newer models (minimax-m2.7,
glm-5, qwen3-next) specifically — all mapped with real reachable IDs
verified against ollama.com/api/tags.
Tier shape:
- t1_hot        mistral + qwen2.5 (local)      — 50-200 calls/scenario
- t2_review     qwen2.5 + qwen3 (local)        — 5-14 calls/event
- t3_overview   gpt-oss:120b (cloud)           — 1-3 calls/scenario
- t4_strategic  qwen3.5:397b + glm-4.7 (cloud) — 1-10 calls/day
- t5_gatekeeper kimi-k2-thinking (cloud)       — 1-5 calls/day, audit-logged
Rate budgets are declared in-config — Ollama Cloud paid tier is generous
but we cap overview/strategic/gatekeeper so no single rogue scenario can
blow the day's quota.
Experimental rotation list wired but disabled by default. When enabled,
T4 randomly routes 10% of calls to a rotating minimax/GLM/qwen-next/
deepseek/nemotron/cogito/mistral-large candidate, logs comparisons, and
auto-promotes after 3 rotations of wins.
Playbook versioning SPEC embedded under `playbook_versioning` key: every
seed gets version + parent_id + retired_at + architecture_snapshot, so
when a schema migration breaks a playbook we can pinpoint which change
retired it. Implementation flagged for next sprint (touches gateway +
catalogd + mcp-server) — not wired here.
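A sketch of the catalog shape, expressed as the object scenario.ts would load (tier names, budgets, rotation settings, and the versioning fields are from this commit; the per-key layout is an assumption):

```ts
// Illustrative shape for config/models.json as loaded by scenario.ts.
const models = {
  tiers: {
    t1_hot:        { models: ["mistral", "qwen2.5"], locality: "local", budget: "50-200/scenario" },
    t2_review:     { models: ["qwen2.5", "qwen3"],   locality: "local", budget: "5-14/event" },
    t3_overview:   { models: ["gpt-oss:120b"],       locality: "cloud", budget: "1-3/scenario" },
    t4_strategic:  { models: ["qwen3.5:397b", "glm-4.7"], locality: "cloud", budget: "1-10/day" },
    t5_gatekeeper: { models: ["kimi-k2-thinking"],   locality: "cloud", budget: "1-5/day", audit_log: true },
  },
  experimental_rotation: { enabled: false, sample_rate: 0.1, promote_after_wins: 3 },
  playbook_versioning: {  // SPEC only — implementation flagged for next sprint
    fields: ["version", "parent_id", "retired_at", "architecture_snapshot"],
  },
};
```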
- scenario.ts now loads config/models.json at init, env vars still override
- mcp-server exposes /models/matrix read-only so UI can render it
Closes one of the Path 1 trust-break gaps. The scenario we kept flagging:
recruiter calls the system's top pick, worker quotes $35/hr, contract
pays $28/hr. First broken call kills the demo. This fixes it.
Heuristic (no schema change, derived at query time):
- Per worker: implied_pay_rate = role_base + (reliability × 4) + archetype_bump
role_base: Electrician $28, Welder $26, Machine Op $24, Maint $26,
Forklift Op $20, Loader $17, Warehouse Assoc $17, Quality Tech $23,
Production Worker $18 ...
archetype bump: specialist +4, leader +3, reliable +1, else 0
- Per contract: implied_bill_rate = role_base × 1.4
(40% markup — industry norm: pay + overhead + insurance + margin)
- Worker is 'over_bill_rate' when implied_pay_rate > contract's bill_rate
on a candidate-by-candidate basis
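A minimal sketch of the heuristic, consistent with the ROLE_BASE_PAY_RATE / BILL_MARKUP constants and the impliedPayRate/impliedBillRate functions named below (role keys expanded from the abbreviations above; the fallback base is an assumption):

```ts
// Base pay table from the heuristic above ("..." roles omitted).
const ROLE_BASE_PAY_RATE: Record<string, number> = {
  Electrician: 28, Welder: 26, "Machine Operator": 24,
  "Maintenance Tech": 26, "Forklift Operator": 20, Loader: 17,
  "Warehouse Associate": 17, "Quality Tech": 23, "Production Worker": 18,
};
const BILL_MARKUP = 1.4; // 40% markup: pay + overhead + insurance + margin
const ARCHETYPE_BUMP: Record<string, number> =
  { specialist: 4, leader: 3, reliable: 1 };
const DEFAULT_BASE = 18; // fallback for unlisted roles — an assumption

// implied_pay_rate = role_base + reliability × 4 + archetype_bump
function impliedPayRate(w: { role: string; reliability: number; archetype?: string }): number {
  return (ROLE_BASE_PAY_RATE[w.role] ?? DEFAULT_BASE)
    + w.reliability * 4
    + (ARCHETYPE_BUMP[w.archetype ?? ""] ?? 0);
}

// implied_bill_rate = role_base × 1.4
const impliedBillRate = (role: string): number =>
  (ROLE_BASE_PAY_RATE[role] ?? DEFAULT_BASE) * BILL_MARKUP;

// Candidate-by-candidate flag: pay above the contract's bill rate.
const overBillRate = (pay: number, billRate: number): boolean => pay > billRate;
```

The verified Masonry numbers below are consistent with an $18 base (18 × 1.4 = $25.20 bill; a specialist at 0.89 reliability implies 18 + 3.56 + 4 = $25.56).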
Backend (mcp-server/index.ts):
- ROLE_BASE_PAY_RATE + BILL_MARKUP constants
- impliedPayRate(worker), impliedBillRate(role) functions
- parseWorkerChunk() extracts role/reliability/archetype from vector text
- enrichWithRates() attaches implied_pay_rate on every /vectors/hybrid
source response. Called from /search and /intelligence/permit_contracts.
- /search accepts optional max_pay_rate number — if set, filters out
workers above that rate and reports pay_rate_filtered_out count.
- /intelligence/permit_contracts returns implied_bill_rate per contract
AND over_bill_rate boolean per candidate.
Frontend (search.html):
- Live Contracts cards show 'bill rate: $X/hr' under the headcount line
- Each candidate shows 'pay $X/hr' in the sub-line; red 'Over bill rate'
chip next to name when their pay exceeds the contract's bill rate
(hover reveals the exact numbers and why it's flagged)
- Main 'Search all workers' results now include 'pay $X/hr' in the
why-text (computeImpliedPayRate mirrored client-side to match Bun)
End-to-end verified live:
- Masonry Work permit, bill_rate $25.20/hr
Kathleen M. Gutierrez pay $25.56/hr → 🔴 OVER
Melissa C. Rivera pay $20.88/hr → 🟢 OK
- /search with max_pay_rate:32 filtered out 1 Toledo Welder above $32
- Main search shows 'pay $28.64/hr' in each result row
When real ATS data replaces synthetic workers_500k, same UI — the
client's real pay_rate column substitutes for the heuristic.
Phase 8.5 was fully built on the Rust side (WorkspaceManager with
create/handoff/search/shortlist/activity/get/list, persisted to
object storage, zero-copy handoff between agents). Nothing surfaced
it in the recruiter UI. This page closes that gap.
/workspaces — split-pane UI:
Left: scrollable list of all workspaces, sorted by updated_at.
Each card shows name, tier pill (daily/weekly/monthly/pinned),
current owner, count of shortlisted candidates + activity events.
Right: selected workspace detail with five sections:
1. Header — name, tier, owner, created/updated dates, description,
previous-owners audit trail (each handoff is preserved)
2. Actions row — Hand off, Shortlist candidate, Save search, Log activity
3. Shortlist — candidates flagged with dataset + record_id + notes
4. Saved searches — named SQL queries the staffer wants to rerun
5. Activity — chronological (newest first) log of what happened
Five modals for the add/edit actions (create, handoff, shortlist,
save-search, log-activity). All forms POST through the existing
/api/* passthrough to the gateway's /workspaces/* routes.
End-to-end verified live:
1. Sarah creates 'Demo: Toledo Week 17' workspace
2. Shortlists Helen Sanchez (W500K-4661) with notes about prior endorsements
3. Logs activity: 'called — Helen confirmed Tuesday 7am shift'
4. Hands off to Kim with reason 'end of shift'
5. Kim opens the workspace: owner=kim, previous_owners=[{sarah→kim}],
sees all 3 prior events + the shortlisted Helen
— no data copy, pointer swap only (Phase 8.5 design)
Security: all dynamic content built via el(tag,cls,text) DOM helper.
Zero innerHTML on API-derived strings. Modal close-on-backdrop-click
is guarded to the backdrop element.
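An el() helper of this kind typically looks like the sketch below (the commit doesn't show the actual body; this is the standard shape):

```ts
// DOM helper: API-derived strings go through textContent, never innerHTML,
// so markup inside a workspace name or note can never execute.
function el(tag: string, cls?: string, text?: string): HTMLElement {
  const node = document.createElement(tag);
  if (cls) node.className = cls;
  if (text != null) node.textContent = text;
  return node;
}
```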
Nav updated across all 7 pages. Workspaces is the 7th tab.
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts · Workspaces.
Converts the app from 'dashboard you visit' to 'system that finds you.'
Critical for the phone-first staffing shop that won't open a URL —
the system reaches out when something matters.
Daemon:
- Starts once per Bun process (guarded via globalThis sentinel)
- Default interval 15 min (configurable, min 1, max 1440)
- On each cycle, buildDigest() compares current state against prior
snapshot persisted in mcp-server/data/notification_state.json
- Events detected:
- risk_escalation: role moved to tight or critical (was ok/watch)
- deadline_approaching: staffing window falls within warn window
(default 7 days) AND deadline date differs from prior
- memory_growth: playbook_memory entries grew by >= 5 since last run
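The once-per-process guard plus the interval clamp might look like this sketch (the sentinel key and the buildDigest stub are illustrative):

```ts
// Cycle body stub — the real buildDigest diffs current state against the
// snapshot in mcp-server/data/notification_state.json (described above).
async function buildDigest(): Promise<void> { /* ... */ }

// Start once per Bun process: module code may re-evaluate on reload, but
// globalThis survives, so a sentinel there is the restart-safe guard.
function startAlertsDaemon(intervalMin = 15): void {
  const g = globalThis as { __alertsDaemonStarted?: boolean };
  if (g.__alertsDaemonStarted) return;
  g.__alertsDaemonStarted = true;
  const minutes = Math.min(1440, Math.max(1, intervalMin)); // clamp 1..1440
  setInterval(buildDigest, minutes * 60_000);
}
```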
Channels (all opt-out individually via config):
- console: always on, logged to journalctl -u lakehouse-agent
- file: always on, appends JSONL to mcp-server/data/notifications.jsonl
- webhook: optional, POSTs {text, digest} to configured URL
(Slack incoming-webhook / Discord webhook / any custom endpoint)
Digest format (human-readable, fits in a Slack message):
LAKEHOUSE DIGEST — 2026-04-20 23:24
3 staffing deadlines within window:
• Production Worker — 2d to 2026-04-23 · demand 724
• Maintenance Tech — 4d to 2026-04-25 · demand 32
• Electrician — 5d to 2026-04-26 · demand 34
+779 new playbooks (total 779, 2204 endorsed names)
snapshot: 0 critical · 0 tight · $275,599,326 pipeline
/alerts page:
- Current status table (daemon state, interval, webhook, last run)
- Config form: enable toggle, interval, deadline warn window, webhook
URL + label (saved to data/notification_config.json)
- 'Fire a test digest now' button — force a cycle without waiting
- Recent digests panel shows the last 10 dispatches with full text
End-to-end verified live:
- Daemon armed successfully on startup
- First-run digest dispatched to console + file in <1s
- Events detected correctly: 3 deadlines within 7 days from real
Chicago permit data; 779 playbook entries surfaced as memory growth
- Digest text format is Slack-pastable
- Dispatch records appear in /alerts recent list
TDZ caveat: the startAlertsDaemon() invocation moved to the end of the
module so every const/let in the alerts block is initialized before the
daemon reads it. Previously failed with 'Cannot access X before
initialization' when the call lived near the top of the file.
Nav added to all 6 pages:
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts.
New /onboard page. Client-facing wizard for getting real data into
the system without engineering help.
Flow:
1. Drop a CSV (or click 'Use the sample as my data' — ships a 25-row
realistic staffing roster under /samples/staffing_roster_sample.csv)
2. Browser parses client-side. Columns auto-typed (text/int/decimal/
date). PII flagged by name hint AND content regex (emails, phones).
First rows previewed. Read-only — nothing written yet.
3. Name the dataset (lowercase+underscores). Commit.
4. Post-commit: dataset is live. Shows 4 next steps the operator can
take (SQL query, vector index, dashboard search, playbook training).
Backend:
- /onboard serves onboard.html
- /samples/*.csv serves CSV files from mcp-server/samples/ with
  filename validation (only [a-zA-Z0-9_.-]+\.csv, preventing path traversal)
- /onboard/ingest forwards multipart/form-data to gateway /ingest/file
preserving the boundary. The generic /api/* passthrough breaks
multipart because it reads as text and forwards as JSON; this route
uses arrayBuffer + original Content-Type.
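The multipart-safe forward reduces to a sketch like this (the gateway address and handler name are assumptions):

```ts
// Forward the multipart body byte-for-byte: read the raw ArrayBuffer and
// re-send with the original Content-Type so the boundary survives. The
// generic /api/* passthrough can't do this — it reads text, re-sends JSON.
async function forwardIngest(req: Request): Promise<Response> {
  const body = await req.arrayBuffer();
  return fetch("http://127.0.0.1:7000/ingest/file", { // address assumed
    method: "POST",
    headers: { "Content-Type": req.headers.get("Content-Type") ?? "" },
    body,
  });
}
```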
Verified end-to-end: upload sample roster (25 rows, 12 columns) →
parse in browser → show columns + PII flags + preview → commit →
gateway writes Parquet, registers in catalog → immediately queryable:
SELECT * FROM onboard_demo2 LIMIT 3
→ Sarah Johnson, Forklift Operator, Chicago, IL, 0.92
Round-trip <1 second.
Nav updated on all pages to link Onboard. Shipped with a sample CSV
so the full flow is demonstrable without real client data.
When a real client shows up, same path — they upload their CSV.
No engineering ticket, no code change, no schema pre-definition.
Security: sample filename regex prevents path traversal. CSV parse
is client-side pure JS (no DOM injection). Commit uses existing
/ingest/file validation (schema fingerprint, PII server-side,
content-hash dedup).
J's ask: explain the full architecture so someone reading a README
can dispute it or recreate it. The repo isn't public yet; this page
IS the spec until it is.
Ch1 Repository layout — 13 crates + tests/multi-agent + docs + data,
with owned responsibility and file path per crate.
Ch2 Data ingest pipeline (8 steps) — sources (file/inbox/DB/cron),
parse+normalize with ADR-010 conservative typing, PII auto-tag,
dedup, Parquet write, catalog register with fingerprint gate,
mark embeddings stale, queryable immediately.
Ch3 Measurement & indexing — row count / fingerprint / owner /
sensitivity / freshness / lineage per dataset. HNSW vs Lance
tradeoff table with measured numbers (ADR-019). Autotune loop.
Per-profile scoping (Phase 17).
Ch4 Contract inference from external signal — Chicago permit feed
→ role mapping → worker count heuristic → timeline → hybrid
search with boost → pattern discovery → rendered card. All
pre-computed before staffer opens UI.
Ch5 What a CRM can't do — 11-row comparison table of capabilities.
Ch6 How it gets better over time — three paths:
- Phase 19 playbook boost (full math)
- Pattern discovery meta-index
- Autotune agent
Ch7 Scale story: 20 staffers, 300 contracts, midday +20/+1M surge
- Async gateway + per-staffer profile isolation + client blacklists
- 7-step surge handling flow (ingest, stale-mark, incremental refresh,
degradation, hot-swap, autotune re-enter)
- Known pain points: Ollama inference serial, RAM ceiling ~5M on
HNSW (mitigated by Lance), VRAM 1-2 models sequential,
playbook_memory unbounded.
Ch8 Error surfaces & recovery — 10-row table covering ingest schema
conflicts, bucket failures, ghost names, dual-agent drift,
empty searches, Ollama down, gateway restart, schema fingerprint
divergence. Every failure has a named surface and recovery path.
Ch9 Per-staffer context — active profile, workspace, client blacklist,
audit trail, daily summary. How 20 staffers don't see the same UI.
Ch10 Day in the life — 07:00 housekeeping → 07:30 refresh → 08:00
staffer opens → 08:15 drill down → 08:30 Call click → 09:00
second staffer shares memory → 12:30 surge → 14:00 no-show →
15:00 new embeddings live → 17:00 retrospective → 22:00
overnight trials.
Ch11 Known limits & non-goals — deferred (rate/margin, push, confidence
calibration, neural re-ranker, pm compaction, call_log cross-ref)
and explicitly out-of-scope (cloud, ACID, streaming, CRM replace,
proprietary formats, hard multi-tenant).
Also: nav updated on /dashboard, /console, /proof to link /spec.
Every architectural claim in the spec cites either a code path, an
ADR number, or a phase reference so someone skeptical can target
the specific artifact.
J's ask: move the system from retrospective ranking to predictive
anticipation. Show it tracks the clock, not just the roster.
New endpoint /intelligence/staffing_forecast:
- Pulls 30-day Chicago permit window (200 permits)
- Maps work_type → role via industry heuristic
- Aggregates predicted worker demand per role
- Joins IL bench supply (workers_500k state='IL' group by role)
- Computes coverage_pct, reliable_coverage_pct
- Classifies risk: critical/tight/watch/ok
- Computes earliest staffing deadline per role:
    permit issue_date + 31d (construction assumed to start 45d after
    issue, minus a 14d staffing window)
- Surfaces recent Chicago playbook ops for the role-specific memory
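The deadline and risk math reduces to a sketch like this (the risk-bucket cutoffs are assumptions; the 31-day offset is from the formula above):

```ts
// Staffing deadline: construction assumed to start 45 days after permit
// issue; the staffing window opens 14 days before that, i.e. issue + 31d.
function staffingDeadline(issueDate: Date): Date {
  return new Date(issueDate.getTime() + 31 * 86_400_000);
}

// Risk classification from coverage_pct (bucket cutoffs assumed).
function riskClass(coveragePct: number): "critical" | "tight" | "watch" | "ok" {
  if (coveragePct < 25) return "critical";
  if (coveragePct < 50) return "tight";
  if (coveragePct < 100) return "watch";
  return "ok";
}
```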
New UI 'Staffing Forecast' section ABOVE Live Contracts:
- Top card: total construction value, permit count, workers needed,
critical/tight role count
- Per-role rows: demand vs available supply, coverage %, deadline
with red/amber/green urgency coloring
Per-contract timeline on Live Contracts:
- estimated_construction_start, staffing_window_opens, days_to_deadline
- urgency classification: overdue/urgent/soon/scheduled
- card border colored by urgency
- timeline line explicitly shows recruiter: OVERDUE/URGENT + days count
This is the 'system already thinks about when, not just who' surface
J was asking for. CRMs store; this anticipates.
Closing trust-breaks surfaced in the strategic audit.
A — MEMORY chip renders even when sparse:
Previously rendered nothing when no trait crossed threshold, which
recruiters would read as "system has no signal." Now explicitly
says "memory is sparse for this role+geo — no trait crossed
threshold" or "no similar past playbooks yet — first fill of this
kind will seed it." Honest when it doesn't know.
B — Removed /intelligence/learn dead endpoint:
Legacy CSV-writer path that destructively re-wrote
successful_playbooks. /log and /log_failure replace it cleanly.
Leaving dead code confuses future maintainers.
C — Narrative tooltips on Endorsed chips:
Hovering the green "Endorsed · N playbooks" chip now fetches
the worker's past operations from successful_playbooks_live and
shows a story: "Maria — past endorsements: • Welder x2 in
Toledo (2026-04-15), • Welder x1 in Toledo (2026-04-18)..."
Falls back to honest "narrative unavailable" if the seed
didn't land in SQL.
D — call_log infrastructure in worker modal:
New "Recent Contact" section queries call_log JOIN candidates by
name. Surfaces last 3 call entries with timestamp, recruiter,
disposition, duration. When empty (which is today's reality —
candidates table only has 1000 rows vs call_log's higher IDs),
shows an honest message about the data gap and what real ATS
integration would unlock.
Honest call: D ships infrastructure. Actual utility depends on
aligning candidate IDs between the candidates table and
call_log — current synthetic data doesn't cross-ref cleanly.
When real ATS data lands, this section becomes the
"system knows who we called yesterday" feature the recruiter
needs.
Deferred (would require a dedicated session):
- Rate awareness (needs worker pay_rate + contract bill_rate)
- Push / background daemon (Slack/SMS/email integration)
- Confidence calibration (needs a probabilistic ranking layer)
New endpoints:
- POST /clients/:client/blacklist { worker_id, name?, reason? }
- GET /clients/:client/blacklist → { client, entries }
- DELETE /clients/:client/blacklist/:worker_id → { removed, total }
Bun /search accepts optional `client` field. When present, loads that
client's blacklist and appends `AND worker_id NOT IN (...)` to the
SQL filter. Zero-cost if unused; clean trust-break avoidance when a
client has previously flagged a worker.
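The appended predicate could look like this sketch (helper name illustrative; worker_id assumed numeric):

```ts
// When /search carries a client, exclude that client's flagged workers by
// appending a NOT IN predicate to whatever sql_filter the caller sent.
function withBlacklist(sqlFilter: string, workerIds: number[]): string {
  if (workerIds.length === 0) return sqlFilter;       // zero-cost if unused
  const notIn = `worker_id NOT IN (${workerIds.join(",")})`;
  return sqlFilter ? `(${sqlFilter}) AND ${notIn}` : notIn;
}
```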
Persistence: mcp-server/data/client_blacklists.json, synchronous
writes via Bun.write. Scale target is hundreds of entries per client
tops — JSON is fine until we hit 10K+ per client.
Verified: worker_id 9326 (Carmen Green) blacklisted for AcmeCorp,
same Chicago Electrician search with client=AcmeCorp returns 196
sql_matches vs 197 without — exactly one excluded.
A — Patterns surface in main Worker Search:
/intelligence/chat smart_search fallback now calls /patterns in
parallel with hybrid, returns discovered_pattern + matched count.
search.html doSearch renders a green "MEMORY (N playbooks): ..."
chip above results so every recruiter query shows the meta-index
dimension, not just live-contract cards.
B — Compounding proven and default-k bumped:
Direct compounding test on Chicago Electrician:
- Run 0 (no seeds): Carmen Green not in top-5, boost 0
- After 3 seeds of identical operation: boost +0.250 (capped),
3 citations, lifted to #1. Each seed adds 1 citation. Cap
prevents one worker from dominating future searches.
- Required k=200 (not 25 or 50) — embedding band is narrow
(cosines 0.55-0.67 across all playbooks regardless of geo).
- Bumped defaults on /search, permit_contracts, and smart_search
to playbook_memory_k=200. Brute-force sub-ms at this scale.
New devop.live/lakehouse section pairs live public Chicago building
permits with derived staffing contracts, ranked candidates from the
500K worker bench, and meta-index discovered patterns per role+geo.
Makes the Phase 19 boost + Path 2 pattern discovery visible on real
external data, without needing a paying client to demo.
Backend:
- New /intelligence/permit_contracts endpoint
- Fetches 6 recent Chicago permits > $250K from the Socrata API
- Derives proposed fill: 1 worker per $150K of permit value (capped 2-8)
- For each: /vectors/hybrid with use_playbook_memory=true,
playbook_memory_k=25, auto availability>0.5 filter
- For each: /vectors/playbook_memory/patterns with k=25 min_freq=0.3
- Returns permit + proposed contract + top 5 candidates with boosts
and citations + discovered pattern + pattern_matched count
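The fill derivation is a one-liner (the rounding choice is an assumption):

```ts
// 1 worker per $150K of permit value, clamped to the 2-8 range.
const proposedWorkers = (permitCostUsd: number): number =>
  Math.min(8, Math.max(2, Math.round(permitCostUsd / 150_000)));
```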
Frontend:
- New "Live Contracts" section on search.html between today's sim
contracts and Market Intelligence
- Per-permit card: cost + work_type + address + proposed role/count
+ pool size + top 3 candidates (with endorsement chip when boost
fires) + memory-derived pattern ("MEMORY (N playbooks): recurring
certifications: OSHA-10 47%, Forklift... · archetype mostly: ...")
Real working demo even without paying clients: shows the system
operating on genuinely external data with our synthetic-data-derived
learning applied.
New:
- /vectors/playbook_memory/patterns: meta-index pattern discovery.
Given a query, finds top-K similar playbooks, pulls each endorsed
worker's full workers_500k profile, aggregates shared traits (cert
frequencies, skill frequencies, modal archetype, reliability
distribution), returns a human-readable discovered_pattern. Surfaces
signals operators didn't explicitly query — the original PRD's
"identify things we didn't know" dimension.
- /vectors/playbook_memory/mark_failed: records worker failures per
(city, state, name). compute_boost_for applies 0.5^n penalty per
recorded failure, so 3 failures quarter a worker's positive boost and
5 effectively zero it. Path 1 negative signal — recruiter trust
depends on the system NOT recommending people who no-showed.
- Bun /log_failure: validates failed_names against workers_500k
(same ghost-guard as /log), forwards to /mark_failed.
Improved:
- /log now validates endorsed_names against workers_500k for the
contract's city+state before seeding. Ghost names (names that don't
correspond to real workers) are rejected in the response and excluded
from the seed, preventing silent boost failures.
- Bun /search auto-appends `CAST(availability AS DOUBLE) > 0.5` to
sql_filter when the caller didn't constrain availability. Opt out
with `include_unavailable: true`. Recruiter trust bug: surfacing
already-placed workers breaks the first call.
- DEFAULT_TOP_K_PLAYBOOKS 25 → 100. Direct cosine measurement showed
similarities cluster 0.55-0.67 across all playbooks regardless of
geo, so k=25 missed relevant geo-matched playbooks. Brute-force is
still sub-ms at this size.
Verified end-to-end on live data:
- Ghost names rejected on /log + /log_failure
- Availability filter drops unavailable workers from candidate pool
- Pattern discovery on unseen Cleveland OH Welder query returned
recurring skills (first aid 43%, grinder 43%, blueprint 43%) and
modal archetype (specialist) across 20 semantically similar past
playbooks in 0.24s
- Negative signal: Helen Sanchez boost dropped +0.250 → +0.163 after
3 failures recorded via /log_failure (34% reduction)
Two gap-fills surfaced by the real test on 2026-04-20:
1. /log no longer seeds endorsed_names that don't exist in workers_500k
for the contract's (city, state). Previously accepted ghost names
silently (entry count grew, SQL row landed, but boost never fired
because no real worker chunk matched the stored tuple). Response now
reports rejected_ghost_names and explains why seeding was skipped.
2. Bun /search auto-appends `CAST(availability AS DOUBLE) > 0.5` to
sql_filter when the caller didn't constrain availability themselves.
Recruiters expect "available workers" by default — surfacing someone
on an active placement would break trust on first contact.
Opt out with `include_unavailable: true`.
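A sketch of the auto-append (predicate text from above; the substring containment check is a simplification of the real detection):

```ts
// Default recruiters to available workers: if the caller's sql_filter
// doesn't already constrain availability, append the cast-and-threshold
// predicate.
function withAvailability(sqlFilter: string, includeUnavailable = false): string {
  if (includeUnavailable || /availability/i.test(sqlFilter)) return sqlFilter;
  const pred = "CAST(availability AS DOUBLE) > 0.5";
  return sqlFilter ? `(${sqlFilter}) AND ${pred}` : pred;
}
```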
Verified: ghost names rejected end-to-end, real names accepted, mixed
input handled correctly. Availability filter drops ~10 workers from a
305-row Cleveland OH Welder pool to 295 actually-available.
Backend:
- crates/vectord/src/playbook_memory.rs (new): Phase 19 in-memory boost
store with seed/rebuild/snapshot, plus temporal decay (e^(-age_days/30)
per playbook), persist_to_sql endpoint backing successful_playbooks_live,
and discover_patterns endpoint for meta-index pattern aggregation
(recurring certs/skills/archetype/reliability across similar past fills).
- DEFAULT_TOP_K_PLAYBOOKS bumped 5 → 25; old default silently missed
most boosts when memory had > 25 entries.
- service.rs: new routes /vectors/playbook_memory/{seed,rebuild,stats,
persist_sql,patterns}.
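The decay term down-weights each playbook by age (a one-line sketch of the stated math; the real implementation is Rust in playbook_memory.rs):

```ts
// e^(-age_days/30): a fresh playbook weighs 1.0, a 30-day-old one ≈ 0.37,
// a 90-day-old one ≈ 0.05 — old endorsements fade rather than vanish.
const decayWeight = (ageDays: number): number => Math.exp(-ageDays / 30);
```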
Bun staffing co-pilot (mcp-server/):
- /search, /match, /verify, /proof, /simulation/run, MCP tools all
forward use_playbook_memory:true and playbook_memory_k:25 to the
hybrid endpoint. Boost was previously dark across the entire app.
- /log no longer POSTs to /ingest/file — that endpoint REPLACES the
dataset's object list, so single-row CSV writes were wiping all prior
rows in successful_playbooks (sp_rows went 33→1 in one /log call).
/log now seeds playbook_memory with canonical short text and calls
/persist_sql to keep successful_playbooks_live in sync.
- /simulation/run cumulative end-of-week CSV write removed for the same
reason. Per-day per-contract /seed (added in this session) is the
accumulating feedback path now.
- search.html addWorkerInsight renders a green "Endorsed · N playbooks"
chip with playbook citations when boost > 0.
Internal Dioxus UI (crates/ui/):
- Dashboard phase list rewritten through Phase 19 (was stuck at "Phase
16: File Watcher" / "Phase 17: DB Connector" — both wrong).
- Removed fabricated "27ms" stat label.
- Ask tab examples + SQL default replaced with real staffing prompts
against candidates/clients/job_orders (was referencing nonexistent
employees/products/events).
- New Playbook tab exposes /vectors/playbook_memory/{stats,rebuild} and
side-by-side hybrid search (boost OFF vs ON) with citations.
Tests (tests/multi-agent/):
- run_e2e_rated.ts: parallel two-agent (mistral + qwen2.5) build phase
+ verifier rating (geo, auth, persist, boost, speed → /10).
- network_proving.ts: continuous build → verify → repeat with
staffing-recruiter profile hot-swap; geo-discrimination check.
- chain_of_custody.ts: single recruiter operation traced through every
layer (Bun /search, direct /vectors/hybrid parity, /log, SQL,
playbook_memory growth, profile activation, post-op boost lift).
- Leaflet.js map with dark tiles showing real Chicago building permits
- Dots sized and colored by project cost ($1B+ red, $100M+ orange, $10M+ blue)
- Hover any dot for project details — address, cost, description, date
- LIVE indicator with green pulse dot
- Timestamp showing when data was fetched
- "Verify source" link goes directly to Chicago Open Data portal
- "Refresh" button re-fetches from the API on click
- Expanded to 50 permits for denser map coverage
- Legend showing dot size scale
No one can say "you just typed those numbers in" when they can
click a dot on the map, see 10000 W OHARE ST, and verify it
themselves on data.cityofchicago.org.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/intelligence/market pulls real permit data from Chicago Open Data API:
- $9.6B in active construction permits
- O'Hare expansion ($730M), new casino ($580M), transit station ($445M)
- Maps permit types to staffing roles (electrical→Electrician, masonry→Loader)
- Cross-references with our IL worker bench to show coverage gaps
- Electrician gap: only 1,036 reliable vs 63K estimated demand
Datalake page now shows three intelligence layers:
1. Contract simulation with scenario-driven matching
2. Market Intelligence with live permit data + bench analysis
3. System Learning with fill history and detected patterns
The staffing company sees demand forming before the phone rings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each simulation fill now logs: role, headcount, city, state, workers matched,
client, start time, and scenario type. One page refresh = ~20 playbook entries.
4 refreshes = 28 entries with patterns already forming.
Fixed activity counters: shows Contract Fills, Searches, and Patterns.
Activity feed now shows the actual fill data with worker names and scenarios.
This is the PRD's learning loop in action — the system records every
successful match so future queries can learn from past decisions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Learning Loop:
- /intelligence/learn endpoint logs search→selection as playbook entry
- /intelligence/activity returns learning stats, patterns, and recent activity
- Call/SMS buttons trigger logSelection() — records what query led to what pick
- "System Learning" card on main page shows searches logged, patterns detected,
and recent activity feed with timestamps
- Every search-selection pair becomes institutional knowledge stored in the lakehouse
Smart Search on Main Page:
- doSearch() now routes through /intelligence/chat (smart NL parser)
- Extracts role, city, state, availability, reliability from natural language
- Shows understanding tags so staffer sees what the system parsed
- Returns workers with ZIP codes, availability %, reliability %, archetype
- "reliable forklift operator available in Nashville" → 10 Nashville forklift
operators with ZIP codes, all 86-98% reliable, all available — 372ms
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
"find me a warehouse worker available today near Nashville" now:
- Parses: role=warehouse, city=Nashville, available=true
- Builds SQL: role LIKE '%warehouse%' AND city='Nashville' AND availability>0.5
- Returns: 12 Nashville warehouse workers with ZIP codes, availability %,
reliability %, skills, certs, and archetype
- Shows understanding tags so user sees what the system parsed
- 414ms, 12 records — not a generic search, a targeted answer
Recognizes 20 role keywords, 40+ cities, 10 states, availability/reliability
signals from natural language. Falls through to vector search for anything
the parser doesn't catch.
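A sketch of the keyword pass (the tables here are tiny stand-ins for the real 20-role / 40-city lists; real code must escape SQL values):

```ts
const ROLES = ["warehouse", "forklift", "welder", "electrician"];
const CITIES = ["Nashville", "Chicago", "Toledo"];

// Keyword-table NL parse → SQL WHERE; null falls through to vector search.
function parseToSql(q: string): string | null {
  const role = ROLES.find((r) => q.toLowerCase().includes(r));
  const city = CITIES.find((c) => q.includes(c));
  if (!role && !city) return null;
  const preds: string[] = [];
  if (role) preds.push(`role LIKE '%${role}%'`);
  if (city) preds.push(`city='${city}'`);
  if (/\b(available|today)\b/i.test(q)) preds.push("availability>0.5");
  return preds.join(" AND ");
}
```

"find me a warehouse worker available today near Nashville" yields exactly the WHERE clause shown above.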
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New page at /lakehouse/console — a $200/hr consultant's intelligence product:
Morning Brief (auto-loads in ~120ms across 500K profiles):
- Workforce Pulse: total, reliable %, elite %, archetype breakdown
- Geographic Bench: state-by-state reliable % with weakest-state alert
- Comeback Watch: 15K improving workers who crossed 80% reliability
- Risk Watch: 5K erratic + 5K silent workers flagged automatically
- Ready & Waiting: available + reliable workers to call first
- Role Supply: 20 roles with supply/available/reliability
Conversational Chat with 5 intelligent routes:
- "Find someone like [Name] but in OH" → vector similarity search
- "Who could handle industrial electrical work?" → semantic role discovery
(finds workers for roles that DON'T EXIST in the database)
- "What if we lose our top 5 forklift operators?" → scenario analysis
with risk rating, bench depth, state-by-state breakdown
- "Which workers should we stop placing?" → risk flagging
- Default: hybrid SQL+vector search with LLM summary
Every response shows: query steps, records scanned, response time.
Transparency kills the "AI is making it up" argument.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Simulation now uses weighted random selection across 4 priority tiers:
- Urgent (walkoff, quarantine, no-show)
- High (new client, cert expiry, expansion)
- Medium (recurring, seasonal, medical leave, cross-train)
- Low (future, exploratory)
- Color-coded scenario banners on ALL contracts, not just urgent
- Each scenario carries context (note) + recommended action
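Weighted tier selection could look like this sketch (the weights themselves are assumptions; tier contents from the list above):

```ts
// Weighted random pick across priority tiers; weights are illustrative.
const TIERS = [
  { name: "urgent", weight: 0.4, scenarios: ["walkoff", "quarantine", "no-show"] },
  { name: "high",   weight: 0.3, scenarios: ["new client", "cert expiry", "expansion"] },
  { name: "medium", weight: 0.2, scenarios: ["recurring", "seasonal", "medical leave", "cross-train"] },
  { name: "low",    weight: 0.1, scenarios: ["future", "exploratory"] },
];

function pickScenario(): { tier: string; scenario: string } {
  let r = Math.random(); // weights sum to 1, so r always lands in a tier
  const tier = TIERS.find((t) => (r -= t.weight) <= 0) ?? TIERS[TIERS.length - 1];
  const scenario = tier.scenarios[Math.floor(Math.random() * tier.scenarios.length)];
  return { tier: tier.name, scenario };
}
```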
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Added 'How This Actually Works' section below the proof page:
1. CRM vs Lakehouse side-by-side — what's different in plain English
2. Your Data Never Leaves — local AI, local storage, your hardware
3. How It Handles Scale — HNSW (RAM, 1ms) + Lance (disk, 5ms at 10M)
4. Hot-Swap Profiles — 4 AI models explained by what they DO
5. Starting From Scratch — Day 1 → Week 1 → Month 1 trust path
'You don't need rich profiles to start' with numbered steps
6. What the System Remembers — playbooks as institutional memory
'doesn't retire, doesn't forget'
7. Measured Not Promised — table of real numbers with plain English
Addresses the legacy company pushback: explains WHY the architecture
matters, HOW sparse data becomes rich data over time, and that
everything runs on hardware they own with zero cloud dependency.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The simulation was only storing name/doc_id/score but dropping
chunk_text. Worker cards showed 'New — data builds with placements'
for every worker. Now includes the full profile text so cards render
skills (blue), certs (green), archetype (purple), and reliability/
availability meters.
Verified via Playwright: cards now show DeShawn Cook with 6S|Excel|SAP
skills, First Aid/CPR cert, flexible archetype, 72% reliability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaced complex dashboard with minimal search.html:
- No external JS/CSS files, no transpilation, no module imports
- Plain JS with .then() chains (no async/await compat issues)
- DOM-only rendering via createElement (no innerHTML with data)
- 20s AbortController timeout so fetch never hangs
- Detects /lakehouse/ proxy prefix automatically
- 7KB total, loads in 18ms
Calls lakehouse /vectors/hybrid directly — SQL filters always apply,
works even when HNSW isn't loaded (brute-force fallback).
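The timeout pattern, in the page's no-async style (a sketch; apiFetch is an illustrative name):

```ts
// Plain .then() chains plus a 20s AbortController so a dead gateway can
// never hang the page; base detection mirrors the /lakehouse/ proxy check.
function apiFetch(path: string): Promise<unknown> {
  const base = location.pathname.startsWith("/lakehouse/") ? "/lakehouse" : "";
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), 20_000);
  return fetch(base + path, { signal: ctrl.signal })
    .then((res) => res.json())
    .finally(() => clearTimeout(timer));
}
```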
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All gateway endpoints pointed to ethereal_workers_v1 (10K, W- prefix)
instead of workers_500k_v1 (500K, W500K- prefix). Filters appeared
broken because the vector results came from the wrong dataset —
IDs matched numerically but belonged to different workers.
Now: every search, match, and hybrid call uses workers_500k_v1.
Verified: 'experienced welder' + state=OH + role=Welder returns
5 Welders in OH (Carmen Perry, Janet White, Rachel Miller, etc).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
3 live demo searches run on page load against 500K real profiles:
'warehouse help' — CRM: 0, AI: finds Forklift Ops + Loaders
'someone good with machines who is dependable' — CRM: 0, AI: finds Machine Ops
'safety trained worker for chemical plant' — CRM: 0, AI: finds OSHA+Hazmat workers
Each shows the actual CRM keyword count (LIKE match) next to the AI
vector results with real worker names, roles, and cities. Not
described — demonstrated. The numbers come from queries that run
when the page loads.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Added @media(max-width:768px) breakpoints:
- 2-col grids → single column on mobile
- 3-col grids → single column
- 4-col model cards → 2-col
- Stats grid → 2-col
- Tables: horizontal scroll, smaller text
- Reduced padding and font sizes
- Hero title scales down
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rebuilt the page to address a staffing coordinator who's tired of
learning new tools. Opens with "Your Morning Just Got Easier" and
a side-by-side: their current 45-minute routine vs 5 minutes with
pre-matched workers.
Key messaging:
- "This isn't another CRM to learn"
- "We know what your day looks like" (checklist they'll recognize)
- Shows real matched workers WITH names, not abstract metrics
- "It understands what you mean" — warehouse help finds forklift ops
- "It already filtered the junk" — only workers worth calling
- "It runs on YOUR machine" — no cloud, no fees, no data leaving
Technical proof pushed below a divider for the skeptical team.
The staffer sees their contracts and their workers first.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rebuilt /proof to highlight the actual differentiator:
- Section 01: "What a CRM Does" — SQL keyword search, every CRM has this
- Section 02: "What AI + Vectors Do" — semantic understanding.
Side-by-side: CRM finds 0 results for "warehouse work" because no
profile contains that exact text. AI finds 5 verified workers because
it understands Forklift Operator + Loader = warehouse work.
- Section 03: 673K vectorized chunks, 98% recall, 10M at 5ms
- Section 04: Local GPU, 4 models, no cloud, no API fees
The point: this isn't another CRM search. It's an intelligence layer
that understands MEANING — and it runs entirely on your hardware.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three fixes:
1. CORS headers on all gateway responses (browser dashboard was
blocked by same-origin policy)
2. Dashboard JS uses window.location.origin instead of hardcoded
localhost:3700 (LAN browsers couldn't reach it)
3. Langfuse tracing wired into every gateway request — api() wrapper
creates spans for each lakehouse call, logGeneration for LLM calls.
Week simulation now produces 34 observations per run visible in
Langfuse UI.
7 traces confirmed in Langfuse after restart. Every /sql, /search,
/vram, /simulation call is tracked with timing + inputs + outputs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Week simulation engine: 5 business days, 4-8 contracts per day,
3 rotating staffers with handoffs between days. Runs hybrid search
per contract via the gateway. 28 contracts, 108/108 worker slots filled
(100%), 5 emergencies, 4 handoffs, 3.2s total.
Dashboard at :3700/ — dark theme, shows:
- Contract cards sorted by priority with match status
- Day navigation across the work week
- Week summary stats (fill rate, emergencies, handoffs)
- Live alerts (erratic/silent workers)
- Playbook entries
- Real-time service health + VRAM
Self-orientation (/context) + verification (/verify) endpoints so
any agent can understand the system and fact-check claims without
human intermediary.
Accessible on LAN at http://192.168.1.177:3700
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Any agent (Claude Code via MCP stdio, or sub-agents via HTTP :3700)
can now self-orient without human explanation:
GET /context returns:
- System purpose and name
- All datasets with row counts
- All vector indexes with backends
- Available models and their strengths
- Complete tool list with rules
- Current VRAM state
POST /verify fact-checks any claim about a worker against the golden
data. Agent says "worker 1313 is a Forklift Operator in IL with
reliability 0.82" → endpoint returns verified=true/false with exact
discrepancies.
MCP resources (stdio path for Claude Code):
- lakehouse://system — live system status
- lakehouse://architecture — full PRD
- lakehouse://instructions — agent operating manual
- lakehouse://playbooks — successful operations database
- lakehouse://datasets — dataset listing
This is the "command and control" layer J asked for: any agent
connecting to this system gets the context it needs to operate
independently. No human intermediary required.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three new systemd services:
- lakehouse-agent (:3700) — REST gateway wrapping all lakehouse tools.
Clean JSON in/out, no protocol complexity. 9 endpoints: /search,
/sql, /match, /worker/:id, /ask, /log, /playbooks, /profile/:id, /vram
- lakehouse-observer — watches operations, logs to lakehouse, asks
local model to diagnose failure patterns, consolidates successful
patterns into playbooks every 5 cycles
- Stdio MCP transport preserved for Claude Code integration
AGENT_INSTRUCTIONS.md: complete operating manual for sub-agents.
Rules: never hallucinate, SQL first for structured questions, hybrid
for matching, log every success, check playbooks before complex tasks.
Observer loop:
observed() wrapper timestamps + persists every gateway call →
error analyzer reads failures + asks LLM for diagnosis →
playbook consolidator groups successes by endpoint pattern
All three designed for zero human intervention — agents operate,
observer watches, playbooks accumulate, iteration happens internally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MCP server at mcp-server/index.ts — 9 tools exposing the full
lakehouse to any MCP-compatible model:
search_workers (hybrid SQL+vector), query_sql, match_contract,
get_worker, rag_question, log_success, get_playbooks,
swap_profile, vram_status
The "successful playbooks" pattern: log_success writes outcomes
back to the lakehouse as a queryable dataset. Small models call
get_playbooks to learn what approaches worked for similar tasks —
no retraining needed, just data.
generate_workers.py scales to 100K+ with realistic distributions:
- 20 roles weighted by staffing industry frequency
- 44 real Midwest/South cities across 12 states
- Per-role skill pools (warehouse/production/machine/maintenance)
- 13 certification types with realistic probability
- 8 behavioral archetypes with score distributions
- SMS communication templates (20 patterns)
100K worker dataset ingested: 70MB CSV → Parquet in 1.1s. Verified:
11K forklift ops, 27K in IL, archetype distribution matches weights.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>