58 Commits

87b034f5f9  phase 1.6: ops dashboard + consent_versions allowlist + subject timeline tool
Closes the afternoon's "all four" wave (per J's request to do all the
items in one pass instead of picking one of the options):
(1) Live demo on WORKER-100 — full lifecycle exercised end-to-end
against the running gateway. 3 audit rows landed in correct
order (consent_grant → biometric_collection →
consent_withdrawal), chain_verified=true, photo on disk at
data/biometric/uploads/WORKER-100/1778011967957907731_027b6bb1.jpg
(180 bytes JFIF). retention_until=2026-06-04 (30d from
withdrawal per consent template v1 §2).
(2) GET /biometric/stats — read-only aggregate over all subjects.
Returns counts by biometric.status + subject.status, photo
count, oldest_active_retention_until, and the last 20
state-change events (consent_grant / collection / withdrawal /
erasure — validator_lookup and other noise filtered out).
Walks per-subject audit logs via the existing writer; cheap
for 100 subjects, would want an event-stream index at 100k.
Legal-tier auth (same posture as /audit). 4 unit tests.
(3) /biometric/dashboard mcp-server frontend. Auto-refreshes
/biometric/stats every 15s, neo-brutalist tile layout for
the per-status counts + retention horizon block + recent
events table with kind badges + event-kind breakdown pills.
sessionStorage-backed token; logout button clears state.
DOM-built throughout (textContent + createElement) — never
innerHTML on audit-row values, since trace_id et al. could
in theory carry operator-supplied strings.
(4) consent_versions allowlist. BiometricEndpointState gains
`allowed_consent_versions: Option<Arc<HashSet<String>>>`,
loaded at startup from /etc/lakehouse/consent_versions.json
(override via LH_CONSENT_VERSIONS_FILE). process_consent
refuses unknown hashes with HTTP 400 consent_version_unknown
when configured. Resolution semantics (sketched below the list):
- Missing file → permissive (v1 compat, warn-log)
- Parse error → permissive (error-log; broken config
silently going strict would be worse)
- Empty array → strict, refuse all (deliberate freeze
mode for "counsel hasn't signed v1 yet")
- Populated → strict, lowercase-normalized comparison
5 unit tests (known/unknown/case/empty/none-permissive).
Example template at ops/consent_versions.example.json with
a counsel-tier deployment note.
(5) scripts/staffing/subject_timeline.sh — operator one-shot
pretty-print of any subject's full BIPA lifecycle. Curls
/audit/subject/{id} with legal token; renders manifest
summary + on-disk photo state + chronological audit chain
with kind badges + chain verification status. Smoke-tested
on WORKER-100 (3 rows verified).
(6) STATE_OF_PLAY.md refresh. New section "afternoon wave"
captures all four commits (76cb5ac, 7f0f500, 68d226c, this
one) + the live demo evidence + the v1 endpoint matrix +
UI/CLI inventory + the production-cutover blocking set
(counsel calendar only — eng substrate is done).
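
The resolution semantics in (4) reduce to a small decision table. A minimal
TypeScript sketch of that table (the shipped state lives in the Rust
BiometricEndpointState; all names below are illustrative):

```ts
import { readFileSync } from "node:fs";

type ConsentPolicy =
  | { mode: "permissive" }                    // missing file or parse error
  | { mode: "strict"; allowed: Set<string> }; // empty set = deliberate freeze

function loadConsentPolicy(path: string): ConsentPolicy {
  let raw: string;
  try {
    raw = readFileSync(path, "utf8");
  } catch {
    console.warn(`consent_versions: ${path} missing; staying permissive (v1 compat)`);
    return { mode: "permissive" };
  }
  try {
    const hashes = JSON.parse(raw);
    if (!Array.isArray(hashes)) throw new Error("expected a JSON array");
    // populated OR empty array both go strict; empty refuses everything
    return { mode: "strict", allowed: new Set(hashes.map((h) => String(h).toLowerCase())) };
  } catch (e) {
    // broken config silently going strict would be worse
    console.error(`consent_versions: parse error, staying permissive: ${e}`);
    return { mode: "permissive" };
  }
}

// process_consent guard: unknown hash -> HTTP 400 consent_version_unknown
function consentVersionAllowed(policy: ConsentPolicy, hash: string): boolean {
  return policy.mode === "permissive" || policy.allowed.has(hash.toLowerCase());
}
```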
Verified live post-restart:
- /audit/health + /biometric/health both 200
- /biometric/stats returns 100 subjects, 2 withdrawn (WORKER-2 from
earlier scrum + WORKER-100 from today's demo), 1 photo on record,
6 recent state-change events
- /biometric/intake + /biometric/withdraw + /biometric/dashboard
all 200 on mcp-server :3700
- subject_timeline.sh on WORKER-100: chain_verified=true,
chain_root=a47563ff937d50de…
- 88/88 catalogd lib tests + 55/55 biometric_endpoint tests green
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

68d226c314  phase 1.6: BIPA withdrawal endpoint + UI + retention sweep timer
Closes the four production gaps that were live after the consent
endpoint shipped (76cb5ac):
(1) Withdrawal endpoint POST /biometric/subject/{id}/withdraw
backs the BIPA right of withdrawal that consent template v1 §2
explicitly promises. Without it, the only way to honor a
candidate's withdrawal request was the heavier /erase, which
destroys immediately rather than starting the 30-day SLA clock
that the consent template commits to. Side-effects:
- manifest.consent.biometric.status: Given → Withdrawn
- manifest.consent.biometric.retention_until: 18mo → 30d
- audit row kind=biometric_consent_withdrawal, captures
reason + operator_of_record + evidence_path
- DOES NOT touch general_pii or subject.status — biometric
is independently revocable
State machine (sketched below the list): Given→Withdrawn (happy), NeverCollected/Pending→
409 nothing_to_withdraw, Withdrawn→409 already_withdrawn (won't
advance the destruction clock), Expired→409 already_expired,
subject Erased/RetentionExpired→403 subject_inactive.
12 new unit tests covering happy path + all guards + a full
grant→withdraw cycle that asserts retention_until is correctly
accelerated and the audit chain has 2 rows in correct order.
(2) Withdraw UI at /biometric/withdraw (mcp-server-served HTML).
3-screen flow: operator auth (token + name in sessionStorage),
withdrawal form (candidate_id + free-text reason ≥10 chars +
optional evidence path), confirmation showing the audit row
HMAC + the 30-day retention_until clock + a curl recipe for
/audit/subject/{id} verification. Same neo-brutalist styling
as biometric_intake.html. Mounted at
http://localhost:3700/biometric/withdraw and externally at
https://devop.live/lakehouse/biometric/withdraw.
(3) Retention sweep systemd timer. crates/catalogd/bin/retention_sweep
binary already existed; this commit schedules it. Daily 03:00 UTC,
Persistent=true so a missed boot triggers on next start. Service
runs as oneshot with --apply (writes a date-stamped JSONL to
data/_catalog/subjects/_retention_sweep_<date>.jsonl ONLY when
overdue subjects exist, per the binary's existing semantics).
install.sh updated to handle .timer + paired .service correctly:
enables the timer, skips direct start of the oneshot service
(the timer pulls it in). One-shot manual test confirmed clean:
100 subjects scanned, 0 overdue (all backfill subjects within
their 4-year general retention window).
(4) operator_of_record bug fix in intake UI. Previously the page
hardcoded the literal string 'intake_ui_operator' as the
operator_of_record sent to /consent — meaning every audit row
captured the same useless placeholder, defeating the whole
point of operator traceability. Fixed by adding an operator
name field to the token-paste step (sessionStorage-backed),
passed through to consent + photo POSTs as the actual operator.
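
For reference, the withdrawal guard table from (1) as a sketch. The real
implementation is the Rust process_withdraw path; the types and the function
name here are illustrative, the status codes and error strings are from the
commit:

```ts
type BiometricStatus = "Given" | "NeverCollected" | "Pending" | "Withdrawn" | "Expired";
type SubjectStatus = "Active" | "Erased" | "RetentionExpired";

// Maps (subject, biometric) state to the HTTP status + error code above.
function withdrawGuard(
  subject: SubjectStatus,
  biometric: BiometricStatus,
): { http: number; code: string } {
  // subject-level gate wins over everything else
  if (subject === "Erased" || subject === "RetentionExpired")
    return { http: 403, code: "subject_inactive" };
  switch (biometric) {
    case "Given":          return { http: 200, code: "ok" }; // happy path: Given -> Withdrawn
    case "NeverCollected":
    case "Pending":        return { http: 409, code: "nothing_to_withdraw" };
    case "Withdrawn":      return { http: 409, code: "already_withdrawn" }; // won't advance the clock
    case "Expired":        return { http: 409, code: "already_expired" };
  }
}
```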
Verified live post-restart:
- gateway /audit/health + /biometric/health both 200
- mcp-server /biometric/intake + /biometric/withdraw both 200
- Live withdraw probes: 401 (no token), 400 (empty body), 404
(ghost subject), 409 nothing_to_withdraw on WORKER-1 (which
is NeverCollected per backfill default) — all expected
- Binary strings contain: process_withdraw, withdraw_consent,
biometric_consent_withdrawal, biometric_withdraw_response.v1,
nothing_to_withdraw, already_withdrawn, already_expired,
/subject/{candidate_id}/withdraw route
- systemd: lakehouse-retention-sweep.timer active+enabled,
next fire Tue 2026-05-05 22:00 CDT (= 03:00 UTC May 6)
- Manual one-shot of retention sweep service: exit 0/SUCCESS,
100 subjects loaded, 0 overdue
83/83 catalogd lib tests + 46/46 biometric_endpoint tests green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

7f0f500050  phase 1.6: candidate intake UI — operator-driven consent + photo capture
Adds the Gate 2 frontend that operators use to capture candidate
biometric consent + photo at intake. Without this, the
/biometric/subject/{id}/consent endpoint shipped in 76cb5ac has
no caller — operators would have to build curl scripts by hand.
Surfaces:
- mcp-server serves /biometric/intake as static HTML (mounted at
http://localhost:3700/biometric/intake; externally accessible via
nginx at https://devop.live/lakehouse/biometric/intake)
- URL must include ?candidate_id=WORKER-XXX
Page flow (4 screens):
- Step 1 — operator authentication. Pastes legal-tier audit token
into a password field; stored in sessionStorage (cleared on tab
close), never localStorage, never cookies.
- Step 2 — consent. Renders the v1 consent template text inline
(Disclosures 1/2/3 + plain-language summary, all matching
docs/policies/consent/biometric_consent_template_v1.md as of
this commit). Captures candidate printed name + checkbox accept;
computes SHA-256 of the rendered consent block as
consent_version_hash; computes SHA-256 of the click-acceptance
evidence (method + name + ts + user_agent + page_origin) and
records it as inline:sha256=<hash> evidence path (hashing sketched below the list). POSTs to
gateway /biometric/subject/{id}/consent.
- Step 3 — photo. Two paths: file upload OR getUserMedia camera
capture. Preview before submit. Skip-photo button for
consent-only intake (e.g. consent collected before equipment
available). POSTs to gateway /biometric/subject/{id}/photo.
- Step 4 — confirmation. Displays the audit row HMACs from both
endpoint responses + the verify_biometric_erasure.sh command
the operator can run for chain attestation.
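
A browser-side sketch of Step 2's hashing, assuming the Web Crypto API (the
page has no build step, so plain crypto.subtle is the natural fit; the
evidence fields are the ones listed above, the payload field names beyond
those are hypothetical):

```ts
async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}

async function buildConsentPayload(renderedConsentBlock: string, printedName: string) {
  // click-acceptance evidence: method + name + ts + user_agent + page_origin
  const evidence = JSON.stringify({
    method: "click_accept",
    name: printedName,
    ts: new Date().toISOString(),
    user_agent: navigator.userAgent,
    page_origin: location.origin,
  });
  return {
    consent_version_hash: await sha256Hex(renderedConsentBlock),
    evidence_path: `inline:sha256=${await sha256Hex(evidence)}`,
    printed_name: printedName, // assumed field name
  };
}
```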
Design choices:
- No framework, no build step. Single self-contained HTML file
~22KB. Matches the existing mcp-server precedent
(onboard.html, console.html, etc.).
- Neo-brutalist dark style matching mcp-server's other pages
(tracked dark surfaces, monospace technical metadata, sharp
borders). Consistent with J's UI preferences.
- Server-authoritative timestamps. The page sends its own UA +
click ts as evidence, but the canonical given_at on the
manifest comes from the gateway's Utc::now() (per
process_consent). Page displays whatever the gateway returns.
- Gateway URL configurable via ?gw= query param; defaults to
http://localhost:3100 for the same-box workstation pattern.
External-access deployment requires a separate nginx route
for the gateway (out of scope for v1 to avoid touching the
devop.live nginx config).
Verified live:
- GET http://localhost:3700/biometric/intake?candidate_id=WORKER-100
returns 200 + 22KB HTML body
- GET https://devop.live/lakehouse/biometric/intake?candidate_id=WORKER-100
returns 200 (nginx /lakehouse/ route proxies to :3700)
- Gateway CORS preflight for POST /biometric/subject/{id}/consent
from origin http://localhost:3700: 200 with allow-methods/headers/origin: *
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

9eed982f1a  mcp-server: /_go/* pass-through for G5 cutover slice
Adds an opt-in pass-through that routes Bun mcp-server requests
to the Go gateway when GO_LAKEHOUSE_URL is set. /_go/v1/embed,
/_go/v1/matrix/search etc. flow through Bun frontend → Go
backend without touching any existing tool. Off-by-default
(empty GO_LAKEHOUSE_URL → 503 with rationale); enabled via
systemd drop-in at:
/etc/systemd/system/lakehouse-agent.service.d/go-cutover.conf
This is the first slice of real Bun-fronted traffic hitting the
Go substrate. The /api/* pass-through (Rust gateway) and every
existing tool are unmodified — fully additive cutover step.
Reversible: unset GO_LAKEHOUSE_URL or remove the systemd drop-in
and restart lakehouse-agent.service.
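
A minimal sketch of what such a pass-through looks like in a Bun fetch
handler. The route prefix, env var, and 503 posture are from the commit;
the prefix-stripping and everything else is assumed:

```ts
const GO_LAKEHOUSE_URL = process.env.GO_LAKEHOUSE_URL ?? "";

async function handleGoPassthrough(req: Request): Promise<Response | null> {
  const url = new URL(req.url);
  if (!url.pathname.startsWith("/_go/")) return null; // not ours, fall through
  if (!GO_LAKEHOUSE_URL) {
    // off-by-default: empty env var means the slice is disabled
    return Response.json(
      { error: "go cutover disabled", rationale: "GO_LAKEHOUSE_URL is unset" },
      { status: 503 },
    );
  }
  // assumed mapping: /_go/v1/embed -> ${GO_LAKEHOUSE_URL}/v1/embed
  const target = GO_LAKEHOUSE_URL + url.pathname.slice("/_go".length) + url.search;
  // Bun's fetch accepts a ReadableStream body; Node's would need duplex: "half"
  return fetch(target, { method: req.method, headers: req.headers, body: req.body });
}
```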
Verified end-to-end against persistent Go stack on :4110:
/_go/health → {"status":"ok","service":"gateway"}
/_go/v1/embed → nomic-embed-text-v2-moe vectors (dim=768)
/_go/v1/matrix/search → 3/3 Forklift Operators (role+geo match)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

8de94eba08  cleanup: bump qwen2.5 → qwen3.5:latest in active defaults
[CI: checks failed. lakehouse/auditor flagged 16 blocking issues, e.g. cloud: claim not backed — "Verified end-to-end via playwright on devop.live/lakehouse:"]
The stronger local rung is now the small-model-pipeline tier-1 default
across both Rust legacy + Go rewrite (cf. golangLAKEHOUSE phase 1). Same
JSON-clean property as qwen2.5, more capacity. Ollama still serves both
side-by-side; rollback is a 4-line revert if a workload regresses.
Active-default sites:
- lakehouse.toml [ai] gen_model + rerank_model → qwen3.5:latest
- mcp-server/observer.ts diagnose call (Phase 44 /v1/chat path) → qwen3.5:latest
- mcp-server/index.ts model roster doc → qwen3.5:latest first
- crates/vectord/src/rag.rs ContinuableOpts + RagResponse.model → qwen3.5:latest
Skipped: execution_loop/mod.rs comments describing historic qwen2.5
tool_call quirks — those are documentation of past behavior, not active
defaults. data/_catalog/profiles/*.json are runtime-generated (gitignored),
not in scope for tracked changes.
cargo check -p vectord: clean. No behavioral change in the audit pipeline —
same JSON-clean local model, same think=Some(false) posture, just stronger
upstream.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

4b92d1da91  demo: icon recipe pipeline + role-aware portraits + ComfyUI negative-prompt override
Adds two single-source-of-truth recipe files that drive both the
hot-path render server and the offline pre-render scripts:
- role_scenes.ts: per-role-band scene clauses (clothing + backdrop).
Forklift operators look like forklift operators instead of
collapsing to interchangeable studio shots. SCENES_VERSION mixes
into the headshot cache key so a coordinator tweak refreshes every
matching face on next view.
- icon_recipes.ts: cert / role-prop / status / hazard / empty icons
with deterministic per-recipe seeds + fuzzy text resolver.
ICONS_VERSION suffix on the cached file means edits don't
overwrite in place — misfires are recoverable.
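
The versioned-cache-key trick both recipe files rely on, as a sketch (the
hash choice and paths are assumptions; the version constants and their
semantics are the commit's):

```ts
import { createHash } from "node:crypto";

const SCENES_VERSION = 3; // illustrative values; bumped on any recipe edit
const ICONS_VERSION = 2;

// SCENES_VERSION mixes into the key, so a coordinator tweak changes the key
// and every matching face re-renders on next view.
function headshotCacheKey(workerKey: string, sceneClause: string): string {
  return createHash("md5")
    .update(`${SCENES_VERSION}|${workerKey}|${sceneClause}`)
    .digest("hex");
}

// ICONS_VERSION lands in the filename, so edits never overwrite in place:
// a misfire leaves the previous version on disk, recoverable.
function iconCachePath(category: string, slug: string): string {
  return `data/icons/${category}/${slug}.v${ICONS_VERSION}.png`;
}
```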
Routes (mcp-server/index.ts):
- GET /headshots/_scenes — exposes SCENES + version to the
pre-render script so prompts don't drift between batch and hot-path.
- GET /icons/_recipes — same idea for icons.
- GET /icons/cert?text=... — resolves free-text cert names to a
recipe and 302s to the rendered icon. 404 (not 500) when no recipe
matches so the front-end can hang `onerror="this.remove()"`.
- GET /icons/render/{category}/{slug} — cache-or-render at 256² (8
steps) for crisper edges than 512² when downsampled to 14px.
ComfyUI portrait support (scripts/serve_imagegen.py):
The editorial workflow had `human, person, face` baked into its
negative prompt — actively sabotaging portraits. _comfyui_generate
now accepts negative_prompt/cfg/sampler/scheduler overrides, and
those mix into the cache key so portrait calls don't collapse into
hero-shot cache hits.
scripts/staffing/render_role_pool.py: pre-renders the role-aware
face pool by reading SCENES from /headshots/_scenes — single source
of truth verified at run time.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

f9a408e4c4  Surname → ethnicity routing + ComfyUI fallback for sparse pool buckets + cache-buster
Three problems J flagged ("not matching properly", "same faces", "still
showing old icons") had three different roots:
1. MISMATCH: front-end was first-name only, so "Anna Cruz" / "Patricia
Garcia" / "John Jimenez" all defaulted to caucasian. Added
SURNAMES_HISPANIC / _SOUTH_ASIAN / _EAST_ASIAN / _MIDDLE_EASTERN
dicts to both search.html and console.html. Surname is checked
FIRST (stronger signal for hispanic + asian than first names),
then first-name fallback. Cruz → hispanic, Patel → south_asian,
Nguyen → east_asian, regardless of first name.
2. SAME FACES: pool buckets are uneven — woman/south_asian=3,
man/black=4, woman/middle_eastern=2 — so any worker in those
buckets collapses to 2-4 photos no matter how good the hash is.
/headshots/:key now 302-redirects to /headshots/generate/:key
when the gender × race intersection is below 30 faces. ComfyUI
on-demand gives infinite uniqueness for the sparse buckets
(deterministic-per-worker via djb2 seed). Dense buckets still
serve from the pool — no GPU cost there (gate sketched below).
3. STALE CACHE: Cache-Control was max-age=86400, immutable — pinned
old photos in browsers for 24h after any server-side update.
Dropped to max-age=3600, must-revalidate, and added a v=2
cache-buster query param to all front-end /headshots/ URLs so
existing cached entries are bypassed on next page load.
Also surfacing X-Face-Pool-Bucket / Bucket-Size headers for diagnosis.
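
A sketch of the sparse-bucket gate from (2) plus the relaxed caching from
(3), combined. The 30-face threshold, redirect target, and Cache-Control
values are the commit's; the header names and pool plumbing are
approximations:

```ts
const SPARSE_BUCKET_THRESHOLD = 30;

// djb2: same deterministic-per-worker hash used for pool index + seeds
function djb2(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  return h;
}

function headshotResponse(key: string, bucket: string[], bucketName: string): Response {
  const headers: Record<string, string> = {
    "X-Face-Pool-Bucket": bucketName,                 // diagnostic headers; exact
    "X-Face-Pool-Bucket-Size": String(bucket.length), // names assumed
    "Cache-Control": "max-age=3600, must-revalidate", // was max-age=86400, immutable
  };
  if (bucket.length < SPARSE_BUCKET_THRESHOLD) {
    // sparse gender x race intersection: hand off to on-demand ComfyUI render
    headers["Location"] = `/headshots/generate/${encodeURIComponent(key)}`;
    return new Response(null, { status: 302, headers });
  }
  const idx = djb2(key) % bucket.length; // dense bucket: deterministic pool pick
  return new Response(Bun.file(bucket[idx]), { headers });
}
```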
Verified: playwright run shows surname routing correct (Torres,
Rivera, Alvarez, Gutierrez, Patel, Nguyen, Omar all bucketed
correctly), sparse buckets 302 to ComfyUI, dense buckets stay on
the thumb pool.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

a3b65f314e  Synthetic face pool — 1000 StyleGAN headshots, ComfyUI hot-swap, 60x smaller thumbs
Worker cards now ship a real photo per person instead of monogram tiles:
- fetch_face_pool.py pulls 1000 faces from thispersondoesnotexist.com
- tag_face_pool.py runs deepface for gender/race/age, excludes <22yo
- manifest.jsonl: 952 servable, gender/race buckets populated
- /headshots/_thumbs/ pre-resized to 384px webp (587KB -> 11KB,
60x smaller; without this Chrome's parallel-connection budget
drops ~75% of tiles in a 40-card grid)
- /headshots/:key gender x race x age intersection bucketing with
gender-only fallback when intersection is sparse
- /headshots/generate/:key ComfyUI on-demand for the contractor
profile spotlight (cold ~1.5s, cached ~1ms; worker-derived
djb2 seed makes faces deterministic-per-worker but unique
across workers sharing the same prompt)
- serve_imagegen.py _cache_key() now includes seed (was caching
by prompt only -> 3 different worker seeds collapsed to 1
cached image; verified fix produces 3 distinct md5s, sketched below)
- confidence-default name resolution: Xavier->man+hispanic,
Aisha->woman+black, etc. Every worker resolves to a bucket.
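
The seed-mixing fix in sketch form. serve_imagegen.py is Python; this
TypeScript version just shows the shape (md5 matches the commit's "3
distinct md5s" check, everything else is illustrative):

```ts
import { createHash } from "node:crypto";

// Worker-derived djb2 seed: deterministic per worker, distinct across
// workers that share the same prompt.
function workerSeed(workerKey: string): number {
  let h = 5381;
  for (let i = 0; i < workerKey.length; i++) h = ((h * 33) ^ workerKey.charCodeAt(i)) >>> 0;
  return h;
}

// The bug: _cache_key() hashed the prompt only, so three worker seeds
// collapsed to one cached image. The fix mixes the seed into the key.
function comfyCacheKey(prompt: string, seed: number): string {
  return createHash("md5").update(`${prompt}|seed=${seed}`).digest("hex");
}
```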
End-to-end: playwright run on /?q=forklift+operators+IL -> 21/21
cards loaded, 0 broken, all 384px webp.
Cache + binary pool gitignored; manifest tracked.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

10ed3bc630  demo: real synthetic headshots — fetch pool + serve route + UI wire
Three layers shipped:
1. SCRIPT — scripts/staffing/fetch_face_pool.py
Pulls N synthetic StyleGAN faces from thispersondoesnotexist.com
into data/headshots/face_NNNN.jpg, writes manifest.jsonl. Idempotent:
re-running skips existing files. Optional gender tagging via deepface
(currently unavailable on this box; the script handles ImportError
gracefully and tags everything as untagged). Fetched 198 faces with
concurrency=3 in ~67s.
2. SERVER — /headshots/:key route in mcp-server/index.ts
Loads manifest at first hit, caches in globalThis._faces. Hashes the
key with djb2-style mixing → pool index → returns the JPG. Same
key always gets the same face (deterministic). Accepts
?g=man|woman&e=caucasian|black|hispanic|south_asian|east_asian|middle_eastern
to bias pool selection — the gender/ethnicity buckets fall back to
the full pool when no tagged matches exist. Cache-Control:
86400 immutable so faces ride the browser cache after first hit.
/headshots/__reload re-reads the manifest without restart.
3. UI — search.html + console.html worker cards
Re-added overlay <img> on top of the monogram .av circle. img.src
= /headshots/<encoded-key>?g=<hint>&e=<hint>. img.onerror removes
the failed image so the monogram stays visible if the face pool
isn't fetched / CDN is blocked. .av now has overflow:hidden +
position:relative to clip the img to a perfect circle.
Forced-confident name resolution (J: "we're CREATING the profile,
created as though you truly have the information Xavier is more
likely Hispanic and he's a male"):
genderFor(name) — looks up MALE_NAMES + FEMALE_NAMES,
falls back to a deterministic hash split
so unknown names spread ~50/50. Sets now
include cross-cultural names: Alejandro/
Andres/Mateo/Santiago/Joaquin/Cesar/Hugo/
Felipe/Gerardo/Salvador/Ramon (Hispanic),
Raj/Anil/Vikram/Krishna/Pradeep (South
Asian), Wei/Yi/Hiroshi/Akira/Hyun (East
Asian), Demetrius/Kareem/DaQuan/Khalil
(Black), Omar/Khalid/Hassan/Ahmed/Bilal
(Middle Eastern). FEMALE_NAMES extended
in parallel.
guessEthnicityFromFirstName(name)
— confident default of 'caucasian' for any
name not in the cultural buckets so every
worker resolves to a category the face
pool can be biased toward. Order: ME → Black
→ Hispanic → South Asian → East Asian →
Caucasian (matters where names overlap,
e.g. Aisha appears in ME + Black, biases
toward ME for visual fit).
Both helpers also ported into console.html so the triage backfills
and try-it-yourself rendering get the same hint stack.
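
A condensed sketch of the two helpers, with the name sets abbreviated; the
deterministic hash split and the bucket precedence order are the parts that
matter:

```ts
const djb2 = (s: string): number => {
  let h = 5381;
  for (let i = 0; i < s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  return h;
};

const MALE_NAMES = new Set(["alejandro", "raj", "wei", "demetrius", "omar", "xavier"]);
const FEMALE_NAMES = new Set(["aisha", "patricia", "maria"]);

// Order matters where names overlap: Aisha appears in ME + Black, and the
// ME bucket is checked first for visual fit.
const ETHNIC_BUCKETS: Array<[string, Set<string>]> = [
  ["middle_eastern", new Set(["omar", "khalid", "hassan", "ahmed", "bilal", "aisha"])],
  ["black", new Set(["demetrius", "kareem", "daquan", "khalil"])],
  ["hispanic", new Set(["alejandro", "andres", "mateo", "xavier"])],
  ["south_asian", new Set(["raj", "anil", "vikram"])],
  ["east_asian", new Set(["wei", "yi", "hiroshi"])],
];

function genderFor(name: string): "man" | "woman" {
  const first = name.split(/\s+/)[0].toLowerCase();
  if (MALE_NAMES.has(first)) return "man";
  if (FEMALE_NAMES.has(first)) return "woman";
  return djb2(first) % 2 === 0 ? "man" : "woman"; // unknown names spread ~50/50
}

function guessEthnicityFromFirstName(name: string): string {
  const first = name.split(/\s+/)[0].toLowerCase();
  for (const [bucket, names] of ETHNIC_BUCKETS) if (names.has(first)) return bucket;
  return "caucasian"; // confident default: every worker resolves to a bucket
}
```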
Privacy note in the script + route comments: the synthetic data uses
the worker's name as the seed; production should hash worker_id (not
name) to avoid leaking PII to a third-party CDN. The fetch URL itself
is referenced once per pool build, not per-worker.
.gitignore — added data/headshots/face_*.jpg (~100MB for 198 faces;
the manifest + script are tracked). Re-running the script on a fresh
checkout rebuilds the pool from scratch.
Verified end-to-end via playwright on devop.live/lakehouse:
forklift query → 10 worker cards
10/10 with face images (real synthetic headshots, not monograms)
0/10 broken
Alejandro G. Nelson → ?g=man&e=hispanic
Patricia K. Garcia → ?g=woman&e=caucasian
Each name → unique face, deterministic across loads.
Console triage backfills get the same treatment.

aa56fbce61  demo: profiler — scrolling ticker basket with live prices + click-to-filter
J asked: "kind of like a scrolling ticker that has all of the companies
and their stock prices and where they fit in the map." Implemented as
a horizontal-scroll strip at the top of /profiler:
9 public issuers in this view · quotes via Stooq · 669ms
┌────┬────┬────┬────┬────┬────┬────┐
│TGT │JPM │BALY│ACRE│FCBC│NREF│LSBK│ ← live price + day-change per
│129 │311 │... │... │... │... │... │ ticker, color-banded by
│+.17│+1.5│... │... │... │... │... │ attribution kind
└────┴────┴────┴────┴────┴────┴────┘
Each card carries:
- ticker + live price + day-change % (red/green)
- attribution count + kind (exact / direct / parent / associated)
- left bar color = strongest attribution kind (green for direct
issuer, amber for parent, blue for co-permit associated, gradient
when both direct and associated apply)
- tooltip on hover lists the contractors attributed to this ticker
- click toggles a filter on the table below — clicking TGT cuts the
200-row list down to just TARGET CORPORATION + TORNOW, KYLE F
(Target's primary co-permit contractor)
Server-side:
- entity.ts exports fetchStooqQuote (was internal)
- new POST /intelligence/ticker_quotes — accepts {tickers: [...]},
fans out to Stooq.us in parallel, returns
{ticker, price, price_date, open, high, low, day_change_pct,
stooq_url} per symbol or null for non-US listings (HOC.DE, SKA-B.ST,
LLC.AX). Capped at 50 symbols per call.
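
The fan-out shape, sketched. The 50-symbol cap, the parallel Stooq fetch,
and the null for non-US listings are from the commit; the Stooq CSV URL
format is an assumption, and the day-change here is derived from the open,
which may differ from the shipped math:

```ts
type Quote = { ticker: string; price: number; day_change_pct: number } | null;

async function tickerQuotes(tickers: string[]): Promise<Record<string, Quote>> {
  const capped = [...new Set(tickers)].slice(0, 50); // cap at 50 symbols per call
  const rows = await Promise.all(
    capped.map(async (t): Promise<[string, Quote]> => {
      try {
        // assumed CSV form: symbol,date,time,open,high,low,close,volume
        const csv = await (
          await fetch(`https://stooq.com/q/l/?s=${t.toLowerCase()}.us&f=sd2t2ohlcv&h&e=csv`)
        ).text();
        const [, line] = csv.trim().split("\n");
        const [, , , open, , , close] = line.split(",");
        if (close === "N/D") return [t, null]; // non-US listing: no quote
        const price = Number(close);
        return [t, { ticker: t, price, day_change_pct: ((price - Number(open)) / Number(open)) * 100 }];
      } catch {
        return [t, null]; // one bad symbol doesn't fail the batch
      }
    }),
  );
  return Object.fromEntries(rows);
}
```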
Front-end:
- mcp-server/profiler.html — new .basket-wrap section above the
controls. buildBasket() runs after profiler_index loads:
1. Aggregates unique tickers from .tickers.direct + .associated
across all surfaced contractors
2. Renders shells immediately (ticker symbol + "—" placeholder)
3. Batch-fetches quotes via /intelligence/ticker_quotes
4. Updates each card with price + day-change in place
Click on a card sets a tickerFilter; render() skips rows whose
attributions don't include that ticker. "clear filter" button on
the basket strip resets it.
Verified end-to-end on devop.live/lakehouse/profiler:
Default load → 9 issuers, live prices populated in 669ms
TGT click → table filters to TARGET CORPORATION + TORNOW, KYLE F
(the contractor who runs 3 of Target's recent permits
gets the TGT correlation indicator)
JPM card → $311.63, +1.55% — JPMorgan-adjacent contractors
Tooltip → list of contractors attributed to the ticker

ba41ad2846  demo: profiler index — ticker associations (direct, parent, co-permit)
J's framing: "if a contractor works for Target, future Target contracts
mean money flows back to the contractor — the ticker is an associated
indicator." Now the profiler index attaches three flavors of ticker per
contractor and renders them as colored pills:
green DIRECT contractor IS the public issuer (Target Corp → TGT)
amber PARENT contractor is a subsidiary of a public parent
(Turner Construction → HOC.DE via Hochtief AG)
blue ASSOCIATED contractor co-appears on permits with a public
entity (TORNOW, KYLE F → TGT, 3 shared permits with
TARGET CORPORATION)
The associated flavor is the correlation signal J described — it pulls
the ticker for whoever the contractor has been working *with*, not
just what they are themselves. Most contractors are private; the
associated link is how the moat shows up.
Server-side:
- entity.ts new export `lookupTickerLite(name)` — cheap in-memory
resolver that does only the SEC tickers index lookup + curated
KNOWN_PARENT_MAP check, no per-call SEC profile or Stooq fetch.
~10ms per name after the index is loaded once.
- /intelligence/profiler_index now runs a third Socrata pull
(5K permit pairs in window) to build a co-occurrence map. For each
contractor in the result, attaches:
.tickers.direct[] — name matches a public issuer
.tickers.associated[] — top 5 co-permit partners that resolve
to a ticker, with partner_name +
co_permits count + partner_via reason
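
A sketch of the co-occurrence pass that produces .tickers.associated[]. The
ticker index is stubbed with two entries; the real lookupTickerLite does the
SEC tickers index lookup + curated KNOWN_PARENT_MAP check described above:

```ts
const TICKER_INDEX: Record<string, string> = {
  "TARGET CORPORATION": "TGT",
  "JPMORGAN CHASE BANK": "JPM",
};
function lookupTickerLite(name: string): { ticker: string; via: string } | null {
  const t = TICKER_INDEX[name.toUpperCase()];
  return t ? { ticker: t, via: "sec_tickers_index" } : null;
}

type PermitPair = { contact_1_name: string; contact_2_name: string };

// For one contractor: count co-permit partners, resolve partners to
// tickers, keep the top N that resolve.
function associatedTickers(contractor: string, pairs: PermitPair[], topN = 5) {
  const coPermits = new Map<string, number>();
  for (const p of pairs) {
    const names = [p.contact_1_name, p.contact_2_name];
    if (!names.includes(contractor)) continue;
    const partner = names.find((n) => n !== contractor);
    if (partner) coPermits.set(partner, (coPermits.get(partner) ?? 0) + 1);
  }
  return [...coPermits.entries()]
    .sort((a, b) => b[1] - a[1])
    .flatMap(([partner_name, co_permits]) => {
      const hit = lookupTickerLite(partner_name);
      return hit ? [{ ticker: hit.ticker, partner_name, co_permits, partner_via: hit.via }] : [];
    })
    .slice(0, topN);
}
```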
Front-end:
- mcp-server/profiler.html — new .ticker-pill styles (3 colors per
attribution kind), pills render under the contractor name in the
table. Hover title gives the full reason path.
Verified end-to-end on the public URL:
search="tornow" → blue TGT pill, hint "Associated via co-permits
with TARGET CORPORATION (3 shared permits) —
TARGET CORP"
search="target" → green TGT × 2 (TARGET CORPORATION +
CORPORATION TARGET name variants both resolve
direct to the same issuer)
default top 200 → 15 ticker pills surface across the page including
JPM (via JPMORGAN CHASE BANK co-permits) and
parent-link tickers for the construction majors.

f6a7621b2d  demo: profiler index — directory of every Chicago contractor
J asked for "a profiler index that shows a history of everyone." This
is a /profiler directory page (also reachable via /contractors) that
ranks every contractor who's filed a Chicago permit, by total permit
value. Rows are clickable into the full /contractor profile.
Defaults: since 2025-06-01, min permit cost $250K, top 200 contractors
by total_cost. Server pulls two Socrata GROUP BY queries (one keyed on
contact_1_name, one on contact_2_name), merges them so contractors
listed in either applicant or contractor slot appear once with combined
counts/cost. ~300ms cold.
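
The two-slot merge, sketched (row shape assumed; a contractor listed in
both slots of the same permit would need a dedup step this sketch skips):

```ts
type Agg = { name: string; permits: number; total_cost: number; last_filed: string };

// One GROUP BY keyed on contact_1_name, one on contact_2_name; a contractor
// appearing in either slot lands once with combined counts/cost.
function mergeContactAggs(bySlot1: Agg[], bySlot2: Agg[]): Agg[] {
  const merged = new Map<string, Agg>();
  for (const row of [...bySlot1, ...bySlot2]) {
    const prev = merged.get(row.name);
    merged.set(
      row.name,
      prev
        ? {
            name: row.name,
            permits: prev.permits + row.permits,
            total_cost: prev.total_cost + row.total_cost,
            // ISO dates compare correctly as strings
            last_filed: prev.last_filed > row.last_filed ? prev.last_filed : row.last_filed,
          }
        : { ...row },
    );
  }
  return [...merged.values()].sort((a, b) => b.total_cost - a.total_cost);
}
```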
UI: live search box, since-date selector, min-cost selector, sortable
columns (name / permits / total_cost / last_filed). Live numbers as of
this write: 200 contractors, 1,702 permits, $14.22B aggregate. Filter
"Target" returns TARGET CORPORATION + CORPORATION TARGET (name variants
from Socrata).
Also fixes J's other complaint — "no new contracts, Target is gone":
/intelligence/permit_contracts was hard-capped at $limit=6 + only
the most recent 6 over $250K, so any day with 6 fresh permits would
push older contractors (Target) off the panel entirely. Now defaults
to 24 (caller can pass body.limit up to 100), so 2-3 days of permits
stay on the panel. Added body.contractor — passes a name into the
WHERE so the staffer can pin a specific contractor to the panel
("Target Corporation" → 3 of their permits over $250K).
Server-side:
- new POST /intelligence/profiler_index — paginated contractor index
(since, min_cost, search, limit) with merged contact_1+contact_2
aggregations
- /intelligence/permit_contracts — body.limit + body.contractor
- /profiler and /contractors routes serve profiler.html
Front-end:
- new mcp-server/profiler.html — sortable table, live filter, deep
links to /contractor?name=... (prefix-aware via P, so /lakehouse
works on devop.live)
- search.html + console.html nav: added "Profiler" link
Verified end-to-end via playwright on the public URL.

5f0beffe80  demo: G — per-staffer hot-swap index (synthetic coordinator personas)
Same corpus, different relevance gradient per staffer. Three personas
defined in mcp-server/index.ts STAFFERS roster (Maria/IL, Devon/IN,
Aisha/WI), each with a primary state + secondary cities.
Server-side: /intelligence/chat smart_search accepts a staffer_id body
field; when set, defaults state to the staffer's territory and labels the
playbook context as theirs. The playbook patterns query also defaults its
geo to the staffer's primary city/state, so the recurring-skills/cert
breakdowns reflect what they actually fill, not the global IL prior.
Front-end: a staffer selector dropdown beside the existing state/role
filters. Picking a staffer auto-pins state to their territory, shows a
greeting line, relabels the MEMORY panel as MARIA'S/DEVON'S/AISHA'S
MEMORY, and sends staffer_id to chat for scoping. Dropdown is populated
from /staffers (NOT /api/staffers — the generic /api/* passthrough sends
everything under /api/ to the Rust gateway, which doesn't own the roster).
loadStaffers runs at window-load independently of loadDay's Promise.all,
so the dropdown populates even if simulation/SQL inits error out.
Verified end-to-end via playwright. Same q="forklift operators":
  no staffer → 509 workers across MI/OH/IA, MEMORY label
  as Devon → 89 IN-only (Fort Wayne, Terre Haute), DEVON'S MEMORY
  as Aisha → 16 WI-only (Milwaukee, Madison, Green Bay), AISHA'S MEMORY
As Maria with q="8 production workers near 60607":
  tags: headcount: 8 · zip 60607 → Chicago, IL · role: production · city: Chicago
  20 workers, MARIA'S MEMORY label, top results in Chicago zips
Closes the demo-side build of A-G from the persona plan: A. zip →
city/state, B. headcount, C. bare-name, D. temporal, E. late-worker
triage, F. contractor anchor, G. per-staffer index.

677065de76  demo: P2 — staffer-language routes (zip, headcount, name, late-triage, ingest log)
Built from a playwright run as three personas:
Maria — "8 production workers near 60607 by next Friday, prior-fill at this client"
Devon — "what came in last night?"
Aisha — "Marcus running late site 4422"
Each one previously fell through to smart_search and returned irrelevant
results (geo wrong, headcount ignored, no triage, no temporal). Now:
A. Zip code → city/state lookup. Chicago zips (606xx, 607xx, 608xx)
resolve to {city: Chicago, state: IL}; 13 metro prefixes covered.
Maria's "near 60607" now returns Chicago workers, not Dayton/Green Bay.
B. Headcount parser. "8 production workers" / "12 forklift operators" /
"5 welders" set top_k 1..200, capped 5..25 for SQL+vector LIMIT.
Allows 0-2 role words between the count and the worker noun so
"8 production workers" matches as well as "8 workers".
C. Bare-name profile lookup. Single short capitalized phrase
("Marcus" / "Sarah Lopez") triggers a profile route. Per-token LIKE
AND-joined so "Marcus Rivera" matches "Marcus L. Rivera" without
hardcoding middle initials.
E. Late-worker / no-show triage. Pattern: <Name> (running late|late|
no show|sick|out today|called out|can't make it) — pulls profile +
reliability + responsiveness + recent calls, sources 5 same-role
same-geo backfills sorted by responsiveness, drafts a client SMS
the coordinator can copy. Front-end renders triage card + Copy SMS
button + green backfill list.
F. Contractor name preview anchor. The PROJECT INDEX preview line on
each permit card now wraps contact_1_name and contact_2_name in
anchors to /contractor?name=... — clicking a contractor finally
navigates instead of doing nothing. Click handler stops propagation
so the details element doesn't toggle.
D. Temporal "what came in" route. last night / today / past N hours /
recent — surfaces datasets from the catalog whose updated_at is
within the window, samples one row per dataset to detect worker-
shape, groups by role for worker tables. Schema-agnostic — drop
any dataset and it shows up. Currently sparse because no fresh
ingest has happened today; will populate as ingest runs.
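
Routes A and B in miniature, as referenced above; the prefix table is cut
down to the Chicago entries and the caps are from the commit:

```ts
// A. zip -> city/state: 13 metro prefixes in the real table, 3 shown here
const ZIP_PREFIXES: Record<string, { city: string; state: string }> = {
  "606": { city: "Chicago", state: "IL" },
  "607": { city: "Chicago", state: "IL" },
  "608": { city: "Chicago", state: "IL" },
};

function resolveZip(q: string): { city: string; state: string } | null {
  const zip = q.match(/\b(\d{5})\b/)?.[1];
  return zip ? ZIP_PREFIXES[zip.slice(0, 3)] ?? null : null;
}

// B. headcount: a count, then 0-2 role words, then the worker noun, so
// "8 production workers" matches as well as "8 workers".
const HEADCOUNT = /\b(\d{1,3})\s+((?:[a-z]+\s+){0,2})(?:workers?|operators?|welders?)\b/i;

function parseHeadcount(q: string): { count: number; role?: string } | null {
  const m = q.match(HEADCOUNT);
  if (!m) return null;
  const count = Math.min(25, Math.max(5, Number(m[1]))); // capped 5..25 for SQL+vector LIMIT
  return { count, role: m[2].trim() || undefined };
}
```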
Server: /intelligence/chat smart_search route accepts structured
state/role from the search-form dropdowns (P1 from prior commit) and
now ALSO honors b.state, b.role, q.match for headcount + zip + name +
triage patterns BEFORE falling through to NL parsing.
Front-end: doSearch dispatches on response.type and renders triage,
profile, ingest_log, and miss states with type-specific UI. All DOM
construction uses textContent / appendChild — no innerHTML, no XSS.
Verified end-to-end via playwright drive of devop.live/lakehouse:
Maria → 8 Chicago Production Workers (60685, 60662, 60634)
tags: "headcount: 8 · zip 60607 → Chicago, IL · ..."
Aisha → Marcus V. Campbell card + draft SMS + 5 Quincy IL backfills
"I'm dispatching Scott B. Cooper (96% reliability) to cover."
Devon → ingest_log surfaces successful_playbooks_live (last 1h)
Marcus → 5 profiles (Adams Louisville KY, Jenkins Green Bay WI, ...)
Screenshots: /tmp/persona_v2/{01_maria,02_aisha,03_devon,04_marcus}.png
Restart sequence after these edits: pkill -9 -f "mcp-server/index.ts" ;
cd /home/profit/lakehouse ; bun run mcp-server/index.ts. The bun on
:3700 is not systemd-managed (pre-existing convention).

fb99e92a60  demo: P1 — search filter now actually filters by state and role
The Co-Pilot search box read state and role from the dropdowns (#sst, #srl)
but appended them to the message string as ' in '+st. The server's NL
parser then matched the literal preposition "in" against the case-insensitive
regex /\b(IL|IN|...)\b/i and assigned state IN (Indiana) to every search.
Result: typing "forklift in IL" returned Indiana workers. Same for WI, TX,
any state — all silently became Indiana. That was the "cached/generic
response" the legacy staffing client was seeing.
Two prongs:
1. search.html doSearch() now passes structured fields:
{message, state, role}
instead of munging into the message text. Dropdown selections bypass
NL parsing entirely.
2. /intelligence/chat smart_search route accepts those structured fields
and prefers them over regex archaeology. Falls back to NL parsing only
when fields aren't provided. Fixed the regex too: the prepositional
form (?:in|from)\s+(STATE) wins, the standalone form requires uppercase
(drops /i flag) so the lowercase preposition "in" can no longer match.
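
The fixed matcher, sketched with an abbreviated state list:

```ts
const STATES = "IL|IN|WI|TX|OH|MI|IA"; // abbreviated; the real list covers all states

// Prepositional form stays case-insensitive and wins.
const PREPOSITIONAL = new RegExp(`\\b(?:in|from)\\s+(${STATES})\\b`, "i");
// Standalone form requires uppercase: no /i flag, on purpose.
const STANDALONE = new RegExp(`\\b(${STATES})\\b`);

function parseState(message: string): string | null {
  const prep = message.match(PREPOSITIONAL);
  if (prep) return prep[1].toUpperCase();
  return message.match(STANDALONE)?.[1] ?? null;
}

// parseState("forklift in IL")      -> "IL"
// parseState("forklift in chicago") -> null
// (the lowercase preposition "in" can no longer read as Indiana)
```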
Verified live:
- POST /intelligence/chat {"message":"forklift","state":"IL"}
→ 167 IL forklift operators (Galesburg, Joliet, ...)
- POST /intelligence/chat {"message":"forklift","state":"WI","role":"Forklift Operator"}
→ 16 WI Forklift Operators (Milwaukee, Madison, ...)
- POST /intelligence/chat {"message":"forklift in IL"} (NL fallback)
→ 167 IL workers (regex now correctly distinguishes preposition from state code)
Playwright drove the live UI through devop.live/lakehouse and confirmed the
front-end posts the structured body and the result panel renders the right
state. Restart sequence: kill old bun :3700, bun run mcp-server/index.ts.

b843a23433  mcp: contractor entity-brief drill-down + mobile UX pass
Adds a /contractor page route plus a /intelligence/contractor_profile
endpoint that fans out across OSHA, ticker, history, parent_link, federal
contracts, debarment, NLRB, ILSOS, news, diversity certs, BLS macro — a
single per-contractor portfolio view across every wired source.
search.html: mobile responsive layout, fixed bottom dock with horizontal
scroll-snap, legacy bridge row stacking, viewport overflow guards.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

858954975b  Staffing Co-Pilot UI — architecture-first enrichments + shift clock
[CI: checks failed. lakehouse/auditor flagged 2 blocking issues: todo!() macro call in tests/real-world/scrum_master_pipeline.ts]
J's direction: the dashboard was explanatory but not *actionable* as
a staffing-matrix console. Refactor so the architecture claims from
docs/PRD.md surface as operational signals on every contract card.
Backend (mcp-server/index.ts):
+ GET|POST /intelligence/arch_signals — probes live substrate health
so the dashboard shows instant-search latency, index shape,
playbook-memory entries, and pathway-memory (ADR-021) trace count.
Fires one fresh /vectors/hybrid probe against workers_500k_v1 so
the "instant search" number on screen is live, not cached.
* /intelligence/permit_contracts now times every hybrid call per
contract and returns search_latency_ms, so the card can display
the per-query latency pill (⚡ 342ms).
+ Per-contract computed fields returned from the backend:
search_latency_ms — real /vectors/hybrid duration
fill_probability — base_pct (by pool_size×count ratio)
+ curve [d0, d3, d7, d14, d21, d30]
with cumulative fill% per bucket
economics — avg_pay_rate, gross_revenue,
gross_margin, margin_pct,
payout_window_days [30, 45],
over_bill_count,
over_bill_pool_margin_at_risk
shifts_needed — 1st/2nd/3rd/4th inferred from
permit work_type + description regex
* Pre-existing dangling-brace bug in api() fixed (the `activeTrace`
logging block had been misplaced at module scope, referencing
variables that only existed inside the function). Restart was
failing with "Unexpected }" at line 76. Moved tracing inside the
try block where parsed/path/body/ms are in scope.
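
The fill_probability shape from the computed fields above, sketched. The
commit pins the inputs (base_pct from the pool_size x count ratio, the
cumulative [d0, d3, d7, d14, d21, d30] buckets) but not the exact math, so
the scaling and ramp below are explicit assumptions:

```ts
const FILL_BUCKETS = [0, 3, 7, 14, 21, 30]; // days

function fillProbability(poolSize: number, workersNeeded: number) {
  // assumed ratio scaling: a 10:1 pool-to-need ratio approaches the cap
  const base_pct = Math.min(0.95, poolSize / (Math.max(1, workersNeeded) * 10));
  // assumed ramp: cumulative fill% rises toward base_pct as days pass
  const curve = FILL_BUCKETS.map((d) => ({
    day: d,
    cumulative_fill_pct: Math.round(base_pct * (1 - Math.exp(-d / 10)) * 100),
  }));
  return { base_pct, curve };
}
```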
Frontend (mcp-server/search.html):
+ Top "Substrate Signals" section — 4 live tiles (instant search,
index, playbook memory, pathway matrix). Color-codes latency
(green <100ms, amber <500ms, red otherwise).
+ "24/7 Shift Coverage" section — SVG 24-hour clock with 4 colored
shift arcs (1st/2nd/3rd/4th), current-time needle, center label
showing the live shift, per-shift contract count tiles beside.
4th shift assumes weekend/split; handles 3rd-shift wrap across
midnight by splitting into two arcs.
+ Per-card architecture pills: instant-search latency, SQL-filter
pool-size with k=200 boost note, shift requirements.
+ Per-card fill-probability horizontal stacked bar with day
markers (d0/d3/d7/d14/d21/d30) and per-bucket segment shading
(green → amber → orange → red as time decays).
+ Per-card economics 4-tile grid: Est. Revenue, Est. Margin (with
% colored by health), Payout Window (30–45d standard), Over-Bill
Pool count + margin at risk.
Architecture smoke test (tests/architecture_smoke.ts, earlier commit)
still green: 11/11 pass including the new /intelligence/arch_signals
+ permit_contracts enrichments.
J specifically wanted: "shoot for the stars · hyperfocus · our
architecture is better because it self-regulates, uses hot-swap,
pulls from real data, and shows instant searches from clever
indexing." Every one of those is now a specific visible signal on
the page, not prose in the README.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

5b1fcf6d27  Phase 28-36 body of work
Accumulated since a6f12e2 (Phase 21 Rust port + Phase 27 versioning):
- Phase 36: embed_semaphore on VectorState (permits=1) serializes seed
  embed calls — prevents sidecar socket collisions under concurrent /seed
  stress load
- Phase 31+: run_stress.ts 6-task diverse stress scaffolding;
  run_e2e_rated.ts + orchestrator.ts tightening
- Catalog dedupe cleanup: 16 duplicate manifests removed; canonical
  candidates.parquet (10.5MB -> 76KB) + placements.parquet (1.2MB -> 11KB)
  regenerated post-dedupe; fresh manifests for active datasets
- vectord: harness EvalSet refinements (+181), agent portfolio rotation +
  ingest triggers (+158), autotune + rag adjustments
- catalogd/storaged/ingestd/mcp-server: misc tightening
- docs: Phase 28-36 PRD entries + DECISIONS ADR additions; control-plane
  pivot banner added to top of docs/PRD.md (pointing at
  docs/CONTROL_PLANE_PRD.md, which lands in the next commit)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

52561d10d3  Input normalizer + unified memory query — "seamless with whatever input"
J asked directly: "did we implement our memory findings so that our
knowledge base and our configuration playbook [work] seamlessly with
whatever input they're given?" Honest answer tonight was "one of five
findings shipped, normalizer is the blocker." This closes that gap.
NORMALIZER (tests/multi-agent/normalize.ts):
Accepts structured JSON, natural language, or mixed. Returns canonical
NormalizedInput { role, city, state, count, client, deadline, intent,
confidence, extraction_method, missing_fields } for any downstream
consumer.
Three-tier path:
1. Structured fast-path — already-shaped input skips LLM
2. Regex path — "need 3 welders in Nashville, TN" parses without LLM.
City/state parser tightened to 1-3 capitalized words + "in {city}"
anchor preference + case-exact full-state-name variants to prevent
"Forklift Operators in Chicago" being captured as the city name
3. LLM fallback — qwen3 local with think:false + 400 max_tokens for
inputs the regex can't handle
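
The three tiers in miniature (the regex is deliberately simplified and the
LLM tier is stubbed; the real paths live in tests/multi-agent/normalize.ts):

```ts
type NormalizedInput = {
  role?: string; city?: string; state?: string; count?: number;
  extraction_method: "structured" | "regex" | "llm";
};

async function normalize(input: unknown): Promise<NormalizedInput> {
  // 1. structured fast-path: already-shaped input skips the LLM entirely
  if (typeof input === "object" && input !== null && "role" in input) {
    return { ...(input as object), extraction_method: "structured" } as NormalizedInput;
  }
  // 2. regex path: "need 3 welders in Nashville, TN" parses without an LLM
  const text = String(input);
  const m = text.match(
    /(\d+)\s+(\w+?)s?\s+in\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+){0,2}),?\s*([A-Z]{2})\b/,
  );
  if (m) {
    return { count: Number(m[1]), role: m[2], city: m[3], state: m[4], extraction_method: "regex" };
  }
  // 3. LLM fallback (qwen3 local, think:false, 400 max_tokens per the
  // commit) — stubbed here
  return { extraction_method: "llm" };
}
```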
Unit tests (tests/multi-agent/normalize.test.ts): 9/9 pass. Covers
structured fast-path, misplacement→rescue intent, state-name→abbrev
conversion, regex extraction from natural language, plural role +
full state name edge case, rescue intent keyword precedence, partial
input reporting missing fields, empty object fallthrough, async/sync
parity on clean inputs.
UNIFIED MEMORY QUERY (tests/multi-agent/memory_query.ts):
One function, five parallel fan-outs, one bundle returned:
- playbook_workers — hybrid_search via gateway with use_playbook_memory
- pathway_recommendation — KB recommender for this sig
- neighbor_signatures — K-NN sigs weighted by staffer competence
- prior_lessons — T3 overseer lessons filtered by city/state
- top_staffers — competence-sorted leaderboard
- discovered_patterns — top workers endorsed across past playbooks
for this (role, city, state)
- latency_ms — per-source + total
Every branch is best-effort: one source down doesn't break the bundle.
HTTP ENDPOINT (mcp-server/index.ts):
POST /memory/query with body {input: <anything>} → MemoryQueryResult
Returns the same shape the TS function does. Typed with types.ts for
future UI consumption.
VERIFIED:
curl POST /memory/query with structured {role,city,state,count}
→ extraction_method=structured, 10 playbook workers, top score 0.878
curl POST /memory/query with "I need 3 welders in Nashville, TN"
→ extraction_method=regex (no LLM call), 319ms total, 8 endorsements
for Lauren Gomez auto-discovered as top Nashville Welder
Honest remaining gaps (documented for next phase):
- Mem0 ADD/UPDATE/DELETE/NOOP — we still only ADD + mark_failed
- Zep validity windows — playbook entries have timestamps but no
retirement semantic
- Letta working-memory / hot cache — every query scans all 1560
playbook entries
- Memory profiles / scoped queries — global pool, no per-staffer
private subsets
2 of 5 findings now shipped (multi-strategy retrieval in Rust, input
normalization + unified query in TS). The remaining 3 are architectural
additions queued as Phase 25 items — validity windows first since it's
the most load-bearing for long-running systems.

03d723e7e6  Model matrix — 5 tiers, local hard workers + cloud overseers
config/models.json is the authoritative catalog. Hot path (T1/T2) stays
local; cloud is consulted only for overview (T3), strategic (T4), and
gatekeeper (T5) calls. J named qwen3.5 + newer models (minimax-m2.7,
glm-5, qwen3-next) specifically — all mapped with real reachable IDs
verified against ollama.com/api/tags.
Tier shape:
- t1_hot        mistral + qwen2.5 local — 50-200 calls/scenario
- t2_review     qwen2.5 + qwen3 local — 5-14 calls/event
- t3_overview   gpt-oss:120b cloud — 1-3 calls/scenario
- t4_strategic  qwen3.5:397b + glm-4.7 — 1-10 calls/day
- t5_gatekeeper kimi-k2-thinking — 1-5 calls/day, audit-logged
Rate budgets are declared in-config — Ollama Cloud paid tier is generous,
but we cap overview/strategic/gatekeeper so no single rogue scenario can
blow the day's quota.
Experimental rotation list wired but disabled by default. When enabled,
T4 randomly routes 10% of calls to a rotating minimax/GLM/qwen-next/
deepseek/nemotron/cogito/mistral-large candidate, logs comparisons, and
auto-promotes after 3 rotations of wins.
Playbook versioning SPEC embedded under `playbook_versioning` key: every
seed gets version + parent_id + retired_at + architecture_snapshot, so
when a schema migration breaks a playbook we can pinpoint which change
retired it. Implementation flagged for next sprint (touches gateway +
catalogd + mcp-server) — not wired here.
- scenario.ts now loads config/models.json at init, env vars still override
- mcp-server exposes /models/matrix read-only so UI can render it

0ff091c173  Honesty fixes — no hard-coded counts, dynamic sample CSV
- generateSampleRosterCSV(): 120-180 randomized rows per call,
  timestamp-prefixed IDs (no dedup on re-upload, no static 25-row lie)
- /system/summary: truth via SQL COUNT(*), surfaces manifest_drift
  (caught candidates: manifest 100K, actual 1K)
- search.html: loadSystemSummary() hydrates live counts; removed
  hard-coded 500K strings
- MCP tool description: "candidates (100K)" → "candidates (1K)", added
  "workers_500k (500K)"

af3856b103  Rate/margin awareness: implied pay rate per worker, bill rate per contract
Closes one of the Path 1 trust-break gaps. The scenario we kept flagging:
recruiter calls the system's top pick, worker quotes $35/hr, contract
pays $28/hr. First broken call kills the demo. This fixes it.
Heuristic (no schema change, derived at query time):
- Per worker: implied_pay_rate = role_base + (reliability × 4) + archetype_bump
role_base: Electrician $28, Welder $26, Machine Op $24, Maint $26,
Forklift Op $20, Loader $17, Warehouse Assoc $17, Quality Tech $23,
Production Worker $18 ...
archetype bump: specialist +4, leader +3, reliable +1, else 0
- Per contract: implied_bill_rate = role_base × 1.4
(40% markup — industry norm: pay + overhead + insurance + margin)
- Worker is 'over_bill_rate' when implied_pay_rate > contract's bill_rate
on a candidate-by-candidate basis
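
The heuristic transcribes directly into two small functions; only the
table abbreviation and the fallback base rate are mine:

```ts
const ROLE_BASE_PAY_RATE: Record<string, number> = {
  "Electrician": 28, "Welder": 26, "Machine Operator": 24, "Maintenance Tech": 26,
  "Forklift Operator": 20, "Loader": 17, "Warehouse Associate": 17,
  "Quality Tech": 23, "Production Worker": 18, // abbreviated table
};
const ARCHETYPE_BUMP: Record<string, number> = { specialist: 4, leader: 3, reliable: 1 };
const BILL_MARKUP = 1.4; // 40% markup: pay + overhead + insurance + margin

function impliedPayRate(role: string, reliability: number, archetype?: string): number {
  const base = ROLE_BASE_PAY_RATE[role] ?? 18; // fallback base is an assumption
  return base + reliability * 4 + (archetype ? ARCHETYPE_BUMP[archetype] ?? 0 : 0);
}

function impliedBillRate(role: string): number {
  return (ROLE_BASE_PAY_RATE[role] ?? 18) * BILL_MARKUP;
}

// over_bill_rate flag, candidate by candidate
const overBillRate = (pay: number, bill: number): boolean => pay > bill;

// Sanity check against the live example: Production Worker 18 * 1.4 = 25.20,
// matching the Masonry Work permit's bill_rate.
```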
Backend (mcp-server/index.ts):
- ROLE_BASE_PAY_RATE + BILL_MARKUP constants
- impliedPayRate(worker), impliedBillRate(role) functions
- parseWorkerChunk() extracts role/reliability/archetype from vector text
- enrichWithRates() attaches implied_pay_rate on every /vectors/hybrid
source response. Called from /search and /intelligence/permit_contracts.
- /search accepts optional max_pay_rate number — if set, filters out
workers above that rate and reports pay_rate_filtered_out count.
- /intelligence/permit_contracts returns implied_bill_rate per contract
AND over_bill_rate boolean per candidate.
Frontend (search.html):
- Live Contracts cards show 'bill rate: $X/hr' under the headcount line
- Each candidate shows 'pay $X/hr' in the sub-line; red 'Over bill rate'
chip next to name when their pay exceeds the contract's bill rate
(hover reveals the exact numbers and why it's flagged)
- Main 'Search all workers' results now include 'pay $X/hr' in the
why-text (computeImpliedPayRate mirrored client-side to match Bun)
End-to-end verified live:
- Masonry Work permit, bill_rate $25.20/hr
Kathleen M. Gutierrez pay $25.56/hr → 🔴 OVER
Melissa C. Rivera pay $20.88/hr → 🟢 OK
- /search with max_pay_rate:32 filtered out 1 Toledo Welder above $32
- Main search shows 'pay $28.64/hr' in each result row
When real ATS data replaces synthetic workers_500k, same UI — the
client's real pay_rate column substitutes for the heuristic.

a117ae8b38  Workspace UI — surface Phase 8.5 per-contract state + handoff
Phase 8.5 was fully built on the Rust side (WorkspaceManager with
create/handoff/search/shortlist/activity/get/list, persisted to
object storage, zero-copy handoff between agents). Nothing surfaced
it in the recruiter UI. This page closes that gap.
/workspaces — split-pane UI:
Left: scrollable list of all workspaces, sorted by updated_at.
Each card shows name, tier pill (daily/weekly/monthly/pinned),
current owner, count of shortlisted candidates + activity events.
Right: selected workspace detail with five sections:
1. Header — name, tier, owner, created/updated dates, description,
previous-owners audit trail (each handoff is preserved)
2. Actions row — Hand off, Shortlist candidate, Save search, Log activity
3. Shortlist — candidates flagged with dataset + record_id + notes
4. Saved searches — named SQL queries the staffer wants to rerun
5. Activity — chronological (newest first) log of what happened
Four modals for the add/edit actions (create, handoff, shortlist,
save-search, log-activity). All forms POST through the existing
/api/* passthrough to the gateway's /workspaces/* routes.
End-to-end verified live:
1. Sarah creates 'Demo: Toledo Week 17' workspace
2. Shortlists Helen Sanchez (W500K-4661) with notes about prior endorsements
3. Logs activity: 'called — Helen confirmed Tuesday 7am shift'
4. Hands off to Kim with reason 'end of shift'
5. Kim opens the workspace: owner=kim, previous_owners=[{sarah→kim}],
sees all 3 prior events + the shortlisted Helen
— no data copy, pointer swap only (Phase 8.5 design)
Security: all dynamic content built via el(tag,cls,text) DOM helper.
Zero innerHTML on API-derived strings. Modal close-on-backdrop-click
is guarded to the backdrop element.
Nav updated across all 7 pages. Workspaces is the 7th tab.
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts · Workspaces.

6287558493  Push/daemon presence: background digest + /alerts settings page
Converts the app from 'dashboard you visit' to 'system that finds you.'
Critical for the phone-first staffing shop that won't open a URL —
the system reaches out when something matters.
Daemon:
- Starts once per Bun process (guarded via globalThis sentinel)
- Default interval 15 min (configurable, min 1, max 1440)
- On each cycle, buildDigest() compares current state against prior
snapshot persisted in mcp-server/data/notification_state.json
- Events detected:
- risk_escalation: role moved to tight or critical (was ok/watch)
- deadline_approaching: staffing window falls within warn window
(default 7 days) AND deadline date differs from prior
- memory_growth: playbook_memory entries grew by >= 5 since last run
Channels (all opt-out individually via config):
- console: always on, logged to journalctl -u lakehouse-agent
- file: always on, appends JSONL to mcp-server/data/notifications.jsonl
- webhook: optional, POSTs {text, digest} to configured URL
(Slack incoming-webhook / Discord webhook / any custom endpoint)
Digest format (human-readable, fits in a Slack message):
LAKEHOUSE DIGEST — 2026-04-20 23:24
3 staffing deadlines within window:
• Production Worker — 2d to 2026-04-23 · demand 724
• Maintenance Tech — 4d to 2026-04-25 · demand 32
• Electrician — 5d to 2026-04-26 · demand 34
+779 new playbooks (total 779, 2204 endorsed names)
snapshot: 0 critical · 0 tight · $275,599,326 pipeline
/alerts page:
- Current status table (daemon state, interval, webhook, last run)
- Config form: enable toggle, interval, deadline warn window, webhook
URL + label (saved to data/notification_config.json)
- 'Fire a test digest now' button — force a cycle without waiting
- Recent digests panel shows the last 10 dispatches with full text
End-to-end verified live:
- Daemon armed successfully on startup
- First-run digest dispatched to console + file in <1s
- Events detected correctly: 3 deadlines within 7 days from real
Chicago permit data; 779 playbook entries surfaced as memory growth
- Digest text format is Slack-pastable
- Dispatch records appear in /alerts recent list
TDZ caveat: startAlertsDaemon() invocation moved to end of module so
all const/let in the alerts block evaluate before daemon reads them.
Previously failed with 'Cannot access X before initialization' when
the call lived near the top of the file. Nav added to all 6 pages:
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts.

23eb04a145  Onboarding wizard — ingest any staffing CSV in 3 steps
New /onboard page. Client-facing wizard for getting real data into the
system without engineering help. Flow:
1. Drop a CSV (or click 'Use the sample as my data' — ships a 25-row
   realistic staffing roster under /samples/staffing_roster_sample.csv)
2. Browser parses client-side. Columns auto-typed (text/int/decimal/date).
   PII flagged by name hint AND content regex (emails, phones). First rows
   previewed. Read-only — nothing written yet.
3. Name the dataset (lowercase+underscores). Commit.
4. Post-commit: dataset is live. Shows 4 next steps the operator can take
   (SQL query, vector index, dashboard search, playbook training).
Backend:
- /onboard serves onboard.html
- /samples/*.csv serves CSV files from mcp-server/samples/ with filename
  validation (only [a-zA-Z0-9_-.]+.csv, prevents path traversal)
- /onboard/ingest forwards multipart/form-data to gateway /ingest/file,
  preserving the boundary. The generic /api/* passthrough breaks multipart
  because it reads the body as text and forwards it as JSON; this route
  uses arrayBuffer + the original Content-Type (sketched below).
Verified end-to-end: upload sample roster (25 rows, 12 columns) → parse in
browser → show columns + PII flags + preview → commit → gateway writes
Parquet, registers in catalog → immediately queryable:
  SELECT * FROM onboard_demo2 LIMIT 3
  → Sarah Johnson, Forklift Operator, Chicago, IL, 0.92
Round-trip <1 second. Nav updated on all pages to link Onboard.
Shipped with a sample CSV so the full flow is demonstrable without real
client data. When a real client shows up, same path — they upload their
CSV. No engineering ticket, no code change, no schema pre-definition.
Security: sample filename regex prevents path traversal. CSV parse is
client-side pure JS (no DOM injection). Commit uses existing /ingest/file
validation (schema fingerprint, PII server-side, content-hash dedup).
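
The boundary-preserving forward mentioned above, sketched (the gateway URL
follows the same-box :3100 default named in the intake-UI commit):

```ts
// Why /onboard/ingest can't ride the generic /api/* passthrough: that path
// reads the body as text and re-sends JSON, which destroys the multipart
// boundary. Forwarding the raw bytes with the original Content-Type keeps
// the boundary intact.
async function forwardMultipart(req: Request): Promise<Response> {
  const body = await req.arrayBuffer(); // raw bytes, boundary untouched
  return fetch("http://localhost:3100/ingest/file", {
    method: "POST",
    headers: { "Content-Type": req.headers.get("Content-Type") ?? "" },
    body,
  });
}
```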

468798c9ac  /spec: technical specification — 11-chapter README-equivalent
J's ask: explain the full architecture so someone reading a README
can dispute it or recreate it. The repo isn't public yet; this page
IS the spec until it is.
Ch1 Repository layout — 13 crates + tests/multi-agent + docs + data,
with owned responsibility and file path per crate.
Ch2 Data ingest pipeline (8 steps) — sources (file/inbox/DB/cron),
parse+normalize with ADR-010 conservative typing, PII auto-tag,
dedup, Parquet write, catalog register with fingerprint gate,
mark embeddings stale, queryable immediately.
Ch3 Measurement & indexing — row count / fingerprint / owner /
sensitivity / freshness / lineage per dataset. HNSW vs Lance
tradeoff table with measured numbers (ADR-019). Autotune loop.
Per-profile scoping (Phase 17).
Ch4 Contract inference from external signal — Chicago permit feed
→ role mapping → worker count heuristic → timeline → hybrid
search with boost → pattern discovery → rendered card. All
pre-computed before staffer opens UI.
Ch5 What a CRM can't do — 11-row comparison table of capabilities.
Ch6 How it gets better over time — three paths:
- Phase 19 playbook boost (full math)
- Pattern discovery meta-index
- Autotune agent
Ch7 Scale story: 20 staffers, 300 contracts, midday +20/+1M surge
- Async gateway + per-staffer profile isolation + client blacklists
- 7-step surge handling flow (ingest, stale-mark, incremental refresh,
degradation, hot-swap, autotune re-enter)
- Known pain points: Ollama inference serial, RAM ceiling ~5M on
HNSW (mitigated by Lance), VRAM 1-2 models sequential,
playbook_memory unbounded.
Ch8 Error surfaces & recovery — 10-row table covering ingest schema
conflicts, bucket failures, ghost names, dual-agent drift,
empty searches, Ollama down, gateway restart, schema fingerprint
divergence. Every failure has a named surface and recovery path.
Ch9 Per-staffer context — active profile, workspace, client blacklist,
audit trail, daily summary. How 20 staffers don't see the same UI.
Ch10 Day in the life — 07:00 housekeeping → 07:30 refresh → 08:00
staffer opens → 08:15 drill down → 08:30 Call click → 09:00
second staffer shares memory → 12:30 surge → 14:00 no-show →
15:00 new embeddings live → 17:00 retrospective → 22:00
overnight trials.
Ch11 Known limits & non-goals — deferred (rate/margin, push, confidence
calibration, neural re-ranker, pm compaction, call_log cross-ref)
and explicitly out-of-scope (cloud, ACID, streaming, CRM replace,
proprietary formats, hard multi-tenant).
Also: nav updated on /dashboard, /console, /proof to link /spec.
Every architectural claim in the spec cites either a code path, an
ADR number, or a phase reference so someone skeptical can target
the specific artifact.

76bfa2c8d7  /proof: explain the dual-agent recursive architecture with citations
Previous page was numeric claims without explanations — 'sub-100ms SQL',
'500K vectors in 341ms' etc. Accurate but undefendable without math,
code paths, and ADR references. Expanded to 8 chapters:
Ch1 — Live receipts (unchanged: real gateway tests, pass/fail, timing)
Ch2 — Architecture. 13-crate diagram with per-crate responsibility
table and file paths. gateway → catalogd/queryd/vectord/ingestd
+ aibridge → object_store. References ADRs 1-20.
Ch3 — Dual-agent recursive consensus loop (NEW)
- Role specialization (executor=optimist, reviewer=pessimist)
- Parallel orchestration via Promise.all
- Recursive: sealed playbooks feed playbook_memory → next query
- Termination math: sealed | tool-error abort | drift abort |
turn-cap abort — every path dumps forensic log
- File refs: tests/multi-agent/agent.ts, orchestrator.ts,
scenario.ts, run_e2e_rated.ts
Ch4 — Playbook memory feedback loop (NEW)
- PlaybookEntry shape with embedding
    - Full boost math: similarity * base_weight * decay * penalty
      / n_workers, capped at MAX_BOOST_PER_WORKER (executable sketch
      at the end of this message)
- Temporal decay (e^-age/30, 30d half-life)
- Negative signal (0.5^failures)
- Why k=200: narrow cosine discrimination in nomic-embed-text
- Evidence: compounding test 0 → 0.250 cap in 3 seeds
- persist_sql write-through
- Pattern discovery (Path 2 meta-index)
- File: crates/vectord/src/playbook_memory.rs
Ch5 — ADR citations for each key choice
ADR-001, 008, 012, 015, 019, 020 + Phase 19 design note
Ch6 — Live scale data (unchanged: pulled from /proof.json)
Ch7 — Reproduction recipes: curl for health, sql, hybrid with boost,
patterns, pm stats, and the full dual-agent scenario run
Ch8 — Honest limits (unchanged: synthetic workers_500k, 1K candidates
misaligned to call_log, 7B model imperfection, no rate/margin)
Every architectural claim now cites either the code path
(crates/.../src/file.rs::fn_name) or the ADR (docs/DECISIONS.md).
Someone disputing the system has specific targets to attack.
Mechanism unchanged: /proof serves mcp-server/proof.html via
Bun.file. /proof.json still returns the live test data the page
consumes client-side.
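For the skeptical reader, the Ch4 formula in executable form. A
minimal TypeScript sketch, not the crate's actual API: the
PlaybookEntry field names and the helper are illustrative, and only
the constants and formula come from the text above (the real
implementation lives in crates/vectord/src/playbook_memory.rs):

    // One entry per similar past playbook that endorsed this worker.
    interface PlaybookEntry {
      similarity: number; // cosine vs the current query
      baseWeight: number; // seed weight assigned at /log time
      ageDays: number;    // days since the playbook was sealed
      failures: number;   // recorded failures for this worker
      nWorkers: number;   // workers endorsed by the same playbook
    }

    const MAX_BOOST_PER_WORKER = 0.25; // the +0.250 cap seen in Ch4's evidence

    function boostFor(entries: PlaybookEntry[]): number {
      let total = 0;
      for (const e of entries) {
        const decay = Math.exp(-e.ageDays / 30);   // temporal decay, 30d constant
        const penalty = Math.pow(0.5, e.failures); // negative signal: 0.5^failures
        total += (e.similarity * e.baseWeight * decay * penalty) / e.nWorkers;
      }
      return Math.min(total, MAX_BOOST_PER_WORKER); // cap: no one worker dominates
    }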
|
||
|
|
bb1b471c67 |
Predictive staffing forecast + per-contract timeline
J's ask: move the system from retrospective ranking to predictive
anticipation. Show it tracks the clock, not just the roster.
New endpoint /intelligence/staffing_forecast:
- Pulls 30-day Chicago permit window (200 permits)
- Maps work_type → role via industry heuristic
- Aggregates predicted worker demand per role
- Joins IL bench supply (workers_500k state='IL' group by role)
- Computes coverage_pct, reliable_coverage_pct
- Classifies risk: critical/tight/watch/ok
- Computes earliest staffing deadline per role (permit issue_date
  + 31d = 45d construction start - 14d window; deadline and risk
  math sketched below)
- Surfaces recent Chicago playbook ops for the role-specific memory
New UI 'Staffing Forecast' section ABOVE Live Contracts:
- Top card: total construction value, permit count, workers needed,
  critical/tight role count
- Per-role rows: demand vs available supply, coverage %, deadline
  with red/amber/green urgency coloring
Per-contract timeline on Live Contracts:
- estimated_construction_start, staffing_window_opens, days_to_deadline
- urgency classification: overdue/urgent/soon/scheduled
- card border colored by urgency
- timeline line explicitly shows recruiter: OVERDUE/URGENT + days count
This is the 'system already thinks about when, not just who' surface
J was asking for. CRMs store; this anticipates.
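A minimal sketch of the deadline and risk math above (TypeScript; the
31d/45d/14d constants are from this commit, while the function names
and the coverage cutoffs are illustrative assumptions):

    // Staffing deadline: construction assumed to start ~45d after permit
    // issue, and the staffing window opens 14d before that => issue + 31d.
    function staffingDeadline(issueDate: Date): Date {
      const d = new Date(issueDate);
      d.setDate(d.getDate() + 31);
      return d;
    }

    // The four risk classes are named in the commit; these cutoffs are
    // hypothetical stand-ins, not the endpoint's actual thresholds.
    function riskFor(coveragePct: number): "critical" | "tight" | "watch" | "ok" {
      if (coveragePct < 50) return "critical";
      if (coveragePct < 80) return "tight";
      if (coveragePct < 100) return "watch";
      return "ok";
    }
|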
||
|
|
2595d48535 |
Gap fixes: pattern fallback, narrative citations, call_log plumbing
Closing trust-breaks surfaced in the strategic audit.
A — MEMORY chip renders even when sparse:
  Previously rendered nothing when no trait crossed threshold, which
  recruiters would read as "system has no signal." Now explicitly says
  "memory is sparse for this role+geo — no trait crossed threshold" or
  "no similar past playbooks yet — first fill of this kind will seed
  it." Honest when it doesn't know.
B — Removed /intelligence/learn dead endpoint:
  Legacy CSV-writer path that destructively re-wrote
  successful_playbooks. /log and /log_failure replace it cleanly.
  Leaving dead code confuses future maintainers.
C — Narrative tooltips on Endorsed chips:
  Hovering the green "Endorsed · N playbooks" chip now fetches the
  worker's past operations from successful_playbooks_live and shows a
  story: "Maria — past endorsements: • Welder x2 in Toledo
  (2026-04-15), • Welder x1 in Toledo (2026-04-18)..." Falls back to
  an honest "narrative unavailable" if the seed didn't land in SQL.
D — call_log infrastructure in worker modal:
  New "Recent Contact" section queries call_log JOIN candidates by
  name (query sketched below). Surfaces last 3 call entries with
  timestamp, recruiter, disposition, duration. When empty (which is
  today's reality — candidates table only has 1000 rows vs call_log's
  higher IDs), shows an honest message about the data gap and what
  real ATS integration would unlock.
Honest call: D ships infrastructure. Actual utility depends on
aligning candidate IDs between the candidates table and call_log —
current synthetic data doesn't cross-ref cleanly. When real ATS data
lands, this section becomes the "system knows who we called
yesterday" feature the recruiter needs.
Deferred (would require a dedicated session):
- Rate awareness (needs worker pay_rate + contract bill_rate)
- Push / background daemon (Slack/SMS/email integration)
- Confidence calibration (needs a probabilistic ranking layer)
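The Recent Contact query, sketched (SQL embedded in TypeScript; the
join-by-name comes from this commit, but the column names are
assumptions until the real ATS schema lands):

    // Hypothetical shape of the call_log JOIN candidates lookup.
    const recentContactSql = (workerName: string) => `
      SELECT cl.timestamp, cl.recruiter, cl.disposition, cl.duration
      FROM call_log cl
      JOIN candidates c ON c.name = cl.candidate_name
      WHERE c.name = '${workerName.replace(/'/g, "''")}'
      ORDER BY cl.timestamp DESC
      LIMIT 3`;
|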
||
|
|
cdd12a1438 |
#2: Per-client worker blacklists
New endpoints:
- POST /clients/:client/blacklist { worker_id, name?, reason? }
- GET /clients/:client/blacklist → { client, entries }
- DELETE /clients/:client/blacklist/:worker_id → { removed, total }
Bun /search accepts optional `client` field. When present, loads that
client's blacklist and appends `AND worker_id NOT IN (...)` to the
SQL filter (sketched below). Zero-cost if unused; clean trust-break
avoidance when a
client has previously flagged a worker.
Persistence: mcp-server/data/client_blacklists.json, synchronous
writes via Bun.write. Scale target is hundreds of entries per client
tops — JSON is fine until we hit 10K+ per client.
Verified: worker_id 9326 (Carmen Green) blacklisted for AcmeCorp,
same Chicago Electrician search with client=AcmeCorp returns 196
sql_matches vs 197 without — exactly one excluded.
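A minimal sketch of the /search hook (Bun TypeScript; the JSON path is
from this commit, while the helper name and entry shape are
illustrative):

    import { file } from "bun";

    type Blacklists = Record<string, { worker_id: number; name?: string; reason?: string }[]>;

    // Returns sql_filter unchanged when no client is passed (the zero-cost
    // path), otherwise appends the NOT IN exclusion for that client's
    // flagged workers.
    async function withBlacklist(client: string | undefined, sqlFilter: string): Promise<string> {
      if (!client) return sqlFilter;
      const all: Blacklists = await file("mcp-server/data/client_blacklists.json").json();
      const ids = (all[client] ?? []).map(e => e.worker_id);
      if (ids.length === 0) return sqlFilter;
      return `${sqlFilter} AND worker_id NOT IN (${ids.join(",")})`;
    }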
|
||
|
|
72ee8f006f |
k=200 on /search and /match too — consistency with compounding default
|
||
|
|
99ab0fe623 |
A+B: patterns in main search + compounding bump
A — Patterns surface in main Worker Search:
/intelligence/chat smart_search fallback now calls /patterns in
parallel with hybrid, returns discovered_pattern + matched count.
search.html doSearch renders a green "MEMORY (N playbooks): ..."
chip above results so every recruiter query shows the meta-index
dimension, not just live-contract cards.
B — Compounding proven and default-k bumped:
  Direct compounding test on Chicago Electrician (replay sketched below):
- Run 0 (no seeds): Carmen Green not in top-5, boost 0
- After 3 seeds of identical operation: boost +0.250 (capped),
3 citations, lifted to #1. Each seed adds 1 citation. Cap
prevents one worker from dominating future searches.
- Required k=200 (not 25 or 50) — embedding band is narrow
(cosines 0.55-0.67 across all playbooks regardless of geo).
- Bumped defaults on /search, permit_contracts, and smart_search
to playbook_memory_k=200. Brute-force sub-ms at this scale.
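The replay, as a sketch (the endpoint path matches the routes added in
the Phase 19 wiring commit; the seed payload fields and gateway
address are assumptions):

    const BASE = "http://localhost:8080"; // hypothetical gateway address

    // Seed the identical operation three times, as in the test above.
    const seed = { role: "Electrician", city: "Chicago", state: "IL",
                   endorsed_names: ["Carmen Green"] };
    for (let i = 0; i < 3; i++) {
      await fetch(`${BASE}/vectors/playbook_memory/seed`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(seed),
      });
    }
    // Expected: boost climbs per seed, caps at +0.250 with 3 citations,
    // and the worker lifts to #1 on the next hybrid search with k=200.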
|
||
|
|
5c39c74fe4 |
Live Contracts canvas: Chicago permits × workers_500k × playbook patterns
New devop.live/lakehouse section pairs live public Chicago building
permits with derived staffing contracts, ranked candidates from the
500K worker bench, and meta-index discovered patterns per role+geo.
Makes the Phase 19 boost + Path 2 pattern discovery visible on real
external data, without needing a paying client to demo.
Backend:
- New /intelligence/permit_contracts endpoint
- Fetches 6 recent Chicago permits > $250K from the Socrata API
  - Derives proposed fill: 1 worker per $150K of permit value
    (capped 2-8; heuristic sketched below)
- For each: /vectors/hybrid with use_playbook_memory=true,
playbook_memory_k=25, auto availability>0.5 filter
- For each: /vectors/playbook_memory/patterns with k=25 min_freq=0.3
- Returns permit + proposed contract + top 5 candidates with boosts
and citations + discovered pattern + pattern_matched count
Frontend:
- New "Live Contracts" section on search.html between today's sim
contracts and Market Intelligence
- Per-permit card: cost + work_type + address + proposed role/count
+ pool size + top 3 candidates (with endorsement chip when boost
fires) + memory-derived pattern ("MEMORY (N playbooks): recurring
certifications: OSHA-10 47%, Forklift... · archetype mostly: ...")
Real working demo even without paying clients: shows the system
operating on genuinely external data with our synthetic-data-derived
learning applied.
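The proposed-fill heuristic, sketched (the $150K divisor and 2-8 clamp
are from this commit; the function name is illustrative):

    function proposedHeadcount(permitCostUsd: number): number {
      const raw = Math.round(permitCostUsd / 150_000);
      return Math.min(8, Math.max(2, raw)); // capped 2-8
    }
    // e.g. a $450K permit proposes 3 workers; a $5M permit clamps to 8.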
|
||
|
|
95c26f04f8 |
Path 1 negative signal + Path 2 pattern discovery + name validation
New:
- /vectors/playbook_memory/patterns: meta-index pattern discovery.
  Given a query, finds top-K similar playbooks, pulls each endorsed
  worker's full workers_500k profile, aggregates shared traits (cert
  frequencies, skill frequencies, modal archetype, reliability
  distribution), returns a human-readable discovered_pattern.
  Surfaces signals operators didn't explicitly query — the original
  PRD's "identify things we didn't know" dimension.
- /vectors/playbook_memory/mark_failed: records worker failures per
  (city, state, name). compute_boost_for applies a 0.5^n penalty per
  recorded failure, so 3 failures cut a worker's positive boost to an
  eighth and 5 effectively zero it (penalty sketched below). Path 1
  negative signal — recruiter trust depends on the system NOT
  recommending people who no-showed.
- Bun /log_failure: validates failed_names against workers_500k (same
  ghost-guard as /log), forwards to /mark_failed.
Improved:
- /log now validates endorsed_names against workers_500k for the
  contract's city+state before seeding. Ghost names (names that don't
  correspond to real workers) are rejected in the response and
  excluded from the seed, preventing silent boost failures.
- Bun /search auto-appends `CAST(availability AS DOUBLE) > 0.5` to
  sql_filter when the caller didn't constrain availability. Opt out
  with `include_unavailable: true`. Recruiter trust bug: surfacing
  already-placed workers breaks the first call.
- DEFAULT_TOP_K_PLAYBOOKS 25 → 100. Direct cosine measurement showed
  similarities cluster 0.55-0.67 across all playbooks regardless of
  geo, so k=25 missed relevant geo-matched playbooks. Brute-force is
  still sub-ms at this size.
Verified end-to-end on live data:
- Ghost names rejected on /log + /log_failure
- Availability filter drops unavailable workers from candidate pool
- Pattern discovery on unseen Cleveland OH Welder query returned
  recurring skills (first aid 43%, grinder 43%, blueprint 43%) and
  modal archetype (specialist) across 20 semantically similar past
  playbooks in 0.24s
- Negative signal: Helen Sanchez boost dropped +0.250 → +0.163 after
  3 failures recorded via /log_failure (34% reduction; smaller than
  the raw 0.5^3 factor, presumably because the pre-penalty sum sat
  above the +0.250 cap)
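The penalty curve, sketched (the 0.5^n factor is from this commit; the
function name is illustrative):

    // Each recorded failure for a (city, state, name) tuple halves the
    // positive boost: 1 failure => 0.5x, 3 => 0.125x, 5 => ~0.03x.
    function failurePenalty(failures: number): number {
      return Math.pow(0.5, failures);
    }
|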
||
|
|
20b0289aa9 |
/log validates endorsed names + /search auto-appends availability>0.5
Two gap-fills surfaced by the real test on 2026-04-20:
1. /log no longer seeds endorsed_names that don't exist in
   workers_500k for the contract's (city, state). Previously accepted
   ghost names silently (entry count grew, SQL row landed, but boost
   never fired because no real worker chunk matched the stored
   tuple). Response now reports rejected_ghost_names and explains why
   seeding was skipped.
2. Bun /search auto-appends `CAST(availability AS DOUBLE) > 0.5` to
   sql_filter when the caller didn't constrain availability
   themselves (guard sketched below). Recruiters expect "available
   workers" by default — surfacing someone on an active placement
   would break trust on first contact. Opt out with
   `include_unavailable: true`.
Verified: ghost names rejected end-to-end, real names accepted, mixed
input handled correctly. Availability filter drops ~10 workers from a
305-row Cleveland OH Welder pool to 295 actually-available.
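The guard, sketched (Bun TypeScript; the filter string is from this
commit, while the helper name and the already-constrained check are
illustrative):

    function withAvailabilityDefault(sqlFilter: string, includeUnavailable?: boolean): string {
      if (includeUnavailable) return sqlFilter;               // explicit opt-out
      if (/availability/i.test(sqlFilter)) return sqlFilter;  // caller constrained it
      const guard = "CAST(availability AS DOUBLE) > 0.5";
      return sqlFilter ? `${sqlFilter} AND ${guard}` : guard;
    }
|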
||
|
|
25b7e6c3a7 |
Phase 19 wiring + Path 1/2 work + chain integrity fixes
Backend:
- crates/vectord/src/playbook_memory.rs (new): Phase 19 in-memory boost
store with seed/rebuild/snapshot, plus temporal decay (e^-age/30 per
playbook), persist_to_sql endpoint backing successful_playbooks_live,
and discover_patterns endpoint for meta-index pattern aggregation
(recurring certs/skills/archetype/reliability across similar past fills).
- DEFAULT_TOP_K_PLAYBOOKS bumped 5 → 25; old default silently missed
most boosts when memory had > 25 entries.
- service.rs: new routes /vectors/playbook_memory/{seed,rebuild,stats,
persist_sql,patterns}.
Bun staffing co-pilot (mcp-server/):
- /search, /match, /verify, /proof, /simulation/run, MCP tools all
forward use_playbook_memory:true and playbook_memory_k:25 to the
hybrid endpoint. Boost was previously dark across the entire app.
- /log no longer POSTs to /ingest/file — that endpoint REPLACES the
dataset's object list, so single-row CSV writes were wiping all prior
rows in successful_playbooks (sp_rows went 33→1 in one /log call).
  /log now seeds playbook_memory with canonical short text and calls
  /persist_sql to keep successful_playbooks_live in sync (flow
  sketched at the end of this message).
- /simulation/run cumulative end-of-week CSV write removed for the same
reason. Per-day per-contract /seed (added in this session) is the
accumulating feedback path now.
- search.html addWorkerInsight renders a green "Endorsed · N playbooks"
chip with playbook citations when boost > 0.
Internal Dioxus UI (crates/ui/):
- Dashboard phase list rewritten through Phase 19 (was stuck at "Phase
16: File Watcher" / "Phase 17: DB Connector" — both wrong).
- Removed fabricated "27ms" stat label.
- Ask tab examples + SQL default replaced with real staffing prompts
against candidates/clients/job_orders (was referencing nonexistent
employees/products/events).
- New Playbook tab exposes /vectors/playbook_memory/{stats,rebuild} and
side-by-side hybrid search (boost OFF vs ON) with citations.
Tests (tests/multi-agent/):
- run_e2e_rated.ts: parallel two-agent (mistral + qwen2.5) build phase
+ verifier rating (geo, auth, persist, boost, speed → /10).
- network_proving.ts: continuous build → verify → repeat with
staffing-recruiter profile hot-swap; geo-discrimination check.
- chain_of_custody.ts: single recruiter operation traced through every
layer (Bun /search, direct /vectors/hybrid parity, /log, SQL,
playbook_memory growth, profile activation, post-op boost lift).
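The corrected /log flow, sketched (the route paths match the service.rs
routes named above; the payload fields and gateway address are
assumptions):

    // Seed in-memory playbook memory, then write through to SQL via
    // /persist_sql. Never touches /ingest/file, so prior rows survive.
    async function logFill(base: string, entry: {
      role: string; city: string; state: string; endorsed_names: string[];
    }) {
      await fetch(`${base}/vectors/playbook_memory/seed`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(entry),
      });
      await fetch(`${base}/vectors/playbook_memory/persist_sql`, { method: "POST" });
    }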
|
||
|
|
2da8562c90 |
Interactive permit heat map with live data verification
- Leaflet.js map with dark tiles showing real Chicago building permits
- Dots sized and colored by project cost ($1B+ red, $100M+ orange,
  $10M+ blue) (styling sketched below)
- Hover any dot for project details — address, cost, description, date
- LIVE indicator with green pulse dot
- Timestamp showing when data was fetched
- "Verify source" link goes directly to Chicago Open Data portal
- "Refresh" button re-fetches from the API on click
- Expanded to 50 permits for denser map coverage
- Legend showing dot size scale
No one can say "you just typed those numbers in" when they can click
a dot on the map, see 10000 W OHARE ST, and verify it themselves on
data.cityofchicago.org.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
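The dot styling, sketched (the cost thresholds are from this commit;
the Leaflet wiring, radius scale, and permit field names are
illustrative):

    declare const L: any; // Leaflet global from the page's script tag

    function permitDot(p: { lat: number; lng: number; cost: number; address: string }) {
      const color = p.cost >= 1e9 ? "red" : p.cost >= 1e8 ? "orange" : "blue";
      const radius = Math.max(4, Math.log10(Math.max(p.cost, 1))); // sized by cost
      return L.circleMarker([p.lat, p.lng], { radius, color })
        .bindPopup(`${p.address}: $${p.cost.toLocaleString()}`);
    }
|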
||
|
|
9acbe5c369 |
Market Intelligence: live Chicago building permits → staffing demand forecast
/intelligence/market pulls real permit data from Chicago Open Data API:
- $9.6B in active construction permits
- O'Hare expansion ($730M), new casino ($580M), transit station ($445M)
- Maps permit types to staffing roles (electrical→Electrician,
  masonry→Loader)
- Cross-references with our IL worker bench to show coverage gaps
- Electrician gap: only 1,036 reliable vs 63K estimated demand
Datalake page now shows three intelligence layers:
1. Contract simulation with scenario-driven matching
2. Market Intelligence with live permit data + bench analysis
3. System Learning with fill history and detected patterns
The staffing company sees demand forming before the phone rings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
b16e485be1 |
Every page refresh feeds the learning loop — contracts logged as playbook entries
Each simulation fill now logs: role, headcount, city, state, workers
matched, client, start time, and scenario type. One page refresh =
~20 playbook entries. 4 refreshes = 28 entries with patterns already
forming.
Fixed activity counters: shows Contract Fills, Searches, and Patterns.
Activity feed now shows the actual fill data with worker names and
scenarios.
This is the PRD's learning loop in action — the system records every
successful match so future queries can learn from past decisions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
bba5b826a3 |
Learning loop + smart search on datalake page
Learning Loop:
- /intelligence/learn endpoint logs search→selection as playbook entry
- /intelligence/activity returns learning stats, patterns, and recent
  activity
- Call/SMS buttons trigger logSelection() — records what query led to
  what pick
- "System Learning" card on main page shows searches logged, patterns
  detected, and recent activity feed with timestamps
- Every search-selection pair becomes institutional knowledge stored
  in the lakehouse
Smart Search on Main Page:
- doSearch() now routes through /intelligence/chat (smart NL parser)
- Extracts role, city, state, availability, reliability from natural
  language
- Shows understanding tags so staffer sees what the system parsed
- Returns workers with ZIP codes, availability %, reliability %,
  archetype
- "reliable forklift operator available in Nashville" → 10 Nashville
  forklift operators with ZIP codes, all 86-98% reliable, all
  available — 372ms
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
df71ac7156 |
Smart NL search: extracts role, city, state, availability from natural language
"find me a warehouse worker available today near Nashville" now: - Parses: role=warehouse, city=Nashville, available=true - Builds SQL: role LIKE '%warehouse%' AND city='Nashville' AND availability>0.5 - Returns: 12 Nashville warehouse workers with ZIP codes, availability %, reliability %, skills, certs, and archetype - Shows understanding tags so user sees what the system parsed - 414ms, 12 records — not a generic search, a targeted answer Recognizes 20 role keywords, 40+ cities, 10 states, availability/reliability signals from natural language. Falls through to vector search for anything the parser doesn't catch. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
37804d7195 |
Staffing Intelligence Console: workforce command center with conversational AI
New page at /lakehouse/console — a $200/hr consultant's intelligence
product.
Morning Brief (auto-loads in ~120ms across 500K profiles):
- Workforce Pulse: total, reliable %, elite %, archetype breakdown
- Geographic Bench: state-by-state reliable % with weakest-state alert
- Comeback Watch: 15K improving workers who crossed 80% reliability
- Risk Watch: 5K erratic + 5K silent workers flagged automatically
- Ready & Waiting: available + reliable workers to call first
- Role Supply: 20 roles with supply/available/reliability
Conversational Chat with 5 intelligent routes:
- "Find someone like [Name] but in OH" → vector similarity search
- "Who could handle industrial electrical work?" → semantic role
  discovery (finds workers for roles that DON'T EXIST in the database)
- "What if we lose our top 5 forklift operators?" → scenario analysis
  with risk rating, bench depth, state-by-state breakdown
- "Which workers should we stop placing?" → risk flagging
- Default: hybrid SQL+vector search with LLM summary
Every response shows: query steps, records scanned, response time.
Transparency kills the "AI is making it up" argument.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
37c68d9567 |
Kill all static/fake elements — every number on the page is now live from data
Skeptic-proof audit:
- Worker count queried from database (was hardcoded "500K")
- State/role dropdowns populated from actual data (was hardcoded
  8 states, 6 roles) — now shows 11 states, 21 roles, whatever exists
  in the dataset
- Client names generated combinatorially (20×20=400 combos, was 12
  static)
- Top workers randomized with SQL OFFSET (was same 5 every time)
- Deleted fabricated "Recent Activity" section (fake placement
  history); replaced with transparent "Data Source" showing where
  numbers come from
- Fixed NOTES undefined crash — hybrid search actually returns results
  now (was silently failing, showing 0/X filled on every contract)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
be7436b6f0 |
Diverse scenario engine: 15 weighted staffing situations replace crisis-every-refresh
Simulation now uses weighted random selection across 4 priority tiers
(selection sketched below):
- Urgent (walkoff, quarantine, no-show), High (new client, cert
  expiry, expansion), Medium (recurring, seasonal, medical leave,
  cross-train), Low (future, exploratory)
- Color-coded scenario banners on ALL contracts, not just urgent
- Each scenario carries context (note) + recommended action
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
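Weighted tier selection, sketched (the tier names and scenarios are
from this commit; the weights here are hypothetical):

    const tiers = [
      { weight: 1, scenarios: ["walkoff", "quarantine", "no-show"] },       // urgent
      { weight: 2, scenarios: ["new client", "cert expiry", "expansion"] }, // high
      { weight: 4, scenarios: ["recurring", "seasonal", "medical leave",
                               "cross-train"] },                            // medium
      { weight: 3, scenarios: ["future", "exploratory"] },                  // low
    ];

    function pickScenario(): string {
      const total = tiers.reduce((sum, t) => sum + t.weight, 0);
      let r = Math.random() * total;
      for (const t of tiers) {
        if ((r -= t.weight) < 0) {
          return t.scenarios[Math.floor(Math.random() * t.scenarios.length)];
        }
      }
      return tiers[tiers.length - 1].scenarios[0]; // guard; loop always returns
    }
|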
||
|
|
c0ff7434cb |
Technical deep-dive: architecture explained for non-technical audience
Added 'How This Actually Works' section below the proof page:
1. CRM vs Lakehouse side-by-side — what's different in plain English
2. Your Data Never Leaves — local AI, local storage, your hardware
3. How It Handles Scale — HNSW (RAM, 1ms) + Lance (disk, 5ms at 10M)
4. Hot-Swap Profiles — 4 AI models explained by what they DO
5. Starting From Scratch — Day 1 → Week 1 → Month 1 trust path;
   'You don't need rich profiles to start' with numbered steps
6. What the System Remembers — playbooks as institutional memory that
   'doesn't retire, doesn't forget'
7. Measured Not Promised — table of real numbers with plain English
Addresses the legacy company pushback: explains WHY the architecture
matters, HOW sparse data becomes rich data over time, and that
everything runs on hardware they own with zero cloud dependency.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
2279d9f51d |
Fix: simulation now passes chunk_text — worker cards show full profiles
The simulation was only storing name/doc_id/score but dropping
chunk_text, so worker cards showed 'New — data builds with placements'
for every worker. Now includes the full profile text so cards render
skills (blue), certs (green), archetype (purple), and
reliability/availability meters.
Verified via Playwright: cards now show DeShawn Cook with 6S|Excel|SAP
skills, First Aid/CPR cert, flexible archetype, 72% reliability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
7cb9999451 |
Rebuild search UI: zero dependencies, plain JS, DOM-only, works
Replaced complex dashboard with minimal search.html:
- No external JS/CSS files, no transpilation, no module imports
- Plain JS with .then() chains (no async/await compat issues)
- DOM-only rendering via createElement (no innerHTML with data)
- 20s AbortController timeout so fetch never hangs (pattern sketched
  below)
- Detects /lakehouse/ proxy prefix automatically
- 7KB total, loads in 18ms
Calls lakehouse /vectors/hybrid directly — SQL filters always apply,
works even when HNSW isn't loaded (brute-force fallback).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
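The timeout pattern, sketched with .then() chains to match the page's
no-async/await constraint (the helper name is illustrative):

    function fetchWithTimeout(url: string, ms: number): Promise<Response> {
      const ctrl = new AbortController();
      const timer = setTimeout(() => ctrl.abort(), ms);
      return fetch(url, { signal: ctrl.signal })
        .then(res => { clearTimeout(timer); return res; })
        .catch(err => { clearTimeout(timer); throw err; });
    }

    // usage: fetchWithTimeout(prefix + "/vectors/hybrid", 20000).then(r => r.json())
|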
||
|
|
5c93338f40 |
Fix: gateway defaulted to wrong vector index (10K instead of 50K)
All gateway endpoints pointed to ethereal_workers_v1 (10K, W- prefix)
instead of workers_500k_v1 (50K, W500K- prefix). Filters appeared
broken because the vector results came from the wrong dataset — IDs
matched numerically but belonged to different workers.
Now: every search, match, and hybrid call uses workers_500k_v1.
Verified: 'experienced welder' + state=OH + role=Welder returns 5
Welders in OH (Carmen Perry, Janet White, Rachel Miller, etc).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
7367e5f71d |
Proof page: LIVE side-by-side CRM vs AI — shows, doesn't tell
3 live demo searches run on page load against 500K real profiles:
- 'warehouse help' — CRM: 0, AI: finds Forklift Ops + Loaders
- 'someone good with machines who is dependable' — CRM: 0, AI: finds
  Machine Ops
- 'safety trained worker for chemical plant' — CRM: 0, AI: finds
  OSHA+Hazmat workers
Each shows the actual CRM keyword count (LIKE match) next to the AI
vector results with real worker names, roles, and cities.
Not described — demonstrated. The numbers come from queries that run
when the page loads.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
5aaa3c5c08 |
Mobile responsive: proof page works on phones
Added @media(max-width:768px) breakpoints:
- 2-col grids → single column on mobile
- 3-col grids → single column
- 4-col model cards → 2-col
- Stats grid → 2-col
- Tables: horizontal scroll, smaller text
- Reduced padding and font sizes
- Hero title scales down
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|