94 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
f9a408e4c4 |
Surname → ethnicity routing + ComfyUI fallback for sparse pool buckets + cache-buster
Three problems J flagged ("not matching properly", "same faces", "still
showing old icons") had three different roots:
1. MISMATCH: front-end was first-name only, so "Anna Cruz" / "Patricia
Garcia" / "John Jimenez" all defaulted to caucasian. Added
SURNAMES_HISPANIC / _SOUTH_ASIAN / _EAST_ASIAN / _MIDDLE_EASTERN
dicts to both search.html and console.html. Surname is checked
FIRST (stronger signal for hispanic + asian than first names),
then first-name fallback. Cruz → hispanic, Patel → south_asian,
Nguyen → east_asian, regardless of first name.
2. SAME FACES: pool buckets are uneven — woman/south_asian=3,
man/black=4, woman/middle_eastern=2 — so any worker in those
buckets collapses to 2-4 photos no matter how good the hash is.
/headshots/:key now 302-redirects to /headshots/generate/:key
when the gender × race intersection is below 30 faces. ComfyUI
on-demand gives infinite uniqueness for the sparse buckets
(deterministic-per-worker via djb2 seed). Dense buckets still
serve from the pool — no GPU cost there.
3. STALE CACHE: Cache-Control was max-age=86400, immutable — pinned
old photos in browsers for 24h after any server-side update.
Dropped to max-age=3600, must-revalidate, and added a v=2
cache-buster query param to all front-end /headshots/ URLs so
existing cached entries are bypassed on next page load.
Also surfacing X-Face-Pool-Bucket / Bucket-Size headers for diagnosis.
Verified: playwright run shows surname routing correct (Torres,
Rivera, Alvarez, Gutierrez, Patel, Nguyen, Omar all bucketed
correctly), sparse buckets 302 to ComfyUI, dense buckets stay on
the thumb pool.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
a3b65f314e |
Synthetic face pool — 1000 StyleGAN headshots, ComfyUI hot-swap, 60x smaller thumbs
Worker cards now ship a real photo per person instead of monogram tiles:
- fetch_face_pool.py pulls 1000 faces from thispersondoesnotexist.com
- tag_face_pool.py runs deepface for gender/race/age, excludes <22yo
- manifest.jsonl: 952 servable, gender/race buckets populated
- /headshots/_thumbs/ pre-resized to 384px webp (587KB -> 11KB,
60x smaller; without this Chrome's parallel-connection budget
drops ~75% of tiles in a 40-card grid)
- /headshots/:key gender x race x age intersection bucketing with
gender-only fallback when intersection is sparse
- /headshots/generate/:key ComfyUI on-demand for the contractor
profile spotlight (cold ~1.5s, cached ~1ms; worker-derived
djb2 seed makes faces deterministic-per-worker but unique
across workers sharing the same prompt)
- serve_imagegen.py _cache_key() now includes seed (was caching
by prompt only -> 3 different worker seeds collapsed to 1
cached image; verified fix produces 3 distinct md5s)
- confidence-default name resolution: Xavier->man+hispanic,
Aisha->woman+black, etc. Every worker resolves to a bucket.
End-to-end: playwright run on /?q=forklift+operators+IL -> 21/21
cards loaded, 0 broken, all 384px webp.
Cache + binary pool gitignored; manifest tracked.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
10ed3bc630 |
demo: real synthetic headshots — fetch pool + serve route + UI wire
Three layers shipped:
1. SCRIPT — scripts/staffing/fetch_face_pool.py
Pulls N synthetic StyleGAN faces from thispersondoesnotexist.com
into data/headshots/face_NNNN.jpg, writes manifest.jsonl. Idempotent:
re-running skips existing files. Optional gender tagging via deepface
(currently unavailable on this box; the script handles ImportError
gracefully and tags everything as untagged). Fetched 198 faces with
concurrency=3 in ~67s.
2. SERVER — /headshots/:key route in mcp-server/index.ts
Loads manifest at first hit, caches in globalThis._faces. Hashes the
key with djb2-style mixing → pool index → returns the JPG. Same
key always gets the same face (deterministic). Accepts
?g=man|woman&e=caucasian|black|hispanic|south_asian|east_asian|middle_eastern
to bias pool selection — the gender/ethnicity buckets fall back to
the full pool when no tagged matches exist. Cache-Control:
86400 immutable so faces ride the browser cache after first hit.
/headshots/__reload re-reads the manifest without restart.
3. UI — search.html + console.html worker cards
Re-added overlay <img> on top of the monogram .av circle. img.src
= /headshots/<encoded-key>?g=<hint>&e=<hint>. img.onerror removes
the failed image so the monogram stays visible if the face pool
isn't fetched / CDN is blocked. .av now has overflow:hidden +
position:relative to clip the img to a perfect circle.
Forced-confident name resolution (J: "we're CREATING the profile,
created as though you truly have the information Xavier is more
likely Hispanic and he's a male"):
genderFor(name) — looks up MALE_NAMES + FEMALE_NAMES,
falls back to a deterministic hash split
so unknown names spread ~50/50. Sets now
include cross-cultural names: Alejandro/
Andres/Mateo/Santiago/Joaquin/Cesar/Hugo/
Felipe/Gerardo/Salvador/Ramon (Hispanic),
Raj/Anil/Vikram/Krishna/Pradeep (South
Asian), Wei/Yi/Hiroshi/Akira/Hyun (East
Asian), Demetrius/Kareem/DaQuan/Khalil
(Black), Omar/Khalid/Hassan/Ahmed/Bilal
(Middle Eastern). FEMALE_NAMES extended
in parallel.
guessEthnicityFromFirstName(name)
— confident default of 'caucasian' for any
name not in the cultural buckets so every
worker resolves to a category the face
pool can be biased toward. Order: ME → Black
→ Hispanic → South Asian → East Asian →
Caucasian (matters where names overlap,
e.g. Aisha appears in ME + Black, biases
toward ME for visual fit).
Both helpers also ported into console.html so the triage backfills
and try-it-yourself rendering get the same hint stack.
Privacy note in the script + route comments: the synthetic data uses
the worker's name as the seed; production should hash worker_id (not
name) to avoid leaking PII to a third-party CDN. The fetch URL itself
is referenced once per pool build, not per-worker.
.gitignore — added data/headshots/face_*.jpg (~100MB for 198 faces;
the manifest + script are tracked). Re-running the script on a fresh
checkout rebuilds the pool from scratch.
Verified end-to-end via playwright on devop.live/lakehouse:
forklift query → 10 worker cards
10/10 with face images (real synthetic headshots, not monograms)
0/10 broken
Alejandro G. Nelson → ?g=man&e=hispanic
Patricia K. Garcia → ?g=woman&e=caucasian
Each name → unique face, deterministic across loads.
Console triage backfills get the same treatment.
|
||
|
|
cdf5f5926a |
demo: console — sober worker cards (mirror dashboard styling)
J: "can you update Staffer's Console too the same look." Console
rendered worker rows in three places (Chapter 4 permit-contract
candidates, Chapter 8 triage backfills, Chapter 9 try-it-yourself
results) with the original 28px square avatar + flat backgrounds —
inconsistent with the new dashboard design.
Three changes:
1. CSS — .worker now has a 3px left-edge border that color-codes the
role family, and .av is a 32px circle with a muted dark background
+ 1px ring + monogram initials. Five role-band colors mirror
search.html: warehouse blue / production amber / trades purple /
driver green / lead orange. Plus a .role-pill style matching the
dashboard's small uppercase chip.
2. Helpers — added ROLE_BANDS regex table + roleBand() classifier and
a new workerRow(name, role, detail, opts) builder. Same regex
patterns as search.html so a "Forklift Operator" classifies
identically on every page. opts.endorsed adds the green endorsed
chip; opts.score appends a rank badge.
3. Replaced the three inline avatar+row constructors with workerRow()
calls. Net: console.html lost ~20 lines of duplicated DOM building
while gaining role bands + pills.
Verified end-to-end via playwright on devop.live/lakehouse/console:
Chapter 8 triage scenario "Marcus running late site 4422":
5 backfill rows render with [warehouse] band + WAREHOUSE pill +
monogram avatars (SBC, ETW, SHC, WMG, MEB).
Same sober look as the dashboard worker cards. No emojis, no
cartoons, color-coded role family on the left edge.
|
||
|
|
f92b55615f |
demo: worker cards — sober monogram avatars + role bands (no cartoons)
J: "It's two cartoonish right now the website looks like it was made
by first grade teacher." Pulled the DiceBear personas-style headshots
and the emoji role badges. They were generative-illustration playful;
this is supposed to read like a staffing tool, not a kindergarten
attendance sheet.
Replacement design — restraint, signal, no glyphs:
Avatar: 40px circle, monogram initials, muted dark background
(#161b22), 1px ring (#21262d), white-ish text. No image,
no emoji. Looks like a pre-photo placeholder slot in a
real ATS.
Role band: the role gets classified into one of five families:
WAREHOUSE / PRODUCTION / SKILLED TRADE / DRIVER / LEAD
(regex-based; falls back to the first word of the role
for unknown families). Each family has a single muted
color: blue / amber / purple / green / orange. The
color appears as:
- a 3px left border on the .iworker card
- a 2px left border + matching text color on a small
uppercase pill in the detail line
That's it. No images, no emojis, no per-role illustrations. The
staffer sees role-family at a glance via the band color, name and
initials prominently, full role + city + zip in the detail string
behind the pill. Five colors total instead of an eight-color rainbow.
CSS:
.iworker[data-role-band="warehouse"] etc. → 3px left border
.role-pill[data-rb="warehouse"] etc. → matching pill border
JS:
ROLE_BANDS = 6 regex → band+label entries (warehouse, production,
trades, driver, lead, quality)
roleBand(role) = first matching entry, fallback to first
word of role uppercased
Verified end-to-end via playwright on devop.live/lakehouse:
forklift query → 10 cards
every card → monogram avatar + WAREHOUSE pill (blue band)
no images, no emojis, no rainbow
Restart sequence after these edits:
pkill -9 -f "/home/profit/lakehouse/mcp-server/index.ts"
( setsid bun run /home/profit/lakehouse/mcp-server/index.ts \
> /tmp/mcp-server.log 2>&1 < /dev/null & disown )
|
||
|
|
d571d62e9b |
demo: spec — refresh repo layout + model fleet + per-staffer + paths 8-9
J: "how about devop.live/lakehouse/spec." Spec was anchored on
2026-04-21 state (v2 footer): mistral mentioned in the model matrix,
13 crates not 15, missing validator/truth/auditor crates, no mention
of OpenCode 40-model fleet, no pathway memory, no per-staffer
hot-swap, no Construction Activity Signal Engine, ADR count was 20.
Footer claimed Phases 19-25.
Edits, in order:
Ch1 Repository layout
+ crates/truth/ (ADR-021 rule store)
+ crates/validator/ (Phase 43 — schema/completeness/policy gates)
+ auditor/ (cross-lineage Kimi↔Haiku/Opus auto-promote)
+ scripts/distillation/ (frozen substrate v1.0.0 at e7636f2)
Updated aibridge to mention ProviderAdapter dispatch
Updated gateway to mention OpenAI-compat /v1/* drop-in middleware
Updated mcp-server route list to include /staffers + profiler/contractor pages
Updated config/ to mention modes.toml + providers.toml + routing.toml
Updated docs/ ADR count from 20 → 21
Updated data/ to mention _pathway_memory + _auditor/kimi_verdicts
Ch3 Measurement & indexing
REPLACED stale "Model matrix (Phase 20)" T1-T5 table that
mentioned mistral with the current 5-provider fleet:
ollama / ollama_cloud / openrouter / opencode (40 models, one
sk-* key reaches Claude Opus 4.7, GPT-5.5-pro, Gemini 3.1-pro,
Kimi K2.6, GLM, DeepSeek, Qwen, MiniMax, free) / kimi
ADDED 9-rung cloud-first ladder pseudocode
ADDED N=3 consensus + cross-architecture tie-breaker math
ADDED auditor cross-lineage Kimi K2.6 ↔ Haiku 4.5 + Opus auto-promote
ADDED distillation v1.0.0 freeze paragraph (145 tests, 22/22, 16/16)
Updated Continuation primitive to mention Phase 44 Rust port
Ch5 What a CRM can't do
Extended the table with 6 new capabilities:
- Per-staffer relevance gradient
- Triage in one shot (late-worker → backfills + draft SMS)
- Permit → fill plan derivation
- Public-issuer attribution across contractor graph
- Cross-lineage AI audit on every PR
- Pathway memory (system-level hot-swap, ADR-021)
Ch6 How it gets better over time
Lede updated from 7 paths → 10 paths
NEW Path 7 — Pathway memory (ADR-021)
NEW Path 8 — Per-staffer hot-swap index
NEW Path 9 — Construction Activity Signal Engine
Original Path 7 (observer ingest) renumbered to Path 10
Ch9 Per-staffer context
Lede now anchors Path 8 from Ch6
NEW lead section: Per-staffer hot-swap index — Maria/Devon/Aisha,
same query reshapes per coordinator (167 IL / 89 IN / 16 WI),
MARIA'S MEMORY pill, /staffers endpoint, metro-agnostic by
construction. The original Phase 17 profile / Phase 23 competence
sections retained beneath as the deeper architecture detail.
Ch10 A day in the life
Updated 14:00 emergency event to use the late-worker triage
handler — coordinator types "Dave running late site 4422", gets
profile + draft SMS + 5 backfills + Copy SMS button in 250ms.
The old Click No-show button → /log_failure flow remains valid
(penalty still records); the user-facing surface is the new
triage card.
Ch11 Known limits
REPLACED the Mem0/Letta/Phase-26 era list with current honest
limits: BAI persistence + backtesting, NYC DOB adapter, 12
awaiting public-data sources for contractor profile, rate/margin
awareness, Mem0-style UPDATE/DELETE, Letta hot cache (now 5K
not 1.9K), confidence calibration, SEC fuzzy precision, tighter
pathway+scrum integration.
Footer
v2 2026-04-21 → v3 2026-04-27
Phases 19-25 → 19-45
Lists today's phases: distillation v1.0.0 substrate, gateway as
OpenAI-compat drop-in, mode runner, validator + iterate, ADR-021
pathway memory, per-staffer hot-swap, Construction Activity Signal
Engine.
Nav
+ Profiler link
Date pill v1 · 2026-04-20 → v3 · 2026-04-27
Verified end-to-end on devop.live/lakehouse/spec — 11 chapter h2s
render in order, 67KB page (was 50KB-ish), all internal links resolve.
|
||
|
|
631b0329b1 |
demo: proof — full architecture-page rewrite for current state
J: "needs a rewrite." Old version was anchored on a dual-agent
mistral+qwen2.5 loop that hasn't been the model story for weeks,
called the system 13 crates (it's 15), referenced "Local 7B models"
in the honest-limits section, and had no mention of:
- the 40-model OpenCode fleet via one sk-* key
- the 9-rung cloud-first ladder
- N=3 consensus + cross-architecture tie-breaker
- auditor cross-lineage (Kimi K2.6 ↔ Haiku 4.5, Opus auto-promote)
- distillation v1.0.0 frozen substrate (e7636f2)
- pathway memory (88 traces, 11/11 replays, ADR-021)
- per-staffer hot-swap index
- Construction Activity Signal Engine + BAI + ticker network
- the gateway as OpenAI-compat drop-in middleware
Rewrote into 10 chapters:
1. Receipts — live tests + new live tile showing the Signal Engine
view for THIS load (issuer count, attributed build value,
contractor count, attribution edges)
2. Architecture — corrected to 15 crates with current responsibilities;
ASCII diagram showing OpenAI consumers + MCP + Browser all hitting
gateway /v1/*; provider fleet table with all 5 (ollama, ollama_cloud,
openrouter, opencode 40-model, kimi); validator + truth + auditor
crates added
3. Model fleet — REPLACED the dual-agent mistral story. Now: the
9-rung ladder (kimi-k2:1t through openrouter:free → ollama local),
N=3 consensus + tie-breaker math, auditor Kimi↔Haiku alternation
with Opus auto-promote on big diffs, distillation v1.0.0 freeze
tag e7636f2 (145 tests · 22/22 · 16/16 · bit-identical)
4. Two memory layers — kept playbook content (Phase 19 boost math
still load-bearing), added pathway memory (ADR-021) section with
live counters in the page (88 / 11-11 / 100% reuse rate)
5. Per-staffer hot-swap — NEW. Pseudocode showing how staffer_id
scopes state filter + playbook geo + UI relabel to MARIA'S MEMORY
6. Construction Activity Signal Engine — NEW. Three attribution
flavors (direct, parent, associated), BAI math, cross-metro
replication framing (NYC DOB next, then LA / Houston / Boston)
7. Architectural choices — added ADR-021 row + distillation freeze row
8. Measured at scale — kept (uses /proof.json scale data)
9. Verify or dispute — REFRESHED with current endpoints. Removed the
stale "bun run tests/multi-agent/scenario.ts" recipe; added curl
examples for /v1/health, pathway/stats, per-staffer scoping (3-loop
bash script), late-worker triage, profiler_index, ticker_quotes,
auditor verdicts, distillation acceptance gate
10. What we are NOT claiming — REFRESHED. Removed "Local 7B models"
caveat; added: 12 awaiting public-data sources are placeholders,
SEC name-fuzzy has rare false positives, BAI is a thesis not a
backtest yet, single-metro today
Live data probes added:
loadPathwayLive — fills pwm-traces / pwm-replays / pwm-rate spans
loadSignalLive — renders the LIVE Signal Engine tile under Ch1
Nav also gained a Profiler link to match search.html and console.html.
Verified end-to-end on devop.live/lakehouse/proof:
10 chapters render, 5/5 live tests pass, pathway shows 88 traces +
100% reuse rate, live signal tile shows 11 issuers + $347M attributed
+ 200 contractors + 14 attribution edges. Architecture diagram +
crate table accurate as of HEAD.
|
||
|
|
4c46cf6a21 |
demo: console — three new chapters reflecting recent shipments
J: "it's outdated." Console walkthrough was stuck on the original 6
chapters (legacy-bridge / permits / catalog / ranking demo / playbook
memory / try-it-yourself). Three weeks of new work weren't visible.
Three new chapters added between the existing playbook-memory chapter
and the input box; all pull live data from the running system:
Chapter 6 — Three coordinators, three views of the same corpus
Renders Maria/Devon/Aisha cards from /staffers with their
territories. Frames the per-staffer hot-swap as the relevance
gradient that compounds independently per coordinator. Same query
"forklift operators" returns 89 IN / 16 WI / 167 IL workers
depending on who's acting.
Chapter 7 — The hidden signal — public issuers in your contractor graph
Pulls /intelligence/profiler_index, builds the basket, shows
issuer count + attributed build value + contractor count as the
three top metrics. Lists top 8 issuers with attribution counts
and direct-link to the profiler. This is the BAI / Signal Engine
pitch in walkthrough form: every contractor name is also a forward
indicator on a public equity. Cross-metro replication explicit
in the closing paragraph.
Chapter 8 — When something breaks — triage in one shot
Live triage demo against /intelligence/chat with body
{message:"Marcus running late site 4422"}. Renders the worker
card + draft SMS + 5 backfills + duration_ms. The 250ms-vs-20min
moment, made concrete with real Quincy IL workers.
Chapter 9 (was 6) — Try it yourself
Updated input examples to demonstrate each new route:
"8 production workers near 60607" → headcount + zip parser
"Marcus running late site 4422" → triage handler
"Marcus" → bare-name lookup
"what came in last night" → temporal route
"reliable forklift operators with OSHA certs" → hybrid SQL+vector
Each is a click-to-run link beneath the input.
Two new accent classes: .accent-g (green for issuer-count) and
.accent-r (red for triage event).
Verified end-to-end on devop.live/lakehouse/console: 9 chapters
render, ch6 shows 3 staffer personas, ch7 shows 11 issuers / $347M /
200 contractors, ch8 shows Marcus V. Campbell + draft SMS + 5
backfills.
|
||
|
|
db81fd8836 |
demo: System Activity panel — capability index reflects every recent shipment
Old panel showed playbook ops + search counts and went empty in a
fresh demo (no operations yet). J: "update System Activity to coincide
with all of our recent updates."
Rebuilt as a live capability index — each tile is a thing the
substrate has learned to do, with the metric proving it's running.
Pulled in parallel from /staffers, /system/summary,
/api/vectors/playbook_memory/stats, /api/vectors/pathway/stats,
/intelligence/profiler_index, /intelligence/activity. Each probe
catches its own error so a single missing endpoint doesn't collapse
the panel.
Nine capability cards (verified end-to-end on devop.live/lakehouse):
1. Per-staffer hot-swap index 3 personas (Maria/Devon/Aisha)
2. Construction Activity Signal Engine 11 issuers · $347M attributed
build value · network 11/14
3. Late-worker / no-show triage one-shot — name+late → backfills+SMS
4. Permit → staffing bridge 24/day, every Chicago permit ≥$250K
5. Hybrid SQL + vector search 500K workers · 5,474 playbook entries
6. Schema-agnostic ingestion 36 datasets · 2.98M rows
7. Contractor profile + project index 6 wired · 12 queued sources
8. Pathway memory 88 traces · 11/11 replays · 100%
9. Ticker association network 11 tickers · 3 direct + 11 associated
Each card carries:
- capability title + ship date pill ("baseline" or "shipped 2026-04-27")
- big metric (live, not pre-baked)
- sub-context line in coordinator language
- "why a staffer cares" explanation
- optional "Open →" deep link to the surface (Profiler, Contractor)
Header + intro paragraph reframed: "what the substrate has learned to
do" instead of "what the substrate has learned." Operational learning
(fills, playbooks, hot-swaps) compounds INSIDE each capability; the
panel surfaces the set of capabilities the corpus knows how to express.
Closing operational-stats row at the bottom shows fills/searches/
recent playbooks when /intelligence/activity has any.
|
||
|
|
a789000982 |
demo: profiler — Construction Activity Signal Engine narrative + BAI
J's prompt: shoot for the stars, frame the data corpus's value as a
predictive signal, not just a contractor directory. The thesis is
that every name in this corpus is also a forward indicator on public
equities — permits filed today predict construction starts in ~45
days, staffing in ~30, revenue recognition months later. The
associated-ticker network surfaces this signal before any 10-Q does.
Two new layers above the basket:
1. HERO THESIS PANEL — "Chicago Construction Activity Signal Engine"
header + 3-line value statement, then 4 live metrics:
- BAI (Building Activity Index) — attribution-weighted average of
day-change % across surfaced issuers. Weight = attribution count
so issuers we have more depth on count more. Today: +0.76%
(9 issuers · top contributors FCBC +2.4%, ACRE +1.7%, JPM +1.5%).
Color-coded green/red.
- Indexed build value — total $ of permits attributable to ANY
public issuer in this view. Today: $344M.
- Network depth — issuers / attribution edges. Today: 9 / 15.
This is the "we see what nobody else sees" metric: how many
contractors are bridges from a private builder back to a public
equity holder.
- Market replication roadmap — chips showing "Chicago — live ·
NYC DOB — adapter ready · LA County · Houston BCD · Boston ISD
· DC DCRA". Frames the corpus as metro-agnostic from day one.
2. PER-TICKER ACTIVITY MAP — when a basket card is clicked, a leaflet
map appears below the basket plotting that ticker's geocoded permit
activity. Pulls /intelligence/contractor_profile for up to 6
attributed contractors, merges their geocoded permits, plots on a
dark Chicago tile layer. Color-banded by permit cost (green <$100K,
amber $100K-$1M, red ≥$1M). Click TGT → 23 Target permits across
Chicago; click JPM → JPMorgan-adjacent contractor activity. Cached
per ticker so toggling is instant.
Verified end-to-end on devop.live/lakehouse/profiler:
Default load: hero panel renders with all 4 metrics, basket strip
with 9 issuers + live prices in 669ms.
Click TGT : signal map activates, "23 geocoded permits across
1 contractor", table filters to 2 rows.
Tooltip on basket cards: full reason path including matched name +
contributors attributed to that ticker.
Architecture-side: zero new server code — all metrics computed
client-side from the existing profiler_index + ticker_quotes payloads.
The corpus already had the value; the page just needed to articulate it.
|
||
|
|
aa56fbce61 |
demo: profiler — scrolling ticker basket with live prices + click-to-filter
J asked: "kind of like a scrolling ticker that has all of the companies
and their stock prices and where they fit in the map." Implemented as
a horizontal-scroll strip at the top of /profiler:
9 public issuers in this view · quotes via Stooq · 669ms
┌────┬────┬────┬────┬────┬────┬────┐
│TGT │JPM │BALY│ACRE│FCBC│NREF│LSBK│ ← live price + day-change per
│129 │311 │... │... │... │... │... │ ticker, color-banded by
│+.17│+1.5│... │... │... │... │... │ attribution kind
└────┴────┴────┴────┴────┴────┴────┘
Each card carries:
- ticker + live price + day-change % (red/green)
- attribution count + kind (exact / direct / parent / associated)
- left bar color = strongest attribution kind (green for direct
issuer, amber for parent, blue for co-permit associated, gradient
when both direct and associated apply)
- tooltip on hover lists the contractors attributed to this ticker
- click toggles a filter on the table below — clicking TGT cuts the
200-row list down to just TARGET CORPORATION + TORNOW, KYLE F
(Target's primary co-permit contractor)
Server-side:
- entity.ts exports fetchStooqQuote (was internal)
- new POST /intelligence/ticker_quotes — accepts {tickers: [...]},
fans out to Stooq.us in parallel, returns
{ticker, price, price_date, open, high, low, day_change_pct,
stooq_url} per symbol or null for non-US listings (HOC.DE, SKA-B.ST,
LLC.AX). Capped at 50 symbols per call.
Front-end:
- mcp-server/profiler.html — new .basket-wrap section above the
controls. buildBasket() runs after profiler_index loads:
1. Aggregates unique tickers from .tickers.direct + .associated
across all surfaced contractors
2. Renders shells immediately (ticker symbol + "—" placeholder)
3. Batch-fetches quotes via /intelligence/ticker_quotes
4. Updates each card with price + day-change in place
Click on a card sets a tickerFilter; render() skips rows whose
attributions don't include that ticker. "clear filter" button on
the basket strip resets it.
Verified end-to-end on devop.live/lakehouse/profiler:
Default load → 9 issuers, live prices populated in 669ms
TGT click → table filters to TARGET CORPORATION + TORNOW, KYLE F
(the contractor who runs 3 of Target's recent permits
gets the TGT correlation indicator)
JPM card → $311.63, +1.55% — JPMorgan-adjacent contractors
Tooltip → list of contractors attributed to the ticker
|
||
|
|
ba41ad2846 |
demo: profiler index — ticker associations (direct, parent, co-permit)
J's framing: "if a contractor works for Target, future Target contracts
mean money flows back to the contractor — the ticker is an associated
indicator." Now the profiler index attaches three flavors of ticker per
contractor and renders them as colored pills:
green DIRECT contractor IS the public issuer (Target Corp → TGT)
amber PARENT contractor is a subsidiary of a public parent
(Turner Construction → HOC.DE via Hochtief AG)
blue ASSOCIATED contractor co-appears on permits with a public
entity (TORNOW, KYLE F → TGT, 3 shared permits with
TARGET CORPORATION)
The associated flavor is the correlation signal J described — it pulls
the ticker for whoever the contractor has been working *with*, not
just what they are themselves. Most contractors are private; the
associated link is how the moat shows up.
Server-side:
- entity.ts new export `lookupTickerLite(name)` — cheap in-memory
resolver that does only the SEC tickers index lookup + curated
KNOWN_PARENT_MAP check, no per-call SEC profile or Stooq fetch.
~10ms per name after the index is loaded once.
- /intelligence/profiler_index now runs a third Socrata pull
(5K permit pairs in window) to build a co-occurrence map. For each
contractor in the result, attaches:
.tickers.direct[] — name matches a public issuer
.tickers.associated[] — top 5 co-permit partners that resolve
to a ticker, with partner_name +
co_permits count + partner_via reason
Front-end:
- mcp-server/profiler.html — new .ticker-pill styles (3 colors per
attribution kind), pills render under the contractor name in the
table. Hover title gives the full reason path.
Verified end-to-end on the public URL:
search="tornow" → blue TGT pill, hint "Associated via co-permits
with TARGET CORPORATION (3 shared permits) —
TARGET CORP"
search="target" → green TGT × 2 (TARGET CORPORATION +
CORPORATION TARGET name variants both resolve
direct to the same issuer)
default top 200 → 15 ticker pills surface across the page including
JPM (via JPMORGAN CHASE BANK co-permits) and
parent-link tickers for the construction majors.
|
||
|
|
f6a7621b2d |
demo: profiler index — directory of every Chicago contractor
J asked for "a profiler index that shows a history of everyone." This
is a /profiler directory page (also reachable via /contractors) that
ranks every contractor who's filed a Chicago permit, by total permit
value. Rows are clickable into the full /contractor profile.
Defaults: since 2025-06-01, min permit cost $250K, top 200 contractors
by total_cost. Server pulls two Socrata GROUP BY queries (one keyed on
contact_1_name, one on contact_2_name), merges them so contractors
listed in either applicant or contractor slot appear once with combined
counts/cost. ~300ms cold.
UI: live search box, since-date selector, min-cost selector, sortable
columns (name / permits / total_cost / last_filed). Live numbers as of
this write: 200 contractors, 1,702 permits, $14.22B aggregate. Filter
"Target" returns TARGET CORPORATION + CORPORATION TARGET (name variants
from Socrata).
Also fixes J's other complaint — "no new contracts, Target is gone":
/intelligence/permit_contracts was hard-capped at $limit=6 + only
the most recent 6 over $250K, so any day with 6 fresh permits would
push older contractors (Target) off the panel entirely. Now defaults
to 24 (caller can pass body.limit up to 100), so 2-3 days of permits
stay on the panel. Added body.contractor — passes a name into the
WHERE so the staffer can pin a specific contractor to the panel
("Target Corporation" → 3 of their permits over $250K).
Server-side:
- new POST /intelligence/profiler_index — paginated contractor index
(since, min_cost, search, limit) with merged contact_1+contact_2
aggregations
- /intelligence/permit_contracts — body.limit + body.contractor
- /profiler and /contractors routes serve profiler.html
Front-end:
- new mcp-server/profiler.html — sortable table, live filter, deep
links to /contractor?name=... (prefix-aware via P, so /lakehouse
works on devop.live)
- search.html + console.html nav: added "Profiler" link
Verified end-to-end via playwright on the public URL.
|
||
|
|
31d8ef918c |
demo: contractor links — respect the /lakehouse path prefix
J reported https://devop.live/contractor?name=3115%20W%20POLK%20ST.%20LLC returned 404. Cause: the anchor href was a bare /contractor, which on devop.live routes to the LLM Team UI (port 5000) at the main site root, not the lakehouse mcp-server (which lives under /lakehouse/*). Every page that renders a contractor link now uses the same prefix detector the dashboard already had: var P = location.pathname.indexOf('/lakehouse') >= 0 ? '/lakehouse' : ''; Files updated: - search.html: entity-brief anchor + preview anchor → P+/contractor - console.html: permit-card contractor list → P+/contractor - contractor.html: history.replaceState + back-link + the /intelligence/contractor_profile fetch all use P prefix. The page is reachable at /lakehouse/contractor on the public URL and bare /contractor on localhost; both work without further config. Verified: https://devop.live/lakehouse/contractor?name=3115%20W%20POLK%20ST.%20LLC → 200, 29.9 KB, full profile renders. Contractor has 1 permit on file (a small LLC), 1 geocoded so the heat map plots one marker. |
||
|
|
a1066db87b |
demo: contractor profile — heat map, project index, 12 awaiting sources
The contractor.html click-target J asked for: a separate page (not a
modal, not a fall-through search) showing every angle on a contractor.
Reachable from the Co-Pilot dashboard, the staffers console, and the
search box — all anchor-wrap contractor names to /contractor?name=...
What's new on the page:
1. PROJECT INDEX — build-signal score
Single 0-100 number with the drivers laid out beneath. Driver list
is staffer-readable: "59 Chicago permits in 180d (+30) · OSHA 20
inspections (-25) · federal contractor (+15)". Score weights are
placeholders to be replaced by an ML model once the 12 awaiting
sources ship — the current 6 wired signals would not give a real
model enough features.
2. HEAT MAP — every Chicago permit they've been contact_1 or contact_2
on, last 24 months, plotted on a leaflet dark map. Color by cost
(green <$100K, amber $100K-$1M, red ≥$1M), radius proportional to
cost so the staffer sees where money + activity concentrates. Click
a marker for permit detail (cost, date, work type, address, permit
ID). All 50 of Turner Construction's geocoded recent permits in
Chicago plot end-to-end.
3. ACTIVITY TIMELINE — monthly permit count, bar chart, with the
first/last month labels so the staffer sees momentum. Tooltip on
each bar gives the count and total cost for that month.
4. 12 AWAITING SOURCES — placeholder cards for the public datasets
that would 3× the build-signal feature count. Each card has:
- source name (real, e.g. DOL Wage & Hour, EPA ECHO, MSHA, BBB)
- one-liner in coordinator language ("Has this contractor stiffed
workers? Will they pay our staffing invoices?")
- "Would show:" sample shape so the engineering scope is concrete
Order is staffing-decision relevance:
1. DOL Wage & Hour (WHD violations)
2. State Licensure Boards (active license + expiry)
3. Surety Bond Capacity (bonding ceiling)
4. EPA ECHO Compliance (env violations at sites)
5. DOT/FMCSA Carrier Safety (crash + OOS rates)
6. BBB Complaints + Rating
7. PACER Civil Suits (FLSA / Title VII / ADA)
8. UCC Lien Filings (cash flow distress)
9. D&B / Credit Bureau (PAYDEX, payment behavior)
10. State UI Employer Claims (workforce stability)
11. MSHA Mine Safety (excavation / aggregate / heavy)
12. Registered Apprenticeships (DOL RAPIDS pipeline)
Server-side: entity.ts fetchContractorHistory now pulls the 50 most
recent permits with id + lat/lng + work_description, so the heat map
and timeline have what they need without a second SQL hop. The
ContractorHistory.recent_permits type gained the optional fields.
Front-end: contractor.html got 4 new render sections, leaflet wiring
(stylesheet + script in head), placeholder grid CSS, and a PLACEHOLDERS
const at the bottom with the 12 sources. All popup HTML is built via
DOM construction (textContent + appendChild) — no innerHTML, no XSS.
console.html: contractor names from /intelligence/permit_contracts now
anchor-wrapped to /contractor?name=... so the click-through J described
works from the staffers console too. Click stops propagation so the
permit details element doesn't toggle on the same click.
Verified end-to-end via playwright — Turner Construction profile shows:
PIX score "Mixed signals — review drivers below"
Heat map: "50 permits plotted · green/amber/red"
4 section labels in order
12 placeholder cards in the documented order
|
||
|
|
5f0beffe80 |
demo: G — per-staffer hot-swap index (synthetic coordinator personas)
Same corpus, different relevance gradient per staffer. Three personas defined in mcp-server/index.ts STAFFERS roster (Maria/IL, Devon/IN, Aisha/WI), each with a primary state + secondary cities. Server-side: /intelligence/chat smart_search accepts a staffer_id body field; when set, defaults state to the staffer's territory and labels the playbook context as theirs. The playbook patterns query also defaults its geo to the staffer's primary city/state, so the recurring-skills/cert breakdowns reflect what they actually fill, not the global IL prior. Front-end: a staffer selector dropdown beside the existing state/role filters. Picking a staffer auto-pins state to their territory, shows a greeting line, relabels the MEMORY panel as MARIA'S/DEVON'S/AISHA'S MEMORY, and sends staffer_id to chat for scoping. Dropdown is populated from /staffers (NOT /api/staffers — the generic /api/* passthrough sends everything under /api/ to the Rust gateway, which doesn't own the roster). loadStaffers runs at window-load independently of loadDay's Promise.all so the dropdown populates even if simulation/SQL inits error out. Verified end-to-end via playwright. Same q="forklift operators": no staffer → 509 workers across MI/OH/IA, MEMORY label as Devon → 89 IN-only (Fort Wayne, Terre Haute), DEVON'S MEMORY as Aisha → 16 WI-only (Milwaukee, Madison, Green Bay), AISHA'S MEMORY As Maria with q="8 production workers near 60607": tags: headcount: 8 · zip 60607 → Chicago, IL · role: production · city: Chicago 20 workers, MARIA'S MEMORY label, top results in Chicago zips Closes the demo-side build of A-G from the persona plan: A. zip → city/state, B. headcount, C. bare-name, D. temporal, E. late-worker triage, F. contractor anchor, G. per-staffer index. |
||
|
|
677065de76 |
demo: P2 — staffer-language routes (zip, headcount, name, late-triage, ingest log)
Built from a playwright run as three personas:
Maria — "8 production workers near 60607 by next Friday, prior-fill at this client"
Devon — "what came in last night?"
Aisha — "Marcus running late site 4422"
Each one previously fell through to smart_search and returned irrelevant
results (geo wrong, headcount ignored, no triage, no temporal). Now:
A. Zip code → city/state lookup. Chicago zips (606xx, 607xx, 608xx)
resolve to {city: Chicago, state: IL}; 13 metro prefixes covered.
Maria's "near 60607" now returns Chicago workers, not Dayton/Green Bay.
B. Headcount parser. "8 production workers" / "12 forklift operators" /
"5 welders" set top_k 1..200, capped 5..25 for SQL+vector LIMIT.
Allows 0-2 role words between the count and the worker noun so
"8 production workers" matches as well as "8 workers".
C. Bare-name profile lookup. Single short capitalized phrase
("Marcus" / "Sarah Lopez") triggers a profile route. Per-token LIKE
AND-joined so "Marcus Rivera" matches "Marcus L. Rivera" without
hardcoding middle initials.
E. Late-worker / no-show triage. Pattern: <Name> (running late|late|
no show|sick|out today|called out|can't make it) — pulls profile +
reliability + responsiveness + recent calls, sources 5 same-role
same-geo backfills sorted by responsiveness, drafts a client SMS
the coordinator can copy. Front-end renders triage card + Copy SMS
button + green backfill list.
F. Contractor name preview anchor. The PROJECT INDEX preview line on
each permit card now wraps contact_1_name and contact_2_name in
anchors to /contractor?name=... — clicking a contractor finally
navigates instead of doing nothing. Click handler stops propagation
so the details element doesn't toggle.
D. Temporal "what came in" route. last night / today / past N hours /
recent — surfaces datasets from the catalog whose updated_at is
within the window, samples one row per dataset to detect worker-
shape, groups by role for worker tables. Schema-agnostic — drop
any dataset and it shows up. Currently sparse because no fresh
ingest has happened today; will populate as ingest runs.
Server: /intelligence/chat smart_search route accepts structured
state/role from the search-form dropdowns (P1 from prior commit) and
now ALSO honors b.state, b.role, q.match for headcount + zip + name +
triage patterns BEFORE falling through to NL parsing.
Front-end: doSearch dispatches on response.type and renders triage,
profile, ingest_log, and miss states with type-specific UI. All DOM
construction uses textContent / appendChild — no innerHTML, no XSS.
Verified end-to-end via playwright drive of devop.live/lakehouse:
Maria → 8 Chicago Production Workers (60685, 60662, 60634)
tags: "headcount: 8 · zip 60607 → Chicago, IL · ..."
Aisha → Marcus V. Campbell card + draft SMS + 5 Quincy IL backfills
"I'm dispatching Scott B. Cooper (96% reliability) to cover."
Devon → ingest_log surfaces successful_playbooks_live (last 1h)
Marcus → 5 profiles (Adams Louisville KY, Jenkins Green Bay WI, ...)
Screenshots: /tmp/persona_v2/{01_maria,02_aisha,03_devon,04_marcus}.png
Restart sequence after these edits: pkill -9 -f "mcp-server/index.ts" ;
cd /home/profit/lakehouse ; bun run mcp-server/index.ts. The bun on
:3700 is not systemd-managed (pre-existing convention).
|
||
|
|
fb99e92a60 |
demo: P1 — search filter now actually filters by state and role
The Co-Pilot search box read state and role from the dropdowns (#sst, #srl)
but appended them to the message string as ' in '+st. The server's NL
parser then matched the literal preposition "in" against the case-insensitive
regex /\b(IL|IN|...)\b/i and assigned state IN (Indiana) to every search.
Result: typing "forklift in IL" returned Indiana workers. Same for WI, TX,
any state — all silently became Indiana. That was the "cached/generic
response" the legacy staffing client was seeing.
Two prongs:
1. search.html doSearch() now passes structured fields:
{message, state, role}
instead of munging into the message text. Dropdown selections bypass
NL parsing entirely.
2. /intelligence/chat smart_search route accepts those structured fields
and prefers them over regex archaeology. Falls back to NL parsing only
when fields aren't provided. Fixed the regex too: the prepositional
form (?:in|from)\s+(STATE) wins, the standalone form requires uppercase
(drops /i flag) so the lowercase preposition "in" can no longer match.
Verified live:
- POST /intelligence/chat {"message":"forklift","state":"IL"}
→ 167 IL forklift operators (Galesburg, Joliet, ...)
- POST /intelligence/chat {"message":"forklift","state":"WI","role":"Forklift Operator"}
→ 16 WI Forklift Operators (Milwaukee, Madison, ...)
- POST /intelligence/chat {"message":"forklift in IL"} (NL fallback)
→ 167 IL workers (regex now correctly distinguishes preposition from state code)
Playwright drove the live UI through devop.live/lakehouse and confirmed the
front-end posts the structured body and the result panel renders the right
state. Restart sequence: kill old bun :3700, bun run mcp-server/index.ts.
|
||
|
|
f6af0fd409 |
phase 44 (part 1): migrate TS callers to /v1/chat + add regression guard
Some checks failed
lakehouse/auditor 16 blocking issues: cloud: claim not backed — "Verified end-to-end:"
Migrates the four TypeScript /generate callers to the gateway's
/v1/chat surface so every LLM call lands on /v1/usage and Langfuse:
tests/multi-agent/agent.ts::generate() provider="ollama"
tests/agent_test/agent_harness.ts::callAgent provider="ollama"
bot/propose.ts::generateProposal provider="ollama_cloud"
mcp-server/observer.ts (error analysis) provider="ollama"
Each migration follows the same pattern as the prior generateCloud()
migration (already on /v1/chat from 2026-04-24): replace
`fetch(SIDECAR/generate)` with `fetch(GATEWAY/v1/chat)`, swap the
prompt-style body for OpenAI-compat messages array, extract
content from `choices[0].message.content` instead of `text`.
Same upstream models in every case — gateway is the new home for
the call, transport otherwise unchanged.
Adds scripts/check_phase44_callers.sh — fail-loud regression guard
that exits non-zero if any non-adapter file fetches /generate or
api/generate. Adapter files (crates/gateway, crates/aibridge,
sidecar/) are exempt. Pre-tightening regex flagged prose mentions
in comments; the shipped regex requires `fetch(...)` or
`client.post(...)` shape so comments don't trip it.
Verification:
bun build mcp-server/observer.ts compiles
bun build tests/multi-agent/agent.ts compiles
bun build tests/agent_test/agent_harness.ts compiles
bun build bot/propose.ts compiles
./scripts/check_phase44_callers.sh ✅ clean
systemctl restart lakehouse-observer active
Phase 44 part 2 (deferred):
- crates/aibridge/src/client.rs:118 still posts to sidecar /generate
directly. AiClient is the foundational Rust LLM caller used by
8+ vectord modules; migrating it is a workspace-wide refactor
that needs its own commit. Plan: keep AiClient as the local-
transport layer for the gateway's `provider=ollama` arm, but
introduce a thin `/v1/chat` wrapper for external callers (vectord
autotune, agent, rag, refresh, supervisor, playbook_memory).
- tests/real-world/hard_task_escalation.ts: comment mentions
/api/generate but doesn't actually call it. Comment is left
intentionally as historical context; regex no longer flags it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
0844206660 |
observer + scrum: gold-standard answer corpus for compounding context
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
The compose-don't-add discipline applied to the original ask: when big
models produce good results (scrum reviews + observer escalations),
save them into the matrix indexer so future small-model handlers can
retrieve them as scaffolding. Local model gets near-paid quality from
a fraction of the cost.
New: scripts/build_answers_corpus.ts indexes lakehouse_answers_v1
from data/_kb/scrum_reviews.jsonl + data/_kb/observer_escalations.jsonl.
doc_id prefixes ('review:' vs 'escalation:') let consumers same-file-
gate the prior-reviews case while keeping escalations broad.
observer.ts: buildKbPreamble adds lakehouse_answers_v1 as a third
retrieval source alongside pathway/bug_fingerprints + lakehouse_arch_v1.
qwen3.5:latest synthesis now compresses three lenses into a single
briefing for the cloud reviewer.
scrum_master_pipeline.ts: epilogue dispatches a fire-and-forget rebuild
of lakehouse_answers_v1 after each run so this run's accepted reviews
are retrievable within ~30s. LH_SCRUM_SKIP_ANSWERS_REBUILD=1 disables.
Verified live: kb_preamble grew 416 → 727 chars after wiring third
source; qwen3.5:latest synthesis (702 → 128 tokens) compresses
correctly; deepseek-v3.1-terminus diagnosis (301 → 148 tokens) is
sharper, citing architectural patterns (circuit breaker, adapter
files) instead of generic timeouts. Total cost per escalation
unchanged at ~$0.0002.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
340fca2427 |
observer: route escalation to paid OpenRouter (deepseek-v3.1-terminus)
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
ollama_cloud/qwen3-coder:480b was hitting weekly 429 quota; observer escalations were silently failing 502 with no audit row. Switched escalation cloud-call to deepseek-v3.1-terminus on paid OpenRouter: 671B reasoning specialist, $0.21 in / $0.79 out per M tokens (under the $0.85/M ceiling J set), 164K ctx. End-to-end verified: kb_preamble_chars=416, prompt 245 tokens, completion 155 tokens, ~$0.00018 per escalation. Diagnosis output is specific (cites adapter + route file), not generic. Two-stage chain holds: qwen3.5:latest compresses raw KB hits into a tight briefing, deepseek-v3.1-terminus reasons over the briefing for diagnosis. Audit `mode` field updated to direct_chat_deepseek_v3_1_terminus so downstream consumers can attribute analyses to the correct rung. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
d9bd4c9bdf |
observer: KB enrichment preamble before failure-cluster escalation
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
escalateFailureClusterToLLMTeam now calls a new buildKbPreamble() that mirrors what scrum_master_pipeline does on every per-file review: queries /vectors/pathway/bug_fingerprints + /vectors/search against the lakehouse_arch_v1 corpus, then asks local qwen3.5:latest (provider=ollama) to synthesize a tight briefing. The synthesized preamble prepends the existing escalation prompt so the cloud reviewer sees historical context the same way scrum reviewers do. Reuses existing KB primitives — no new corpora, no new endpoints, no new abstractions. Same code path scrum already exercises 3+ times per review; observer joins the same compounding loop. Audit row gains kb_preamble_chars so we can later track enrichment yield per escalation. Empty preamble (both fingerprints + matrix return nothing) → empty string, prompt unchanged. Verified: qwen3.5:latest synthesis fires for every escalation with non-empty matrix hits (gateway log: 445→72 tokens, 3.1s). Matrix retrieval correctly surfaces PRD Phase 40/44 chunks for chat_completion clusters. Pathway memory stays consistent with scrum (84→87 traces); chat_completion task_class doesn't have fingerprints yet — graceful. Local-model synthesis was J's explicit ask: compress the raw bundle before the cloud call so the briefing is actionable, not a dump. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
0115a60072 |
observer: add /relevance heuristic filter for adjacency pollution
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Matrix retrieval often surfaces high-cosine chunks that are about
symbols the focus file IMPORTS but doesn't define. The reviewer LLM
then hallucinates those imported-crate internals as in-file content
("I see main.rs does X" when X lives in queryd::context).
mcp-server/relevance.ts — pure scorer with five signals:
path_match +1.0 chunk source/doc_id encodes focus path
defined_match +0.6 chunk text mentions focus.defined_symbols
token_overlap +0.4 jaccard of non-stopword tokens
prefix_match +0.3 shared first-2-segment prefix
import_only -0.5 mentions only imported symbols (pollution)
Default threshold 0.3 — tuned empirically on the gateway/main.rs case.
Also fixes a regex bug in the import extractor: the character class
was lowercase-only, so `use catalogd::Registry;` silently never
matched (regex backed off when it hit the uppercase R). Caught by
the test suite.
observer.ts — POST /relevance endpoint wraps filterChunks().
scrum_master_pipeline.ts — fetchMatrixContext gains optional
focusContent param; calls /relevance after collecting allHits and
before sort+top. Opt-out via LH_RELEVANCE_FILTER=0; threshold via
LH_RELEVANCE_THRESHOLD. Fall-open on observer failure.
9 unit tests, all green. Live probe on real shape correctly drops
a 0.7-cosine adjacency-pollution chunk while keeping in-focus hits.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
54689d523c |
observer: fix gateway health probe — text/plain not JSON
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Same bug as matrix-agent-validated 5db0c58. observer.ts:645 did fetch().then(r => r.json()) against /health which returns text/plain "lakehouse ok". r.json() throws on non-JSON, .catch swallows to null, observer exits assuming gateway down. With systemd Restart=on-failure this crash-loops every 5s — confirmed live on matrix-test box today. Fix: r.ok ? r.text() : null. Same shape, accepts the actual content type. Sealed in pathway_memory as TypeConfusion:fetch-health-json. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
4ac56564c0 |
scrum + applier + observer: switch to paid OpenRouter ladder, add Kimi K2.6 + Gemini 2.5
Ollama Cloud was throttled across all 6 cloud rungs in iters 1-9, which forced the loop into 0-review iterations even though the architecture was sound. Swapping to paid OpenRouter unblocks the test path. Ladder changes (top-of-ladder paid models, all under $0.85/M either side): - moonshotai/kimi-k2.6 ($0.74/$4.66, 256K) — capped at 25/hr - x-ai/grok-4.1-fast ($0.20/$0.50, 2M) — primary general - google/gemini-2.5-flash ($0.30/$2.50, 1M) — Google reasoning - deepseek/deepseek-v4-flash ($0.14/$0.28, 1M) — cheap workhorse - qwen/qwen3-235b-a22b-2507 ($0.07/$0.10, 262K) — cheapest big Existing rungs (Ollama Cloud + free OR + local qwen3.5) kept as fallback. Per-model rate limiter (MODEL_RATE_LIMITS in scrum_master_pipeline.ts): - Persists call timestamps to data/_kb/rate_limit_calls.jsonl so caps survive process restarts (autonomous loop spawns a fresh subprocess per iteration; without persistence each iter would reset) - O(1) writes, prune-on-read for the rolling 1h window - Capped models log "SKIP (rate-limited: cap N/hr reached)" and the ladder cycles to the next rung - J directive 2026-04-25: 25/hr on Kimi K2.6 to bound output cost Observer hand-review cloud tier swapped from ollama_cloud/qwen3-coder:480b to openrouter/x-ai/grok-4.1-fast — proven to emit precise semantic verdicts (named "AccessControl::can_access() doesn't exist" specifically in 2026-04-25 tests instead of the heuristic fallback). Applier patch emitter swapped from ollama_cloud/qwen3-coder:480b to openrouter/x-ai/grok-4.1-fast (default; LH_APPLIER_MODEL + LH_APPLIER_PROVIDER override). This was the third LLM call we missed — without it, observer accepts a review but applier never produces patches because its emitter was still hitting the throttled account. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
3f166a5558 |
scrum + observer: hand-review wire — judgment moved out of the inner loop
Pre-2026-04-25 the scrum_master applied a hardcoded grounding-rate gate
inline. That baked policy into the wrong layer — semantic judgment about
whether a review is grounded belongs in the observer (which has Langfuse
traces, sees every response across the system, and can call cloud LLMs
for real evaluation). Scrum should report DATA, observer DECIDES.
What landed:
- scrum_master_pipeline.ts: removed the inline grounding-pct threshold;
every accepted candidate now POSTs to observer's /review endpoint with
{response, source_content, grounding_stats, model, attempt}. Observer
returns {verdict: accept|reject|cycle, confidence, notes}. On observer
failure, scrum falls open to accept (observer is policy, not blocker).
- mcp-server/observer.ts: new POST /review endpoint with two-tier
evaluator. Tier 1: cloud LLM (qwen3-coder:480b at temp=0) hand-reviews
with full context — response + source excerpt + grounding stats — and
emits structured verdict JSON. Tier 2: deterministic heuristic over
grounding pct + total quotes when cloud throttles, marked source:
"heuristic" so consumers can tune it later by comparing against cloud.
- Every verdict persists to data/_kb/observer_reviews.jsonl with full
input snapshot so cloud vs heuristic can be A/B compared once cloud
quota refreshes.
Verified end-to-end: smoke loop iter 1 — observer returned `cycle` on
21% grounding (cycled to next rung), `reject` on 17% (gave up). Iter 2
— `reject` on 12% and 14%. Both UNRESOLVED with honest signal instead
of polluting pathway memory with hallucinated patterns.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
b843a23433 |
mcp: contractor entity-brief drill-down + mobile UX pass
Adds /contractor page route plus /intelligence/contractor_profile endpoint that fans out across OSHA, ticker, history, parent_link, federal contracts, debarment, NLRB, ILSOS, news, diversity certs, BLS macro — single per-contractor portfolio view across every wired source. search.html: mobile responsive layout, fixed bottom dock with horizontal scroll-snap, legacy bridge row stacking, viewport overflow guards. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
858954975b |
Staffing Co-Pilot UI — architecture-first enrichments + shift clock
Some checks failed
lakehouse/auditor 2 blocking issues: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
J's direction: the dashboard was explanatory but not *actionable* as
a staffing-matrix console. Refactor so the architecture claims from
docs/PRD.md surface as operational signals on every contract card.
Backend (mcp-server/index.ts):
+ GET|POST /intelligence/arch_signals — probes live substrate health
so the dashboard shows instant-search latency, index shape,
playbook-memory entries, and pathway-memory (ADR-021) trace count.
Fires one fresh /vectors/hybrid probe against workers_500k_v1 so
the "instant search" number on screen is live, not cached.
* /intelligence/permit_contracts now times every hybrid call per
contract and returns search_latency_ms, so the card can display
the per-query latency pill (⚡ 342ms).
+ Per-contract computed fields returned from the backend:
search_latency_ms — real /vectors/hybrid duration
fill_probability — base_pct (by pool_size×count ratio)
+ curve [d0, d3, d7, d14, d21, d30]
with cumulative fill% per bucket
economics — avg_pay_rate, gross_revenue,
gross_margin, margin_pct,
payout_window_days [30, 45],
over_bill_count,
over_bill_pool_margin_at_risk
shifts_needed — 1st/2nd/3rd/4th inferred from
permit work_type + description regex
* Pre-existing dangling-brace bug in api() fixed (the `activeTrace`
logging block had been misplaced at module scope, referencing
variables that only existed inside the function). Restart was
failing with "Unexpected }" at line 76. Moved tracing inside the
try block where parsed/path/body/ms are in scope.
Frontend (mcp-server/search.html):
+ Top "Substrate Signals" section — 4 live tiles (instant search,
index, playbook memory, pathway matrix). Color-codes latency
(green <100ms, amber <500ms, red otherwise).
+ "24/7 Shift Coverage" section — SVG 24-hour clock with 4 colored
shift arcs (1st/2nd/3rd/4th), current-time needle, center label
showing the live shift, per-shift contract count tiles beside.
4th shift assumes weekend/split; handles 3rd-shift wrap across
midnight by splitting into two arcs.
+ Per-card architecture pills: instant-search latency, SQL-filter
pool-size with k=200 boost note, shift requirements.
+ Per-card fill-probability horizontal stacked bar with day
markers (d0/d3/d7/d14/d21/d30) and per-bucket segment shading
(green → amber → orange → red as time decays).
+ Per-card economics 4-tile grid: Est. Revenue, Est. Margin (with
% colored by health), Payout Window (30–45d standard), Over-Bill
Pool count + margin at risk.
Architecture smoke test (tests/architecture_smoke.ts, earlier commit)
still green: 11/11 pass including the new /intelligence/arch_signals
+ permit_contracts enrichments.
J specifically wanted: "shoot for the stars · hyperfocus · our
architecture is better because it self-regulates, uses hot-swap,
pulls from real data, and shows instant searches from clever
indexing." Every one of those is now a specific visible signal on
the page, not prose in the README.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
25ea3de836 |
observer: fix LLM Team escalation — route to /v1/chat qwen3-coder:480b instead of dead mode
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "journal event verified live (total_events_created 0→1 after probe)."
Discovery 2026-04-24: /api/run?mode=code_review returns "Unknown mode"
(error response from llm_team_ui.py). The 2026-04-24 observer escalation
wiring pointed at a dead endpoint and was failing silently. My earlier
claim of "9 registered LLM Team modes" came from GET probes that all
returned 405 — I interpreted that as "POST-only endpoints exist" when
it just means "GET is not allowed for anything, and on POST only `extract`
is registered."
Rewire: observer's escalateFailureClusterToLLMTeam now hits
POST /v1/chat { provider: "ollama_cloud", model: "qwen3-coder:480b", ... }
which is the same coding-specialist rung 2 of the scrum ladder that
reliably produces substantive reviews. Probe shows 1240 chars of
substantive analysis in ~8.7s.
Also tightens scrum_applier:
* MODEL default: kimi-k2:1t → qwen3-coder:480b (coding specialist)
* Size gate: 20 lines → 6 lines (surgical patches only)
* Max patches per file: 3 → 2
* Prompt: explicit forbidden-actions list (no struct renames, no
function-signature changes, no new modules) and mechanical-only
whitelist
These changes produced the first auto-applied commit (96b46cd), which
landed a 2-line import addition that passed cargo check. Zero-to-one.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
8b77d67c9c |
OpenRouter rescue ladder + tree-split reduce fix + observer→LLM Team + scrum_applier + first auto-applied patch
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "journal event verified live (total_events_created 0→1 after probe)."
## Infrastructure (scrum loop hardening)
crates/gateway/src/v1/openrouter.rs — new OpenRouter provider
Direct HTTPS to openrouter.ai/api/v1/chat/completions with OpenAI-compatible shape.
Key resolution: OPENROUTER_API_KEY env → /home/profit/.env → /root/llm_team_config.json
(shares LLM Team UI's quota). Added after iter 5 hit repeated Ollama Cloud 502s on
kimi-k2:1t — different provider backbone as rescue rung. Unit tests pin the URL
stripping and OpenAI wire shape.
crates/gateway/src/v1/mod.rs + main.rs
Added `"openrouter" | "openrouter_free"` arm to /v1/chat dispatch.
V1State.openrouter_key loaded at startup via openrouter::resolve_openrouter_key()
mirroring the Ollama Cloud pattern. Startup log:
"v1: OpenRouter key loaded — /v1/chat provider=openrouter enabled"
tests/real-world/scrum_master_pipeline.ts
* 9-rung ladder — kimi-k2:1t → qwen3-coder:480b → deepseek-v3.1:671b →
mistral-large-3:675b → gpt-oss:120b → qwen3.5:397b → openrouter/gpt-oss-120b:free
→ openrouter/gemma-3-27b-it:free → local qwen3.5:latest.
Added qwen3-coder:480b as rung 2 after live probes confirmed it rescues
kimi-k2:1t 502s cleanly (0.9s latency, substantive reviews).
Dropped devstral-2 (displaced by qwen3-coder); dropped kimi-k2.6 (not available);
dropped minimax-m2.7 (returned 0 chars / 400 thinking tokens).
Local fallback promoted qwen3.5:latest per J's direction 2026-04-24.
* MAX_ATTEMPTS bumped 6 → 9 to accommodate the rescue tier.
* Tree-split scratchpad fixed — was concatenating shard markers directly
into the reviewer input, causing kimi-k2:1t to write titles like
"Forensic Audit Report – file.rs (shard 3)". Now uses internal §N§
markers during accumulation and runs a proper reduce step that
collapses per-shard digests into ONE coherent file-level synthesis
with markers stripped. Matches the Phase 21 aibridge::tree_split
map→reduce design. Fallback to stripped scratchpad if reducer returns thin.
tests/real-world/scrum_applier.ts — NEW (737 lines)
The auto-apply pipeline. Reads scrum_reviews.jsonl, filters rows where
gradient_tier ∈ {auto, dry_run} AND confidence_avg ≥ MIN_CONF (default 90),
asks the reviewer model for concrete old_string/new_string patch JSON,
applies via text replacement, runs cargo check after each file, commits
if green and reverts if red. Deny-list: /etc/, config/, ops/, auditor/,
docs/, data/, mcp-server/, ui/, sidecar/, scripts/. Hard caps: per-patch
confidence ≥ MIN_CONF, old_string must be exactly unique, max 20 lines per
patch. Never runs on main without explicit LH_APPLIER_BRANCH override.
Audit trail in data/_kb/auto_apply.jsonl.
Empirical behavior (dry-run over iter 4 reviews):
5 eligible files → 1 green commit-ready, 2 build-red reverts, 2 all-rejected
The build-green gate caught 2 bad patches before they'd have merged.
mcp-server/observer.ts — LLM Team code_review escalation
When a sig_hash accumulates ≥3 failures (ESCALATION_THRESHOLD), fire-and-forget
POST /api/run?mode=code_review at localhost:5000 with the failure cluster context.
Parses facts/entities/relationships/file_hints from the response. Writes to a
new data/_kb/observer_escalations.jsonl surface. Answers J's vision of the
observer triggering richer LLM Team calls when failures pile up.
Non-blocking: runs parallel to existing qwen2.5 analyzer, never replaces it.
Tracks escalated sig_hashes in a session-local Set to avoid re-hammering
LLM Team when a cluster persists across observer cycles.
crates/aibridge/src/context.rs
First auto-applied patch produced by scrum_applier.ts (dry-run path —
applier writes files in dry-run mode but doesn't commit; bug noted for
iter 6 fix). Adds #[deprecated] annotation to the inline estimate_tokens
helper pointing callers to the centralized shared::model_matrix::ModelMatrix
entry point (P21-002 — duplicate token-estimator surfaces). Cargo check
passes with the annotation (verified by applier's own build gate).
## Visual Control Plane (UI)
ui/server.ts — Bun.serve on :3950 with /data/* fan-out:
/data/services, /data/reviews, /data/metrics, /data/trust, /data/overrides,
/data/findings, /data/outcomes, /data/audit_facts, /data/file/:path,
/data/refactor_signals, /data/search?q=, /data/signal_classes,
/data/logs/:svc (journalctl tail per systemd unit), /data/scrum_log.
Bug fix: tryFetch always attempts JSON.parse before falling back to text
— observer's Bun.serve returns JSON without application/json content-type,
which was displaying stats as a raw string ("0 ops" on map) before.
ui/index.html + ui.css — dark neo-brutalist shell. 6 views:
MAP (D3 force-graph + overlays) / TRACE (per-file iter history) /
TRAJECTORY (signal-class cards + refactor-signals table + reverse-index
search box) / METRICS (every card has SOURCE + GOOD lines explaining
where the number comes from and what target trajectory means) /
KB (card grid with tooltips on every field) / CONSOLE (per-service
journalctl tabs).
ui/ui.js — polling client, D3 wiring, signal-class panel, refactor-signals
table, reverse-index search, per-service console tabs. Bug fix:
renderNodeContext had Object.entries() iterating string characters when
/health returned a plain string — now guards with typeof check so
"lakehouse ok" renders as one row instead of "0 l / 1 a / 2 k / ...".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
21fd3b9c61 |
Scrum-driven fixes: P5-001 auth wired, P42-001 truth evaluator, P9-001 journal on ingest
Some checks failed
lakehouse/auditor 2 blocking issues: cloud: claim not backed — "| **P9-001** (partial) | `crates/ingestd/src/service.rs` | **3 → 6** ↑↑↑ | `journal.record_ing
Apply the highest-confidence findings from the Phase 0→42 forensic sweep
after four scrum-master iterations under the adversarial prompt. Each fix
is independently validated by a later scrum iteration scoring the same
file higher under the same bar.
Code changes
────────────
P5-001 — crates/gateway/src/auth.rs + main.rs
api_key_auth was marked #[allow(dead_code)] and never wrapped around
the router, so `[auth] enabled=true` logged a green message and
enforced nothing. Now wired via from_fn_with_state, with constant-time
header compare and /health exempted for LB probes.
P42-001 — crates/truth/src/lib.rs
TruthStore::check() ignored RuleCondition entirely — signature looked
like enforcement, body returned every action unconditionally. Added
evaluate(task_class, ctx) that actually walks FieldEquals / FieldEmpty /
FieldGreater / Always against a serde_json::Value via dot-path lookup.
check() kept for back-compat. Tests 14 → 24 (10 new exercising real
pass/fail semantics). serde_json moved to [dependencies].
P9-001 (partial) — crates/ingestd/src/service.rs
Added Optional<Journal> to IngestState + a journal.record_ingest() call
on /ingest/file success. Gateway wires it with `journal.clone()` before
the /journal nest consumes the original. First-ever internal mutation
journal event verified live (total_events_created 0→1 after probe).
Iter-4 scrum scored these files higher under same prompt:
ingestd/src/service.rs 3 → 6 (P9-001 visible)
truth/src/lib.rs 3 → 4 (P42-001 visible)
gateway/src/auth.rs 3 → 4 (P5-001 visible)
gateway/src/execution_loop 4 → 6 (indirect)
storaged/src/federation 3 → 4 (indirect)
Infrastructure additions
────────────────────────
* tests/real-world/scrum_master_pipeline.ts
- cloud-first ladder: kimi-k2:1t → deepseek-v3.1:671b → mistral-large-3:675b
→ gpt-oss:120b → devstral-2:123b → qwen3.5:397b (deep final thinker)
- LH_SCRUM_FORENSIC env: injects SCRUM_FORENSIC_PROMPT.md as adversarial preamble
- LH_SCRUM_PROPOSAL env: per-iter fix-wave doc override
- Confidence extraction (markdown + JSON), schema v4 KB rows with:
verdict, critical_failures_count, verified_components_count,
missing_components_count, output_format, gradient_tier
- Model trust profile written per file-accept to data/_kb/model_trust.jsonl
- Fire-and-forget POST to observer /event so by_source.scrum appears in /stats
* mcp-server/observer.ts — unchanged in shape, confirmed receiving scrum events
* ui/ — new Visual Control Plane on :3950
- Bun.serve with /data/{services,reviews,metrics,trust,overrides,findings,file,refactor_signals,search,logs/:svc,scrum_log}
- Views: MAP (D3 graph, 5 overlays) / TRACE (per-file iter timeline) /
TRAJECTORY (refactor signals + reverse index search) / METRICS (explainers
with SOURCE + GOOD lines) / KB (card grid with tooltips) / CONSOLE (per-service
journalctl tail, tabs for gateway/sidecar/observer/mcp/ctx7/auditor/langfuse)
- tryFetch always attempts JSON.parse (fix for observer returning JSON without content-type)
- renderNodeContext primitive-vs-object guard (fix for gateway /health string)
* docs/SCRUM_FIX_WAVE.md — iter-specific scope directing the scrum
* docs/SCRUM_FORENSIC_PROMPT.md — adversarial audit prompt (verdict/critical/verified schema)
* docs/SCRUM_LOOP_NOTES.md — iteration observations + fix-next-loop queue
* docs/SYSTEM_EVOLUTION_LAYERS.md — Layers 1-10 roadmap (trust profiling, execution DNA, drift sentinel, etc)
Measurements across iterations
──────────────────────────────
iter 1 (soft prompt, gpt-oss:120b): mean score 5.00/10
iter 3 (forensic, kimi-k2:1t): mean score 3.56/10 (−1.44 — bar raised)
iter 4 (same bar, post fixes): mean score 4.00/10 (+0.44 — fixes landed)
Score movement iter3→iter4: ↑5 ↓1 =12
21/21 first-attempt accept by kimi-k2:1t in iter 4
20/21 emitted forensic JSON (richer signal than markdown)
16 verified_components captured (proof-of-life, new metric)
Permission Gradient distribution: 0 auto · 16 dry_run · 4 sim · 1 block
Observer loop: by_source {scrum: 21, langfuse: 1985, phase24_audit: 1}
v1/usage: 224 requests, 477K tokens, all tracked
Signal classes per file (iter 3 → iter 4):
CONVERGING: 1 (ingestd/service.rs — fix clearly landed)
LOOPING: 4 (catalogd/registry, main, queryd/service, vectord/index_registry)
ORBITING: 1 (truth — novel findings surfacing as surface ones fix)
PLATEAU: 9 (scores flat with high confidence — diminishing returns)
MIXED: 6
Loop thesis status
──────────────────
A file's score rises only when the scrum confirms a real fix landed.
No false positives yet across 3 iterations. Fixes applied to 3 files all
raised their independent scores under the same adversarial prompt. Loop
is measurable, not hand-wavy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
affab8ac83 |
Phase 45 slice 2: context7 HTTP bridge for doc drift detection
Bun bridge on :3900 that wraps context7's public API and exposes the
surface gateway consumes for Phase 45 drift checks. Own port so a
failure here never tips over mcp-server on :3700.
Endpoints:
GET /health status + cache stats
GET /docs/:tool resolve tool → library_id → fetch
docs → return descriptor
{snippet_hash, last_updated,
source_url, docs_preview, ...}
GET /docs/:tool/diff?since=X compare current snippet_hash to X;
returns {drifted: bool, current,
previous, preview if drifted}
GET /cache debug dump of cached entries
Implementation notes:
- 5 minute in-memory cache (context7 rate-limits by IP; gateway
drift-checks are the hot caller)
- 1500-token slices from context7 (enough for drift-meaningful
hash, not so much we hammer their API)
- snippet_hash = SHA-256 prefix (16 hex chars) of fetched content
- Library resolution prefers "finalized" state; falls back to top
result if none finalized
Verified live against context7.com:
- /health → ok, 0 cache, 300s TTL
- /docs/docker → library_id /docker/docs,
title "Docker", hash
475a0396ca436bba, last
updated 2026-04-20
- /docs/docker (again) → cache hit, 0.37ms
(5400× speedup)
- /docs/docker/diff?since=stale-hash-0000 → drifted=true, preview
included
- /docs/docker/diff?since=<current hash> → drifted=false, preview
omitted (honest: no
drift to show)
Not yet wired:
- Gateway consumer (Phase 45 slice 3):
/vectors/playbook_memory/doc_drift/check/{id} calls this bridge
and updates DocRef.snippet_hash + doc_drift_flagged_at
- Systemd unit (bridge is manual-start for now, same as bot/)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
||
|
|
5b1fcf6d27 |
Phase 28-36 body of work
Accumulated since a6f12e2 (Phase 21 Rust port + Phase 27 versioning): - Phase 36: embed_semaphore on VectorState (permits=1) serializes seed embed calls — prevents sidecar socket collisions under concurrent /seed stress load - Phase 31+: run_stress.ts 6-task diverse stress scaffolding; run_e2e_rated.ts + orchestrator.ts tightening - Catalog dedupe cleanup: 16 duplicate manifests removed; canonical candidates.parquet (10.5MB -> 76KB) + placements.parquet (1.2MB -> 11KB) regenerated post-dedupe; fresh manifests for active datasets - vectord: harness EvalSet refinements (+181), agent portfolio rotation + ingest triggers (+158), autotune + rag adjustments - catalogd/storaged/ingestd/mcp-server: misc tightening - docs: Phase 28-36 PRD entries + DECISIONS ADR additions; control-plane pivot banner added to top of docs/PRD.md (pointing at docs/CONTROL_PLANE_PRD.md which lands in next commit) Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
138592dc56 |
Spec v2 — all chapters aligned with Phases 19-25
Full audit pass on devop.live/lakehouse/spec. Five chapters were
stale, one had an outright incorrect line. Scope was bigger than
ch6 alone — J asked "you want to update all" and the honest answer
was yes.
Ch 1 (Repository layout):
- mcp-server row gains /memory/query, /models/matrix, /system/summary,
observer.ts with :3800 listener
- tests/multi-agent/ row lists all new files: kb.ts, normalize.ts,
memory_query.ts, gen_scenarios.ts, gen_staffer_demo.ts, and the
colocated unit tests (kb.test.ts, normalize.test.ts)
- NEW config/ row documents models.json as the 5-tier matrix
- data/ row enumerates the four learning-loop directories:
_kb/, _playbook_lessons/, _observer/, _chunk_cache/
Ch 3 (Measurement & indexing):
- NEW "Model matrix (Phase 20)" subsection — 5-tier table (T1 hot /
T2 review / T3 overview / T4 strategic / T5 gatekeeper), per-tier
primary model, frequency, the think:false mechanical finding
called out with the 650-token reasoning-budget example
- NEW "Continuation primitive (Phase 21)" paragraph
- NEW "Per-staffer tool_level (Phase 23)" section with full/local/
basic/minimal mapping and the 46pt fill-rate delta from the 36-run
demo
Ch 7 (Scale story):
- FIX: playbook_memory growth bullet was claiming "No TTL or merge
policy" — Phase 25 added retirement via valid_until +
schema_fingerprint + /retire endpoint. Rewritten to name current
state (1936 entries, active vs retired split exposed).
Ch 8 (Error surfaces):
- Five new rows added to the failure-mode table:
* Zero-supply city → cloud rescue (Phase 22 item B) with the
Gary IN → South Bend IN concrete example
* LLM truncation → generateContinuable (Phase 21)
* Schema migration → /vectors/playbook_memory/retire (Phase 25)
* Observer unreachable → scenario silent-skip + append journal
survivability
Ch 9 (Per-staffer context):
- NEW "Staffer identity + competence-weighted retrieval (Phase 23)"
section with the competence_score formula and findNeighbors
weighted_score
- NEW "Auto-discovered reliable-performer labels" section naming
Rachel D. Lewis (18 endorsements) and Angela U. Ward (19) as
concrete output of 36-run demo
Ch 10 (A day in the life):
- Added 17:15 timeline entry — Kim using /memory/query with natural
language, regex normalizer extracting role/city/count in 0ms
- 17:00 entry updated to mention KB indexing + pathway recommendation
+ observer stream
- 22:00 entry updated to mention detectErrorCorrections nightly scan
Ch 11 (Known limits & non-goals):
- FIX: "playbook_memory compaction" bullet rewritten since retirement
is now wired; reframed as the honest Mem0 UPDATE/NOOP gap
- Added Letta hot cache deferred item with honest "cheap at 1.9K,
will bite at 100K" framing
- Added Chunking cache (Phase 21 Rust port) deferred item
- Added Observer → autotune feedback wire deferred item (Phase 26+)
Footer bumped v1 2026-04-20 → v2 2026-04-21 with Phase list.
Verified all updates live on devop.live/lakehouse/spec.
|
||
|
|
3fb3a60da4 |
Spec ch6 rewrite — 3 learning paths → 7 + honest gap list
J flagged the spec out of alignment with what's built. Ch6 now reflects the full current architecture: - Path 1 (playbook boost) — formula kept; geo+role prefilter refinement called out with measured 14× citation lift - Path 2 (pattern discovery) — unchanged - Path 3 (autotune agent) — unchanged - Path 4 (KB + pathway recommender) — Phase 22, file layout documented - Path 5 (cloud rescue on failure) — Phase 22 item B, verified stress_01 example cited - Path 6 (staffer competence-weighted retrieval) — Phase 23, competence_score formula included, cross-staffer auto- discovered worker labels (Rachel D. Lewis 18× endorsements) - Path 7 (observer outcome ingest) — Phase 24, :3800 HTTP listener + ops.jsonl append journal Input normalizer + unified /memory/query surface documented as the "seamless with whatever input" answer, with the 319ms natural-language latency number. Honest gaps kept visible in the spec itself, not hidden: - Zep validity windows (most load-bearing remaining) - Mem0 UPDATE/DELETE/NOOP ops - Letta working-memory hot cache Live at https://devop.live/lakehouse/spec#ch6 after service restart. Verified post-deploy: geo+role prefilter, 14× delta, validity windows gap all present in served HTML. |
||
|
|
52561d10d3 |
Input normalizer + unified memory query — "seamless with whatever input"
J asked directly: "did we implement our memory findings so that our
knowledge base and our configuration playbook [work] seamlessly with
whatever input they're given?" Honest answer tonight was "one of five
findings shipped, normalizer is the blocker." This closes that gap.
NORMALIZER (tests/multi-agent/normalize.ts):
Accepts structured JSON, natural language, or mixed. Returns canonical
NormalizedInput { role, city, state, count, client, deadline, intent,
confidence, extraction_method, missing_fields } for any downstream
consumer.
Three-tier path:
1. Structured fast-path — already-shaped input skips LLM
2. Regex path — "need 3 welders in Nashville, TN" parses without LLM.
City/state parser tightened to 1-3 capitalized words + "in {city}"
anchor preference + case-exact full-state-name variants to prevent
"Forklift Operators in Chicago" being captured as the city name
3. LLM fallback — qwen3 local with think:false + 400 max_tokens for
inputs the regex can't handle
Unit tests (tests/multi-agent/normalize.test.ts): 9/9 pass. Covers
structured fast-path, misplacement→rescue intent, state-name→abbrev
conversion, regex extraction from natural language, plural role +
full state name edge case, rescue intent keyword precedence, partial
input reporting missing fields, empty object fallthrough, async/sync
parity on clean inputs.
UNIFIED MEMORY QUERY (tests/multi-agent/memory_query.ts):
One function, five parallel fan-outs, one bundle returned:
- playbook_workers — hybrid_search via gateway with use_playbook_memory
- pathway_recommendation — KB recommender for this sig
- neighbor_signatures — K-NN sigs weighted by staffer competence
- prior_lessons — T3 overseer lessons filtered by city/state
- top_staffers — competence-sorted leaderboard
- discovered_patterns — top workers endorsed across past playbooks
for this (role, city, state)
- latency_ms — per-source + total
Every branch is best-effort: one source down doesn't break the bundle.
HTTP ENDPOINT (mcp-server/index.ts):
POST /memory/query with body {input: <anything>} → MemoryQueryResult
Returns the same shape the TS function does. Typed with types.ts for
future UI consumption.
VERIFIED:
curl POST /memory/query with structured {role,city,state,count}
→ extraction_method=structured, 10 playbook workers, top score 0.878
curl POST /memory/query with "I need 3 welders in Nashville, TN"
→ extraction_method=regex (no LLM call), 319ms total, 8 endorsements
for Lauren Gomez auto-discovered as top Nashville Welder
Honest remaining gaps (documented for next phase):
- Mem0 ADD/UPDATE/DELETE/NOOP — we still only ADD + mark_failed
- Zep validity windows — playbook entries have timestamps but no
retirement semantic
- Letta working-memory / hot cache — every query scans all 1560
playbook entries
- Memory profiles / scoped queries — global pool, no per-staffer
private subsets
2 of 5 findings now shipped (multi-strategy retrieval in Rust, input
normalization + unified query in TS). The remaining 3 are architectural
additions queued as Phase 25 items — validity windows first since it's
the most load-bearing for long-running systems.
|
||
|
|
b95dd86556 |
Phase 24 — observer HTTP ingest + scenario outcome streaming
Closes the gap J flagged: observer wraps MCP:3700, scenarios hit
gateway:3100 directly, observer idle at 0 ops across 3600+ cycles.
Now scenarios POST per-event outcomes to observer's new HTTP ingest
on :3800, observer consumes them alongside MCP-wrapped ops, ERROR_
ANALYZER and PLAYBOOK_BUILDER loops see the full picture.
observer.ts:
- Bun.serve() HTTP listener on OBSERVER_PORT (default 3800):
GET /health — basic + ring depth
GET /stats — total / success / failure / by_source / recent
scenario ops digest
POST /event — accept scenario outcome, shape it into ObservedOp
with source="scenario" + staffer_id + sig_hash +
event_kind + role/city/state + rescue flags
- recordExternalOp() — shared ring-buffer insert so the main analyzer
+ playbook builder don't care where the op came from
- ObservedOp extended with provenance fields
persistOp() FIX — old path POSTed to /ingest/file?name=observed_operations
which REPLACES the dataset (flagged in feedback_ingest_replace_semantics.md).
Every op was silently wiping all prior ops. Replaced with append to
data/_observer/ops.jsonl so the historical trace is durable across
analyzer cycles and process restarts.
scenario.ts:
- OBSERVER_URL env (default http://localhost:3800)
- postObserverEvent() helper with 2s AbortSignal.timeout so observer
being down doesn't block scenario flow
- Per-event POST after ctx.results.push(result), carrying staffer_id,
sig_hash (via imported computeSignature), event_kind + role + city
+ state + count + rescue_attempted / rescue_succeeded + truncated
output_summary
VERIFIED:
curl POST /event → {"accepted":true,"ring_size":1}
curl GET /stats → {"total":1,"successes":1,"by_source":{"scenario":1},
"recent_scenario_ops":[{...staffer_id,kind,role}]}
Final v3 demo leaderboard (9 runs per staffer, cumulative 3 batches):
James (local): 92.9% fill, 36.8 cites, score 0.775 — RANK 1
Maria (full): 81.0% fill, 26.2 cites, score 0.727
Sam (basic): 61.9% fill, 28.2 cites, score 0.640
Alex (minimal): 59.5% fill, 32.2 cites, score 0.631
Honest finding: Alex has MORE citations than Sam despite NO T3 and NO
rescue. Playbook inheritance alone is firing hardest when overseer is
absent. The 59.5% fill rate (up from 0% when qwen2.5 was executor)
proves cloud-exec + playbook inheritance is the floor the architecture
delivers.
Local gpt-oss:20b T3 outperforms cloud gpt-oss:120b T3 by 12pt fill
rate on this workload — cloud overseer paying latency+variance for
no measurable gain, worth flagging in next models.json tune.
|
||
|
|
03d723e7e6 |
Model matrix — 5 tiers, local hard workers + cloud overseers
config/models.json is the authoritative catalog. Hot path (T1/T2) stays local; cloud is consulted only for overview (T3), strategic (T4), and gatekeeper (T5) calls. J named qwen3.5 + newer models (minimax-m2.7, glm-5, qwen3-next) specifically — all mapped with real reachable IDs verified against ollama.com/api/tags. Tier shape: - t1_hot mistral + qwen2.5 local — 50-200 calls/scenario - t2_review qwen2.5 + qwen3 local — 5-14 calls/event - t3_overview gpt-oss:120b cloud — 1-3 calls/scenario - t4_strategic qwen3.5:397b + glm-4.7 — 1-10 calls/day - t5_gatekeeper kimi-k2-thinking — 1-5 calls/day, audit-logged Rate budgets are declared in-config — Ollama Cloud paid tier is generous but we cap overview/strategic/gatekeeper so no single rogue scenario can blow the day's quota. Experimental rotation list wired but disabled by default. When enabled, T4 randomly routes 10% of calls to a rotating minimax/GLM/qwen-next/ deepseek/nemotron/cogito/mistral-large candidate, logs comparisons, and auto-promotes after 3 rotations of wins. Playbook versioning SPEC embedded under `playbook_versioning` key: every seed gets version + parent_id + retired_at + architecture_snapshot, so when a schema migration breaks a playbook we can pinpoint which change retired it. Implementation flagged for next sprint (touches gateway + catalogd + mcp-server) — not wired here. - scenario.ts now loads config/models.json at init, env vars still override - mcp-server exposes /models/matrix read-only so UI can render it |
||
|
|
0ff091c173 |
Honesty fixes — no hard-coded counts, dynamic sample CSV
- generateSampleRosterCSV(): 120-180 randomized rows per call, timestamp-prefixed IDs (no dedup on re-upload, no static 25 row lie) - /system/summary: truth via SQL COUNT(*), surfaces manifest_drift (caught candidates: manifest 100K, actual 1K) - search.html: loadSystemSummary() hydrates live counts; removed hard-coded 500K strings - MCP tool description: "candidates (100K)" → "candidates (1K)", added "workers_500k (500K)" |
||
|
|
af3856b103 |
Rate/margin awareness: implied pay rate per worker, bill rate per contract
Closes one of the Path 1 trust-break gaps. The scenario we kept flagging:
recruiter calls the system's top pick, worker quotes $35/hr, contract
pays $28/hr. First broken call kills the demo. This fixes it.
Heuristic (no schema change, derived at query time):
- Per worker: implied_pay_rate = role_base + (reliability × 4) + archetype_bump
role_base: Electrician $28, Welder $26, Machine Op $24, Maint $26,
Forklift Op $20, Loader $17, Warehouse Assoc $17, Quality Tech $23,
Production Worker $18 ...
archetype bump: specialist +4, leader +3, reliable +1, else 0
- Per contract: implied_bill_rate = role_base × 1.4
(40% markup — industry norm: pay + overhead + insurance + margin)
- Worker is 'over_bill_rate' when implied_pay_rate > contract's bill_rate
on a candidate-by-candidate basis
Backend (mcp-server/index.ts):
- ROLE_BASE_PAY_RATE + BILL_MARKUP constants
- impliedPayRate(worker), impliedBillRate(role) functions
- parseWorkerChunk() extracts role/reliability/archetype from vector text
- enrichWithRates() attaches implied_pay_rate on every /vectors/hybrid
source response. Called from /search and /intelligence/permit_contracts.
- /search accepts optional max_pay_rate number — if set, filters out
workers above that rate and reports pay_rate_filtered_out count.
- /intelligence/permit_contracts returns implied_bill_rate per contract
AND over_bill_rate boolean per candidate.
Frontend (search.html):
- Live Contracts cards show 'bill rate: $X/hr' under the headcount line
- Each candidate shows 'pay $X/hr' in the sub-line; red 'Over bill rate'
chip next to name when their pay exceeds the contract's bill rate
(hover reveals the exact numbers and why it's flagged)
- Main 'Search all workers' results now include 'pay $X/hr' in the
why-text (computeImpliedPayRate mirrored client-side to match Bun)
End-to-end verified live:
- Masonry Work permit, bill_rate $25.20/hr
Kathleen M. Gutierrez pay $25.56/hr → 🔴 OVER
Melissa C. Rivera pay $20.88/hr → 🟢 OK
- /search with max_pay_rate:32 filtered out 1 Toledo Welder above $32
- Main search shows 'pay $28.64/hr' in each result row
When real ATS data replaces synthetic workers_500k, same UI — the
client's real pay_rate column substitutes for the heuristic.
|
||
|
|
a117ae8b38 |
Workspace UI — surface Phase 8.5 per-contract state + handoff
Phase 8.5 was fully built on the Rust side (WorkspaceManager with
create/handoff/search/shortlist/activity/get/list, persisted to
object storage, zero-copy handoff between agents). Nothing surfaced
it in the recruiter UI. This page closes that gap.
/workspaces — split-pane UI:
Left: scrollable list of all workspaces, sorted by updated_at.
Each card shows name, tier pill (daily/weekly/monthly/pinned),
current owner, count of shortlisted candidates + activity events.
Right: selected workspace detail with five sections:
1. Header — name, tier, owner, created/updated dates, description,
previous-owners audit trail (each handoff is preserved)
2. Actions row — Hand off, Shortlist candidate, Save search, Log activity
3. Shortlist — candidates flagged with dataset + record_id + notes
4. Saved searches — named SQL queries the staffer wants to rerun
5. Activity — chronological (newest first) log of what happened
Four modals for the add/edit actions (create, handoff, shortlist,
save-search, log-activity). All forms POST through the existing
/api/* passthrough to the gateway's /workspaces/* routes.
End-to-end verified live:
1. Sarah creates 'Demo: Toledo Week 17' workspace
2. Shortlists Helen Sanchez (W500K-4661) with notes about prior endorsements
3. Logs activity: 'called — Helen confirmed Tuesday 7am shift'
4. Hands off to Kim with reason 'end of shift'
5. Kim opens the workspace: owner=kim, previous_owners=[{sarah→kim}],
sees all 3 prior events + the shortlisted Helen
— no data copy, pointer swap only (Phase 8.5 design)
Security: all dynamic content built via el(tag,cls,text) DOM helper.
Zero innerHTML on API-derived strings. Modal close-on-backdrop-click
is guarded to the backdrop element.
Nav updated across all 7 pages. Workspaces is the 7th tab.
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts · Workspaces.
|
||
|
|
6287558493 |
Push/daemon presence: background digest + /alerts settings page
Converts the app from 'dashboard you visit' to 'system that finds you.'
Critical for the phone-first staffing shop that won't open a URL —
the system reaches out when something matters.
Daemon:
- Starts once per Bun process (guarded via globalThis sentinel)
- Default interval 15 min (configurable, min 1, max 1440)
- On each cycle, buildDigest() compares current state against prior
snapshot persisted in mcp-server/data/notification_state.json
- Events detected:
- risk_escalation: role moved to tight or critical (was ok/watch)
- deadline_approaching: staffing window falls within warn window
(default 7 days) AND deadline date differs from prior
- memory_growth: playbook_memory entries grew by >= 5 since last run
Channels (all opt-out individually via config):
- console: always on, logged to journalctl -u lakehouse-agent
- file: always on, appends JSONL to mcp-server/data/notifications.jsonl
- webhook: optional, POSTs {text, digest} to configured URL
(Slack incoming-webhook / Discord webhook / any custom endpoint)
Digest format (human-readable, fits in a Slack message):
LAKEHOUSE DIGEST — 2026-04-20 23:24
3 staffing deadlines within window:
• Production Worker — 2d to 2026-04-23 · demand 724
• Maintenance Tech — 4d to 2026-04-25 · demand 32
• Electrician — 5d to 2026-04-26 · demand 34
+779 new playbooks (total 779, 2204 endorsed names)
snapshot: 0 critical · 0 tight · $275,599,326 pipeline
/alerts page:
- Current status table (daemon state, interval, webhook, last run)
- Config form: enable toggle, interval, deadline warn window, webhook
URL + label (saved to data/notification_config.json)
- 'Fire a test digest now' button — force a cycle without waiting
- Recent digests panel shows the last 10 dispatches with full text
End-to-end verified live:
- Daemon armed successfully on startup
- First-run digest dispatched to console + file in <1s
- Events detected correctly: 3 deadlines within 7 days from real
Chicago permit data; 779 playbook entries surfaced as memory growth
- Digest text format is Slack-pastable
- Dispatch records appear in /alerts recent list
TDZ caveat: startAlertsDaemon() invocation moved to end of module so
all const/let in the alerts block evaluate before daemon reads them.
Previously failed with 'Cannot access X before initialization' when
the call lived near the top of the file. Nav added to all 6 pages:
Dashboard · Walkthrough · Architecture · Spec · Onboard · Alerts.
|
||
|
|
23eb04a145 |
Onboarding wizard — ingest any staffing CSV in 3 steps
New /onboard page. Client-facing wizard for getting real data into the system without engineering help. Flow: 1. Drop a CSV (or click 'Use the sample as my data' — ships a 25-row realistic staffing roster under /samples/staffing_roster_sample.csv) 2. Browser parses client-side. Columns auto-typed (text/int/decimal/ date). PII flagged by name hint AND content regex (emails, phones). First rows previewed. Read-only — nothing written yet. 3. Name the dataset (lowercase+underscores). Commit. 4. Post-commit: dataset is live. Shows 4 next steps the operator can take (SQL query, vector index, dashboard search, playbook training). Backend: - /onboard serves onboard.html - /samples/*.csv serves CSV files from mcp-server/samples/ with filename validation (only [a-zA-Z0-9_-.]+.csv, prevents path traversal) - /onboard/ingest forwards multipart/form-data to gateway /ingest/file preserving the boundary. The generic /api/* passthrough breaks multipart because it reads as text and forwards as JSON; this route uses arrayBuffer + original Content-Type. Verified end-to-end: upload sample roster (25 rows, 12 columns) → parse in browser → show columns + PII flags + preview → commit → gateway writes Parquet, registers in catalog → immediately queryable: SELECT * FROM onboard_demo2 LIMIT 3 → Sarah Johnson, Forklift Operator, Chicago, IL, 0.92 Round-trip <1 second. Nav updated on all pages to link Onboard. Shipped with a sample CSV so the full flow is demonstrable without real client data. When a real client shows up, same path — they upload their CSV. No engineering ticket, no code change, no schema pre-definition. Security: sample filename regex prevents path traversal. CSV parse is client-side pure JS (no DOM injection). Commit uses existing /ingest/file validation (schema fingerprint, PII server-side, content-hash dedup). |
||
|
|
468798c9ac |
/spec: technical specification — 11-chapter README-equivalent
J's ask: explain the full architecture so someone reading a README
can dispute it or recreate it. The repo isn't public yet; this page
IS the spec until it is.
Ch1 Repository layout — 13 crates + tests/multi-agent + docs + data,
with owned responsibility and file path per crate.
Ch2 Data ingest pipeline (8 steps) — sources (file/inbox/DB/cron),
parse+normalize with ADR-010 conservative typing, PII auto-tag,
dedup, Parquet write, catalog register with fingerprint gate,
mark embeddings stale, queryable immediately.
Ch3 Measurement & indexing — row count / fingerprint / owner /
sensitivity / freshness / lineage per dataset. HNSW vs Lance
tradeoff table with measured numbers (ADR-019). Autotune loop.
Per-profile scoping (Phase 17).
Ch4 Contract inference from external signal — Chicago permit feed
→ role mapping → worker count heuristic → timeline → hybrid
search with boost → pattern discovery → rendered card. All
pre-computed before staffer opens UI.
Ch5 What a CRM can't do — 11-row comparison table of capabilities.
Ch6 How it gets better over time — three paths:
- Phase 19 playbook boost (full math)
- Pattern discovery meta-index
- Autotune agent
Ch7 Scale story: 20 staffers, 300 contracts, midday +20/+1M surge
- Async gateway + per-staffer profile isolation + client blacklists
- 7-step surge handling flow (ingest, stale-mark, incremental refresh,
degradation, hot-swap, autotune re-enter)
- Known pain points: Ollama inference serial, RAM ceiling ~5M on
HNSW (mitigated by Lance), VRAM 1-2 models sequential,
playbook_memory unbounded.
Ch8 Error surfaces & recovery — 10-row table covering ingest schema
conflicts, bucket failures, ghost names, dual-agent drift,
empty searches, Ollama down, gateway restart, schema fingerprint
divergence. Every failure has a named surface and recovery path.
Ch9 Per-staffer context — active profile, workspace, client blacklist,
audit trail, daily summary. How 20 staffers don't see the same UI.
Ch10 Day in the life — 07:00 housekeeping → 07:30 refresh → 08:00
staffer opens → 08:15 drill down → 08:30 Call click → 09:00
second staffer shares memory → 12:30 surge → 14:00 no-show →
15:00 new embeddings live → 17:00 retrospective → 22:00
overnight trials.
Ch11 Known limits & non-goals — deferred (rate/margin, push, confidence
calibration, neural re-ranker, pm compaction, call_log cross-ref)
and explicitly out-of-scope (cloud, ACID, streaming, CRM replace,
proprietary formats, hard multi-tenant).
Also: nav updated on /dashboard, /console, /proof to link /spec.
Every architectural claim in the spec cites either a code path, an
ADR number, or a phase reference so someone skeptical can target
the specific artifact.
|
||
|
|
76bfa2c8d7 |
/proof: explain the dual-agent recursive architecture with citations
Previous page was numeric claims without explanations — 'sub-100ms SQL',
'500K vectors in 341ms' etc. Accurate but undefendable without math,
code paths, and ADR references. Expanded to 8 chapters:
Ch1 — Live receipts (unchanged: real gateway tests, pass/fail, timing)
Ch2 — Architecture. 13-crate diagram with per-crate responsibility
table and file paths. gateway → catalogd/queryd/vectord/ingestd
+ aibridge → object_store. References ADRs 1-20.
Ch3 — Dual-agent recursive consensus loop (NEW)
- Role specialization (executor=optimist, reviewer=pessimist)
- Parallel orchestration via Promise.all
- Recursive: sealed playbooks feed playbook_memory → next query
- Termination math: sealed | tool-error abort | drift abort |
turn-cap abort — every path dumps forensic log
- File refs: tests/multi-agent/agent.ts, orchestrator.ts,
scenario.ts, run_e2e_rated.ts
Ch4 — Playbook memory feedback loop (NEW)
- PlaybookEntry shape with embedding
- Full boost math: similarity * base_weight * decay * penalty
/ n_workers, capped at MAX_BOOST_PER_WORKER
- Temporal decay (e^-age/30, 30d half-life)
- Negative signal (0.5^failures)
- Why k=200: narrow cosine discrimination in nomic-embed-text
- Evidence: compounding test 0 → 0.250 cap in 3 seeds
- persist_sql write-through
- Pattern discovery (Path 2 meta-index)
- File: crates/vectord/src/playbook_memory.rs
Ch5 — ADR citations for each key choice
ADR-001, 008, 012, 015, 019, 020 + Phase 19 design note
Ch6 — Live scale data (unchanged: pulled from /proof.json)
Ch7 — Reproduction recipes: curl for health, sql, hybrid with boost,
patterns, pm stats, and the full dual-agent scenario run
Ch8 — Honest limits (unchanged: synthetic workers_500k, 1K candidates
misaligned to call_log, 7B model imperfection, no rate/margin)
Every architectural claim now cites either the code path
(crates/.../src/file.rs::fn_name) or the ADR (docs/DECISIONS.md).
Someone disputing the system has specific targets to attack.
Mechanism unchanged: /proof serves mcp-server/proof.html via
Bun.file. /proof.json still returns the live test data the page
consumes client-side.
|
||
|
|
05f2e42c45 |
Rebuild /console as narrative walkthrough for a skeptical staffer
Old console was a chat playground. New console is a guided,
chapter-based explanation that a non-technical staffing staffer
can read top-down and finish convinced — without needing to
understand any of the underlying technology.
Six chapters, each loading live data:
1. Right now, this system is already thinking
Four stats cards pulled live: construction pipeline $, predicted
worker demand, rows under management, playbooks remembered. Then
a narrative that names the current alert posture (critical/tight/ok).
2. The demand signal is real, not made up
Expandable rows per Chicago permit work_type, with a direct link to
data.cityofchicago.org for verification. Pill labeled LIVE ·
DATA.CITYOFCHICAGO.ORG leaves no ambiguity.
3. Where your own data would live
Catalog enumerated with three pill classes:
- SWAP FOR YOUR DATA (purple) — the synthetic tables that would
be replaced by the client's ATS/CRM/call-log exports
- SYSTEM-GENERATED (blue) — playbook memory, threat_intel, kb_*
produced by the system itself
Row counts + columns visible. Names it honestly.
4. Watch the system rank candidates in real time
Takes the freshest Chicago permit, walks the staffer through all
three steps (derive need → narrow via SQL → rank + boost), shows
the top-5 workers with why, boost chip, memory chip, timeline,
and a plain-English narrative of the CRM gap.
5. Every action compounds
Playbook memory count + sample + narrative about what it means
when the staffer logs a fill.
6. Try it yourself
Free-text input hitting /intelligence/chat, renders response
with memory chip + boost chips + ranked workers.
Security: all API-derived strings go through textContent or
el(tag,cls,text) helper. Zero innerHTML usage on dynamic content.
Passes security reminder hook.
File size: 419 → ~500 lines. Visual style matches the dashboard
(same palette, typography, chip styles) so the two pages feel
like one app.
|
||
|
|
bb1b471c67 |
Predictive staffing forecast + per-contract timeline
J's ask: move the system from retrospective ranking to predictive anticipation. Show it tracks the clock, not just the roster. New endpoint /intelligence/staffing_forecast: - Pulls 30-day Chicago permit window (200 permits) - Maps work_type → role via industry heuristic - Aggregates predicted worker demand per role - Joins IL bench supply (workers_500k state='IL' group by role) - Computes coverage_pct, reliable_coverage_pct - Classifies risk: critical/tight/watch/ok - Computes earliest staffing deadline per role (permit issue_date + 31d = 45d construction start - 14d window) - Surfaces recent Chicago playbook ops for the role-specific memory New UI 'Staffing Forecast' section ABOVE Live Contracts: - Top card: total construction value, permit count, workers needed, critical/tight role count - Per-role rows: demand vs available supply, coverage %, deadline with red/amber/green urgency coloring Per-contract timeline on Live Contracts: - estimated_construction_start, staffing_window_opens, days_to_deadline - urgency classification: overdue/urgent/soon/scheduled - card border colored by urgency - timeline line explicitly shows recruiter: OVERDUE/URGENT + days count This is the 'system already thinks about when, not just who' surface J was asking for. CRMs store; this anticipates. |
||
|
|
2595d48535 |
Gap fixes: pattern fallback, narrative citations, call_log plumbing
Closing trust-breaks surfaced in the strategic audit. A — MEMORY chip renders even when sparse: Previously rendered nothing when no trait crossed threshold, which recruiters would read as "system has no signal." Now explicitly says "memory is sparse for this role+geo — no trait crossed threshold" or "no similar past playbooks yet — first fill of this kind will seed it." Honest when it doesn't know. B — Removed /intelligence/learn dead endpoint: Legacy CSV-writer path that destructively re-wrote successful_playbooks. /log and /log_failure replace it cleanly. Leaving dead code confuses future maintainers. C — Narrative tooltips on Endorsed chips: Hovering the green "Endorsed · N playbooks" chip now fetches the worker's past operations from successful_playbooks_live and shows a story: "Maria — past endorsements: • Welder x2 in Toledo (2026-04-15), • Welder x1 in Toledo (2026-04-18)..." Falls back to honest "narrative unavailable" if the seed didn't land in SQL. D — call_log infrastructure in worker modal: New "Recent Contact" section queries call_log JOIN candidates by name. Surfaces last 3 call entries with timestamp, recruiter, disposition, duration. When empty (which is today's reality — candidates table only has 1000 rows vs call_log's higher IDs), shows an honest message about the data gap and what real ATS integration would unlock. Honest call: D ships infrastructure. Actual utility depends on aligning candidate IDs between the candidates table and call_log — current synthetic data doesn't cross-ref cleanly. When real ATS data lands, this section becomes the "system knows who we called yesterday" feature the recruiter needs. Deferred (would require a dedicated session): - Rate awareness (needs worker pay_rate + contract bill_rate) - Push / background daemon (Slack/SMS/email integration) - Confidence calibration (needs a probabilistic ranking layer) |
||
|
|
1f56630d5d |
#3: Worker profile modal shows past playbook history
Click any worker card → modal now includes a 'Past Playbooks' section that queries successful_playbooks_live for any row where this worker's name appears in the result field. Shows up to 8 most recent with operation, timestamp, approach, and context. When empty: 'No prior playbooks for NAME yet. First placement builds the first entry.' — makes the institutional-memory claim visible to the recruiter: the system is tracking everyone, not just the ones that sealed this session. Also added Call / SMS / No-show buttons to the modal action row (matching the card-level buttons from #1). Every worker-card path now trains the system. Closes the user-visible side of Phase 19 — patterns surface during search (Pass A), boosts fire in ranking (Phase 19 core), and now the worker's own profile shows the full history that informs those boosts. Institutional memory legibility, per J's ask. |
||
|
|
cdd12a1438 |
#2: Per-client worker blacklists
New endpoints:
- POST /clients/:client/blacklist { worker_id, name?, reason? }
- GET /clients/:client/blacklist → { client, entries }
- DELETE /clients/:client/blacklist/:worker_id → { removed, total }
Bun /search accepts optional `client` field. When present, loads that
client's blacklist and appends `AND worker_id NOT IN (...)` to the
SQL filter. Zero-cost if unused; clean trust-break avoidance when a
client has previously flagged a worker.
Persistence: mcp-server/data/client_blacklists.json, synchronous
writes via Bun.write. Scale target is hundreds of entries per client
tops — JSON is fine until we hit 10K+ per client.
Verified: worker_id 9326 (Carmen Green) blacklisted for AcmeCorp,
same Chicago Electrician search with client=AcmeCorp returns 196
sql_matches vs 197 without — exactly one excluded.
|