New:
- /vectors/playbook_memory/patterns: meta-index pattern discovery.
Given a query, finds top-K similar playbooks, pulls each endorsed
worker's full workers_500k profile, aggregates shared traits (cert
frequencies, skill frequencies, modal archetype, reliability
  distribution), and returns a human-readable discovered_pattern.
  Surfaces signals operators didn't explicitly query for — the original
  PRD's "identify things we didn't know" dimension.
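  The aggregation step can be sketched as follows. This is a minimal
  illustration, not the real handler: the `WorkerProfile` shape and its
  `certs`/`skills`/`archetype` fields are assumptions, not the actual
  workers_500k schema.

  ```typescript
  // Hypothetical profile shape; field names are assumptions, not the
  // real workers_500k columns.
  interface WorkerProfile {
    certs: string[];
    skills: string[];
    archetype: string;
  }

  // Fraction of endorsed workers holding each trait (cert or skill).
  function traitFrequencies(
    workers: WorkerProfile[],
    field: "certs" | "skills",
  ): Map<string, number> {
    const counts = new Map<string, number>();
    for (const w of workers) {
      for (const t of new Set(w[field])) {
        counts.set(t, (counts.get(t) ?? 0) + 1);
      }
    }
    return new Map([...counts].map(([t, c]) => [t, c / workers.length]));
  }

  // Most common archetype across endorsed workers.
  function modalArchetype(workers: WorkerProfile[]): string {
    const counts = new Map<string, number>();
    for (const w of workers) {
      counts.set(w.archetype, (counts.get(w.archetype) ?? 0) + 1);
    }
    return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
  }
  ```

  The discovered_pattern string is then rendered from these frequency
  maps plus the modal archetype, surfacing traits the operator never
  asked about.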
- /vectors/playbook_memory/mark_failed: records worker failures keyed by
  (city, state, name). compute_boost_for applies a 0.5^n penalty per
  recorded failure, so 3 failures quarter a worker's positive boost and
  5 effectively zero it. Path 1 negative signal: recruiter trust
  depends on the system NOT recommending people who no-showed.
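  One minimal reading of the 0.5^n rule, as a sketch: the positive
  boost is multiplied by 0.5^n for n recorded failures. The real
  compute_boost_for may apply the penalty per playbook contribution
  rather than to the aggregate (the verified numbers below suggest the
  aggregate shrinks more gently), so treat this as the decay shape, not
  the implementation.

  ```typescript
  // Sketch only: flat multiplicative penalty, assuming 0.5^n applies
  // to the whole positive boost. Each recorded failure halves what
  // remains; by five failures the multiplier is 1/32.
  function penalizedBoost(positiveBoost: number, failures: number): number {
    return positiveBoost * Math.pow(0.5, failures);
  }
  ```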
- Bun /log_failure: validates failed_names against workers_500k
  (same ghost-guard as /log), then forwards to /mark_failed.
Improved:
- /log now validates endorsed_names against workers_500k for the
contract's city+state before seeding. Ghost names (names that don't
correspond to real workers) are rejected in the response and excluded
from the seed, preventing silent boost failures.
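  The ghost-guard shared by /log and /log_failure amounts to
  partitioning the submitted names against the city+state slice of
  workers_500k. A sketch, with illustrative names (`splitGhosts` and
  its inputs are not the real handler):

  ```typescript
  // Partition submitted names into ones that exist for this contract's
  // city+state and ghosts that don't. Only `valid` reaches the seed;
  // `ghosts` is echoed back so bad names fail loudly instead of
  // silently producing no boost.
  function splitGhosts(
    submitted: string[],
    knownNames: Set<string>, // real worker names for the city+state
  ): { valid: string[]; ghosts: string[] } {
    const valid: string[] = [];
    const ghosts: string[] = [];
    for (const name of submitted) {
      (knownNames.has(name) ? valid : ghosts).push(name);
    }
    return { valid, ghosts };
  }
  ```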
- Bun /search now auto-appends `CAST(availability AS DOUBLE) > 0.5` to
  sql_filter when the caller hasn't constrained availability; opt out
  with `include_unavailable: true`. This fixes a recruiter-trust bug:
  surfacing already-placed workers breaks trust on the very first call.
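  The guard logic can be sketched like this. The substring check for an
  existing availability constraint is an assumption about how "caller
  didn't constrain availability" is detected; the real /search handler
  may do this differently.

  ```typescript
  // Assumed default guard, matching the filter described above.
  const AVAILABILITY_GUARD = "CAST(availability AS DOUBLE) > 0.5";

  // Append the guard unless the caller opted out or already
  // constrained availability themselves.
  function withAvailabilityGuard(
    sqlFilter: string,
    includeUnavailable: boolean,
  ): string {
    if (includeUnavailable) return sqlFilter; // explicit opt-out
    if (/availability/i.test(sqlFilter)) return sqlFilter; // caller constrained it
    return sqlFilter
      ? `(${sqlFilter}) AND ${AVAILABILITY_GUARD}`
      : AVAILABILITY_GUARD;
  }
  ```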
- DEFAULT_TOP_K_PLAYBOOKS raised 25 → 100. Direct cosine measurement
  showed similarities cluster in 0.55-0.67 across all playbooks
  regardless of geo, so k=25 missed relevant geo-matched playbooks.
  Brute-force search is still sub-ms at this size.
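  Why k=100 stays cheap: brute-force cosine top-k is one linear pass
  over the playbook vectors plus a sort. A self-contained sketch (the
  vector counts and dimensions in use are not stated here, so these
  helpers are illustrative):

  ```typescript
  // Cosine similarity of two equal-length vectors.
  function cosine(a: number[], b: number[]): number {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      na += a[i] * a[i];
      nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  }

  // Indices of the k most similar vectors to the query, best first.
  function topK(query: number[], vectors: number[][], k: number): number[] {
    return vectors
      .map((v, i): [number, number] => [cosine(query, v), i])
      .sort((a, b) => b[0] - a[0])
      .slice(0, k)
      .map(([, i]) => i);
  }
  ```

  At a few thousand stored playbooks this is O(n·d) per query, which is
  why raising k costs nothing: the full pass happens either way and
  only the slice length changes.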
Verified end-to-end on live data:
- Ghost names rejected on /log + /log_failure
- Availability filter drops unavailable workers from candidate pool
- Pattern discovery on an unseen Cleveland OH Welder query returned
  recurring skills (first aid 43%, grinder 43%, blueprint 43%) and a
  modal archetype (specialist) across 20 semantically similar past
  playbooks in 0.24s
- Negative signal: Helen Sanchez's boost dropped +0.250 → +0.163 after
  3 failures recorded via /log_failure (a 34% reduction)