Workers driver embed text reverted to V0 after testing 3 variants
on the "Forklift operator with OSHA-30 certification, warehouse
experience" reality-test query against 5000 workers (which contains
569 actual Forklift Operators per the 31b4088 probe).
V0 (current, restored): "Worker role: <role>. Skills: ...
  Certifications: ... <resume_text>"
  → 6 workers in top-8, 0 Forklift Ops, top distance 0.327,
    top role "Production Worker"

V4a (role-doubled): "<role>. <role> with <skills>. ..."; drops
  archetype and resume_text
  → 6 workers in top-8, 0 Forklift Ops, top distance 0.254,
    top role "Production Worker"

V4b (resume-only): just the resume_text natural-language sentence,
  no structured prefix
  → 4 workers in top-8 (WORSE mix: software-engineer candidates
    filled the displaced slots), 0 Forklift Ops, top distance 0.379
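For the record, the three variants amount to template functions along
these lines (a sketch only; the field names `role`, `skills`,
`certifications`, and `resume_text` are assumptions inferred from the
templates quoted above, not the driver's actual schema):

```python
def embed_text_v0(w: dict) -> str:
    # V0 (restored): structured prefix followed by the resume text
    return (f"Worker role: {w['role']}. Skills: {', '.join(w['skills'])}. "
            f"Certifications: {', '.join(w['certifications'])}. "
            f"{w['resume_text']}")

def embed_text_v4a(w: dict) -> str:
    # V4a: role doubled up front; archetype and resume_text dropped
    return f"{w['role']}. {w['role']} with {', '.join(w['skills'])}."

def embed_text_v4b(w: dict) -> str:
    # V4b: resume-only, no structured prefix
    return w["resume_text"]

worker = {
    "role": "Forklift Operator",
    "skills": ["pallet jack", "inventory"],
    "certifications": ["OSHA-30"],
    "resume_text": "Forklift operator with OSHA-30 certification.",
}
print(embed_text_v0(worker))
```

All three feed the same model, which is why reshuffling the template
moved the distances but never changed which roles won.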
Conclusion: all three variants surface Production Workers / Machine
Operators / Line Leads ABOVE Forklift Operators for this query.
None of the 569 actual Forklift Operators in the 5000-row sample
appear in any variant's top-8. Embed-text design isn't the
bottleneck: nomic-embed-text (137M) doesn't separate "Forklift
Operator" from "Production Worker" / "Machine Operator" / "Line
Lead" in this query's neighborhood of embedding space.
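The "top distance" numbers above read as cosine distances (an
assumption; the 0.25–0.38 range is consistent with it). For
reference, that metric is just:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    # 1 - cosine similarity; lower means closer in embedding space
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

The point of the experiment: if the model places "Forklift Operator"
and "Production Worker" vectors nearly parallel, no amount of
template rewording can push one outside the other's neighborhood.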
Real fixes belong elsewhere:
- Hybrid SQL+semantic (B): pre-filter by role/certs via queryd
before semantic ranking. Addresses the gap directly.
- Different embedding model: mxbai-embed-large or a staffing-
fine-tuned model. Costs an Ollama model swap + re-embedding.
- Playbook boost (component 5, already shipped): record
successful Forklift placements; future queries surface those
workers via similarity. Compounds with use.
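The hybrid option (B) could look like the sketch below: lexical
pre-filter in SQL, then semantic ranking only within the survivors.
This is a sketch under stated assumptions: the table/column names,
the in-memory sqlite3 stand-in for queryd, and the toy 2-d
embeddings are all hypothetical.

```python
import json
import math
import sqlite3

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workers (id INTEGER, role TEXT, certs TEXT, embedding TEXT)")
conn.executemany(
    "INSERT INTO workers VALUES (?, ?, ?, ?)",
    [
        (1, "Forklift Operator", "OSHA-30", json.dumps([0.9, 0.1])),
        (2, "Production Worker", "OSHA-10", json.dumps([0.8, 0.3])),
        (3, "Forklift Operator", "OSHA-30", json.dumps([0.7, 0.2])),
    ],
)

# Step 1: SQL pre-filter by role and certification, so only plausible
# candidates ever reach the semantic stage
candidates = conn.execute(
    "SELECT id, embedding FROM workers WHERE role LIKE ? AND certs LIKE ?",
    ("%Forklift%", "%OSHA-30%"),
).fetchall()

# Step 2: semantic ranking within the filtered set
query_vec = [1.0, 0.0]  # stand-in for the embedded query
ranked = sorted(
    ((wid, cosine_distance(query_vec, json.loads(emb))) for wid, emb in candidates),
    key=lambda t: t[1],
)
print([wid for wid, _ in ranked])
```

The Production Worker never enters the ranking, which is exactly the
gap the embed-text experiment showed the model can't close on its
own.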
V0 was restored because it gives the best worker/candidate mix in
top-8 (6 workers, vs 4 for V4b), preserving the multi-corpus
reality-test signal quality even though the role match is imperfect.
Comments updated to record the experiment so future sessions don't
relitigate it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>