POST /v1/matrix/playbooks/bulk accepts an array of playbook entries
and records each independently; a failure in one entry does not abort
the batch. Designed for two operational use cases:
1. Backfilling historical placement data into the playbook
substrate (the Rust system has 4,701 fill operations recorded
with embeddings; that data deserves to feed the Go learning
loop without a 4,701-call procedural script).
2. Batched click-tracking from a session's worth of coordinator
interactions, posted once at idle rather than per-click.
Per-entry response shape: {index, playbook_id} on success or
{index, error} on failure, so the caller can locate failures directly
instead of diffing the request against the response.
Smoke (scripts/playbook_smoke.sh, new assertion #4):
Bulk POST 3 entries: 2 valid (alpha→widget-a, bravo→widget-b) +
1 invalid (empty query_text). Verifies recorded=2, failed=1,
the 2 valid ones get playbook_ids back, and the invalid one
surfaces its validation error in-line.
Single-record /matrix/playbooks/record from 06e7152 still works
unchanged; bulk is additive. The corpus field can be set per-
entry or once at the batch level (entry-level wins on collision).
Per the small-model autonomous pipeline framing: this is the
"the playbook gets denser with each iteration" mechanism. Click
tracking → bulk POST → playbook entries → future similar queries
get those answers boosted via the existing /matrix/search
use_playbook path. The learning loop now has both inflows wired
(single + bulk) — what remains is the demo UI shim that calls
/feedback on result interaction (deferred — no Go demo UI yet).
15-smoke regression all green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>