74 Commits

Author SHA1 Message Date
root
b2e45f7f26 playbook_lift: harness expansion + reality test #001 (7/8 lift, 87.5%)
The 5-loop substrate's load-bearing gate is verified — playbook +
matrix indexer give the results we're looking for. Per the report's
rubric, lift ≥ 50% of discoveries means matrix is doing real work;
7/8 = 87.5% blew through that.

The harness was structurally hiding bugs behind a stripped 5-daemon boot.
Expanding to the full 10-daemon prod stack surfaced 7 fixes in cascade:

1. driver→matrixd: {"query": ...} → {"query_text": ...} field name
2. harness temp toml missing [s3] → wrong default bucket → catalogd
   rehydrate 500 on first call
3. harness→queryd SQL probe: {"q": ...} → {"sql": ...} field name
4. expand boot from 5 → 10 daemons in dep-ordered launch
5. add SQL surface probe (3-row CSV ingest → COUNT(*)=3 assertion)
6. candidates corpus was synthetic SWE-tech (Swift/iOS, Scala/Spark) —
   wrong domain for staffing queries; replaced with ethereal_workers
   (10K rows, real staffing schema, "e-" id prefix to avoid collision
   with workers' "w-"). staffing_workers driver gains -index-name +
   -id-prefix flags so the same binary serves both corpora
7. local_judge qwen3.5:latest is a vision-SSM 256K-ctx build running
   ~30s per judge call against the lift loop; reverted to
   qwen2.5:latest (~1s/call, 30× faster; the lift thesis held)

The contract drifts (fixes 1 and 3) are now locked into
cmd/<bin>/main_test.go files so future drift fires in `go test`,
not in a reality run. R-005 closed:

- cmd/matrixd/main_test.go (new) — playbook record drift detector +
  score bounds + 6 routes mounted
- cmd/queryd/main_test.go — wrong-field-name drift detector
- cmd/pathwayd/main_test.go (new) — 9 routes + add round-trip + retire
- cmd/observerd/main_test.go (new) — 4 routes + invalid-op + unknown-mode

`go test ./cmd/{matrixd,queryd,pathwayd,observerd}` all green.
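A drift detector of this shape can be sketched as below — marshal the request struct and assert the wire-level field name. Only the "query_text" name comes from this commit; the struct and helper are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical request struct: the json tag IS the contract the
// drift detector locks down.
type searchRequest struct {
	QueryText string `json:"query_text"`
}

// wireFields marshals a value and reports the field names that
// actually appear on the wire.
func wireFields(v any) map[string]any {
	b, _ := json.Marshal(v)
	var m map[string]any
	_ = json.Unmarshal(b, &m)
	return m
}

func main() {
	fields := wireFields(searchRequest{QueryText: "golang staffing"})
	_, ok := fields["query_text"] // the drift assertion
	fmt.Println(ok)
}
```

A rename of the tag back to "query" now fails in `go test` instead of surfacing mid reality run.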

Reality test results (reports/reality-tests/playbook_lift_001.{json,md}):
  Queries              21 (staffing-domain, 7 categories)
  Discoveries          8 (judge ≠ cosine top-1)
  Lifts                7/8 (87.5%)
  Boosts triggered     9
  Mean Δ distance      -0.053 (warm closer than cold)
  OOD honesty          dental/RN/SWE rated 1, no fake matches
  Cross-corpus boosts  confirmed (e- ↔ w- swaps in lifts)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 06:22:21 -05:00
root
740eb0d00c scrum_review: switch curl to stdin so large diffs don't blow argv
Phase 4-bundle review (128KB diff) hit "Argument list too long" when
curl --data was passed the body as a literal arg. Pipe via stdin
with --data-binary @- instead. Lifts the practical bundle size from
~30KB to whatever fits in process memory.

Caught while running the harness scrum on golangLAKEHOUSE today —
the bigger Phase A+B harness diff (4566 lines) tripped it.
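The before/after shape of the fix, sketched (URL and paths illustrative, not the script's real values; `--data-binary @-` also preserves the diff byte-for-byte, which `--data` does not guarantee):

```shell
# Before — body expanded into argv, hits the kernel ARG_MAX limit on big diffs:
#   curl -s "$URL" --data "$(cat "$BUNDLE")"
# After — body streamed on stdin, argv stays tiny:
#   curl -s "$URL" --data-binary @- < "$BUNDLE"

# Build a 128KB stand-in bundle to show the size class that tripped argv:
BUNDLE=$(mktemp)
head -c 131072 /dev/zero | tr '\0' 'x' > "$BUNDLE"
wc -c < "$BUNDLE"
```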

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:46:52 -05:00
root
511083ae40 docs: SPEC §3.9 (chatd) + §3.10 (local-review-harness sibling)
- SPEC §1 component table: add chatd row marked DONE; replaces
  Rust gateway's v1::ollama_cloud / openrouter / opencode adapters
  + the aibridge crate.
- SPEC §3.9 — chatd shipped: 5-provider routing (ollama, ollama_cloud,
  openrouter, opencode, kimi) by model-name prefix or :cloud suffix.
  Captures the Anthropic 4.7 temperature-deprecation quirk + the
  local-Ollama think=false default that the playbook_lift judge
  needed. Mentions scrum_review.sh as the reusable cross-lineage
  vehicle eating chatd's own /v1/chat.
- SPEC §3.10 — local-review-harness sibling tool: separate repo at
  git.agentview.dev/profit/local-review-harness, MVP shipped today.
  Documents the cross-pollination plan for when both substrates
  stabilize (chatd as the harness's LLM backend; harness findings
  as Lakehouse pathway-memory drift signal; .memory/known-risks
  as a matrix corpus). Explicit "don't re-port" so future Claudes
  don't try to absorb the harness into Lakehouse.
- STATE_OF_PLAY.md: SIBLING TOOLS section with 1-line summary
  + pointer to SPEC §3.10.

No code changes. just verify still PASS — touched only docs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 01:01:23 -05:00
root
c5c31b6ca6 docs: STATE_OF_PLAY.md — Go-side truth anchor (mirrors Rust convention)
Adds the "verified working RIGHT NOW / DO NOT RELITIGATE / OPEN"
anchor at the repo root, mirroring /home/profit/lakehouse/STATE_OF_PLAY.md.
Memory files (project_golang_lakehouse.md) supplement; this file is
the verified-truth pointer.

Sections:
- VERIFIED WORKING: 13 cmd binaries + 18 smokes + 5 matrix components
  + Mem0 pathway + observerd + workflow runner + chatd 5-provider
  dispatcher + model tier registry. just verify PASS in 31s.
- DO NOT RELITIGATE: 4 ratified ADRs (DECISIONS.md ADR-001..004) +
  today's scrum dispositions (B-1..B-4 fixed, FP-A1/A2/C1 dismissed)
  + session frame items (Rust legacy is maintenance-only, etc.).
- OPEN: reality test held on J's queries, 3 daemon main_test.go gap,
  Sprint 4 deployment, ADR-005 observer fail-safe, ADR-006 auth posture.
- RECENT WAVE: 6-commit table 05273ac..e4ee002 documenting today's
  4 phases + scrum + tooling.
- RUNTIME CHEATSHEET: just verify, chatd boot, /v1/chat/providers
  probe, scrum_review.sh usage.
- VISION: 5-loop substrate gate from project_small_model_pipeline_vision.md.

The read-mem skill (in /root/.claude/skills/read-mem/) and project
memory file are updated to reference this file as the primary Go anchor.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:37:24 -05:00
root
e4ee0029c0 scrum_review.sh: reusable 3-lineage cross-review driver
Bash driver wrapping /v1/chat for Opus + Kimi + Qwen3-coder review
runs. Used today to scrum the 4-phase wave (1,624 LoC of chatd +
config-refactor + Rust cleanup) and caught 2 BLOCKs + 2 WARNs.

Usage:
  ./scripts/scrum_review.sh <bundle.diff> <bundle_label>

Output: reports/scrum/_evidence/<DATE>/verdicts/<bundle>_<reviewer>.md
verbatim, per the evidence-only convention. Per-reviewer latency +
token counts captured in the report header.

System prompt enforces the BLOCK/WARN/INFO + WHERE/WHAT/WHY shape
per feedback_cross_lineage_review.md — leads with verdict, no
preamble (Kimi tends to spend tokens thinking otherwise).

Reviewer fleet matches project_golang_lakehouse.md "Scrum routing":
- opencode/claude-opus-4-7
- openrouter/moonshotai/kimi-k2-0905
- openrouter/qwen/qwen3-coder

This is the first dogfood of chatd as the scrum vehicle — eats its
own /v1/chat dispatcher.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:29:36 -05:00
root
0efc7363c5 scrum 2026-04-30: 4 real fixes + 2 INFOs from cross-lineage review
3-lineage scrum (Opus 4.7 / Kimi K2.6 / Qwen3-coder) on today's wave
landed 4 real findings (2 BLOCK + 2 WARN) and 2 INFO touch-ups.
Verbatim verdicts + disposition table at:
  reports/scrum/_evidence/2026-04-30/

B-1 (BLOCK Opus + INFO Kimi convergent) — ResolveKey API:
  collapse from 3-arg (envVar, envFileName, envFilePath) to 2-arg
  (envVar, envFilePath). Pre-fix every chatd caller passed the env
  var name twice; if operator renamed *_key_env in lakehouse.toml
  while keeping the canonical KEY= line in the .env file, fallback
  silently missed.
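A minimal sketch of the collapsed shape — only the 2-arg signature (envVar, envFilePath) comes from this commit; the lookup details and the injected getenv/readFile hooks are assumptions for testability. The point of the fix: the .env-file fallback keys off the SAME envVar name, so a rename in lakehouse.toml can no longer diverge from the fallback lookup.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveKey reads the env var first, then falls back to scanning the
// .env file for a line keyed by the same envVar name.
func resolveKey(envVar, envFilePath string,
	getenv func(string) string,
	readFile func(string) (string, error)) string {
	if v := getenv(envVar); v != "" {
		return v
	}
	content, err := readFile(envFilePath)
	if err != nil {
		return ""
	}
	for _, line := range strings.Split(content, "\n") {
		if rest, ok := strings.CutPrefix(strings.TrimSpace(line), envVar+"="); ok {
			return strings.Trim(rest, `"`)
		}
	}
	return ""
}

func main() {
	getenvUnset := func(string) string { return "" } // env unset → file fallback
	readFake := func(string) (string, error) {
		return "# keys\nOPENROUTER_API_KEY=\"sk-test\"\n", nil
	}
	fmt.Println(resolveKey("OPENROUTER_API_KEY", "/etc/lakehouse/openrouter.env",
		getenvUnset, readFake))
}
```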

B-2 (WARN Opus + WARN Kimi convergent) — handleProviders probe:
  drop the synthesize-then-Resolve probe; look up by name directly
  via Registry.Available(name). Prior probe synthesized "<name>/probe"
  model strings and routed through Resolve, fragile to any future
  routing rule (e.g. cloud-suffix special case).

B-3 (BLOCK Opus single — verified by trace + end-to-end probe) —
  OllamaCloud.Chat StripPrefix used "cloud" but registry routes
  "ollama_cloud/<m>". Result: upstream got the prefixed model name
  and 400'd. Smoke missed it because chatd_smoke runs without
  ollama_cloud registered. Now strips the right prefix; new
  TestOllamaCloud_StripsCorrectPrefix locks both prefix + suffix
  cases. Verified live: ollama_cloud/deepseek-v3.2 round-trips
  cleanly through the real ollama.com endpoint.

B-4 (WARN Opus single) — Ollama finishReason: read done_reason
  field instead of inferring from done bool alone. Newer Ollama
  reports done=true with done_reason="length" on truncation; the
  prior code mapped that to "stop" and lost the truncation signal
  the playbook_lift judge needs to retry. New
  TestFinishReasonFromOllama_PrefersDoneReason covers the fallback
  ladder.
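The fallback ladder can be sketched as below (field names from the commit; the exact ladder shape is assumed):

```go
package main

import "fmt"

// finishReason prefers Ollama's done_reason field and falls back to
// the done bool only when done_reason is absent.
func finishReason(done bool, doneReason string) string {
	if doneReason != "" {
		return doneReason // "length" on truncation survives intact
	}
	if done {
		return "stop" // old behavior, now only the fallback
	}
	return ""
}

func main() {
	fmt.Println(finishReason(true, "length")) // no longer mapped to "stop"
	fmt.Println(finishReason(true, ""))
}
```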

INFOs:
- B-5: replace hand-rolled insertion sort in Registry.Names with
  sort.Strings (Opus called the "avoid sort import" comment a
  false economy — correct).
- A-1: clarify the playbook_lift.sh comment around -judge "" arg
  passing (Opus noted the comment said "env priority" but didn't
  reflect that the empty arg also passes through the Go driver's
  resolution chain).

False positives dismissed (3, documented in disposition.md):
- Kimi: TestMaybeDowngrade_WithConfigList wrong assertion (test IS
  correct per design — model excluded from weak list = strong = downgrade)
- Qwen: nil-deref claim (defensive code already handles nil)
- Opus: qwen3.5:latest doesn't exist on Ollama hub (true on the
  public hub but local install has it)

just verify: PASS. chatd_smoke 6/6 PASS. New regression tests:
3 (B-2, B-3, B-4 each get a focused test).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:28:08 -05:00
root
05273ac06b phase 4: chatd — multi-provider LLM dispatcher (ollama / cloud / openrouter / opencode / kimi)
new cmd/chatd on :3220 routes /v1/chat to the right provider based
on model-name prefix or :cloud suffix. closes the architectural gap
named in lakehouse.toml [models]: tiers map to model IDs, but until
phase 4 there was no service that could actually CALL those models
from go.

routing rules (registry.Resolve):
  ollama/<m>          → local Ollama (prefix stripped)
  ollama_cloud/<m>    → Ollama Cloud
  <m>:cloud           → Ollama Cloud (suffix variant — kimi-k2.6:cloud)
  openrouter/<v>/<m>  → OpenRouter (prefix stripped, OpenAI-compat)
  opencode/<m>        → OpenCode unified Zen+Go
  kimi/<m>            → Kimi For Coding (api.kimi.com/coding/v1)
  bare names          → local Ollama (default)
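the routing table above can be sketched like this (provider names and rules from the commit; the real registry.Resolve also hands back the provider client, and what exactly the suffix variant strips before hitting upstream is assumed here):

```go
package main

import (
	"fmt"
	"strings"
)

// route maps a model name to (provider, upstream model) per the
// prefix/suffix rules.
func route(model string) (provider, upstream string) {
	switch {
	case strings.HasPrefix(model, "ollama/"):
		return "ollama", strings.TrimPrefix(model, "ollama/")
	case strings.HasPrefix(model, "ollama_cloud/"):
		return "ollama_cloud", strings.TrimPrefix(model, "ollama_cloud/")
	case strings.HasSuffix(model, ":cloud"):
		return "ollama_cloud", strings.TrimSuffix(model, ":cloud")
	case strings.HasPrefix(model, "openrouter/"):
		return "openrouter", strings.TrimPrefix(model, "openrouter/")
	case strings.HasPrefix(model, "opencode/"):
		return "opencode", strings.TrimPrefix(model, "opencode/")
	case strings.HasPrefix(model, "kimi/"):
		return "kimi", strings.TrimPrefix(model, "kimi/")
	default:
		return "ollama", model // bare names → local Ollama
	}
}

func main() {
	p, m := route("openrouter/qwen/qwen3-coder")
	fmt.Println(p, m)
}
```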

provider implementations:
- internal/chat/types.go      Provider interface, Request/Response, errors
- internal/chat/registry.go   prefix + :cloud suffix dispatch
- internal/chat/ollama.go     local Ollama via /api/chat (think=false default)
- internal/chat/ollama_cloud.go  Ollama Cloud via /api/generate (Bearer auth)
- internal/chat/openai_compat.go shared OpenAI Chat Completions for the
                                 OpenRouter/OpenCode/Kimi family
- internal/chat/builder.go    BuildRegistry from BuilderInput;
                              ResolveKey reads env then .env file fallback

config:
- ChatdConfig in internal/shared/config.go with bind, ollama_url,
  per-provider key env names + .env fallback paths, timeout
- Gateway gains chatd_url + /v1/chat + /v1/chat/* routes
- lakehouse.toml [chatd] block with /etc/lakehouse/<provider>.env defaults

tests (19 in internal/chat):
- registry: prefix + :cloud + errors + telemetry + provider listing
- ollama: happy path + prefix strip + format=json + 500 mapping +
  flatten_messages
- openai_compat: happy path + format=json + 429 mapping + zero-choices

think=false default in ollama + ollama_cloud — local hot path skips
reasoning, low-budget callers (the playbook_lift judge at max_tokens=10)
get direct answers instead of empty content + done_reason=length.
proven via chatd_smoke acceptance.

acceptance gate: scripts/chatd_smoke.sh — 6/6 PASS:
1. /v1/chat/providers lists exactly registered providers (1 in dev mode)
2. bare model → ollama default with content + token counts + latency
3. explicit ollama/<m> → prefix stripped at upstream
4. <m>:cloud without ollama_cloud registered → 404 (no silent fall-through)
5. unknown/<m> → falls through to default → upstream 502 (no prefix rewrite)
6. missing model field → 400

just verify: PASS (vet + 30 packages × short tests + 9 smokes).
chatd_smoke is a domain smoke (not in just verify, mirrors matrix /
observer / pathway pattern).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:08:29 -05:00
root
848cbf5fef phase 3: playbook_lift harness reads judge from config
migrate the reality-test harness's judge-model default from a
hardcoded "qwen3.5:latest" string to cfg.Models.LocalJudge.

resolution priority: explicit -judge flag > $JUDGE_MODEL env >
cfg.Models.LocalJudge from lakehouse.toml > hardcoded fallback.

bumping the judge for run #N+1 now means editing one line in
lakehouse.toml [models].local_judge — no Go file or shell script
edits required.
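the 4-step chain, sketched (priority order from this commit; the function shape is illustrative):

```go
package main

import "fmt"

// effectiveJudge walks the resolution chain: flag > env > config,
// then the hardcoded last-resort fallback.
func effectiveJudge(flagVal, envVal, configVal string) string {
	for _, v := range []string{flagVal, envVal, configVal} {
		if v != "" {
			return v
		}
	}
	return "qwen3.5:latest" // hardcoded fallback
}

func main() {
	fmt.Println(effectiveJudge("", "qwen2.5:latest", "qwen3.5:latest")) // env beats config
}
```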

changes:
- scripts/playbook_lift/main.go: -config flag added, judge default
  flips to "" so resolution chain runs. Imports internal/shared for
  config loader.
- scripts/playbook_lift.sh: JUDGE_MODEL no longer defaulted in bash;
  EFFECTIVE_JUDGE resolved by a bash mirror of the Go chain (env >
  config grep > qwen3.5:latest fallback). Used for the Ollama presence
  check + report header. The pre-flight grep avoids requiring jq just
  to read the toml.
- reports/reality-tests/README.md: documents the 4-step priority
  chain.

verified all 4 paths produce the expected judge:
- config (no env): qwen3.5:latest (from lakehouse.toml)
- env override:    env wins
- flag override:   flag wins over env
- missing config:  DefaultConfig fallback still gives qwen3.5:latest

just verify PASS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:57:28 -05:00
root
622e124b8f phase 2: matrix.downgrade reads WeakModels from config
migrate the strong-model auto-downgrade gate from a hardcoded weak
list to cfg.Models.WeakModels. backward compatible: existing API
preserved, callers that don't migrate keep using DefaultWeakModels.

changes:
- internal/matrix/downgrade.go: split IsWeakModel into rule-based
  base (`:free` suffix/infix) + literal-list lookup. New
  IsWeakModelInList(model, list) takes the config-supplied list.
  DowngradeInput grows a WeakModels field; nil falls back to
  DefaultWeakModels (preserves pre-phase-2 behavior).
- internal/workflow/modes.go: add MatrixDowngradeWithWeakList(list)
  factory mirroring MatrixSearch's pattern. Plain MatrixDowngrade
  kept for backward compat.
- cmd/matrixd/main.go: handlers struct holds weakModels populated
  from cfg.Models.WeakModels at startup; handleDowngrade threads it
  into every DowngradeInput.
- cmd/observerd/main.go: registerBuiltinModes accepts weakModels
  and uses the factory variant. observerd reads cfg.Models.WeakModels
  in main().

end-to-end verified: downgrade + matrix + observer + workflow smokes
all pass. Existing TestMaybeDowngrade_TruthTable + TestIsWeakModel
unchanged (backward compat). Two new tests cover the config path:
- TestIsWeakModelInList — covers rule + literal + empty + nil
- TestMaybeDowngrade_WithConfigList — verifies cfg list overrides
  default
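the split can be sketched as below (names follow the commit; the rule details and the default-list contents are assumptions — qwen3.5:latest is used as a stand-in weak model):

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative stand-in for DefaultWeakModels.
var defaultWeakModels = []string{"qwen3.5:latest"}

// isWeakModelInList combines the rule base (":free" as suffix or
// infix) with a literal-list lookup; nil falls back to the default
// list, preserving pre-phase-2 behavior.
func isWeakModelInList(model string, list []string) bool {
	if strings.Contains(model, ":free") {
		return true // rule hit, independent of any list
	}
	if list == nil {
		list = defaultWeakModels
	}
	for _, m := range list {
		if m == model {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isWeakModelInList("grok-4.1-fast:free", nil))
	fmt.Println(isWeakModelInList("qwen3.5:latest", []string{"qwen3.5:latest"}))
}
```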

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:52:18 -05:00
root
ec1d031996 phase 1: add [models] tier config — additive, no callers migrate yet
Codifies the small-model-pipeline tiering (per project_small_model_pipeline_vision.md)
in lakehouse.toml [models] section. Tier names map to actual model
IDs; bumping a model means editing one line, not hunting through code.

Tier philosophy:
- local_*    : on-box Ollama. Inner-loop hot path. Repeated calls.
- cloud_*    : Ollama Cloud (Pro plan). Larger context, fail-up tier.
- frontier_* : OpenRouter / OpenCode. Rate-limited, billed per call.

weak_models is the codified "local-hot-path eligible" list — phase 2
will migrate matrix.downgrade to read it instead of hardcoding.

Defaults reflect 2026-04-29 architecture: qwen3.5:latest as local
(stronger than qwen2.5, same JSON-clean property), kimi-k2.6 as cloud
judge (kimi-k2:1t still upstream-broken), opus-4-7 + kimi-k2-0905 as
frontier review/arch via OpenRouter, opencode/claude-opus-4-7 as
frontier_free leveraging the OpenCode subscription.

3 new tests in internal/shared/config_test.go:
- TestDefaultConfig_ModelsTier — locks tier defaults
- TestModelsConfig_IsWeak     — weak-bypass list
- TestLoadConfig_ModelsTOMLRoundTrip — override semantics

just verify PASS (g2 had one flake on first run — Ollama transfer
truncation; clean on retry, unrelated to this change).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:48:45 -05:00
root
3dd7d9fe30 reality-tests: playbook-lift harness — does the 5-loop substrate beat raw cosine?
First reality test driver. Two-pass design:
- Pass 1 (cold): matrix.search use_playbook=false → small-model judge
  rates top-K → record playbook entry pointing at the highest-rated
  result (which may NOT be top-1 by distance — that's the discovery).
- Pass 2 (warm): same queries with use_playbook=true → measure
  ranking shift. Lift = real if recorded answer becomes top-1.

Files:
- scripts/playbook_lift/main.go         driver (391 LoC)
- scripts/playbook_lift.sh              stack-bring-up + report gen
- tests/reality/playbook_lift_queries.txt  query corpus (5 placeholders;
                                            J writes real 20+)
- reports/reality-tests/README.md       framework + interpretation
- .gitignore                            track reports/reality-tests/
                                        but ignore per-run JSON evidence

This answers the gate from project_small_model_pipeline_vision.md:
"the playbook + matrix indexer must give the results we're looking
for." Without ground-truth labels, the LLM judge is the proxy — the
same small-model thesis applied to evaluation. Honest about that
limitation in the generated reports.

Driver compiles clean; full run requires Ollama + workers/candidates
ingest. Skips cleanly if Ollama absent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:22:36 -05:00
root
8278eb9a87 scrum2 cleanup: JSON-marshal in stringifyValue, drop dead detectCycle, name SourceWorkflow
5 small fixes from the §3.8 scrum2 review wave:

- workflow.stringifyValue now JSON-marshals maps/slices instead of
  fmt.Sprint %v (Opus+Kimi convergent: LLM modes were getting Go's
  map[k:v] syntax, which is unparseable as JSON context).
- workflow.detectCycle removed — duplicate of topoSort that discarded
  the useful node ID. Validate() now calls topoSort directly and
  returns its wrapped ErrCycle.
- observer.SourceWorkflow named constant — was an implicit string
  cast (observer.Source("workflow")) at the cmd/observerd handler.
- Unused context imports + dead silencer comments removed across
  workflow/modes.go and observerd/main.go.
- Unused store parameter dropped from registerBuiltinModes (reserved
  comment removed; can be re-added when a mode actually needs it).
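The first fix can be sketched like this (function name from the commit; the type switch is assumed):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stringifyValue JSON-marshals maps and slices so LLM modes receive
// parseable JSON instead of Go's map[k:v] rendering; everything else
// keeps the fmt.Sprint path.
func stringifyValue(v any) string {
	switch v.(type) {
	case map[string]any, []any:
		if b, err := json.Marshal(v); err == nil {
			return string(b)
		}
	}
	return fmt.Sprint(v)
}

func main() {
	fmt.Println(stringifyValue(map[string]any{"k": "v"}))
	fmt.Println(stringifyValue("plain string"))
}
```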

just verify still PASS — these are pure cleanup, no behavior change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:16:07 -05:00
root
c41698acae scrum rerun-2 — 50/60 (Δ R1 +7, Δ baseline +15) at c7e3124
Audited stash-clean c7e3124 (30 commits past rerun-1 4840c10).
3 HIGH risks closed (R-002 internal/shared, R-003 internal/storeclient,
R-008 queryd/db.go). 3 advanced to partial (R-001 via fail-loud-bind +
opt-in auth, R-006 via g2_smoke_fixtures, R-007 via ADR-003 auth.go).

Biggest move: Agent Memory Correctness 4 → 9 — pathway Mem0 ops
(ADD/UPDATE/REVISE/RETIRE/HISTORY) all tested, including cycle-detection
and retired-trace-exclusion. Sprint 2 acceptance criteria are now
verified code, not design-bar work.

Two new findings:
- F1 (MED): cmd/{matrixd,observerd,pathwayd}/main_test.go absent —
  reopens R-005 against new daemons.
- F2 (LOW): scripts/staffing_*/main.go flag-defaults reach
  /home/profit/lakehouse/data/...

Evidence under reports/scrum/_evidence/rerun2/ (local; per
.gitkeep convention).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:13:01 -05:00
root
c7e3124208 §3.8 second slice: real modes wired (matrix.relevance/downgrade/search,
distillation.score, drift.scorer)

Lands the workflow.Mode adapters for the §3.4 components + the
distillation scorer + drift quantifier. Workflows can now compose
real measurement capabilities; the substrate's parallel
capabilities become composable Lego bricks (per the prior commit's
closing insight).

Modes registered (in observerd's registerBuiltinModes):

  Pure-function wrappers (no I/O):
    - matrix.relevance    → matrix.FilterChunks
    - matrix.downgrade    → matrix.MaybeDowngrade
    - distillation.score  → distillation.ScoreRecord
    - drift.scorer        → drift.ComputeScorerDrift

  HTTP-backed:
    - matrix.search       → POST matrixd /matrix/search
                             (registered only when matrixd_url is set)

  Fixture (kept from §3.8 first slice):
    - fixture.echo, fixture.upper

internal/workflow/modes.go:
  Each mode follows the same glue pattern: marshal generic input
  through a typed struct (free schema validation + clear error
  messages), call the underlying capability, return a generic
  output map. Roundtrip-via-JSON gives us schema validation
  without writing custom field-by-field coercion.

internal/workflow/modes_test.go (10 tests, all PASS):
  - matrix.relevance filters adjacency pollution (Connector kept,
    catalogd::Registry dropped — same headline as the relevance
    smoke, run through the workflow mode)
  - matrix.downgrade flips lakehouse→isolation on strong model;
    keeps lakehouse on weak (qwen3.5:latest); errors on missing
    fields
  - distillation.score rates scrum_review attempt_1 as accepted;
    rejects empty record
  - drift.scorer reports zero drift on matched inputs; errors on
    empty inputs slice
  - matrix.search HTTP flow round-trips through httptest fake
    matrixd; non-OK status surfaces a clear error

scripts/workflow_smoke.sh (5 assertions PASS, was 4):
  New assertion #5: real-mode chain
    matrix.downgrade (lakehouse + grok-4.1-fast → isolation)
    → distillation.score (scrum_review attempt_1 → accepted)
  Proves §3.4 components compose through the workflow runner with
  no fixture intermediation. Both nodes ran successfully, runner
  recorded provenance, status=succeeded.

  Mode listing assertion now expects 7 modes (5 real + 2 fixture)
  instead of just the fixtures.

17-smoke regression all green. SPEC §3.8 acceptance gate G3.8.D
("Mode catalog dispatches matrix.search invocation to the matrixd
backend without going through HTTP") still pending — current path
goes through HTTP for matrix.search, which is the cleaner service-
mesh shape but slower than direct in-process. In-process dispatch
when matrixd is co-resident is a future optimization.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:39:26 -05:00
root
e30da6e5aa §3.8 first slice: workflow runner skeleton + DAG executor + observerd integration
Lands the structural piece of SPEC §3.8 (Observer-KB workflow runner)
documented in 97dd3f8: types + DAG runner + reference substitution +
provenance recording into observerd. Real-mode integrations
(matrix.search, distillation.score, drift.scorer, llm.chat) come in
follow-up commits — this commit proves the mechanics.

internal/workflow/types.go:
  - Workflow / Node / NodeResult / RunResult types matching Archon's
    YAML shape so existing workflows (e.g. lakehouse-architect-review.yaml)
    load directly. Optional `mode` field added — implicit fall-back is
    "llm.chat" matching Archon's convention.
  - Mode signature: func(Context, map[string]any) (map[string]any, error)
  - 5 sentinel errors: ErrCycle, ErrMissingDep, ErrUnknownMode,
    ErrDuplicateNodeID, ErrUnresolvedRef
  - Validate enforces structural invariants: unique IDs, every
    depends_on resolves, no cycles

internal/workflow/runner.go:
  - Kahn's-algorithm topological sort, stable for declaration-order
    ties (deterministic execution + JSON output across runs)
  - Reference substitution: $node_id.output.key.path resolves through
    nested maps; $node_id alone resolves to the whole output map
  - Skip cascade: a node whose dependency failed/skipped is skipped
    with explicit "upstream node X failed" error in NodeResult, never
    silently dropped
  - Per-node provenance: NodeResult.StartedAt + DurationMs captured
    for every execution
  - Mode pre-validation: every node's mode is checked against the
    registry BEFORE any node runs — a typo is caught in ~5ms, not
    after 6 nodes have already run

internal/workflow/runner_test.go (14 tests, all PASS):
  - Validate: missing name, no nodes, duplicate IDs, missing deps, cycles
  - Run: single node, 3-node DAG with chained $-refs (shape→weakness→improvement),
    failed-node skip cascade with independent siblings still running,
    unknown-mode abort, unresolved-reference error, implicit
    llm.chat fallback, provenance fields populated, inputs (not just
    prompt) honor $-refs, topological-sort stability for ties

cmd/observerd extended:
  - POST /observer/workflow/run executes a workflow, records each
    node's execution as an ObservedOp (source="workflow"), returns
    the full RunResult
  - GET /observer/workflow/modes lists the registered mode names
  - registerBuiltinModes wires fixture.echo + fixture.upper for v0;
    real modes register here in follow-up commits

scripts/workflow_smoke.sh (4 assertions PASS):
  - GET /modes lists fixture.echo + fixture.upper
  - 3-node DAG executes: shape (uppercase "hello world") → weakness
    (sees "HELLO WORLD" via $shape.output.upper ref) → improvement
    (sees "HELLO WORLD" propagated through 2-hop $weakness.output.prompt)
  - /observer/stats shows by_source.workflow == 3 (one per node) and
    total == 3 — provenance lands as expected
  - Unknown mode → 400 with "unknown mode" in error body

17-smoke regression all green. Acceptance gates G3.8.A (Archon-shape
workflow loads + executes topologically) + G3.8.B (per-node ObservedOps)
+ G3.8.C ($prior_node.output ref resolves, error on missing ref) all
satisfied. G3.8.D (in-process matrix.search dispatch) deferred until
a real mode is wired.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:34:30 -05:00
root
97dd3f826d SPEC §3.5/§3.6/§3.7/§3.8 — name F/B/C as port targets + add Archon-style workflow runner
Per the 2026-04-29 scope-discipline pause: the wave shipped four
pieces beyond SPEC §3.4 component scope, and one architectural
pattern surfaced (Archon-style multi-pass workflow runner) that's
the observer's natural growth path. Document them as port targets
so the next scrum review has authoritative SPEC components.

§3.5 — Drift quantification (loop 5 of the PRD)
  Names the SCORER drift work shipped in be65f85 + the deferred
  shapes (PLAYBOOK drift, EMBEDDING drift, AUDIT BASELINE drift).
  Acceptance gates G3.5.A–B.

§3.6 — Staffing-side structured filter
  Names the metadata-filter MVP shipped in b199093 + the deferred
  pre-retrieval SQL gate via queryd. Acceptance gates G3.6.A–C.

§3.7 — Operational rating wiring
  Names the bulk playbook-record endpoint shipped in 6392772 + the
  deferred UI shim, negative-feedback path, and time-decay.
  Acceptance gates G3.7.A–B.

§3.8 — Observer-KB workflow runner (Archon-style multi-pass) —
       PORT TARGET, not yet started
  Documents the architecture J was working on across the Rust
  observer-kb branch (10 commits ahead of main, never merged) and
  the local Archon mod (committed 2026-04-29 as 3f2afc8 in
  /home/profit/external/Archon, not pushed to coleam00/Archon).

  The pattern: multi-pass mode chain (extract → validator →
  hallucination → consensus → redteam → pipeline → render) where
  each pass is a deterministic measurement. The observer is the
  natural home — workflows ARE observation patterns whose every
  step is recorded. Five components in dependency order: workflow
  definition (YAML), node executor (DAG runner), provenance
  recording (ObservedOps), mode catalog (matrix.search,
  distillation.score, drift.scorer, llm.chat), HTTP surface
  (/v1/observer/workflow/run).

  Reference materials on the system (preserved, not lost):
    - /home/profit/lakehouse/.archon/workflows/lakehouse-architect-review.yaml
      (Rust main, 69919d9) — 3-node Archon-via-Lakehouse proof
    - /home/profit/external/Archon dev branch — upstream engine
      with local pi/provider.ts mod (3f2afc8) for Lakehouse routing
    - Rust observer-kb branch — apps/observer-kb/docs/PRD.md +
      Python prototypes proven on real ChatGPT/Claude PDF data

  Acceptance gates G3.8.A–D. Estimated effort: L.

PRD updated with "Observer as system resource (clarified
2026-04-29)" section pointing at §3.8 as the architectural growth
path. The bare-bones observerd in bc9ab93 is the substrate; the
workflow runner is what makes it the "objective measurement engine"
the small-model pipeline needs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:27:41 -05:00
root
bc9ab93afe H: observerd — autonomous-iteration witness loop (SPEC §2 port)
Port of the load-bearing pieces of mcp-server/observer.ts (Rust
system, 852 lines TS) per SPEC §2's named target. Implements PRD
loop 3 ("Observer loop — watches each run, refines configs").

Routes (all under /v1/observer/* via gateway):
  GET  /observer/health   — liveness
  GET  /observer/stats    — total / successes / failures /
                             by_source / recent_scenario_ops
                             (matches Rust JSON shape exactly)
  POST /observer/event    — record one ObservedOp; auto-defaults
                             timestamp + source, validates required
                             fields (endpoint), persists to JSONL,
                             appends to ring buffer

Architecture:
  - internal/observer/types.go — ObservedOp model + Source taxonomy
    (mcp / scenario / langfuse / overseer_correction). Mirrors the
    Rust shape so JSON round-trips during cutover.
  - internal/observer/store.go — Store + Persistor. Ring buffer cap
    matches Rust's 2000; recent_scenarios cap matches Rust's 10.
    Same persist-then-apply order as pathwayd; same corruption-
    tolerant replay (skip malformed lines + warn).
  - cmd/observerd — :3219 HTTP service, fronted by gateway as
    /v1/observer/*.
  - lakehouse.toml + DefaultConfig — [observerd] block matches the
    pathwayd pattern (Bind + PersistPath; empty path = ephemeral).

Tests + smoke (all PASS):
  - 7 unit tests in store_test.go: validation, default fields,
    stats aggregation, recent-scenarios cap + ordering, ring-buffer
    rollover at cap, JSONL round-trip persistence, corruption-
    tolerant replay (1 valid + 1 corrupt + 1 valid → 2 applied)
  - scripts/observer_smoke.sh: 4 assertions through gateway —
    record 5 events (3 ok / 2 fail across 2 sources), stats
    aggregates correctly, empty-endpoint→400, kill+restart preserves
    via JSONL replay (5 ops, 3 ok, 2 err survive)

Deferred (named in package + cmd doc, not in this commit):
  - POST /observer/review (cloud-LLM hand-review fall-back). The
    heuristic-only path could land cheaply but the productized
    cloud path (qwen3-coder fall-back) is multi-day port.
  - Background loops: analyzeErrors, consolidatePlaybooks,
    tailOverseerCorrections (read overseer_corrections.jsonl into
    the ring buffer once per cycle).
  - escalateFailureClusterToLLMTeam (failure clustering trigger
    that posts to LLM Team's /api/run with code_review mode).

/relevance is NOT duplicated — already ported in 9588bd8 to
internal/matrix/relevance.go (component 3 of SPEC §3.4).

16-smoke regression all green (D1-D6, G1, G1P, G2, storaged_cap,
pathway, matrix, relevance, downgrade, playbook, observer).
13 binaries now: gateway, storaged, catalogd, ingestd, queryd,
vectord, embedd, pathwayd, matrixd, observerd, mcpd, fake_ollama
(plus catalogd-only test build).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:18:02 -05:00
root
6392772f41 C: bulk playbook record — operational rating wiring
POST /v1/matrix/playbooks/bulk accepts an array of playbook entries
and records each independently — per-entry failures don't abort the
batch. Designed for two operational use cases:

  1. Backfilling historical placement data into the playbook
     substrate (the Rust system has 4,701 fill operations recorded
     with embeddings; that data deserves to feed the Go learning
     loop without a 4,701-call procedural script).
  2. Batched click-tracking from a session's worth of coordinator
     interactions, posted once at idle rather than per-click.

Per-entry response shape: {index, playbook_id} on success or
{index, error} on failure. Caller can inspect failures without
diffing.

Smoke (scripts/playbook_smoke.sh, new assertion #4):
  Bulk POST 3 entries: 2 valid (alpha→widget-a, bravo→widget-b) +
  1 invalid (empty query_text). Verifies recorded=2, failed=1,
  the 2 valid ones get playbook_ids back, and the invalid one
  surfaces its validation error in-line.

Single-record /matrix/playbooks/record from 06e7152 still works
unchanged; bulk is additive. The corpus field can be set per-
entry or once at the batch level (entry-level wins on collision).

Per the small-model autonomous pipeline framing: this is the
"the playbook gets denser with each iteration" mechanism. Click
tracking → bulk POST → playbook entries → future similar queries
get those answers boosted via the existing /matrix/search
use_playbook path. The learning loop now has both inflows wired
(single + bulk) — what remains is the demo UI shim that calls
/feedback on result interaction (deferred — no Go demo UI yet).

15-smoke regression all green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:10:13 -05:00
root
b199093d1f B: matrix metadata filter — post-retrieval structured gate
Addresses the reality-test gap surfaced by the candidates and
multi-corpus e2e runs (0d1553c, a97881d): semantic-only retrieval
can't gate by status / state / availability. SearchRequest now
takes an optional MetadataFilter map; results whose metadata
doesn't match every key are dropped before top-K truncation.

Filter value semantics:
  string|number|bool → exact equality (JSON-canonical, so 1 ≡ 1.0)
  []any              → OR within key (any element matching wins)
  AND across keys: every filter key must match.

Missing key in metadata = drop. Malformed metadata = drop. Filter
absent or empty = pass through (zero overhead).
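
As a minimal standalone sketch of those semantics (matchesFilter and
canon are illustrative names, not the internal/matrix API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// matchesFilter reports whether metadata satisfies every filter key
// (AND across keys). A []any filter value is an OR within the key.
// Values compare JSON-canonically, so 1 matches 1.0.
func matchesFilter(metadata, filter map[string]any) bool {
	for key, want := range filter {
		got, ok := metadata[key]
		if !ok {
			return false // missing key in metadata = drop
		}
		if alts, isList := want.([]any); isList {
			matched := false
			for _, alt := range alts {
				if canon(got) == canon(alt) {
					matched = true
					break
				}
			}
			if !matched {
				return false
			}
		} else if canon(got) != canon(want) {
			return false
		}
	}
	return true // nil/empty filter passes everything through
}

// canon builds a JSON-canonical comparison key; round-tripping via
// json.Marshal makes int 1 and float64 1.0 render identically.
func canon(v any) string {
	b, _ := json.Marshal(v)
	return string(b)
}

func main() {
	meta := map[string]any{"status": "active", "years": float64(5)}
	fmt.Println(matchesFilter(meta, map[string]any{"status": "active"}))             // true
	fmt.Println(matchesFilter(meta, map[string]any{"status": []any{"active", "x"}})) // true (OR within key)
	fmt.Println(matchesFilter(meta, map[string]any{"city": "Chicago"}))              // false (missing key)
}
```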

The response now reports MetadataFilterDropped so callers can see
how aggressive the filter was without re-querying.

Caveat (also captured in code comment): this is POST-retrieval, not
PRE-filtering via SQL. Aggressive filters can shrink the result set
below K; caller should bump PerCorpusK to compensate. A queryd-
backed pre-filter is a future commit; this lands the user-visible
fix today.

Tests:
  - 7 unit tests (internal/matrix/filter_test.go) covering: nil/
    empty filter pass-through, missing-metadata always-fails,
    single-value exact match (incl. numeric 5 ≡ 5.0), AND across
    keys, OR within list, bool match, malformed JSON metadata
  - matrix_smoke.sh: new assertion #7 — filter
    label∈{"a near","b near"} drops the 4 mid/far entries from the
    6-entry pool, keeping exactly 2 (one per corpus, both with the
    matching label). Dropped count surfaces in the response.

15-smoke regression all green. vet clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:08:56 -05:00
root
be65f85f17 F: drift quantification — scorer drift first
PRD's 5-loop substrate names "drift" as loop 5: quantify when
historical decisions stop matching current reality. Distinct from
the rating+distillation loop because drift is MEASUREMENT, not
LEARNING. The learning loop says "this match worked, remember it";
the drift loop says "this 4-month-old playbook entry — does it
still match what the substrate would surface today?"

First-shipped drift shape: SCORER drift. When the deterministic
scorer's ScorerVersion bumps, historical ScoredRuns may no longer
match what the current scorer produces on the same EvidenceRecord.

internal/drift/drift.go:
  - ScorerDriftInput  — (EvidenceRecord, persisted_category) pair
  - ScorerDriftEntry  — one mismatch with current reasons attached
  - CategoryShift     — (from, to, count) cell in the shift matrix
  - ScorerDriftReport — summary + sorted shift matrix + optional entries
  - ComputeScorerDrift(inputs, includeEntries) — pure function;
    re-runs ScoreRecord over each input and reports mismatches
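
The drift-rate and shift-matrix bookkeeping can be sketched standalone
(inputs simplified to category strings; the real types live in
internal/drift/drift.go):

```go
package main

import (
	"fmt"
	"sort"
)

// CategoryShift is one (from, to, count) cell of the shift matrix.
type CategoryShift struct {
	From, To string
	Count    int
}

// computeShifts tallies persisted-vs-current category mismatches,
// returning cells sorted by count descending plus the drift rate.
// Empty input yields a zero rate (no division by zero).
func computeShifts(persisted, current []string) ([]CategoryShift, float64) {
	counts := map[[2]string]int{}
	drifted := 0
	for i := range persisted {
		if persisted[i] != current[i] {
			counts[[2]string{persisted[i], current[i]}]++
			drifted++
		}
	}
	shifts := make([]CategoryShift, 0, len(counts))
	for k, n := range counts {
		shifts = append(shifts, CategoryShift{From: k[0], To: k[1], Count: n})
	}
	sort.Slice(shifts, func(i, j int) bool { return shifts[i].Count > shifts[j].Count })
	if len(persisted) == 0 {
		return shifts, 0
	}
	return shifts, float64(drifted) / float64(len(persisted))
}

func main() {
	// Mirrors the "shift detected" test shape: 5 inputs, 3 drifted.
	persisted := []string{"accepted", "accepted", "accepted", "rejected", "accepted"}
	current := []string{"partially_accepted", "partially_accepted", "partially_accepted", "rejected", "accepted"}
	shifts, rate := computeShifts(persisted, current)
	fmt.Println(shifts[0].From, shifts[0].To, shifts[0].Count, rate) // accepted partially_accepted 3 0.6
}
```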

Why this matters: without a drift quantifier, a scorer-rule change
silently invalidates the historical training data feeding the
learning loop. With drift quantification, a rule change surfaces
a concrete number ("847 of 4701 historical ScoredRuns now
disagree") that triggers a re-score-and-retrain cycle rather than
letting the substrate quietly rot.

Tests (6/6 PASS):
  - No-drift: all 3 inputs match → 100% matched
  - Shift detected: 5 inputs, 3 drift cases, drift_rate=0.6,
    shift matrix shows accepted→partially_accepted x3
  - Multiple shifts sorted by count desc
  - includeEntries=false skips the per-mismatch list
  - Empty input → all-zero report (no division-by-zero)
  - ScorerVersion stamped on every report

Future drift shapes (deferred to follow-ups, named in package doc):
  - PLAYBOOK drift: re-run playbook queries through current
    matrix-search; recorded answer not in top-K = drift
  - EMBEDDING drift: KS-test on vector distribution at T1 vs T2
  - AUDIT BASELINE drift: matches Rust audit_baselines.jsonl
    longitudinal signal

Pure compute. Materialization layer (read scored-runs jsonl + their
matching evidence jsonl + feed into ComputeScorerDrift) lands with
the distillation materialization commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:06:17 -05:00
root
57d0df125d E (partial): distillation port — scorer + contamination firewall
First slice of the Rust v1.0.0 distillation substrate (e7636f2)
ported to Go per ADR-001 #4 (port LOGIC, not bit-identical
reproducibility). This commit lands the LOAD-BEARING pieces named
in project_distillation_substrate.md memory:

  - The deterministic Success Scorer (8 sub-scorers + dispatch)
  - The contamination firewall on SFT samples (the "non-negotiable"
    spec property: rejected/needs_human_review NEVER ship to SFT)
  - All on-wire types + validators for ScoredRun, SftSample,
    EvidenceRecord with Provenance

Files:
  internal/distillation/types.go  — types + ScorerVersion + SftNever
                                    + ValidateScoredRun + ValidateSftSample
  internal/distillation/scorer.go — ScoreRecord + 8 class scorers +
                                    BuildScoredRun (deterministic)
  internal/distillation/scorer_test.go — ~40 test cases:
    - source-class dispatch (verdict / telemetry / extraction)
    - scrum_review (4 attempt cases)
    - observer_review (5 verdict cases)
    - audit (legacy + severity, 9 cases)
    - auto_apply (4 cases)
    - outcomes / mode_experiment / extraction
    - CONTAMINATION FIREWALL: ErrSftContamination sentinel fires
      on rejected/needs_human_review, distinct from typo errors
    - empty-pair guard (instruction/response trim != "")
    - reasons-required ScoredRun validation
    - deterministic sig_hash on identical input
    - purity check (input not mutated, repeatable output)

Per the 2026-04-29 cross-lineage scrum's discipline: false-positive
findings would be dismissed inline (none in this commit). Real
findings would be addressed before merge — but this is greenfield
port code reviewed against its Rust source line-by-line, which the
test suite encodes as truth tables.

Explicitly DEFERRED to follow-up commits:
  - Materialization layer (jsonl read/write, date-partitioned
    storage in data/scored-runs/YYYY/MM/DD/, evidence index)
  - SFT exporter (file iteration + filtering — the SCORING firewall
    is here; the EXPORT firewall is the next layer)
  - export_preference, export_rag (other export shapes)
  - Acceptance harness (16/16 acceptance gate that locks v1.0.0)
  - replay, receipts, build_evidence_index, transforms

The scorer + firewall validator are pure functions — operational
tooling layers on top without changing the deterministic logic the
downstream learning loop depends on. The Go ScorerVersion stays at
v1.0.0 to match the Rust e7636f2 baseline; bumping in the Go
materialization commit is reserved for the next scoring-rule
change, NOT the port itself.

15-smoke regression all green. vet clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:04:29 -05:00
root
7f42089521 D: embed-text iteration — clean negative finding (3 variants tested)
Workers driver embed text reverted to V0 after testing 3 variants
on the "Forklift operator with OSHA-30 certification, warehouse
experience" reality-test query against 5000 workers (which contains
569 actual Forklift Operators per the 31b4088 probe).

  V0 (current, restored):  "Worker role: <role>. Skills: ...
                            Certifications: ... <resume_text>"
                           → 6 workers in top-8, 0 Forklift Ops,
                             top distance 0.327, top role
                             "Production Worker"
  V4a (role-doubled):      "<role>. <role> with <skills>. ..."
                           drop archetype + resume_text
                           → 6 workers in top-8, 0 Forklift Ops,
                             top distance 0.254, top role
                             "Production Worker"
  V4b (resume-only):       just the resume_text natural-language
                           sentence, no structured prefix
                           → 4 workers in top-8 (WORSE mix —
                             software-engineer candidates filled
                             the displaced slots), 0 Forklift Ops,
                             top distance 0.379

Conclusion: all three variants surface Production Workers / Machine
Operators / Line Leads ABOVE Forklift Operators for this query.
The 569 actual Forklift Operators in the 5000-row sample don't
appear in any top-8. Embed-text design isn't the bottleneck —
nomic-embed-text 137M's geometry doesn't separate "Forklift
Operator" from "Production Worker" / "Machine Operator" / "Line
Lead" in this query's neighborhood.

Real fixes belong elsewhere:
  - Hybrid SQL+semantic (B): pre-filter by role/certs via queryd
    before semantic ranking. Addresses the gap directly.
  - Different embedding model: mxbai-embed-large or a staffing-
    fine-tuned model. Costs an Ollama model swap + re-embedding.
  - Playbook boost (component 5, already shipped): record
    successful Forklift placements; future queries surface those
    workers via similarity. Compounds with use.

V0 restored because it has the best worker/candidate mix in top-8
(6 vs 4 in V4b), preserving the multi-corpus reality-test signal
quality even if the role match is imperfect. Comments updated to
record the experiment so future sessions don't relitigate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:58:39 -05:00
root
a730fc2016 scrum fixes: 4 real findings landed, 4 false positives dismissed
Cross-lineage scrum review on the 12 commits of this session
(afbb506..06e7152) via Rust gateway :3100 with Opus + Kimi +
Qwen3-coder. Results:

  Real findings landed:
    1. Opus BLOCK — intra-batch duplicates in vectord BatchAdd trip
       coder/hnsw's "node not added" length-invariant panic. Fixed
       with last-write-wins dedup inside BatchAdd before the
       pre-pass. Regression test TestBatchAdd_IntraBatchDedup added.
    2. Opus + Kimi convergent WARN — strings.Contains(err.Error(),
       "status 404") was brittle string-matching to detect cold-
       start playbook state. Fixed: ErrCorpusNotFound sentinel
       returned by searchCorpus on HTTP 404; fetchPlaybookHits
       uses errors.Is.
    3. Opus WARN — corpusingest.Run returned nil on total batch
       failure, masking broken pipelines as "empty corpora." Fixed:
       Stats.FailedBatches counter, ErrPartialFailure sentinel
       returned when nonzero. New regression test
       TestRun_NonzeroFailedBatchesReturnsError.
    4. Opus WARN — dead var _ = io.EOF in staffing_500k/main.go
       was justified by a fictional comment. Removed.

  Drivers (staffing_500k, staffing_candidates, staffing_workers)
  updated to handle ErrPartialFailure gracefully — print warn, keep
  running queries — rather than fatal'ing on transient hiccups
  while still surfacing the failure clearly in the output.

  Documented (no code change):
    - Opus WARN: matrixd /matrix/downgrade reads
      LH_FORCE_FULL_ENRICHMENT from process env when body omits
      it. Comment now explains the opinionated default and points
      callers wanting deterministic behavior to pass the field
      explicitly.

  False positives dismissed (caught and verified, NOT acted on):
    A. Kimi BLOCK on errors.Is + wrapped error in cmd/matrixd:223.
       Verified false: Search wraps with %w (fmt.Errorf("%w: %v",
       ErrEmbed, err)), so errors.Is matches the chain correctly.
    B. Kimi INFO "BatchAdd has no unit tests." Verified false:
       batch_bench_test.go has BenchmarkBatchAdd; the new dedup
       test TestBatchAdd_IntraBatchDedup adds another.
    C. Opus BLOCK on missing finite/zero-norm pre-validation in
       cmd/vectord:280-291. Verified false: line 272 already calls
       vectord.ValidateVector before BatchAdd, so finite + zero-
       norm IS checked. Pre-validation is exhaustive.
    D. Opus WARN on relevance.go tokenRe (Opus self-corrected
       mid-finding when realizing leading char counts toward token
       length).

  Qwen3-coder returned NO FINDINGS — known issue with very long
  diffs through the OpenRouter free tier; lineage rotation worked
  as designed (Opus + Kimi between them caught everything Qwen
  would have).

15-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix, relevance, downgrade, playbook).
Unit tests all green (corpusingest +1, vectord +1).

Per feedback_cross_lineage_review.md: convergent finding #2 (404
detection) is the highest-signal one — both Opus and Kimi
flagged it independently. The other Opus findings stand on
single-reviewer signal but each one verified against the actual
code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:42:39 -05:00
root
06e71520c4 matrix: playbook memory + boost — SPEC §3.4 component 5 of 5 (LEARNING LOOP)
Closes SPEC §3.4. The matrix indexer is now a learning meta-index per
feedback_meta_index_vision.md — every successful (query → answer)
pair recorded via /matrix/playbooks/record boosts that answer for
future similar queries.

This is the architectural piece that lifts vectord from "static
hybrid search" to the meta-index J originally framed in Phase 19 of
the Rust system.

What's new:
  - internal/matrix/playbook.go — PlaybookEntry, PlaybookHit,
    ApplyPlaybookBoost. Pure-function boost math:
      distance' = distance * (1 - 0.5 * score)
    Score 0 = no boost (factor 1.0); score 1 = halve distance
    (factor 0.5). Capped at 0.5 deliberately so a single high-
    confidence playbook can't dominate the base ranking forever
    (runaway-feedback-loop guard).
  - Retriever.Record(entry, corpus) — embeds query_text, ensures
    playbook corpus exists (idempotent), upserts via deterministic
    sha256-derived ID (last score wins on re-record of same triple).
  - Retriever.Search extended with UsePlaybook + PlaybookCorpus +
    PlaybookTopK + PlaybookMaxDistance. Reuses the query vector —
    no extra embed call. Missing-corpus 404 = no-op (cold-start
    state before any Record call), not an error.
  - POST /v1/matrix/playbooks/record (matrixd) — caller submits
    {query_text, answer_id, answer_corpus, score, tags?}; gets
    {playbook_id} back.
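
The boost math is small enough to sketch in isolation (illustrative;
the real function lives in internal/matrix/playbook.go):

```go
package main

import "fmt"

// BoostFactor maps a playbook score in [0,1] to a distance
// multiplier: score 0 → factor 1.0 (no boost), score 1 → factor 0.5
// (halve the distance). The 0.5 cap is the runaway-feedback-loop
// guard: one confident playbook entry can never fully dominate the
// base ranking.
func BoostFactor(score float64) float64 {
	if score < 0 {
		score = 0
	}
	if score > 1 {
		score = 1
	}
	return 1 - 0.5*score
}

func main() {
	// Mirrors the smoke assertion: distance 0.6566 at score=1.0
	// halves to 0.3283, a ratio of exactly 0.5.
	fmt.Println(0.6566 * BoostFactor(1.0)) // 0.3283
}
```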

Storage: a vectord index named "playbook_memory" (configurable per
request) with embed(query_text) as the vector and the
PlaybookEntry JSON as metadata. Just another corpus — observable
from /vectors/index, persistable through G1P, etc.

Match key for boost: (AnswerID, AnswerCorpus). Cross-corpus ID
collisions don't false-match — verified by
TestApplyPlaybookBoost_CorpusAttributionRespected.

End-to-end smoke (scripts/playbook_smoke.sh, all assertions PASS):
  - Baseline search: widget-c at distance 0.6566 (rank 3)
  - Record playbook: query → widget-c, score=1.0
  - Re-search with use_playbook=true:
      widget-c distance: 0.3283 (rank 2)
      ratio: 0.5 EXACTLY (matches boost math precisely)
      playbook_boosted: 1
  - widget-c jumped from #3 to #2 — learning loop visible

Tests:
  - 8 unit tests in internal/matrix/playbook_test.go covering
    Validate, BoostFactor (5 cases), the no-boost identity, the
    boost-moves-result-up scenario, highest-score wins on duplicate
    matches, cross-corpus attribution, JSON round-trip, and
    rejection of empty metadata
  - scripts/playbook_smoke.sh integration test (3 assertions PASS)

15-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix, relevance, downgrade, playbook).

SPEC §3.4 NOW COMPLETE: 5 of 5 components shipped. The matrix
indexer's port is done as a substrate; remaining work is operational
(rating signal sources, telemetry, eventual structured filtering for
staffing data — none in §3.4).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:34:24 -05:00
root
31b408882b multi_corpus_e2e: WORKERS_LIMIT env knob — and the embed-text-not-sample-size finding
Adds WORKERS_LIMIT env override (default 5000) so the e2e can be
re-run at different sample sizes. Tiny change; the interesting part
is the FINDING that motivated the run.

Investigation: a97881d's reality test put zero Forklift Operators in
the top-6 for "Forklift operator with OSHA-30 certification,
warehouse experience" — instead returned Production Worker / Machine
Operator / Assembler.

Hypothesis tested: maybe the 5000-row sample didn't contain
forklift operators in retrievable density.

Result: hypothesis falsified. Direct probe of workers_500k.parquet:

  All 500K rows         → 55,349 Forklift Operators (11.07%)
                       → 150,328 with "forklift" in certs
                       → 74,852 with OSHA-30 specifically
  First 5K rows         → 569 Forklift Operators (11.38%)
                       → distribution matches global, no ordering bias

So 569 forklift operators were IN the corpus the matrix indexer
searched and STILL didn't surface in top-6. That means the bottleneck
isn't sample size — it's nomic-embed-text + our embed-text template
ranking "Production Worker" / "Machine Operator" / "Assembler" as
semantically nearer to the query than literal "Forklift Operator".

The reality test exposed this faithfully. Three real follow-ups, none
in scope of this commit:

  1. Embed text design — front-loading role + certs (currently
     "Worker role: <role>" then skills then certs) might dominate
     retrieval better. Worth A/B-testing.
  2. Hybrid SQL+semantic — pre-filter by role/certs via queryd
     before semantic ranking. Not in SPEC §3.4 today; would address
     the "available" / "Chicago" gap from the candidates reality
     test (0d1553c) too.
  3. Playbook-memory boost — SPEC §3.4 component 5. When a query
     "Forklift OSHA-30" was answered with worker w-X in the past,
     boost w-X's score for similar future queries. The retrieval
     gap CAN be bridged by the learning loop without changing the
     base embedder.

Commits the env knob; the finding lives in the commit body so future
sessions don't re-run the sample-size hypothesis.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:26:32 -05:00
root
a97881d80c workers corpus + multi-corpus reality test — matrix indexer end-to-end
Lands the second real-data corpus (workers_500k) and the first
multi-corpus reality test through /v1/matrix/search composing both
corpora live.

What's new:
  - scripts/staffing_workers/main.go — parquet driver over
    workers_500k.parquet, multi-chunk arrow handling (workers
    parquet has multiple row groups vs candidates' one). Embed text:
    role + skills + certifications + city + state + archetype +
    resume_text. IDs prefixed "w-".
  - scripts/multi_corpus_e2e.sh — first end-to-end test composing
    both corpora through the matrix indexer.

Real-data multi-corpus result (this commit):
  Query: "Forklift operator with OSHA-30 certification, warehouse
          experience"
  Corpora: workers (5000 rows) + candidates (1000 rows)
  Merged top-8: workers=6, candidates=2

  Top hits:
    w  d=0.327  w-4573  Production Worker
    w  d=0.353  w-1726  Machine Operator
    w  d=0.362  w-3806  Production Worker
    w  d=0.366  w-1000  Machine Operator
    w  d=0.374  w-1436  Assembler
    w  d=0.395  w-162   Machine Operator
    c  d=0.440  c-CAND-00727  C#,.NET,Azure
    c  d=0.446  c-CAND-00031  React,TypeScript,Node

The matrix indexer correctly chose the right domain — manufacturing/
warehouse roles in workers (correct semantic match for the staffing
query) rank ABOVE software-engineer candidates from the candidates
corpus. 0.11 gap between the worst worker (0.395) and the best
candidate (0.440) — clean distance separation.

Compared to the candidates-only e2e run from 0d1553c:
  candidates-only top: c-CAND-00727 at d=0.4404
  multi-corpus top:    w-4573 at d=0.3265 (a Production Worker)

That's the matrix indexer's whole point made visible: composing
domain-distinct corpora surfaces better matches than single-corpus
search. Without workers in the search space, the staffing query
returned software engineers (wrong domain). With workers, it
returns roles in the right ballpark.

What's still imperfect (signal for component 5 + future work):
  - No top-6 worker actually has "Forklift" or "OSHA-30" visible in
    metadata; "Production Worker" is semantically nearest in this
    sample. Likely needs a larger workers ingest (5000 from 500K)
    or skill-keyword boost.
  - Status/availability still not gated. The staffing-side
    structured filtering gap from 0d1553c persists; relevance filter
    (CODE-aware) doesn't address it.

Pipeline timings:
  workers ingest: 5000 rows / 19.2s = 260/sec end-to-end
  candidates ingest: 1000 rows / 3.1s = 322/sec
  multi-corpus query (text → embed → 2 parallel vectord → merge): 14ms

14-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix, relevance, downgrade).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:22:16 -05:00
root
3968ec8a7b matrix: strong-model downgrade gate — SPEC §3.4 component 4 of 5
Pure-Go port of mode.rs::execute's pass5 downgrade gate (Rust
2026-04-26). Adds POST /v1/matrix/downgrade endpoint via matrixd.

The gate captures the pass5 finding: composing matrix corpora into
codereview_lakehouse on a strong model LOST 5/5 head-to-head reps
against matrix-free codereview_isolation on grok-4.1-fast (p=0.031).
Strong models have enough native capacity that bug fingerprints +
adversarial framing + file content carry them; matrix chunks
displace depth-of-analysis.

Logic (matches Rust mode.rs:614-632):
  if mode == codereview_lakehouse
     && !forced_mode
     && !LH_FORCE_FULL_ENRICHMENT
     && !is_weak_model(model)
  → flip to codereview_isolation, record downgraded_from

is_weak_model captures the empirical weak-list:
  - `:free` suffix or `:free/` infix (OpenRouter free tier)
  - qwen3.5:latest, qwen3:latest (local last-resort rungs)
  - everything else → strong by default
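
A standalone sketch of the gate (function names illustrative; the
real logic is in internal/matrix/downgrade.go):

```go
package main

import (
	"fmt"
	"strings"
)

// isWeakModel mirrors the empirical weak-list above: OpenRouter
// free-tier tags and the local last-resort rungs; everything else
// defaults to strong.
func isWeakModel(model string) bool {
	if strings.HasSuffix(model, ":free") || strings.Contains(model, ":free/") {
		return true
	}
	switch model {
	case "qwen3.5:latest", "qwen3:latest":
		return true
	}
	return false
}

// maybeDowngrade flips codereview_lakehouse to codereview_isolation
// for strong models unless the mode was forced or full enrichment
// was explicitly requested. Returns the final mode and whether a
// downgrade happened.
func maybeDowngrade(mode string, forced, forceFullEnrichment bool, model string) (string, bool) {
	if mode == "codereview_lakehouse" && !forced && !forceFullEnrichment && !isWeakModel(model) {
		return "codereview_isolation", true
	}
	return mode, false
}

func main() {
	mode, downgraded := maybeDowngrade("codereview_lakehouse", false, false, "grok-4.1-fast")
	fmt.Println(mode, downgraded) // codereview_isolation true
}
```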

Tests:
  - 3 unit tests in internal/matrix/downgrade_test.go: IsWeakModel
    coverage, MaybeDowngrade truth table (5 rows), forced-mode
    precedence (forced beats every other bypass)
  - scripts/downgrade_smoke.sh: 6 assertions through gateway covering
    all 5 truth-table rows + empty-mode 400

14-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix, relevance, downgrade).

SPEC §3.4 progress: 4 of 5 components shipped (corpus builders,
multi-corpus retrieve+merge, relevance filter, downgrade gate).
Last component is learning-loop integration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:17:55 -05:00
root
9588bd82ae matrix: relevance filter — SPEC §3.4 component 3 of 5
Faithful port of mcp-server/relevance.ts (Rust observer's adjacency-
pollution filter). Same 5-signal scoring, same default threshold 0.3.
Adds POST /v1/matrix/relevance endpoint via matrixd.

Scoring signals (additive, can sign-flip):
  path_match     +1.0  chunk source/doc_id encodes focus.path
  filename_match +0.6  chunk text mentions focus's filename
  defined_match  +0.6  chunk text mentions focus.defined_symbols
  token_overlap  +0.4  jaccard of non-stopword tokens
  prefix_match   +0.3  chunk source shares first-2-segment prefix
  import_penalty -0.5  mentions ONLY imported symbols, no defined ones
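
The additive scoring reduces to a sketch like this (the Signals
struct is illustrative; the weights are the ones listed above and
the kept/dropped split is against the default 0.3 threshold):

```go
package main

import "fmt"

// Signals records which relevance signals fired for one chunk.
// TokenJaccard is the non-stopword jaccard in [0,1], weighted 0.4.
type Signals struct {
	PathMatch     bool
	FilenameMatch bool
	DefinedMatch  bool
	TokenJaccard  float64
	PrefixMatch   bool
	ImportOnly    bool // mentions ONLY imported symbols, no defined ones
}

// score sums the signal weights; signals can sign-flip the total.
func score(s Signals) float64 {
	v := 0.0
	if s.PathMatch {
		v += 1.0
	}
	if s.FilenameMatch {
		v += 0.6
	}
	if s.DefinedMatch {
		v += 0.6
	}
	v += 0.4 * s.TokenJaccard
	if s.PrefixMatch {
		v += 0.3
	}
	if s.ImportOnly {
		v -= 0.5
	}
	return v
}

func main() {
	// Shapes of the headline smoke: a defined-symbol chunk clears
	// the 0.3 threshold at +0.68; an import-only chunk sign-flips
	// to -0.46 and is dropped.
	fmt.Println(score(Signals{DefinedMatch: true, TokenJaccard: 0.2}) > 0.3) // true  (kept)
	fmt.Println(score(Signals{ImportOnly: true, TokenJaccard: 0.1}) > 0.3)   // false (dropped)
}
```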

What this does and doesn't do:
  - DOES filter code-aware corpora (eventually lakehouse_arch_v1,
    lakehouse_symbols_v1, scrum_findings_v1) — drops chunks about
    code the focus file IMPORTS rather than DEFINES, the
    "adjacency pollution" pattern that makes a reviewer LLM
    hallucinate imported-crate internals as belonging to the focus
  - DOES NOT meaningfully filter staffing data — the candidates
    reality test 2026-04-29 had "exact skill match buried at #3"
    which is a different problem (semantic-only ranking dominated
    by secondary text). Staffing needs structured filtering
    (status gates, location gates) that lives outside this
    package — future work, not in SPEC §3.4 yet

Headline smoke assertion: focus = crates/queryd/src/db.go which
defines Connector and imports catalogd::Registry. The filter
scores:
  Connector chunk: +0.68  (defined_match fires, kept)
  Registry chunk: -0.46  (import_only penalty fires, dropped)
  unrelated junk:  0.00  (no signals, dropped)

That's a 1.14-point gap between what we ARE and what we IMPORT —
the entire purpose of the filter.

Tests:
  - 9 unit tests in internal/matrix/relevance_test.go covering
    Tokenize, Jaccard, ExtractDefinedSymbols (Rust + TS),
    ExtractImportedSymbols, FilePrefix, ScoreRelevance per-signal,
    FilterChunks threshold splitting, and the headline
    AdjacencyPollutionScenario
  - scripts/relevance_smoke.sh integration smoke (3 assertions PASS):
    adjacency-pollution scenario, empty-chunks 400, threshold honored

13-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix, relevance).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:13:22 -05:00
root
0d1553ca88 candidates corpus: first deep-field reality test on real staffing data
Lands the second staffing corpus and the first end-to-end reality test
through the full Go pipeline: parquet → corpusingest → embedd →
vectord → matrixd → gateway.

What's new:
  - scripts/staffing_candidates/main.go — parquet Source over
    candidates.parquet (1000 rows, 11 cols), single-chunk arrow-go
    pqarrow read. Embed text: "Candidate skills: <s>. Based in
    <city>, <state>. <years> years experience. Status: <status>.
    <first> <last>." IDs prefixed "c-" so multi-corpus merges
    against workers ("w-") stay unambiguous.
  - scripts/candidates_e2e.sh — first integration smoke that runs
    the full stack (storaged + embedd + vectord + matrixd + gateway),
    ingests via corpusingest, runs a real query through
    /v1/matrix/search, prints results. Ephemeral mode (vectord
    persistence disabled via custom toml) so re-runs don't pollute
    MinIO _vectors/ and break g1p_smoke's "only-one-persisted-index"
    assertion.

Real bug caught + fixed in corpusingest:
  When LogProgress > 0, the progress goroutine's only exit was
  ctx.Done(). With context.Background() in the production driver,
  Run hung forever after the pipeline finished. Added a stopProgress
  channel that close()s after wg.Wait(). Regression test
  TestRun_ProgressLoggerExits bounds Run's wall to 2s with
  LogProgress=50ms.

This is the bug the unit tests didn't catch because every prior test
set LogProgress: 0. Reality test surfaced it on first real-data
run — exactly the hyperfocus-and-find-architectural-weakness
property J framed as the reason for the Go pass.

End-to-end output (1000 candidates, query "Python AWS Docker
engineer in Chicago available now"):

  populate: scanned=1000 embedded=1000 added=1000 wall=3.5s
  matrix returned 5 hits in 26ms

The result quality is the interesting signal: top-5 had ZERO
Chicago candidates, ZERO active-status candidates, and the exact-
skill-match (Python,AWS,Docker) ranked #3 not #1. Pipeline works;
retrieval quality has real architectural limits (no structured
filtering, no relevance gate, semantic-only ranking dominated by
secondary signals like "1 year experience" and "engineer"). This
motivates SPEC §3.4 components 3 (relevance filter) and
eventually structured filtering — exactly the kind of finding the
deep field reality tests are supposed to surface before Enterprise
cutover.

12-smoke regression sweep all green. 9 corpusingest unit tests
including the new regression. vet clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 19:06:27 -05:00
root
166470f532 corpusingest: extract reusable text→vector ingest pipeline
Generalizes the staffing_500k driver's embed-and-push loop into
internal/corpusingest. Per docs/SPEC.md §3.4 component 1 (corpus
builders): adding a new staffing/code/playbook corpus is now one
Source impl + one main.go calling Run, not 200 lines of pipeline
copy-paste.

API:
  type Source interface { Next() (Row, error) }
  func Run(ctx, Config, Source) (Stats, error)

Library owns:
  - Index lifecycle (create, optional drop-existing, idempotent
    reuse on 409)
  - Parallel embed dispatcher (configurable workers + batch size)
  - Vectord push batching
  - Progress logging + Stats reporting
  - Partial-failure semantics (log + continue per-batch errors;
    operator decides on re-run via Stats.Embedded vs Scanned delta)

Per-corpus driver owns: source parsing + column→Row mapping +
post-ingest validation queries.

Refactor scripts/staffing_500k/main.go to use it. Driver is now
~190 lines (was 339), with the embed/add plumbing replaced by one
Run call. -drop flag added so callers can opt out of the destructive
DELETE-first behavior (default still true to keep the 500K test
clean-recall semantics).

Unit tests (internal/corpusingest/ingest_test.go, 8/8 PASS):
  - Pipeline shape: 50 rows / 16 batch → 4 embed + 4 add calls,
    every ID added exactly once, vectors at correct dimension
  - DropExisting fires DELETE
  - 409 on create → reuse existing index
  - Limit stops early
  - Empty Text rows skipped (counted as scanned, not added)
  - Required IndexName + Dimension validation
  - Context cancel stops mid-pipeline

Real bug caught and fixed by the test suite: if embedd ever returns
fewer vectors than texts in the request (degraded backend), the
addBatch loop would panic with index-out-of-range. Worker now
length-checks the response and logs+skips on mismatch.

12-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix). vet clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:47:18 -05:00
root
c1d96b7b60 matrixd: multi-corpus retrieve+merge — SPEC §3.4 component 2 of 5
Lands the matrix indexer's first piece per docs/SPEC.md §3.4:
multi-corpus retrieve+merge with corpus attribution per result.
Future components (relevance filter, downgrade gate, learning-loop
integration) layer on top of this surface.

Architecture:
  - internal/matrix/retrieve.go — Retriever takes (query, corpora,
    k, per_corpus_k), parallel-fans across vectord indexes, merges
    by distance ascending, preserves corpus origin per hit
  - cmd/matrixd — HTTP service on :3217, fronts /v1/matrix/*
  - gateway proxy + [matrixd] config + lakehouse.toml entry
  - Either query_text (matrix calls embedd) or query_vector
    (caller pre-embedded) — vector takes precedence if both set

Error policy: fail-loud on any corpus error. Silent partial returns
would lie about coverage, defeating the matrix's whole purpose.
Bubbles vectord errors as 502 (upstream), validation as 400.
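The merge step can be sketched as follows; field names are assumed, and the parallel fan-out plus fail-loud error policy live around this function in the real Retriever:

```go
package main

import (
	"fmt"
	"sort"
)

// Hit carries corpus attribution through the merge, per the commit's
// "preserves corpus origin per hit". Field names are assumed.
type Hit struct {
	Corpus   string
	ID       string
	Distance float32
}

// mergeByDistance flattens per-corpus result lists into one globally
// ordered list (distance ascending) and truncates to k.
func mergeByDistance(perCorpus [][]Hit, k int) []Hit {
	var all []Hit
	for _, hits := range perCorpus {
		all = append(all, hits...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Distance < all[j].Distance })
	if len(all) > k {
		all = all[:k]
	}
	return all
}

func main() {
	a := []Hit{{"corpus-a", "a-near", 0.1}}
	b := []Hit{{"corpus-b", "b-near", 0.05}}
	top := mergeByDistance([][]Hit{a, b}, 2)
	fmt.Println(top[0].ID) // prints "b-near": 0.05 beats 0.1, mirroring the smoke
}
```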

Smoke (scripts/matrix_smoke.sh, 6 assertions PASS first try):
  - /matrix/corpora lists indexes
  - Multi-corpus search returns hits from BOTH corpora
  - Top hit is the globally-closest across all corpora
    (b-near beats a-near at distance 0.05 vs 0.1 — proves merge)
  - Metadata round-trips through the merge
  - Distances ascending in result list
  - Negative paths: empty corpora → 400, missing corpus → 502,
    no query → 400

12-smoke regression sweep all green (D1-D6, G1, G1P, G2,
storaged_cap, pathway, matrix).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:39:17 -05:00
root
a7620c8b6f PRD: name the product vision — small-model pipeline + 5-loop substrate
Adds a "Product vision" section before the Direction-pivot section.
Captures the framing J flagged 2026-04-29: the Go refactor is not the
goal. The goal is a small-model-driven autonomous pipeline that gets
better with each run, with frontier models in audit/oversight, not
the hot path.

Five loops named explicitly:
  1. Knowledge pathway (pathway memory + matrix indexer)
  2. Execution (small models on focused context)
  3. Observer (refines configs that got the model to a good pathway)
  4. Rating + distillation (outcomes fold back into the playbook)
  5. Drift (measure when the playbook stops matching reality)

Triage / human-in-loop named as the system's job, not an escape
hatch. The gate: "playbook + matrix indexer must give the results
we're looking for" — single load-bearing acceptance criterion.

Why Go after Rust: second-language pass surfaces architectural
weaknesses Rust hid; the pipeline must work AS A PIPELINE, not as
crates that interact. Maps existing Rust components (✓ pathway, ✓
matrix, ✓ observer, ✓ distillation, ✓ auditor; partial: drift,
rating gate, triage).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:17:01 -05:00
root
71b35fb85e SPEC §1 + §3.4: name matrix indexer as a port target
Adds matrix indexer as its own row in the §1 component table and a
new §3.4 with port plan. Distinct from vectord (substrate); lives at
internal/matrix/ + gateway /v1/matrix/*.

Five components in dependency order: corpus builders → multi-corpus
retrieve+merge → relevance filter → strong-model downgrade gate →
learning-loop integration.

Locks in the framing J flagged 2026-04-29: in Rust the matrix indexer
was emergent across mode.rs + build_*_corpus.ts + observer /relevance,
and earlier port-planning reduced it to "we have vectord." The SPEC
now names it explicitly so the port preserves the multi-corpus
retrieval shape AND the learning loop, not just the HNSW substrate.

Sharding-by-id was investigated as a throughput fix and rejected —
corpus-as-shard at the matrix layer is the existing retrieval shape
and parallelizes Adds for free.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:12:10 -05:00
root
f1c188323c vectord: BatchAdd — single-lock variadic batch (Option A)
Replaces the per-item Add loop in the HTTP handler with one call to
Index.BatchAdd, which acquires the write-lock once and pushes the
whole batch through coder/hnsw's variadic Graph.Add. Pre-validation
stays in the handler so per-item error messages keep their item-index
precision.
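The lock-hoisting idea in miniature (the index type here is a stand-in, not vectord's actual wrapper around coder/hnsw):

```go
package main

import (
	"fmt"
	"sync"
)

// index is a stand-in for vectord's wrapped HNSW graph.
type index struct {
	mu   sync.Mutex
	vecs map[string][]float32
}

// AddEach models the old handler loop: one lock acquisition per item.
func (ix *index) AddEach(items map[string][]float32) {
	for id, v := range items {
		ix.mu.Lock()
		ix.vecs[id] = v
		ix.mu.Unlock()
	}
}

// BatchAdd models Option A: acquire the write-lock once and push the
// whole batch through, which is where the N=16 win in the microbench
// below comes from.
func (ix *index) BatchAdd(items map[string][]float32) {
	ix.mu.Lock()
	defer ix.mu.Unlock()
	for id, v := range items {
		ix.vecs[id] = v
	}
}

func main() {
	ix := &index{vecs: map[string][]float32{}}
	ix.BatchAdd(map[string][]float32{"w-1": {0.1}, "w-2": {0.2}})
	fmt.Println(len(ix.vecs)) // prints "2"
}
```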

Microbench (internal/vectord/batch_bench_test.go) at d=768 cosine:

  N=16    SingleAdd 283µs/op  →  BatchAdd 170µs/op   1.66×
  N=128   SingleAdd 7.9ms/op  →  BatchAdd 7.5ms/op   1.05×
  N=1024  SingleAdd 87.5ms/op →  BatchAdd 83.4ms/op  1.05×

Win is biggest at staffing-driver batch sizes (N=16) where
per-call lock + validation overhead is a meaningful fraction. At
larger N the inner HNSW neighborhood search per insert dominates,
which is the load-bearing finding for Option B (sharded indexes):
the throughput ceiling lives inside the library, not at the lock,
so sharding to N parallel Graphs is the only path to true
concurrent-Add throughput.

g1, g1p, g2 smokes all PASS post-change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 18:05:48 -05:00
root
afbb506dbc pathwayd: HTTP service over internal/pathway · 11/11 smoke gate
Network-callable Mem0-style trace memory at :3217, fronted by gateway
/v1/pathway/*. Closes the ADR-004 wire-up: store substrate landed in
2a6234f, this lands the HTTP surface + [pathwayd] config + acceptance
gate.

Smoke proves the architecturally distinctive properties: Revise →
History walks the predecessor chain backward (audit trail), Retire
excludes from Search default but stays Get-able, AddIdempotent bumps
replay_count without replacing — and all survive kill+restart via
JSONL log replay.
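The backward History walk, sketched with assumed field names (trace, Predecessor, and the helper are illustrative, not the package's API):

```go
package main

import "fmt"

// trace is a cut-down pathway record: each Revise links the new trace
// to its predecessor, and History walks that chain backward.
type trace struct {
	UID         string
	Predecessor string // empty for the original
}

// history walks the predecessor chain from uid back to the root,
// stopping on a missing predecessor (truncated chain) or a repeated
// UID (cycle guard).
func history(store map[string]trace, uid string) []string {
	var chain []string
	seen := map[string]bool{}
	for uid != "" && !seen[uid] {
		t, ok := store[uid]
		if !ok {
			break // predecessor missing: truncate rather than error mid-walk
		}
		seen[uid] = true
		chain = append(chain, uid)
		uid = t.Predecessor
	}
	return chain
}

func main() {
	store := map[string]trace{
		"t-1": {UID: "t-1"},
		"t-2": {UID: "t-2", Predecessor: "t-1"},
		"t-3": {UID: "t-3", Predecessor: "t-2"},
	}
	fmt.Println(history(store, "t-3")) // prints "[t-3 t-2 t-1]"
}
```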

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 17:49:42 -05:00
root
2a6234ff82 ADR-004 + internal/pathway: Mem0 versioned trace substrate
Closes Sprint 2 design-bar work (audit reports/scrum/sprint-backlog.md):
  S2.1 — ADR-004 documents the pathway-memory data model
  S2.2 — pathway port lands with deterministic fixture corpus
         and full test coverage on day one
  S2.3 — retired traces are excluded from retrieval (test
         passes; would fail without the filter)

Mem0-style operations: Add / AddIdempotent / Update / Revise /
Retire / Get / History / Search. Each operation is a method on
Store; persistence is JSONL append-only with corruption recovery
on Replay.

internal/pathway/types.go     Trace + event + SearchFilter + sentinel errors
internal/pathway/store.go     in-memory state + RWMutex + ops
internal/pathway/persistor.go JSONL append-only log with replay
internal/pathway/store_test.go  20 test funcs covering all 7
                                Sprint 2 claim rows + concurrency
internal/pathway/persistor_test.go  6 test funcs covering missing-
                                file, corruption recovery, long-line
                                handling, parent-dir auto-create,
                                apply-error skip behavior

Sprint 2 claim coverage row-by-row:
  ADD          TestAdd_AssignsUIDAndTimestamps + TestAdd_RejectsInvalidJSON
  UPDATE       TestUpdate_ReplacesContentSameUID + Update_MissingUID_Errors
  REVISE       TestRevise_LinksToPredecessorViaHistory +
               TestRevise_PredecessorMissing_Errors +
               TestRevise_ChainOfThree_BackwardWalk
  RETIRE       TestRetire_ExcludedFromSearch +
               TestRetire_StillAccessibleViaGet +
               TestRetire_StillAccessibleViaHistory
  HISTORY/cycle TestHistory_CycleDetected (injected via internal map),
                TestHistory_PredecessorMissing_TruncatesChain,
                TestHistory_UnknownUID_ErrorsClean
  REPLAY/dup   TestAddIdempotent_IncrementsReplayCount (locks the
               "replay preserves original content" rule per ADR-004)
  CORRUPTION   TestPersistor_CorruptedLines_Skipped +
               TestPersistor_ApplyError_Skipped
  ROUND-TRIP   TestPersistor_RoundTrip locks the full Save → fresh
               Store → Load → Stats-match contract

Two real bugs caught during testing:
  - Add returned the same *Trace stored in the map, so callers
    holding a reference saw later mutations. Fixed: clone before
    return (matches Get's contract). Same fix in AddIdempotent
    + Revise.
  - Test typo: {"v":different} isn't valid JSON; AddIdempotent's
    json.Valid rejected it as ErrInvalidContent. Test fixed to
    use {"v":"different"}; the validation behavior is correct.
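The first bug's fix is the classic clone-before-return pattern; a sketch with an assumed record shape:

```go
package main

import "fmt"

// traceRec stands in for the pathway record; Content as a map makes
// the aliasing visible.
type traceRec struct {
	UID     string
	Content map[string]string
}

// clone returns a deep-enough copy so callers holding the returned
// value never observe later in-store mutations — the bug fixed in
// Add/AddIdempotent/Revise (Get already had this contract).
func clone(t *traceRec) *traceRec {
	c := *t
	c.Content = make(map[string]string, len(t.Content))
	for k, v := range t.Content {
		c.Content[k] = v
	}
	return &c
}

func main() {
	stored := &traceRec{UID: "t-1", Content: map[string]string{"v": "one"}}
	returned := clone(stored)
	stored.Content["v"] = "two"        // later in-store mutation
	fmt.Println(returned.Content["v"]) // prints "one": caller's copy is isolated
}
```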

Skipped this commit (next):
  - cmd/pathwayd HTTP binary
  - gateway routing for /v1/pathway/*
  - end-to-end smoke
  These add the wire surface; the substrate ships first so the
  wire layer can be a pure proxy in the next commit.

Verified:
  go test -count=1 ./internal/pathway/ — 26 tests green
  just verify                          — vet + test + 9 smokes 34s

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:23:30 -05:00
root
ad1670d36a storaged cap smoke — verifies ADR-002 at 300 MiB
Closes the "needs heavy integration smoke" follow-up from the
ADR-002 commit (423a381). Until now the per-prefix PUT cap was
verified only by unit tests plus the commit's theory; this smoke runs
the actual cap path with real bytes.

Three assertions, ~2s wall:
  1. PUT 300 MiB to _vectors/<key> → 200 (cap raised to 4 GiB
     for the vectord persistence prefix).
  2. PUT same 300 MiB to datasets/<key> → 413 (default 256 MiB
     cap still protects routine traffic).
  3. GET _vectors/<key> → sha256 round-trips (no truncation
     between cap-raise and S3 multipart streaming).

scripts/storaged_cap_smoke.sh
  Builds storaged + gateway, boots them, generates 300 MiB
  deterministic /dev/zero payload (sha stable across runs),
  runs the 3 assertions, cleans up the keys + processes via trap.

  /dev/zero generation chosen over yes/head pipe — pipefail
  catches the SIGPIPE from yes when head closes early.

just smoke-storaged-cap
  Wrapper recipe. Kept outside the main `just verify` chain because
  generating and transferring the 300 MiB payload is too heavy for
  every run. Run after meaningful storaged or vectord-persistence
  changes.

Verified:
  bash scripts/storaged_cap_smoke.sh — 3/3 PASS · 2s wall
  just verify                         — vet + test + 9 smokes still 33s

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:14:57 -05:00
root
fa56134b90 ADR-003 wiring: Bearer token + IP allowlist middleware
Implements the auth posture from ADR-003 (commit 0d18ffa). Two
independent layers — Bearer token (constant-time compare via
crypto/subtle) and IP allowlist (CIDR set) — composed in shared.Run
so every binary inherits the same gate without per-binary wiring.

Together with the bind-gate from commit 6af0520, this mechanically
closes audit risks R-001 + R-007:
  - non-loopback bind without auth.token = startup refuse
  - non-loopback bind WITH auth.token + override env = allowed
  - loopback bind = all gates open (G0 dev unchanged)

internal/shared/auth.go (NEW)
  RequireAuth(cfg AuthConfig) returns chi-compatible middleware.
  Empty Token + empty AllowedIPs → pass-through (G0 dev mode).
  Token-only → 401 Bearer mismatch.
  AllowedIPs-only → 403 source IP not in CIDR set.
  Both → both gates apply.
  /health bypasses both layers (load-balancer / liveness probes
  shouldn't carry tokens).

  CIDR parsing pre-runs at boot; a bare IP (no /N) is treated as /32
  (or /128 for IPv6). Invalid entries log a warning and are dropped:
  fail-loud-but-not-fatal, so a typo doesn't kill the binary.

  Token comparison: subtle.ConstantTimeCompare on the full
  "Bearer <token>" wire-format string. Length-mismatch returns 0
  (per stdlib spec), so wrong-length tokens reject without timing
  leak. Pre-encoded comparison slice stored in the middleware
  closure — one allocation per request.
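The comparison itself reduces to one stdlib call; checkBearer is an illustrative wrapper, not the middleware's actual function:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// checkBearer compares the full "Bearer <token>" wire string with
// subtle.ConstantTimeCompare. Length mismatch returns 0 per the
// stdlib contract, so wrong-length tokens reject without a timing
// side channel; the expected slice is pre-encoded once.
func checkBearer(expected []byte, authorizationHeader string) bool {
	return subtle.ConstantTimeCompare(expected, []byte(authorizationHeader)) == 1
}

func main() {
	expected := []byte("Bearer s3cr3t") // built once at middleware construction
	fmt.Println(checkBearer(expected, "Bearer s3cr3t")) // prints "true"
	fmt.Println(checkBearer(expected, "s3cr3t"))        // prints "false": raw token, no prefix
	fmt.Println(checkBearer(expected, "Bearer wrong"))  // prints "false"
}
```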

  Source-IP extraction prefers net.SplitHostPort, falling back to
  RemoteAddr as-is for httptest compatibility. X-Forwarded-For
  support is a follow-up when a trusted proxy fronts the gateway
  (config knob TBD per ADR-003 §"Future").

internal/shared/server.go
  Run signature: gained AuthConfig parameter (4th arg).
  /health stays mounted on the outer router (public).
  Registered routes go inside chi.Group with RequireAuth applied —
  empty config = transparent group.
  Added requireAuthOnNonLoopback startup check: non-loopback bind
  with empty Token = refuse to start (cites R-001 + R-007 by name).

internal/shared/config.go
  AuthConfig type added with TOML tags. Fields: Token, AllowedIPs.
  Composed into Config under [auth].

cmd/<svc>/main.go × 7 (catalogd, embedd, gateway, ingestd, queryd,
storaged, vectord; mcpd is unaffected — stdio doesn't bind a port)
  Each call site adds cfg.Auth as the 4th arg to shared.Run. No
  other changes — middleware applies via shared.Run uniformly.

internal/shared/auth_test.go (12 test funcs)
  Empty config pass-through, missing-token 401, wrong-token 401,
  correct-token 200, raw-token-without-Bearer-prefix 401, /health
  always public, IP allowlist allow + reject, bare IP /32, both
  layers when both configured, invalid CIDR drop-with-warn, RemoteAddr
  shape extraction. The constant-time comparison is verified by
  inspection (comments in auth.go) plus the existence of the
  passthrough test (length-mismatch case).

Verified:
  go test -count=1 ./internal/shared/  — all green (was 21, now 33 funcs)
  just verify                            — vet + test + 9 smokes 33s
  just proof contract                    — 53/0/1 unchanged

Smokes + proof harness keep working without any token configuration:
default Auth is empty struct → middleware is no-op → existing tests
pass unchanged. To exercise the gate, operators set [auth].token in
lakehouse.toml (or, per the "future" note in the ADR, via env var).

Closes audit findings:
  R-001 HIGH — fully mechanically closed (was: partial via bind gate)
  R-007 MED  — fully mechanically closed (was: design-only ADR-003)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:11:34 -05:00
root
8f4c16fab1 mcpd: Go MCP SDK port — replaces Bun mcp-server tool surface
New cmd/mcpd binary using github.com/modelcontextprotocol/go-sdk
v1.5.0 over stdio transport. Exposes Lakehouse capabilities as MCP
tools: list_datasets, get_manifest, query_sql, embed_text,
search_vectors. Each tool proxies to the gateway via HTTP.

Replaces the MCP-tool subset of the Rust system's Bun mcp-server.ts
(the audit's "split this 2520-line empire" finding from R-005). HTTP
demo routes (the staffing co-pilot UI at /api/intelligence/*,
/headshots/*, etc.) stay Bun until G5 cutover — those are demo-
specific and depend on matrix-indexer signals not yet ported.

Architecture:
  cmd/mcpd/main.go (235 LoC)
    main() reads --gateway flag, builds server via buildServer(),
    runs on StdioTransport. Each tool's args is a typed struct with
    jsonschema tags (the SDK's canonical pattern); reflection
    generates the JSON Schema automatically.

    gatewayClient: thin HTTP wrapper over the configured gateway URL.
    30s per-request timeout. 16 MiB tool-response cap. Non-2xx
    surfaces as IsError CallToolResult (NOT as transport error) so
    the LLM caller sees the error text and can decide how to react.

    proxy() handles GET + POST + JSON body uniformly. errorResult()
    + jsonResult() helpers normalize CallToolResult shape.

  cmd/mcpd/main_test.go (13 test funcs)
    Tests the full MCP wire end-to-end without a subprocess: spin
    up a fake gateway via httptest, build the MCP server pointed at
    it, connect a client via in-memory transports (NewInMemoryTransports),
    call each tool. Each tool gets:
      - happy path (gateway returns 200 → tool returns content)
      - input validation (missing required fields → IsError)
      - upstream error (gateway 4xx → tool returns IsError)
    Plus TestListTools verifies all 5 tools register; TestGatewayUnreachable
    verifies network-level failures surface as IsError, not panics.

Setup for Claude Desktop / Code documented in README:
  {
    "mcpServers": {
      "lakehouse": {
        "command": "/path/to/bin/mcpd",
        "args": ["--gateway", "http://127.0.0.1:3110"]
      }
    }
  }

Verified:
  go test -count=1 ./cmd/mcpd/  — 13/13 green
  just verify                    — vet + test + 9 smokes 35s

Out of scope for this commit:
  - Resources (mcp.AddResource): not needed yet; tools cover the
    interactive surface. Add when an LLM-side use case shows up.
  - Prompts (mcp.AddPrompt): same.
  - Streamable transports (HTTP, SSE): stdio is the universal one;
    streamable can be added with srv.Run(ctx, &mcp.StreamableHTTPHandler{})
    swap if a daemon-mode deploy makes sense.
  - mcpd inside the daemon-supervised stack: it's stdio-only and
    spawned by the MCP client, not run as a service. Adding a
    daemon-mode (HTTP transport on a port) is a follow-up if MCP
    consumers want long-lived sessions.

This is a tool-surface only port. The Bun mcp-server.ts also serves
HTTP demo routes (/api/catalog/datasets, /intelligence/*, /headshots/*)
that depend on the matrix-indexer signals from the Rust system; those
stay Bun until G5 cutover when the staffing co-pilot service ports
to Go.

Direct deps added:
  github.com/modelcontextprotocol/go-sdk v1.5.0

Transitive (resolved by go mod tidy):
  github.com/google/jsonschema-go        v0.4.2
  github.com/yosida95/uritemplate/v3     v3.0.2
  golang.org/x/oauth2                    v0.35.0
  github.com/segmentio/encoding          v0.5.4
  github.com/golang-jwt/jwt/v5           v5.3.1

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 07:00:38 -05:00
root
56844c3f31 embed cache — LRU at /v1/embed for repeat-query elimination
Adds CachedProvider wrapping the embedding Provider with a thread-safe
LRU keyed on (effective_model, sha256(text)) → []float32. Repeat
queries return the stored vector without round-tripping to Ollama.

Why this matters: the staffing 500K test (memory project_golang_lakehouse)
documented that the staffing co-pilot replays many of the same query
texts ("forklift driver IL", "welder Chicago", "warehouse safety", etc).
Each repeat paid the ~50ms Ollama round-trip. Cached repeats now serve
in <1µs (LRU lookup + sha256 of input).

Memory budget: ~3 KiB per entry at d=768. Default 10K entries ≈ 30 MiB.
Configurable via [embedd].cache_size; 0 disables (pass-through mode).

Per-text caching, not per-batch — a batch with mixed hits/misses only
fetches the misses upstream, then merges the result preserving caller
input order. Three-text batch with one miss = one upstream call for
that one text instead of three.
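The split-then-merge shape, as a sketch (lookupBatch/mergeBatch are illustrative names, not CachedProvider's API):

```go
package main

import "fmt"

// lookupBatch returns cached vectors where present plus the index
// list of misses to fetch upstream.
func lookupBatch(cache map[string][]float32, texts []string) (out [][]float32, misses []int) {
	out = make([][]float32, len(texts))
	for i, t := range texts {
		if v, ok := cache[t]; ok {
			out[i] = v
		} else {
			misses = append(misses, i)
		}
	}
	return
}

// mergeBatch folds the upstream-fetched vectors back into the caller's
// input order, so a mixed batch returns exactly as if fully uncached.
func mergeBatch(out [][]float32, misses []int, fetched [][]float32) {
	for j, i := range misses {
		out[i] = fetched[j]
	}
}

func main() {
	cache := map[string][]float32{"welder Chicago": {0.1}, "warehouse safety": {0.2}}
	texts := []string{"welder Chicago", "forklift driver IL", "warehouse safety"}
	out, misses := lookupBatch(cache, texts)
	fmt.Println(misses) // prints "[1]": only one text needs the upstream call
	mergeBatch(out, misses, [][]float32{{0.3}})
	fmt.Println(len(out[1])) // prints "1": miss filled, caller order preserved
}
```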

Implementation:
  internal/embed/cached.go (NEW, 150 LoC)
    CachedProvider implements Provider; uses hashicorp/golang-lru/v2.
    Key shape: "<model>:<sha256-hex>". Empty model resolves to
    defaultModel (request-derived) for the key — NOT res.Model
    (upstream-derived), so future requests with same input shape
    hit the same key. Caught by TestCachedProvider_EmptyModelResolvesToDefault.
    Atomic hit/miss counters + Stats() + HitRate() + Len().

  internal/embed/cached_test.go (NEW, 12 test funcs)
    Pass-through-when-zero, hit-on-repeat, mixed-batch only fetches
    misses, model-key isolation, empty-model resolves to default,
    LRU eviction at cap, error propagation, all-hits synthesized
    without upstream call, hit-rate accumulation, empty-texts
    rejected, concurrent-safe (50 goroutines × 100 calls), key
    stability + distinctness.

  internal/shared/config.go
    EmbeddConfig.CacheSize (toml: cache_size). Default 10000.

  cmd/embedd/main.go
    Wraps Ollama Provider with CachedProvider on startup. Adds
    /embed/stats endpoint exposing hits / misses / hit_rate / size.
    Operators check the rate to confirm the cache is working
    (high rate = good) or sized wrong (low rate + many misses on a
    workload that should have repeats).

  cmd/embedd/main_test.go
    Stats endpoint tests — disabled mode shape, enabled mode tracks
    hits + misses across repeat calls.

One real bug caught by my own test:
  Initial implementation cached under res.Model (upstream-resolved)
  rather than effectiveModel (request-resolved). A request with
  model="" cached under "test-model" (Ollama's default); a later
  request with model="the-default" (our config default) then missed
  the cache. Fix: always use the request-derived effectiveModel
  for keys; that's the predictable side. Locked by
  TestCachedProvider_EmptyModelResolvesToDefault.

Verified:
  go test -count=1 ./internal/embed/  — all 12 cached tests + 6 ollama tests green
  go test -count=1 ./cmd/embedd/      — stats endpoint tests green
  just verify                          — vet + test + 9 smokes 33s

Production benefit:
  ~50ms Ollama round-trip → <1µs cache lookup for cached entries.
  At 10K-entry default + ~30% repeat rate (typical staffing co-pilot
  workload), saves several seconds per staffer-query session.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:54:30 -05:00
root
fb08232f58 Batch 4: embed fixture-mode — partial R-006 closure
Adds cmd/fake_ollama, a minimal Ollama-API-compatible fake that
implements just enough surface for embedd to drive end-to-end
without a real Ollama install:

  GET  /api/tags        — fixed model list including nomic-embed-text
  POST /api/embeddings  — deterministic dim-D vector from sha256(prompt)
  GET  /health          — for the smoke's poll_health helper

Same prompt → bit-identical vector across runs, machines, and CI
nodes. Vectors are NOT semantically meaningful; the fake validates
the embed CONTRACT (dimension echo, response shape, status codes,
deterministic round-trip), not real semantic ranking. Real ranking
still requires real Ollama and lives in scripts/g2_smoke.sh + the
integration tier of the proof harness.
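One plausible construction of such a deterministic vector (the actual fake_ollama scheme isn't shown in the commit; this hashes a per-index counter with the prompt):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// fakeEmbed derives a deterministic d-dim vector from sha256 of the
// prompt plus a per-element counter: same prompt → bit-identical
// vector on any machine. The values carry no semantic meaning.
func fakeEmbed(prompt string, d int) []float32 {
	vec := make([]float32, d)
	for i := 0; i < d; i++ {
		var ctr [8]byte
		binary.LittleEndian.PutUint64(ctr[:], uint64(i))
		h := sha256.Sum256(append([]byte(prompt), ctr[:]...))
		u := binary.LittleEndian.Uint32(h[:4])
		vec[i] = float32(u) / float32(^uint32(0)) // map into [0,1]
	}
	return vec
}

func main() {
	a := fakeEmbed("forklift driver IL", 768)
	b := fakeEmbed("forklift driver IL", 768)
	fmt.Println(len(a), a[0] == b[0]) // prints "768 true": bit-identical across calls
}
```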

scripts/g2_smoke_fixtures.sh — full chain smoke against the fake:
  - Build fake_ollama + embedd + vectord + gateway
  - Start fake on :11435 (distinct from real Ollama at :11434)
  - Generate temp lakehouse.toml with provider_url override
  - Boot embedd/vectord/gateway with --config <override>
  - 4 assertions: dim=768, deterministic same-text, different-text
    divergence, bad-model → 4xx/5xx (fake 404 → embedd 502)
  - Trap-cleanup tears down all 4 binaries + tmp config

Wired into the task runner:
  just smoke-g2-fixtures

Closes R-006 partially:
  - Embed half: ✓ — CI / fresh-clone reviewers without Ollama can
    now run the embed contract smoke
  - Storage half: deferred — mocking S3 protocol is non-trivial
    (multipart, signed URLs, etc.) and MinIO itself is lightweight
    enough to install via Docker in any CI environment. Documented
    as Sprint 0 follow-up if a CI system without Docker shows up.

What this DOESN'T cover:
  - Real semantic similarity (use scripts/g2_smoke.sh + real Ollama)
  - Real Ollama API quirks (timeouts, version-specific shapes,
    /api/embed batch endpoint that newer versions support)

Verified:
  bash scripts/g2_smoke_fixtures.sh — 4/4 assertions PASS, ~3s wall
  just verify                       — vet + test + 9 smokes still green

Doesn't replace the existing g2_smoke.sh (which still requires real
Ollama and exercises the actual embed semantics). Adds an alternate
mode for portability.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:22:07 -05:00
root
0f79bce948 Batch 3: cmd/<bin>/main_test.go × 6 — closes R-005
Adds main_test.go for each of the 6 cmd binaries that lacked them
(storaged already had main_test.go; that's where the pattern came
from). Each test file focuses on the cmd-specific surface — route
mounts, body caps, decode/validation paths — without re-testing
internal package logic that's covered elsewhere.

cmd/catalogd/main_test.go — 6 funcs
  TestRoutesMounted: chi.Walk asserts /catalog/{register,manifest/*,list}
  TestHandleRegister_BodyTooLarge: 5 MiB body → 4xx
  TestHandleRegister_MalformedJSON: 400
  TestHandleRegister_EmptyName_400: ErrEmptyName surfaces as 400
  TestHandleGetManifest_404 + TestHandleList_EmptyShape

cmd/embedd/main_test.go — 8 funcs
  stubProvider implements embed.Provider deterministically
  TestRoutesMounted, MalformedJSON_400, EmptyTextRejected_400 (per
    scrum O-W3), UpstreamError_502 (provider error → 502, not 500),
    HappyPath_ProviderEcho, BodyTooLarge (4xx range), TestItoa
    (covers the no-strconv helper)

cmd/gateway/main_test.go — 4 funcs
  TestMustParseUpstream_HappyPaths: 3 valid URLs
  TestMustParseUpstream_FailureExits: re-execs the test binary in a
    subprocess with env flag (standard pattern for testing os.Exit
    callers); subprocess invokes mustParseUpstream("127.0.0.1:3211")
    [missing scheme]; expects exit non-zero. Same pattern for garbage.
  TestUpstreamConfigKeys_DocumentedShape: locks the 6 _url keys

cmd/ingestd/main_test.go — 7 funcs
  Stubs both storaged and catalogd via httptest.Server so the cmd
  layer can be exercised without bringing the full chain up.
  TestHandleIngest_MissingNameQueryParam: 400 with "name" in body
  TestHandleIngest_MalformedMultipart: 400
  TestHandleIngest_MissingFormFile: 400 (valid multipart, wrong field)
  TestHandleIngest_BodyTooLarge: 4xx
  TestEscapeKeyPath: 6-case URL-escape table (apostrophe, space, etc.)
  TestParquetKeyPath_Format: locks the datasets/<n>/<fp>.parquet shape
    per scrum C-DRIFT (any rename breaks idempotent re-ingest)

cmd/queryd/main_test.go — 6 funcs
  Tests pre-DB paths (decode, body cap, empty SQL); db.QueryContext
  itself needs DuckDB so it's covered by GOLAKE-040 in the proof
  harness, not unit tests. handlers.db = nil here is intentional.
  TestHandleSQL_EmptySQL_400: 3 cases (empty, whitespace, mixed-WS)
  TestMaxSQLBodyBytes_Reasonable: locks the 64 KiB constant in a
    sane range so a refactor can't blow it open
  TestPrimaryBucket_Constant: locks "primary" — secrets lookup uses
    this; rename = silent secret-resolution failure at boot

cmd/vectord/main_test.go — 14 funcs
  All 6 routes verified mounted. handlers.persist = nil = pure
  in-memory mode; persistence is GOLAKE-070 in the proof harness.
  Coverage of every error branch in handleCreate/Add/Search/Delete:
    missing index → 404, dim mismatch → 400, empty items → 400,
    empty id → 400, malformed JSON → 400, body too large → 4xx,
    happy create → 201, happy list → 200.

One real finding caught during writing:
  Body-cap rejection is sometimes 413 (typed MaxBytesError survives
  unwrap) and sometimes 400 (decoder wraps it as a generic decode
  error). Both are valid client-error contracts; the contract isn't
  "exactly 413" but "fails loud as 4xx, never silent 200 or 5xx."
  Tests assert 4xx range. The proof harness's
  proof_assert_status_4xx already had this shape — just bringing
  the unit tests in line with it.

Verified:
  go test -count=1 -short ./cmd/...  — all 7 packages green
  just verify                         — vet + test + 9 smokes 35s

Closes audit risk R-005 (6/7 cmd/main.go untested). Combined with
the proof harness's wiring coverage, every cmd-level handler now
has both unit-test and integration-test coverage of the wiring
layer. R-005 → CLOSED.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:18:46 -05:00
root
1ec85b0a16 Batch 2: perf baseline — multi-sample + warmup + MAD threshold
Replaces single-shot baselines (40% noise floor flagged in Phase E)
with noise-aware regression detection.

What changed:
  ingest      n=3 runs (was 1) with 3-pass warmup
  vector_add  n=3 runs (was 1) with 3-pass warmup
  query       n=20 samples (unchanged) with 50-pass warmup
  search      n=20 samples (unchanged) with 50-pass warmup
  RSS         n=1 (unchanged — steady-state in G0)

Each metric stored as {value: median, mad: median absolute
deviation} in baseline.json (schema: v2-multisample-mad).

New regression detection:
  threshold = max(3 * baseline.mad, value * 0.75)
  REGRESSION iff |actual - baseline.value| > threshold AND direction
    signals worse (lower throughput / higher latency).

Why these specific numbers:
  3*MAD   = standard "outside the spread" bound; lets high-variance
            metrics tolerate their own noise.
  75% floor = empirical observation: even with 50 warmups, single-
            host inter-run variance on bootstrap-cold queryd was
            consistently 90-130% on this box. 75% catches >75%
            regressions cleanly while ignoring known noise.
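The median/MAD/threshold math above, transcribed into Go (lib/metrics.sh does this in shell; the function names here are illustrative):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// mad is the median absolute deviation from the median.
func mad(xs []float64) float64 {
	m := median(xs)
	devs := make([]float64, len(xs))
	for i, x := range xs {
		devs[i] = math.Abs(x - m)
	}
	return median(devs)
}

// regressed applies the commit's rule for a latency metric: flag only
// when the delta exceeds max(3*MAD, 75% of baseline) in the worse
// direction (higher latency).
func regressed(baselineValue, baselineMAD, actual float64) bool {
	threshold := math.Max(3*baselineMAD, baselineValue*0.75)
	return actual-baselineValue > threshold
}

func main() {
	samples := []float64{10, 11, 9, 10, 12, 10}
	m, d := median(samples), mad(samples)
	fmt.Println(m, d)                // prints "10 1"
	fmt.Println(regressed(m, d, 21)) // prints "true": >2x baseline, flagged
	fmt.Println(regressed(m, d, 13)) // prints "false": inside the 75% floor
}
```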

lib/metrics.sh: new proof_compute_mad helper computes MAD from a
file of one-number-per-line samples. Used for both regen (to write
the baseline.mad value) and diff (read from baseline).

Honest finding from this iteration's 3 back-to-back diff runs:
  query_ms shows 90-130% delta from baseline consistently — not
  random noise but a systematic 2x gap between regen-time and
  steady-state. The regen captured a particularly fast moment;
  steady-state is slower. Operator workflow: regenerate the
  baseline at a known-representative state via
  `bash tests/proof/run_proof.sh --mode performance --regenerate-baseline`
  rather than expecting the harness to track a moving target.

The harness's value here is the EVIDENCE RECORD (every run captures
median+MAD+p95 plus all raw samples in raw/metrics/), not the gate.
Even false-positive REGRESSION skips give operators "this run was
20ms vs baseline 10ms" which is informative.

Sample counts also written into baseline.json under "samples" so a
future audit can verify the methodology that produced the values.

Verified across 3 back-to-back runs:
  ingest_rows_per_sec    PASS (delta within 75%, mostly < 10%)
  vectors_per_sec_add    PASS
  search_ms              PASS
  rss_*                  PASS
  query_ms               REGRESSION flagged (130/100/90%) — known
                         systematic gap, not bug

Closes the "40% noise floor" follow-up from Phase E FINAL_REPORT.
Honest about limitations: hard regression gating on a busy single-
host setup needs either much bigger sample counts (n≥100), longer
warmup, or moving to a dedicated benchmark host. Documented inline.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:13:47 -05:00
root
0d18ffa780 ADR-003: inter-service auth posture — Bearer + IP allowlist
Locks in the auth model that R-001 + R-007 will be retrofitted
against. Doc-only — wiring deferred to Sprint 1 when the first
non-loopback binding is needed.

Decision: Bearer token (from secrets-go.toml [auth] section) + IP
allowlist (CIDR list). Both layers required when auth is on; empty
token = G0 dev no-op. /health exempt.

Implementation shape (when it lands):
  - internal/shared/auth.go middleware: one chi r.Use line per binary
  - shared.Run gates: refuses non-loopback bind without configured token
  - subtle.ConstantTimeCompare for token equality (timing-safe)

Alternatives considered + rejected:
  mTLS         — too heavy for single-machine inter-service traffic
  JWT          — buys nothing over Bearer without external IdP
  IP-only      — one stolen IP entry = full access; no defense depth
  OAuth2       — no external IdP commitment in G0-G3 timeline

What this doesn't do:
  - Doesn't implement (code lands Sprint 1)
  - Doesn't break G0 dev (empty token = middleware no-op)
  - Doesn't address gateway→end-user auth (different ADR shape)

Closes the design-decision blocker for R-001 and R-007. Wiring
ticket: Sprint 1 backlog story S1.2.

Also lifts ADR-002 (storaged per-prefix PUT cap) into the doc —
it was implemented in 423a381 but not yet recorded as an ADR.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:05:59 -05:00
root
423a3817c5 D: storaged per-prefix PUT cap — vectord _vectors/ → 4 GiB
Closes the documented 500K-test gap (memory project_golang_lakehouse:
"storaged 256 MiB PUT cap blocks single-file LHV1 persistence above
~150K vectors at d=768"). Vectord persistence under "_vectors/" now
gets a 4 GiB cap; everything else (parquets, manifests, ingest)
keeps the 256 MiB default.

Why per-prefix and not "raise globally":
  - 256 MiB cap is a real DoS protection — runaway clients can't
    drain the daemon. Raising it for ALL traffic would expand the
    attack surface for routine paths that have no need.
  - Per-prefix preserves existing protection while opening the one
    documented production-scale path.

Why not split LHV1 across multiple keys (the alternative):
  - G1P shipped a single-Put framed format SPECIFICALLY to eliminate
    the torn-write class (memory: "Single Put eliminates the torn-
    write class that the 3-way convergent scrum finding identified").
  - Multi-key LHV1 would re-introduce the half-saved-state failure
    mode we just paid to fix. Streaming via existing manager.Uploader
    is the better architectural answer.

Why not bump the cap operationally via env/config:
  - Future operator-driven cap can drop in cleanly via the
    maxPutBytesFor function. Started with hardcoded 4 GiB to keep
    this commit small; config knob is a follow-up if production
    workloads diverge from the documented 500K-vector ceiling.

manager.Uploader is already streaming-multipart on the outbound
S3 side; the inbound MaxBytesReader cap is a safety gate, not a
memory bottleneck. So raising it for vectord just lets the
existing streaming path actually flow, without introducing new
memory pressure (4-slot semaphore × 4 GiB worst case = 16 GiB
only if all slots simultaneously max out — vanishingly unlikely).
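The cap-by-prefix selection reduces to a small pure function. A minimal sketch, assuming the constant values stated in this commit (the real definitions live in cmd/storaged/main.go):

```go
package main

import "strings"

// Assumed values mirroring the commit description.
const (
	maxPutBytesDefault int64 = 256 << 20 // 256 MiB: parquets, manifests, ingest
	maxPutBytesVectors int64 = 4 << 30   // 4 GiB: vectord LHV1 persistence
	vectorsPrefix            = "_vectors/"
)

// maxPutBytesFor picks the inbound PUT cap by key prefix. Only keys
// under "_vectors/" get the larger cap; every other path keeps the
// 256 MiB DoS protection. A substring match ("x_vectors/...") does
// NOT qualify, which is why the tests pin the prefix case.
func maxPutBytesFor(key string) int64 {
	if strings.HasPrefix(key, vectorsPrefix) {
		return maxPutBytesVectors
	}
	return maxPutBytesDefault
}
```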

Implementation:
  cmd/storaged/main.go:
    new constant maxPutBytesVectors = 4 GiB (covers >700K vectors @ d=768)
    new constant vectorsPrefix = "_vectors/" (synced with vectord.VectorPrefix)
    new function maxPutBytesFor(key) → cap-by-prefix
    handlePut: ContentLength check + MaxBytesReader use the per-key cap

  cmd/storaged/main_test.go (3 new test funcs):
    TestMaxPutBytesFor: 7 cases incl. nested prefix, substring-but-not-
      prefix, empty key, parquet/manifest paths.
    TestVectorPrefixSyncWithVectord: regression test that asserts
      vectorsPrefix == vectord.VectorPrefix. A future rename surfaces
      here instead of silently bypassing the larger cap.
    TestVectorCapAccommodates500KStaffingTest: bounds the cap above
      the documented production workload (~700 MiB conservative).

Verified:
  go test ./cmd/storaged/ — all green (was 1 func, now 4)
  just verify             — 9 smokes still pass · 32s wall
  just proof contract     — 53/0/1 unchanged

Out of scope for this commit (deserves its own):
  - Heavy integration smoke: 200K dim=768 synthetic vectors → ~700
    MiB LHV1 → kill+restart vectord → recall=1. ~5-10 min wall;
    follow-up if you want production-scale persistence verified
    end-to-end. Unit tests + existing g1p_smoke cover the wiring.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:00:09 -05:00
root
6af0520ed2 A: fail-loud on non-loopback bind — closes worst case of R-001
shared.Run now refuses to bind a non-loopback address unless the
LH_<SERVICE>_ALLOW_NONLOOPBACK=1 env is set. Single change covers
all 7 binaries via the existing Run call site; no per-binary
wiring needed.

Closes the accidental-0.0.0.0 deploy attack surface for R-001:
queryd /sql is RCE-equivalent off loopback (DuckDB has filesystem
read + COPY TO + read_text), but the gate applies to every binary
uniformly so the same posture covers vectord (mutation routes),
catalogd (manifest writes), and the others.

What passes the gate:
  127.0.0.1:port, 127.x.y.z:port (full /8), [::1]:port,
  localhost:port, OR explicit env LH_<SVC>_ALLOW_NONLOOPBACK=1

What fail-louds:
  0.0.0.0:port, [::]:port, :port (all interfaces),
  any non-loopback IP, any non-localhost hostname,
  unparseable shapes ("", "no port", garbage)

Override env is strict equality "1" — typos like "true"/"yes" do NOT
trigger it, so a future operator can't accidentally expose by typing
the wrong value. When the override fires, it logs a structured warn
so the choice is auditable in production.

Error message cites the env name AND R-001 by name so operators see
the fix path without grepping:
  "refusing non-loopback bind \"0.0.0.0:3214\" for \"queryd\"
   (set LH_QUERYD_ALLOW_NONLOOPBACK=1 to override; see audit R-001)"

internal/shared/bind.go            — requireLoopbackOrOverride + isLoopbackAddr
internal/shared/bind_test.go       — 7 test funcs incl. table-driven
                                     IPv4/IPv6/hostname coverage and
                                     per-service env isolation
internal/shared/server.go          — 1-line gate in Run before listen

Verified:
  go test -short ./internal/shared/ — all green (was 14 funcs, now 21)
  just verify                       — vet + test + 9 smokes still 33s

Doesn't address R-001's full attack surface (any reachable port can
issue arbitrary SQL); ADR-003 + Bearer-token middleware is the
follow-up. This commit makes the implicit "localhost-only is the auth
layer" guarantee explicit, and un-bypassable without the env override.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:56:42 -05:00
root
125e1c80b9 tests: close R-002 / R-003 / R-008 — internal/shared, storeclient, queryd/db.go
Audit-driven follow-up to the Rust scrum review on the 3 untested
HIGH-risk packages. Both the audit (reports/scrum/risk-register.md)
and the scrum (tests/real-world/runs/scrum_mojxb5bw/) independently
flagged these files as the highest-leverage missing test coverage.

internal/shared/server_test.go — 8 test funcs
  newListener: valid addr, invalid addr (non-numeric port, port
    out of range, port-already-in-use surfacing as net.OpError).
  Empty-addr-is-valid: documents the net.Listen quirk that "" binds
    an OS-picked port — future readers don't need to relitigate.
  HealthResponse marshal: JSON shape stable, round-trip clean.
  /health handler reconstructed via httptest.Server: status 200,
    Content-Type application/json, body fields stable.
  RegisterRoutes callback: contract verified (callback is invoked
    with a real chi.Router, mounted route reachable end-to-end).
  Run bind-failure surface: synchronous error, not a goroutine swallow
    — the contract Run depends on per the race-safe-startup comment.

internal/shared/config_test.go — 6 test funcs
  DefaultConfig G0 port pinning: every binary's default bind locked
    in (3110/3211-3216) so a refactor can't silently flip a port.
  LoadConfig empty path: returns DefaultConfig, no error.
  LoadConfig missing file: returns DefaultConfig, logs warn (the warn
    line shows up in test output, captured-but-not-asserted).
  LoadConfig valid TOML: partial overrides land, unspecified sections
    keep defaults (TOML decoder leave-alone behavior).
  LoadConfig invalid TOML: returns wrapped 'parse config' error.
  LoadConfig unreadable file: skipped under root (root reads 0000);
    captures the read-error wrap path for non-root contexts.

internal/storeclient/client_test.go — 14 test funcs
  safeKey table-driven: plain segments, single slash, empty, trailing
    slash, space (→ %20), apostrophe (→ %27), unicode (→ %C3%A9),
    deep nesting. Locks URL-escape contract per scrum suggestion.
  recordingServer helper backs Put/Get/Delete/List against
    httptest.Server: verifies method, path, body bytes round-trip.
  ErrKeyNotFound on 404 (errors.Is round-trip).
  Non-OK status wraps body preview into the error chain.
  Delete accepts both 200 and 204 (S3 vs compatible-store quirk).
  List parses JSON shape and surfaces query-string prefix.
  Context cancellation propagates through Put as context.Canceled.
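The URL-escape contract those safeKey cases lock in (space → %20, apostrophe → %27, é → %C3%A9, slashes preserved as segment separators) can be sketched as below. This is a hypothetical re-derivation of the contract from the test cases, not the internal/storeclient implementation itself:

```go
package main

import (
	"fmt"
	"strings"
)

// safeKey percent-encodes each path segment of an object key so the key
// embeds safely in a request URL, while "/" separators survive intact.
func safeKey(key string) string {
	segs := strings.Split(key, "/")
	for i, s := range segs {
		segs[i] = escapeSegment(s)
	}
	return strings.Join(segs, "/")
}

// escapeSegment keeps only RFC 3986 unreserved bytes and percent-encodes
// everything else; multi-byte runes encode per UTF-8 byte (é → %C3%A9).
func escapeSegment(s string) string {
	var b strings.Builder
	for _, c := range []byte(s) {
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '-', c == '.', c == '_', c == '~':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, "%%%02X", c)
		}
	}
	return b.String()
}
```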

internal/queryd/db_test.go — 5 test funcs (with subtests)
  sqlEscape table-driven: 8 cases including empty, all-quotes,
    nested apostrophes (the case from the scrum suggestion).
  redactCreds table-driven: 6 cases — both keys, single keys,
    empty, multi-occurrence, placeholder-collision (lossy but safe).
  buildBootstrap statement order: INSTALL → LOAD → CREATE SECRET.
  buildBootstrap endpoint schemes: http strips + USE_SSL false,
    https keeps SSL true, no-scheme defaults SSL true (prod ambient).
  buildBootstrap URL_STYLE: 'path' vs 'vhost' branch.
  buildBootstrap escapes credential quotes: future SSO-token-with-
    apostrophe doesn't break out of the SQL string literal — the
    belt holds when the suspenders snap.
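The sqlEscape behavior those cases pin down is quote-doubling inside a SQL string literal. A minimal sketch of that contract (the real function is in internal/queryd/db.go; this is only the escaping rule, not the CREATE SECRET formation around it):

```go
package main

import "strings"

// sqlEscape doubles single quotes so a value embeds safely inside a
// single-quoted SQL string literal: a credential containing an
// apostrophe cannot break out of the literal.
func sqlEscape(s string) string {
	return strings.ReplaceAll(s, "'", "''")
}
```

For example, a token like `O'Brien` becomes `O''Brien`, which DuckDB parses back to the original value inside `'...'`.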

Real finding caught by my own test:
  net.Listen("tcp", "") succeeds (OS-picked port) — captured as
  TestNewListener_EmptyAddrIsValid so the quirk is documented.
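The quirk is easy to reproduce in isolation; this standalone snippet (illustrative, not the repo's test) shows that an empty address does not error and instead binds an OS-picked port:

```go
package main

import "net"

// listenUnspecified demonstrates the quirk: net.Listen("tcp", "")
// succeeds, binding an OS-picked port on the unspecified address,
// rather than returning an error for the empty string.
func listenUnspecified() (*net.TCPAddr, error) {
	ln, err := net.Listen("tcp", "")
	if err != nil {
		return nil, err
	}
	addr := ln.Addr().(*net.TCPAddr) // e.g. [::]:<ephemeral port>
	ln.Close()
	return addr, nil
}
```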

Verified:
  go test -short ./... — every internal/ package now has tests
    (no more 'no test files' lines for shared/storeclient).
  just verify — vet + test + 9 smokes green in 33s.
  just proof contract — 53/0/1 green (no harness regression).

Closes:
  R-002 internal/shared zero tests        HIGH
  R-003 internal/storeclient zero tests   HIGH
  R-008 queryd/db.go untested             MED (sqlEscape, redactCreds,
                                              CREATE SECRET formation)

Composite scrum score should move from 43 → ~46 / 60 — the three
HIGH/MED risks closed, internal/shared and internal/storeclient
become "tested + load-bearing" instead of "untested + load-bearing."

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:51:05 -05:00
root
ff9823b871 scrum audit re-run: 35 → 43 / 60 after Phase A-E + S0.3
Re-runs the SCRUM.md framework against HEAD (4840c10) to score the
delta from the audit baseline at 91edd43. Composite +8.

Scoring deltas:
  Reproducibility       7 → 9  (just verify, just doctor, pre-push hook)
  Test Coverage         6 → 8  (168 proof harness assertions; Go-test
                                gaps in shared/storeclient remain)
  Trust Boundary        7 → 7  (no code change; R-001/R-007 open)
  Memory Correctness    3 → 4  (vectord persistence proven; Mem0
                                pathway/playbook still not ported)
  Deployment Readiness  4 → 5  (just doctor; REPLICATION/systemd open)
  Maintainability       8 → 8  (spine unchanged; harness obeys
                                CLAUDE_REFACTOR_GUARDRAILS)

Risk register changes:
  R-004 (smokes not gated)        CLOSED — just verify + pre-push hook
  R-005 (cmd/main.go untested)    partial — proof harness covers wiring
  R-012 (empty tests/ dir)        CLOSED — populated by harness
  R-001/R-002/R-003/R-006/R-007/R-008/R-009/R-010 unchanged

Sprint 0 progress:
  S0.1 just doctor               DONE
  S0.3 just verify + pre-push    DONE
  S0.6 tests/ dir cleanup        DONE
  S0.2 just smoke-fixtures       open
  S0.4 cmd/main_test × 6         partial (harness coverage; go-test gap)
  S0.5 shared/storeclient tests  open  (HIGH risks still unaddressed)

New finding from this rerun (worth recording):
  Queryd refresh-tick race in 04_query_correctness — cache-warm
  binaries fire SELECTs faster than queryd's 500ms refresh tick.
  Caught by integration mode going 104/0/1 → 102/1/1, fixed at
  4840c10 with proof_wait_for_sql helper. Exactly the failure-mode
  the harness was designed to catch.

Original 5 audit reports preserved as immutable history at
91edd43; this file documents the delta only.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:37:45 -05:00
root
4840c10311 proof harness: fix queryd refresh-tick race in 04_query_correctness
Caught by the audit rerun: with cache-warm binaries, 04 fires its
first SELECT faster than queryd's 500ms refresh tick — Q1 returned
400 ("table not found") even though 03_ingest had registered the
manifest. Subsequent queries (after the next tick) succeeded.

This is an eventual-consistency wait, not a retry — queryd's
contract is that views appear within one tick of catalogd having the
manifest. Production code does not need changing.

Added to lib/http.sh:
  proof_wait_for_sql <budget_sec> <sql>
    polls a SQL probe until it returns 200 or budget elapses; emits
    no evidence (test setup, not a claim).

Used in 04_query_correctness:
  Wait up to 5s for queryd to have the view before running the 5
  SQL assertions. Skip-with-loud-reason if the view never appears.

Verified: integration mode back to 104 pass / 0 fail / 1 skip after
fix. The skip is the unchanged GOLAKE-085 informational record.

This is exactly the kind of finding the harness was designed to
surface — the regression existed in the codebase the moment Phase D
shipped, but only fired when the next compare run hit cache-warm
timing. Without the harness, it would have surfaced on a CI run
weeks from now and been hard to bisect.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:36:28 -05:00
root
4bb6548cbc proof harness Phase E: FINAL_REPORT.md answers the 9 mandated questions
Per docs/TEST_PROOF_SCOPE.md, this is the closing deliverable for the
proof harness: a single document that names what's proven, what's
partially proven, what failed, what was skipped and why, what evidence
exists for each, what bottlenecks were measured, what contract drift
was found, what refactor risks remain, and what to fix first.

Per-run report dirs (tests/proof/reports/proof-<ts>/) keep their
existing summary.md + summary.json + raw/ structure — they are the
replayable evidence chain. FINAL_REPORT.md is the stable, repo-tracked
synthesis pointing at them.

Headline findings (no surprises — harness behaves as designed):
  - 24 claims encoded; 22 fully proven, 1 informational (GOLAKE-085
    duplicate vector ID, contract not yet specified), 0 failed.
  - 4 contract-drift findings recorded as canonical: vectord add
    body field is `items` not `vectors`, search response is `results`
    not `hits`, index info is `length` not `count`, status codes
    201/204 not 200. All caught during Phase B; all now pinned by the
    harness.
  - Performance baseline shows queryd as the largest RSS (69 MiB,
    DuckDB process); single-sample noise floor is ~40% — tightening
    to multi-sample medians is a documented Sprint follow-up.
  - HIGH-risk audit findings (R-001 queryd /sql, R-002/R-003 untested
    shared+storeclient) are NOT closed by the harness — it's a
    multiplier, not a replacement for unit tests + auth posture.

The proof harness is complete. 11 cases · 3 modes · 168 assertions
peak across all tiers · ~22s total wall (contract+integration+perf).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 05:32:56 -05:00