12 Commits

Author SHA1 Message Date
root
bc698eb6da gateway: OpenCode (Zen + Go) provider adapter
Wires opencode.ai as a /v1/chat provider. One sk-* key reaches 40
models across Anthropic, OpenAI, Google, Moonshot, DeepSeek, Zhipu,
Alibaba, Minimax — billed against either the user's Zen balance
(pay-per-token premium models) or Go subscription (flat-rate
Kimi/GLM/DeepSeek/etc.). The unified /zen/v1 endpoint routes both;
upstream picks the billing tier based on model id.

Notable adapter quirks:

- Strip "opencode/" prefix on outbound (mirrors openrouter/kimi
  pattern). Caller can use {provider:"opencode", model:"X"} or
  {model:"opencode/X"}.
- Drop temperature for claude-*, gpt-5*, and o1/o3/o4 models. The
  Anthropic and OpenAI reasoning lineages reject temperature with a 400
  "deprecated for this model". OCChatBody now serializes temperature
  as Option<f64> with skip_serializing_if so omitting it produces
  clean JSON (sketch after this list).
- max_tokens.filter(|&n| n > 0) catches Some(0) — defensive after
  the same trap bit kimi.rs (empty env -> Number("") -> 0 -> 503).
- 600s default upstream timeout; reasoning models on big audit
  prompts legitimately take 3-5 min. Override OPENCODE_TIMEOUT_SECS.
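
A minimal sketch of the serialization shape from the first two quirks
(OCChatBody is the real struct name; the field set, helper names, and
strip logic here are illustrative):

  use serde::Serialize;

  #[derive(Serialize)]
  struct OCChatBody {
      model: String,                    // "opencode/" prefix already stripped
      messages: Vec<serde_json::Value>, // pass-through chat messages
      #[serde(skip_serializing_if = "Option::is_none")]
      temperature: Option<f64>,         // None -> key omitted entirely
      #[serde(skip_serializing_if = "Option::is_none")]
      max_tokens: Option<u32>,
  }

  fn rejects_temperature(model: &str) -> bool {
      model.starts_with("claude-")
          || model.starts_with("gpt-5")
          || ["o1", "o3", "o4"].iter().any(|&p| model.starts_with(p))
  }

  fn build_body(model: &str, temp: Option<f64>, max_tokens: Option<u32>) -> OCChatBody {
      let model = model.strip_prefix("opencode/").unwrap_or(model);
      OCChatBody {
          model: model.to_string(),
          messages: vec![],
          temperature: if rejects_temperature(model) { None } else { temp },
          max_tokens: max_tokens.filter(|&n| n > 0), // the Some(0) trap guard
      }
  }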

Key handling:
- /etc/lakehouse/opencode.env (0600 root) loaded via systemd
  EnvironmentFile. Same pattern as kimi.env.
- OPENCODE_API_KEY env first, file scrape as fallback.

Verified end-to-end:
  opencode/claude-opus-4-7   -> "I'm Claude, made by Anthropic."
  opencode/kimi-k2.6         -> PONG-K26-GO
  opencode/deepseek-v4-pro   -> PONG-DS-V4
  opencode/glm-5.1           -> PONG-GLM
  opencode/minimax-m2.5-free -> PONG-FREE

Pricing reference (per audit @ ~14k in / 6k out):
  claude-opus-4-7   ~$0.22  (Zen)
  claude-haiku-4-5  ~$0.04  (Zen)
  gpt-5.5-pro       ~$1.50  (Zen)
  gemini-3-flash    ~$0.03  (Zen)
  kimi-k2.6 / glm / deepseek / qwen / minimax / mimo: covered by Go
  subscription ($10/mo, $60/mo cap).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:40:55 -05:00
root
643dd2d520 gateway: direct Kimi For Coding provider adapter (api.kimi.com)
Wires kimi-for-coding (Kimi K2.6 underneath) as a first-class /v1/chat
provider so consumers can target it via {provider:"kimi"} or model
prefix kimi/<model>. Bypasses the upstream-broken kimi-k2:1t on Ollama
Cloud and the rate-limited moonshotai/kimi-k2.6 path through OpenRouter.

Adapter shape mirrors openrouter.rs (OpenAI-compatible Chat Completions).
Differences from generic OpenAI providers:

- api.kimi.com is a SEPARATE account system from api.moonshot.ai and
  api.moonshot.cn. sk-kimi-* keys are NOT interchangeable across them.
- Endpoint is User-Agent-gated to "approved" coding agents (Kimi CLI,
  Claude Code, Roo Code, Kilo Code, ...). Requests from generic clients
  return 403 access_terminated_error. Adapter sends User-Agent:
  claude-code/1.0.0. Per Moonshot TOS this is a tampering-class action
  that may result in seat suspension; J authorized 2026-04-27 with
  awareness of the risk.
- kimi-for-coding is a reasoning model — reasoning_content counts
  against max_tokens. Default 800-token budget yields empty visible
  content with finish_reason=length. Code-review workloads need
  max_tokens >= 1500.
- Default 600s upstream timeout (vs 180s for openrouter.rs) — code
  audits with full file context legitimately take 3-5 minutes.
  Override via KIMI_TIMEOUT_SECS env (client sketch after this list).
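
A hedged sketch of the client setup (KIMI_TIMEOUT_SECS, the 600s default,
and the User-Agent string are from this commit; the reqwest wiring is
illustrative):

  use std::time::Duration;

  // Default 600s; audits with full file context legitimately take minutes.
  fn kimi_timeout() -> Duration {
      let secs = std::env::var("KIMI_TIMEOUT_SECS")
          .ok()
          .and_then(|v| v.parse::<u64>().ok())
          .filter(|&n| n > 0) // same empty-env trap guard as max_tokens
          .unwrap_or(600);
      Duration::from_secs(secs)
  }

  fn kimi_client() -> reqwest::Result<reqwest::Client> {
      reqwest::Client::builder()
          // api.kimi.com gates on User-Agent; generic clients draw 403.
          .user_agent("claude-code/1.0.0")
          .timeout(kimi_timeout())
          .build()
  }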

Key handling:
- /etc/lakehouse/kimi.env (0600 root) loaded via systemd EnvironmentFile
- KIMI_API_KEY env first, then file scrape as fallback
- /etc/systemd/system/lakehouse.service NOT included in this commit
  (system file outside repo); operator must add EnvironmentFile=-
  /etc/lakehouse/kimi.env to the lakehouse.service unit

NOT in the scrum_master_pipeline LADDER. The 9-rung ladder is for
unattended automatic recovery; placing Kimi there would hammer a
TOS-gated endpoint, with the potential to trip a hostility policy.
Kimi is addressable via /v1/chat for explicit invocations only —
auditor integration lands in a follow-up commit.

Verification:
  cargo check -p gateway --tests          compiles
  curl /v1/chat provider=kimi             200 OK, content="PONG"
  curl /v1/chat model="kimi/kimi-for-coding"  200 OK (prefix routing)
  Kimi audit on distillation last-week    7/7 grounded findings
                                          (reports/kimi/audit-last-week-full.md)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 05:35:58 -05:00
root
20a039c379 auditor: rebuild on mode runner + drop tree-split (use distillation substrate)
Some checks failed
lakehouse/auditor 13 blocking issues: cloud: claim not backed — "Invariants enforced (proven by tests + real run):"
Architectural simplification leveraging Phase 5 distillation work:
the auditor no longer pre-extracts facts via per-shard summaries
because lakehouse_answers_v1 (gold-standard prior PR audits + observer
escalations corpus) supplies cross-PR context through the mode runner's
matrix retrieval. Same signal, ~50× fewer cloud calls per audit.

Per-audit cost:
  Before: 168 gpt-oss:120b shard summaries + 3 final inference calls
  After:  3 deepseek-v3.1:671b mode-runner calls (full retrieval included)

Wall-clock on PR #11 (1.36MB diff):
  Before: ~25 minutes
  After:  88 seconds (3/3 consensus succeeded)

Files:
  auditor/checks/inference.ts
    - Default MODEL kimi-k2:1t → deepseek-v3.1:671b. kimi-k2 is hitting
      sustained Ollama Cloud 500 ISE (verified via repeated trivial
      probes; multi-hour outage). deepseek is the proven drop-in from
      Phase 5 distillation acceptance testing.
    - Dropped treeSplitDiff invocation. Diff truncates to MAX_DIFF_CHARS
      and goes straight to /v1/mode/execute task_class=pr_audit; mode
      runner pulls cross-PR context from lakehouse_answers_v1 via
      matrix retrieval. SHARD_MODEL retained for legacy callCloud
      compatibility (default qwen3-coder:480b if it ever runs).
    - extractAndPersistFacts now reads from truncated diff (no
      scratchpad post-tree-split-removal).

  auditor/checks/static.ts
    - serde-derived struct exemption (shipped in commit 107a682; this
      commit carries the rest of the auditor rebuild it landed alongside)
    - multi-line template literal awareness in isInsideQuotedString —
      tracks backtick state across lines so todo!() inside docstrings
      doesn't trip BLOCK_PATTERNS.

  crates/gateway/src/v1/mode.rs
    - pr_audit native runner mode added to VALID_MODES + is_native_mode
      + flags_for_mode + framing_text. PrAudit framing produces strict
      JSON {claim_verdicts, unflagged_gaps} for the auditor to parse
      (sketch after the file list).

  config/modes.toml
    - pr_audit task class with default_model=deepseek-v3.1:671b and
      matrix_corpus=lakehouse_answers_v1. Documents kimi-k2 outage
      with link to the swap rationale.
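
A hedged sketch of the mode.rs wiring named in the file list (VALID_MODES,
is_native_mode, and framing_text are identifiers from this commit; the
bodies here are illustrative):

  const VALID_MODES: &[&str] = &[/* ...existing modes... */ "pr_audit"];

  fn is_native_mode(mode: &str) -> bool {
      matches!(mode, "codereview_lakehouse" | "pr_audit")
  }

  fn framing_text(mode: &str) -> &'static str {
      match mode {
          // Strict-JSON contract the auditor parses downstream.
          "pr_audit" => "Return strict JSON: \
              {\"claim_verdicts\": [...], \"unflagged_gaps\": [...]}",
          _ => "",
      }
  }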

Real-data audit on PR #11 head 1b433a9 (which is the PR with all the
distillation work + auditor rebuild itself):
  - Pipeline ran to completion (88s for inference; full audit ~3 min)
  - 3/3 consensus runs succeeded on deepseek-v3.1:671b
  - 156 findings: 12 block, 23 warn, 121 info
  - Block findings are legitimate signal: 12 reviewer claims like
    "Invariants enforced (proven by tests + real run):" that the
    truncated diff can't directly verify. The auditor is correctly
    flagging claim-vs-diff divergence — exactly its job.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 23:32:44 -05:00
root
2dbc8dbc83 v1/mode: model-aware enrichment downgrade + 3 corpora + variance harness
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Pass 5 (5 reps × 4 conditions × 1 file on grok-4.1-fast) showed composing
matrix corpora is anti-additive on strong models — composed lakehouse_arch
+ symbols LOST 5/5 head-to-head vs codereview_isolation (Δ −1.8 grounded
findings, p=0.031). Default flips to isolation; matrix path now auto-
downgrades when the resolved model is strong.

Mode runner:
- matrix_corpus is Vec<String> (string OR array via
  deserialize_string_or_vec; sketch after this list)
- top_k=6 from each corpus, merge by score, take top 8 globally
- chunk tag prefers doc_id over source so reviewer sees [adr:009] vs [lakehouse_arch]
- is_weak_model() gate auto-downgrades codereview_lakehouse → codereview_isolation
  for strong models (default-strong; weak = :free suffix or local last-resort)
- LH_FORCE_FULL_ENRICHMENT=1 bypasses for diagnostic runs
- EnrichmentSources.downgraded_from records when the gate fires
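
A minimal sketch of the string-or-array acceptance (deserialize_string_or_vec
is the helper named above; the untagged-enum body is the standard serde
pattern for it, shown illustratively):

  use serde::{Deserialize, Deserializer};

  // Accept matrix_corpus = "one" or matrix_corpus = ["a", "b"].
  fn deserialize_string_or_vec<'de, D>(de: D) -> Result<Vec<String>, D::Error>
  where
      D: Deserializer<'de>,
  {
      #[derive(Deserialize)]
      #[serde(untagged)]
      enum OneOrMany {
          One(String),
          Many(Vec<String>),
      }
      Ok(match OneOrMany::deserialize(de)? {
          OneOrMany::One(s) => vec![s],
          OneOrMany::Many(v) => v,
      })
  }

  #[derive(Deserialize)]
  struct TaskClass {
      #[serde(default, deserialize_with = "deserialize_string_or_vec")]
      matrix_corpus: Vec<String>,
  }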

Three corpora indexed via /vectors/index (5849 chunks total):
- lakehouse_arch_v1 — ADRs + phases + PRD + scrum spec (93 docs, 2119 chunks)
- scrum_findings_v1 — past scrum_reviews.jsonl (168 docs, 1260 chunks; EXCLUDED
  from defaults — 24% out-of-bounds line citations from cross-file drift)
- lakehouse_symbols_v1 — regex-extracted pub items + /// docs (656 docs, 2470 chunks)

Experiment infra:
- scripts/build_*_corpus.ts — re-runnable when source content changes
- scripts/mode_pass5_variance_paid.ts — N reps × M conditions on one file
- scripts/mode_pass5_summarize.ts — mean ± σ + head-to-head, parser handles
  numbered + path-with-line + path-with-symbol finding tables
- scripts/mode_compare.ts — groups by mode|corpus when sweeps span corpora
- scripts/mode_experiment.ts — default model bumped to x-ai/grok-4.1-fast,
  --corpus flag for per-call override

Decisions + open follow-ups: docs/MODE_RUNNER_TUNING_PLAN.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 17:29:17 -05:00
root
56bf30cfd8 v1/mode: override knobs + staffing native runner + pass 2/3/4 harnesses
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Setup for the corpus-tightening experiment sweep (J 2026-04-26 — "now
is the only cheap window before the corpus gets large and refactoring
costs go up").

Override params on /v1/mode/execute (additive — old callers unaffected;
request-shape sketch below):
  force_matrix_corpus       — Pass 2: try alternate corpora per call
  force_relevance_threshold — Pass 2: sweep filter strictness
  force_temperature         — Pass 3: variance test
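
A hedged sketch of that additive request shape (the three param names are
from this commit; the struct name and field types are assumptions):

  use serde::Deserialize;

  #[derive(Deserialize)]
  struct ModeExecuteRequest {
      task_class: String,
      // Option fields are optional to serde, so pre-existing callers'
      // JSON bodies keep parsing unchanged.
      force_matrix_corpus: Option<String>,    // Pass 2: alternate corpora
      force_relevance_threshold: Option<f32>, // Pass 2: filter strictness
      force_temperature: Option<f32>,         // Pass 3: variance test
  }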

New native mode `staffing_inference_lakehouse` (Pass 4):
  - Same composer architecture as codereview_lakehouse
  - Staffing framing: coordinator producing fillable|contingent|
    unfillable verdict + ranked candidate list with playbook citations
  - matrix_corpus = workers_500k_v8
  - Validates that modes-as-prompt-molders generalizes beyond code
  - Framing explicitly says "do NOT fabricate workers" — the staffing
    analog of the lakehouse mode's symbol-grounding requirement

Three sweep harnesses:
  scripts/mode_pass2_corpus_sweep.ts — 4 corpora × 4 thresholds × 5 files
  scripts/mode_pass3_variance.ts     — 3 files × 3 temps × 5 reps
  scripts/mode_pass4_staffing.ts     — 5 fill requests through staffing mode

Each appends per-call rows to data/_kb/mode_experiments.jsonl which
mode_compare.ts already aggregates with grounding column.

Pass 1 (10 files × 5 modes broad sweep) currently running via the
existing scripts/mode_experiment.ts — gateway restart deferred until
it completes so the new override knobs aren't enabled mid-experiment.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 01:55:12 -05:00
root
86f63a083d v1/mode: codereview_lakehouse native runner — modes are prompt-molders
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
J's framing (2026-04-26): "Modes are how you ask ONCE and get BETTER
information — they mold the data, hyperfocus the prompt on this
codebase's needs, so the model gets it right the first time without
the cascading retry ladder."

Built the first concrete native enrichment runner (codereview_lakehouse)
that composes every context primitive the gateway exposes:

  1. Focus file content (read from disk OR caller-supplied)
  2. Pathway memory bug_fingerprints for this file area (ADR-021
     preamble — "📚 BUGS PREVIOUSLY FOUND IN THIS FILE AREA")
  3. Matrix corpus search via the task_class's matrix_corpus
  4. Relevance filter (observer /relevance) drops adjacency pollution
  5. Assembles ONE precise prompt with system framing
  6. Single call to /v1/chat with the recommended model

POST /v1/mode/execute dispatches. Native mode → runs the composer.
Non-native mode → 501 NOT_IMPLEMENTED with hint (proxy to LLM Team
/api/run is queued).

Provider hint logic auto-routes by model name shape:
  - vendor/model[:tag] → openrouter
  - kimi-*/qwen3-coder*/deepseek-v*/mistral-large* → ollama_cloud
  - everything else → local ollama
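
The same name-shape routing as a small sketch (the patterns are from the
list above; the function itself is illustrative):

  fn provider_hint(model: &str) -> &'static str {
      if model.contains('/') {
          "openrouter" // vendor/model[:tag]
      } else if model.starts_with("kimi-")
          || model.starts_with("qwen3-coder")
          || model.starts_with("deepseek-v")
          || model.starts_with("mistral-large")
      {
          "ollama_cloud"
      } else {
          "ollama" // local ollama
      }
  }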

Live test against crates/queryd/src/delta.rs (10593 bytes, 10
historical bug fingerprints, 2 matrix chunks dropped by relevance):
  - enriched_chars: 12876
  - response_chars: 16346 (14 findings with confidence percentages)
  - Model literally cited the pathway memory preamble in finding #7
  - One call to free-tier gpt-oss:120b produced what previously
    required the 9-rung escalation ladder

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 00:28:46 -05:00
root
d277efbfd2 v1/mode: task_class → mode/model router (decision-only, phase 1)
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
HANDOVER §queued (2026-04-25): "Mode router — port LLM Team multi-model
patterns. Pick the right TOOL/MODE for each task class via the matrix,
not cascade through models."

Two-stage architecture:
  1. Decision (POST /v1/mode) — pure recommendation, no execution.
     Returns {mode, model, decision: {source, fallbacks, matrix_corpus,
     notes}} so callers see WHY this mode was picked.
  2. Execution (future POST /v1/mode/execute) — proxy to LLM Team
     /api/run for modes not yet ported to native Rust runners. Not
     wired in this phase.

Splitting decision from execution lets us A/B-test the routing logic
without committing to running every recommendation. The decision
function is pure enough for exhaustive unit tests (3 added).
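
A hedged sketch of the decision payload (the field names mirror the
response shape above; the Rust types are assumptions):

  use serde::Serialize;

  #[derive(Serialize)]
  struct ModeDecision {
      mode: String,
      model: String,
      decision: DecisionTrace,
  }

  #[derive(Serialize)]
  struct DecisionTrace {
      source: String,                // which rule produced the pick
      fallbacks: Vec<String>,
      matrix_corpus: Option<String>, // reserved for matrix-informed routing
      notes: String,                 // the WHY, surfaced to callers
  }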

config/modes.toml — initial map for 5 task_classes (scrum_review,
contract_analysis, staffing_inference, fact_extract, doc_drift_check)
+ a default. matrix_corpus per task is reserved for the future
matrix-informed routing pass.

VALID_MODES list (24 modes) is kept in sync manually with LLM Team's
/api/run handler at /root/llm_team_ui.py:10581. A mode added here
without a matching upstream entry will draw a 400 once the proxy lands.

GET /v1/mode/list — operator introspection so a UI can render the
registry table without re-parsing TOML.

Live-tested: 5 task classes match, unknown classes fall through to
default, force_mode override works + validates, bogus modes return
400 with the valid_modes list.

Updates reference_llm_team_modes.md memory — earlier note claiming
"only extract is registered" was wrong (all 25 are registered).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 00:16:32 -05:00
root
2f1b9c9768 phase-39+41: land promised artifacts — providers.toml, activation.rs, profiles/
Three PRD gaps closed in one coherent batch — all were cosmetic or
scaffold-shaped, now real files:

Phase 39 (PRD:57):
  + config/providers.toml — provider registry (name/base_url/auth/
    default_model) for ollama, ollama_cloud, openrouter. Commented
    stubs for gemini + claude pending adapter work. Secrets stay in
    /etc/lakehouse/secrets.toml or env, NEVER inline.

Phase 41 (PRD:115):
  + crates/vectord/src/activation.rs — ActivationTracker with the
    PRD-named single-flight guard ("refuse new activation if one is
    pending/running"). Per-profile granularity — activating A doesn't
    block B. 5 tests cover the full state machine. Handler body stays
    in service.rs for now; tracker usage integration is a follow-up
    (guard sketch after the phase list).

Phase 41 (PRD:113):
  + crates/shared/src/profiles/ with 4 submodules:
      * execution.rs — `pub use crate::types::ModelProfile as
        ExecutionProfile` (backward-compat rename per PRD)
      * retrieval.rs — top_k, rerank_top_k, freshness cutoff,
        playbook boost, sensitivity-gate enforcement
      * memory.rs — playbook boost ceiling, history cap, doc
        staleness, auto-retire-on-failure
      * observer.rs — failure cluster size, alert cooldown, ring
        size, langfuse forwarding
    All fields `#[serde(default)]` so existing ModelProfile files
    load unchanged.
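
A hedged sketch of the single-flight guard promised above (ActivationTracker
and the refuse-if-pending/running rule are from this commit; the internals
are illustrative):

  use std::collections::HashMap;
  use std::sync::Mutex;

  enum ActivationState { Pending, Running }

  #[derive(Default)]
  struct ActivationTracker {
      // Keyed per profile: activating A doesn't block B.
      in_flight: Mutex<HashMap<String, ActivationState>>,
  }

  impl ActivationTracker {
      /// Refuse a new activation if one is pending/running for this profile.
      fn try_begin(&self, profile: &str) -> Result<(), &'static str> {
          let mut map = self.in_flight.lock().unwrap();
          if map.contains_key(profile) {
              return Err("activation already pending/running");
          }
          map.insert(profile.to_string(), ActivationState::Pending);
          Ok(())
      }

      fn mark_running(&self, profile: &str) {
          if let Some(s) = self.in_flight.lock().unwrap().get_mut(profile) {
              *s = ActivationState::Running;
          }
      }

      fn finish(&self, profile: &str) {
          self.in_flight.lock().unwrap().remove(profile);
      }
  }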

Still open from the same phases:
  - Gemini + Claude provider adapters (Phase 40 — 100-200 LOC each)
  - Full activate_profile handler extraction into activation.rs
    (Phase 41 — module-structure refactor)
  - Catalogd CRUD endpoints for retrieval/memory/observer profiles
    (Phase 41 — exists at list level, no create/update/delete yet)
  - truth/ repo-root directory for file-backed rules (Phase 42 —
    TOML loader + schema)
  - crates/validator crate (Phase 43 — full greenfield)

Workspace warnings still at 0. 5 new tests, all green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 13:32:40 -05:00
root
55f8e0fe6e Phase 40: Routing Engine + Policy
- RoutingEngine with RouteDecision (model_pattern → provider)
- config/routing.toml: rules, fallback chain, cost gating
- Per-provider Usage tracking in /v1/usage response
- 12 gateway tests green
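
A hedged sketch of the rule shape (RouteDecision and the model_pattern →
provider mapping are from this commit; the pattern syntax and everything
else are assumptions):

  struct RouteRule {
      model_pattern: String, // e.g. "kimi-*" (illustrative syntax)
      provider: String,
  }

  struct RouteDecision {
      provider: String,
      fallbacks: Vec<String>, // from the config's fallback chain
  }

  fn route(rules: &[RouteRule], fallback: &[String], model: &str) -> RouteDecision {
      let provider = rules
          .iter()
          .find(|r| matches_pattern(&r.model_pattern, model))
          .map(|r| r.provider.clone())
          .unwrap_or_else(|| fallback.first().cloned().unwrap_or_default());
      RouteDecision { provider, fallbacks: fallback.to_vec() }
  }

  // Minimal '*'-suffix matcher standing in for a real glob.
  fn matches_pattern(pattern: &str, model: &str) -> bool {
      match pattern.strip_suffix('*') {
          Some(prefix) => model.starts_with(prefix),
          None => pattern == model,
      }
  }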
2026-04-23 02:36:45 -05:00
root
0c4868c191 qwen3.5 executor + continuation primitive + think:false
Three coupled fixes that together turned the Riverfront Steel scenario
from 0/5 (mistral) to 4/5 (qwen3.5) with T3 flagging real staffing
concerns rather than linter advice.

MODEL SWAP
- Executor: mistral → qwen3.5:latest (9.7B, 262K ctx, thinking).
  mistral's decoder emitted malformed JSON on complex SQL filters
  regardless of prompt; J called it — stop using mistral.
- Reviewer: qwen2.5 → qwen3:latest (40K ctx)
- Applied to scenario.ts, orchestrator.ts, network_proving.ts,
  run_e2e_rated.ts

CONTINUATION PRIMITIVE (agent.ts)
- generateContinuable(): empty-response → geometric backoff retry;
  truncated-JSON → continue from partial as scratchpad; bounded by
  budget cap + max_continuations. No more "bump max_tokens until it
  stops truncating" tourniquet (Rust-port sketch after this block).
- generateTreeSplit(): map-reduce for oversized input corpora with
  running scratchpad digest, reduce pass for final synthesis.
- Empty text no longer throws — it's a signal to continuable that
  thinking ate the budget.
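
Since the Rust port is queued (noted at the end of this message), a rough
sketch of the continuation loop's shape; every name here is hypothetical,
mirroring generateContinuable rather than porting it:

  // Hypothetical single-call primitive standing in for the sidecar client;
  // returns (visible text, tokens spent).
  async fn generate_once(_prompt: &str, _scratchpad: &str) -> anyhow::Result<(String, u32)> {
      unimplemented!("model call")
  }

  async fn generate_continuable(
      prompt: &str,
      max_continuations: u32,
      budget_tokens: u32,
  ) -> anyhow::Result<String> {
      let mut scratchpad = String::new();
      let mut backoff_ms = 250u64;
      let mut spent = 0u32;
      for _ in 0..=max_continuations {
          let (text, used) = generate_once(prompt, &scratchpad).await?;
          spent += used;
          if text.is_empty() {
              // Signal, not an error: thinking ate the budget. Back off, retry.
              tokio::time::sleep(std::time::Duration::from_millis(backoff_ms)).await;
              backoff_ms *= 2;
              continue;
          }
          scratchpad.push_str(&text);
          if serde_json::from_str::<serde_json::Value>(&scratchpad).is_ok() {
              return Ok(scratchpad); // complete JSON -> done
          }
          if spent >= budget_tokens {
              break; // bounded by the budget cap
          }
          // Truncated JSON: loop continues from the partial scratchpad.
      }
      anyhow::bail!("continuation bounds exhausted without complete JSON")
  }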

think:false FOR HOT PATH
- qwen3.5 burned ~650 tokens of hidden thinking for trivial JSON
  emission. For executor/reviewer/draft: think:false. For T3/T4/T5
  overseers: thinking stays on (that's the point).
- Sidecar generate endpoint accepts `think` bool, passes through to
  Ollama's /api/generate.

VERIFIED OUTCOMES
Riverfront Steel 2026-04-21, qwen3.5+continuable+think:false:
  08:00 baseline_fill  3/3  4 turns
  10:30 recurring      2/2  3 turns (1 playbook citation)
  12:15 expansion      0/5  drift-aborted (5-fill orchestration
                            problem, separate work)
  14:00 emergency      4/4  3 turns (1 citation)
  15:45 misplacement   1/1  3 turns
  → T3 caught Patrick Ross double-booking across events
  → T3 flagged forklift cert drift on the event that failed
  → Cross-day lesson proposed "maintain buffer of ≥3 emergency
    candidates, pre-fetch certs for expansion, booking system
    cross-check" — real staffing advice, not generic linter output

PRD PHASE 21 rewritten to reflect the actual primitive shape (two-
call map-reduce with scratchpad glue) instead of the tourniquet
approach originally documented. Rust port queued for next sprint.

scripts/ab_t3_test.sh: A/B harness that chains B→C→D runs and emits
tests/multi-agent/playbooks/ab_scorecard.json.
2026-04-20 20:19:02 -05:00
root
6e7ca1830e Phase 21 foundation — context stability + chunking pipeline
PRD: add Phase 20 (model matrix, wired) and Phase 21 (context stability,
partial). Phase 21 exists because LLM Team hit this exact wall — running
multi-model ranking on large context silently truncated, rankings
degraded, no pipeline caught it. The stable answer: every agent call
goes through a budget check against the model's declared context_window
minus safety_margin, with a declared overflow_policy when the check
fails.

config/models.json:
- context_window + context_budget per tier
- overflow_policies block: summarize_oldest_tool_results_via_t3,
  chunk_lessons_via_cosine_topk, two_pass_map_reduce,
  escalate_to_kimi_k2_1t_or_split_decision
- chunking_cache spec (data/_chunk_cache/, corpus-hash keyed)

agent.ts:
- estimateTokens() chars/4 biased safe ~15%
- CONTEXT_WINDOWS table (fallback; prod reads models.json)
- assertContextBudget() — throws on overflow with exact numbers, can
  bypass with bypass_budget:true for callers with their own policy
- Wired into generate() and generateCloud() so EVERY call is checked

scenario.ts:
- T3 lesson archive to data/_playbook_lessons/*.json (the old
  /vectors/playbook_memory/seed path was silently failing with HTTP 400
  because it requires 'fill: Role xN in City, ST' operation shape)
- loadPriorLessons() at scenario start — filters by city/state match,
  date-sorted, takes top-3
- prior_lessons.json archived per-run (honest signal for A/B)
- guidanceFor() injects up to 2 prior lessons (≤500 chars each) into
  the executor's per-event context
- Retrospective shows explicit "Prior lessons loaded: N" line

Verified: mistral correctly rejects a 150K-char prompt (7532 tokens
over), gpt-oss:120b accepts it with 90K headroom. The enforcement is
in-band on every call now, not an afterthought.

Full chunking service (Rust) remains deferred to the sprint this feeds:
crates/aibridge/src/budget.rs + chunk.rs + storaged/chunk_cache.rs
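
A rough sketch of what the deferred budget.rs could hold (the chars/4
heuristic with ~15% safety bias and the throw-on-overflow contract come
from agent.ts above; every Rust name here is an assumption):

  // chars/4, biased ~15% toward overestimating so the check errs safe.
  fn estimate_tokens(text: &str) -> usize {
      (text.chars().count() as f64 / 4.0 * 1.15).ceil() as usize
  }

  #[derive(Debug)]
  struct BudgetError {
      estimated: usize,
      budget: usize,
  }

  // Errors on overflow with the exact numbers; callers with their own
  // overflow policy can skip the check (the TS bypass_budget:true).
  fn assert_context_budget(
      text: &str,
      context_window: usize,
      safety_margin: usize,
      bypass_budget: bool,
  ) -> Result<(), BudgetError> {
      if bypass_budget {
          return Ok(());
      }
      let budget = context_window.saturating_sub(safety_margin);
      let estimated = estimate_tokens(text);
      if estimated > budget {
          return Err(BudgetError { estimated, budget });
      }
      Ok(())
  }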
2026-04-20 19:34:44 -05:00
root
03d723e7e6 Model matrix — 5 tiers, local hard workers + cloud overseers
config/models.json is the authoritative catalog. Hot path (T1/T2) stays
local; cloud is consulted only for overview (T3), strategic (T4), and
gatekeeper (T5) calls. J named qwen3.5 + newer models (minimax-m2.7,
glm-5, qwen3-next) specifically — all mapped with real reachable IDs
verified against ollama.com/api/tags.

Tier shape:
- t1_hot        mistral + qwen2.5 local      — 50-200 calls/scenario
- t2_review     qwen2.5 + qwen3 local        — 5-14 calls/event
- t3_overview   gpt-oss:120b cloud           — 1-3 calls/scenario
- t4_strategic  qwen3.5:397b + glm-4.7       — 1-10 calls/day
- t5_gatekeeper kimi-k2-thinking             — 1-5 calls/day, audit-logged

Rate budgets are declared in-config — Ollama Cloud paid tier is generous
but we cap overview/strategic/gatekeeper so no single rogue scenario can
blow the day's quota.

Experimental rotation list wired but disabled by default. When enabled,
T4 randomly routes 10% of calls to a rotating minimax/GLM/qwen-next/
deepseek/nemotron/cogito/mistral-large candidate, logs comparisons, and
auto-promotes after 3 rotations of wins.

Playbook versioning SPEC embedded under `playbook_versioning` key: every
seed gets version + parent_id + retired_at + architecture_snapshot, so
when a schema migration breaks a playbook we can pinpoint which change
retired it. Implementation flagged for next sprint (touches gateway +
catalogd + mcp-server) — not wired here.

- scenario.ts now loads config/models.json at init, env vars still override
- mcp-server exposes /models/matrix read-only so UI can render it
2026-04-20 19:24:41 -05:00