16 Commits

a44ccde845
observer: overseer fallback label → qwen3.5:latest (matches reverted route)
Mirror of yesterday's execution_loop overseer revert (commit d054c0b). The observer logs an "overseer:<model>" endpoint string for analysis; when row.model is missing it falls back to a hardcoded label. PR #13 set that fallback to "claude-opus-4-7" — but the route now goes to local Ollama qwen3.5:latest, so the label was wrong. Trivial one-line fix, no behavior change; it just keeps the observer's endpoint string honest when older rows from the cloud-routing window get re-analyzed.

End-to-end verification of the local hot path (post-revert):
  BEFORE /v1/usage by_provider: []
  AFTER  /v1/usage by_provider: [{"k":"ollama","v":2}]
  → /v1/iterate fired 2 chat calls, both to local ollama
  → ZERO cloud requests (no kimi/openrouter/opencode/ollama_cloud)
  → API meter on cloud providers stays at 0 for customer requests

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
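
A minimal sketch of the fallback described above; the type and helper names here are hypothetical, not the actual observer.ts symbols:

```ts
// Hypothetical shape: the observer's endpoint label falls back to the current
// local route when an older usage row carries no model.
interface UsageRow {
  model?: string;
}

function overseerEndpointLabel(row: UsageRow): string {
  return `overseer:${row.model ?? "qwen3.5:latest"}`;
}
```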

8de94eba08
cleanup: bump qwen2.5 → qwen3.5:latest in active defaults
Some checks failed
lakehouse/auditor 16 blocking issues: cloud: claim not backed — "Verified end-to-end via playwright on devop.live/lakehouse:"
The stronger local rung is now the small-model-pipeline tier-1 default across both the Rust legacy and the Go rewrite (cf. golangLAKEHOUSE phase 1). Same JSON-clean property as qwen2.5, more capacity. Ollama still serves both side-by-side; rollback is a 4-line revert if a workload regresses.

Active-default sites:
- lakehouse.toml [ai] gen_model + rerank_model → qwen3.5:latest
- mcp-server/observer.ts diagnose call (Phase 44 /v1/chat path) → qwen3.5:latest
- mcp-server/index.ts model roster doc → qwen3.5:latest first
- crates/vectord/src/rag.rs ContinuableOpts + RagResponse.model → qwen3.5:latest

Skipped: execution_loop/mod.rs comments describing historic qwen2.5 tool_call quirks — those are documentation of past behavior, not active defaults. data/_catalog/profiles/*.json are runtime-generated (gitignored), not in scope for tracked changes.

cargo check -p vectord: clean. No behavioral change in the audit pipeline — same JSON-clean local model, same think=Some(false) posture, just a stronger upstream.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

d475fc7fff
infra: replace gpt-oss with Ollama Pro + OpenCode Zen across hot paths
Ollama Pro plan went live today (39-model fleet on the same
OLLAMA_CLOUD_KEY) and OpenCode Zen was already wired in the gateway
but not consumed. Routing every gpt-oss call site to faster /
stronger replacements:
| Site | gpt-oss → replacement | Why |
|---|---|---|
| ollama_cloud default | gpt-oss:120b → deepseek-v3.2 | newest DeepSeek revision; live-probed `pong` |
| openrouter default | openai/gpt-oss-120b:free → x-ai/grok-4.1-fast | already the scrum LADDER's PRIMARY |
| modes.toml staffing_inference | openai/gpt-oss-120b:free → kimi-k2.6 | coding-specialized, on Ollama Pro |
| modes.toml doc_drift_check | gpt-oss:120b → gemini-3-flash-preview | speed leader for factual checks |
| scrum_master_pipeline tree-split MAP+REDUCE | gpt-oss:120b → gemini-3-flash-preview | latency-dominated path (5-20× per file) |
| bot/propose.ts CLOUD_MODEL | gpt-oss:120b → deepseek-v3.2 | same Ollama key, faster |
| mcp-server/observer.ts overseer label fallback | gpt-oss:120b → claude-opus-4-7 | matches new overseer model |
| crates/gateway/src/execution_loop overseer escalation | ollama_cloud/gpt-oss:120b → opencode/claude-opus-4-7 | frontier reasoning matters here — fires only after local self-correct fails twice; Zen pay-per-token cost is bounded |
Verification:
- `cargo check -p gateway --tests` — clean
- Live probes through localhost:3100/v1/chat:
- `opencode/claude-opus-4-7` → "pong"
- `gemini-3-flash-preview` (ollama_cloud) → "pong"
- `kimi-k2.6` (ollama_cloud) → "pong"
- `deepseek-v3.2` (ollama_cloud) → "Pong! 🏓"
Notes:
- kimi-k2:1t still upstream-broken (HTTP 500 on Ollama Pro probe today,
matches yesterday's memory). Replacement table never picks it.
- The Rust changes need a `systemctl restart lakehouse.service` to
take effect on the running gateway. TS callers reload on next run.
- aibridge/src/context.rs still has gpt-oss:{20b,120b} in its window-
size lookup table; harmless and kept for callers that pass it
explicitly as an override.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

f6af0fd409
phase 44 (part 1): migrate TS callers to /v1/chat + add regression guard
Some checks failed
lakehouse/auditor 16 blocking issues: cloud: claim not backed — "Verified end-to-end:"
Migrates the four TypeScript /generate callers to the gateway's
/v1/chat surface so every LLM call lands on /v1/usage and Langfuse:
tests/multi-agent/agent.ts::generate() provider="ollama"
tests/agent_test/agent_harness.ts::callAgent provider="ollama"
bot/propose.ts::generateProposal provider="ollama_cloud"
mcp-server/observer.ts (error analysis) provider="ollama"
Each migration follows the same pattern as the prior generateCloud()
migration (already on /v1/chat from 2026-04-24): replace
`fetch(SIDECAR/generate)` with `fetch(GATEWAY/v1/chat)`, swap the
prompt-style body for OpenAI-compat messages array, extract
content from `choices[0].message.content` instead of `text`.
Same upstream models in every case — gateway is the new home for
the call, transport otherwise unchanged.
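
A minimal sketch of that call shape, assuming a hypothetical chatViaGateway helper and GATEWAY_URL env var; the real callers keep their own fetch sites and provider/model strings:

```ts
const GATEWAY = process.env.GATEWAY_URL ?? "http://localhost:3100"; // assumed default

async function chatViaGateway(prompt: string, provider: string, model: string): Promise<string> {
  // Before: fetch(`${SIDECAR}/generate`, ...) with a prompt-style body.
  // After: OpenAI-compatible messages array against the gateway's /v1/chat.
  const res = await fetch(`${GATEWAY}/v1/chat`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ provider, model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`gateway /v1/chat ${res.status}`);
  const data = await res.json();
  // Content now lives in choices[0].message.content instead of `text`.
  return data.choices[0].message.content;
}
```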
Adds scripts/check_phase44_callers.sh — fail-loud regression guard
that exits non-zero if any non-adapter file fetches /generate or
api/generate. Adapter files (crates/gateway, crates/aibridge,
sidecar/) are exempt. Pre-tightening regex flagged prose mentions
in comments; the shipped regex requires `fetch(...)` or
`client.post(...)` shape so comments don't trip it.
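
The shipped guard is a shell script; a TypeScript rendering of the same logic (paths and regex here are illustrative, not the shipped pattern) might look like:

```ts
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

// Adapter files are allowed to touch /generate directly.
const ADAPTER_PREFIXES = ["crates/gateway", "crates/aibridge", "sidecar/"];
// Require an actual call shape so prose mentions in comments don't trip the guard.
const CALL_RE = /(fetch|client\.post)\s*\([^)]*\/(api\/)?generate/;

const offenders = execSync("git ls-files '*.ts' '*.rs'", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean)
  .filter((f) => !ADAPTER_PREFIXES.some((p) => f.startsWith(p)))
  .filter((f) => CALL_RE.test(readFileSync(f, "utf8")));

if (offenders.length > 0) {
  console.error("non-adapter /generate callers found:", offenders);
  process.exit(1); // fail loud, same contract as the shell guard
}
```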
Verification:
bun build mcp-server/observer.ts compiles
bun build tests/multi-agent/agent.ts compiles
bun build tests/agent_test/agent_harness.ts compiles
bun build bot/propose.ts compiles
./scripts/check_phase44_callers.sh ✅ clean
systemctl restart lakehouse-observer active
Phase 44 part 2 (deferred):
- crates/aibridge/src/client.rs:118 still posts to sidecar /generate
directly. AiClient is the foundational Rust LLM caller used by
8+ vectord modules; migrating it is a workspace-wide refactor
that needs its own commit. Plan: keep AiClient as the local-
transport layer for the gateway's `provider=ollama` arm, but
introduce a thin `/v1/chat` wrapper for external callers (vectord
autotune, agent, rag, refresh, supervisor, playbook_memory).
- tests/real-world/hard_task_escalation.ts: comment mentions
/api/generate but doesn't actually call it. Comment is left
intentionally as historical context; regex no longer flags it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

0844206660
observer + scrum: gold-standard answer corpus for compounding context
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
The compose-don't-add discipline applied to the original ask: when big
models produce good results (scrum reviews + observer escalations),
save them into the matrix indexer so future small-model handlers can
retrieve them as scaffolding. The local model gets near-paid quality at
a fraction of the cost.
New: scripts/build_answers_corpus.ts indexes lakehouse_answers_v1
from data/_kb/scrum_reviews.jsonl + data/_kb/observer_escalations.jsonl.
doc_id prefixes ('review:' vs 'escalation:') let consumers same-file-
gate the prior-reviews case while keeping escalations broad.
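
A sketch of that doc_id prefixing; the JSONL paths come from the commit, while the per-row fields (`file`, `sig_hash`) and the index push are assumptions:

```ts
import { readFileSync } from "node:fs";

const readJsonl = (path: string): any[] =>
  readFileSync(path, "utf8").split("\n").filter(Boolean).map((l) => JSON.parse(l));

// 'review:' rows can be same-file-gated by consumers; 'escalation:' rows stay broad.
const rows = [
  ...readJsonl("data/_kb/scrum_reviews.jsonl").map((r, i) => ({
    doc_id: `review:${r.file ?? i}`,         // `file` field is an assumption
    text: JSON.stringify(r),
  })),
  ...readJsonl("data/_kb/observer_escalations.jsonl").map((r, i) => ({
    doc_id: `escalation:${r.sig_hash ?? i}`, // `sig_hash` field is an assumption
    text: JSON.stringify(r),
  })),
];
// `rows` would then be pushed into the lakehouse_answers_v1 index (call elided).
```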
observer.ts: buildKbPreamble adds lakehouse_answers_v1 as a third
retrieval source alongside pathway/bug_fingerprints + lakehouse_arch_v1.
qwen3.5:latest synthesis now compresses three lenses into a single
briefing for the cloud reviewer.
scrum_master_pipeline.ts: epilogue dispatches a fire-and-forget rebuild
of lakehouse_answers_v1 after each run so this run's accepted reviews
are retrievable within ~30s. LH_SCRUM_SKIP_ANSWERS_REBUILD=1 disables.
Verified live: kb_preamble grew 416 → 727 chars after wiring third
source; qwen3.5:latest synthesis (702 → 128 tokens) compresses
correctly; deepseek-v3.1-terminus diagnosis (301 → 148 tokens) is
sharper, citing architectural patterns (circuit breaker, adapter
files) instead of generic timeouts. Total cost per escalation
unchanged at ~$0.0002.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

340fca2427
observer: route escalation to paid OpenRouter (deepseek-v3.1-terminus)
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
ollama_cloud/qwen3-coder:480b was hitting weekly 429 quota; observer escalations were silently failing 502 with no audit row. Switched the escalation cloud-call to deepseek-v3.1-terminus on paid OpenRouter: 671B reasoning specialist, $0.21 in / $0.79 out per M tokens (under the $0.85/M ceiling J set), 164K ctx.

End-to-end verified: kb_preamble_chars=416, prompt 245 tokens, completion 155 tokens, ~$0.00018 per escalation. Diagnosis output is specific (cites adapter + route file), not generic.

Two-stage chain holds: qwen3.5:latest compresses raw KB hits into a tight briefing, deepseek-v3.1-terminus reasons over the briefing for diagnosis. Audit `mode` field updated to direct_chat_deepseek_v3_1_terminus so downstream consumers can attribute analyses to the correct rung.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

d9bd4c9bdf
observer: KB enrichment preamble before failure-cluster escalation
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
escalateFailureClusterToLLMTeam now calls a new buildKbPreamble() that mirrors what scrum_master_pipeline does on every per-file review: it queries /vectors/pathway/bug_fingerprints + /vectors/search against the lakehouse_arch_v1 corpus, then asks local qwen3.5:latest (provider=ollama) to synthesize a tight briefing. The synthesized preamble is prepended to the existing escalation prompt so the cloud reviewer sees historical context the same way scrum reviewers do.

Reuses existing KB primitives — no new corpora, no new endpoints, no new abstractions. Same code path scrum already exercises 3+ times per review; the observer joins the same compounding loop. The audit row gains kb_preamble_chars so we can later track enrichment yield per escalation. Empty preamble (both fingerprints + matrix return nothing) → empty string, prompt unchanged.

Verified: qwen3.5:latest synthesis fires for every escalation with non-empty matrix hits (gateway log: 445→72 tokens, 3.1s). Matrix retrieval correctly surfaces PRD Phase 40/44 chunks for chat_completion clusters. Pathway memory stays consistent with scrum (84→87 traces); the chat_completion task_class doesn't have fingerprints yet — graceful.

Local-model synthesis was J's explicit ask: compress the raw bundle before the cloud call so the briefing is actionable, not a dump.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
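
A sketch of that flow; the endpoint paths, provider, and model come from this commit, while the gateway address and the request/response field names are assumptions:

```ts
const GATEWAY = "http://localhost:3100"; // assumed gateway address

async function buildKbPreamble(clusterSummary: string): Promise<string> {
  const post = (url: string, body: unknown) =>
    fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(body),
    }).then((r) => (r.ok ? r.json() : null)).catch(() => null);

  const fingerprints = await post(`${GATEWAY}/vectors/pathway/bug_fingerprints`, { query: clusterSummary });
  const archHits = await post(`${GATEWAY}/vectors/search`, { corpus: "lakehouse_arch_v1", query: clusterSummary });

  const raw = [fingerprints, archHits].filter(Boolean).map((x) => JSON.stringify(x)).join("\n");
  if (!raw) return ""; // nothing retrieved: escalation prompt stays unchanged

  // Local synthesis: compress the raw bundle into a tight briefing before the cloud call.
  const synth = await post(`${GATEWAY}/v1/chat`, {
    provider: "ollama",
    model: "qwen3.5:latest",
    messages: [{ role: "user", content: `Compress this KB material into a short briefing:\n${raw}` }],
  });
  return synth?.choices?.[0]?.message?.content ?? "";
}
```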

0115a60072
observer: add /relevance heuristic filter for adjacency pollution
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Matrix retrieval often surfaces high-cosine chunks that are about
symbols the focus file IMPORTS but doesn't define. The reviewer LLM
then hallucinates those imported-crate internals as in-file content
("I see main.rs does X" when X lives in queryd::context).
mcp-server/relevance.ts — pure scorer with five signals:
path_match +1.0 chunk source/doc_id encodes focus path
defined_match +0.6 chunk text mentions focus.defined_symbols
token_overlap +0.4 jaccard of non-stopword tokens
prefix_match +0.3 shared first-2-segment prefix
import_only -0.5 mentions only imported symbols (pollution)
Default threshold 0.3 — tuned empirically on the gateway/main.rs case.
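
A sketch of a scorer with those weights; the token, jaccard, and prefix helpers are simplified assumptions and the real relevance.ts may differ:

```ts
interface Chunk { doc_id: string; text: string }
interface Focus { path: string; content: string; defined_symbols: string[]; imported_symbols: string[] }

const STOPWORDS = new Set(["the", "a", "of", "and", "to", "in", "for", "is"]);
const tokenize = (s: string) =>
  new Set(s.toLowerCase().split(/[^a-z0-9_]+/).filter((t) => t.length > 1 && !STOPWORDS.has(t)));
const prefix2 = (p: string) => p.split("/").slice(0, 2).join("/");

export function scoreChunk(chunk: Chunk, focus: Focus): number {
  let score = 0;
  if (chunk.doc_id.includes(focus.path)) score += 1.0;                         // path_match
  if (focus.defined_symbols.some((s) => chunk.text.includes(s))) score += 0.6; // defined_match
  const a = tokenize(chunk.text);
  const b = tokenize(focus.content);
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size || 1;
  score += 0.4 * (inter / union);                                              // token_overlap (jaccard)
  if (prefix2(chunk.doc_id) === prefix2(focus.path)) score += 0.3;             // prefix_match
  const importedOnly =
    focus.imported_symbols.some((s) => chunk.text.includes(s)) &&
    !focus.defined_symbols.some((s) => chunk.text.includes(s));
  if (importedOnly) score -= 0.5;                                              // import_only pollution
  return score;
}

export const keepChunk = (chunk: Chunk, focus: Focus, threshold = 0.3): boolean =>
  scoreChunk(chunk, focus) >= threshold;
```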
Also fixes a regex bug in the import extractor: the character class
was lowercase-only, so `use catalogd::Registry;` silently never
matched (regex backed off when it hit the uppercase R). Caught by
the test suite.
observer.ts — POST /relevance endpoint wraps filterChunks().
scrum_master_pipeline.ts — fetchMatrixContext gains optional
focusContent param; calls /relevance after collecting allHits and
before sort+top. Opt-out via LH_RELEVANCE_FILTER=0; threshold via
LH_RELEVANCE_THRESHOLD. Fall-open on observer failure.
9 unit tests, all green. Live probe on real shape correctly drops
a 0.7-cosine adjacency-pollution chunk while keeping in-focus hits.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

54689d523c
observer: fix gateway health probe — text/plain not JSON
Some checks failed
lakehouse/auditor 1 blocking issue: todo!() macro call in tests/real-world/scrum_master_pipeline.ts
Same bug as matrix-agent-validated 5db0c58. observer.ts:645 did fetch().then(r => r.json()) against /health, which returns text/plain "lakehouse ok". r.json() throws on non-JSON, .catch swallows the error to null, and the observer exits assuming the gateway is down. With systemd Restart=on-failure this crash-loops every 5s — confirmed live on the matrix-test box today.

Fix: r.ok ? r.text() : null. Same shape, accepts the actual content type.

Sealed in pathway_memory as TypeConfusion:fetch-health-json.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
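
The fix in isolation, as a sketch (observer.ts keeps its own surrounding logic, and the gateway URL here is assumed):

```ts
// /health returns text/plain "lakehouse ok", so read it as text rather than JSON.
const gatewayHealth: string | null = await fetch("http://localhost:3100/health")
  .then((r) => (r.ok ? r.text() : null)) // was r.json(), which throws on text/plain
  .catch(() => null);
```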

4ac56564c0
scrum + applier + observer: switch to paid OpenRouter ladder, add Kimi K2.6 + Gemini 2.5
Ollama Cloud was throttled across all 6 cloud rungs in iters 1-9, which forced the loop into 0-review iterations even though the architecture was sound. Swapping to paid OpenRouter unblocks the test path.

Ladder changes (top-of-ladder paid models, all under $0.85/M either side):
- moonshotai/kimi-k2.6 ($0.74/$4.66, 256K) — capped at 25/hr
- x-ai/grok-4.1-fast ($0.20/$0.50, 2M) — primary general
- google/gemini-2.5-flash ($0.30/$2.50, 1M) — Google reasoning
- deepseek/deepseek-v4-flash ($0.14/$0.28, 1M) — cheap workhorse
- qwen/qwen3-235b-a22b-2507 ($0.07/$0.10, 262K) — cheapest big
Existing rungs (Ollama Cloud + free OR + local qwen3.5) kept as fallback.

Per-model rate limiter (MODEL_RATE_LIMITS in scrum_master_pipeline.ts, sketched below):
- Persists call timestamps to data/_kb/rate_limit_calls.jsonl so caps survive process restarts (the autonomous loop spawns a fresh subprocess per iteration; without persistence each iter would reset)
- O(1) writes, prune-on-read for the rolling 1h window
- Capped models log "SKIP (rate-limited: cap N/hr reached)" and the ladder cycles to the next rung
- J directive 2026-04-25: 25/hr on Kimi K2.6 to bound output cost

Observer hand-review cloud tier swapped from ollama_cloud/qwen3-coder:480b to openrouter/x-ai/grok-4.1-fast — proven to emit precise semantic verdicts (named "AccessControl::can_access() doesn't exist" specifically in 2026-04-25 tests instead of the heuristic fallback).

Applier patch emitter swapped from ollama_cloud/qwen3-coder:480b to openrouter/x-ai/grok-4.1-fast (default; LH_APPLIER_MODEL + LH_APPLIER_PROVIDER override). This was the third LLM call we missed — without it, the observer accepts a review but the applier never produces patches because its emitter was still hitting the throttled account.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
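
A sketch of the limiter under those constraints; the JSONL path and rolling-window semantics come from the commit, the row field names are assumptions, and filtering on read stands in for the prune step:

```ts
import { appendFileSync, existsSync, readFileSync } from "node:fs";

const CALLS_PATH = "data/_kb/rate_limit_calls.jsonl";
const WINDOW_MS = 60 * 60 * 1000; // rolling 1h window

// O(1) write: append one timestamp row per call.
export function recordCall(model: string): void {
  appendFileSync(CALLS_PATH, JSON.stringify({ model, ts: Date.now() }) + "\n");
}

// Count only calls inside the rolling window, so caps survive process restarts
// (each loop iteration is a fresh subprocess).
export function isRateLimited(model: string, capPerHour: number): boolean {
  if (!existsSync(CALLS_PATH)) return false;
  const cutoff = Date.now() - WINDOW_MS;
  const recent = readFileSync(CALLS_PATH, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((l) => JSON.parse(l) as { model: string; ts: number })
    .filter((c) => c.model === model && c.ts >= cutoff);
  return recent.length >= capPerHour;
}
```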

3f166a5558
scrum + observer: hand-review wire — judgment moved out of the inner loop
Pre-2026-04-25 the scrum_master applied a hardcoded grounding-rate gate
inline. That baked policy into the wrong layer — semantic judgment about
whether a review is grounded belongs in the observer (which has Langfuse
traces, sees every response across the system, and can call cloud LLMs
for real evaluation). Scrum should report DATA, observer DECIDES.
What landed:
- scrum_master_pipeline.ts: removed the inline grounding-pct threshold;
every accepted candidate now POSTs to observer's /review endpoint with
{response, source_content, grounding_stats, model, attempt}. Observer
returns {verdict: accept|reject|cycle, confidence, notes}. On observer
failure, scrum falls open to accept (observer is policy, not blocker); the call shape is sketched after this list.
- mcp-server/observer.ts: new POST /review endpoint with two-tier
evaluator. Tier 1: cloud LLM (qwen3-coder:480b at temp=0) hand-reviews
with full context — response + source excerpt + grounding stats — and
emits structured verdict JSON. Tier 2: deterministic heuristic over
grounding pct + total quotes when cloud throttles, marked source:
"heuristic" so consumers can tune it later by comparing against cloud.
- Every verdict persists to data/_kb/observer_reviews.jsonl with full
input snapshot so cloud vs heuristic can be A/B compared once cloud
quota refreshes.
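
A sketch of the scrum-side /review call and its fall-open behaviour from the first item above; the payload and verdict fields come from the commit, while the observer URL/port is an assumption:

```ts
type Verdict = { verdict: "accept" | "reject" | "cycle"; confidence: number; notes: string };

const OBSERVER_URL = process.env.OBSERVER_URL ?? "http://localhost:3800"; // assumed port

async function handReview(payload: {
  response: string;
  source_content: string;
  grounding_stats: Record<string, number>;
  model: string;
  attempt: number;
}): Promise<Verdict> {
  try {
    const r = await fetch(`${OBSERVER_URL}/review`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (!r.ok) throw new Error(`observer /review ${r.status}`);
    return (await r.json()) as Verdict;
  } catch {
    // Observer is policy, not a blocker: fall open to accept when it is unreachable.
    return { verdict: "accept", confidence: 0, notes: "observer unreachable; fall-open" };
  }
}
```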
Verified end-to-end: smoke loop iter 1 — observer returned `cycle` on
21% grounding (cycled to next rung), `reject` on 17% (gave up). Iter 2
— `reject` on 12% and 14%. Both UNRESOLVED with honest signal instead
of polluting pathway memory with hallucinated patterns.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

25ea3de836
observer: fix LLM Team escalation — route to /v1/chat qwen3-coder:480b instead of dead mode
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "journal event verified live (total_events_created 0→1 after probe)."
Discovery 2026-04-24: /api/run?mode=code_review returns "Unknown mode"
(error response from llm_team_ui.py). The 2026-04-24 observer escalation
wiring pointed at a dead endpoint and was failing silently. My earlier
claim of "9 registered LLM Team modes" came from GET probes that all
returned 405 — I interpreted that as "POST-only endpoints exist" when
it just means "GET is not allowed for anything, and on POST only `extract`
is registered."
Rewire: observer's escalateFailureClusterToLLMTeam now hits
POST /v1/chat { provider: "ollama_cloud", model: "qwen3-coder:480b", ... }
which is the same coding-specialist rung 2 of the scrum ladder that
reliably produces substantive reviews. Probe shows 1240 chars of
substantive analysis in ~8.7s.
Also tightens scrum_applier:
* MODEL default: kimi-k2:1t → qwen3-coder:480b (coding specialist)
* Size gate: 20 lines → 6 lines (surgical patches only)
* Max patches per file: 3 → 2
* Prompt: explicit forbidden-actions list (no struct renames, no
function-signature changes, no new modules) and mechanical-only
whitelist
These changes produced the first auto-applied commit (96b46cd), which
landed a 2-line import addition that passed cargo check. Zero-to-one.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

8b77d67c9c
OpenRouter rescue ladder + tree-split reduce fix + observer→LLM Team + scrum_applier + first auto-applied patch
Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "journal event verified live (total_events_created 0→1 after probe)."
## Infrastructure (scrum loop hardening)
crates/gateway/src/v1/openrouter.rs — new OpenRouter provider
Direct HTTPS to openrouter.ai/api/v1/chat/completions with OpenAI-compatible shape.
Key resolution: OPENROUTER_API_KEY env → /home/profit/.env → /root/llm_team_config.json
(shares LLM Team UI's quota). Added after iter 5 hit repeated Ollama Cloud 502s on
kimi-k2:1t — different provider backbone as rescue rung. Unit tests pin the URL
stripping and OpenAI wire shape.
crates/gateway/src/v1/mod.rs + main.rs
Added `"openrouter" | "openrouter_free"` arm to /v1/chat dispatch.
V1State.openrouter_key loaded at startup via openrouter::resolve_openrouter_key()
mirroring the Ollama Cloud pattern. Startup log:
"v1: OpenRouter key loaded — /v1/chat provider=openrouter enabled"
tests/real-world/scrum_master_pipeline.ts
* 9-rung ladder — kimi-k2:1t → qwen3-coder:480b → deepseek-v3.1:671b →
mistral-large-3:675b → gpt-oss:120b → qwen3.5:397b → openrouter/gpt-oss-120b:free
→ openrouter/gemma-3-27b-it:free → local qwen3.5:latest.
Added qwen3-coder:480b as rung 2 after live probes confirmed it rescues
kimi-k2:1t 502s cleanly (0.9s latency, substantive reviews).
Dropped devstral-2 (displaced by qwen3-coder); dropped kimi-k2.6 (not available);
dropped minimax-m2.7 (returned 0 chars / 400 thinking tokens).
Local fallback promoted qwen3.5:latest per J's direction 2026-04-24.
* MAX_ATTEMPTS bumped 6 → 9 to accommodate the rescue tier.
* Tree-split scratchpad fixed — was concatenating shard markers directly
into the reviewer input, causing kimi-k2:1t to write titles like
"Forensic Audit Report – file.rs (shard 3)". Now uses internal §N§
markers during accumulation and runs a proper reduce step that
collapses per-shard digests into ONE coherent file-level synthesis
with markers stripped. Matches the Phase 21 aibridge::tree_split
map→reduce design. Fallback to stripped scratchpad if reducer returns thin.
tests/real-world/scrum_applier.ts — NEW (737 lines)
The auto-apply pipeline. Reads scrum_reviews.jsonl, filters rows where
gradient_tier ∈ {auto, dry_run} AND confidence_avg ≥ MIN_CONF (default 90),
asks the reviewer model for concrete old_string/new_string patch JSON,
applies via text replacement, runs cargo check after each file, commits
if green and reverts if red. Deny-list: /etc/, config/, ops/, auditor/,
docs/, data/, mcp-server/, ui/, sidecar/, scripts/. Hard caps: per-patch
confidence ≥ MIN_CONF, old_string must be exactly unique, max 20 lines per
patch. Never runs on main without explicit LH_APPLIER_BRANCH override.
Audit trail in data/_kb/auto_apply.jsonl.
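
A sketch of the eligibility filter and the exact-uniqueness gate described above; the row fields mirror the commit, while the Patch shape and the simplified MIN_CONF handling are assumptions:

```ts
import { readFileSync } from "node:fs";

const MIN_CONF = 90; // default confidence floor from the commit

interface ReviewRow { file: string; gradient_tier: string; confidence_avg: number }
interface Patch { old_string: string; new_string: string; confidence: number }

// Rows eligible for auto-apply: permission tier auto/dry_run and high average confidence.
export function eligibleRows(path = "data/_kb/scrum_reviews.jsonl"): ReviewRow[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((l) => JSON.parse(l) as ReviewRow)
    .filter((r) => ["auto", "dry_run"].includes(r.gradient_tier) && r.confidence_avg >= MIN_CONF);
}

// A patch only applies when old_string is exactly unique in the file and the
// per-patch confidence clears the floor; cargo check then gates the commit/revert.
export function applyPatch(source: string, p: Patch): string | null {
  if (p.confidence < MIN_CONF) return null;
  const occurrences = source.split(p.old_string).length - 1;
  if (occurrences !== 1) return null;
  return source.replace(p.old_string, p.new_string);
}
```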
Empirical behavior (dry-run over iter 4 reviews):
5 eligible files → 1 green commit-ready, 2 build-red reverts, 2 all-rejected
The build-green gate caught 2 bad patches before they'd have merged.
mcp-server/observer.ts — LLM Team code_review escalation
When a sig_hash accumulates ≥3 failures (ESCALATION_THRESHOLD), fire-and-forget
POST /api/run?mode=code_review at localhost:5000 with the failure cluster context.
Parses facts/entities/relationships/file_hints from the response. Writes to a
new data/_kb/observer_escalations.jsonl surface. Answers J's vision of the
observer triggering richer LLM Team calls when failures pile up.
Non-blocking: runs parallel to existing qwen2.5 analyzer, never replaces it.
Tracks escalated sig_hashes in a session-local Set to avoid re-hammering
LLM Team when a cluster persists across observer cycles.
crates/aibridge/src/context.rs
First auto-applied patch produced by scrum_applier.ts (dry-run path —
applier writes files in dry-run mode but doesn't commit; bug noted for
iter 6 fix). Adds #[deprecated] annotation to the inline estimate_tokens
helper pointing callers to the centralized shared::model_matrix::ModelMatrix
entry point (P21-002 — duplicate token-estimator surfaces). Cargo check
passes with the annotation (verified by applier's own build gate).
## Visual Control Plane (UI)
ui/server.ts — Bun.serve on :3950 with /data/* fan-out:
/data/services, /data/reviews, /data/metrics, /data/trust, /data/overrides,
/data/findings, /data/outcomes, /data/audit_facts, /data/file/:path,
/data/refactor_signals, /data/search?q=, /data/signal_classes,
/data/logs/:svc (journalctl tail per systemd unit), /data/scrum_log.
Bug fix: tryFetch always attempts JSON.parse before falling back to text
— observer's Bun.serve returns JSON without application/json content-type,
which was displaying stats as a raw string ("0 ops" on map) before.
ui/index.html + ui.css — dark neo-brutalist shell. 6 views:
MAP (D3 force-graph + overlays) / TRACE (per-file iter history) /
TRAJECTORY (signal-class cards + refactor-signals table + reverse-index
search box) / METRICS (every card has SOURCE + GOOD lines explaining
where the number comes from and what target trajectory means) /
KB (card grid with tooltips on every field) / CONSOLE (per-service
journalctl tabs).
ui/ui.js — polling client, D3 wiring, signal-class panel, refactor-signals
table, reverse-index search, per-service console tabs. Bug fix:
renderNodeContext had Object.entries() iterating string characters when
/health returned a plain string — now guards with typeof check so
"lakehouse ok" renders as one row instead of "0 l / 1 a / 2 k / ...".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

21fd3b9c61
Scrum-driven fixes: P5-001 auth wired, P42-001 truth evaluator, P9-001 journal on ingest
Some checks failed
lakehouse/auditor 2 blocking issues: cloud: claim not backed — "| **P9-001** (partial) | `crates/ingestd/src/service.rs` | **3 → 6** ↑↑↑ | `journal.record_ing
Apply the highest-confidence findings from the Phase 0→42 forensic sweep
after four scrum-master iterations under the adversarial prompt. Each fix
is independently validated by a later scrum iteration scoring the same
file higher under the same bar.
Code changes
────────────
P5-001 — crates/gateway/src/auth.rs + main.rs
api_key_auth was marked #[allow(dead_code)] and never wrapped around
the router, so `[auth] enabled=true` logged a green message and
enforced nothing. Now wired via from_fn_with_state, with constant-time
header compare and /health exempted for LB probes.
P42-001 — crates/truth/src/lib.rs
TruthStore::check() ignored RuleCondition entirely — signature looked
like enforcement, body returned every action unconditionally. Added
evaluate(task_class, ctx) that actually walks FieldEquals / FieldEmpty /
FieldGreater / Always against a serde_json::Value via dot-path lookup.
check() kept for back-compat. Tests 14 → 24 (10 new exercising real
pass/fail semantics). serde_json moved to [dependencies].
P9-001 (partial) — crates/ingestd/src/service.rs
Added Option<Journal> to IngestState + a journal.record_ingest() call
on /ingest/file success. Gateway wires it with `journal.clone()` before
the /journal nest consumes the original. First-ever internal mutation
journal event verified live (total_events_created 0→1 after probe).
Iter-4 scrum scored these files higher under same prompt:
ingestd/src/service.rs 3 → 6 (P9-001 visible)
truth/src/lib.rs 3 → 4 (P42-001 visible)
gateway/src/auth.rs 3 → 4 (P5-001 visible)
gateway/src/execution_loop 4 → 6 (indirect)
storaged/src/federation 3 → 4 (indirect)
Infrastructure additions
────────────────────────
* tests/real-world/scrum_master_pipeline.ts
- cloud-first ladder: kimi-k2:1t → deepseek-v3.1:671b → mistral-large-3:675b
→ gpt-oss:120b → devstral-2:123b → qwen3.5:397b (deep final thinker)
- LH_SCRUM_FORENSIC env: injects SCRUM_FORENSIC_PROMPT.md as adversarial preamble
- LH_SCRUM_PROPOSAL env: per-iter fix-wave doc override
- Confidence extraction (markdown + JSON), schema v4 KB rows with:
verdict, critical_failures_count, verified_components_count,
missing_components_count, output_format, gradient_tier
- Model trust profile written per file-accept to data/_kb/model_trust.jsonl
- Fire-and-forget POST to observer /event so by_source.scrum appears in /stats
* mcp-server/observer.ts — unchanged in shape, confirmed receiving scrum events
* ui/ — new Visual Control Plane on :3950
- Bun.serve with /data/{services,reviews,metrics,trust,overrides,findings,file,refactor_signals,search,logs/:svc,scrum_log}
- Views: MAP (D3 graph, 5 overlays) / TRACE (per-file iter timeline) /
TRAJECTORY (refactor signals + reverse index search) / METRICS (explainers
with SOURCE + GOOD lines) / KB (card grid with tooltips) / CONSOLE (per-service
journalctl tail, tabs for gateway/sidecar/observer/mcp/ctx7/auditor/langfuse)
- tryFetch always attempts JSON.parse (fix for observer returning JSON without content-type)
- renderNodeContext primitive-vs-object guard (fix for gateway /health string)
* docs/SCRUM_FIX_WAVE.md — iter-specific scope directing the scrum
* docs/SCRUM_FORENSIC_PROMPT.md — adversarial audit prompt (verdict/critical/verified schema)
* docs/SCRUM_LOOP_NOTES.md — iteration observations + fix-next-loop queue
* docs/SYSTEM_EVOLUTION_LAYERS.md — Layers 1-10 roadmap (trust profiling, execution DNA, drift sentinel, etc)
Measurements across iterations
──────────────────────────────
iter 1 (soft prompt, gpt-oss:120b): mean score 5.00/10
iter 3 (forensic, kimi-k2:1t): mean score 3.56/10 (−1.44 — bar raised)
iter 4 (same bar, post fixes): mean score 4.00/10 (+0.44 — fixes landed)
Score movement iter3→iter4: ↑5 ↓1 =12
21/21 first-attempt accept by kimi-k2:1t in iter 4
20/21 emitted forensic JSON (richer signal than markdown)
16 verified_components captured (proof-of-life, new metric)
Permission Gradient distribution: 0 auto · 16 dry_run · 4 sim · 1 block
Observer loop: by_source {scrum: 21, langfuse: 1985, phase24_audit: 1}
v1/usage: 224 requests, 477K tokens, all tracked
Signal classes per file (iter 3 → iter 4):
CONVERGING: 1 (ingestd/service.rs — fix clearly landed)
LOOPING: 4 (catalogd/registry, main, queryd/service, vectord/index_registry)
ORBITING: 1 (truth — novel findings surfacing as surface ones fix)
PLATEAU: 9 (scores flat with high confidence — diminishing returns)
MIXED: 6
Loop thesis status
──────────────────
A file's score rises only when the scrum confirms a real fix landed.
No false positives yet across 3 iterations. Fixes applied to 3 files all
raised their independent scores under the same adversarial prompt. Loop
is measurable, not hand-wavy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

b95dd86556
Phase 24 — observer HTTP ingest + scenario outcome streaming
Closes the gap J flagged: observer wraps MCP:3700, scenarios hit
gateway:3100 directly, observer idle at 0 ops across 3600+ cycles.
Now scenarios POST per-event outcomes to observer's new HTTP ingest
on :3800, observer consumes them alongside MCP-wrapped ops, and the
ERROR_ANALYZER and PLAYBOOK_BUILDER loops see the full picture.
observer.ts:
- Bun.serve() HTTP listener on OBSERVER_PORT (default 3800):
GET /health — basic + ring depth
GET /stats — total / success / failure / by_source / recent
scenario ops digest
POST /event — accept scenario outcome, shape it into ObservedOp
with source="scenario" + staffer_id + sig_hash +
event_kind + role/city/state + rescue flags
- recordExternalOp() — shared ring-buffer insert so the main analyzer
+ playbook builder don't care where the op came from
- ObservedOp extended with provenance fields
persistOp() FIX — old path POSTed to /ingest/file?name=observed_operations
which REPLACES the dataset (flagged in feedback_ingest_replace_semantics.md).
Every op was silently wiping all prior ops. Replaced with append to
data/_observer/ops.jsonl so the historical trace is durable across
analyzer cycles and process restarts.
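
A sketch of that append-only shape; the path comes from the commit and the ObservedOp fields are trimmed down:

```ts
import { appendFileSync } from "node:fs";

interface ObservedOp { source: string; ok: boolean; ts: number; sig_hash?: string }

// Old path POSTed to /ingest/file?name=observed_operations, which replaces the
// dataset and silently wiped prior ops. One appended JSONL row per op keeps the
// historical trace durable across analyzer cycles and restarts.
function persistOp(op: ObservedOp): void {
  appendFileSync("data/_observer/ops.jsonl", JSON.stringify(op) + "\n");
}
```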
scenario.ts:
- OBSERVER_URL env (default http://localhost:3800)
- postObserverEvent() helper with 2s AbortSignal.timeout so observer
being down doesn't block scenario flow
- Per-event POST after ctx.results.push(result), carrying staffer_id,
sig_hash (via imported computeSignature), event_kind + role + city
+ state + count + rescue_attempted / rescue_succeeded + truncated
output_summary
VERIFIED:
curl POST /event → {"accepted":true,"ring_size":1}
curl GET /stats → {"total":1,"successes":1,"by_source":{"scenario":1},
"recent_scenario_ops":[{...staffer_id,kind,role}]}
Final v3 demo leaderboard (9 runs per staffer, cumulative 3 batches):
James (local): 92.9% fill, 36.8 cites, score 0.775 — RANK 1
Maria (full): 81.0% fill, 26.2 cites, score 0.727
Sam (basic): 61.9% fill, 28.2 cites, score 0.640
Alex (minimal): 59.5% fill, 32.2 cites, score 0.631
Honest finding: Alex has MORE citations than Sam despite NO T3 and NO
rescue. Playbook inheritance alone is firing hardest when overseer is
absent. The 59.5% fill rate (up from 0% when qwen2.5 was executor)
proves cloud-exec + playbook inheritance is the floor the architecture
delivers.
Local gpt-oss:20b T3 outperforms cloud gpt-oss:120b T3 by 12pt fill
rate on this workload — cloud overseer paying latency+variance for
no measurable gain, worth flagging in next models.json tune.

b532ae61f1
Agent gateway + observer — autonomous internal operation
Three new systemd services:
- lakehouse-agent (:3700) — REST gateway wrapping all lakehouse tools. Clean JSON in/out, no protocol complexity. 9 endpoints: /search, /sql, /match, /worker/:id, /ask, /log, /playbooks, /profile/:id, /vram
- lakehouse-observer — watches operations, logs to lakehouse, asks the local model to diagnose failure patterns, consolidates successful patterns into playbooks every 5 cycles
- Stdio MCP transport preserved for Claude Code integration

AGENT_INSTRUCTIONS.md: complete operating manual for sub-agents. Rules: never hallucinate, SQL first for structured questions, hybrid for matching, log every success, check playbooks before complex tasks.

Observer loop (sketched below): observed() wrapper timestamps + persists every gateway call → error analyzer reads failures + asks the LLM for a diagnosis → playbook consolidator groups successes by endpoint pattern.

All three designed for zero human intervention — agents operate, the observer watches, playbooks accumulate, iteration happens internally.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
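
A sketch of an observed()-style wrapper as described above; the names and the in-memory ring standing in for persistence are assumptions beyond what the commit states:

```ts
interface OpRecord { endpoint: string; ok: boolean; ms: number; ts: number }
const ring: OpRecord[] = []; // stand-in for the observer's persisted op log

// Wrap any gateway call so every operation is timestamped and recorded for the
// error analyzer and playbook consolidator.
function observed<A extends unknown[], R>(endpoint: string, fn: (...args: A) => Promise<R>) {
  return async (...args: A): Promise<R> => {
    const started = Date.now();
    try {
      const result = await fn(...args);
      ring.push({ endpoint, ok: true, ms: Date.now() - started, ts: started });
      return result;
    } catch (err) {
      ring.push({ endpoint, ok: false, ms: Date.now() - started, ts: started });
      throw err;
    }
  };
}
```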