qwen3.5 executor + continuation primitive + think:false
Three coupled fixes that together turned the Riverfront Steel scenario
from 0/5 (mistral) to 4/5 (qwen3.5) with T3 flagging real staffing
concerns rather than linter advice.
MODEL SWAP
- Executor: mistral → qwen3.5:latest (9.7B, 262K ctx, thinking).
mistral's decoder emitted malformed JSON on complex SQL filters
regardless of prompt; J called it — stop using mistral.
- Reviewer: qwen2.5 → qwen3:latest (40K ctx).
- Applied to scenario.ts, orchestrator.ts, network_proving.ts,
run_e2e_rated.ts
CONTINUATION PRIMITIVE (agent.ts)
- generateContinuable(): empty-response → geometric backoff retry;
truncated-JSON → continue from partial as scratchpad; bounded by
budget cap + max_continuations. No more "bump max_tokens until it
stops truncating" tourniquet.
- generateTreeSplit(): map-reduce for oversized input corpora with
running scratchpad digest, reduce pass for final synthesis.
- Empty text no longer throws — it's a signal to continuable that
thinking ate the budget.
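The truncation-repair loop can be sketched end to end. This is a toy, self-contained version: isComplete and fakeGenerate are stand-ins for the real structural check and sidecar call (not the agent.ts API), and the 16-character window plays the role of a max_tokens cutoff.

```typescript
// Minimal sketch of the continuation loop, assuming a model call that
// truncates at a fixed window. Not the real generateContinuable().
function isComplete(text: string): boolean {
  try { JSON.parse(text); return true; } catch { return false; }
}

// Hypothetical model: emits the payload in fixed-size windows, like a
// max_tokens cutoff would, resuming where it left off.
const full = JSON.stringify({ action: "sql", query: "SELECT 1" });
let cursor = 0;
function fakeGenerate(windowSize: number): string {
  const out = full.slice(cursor, cursor + windowSize);
  cursor += windowSize;
  return out;
}

let combined = fakeGenerate(16); // first call truncates mid-value
const MAX_CONTINUATIONS = 3;     // bounded, like max_continuations
for (let i = 0; i < MAX_CONTINUATIONS && !isComplete(combined); i++) {
  combined += fakeGenerate(16);  // continue from the partial, never restart
}
console.log(isComplete(combined)); // true once the braces close
```

The real primitive differs in one important way: an empty first response is repaired by retrying with a doubled budget, not by continuing from "".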
think:false FOR HOT PATH
- qwen3.5 burned ~650 tokens of hidden thinking for trivial JSON
emission. For executor/reviewer/draft: think:false. For T3/T4/T5
overseers: thinking stays on (that's the point).
- Sidecar generate endpoint accepts `think` bool, passes through to
Ollama's /api/generate.
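How the flag travels is simple to sketch. buildBody below is a hypothetical helper, not code from this commit; the field names mirror the sidecar's GenerateRequest, and omitting think leaves the model's default behavior untouched.

```typescript
// Sketch of the request body with the `think` flag. No network call here;
// buildBody is illustrative only.
type GenerateBody = {
  model: string;
  prompt: string;
  max_tokens: number;
  think?: boolean; // omitted → model default; false → no hidden reasoning
};

function buildBody(model: string, prompt: string, think?: boolean): GenerateBody {
  const body: GenerateBody = { model, prompt, max_tokens: 1000 };
  if (think !== undefined) body.think = think; // only pass when explicit
  return body;
}

const hotPath = buildBody("qwen3.5:latest", "emit JSON", false); // executor/reviewer
const overseer = buildBody("gpt-oss:120b", "audit the day");     // T3/T4: default (thinking on)
console.log("think" in hotPath, "think" in overseer); // true false
```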
VERIFIED OUTCOMES
Riverfront Steel 2026-04-21, qwen3.5+continuable+think:false:
08:00 baseline_fill 3/3 4 turns
10:30 recurring 2/2 3 turns (1 playbook citation)
12:15 expansion 0/5 drift-aborted (5-fill orchestration
problem, separate work)
14:00 emergency 4/4 3 turns (1 citation)
15:45 misplacement 1/1 3 turns
→ T3 caught Patrick Ross double-booking across events
→ T3 flagged forklift cert drift on the event that failed
→ Cross-day lesson proposed "maintain buffer of ≥3 emergency
candidates, pre-fetch certs for expansion, booking system
cross-check" — real staffing advice, not generic linter output
PRD PHASE 21 rewritten to reflect the actual primitive shape (two-
call map-reduce with scratchpad glue) instead of the tourniquet
approach originally documented. Rust port queued for next sprint.
scripts/ab_t3_test.sh: A/B harness that chains B→C→D runs and emits
tests/multi-agent/playbooks/ab_scorecard.json.
commit 0c4868c191 (parent 6e7ca1830e)
@@ -22,9 +22,9 @@
   "t1_hot": {
     "purpose": "Per tool call — SQL generation, hybrid_search, sql(). Runs 50-200 times per scenario. Latency-sensitive.",
     "kind": "local_fast",
-    "primary": { "model": "mistral:latest", "provider": "ollama_local", "context_window": 32768 },
+    "primary": { "model": "qwen3.5:latest", "provider": "ollama_local", "context_window": 262144 },
     "fallback": { "model": "qwen2.5:latest", "provider": "ollama_local", "context_window": 32768 },
-    "max_tokens": 800,
+    "max_tokens": 1000,
     "temperature": 0.3,
     "never_route_cloud": true,
     "context_budget": {
@@ -34,14 +34,14 @@
       "safety_margin": 2000,
       "overflow_policy": "summarize_oldest_tool_results_via_t3"
     },
-    "rationale": "Mistral produces valid JSON reliably. Qwen2.5 is the consensus reviewer. Known flakiness on 5-fill + misplacement events — do NOT mask by upgrading; route to T3 for post-hoc review instead."
+    "rationale": "qwen3.5:latest is a 9.7B thinking model with 262K context that emits clean JSON (verified 2026-04-21 after mistral A/B wipeout). Thinking budget requires max_tokens ≥ 400; we hold 1000 to keep propose_done payloads intact. qwen2.5 stays as fallback."
   },
   "t2_review": {
     "purpose": "Per step consensus — executor ↔ reviewer loop critique. 5-14 calls per event.",
     "kind": "local_balanced",
-    "primary": { "model": "qwen2.5:latest", "provider": "ollama_local", "context_window": 32768 },
-    "fallback": { "model": "qwen3:latest", "provider": "ollama_local", "context_window": 40960 },
-    "max_tokens": 600,
+    "primary": { "model": "qwen3:latest", "provider": "ollama_local", "context_window": 40960 },
+    "fallback": { "model": "qwen2.5:latest", "provider": "ollama_local", "context_window": 32768 },
+    "max_tokens": 800,
     "temperature": 0.3,
     "never_route_cloud": true,
     "context_budget": {
@@ -72,7 +72,8 @@
   "t4_strategic": {
     "purpose": "Daily playbook board re-ranking, weekly gap audit, pattern discovery across accumulated playbooks. 1-10 calls per day.",
     "kind": "thinking_cloud_large",
-    "primary": { "model": "qwen3.5:397b", "provider": "ollama_cloud", "context_window": 131072 },
+    "primary": { "model": "kimi-k2.6", "provider": "ollama_cloud", "context_window": 200000 },
+    "secondary": { "model": "qwen3.5:397b", "provider": "ollama_cloud", "context_window": 131072 },
     "fallback": { "model": "glm-4.7", "provider": "ollama_cloud", "context_window": 131072 },
     "local_fallback": { "model": "gpt-oss:20b", "provider": "ollama_local", "context_window": 131072 },
     "max_tokens": 2000,
@@ -108,6 +109,14 @@
     }
   },

+  "embeddings": {
+    "_description": "Vector embedding models for hybrid search + playbook_memory similarity. Always local — embedding calls are too high-volume for cloud. qwen3-embedding is the new primary (2026-04-21); nomic-embed-text-v2-moe is the MoE alternative for corpus-scale embedding; original nomic-embed-text stays as legacy fallback while vectord indexes are being re-embedded.",
+    "primary": { "model": "qwen3-embedding", "provider": "ollama_local", "dim": 1024, "context_window": 32768 },
+    "alternate": { "model": "nomic-embed-text-v2-moe", "provider": "ollama_local", "dim": 768, "context_window": 2048, "notes": "MoE variant — better for long documents, slower per-call, roughly matches nomic-embed-text on short text." },
+    "legacy": { "model": "nomic-embed-text:latest", "provider": "ollama_local", "dim": 768, "context_window": 8192, "notes": "Used by existing vectord indexes. Do NOT switch live until indexes are rebuilt, or hybrid search recall will crater." },
+    "migration_plan": "Re-embed workers_500k (and all production indexes) with qwen3-embedding, keep the old index under a _v1 suffix for rollback, flip the profile pointer only after recall benchmarks match or exceed legacy. Touches vectord::agent + autotune."
+  },

   "context_management": {
     "_description": "Rule zero: NEVER call a model with more tokens than its context_window minus safety_margin. Every call goes through the budget checker first. If over budget → chunk, summarize, or escalate. This is the stability floor.",
     "token_estimator": {
docs/PRD.md (36 changed lines)
@@ -392,24 +392,34 @@ Five-tier routing declared in `config/models.json`. Hot path (T1/T2) stays on lo

 T3 checkpoints + cross-day lessons are wired. Lessons archive to `data/_playbook_lessons/` and load back at next scenario start as `prior_lessons` in executor context.

-### Phase 21: Context Stability & Chunking Pipeline
+### Phase 21: Scratchpad + Tree-Split Continuation

-**Why this is a phase and not an optimization:** when playbooks accumulate into the thousands, naive concatenation blows any model's context window. LLM Team hit this exact issue running multi-model ranking — context got lost silently, rankings degraded, and there was no pipeline to catch it. We cannot trust cloud inference to save us: bigger context window just raises the cliff. The stable answer is a chunking + caching layer that runs BEFORE any agent call.
+**Why this is a phase and not an optimization:** bumping `max_tokens` until a response stops truncating is a tourniquet — J called this out explicitly. As playbooks accumulate into the hundreds and responses grow, eventually SOME request will exceed SOME model's window, and we can't solve it by raising a number. The stable answer is two primitives that let us handle arbitrary-size work without losing context: a scratchpad that glues multi-call responses together, and a tree split that shards oversized inputs and reduces them back.

+**Two primitives (WIRED 2026-04-21 in `tests/multi-agent/agent.ts`):**
+
+1. **`generateContinuable()`** — handles OUTPUT overflow. Calls the model; checks structural completeness (for JSON: matched braces + JSON.parse success; for text: non-empty). If incomplete, calls again with "continue from here" + the partial response as scratchpad. Up to `max_continuations` times. No `max_tokens` tuning needed — if thinking ate the initial budget, continuation picks up the slack.
+
+2. **`generateTreeSplit()`** — handles INPUT overflow. Caller passes an array of shards (semantic chunks of the corpus). For each shard: map call with running scratchpad digest. Final reduce call produces the answer. Scratchpad truncates oldest content if it approaches its own budget. If a single shard still overflows, `assertContextBudget` throws — caller must re-shard at finer granularity, NOT silently truncate.

 **Guarantees:**
-1. Every `generate()` call goes through a budget check against the model's declared `context_window` minus `safety_margin`.
-2. On overflow, the declared `overflow_policy` fires — one of: summarize oldest tool results via T3, cosine-rank top-K lessons, two-pass map-reduce across shards, or escalate to a bigger-context model.
-3. Chunk shards of large corpora (playbook_memory, lesson archive) are precomputed into `data/_chunk_cache/` keyed by corpus hash. Invalidation is automatic when the hash changes.
-4. Token estimation uses `chars / 4` (biased safe by ~15%) until we wire the provider's tokenizer.
+1. No agent call can silently truncate. Either it completes, continues, or throws with numbers.
+2. No corpus is too big — `generateTreeSplit` handles any size the caller can shard.
+3. Scratchpad is the glue between multi-call responses; context is never lost, only compacted.
+4. Token estimation uses `chars / 4` (biased safe ~15%) until we wire the provider's tokenizer.

-**What lives where:**
-- Budget checker in `crates/aibridge/src/budget.rs` (new)
-- Chunker in `crates/aibridge/src/chunk.rs` (new) — deterministic splits on sentence/paragraph boundaries, each shard ≤ policy cap
-- Cache service in `crates/storaged/src/chunk_cache.rs` (new) — `corpus_hash → [shard_id, tokens, content]`
-- Agent helper `tests/multi-agent/agent.ts::generateSafe()` wraps `generate()` + `generateCloud()` with the checker
-- Metric emitted on every overflow: `context_overflow{tier=?,policy=?,model=?}` → surfaces on /metrics
+**What lives where now:**
+- `agent.ts::estimateTokens()` + `assertContextBudget()` + `generateContinuable()` + `generateTreeSplit()` — WIRED
+- `scenario.ts` executor + reviewer + overviewGenerate calls — migrated to `generateContinuable`
+- `config/models.json` — context_window + context_budget + overflow_policies per tier (declarative)

-**Status:** `context_window` + `context_budget` + `overflow_policies` WIRED in config/models.json. Enforcement helpers NOT yet wired. Implementation target: current sprint — this is a stability floor, not a feature.
+**Next sprint (Rust side, so gateway tools share it):**
+- `crates/aibridge/src/continuation.rs` — port of `generateContinuable`
+- `crates/aibridge/src/tree_split.rs` — port of `generateTreeSplit`
+- `crates/storaged/src/chunk_cache.rs` — precomputed shards keyed by corpus hash (avoid re-chunking on every T4 run)
+- `/metrics` counter: `context_continuations_total{model,shape,succeeded}`
+
+**Status:** TS primitives WIRED. Rust port pending. The escalation path (tree split → bigger-context cloud model → kimi-k2:1t's 1M window → split decision into sub-decisions) is declared in `config/models.json` under `context_management.overflow_policies`.
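The map-reduce-with-glue shape reduces to a small sketch. mapCall and reduceCall below are toy stand-ins (a word-counting "model"), not the agent.ts API; the point is the running scratchpad threaded through every map step and consumed once by the reduce step.

```typescript
// Toy tree-split: each "map call" appends a one-line digest of its shard
// to the scratchpad; the "reduce call" reads only the scratchpad.
function mapCall(shard: string, scratchpad: string): string {
  return `${scratchpad}\n${shard.split(/\s+/).length} words`.trim();
}
function reduceCall(scratchpad: string): number {
  return scratchpad
    .split("\n")
    .map((line) => parseInt(line, 10) || 0)
    .reduce((a, b) => a + b, 0);
}

// Caller pre-shards at semantic boundaries, as the primitive requires.
const shards = ["alpha beta gamma", "delta epsilon", "zeta"];
let scratchpad = "";
for (const shard of shards) {
  scratchpad = mapCall(shard, scratchpad); // running digest; bounded in real code
}
const total = reduceCall(scratchpad);
console.log(total); // 6
```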
### Phase 22+: Further horizon
scripts/ab_t3_test.sh (new executable file, 77 lines)
@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# A/B test of T3 overseer: does it actually make subsequent runs better?
# Chains Run B (T3 seed) → Run C (T3 + read-back) → Run D (T3 cloud).
# Run A is assumed already complete (launched separately). Aggregates
# metrics at the end into ab_scorecard.json.

set -e
cd "$(dirname "$0")/.."

export OLLAMA_CLOUD_KEY="$(python3 -c "import json; print(json.load(open('/root/llm_team_config.json'))['providers']['ollama_cloud']['api_key'])")"

echo "▶ A/B test start at $(date -Iseconds)"
echo "▶ prior lessons dir: $(ls data/_playbook_lessons 2>/dev/null | wc -l) files"

# Run B — T3 enabled local, no prior lessons should exist yet.
# Note: capture the exit code in the && / || list itself; `cmd || true`
# would always leave $? at 0.
echo "──── RUN B: T3 local, seeds first lesson ────"
bun tests/multi-agent/scenario.ts > /tmp/lakehouse_ab_B.log 2>&1 && rc=0 || rc=$?
echo "  B exit=$rc"
ls data/_playbook_lessons/*.json 2>/dev/null | head -5

# Run C — T3 enabled local, B's lesson should load
echo "──── RUN C: T3 local, reads B's lesson ────"
bun tests/multi-agent/scenario.ts > /tmp/lakehouse_ab_C.log 2>&1 && rc=0 || rc=$?
echo "  C exit=$rc"

# Run D — T3 enabled CLOUD (gpt-oss:120b), reads B+C lessons
echo "──── RUN D: T3 cloud, reads B+C lessons ────"
LH_OVERVIEW_CLOUD=1 bun tests/multi-agent/scenario.ts > /tmp/lakehouse_ab_D.log 2>&1 && rc=0 || rc=$?
echo "  D exit=$rc"

echo "▶ all runs done at $(date -Iseconds)"
echo "▶ scorecard:"
python3 - <<'PY'
import datetime, json, os, subprocess

# The 4 most recent scenario dirs are D, C, B, A in reverse chronological
# order: this script ran B/C/D, and Run A was launched separately before it.
all_runs = subprocess.check_output(
    ['bash', '-c', 'ls -1dt tests/multi-agent/playbooks/scenario-* | head -4']
).decode().strip().split('\n')
top4 = list(reversed(all_runs))  # oldest first → A, B, C, D
labels = ['A(no-T3)', 'B(T3-seed)', 'C(T3-read)', 'D(T3-cloud)']

rows = []
for i, path in enumerate(top4):
    try:
        results = json.load(open(os.path.join(path, 'results.json')))
    except FileNotFoundError:
        continue
    prior = []
    try:
        prior = json.load(open(os.path.join(path, 'prior_lessons.json')))
    except FileNotFoundError:
        pass
    rows.append({
        'label': labels[i] if i < len(labels) else f'run{i}',
        'path': path,
        'ok_events': sum(1 for r in results if r.get('ok')),
        'total_events': len(results),
        'total_turns': sum(r.get('turns', 0) for r in results),
        'total_gaps': sum(len(r.get('gap_signals', [])) for r in results),
        'total_citations': sum(len(r.get('playbook_citations') or []) for r in results),
        'prior_lessons_loaded': len(prior),
    })

scorecard = {'generated_at': datetime.datetime.utcnow().isoformat() + 'Z', 'runs': rows}
open('tests/multi-agent/playbooks/ab_scorecard.json', 'w').write(json.dumps(scorecard, indent=2))
print(json.dumps(scorecard, indent=2))
PY
echo "▶ saved: tests/multi-agent/playbooks/ab_scorecard.json"
@@ -16,6 +16,10 @@ class GenerateRequest(BaseModel):
     system: str | None = None
     temperature: float = 0.7
     max_tokens: int = 2048
+    # think=false disables hidden reasoning blocks on thinking models
+    # (qwen3, qwen3.5, gpt-oss). Required for hot-path JSON emitters
+    # that need the whole token budget for the visible response.
+    think: bool | None = None


 class GenerateResponse(BaseModel):
@@ -40,6 +44,8 @@ async def generate(req: GenerateRequest):
     }
     if req.system:
         payload["system"] = req.system
+    if req.think is not None:
+        payload["think"] = req.think

     async with client() as c:
         resp = await c.post("/api/generate", json=payload)
@@ -22,6 +22,214 @@ export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// ============================================================
// Scratchpad + tree-split continuation
// ============================================================
//
// Core problem: when a prompt OR response would exceed a model's window
// — e.g. 200 playbooks pasted in, or a long propose_done payload — we
// cannot just raise max_tokens forever. That's the tourniquet approach
// J explicitly rejected. The right answer is: split into sub-calls
// glued by a persistent scratchpad, so no context is ever lost.
//
// Two primitives:
//
// 1. generateContinuable(): for OUTPUT overflow. If the model returns
//    a structurally-incomplete JSON (unmatched braces, truncated
//    mid-value), auto-continue with the partial as scratchpad until
//    the JSON parses or max_continuations is hit.
//
// 2. generateTreeSplit(): for INPUT overflow. If prompt + system +
//    max_tokens exceeds window, shard the input at semantic boundaries,
//    run each shard through the model with a running scratchpad digest,
//    then run a final reduce pass to combine. This is map-reduce with
//    glue.
//
// Both are tier-agnostic — they wrap generate() and generateCloud().

// Check structural completeness of a response. For non-JSON responses
// (lesson text, checkpoint hints) we treat "non-empty" as complete.
// For JSON-shaped responses (executor/reviewer actions) we check that
// braces match AND that JSON.parse succeeds on the first {...} block.
function isStructurallyComplete(text: string, shape: "json" | "text"): boolean {
  if (!text || !text.trim()) return false;
  if (shape === "text") return true;
  // Strip ``` fences
  let s = text.trim();
  if (s.startsWith("```")) s = s.replace(/^```(?:json)?\n?/, "").replace(/```$/, "").trim();
  const start = s.indexOf("{");
  const end = s.lastIndexOf("}");
  if (start < 0 || end <= start) return false;
  const slice = s.slice(start, end + 1);
  // Balance check — cheaper than JSON.parse and catches truncated nests
  let depth = 0, inStr = false, esc = false;
  for (const c of slice) {
    if (esc) { esc = false; continue; }
    if (c === "\\") { esc = true; continue; }
    if (c === '"') { inStr = !inStr; continue; }
    if (inStr) continue;
    if (c === "{") depth++;
    else if (c === "}") depth--;
  }
  if (depth !== 0) return false;
  try { JSON.parse(slice); return true; } catch { return false; }
}

// Continue a truncated response. We do NOT ask the model to start over —
// we ask it to continue from exactly where it stopped. The partial goes
// in as scratchpad so it knows what's already committed.
async function continueResponse(
  model: string,
  originalPrompt: string,
  partial: string,
  opts: { max_tokens: number; temperature: number; system?: string; cloud: boolean; think?: boolean },
): Promise<string> {
  const continuationPrompt = `${originalPrompt}

PARTIAL RESPONSE SO FAR (continue from here — do NOT restart, do NOT repeat what's already there, emit ONLY the remaining tokens to close the structure):
${partial}`;
  const fn = opts.cloud ? generateCloud : generate;
  const rest = await fn(model, continuationPrompt, {
    max_tokens: opts.max_tokens,
    temperature: opts.temperature,
    system: opts.system,
    bypass_budget: true, // the caller already checked; continuation doesn't double-count
    think: opts.think,
  });
  return partial + rest;
}

// Output-overflow handler. Handles two distinct failure modes:
//   (a) EMPTY response — thinking model ate the whole budget. Retry the
//       ORIGINAL prompt with 2x the budget (geometric backoff up to cap).
//   (b) TRUNCATED non-empty — model got most of the way there but hit
//       max_tokens before closing. Continue with the partial as
//       scratchpad.
// The two modes need different repair: (a) needs more budget, not
// continuation from ""; (b) needs scratchpad-glued continuation.
export async function generateContinuable(
  model: string,
  prompt: string,
  opts: {
    max_tokens?: number;
    temperature?: number;
    system?: string;
    shape?: "json" | "text";
    max_continuations?: number;
    cloud?: boolean;
    think?: boolean;
    on_continuation?: (n: number, combined_len: number) => void;
  } = {},
): Promise<string> {
  const shape = opts.shape ?? "json";
  const initialMax = opts.max_tokens ?? 800;
  const maxConts = opts.max_continuations ?? 3;
  const cloud = opts.cloud ?? false;
  const budgetCap = 8000; // don't geometric-backoff forever
  const fn = cloud ? generateCloud : generate;

  let combined = "";
  let currentMax = initialMax;

  // Initial call + empty-response backoff loop.
  for (let retry = 0; retry < 3; retry++) {
    const out = await fn(model, prompt, {
      max_tokens: currentMax,
      temperature: opts.temperature,
      system: opts.system,
      think: opts.think,
    });
    if (out.trim().length > 0) { combined = out; break; }
    // Empty — thinking model ate the budget. Double it and retry.
    if (opts.on_continuation) opts.on_continuation(retry + 1, 0);
    currentMax = Math.min(currentMax * 2, budgetCap);
  }

  // Structural completion loop (continuation from partial).
  for (let i = 0; i < maxConts; i++) {
    if (isStructurallyComplete(combined, shape)) return combined;
    if (opts.on_continuation) opts.on_continuation(i + 1, combined.length);
    combined = await continueResponse(model, prompt, combined, {
      max_tokens: Math.min(currentMax, budgetCap),
      temperature: opts.temperature ?? 0.3,
      system: opts.system,
      cloud,
      think: opts.think,
    });
  }
  // Last-resort: return combined even if incomplete; caller's parser
  // will throw with the raw text for forensics rather than silently
  // truncating.
  return combined;
}

// Input-overflow handler. When the input corpus exceeds the window,
// shard → map → reduce with a running scratchpad digest.
//
// shards: array of input chunks the caller already split at semantic
//   boundaries (paragraphs, records, playbook entries).
// map_prompt: fn taking (shard, running_scratchpad) → prompt for a
//   single map call. Must fit within window.
// reduce_prompt: fn taking (combined_scratchpad) → final prompt. Also
//   must fit window; if the scratchpad itself overflows, this triggers
//   a recursive tree-split.
//
// Result: the reduce call's response.
export async function generateTreeSplit(
  model: string,
  shards: string[],
  map_prompt: (shard: string, scratchpad: string) => string,
  reduce_prompt: (scratchpad: string) => string,
  opts: {
    max_tokens?: number;
    temperature?: number;
    system?: string;
    cloud?: boolean;
    on_shard?: (i: number, total: number) => void;
    scratchpad_budget?: number;
  } = {},
): Promise<{ response: string; scratchpad: string; shards_processed: number }> {
  const cloud = opts.cloud ?? false;
  const scratchpadBudget = opts.scratchpad_budget ?? 6000;
  let scratchpad = "";

  for (let i = 0; i < shards.length; i++) {
    if (opts.on_shard) opts.on_shard(i + 1, shards.length);
    const shardPrompt = map_prompt(shards[i], scratchpad);
    // If the per-shard prompt alone exceeds window, the caller sharded
    // too coarsely — bubble up rather than silently truncating.
    assertContextBudget(model, shardPrompt, {
      system: opts.system,
      max_tokens: opts.max_tokens,
      bypass: false,
    });
    const shardOut = await generateContinuable(model, shardPrompt, {
      max_tokens: opts.max_tokens ?? 800,
      temperature: opts.temperature,
      system: opts.system,
      shape: "text",
      cloud,
    });
    // Append to scratchpad; truncate oldest if over budget.
    scratchpad = (scratchpad + `\n— shard ${i + 1}/${shards.length} digest —\n` + shardOut.trim()).slice(-scratchpadBudget * 4);
  }

  const reducePrompt = reduce_prompt(scratchpad);
  assertContextBudget(model, reducePrompt, {
    system: opts.system,
    max_tokens: opts.max_tokens,
    bypass: false,
  });
  const response = await generateContinuable(model, reducePrompt, {
    max_tokens: opts.max_tokens ?? 1500,
    temperature: opts.temperature,
    system: opts.system,
    shape: "text",
    cloud,
  });
  return { response, scratchpad, shards_processed: shards.length };
}

// Known context windows — matches crates/../config/models.json. Kept in
// code as a fallback so the test harness doesn't crash if the config is
// missing. Production path should read from models.json.
@@ -29,10 +237,14 @@ export const CONTEXT_WINDOWS: Record<string, number> = {
   "mistral:latest": 32768,
   "qwen2.5:latest": 32768,
   "qwen3:latest": 40960,
+  "qwen3.5:latest": 262144, // local 9.7B — new executor
+  "qwen3-embedding": 32768, // local embedding model
+  "nomic-embed-text-v2-moe": 2048, // local embedding, MoE
   "gpt-oss:20b": 131072,
   "gpt-oss:120b": 131072,
   "qwen3.5:397b": 131072,
   "kimi-k2-thinking": 200000,
+  "kimi-k2.6": 200000, // cloud — new T4 candidate
   "kimi-k2:1t": 1048576,
   "deepseek-v3.1:671b": 131072,
   "glm-4.7": 131072,
@@ -89,6 +301,7 @@ export interface LogEntry {
    | "critique"
    | "propose_done"
    | "consensus_done"
    | "note"
    | "error";
  content: any;
}
@@ -150,6 +363,7 @@ export async function generate(model: string, prompt: string, opts: {
   temperature?: number;
   system?: string;
   bypass_budget?: boolean;
+  think?: boolean;
 } = {}): Promise<string> {
   assertContextBudget(model, prompt, {
     system: opts.system,
@@ -163,11 +377,13 @@ export async function generate(model: string, prompt: string, opts: {
     max_tokens: opts.max_tokens ?? 800,
   };
   if (opts.system) body.system = opts.system;
+  if (opts.think !== undefined) body.think = opts.think;
   const r = await http<any>("POST", `${SIDECAR}/generate`, body);
-  const text = r.text ?? "";
-  if (!text || typeof text !== "string") {
-    throw new Error(`generate returned empty text from ${model}: ${JSON.stringify(r).slice(0, 200)}`);
-  }
+  const text = typeof r.text === "string" ? r.text : "";
+  // Do NOT throw on empty. Thinking models (gpt-oss, qwen3.5) burn the
+  // max_tokens budget on hidden reasoning and emit "" when budget was
+  // too tight. generateContinuable detects empty + continues with more
+  // budget. Callers that expected non-empty can check themselves.
   return text;
 }
@@ -181,6 +397,7 @@ export async function generateCloud(model: string, prompt: string, opts: {
   temperature?: number;
   system?: string;
   bypass_budget?: boolean;
+  think?: boolean;
 } = {}): Promise<string> {
   if (!OLLAMA_CLOUD_KEY) {
     throw new Error("OLLAMA_CLOUD_KEY not set; cannot reach Ollama Cloud");
@@ -200,6 +417,7 @@ export async function generateCloud(model: string, prompt: string, opts: {
     },
   };
   if (opts.system) body.system = opts.system;
+  if (opts.think !== undefined) body.think = opts.think;
   const resp = await fetch(`${OLLAMA_CLOUD_URL}/api/generate`, {
     method: "POST",
     headers: {
@@ -212,10 +430,9 @@ export async function generateCloud(model: string, prompt: string, opts: {
     throw new Error(`Ollama Cloud ${resp.status}: ${await resp.text().catch(() => "?")}`);
   }
   const data: any = await resp.json();
-  const text = data.response ?? "";
-  if (!text) {
-    throw new Error(`Ollama Cloud returned empty response for ${model}: ${JSON.stringify(data).slice(0, 200)}`);
-  }
+  const text = typeof data.response === "string" ? data.response : "";
+  // Same non-throw policy as local generate() — empty text is a valid
+  // signal that thinking ate the budget. Let generateContinuable retry.
   return text;
 }
@@ -49,8 +49,8 @@ import {
   callTool,
 } from "./agent.ts";

-const EXECUTOR_MODEL = "mistral:latest";
-const REVIEWER_MODEL = "qwen2.5:latest";
+const EXECUTOR_MODEL = "qwen3.5:latest";
+const REVIEWER_MODEL = "qwen3:latest";
 const VERIFIER_MODEL = "qwen2.5:latest";
 const PROFILE_ID = "staffing-recruiter";
 const INDEX_NAME = "workers_500k_v1";
@@ -175,7 +175,7 @@ async function buildPhase(task: TaskSpec, prefix: string): Promise<BuildResult>
   while (turn < MAX_TURNS && !sealed) {
     turn += 1;

-    const execRaw = await generate(EXECUTOR_MODEL, executorPrompt(task, log), { temperature: 0.2, max_tokens: 600 });
+    const execRaw = await generate(EXECUTOR_MODEL, executorPrompt(task, log), { temperature: 0.2, max_tokens: 1200 });
     const execAction = parseAction(execRaw, "executor");
     append({ turn, role: "executor", model: EXECUTOR_MODEL, kind: execAction.kind as any, content: execAction });

@@ -192,7 +192,7 @@ async function buildPhase(task: TaskSpec, prefix: string): Promise<BuildResult>
     }
   }

-  const revRaw = await generate(REVIEWER_MODEL, reviewerPrompt(task, log), { temperature: 0.1, max_tokens: 400 });
+  const revRaw = await generate(REVIEWER_MODEL, reviewerPrompt(task, log), { temperature: 0.1, max_tokens: 1000 });
   const revAction = parseAction(revRaw, "reviewer");
   append({ turn, role: "reviewer", model: REVIEWER_MODEL, kind: "critique", content: revAction });
@@ -25,8 +25,8 @@ import {
 import { mkdir, writeFile } from "node:fs/promises";
 import { join } from "node:path";

-const EXECUTOR_MODEL = "mistral:latest";
-const REVIEWER_MODEL = "qwen2.5:latest";
+const EXECUTOR_MODEL = "qwen3.5:latest";
+const REVIEWER_MODEL = "qwen3:latest";
 const MAX_TURNS = 12; // executor turns; reviewer gets one per
 const MAX_CONSECUTIVE_DRIFTS = 3; // drift-cycle blown → give up
@@ -148,7 +148,7 @@ async function main() {
   // --- EXECUTOR TURN ---
   const execRaw = await generate(EXECUTOR_MODEL, executorPrompt(task, log), {
     temperature: 0.2,
-    max_tokens: 600,
+    max_tokens: 1200,
   });
   let execAction: Action;
   try {
@@ -191,7 +191,7 @@ async function main() {
   // --- REVIEWER TURN ---
   const revRaw = await generate(REVIEWER_MODEL, reviewerPrompt(task, log), {
     temperature: 0.1,
-    max_tokens: 400,
+    max_tokens: 1000,
   });
   let revAction: Action;
   try {
@@ -32,8 +32,8 @@ import {
   callTool,
 } from "./agent.ts";

-const EXECUTOR_MODEL = "mistral:latest";
-const REVIEWER_MODEL = "qwen2.5:latest";
+const EXECUTOR_MODEL = "qwen3.5:latest";
+const REVIEWER_MODEL = "qwen3:latest";
 const MAX_TURNS = 12;
 const MAX_CONSECUTIVE_DRIFTS = 3;
 const INDEX_NAME = "workers_500k_v1";
@@ -75,7 +75,7 @@ async function runOrchestrator(task: TaskSpec, prefix: string): Promise<RunResul
     turn += 1;

     // Executor
-    const execRaw = await generate(EXECUTOR_MODEL, executorPrompt(task, log), { temperature: 0.2, max_tokens: 600 });
+    const execRaw = await generate(EXECUTOR_MODEL, executorPrompt(task, log), { temperature: 0.2, max_tokens: 1200 });
     const execAction = parseAction(execRaw, "executor");
     append({ turn, role: "executor", model: EXECUTOR_MODEL, kind: execAction.kind as any, content: execAction });
@@ -95,7 +95,7 @@ async function runOrchestrator(task: TaskSpec, prefix: string): Promise<RunResul
     }

     // Reviewer
-    const revRaw = await generate(REVIEWER_MODEL, reviewerPrompt(task, log), { temperature: 0.1, max_tokens: 400 });
+    const revRaw = await generate(REVIEWER_MODEL, reviewerPrompt(task, log), { temperature: 0.1, max_tokens: 1000 });
     const revAction = parseAction(revRaw, "reviewer");
     append({ turn, role: "reviewer", model: REVIEWER_MODEL, kind: "critique", content: revAction });

@@ -29,6 +29,7 @@ import {
   sqlQuery,
   generate,
   generateCloud,
+  generateContinuable,
   parseAction,
   executorPrompt,
   reviewerPrompt,
@@ -37,15 +38,14 @@ import {
 import { mkdir, writeFile, appendFile } from "node:fs/promises";
 import { join } from "node:path";

-// 2026-04-20 — reverted to mistral executor after trying qwen2.5.
-// qwen2.5 emits malformed JSON (trailing `)` garbage, unterminated
-// strings) when asked for tool calls. mistral drops fields occasionally
-// but produces valid JSON. With optional `question` default + lean
-// prompt + schema lock, mistral seals baseline + recurring reliably.
-// Complex scenarios (5-fill, emergency, misplacement) remain flaky —
-// real Phase 20+ problem (larger model or constrained decoding needed).
-const EXECUTOR_MODEL = "mistral:latest";
-const REVIEWER_MODEL = "qwen2.5:latest";
+// 2026-04-21 — executor is now qwen3.5:latest (9.7B, 262K context,
+// thinking model, emits clean JSON). Replaces mistral, which produced
+// malformed JSON on complex SQL filters (bare IN-clause identifiers,
+// unclosed braces) regardless of prompt — decoder-level bug that all
+// 5 events hit across 4 A/B test runs. qwen3.5 tested clean on first
+// try with 800 max_tokens.
+const EXECUTOR_MODEL = "qwen3.5:latest";
+const REVIEWER_MODEL = "qwen3:latest";
+const DRAFT_MODEL = "qwen2.5:latest"; // artifact generation; short outputs

 // T3 overview tier. Called sparingly — NOT per tool call. Two insertion
@@ -81,9 +81,16 @@ const T3_DISABLED = process.env.LH_T3_DISABLE === "1";

 // Dispatcher: route T3 calls to local sidecar or Ollama Cloud depending
 // on the LH_OVERVIEW_CLOUD flag. Hot-path T1/T2 always stay local.
+// T3 outputs are free-form prose (lesson/hint), so shape=text — the
+// continuation primitive treats any non-empty response as complete.
 async function overviewGenerate(prompt: string, opts: { temperature?: number; max_tokens?: number } = {}): Promise<string> {
-  if (OVERVIEW_CLOUD) return generateCloud(OVERVIEW_MODEL, prompt, opts);
-  return generate(OVERVIEW_MODEL, prompt, opts);
+  return generateContinuable(OVERVIEW_MODEL, prompt, {
+    temperature: opts.temperature,
+    max_tokens: opts.max_tokens ?? 1000,
+    shape: "text",
+    max_continuations: 2,
+    cloud: OVERVIEW_CLOUD,
+  });
 }

 const MAX_TURNS = 14;
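The continuation loop that `overviewGenerate` routes through can be sketched roughly as follows. This is a minimal illustration under stated assumptions: `generateOnce` is an injected single-shot callback, and the names and prompt wording are hypothetical, not the actual `agent.ts` signatures.

```typescript
type Shape = "json" | "text";

interface ContinuableOpts {
  shape: Shape;
  max_continuations: number;
  // Single-shot generation primitive, injected so the loop is testable.
  generateOnce: (prompt: string) => Promise<string>;
}

// A response counts as complete when it parses as JSON (shape "json")
// or is simply non-empty (shape "text").
function isComplete(text: string, shape: Shape): boolean {
  if (shape === "text") return text.trim().length > 0;
  try { JSON.parse(text); return true; } catch { return false; }
}

async function generateContinuable(prompt: string, opts: ContinuableOpts): Promise<string> {
  let combined = await opts.generateOnce(prompt);
  let n = 0;
  // Truncated output: feed the partial back as a scratchpad and ask the
  // model to continue exactly where it stopped, bounded by max_continuations.
  while (!isComplete(combined, opts.shape) && n < opts.max_continuations) {
    n += 1;
    const contPrompt =
      `${prompt}\n\nPARTIAL OUTPUT SO FAR (continue exactly where it stops):\n${combined}`;
    combined += await opts.generateOnce(contPrompt);
  }
  return combined;
}
```

The key design choice is that completeness is defined by the output shape, not by token counts, so the caller never has to guess a safe `max_tokens` upfront.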
@@ -365,10 +372,26 @@ async function runAgentFill(
   while (turn < MAX_TURNS && !sealed) {
     turn += 1;

-    const execRaw = await generate(
+    // generateContinuable: if the model truncates mid-JSON (thinking
+    // ate the budget, or payload was just long), auto-continue with the
+    // partial as scratchpad until braces balance and JSON parses.
+    // No more "bump max_tokens until it stops truncating" tourniquet.
+    // think:false — executor emits structured JSON, doesn't need hidden
+    // reasoning. Burning ~650 thinking tokens on a 400-token JSON was
+    // exactly the bug we just solved.
+    const execRaw = await generateContinuable(
       EXECUTOR_MODEL,
       withExtras(executorPrompt(task, log)),
-      { temperature: 0.2, max_tokens: 600 },
+      {
+        temperature: 0.2,
+        max_tokens: 800,
+        shape: "json",
+        max_continuations: 3,
+        think: false,
+        on_continuation: (n, len) =>
+          append({ turn, role: "executor", model: EXECUTOR_MODEL, kind: "note",
+            content: { continuation: n, combined_chars: len } }),
+      },
     );
     let execAction: Action;
     try {
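The other half of the primitive, the empty-response path (geometric backoff before giving up), might look like the sketch below. `retryEmpty`, the attempt count, and the delay constants are illustrative assumptions, not the repo's actual implementation.

```typescript
// Retry an attempt whose empty result signals a transient failure
// (e.g. thinking consumed the whole budget), backing off geometrically.
async function retryEmpty(
  attemptFn: () => Promise<string>,
  maxAttempts = 4,
  baseDelayMs = 250,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const text = await attemptFn();
    if (text.trim().length > 0) return text;
    // Geometric backoff: baseDelayMs, 2x, 4x, ...
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  // Empty text is a signal rather than an exception inside the
  // continuable path; this sketch surfaces it only after exhausting retries.
  throw new Error(`empty response after ${maxAttempts} attempts`);
}
```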
@@ -419,10 +442,19 @@ async function runAgentFill(
       }
     }

-    const revRaw = await generate(
+    const revRaw = await generateContinuable(
       REVIEWER_MODEL,
       withExtras(reviewerPrompt(task, log)),
-      { temperature: 0.1, max_tokens: 400 },
+      {
+        temperature: 0.1,
+        max_tokens: 600,
+        shape: "json",
+        max_continuations: 3,
+        think: false,
+        on_continuation: (n, len) =>
+          append({ turn, role: "reviewer", model: REVIEWER_MODEL, kind: "note",
+            content: { continuation: n, combined_chars: len } }),
+      },
     );
     let revAction: Action;
     try {
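The `think: false` flag ultimately has to reach Ollama's /api/generate. A minimal sketch of how the sidecar could build that request body, assuming Ollama's documented fields (`options.num_predict` for the token budget, top-level `think` for thinking models); `buildOllamaBody` and the defaults are hypothetical helpers, not the sidecar's actual code.

```typescript
interface GenOpts { temperature?: number; max_tokens?: number; think?: boolean }

// Translate the agent-side options into an Ollama /api/generate body.
function buildOllamaBody(model: string, prompt: string, opts: GenOpts) {
  return {
    model,
    prompt,
    stream: false,
    // Ollama expresses the token budget as options.num_predict.
    options: { temperature: opts.temperature ?? 0.2, num_predict: opts.max_tokens ?? 600 },
    // think:false suppresses hidden reasoning on thinking models
    // (qwen3-family); omit the field entirely for models without it.
    ...(opts.think !== undefined ? { think: opts.think } : {}),
  };
}
```

A caller would POST this as JSON to the sidecar, which forwards it unchanged to Ollama.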
@@ -867,7 +899,10 @@ HINT: <hint>`;

   let text = "";
   try {
-    text = await overviewGenerate(prompt, { temperature: 0.2, max_tokens: 600 });
+    // overviewGenerate routes through generateContinuable — if thinking
+    // ate the initial budget, it auto-continues rather than requiring
+    // us to guess a safe max_tokens upfront.
+    text = await overviewGenerate(prompt, { temperature: 0.2, max_tokens: 800 });
   } catch (e) {
     return {
       after_event: event.at,