Architectural correction (J 2026-04-25): the 9-rung ladder was treating the cascade as the strategy. That's wrong. ONE model handles the work, with same-model retries using enriched context. Cycle to a different model ONLY on PROVIDER errors (network / auth / 5xx), never on quality issues, because a quality issue means the context needs more enrichment, not a different model.

Changes:
- LADDER shrunk from 11 entries to 3 (Grok 4.1 fast primary; DeepSeek V4 flash and Qwen3-235B as provider-error fallbacks). Removed Kimi K2.6, Gemini 2.5 flash, all Ollama Cloud rungs, OR free-tier rungs, and local qwen3.5: none were doing the work, and all wasted attempts. They remain available as routable tools for the future mode router.
- Loop restructured: `modelIdx` is now separate from the attempt counter. Provider error → modelIdx++ (advance the fallback). Observer reject / cycle / thin response → retry the SAME model, with rejection notes feeding into the `learning` preamble; advance the fallback only after MAX_QUALITY_RETRIES (default 2) is exhausted on the current model.
- Added the LH_SCRUM_MAX_QUALITY_RETRIES env var to tune the per-model retry cap.

What this preserves:
- Tree-split (`treeSplitFile`) is still the ONE legitimate model-switch trigger for context overflow, but even it just re-runs the same model against smaller chunks.
- The pathway-memory preamble still fires.
- Hot-swap reorder still applies when a recommended model maps onto the new shorter ladder.

Future direction (J 2026-04-25 note): the LLM Team multi-model modes in /root/llm_team_ui.py are a REFERENCE PATTERN for a mode router we will build INSIDE this gateway. Mimic the patterns; don't modify the LLM Team UI itself. The mode router will pick the right approach for each task class via the matrix index, not cascade through models.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit pipeline PR #9: determinism + fact extraction + verifier gate + KB stats + context injection
Description: Rust-first object storage system