profit 540c493ff1 tests/real-world: hard-task escalation — prove the ladder solves tasks local can't
J asked (2026-04-22): construct a task the local model provably can't
complete, then watch the escalation + retry + cloud pipeline actually
solve it.

The task: generate a Rust async function with 15 specific
structural rules (exact signature, bounded concurrency, exponential
backoff 250/500/1000ms, NO .unwrap(), rustdoc comments, etc.).
Small enough to fit in one response but strict enough that one
rule violation = not accepted. Fits Rust + async + concurrency +
error-handling — across the hardest dimensions for 7B models.

Escalation ladder (corrected per J — kimi-k2.x requires Ollama
Cloud Pro subscription which J's key lacks; mistral-large-3:675b
is the biggest provisioned model):

  1. qwen3.5:latest        (local 7B)
  2. qwen3:latest          (local 7B)
  3. gpt-oss:20b           (local 20B)
  4. gpt-oss:120b          (cloud 120B)
  5. devstral-2:123b       (cloud 123B coding specialist)
  6. mistral-large-3:675b  (cloud 675B — biggest available)

Each attempt gets PRIOR failures' rubric violations injected as
learning context. Loop caps at MAX_ATTEMPTS=6.

Live run (runs/hard_task_moapd3g3/):
  attempt 1: qwen3.5:latest         11/15  — missed concurrency + some constraints
  attempt 2: qwen3:latest           11/15  — different misses after learning
  attempt 3: gpt-oss:20b             0/1  — empty response (local model dead-end)
  attempt 4: gpt-oss:120b            0/1  — empty (heavy learning context may confuse)
  attempt 5: devstral-2:123b        15/15   ACCEPTED after 10.4s
  attempt 6: (not reached)

Total: 5 attempts, 145.6s, coding-specialist succeeded.

Honest findings from the run:
- Pipeline works: escalated through 4 distinct model tiers, injected
  learning, bounded at 6, graceful failure surfaces.
- Learning injection doesn't always help general-purpose models —
  gpt-oss:120b returned empty when given heavy prior-failure context
  (attempt 4). The coding specialist (devstral) worked better because
  the task is domain-aligned.
- Local 7B came within 4 rules of success first-try (11/15) — not
  bad for the scale, but specific constraints like "EXACT signature"
  and "bounded concurrency at 4" are where small models slip.
- Kimi K2.5/K2.6 both require a paid subscription on our current
  Ollama Cloud key — verified via direct ollama.com curl. Swap
  to kimi once subscription lands.

Also includes a rubric bug-fix caught in the run: the regex for
"reaches 500/1000ms backoff" originally required literal constants,
but devstral-2:123b wrote idiomatic `retry_delay *= 2;` which
doubles 250 → 500 → 1000 correctly. Broadened rubric to recognize
`*= 2`, bit-shift, `.pow()`, and literal forms. Without this the
ladder would have false-failed on semantically-correct code.

Files:
  tests/real-world/hard_task_escalation.ts (270 LOC)
  tests/real-world/runs/hard_task_moapd3g3/
    attempt_{1..5}.txt     — raw model outputs (last successful)
    attempt_{1..5}.json    — per-attempt rubric verdict + error
    summary.json           — ladder summary

What this PROVES that no prior test did:
- Task-level retry ESCALATES across distinct model capabilities
  (not just same model retried)
- Bigger and more-specialized models ACTUALLY solve what smaller
  ones can't — the ladder works by design, not by luck
- The subscription boundary (Kimi K2.x) is a real operational
  constraint, not a code issue
- Rubric engineering is its own discipline — a strict-but-wrong
  validator can reject correct code; shipping the test harness
  required tuning against actual model outputs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 18:50:53 -05:00
2026-04-22 02:41:15 -05:00
2026-03-27 05:54:09 -05:00
2026-04-22 02:41:15 -05:00
Description
Rust-first object storage system
6.3 GiB
Languages
TypeScript 38.4%
Rust 35.8%
HTML 13.9%
Python 7.8%
Shell 2.1%
Other 2%