Cross-lineage scrum (Opus 4.7 / Kimi K2.6 / Qwen3-coder via chatd's
/v1/chat) on the harness's first 4 commits surfaced 5 real bugs;
this commit lands the 4 that live inside the LLM/validator stack.
B5 (scanner skip-list semantics) ships separately because it changes
scan behavior on every target repo.
B1 (Kimi BLOCK + Opus WARN convergent) — internal/validators:
evidencePresent had two flaws: (1) cursor advanced on match in the
trim-line fallback, breaking same-line repeated matches AND skipping
not-yet-considered lines so out-of-order evidence spuriously failed;
(2) strings.Contains on a single `}` trim-matched any closing brace
in the file, defeating the "evidence quotes real text" contract.
Fix: trivial-evidence guard FIRST (reject anything <4 non-whitespace
chars) + per-line search no longer advances a cursor. New regression
test TestEvidencePresent_RejectsTrivialMatches covers `}`, `{`, `)`,
empty, and out-of-order multi-line evidence (which now passes —
order isn't part of the contract).
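A minimal sketch of the fixed matcher, under assumed names and signature (`evidencePresent` here takes plain string slices; the real validator may differ). The two behaviors taken from the fix are the trivial-evidence guard running first and the per-line search carrying no cursor:

```go
package main

import (
	"fmt"
	"strings"
)

// evidencePresent reports whether every quoted evidence line appears
// somewhere in the file. Each evidence line is searched independently
// against the whole file (no advancing cursor), so repeated same-line
// matches and out-of-order evidence both succeed.
func evidencePresent(evidence []string, fileLines []string) bool {
	for _, ev := range evidence {
		trimmed := strings.TrimSpace(ev)
		// Guard FIRST: reject evidence with fewer than 4
		// non-whitespace characters ("}", "{", ")", "", ...),
		// which would otherwise trim-match noise anywhere.
		if len(strings.Join(strings.Fields(trimmed), "")) < 4 {
			return false
		}
		found := false
		for _, line := range fileLines { // full scan per evidence line
			if strings.Contains(strings.TrimSpace(line), trimmed) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	file := []string{"func add(a, b int) int {", "\treturn a + b", "}"}
	fmt.Println(evidencePresent([]string{"}"}, file))                        // false: trivial evidence rejected
	fmt.Println(evidencePresent([]string{"return a + b", "func add"}, file)) // true: out of order is fine
}
```

Because each evidence line is matched independently, order is not part of the contract, matching the new regression test.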
B2 (Kimi WARN + Opus WARN convergent) — internal/pipeline:
WriteJSON error for rejected-findings.json was swallowed with
`if err == nil`, so a write failure left the validation phase
reporting status="ok" while the audit trail vanished. Mirror the
validated-findings branch: surface the error in
validatePhase.Errors + bump status to degraded + ExitCode=66.
B3 (Kimi BLOCK + Opus BLOCK convergent) — internal/llm/ollama.go:
HealthCheck.basic_prompt_ok was set to true on ANY non-empty
response, so a model emitting `<think>...` traces or apologies
passed silently. Now requires the response to contain "OK"
(uppercase, substring). Substring rather than equality lets minor
whitespace/punctuation variations through (some models add a
trailing period). Errors now record what the model actually said
when it fails the check.
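A rough sketch of the stricter check; the function name `checkBasicPrompt` is illustrative, not necessarily the real method on the Ollama client:

```go
package main

import (
	"fmt"
	"strings"
)

// checkBasicPrompt implements the B3 contract: a health-check
// response only passes if it contains the literal uppercase "OK".
// Substring rather than equality tolerates trailing punctuation or
// whitespace. On failure the error records what the model said.
func checkBasicPrompt(response string) (bool, error) {
	if strings.Contains(response, "OK") {
		return true, nil
	}
	return false, fmt.Errorf("basic prompt check failed: model said %q", response)
}

func main() {
	ok, _ := checkBasicPrompt("OK.")
	fmt.Println(ok) // true: trailing period tolerated
	ok, err := checkBasicPrompt("<think>I should say ok</think> Sure!")
	fmt.Println(ok, err != nil) // false true: think-trace no longer passes
}
```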
B4 (Opus BLOCK only — same class as today's chatd Anthropic-temp
fix) — internal/llm/ollama.go: chatBody had `if opts.Temperature != 0`
which silently dropped Temperature=0 from the request, so HealthCheck
+ Reviewer (both pass Temperature=0 expecting determinism) actually
ran at Ollama's ~0.8 default. Always forward Temperature now. The
two callers always set explicit values, so "0 means 0" is correct;
if a future caller wants Ollama's default they'll switch
CompleteOptions.Temperature to *float64 like chatd did this morning.
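In sketch form, reusing the `CompleteOptions` name from the commit (the struct layout and `chatBody` return shape shown here are assumptions):

```go
package main

import "fmt"

// CompleteOptions carries per-request generation settings.
type CompleteOptions struct {
	Temperature float64
}

// chatBody builds the request options. The buggy version guarded the
// field with `if opts.Temperature != 0`, silently dropping an explicit
// 0 so requests ran at Ollama's ~0.8 default. The fix: always forward
// Temperature, so "0 means 0" for callers that want determinism.
func chatBody(opts CompleteOptions) map[string]any {
	return map[string]any{
		"options": map[string]any{
			"temperature": opts.Temperature, // always forwarded
		},
	}
}

func main() {
	body := chatBody(CompleteOptions{Temperature: 0})
	fmt.Println(body["options"].(map[string]any)["temperature"]) // prints 0, not dropped
}
```

A caller that genuinely wants the server default would switch the field to `*float64` and omit the key when nil, as the commit notes chatd did.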
Verified end-to-end: insecure-repo + --enable-llm still produces 25
confirmed findings (16 static + 9 LLM), 0 rejected. Validator unit
tests: 11 pass (added TestEvidencePresent_RejectsTrivialMatches).
Same-day-as-shipping scrum, same-day-as-shipping fixes. The
convergent-≥2 gate caught 3 of these; the 4th was Opus-only but
verified by reading the code (same idiom as today's chatd bug).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# local-review-harness

Local-first code review harness. Walks a repository, runs evidence-bearing static checks, and generates Scrum-style reports. No cloud dependencies. LLM review is local-Ollama-only (Phase C, not yet shipped).

Per FIRST_COMMAND_FOR_CLAUDE_CODE.md + PROMPT.md — "AI may suggest. Code validates. Reports must show evidence." Findings without grep-able evidence are rejected; the validator phase rejects model claims that cite missing files.
## Status
Phase A + Phase B (MVP) shipped. What works today:
- `review-harness repo <path>` — Phase 0 intake + Phase 1 static scan
- `review-harness scrum <path>` — same pipeline + full Scrum report bundle
- `review-harness model doctor` — stub (real Ollama probe in Phase C)
- 12 static analyzers covering hardcoded paths, shell exec, raw SQL, wildcard CORS, secret patterns, large files, TODO/FIXME, missing tests, committed `.env`, unsafe file I/O, exposed mutation endpoints, hardcoded private-network IPs
Phases C–E pending: real LLM review, validation cross-check, append-only memory, diff/rules subcommands.
## Build

Single static binary, no cgo:

```sh
go build -o review-harness ./cmd/review-harness
```

Requires Go 1.22+.
## Run

```sh
# Full repo review (Phase 0 + Phase 1 + Phase 4)
./review-harness repo /path/to/target/repo

# Same + Scrum bundle (scrum-test.md, risk-register.md, sprint-backlog.md, acceptance-gates.md)
./review-harness scrum /path/to/target/repo

# Model doctor stub
./review-harness model doctor
```

Reports land in `<target>/reports/latest/` by default; override with `--output-dir`.

Optional config files:

```sh
./review-harness scrum /path --review-profile configs/review-profile.example.yaml \
  --model-profile configs/model-profile.example.yaml
```
## Self-review

The harness reviews itself as a sanity gate (PROMPT.md "Final Deliverable"):

```sh
./review-harness scrum .
cat reports/latest/scrum-test.md
```

The fixture-planted secrets in `tests/fixtures/insecure-repo/` are intentional — they prove the secret-pattern analyzer fires. Operators reviewing the self-report should expect those critical-severity hits and dismiss them as fixture content.
## Test fixtures

Three synthetic repos under `tests/fixtures/`:
| Fixture | Purpose | Expected outcome |
|---|---|---|
| `clean-repo/` | sterile reference | 0 confirmed findings |
| `insecure-repo/` | every static check fires | ≥8 distinct check IDs |
| `degraded-repo/` | no git, no manifests | `repo_intake` phase marked degraded |
Run them all to validate after a regex change:

```sh
for f in clean-repo insecure-repo degraded-repo; do
  ./review-harness scrum "tests/fixtures/$f" > /dev/null
  echo "$f: $(jq '.summary.total' "tests/fixtures/$f/reports/latest/static-findings.json") findings"
done
```
## Exit codes

- `0` — clean run, no degraded phases
- `64` — usage error
- `65` — runtime error (config parse fail, target path missing, etc.)
- `66` — degraded mode (one or more phases skipped or stubbed; reports still produced)

`66` is the expected exit code in MVP because the LLM phase is hardcoded degraded until Phase C lands.
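For CI callers, one way to honor that contract is a small wrapper that treats `0` and `66` as success until Phase C lands. This is a sketch, not part of the harness:

```sh
# allow_degraded runs a command and succeeds on exit codes 0 and 66
# (degraded mode: reports were still produced). Any other code is
# propagated as a failure.
allow_degraded() {
  "$@"
  code=$?
  if [ "$code" -ne 0 ] && [ "$code" -ne 66 ]; then
    echo "exit code $code (expected 0 or 66)" >&2
    return "$code"
  fi
  return 0
}

# Example: allow_degraded ./review-harness scrum /path/to/target/repo
```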