Implements PROMPT.md / docs/REVIEW_PIPELINE.md Phase 3:
"AI may suggest. Code validates."
internal/validators/validate.go — 3 hard checks per the
"Reject A Finding If" list (sketched below):
- file does not exist (with path-traversal guard against the LLM
hallucinating ../../../etc/passwd)
- cited evidence does not appear in the file (verbatim or
trim-line-by-line — models often re-indent quotes when quoting code)
- line hint exceeds file line count
3 soft checks documented as open (claim semantics, suggested-fix
relevance, invented tests/commands — all need another LLM pass).
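Roughly, the three hard checks look like this; apart from file_not_found, the
function name and reason strings are illustrative, and multi-line evidence is
simplified to the single-line case:

```go
package validators

import (
	"os"
	"path/filepath"
	"strings"
)

// hardChecks sketches the three hard rejection checks. Names and reason
// strings (other than file_not_found) are illustrative, not the real API.
func hardChecks(repoRoot, citedPath, evidence string, lineHint int) (reason string, ok bool) {
	// Check 1: the cited file must exist (the path-traversal guard is
	// sketched separately in the Bug J section below).
	data, err := os.ReadFile(filepath.Join(repoRoot, citedPath))
	if err != nil {
		return "file_not_found", false
	}
	lines := strings.Split(string(data), "\n")

	// Check 2: the evidence must appear verbatim, or match some file line
	// after trimming whitespace (models often re-indent quoted code).
	if strings.TrimSpace(evidence) == "" {
		return "empty_evidence", false
	}
	found := strings.Contains(string(data), evidence)
	if !found {
		want := strings.TrimSpace(evidence)
		for _, line := range lines {
			if strings.TrimSpace(line) == want {
				found = true
				break
			}
		}
	}
	if !found {
		return "evidence_not_in_file", false
	}

	// Check 3: the line hint must not point past the end of the file.
	if lineHint > len(lines) {
		return "line_hint_beyond_file", false
	}
	return "", true
}
```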
internal/validators/validate_test.go — 9 tests:
- TestValidate_RejectsNonexistentFile (gate D1)
- TestValidate_RejectsEvidenceNotInFile
- TestValidate_RejectsLineHintBeyondFile
- TestValidate_AcceptsRealFinding
- TestValidate_AcceptsEvidenceWithDifferentLeadingWhitespace
- TestValidate_RejectsEmptyEvidence
- TestValidate_PassesThroughStaticFindings
- TestValidate_RejectsPathEscapingRepo (path-traversal protection)
- TestValidate_AcceptsRelativeRepoPath (the regression — see below)
Pipeline phase 3 wired between LLM review (Phase C) and report gen
(Phase 4). validated-findings.json contains the confirmed set;
rejected-findings.json contains rejects with per-finding reason +
detail. Receipt phase entry honest about output files + status.
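For orientation, one plausible shape of a rejected-findings.json entry as a Go
struct; every field name here is an assumption, not the emitted schema:

```go
// Hypothetical shape of one rejected-findings.json entry; field names and
// the enclosing package are assumptions for illustration, not the actual schema.
package pipeline

type rejectedFinding struct {
	File   string `json:"file"`   // path the model cited
	Reason string `json:"reason"` // e.g. "file_not_found"
	Detail string `json:"detail"` // human-readable context for the rejection
}
```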
=== Bug J caught ===
First Phase D run rejected EVERY real LLM finding as file_not_found
because the path-traversal check compared a relative joined path
(`tests/fixtures/insecure-repo/src/handler.go`) against an absolute
repoAbs (`/home/profit/share/.../insecure-repo`), so HasPrefix
always returned false. Both sides now resolved via filepath.Abs
before comparison. Regression test
TestValidate_AcceptsRelativeRepoPath locks this in — runs the
validator against a relative repo path AND a relative chdir, the
exact shape that hit the bug.
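The shape of the fix, sketched with assumed names (withinRepo, repoRoot, cited):

```go
package validators

import (
	"path/filepath"
	"strings"
)

// withinRepo sketches the corrected traversal guard: both the repo root and
// the cited path are resolved via filepath.Abs before the prefix comparison,
// so a relative repo root (or relative working directory) no longer makes
// HasPrefix fail for every finding. Names here are illustrative.
func withinRepo(repoRoot, cited string) (bool, error) {
	repoAbs, err := filepath.Abs(repoRoot)
	if err != nil {
		return false, err
	}
	citedAbs, err := filepath.Abs(filepath.Join(repoRoot, cited))
	if err != nil {
		return false, err
	}
	return citedAbs == repoAbs ||
		strings.HasPrefix(citedAbs, repoAbs+string(filepath.Separator)), nil
}
```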
J's framing was honest: "I don't know what the problem is, but you
know what we're trying to accomplish." The fix-it-yourself signal
let me trace through the rejection details + see the smoking gun
in the detail string ("escapes repo root"). Without that prompt the
9 false rejections might have looked like real LLM bugs.
=== 2 close-out fixes ===
1. .gitignore: changed `/reports/latest/` → `**/reports/latest/`
(and same for `run-*`). Phase C committed 22 generated files
from `tests/fixtures/*/reports/latest/` because the original
pattern was anchored at the harness root only. Existing tracked
files removed via git rm --cached; new pattern keeps fixture
reports out of version control going forward.
2. pipeline.cleanOutputDir: pipeline now deletes the bounded list
of known per-run files at the start of each run. Before this,
a prior run's rejected-findings.json could linger when the
current run had no rejections — confused J during the bug hunt
above. cleanOutputDir is bounded (deletes only files we emit)
so operator-owned adjacent files stay.
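Roughly what the bounded cleanup does; the file list below is illustrative
beyond the two names mentioned above:

```go
package pipeline

import (
	"os"
	"path/filepath"
)

// cleanOutputDir sketch: remove only the per-run files the pipeline itself
// emits, so stale output (e.g. a previous run's rejected-findings.json)
// cannot masquerade as current results, while operator-owned files in the
// same directory are left alone. The file list is illustrative.
func cleanOutputDir(dir string) error {
	for _, name := range []string{
		"validated-findings.json",
		"rejected-findings.json",
		// ...the other known per-run outputs
	} {
		if err := os.Remove(filepath.Join(dir, name)); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}
```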
Verified end-to-end: insecure-repo + --enable-llm → 25 confirmed
findings (16 static + 9 LLM), 0 rejected.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
local-review-harness
Local-first code review harness. Walks a repository, runs evidence-bearing static checks, generates Scrum-style reports. No cloud dependencies. LLM review is local-Ollama-only (Phase C, not yet shipped).
Per FIRST_COMMAND_FOR_CLAUDE_CODE.md + PROMPT.md — "AI may suggest. Code validates. Reports must show evidence." Findings without grep-able evidence get rejected; the validator phase rejects model claims that cite missing files.
Status
Phase A + Phase B (MVP) shipped. What works today:
- `review-harness repo <path>` — Phase 0 intake + Phase 1 static scan
- `review-harness scrum <path>` — same pipeline + full Scrum report bundle
- `review-harness model doctor` — stub (real Ollama probe in Phase C)
- 12 static analyzers covering hardcoded paths, shell exec, raw SQL, wildcard CORS, secret patterns, large files, TODO/FIXME, missing tests, committed .env, unsafe file I/O, exposed mutation endpoints, hardcoded private-network IPs
Phases C–E pending: real LLM review, validation cross-check, append-only memory, diff/rules subcommands.
Build
Single static binary, no cgo:
go build -o review-harness ./cmd/review-harness
Requires Go 1.22+.
Run
# Full repo review (Phase 0 + Phase 1 + Phase 4)
./review-harness repo /path/to/target/repo
# Same + Scrum bundle (scrum-test.md, risk-register.md, sprint-backlog.md, acceptance-gates.md)
./review-harness scrum /path/to/target/repo
# Model doctor stub
./review-harness model doctor
Reports land in <target>/reports/latest/ by default; override with --output-dir.
Optional config files:
./review-harness scrum /path --review-profile configs/review-profile.example.yaml \
--model-profile configs/model-profile.example.yaml
Self-review
The harness reviews itself as a sanity gate (PROMPT.md "Final Deliverable"):
./review-harness scrum .
cat reports/latest/scrum-test.md
The fixture-planted secrets in tests/fixtures/insecure-repo/ are intentional — they prove the secret-pattern analyzer fires. Operators reviewing the self-report should expect those critical-severity hits and dismiss them as fixture content.
Test fixtures
Three synthetic repos under tests/fixtures/:
| Fixture | Purpose | Expected outcome |
|---|---|---|
| clean-repo/ | sterile reference | 0 confirmed findings |
| insecure-repo/ | every static check fires | ≥8 distinct check IDs |
| degraded-repo/ | no git, no manifests | repo_intake phase marked degraded |
Run them all to validate after a regex change:
for f in clean-repo insecure-repo degraded-repo; do
  ./review-harness scrum "tests/fixtures/$f" > /dev/null
  echo "$f: $(jq '.summary.total' tests/fixtures/$f/reports/latest/static-findings.json) findings"
done
Exit codes
- `0` — clean run, no degraded phases
- `64` — usage error
- `65` — runtime error (config parse fail, target path missing, etc.)
- `66` — degraded mode (one or more phases skipped or stubbed; reports still produced)
Exit 66 is the expected code in the MVP because the LLM phase is hard-coded as degraded until Phase C lands.