root dc6dd1d30c auditor: per-PR audit cap (default 3) — daemon halts further audits until reset
Adds MAX_AUDITS_PER_PR (env LH_AUDITOR_MAX_AUDITS_PER_PR, default 3).
The poller increments a per-PR counter on each successful audit; when
the counter reaches the cap it skips that PR with a "capped" log line
until the operator manually clears state.audit_count_per_pr[<PR#>].

Why:
"I don't want it to continuously loop even if it finds a problem.
We need a maximum until we can come back."

Without this, the daemon polls every 90s and audits every new head
SHA. If each fix-commit surfaces new findings (which is what
kimi_architect is designed to do), the audit loop runs unbounded
while the operator is away. At ~$0.30/audit on Opus and 5-10 pushes
a day, that's $1-3/day idle burn — fine for a couple days, painful
for weeks.

Cap mechanics:
- Counter starts at 0 per PR (or whatever exists in state.json)
- Increments only on successful audit (failures don't count)
- Comparison is >= so cap=3 means audits 1, 2, 3 run; 4+ skip
- Skip is logged: "capped at N/M audits — clear state.json
  audit_count_per_pr.<N> to resume"
- New `cycles_skipped_capped` counter on State for observability
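The mechanics above can be sketched as follows. This is an illustration, not the actual auditor code: the `AuditorState` shape, `shouldAudit`, and `recordSuccessfulAudit` names are assumptions; only the env var, field names, and log format come from this commit.

```typescript
interface AuditorState {
  audit_count_per_pr: Record<string, number>;
  cycles_skipped_capped: number;
}

// 0 disables the cap, matching LH_AUDITOR_MAX_AUDITS_PER_PR=0 below.
const MAX_AUDITS_PER_PR = Number(process.env.LH_AUDITOR_MAX_AUDITS_PER_PR ?? "3");

function shouldAudit(state: AuditorState, pr: number): boolean {
  if (MAX_AUDITS_PER_PR <= 0) return true;
  const count = state.audit_count_per_pr[String(pr)] ?? 0;
  if (count >= MAX_AUDITS_PER_PR) {
    // ">=" means cap=3 runs audits 1, 2, 3 and skips the 4th onward.
    state.cycles_skipped_capped++;
    console.log(
      `PR #${pr} capped at ${count}/${MAX_AUDITS_PER_PR} audits — ` +
        `clear state.json audit_count_per_pr.${pr} to resume`,
    );
    return false;
  }
  return true;
}

// Called only after an audit completes successfully; failures don't count.
function recordSuccessfulAudit(state: AuditorState, pr: number): void {
  const key = String(pr);
  state.audit_count_per_pr[key] = (state.audit_count_per_pr[key] ?? 0) + 1;
}
```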

Reset:
  jq 'del(.audit_count_per_pr["11"])' \
    /home/profit/lakehouse/data/_auditor/state.json > /tmp/s.json && \
    mv /tmp/s.json /home/profit/lakehouse/data/_auditor/state.json
- Daemon picks up the change on the next cycle (no restart needed —
  state is reloaded each cycle)
- Or set the entry to 0 if you want to keep the key

Disable cap: LH_AUDITOR_MAX_AUDITS_PER_PR=0
Reduce cap: LH_AUDITOR_MAX_AUDITS_PER_PR=1   (one audit per PR, then pause)

Pre-existing PR audits today (4 on PR #11) are NOT seeded into the
counter by this commit — operator decides post-deploy whether to set
state.audit_count_per_pr.11 to today's actual count or leave at 0.
Setting to 4 (or 3) immediately halts further audits on PR #11.

Verification:
  bun build auditor/index.ts   compiles
  systemctl restart lakehouse-auditor   active

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:24:23 -05:00

Lakehouse Claim Auditor

A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.

Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:

  1. Static diff — grep/parse looking for placeholder patterns
  2. Dynamic — runs the never-before-executed hybrid test fixture
  3. Cloud inference — asks gpt-oss:120b via /v1/chat to identify gaps in the diff
  4. KB query — looks up data/_kb/ + observer for prior failure patterns on similar claims
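A minimal sketch of check 1, the static diff scan. The `Finding` shape and the pattern list here are illustrative assumptions, not the auditor's actual ones; the real check greps the fetched PR diff for placeholder patterns in added lines.

```typescript
type Severity = "info" | "warn" | "block";
interface Finding { check: string; severity: Severity; message: string; }

// Illustrative placeholder patterns; the real list lives in the auditor.
const PLACEHOLDER_PATTERNS: RegExp[] = [
  /\bTODO\b/, /\bFIXME\b/, /not\s+implemented/i, /placeholder/i,
];

function staticDiffCheck(diff: string): Finding[] {
  const findings: Finding[] = [];
  for (const line of diff.split("\n")) {
    // Only scan added lines; skip the "+++ b/..." file header.
    if (!line.startsWith("+") || line.startsWith("+++")) continue;
    for (const pat of PLACEHOLDER_PATTERNS) {
      if (pat.test(line)) {
        findings.push({
          check: "static_diff",
          severity: "warn",
          message: `placeholder pattern in added line: ${line.slice(1).trim()}`,
        });
      }
    }
  }
  return findings;
}
```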

Verdict is assembled, posted to Gitea as:

  • A failing commit status (hard block — branch protection prevents merge)
  • A review comment explaining every finding

Run manually

cd /home/profit/lakehouse
bun run auditor/index.ts

Defaults: polls every 90s; stops while an auditor.paused file is present.

State

  • data/_auditor/state.json — last-audited head SHA per PR
  • data/_auditor/verdicts/{pr}-{sha}.json — per-run verdict record
  • data/_kb/audit_lessons.jsonl — one row per block/warn finding, path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
  • data/_kb/scrum_reviews.jsonl — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
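The recurrence thresholds for audit_lessons.jsonl can be written as a small ladder. The thresholds are taken from the text; the function name is an assumption.

```typescript
// Maps how many distinct PRs share a finding signature to a severity:
// 2+ → info, 3-4 → warn, 5+ → block, fewer than 2 → no finding.
function lessonSeverity(distinctPrs: number): "none" | "info" | "warn" | "block" {
  if (distinctPrs >= 5) return "block";
  if (distinctPrs >= 3) return "warn";
  if (distinctPrs >= 2) return "info";
  return "none";
}
```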

Where YOU edit

auditor/policy.ts — the verdict assembler. Controls which findings block vs warn vs inform. All other code is mechanical: fetching, running checks, posting to Gitea.
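A hypothetical reduction along the lines described for policy.ts, to show the shape of what you'd be editing: any blocking finding fails the commit status, everything else passes. The `Finding` type and function name here are illustrative, not the file's actual contents.

```typescript
type Severity = "info" | "warn" | "block";
interface Finding { check: string; severity: Severity; message: string; }

// The verdict drives the posted commit status: "failure" hard-blocks merge.
function assembleVerdict(findings: Finding[]): "success" | "failure" {
  return findings.some((f) => f.severity === "block") ? "failure" : "success";
}
```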

Hard-block mechanism

  1. Commit status is posted as failure with context lakehouse/auditor
  2. If main branch protection requires lakehouse/auditor status to pass, Gitea prevents merge
  3. When code is fixed and re-audit passes, status flips to success, merge unblocks
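Step 1 above amounts to a call to Gitea's commit-status endpoint (POST /api/v1/repos/{owner}/{repo}/statuses/{sha}). A sketch, assuming GITEA_URL and GITEA_TOKEN env vars (the auditor's actual config names may differ):

```typescript
interface StatusPayload {
  state: "success" | "failure";
  context: string;
  description: string;
}

function buildStatusPayload(
  state: "success" | "failure",
  description: string,
): StatusPayload {
  // The fixed context is what branch protection keys on.
  return { state, context: "lakehouse/auditor", description };
}

async function postStatus(sha: string, payload: StatusPayload): Promise<void> {
  const res = await fetch(
    `${process.env.GITEA_URL}/api/v1/repos/profit/lakehouse/statuses/${sha}`,
    {
      method: "POST",
      headers: {
        Authorization: `token ${process.env.GITEA_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    },
  );
  if (!res.ok) throw new Error(`status post failed: ${res.status}`);
}
```

On re-audit, posting the same context with state "success" flips the status and unblocks merge (step 3).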

Enable branch protection (one-time, via Gitea UI or API):

  • POST /repos/profit/lakehouse/branch_protections
  • {"branch_name": "main", "enable_status_check": true, "status_check_contexts": ["lakehouse/auditor"]}