root a50e9586f2
auditor: cap auto-resets on new head SHA (was per-PR-forever, now per-push)
Operator feedback: manually jq-editing state.json and
restarting isn't sustainable. Each push should naturally get a
fresh budget; the old counter is discarded the moment the SHA
moves. The cap's intent shifts from "PR exhaustion" to
"per-push attempt limit": bounded recovery from transient
upstream errors, not a forever limit.

Mechanism:
- The dedup branch above (`last === pr.head_sha → continue`)
  remains unchanged.
- New branch: when `last` exists AND we have a non-zero count,
  AND we've fallen through to here (which means SHA != last,
  i.e. a new push), drop the counter to 0 BEFORE the cap check.
- Cap check fires only on same-SHA retries (transient errors that
  consumed multiple attempts).
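
In code, the ordering might look like this (a hedged sketch; `PrState`, `count_sha`, and `MAX_AUDITS_PER_PUSH` are illustrative names, not the actual identifiers in auditor/index.ts):

```typescript
// Hedged sketch of the per-push cap reset; identifier names are assumptions.
const MAX_AUDITS_PER_PUSH = 3;

interface PrState {
  last_sha?: string;   // last head SHA that completed an audit (dedup key)
  count_sha?: string;  // head SHA the attempt counter currently belongs to
  count: number;       // attempts consumed on count_sha
}

function shouldAudit(pr: PrState, headSha: string): boolean {
  // Dedup branch (unchanged): this head SHA already has a verdict.
  if (pr.last_sha === headSha) return false;

  // New-push branch: the SHA moved, so the old counter is stale.
  // Drop it to 0 BEFORE the cap check so every push gets a fresh budget.
  if (pr.count_sha !== undefined && pr.count_sha !== headSha && pr.count > 0) {
    pr.count = 0;
    pr.count_sha = headSha;
  }

  // Cap check now fires only on same-SHA retries (transient-error re-attempts).
  return pr.count < MAX_AUDITS_PER_PUSH;
}
```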

Net behavior:
- push code → 3 audits run → cap → quiet → push more code →
  cap auto-resets → 3 more audits → cap → quiet
- No manual jq ever needed in steady state.
- Operator resets state.audit_count_per_pr.<N> to 0 only if a
  single SHA somehow needs MORE attempts than the cap allows.

Pre-existing manual reset still works (state edit + daemon
restart for the change to take effect). Documented in the new
log line that fires on the rare same-SHA-burned-cap case.

Verified compile (bun build auditor/index.ts → green). Daemon
restart needed to activate; current cycle 4616's `[1/3]` audit
on 6ed48c1 finishes first, then restart.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 08:15:06 -05:00

Lakehouse Claim Auditor

A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.

Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:

  1. Static diff — grep/parse looking for placeholder patterns
  2. Dynamic — runs the never-before-executed hybrid test fixture
  3. Cloud inference — asks gpt-oss:120b via /v1/chat to identify gaps in the diff
  4. KB query — looks up data/_kb/ + observer for prior failure patterns on similar claims
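
As an illustration of check 1, a placeholder grep over the diff's added lines might look like this (the pattern list and names are assumptions, not the auditor's actual regexes):

```typescript
// Illustrative placeholder grep for the static diff check; the pattern
// list is an assumption, not the auditor's real one.
type Severity = "info" | "warn" | "block";
interface Finding { check: string; severity: Severity; detail: string }

const PLACEHOLDER = /TODO|FIXME|not implemented|unimplemented/i;

function staticDiffCheck(diff: string): Finding[] {
  return diff
    .split("\n")
    // Only lines the PR adds (skip the "+++ b/file" header line).
    .filter((line) => line.startsWith("+") && !line.startsWith("+++") && PLACEHOLDER.test(line))
    .map((line) => ({
      check: "static-diff",
      severity: "block" as Severity,
      detail: line.slice(1).trim(),
    }));
}
```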

The verdict is assembled and posted to Gitea as:

  • A failing commit status (hard block — branch protection prevents merge)
  • A review comment explaining every finding
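
Gitea's commit-status endpoint (POST /repos/{owner}/{repo}/statuses/{sha}) takes a state plus a context string. A sketch of the post, with the payload builder split out; the GITEA_URL / GITEA_TOKEN env-var names are assumptions:

```typescript
// Sketch of the commit-status post. Env-var names are assumptions;
// the endpoint and payload shape follow Gitea's API.
function buildStatusPayload(ok: boolean, summary: string) {
  return {
    state: ok ? "success" : "failure", // failure = hard block under branch protection
    context: "lakehouse/auditor",
    description: summary.slice(0, 255), // keep the description short for the status UI
  };
}

async function postStatus(sha: string, ok: boolean, summary: string): Promise<void> {
  const res = await fetch(
    `${process.env.GITEA_URL}/api/v1/repos/profit/lakehouse/statuses/${sha}`,
    {
      method: "POST",
      headers: {
        Authorization: `token ${process.env.GITEA_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildStatusPayload(ok, summary)),
    },
  );
  if (!res.ok) throw new Error(`status post failed: ${res.status}`);
}
```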

Run manually

cd /home/profit/lakehouse
bun run auditor/index.ts

Defaults: polls every 90s; pauses while an auditor.paused file is present.

State

  • data/_auditor/state.json — last-audited head SHA per PR
  • data/_auditor/verdicts/{pr}-{sha}.json — per-run verdict record
  • data/_kb/audit_lessons.jsonl — one row per block/warn finding, path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
  • data/_kb/scrum_reviews.jsonl — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
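
The recurrence thresholds above reduce to a small severity map; a sketch (the function name is illustrative, not the real kb_query code):

```typescript
type Severity = "info" | "warn" | "block";

// Map distinct-PR recurrence of a lesson signature to a severity.
// Thresholds from the doc: 2+ -> info, 3-4 -> warn, 5+ -> block.
function lessonSeverity(distinctPrs: number): Severity | null {
  if (distinctPrs >= 5) return "block";
  if (distinctPrs >= 3) return "warn";
  if (distinctPrs >= 2) return "info";
  return null; // seen in only one PR: not yet a pattern
}
```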

Where YOU edit

auditor/policy.ts — the verdict assembler. Controls which findings block vs warn vs inform. All other code is mechanical: fetching, running checks, posting to Gitea.
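
For orientation, a minimal verdict assembler of the kind policy.ts implements might look like this (the types and names are assumptions, not the actual file):

```typescript
// Hypothetical verdict assembler; the real policy.ts shape may differ.
type Severity = "info" | "warn" | "block";
interface Finding { check: string; severity: Severity; detail: string }
interface Verdict {
  state: "success" | "failure"; // becomes the lakehouse/auditor commit status
  blocking: Finding[];
  advisory: Finding[];
}

// Policy: any "block" finding fails the status; "warn"/"info" are advisory only.
function assembleVerdict(findings: Finding[]): Verdict {
  const blocking = findings.filter((f) => f.severity === "block");
  const advisory = findings.filter((f) => f.severity !== "block");
  return { state: blocking.length > 0 ? "failure" : "success", blocking, advisory };
}
```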

Hard-block mechanism

  1. Commit status is posted as failure with context lakehouse/auditor
  2. If main branch protection requires lakehouse/auditor status to pass, Gitea prevents merge
  3. When code is fixed and re-audit passes, status flips to success, merge unblocks

Enable branch protection (one-time, via Gitea UI or API):

  • POST /repos/profit/lakehouse/branch_protections
  • {"branch_name": "main", "enable_status_check": true, "status_check_contexts": ["lakehouse/auditor"]}