Some checks failed
lakehouse/auditor 8 warnings — see review
Two bundled changes. Both came out of J's observation that the
verifier was defaulting to UNVERIFIABLE on domain-specific facts
because it had no idea what Lakehouse was, which project's code it
was reading, or what framework the types belonged to.
1. Project context preamble. Added docs/AUDITOR_CONTEXT.md — a concise,
   under-400-word description of the project (crates, services,
   architecture phases, the auditor's role itself). fact_extractor
reads it once, caches it, prepends it to the extract prompt as a
"PROJECT CONTEXT (for grounding; do NOT extract from this)"
section. Both extractor and verifier now see this context, so
statements like "aggregate<T> returns Map<string, AggregateRow>"
get grounded as "this is a TypeScript function in the Lakehouse
auditor subsystem" and the verifier can reason about plausibility
instead of guessing.
2. Verifier-verdict parser fix. Gemma2's output format varies between
"**Verdict:** CORRECT" and just "* **CORRECT**" inline (observed
variance across runs). The old regex required "Verdict:" as a
label and missed the second format — causing all verdicts to
stay UNCHECKED. Replaced with a two-pass approach: find each
fact section start ("**N.**" or "N."), slice to the next section,
scan the slice for the first CORRECT|INCORRECT|UNVERIFIABLE
   token. Handles both formats, plus a fallback for unfenced output.
Verified: 4-fact test extraction went from 0/4 verdicts scored
(pre-fix) to 2/4 CORRECT + 2/4 UNVERIFIABLE (post-fix). The 2
UNVERIFIABLE cases are domain-specific code behavior the verifier
legitimately can't confirm without reading source — correct stance,
not a parser miss.
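The two-pass approach can be sketched roughly like this (TypeScript; the function name and regexes are illustrative, not the actual verifier code):

```typescript
type Verdict = "CORRECT" | "INCORRECT" | "UNVERIFIABLE";

// Pass 1: locate each fact-section header ("**N.**" or "N." at line start).
// Pass 2: scan the slice up to the next header for the first verdict token.
function parseVerdicts(output: string): Map<number, Verdict> {
  const verdicts = new Map<number, Verdict>();
  const header = /^\s*(?:\*\*)?(\d+)\.(?:\*\*)?/gm;
  const hits = [...output.matchAll(header)];
  for (let i = 0; i < hits.length; i++) {
    const n = Number(hits[i][1]);
    const start = hits[i].index! + hits[i][0].length;
    const end = i + 1 < hits.length ? hits[i + 1].index! : output.length;
    const slice = output.slice(start, end);
    // First verdict token wins; tolerates "**Verdict:** CORRECT"
    // and the bare "* **CORRECT**" inline variant.
    const m = slice.match(/\b(CORRECT|INCORRECT|UNVERIFIABLE)\b/);
    if (m) verdicts.set(n, m[1] as Verdict);
  }
  return verdicts;
}
```

Slicing per fact section before token-scanning is what keeps a verdict from one fact bleeding into the next.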
No new consensus modes yet. J suggested adding codereview or
validator as a second pass; holding until we see whether context
injection alone gives sufficient signal lift.
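For reference, the read-once/prepend behavior in change 1 reduces to something like this sketch (path and prompt wording are taken from the description above; the real fact_extractor code may differ):

```typescript
import { readFileSync } from "node:fs";

// Read once, then cache for the life of the process.
let cachedContext: string | null = null;

function projectContext(path = "docs/AUDITOR_CONTEXT.md"): string {
  if (cachedContext === null) cachedContext = readFileSync(path, "utf8");
  return cachedContext;
}

// Prepend the grounding preamble to whatever the extractor is given.
function buildExtractPrompt(context: string, input: string): string {
  return [
    "PROJECT CONTEXT (for grounding; do NOT extract from this):",
    context,
    "---",
    input,
  ].join("\n");
}
```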
# Lakehouse Claim Auditor
A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.
Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:
- Static diff — grep/parse looking for placeholder patterns
- Dynamic — runs the never-before-executed hybrid test fixture
- Cloud inference — asks `gpt-oss:120b` via `/v1/chat` to identify gaps in the diff
- KB query — looks up `data/_kb/` + observer for prior failure patterns on similar claims
Verdict is assembled, posted to Gitea as:
- A failing commit status (hard block — branch protection prevents merge)
- A review comment explaining every finding
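A rough sketch of the status-posting half, assuming Gitea's standard commit-status endpoint (`POST /api/v1/repos/{owner}/{repo}/statuses/{sha}`); `statusFor` and `postStatus` are hypothetical names, not the auditor's actual code:

```typescript
interface StatusPayload {
  state: "success" | "failure";
  context: string;
  description: string;
}

// Any blocking verdict becomes a failing status under the
// lakehouse/auditor context that branch protection checks.
function statusFor(blocking: boolean, summary: string): StatusPayload {
  return {
    state: blocking ? "failure" : "success",
    context: "lakehouse/auditor", // must match the branch-protection context
    description: summary.slice(0, 255), // keep the summary short
  };
}

async function postStatus(base: string, token: string, sha: string, p: StatusPayload) {
  await fetch(`${base}/api/v1/repos/profit/lakehouse/statuses/${sha}`, {
    method: "POST",
    headers: { Authorization: `token ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify(p),
  });
}
```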
## Run manually
```shell
cd /home/profit/lakehouse
bun run auditor/index.ts
```
Defaults: polls every 90s; stops when an `auditor.paused` file is present.
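The poll/stop behavior amounts to a loop like this (a hypothetical driver, not the real index.ts; the `exists` parameter is injectable only to make the sketch testable):

```typescript
import { existsSync } from "node:fs";

const POLL_MS = 90_000; // default 90s cadence

// Presence of auditor.paused stops the loop, per the defaults above.
function shouldStop(dir = ".", exists: (p: string) => boolean = existsSync): boolean {
  return exists(`${dir}/auditor.paused`);
}

// auditOnce stands in for one full audit pass over open PRs.
async function pollLoop(auditOnce: () => Promise<void>) {
  while (!shouldStop()) {
    await auditOnce();
    await new Promise((r) => setTimeout(r, POLL_MS));
  }
}
```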
## State
- `data/_auditor/state.json` — last-audited head SHA per PR
- `data/_auditor/verdicts/{pr}-{sha}.json` — per-run verdict record
- `data/_kb/audit_lessons.jsonl` — one row per block/warn finding, with a path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with the same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
- `data/_kb/scrum_reviews.jsonl` — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
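The recurrence thresholds above reduce to a small severity map; a sketch (the function name is illustrative, not the real kb_query code):

```typescript
type Severity = "info" | "warn" | "block" | null;

// 2+ distinct PRs with the same signature → info, 3-4 → warn, 5+ → block.
function recurrenceSeverity(distinctPrs: number): Severity {
  if (distinctPrs >= 5) return "block";
  if (distinctPrs >= 3) return "warn";
  if (distinctPrs >= 2) return "info";
  return null; // a one-off finding is recorded but not surfaced
}
```

Counting distinct PRs rather than raw occurrences keeps one noisy PR from escalating a signature on its own.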
## Where YOU edit
auditor/policy.ts — the verdict assembler. Controls which findings
block vs warn vs inform. All other code is mechanical: fetching,
running checks, posting to Gitea.
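An illustrative shape for the assembler, assuming findings carry an action level; this is a sketch of the contract, not the real auditor/policy.ts:

```typescript
type Action = "block" | "warn" | "inform";

interface Finding {
  check: "static" | "dynamic" | "cloud" | "kb";
  action: Action;
  detail: string;
}

interface AuditVerdict {
  state: "failure" | "success";
  findings: Finding[];
}

// The assembler's one real decision: any blocking finding
// fails the commit status; warns and informs only annotate.
function assemble(findings: Finding[]): AuditVerdict {
  const state = findings.some((f) => f.action === "block") ? "failure" : "success";
  return { state, findings };
}
```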
## Hard-block mechanism
- Commit status is posted as `failure` with context `lakehouse/auditor`
- If `main` branch protection requires the `lakehouse/auditor` status to pass, Gitea prevents merge
- When code is fixed and re-audit passes, status flips to `success` and merge unblocks
Enable branch protection (one-time, via Gitea UI or API):
```
POST /repos/profit/lakehouse/branch_protections
{"branch_name": "main", "required_status_checks": {"contexts": ["lakehouse/auditor"]}}
```