profit 2afad0f83f auditor/inference: N=3 consensus + qwen3-coder:480b tie-breaker
Closes the determinism gap observed in the 3-run baseline test: 1 of
8 findings (the "proven escalation ladder" block) was flipping across
identical-state audits. Root cause: cloud non-determinism at temp=0
is real in practice even though it shouldn't be in theory.

Fix: run the primary reviewer (gpt-oss:120b) N=3 times in PARALLEL
(Promise.all, wall-clock ≈ single call because they're independent
HTTP requests). Aggregate votes per claim_idx. Majority wins. On a
1-1-1 split, call a tie-breaker model with different architecture:
qwen3-coder:480b — newer coding specialist, 4x params of the primary,
distinct training lineage.
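The voting step can be sketched like this. A minimal sketch: the `Vote` labels and the names `majority`/`runConsensus` are illustrative, not the real helpers in auditor/inference.

```typescript
// A minimal sketch of per-claim consensus. Vote labels are assumed.
type Vote = "block" | "warn" | "pass";

// Tally N independent run verdicts for one claim_idx. Returns the
// majority verdict, or null on a full split (1-1-1) so the caller
// can escalate to the tie-breaker model (qwen3-coder:480b).
export function majority(votes: Vote[]): Vote | null {
  const tally = new Map<Vote, number>();
  for (const v of votes) tally.set(v, (tally.get(v) ?? 0) + 1);
  let best: Vote | null = null;
  let bestCount = 0;
  let tied = false;
  for (const [v, n] of tally) {
    if (n > bestCount) { best = v; bestCount = n; tied = false; }
    else if (n === bestCount) tied = true;
  }
  return tied ? null : best;
}

// The N runs are independent HTTP requests, so they fire in parallel:
// wall-clock stays roughly that of a single call.
export async function runConsensus(
  infer: () => Promise<Vote>,
  n = 3,
): Promise<Vote | null> {
  const votes = await Promise.all(Array.from({ length: n }, () => infer()));
  return majority(votes);
}
```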

Every case where the 3 runs disagreed (even when majority resolved)
is logged to data/_kb/audit_discrepancies.jsonl with the vote counts
and resolution type. This is how we measure consensus drift over
time: the dashboard metric is literally `wc -l audit_discrepancies.jsonl`
relative to the audit count.

Verified: 2 back-to-back audits on unchanged PR #8 produced
identical 8 findings each (1 block + 7 warn). consensus=3/3 on every
claim, zero discrepancies logged. Cost: 3x primary tokens (7K per
audit vs 2K), wall-clock ~unchanged because calls are parallel.

New env vars:
  LH_AUDITOR_CONSENSUS_N        default 3
  LH_AUDITOR_TIEBREAKER_MODEL   default qwen3-coder:480b
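Read with their documented defaults, roughly (a sketch; the real parsing may differ):

```typescript
// New knobs with the defaults listed above.
export const CONSENSUS_N = Number(process.env.LH_AUDITOR_CONSENSUS_N ?? 3);
export const TIEBREAKER_MODEL =
  process.env.LH_AUDITOR_TIEBREAKER_MODEL ?? "qwen3-coder:480b";
```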

Factored the cloud call into runCloudInference() helper so the
consensus loop is clean and the tie-breaker reuses the same prompt
shape as the primary.
2026-04-22 23:38:17 -05:00

Lakehouse Claim Auditor

A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.

Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:

  1. Static diff — grep/parse looking for placeholder patterns
  2. Dynamic — runs the never-before-executed hybrid test fixture
  3. Cloud inference — asks gpt-oss:120b via /v1/chat to identify gaps in the diff
  4. KB query — looks up data/_kb/ + observer for prior failure patterns on similar claims
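Check 1 can be sketched as a scan of added diff lines for placeholder patterns. The pattern list and function name here are illustrative, not the auditor's real set:

```typescript
// Illustrative placeholder patterns, not the auditor's actual list.
const PLACEHOLDER = /\b(TODO|FIXME|XXX|not implemented)\b/i;

// Scan a unified diff: only added lines ("+...") matter, and the
// "+++ b/..." file header must be skipped.
export function staticFindings(diff: string): string[] {
  const hits: string[] = [];
  for (const line of diff.split("\n")) {
    if (line.startsWith("+") && !line.startsWith("+++") && PLACEHOLDER.test(line)) {
      hits.push(line.slice(1).trim());
    }
  }
  return hits;
}
```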

Verdict is assembled, posted to Gitea as:

  • A failing commit status (hard block — branch protection prevents merge)
  • A review comment explaining every finding

Run manually

cd /home/profit/lakehouse
bun run auditor/index.ts

Defaults: polls every 90s; pauses while an auditor.paused file is present.

State

  • data/_auditor/state.json — last-audited head SHA per PR
  • data/_auditor/verdicts/{pr}-{sha}.json — per-run verdict record
  • data/_kb/audit_lessons.jsonl — one row per block/warn finding, path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
  • data/_kb/scrum_reviews.jsonl — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
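The recurrence escalation for audit_lessons.jsonl signatures maps distinct-PR counts to severities. As a sketch, with the thresholds from above (function name hypothetical):

```typescript
type Severity = "info" | "warn" | "block";

// 2+ distinct PRs with the same signature -> info, 3-4 -> warn,
// 5+ -> block; a first occurrence produces no finding.
export function recurrenceSeverity(distinctPrs: number): Severity | null {
  if (distinctPrs >= 5) return "block";
  if (distinctPrs >= 3) return "warn";
  if (distinctPrs >= 2) return "info";
  return null;
}
```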

Where YOU edit

auditor/policy.ts — the verdict assembler. Controls which findings block vs warn vs inform. All other code is mechanical: fetching, running checks, posting to Gitea.
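A minimal sketch of what the assembler decides. The `Finding` field names are assumptions, not the real types in policy.ts:

```typescript
// Hypothetical finding shape; severity comes from the four checks.
interface Finding {
  check: string;
  severity: "block" | "warn" | "info";
  msg: string;
}

// Any blocking finding fails the verdict; the state maps directly
// onto the Gitea commit status.
export function assembleVerdict(findings: Finding[]) {
  const state = findings.some((f) => f.severity === "block")
    ? "failure"
    : "success";
  return { state, findings };
}
```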

Hard-block mechanism

  1. Commit status is posted as failure with context lakehouse/auditor
  2. If main branch protection requires lakehouse/auditor status to pass, Gitea prevents merge
  3. When code is fixed and re-audit passes, status flips to success, merge unblocks
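Step 1 is a single Gitea API call. Sketched below with hypothetical helpers; `GITEA_URL` and `GITEA_TOKEN` are assumed env vars, and the status endpoint is Gitea's `POST /api/v1/repos/{owner}/{repo}/statuses/{sha}`:

```typescript
// Build the commit-status request that drives the hard block.
export function buildStatusRequest(
  sha: string,
  state: "success" | "failure",
  description: string,
) {
  return {
    path: `/api/v1/repos/profit/lakehouse/statuses/${sha}`,
    body: { state, context: "lakehouse/auditor", description },
  };
}

// Wire it to fetch (assumed env vars; not executed at import time).
export async function postStatus(
  sha: string,
  state: "success" | "failure",
  description: string,
) {
  const { path, body } = buildStatusRequest(sha, state, description);
  const res = await fetch(`${process.env.GITEA_URL}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `token ${process.env.GITEA_TOKEN}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`commit status POST failed: ${res.status}`);
}
```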

Enable branch protection (one-time, via Gitea UI or API):

  • POST /repos/profit/lakehouse/branch_protections
  • {"branch_name": "main", "enable_status_check": true, "status_check_contexts": ["lakehouse/auditor"]}