Some checks failed
lakehouse/auditor 1 blocking issue: cloud: claim not backed — "the proven escalation ladder with learning context, collects"
Phase 1 — definition-layer over append-only JSONL scratchpads.
`auditor/kb_index.ts` is the single shared aggregator:

```
aggregate<T>(jsonlPath, { keyFn, scopeFn, checkFn, tailLimit })
  → Map<signature, { count, distinct_scopes, confidence,
                     first_seen, last_seen, representative_summary, ... }>
```
ratingSeverity(agg) — confidence × count severity policy shared
across all KB readers. Kills the "same unfixed PR inflates its
own recurrence score" failure mode by design: confidence =
distinct_scopes/count, so same-scope noise stays below the 0.3
escalation threshold no matter how many times it repeats.
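For concreteness, a minimal sketch of what that policy could look like, assuming the aggregate record fields named above; the thresholds follow the rules stated in this doc (0.3 confidence gate, 2+/3-4/5+ distinct-scope ladder), not necessarily the exact code in kb_index.ts:

```ts
// Sketch only — thresholds follow the rules quoted in this doc; the exact
// code in kb_index.ts may differ.
type AggRecord = {
  count: number;            // total occurrences of this signature
  distinct_scopes: number;  // distinct PRs (scopes) that produced it
  confidence: number;       // distinct_scopes / count
};

type Severity = "info" | "warn" | "block";

function ratingSeverity(agg: AggRecord): Severity {
  // Same-scope repeats can't escalate themselves: one PR repeating a finding
  // gives confidence = 1/count, which stays under the 0.3 gate.
  if (agg.confidence < 0.3 || agg.distinct_scopes < 2) return "info";
  if (agg.distinct_scopes >= 5) return "block";
  if (agg.distinct_scopes >= 3) return "warn";
  return "info"; // 2 distinct scopes: surface it, don't escalate yet
}
```

Under this reading the Phase 2 unit-test numbers fall out directly: nine same-PR repeats give 1/9 ≈ 0.11 (info); five distinct PRs give 5/5 = 1.00 (block).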
checkAuditLessons now routes through aggregate + ratingSeverity.
Net effect: the recurrence detector's bespoke Map/Set bookkeeping is
gone; same behavior, shared discipline, reusable by scrum/observer.
Also: symbolsExistInRepo now skips files >500KB so the audit can't
get stuck slurping a fixture.
Phase 2 — nine-consecutive audit runner.
tests/real-world/nine_consecutive_audits.ts pushes 9 empty commits,
waits for each verdict, captures the audit_lessons aggregate state
after each run, and reports:
- sig_count trajectory (should stabilize, not grow linearly)
- max_count trajectory (same-signature repeat rate)
- max_confidence trajectory (must stay LOW on same-PR noise)
- verdict_stable across runs (must NOT oscillate)
This is the empirical proof that the KB compounds favorably:
noise doesn't escalate itself, and signal stays distinguishable.
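A sketch of the trajectory assertions such a runner could make over the nine snapshots; the Snapshot shape and the heuristics are assumptions, not the real test's types:

```ts
// Sketch only — snapshot fields mirror the bullets above; the checks are one
// plausible encoding of "stabilize, stay low-confidence, don't oscillate".
type Snapshot = {
  sigCount: number;       // distinct signatures in the audit_lessons aggregate
  maxCount: number;       // highest per-signature repeat count
  maxConfidence: number;  // highest per-signature confidence
  verdict: string;        // auditor verdict for that run
};

function checkTrajectories(runs: Snapshot[]): string[] {
  const problems: string[] = [];
  // sig_count should stabilize rather than grow one-per-run
  if (runs[runs.length - 1].sigCount >= runs.length) {
    problems.push("signature count grew roughly linearly: dedup is not collapsing repeats");
  }
  // same-PR noise must stay under the 0.3 escalation gate
  if (runs.some((r) => r.maxConfidence >= 0.3)) {
    problems.push("same-PR repeats reached escalation confidence");
  }
  // identical empty commits must not flip the verdict back and forth
  if (new Set(runs.map((r) => r.verdict)).size > 1) {
    problems.push("verdict oscillated across runs");
  }
  return problems;
}
```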
Unit-tested both failure modes: same-PR × 9 repeats = conf=0.11
(info); cross-PR × 5 distinct = conf=1.00 (block). The rating
function correctly discriminates.
Lakehouse Claim Auditor
A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.
Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:
- Static diff — grep/parse looking for placeholder patterns
- Dynamic — runs the never-before-executed hybrid test fixture
- Cloud inference — asks `gpt-oss:120b` via `/v1/chat` to identify gaps in the diff
- KB query — looks up `data/_kb/` + observer for prior failure patterns on similar claims
Verdict is assembled, posted to Gitea as:
- A failing commit status (hard block — branch protection prevents merge)
- A review comment explaining every finding
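A minimal sketch of one audit pass against the Gitea API; GITEA_URL, GITEA_TOKEN, runChecks and assembleVerdict are placeholders, and only the Gitea endpoints themselves are real:

```ts
// Sketch only — the real auditor/index.ts is structured differently.
type Finding = { severity: "info" | "warn" | "block"; summary: string };
type Verdict = { block: boolean; summary: string; report: string };

// Placeholder stubs: the four checks and the policy live elsewhere.
declare function runChecks(pr: any): Promise<Finding[]>;
declare function assembleVerdict(findings: Finding[]): Verdict;

const base = process.env.GITEA_URL;   // assumed to point at .../api/v1
const headers = {
  Authorization: `token ${process.env.GITEA_TOKEN}`,
  "Content-Type": "application/json",
};

async function auditOpenPrs(owner: string, repo: string) {
  const prs = await fetch(`${base}/repos/${owner}/${repo}/pulls?state=open`, { headers })
    .then((r) => r.json());

  for (const pr of prs) {
    const findings = await runChecks(pr);
    const verdict = assembleVerdict(findings);

    // Hard block: failing commit status under the lakehouse/auditor context.
    await fetch(`${base}/repos/${owner}/${repo}/statuses/${pr.head.sha}`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        state: verdict.block ? "failure" : "success",
        context: "lakehouse/auditor",
        description: verdict.summary,
      }),
    });

    // Explain every finding on the PR (posted here as a plain comment; the
    // real auditor posts a review).
    await fetch(`${base}/repos/${owner}/${repo}/issues/${pr.number}/comments`, {
      method: "POST",
      headers,
      body: JSON.stringify({ body: verdict.report }),
    });
  }
}
```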
Run manually
```
cd /home/profit/lakehouse
bun run auditor/index.ts
```
Defaults: polls every 90s; stops when an `auditor.paused` file is present.
State
- `data/_auditor/state.json` — last-audited head SHA per PR
- `data/_auditor/verdicts/{pr}-{sha}.json` — per-run verdict record
- `data/_kb/audit_lessons.jsonl` — one row per block/warn finding, path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
- `data/_kb/scrum_reviews.jsonl` — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
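For illustration, a kb_query-style pass could reuse the shared Phase 1 aggregator over this file; the JSONL row fields (signature, pr) and the option semantics here are assumptions:

```ts
// Hypothetical usage of the shared aggregator from auditor/kb_index.ts; the
// row fields `signature` and `pr`, and what checkFn/tailLimit mean exactly,
// are assumed rather than taken from the real code.
import { aggregate, ratingSeverity } from "./kb_index";

const lessons = await aggregate("data/_kb/audit_lessons.jsonl", {
  keyFn: (row: any) => row.signature, // path-agnostic dedup signature
  scopeFn: (row: any) => row.pr,      // one scope per PR
  checkFn: (_row: any) => true,       // keep every block/warn row
  tailLimit: 2000,                    // only the recent tail
});

for (const [sig, agg] of lessons) {
  const severity = ratingSeverity(agg); // 2+ PRs → info, 3-4 → warn, 5+ → block
  // ...surface recurring signatures as findings on the current audit
}
```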
Where YOU edit
auditor/policy.ts — the verdict assembler. Controls which findings
block vs warn vs inform. All other code is mechanical: fetching,
running checks, posting to Gitea.
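A sketch of the shape such an assembler could take; the Finding and Verdict types are assumptions, not the real policy.ts:

```ts
// Sketch only — which severities each check can emit, and how they combine,
// is exactly the policy this file owns.
type Finding = {
  check: "static" | "dynamic" | "cloud" | "kb";
  severity: "info" | "warn" | "block";
  summary: string;
};

type Verdict = { block: boolean; summary: string; report: string };

function assembleVerdict(findings: Finding[]): Verdict {
  const blocking = findings.filter((f) => f.severity === "block");
  const warnings = findings.filter((f) => f.severity === "warn");
  return {
    block: blocking.length > 0,
    summary: blocking.length > 0
      ? `${blocking.length} blocking issue(s): ${blocking[0].summary}`
      : warnings.length > 0
        ? `${warnings.length} warning(s), no blockers`
        : "claims backed by the diff",
    report: findings.map((f) => `- [${f.severity}] ${f.check}: ${f.summary}`).join("\n"),
  };
}
```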
Hard-block mechanism
- Commit status is posted as `failure` with context `lakehouse/auditor`
- If `main` branch protection requires the `lakehouse/auditor` status to pass, Gitea prevents merge
- When code is fixed and re-audit passes, status flips to `success`, merge unblocks
Enable branch protection (one-time, via Gitea UI or API):
```
POST /repos/profit/lakehouse/branch_protections
{"branch_name": "main", "enable_status_check": true, "status_check_contexts": ["lakehouse/auditor"]}
```
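The same call can be scripted; a sketch assuming GITEA_URL points at the /api/v1 base and GITEA_TOKEN has admin rights on the repo:

```ts
// One-time setup via the API instead of the UI; environment names are assumed.
const res = await fetch(
  `${process.env.GITEA_URL}/repos/profit/lakehouse/branch_protections`,
  {
    method: "POST",
    headers: {
      Authorization: `token ${process.env.GITEA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      branch_name: "main",
      enable_status_check: true,
      status_check_contexts: ["lakehouse/auditor"],
    }),
  },
);
if (!res.ok) throw new Error(`branch protection setup failed: ${res.status}`);
```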