The auditor's own audit on commit ebd9ab7 produced 10 kimi_architect
findings; 2 are real correctness issues that this commit lands. The
other 8 are documented in the commit body as triaged-skip with
rationale (false flags, defensible by current intent, or edge cases).
LANDED:
1. auditor/index.ts — atomic state mutation on audit count.
`state.audit_count_per_pr[prKey] += 1` was held in memory until
the cycle's saveState at the end. If the daemon was killed mid-
cycle (SIGTERM, OOM, panic), the count was lost on restart while
the on-disk last_audited still showed the SHA as audited — the cap
silently leaked one audit per crash. Fix: persist state immediately
after each successful audit so the increment survives a crash.
saveState is idempotent + cheap (single JSON write); per-audit
cost negligible.
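A minimal sketch of the shape of this fix, under assumed names (`AuditorState`, `recordAudit`, and the default path are illustrative, not necessarily the repo's actual identifiers):

```typescript
import { writeFile } from "node:fs/promises";

// Illustrative state shape; field names mirror the commit message.
interface AuditorState {
  audit_count_per_pr: Record<string, number>;
  last_audited: Record<string, string>; // prKey -> head SHA
}

// Single cheap JSON write; idempotent, so calling it per audit is safe.
async function saveState(state: AuditorState, path: string) {
  await writeFile(path, JSON.stringify(state, null, 2));
}

// Persist immediately after each successful audit so the increment
// survives SIGTERM/OOM/panic mid-cycle instead of waiting for the
// cycle-end saveState.
async function recordAudit(
  state: AuditorState,
  prKey: string,
  sha: string,
  path = "data/_auditor/state.json",
) {
  state.audit_count_per_pr[prKey] = (state.audit_count_per_pr[prKey] ?? 0) + 1;
  state.last_audited[prKey] = sha;
  await saveState(state, path);
}
```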
2. auditor/checks/inference.ts — Number-coerce mode runner telemetry.
`body?.latency_ms ?? 0` collapses null/undefined but passes through
non-numeric values (string, NaN, etc.) which would poison downstream
arithmetic in maxLatencyMs computation. Added a `num(v)` helper
that does `Number(v)` with `isFinite` fallback to 0. Applied to
latency_ms, enriched_prompt_chars, bug_fingerprints_count,
matrix_chunks_kept.
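A minimal version of the described `num(v)` helper (a sketch; the in-tree implementation may differ):

```typescript
// Coerce any telemetry field to a finite number; fall back to 0 so
// strings, NaN, or Infinity can't poison downstream arithmetic such
// as the maxLatencyMs computation.
function num(v: unknown): number {
  const n = Number(v);
  return Number.isFinite(n) ? n : 0;
}

// At the read site, instead of `body?.latency_ms ?? 0`:
// const latencyMs = num(body?.latency_ms);
```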
SKIPPED with rationale:
- WARN kimi_architect.ts:211 "metrics appended even on empty verdict":
this is intentional — observability shouldn't depend on whether
parseFindings succeeded. Comment in the file explicitly notes this.
- WARN static.ts:270 "escaped-backslash-before-backtick edge case":
real but extremely narrow (Rust raw strings with `\\\\\``). No
observed false positives in production audits; defer.
- INFO kimi_architect.ts:333 "sync existsSync in async fn": existsSync
is a single fast stat syscall; blocking the event loop for it is not
a real perf hit at audit scale (10s of findings per call).
- INFO kimi_architect.ts:105 "audit_index modulo wraparound at 50+
audits": cap=3 means we never reach high counts on any PR.
- INFO inference.ts:366 "prompt injection delimiter risk": OUTPUT
FORMAT delimiter is in our prompt template, not user input; user
data goes inside content sections that don't contain the delimiter.
- WARN Cargo.lock:8739 "truth+validator no Cargo.toml in diff":
false flag — Cargo.toml IS in workspace members (lines 17-18 of
the workspace manifest).
- WARN config/modes.toml:1 "no schema validation": defensible — the
load path validates structure (deserialize_string_or_vec at
mode.rs:175) and falls back to safe default on parse error.
- INFO evidence_record.ts:124 "metadata accepts any keys": values are
constrained to `string | number | boolean`; key-name validation
not warranted for a domain-metadata field.
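The value constraint noted for evidence_record.ts metadata can be sketched as a type plus a runtime guard (names here are illustrative, not the file's actual exports):

```typescript
// Keys are free-form domain metadata; only the value types are constrained.
type MetadataValue = string | number | boolean;
type Metadata = Record<string, MetadataValue>;

function isMetadataValue(v: unknown): v is MetadataValue {
  return typeof v === "string" || typeof v === "number" || typeof v === "boolean";
}
```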
The 13 BLOCK-severity inference findings on this audit are all
"claim not backed" against historical commit messages from earlier
in the branch (8aa7ee9, bc698eb, 5bdd159, etc.). Those are
aspirational prose ("Verified end-to-end") that the deepseek
consensus can't verify from a static diff — known limitation, not
actionable as code fixes.
Verification:
- `bun build auditor/index.ts`: compiles
- `bun build auditor/checks/inference.ts`: compiles
- `systemctl restart lakehouse-auditor`: active
Cap remains active on PR #11 (3/3) — daemon will not audit this
fix-commit. Reset `state.audit_count_per_pr.11` to verify the fixes
land clean on a fresh audit when ready.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Lakehouse Claim Auditor
A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.
Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks:
- Static diff — grep/parse looking for placeholder patterns
- Dynamic — runs the never-before-executed hybrid test fixture
- Cloud inference — asks `gpt-oss:120b` via `/v1/chat` to identify gaps in the diff
- KB query — looks up `data/_kb/` + observer for prior failure patterns on similar claims
Verdict is assembled, posted to Gitea as:
- A failing commit status (hard block — branch protection prevents merge)
- A review comment explaining every finding
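Posting the status goes through Gitea's commit-status endpoint (`POST /api/v1/repos/{owner}/{repo}/statuses/{sha}`). A hedged sketch of what that call might look like; `buildStatus`, `postStatus`, and the env var names are assumptions, not the repo's actual code:

```typescript
type StatusState = "pending" | "success" | "failure";

function buildStatus(state: StatusState, summary: string) {
  return {
    state,                              // "failure" hard-blocks once branch protection requires the context
    context: "lakehouse/auditor",       // must match the protected status context
    description: summary.slice(0, 255), // keep the description short
  };
}

async function postStatus(sha: string, state: StatusState, summary: string) {
  const base = process.env.GITEA_URL ?? "http://localhost:3000";
  await fetch(`${base}/api/v1/repos/profit/lakehouse/statuses/${sha}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `token ${process.env.GITEA_TOKEN}`,
    },
    body: JSON.stringify(buildStatus(state, summary)),
  });
}
```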
Run manually
cd /home/profit/lakehouse
bun run auditor/index.ts
Defaults: polls every 90s; stops when an `auditor.paused` file is present.
State
- `data/_auditor/state.json` — last-audited head SHA per PR
- `data/_auditor/verdicts/{pr}-{sha}.json` — per-run verdict record
- `data/_kb/audit_lessons.jsonl` — one row per block/warn finding, path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with same signature → info, 3-4 → warn, 5+ → block). This is how the auditor learns.
- `data/_kb/scrum_reviews.jsonl` — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
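The recurrence escalation described above (2+ distinct PRs with the same signature → info, 3-4 → warn, 5+ → block) can be sketched as a small threshold function (the name and severity type are assumptions for illustration):

```typescript
type Severity = "none" | "info" | "warn" | "block";

// Map how many distinct PRs share a lesson signature to a severity.
function recurrenceSeverity(distinctPrCount: number): Severity {
  if (distinctPrCount >= 5) return "block";
  if (distinctPrCount >= 3) return "warn";
  if (distinctPrCount >= 2) return "info";
  return "none"; // seen on fewer than 2 PRs: not yet a pattern
}
```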
Where YOU edit
auditor/policy.ts — the verdict assembler. Controls which findings
block vs warn vs inform. All other code is mechanical: fetching,
running checks, posting to Gitea.
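A hypothetical sketch of the kind of logic policy.ts owns — deciding which findings block versus merely warn, and what status gets posted. The `Finding` shape is an assumption, not the actual module's API:

```typescript
type FindingSeverity = "info" | "warn" | "block";

interface Finding {
  check: "static" | "dynamic" | "inference" | "kb_query";
  severity: FindingSeverity;
  message: string;
}

// Any block-severity finding fails the commit status (preventing merge
// under branch protection); warn/info findings only appear in the
// review comment.
function assembleVerdict(findings: Finding[]) {
  const blocking = findings.filter(f => f.severity === "block");
  return {
    status: blocking.length > 0 ? "failure" : "success",
    blocking,
    advisory: findings.filter(f => f.severity !== "block"),
  };
}
```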
Hard-block mechanism
- Commit status is posted as `failure` with context `lakehouse/auditor`
- If `main` branch protection requires the `lakehouse/auditor` status to pass, Gitea prevents merge
- When code is fixed and re-audit passes, status flips to `success`, merge unblocks
Enable branch protection (one-time, via Gitea UI or API):
`POST /repos/profit/lakehouse/branch_protections` with body `{"branch_name": "main", "required_status_checks": {"contexts": ["lakehouse/auditor"]}}`