auditor + aibridge: 6 fixes from Opus 4.7 self-audit on PR #11
The kimi_architect auditor on commit 00c8408 ran with auto-promotion
to claude-opus-4-7 (diff > 100k chars) and produced 10 grounded
findings: 1 BLOCK + 6 WARN + 3 INFO. This commit lands 6 of them; 3
are skipped (false positives or out-of-scope cleanup deferred).

LANDED:

1. kimi_architect.ts:144  empty-parse cache poisoning. When parseFindings
   returns 0 findings (markdown shape changed, prompt too big, regex
   missed every block), the verdict was still persisted with empty
   findings, and the 24h TTL cache short-circuited every subsequent
   audit with a useless "0 findings" hit. Fix: only persist when
   findings.length > 0; metrics still appended unconditionally.
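
   A minimal sketch of the guard; recordAuditResult, persistVerdict and
   appendMetrics are stand-in names, not the real helpers:

     // Sketch only: helper names and shapes are assumptions.
     function recordAuditResult(
       findings: unknown[],
       persistVerdict: (f: unknown[]) => void,
       appendMetrics: () => void,
     ): void {
       appendMetrics();            // metrics still appended unconditionally
       if (findings.length > 0) {
         persistVerdict(findings); // only non-empty parses enter the 24h cache
       }
       // An empty parse leaves no verdict on disk, so the next cycle
       // re-audits instead of hitting a useless "0 findings" cache entry.
     }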

2. kimi_architect.ts:122  outage negative-cache. When callKimi throws
   (network error, gateway 502, rate limit), we returned skipFinding
   but didn't note the outage anywhere. Every audit cycle within the
   24h TTL hammered the dead upstream. Fix: write a sentinel file
   `<verdict>.outage` on failure with 10-min TTL; future calls within
   that window short-circuit immediately.
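
   Roughly, assuming Bun/Node fs and guessing the callKimi/skipFinding
   signatures:

     import { existsSync, mkdirSync, statSync, writeFileSync } from "node:fs";
     import { dirname } from "node:path";

     const OUTAGE_TTL_MS = 10 * 60 * 1000;

     async function auditWithNegativeCache(
       verdictPath: string,
       callKimi: () => Promise<string>,
       skipFinding: (reason: string) => null,
     ): Promise<string | null> {
       const outagePath = `${verdictPath}.outage`;
       // Short-circuit while a recent outage sentinel exists.
       if (
         existsSync(outagePath) &&
         Date.now() - statSync(outagePath).mtimeMs < OUTAGE_TTL_MS
       ) {
         return skipFinding("kimi upstream recently down; retrying later");
       }
       try {
         return await callKimi();
       } catch (err) {
         mkdirSync(dirname(outagePath), { recursive: true });
         writeFileSync(outagePath, String(err)); // mtime starts the 10-min window
         return skipFinding("kimi call failed; outage sentinel written");
       }
     }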

3. kimi_architect.ts:331  mkdir(join(p, "..")) -> dirname(p). The
   "/.." idiom resolved correctly via Node path normalization, but it
   was non-idiomatic and could break if the path ever had trailing
   dots. Both the Haiku and Opus self-audits flagged it.

4. inference.ts:202  N=3 consensus latency double/triple-count.
   `totalLatencyMs += run.latency_ms` summed the latencies of three
   calls run in parallel under `Promise.all`, but wall-clock time is
   bounded by the slowest run, not the sum. Renamed to `maxLatencyMs`
   and computed with `Math.max`; telemetry now reports actual
   wall-clock time instead of up to 3x reality.
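
   Shape of the change, with runConsensusOnce as an illustrative stand-in
   for the per-run worker:

     type RunResult = { latency_ms: number };

     async function consensusWallClockMs(
       runConsensusOnce: () => Promise<RunResult>,
     ): Promise<number> {
       const runs = await Promise.all([
         runConsensusOnce(),
         runConsensusOnce(),
         runConsensusOnce(),
       ]);
       // Parallel runs: wall-clock is the slowest run, not the sum of three.
       return Math.max(...runs.map((r) => r.latency_ms));
     }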

5. continuation.rs:198,199,230,231  i64/u64 -> u32 saturating cast.
   `resp.tokens_evaluated as u32` silently truncates when the source
   exceeds u32::MAX instead of saturating. Fix:
   u32::try_from(...).unwrap_or(u32::MAX) replaces the raw cast with a
   real saturating conversion. Applied to both the empty-retry loop
   and the structural-completion continuation loop.

SKIPPED:

- BLOCK at Cargo.lock:8911 "validator-not-in-workspace" — confabulation.
  The diff Opus saw was truncated mid-line; validator IS in
  Cargo.toml workspace members. Real-world MAX_DIFF_CHARS=180k
  edge case to watch as we feed more big diffs.
- WARN at kimi_architect.ts:248 regex absolute-path edge case — minor,
  doesn't affect grounding rate observed so far.
- INFO at inference.ts:606 "dead reconstruction loop" — Opus misread.
  The Promise.all worker fills `summaries[]`; the second loop builds
  a sequential `scratchpad` string from those. Two distinct
  operations, not redundant.

Verification:
  bun build auditor/checks/{kimi_architect,inference}.ts         compiles
  cargo check -p aibridge                                        green
  cargo build --release -p gateway                               green
  systemctl restart lakehouse.service lakehouse-auditor.service  active

Next audit cycle (~90s after push) will run on the new diff and
exercise the negative-cache + dirname + maxLatencyMs paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Lakehouse Claim Auditor

A Bun sub-agent that watches open PRs on Gitea, reads the ship-claims in commit messages and PR bodies, and hard-blocks merges when the code doesn't back the claim.

Rationale: when "compiles + one curl works" gets called "phase shipped," placeholder code accumulates. This auditor runs every 90s, fetches each open PR, and subjects it to four checks (a rough sketch of the flow follows the list):

  1. Static diff — grep/parse looking for placeholder patterns
  2. Dynamic — runs the never-before-executed hybrid test fixture
  3. Cloud inference — asks gpt-oss:120b via /v1/chat to identify gaps in the diff
  4. KB query — looks up data/_kb/ + observer for prior failure patterns on similar claims
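
A rough sketch of that per-PR flow, with hypothetical check and finding shapes (the real modules live under auditor/checks/):

  type Finding = { severity: "block" | "warn" | "info"; message: string };
  type Check = (prNumber: number) => Promise<Finding[]>;

  // Runs the four checks concurrently and hands one flat list of findings
  // to the verdict assembler in auditor/policy.ts.
  async function runChecks(prNumber: number, checks: Check[]): Promise<Finding[]> {
    const results = await Promise.all(checks.map((check) => check(prNumber)));
    return results.flat();
  }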

The verdict is assembled and posted to Gitea as (a sketch of the status call follows the list):

  • A failing commit status (hard block — branch protection prevents merge)
  • A review comment explaining every finding
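
For reference, the hard block is just a commit status; a minimal sketch of that call against Gitea's REST API (GITEA_URL/GITEA_TOKEN handling and the function name are assumptions):

  async function postAuditStatus(sha: string, ok: boolean, description: string) {
    const res = await fetch(
      `${process.env.GITEA_URL}/api/v1/repos/profit/lakehouse/statuses/${sha}`,
      {
        method: "POST",
        headers: {
          Authorization: `token ${process.env.GITEA_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          state: ok ? "success" : "failure",
          context: "lakehouse/auditor",
          description,
        }),
      },
    );
    if (!res.ok) throw new Error(`status post failed: ${res.status}`);
  }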

Run manually

cd /home/profit/lakehouse
bun run auditor/index.ts

Defaults: polls every 90s; stops while an auditor.paused file is present.

State

  • data/_auditor/state.json — last-audited head SHA per PR
  • data/_auditor/verdicts/{pr}-{sha}.json — per-run verdict record
  • data/_kb/audit_lessons.jsonl — one row per block/warn finding, with a path-agnostic signature for dedup. Tailed by kb_query on each audit to surface recurring patterns (2+ distinct PRs with the same signature → info, 3-4 → warn, 5+ → block; see the sketch after this list). This is how the auditor learns.
  • data/_kb/scrum_reviews.jsonl — scrum-master per-file reviews. If a file in the current PR has been scrum-reviewed, kb_query surfaces the review as a finding with the accepted model and attempt count.
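
A sketch of that escalation rule; the Lesson fields shown are assumptions about the JSONL rows:

  type Lesson = { signature: string; pr: number };
  type Severity = "info" | "warn" | "block";

  function escalate(lessons: Lesson[], signature: string): Severity | null {
    // Count distinct PRs that produced this path-agnostic signature.
    const prs = new Set(
      lessons.filter((l) => l.signature === signature).map((l) => l.pr),
    );
    if (prs.size >= 5) return "block";
    if (prs.size >= 3) return "warn";
    if (prs.size >= 2) return "info";
    return null;
  }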

Where YOU edit

auditor/policy.ts — the verdict assembler. Controls which findings block vs warn vs inform. All other code is mechanical: fetching, running checks, posting to Gitea.
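
The assembler boils down to something like this hypothetical shape (not the actual policy.ts):

  type Finding = { severity: "block" | "warn" | "info"; message: string };
  type Verdict = { state: "success" | "failure"; blocks: Finding[]; warns: Finding[] };

  function assembleVerdict(findings: Finding[]): Verdict {
    const blocks = findings.filter((f) => f.severity === "block");
    const warns = findings.filter((f) => f.severity === "warn");
    // Any blocking finding fails the lakehouse/auditor commit status; tune
    // the block/warn/inform mapping here.
    return { state: blocks.length > 0 ? "failure" : "success", blocks, warns };
  }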

Hard-block mechanism

  1. Commit status is posted as failure with context lakehouse/auditor
  2. If main branch protection requires lakehouse/auditor status to pass, Gitea prevents merge
  3. When code is fixed and re-audit passes, status flips to success, merge unblocks

Enable branch protection (one-time, via Gitea UI or API):

  • POST /repos/profit/lakehouse/branch_protections
  • {"branch_name": "main", "required_status_checks": {"contexts": ["lakehouse/auditor"]}}