Acted on 2 of 10 findings Kimi caught when auditing its own
integration on PR #11 head 8d02c7f. Skipped 8 (false positives or
out of scope).
1. crates/gateway/src/v1/kimi.rs — flatten the OpenAI multimodal
   content array to a plain string before forwarding to api.kimi.com.
   The Kimi coding endpoint is text-only; passing a [{type,text},...]
   array returns 400. Use Message::text() to concatenate the text
   parts and drop non-text ones. Verified with curl using array-shape
   content: the gateway now returns "PONG-ARRAY" instead of an
   upstream error.
2. auditor/checks/kimi_architect.ts — switched computeGrounding from
   blocking readFileSync to async readFile inside Promise.all.
   Doesn't matter at 10 findings; would matter at 100+. Removed the
   now-unused readFileSync import.
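The flattening rule from item 1 lives in Rust (kimi.rs), but the shape
transform it performs can be sketched in TypeScript; types and the
function name here are illustrative, not the actual adapter code:

```typescript
// OpenAI-shape messages carry either a plain string or an array of
// typed parts. The Kimi coding endpoint is text-only, so array
// content must collapse to one string before forwarding.
type ContentPart = { type: string; text?: string };
type Content = string | ContentPart[];

function flattenContent(content: Content): string {
  if (typeof content === "string") return content;
  // Keep only text parts; drop image_url and any other non-text parts.
  return content
    .filter((p) => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text as string)
    .join("\n");
}
```

A string passes through untouched, so the change is backward-compatible
with clients that never send the array shape.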
Skipped findings (with reason):
- drift_report.ts:18 schema bump migration concern: the strict
schema_version refusal IS the migration boundary (v1 readers
explicitly fail on v2; not a silent corruption risk).
- replay.ts:383 ISO timestamp precision: Date.toISOString always
emits "YYYY-MM-DDTHH:mm:ss.sssZ" (ms precision). False positive.
- mode.rs:1035 matrix_corpus deserializer compat:
  deserialize_string_or_vec at mode.rs:175 already accepts both
  shapes. Confabulation from not seeing the deserializer in the
  input bundle.
- /etc/lakehouse/kimi.env world-readable: actually 0600 root. Real
concern would be permission-drift; not a code bug.
- callKimi response.json hang: obsolete; we use curl now.
- parseFindings silent-drop: ergonomic concern, not a bug.
- appendMetrics join with "..": works for current path; deferred.
- stubFinding dead-type extension: cosmetic.
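The replay.ts verdict above is directly checkable: ECMA-262 pins the
toISOString output format, so there is no precision variance to guard
against (extended ±YYYYYY years aside):

```typescript
// ECMA-262 specifies Date.prototype.toISOString as exactly
// "YYYY-MM-DDTHH:mm:ss.sssZ": millisecond precision, UTC, always
// (the only exception is the expanded-year form outside 0000-9999).
const iso = new Date(0).toISOString();
const msPrecision = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/;
```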
Self-audit grounding rate at v1.0.0: 10/10 file:line citations
verified by grep. 2 of 10 actionable bugs landed. The other 8 were
correctly flagged as concerns but didn't earn a code change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Four changes:
1. Default provider now ollama_cloud/kimi-k2.6 (env-overridable via
LH_AUDITOR_KIMI_PROVIDER + LH_AUDITOR_KIMI_MODEL). Ollama Cloud Pro
exposes kimi-k2.6 legitimately, so we no longer need the User-Agent-
spoof path through api.kimi.com. Smoke test 2026-04-27:
api.kimi.com 368s 8 findings 8/8 grounded
ollama_cloud 54s 10 findings 10/10 grounded
The kimi.rs adapter (provider=kimi) stays wired as a fallback when
Ollama Cloud is upstream-broken.
2. Switch HTTP transport from Bun's native fetch to curl via Bun.spawn.
Bun fetch has an undocumented ~300s ceiling that AbortController +
setTimeout cannot override; curl honors -m for end-to-end max
transfer time without a hard intrinsic limit. Required for Kimi's
reasoning-heavy responses on big audit prompts.
3. Bug fix Kimi caught in this very file (turtles all the way down):
Number(process.env.LH_AUDITOR_KIMI_MAX_TOKENS ?? 128_000) yields 0
when env is set to empty string — `??` only catches null/undefined.
Switched to Number(env) || 128_000 so empty/0/NaN all fall back.
Same pattern probably exists in other files; future audit pass.
4. Bumped MAX_TOKENS default 12K -> 128K. Kimi K2.6's reasoning_content
counts against this budget but isn't surfaced in OpenAI-shape content;
12K silently produced finish_reason=length with empty content when
reasoning consumed the budget.
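The `??` pitfall from item 3, isolated. The env var name is from the
commit; the wrapper function is illustrative:

```typescript
// `??` substitutes only for null/undefined, so
// LH_AUDITOR_KIMI_MAX_TOKENS="" slips past it and Number("") === 0.
// `||` also rejects 0, NaN, and "", which is the fallback behavior
// actually wanted here.
function maxTokens(raw: string | undefined, fallback = 128_000): number {
  return Number(raw) || fallback; // empty/0/NaN all fall back
}

const limit = maxTokens(process.env.LH_AUDITOR_KIMI_MAX_TOKENS);
```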
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds kimi_architect as a fifth check kind in the auditor. Runs
sequentially after static/dynamic/inference/kb_query, consumes their
findings as context, and asks Kimi For Coding "what did everyone
miss?" — targeting load-bearing issues that deepseek N=3 voting can't
see (compile errors, false telemetry, schema bypasses, determinism
leaks). 7/7 grounded on the distillation v1.0.0 audit experiment
2026-04-27.
Off by default. Enable on the lakehouse-auditor service:
systemctl edit lakehouse-auditor.service
Environment=LH_AUDITOR_KIMI=1
Tunable env (all optional):
- LH_AUDITOR_KIMI_MODEL (default kimi-for-coding)
- LH_AUDITOR_KIMI_MAX_TOKENS (default 12000)
- LH_GATEWAY_URL (default http://localhost:3100)
Guardrails:
- Failure-isolated. Any Kimi error / 429 / TOS revocation returns a
single info-level skip-finding so the existing pipeline never blocks
on a Kimi outage.
- Cost-bounded. Cached verdicts at data/_auditor/kimi_verdicts/<pr>-
<sha>.json with 24h TTL — re-audits within the window return cached
findings instead of re-calling upstream. New commits produce new
SHAs so caching is per-head, not per-day.
- 6min upstream timeout (vs 2min for openrouter inference) — Kimi is
a reasoning model and the audit prompt is large.
- Grounding verification baked in. Every finding's cited file:line is
  grepped against the actual file before the verdict is persisted.
  Per-finding evidence carries [grounding: verified at FILE:LINE] or
  [grounding: line N > EOF] / [grounding: file not found]. The
  confabulation rate goes into data/_kb/kimi_audits.jsonl as
  grounding_rate for "is this still valuable" tracking.
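The grounding check reduces to a small decision, sketched here with an
illustrative helper name (the real check's I/O strategy may differ):

```typescript
import { existsSync, readFileSync } from "node:fs";

// A cited FILE:LINE is verified only if the file exists and has at
// least LINE lines; the three outcome strings mirror the evidence
// tags the auditor attaches to each finding.
function ground(file: string, line: number): string {
  if (!existsSync(file)) return "[grounding: file not found]";
  const total = readFileSync(file, "utf8").split("\n").length;
  if (line > total) return `[grounding: line ${line} > EOF]`;
  return `[grounding: verified at ${file}:${line}]`;
}
```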
Persisted artifacts:
- data/_auditor/kimi_verdicts/<pr>-<sha>.json — full verdict + raw
  Kimi response + grounding
- data/_kb/kimi_audits.jsonl — one row per call: latency, tokens,
  findings, grounding rate
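The per-head cache rule can be sketched as follows; helper names are
illustrative, the <pr>-<sha> key and 24h TTL are from the text:

```typescript
import { existsSync, statSync } from "node:fs";

// Because the cache key is <pr>-<sha>, a new commit (new SHA) always
// misses the cache, even inside the 24h TTL window — caching is
// per-head, not per-day.
const TTL_MS = 24 * 60 * 60 * 1000;

function verdictPath(dir: string, pr: number, sha: string): string {
  return `${dir}/${pr}-${sha}.json`;
}

function cacheFresh(path: string, now = Date.now()): boolean {
  if (!existsSync(path)) return false; // this head was never audited
  return now - statSync(path).mtimeMs < TTL_MS;
}
```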
Verdict-rendering: kimi_architect now appears in the per-check
sections of the human-readable comment posted to PRs (auditor/audit.ts
checkOrder), after kb_query.
Verification:
bun build auditor/checks/kimi_architect.ts compiles
bun build auditor/audit.ts compiles
parser sanity (3-finding fixture) 3/3 lifted correctly
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>