Closes the determinism gap observed in the 3-run baseline test: 1 of 8 findings (the "proven escalation ladder" block) was flipping across identical-state audits.

Root cause: cloud non-determinism at temperature 0 is real in practice, even though it shouldn't exist in theory.

Fix: run the primary reviewer (gpt-oss:120b) N=3 times in parallel (`Promise.all`; wall-clock stays roughly that of a single call because the runs are independent HTTP requests). Aggregate votes per `claim_idx`; majority wins. On a 1-1-1 split, call a tie-breaker model with a different architecture, qwen3-coder:480b (a newer coding specialist with 4x the primary's parameter count and a distinct training lineage, so correlated errors are unlikely).

Every case where the 3 runs disagreed, even when a majority resolved it, is logged to `data/_kb/audit_discrepancies.jsonl` with the vote counts and resolution type. This is how we measure consensus drift over time: the dashboard metric is literally `wc -l audit_discrepancies.jsonl` relative to the audit count.

Verified: 2 back-to-back audits on unchanged PR #8 produced identical sets of 8 findings (1 block + 7 warn), consensus 3/3 on every claim, zero discrepancies logged.

Cost: 3x primary tokens (~7K per audit vs ~2K); wall-clock roughly unchanged because the calls are parallel.

New env vars:

- `LH_AUDITOR_CONSENSUS_N` (default: `3`)
- `LH_AUDITOR_TIEBREAKER_MODEL` (default: `qwen3-coder:480b`)

The cloud call is factored into a `runCloudInference()` helper so the consensus loop stays clean and the tie-breaker reuses the same prompt shape as the primary. A minimal sketch of the consensus loop follows.
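Below is a minimal TypeScript sketch of the voting logic, under some assumptions: `runCloudInference(model, prompt)` resolves to a parsed list of findings keyed by `claim_idx` (its real signature isn't in this PR text, so it's injected as a parameter here), and each finding carries a severity and a verdict string. The `Finding` shape, the `auditWithConsensus()` name, and the `ts` field in the log record are illustrative, not the actual code.

```ts
// Sketch: N parallel primary runs, per-claim majority vote,
// tie-breaker on splits, discrepancy logging. Names other than
// runCloudInference / claim_idx are illustrative.
import { appendFile } from "node:fs/promises";

interface Finding {
  claim_idx: number;
  severity: "block" | "warn";
  verdict: string;
}

type Inference = (model: string, prompt: string) => Promise<Finding[]>;

const CONSENSUS_N = Number(process.env.LH_AUDITOR_CONSENSUS_N ?? "3");
const TIEBREAKER_MODEL =
  process.env.LH_AUDITOR_TIEBREAKER_MODEL ?? "qwen3-coder:480b";
const PRIMARY_MODEL = "gpt-oss:120b";
const DISCREPANCY_LOG = "data/_kb/audit_discrepancies.jsonl";

export async function auditWithConsensus(
  runCloudInference: Inference,
  prompt: string,
): Promise<Finding[]> {
  // Independent HTTP requests, so wall-clock ≈ a single call.
  const runs = await Promise.all(
    Array.from({ length: CONSENSUS_N }, () =>
      runCloudInference(PRIMARY_MODEL, prompt),
    ),
  );

  // Collect every run's finding for each claim_idx.
  const byClaim = new Map<number, Finding[]>();
  for (const run of runs) {
    for (const f of run) {
      byClaim.set(f.claim_idx, [...(byClaim.get(f.claim_idx) ?? []), f]);
    }
  }

  const resolved: Finding[] = [];
  for (const [claimIdx, votes] of byClaim) {
    // Tally structurally identical verdicts.
    const tally = new Map<string, { finding: Finding; count: number }>();
    for (const f of votes) {
      const key = JSON.stringify([f.severity, f.verdict]);
      const slot = tally.get(key) ?? { finding: f, count: 0 };
      slot.count += 1;
      tally.set(key, slot);
    }
    const ranked = [...tally.values()].sort((a, b) => b.count - a.count);

    let winner = ranked[0].finding;
    let resolution: "unanimous" | "majority" | "tiebreaker" =
      ranked[0].count === CONSENSUS_N ? "unanimous" : "majority";

    if (ranked[0].count <= CONSENSUS_N / 2) {
      // No majority (e.g. a 1-1-1 split): a distinct-lineage model
      // decides, reusing the exact prompt shape the primary saw.
      const tb = await runCloudInference(TIEBREAKER_MODEL, prompt);
      winner = tb.find((f) => f.claim_idx === claimIdx) ?? winner;
      resolution = "tiebreaker";
    }

    // Log every non-unanimous claim, majority-resolved or not, so the
    // drift metric (`wc -l` on the log vs. audit count) stays honest.
    if (resolution !== "unanimous") {
      const record = {
        claim_idx: claimIdx,
        votes: ranked.map((r) => r.count),
        resolution,
        ts: new Date().toISOString(),
      };
      await appendFile(DISCREPANCY_LOG, JSON.stringify(record) + "\n");
    }
    resolved.push(winner);
  }
  return resolved;
}
```

Passing the inference function in as a parameter mirrors the `runCloudInference()` refactor: the tie-breaker path reuses the exact prompt the primary saw, so a disagreement is attributable to the model, not the input. Note that a finding present in only some runs counts as a disagreement and gets logged, matching the "even when majority resolved" rule above.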