lakehouse/auditor/schemas/distillation/scratchpad_summary.ts
distillation: Phase 0 recon + Phase 1 schemas + Phase 2 transforms scaffold
Phase 0 — docs/recon/local-distillation-recon.md
Inventories the 23 KB JSONL streams + 20 vector corpora + auditor's
kb_index.ts as substrate for the now.md distillation pipeline. Maps
spec modules to existing producers, identifies real gaps, lists 9
schemas to formalize. ZERO implementation in recon — gating doc only.

Phase 1 — auditor/schemas/distillation/
9 schemas + foundation types + 48 tests passing in 502ms:

  types.ts                      shared validators + canonicalSha256
  evidence_record.ts            EVIDENCE_SCHEMA_VERSION=1, ModelRole enum
  scored_run.ts                 4 categories pinned, anchor_grounding ∈ [0,1]
  receipt.ts                    git_sha 40-char, sha256 file refs, validation_pass:bool
  playbook.ts                   non-empty source_run_ids + acceptance_criteria
  scratchpad_summary.ts         validation_status enum, hash sha256
  model_ledger.ts               success_rate ∈ [0,1], sample_count ≥ 1
  rag_sample.ts                 success_score ∈ {accepted, partially_accepted}
  sft_sample.ts                 quality_score MUST be 'accepted' (no leak)
  preference_sample.ts          chosen != rejected, source_run_ids must differ
  evidence_record.test.ts       10 tests, JSON-fixture round-trip
  schemas.test.ts               30 tests, inline fixtures
  realdata.test.ts              8 tests, real-JSONL probe
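All nine validators share one shape built from the helpers in types.ts. A minimal standalone sketch of that pattern (the `validatePlaybook` below is illustrative; the real one lives in playbook.ts and checks more fields):

```typescript
// Shared validator pattern: accumulate field errors, return a tagged
// result. Mirrors the types.ts helpers but is self-contained here.
type ValidationResult<T> =
  | { valid: true; value: T }
  | { valid: false; errors: string[] };

function requireString(v: unknown, field: string, errors: string[]): boolean {
  if (typeof v !== "string" || v.length === 0) {
    errors.push(`${field}: expected non-empty string`);
    return false;
  }
  return true;
}

interface Playbook {
  title: string;
  source_run_ids: string[];
}

function validatePlaybook(input: unknown): ValidationResult<Playbook> {
  const errors: string[] = [];
  if (typeof input !== "object" || input === null) {
    return { valid: false, errors: ["expected object"] };
  }
  const r = input as Record<string, unknown>;
  let ok = requireString(r.title, "title", errors);
  // Spec non-negotiable: every playbook traces to its source runs.
  if (!Array.isArray(r.source_run_ids) || r.source_run_ids.length === 0) {
    errors.push("source_run_ids: expected non-empty array");
    ok = false;
  }
  if (!ok) return { valid: false, errors };
  return { valid: true, value: r as unknown as Playbook };
}
```

The tagged-union return forces callers to branch on `valid` before touching `value`, which is how the downstream materializer avoids consuming half-validated rows.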

Real-data validation probe (one of the 3 notables from recon):
46 rows across 7 sources, 100% pass; the distilled_facts/procedures
streams are confirmed live. Report at data/_kb/realdata_validation_report.md
(also written by the test). Confirms the schemas fit existing producers
without migration.
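The probe's core loop can be sketched as follows; the real one in realdata.test.ts reads files from disk and writes the markdown report, whereas this self-contained sketch takes an inline JSONL string and a hypothetical validator:

```typescript
// Illustrative JSONL probe: tally rows and validation passes.
// Unparseable lines count as failed rows rather than aborting the probe.
function probeJsonl(
  jsonl: string,
  validate: (row: unknown) => { valid: boolean },
): { rows: number; passed: number } {
  let rows = 0;
  let passed = 0;
  for (const line of jsonl.split("\n")) {
    if (line.trim() === "") continue; // tolerate trailing blank lines
    rows += 1;
    let parsed: unknown;
    try {
      parsed = JSON.parse(line);
    } catch {
      continue; // malformed JSON: counted, not passed
    }
    if (validate(parsed).valid) passed += 1;
  }
  return { rows, passed };
}

// Hypothetical two-row stream: one valid run_id, one empty.
const sample = '{"run_id":"run-1"}\n{"run_id":""}\n';
const report = probeJsonl(sample, (row) => ({
  valid:
    typeof (row as Record<string, unknown>).run_id === "string" &&
    (row as Record<string, unknown>).run_id !== "",
}));
// report.rows === 2, report.passed === 1
```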

Phase 2 scaffold — scripts/distillation/transforms.ts
Promoted PROBES from realdata.test.ts into a real TRANSFORMS array
covering 12 source streams (8 Tier 1 validated + 4 Tier 2 from
recon's untested-streams list). Pure functions: no I/O, no model
calls, no clock reads. Caller supplies recorded_at + sig_hash so
materializer is deterministic by construction.
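The pure-transform contract above can be sketched like this; the `Transform` and field names are illustrative, not the actual transforms.ts API:

```typescript
// Pure transform: no I/O, no model calls, no clock reads. Provenance
// (recorded_at + sig_hash) is injected by the caller, so identical
// inputs always produce identical outputs.
interface Provenance {
  source_file: string;
  sig_hash: string;
  recorded_at: string; // ISO 8601, supplied by the caller, never Date.now()
}

interface Transform<In, Out> {
  source_stream: string;
  apply: (row: In, provenance: Provenance) => Out;
}

// Hypothetical transform lifting a raw fact row into a provenance-carrying record.
const factTransform: Transform<
  { fact: string },
  { fact: string; provenance: Provenance }
> = {
  source_stream: "distilled_facts",
  apply: (row, provenance) => ({ fact: row.fact, provenance }),
};
```

Because nothing inside `apply` can observe the environment, replaying the same rows with the same provenance reproduces the materialized output byte for byte.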

Spec non-negotiables enforced at schema layer (defense in depth):
  - provenance{source_file, sig_hash, recorded_at} required everywhere
  - schema_version mismatch hard-rejects (forward-compat gate)
  - SFT no-leak: validateSftSample REJECTS partially_accepted, rejected,
    needs_human_review — three explicit tests
  - Every score has WHY (reasons non-empty)
  - Every playbook traces to source (source_run_ids non-empty)
  - Every preference has WHY (reason non-empty)
  - Receipts substantive (git_sha 40-char, sha256 64-char, validation_pass:bool)
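The SFT no-leak rule in the list above reduces to a one-line gate; this standalone sketch is illustrative (validateSftSample in sft_sample.ts is the real check):

```typescript
// SFT no-leak gate: only 'accepted' samples may enter the SFT corpus.
// Everything else (partially_accepted, rejected, needs_human_review)
// is rejected outright.
const SFT_ALLOWED: readonly string[] = ["accepted"];

function sftNoLeak(quality_score: string): boolean {
  return SFT_ALLOWED.includes(quality_score);
}
```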

Branch carries uncommitted auditor rebuild work (mode.rs + modes.toml
+ inference.ts + static.ts), blocked on upstream Ollama Cloud kimi-k2
returning 500 Internal Server Error; held pending recon-driven design
decisions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 22:30:38 -05:00


// ScratchpadSummary — structured normalization of a tree-split or
// long-running scratchpad. Distinct from EvidenceRecord because a
// scratchpad accumulates across many calls; this schema captures the
// state at a checkpoint moment.
import {
  ValidationResult,
  requireString,
  requireIsoTimestamp,
  requireProvenance,
  requireStringArray,
} from "./types";

export const SCRATCHPAD_SCHEMA_VERSION = 1;

export interface ScratchpadSummary {
  schema_version: number;
  run_id: string;
  current_objective: string;
  completed_steps: string[];
  failed_steps: string[];
  pending_steps: string[];
  important_paths: string[]; // file paths the scratchpad references
  decisions: string[]; // architectural/scope decisions made
  unresolved_questions: string[];
  validation_status: "pass" | "fail" | "partial" | "pending";
  next_command?: string; // recommendation for next action
  source_scratchpad_hash: string; // sha256 of the full source scratchpad text — diff detection
  summarized_at: string; // ISO 8601
  provenance: { source_file: string; line_offset?: number; sig_hash: string; recorded_at: string };
}

const STATUS = ["pass", "fail", "partial", "pending"];

export function validateScratchpadSummary(input: unknown): ValidationResult<ScratchpadSummary> {
  const errors: string[] = [];
  if (typeof input !== "object" || input === null) return { valid: false, errors: ["expected object"] };
  const r = input as Record<string, unknown>;
  let ok = true;
  if (r.schema_version !== SCRATCHPAD_SCHEMA_VERSION) {
    errors.push(`schema_version: expected ${SCRATCHPAD_SCHEMA_VERSION}, got ${JSON.stringify(r.schema_version)}`);
    ok = false;
  }
  ok = requireString(r.run_id, "run_id", errors) && ok;
  ok = requireString(r.current_objective, "current_objective", errors) && ok;
  ok = requireIsoTimestamp(r.summarized_at, "summarized_at", errors) && ok;
  if (typeof r.source_scratchpad_hash !== "string" || !/^[0-9a-f]{64}$/.test(r.source_scratchpad_hash)) {
    errors.push("source_scratchpad_hash: must be hex sha256");
    ok = false;
  }
  ok = requireStringArray(r.completed_steps, "completed_steps", errors) && ok;
  ok = requireStringArray(r.failed_steps, "failed_steps", errors) && ok;
  ok = requireStringArray(r.pending_steps, "pending_steps", errors) && ok;
  ok = requireStringArray(r.important_paths, "important_paths", errors) && ok;
  ok = requireStringArray(r.decisions, "decisions", errors) && ok;
  ok = requireStringArray(r.unresolved_questions, "unresolved_questions", errors) && ok;
  if (!STATUS.includes(r.validation_status as string)) {
    errors.push(`validation_status: must be one of ${STATUS.join("|")}`);
    ok = false;
  }
  if (r.next_command !== undefined && typeof r.next_command !== "string") {
    errors.push("next_command: expected string when present");
    ok = false;
  }
  ok = requireProvenance(r.provenance, "provenance", errors) && ok;
  if (!ok) return { valid: false, errors };
  return { valid: true, value: r as unknown as ScratchpadSummary };
}