2 Commits
Commit d11632a6fa
staffing: recon + synthetic-data gap report (Phase 0, no implementation)
Checks failed — lakehouse/auditor: 13 blocking issues (e.g. cloud: claim not backed — "Phase 8 done-criteria (per spec):")
Spec mandates these two docs before any staffing audit runner ships:
docs/recon/staffing-lakehouse-distillation-recon.md
reports/staffing/synthetic-data-gap-report.md
NO distillation core touched. Distillation v1.0.0 (commit e7636f2,
tag distillation-v1.0.0) remains the stable substrate. Staffing
work is consumer-only.
Recon findings (12 sections, ~5KB):
- Existing staffing schemas in crates/validator/staffing/* are scaffolds
(FillValidator schema-shape only; worker-existence/status/geo TODOs)
- Synthetic data spans 6+ shapes across 9 parquet files
(~625k worker-shape rows + 1k candidate-shape rows)
- PII detection lives in shared/pii.rs, but enforcement at query
  time is unverified — the LLM may have seen raw PII via the
  workers_500k_v8 vector corpus
- 44 scenarios + 64 playbook_lessons = ~108 RAG candidates
- No structured fill-event log exists; scenarios+lessons are
retrospective, not queryable per-event records
- workers_500k.phone is int (should be string — leading-zero loss)
- client_workerskjkk.parquet is a typo file (160 rows, sibling of
client_workersi.parquet)
- PRD §158 claims Phase 19 closed playbook write-only gap — unverified
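The phone-type finding above is worth making concrete. A minimal sketch (not the repo's script; the 10-digit width is an assumption) of why an int-typed phone column silently drops leading zeros, and how a string conversion restores them:

```typescript
// Hypothetical helper: an integer cannot carry a leading zero, so a
// phone like 0412555123 is stored as 412555123. Converting back to a
// string must pad out to an assumed fixed national width.
function phoneToString(phone: number, width: number): string {
  return String(phone).padStart(width, "0");
}

console.log(phoneToString(412555123, 10)); // "0412555123"
```

This is also why the fix is a one-way normalization: once the leading zero is lost in the int column, the pad width has to be assumed per-locale rather than recovered from the data.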
Gap report findings (9 sections, ~6KB):
- 4 BLOCKING gaps requiring J decisions before audit ships:
A. Generate fill_events.parquet from scenarios + lessons?
B. Build views/{candidates,workers,jobs}_safe with PII masking?
C. Delete client_workerskjkk.parquet typo file?
D. Fix workers_500k.phone type (int → string)?
- 5 SOFT gaps the audit can run with (will be reported as findings)
- 3 NON-gaps (data sufficient as-is)
- Recommendation: NO new synthetic data needed; only normalization
of what already exists, contingent on J approval of A-D
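For gap B, the shape of a _safe view's masking pass can be sketched as below. Field names and masking rules here are illustrative assumptions, not the real schema or the repo's PII policy:

```typescript
// Hypothetical masking for an LLM-facing *_safe view: strip or truncate
// PII fields before any row reaches a corpus build.
interface WorkerRow { name: string; phone: string; email: string; city: string }

function maskWorker(row: WorkerRow): WorkerRow {
  return {
    name: row.name[0] + "***",                     // keep initial only
    phone: row.phone.replace(/\d(?=\d{2})/g, "*"), // keep last 2 digits
    email: row.email.replace(/^[^@]+/, "***"),     // keep domain only
    city: row.city,                                // non-PII passthrough
  };
}

const masked = maskWorker({ name: "Ada", phone: "0412555123", email: "ada@example.com", city: "Perth" });
console.log(masked.phone); // "********23"
```

Whatever the final rules, the point of the gap is that this masking runs at view-build time, so raw parquets never need to be query-safe themselves.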
Up-front commitments:
- Distillation v1.0.0 substrate untouched (verified by audit-full
running clean before+after each staffing change)
- All synthetic-data modifications go through deterministic scripts
  under scripts/staffing/; files are never hand-edited
- Every staffing artifact carries canonical sha256 provenance back
to source parquet/scenario/lesson
- _safe views are the source of truth for LLM-facing text; raw
parquets never directly fed into corpus builds
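The sha256 provenance commitment can be sketched as follows. This is an assumed stand-in, not the repo's canonicalSha256: it hashes a stable JSON encoding (top-level keys sorted) so the same logical record always yields the same digest:

```typescript
// Hypothetical canonical hashing: sort top-level keys before encoding so
// key insertion order cannot change the provenance digest.
import { createHash } from "node:crypto";

function canonicalSha256(value: Record<string, unknown>): string {
  const canonical = JSON.stringify(value, Object.keys(value).sort());
  return createHash("sha256").update(canonical).digest("hex");
}

const provenance = {
  source_file: "data/staffing/workers_500k.parquet", // illustrative path
  sig_hash: canonicalSha256({ b: 2, a: 1 }),
};
// Key order does not change the digest:
console.log(canonicalSha256({ a: 1, b: 2 }) === provenance.sig_hash); // true
```

Canonicalization is what makes the provenance chain auditable: two independently produced copies of the same artifact hash identically.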
Phase 1 unblocks AFTER J reviews both docs and approves audit scope
+ the 4 gap-fix decisions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Commit 27b1d27605
distillation: Phase 0 recon + Phase 1 schemas + Phase 2 transforms scaffold
Checks failed — lakehouse/auditor: 9 blocking issues (e.g. todo!() macro call in tests/real-world/scrum_master_pipeline.ts)
Phase 0 — docs/recon/local-distillation-recon.md
Inventories the 23 KB JSONL streams + 20 vector corpora + auditor's
kb_index.ts as substrate for the now.md distillation pipeline. Maps
spec modules to existing producers, identifies real gaps, lists 9
schemas to formalize. ZERO implementation in recon — gating doc only.
Phase 1 — auditor/schemas/distillation/
9 schemas + foundation types + 48 tests passing in 502ms:
- types.ts: shared validators + canonicalSha256
- evidence_record.ts: EVIDENCE_SCHEMA_VERSION=1, ModelRole enum
- scored_run.ts: 4 categories pinned, anchor_grounding ∈ [0,1]
- receipt.ts: git_sha 40-char, sha256 file refs, validation_pass:bool
- playbook.ts: non-empty source_run_ids + acceptance_criteria
- scratchpad_summary.ts: validation_status enum, hash sha256
- model_ledger.ts: success_rate ∈ [0,1], sample_count ≥ 1
- rag_sample.ts: success_score ∈ {accepted, partially_accepted}
- sft_sample.ts: quality_score MUST be 'accepted' (no leak)
- preference_sample.ts: chosen != rejected, source_run_ids must differ
- evidence_record.test.ts: 10 tests, JSON-fixture round-trip
- schemas.test.ts: 30 tests, inline fixtures
- realdata.test.ts: 8 tests, real-JSONL probe
Real-data validation probe (one of the 3 notables from recon):
46 rows across 7 sources, 100% pass; the distilled_facts/procedures
streams are live. Report at data/_kb/realdata_validation_report.md
(also written by the test). Confirms the schemas fit existing
producers without migration.
Phase 2 scaffold — scripts/distillation/transforms.ts
Promoted PROBES from realdata.test.ts into a real TRANSFORMS array
covering 12 source streams (8 Tier 1 validated + 4 Tier 2 from
recon's untested-streams list). Pure functions: no I/O, no model
calls, no clock reads. Caller supplies recorded_at + sig_hash so
materializer is deterministic by construction.
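The pure-transform contract described above can be sketched like this. The types are illustrative, not the repo's; the point is the shape, with provenance supplied by the caller:

```typescript
// Hypothetical pure transform: no I/O, no model calls, no clock reads.
// recorded_at and sig_hash arrive from the caller, so re-running the
// materializer on the same inputs yields byte-identical output.
interface SourceRow { fact: string }
interface Ctx { source_file: string; sig_hash: string; recorded_at: string }
interface EvidenceRecord {
  schema_version: number;
  text: string;
  provenance: Ctx;
}

type Transform = (row: SourceRow, ctx: Ctx) => EvidenceRecord;

const toEvidence: Transform = (row, ctx) => ({
  schema_version: 1,
  text: row.fact,
  provenance: { ...ctx },
});

const ctx = { source_file: "kb/facts.jsonl", sig_hash: "abc", recorded_at: "2025-01-01T00:00:00Z" };
const out = toEvidence({ fact: "x" }, ctx);
```

Keeping the clock and hashing outside the transform is what makes determinism a structural property rather than a testing burden.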
Spec non-negotiables enforced at schema layer (defense in depth):
- provenance{source_file, sig_hash, recorded_at} required everywhere
- schema_version mismatch hard-rejects (forward-compat gate)
- SFT no-leak: validateSftSample REJECTS partially_accepted, rejected,
needs_human_review — three explicit tests
- Every score has WHY (reasons non-empty)
- Every playbook traces to source (source_run_ids non-empty)
- Every preference has WHY (reason non-empty)
- Receipts substantive (git_sha 40-char, sha256 64-char, validation_pass:bool)
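The SFT no-leak gate above can be sketched as a validator. The enum values come from the commit text; the validator's shape and return type are assumptions:

```typescript
// Hypothetical no-leak gate: only fully 'accepted' runs may become SFT
// samples; every weaker grade is rejected with an explicit reason.
type QualityScore = "accepted" | "partially_accepted" | "rejected" | "needs_human_review";

function validateSftSample(sample: { quality_score: QualityScore }): { ok: boolean; reason?: string } {
  if (sample.quality_score !== "accepted") {
    return { ok: false, reason: `no-leak gate: quality_score '${sample.quality_score}' is not 'accepted'` };
  }
  return { ok: true };
}

console.log(validateSftSample({ quality_score: "accepted" }).ok);            // true
console.log(validateSftSample({ quality_score: "partially_accepted" }).ok); // false
```

Rejecting at the schema layer rather than in the sampler is the defense-in-depth point: even a buggy upstream selector cannot leak a partially_accepted run into SFT data.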
Branch carries uncommitted auditor rebuild work (mode.rs + modes.toml
+ inference.ts + static.ts), blocked on an upstream Ollama Cloud
kimi-k2 HTTP 500 (internal server error); held pending recon-driven
design decisions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>