Surfaced by today's untracked-files audit. None of these are accidents —
several are referenced by name in CLAUDE.md and memory files but were
never added.
Categories:
- docs/PHASE_AUDIT_GUIDE.md (106 LOC) — Claude Code phase audit guidance
- ops/systemd/lakehouse-langfuse-bridge.service — Langfuse bridge unit
- package.json — top-level npm manifest
- scripts/e2e_pipeline_check.sh + production_smoke.sh — real test scripts
- reports/kimi/audit-last-week*.md — the "Two reports live" CLAUDE.md cites
- tests/multi-agent/scenarios/ — 44 staffing scenarios (cutover decision A)
- tests/multi-agent/playbooks/ — 102 playbook records
- tests/battery/, tests/agent_test/PRD.md, tests/real-world/* — real tests
- sidecar/sidecar/{lab_ui,pipeline_lab}.py — 888 LOC dev-only UIs that
remain in service post-sidecar-drop (commit ba928b1 explicitly kept them)
Sensitivity check: scenarios use synthetic company names ("Heritage Foods",
"Cornerstone Fabrication"); audit reports describe code findings only;
no PII or secrets surfaced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# PRD: Chicago Permit Staffing Recommendation

## Mission

You are a staffing-intelligence assistant. Your job is to **analyze a Chicago building permit and produce a one-page staffing recommendation** for our staffing company.

The output is a markdown document that a human staffing coordinator will read in under 2 minutes to decide whether the contract is a staffing fit worth pursuing.

## Critical rules

1. **DO NOT START WRITING THE FINAL ANALYSIS YET.**
   - First, READ this PRD fully.
   - Then, PLAN your approach in `note()` — what steps you will take, what tools you will call, what evidence you will need.
   - Only after planning, begin executing.

2. **Never invent facts.** If you don't have evidence for a claim (from a tool call), do not make the claim. Say "no evidence available" instead.

3. **Cite your sources.** Every factual claim in the final output should reference either:
   - The permit data you read (cite the permit ID)
   - A matrix-retrieved chunk (cite as `[matrix:source:doc_id]`)

4. **Stay focused.** This is a one-page deliverable, not a research paper. Aim for 600-1000 words total.

## Tools available

- `list_permits(min_cost?: number, permit_type?: string)` — list permits matching the filter; by default returns the top 5 by cost
- `read_permit(permit_id: string)` — get full details for one permit
- `query_matrix(query: string, top_k?: number)` — search the knowledge base for relevant context (contractor entities, prior permits, SEC tickers, LLM team patterns)
- `note(text: string)` — append to your working scratchpad (visible to you across iterations)
- `read_scratchpad()` — read your full scratchpad
- `done(summary: string)` — finish; pass your final markdown analysis as `summary`
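
The tool surface above can be sketched as a Python `Protocol`. This is a best-effort mapping of the TypeScript-style signatures, not the harness's actual interface: the return shapes (`list[dict]`, `dict`, `str`) are assumptions, since the PRD does not specify payloads.

```python
from typing import Optional, Protocol

class StaffingTools(Protocol):
    """Hypothetical typed view of the PRD's tool surface.

    Optional TS parameters (``min_cost?``, ``top_k?``) become keyword
    arguments with ``None`` defaults; return types are guesses.
    """

    def list_permits(self, min_cost: Optional[float] = None,
                     permit_type: Optional[str] = None) -> list[dict]: ...
    def read_permit(self, permit_id: str) -> dict: ...
    def query_matrix(self, query: str, top_k: Optional[int] = None) -> list[dict]: ...
    def note(self, text: str) -> None: ...
    def read_scratchpad(self) -> str: ...
    def done(self, summary: str) -> None: ...
```
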
## Required output structure

When you call `done(summary=...)`, the summary should contain:

```markdown
# Staffing Recommendation: Permit <ID>

## Permit Summary
[2-3 sentences: type, cost, address, scope of work]

## Contractor Profile
[What we know about the contractor(s) from matrix evidence. If no matrix hits, say so explicitly.]

## Staffing Implications
[What trades and headcount this permit implies. Ground it in the work description.]

## Risk Signals
[Any matrix hits suggesting caution: debarment, prior incidents, low-quality history. If none, say so.]

## Recommendation
[Pursue / Pass / Investigate further, with a one-sentence rationale.]
```
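
A grader or self-check for this structure can be as simple as a substring scan. This helper is hypothetical (the PRD does not ship one); the heading strings are taken verbatim from the template above.

```python
REQUIRED_HEADINGS = (
    "## Permit Summary",
    "## Contractor Profile",
    "## Staffing Implications",
    "## Risk Signals",
    "## Recommendation",
)

def missing_sections(summary: str) -> list[str]:
    # Return every required heading absent from the final markdown.
    return [h for h in REQUIRED_HEADINGS if h not in summary]
```

An empty return value means the summary carries all five required sections.
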
## Example workflow (do not copy verbatim)

1. Note your plan: "I will list 5 mid-range permits, pick one with a private contractor, read it fully, query the matrix for the contractor name, then write the recommendation."
2. Call `list_permits(min_cost=100000)` → see candidates
3. **PICK A PERMIT WITH A PRIVATE CONTRACTOR (a person's name or a private LLC), NOT a government agency** such as CDOT or the City of Chicago. Government permits have no useful contractor profile to recommend on.
4. Call `read_permit(id)` → see all fields
5. Call `query_matrix("<contractor name> contractor Chicago renovation")` → see what the matrix has
6. Note any evidence found, gaps, and surprises
7. Call `done(summary="<final markdown>")`
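
The steps above can be dry-run against a tiny in-memory stand-in. Everything here is invented for illustration — the permit records, the `FakeTools` class, and the government flag; the real tools and data come from the harness.

```python
# Hypothetical fixture data: one government permit, one private contractor.
PERMITS = [
    {"id": "P-1001", "contractor": "City of Chicago (CDOT)",
     "cost": 250000, "government": True},
    {"id": "P-1002", "contractor": "Example Masonry LLC",
     "cost": 180000, "government": False},
]

class FakeTools:
    """Minimal stand-in for the PRD's tool surface (assumption, not the harness)."""

    def __init__(self):
        self._pad = []

    def note(self, text):
        self._pad.append(text)  # append to the working scratchpad

    def list_permits(self, min_cost=None, permit_type=None):
        return [p for p in PERMITS if min_cost is None or p["cost"] >= min_cost]

    def read_permit(self, permit_id):
        return next(p for p in PERMITS if p["id"] == permit_id)

    def query_matrix(self, query, top_k=None):
        return []  # pretend the knowledge base has no hits for this contractor

    def done(self, summary):
        self.final = summary

tools = FakeTools()
tools.note("Plan: list permits >= $100k, pick a private contractor, query matrix.")
candidates = tools.list_permits(min_cost=100_000)
pick = next(p for p in candidates if not p["government"])  # skip CDOT etc.
detail = tools.read_permit(pick["id"])
hits = tools.query_matrix(f"{detail['contractor']} contractor Chicago renovation")
tools.note(f"Matrix hits: {len(hits)} — say so explicitly in the report")
tools.done(summary=f"# Staffing Recommendation: Permit {detail['id']}\n...")
```

Note how step 3's rule shows up in code: the government permit is filtered out before any matrix query is spent on it.
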
## Success criteria

- You called `done()` with a summary that follows the required structure
- Every factual claim has a source (permit ID or matrix citation)
- Total output is 600-1000 words
- You did not invent contractor names, prior incidents, or capabilities
- Your plan was noted BEFORE execution started
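
The word budget in the criteria above can be approximated with a whitespace-token count. This helper is a sketch, not the actual grading rule — a real grader may tokenize differently.

```python
def within_word_budget(summary: str, lo: int = 600, hi: int = 1000) -> bool:
    # Rough check of the 600-1000 word target using whitespace tokens.
    return lo <= len(summary.split()) <= hi
```
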
## What "good" looks like

- The plan is concrete (which permit, which queries)
- Matrix queries are specific (contractor name + work type, not "find anything about this")
- When the matrix returns nothing useful, you say so honestly
- The recommendation reflects the actual evidence, not boilerplate

## What "bad" looks like

- Skipping the plan and jumping straight to execution
- Making up contractor histories with no matrix evidence
- Generic recommendations that don't reference the actual permit
- Walls of text or structured padding to look thorough

## Begin

Start by acknowledging that you've read this PRD and noting your plan via `note()`. Then proceed.