Three new scripts (run in order) plus one pipeline extension:
scripts/dump_raw_corpus.sh
- One-shot bash that creates MinIO bucket `raw` and uploads all
testing corpora as a persistent immutable test set. 365 MB total
across 5 prefixes (chicago, entities, sec, staffing, llm_team)
+ MANIFEST.json. Sources: workers_500k.parquet (309 MB),
resumes.parquet, entities.jsonl, sec_company_tickers.json,
Chicago permits last 30d (2,853 records, 5.4 MB), 9 LLM Team
Postgres tables dumped via row_to_json.
scripts/vectorize_raw_corpus.ts
- Bun script that fetches each raw-bucket source via mc, runs a
source-specific extractor into {id, text} docs, posts to
/vectors/index, polls job to completion. Verified results:
chicago_permits_v1: 3,420 chunks
entity_brief_v1: 634 chunks
sec_tickers_v1: 10,341 chunks (after extractor fix for
wrapped {rows: {...}} JSON shape)
llm_team_runs_v1: in flight, 19K+ chunks
llm_team_response_cache_v1: queued
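
A minimal sketch of the sec_tickers_v1 extractor fix described above, assuming
the wrapped {"rows": {...}} shape; the record fields (cik_str, ticker, title)
and the id scheme are illustrative assumptions, and the real extractor lives in
scripts/vectorize_raw_corpus.ts:

```shell
# Hypothetical sketch: unwrap the {"rows": {...}} envelope before emitting
# {id, text} docs. Sample data and field names are assumptions.
cat > /tmp/sec_sample.json <<'EOF'
{"rows": {"0": {"cik_str": 320193, "ticker": "AAPL", "title": "Apple Inc."},
          "1": {"cik_str": 789019, "ticker": "MSFT", "title": "Microsoft Corp"}}}
EOF
python3 - <<'PY' > /tmp/sec_docs.jsonl
import json
data = json.load(open('/tmp/sec_sample.json'))
rows = data.get('rows', data)   # the fix: tolerate both wrapped and bare shapes
for idx, rec in rows.items():
    print(json.dumps({'id': f"sec:{rec['cik_str']}",
                      'text': f"{rec['ticker']} {rec['title']}"}))
PY
cat /tmp/sec_docs.jsonl
```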
scripts/analyze_chicago_contracts.ts
- Real inference pipeline that picks N high-cost permits with
named contractors from the raw bucket, queries all 6 contract-
analysis corpora in parallel via /vectors/search, builds a
MATRIX CONTEXT preamble, calls Grok 4.1 fast for structured
staffing analysis, hand-reviews each via observer /review,
appends to data/_kb/contract_analyses.jsonl.
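
The MATRIX CONTEXT assembly step can be sketched roughly as below; the hit
shape {corpus, score, text}, the sample data, and the paths are assumptions,
not the script's actual types:

```shell
# Hypothetical sketch: merge per-corpus /vectors/search hits into one
# labelled MATRIX CONTEXT preamble ahead of the model call.
cat > /tmp/hits.jsonl <<'EOF'
{"corpus": "chicago_permits_v1", "score": 0.91, "text": "Permit 100123: electrical, $2.4M"}
{"corpus": "entity_brief_v1", "score": 0.88, "text": "Acme Electric LLC: 340 employees"}
EOF
python3 - <<'PY' > /tmp/preamble.txt
import itertools, json
hits = [json.loads(line) for line in open('/tmp/hits.jsonl')]
hits.sort(key=lambda h: (h['corpus'], -h['score']))   # group by corpus, best first
print("=== MATRIX CONTEXT ===")
for corpus, group in itertools.groupby(hits, key=lambda h: h['corpus']):
    print(f"[{corpus}]")
    for h in group:
        print(f"  ({h['score']:.2f}) {h['text']}")
PY
cat /tmp/preamble.txt
```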
tests/real-world/scrum_master_pipeline.ts
- MATRIX_CORPORA_FOR_TASK extended with two new task classes:
contract_analysis (chicago + entity_brief + sec + llm_team_runs
+ llm_team_response_cache + distilled_procedural)
staffing_inference (workers_500k_v8 + entity_brief + chicago
+ llm_team_runs + distilled_procedural)
scrum_review unchanged.
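
The extended mapping, mirrored as plain data; the real definition is
TypeScript in scrum_master_pipeline.ts, and the versioned corpus ids are
assumptions based on the index names listed above:

```shell
python3 - <<'PY' | tee /tmp/matrix_map.txt
# Illustrative mirror of MATRIX_CORPORA_FOR_TASK after the extension.
MATRIX_CORPORA_FOR_TASK = {
    "contract_analysis": [
        "chicago_permits_v1", "entity_brief_v1", "sec_tickers_v1",
        "llm_team_runs_v1", "llm_team_response_cache_v1", "distilled_procedural",
    ],
    "staffing_inference": [
        "workers_500k_v8", "entity_brief_v1", "chicago_permits_v1",
        "llm_team_runs_v1", "distilled_procedural",
    ],
}
for task, corpora in MATRIX_CORPORA_FOR_TASK.items():
    print(f"{task}: {len(corpora)} corpora")
PY
```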
This is the first time the matrix architecture operates on real
ingested data instead of code-review smoke tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
96 lines · 3.9 KiB · Bash · Executable File
#!/bin/bash
# One-shot dump of all testing data into the `raw` MinIO bucket.
# Persistent test corpus so we don't re-extract every run.
#
# Layout:
#   raw/
#     staffing/     — workers_500k.parquet, resumes.parquet
#     entities/     — entities.jsonl, sec_company_tickers.json
#     llm_team/     — *.jsonl extracts from knowledge_base PG tables
#     chicago/      — permits_<date>.json (last 30 days)
#     MANIFEST.json — documents what's here + when

set -euo pipefail

REPO=/home/profit/lakehouse
BUCKET=raw
ALIAS=local
STAGE=$(mktemp -d /tmp/raw_dump.XXXXX)
trap 'rm -rf "$STAGE"' EXIT
DATE=$(date -u +%Y-%m-%d)

log() { echo "[dump $(date -u +%H:%M:%S)] $*"; }

log "creating bucket ${ALIAS}/${BUCKET} (idempotent)"
mc mb --ignore-existing ${ALIAS}/${BUCKET}
# ─── 1. STAFFING ───
log "staffing/ — workers_500k.parquet (323 MB) + resumes.parquet"
mc cp -q ${REPO}/data/datasets/workers_500k.parquet ${ALIAS}/${BUCKET}/staffing/workers_500k.parquet
mc cp -q ${REPO}/data/datasets/resumes.parquet ${ALIAS}/${BUCKET}/staffing/resumes.parquet

# ─── 2. ENTITIES + SEC + GEO ───
log "entities/ — contractor entities cache + SEC tickers + svep + tif districts"
mc cp -q ${REPO}/data/_entity_cache/entities.jsonl ${ALIAS}/${BUCKET}/entities/entities.jsonl
mc cp -q ${REPO}/data/_entity_cache/sec_company_tickers.json ${ALIAS}/${BUCKET}/sec/company_tickers.json
mc cp -q ${REPO}/data/_entity_cache/svep_log.json ${ALIAS}/${BUCKET}/entities/svep_log.json
mc cp -q ${REPO}/data/_entity_cache/tif_districts.geojson ${ALIAS}/${BUCKET}/chicago/tif_districts.geojson
# ─── 3. LLM TEAM HISTORY (Postgres → JSONL → S3) ───
log "llm_team/ — extracting from knowledge_base PG tables"
LLM_TABLES=(team_runs pipeline_runs lab_experiments lab_trials meta_pipelines meta_runs conversations response_cache memory_entries adaptive_runs)
for tbl in "${LLM_TABLES[@]}"; do
  out=${STAGE}/${tbl}.jsonl
  rows=$(sudo -u postgres psql -d knowledge_base -At -c "SELECT COUNT(*) FROM ${tbl};" 2>/dev/null || echo 0)
  if [ "$rows" -eq 0 ]; then
    log " · ${tbl}: 0 rows, skipping"
    continue
  fi
  sudo -u postgres psql -d knowledge_base -At -c "COPY (SELECT row_to_json(t) FROM ${tbl} t) TO STDOUT;" > "$out" 2>/dev/null
  size=$(du -h "$out" | awk '{print $1}')
  log " · ${tbl}: ${rows} rows (${size})"
  mc cp -q "$out" ${ALIAS}/${BUCKET}/llm_team/${tbl}.jsonl
done
# ─── 4. CHICAGO PERMITS (last 30 days, one request capped at 10k rows) ───
log "chicago/ — pulling last 30 days of permits from data.cityofchicago.org"
since=$(date -u -d '30 days ago' +%Y-%m-%d)
out=${STAGE}/permits_${DATE}.json
url="https://data.cityofchicago.org/resource/ydr8-5enu.json?\$where=issue_date%3E='${since}'&\$limit=10000&\$order=issue_date%20DESC"
if curl -sf --max-time 60 "$url" -o "$out"; then
  count=$(python3 -c "import json; print(len(json.load(open('${out}'))))")
  size=$(du -h "$out" | awk '{print $1}')
  log " · permits since ${since}: ${count} records (${size})"
  mc cp -q "$out" ${ALIAS}/${BUCKET}/chicago/permits_${DATE}.json
else
  log " · WARN: chicago permits fetch failed; skipping"
fi
# ─── 5. MANIFEST ───
log "writing MANIFEST.json"
manifest=${STAGE}/MANIFEST.json
python3 - <<PY
import json, subprocess, datetime
out = subprocess.check_output(['mc','ls','-r','--json','${ALIAS}/${BUCKET}'], text=True)
items = []
for line in out.strip().split('\n'):
    if not line: continue
    o = json.loads(line)
    items.append({'key': o.get('key',''), 'size': o.get('size',0)})
total_size = sum(i['size'] for i in items)
manifest = {
    'bucket': '${BUCKET}',
    'created_at': datetime.datetime.utcnow().isoformat() + 'Z',
    'total_objects': len(items),
    'total_size_bytes': total_size,
    'total_size_human': f'{total_size / (1024*1024):.1f} MB',
    'items': items,
}
with open('${manifest}','w') as f:
    json.dump(manifest, f, indent=2)
PY
mc cp -q "$manifest" ${ALIAS}/${BUCKET}/MANIFEST.json
log "DONE. Bucket contents:"
mc ls -r ${ALIAS}/${BUCKET} | head -30
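
Section 4 of the script pulls the whole 30-day window in one request capped at
$limit=10000. If a window ever exceeds that, the Socrata SODA API's $offset
parameter supports true pagination. A hypothetical sketch, where build_url and
the paging loop are illustrative and only the URL construction actually runs:

```shell
# Hypothetical pagination helper for the SODA endpoint used above.
# The paging loop is shown commented out so the sketch runs offline.
build_url() {
  local since=$1 limit=$2 offset=$3
  echo "https://data.cityofchicago.org/resource/ydr8-5enu.json?\$where=issue_date%3E='${since}'&\$limit=${limit}&\$offset=${offset}&\$order=issue_date%20DESC"
}
# offset=0
# while :; do
#   curl -sf "$(build_url "$since" 10000 "$offset")" -o page.json
#   n=$(python3 -c "import json; print(len(json.load(open('page.json'))))")
#   [ "$n" -lt 10000 ] && break   # short page => last page
#   offset=$((offset + 10000))
# done
build_url 2025-01-01 10000 0 | tee /tmp/soda_url.txt
```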