Upload your ATS export, your worker roster, or any CSV with a name column.
The wizard auto-detects columns, flags PII, previews the first rows, and then ingests the file
as a queryable Parquet dataset. Everything that follows — hybrid search,
playbook ranking, pattern discovery — works against your data automatically.
Pick a file
Drag a CSV in, pick one from disk, or use the sample roster to see the flow without any real data.
Columns auto-typed. PII columns flagged. First rows previewed. Nothing is written to the system yet — this is a read-only dry-run.
Columns detected
First rows
What happens next. On commit, the file is sent to /ingest/file — the same
endpoint every other ingest path uses. The Rust gateway writes it to object storage as
Parquet, computes a schema fingerprint, registers it in the catalog, and auto-detects PII
columns server-side. Re-uploading the same file is a no-op (deduplicated by content hash).
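A minimal sketch of that commit call, assuming the gateway accepts a standard multipart upload and a "dataset" form field for the table name (the base URL and field names here are illustrative, not confirmed by the API):

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed gateway address; adjust for your deployment

# Send the CSV to the same /ingest/file endpoint the wizard uses on commit.
# The "dataset" form field naming the target table is an assumption.
with open("worker_roster.csv", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/ingest/file",
        files={"file": ("worker_roster.csv", f, "text/csv")},
        data={"dataset": "worker_roster"},
    )
resp.raise_for_status()

# Re-posting the identical file is a no-op: the gateway deduplicates by
# content hash, so the catalog keeps a single Parquet copy.
print(resp.json())
```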
Name it and commit
Give the dataset a queryable name. This becomes the table you can SELECT * FROM immediately after commit.
Use lowercase + underscores. Once committed: queryable via /query/sql,
searchable via /search, indexable via /vectors/index.
Your dataset is live
From here, the rest of the system applies to your data with zero additional setup.
What you can do right now
Query via SQL. POST /query/sql with SELECT * FROM your_dataset LIMIT 10 (see the request sketch after this list).
Build a vector index. POST /vectors/index with {"dataset":"your_dataset","text_column":"skills"}. Embeddings stream in; the index is queryable progressively.
Search via the dashboard. Open the dashboard and use the "Search all workers" box; results come from your data.
Track with playbook memory. Every Call/SMS/No-show click on a worker card trains the system on your data.
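A short sketch of the first two actions over HTTP. The /vectors/index body is taken from the item above; the {"sql": ...} body shape for /query/sql and the base URL are assumptions:

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed gateway address; adjust for your deployment

# 1. Query the committed dataset with SQL.
#    The {"sql": ...} request body shape is an assumption about /query/sql.
resp = requests.post(
    f"{BASE_URL}/query/sql",
    json={"sql": "SELECT * FROM your_dataset LIMIT 10"},
)
resp.raise_for_status()
print(resp.json())

# 2. Build a vector index over the skills column (body shape as listed above).
#    Embeddings stream in, so the index is queryable while it is still building.
resp = requests.post(
    f"{BASE_URL}/vectors/index",
    json={"dataset": "your_dataset", "text_column": "skills"},
)
resp.raise_for_status()
```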