- vectord crate: chunk → embed → store → search → RAG (sketches of the chunker, search, and RAG steps follow below)
- chunker: configurable chunk size + overlap, sentence-boundary aware splitting
- store: embeddings as Parquet (binary blob f32 vectors), portable format
- search: brute-force cosine similarity (works up to ~100K vectors)
- rag: full pipeline (embed question → search index → retrieve context → LLM answer)
- Endpoints: POST /vectors/index, /vectors/search, /vectors/rag
- Gateway wired with vectord service
- Tested: 200 candidate resumes indexed in 5.4s, semantic search + RAG working
- 20 unit tests passing (chunker, search, ingestd, shared)
- AI gives honest "no match found" when context doesn't support an answer

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
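A minimal sketch of the sentence-boundary aware chunker described above, with a size budget and sentence-level overlap. The function name, the naive punctuation-based sentence split, and the character-based `max_chars` / `overlap_sentences` knobs are assumptions for illustration, not vectord's actual configuration.

```rust
/// Split `text` into overlapping chunks, preferring sentence boundaries.
/// `max_chars` and `overlap_sentences` are hypothetical knobs standing in
/// for vectord's real chunker configuration.
fn chunk_text(text: &str, max_chars: usize, overlap_sentences: usize) -> Vec<String> {
    // Naive sentence split: '.', '!' or '?' followed by whitespace or end of text.
    let mut sentences: Vec<String> = Vec::new();
    let mut current = String::new();
    let mut chars = text.chars().peekable();
    while let Some(c) = chars.next() {
        current.push(c);
        if matches!(c, '.' | '!' | '?') && chars.peek().map_or(true, |n| n.is_whitespace()) {
            sentences.push(current.trim().to_string());
            current.clear();
        }
    }
    if !current.trim().is_empty() {
        sentences.push(current.trim().to_string());
    }

    // Pack sentences into chunks of at most `max_chars` characters, carrying the
    // last `overlap_sentences` sentences into the next chunk as overlap.
    let mut chunks = Vec::new();
    let mut window: Vec<String> = Vec::new();
    for s in sentences {
        let window_len: usize = window.iter().map(|w| w.len() + 1).sum();
        if !window.is_empty() && window_len + s.len() > max_chars {
            chunks.push(window.join(" "));
            let keep_from = window.len().saturating_sub(overlap_sentences);
            window = window.split_off(keep_from);
        }
        window.push(s);
    }
    if !window.is_empty() {
        chunks.push(window.join(" "));
    }
    chunks
}
```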
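Likewise, a sketch of the brute-force top-k cosine-similarity search. A linear scan like this is why the bullet caps the practical range at roughly 100K vectors; the `IndexedChunk` struct and the `search` signature are illustrative, not the crate's real types.

```rust
/// One stored chunk; field names are illustrative, not vectord's actual schema.
struct IndexedChunk {
    text: String,
    embedding: Vec<f32>,
}

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

/// Brute-force top-k: score every stored vector, sort, keep the best k.
fn search<'a>(query: &[f32], index: &'a [IndexedChunk], k: usize) -> Vec<(&'a IndexedChunk, f32)> {
    let mut scored: Vec<(&IndexedChunk, f32)> = index
        .iter()
        .map(|c| (c, cosine_similarity(query, &c.embedding)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    scored.truncate(k);
    scored
}
```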
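And a sketch of the RAG flow (embed question → search index → retrieve context → LLM answer), reusing `IndexedChunk` and `search` from the previous sketch. The `Embedder` and `Llm` traits are placeholders for whatever embedding model and LLM client vectord actually calls; they are not part of this commit.

```rust
/// Hypothetical interfaces standing in for the real embedding and LLM services.
trait Embedder {
    fn embed(&self, text: &str) -> Vec<f32>;
}
trait Llm {
    fn answer(&self, question: &str, context: &str) -> String;
}

/// embed question → search index → retrieve context → LLM answer.
fn rag_answer(
    question: &str,
    embedder: &dyn Embedder,
    index: &[IndexedChunk],
    llm: &dyn Llm,
    top_k: usize,
) -> String {
    let query = embedder.embed(question);
    let hits = search(&query, index, top_k);
    if hits.is_empty() {
        // Nothing retrieved at all; the real service additionally lets the LLM
        // decline ("no match found") when the retrieved context is irrelevant.
        return "no match found".to_string();
    }
    let context = hits
        .iter()
        .map(|(chunk, _score)| chunk.text.as_str())
        .collect::<Vec<_>>()
        .join("\n---\n");
    llm.answer(question, &context)
}
```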
{
  "id": "e015f0e2-51e4-4301-855d-76c54992c5b9",
  "name": "call_log",
  "schema_fingerprint": "auto",
  "objects": [
    {
      "bucket": "data",
      "key": "datasets/call_log.parquet",
      "size_bytes": 3276693,
      "created_at": "2026-03-27T13:11:42.483220340Z"
    }
  ],
  "created_at": "2026-03-27T13:11:42.483225870Z",
  "updated_at": "2026-03-27T13:11:42.483225870Z"
}
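The record above registers a Parquet object, and the store bullet in the commit message says embeddings land in Parquet as binary blob f32 vectors. A minimal sketch of that blob round-trip, assuming little-endian byte order and a flat 4-bytes-per-float layout (both assumptions; the actual column layout may differ):

```rust
/// Pack an f32 embedding into a byte blob suitable for a Parquet BINARY column.
/// Little-endian layout is an assumption; vectord's on-disk format may differ.
fn embedding_to_blob(v: &[f32]) -> Vec<u8> {
    v.iter().flat_map(|x| x.to_le_bytes()).collect()
}

/// Inverse: rebuild the f32 vector from the stored blob.
fn blob_to_embedding(blob: &[u8]) -> Vec<f32> {
    blob.chunks_exact(4)
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect()
}
```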