Four shipped features and a PRD realignment, all measured end-to-end:
HNSW trial system (Phase 15 horizon item → complete)
- vectord: EmbeddingCache, harness (eval sets + brute-force ground truth),
TrialJournal, parameterized HnswConfig on build_index_with_config
- /vectors/hnsw/trial, /hnsw/trials/{idx}, /hnsw/trials/{idx}/best,
/hnsw/evals/{name}/autogen, /hnsw/cache/stats
- Measured on resumes_100k_v2 (100K × 768d): brute-force 44ms → HNSW 873µs
at 100% recall@10. ec=80 es=30 locked as HnswConfig::default()
- Lower ec trades recall for build time: ec/es = 20/30 gives 0.96 recall in 8s,
80/30 gives 1.00 recall in 230s (defaults sketched below)
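
A minimal sketch of what the locked defaults could look like in code; the field
names mirror the ec/es shorthand above, and the exact shape of vectord's
HnswConfig struct is an assumption:

/// Hypothetical shape of the tunable HNSW parameters; the real vectord struct
/// may carry more fields (e.g. M, distance metric).
#[derive(Debug, Clone)]
pub struct HnswConfig {
    /// Build-time breadth ("ec" in the trial shorthand).
    pub ec: usize,
    /// Search-time breadth ("es" in the trial shorthand).
    pub es: usize,
}

impl Default for HnswConfig {
    fn default() -> Self {
        // Locked in from the resumes_100k_v2 trials (100% recall@10, 873µs vs 44ms brute force).
        Self { ec: 80, es: 30 }
    }
}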
Catalog manifest repair
- catalogd: resync_from_parquet reads parquet footers to restore row_count
and columns on drifted manifests (footer-read sketch after this list)
- POST /catalog/datasets/{name}/resync + POST /catalog/resync-missing
- All 7 staffing tables recovered to PRD-matching 2,469,278 rows
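
A sketch of the footer read that resync relies on, using the parquet crate's
SerializedFileReader; manifest_from_footer is a hypothetical helper, and the
real catalogd code runs against object storage rather than a local file:

use std::fs::File;
use parquet::file::reader::{FileReader, SerializedFileReader};

/// Hypothetical repair helper: re-derive a manifest's row_count and column
/// list straight from the Parquet footer on disk.
fn manifest_from_footer(path: &str) -> Result<(i64, Vec<String>), Box<dyn std::error::Error>> {
    let reader = SerializedFileReader::new(File::open(path)?)?;
    let meta = reader.metadata().file_metadata();
    let columns: Vec<String> = meta
        .schema_descr()
        .columns()
        .iter()
        .map(|c| c.path().string())
        .collect();
    Ok((meta.num_rows(), columns))
}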
Federation foundation (ADR-017)
- shared::secrets: SecretsProvider trait + FileSecretsProvider (reads
/etc/lakehouse/secrets.toml, enforces 0600 perms; sketched after this list)
- storaged::registry::BucketRegistry — multi-bucket resolution with
rescue_bucket read fallback and reachability probing
- storaged::error_journal — bucket op failures visible in one HTTP call
- storaged::append_log — write-once batched append pattern (fixes the RMW
anti-pattern llms3.com calls out; errors and trial journals both use it)
- /storage/buckets, /storage/errors, /storage/bucket-health,
/storage/errors/{flush,compact}
- Bucket-aware I/O at /storage/buckets/{bucket}/objects/{*key} with an
X-Lakehouse-Rescue-Used observability header on fallback
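
A sketch of the secrets surface; the trait shape, the open/get names, and the
line-based parse are illustrative stand-ins for the real shared::secrets module
(which parses TOML), but the 0600 gate follows the rule described above:

use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};

/// Hypothetical shape of the provider abstraction: callers ask for a named
/// secret and never learn where it came from.
pub trait SecretsProvider: Send + Sync {
    fn get(&self, key: &str) -> Result<Option<String>, String>;
}

/// File-backed sketch. Only the permission gate and a naive key lookup are shown.
pub struct FileSecretsProvider {
    path: PathBuf,
}

impl FileSecretsProvider {
    pub fn open(path: impl AsRef<Path>) -> Result<Self, String> {
        let path = path.as_ref().to_path_buf();
        let meta = std::fs::metadata(&path).map_err(|e| e.to_string())?;
        // Refuse group/world-readable secret files, mirroring the 0600 requirement.
        if meta.permissions().mode() & 0o077 != 0 {
            return Err(format!("{}: secrets file must be mode 0600", path.display()));
        }
        Ok(Self { path })
    }
}

impl SecretsProvider for FileSecretsProvider {
    fn get(&self, key: &str) -> Result<Option<String>, String> {
        let text = std::fs::read_to_string(&self.path).map_err(|e| e.to_string())?;
        // Stand-in for the real TOML parse: `key = "value"` lines only.
        Ok(text.lines().find_map(|line| {
            let (k, v) = line.split_once('=')?;
            (k.trim() == key).then(|| v.trim().trim_matches('"').to_string())
        }))
    }
}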
Postgres streaming ingest
- ingestd::pg_stream: DSN parser, batched ORDER BY + LIMIT/OFFSET pagination
into ArrowWriter, lineage redacts the password (pagination sketch below)
- POST /ingest/db — verified against live knowledge_base.team_runs
(586 rows × 13 cols, 6 batches, 196ms end-to-end)
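
Two hedged sketches of the pieces named above; batch_query and redact_dsn are
hypothetical helpers, not the ingestd signatures:

/// Hypothetical query builder for the pagination scheme: a stable ORDER BY
/// plus LIMIT/OFFSET so every batch is deterministic and resumable.
fn batch_query(table: &str, order_col: &str, batch: usize, batch_size: usize) -> String {
    format!(
        "SELECT * FROM {table} ORDER BY {order_col} LIMIT {batch_size} OFFSET {offset}",
        offset = batch * batch_size,
    )
}

/// Hypothetical lineage redaction: strip the password from a postgres:// DSN
/// before it is persisted; the real ingestd parser is more thorough.
fn redact_dsn(dsn: &str) -> String {
    match (dsn.find("://"), dsn.find('@')) {
        (Some(scheme_end), Some(at)) if at > scheme_end => {
            let creds = &dsn[scheme_end + 3..at];
            match creds.split_once(':') {
                Some((user, _)) => {
                    format!("{}{}:***{}", &dsn[..scheme_end + 3], user, &dsn[at..])
                }
                None => dsn.to_string(),
            }
        }
        _ => dsn.to_string(),
    }
}

For example, a batch size of 100 would yield the six batches seen for the
586-row team_runs table, and a postgres://user:pass@host/db DSN comes back
from redact_dsn with the password replaced by ***.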
PRD realignment (2026-04-16)
- Dual use case: staffing analytics + local LLM knowledge substrate
- Removed "multi-tenancy (single-owner system)" from non-goals
- Added invariants 8-11: indexes hot-swappable, per-reader profiles,
trials-as-data, operational failures findable in one HTTP call
- New phases 16 (hot-swap generations), 17 (model profiles + dataset
bindings), 18 (Lance vs Parquet+sidecar evaluation)
- Known ceilings table documents the 5M vector wall and escape hatches
- ADR-017 (federation), ADR-018 (append-log pattern) added
- EXECUTION_PLAN.md sequences phases B-E with success gates and
decision rules
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
99 lines · 3.3 KiB · Rust
/// In-memory cache for StoredEmbedding vectors.
///
/// Rationale: loading 100K embeddings from Parquet takes ~2-5s. When an AI agent
/// iterates on HNSW parameters, each trial would repeat that cost. The cache
/// pins embeddings in memory so trials reuse them.
///
/// This is a pure performance layer — the Parquet file is still the source of
/// truth (ADR-008). Eviction is safe; worst case is one slow reload.

use object_store::ObjectStore;
use serde::Serialize;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

use crate::store::{self, StoredEmbedding};

#[derive(Clone)]
pub struct EmbeddingCache {
    store: Arc<dyn ObjectStore>,
    cache: Arc<RwLock<HashMap<String, Arc<Vec<StoredEmbedding>>>>>,
}

#[derive(Debug, Clone, Serialize)]
pub struct CacheEntry {
    pub index_name: String,
    pub vectors: usize,
    pub dimensions: usize,
    pub memory_bytes: u64,
}

#[derive(Debug, Clone, Serialize)]
pub struct CacheStats {
    pub entries: Vec<CacheEntry>,
    pub total_memory_bytes: u64,
}

impl EmbeddingCache {
    pub fn new(store: Arc<dyn ObjectStore>) -> Self {
        Self {
            store,
            cache: Arc::new(RwLock::new(HashMap::new())),
        }
    }

    /// Return cached embeddings, loading from object storage on first request.
    pub async fn get_or_load(
        &self,
        index_name: &str,
    ) -> Result<Arc<Vec<StoredEmbedding>>, String> {
        if let Some(cached) = self.cache.read().await.get(index_name) {
            return Ok(cached.clone());
        }

        // Load under a write lock so concurrent callers only hit disk once.
        let mut guard = self.cache.write().await;
        if let Some(cached) = guard.get(index_name) {
            return Ok(cached.clone());
        }
        tracing::info!("embedding_cache: loading '{index_name}' from object storage");
        let t0 = std::time::Instant::now();
        let loaded = store::load_embeddings(&self.store, index_name).await?;
        let n = loaded.len();
        let arc = Arc::new(loaded);
        guard.insert(index_name.to_string(), arc.clone());
        tracing::info!(
            "embedding_cache: loaded '{index_name}' — {n} vectors in {:.2}s",
            t0.elapsed().as_secs_f32()
        );
        Ok(arc)
    }

    /// Drop the cached entry for an index; returns true if one was removed.
    pub async fn evict(&self, index_name: &str) -> bool {
        self.cache.write().await.remove(index_name).is_some()
    }

    /// Report per-index vector counts, dimensions, and estimated resident memory.
    pub async fn stats(&self) -> CacheStats {
        let cache = self.cache.read().await;
        let mut entries = Vec::with_capacity(cache.len());
        let mut total: u64 = 0;
        for (name, embs) in cache.iter() {
            let dims = embs.first().map(|e| e.vector.len()).unwrap_or(0);
            // Rough estimate: vector data + chunk_text + metadata overhead.
            let vec_bytes = (embs.len() * dims * std::mem::size_of::<f32>()) as u64;
            let text_bytes: u64 = embs.iter().map(|e| e.chunk_text.len() as u64).sum();
            let overhead = (embs.len() * 128) as u64; // strings + struct overhead
            let mem = vec_bytes + text_bytes + overhead;
            total += mem;
            entries.push(CacheEntry {
                index_name: name.clone(),
                vectors: embs.len(),
                dimensions: dims,
                memory_bytes: mem,
            });
        }
        CacheStats { entries, total_memory_bytes: total }
    }
}
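
For context, a trial run would lean on the cache roughly like this; the
LocalFileSystem wiring, the prefix path, and the index name are illustrative
only, since vectord builds its ObjectStore from service config:

use object_store::local::LocalFileSystem;
use std::sync::Arc;

async fn run_trials() -> Result<(), String> {
    // Illustrative local-disk store; the service would inject its own.
    let store: Arc<dyn object_store::ObjectStore> = Arc::new(
        LocalFileSystem::new_with_prefix("/var/lib/lakehouse").map_err(|e| e.to_string())?,
    );
    let cache = EmbeddingCache::new(store);

    // First call pays the Parquet load; subsequent trials reuse the Arc.
    let embeddings = cache.get_or_load("resumes_100k_v2").await?;
    println!("{} vectors pinned for this trial run", embeddings.len());

    for entry in cache.stats().await.entries {
        println!("{}: ~{} bytes resident", entry.index_name, entry.memory_bytes);
    }
    Ok(())
}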