Five threads of work landing as one milestone — all individually
verified end-to-end against real data, full release build clean,
46 unit tests pass.
## Phase 16.2 / 16.5 — autotune agent + ingest triggers
`vectord::agent` is a long-running tokio task that watches the trial
journal and autonomously proposes + runs new HNSW configs. Distinct
from `autotune::run_autotune` (synchronous one-shot grid). Triggered
on POST /vectors/agent/enqueue/{idx} or by the periodic wake; ingest
paths now push DatasetAppended events when an index's source dataset
gets re-ingested. Rate-limited (max_trials_per_hour) and cooldown-
gated so it can't saturate Ollama under live load.
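The wake-time check reduces to two predicates; a minimal sketch, where the parameter names other than max_trials_per_hour are illustrative rather than the real config keys:

```rust
// Sketch of the agent's rate/cooldown gate. Only max_trials_per_hour
// is named in this milestone; the other parameters are illustrative.
fn may_run_trial(
    trials_in_last_hour: u32,
    max_trials_per_hour: u32,
    secs_since_last_trial: u64,
    cooldown_secs: u64,
) -> bool {
    // Both gates must pass: under the hourly cap AND out of cooldown.
    trials_in_last_hour < max_trials_per_hour && secs_since_last_trial >= cooldown_secs
}
```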
The proposer is ε-greedy around the current champion: with probability
0.25 it samples uniformly from the full bounds; otherwise it perturbs
the champion by a small ± delta on both axes. Candidates are deduped
against history. The proposer is also deterministic: the RNG is seeded
from history.len(), so the same journal state proposes the same next
config (helps offline replay debugging).
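A minimal sketch of that proposal loop, with illustrative bounds and a stand-in LCG for the seeded RNG; none of these names are the real vectord::agent API:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Config {
    ef_construction: usize,
    ef_search: usize,
}

/// Tiny LCG (Knuth's MMIX constants) standing in for a seeded RNG.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
    /// Uniform draw in [lo, hi] (slightly biased; fine for a sketch).
    fn in_range(&mut self, lo: usize, hi: usize) -> usize {
        lo + (self.next() % (hi - lo + 1) as u64) as usize
    }
}

fn propose_next(champion: &Config, history: &[Config]) -> Config {
    // Deterministic: seeded from history length, so the same journal
    // state always proposes the same next config.
    let mut rng = Lcg(history.len() as u64);
    let explore = rng.next() % 100 < 25; // ε = 0.25
    let mut candidate = if explore {
        // Explore: sample uniformly from the full (assumed) bounds.
        Config {
            ef_construction: rng.in_range(10, 400),
            ef_search: rng.in_range(10, 200),
        }
    } else {
        // Exploit: perturb the champion by a small ± delta on both axes.
        let d1 = rng.in_range(0, 40) as i64 - 20;
        let d2 = rng.in_range(0, 20) as i64 - 10;
        Config {
            ef_construction: (champion.ef_construction as i64 + d1).max(10) as usize,
            ef_search: (champion.ef_search as i64 + d2).max(10) as usize,
        }
    };
    // Dedup against history: nudge until the candidate is unseen.
    while history.contains(&candidate) {
        candidate.ef_construction += 1;
    }
    candidate
}
```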
`[agent]` config section in lakehouse.toml; opt-in via enabled=true.
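A hypothetical shape for that section; only `enabled` and `max_trials_per_hour` are named in this milestone, and the other keys are illustrative:

```toml
[agent]
enabled = true              # opt-in; the agent task doesn't start otherwise
max_trials_per_hour = 4     # rate limit so trials can't saturate Ollama
cooldown_secs = 300         # illustrative name for the cooldown gate
```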
## Federation Layer 2 — runtime bucket lifecycle + per-index scoping
`BucketRegistry.buckets` moved to `std::sync::RwLock<HashMap>` so
buckets can be added/removed after startup. POST /storage/buckets
provisions at runtime; DELETE /storage/buckets/{name} unregisters
(refuses primary/rescue with 403). Local-backend buckets get their
root directory auto-created.
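The lifecycle pattern in miniature, assuming a string stand-in for the store handle; the method names are illustrative, not the real BucketRegistry API:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Stand-in registry: name → root URI instead of an object-store handle.
struct Buckets {
    inner: RwLock<HashMap<String, String>>,
}

impl Buckets {
    fn new() -> Self {
        Self { inner: RwLock::new(HashMap::new()) }
    }
    fn add(&self, name: &str, root: &str) {
        // &self, not &mut self: the interior RwLock is what allows
        // provisioning after startup behind shared references.
        self.inner.write().unwrap().insert(name.into(), root.into());
    }
    fn remove(&self, name: &str) -> Result<(), String> {
        // The real DELETE handler answers 403 here; the sketch just errors.
        if name == "primary" || name == "rescue" {
            return Err(format!("bucket '{name}' is reserved"));
        }
        self.inner.write().unwrap().remove(name);
        Ok(())
    }
}
```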
`IndexMeta.bucket` (default "primary" via serde) records each index's
home bucket. `TrialJournal` and `PromotionRegistry` now hold
Arc<BucketRegistry> + IndexRegistry; they resolve target store per-
index via IndexMeta.bucket. PromotionRegistry::list_all scans every
bucket and dedups by index_name. Pre-federation indexes keep working
unchanged — they just default to primary.
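The resolver fallback is small enough to sketch; here IndexRegistry is reduced to a HashMap for illustration:

```rust
use std::collections::HashMap;

// The metadata lookup is simplified to a HashMap; the real resolver
// queries IndexRegistry and reads IndexMeta.bucket.
fn bucket_for(meta: &HashMap<String, String>, index_name: &str) -> String {
    meta.get(index_name)
        .cloned()
        // Pre-federation indexes (and never-registered ones) fall back
        // to primary, so existing data keeps working unchanged.
        .unwrap_or_else(|| "primary".to_string())
}
```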
`ModelProfile.bucket: Option<String>` declares per-profile artifact
home. POST /vectors/profile/{id}/activate auto-provisions the
profile's bucket under storage.profile_root if not yet registered.
EvalSets stay primary-only for now — noted gap, low-risk to extend
later with the same resolver pattern.
## Phase 17 — VRAM-aware two-profile gate
Sidecar gains POST /admin/unload (the Ollama keep_alive=0 trick, which
forces immediate VRAM release), POST /admin/preload (keep_alive=5m with
an empty prompt, so the model takes the slot warm), and GET /admin/vram
(combines an nvidia-smi snapshot with Ollama /api/ps). Exposed via
aibridge as unload_model / preload_model / vram_snapshot.
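The keep_alive trick boils down to two request bodies against Ollama's /api/generate; a sketch that only formats them (the real sidecar also issues the HTTP call to the Ollama host):

```rust
// `keep_alive: 0` tells Ollama to evict the model's VRAM immediately;
// `keep_alive: "5m"` loads the model and holds it warm. An empty prompt
// means no actual generation happens.
fn unload_body(model: &str) -> String {
    format!(r#"{{"model":"{model}","prompt":"","keep_alive":0}}"#)
}

fn preload_body(model: &str) -> String {
    format!(r#"{{"model":"{model}","prompt":"","keep_alive":"5m"}}"#)
}
```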
`VectorState.active_profile` is the GPU-slot singleton —
Arc<RwLock<Option<ActiveProfileSlot>>>. activate_profile checks for
a previous profile with a different ollama_name and unloads it
before preloading the new one; same-model reactivations skip the
unload (Ollama no-ops). New routes: POST /vectors/profile/{id}/
deactivate (unload + clear slot), GET /vectors/profile/active.
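The swap decision can be sketched as a pure predicate over the slot's current ollama_name (names illustrative):

```rust
// Sketch of the decision inside activate_profile: unload only when the
// occupying profile uses a different underlying model. Same-model
// reactivation skips the unload (Ollama no-ops anyway).
fn needs_unload(slot: Option<&str>, incoming_model: &str) -> bool {
    match slot {
        // Different underlying model: free VRAM before preloading.
        Some(current) => current != incoming_model,
        // Empty slot: nothing to unload.
        None => false,
    }
}
```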
Verified live: staffing-recruiter (qwen2.5) → docs-assistant
(mistral) swap freed qwen2.5 from VRAM and loaded mistral. nomic-
embed-text persists across swaps because both profiles use it —
free optimization that fell out of the design. Scoped search
correctly 403s cross-profile in both directions.
## MySQL streaming connector
`crates/ingestd/src/my_stream.rs` mirrors pg_stream.rs for MySQL.
Pure-Rust `mysql_async` driver (default-features=false to avoid C
deps). Same OFFSET pagination, same Parquet-streaming write shape.
Type mapping per ADR-010: int/bigint → Int32/Int64, decimal/float
→ Float64, tinyint(1)/bool → Boolean, everything else → Utf8 with
fallback parsers for date/time/json/uuid via Display.
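The mapping as a sketch, with strings standing in for the Arrow DataType values the real my_stream.rs match produces:

```rust
// String → string stand-in for the MySQL-column → Arrow DataType
// mapping per ADR-010.
fn arrow_type(mysql_type: &str) -> &'static str {
    match mysql_type {
        "int" => "Int32",
        "bigint" => "Int64",
        "decimal" | "float" => "Float64",
        "tinyint(1)" | "bool" => "Boolean",
        // date/time/json/uuid and anything unrecognized: Utf8, rendered
        // through the value's Display impl.
        _ => "Utf8",
    }
}
```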
POST /ingest/mysql parallel to /ingest/db. Same PII auto-detection,
same lineage capture (source_system="mysql"), same agent-trigger
hook. `redact_dsn` generalized — was hardcoded to "postgresql://"
length, now works for any scheme://user:pass@host/path URL (latent
PII leak fix for MySQL DSNs).
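A sketch of the generalized redaction, parsing scheme and credentials positionally rather than assuming a fixed prefix; the real redact_dsn may handle more edge cases:

```rust
// Find "://", then the "@" that terminates the credentials, and replace
// everything between them with a marker. Works for any
// scheme://user:pass@host/path DSN, not just postgresql://.
fn redact_dsn(dsn: &str) -> String {
    if let Some(scheme_end) = dsn.find("://") {
        let rest = &dsn[scheme_end + 3..];
        if let Some(at) = rest.find('@') {
            return format!("{}://***@{}", &dsn[..scheme_end], &rest[at + 1..]);
        }
    }
    dsn.to_string() // no credentials present: nothing to redact
}
```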
Verified live against MariaDB on localhost: 10 rows × 9 columns of
test data round-tripped through datatypes int/varchar/decimal/
tinyint/datetime/text. PII detection auto-flagged name + email.
Aggregation queries through DataFusion match the source values
exactly.
## Phase 18 — Hybrid Parquet+HNSW ⊕ Lance backend (ADR-019)
`vectord-lance` is a new firewall crate. Lance pulls Arrow 57 and
DataFusion 52 — incompatible with the rest of the workspace's
Arrow 55 / DataFusion 47. The firewall isolates that dep tree:
public API uses only std types (Vec<f32>, Vec<String>, Hit, Row,
*Stats), so no Arrow types cross the crate boundary and nothing
propagates to vectord. The ADR-019 path that didn't ship until now.
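What the firewall boundary looks like in signature form; these names are illustrative, not the actual vectord-lance API:

```rust
// Every public type is std-only, so no Arrow type crosses the crate
// boundary and the Arrow 57 tree stays contained.
#[derive(Debug, Clone, PartialEq)]
pub struct Hit {
    pub doc_id: String,
    pub score: f32,
}

// Inside the firewall crate this would wrap Lance scan results; callers
// only ever see plain Vec/String/f32 in and out.
pub fn to_hits(ids: Vec<String>, scores: Vec<f32>) -> Vec<Hit> {
    ids.into_iter()
        .zip(scores)
        .map(|(doc_id, score)| Hit { doc_id, score })
        .collect()
}
```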
`vectord::lance_backend::LanceRegistry` lazy-creates a
LanceVectorStore per index, resolving bucket → URI via the
conventional local-bucket layout. `IndexMeta.vector_backend` and
`ModelProfile.vector_backend` carry the choice (default Parquet so
existing indexes unchanged).
Six routes under /vectors/lance/*:
- migrate/{idx}: convert binary-blob Parquet → Lance FixedSizeList
- index/{idx}: build IVF_PQ
- search/{idx}: vector search (embed via sidecar)
- doc/{idx}/{doc_id}: random row fetch
- append/{idx}: native fragment append
- stats/{idx}: row count + index presence
Verified live on the real resumes_100k_v2 corpus (100K × 768d):
- Migrate: 0.57s
- Build IVF_PQ index: 16.2s (matches ADR-019 bench; 14× faster than
HNSW's 230s for the same data)
- Search end-to-end (Ollama embed + Lance scan): 23-53ms
- Random doc_id fetch: 5-7ms (filter scan; faster than Parquet's
~35ms full-file scan, slower than the bench's 311us positional
take — would close that gap with a scalar btree on doc_id)
- Append 100 rows: 3.3ms / +320KB on disk vs Parquet's required
full ~330MB rewrite — the structural win
- Index survives append; both backends coexist cleanly
## Known follow-ups not in this milestone
- ModelProfile.vector_backend doesn't yet auto-route /vectors/profile/
{id}/search to Lance; callers go through /vectors/lance/* directly
- Scalar btree on doc_id (closes the 5-7ms → ~300us gap)
- vectord-lance built default-features=false → no S3 yet
- IVF_PQ recall not measured (ADR-019 caveat) — needs a Lance-aware
variant of the eval harness
- Watcher-path ingest doesn't push agent triggers (HTTP paths do)
- EvalSets still primary-only (federation gap)
- No PATCH endpoint to move an existing index between buckets
- The pre-existing storaged::append_log doctest fails to compile (a
  malformed `{prefix}/` parses as a code fence); left for a focused fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Attached source file: 241 lines, 8.5 KiB, Rust.
```rust
/// Trial journal for HNSW parameter tuning.
///
/// Every HNSW build+eval is recorded as a Trial. The journal is append-only
/// and stored under `_hnsw_trials/{index_name}/` as batched JSONL files —
/// an AI agent iterating on configs reads prior trials to decide what to
/// try next, and writes a new trial on each attempt.
///
/// Storage uses the shared `storaged::append_log::AppendLog` so appends are
/// write-once (new file per batch) rather than rewriting a single growing
/// JSONL on every event. See `append_log.rs` for the full rationale.

use chrono::{DateTime, Utc};
use object_store::ObjectStore;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

use storaged::append_log::AppendLog;
use storaged::registry::BucketRegistry;

use crate::index_registry::IndexRegistry;

/// HNSW build/search parameters the agent can tune.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HnswConfig {
    pub ef_construction: usize,
    pub ef_search: usize,
    #[serde(default)]
    pub seed: Option<u64>,
}

impl Default for HnswConfig {
    /// Production default, locked in 2026-04-16 based on trial grid against
    /// resumes_100k_v2 (100K vectors, 20 queries, recall@10):
    ///   ec=20  es=30 → recall 0.960, p50 509us, build 8s
    ///   ec=80  es=30 → recall 1.000, p50 873us, build 230s ← sweet spot
    ///   ec=200 es=30 → recall 1.000, p50 874us, build 106s (no recall gain)
    ///
    /// `ec=80` is the smallest value that reaches 100% recall. Higher values
    /// waste build time. `es=30` gives faster search than `es=100` with no
    /// recall penalty at this scale.
    fn default() -> Self {
        Self {
            ef_construction: 80,
            ef_search: 30,
            seed: None,
        }
    }
}

/// Metrics collected on every trial. All latencies in microseconds.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrialMetrics {
    pub build_time_secs: f32,
    pub search_latency_p50_us: f32,
    pub search_latency_p95_us: f32,
    pub search_latency_p99_us: f32,
    pub recall_at_k: f32,
    pub memory_bytes: u64,
    pub vectors: usize,
    pub eval_queries: usize,
    /// Brute-force latency for comparison — how much speedup did HNSW buy us?
    pub brute_force_latency_us: f32,
}

/// A single tuning attempt.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Trial {
    pub id: String,
    pub index_name: String,
    pub eval_set: String,
    pub config: HnswConfig,
    pub metrics: TrialMetrics,
    pub created_at: DateTime<Utc>,
    /// Free-form note — the agent can record why it tried this config.
    #[serde(default)]
    pub note: Option<String>,
}

impl Trial {
    pub fn new_id() -> String {
        format!(
            "trial-{}-{}",
            Utc::now().timestamp_millis(),
            &uuid::Uuid::new_v4().to_string()[..8]
        )
    }
}

/// Per-index append log, lazy-created on first write.
///
/// Federation layer 2: the journal resolves each index's bucket from the
/// index registry and writes its JSONL batches to THAT bucket, not
/// primary. Back-compat is preserved by `IndexMeta::bucket` defaulting
/// to "primary" for pre-federation indexes. Indexes the registry has
/// never heard of (edge case — trials run before first register) fall
/// through to primary as well.
#[derive(Clone)]
pub struct TrialJournal {
    buckets: Arc<BucketRegistry>,
    index_registry: IndexRegistry,
    /// Cache per (bucket, index) AppendLog so the in-memory buffer persists
    /// across calls. Keyed by `(bucket, index_name)` so moving an index
    /// between buckets is clean — the old journal stays intact.
    logs: Arc<RwLock<HashMap<(String, String), Arc<AppendLog>>>>,
}

impl TrialJournal {
    pub fn new(buckets: Arc<BucketRegistry>, index_registry: IndexRegistry) -> Self {
        Self {
            buckets,
            index_registry,
            logs: Arc::new(RwLock::new(HashMap::new())),
        }
    }

    fn prefix(index_name: &str) -> String {
        format!("_hnsw_trials/{}", index_name)
    }

    /// Resolve which bucket holds this index's trial artifacts.
    /// Falls back to primary for indexes without recorded metadata.
    async fn bucket_for(&self, index_name: &str) -> String {
        self.index_registry
            .get(index_name)
            .await
            .map(|m| m.bucket)
            .unwrap_or_else(|| "primary".to_string())
    }

    async fn log_for(&self, index_name: &str) -> Result<Arc<AppendLog>, String> {
        let bucket = self.bucket_for(index_name).await;
        let key = (bucket.clone(), index_name.to_string());

        if let Some(log) = self.logs.read().await.get(&key) {
            return Ok(log.clone());
        }
        let mut guard = self.logs.write().await;
        if let Some(log) = guard.get(&key) {
            return Ok(log.clone());
        }
        let store = self.buckets.get(&bucket)?;
        // Trials arrive one at a time during human/agent iteration — a low
        // threshold gives "hit /trials and see my latest attempt" immediacy
        // without creating one file per event.
        let log = Arc::new(
            AppendLog::new(store, Self::prefix(index_name))
                .with_flush_threshold(4),
        );
        guard.insert(key, log.clone());
        Ok(log)
    }

    /// Append a trial record. In-memory buffered; persisted in batches.
    pub async fn append(&self, trial: &Trial) -> Result<(), String> {
        let line = serde_json::to_vec(trial).map_err(|e| e.to_string())?;
        let log = self.log_for(&trial.index_name).await?;
        log.append(line).await
    }

    /// Read all trials for an index (flushed batches + unflushed buffer).
    pub async fn list(&self, index_name: &str) -> Result<Vec<Trial>, String> {
        let log = self.log_for(index_name).await?;
        let lines = log.read_all().await?;
        let mut trials = Vec::with_capacity(lines.len());
        for line in lines {
            match serde_json::from_slice::<Trial>(&line) {
                Ok(t) => trials.push(t),
                Err(e) => tracing::warn!("trial journal: skip malformed line: {e}"),
            }
        }
        Ok(trials)
    }

    /// Explicit flush for callers that want write-through semantics
    /// (e.g. an agent that wants to commit a trial before querying stats).
    pub async fn flush(&self, index_name: &str) -> Result<(), String> {
        let log = self.log_for(index_name).await?;
        log.flush().await
    }

    /// Compact all batch files for an index into one.
    pub async fn compact(&self, index_name: &str) -> Result<storaged::append_log::CompactStats, String> {
        let log = self.log_for(index_name).await?;
        log.compact().await
    }

    /// Current champion for an index by the named metric.
    /// Valid metrics: `recall`, `latency`, `pareto`.
    ///
    /// The `pareto` strategy is a placeholder — J should tune the scoring
    /// function to match what matters in production. Right now it's a simple
    /// weighted sum.
    pub async fn best(
        &self,
        index_name: &str,
        metric: &str,
    ) -> Result<Option<Trial>, String> {
        let trials = self.list(index_name).await?;
        if trials.is_empty() {
            return Ok(None);
        }

        let best = match metric {
            "recall" => trials
                .into_iter()
                .max_by(|a, b| {
                    a.metrics
                        .recall_at_k
                        .partial_cmp(&b.metrics.recall_at_k)
                        .unwrap_or(std::cmp::Ordering::Equal)
                })
                .unwrap(),
            "latency" => trials
                .into_iter()
                .min_by(|a, b| {
                    a.metrics
                        .search_latency_p95_us
                        .partial_cmp(&b.metrics.search_latency_p95_us)
                        .unwrap_or(std::cmp::Ordering::Equal)
                })
                .unwrap(),
            // "pareto" and any unrecognized metric fall through to the
            // weighted score (a bare `_` avoids the unreachable-pattern
            // warning that `"pareto" | _` would trigger).
            _ => trials
                .into_iter()
                .max_by(|a, b| pareto_score(a).partial_cmp(&pareto_score(b)).unwrap())
                .unwrap(),
        };
        Ok(Some(best))
    }
}

/// Simple Pareto-style score: reward recall, penalize p95 latency.
/// Tunable — J should swap this in production to match what matters.
fn pareto_score(t: &Trial) -> f32 {
    // Recall is [0, 1]. Latency is us, normalized against a 1ms cap.
    let recall = t.metrics.recall_at_k;
    let latency_penalty = (t.metrics.search_latency_p95_us / 1000.0).min(1.0); // cap at 1ms
    recall - 0.2 * latency_penalty
}
```