lakehouse/crates/storaged/src/federation_service.rs
root 0d037cfac1 Phases 16.2 + L2 + 17 VRAM gate + MySQL + 18 Lance hybrid milestone
Five threads of work landing as one milestone — all individually
verified end-to-end against real data, full release build clean,
46 unit tests pass.

## Phase 16.2 / 16.5 — autotune agent + ingest triggers

`vectord::agent` is a long-running tokio task that watches the trial
journal and autonomously proposes + runs new HNSW configs. Distinct
from `autotune::run_autotune` (synchronous one-shot grid). Triggered
on POST /vectors/agent/enqueue/{idx} or by the periodic wake; ingest
paths now push DatasetAppended events when an index's source dataset
gets re-ingested. Rate-limited (max_trials_per_hour) and cooldown-
gated so it can't saturate Ollama under live load.

The proposer is ε-greedy around the current champion: with prob 0.25
sample random from full bounds, otherwise perturb champion ± small
delta on both axes. Dedup against history. Deterministic — RNG seeded
from history.len() so the same journal state proposes the same next
config (helps offline replay debugging).
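
The proposer loop can be sketched roughly like this — the type names, bounds, and delta widths are assumptions for illustration, not the real `vectord::agent` internals:

```rust
// Illustrative sketch of the ε-greedy proposer; type names, bounds, and
// delta widths are assumptions, not the real vectord::agent internals.
#[derive(Clone, Debug, PartialEq)]
struct HnswConfig {
    m: u32,               // graph-degree axis
    ef_construction: u32, // build-beam axis
}

const BOUNDS_M: (u32, u32) = (4, 64);
const BOUNDS_EF: (u32, u32) = (32, 512);

/// Tiny deterministic LCG standing in for "RNG seeded from history.len()".
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }

    /// Uniform draw in [lo, hi].
    fn in_range(&mut self, lo: u32, hi: u32) -> u32 {
        lo + (self.next() % u64::from(hi - lo + 1)) as u32
    }
}

fn propose(champion: &HnswConfig, history: &[HnswConfig]) -> HnswConfig {
    // Same journal state => same seed => same next config (offline replay).
    let mut rng = Lcg(history.len() as u64 + 1);
    loop {
        let cand = if rng.next() % 100 < 25 {
            // Explore (prob 0.25): sample uniformly from the full bounds.
            HnswConfig {
                m: rng.in_range(BOUNDS_M.0, BOUNDS_M.1),
                ef_construction: rng.in_range(BOUNDS_EF.0, BOUNDS_EF.1),
            }
        } else {
            // Exploit: perturb the champion by a small delta on both axes.
            HnswConfig {
                m: (champion.m + rng.in_range(0, 8))
                    .saturating_sub(4)
                    .clamp(BOUNDS_M.0, BOUNDS_M.1),
                ef_construction: (champion.ef_construction + rng.in_range(0, 64))
                    .saturating_sub(32)
                    .clamp(BOUNDS_EF.0, BOUNDS_EF.1),
            }
        };
        // Dedup against history; the advanced RNG makes the retry differ.
        if !history.contains(&cand) {
            return cand;
        }
    }
}
```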

`[agent]` config section in lakehouse.toml; opt-in via enabled=true.
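
A minimal sketch of that section — `enabled` and `max_trials_per_hour` come from this milestone; the remaining key is an illustrative placeholder:

```toml
[agent]
enabled = true            # opt-in: the agent task does not start otherwise
max_trials_per_hour = 4   # rate limit so autotune can't saturate Ollama
cooldown_secs = 900       # illustrative name for the cooldown gate
```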

## Federation Layer 2 — runtime bucket lifecycle + per-index scoping

`BucketRegistry.buckets` moved to `std::sync::RwLock<HashMap>` so
buckets can be added/removed after startup. POST /storage/buckets
provisions at runtime; DELETE /storage/buckets/{name} unregisters
(refuses primary/rescue with 403). Local-backend buckets get their
root directory auto-created.

`IndexMeta.bucket` (default "primary" via serde) records each index's
home bucket. `TrialJournal` and `PromotionRegistry` now hold
Arc<BucketRegistry> + IndexRegistry; they resolve target store per-
index via IndexMeta.bucket. PromotionRegistry::list_all scans every
bucket and dedups by index_name. Pre-federation indexes keep working
unchanged — they just default to primary.
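
The resolver pattern can be sketched as follows; the struct shapes are stand-ins for the real registry types, not their actual definitions:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Stand-in shapes for the real registry types; names are illustrative.
struct ObjectStore {
    bucket: String,
}

struct BucketRegistry {
    stores: HashMap<String, Arc<ObjectStore>>,
}

struct IndexMeta {
    bucket: Option<String>, // serde-defaulted to "primary" in the real meta
}

impl BucketRegistry {
    /// Resolve an index's home store. Pre-federation metadata with no
    /// bucket recorded falls back to "primary", so old indexes keep
    /// working unchanged.
    fn store_for(&self, meta: &IndexMeta) -> Option<Arc<ObjectStore>> {
        let name = meta.bucket.as_deref().unwrap_or("primary");
        self.stores.get(name).cloned()
    }
}
```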

`ModelProfile.bucket: Option<String>` declares per-profile artifact
home. POST /vectors/profile/{id}/activate auto-provisions the
profile's bucket under storage.profile_root if not yet registered.

EvalSets stay primary-only for now — noted gap, low-risk to extend
later with the same resolver pattern.

## Phase 17 — VRAM-aware two-profile gate

Sidecar gains POST /admin/unload (Ollama keep_alive=0 trick — forces
immediate VRAM release), POST /admin/preload (keep_alive=5m with
empty prompt, takes the slot warm), and GET /admin/vram (combines
nvidia-smi snapshot with Ollama /api/ps). Exposed via aibridge as
unload_model / preload_model / vram_snapshot.

`VectorState.active_profile` is the GPU-slot singleton —
Arc<RwLock<Option<ActiveProfileSlot>>>. activate_profile checks for
a previous profile with a different ollama_name and unloads it
before preloading the new one; same-model reactivations skip the
unload (Ollama no-ops). New routes: POST /vectors/profile/{id}/
deactivate (unload + clear slot), GET /vectors/profile/active.
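
The swap decision can be sketched like this — the slot type and helper name are illustrative, not the real `VectorState` code:

```rust
use std::sync::{Arc, RwLock};

// Illustrative slot type; the real one lives on VectorState.active_profile.
#[derive(Clone)]
struct ActiveProfileSlot {
    profile_id: String,
    ollama_name: String,
}

/// Decide what the gate must unload before preloading the new profile's
/// model. Same-model reactivations return None and skip the unload
/// (Ollama would no-op anyway); an empty slot also needs no unload.
fn model_to_unload(
    slot: &Arc<RwLock<Option<ActiveProfileSlot>>>,
    new_model: &str,
) -> Option<String> {
    let guard = slot.read().unwrap();
    match guard.as_ref() {
        Some(prev) if prev.ollama_name != new_model => Some(prev.ollama_name.clone()),
        _ => None,
    }
}
```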

Verified live: staffing-recruiter (qwen2.5) → docs-assistant
(mistral) swap freed qwen2.5 from VRAM and loaded mistral. nomic-
embed-text persists across swaps because both profiles use it —
free optimization that fell out of the design. Scoped search
correctly 403s cross-profile in both directions.

## MySQL streaming connector

`crates/ingestd/src/my_stream.rs` mirrors pg_stream.rs for MySQL,
built on the pure-Rust `mysql_async` driver (default-features = false
to avoid C deps). Same OFFSET pagination, same Parquet-streaming
write shape.
Type mapping per ADR-010: int/bigint → Int32/Int64, decimal/float
→ Float64, tinyint(1)/bool → Boolean, everything else → Utf8 with
fallback parsers for date/time/json/uuid via Display.
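
A sketch of that mapping, with a stand-in enum so the example stays free of the arrow crate — the exact arm details are illustrative:

```rust
// Sketch of the ADR-010 mapping; ArrowKind is a stand-in enum so the
// example stays free of the arrow crate. Arm details are illustrative.
#[derive(Debug, PartialEq)]
enum ArrowKind {
    Int32,
    Int64,
    Float64,
    Boolean,
    Utf8,
}

fn map_mysql_type(col_type: &str) -> ArrowKind {
    let t = col_type.to_ascii_lowercase();
    match t.as_str() {
        // MySQL reports BOOL columns as tinyint(1).
        "tinyint(1)" | "bool" | "boolean" => ArrowKind::Boolean,
        s if s.starts_with("bigint") => ArrowKind::Int64,
        s if s.starts_with("int")
            || s.starts_with("tinyint")
            || s.starts_with("smallint")
            || s.starts_with("mediumint") => ArrowKind::Int32,
        s if s.starts_with("decimal") || s.starts_with("float") || s.starts_with("double") => {
            ArrowKind::Float64
        }
        // date/time/json/uuid and the rest fall back to Utf8 via Display.
        _ => ArrowKind::Utf8,
    }
}
```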

POST /ingest/mysql parallel to /ingest/db. Same PII auto-detection,
same lineage capture (source_system="mysql"), same agent-trigger
hook. `redact_dsn` generalized — was hardcoded to "postgresql://"
length, now works for any scheme://user:pass@host/path URL (latent
PII leak fix for MySQL DSNs).
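
A scheme-agnostic redaction looks roughly like this — a sketch of the behavior described, not the actual ingestd helper:

```rust
/// Redact the password in any scheme://user:pass@host/path DSN.
/// Sketch of the generalized behavior, not the real ingestd function.
fn redact_dsn(dsn: &str) -> String {
    // Locate the scheme separator, then the end of the credentials part.
    let Some(scheme_end) = dsn.find("://") else {
        return dsn.to_string();
    };
    let rest = &dsn[scheme_end + 3..];
    // rfind: passwords may themselves contain '@'.
    let Some(at) = rest.rfind('@') else {
        return dsn.to_string();
    };
    match rest[..at].split_once(':') {
        Some((user, _password)) => {
            format!("{}://{}:***@{}", &dsn[..scheme_end], user, &rest[at + 1..])
        }
        None => dsn.to_string(), // user-only DSN: nothing to redact
    }
}
```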

Verified live against MariaDB on localhost: 10 rows × 9 columns of
test data round-tripped through datatypes int/varchar/decimal/
tinyint/datetime/text. PII detection auto-flagged name + email.
Aggregation queries through DataFusion match the source values
exactly.

## Phase 18 — Hybrid Parquet+HNSW ⊕ Lance backend (ADR-019)

`vectord-lance` is a new firewall crate. Lance pulls Arrow 57 and
DataFusion 52 — incompatible with the rest of the workspace's
Arrow 55 / DataFusion 47. The firewall isolates that dep tree:
public API uses only std types (Vec<f32>, Vec<String>, Hit, Row,
*Stats), so no Arrow types cross the crate boundary and nothing
propagates to vectord. This is the ADR-019 path that hadn't shipped
until now.
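
The boundary can be sketched as a std-types-only trait plus a toy in-memory impl; the names are assumptions, not the real vectord-lance API:

```rust
// Illustrative shape of the firewall boundary: only std types cross it,
// so callers never see Arrow 57 / DataFusion 52. Names are assumptions,
// not the real vectord-lance API.
pub struct Hit {
    pub doc_id: String,
    pub score: f32,
}

pub trait VectorBackend {
    /// Vector search: plain Vec<f32> in, plain Hits out.
    fn search(&self, query: Vec<f32>, k: usize) -> Result<Vec<Hit>, String>;
    /// Append rows as (doc_id, vector) pairs; returns the new row count.
    fn append(&mut self, rows: Vec<(String, Vec<f32>)>) -> Result<usize, String>;
}

/// Toy in-memory impl (brute-force dot product) to show the boundary in use.
pub struct MemBackend {
    rows: Vec<(String, Vec<f32>)>,
}

impl MemBackend {
    pub fn new() -> Self {
        MemBackend { rows: Vec::new() }
    }
}

impl VectorBackend for MemBackend {
    fn search(&self, query: Vec<f32>, k: usize) -> Result<Vec<Hit>, String> {
        let mut hits: Vec<Hit> = self
            .rows
            .iter()
            .map(|(id, v)| Hit {
                doc_id: id.clone(),
                score: v.iter().zip(&query).map(|(a, b)| a * b).sum(),
            })
            .collect();
        hits.sort_by(|a, b| b.score.total_cmp(&a.score));
        hits.truncate(k);
        Ok(hits)
    }

    fn append(&mut self, rows: Vec<(String, Vec<f32>)>) -> Result<usize, String> {
        self.rows.extend(rows);
        Ok(self.rows.len())
    }
}
```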

`vectord::lance_backend::LanceRegistry` lazy-creates a
LanceVectorStore per index, resolving bucket → URI via the
conventional local-bucket layout. `IndexMeta.vector_backend` and
`ModelProfile.vector_backend` carry the choice (default Parquet so
existing indexes unchanged).

Six routes under /vectors/lance/*:
- migrate/{idx}: convert binary-blob Parquet → Lance FixedSizeList
- index/{idx}: build IVF_PQ
- search/{idx}: vector search (embed via sidecar)
- doc/{idx}/{doc_id}: random row fetch
- append/{idx}: native fragment append
- stats/{idx}: row count + index presence

Verified live on the real resumes_100k_v2 corpus (100K × 768d):
- Migrate: 0.57s
- Build IVF_PQ index: 16.2s (matches ADR-019 bench; 14× faster than
  HNSW's 230s for the same data)
- Search end-to-end (Ollama embed + Lance scan): 23-53ms
- Random doc_id fetch: 5-7ms (filter scan; faster than Parquet's
  ~35ms full-file scan, slower than the bench's 311us positional
  take — would close that gap with a scalar btree on doc_id)
- Append 100 rows: 3.3ms / +320KB on disk vs Parquet's required
  full ~330MB rewrite — the structural win
- Index survives append; both backends coexist cleanly

## Known follow-ups not in this milestone

- ModelProfile.vector_backend doesn't yet auto-route /vectors/profile/
  {id}/search to Lance; callers go through /vectors/lance/* directly
- Scalar btree on doc_id (closes the 5-7ms → ~300us gap)
- vectord-lance built default-features=false → no S3 yet
- IVF_PQ recall not measured (ADR-019 caveat) — needs a Lance-aware
  variant of the eval harness
- Watcher-path ingest doesn't push agent triggers (HTTP paths do)
- EvalSets still primary-only (federation gap)
- No PATCH endpoint to move an existing index between buckets
- The pre-existing storaged::append_log doctest fails to compile
  (malformed `{prefix}/` parses as code fence) — pre-existing bug,
  left for a focused fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 20:24:46 -05:00


//! HTTP surface for federation operator endpoints.
//!
//! - `GET /buckets` — configured buckets with reachability status
//! - `GET /errors` — recent bucket op failures (filterable)
//! - `GET /bucket-health` — aggregated errors-per-bucket in the last 5 minutes
//!
//! Mounted under `/storage` by the gateway. `/storage/health` is already
//! claimed by the existing storage router for service liveness, so the
//! aggregated bucket health lives at `/storage/bucket-health`.
use axum::{
    Json, Router,
    extract::{Path, Query, State},
    http::StatusCode,
    response::IntoResponse,
    routing::{delete, get, post, put},
};
use chrono::{DateTime, Utc};
use serde::Deserialize;
use shared::config::BucketConfig;
use std::sync::Arc;
use crate::error_journal::BucketErrorEvent;
use crate::registry::{BucketInfo, BucketRegistry};
pub fn router(registry: Arc<BucketRegistry>) -> Router {
    Router::new()
        .route("/buckets", get(list_buckets).post(add_bucket))
        .route("/buckets/{name}", delete(remove_bucket))
        .route("/errors", get(list_errors))
        .route("/errors/flush", post(flush_errors))
        .route("/errors/compact", post(compact_errors))
        .route("/bucket-health", get(get_health))
        // One `route` call per path: axum panics if the same path is
        // registered twice, so both methods are chained here.
        .route(
            "/buckets/{bucket}/objects/{*key}",
            put(put_bucket_object).get(get_bucket_object),
        )
        .with_state(registry)
}
/// Provision + register a bucket at runtime. Body is a `BucketConfig`.
///
/// Federation layer 2: profile buckets (`profile:alice`) and tenant
/// buckets can be added after startup without a service restart.
async fn add_bucket(
    State(reg): State<Arc<BucketRegistry>>,
    Json(bc): Json<BucketConfig>,
) -> impl IntoResponse {
    // Safety net: local backends must land somewhere under the configured
    // profile_root (for profile:*) or be otherwise explicitly rooted.
    // Refuse empty / missing root to avoid "bucket created at ./" surprises.
    if bc.backend == "local" {
        match bc.root.as_deref() {
            Some(r) if !r.trim().is_empty() => {}
            _ => {
                return Err((
                    StatusCode::BAD_REQUEST,
                    "local bucket requires a non-empty 'root' path".to_string(),
                ));
            }
        }
    }
    match reg.add_bucket(bc).await {
        Ok(info) => Ok((StatusCode::CREATED, Json(info))),
        Err(e) => {
            let code = if e.contains("already registered") {
                StatusCode::CONFLICT
            } else {
                StatusCode::BAD_REQUEST
            };
            Err((code, e))
        }
    }
}
/// Unregister a bucket. Refused for primary / rescue / unknown names.
async fn remove_bucket(
    State(reg): State<Arc<BucketRegistry>>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match reg.remove_bucket(&name) {
        Ok(()) => Ok((StatusCode::OK, format!("unregistered: {name}"))),
        Err(e) => {
            let code = if e.contains("cannot remove") {
                StatusCode::FORBIDDEN
            } else if e.contains("not registered") {
                StatusCode::NOT_FOUND
            } else {
                StatusCode::BAD_REQUEST
            };
            Err((code, e))
        }
    }
}
async fn list_buckets(State(reg): State<Arc<BucketRegistry>>) -> Json<Vec<BucketInfo>> {
    Json(reg.list().await)
}
#[derive(Deserialize)]
struct ErrorQuery {
    bucket: Option<String>,
    since: Option<DateTime<Utc>>,
    #[serde(default = "default_limit")]
    limit: usize,
}

fn default_limit() -> usize { 50 }

async fn list_errors(
    State(reg): State<Arc<BucketRegistry>>,
    Query(q): Query<ErrorQuery>,
) -> Json<Vec<BucketErrorEvent>> {
    Json(reg.journal().filter(q.bucket.as_deref(), q.since, q.limit).await)
}
#[derive(Deserialize)]
struct HealthQuery {
    #[serde(default = "default_period")]
    minutes: i64,
}

fn default_period() -> i64 { 5 }

async fn get_health(
    State(reg): State<Arc<BucketRegistry>>,
    Query(q): Query<HealthQuery>,
) -> impl IntoResponse {
    Json(reg.journal().health(q.minutes).await)
}
/// Bucket-aware write. Hard-fails if the target bucket is unreachable;
/// never falls back to rescue for writes.
async fn put_bucket_object(
    State(reg): State<Arc<BucketRegistry>>,
    Path((bucket, key)): Path<(String, String)>,
    body: bytes::Bytes,
) -> impl IntoResponse {
    match reg.write_smart(&bucket, &key, body).await {
        Ok(()) => (StatusCode::CREATED, format!("stored: {bucket}/{key}")).into_response(),
        Err(e) => (StatusCode::SERVICE_UNAVAILABLE, e).into_response(),
    }
}
async fn flush_errors(State(reg): State<Arc<BucketRegistry>>) -> impl IntoResponse {
    match reg.journal().flush().await {
        Ok(()) => (StatusCode::OK, "flushed").into_response(),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e).into_response(),
    }
}

async fn compact_errors(State(reg): State<Arc<BucketRegistry>>) -> impl IntoResponse {
    match reg.journal().compact().await {
        Ok(stats) => Json(stats).into_response(),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e).into_response(),
    }
}

/// Bucket-aware read. Falls through to the rescue bucket if the target
/// is unreachable or the key is missing; emits observability headers so
/// callers can detect the fallback.
async fn get_bucket_object(
    State(reg): State<Arc<BucketRegistry>>,
    Path((bucket, key)): Path<(String, String)>,
) -> impl IntoResponse {
    match reg.read_smart(&bucket, &key).await {
        Ok(outcome) => {
            let mut resp = outcome.data.into_response();
            if outcome.rescued {
                let headers = resp.headers_mut();
                headers.insert("x-lakehouse-rescue-used", "true".parse().unwrap());
                headers.insert(
                    "x-lakehouse-original-bucket",
                    outcome.original_bucket.parse().unwrap(),
                );
                headers.insert(
                    "x-lakehouse-served-by",
                    outcome.served_by.parse().unwrap(),
                );
            }
            resp
        }
        Err(e) => (StatusCode::NOT_FOUND, e).into_response(),
    }
}