root d87f2ccac6 Phase E: Soft deletes (tombstones) for compliance-grade row deletion
Implements GDPR/CCPA-compatible row-level deletion without rewriting
the underlying Parquet. Tombstone markers live beside each dataset and
are applied at query time via a DataFusion view that excludes the
deleted row_key_values.

Schema (shared::types):
- Tombstone { dataset, row_key_column, row_key_value, deleted_at,
              actor, reason }
- All tombstones for a dataset must share one row_key_column —
  enforced at write so the query-time filter remains a single
  WHERE NOT IN (...) clause
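The write-time invariant (one row_key_column per dataset) can be sketched as a small check; the function name and shape here are illustrative, not the actual catalogd API:

```rust
/// Reject a new tombstone whose key column disagrees with the column
/// already used by this dataset's existing tombstones. (Sketch only:
/// the real enforcement lives in catalogd's add_tombstone path.)
fn validate_key_column(existing: &[&str], new_column: &str) -> Result<(), String> {
    match existing.first() {
        // A prior tombstone fixed the key column; new writes must match it.
        Some(col) if *col != new_column => Err(format!(
            "dataset already tombstoned on column '{col}', got '{new_column}'"
        )),
        _ => Ok(()),
    }
}

fn main() {
    assert!(validate_key_column(&[], "candidate_id").is_ok());
    assert!(validate_key_column(&["candidate_id"], "candidate_id").is_ok());
    assert!(validate_key_column(&["candidate_id"], "email").is_err());
}
```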

Storage (catalogd::tombstones):
- Per-dataset AppendLog at _catalog/tombstones/{dataset}/
- flush_threshold=1 + explicit flush after every append — tombstones
  are high-value, low-frequency; durability on return is the contract
- Reuses storaged::append_log infra so compaction is already wired
  (POST .../tombstones/compact will work once we expose it)
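A minimal sketch of the durability contract (an append returns only after an explicit fsync), using only std; the path and record format are illustrative, not the real storaged::append_log layout:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Append one record and fsync before returning: the caller may treat a
/// successful return as durable. (Sketch only; the real implementation is
/// storaged::append_log with flush_threshold=1.)
fn append_durable(path: &Path, line: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{line}")?;
    f.sync_all()?; // fsync: data and metadata are on disk before we return
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("tombstones_demo.log");
    append_durable(&path, r#"{"row_key_value":"CAND-000001","actor":"admin"}"#)?;
    assert!(std::fs::read_to_string(&path)?.contains("CAND-000001"));
    std::fs::remove_file(&path)
}
```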

Catalog (catalogd::registry):
- add_tombstone validates dataset exists + key column compatibility
- list_tombstones for the GET endpoint
- TombstoneStore exposed via Registry::tombstones() for queryd

HTTP (catalogd::service):
- POST /catalog/datasets/by-name/{name}/tombstone
    { row_key_column, row_key_values[], actor, reason }
  Returns rows_tombstoned count + per-value failure list (207 on
  partial success).
- GET same path lists active tombstones with full audit info.
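The status selection described above (201 on full success, 207 Multi-Status on partial success, 400 when nothing was tombstoned) reduces to a small pure function; plain u16s are used here for illustration:

```rust
/// Map per-value outcomes to an HTTP status, mirroring the handler's
/// 201 / 207 / 400 decision. (Sketch only; the handler uses StatusCode.)
fn tombstone_status(succeeded: usize, failed: usize) -> u16 {
    if succeeded > 0 && failed == 0 {
        201 // Created: every value tombstoned
    } else if succeeded > 0 {
        207 // Multi-Status: partial success, failures listed in the body
    } else {
        let _ = failed;
        400 // Bad Request: no value could be tombstoned
    }
}

fn main() {
    assert_eq!(tombstone_status(3, 0), 201);
    assert_eq!(tombstone_status(2, 1), 207);
    assert_eq!(tombstone_status(0, 3), 400);
}
```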

Query layer (queryd::context):
- Snapshot tombstones-by-dataset before registering tables
- Tombstoned tables: raw goes to "__raw__{name}", public "{name}"
  becomes DataFusion view with
  SELECT * FROM "__raw__{name}" WHERE CAST(col AS VARCHAR) NOT IN (...)
- CAST AS VARCHAR handles both string and integer key columns
- Untombstoned tables register as before — zero overhead
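A sketch of how the exclusion view's SQL could be assembled from the tombstone values; identifier quoting and value escaping are simplified relative to what DataFusion actually requires:

```rust
/// Build the query-time exclusion view for a tombstoned table.
/// (Illustrative only: escaping here is minimal single-quote doubling.)
fn exclusion_view_sql(table: &str, key_col: &str, values: &[&str]) -> String {
    let list = values
        .iter()
        .map(|v| format!("'{}'", v.replace('\'', "''"))) // escape embedded quotes
        .collect::<Vec<_>>()
        .join(", ");
    // CAST AS VARCHAR keeps one code path for string and integer key columns.
    format!(
        "SELECT * FROM \"__raw__{table}\" WHERE CAST(\"{key_col}\" AS VARCHAR) NOT IN ({list})"
    )
}

fn main() {
    let sql = exclusion_view_sql("candidates", "candidate_id",
        &["CAND-000001", "CAND-000002"]);
    assert!(sql.contains("__raw__candidates"));
    assert!(sql.contains("NOT IN ('CAND-000001', 'CAND-000002')"));
}
```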

End-to-end on candidates (100K rows):
- Pick CAND-000001/2/3 (Linda/Charles/Kimberly)
- POST tombstone -> rows_tombstoned: 3
- COUNT(*) drops 100000 -> 99997
- WHERE candidate_id IN (those 3) -> 0 rows
- candidates_safe view transitively excludes them
  (Linda+Denver: __raw__candidates=159, candidates_safe=158)
- Restart: COUNT still 99997, 3 tombstones reload from disk

Reversibility: tombstones are reversible deletes, not destruction.
Power users can still query "__raw__{name}" to see deleted rows.
Phase 13 access control is what stops a non-admin from accessing
__raw__* tables.

Limits / follow-up:
- Physical compaction not yet integrated — Phase 8's compact_files
  doesn't read tombstones during merge. Tombstoned rows are still
  on disk until that integration ships.
- Phase 9 journald event emission for tombstones not wired —
  tombstone records carry their own actor+reason+timestamp so the
  audit trail is intact, but cross-referencing with the mutation
  event log would help compliance reporting.
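The compaction integration flagged above would amount to filtering merged rows against the tombstone set. A sketch, under the assumption that rows can be keyed by their row_key_value (the real compact_files operates on Parquet record batches, not tuples):

```rust
use std::collections::HashSet;

/// Drop any row whose key value has a tombstone; everything else is kept.
/// Rows are (key_value, payload) pairs purely for illustration.
fn filter_tombstoned(
    rows: Vec<(String, String)>,
    tombstoned: &HashSet<String>,
) -> Vec<(String, String)> {
    rows.into_iter()
        .filter(|(key, _)| !tombstoned.contains(key))
        .collect()
}

fn main() {
    let tombstoned: HashSet<String> = ["CAND-000002".to_string()].into();
    let rows = vec![
        ("CAND-000001".to_string(), "Linda".to_string()),
        ("CAND-000002".to_string(), "Charles".to_string()),
    ];
    let kept = filter_tombstoned(rows, &tombstoned);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].0, "CAND-000001");
}
```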

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:40:48 -05:00


use axum::{
    Json, Router,
    extract::{Path, State},
    http::StatusCode,
    response::IntoResponse,
    routing::{get, post},
};
use serde::{Deserialize, Serialize};
use shared::types::{DatasetId, ObjectRef, SchemaFingerprint};
use uuid::Uuid;

use crate::registry::Registry;

pub fn router(registry: Registry) -> Router {
    Router::new()
        .route("/health", get(health))
        .route("/datasets", post(create_dataset))
        .route("/datasets", get(list_datasets))
        .route("/datasets/{id}", get(get_dataset))
        .route("/datasets/by-name/{name}", get(get_dataset_by_name))
        .route("/datasets/by-name/{name}/metadata", post(update_metadata))
        .route("/datasets/by-name/{name}/resync", post(resync_dataset))
        .route("/resync-missing", post(resync_all_missing))
        .route("/migrate-buckets", post(migrate_buckets))
        // Phase D: AI-safe views
        .route("/views", post(create_view).get(list_views))
        .route("/views/{name}", get(get_view).delete(delete_view))
        // Phase E: soft-delete tombstones
        .route("/datasets/by-name/{name}/tombstone", post(tombstone_rows).get(list_tombstones))
        .with_state(registry)
}
async fn health() -> &'static str {
    "catalogd ok"
}

#[derive(Deserialize)]
struct CreateDatasetRequest {
    name: String,
    schema_fingerprint: String,
    objects: Vec<ObjectRefRequest>,
}

#[derive(Deserialize)]
struct ObjectRefRequest {
    bucket: String,
    key: String,
    size_bytes: u64,
}

#[derive(Serialize)]
struct DatasetResponse {
    id: String,
    name: String,
    schema_fingerprint: String,
    objects: Vec<ObjectRefResponse>,
    created_at: String,
    updated_at: String,
    // Rich metadata
    description: String,
    owner: String,
    sensitivity: Option<shared::types::Sensitivity>,
    columns: Vec<shared::types::ColumnMeta>,
    lineage: Option<shared::types::Lineage>,
    freshness: Option<shared::types::FreshnessContract>,
    tags: Vec<String>,
    row_count: Option<u64>,
}

#[derive(Serialize)]
struct ObjectRefResponse {
    bucket: String,
    key: String,
    size_bytes: u64,
    created_at: String,
}

impl From<&shared::types::DatasetManifest> for DatasetResponse {
    fn from(m: &shared::types::DatasetManifest) -> Self {
        Self {
            id: m.id.to_string(),
            name: m.name.clone(),
            schema_fingerprint: m.schema_fingerprint.0.clone(),
            objects: m.objects.iter().map(|o| ObjectRefResponse {
                bucket: o.bucket.clone(),
                key: o.key.clone(),
                size_bytes: o.size_bytes,
                created_at: o.created_at.to_rfc3339(),
            }).collect(),
            created_at: m.created_at.to_rfc3339(),
            updated_at: m.updated_at.to_rfc3339(),
            description: m.description.clone(),
            owner: m.owner.clone(),
            sensitivity: m.sensitivity.clone(),
            columns: m.columns.clone(),
            lineage: m.lineage.clone(),
            freshness: m.freshness.clone(),
            tags: m.tags.clone(),
            row_count: m.row_count,
        }
    }
}
async fn create_dataset(
    State(registry): State<Registry>,
    Json(req): Json<CreateDatasetRequest>,
) -> impl IntoResponse {
    let now = chrono::Utc::now();
    let objects: Vec<ObjectRef> = req.objects.into_iter().map(|o| ObjectRef {
        bucket: o.bucket,
        key: o.key,
        size_bytes: o.size_bytes,
        created_at: now,
    }).collect();
    match registry.register(req.name, SchemaFingerprint(req.schema_fingerprint), objects).await {
        Ok(manifest) => {
            let resp = DatasetResponse::from(&manifest);
            Ok((StatusCode::CREATED, Json(resp)))
        }
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}

async fn list_datasets(State(registry): State<Registry>) -> impl IntoResponse {
    let datasets = registry.list().await;
    let resp: Vec<DatasetResponse> = datasets.iter().map(DatasetResponse::from).collect();
    Json(resp)
}

async fn get_dataset(
    State(registry): State<Registry>,
    Path(id): Path<String>,
) -> impl IntoResponse {
    let uuid = Uuid::parse_str(&id).map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
    let dataset_id = DatasetId(uuid);
    match registry.get(&dataset_id).await {
        Some(manifest) => Ok(Json(DatasetResponse::from(&manifest))),
        None => Err((StatusCode::NOT_FOUND, format!("dataset not found: {id}"))),
    }
}

async fn get_dataset_by_name(
    State(registry): State<Registry>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match registry.get_by_name(&name).await {
        Some(manifest) => Ok(Json(DatasetResponse::from(&manifest))),
        None => Err((StatusCode::NOT_FOUND, format!("dataset not found: {name}"))),
    }
}

async fn update_metadata(
    State(registry): State<Registry>,
    Path(name): Path<String>,
    Json(updates): Json<crate::registry::MetadataUpdate>,
) -> impl IntoResponse {
    match registry.update_metadata(&name, updates).await {
        Ok(manifest) => Ok(Json(DatasetResponse::from(&manifest))),
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}
/// Re-read parquet footers for a single dataset and repopulate row_count
/// and columns from reality. Useful for repairing manifests whose metadata
/// was lost or never backfilled.
async fn resync_dataset(
    State(registry): State<Registry>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match registry.resync_from_parquet(&name).await {
        Ok(manifest) => Ok(Json(DatasetResponse::from(&manifest))),
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}

#[derive(Serialize)]
struct ResyncAllResponse {
    succeeded: Vec<ResyncOk>,
    failed: Vec<ResyncErr>,
}

#[derive(Serialize)]
struct ResyncOk {
    name: String,
    row_count: u64,
}

#[derive(Serialize)]
struct ResyncErr {
    name: String,
    error: String,
}

/// Resync every dataset that currently has null row_count or empty columns.
async fn resync_all_missing(State(registry): State<Registry>) -> impl IntoResponse {
    let (ok, err) = registry.resync_missing().await;
    Json(ResyncAllResponse {
        succeeded: ok.into_iter().map(|(name, row_count)| ResyncOk { name, row_count }).collect(),
        failed: err.into_iter().map(|(name, error)| ResyncErr { name, error }).collect(),
    })
}

/// Federation layer 2 one-shot: normalize every ObjectRef.bucket field
/// to the canonical "primary" value. Idempotent — re-running once
/// everything is canonical is a safe no-op.
async fn migrate_buckets(State(registry): State<Registry>) -> impl IntoResponse {
    match registry.migrate_buckets_to_primary().await {
        Ok(report) => Ok(Json(report)),
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}
// --- Phase D: AI-safe views ---

#[derive(Deserialize)]
struct CreateViewRequest {
    name: String,
    base_dataset: String,
    columns: Vec<String>,
    #[serde(default)]
    row_filter: Option<String>,
    #[serde(default)]
    column_redactions: std::collections::HashMap<String, shared::types::Redaction>,
    #[serde(default)]
    description: String,
    #[serde(default)]
    created_by: String,
}

async fn create_view(
    State(registry): State<Registry>,
    Json(req): Json<CreateViewRequest>,
) -> impl IntoResponse {
    let view = shared::types::AiView {
        name: req.name,
        base_dataset: req.base_dataset,
        columns: req.columns,
        row_filter: req.row_filter,
        column_redactions: req.column_redactions,
        created_at: chrono::Utc::now(),
        created_by: req.created_by,
        description: req.description,
    };
    match registry.put_view(view).await {
        Ok(v) => Ok((StatusCode::CREATED, Json(v))),
        Err(e) => Err((StatusCode::BAD_REQUEST, e)),
    }
}

async fn list_views(State(registry): State<Registry>) -> impl IntoResponse {
    Json(registry.list_views().await)
}

async fn get_view(
    State(registry): State<Registry>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match registry.get_view(&name).await {
        Some(v) => Ok(Json(v)),
        None => Err((StatusCode::NOT_FOUND, format!("view not found: {name}"))),
    }
}

async fn delete_view(
    State(registry): State<Registry>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match registry.delete_view(&name).await {
        Ok(()) => Ok(StatusCode::NO_CONTENT),
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}
// --- Phase E: soft-delete tombstones ---

#[derive(Deserialize)]
struct TombstoneRequest {
    row_key_column: String,
    row_key_values: Vec<String>,
    #[serde(default)]
    actor: String,
    #[serde(default)]
    reason: String,
}

#[derive(Serialize)]
struct TombstoneResponse {
    dataset: String,
    row_key_column: String,
    rows_tombstoned: usize,
    failures: Vec<String>,
}

async fn tombstone_rows(
    State(registry): State<Registry>,
    Path(name): Path<String>,
    Json(req): Json<TombstoneRequest>,
) -> impl IntoResponse {
    if req.row_key_values.is_empty() {
        return Err((StatusCode::BAD_REQUEST, "row_key_values is empty".to_string()));
    }
    let mut ok = 0;
    let mut failures = Vec::new();
    for value in &req.row_key_values {
        match registry
            .add_tombstone(&name, &req.row_key_column, value, &req.actor, &req.reason)
            .await
        {
            Ok(_) => ok += 1,
            Err(e) => failures.push(format!("{value}: {e}")),
        }
    }
    let status = if ok > 0 && failures.is_empty() {
        StatusCode::CREATED
    } else if ok > 0 {
        StatusCode::MULTI_STATUS
    } else {
        StatusCode::BAD_REQUEST
    };
    Ok((status, Json(TombstoneResponse {
        dataset: name,
        row_key_column: req.row_key_column,
        rows_tombstoned: ok,
        failures,
    })))
}

async fn list_tombstones(
    State(registry): State<Registry>,
    Path(name): Path<String>,
) -> impl IntoResponse {
    match registry.list_tombstones(&name).await {
        Ok(ts) => Ok(Json(ts)),
        Err(e) => Err((StatusCode::INTERNAL_SERVER_ERROR, e)),
    }
}