Phase G0 Day 5 ships queryd: in-memory DuckDB with custom Connector
that runs INSTALL httpfs / LOAD httpfs / CREATE OR REPLACE SECRET
(TYPE S3) on every new connection, sourced from SecretsProvider +
shared.S3Config. SetMaxOpenConns(1) forces the registrar's CREATE
VIEWs and the handler's SELECTs to serialize through a single
connection, avoiding cross-connection MVCC visibility edge cases.
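The per-connection bootstrap described above can be sketched as a plain statement list. This is a hedged sketch: the secret name `s3_creds`, the `bootstrapSQL` helper, and its parameters are hypothetical — the real Connector pulls these values from SecretsProvider + shared.S3Config.

```go
package main

import "fmt"

// bootstrapSQL returns the statements a custom Connector would run on
// every new DuckDB connection: install/load httpfs, then create the S3
// secret. Secret name and URL_STYLE are illustrative choices.
func bootstrapSQL(keyID, secret, region, endpoint string) []string {
	return []string{
		"INSTALL httpfs",
		"LOAD httpfs",
		fmt.Sprintf(
			"CREATE OR REPLACE SECRET s3_creds (TYPE S3, KEY_ID '%s', SECRET '%s', REGION '%s', ENDPOINT '%s', URL_STYLE 'path')",
			keyID, secret, region, endpoint),
	}
}

func main() {
	for _, stmt := range bootstrapSQL("minioadmin", "minioadmin", "us-east-1", "localhost:9000") {
		fmt.Println(stmt)
	}
}
```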
Registrar.Refresh reads catalogd /catalog/list, runs CREATE OR
REPLACE VIEW "name" AS SELECT * FROM read_parquet('s3://bucket/key')
per manifest, drops views for removed manifests, skips on unchanged
updated_at (the implicit etag). Drop pass runs BEFORE create pass so
a poison manifest can't block other manifest refreshes (post-scrum
C1 fix).
POST /sql with JSON body {"sql":"…"} returns
{"columns":[{"name":"id","type":"BIGINT"},…], "rows":[[…]],
"row_count":N}. []byte → string conversion so VARCHAR rows
JSON-encode as text. 30s default refresh ticker, configurable via
[queryd].refresh_every.
Cross-lineage scrum on shipped code:
- Opus 4.7 (opencode): 1 BLOCK + 4 WARN + 4 INFO
- Kimi K2-0905 (openrouter): 2 BLOCK + 2 WARN + 1 INFO
- Qwen3-coder (openrouter): 2 BLOCK + 1 WARN + 1 INFO
Fixed (4):
C1 (Opus + Kimi convergent): Refresh aborts on first per-view error
→ drop pass first, collect errors, errors.Join. Poison manifest
no longer blocks the rest of the catalog from re-syncing.
B-CTX (Opus BLOCK): bootstrap closure captured OpenDB's ctx →
cancelled-ctx silently fails every reconnect. context.Background()
inside closure; passed ctx only for initial Ping.
B-LEAK (Kimi BLOCK): firstLine(stmt) truncated CREATE SECRET to 80
chars but those 80 chars contained KEY_ID + SECRET prefix → log
aggregator captures credentials. Stable per-statement labels +
redactCreds() filter on wrapped DuckDB errors.
JSON-ERR (Opus WARN): swallowed json.Encode error → silent
truncated 200 on unsupported column types. slog.Warn the failure.
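The redaction half of the B-LEAK fix can be sketched as a plain string scrub; the function name and signature here are illustrative, not the shipped API:

```go
package main

import (
	"fmt"
	"strings"
)

// redactCreds scrubs known credential values out of an error message
// before it reaches logs. It is the second, defense-in-depth layer:
// the first is never passing the SQL text into the error path at all.
func redactCreds(msg string, secrets ...string) string {
	for _, s := range secrets {
		if s == "" {
			continue // guard: ReplaceAll with "" would mangle the message
		}
		msg = strings.ReplaceAll(msg, s, "[REDACTED]")
	}
	return msg
}

func main() {
	err := fmt.Errorf("duckdb: CREATE SECRET failed near KEY_ID 'minioadmin'")
	fmt.Println(redactCreds(err.Error(), "minioadmin", ""))
}
```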
Dismissed (4 false positives):
Qwen BLOCK "bootstrap not transactional" — DuckDB DDL is auto-commit
Qwen BLOCK "MaxBytesReader after Decode" — false, applied before
Kimi BLOCK "concurrent Refresh + user SELECT deadlock" — not a
deadlock, just serialization, by design with 10s timeout retry
Kimi WARN "dropView leaves r.known inconsistent" — current code
returns before the delete; the entry persists for retry
Critical reviewer behavior: 1 convergent BLOCK between Opus + Kimi
on the per-view error blocking, plus two independent single-reviewer
BLOCKs (B-CTX, B-LEAK) that smoke could never have caught. The
B-LEAK fix uses defense-in-depth: never pass SQL into the error
path AND redact known cred values from DuckDB's own error message.
DuckDB cgo path: github.com/duckdb/duckdb-go/v2 v2.10502.0 (per
ADR-001 §1) on Go 1.25 + arrow-go. Smoke 6/6 PASS after every
fix round.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
// Package shared also provides the TOML config loader. Per ADR
// equivalent of Rust ADR-006 (TOML config over env vars), every
// service reads `lakehouse.toml` with sane defaults and env
// overrides. Config is hot-reload-unaware in G0; reload-on-SIGHUP
// is a G1+ concern.
package shared

import (
	"errors"
	"fmt"
	"io/fs"
	"log/slog"
	"os"

	"github.com/pelletier/go-toml/v2"
)

// Config is the unified Lakehouse config. Each service reads only
// the section it cares about, but they all share the same file so
// operators have one place to look.
type Config struct {
	Gateway  ServiceConfig `toml:"gateway"`
	Storaged ServiceConfig `toml:"storaged"`
	Catalogd CatalogConfig `toml:"catalogd"`
	Ingestd  IngestConfig  `toml:"ingestd"`
	Queryd   QuerydConfig  `toml:"queryd"`
	S3       S3Config      `toml:"s3"`
	Log      LogConfig     `toml:"log"`
}

// IngestConfig adds ingestd-specific knobs. ingestd needs to PUT
// parquet to storaged AND register manifests with catalogd, so it
// holds two upstream URLs in addition to its own bind.
type IngestConfig struct {
	Bind        string `toml:"bind"`
	StoragedURL string `toml:"storaged_url"`
	CatalogdURL string `toml:"catalogd_url"`
}

// QuerydConfig adds queryd-specific knobs. queryd talks DuckDB
// directly to MinIO via DuckDB's httpfs extension (so no storaged
// URL needed), and reads the catalog over HTTP for view registration.
// SecretsPath defaults to /etc/lakehouse/secrets-go.toml — the same
// file storaged uses, since both services need the S3 credentials.
type QuerydConfig struct {
	Bind         string `toml:"bind"`
	CatalogdURL  string `toml:"catalogd_url"`
	SecretsPath  string `toml:"secrets_path"`
	RefreshEvery string `toml:"refresh_every"` // duration string, e.g. "30s"
}

// CatalogConfig adds catalogd-specific knobs on top of the standard
// bind. StoragedURL points at the storaged service for manifest
// persistence; G0 defaults to the localhost bind.
type CatalogConfig struct {
	Bind        string `toml:"bind"`
	StoragedURL string `toml:"storaged_url"`
}

// ServiceConfig is the per-binary bind config. Default Bind ""
// means "use the service's hardcoded G0 default" — see DefaultConfig.
type ServiceConfig struct {
	Bind string `toml:"bind"`
}

// S3Config holds S3-compatible storage settings. Endpoint blank →
// AWS default. Bucket "" → "lakehouse-primary".
type S3Config struct {
	Endpoint        string `toml:"endpoint"`
	Region          string `toml:"region"`
	Bucket          string `toml:"bucket"`
	AccessKeyID     string `toml:"access_key_id"`
	SecretAccessKey string `toml:"secret_access_key"`
	UsePathStyle    bool   `toml:"use_path_style"`
}

// LogConfig — slog level for now; structured fields land G1+.
type LogConfig struct {
	Level string `toml:"level"`
}

// DefaultConfig returns the G0 dev defaults. Ports are shifted to
// 3110+ to coexist with the live Rust lakehouse on 3100/3201-3204
// during the migration. G5 cutover flips gateway back to 3100.
func DefaultConfig() Config {
	return Config{
		Gateway:  ServiceConfig{Bind: "127.0.0.1:3110"},
		Storaged: ServiceConfig{Bind: "127.0.0.1:3211"},
		Catalogd: CatalogConfig{Bind: "127.0.0.1:3212", StoragedURL: "http://127.0.0.1:3211"},
		Ingestd: IngestConfig{
			Bind:        "127.0.0.1:3213",
			StoragedURL: "http://127.0.0.1:3211",
			CatalogdURL: "http://127.0.0.1:3212",
		},
		Queryd: QuerydConfig{
			Bind:         "127.0.0.1:3214",
			CatalogdURL:  "http://127.0.0.1:3212",
			SecretsPath:  "/etc/lakehouse/secrets-go.toml",
			RefreshEvery: "30s",
		},
		S3: S3Config{
			Endpoint:     "http://localhost:9000",
			Region:       "us-east-1",
			Bucket:       "lakehouse-primary",
			UsePathStyle: true,
		},
		Log: LogConfig{Level: "info"},
	}
}

// LoadConfig reads `lakehouse.toml` from path; if path is empty or
// the file doesn't exist, returns DefaultConfig. Any decode error is
// fatal (we don't want a misconfigured service silently falling back
// to defaults — that's the kind of bug you find at 2am).
//
// Per Opus + Qwen WARN #3: when path WAS given but the file is
// missing, log a warning so silent default-fallback doesn't hide
// misconfiguration. Empty path is fine (caller didn't ask for a
// file); non-empty + missing is suspicious.
func LoadConfig(path string) (Config, error) {
	cfg := DefaultConfig()
	if path == "" {
		return cfg, nil
	}
	b, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		slog.Warn("config file not found, using defaults",
			"path", path,
			"hint", "create the file or pass -config /path/to/lakehouse.toml")
		return cfg, nil
	}
	if err != nil {
		return cfg, fmt.Errorf("read config: %w", err)
	}
	if err := toml.Unmarshal(b, &cfg); err != nil {
		return cfg, fmt.Errorf("parse config: %w", err)
	}
	return cfg, nil
}
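Because LoadConfig unmarshals into a pre-filled DefaultConfig, a lakehouse.toml only needs the keys an operator wants to override; everything omitted keeps its Go-side default. A hypothetical example (values chosen for illustration):

```toml
# lakehouse.toml — partial override; omitted keys keep DefaultConfig values.
[queryd]
bind = "127.0.0.1:3214"
refresh_every = "10s"   # tighter view-refresh ticker than the 30s default

[s3]
endpoint = "http://localhost:9000"
bucket = "lakehouse-primary"
use_path_style = true

[log]
level = "debug"
```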