Phase G0 Day 5 ships queryd: an in-memory DuckDB behind a custom
Connector that runs INSTALL httpfs / LOAD httpfs / CREATE OR REPLACE
SECRET (TYPE S3) on every new connection, sourced from
SecretsProvider + shared.S3Config. SetMaxOpenConns(1) ensures the
registrar's CREATE VIEWs and the handler's SELECTs serialize through
one connection (avoids cross-connection MVCC visibility edge cases).
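A minimal sketch of that bootstrap, assuming the NewConnector(dsn,
initFn) hook the duckdb-go driver exposes; S3Config, the secret name
s3_primary, and secretSQL are illustrative stand-ins for
SecretsProvider + shared.S3Config, not the shipped code:

package queryd

import (
	"context"
	"database/sql"
	"database/sql/driver"
	"fmt"
	"strings"

	duckdb "github.com/duckdb/duckdb-go/v2"
)

type S3Config struct {
	Endpoint string // e.g. "http://localhost:9000"
	Region   string
	KeyID    string
	Secret   string
}

// secretSQL renders the per-connection secret. DuckDB's S3 secret
// takes ENDPOINT as host:port without a scheme, so the configured
// http:// prefix is stripped; URL_STYLE 'path' + USE_SSL false match
// the MinIO dev endpoint.
func secretSQL(s3 S3Config) string {
	host := strings.TrimPrefix(s3.Endpoint, "http://")
	return fmt.Sprintf("CREATE OR REPLACE SECRET s3_primary ("+
		"TYPE S3, KEY_ID '%s', SECRET '%s', ENDPOINT '%s', "+
		"REGION '%s', URL_STYLE 'path', USE_SSL false)",
		s3.KeyID, s3.Secret, host, s3.Region)
}

// OpenDB opens an in-memory DuckDB whose every pooled connection runs
// the httpfs + secret bootstrap before it is handed out.
func OpenDB(ctx context.Context, s3 S3Config) (*sql.DB, error) {
	connector, err := duckdb.NewConnector("", func(execer driver.ExecerContext) error {
		// context.Background(), NOT the OpenDB ctx: the pool may
		// reconnect long after the caller's ctx is cancelled (B-CTX).
		bootCtx := context.Background()
		steps := []struct{ label, sql string }{
			{"install-httpfs", "INSTALL httpfs"},
			{"load-httpfs", "LOAD httpfs"},
			{"create-secret", secretSQL(s3)},
		}
		for _, s := range steps {
			if _, err := execer.ExecContext(bootCtx, s.sql, nil); err != nil {
				// Stable label, never the SQL text, in the error (B-LEAK);
				// shipped code also runs the wrapped error through redactCreds.
				return fmt.Errorf("bootstrap %s: %w", s.label, err)
			}
		}
		return nil
	})
	if err != nil {
		return nil, err
	}
	db := sql.OpenDB(connector)
	db.SetMaxOpenConns(1) // registrar DDL and handler SELECTs share one conn
	if err := db.PingContext(ctx); err != nil { // caller ctx only for the first Ping
		db.Close()
		return nil, err
	}
	return db, nil
}

Running the secret statement per connection rather than once means a
pool reconnect can never come up without credentials, and with
SetMaxOpenConns(1) the pool is in practice a single serialized
connection.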
Registrar.Refresh reads catalogd /catalog/list, runs CREATE OR
REPLACE VIEW "name" AS SELECT * FROM read_parquet('s3://bucket/key')
per manifest, drops views for removed manifests, and skips manifests
whose updated_at is unchanged (the implicit etag). The drop pass runs
BEFORE the create pass so a poison manifest can't block other
manifests from refreshing (post-scrum C1 fix).
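A sketch of that refresh pass under stated assumptions: the Manifest
fields and the listCatalog client are inferred from the behavior
above, and %q identifier quoting is enough for the simple view names
G0 emits:

package queryd

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
)

type Manifest struct {
	Name      string `json:"name"`
	S3URI     string `json:"s3_uri"`     // s3://bucket/key
	UpdatedAt string `json:"updated_at"` // the implicit etag
}

type Registrar struct {
	db          *sql.DB
	known       map[string]string // view name -> updated_at last applied
	listCatalog func(context.Context) ([]Manifest, error)
}

// sqlLiteral single-quotes a SQL string literal (illustrative; real
// code should validate the URI shape too).
func sqlLiteral(s string) string {
	return "'" + strings.ReplaceAll(s, "'", "''") + "'"
}

func (r *Registrar) Refresh(ctx context.Context) error {
	manifests, err := r.listCatalog(ctx) // GET catalogd /catalog/list
	if err != nil {
		return err
	}
	live := make(map[string]bool, len(manifests))
	for _, m := range manifests {
		live[m.Name] = true
	}

	var errs []error

	// Drop pass FIRST (C1): removed views go away even when a later
	// create fails on a poison manifest.
	for name := range r.known {
		if !live[name] {
			if _, err := r.db.ExecContext(ctx, fmt.Sprintf("DROP VIEW IF EXISTS %q", name)); err != nil {
				errs = append(errs, fmt.Errorf("drop %s: %w", name, err))
				continue // keep the r.known entry so the drop retries next tick
			}
			delete(r.known, name)
		}
	}

	// Create pass: collect per-view errors instead of aborting (C1);
	// skip manifests whose updated_at hasn't moved.
	for _, m := range manifests {
		if r.known[m.Name] == m.UpdatedAt {
			continue
		}
		stmt := fmt.Sprintf("CREATE OR REPLACE VIEW %q AS SELECT * FROM read_parquet(%s)",
			m.Name, sqlLiteral(m.S3URI))
		if _, err := r.db.ExecContext(ctx, stmt); err != nil {
			errs = append(errs, fmt.Errorf("view %s: %w", m.Name, err))
			continue
		}
		r.known[m.Name] = m.UpdatedAt
	}
	return errors.Join(errs...) // nil when errs is empty
}

The continue in the drop pass deliberately leaves the r.known entry
in place on a failed drop, matching the dismissed Kimi WARN below:
the entry persists so the drop retries on the next tick.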
POST /sql with JSON body {"sql":"…"} returns
{"columns":[{"name":"id","type":"BIGINT"},…], "rows":[[…]],
"row_count":N}. A []byte → string conversion makes VARCHAR rows
JSON-encode as text rather than base64. The refresh ticker defaults
to 30s, configurable via [queryd].refresh_every.
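A hedged sketch of the handler: the 1 MiB body cap and the exact
helper wiring are assumptions, and redactCreds is sketched after the
scrum notes below:

package queryd

import (
	"database/sql"
	"encoding/json"
	"log/slog"
	"net/http"
)

func handleSQL(db *sql.DB) http.HandlerFunc {
	type request struct {
		SQL string `json:"sql"`
	}
	type column struct {
		Name string `json:"name"`
		Type string `json:"type"`
	}
	type response struct {
		Columns  []column `json:"columns"`
		Rows     [][]any  `json:"rows"`
		RowCount int      `json:"row_count"`
	}
	return func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, 1<<20) // cap BEFORE Decode
		var req request
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		rows, err := db.QueryContext(r.Context(), req.SQL)
		if err != nil {
			// Shipped code passes this through redactCreds first (B-LEAK).
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		defer rows.Close()

		types, err := rows.ColumnTypes()
		if err != nil {
			http.Error(w, "column types", http.StatusInternalServerError)
			return
		}
		resp := response{Rows: [][]any{}} // "rows":[] rather than null
		for _, t := range types {
			resp.Columns = append(resp.Columns, column{Name: t.Name(), Type: t.DatabaseTypeName()})
		}
		for rows.Next() {
			vals := make([]any, len(types))
			ptrs := make([]any, len(types))
			for i := range vals {
				ptrs[i] = &vals[i]
			}
			if err := rows.Scan(ptrs...); err != nil {
				http.Error(w, "scan failed", http.StatusInternalServerError)
				return
			}
			for i, v := range vals {
				if b, ok := v.([]byte); ok {
					vals[i] = string(b) // VARCHAR as text, not base64
				}
			}
			resp.Rows = append(resp.Rows, vals)
			resp.RowCount++
		}
		if err := rows.Err(); err != nil {
			http.Error(w, "row iteration failed", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		if err := json.NewEncoder(w).Encode(resp); err != nil {
			slog.Warn("encoding /sql response failed", "err", err) // JSON-ERR fix
		}
	}
}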
Cross-lineage scrum on shipped code:
- Opus 4.7 (opencode): 1 BLOCK + 4 WARN + 4 INFO
- Kimi K2-0905 (openrouter): 2 BLOCK + 2 WARN + 1 INFO
- Qwen3-coder (openrouter): 2 BLOCK + 1 WARN + 1 INFO
Fixed (4):
C1 (Opus + Kimi convergent): Refresh aborts on first per-view error
→ drop pass first, collect errors, errors.Join. Poison manifest
no longer blocks the rest of the catalog from re-syncing.
B-CTX (Opus BLOCK): bootstrap closure captured OpenDB's ctx →
cancelled-ctx silently fails every reconnect. context.Background()
inside closure; passed ctx only for initial Ping.
B-LEAK (Kimi BLOCK): firstLine(stmt) truncated CREATE SECRET to 80
chars but those 80 chars contained KEY_ID + SECRET prefix → log
aggregator captures credentials. Stable per-statement labels +
redactCreds() filter on wrapped DuckDB errors.
JSON-ERR (Opus WARN): swallowed json.Encode error → silently
truncated 200 on unsupported column types. Fix: slog.Warn the
failure.
Dismissed (4 false positives):
Qwen BLOCK "bootstrap not transactional" — DuckDB DDL is auto-commit
Qwen BLOCK "MaxBytesReader after Decode" — false, applied before
Kimi BLOCK "concurrent Refresh + user SELECT deadlock" — not a
deadlock, just serialization, by design with 10s timeout retry
Kimi WARN "dropView leaves r.known inconsistent" — current code
returns before the delete; the entry persists for retry
Critical reviewer behavior: one convergent BLOCK between Opus + Kimi
on the per-view error blocking, plus two independent single-reviewer
BLOCKs (B-CTX, B-LEAK) that smoke tests could never have caught. The
B-LEAK fix uses defense-in-depth: never pass SQL into the error path
AND redact known credential values from DuckDB's own error message.
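The redaction half of that defense, as a minimal sketch; the
package-level registry and exact function shapes are assumptions,
not the shipped code:

package queryd

import "strings"

// secretValues holds credential strings registered at startup from
// the SecretsProvider (key ID, secret key, ...).
var secretValues []string

func RegisterSecret(v string) {
	if v != "" {
		secretValues = append(secretValues, v)
	}
}

// redactCreds scrubs every registered credential value from an error
// message before it reaches slog or an HTTP response. Paired with the
// stable per-statement labels, raw SQL never enters the error path,
// and anything DuckDB itself echoes back is redacted anyway.
func redactCreds(msg string) string {
	for _, v := range secretValues {
		msg = strings.ReplaceAll(msg, v, "[REDACTED]")
	}
	return msg
}

At startup queryd would call RegisterSecret(s3.KeyID) and
RegisterSecret(s3.Secret) before opening any connection (assumed
usage, consistent with the fix described above).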
DuckDB cgo path: github.com/duckdb/duckdb-go/v2 v2.10502.0 (per
ADR-001 §1) on Go 1.25 + arrow-go. Smoke 6/6 PASS after every
fix round.
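As a go.mod excerpt (the module path and the arrow-go pin are
placeholders; only the duckdb-go version and Go release come from
this note):

module example.com/lakehouse-go // placeholder module path

go 1.25

require (
	github.com/apache/arrow-go/v18 v18.0.0 // assumed arrow-go pin; use the project's actual version
	github.com/duckdb/duckdb-go/v2 v2.10502.0 // cgo DuckDB driver, pinned per ADR-001 §1
)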
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The shipped G0 config (TOML, 38 lines, 1.1 KiB):

# Lakehouse-Go config — G0 dev defaults. Overrides via env are a
# G1+ concern; for G0 edit this file and restart the affected service.

# G0 dev ports — shifted to 3110+ so the Go services run alongside
# the live Rust lakehouse on 3100/3201-3204 without colliding. G5
# (demo cutover) flips gateway back to 3100 when Rust retires.

[gateway]
bind = "127.0.0.1:3110"

[storaged]
bind = "127.0.0.1:3211"

[catalogd]
bind = "127.0.0.1:3212"
storaged_url = "http://127.0.0.1:3211"

[ingestd]
bind = "127.0.0.1:3213"
storaged_url = "http://127.0.0.1:3211"
catalogd_url = "http://127.0.0.1:3212"

[queryd]
bind = "127.0.0.1:3214"
catalogd_url = "http://127.0.0.1:3212"
secrets_path = "/etc/lakehouse/secrets-go.toml"
refresh_every = "30s"

[s3]
endpoint = "http://localhost:9000"
region = "us-east-1"
bucket = "lakehouse-go-primary" # G0 dedicated bucket so Rust + Go coexist
access_key_id = "" # populated by SecretsProvider from /etc/lakehouse/secrets-go.toml
secret_access_key = "" # ditto
use_path_style = true

[log]
level = "info"