trace-id propagation through /v1/iterate (multi-call observability)

Closes J's 2026-05-02 multi-call observability gap: a single
/v1/iterate session with N retries used to surface in Langfuse as
N+1 disconnected traces (one per /v1/chat hop + one for the iterate
request itself), with no parent/child linkage. Operators couldn't
scroll the retry chain in one trace tree to spot where grounding
failed.

## Wire-level change

- New header constant `shared.TraceIDHeader = "X-Lakehouse-Trace-Id"`
- `langfuseMiddleware` honors the header on inbound requests: if
  set, reuses that trace id instead of minting a new one. Stashes
  the trace id on the request context so handlers can attach
  application-level child spans.
- `validatord.chatCaller` forwards the header to chatd. Every chat
  hop in an iterate session lands as a child of the parent trace.

## Application-level spans

- `validator.IterateConfig` gains `Tracer` (an optional callback).
  When wired, each iteration attempt emits one Langfuse span,
  described by `validator.AttemptSpan`:
    Name: iterate.attempt[N]
    Input: { iteration, model, provider, prompt }
    Output: { verdict, raw, error }
    Level: WARNING when verdict != accepted
- `validatord.iterTracer` is the production hook — bridges
  `validator.Tracer` → `langfuse.Client.Span`.
- `IterateRequest`/`IterateResponse`/`IterateFailure` gain
  `TraceID`; each `IterateAttempt` gains `SpanID`. The /v1/iterate
  caller can pivot from the JSON response straight into the
  Langfuse trace tree.

## What an operator sees post-cutover

  POST /v1/iterate {kind=fill, prompt=...} → Trace TR-1
    ├─ http.request span (from middleware)
    ├─ iterate.attempt[0] span (validator.Iterate emit)
    │     input: prompt+model
    │     output: { verdict: validation_failed, error: ..., raw }
    ├─ chatd /v1/chat call (X-Lakehouse-Trace-Id: TR-1)
    │     ├─ http.request span (chatd middleware)
    │     └─ chatd-internal spans (existing)
    ├─ iterate.attempt[1] span
    └─ ...

All in one Langfuse trace tree, not N+1 separate traces.

## Hallucinated-worker safety net is unchanged

The /v1/iterate flow's hard correctness gate is still
FillValidator + WorkerLookup. Phantom candidate IDs raise
ValidationError::Consistency which 422s and forces the iteration
loop to retry. The trace-id propagation is the OBSERVABILITY layer
on top — it makes the existing safety net's outcomes visible per-call,
not a replacement for it.

## Verification

- internal/validator: 3 new tests
  - TestIterate_TracerEmitsSpanPerAttempt — span/attempt count + SpanID
  - TestIterate_NoTraceIDSkipsTracer — no orphan spans without trace_id
  - TestIterate_ChatCallerReceivesTraceID — propagation contract
  - (existing iterate tests updated for new ChatCaller signature)
- internal/shared: 1 new test
  - TestLangfuseMiddleware_HonorsTraceIDHeader — cross-service linkage
- cmd/validatord: existing HTTP tests still PASS via the dual-shape
  UnmarshalJSON contract.
- validatord_smoke.sh: 5/5 PASS through gateway :3110 (unchanged).
- Full go test ./... green across 33 packages.

## Architecture invariant added

STATE_OF_PLAY "DO NOT RELITIGATE" gains a paragraph documenting
the X-Lakehouse-Trace-Id header contract + the iterate.attempt[N]
span emission. Future-Claude won't re-propose "wire trace-id
propagation" — the header IS the wiring.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
root 2026-05-02 05:13:18 -05:00
parent afdeca80d9
commit d6d2fdf81f
7 changed files with 366 additions and 39 deletions


@ -1,6 +1,6 @@
# STATE OF PLAY — Lakehouse-Go
-**Last verified:** 2026-05-02 ~06:30 CDT
+**Last verified:** 2026-05-02 ~07:30 CDT
**Verified by:** **production-readiness gauntlet** — 21/21 smoke chain green, per-component scrum across 4 bundles, **3 cross-runtime parity probes all green post-fix** (validator: **6/6 match** after wire-format alignment shipped; materializer: 2/2 after omitempty fix; extract_json: 12/12). All findings surfaced by the parity probes have been actioned. Disposition: `reports/cutover/gauntlet_2026-05-02/disposition.md`.
> **Read this FIRST.** When the user says "we're working on lakehouse," default to the Go rewrite (this repo); the Rust legacy at `/home/profit/lakehouse/` is maintenance-only. If memory contradicts this file, this file wins. Update it when something is verified working — not when a phase finishes.
@ -216,6 +216,7 @@ Verbatim verdicts at `reports/scrum/_evidence/2026-04-30/verdicts/`. Disposition
- **Langfuse Go-side client lives at `internal/langfuse/`** with best-effort fail-open posture. URL+creds from `/etc/lakehouse/langfuse.env`. Don't propose to "wire Langfuse on Go side" — it's wired; multi_coord_stress is the proof.
- **vectord's source-of-truth is `i.vectors`, NOT the coder/hnsw graph.** The `Index` struct holds a parallel `vectors map[string][]float32` updated on every successful Add/Delete; the graph is a derived, replaceable view. `safeGraphAdd`/`safeGraphDelete` wrap the library's panic-prone ops; `rebuildGraphLocked` reads from `i.vectors` (graph-state-independent). Don't propose to "drop the side map for memory" — it's the load-bearing piece that makes Add panic-recoverable past the small-index threshold (closes the multitier_100k 277884b 96-98% fail). The prior `i.ids` set was folded into `i.vectors` keys.
- **vectord saves are coalesced async, not synchronous.** `cmd/vectord/main.go` runs a per-index `saveTask` that single-flights through `Persistor.Save` — at most one in-flight + one pending. Add returns OK before the save completes; an Add-then-crash can lose ~1 save's worth of data, matching ADR-005's fail-open posture. Don't propose to "make saves synchronous for durability" — that re-introduces the lock-contention bottleneck (1-2.5s tail at conc=50, observed 2026-05-01) without fixing a real durability hole (in-memory state is the source of truth in flight).
- **`X-Lakehouse-Trace-Id` header propagates Langfuse parent traces across daemon boundaries.** When validatord's `/v1/iterate` calls chatd's `/v1/chat`, it forwards the header so chatd's middleware reuses the parent trace id instead of minting a new one. Each iteration attempt also emits a child span (`iterate.attempt[N]`) carrying the prompt, raw model output, and validator verdict. Result: an iterate session with N retries shows in Langfuse as ONE trace tree, not N+1 disconnected traces. Don't propose to "wire trace-id propagation" — it's wired; the test at `internal/shared/langfuse_middleware_test.go::TestLangfuseMiddleware_HonorsTraceIDHeader` is the proof. Closes J's 2026-05-02 multi-call observability concern.
---


@ -34,6 +34,7 @@ import (
"github.com/go-chi/chi/v5"
"git.agentview.dev/profit/golangLAKEHOUSE/internal/langfuse"
"git.agentview.dev/profit/golangLAKEHOUSE/internal/shared"
"git.agentview.dev/profit/golangLAKEHOUSE/internal/validator"
)
@ -73,6 +74,11 @@ func main() {
DefaultMaxIterations: cfg.Validatord.DefaultMaxIterations,
DefaultMaxTokens: cfg.Validatord.DefaultMaxTokens,
},
// Same env loader the middleware uses — getting our own
// reference lets us emit child spans of the per-request trace.
// nil when Langfuse env isn't set; the iterate handler skips
// span emission gracefully in that case.
lf: shared.LoadLangfuseFromEnv(),
}
if err := shared.Run("validatord", cfg.Validatord.Bind, h.register, cfg.Auth); err != nil {
@ -86,6 +92,12 @@ type handlers struct {
chatdURL string
chatClient *http.Client
iterCfg validator.IterateConfig
// lf is the Langfuse client (nil when unconfigured — best-effort
// posture matching the rest of the stack). Per-attempt iteration
// spans get emitted via lf when set. The HTTP middleware that
// creates the parent trace lives in shared.Run; this client lets
// validatord's handler emit application-level child spans.
lf *langfuse.Client
}
func (h *handlers) register(r chi.Router) {
@ -224,6 +236,16 @@ func (h *handlers) handleIterate(w http.ResponseWriter, r *http.Request) {
return
}
// Pull the per-request trace id from the langfuse middleware. If
// the caller forwarded an upstream trace via X-Lakehouse-Trace-Id
// the middleware reuses that one; otherwise it minted a fresh trace
// at HTTP entry. Either way, we propagate it so chat hops nest
// under the same parent and operators can pivot from the iterate
// response's trace_id straight into the full Langfuse tree.
if req.TraceID == "" {
req.TraceID = shared.TraceIDFromCtx(r.Context())
}
chat := h.chatCaller()
validate := func(kind string, artifact map[string]any) (validator.Report, error) {
report, vErr, kindErr := h.runValidator(kind, artifact, req.Context)
@ -239,7 +261,15 @@ func (h *handlers) handleIterate(w http.ResponseWriter, r *http.Request) {
return *report, nil
}
-resp, fail, err := validator.Iterate(r.Context(), req, h.iterCfg, chat, validate)
+// Wire per-attempt span emission when Langfuse is configured —
+// each chat→validate cycle becomes a child span of the iterate
+// trace, with prompt/raw/verdict captured for inspection.
+cfg := h.iterCfg
+if h.lf != nil {
+cfg.Tracer = h.iterTracer(r.Context())
+}
+resp, fail, err := validator.Iterate(r.Context(), req, cfg, chat, validate)
if err != nil {
http.Error(w, err.Error(), http.StatusBadGateway)
return
@ -254,8 +284,14 @@ func (h *handlers) handleIterate(w http.ResponseWriter, r *http.Request) {
// chatCaller wires the iteration loop to chatd via HTTP. Builds the
// chat.Request shape, posts to ${chatdURL}/chat, returns the content
// string (no choices wrapper — chatd's response is already flat).
//
// When traceID is non-empty, forwards it as the `X-Lakehouse-Trace-Id`
// header so chatd's middleware attaches its trace to the same
// Langfuse parent. Closes the multi-call observability gap: a
// /v1/iterate session shows in Langfuse as one trace with N child
// chat spans, instead of N+1 disconnected traces.
func (h *handlers) chatCaller() validator.ChatCaller {
-return func(ctx context.Context, system, user, _, model string, temp *float64, maxTokens int) (string, error) {
+return func(ctx context.Context, system, user, _, model string, temp *float64, maxTokens int, traceID string) (string, error) {
messages := make([]map[string]string, 0, 2)
if system != "" {
messages = append(messages, map[string]string{"role": "system", "content": system})
@ -278,6 +314,9 @@ func (h *handlers) chatCaller() validator.ChatCaller {
return "", fmt.Errorf("build chat req: %w", err)
}
req.Header.Set("Content-Type", "application/json")
if traceID != "" {
req.Header.Set(shared.TraceIDHeader, traceID)
}
resp, err := h.chatClient.Do(req)
if err != nil {
return "", fmt.Errorf("chat hop: %w", err)
@ -297,6 +336,49 @@ func (h *handlers) chatCaller() validator.ChatCaller {
}
}
// iterTracer adapts validator.Tracer → langfuse.Client.Span. Each
// iteration attempt becomes one Langfuse span on the parent trace
// with input={prompt, model, provider}, output={raw, verdict}.
// Operators reading Langfuse can scroll the iterate session and see
// the full retry chain — which prompt produced which raw, which
// validator verdict landed, which retry the model recovered on.
//
// Returns the span id so it lands in IterateAttempt.SpanID — the
// /v1/iterate response carries that id back to the caller for log
// correlation.
//
// Errors are silent (best-effort posture per package langfuse). A
// dropped span never blocks the iterate loop.
func (h *handlers) iterTracer(ctx context.Context) validator.Tracer {
return func(s validator.AttemptSpan) string {
level := "DEFAULT"
if s.Verdict.Kind != "accepted" {
level = "WARNING"
}
return h.lf.Span(ctx, langfuse.SpanInput{
TraceID: s.TraceID,
Name: fmt.Sprintf("iterate.attempt[%d]", s.Iteration),
Input: map[string]any{
"iteration": s.Iteration,
"model": s.Model,
"provider": s.Provider,
"prompt": trim(s.Prompt, 4000),
},
Output: map[string]any{
"verdict": s.Verdict.Kind,
"error": s.Verdict.Error,
"raw": trim(s.Raw, 4000),
},
Metadata: map[string]any{
"verdict_error": s.Verdict.Error,
},
StartTime: s.StartTime,
EndTime: s.EndTime,
Level: level,
})
}
}
func writeJSON(w http.ResponseWriter, status int, body any) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)


@ -1,6 +1,7 @@
package shared
import (
"context"
"net/http"
"os"
"time"
@ -8,6 +9,30 @@ import (
"git.agentview.dev/profit/golangLAKEHOUSE/internal/langfuse"
)
// TraceIDHeader propagates a Langfuse trace id across services. When
// validatord makes a /v1/iterate call that internally calls chatd's
// /v1/chat, validatord sends this header so both daemons' middleware
// emit spans under the SAME trace tree (rather than two unrelated
// traces). Closes the multi-call observability gap J flagged
// 2026-05-02 ("we need to make sure they have the corpus of
// information to complete the process and we want to spot errors").
const TraceIDHeader = "X-Lakehouse-Trace-Id"
// traceIDCtxKey is the context value key for the per-request trace id.
// Handlers downstream of langfuseMiddleware can pull it via TraceIDFromCtx
// to attach child spans (e.g. iteration-attempt spans inside validatord).
type traceIDCtxKey struct{}
// TraceIDFromCtx returns the per-request Langfuse trace id, or "" if
// the middleware didn't set one (Langfuse not configured / /health
// bypass / no Client wired).
func TraceIDFromCtx(ctx context.Context) string {
if v, ok := ctx.Value(traceIDCtxKey{}).(string); ok {
return v
}
return ""
}
// langfuseMiddleware emits one Langfuse trace per HTTP request, with
// a single span carrying start/end timestamps + status code. Per
// OPEN item #2 (closed by the wave that adds this file): production
@ -36,15 +61,28 @@ func langfuseMiddleware(serviceName string, lf *langfuse.Client) func(http.Handl
start := time.Now()
sw := &statusWriter{ResponseWriter: w, status: http.StatusOK}
-traceID := lf.Trace(r.Context(), langfuse.TraceInput{
-Name: serviceName + " " + r.Method + " " + r.URL.Path,
-Tags: []string{serviceName, r.Method},
-Metadata: map[string]any{
-"path": r.URL.Path,
-"method": r.Method,
-"remote_addr": r.RemoteAddr,
-},
-})
+// If the caller forwarded a trace id (cross-service parent
+// linkage) reuse it instead of starting a new trace. Spans
+// from this service then attach to the parent trace tree
+// so an /v1/iterate session shows as one trace with
+// children for each /v1/chat hop.
+traceID := r.Header.Get(TraceIDHeader)
+if traceID == "" {
+traceID = lf.Trace(r.Context(), langfuse.TraceInput{
+Name: serviceName + " " + r.Method + " " + r.URL.Path,
+Tags: []string{serviceName, r.Method},
+Metadata: map[string]any{
+"path": r.URL.Path,
+"method": r.Method,
+"remote_addr": r.RemoteAddr,
+},
+})
+}
+// Stash the trace id on the request context so downstream
+// handlers can attach finer-grained spans (e.g. one per
+// iteration attempt inside validator.Iterate).
+r = r.WithContext(context.WithValue(r.Context(), traceIDCtxKey{}, traceID))
next.ServeHTTP(sw, r)
@ -89,13 +127,18 @@ func (sw *statusWriter) WriteHeader(code int) {
sw.ResponseWriter.WriteHeader(code)
}
-// loadLangfuseFromEnv builds a langfuse.Client from environment
+// LoadLangfuseFromEnv builds a langfuse.Client from environment
// variables. Returns nil if any of LANGFUSE_URL / LANGFUSE_PUBLIC_KEY
// / LANGFUSE_SECRET_KEY is unset (best-effort: missing config means
// no tracing, never a startup error). Same env names as the bare
// /etc/lakehouse/langfuse.env file used by the multi_coord_stress
// driver — operators ship one env file across every daemon.
-func loadLangfuseFromEnv() *langfuse.Client {
+//
+// Exported 2026-05-02 so daemons that need to emit application-level
+// child spans (validatord's iterate-attempt spans) can hold their own
+// reference to the same client `shared.Run` is already wiring into
+// the middleware.
+func LoadLangfuseFromEnv() *langfuse.Client {
url := os.Getenv("LANGFUSE_URL")
pk := os.Getenv("LANGFUSE_PUBLIC_KEY")
sk := os.Getenv("LANGFUSE_SECRET_KEY")


@ -119,6 +119,50 @@ func TestLangfuseMiddleware_RealRequestEmitted(t *testing.T) {
}
}
// TestLangfuseMiddleware_HonorsTraceIDHeader locks the cross-service
// trace-linkage contract: when validatord forwards a /v1/iterate's
// trace id to chatd via X-Lakehouse-Trace-Id, chatd's middleware
// MUST reuse that id rather than minting a new one. Without this,
// every chat hop spawned its own orphan trace and an iterate session
// with N retries showed in Langfuse as N+1 disconnected traces.
//
// We verify by inspecting the trace id stashed on the request
// context — handlers downstream pull it via TraceIDFromCtx and
// attach finer-grained spans to it.
func TestLangfuseMiddleware_HonorsTraceIDHeader(t *testing.T) {
// nil client path is fine — TraceIDFromCtx still propagates the
// header value because the middleware sets the ctx value before
// minting any new id. But that's only true when the client is
// non-nil (the nil-client branch is a passthrough that never sets
// the ctx value). So we wire a real (mocked) Langfuse to exercise
// the header-honoring branch.
lfMock := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = io.ReadAll(r.Body)
w.WriteHeader(http.StatusOK)
}))
defer lfMock.Close()
lf := langfuse.New(lfMock.URL, "test-pk", "test-sk", nil)
mw := langfuseMiddleware("test-service", lf)
var observed string
h := mw(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
observed = TraceIDFromCtx(r.Context())
}))
srv := httptest.NewServer(h)
defer srv.Close()
req, _ := http.NewRequest("POST", srv.URL+"/api/echo", nil)
req.Header.Set(TraceIDHeader, "trace-from-parent")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("POST: %v", err)
}
resp.Body.Close()
if observed != "trace-from-parent" {
t.Errorf("expected forwarded trace id, got %q", observed)
}
}
// TestLangfuseMiddleware_StatusCaptured locks the status-writer
// wrapping: when the handler returns 500, the middleware must see
// 500 in the span output (otherwise error traces all show 200 and


@ -80,7 +80,7 @@ func Run(serviceName, addr string, register RegisterRoutes, auth AuthConfig) err
// gets free production-traffic trace visibility when those env
// vars are set. Missing any of the three → nil client → middleware
// becomes a passthrough.
-lf := loadLangfuseFromEnv()
+lf := LoadLangfuseFromEnv()
if lf != nil {
// Make sure pending events flush on graceful shutdown so the
// last few requests' traces don't get lost.


@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"strings"
"time"
)
// IterateRequest is the input to Iterate. Mirrors Rust's
@ -20,14 +21,24 @@ type IterateRequest struct {
MaxIterations int `json:"max_iterations,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
MaxTokens int `json:"max_tokens,omitempty"`
// TraceID lets external schedulers force a parent trace id (so the
// iterate session lands as a child of an upstream Langfuse trace).
// Empty = the caller didn't provide one; validatord may still
// generate one from request context.
TraceID string `json:"trace_id,omitempty"`
}
// IterateAttempt is one row in the history. raw is capped at 2000
// chars on the wire to keep responses bounded.
// chars on the wire to keep responses bounded. SpanID points at the
// Langfuse span emitted for this attempt (if a Tracer was wired in
// IterateConfig); empty when tracing was disabled. Operators reading
// `data/_kb/replay_runs.jsonl` or staffing-co-pilot session logs can
// jump from a row to its full trace tree via this id.
type IterateAttempt struct {
-Iteration int `json:"iteration"`
-Raw string `json:"raw"`
-Status AttemptStatus `json:"status"`
+Iteration int `json:"iteration"`
+Raw string `json:"raw"`
+Status AttemptStatus `json:"status"`
+SpanID string `json:"span_id,omitempty"`
}
// AttemptStatus is the per-attempt verdict. Tagged JSON so consumers
@ -38,11 +49,14 @@ type AttemptStatus struct {
}
// IterateResponse is the success payload (200 + Report + accepted artifact).
// TraceID echoes back the parent trace id so callers can pivot to
// Langfuse and see the full session (chat hops + validator verdicts).
type IterateResponse struct {
Artifact map[string]any `json:"artifact"`
Validation Report `json:"validation"`
Iterations int `json:"iterations"`
History []IterateAttempt `json:"history"`
TraceID string `json:"trace_id,omitempty"`
}
// IterateFailure is the max-iter-exhausted payload (422 + history).
@ -50,19 +64,53 @@ type IterateFailure struct {
Error string `json:"error"`
Iterations int `json:"iterations"`
History []IterateAttempt `json:"history"`
TraceID string `json:"trace_id,omitempty"`
}
// ChatCaller is the seam Iterate uses to invoke an LLM. Tests inject
// scripted callers; production wires this to the chatd /v1/chat HTTP
// endpoint. Implementations must return the model's textual content
-// (no choices wrapper, no message envelope).
-type ChatCaller func(ctx context.Context, system, user, provider, model string, temperature *float64, maxTokens int) (string, error)
+// (no choices wrapper, no message envelope). traceID is the Langfuse
+// parent trace id — HTTP transports forward it as the
+// `X-Lakehouse-Trace-Id` header so chatd's middleware attaches its
+// spans to the same trace.
+type ChatCaller func(ctx context.Context, system, user, provider, model string, temperature *float64, maxTokens int, traceID string) (string, error)
// AttemptSpan describes one iteration's chat-call + validator-verdict.
// The Tracer (see IterateConfig) consumes these to emit Langfuse
// spans — one span per attempt, plus inputs/outputs so an operator
// scrolling Langfuse sees the full prompt → raw → verdict chain
// without having to join multiple JSONL files.
type AttemptSpan struct {
TraceID string
ParentID string // span id of the surrounding /v1/iterate http request, if any
Iteration int
Model string
Provider string
Prompt string
Raw string
Verdict AttemptStatus
StartTime time.Time
EndTime time.Time
}
// Tracer is the hook Iterate calls after each attempt. Returns the
// span id so it lands in IterateAttempt.SpanID. nil = tracing
// disabled (tests, or production with Langfuse unconfigured).
//
// Production wiring lives in cmd/validatord/main.go and routes to
// internal/langfuse.Client — kept as an interface here so the
// validator package doesn't depend on langfuse for unit tests.
type Tracer func(span AttemptSpan) string
// IterateConfig threads daemon-level settings into the orchestrator.
type IterateConfig struct {
DefaultMaxIterations int
DefaultMaxTokens int
DefaultTemperature float64
// Tracer (optional) emits one span per iteration attempt for
// per-call visibility. nil = no tracing.
Tracer Tracer
}
const (
@ -101,36 +149,59 @@ func Iterate(ctx context.Context, req IterateRequest, cfg IterateConfig, chat Ch
currentPrompt := req.Prompt
history := make([]IterateAttempt, 0, maxIter)
traceID := req.TraceID
// recordAttempt: append to history, emit a Langfuse span if a
// Tracer is wired, return the SpanID. Centralized so every code
// path (no_json, accepted, validation_failed) hits the same
// observability path — no easy way to forget tracing on a branch.
recordAttempt := func(iteration int, raw string, verdict AttemptStatus, started time.Time) string {
spanID := ""
if cfg.Tracer != nil && traceID != "" {
spanID = cfg.Tracer(AttemptSpan{
TraceID: traceID,
Iteration: iteration,
Model: req.Model,
Provider: req.Provider,
Prompt: currentPrompt,
Raw: raw,
Verdict: verdict,
StartTime: started,
EndTime: time.Now(),
})
}
history = append(history, IterateAttempt{
Iteration: iteration,
Raw: trim(raw, 2000),
Status: verdict,
SpanID: spanID,
})
return spanID
}
for i := 0; i < maxIter; i++ {
-raw, err := chat(ctx, req.System, currentPrompt, req.Provider, req.Model, temp, maxTokens)
+started := time.Now()
+raw, err := chat(ctx, req.System, currentPrompt, req.Provider, req.Model, temp, maxTokens, traceID)
if err != nil {
return nil, nil, fmt.Errorf("/v1/chat hop failed at iter %d: %w", i, err)
}
artifact := ExtractJSON(raw)
if artifact == nil {
-history = append(history, IterateAttempt{
-Iteration: i,
-Raw: trim(raw, 2000),
-Status: AttemptStatus{Kind: "no_json"},
-})
+recordAttempt(i, raw, AttemptStatus{Kind: "no_json"}, started)
currentPrompt = req.Prompt + "\n\nYour previous attempt did not contain a JSON object. Reply with ONLY a valid JSON object matching the requested artifact shape."
continue
}
report, vErr := validate(req.Kind, artifact)
if vErr == nil {
-history = append(history, IterateAttempt{
-Iteration: i,
-Raw: trim(raw, 2000),
-Status: AttemptStatus{Kind: "accepted"},
-})
+recordAttempt(i, raw, AttemptStatus{Kind: "accepted"}, started)
return &IterateResponse{
Artifact: artifact,
Validation: report,
Iterations: i + 1,
History: history,
TraceID: traceID,
}, nil, nil
}
@ -138,11 +209,7 @@ func Iterate(ctx context.Context, req IterateRequest, cfg IterateConfig, chat Ch
// The model sees concrete failure mode + retries with corrective
// context. Same "validator IS the observer" shape as Phase 43.
errSummary := vErr.Error()
-history = append(history, IterateAttempt{
-Iteration: i,
-Raw: trim(raw, 2000),
-Status: AttemptStatus{Kind: "validation_failed", Error: errSummary},
-})
+recordAttempt(i, raw, AttemptStatus{Kind: "validation_failed", Error: errSummary}, started)
currentPrompt = req.Prompt + "\n\nPrior attempt failed validation:\n" + errSummary + "\n\nFix the specific issue above and respond with a corrected JSON object."
}
@ -150,6 +217,7 @@ func Iterate(ctx context.Context, req IterateRequest, cfg IterateConfig, chat Ch
Error: fmt.Sprintf("max iterations reached (%d) without passing validation", maxIter),
Iterations: maxIter,
History: history,
TraceID: traceID,
}, nil
}


@ -3,6 +3,7 @@ package validator
import (
"context"
"errors"
"fmt"
"testing"
)
@ -69,7 +70,7 @@ func TestExtractJSON_TopLevelArrayReturnsFirstInnerObject(t *testing.T) {
func scriptedChat(responses ...string) (ChatCaller, *int) {
idx := 0
-return func(_ context.Context, _, _ string, _, _ string, _ *float64, _ int) (string, error) {
+return func(_ context.Context, _, _ string, _, _ string, _ *float64, _ int, _ string) (string, error) {
if idx >= len(responses) {
return "", errors.New("scripted chat exhausted")
}
@ -175,8 +176,96 @@ func TestIterate_MaxIterationsExhaustedReturnsFailure(t *testing.T) {
}
}
// TestIterate_TracerEmitsSpanPerAttempt locks the per-attempt
// observability contract: when a Tracer is wired AND a TraceID is
// present, every retry produces exactly one span and the SpanID
// lands on the IterateAttempt history row.
func TestIterate_TracerEmitsSpanPerAttempt(t *testing.T) {
chat, _ := scriptedChat(
"no json",
`{"k":"v"}`,
)
validate := func(_ string, _ map[string]any) (Report, error) { return Report{}, nil }
var spans []AttemptSpan
tracer := func(s AttemptSpan) string {
spans = append(spans, s)
return fmt.Sprintf("span-%d", s.Iteration)
}
resp, _, err := Iterate(context.Background(),
IterateRequest{Kind: "playbook", Prompt: "X", Provider: "ollama", Model: "qwen", TraceID: "trace-abc"},
IterateConfig{Tracer: tracer}, chat, validate)
if err != nil {
t.Fatalf("Iterate: %v", err)
}
if resp.TraceID != "trace-abc" {
t.Errorf("TraceID echo: got %q, want trace-abc", resp.TraceID)
}
if len(spans) != 2 {
t.Fatalf("expected 2 emitted spans, got %d", len(spans))
}
if spans[0].Verdict.Kind != "no_json" || spans[1].Verdict.Kind != "accepted" {
t.Errorf("verdict propagation: %+v", spans)
}
if resp.History[0].SpanID != "span-0" || resp.History[1].SpanID != "span-1" {
t.Errorf("SpanID stamping: %+v", resp.History)
}
if spans[0].TraceID != "trace-abc" {
t.Errorf("TraceID didn't reach the tracer: %q", spans[0].TraceID)
}
}
// TestIterate_NoTraceIDSkipsTracer guards the "Langfuse not configured"
// path: tracer is non-nil but trace_id is empty → we MUST NOT emit
// orphan spans.
func TestIterate_NoTraceIDSkipsTracer(t *testing.T) {
chat, _ := scriptedChat(`{"k":"v"}`)
validate := func(_ string, _ map[string]any) (Report, error) { return Report{}, nil }
called := 0
tracer := func(_ AttemptSpan) string {
called++
return "should-not-be-used"
}
resp, _, err := Iterate(context.Background(),
IterateRequest{Kind: "playbook", Prompt: "X", Provider: "ollama", Model: "qwen"}, // no TraceID
IterateConfig{Tracer: tracer}, chat, validate)
if err != nil {
t.Fatalf("Iterate: %v", err)
}
if called != 0 {
t.Errorf("tracer should not fire without a trace_id; called=%d", called)
}
if resp.History[0].SpanID != "" {
t.Errorf("SpanID should be empty when no trace_id: %q", resp.History[0].SpanID)
}
}
// TestIterate_ChatCallerReceivesTraceID confirms the trace_id is
// forwarded into the ChatCaller signature so HTTP transports can set
// the X-Lakehouse-Trace-Id header. Without this, chatd's Langfuse
// emit would create an orphan trace per retry instead of nesting
// under the iterate parent.
func TestIterate_ChatCallerReceivesTraceID(t *testing.T) {
var observedTrace string
chat := func(_ context.Context, _, _ string, _, _ string, _ *float64, _ int, traceID string) (string, error) {
observedTrace = traceID
return `{"k":"v"}`, nil
}
validate := func(_ string, _ map[string]any) (Report, error) { return Report{}, nil }
_, _, err := Iterate(context.Background(),
IterateRequest{Kind: "playbook", Prompt: "X", Provider: "ollama", Model: "x", TraceID: "trace-xyz"},
IterateConfig{}, chat, validate)
if err != nil {
t.Fatalf("Iterate: %v", err)
}
if observedTrace != "trace-xyz" {
t.Errorf("ChatCaller should receive trace_id; got %q", observedTrace)
}
}
func TestIterate_PropagatesChatInfraError(t *testing.T) {
-chat := func(_ context.Context, _, _ string, _, _ string, _ *float64, _ int) (string, error) {
+chat := func(_ context.Context, _, _ string, _, _ string, _ *float64, _ int, _ string) (string, error) {
return "", errors.New("connection refused")
}
validate := func(_ string, _ map[string]any) (Report, error) { return Report{}, nil }