commit 75a0f424ef Phase 40 (early): Langfuse tracing on /v1/chat — observability recovery
The lost stack J flagged was already partly in place: the Langfuse
container has been running for 2 days with the staffing project, the
SDK is installed, and mcp-server already traces its gw:/* routes.
What was missing was Rust-side emission from /v1/chat — the new
Phase 38/39 code bypassed Langfuse entirely.

This commit bridges the gap: a fire-and-forget HTTP POST to
http://localhost:3001/api/public/ingestion (batch of {trace-create +
generation-create}) on every chat call. Non-blocking — emitted from a
spawned tokio task, so response latency is unaffected. Trace failures
log a warning and are dropped, never propagated.
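The batch is plain JSON, no SDK involved. A dependency-free sketch of what the body could look like — event field names follow Langfuse's public ingestion API conventions, but the exact shape serialized by langfuse_trace.rs is an assumption here:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Build an ingestion batch pairing one trace-create with one
/// generation-create event. String-built JSON to stay dependency-free;
/// the real client serializes a typed ChatTrace payload instead.
fn build_ingestion_batch(trace_id: &str, name: &str, model: &str) -> String {
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis();
    format!(
        r#"{{"batch":[{{"id":"{tid}-t","type":"trace-create","timestamp":{ts},"body":{{"id":"{tid}","name":"{name}"}}}},{{"id":"{tid}-g","type":"generation-create","timestamp":{ts},"body":{{"traceId":"{tid}","model":"{model}"}}}}]}}"#,
        tid = trace_id,
        ts = ts,
        name = name,
        model = model
    )
}

fn main() {
    // One chat call -> one batch with both events, POSTed fire-and-forget.
    let body = build_ingestion_batch("abc123", "v1.chat:ollama", "qwen3.5:latest");
    println!("{body}");
}
```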

Verified end-to-end after restart:
- Log line "v1: Langfuse tracing enabled" at startup
- /v1/chat local (qwen3.5:latest) → v1.chat:ollama trace appears
  with lat=0.41s, 24+6 tokens
- /v1/chat cloud (gpt-oss:120b) → v1.chat:ollama_cloud trace appears
  with lat=1.87s, 73+87 tokens
- mcp-server's existing gw:/log + gw:/intelligence/* traces
  continue to flow into the same project unchanged

Files:
- crates/gateway/src/v1/langfuse_trace.rs (new, 195 LOC) — thin
  client, no SDK. reqwest Basic Auth. ChatTrace payload + event
  serializer. from_env_or_defaults() resolver matches the
  mcp-server/tracing.ts conventions (pk-lf-staffing /
  sk-lf-staffing-secret / localhost:3001)
- crates/gateway/src/v1/mod.rs — V1State.langfuse field, emission
  after successful provider call (post-dispatch, pre-usage-update)
- crates/gateway/src/main.rs — resolve + log at startup
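The resolver's contract can be sketched with std only. Struct and env-var names below are assumptions (Langfuse's own SDKs read LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY variables of this kind); only the fallback values come from this commit:

```rust
use std::env;

/// Langfuse connection settings with env-var overrides falling back
/// to the staffing-project defaults named in the commit message.
pub struct LangfuseConfig {
    pub public_key: String,
    pub secret_key: String,
    pub base_url: String,
}

impl LangfuseConfig {
    pub fn from_env_or_defaults() -> Self {
        // Read an env var, or take the hard-coded default if unset.
        let var = |key: &str, default: &str| {
            env::var(key).unwrap_or_else(|_| default.to_string())
        };
        Self {
            public_key: var("LANGFUSE_PUBLIC_KEY", "pk-lf-staffing"),
            secret_key: var("LANGFUSE_SECRET_KEY", "sk-lf-staffing-secret"),
            base_url: var("LANGFUSE_BASE_URL", "http://localhost:3001"),
        }
    }
}

fn main() {
    // main.rs resolves once at startup and logs the result.
    let cfg = LangfuseConfig::from_env_or_defaults();
    println!("v1: Langfuse tracing enabled ({})", cfg.base_url);
}
```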

Tests: 12/12 green (9 prior + 3 for langfuse_trace: ingestion-batch
serialization, uuid generator uniqueness, env resolver shape).
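One of those three tests checks uuid-generator uniqueness. The real module presumably generates UUID v4 event ids; a dependency-free stand-in that exhibits the same property under test (timestamp plus a process-wide counter, purely illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Illustrative unique-id generator: wall-clock nanos plus a
/// monotonically increasing counter. Not a UUID — the counter alone
/// guarantees distinct ids within one process.
fn next_event_id() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_nanos();
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("{nanos:x}-{n:x}")
}

fn main() {
    // Same shape as the uniqueness test: generate many ids, dedupe, count.
    let ids: std::collections::HashSet<String> =
        (0..1000).map(|_| next_event_id()).collect();
    assert_eq!(ids.len(), 1000);
    println!("1000 unique event ids generated");
}
```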

Recovered piece #1 of 3 from the lost-stack narrative. Still open:
- Langfuse → observer :3800 pipe (Phase 40 mid-deliverable)
- Gitea MCP reconnect in mcp-server/index.ts (Phase 40 late)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 03:04:28 -05:00