Modern OpenAI clients (pi-ai, openai SDK 6.x, langchain-js, the official
agents) send `messages[].content` as an array of content parts:
`[{type:"text", text:"..."}, {type:"image_url", ...}]`. Our gateway
typed `content` as plain `String` and 422'd those calls.
Fix: `Message.content` is now `serde_json::Value` so requests
deserialize regardless of shape. `Message::text()` flattens
content-parts arrays (concat'd `text` fields, non-text parts skipped)
for places that need a plain string — Ollama prompt assembly, char
counts, the assistant's own response synthesis. `Message::new_text()`
constructs string-content messages without writing the wrapper at
each call site. Forwarders (openrouter) pass content through
verbatim, so providers see exactly what the client sent.
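A minimal sketch of the flattening behavior described above. The real gateway stores `content: serde_json::Value`; this standalone version models only the two shapes it must accept (plain string vs. parts array) with simplified stand-in types, so the enum and struct names here are illustrative, not the gateway's.

```rust
// Simplified stand-in for serde_json::Value content: either a plain
// string or an array of content parts.
#[derive(Debug, Clone)]
enum Content {
    Text(String),
    Parts(Vec<Part>),
}

#[derive(Debug, Clone)]
struct Part {
    kind: String,         // "text", "image_url", ...
    text: Option<String>, // present only on text parts
}

struct Message {
    content: Content,
}

impl Message {
    // Mirrors the described Message::text(): concatenate the `text`
    // fields of text parts, skip non-text parts.
    fn text(&self) -> String {
        match &self.content {
            Content::Text(s) => s.clone(),
            Content::Parts(parts) => parts
                .iter()
                .filter(|p| p.kind == "text")
                .filter_map(|p| p.text.as_deref())
                .collect::<Vec<_>>()
                .join(""),
        }
    }

    // Mirrors the described Message::new_text() convenience constructor.
    fn new_text(s: &str) -> Self {
        Message { content: Content::Text(s.to_string()) }
    }
}

fn main() {
    let m = Message {
        content: Content::Parts(vec![
            Part { kind: "text".into(), text: Some("Hello ".into()) },
            Part { kind: "image_url".into(), text: None },
            Part { kind: "text".into(), text: Some("world".into()) },
        ]),
    };
    println!("{}", m.text()); // prints "Hello world"
    println!("{}", Message::new_text("plain").text());
}
```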
Verified end-to-end: Pi CLI (`pi --print --provider openrouter`)
landed a clean 1902-token request through `/v1/chat/completions`,
routed to OpenRouter as `openai/gpt-oss-120b:free`, response in
1.62s, Langfuse trace `v1.chat:openrouter` recorded with provider
tag. Same path that any tool using the official openai SDK takes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 40 PRD (docs/CONTROL_PLANE_PRD.md:82-83) listed:
- crates/aibridge/src/providers/gemini.rs
- crates/aibridge/src/providers/claude.rs
Neither existed. Landing both now, in gateway/src/v1/ (matches the
existing ollama.rs + openrouter.rs sibling pattern — aibridge's
providers/ is for the adapter *trait* abstractions, v1/ holds the
concrete /v1/chat dispatchers that know the wire format).
gemini.rs:
- POST https://generativelanguage.googleapis.com/v1beta/models/
{model}:generateContent?key=<API_KEY>
- Auth: query-string key (not bearer)
- Maps messages → contents+parts (Gemini's wire shape),
extracts from candidates[0].content.parts[0].text
- 3 tests: key resolution, body serialization (camelCase
generationConfig + maxOutputTokens), prefix-strip
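A sketch of the request assembly described above. The URL shape and query-string auth come from this commit; the role mapping (OpenAI "assistant" becomes Gemini "model") is the standard generateContent convention. The helper names are illustrative, not the gateway's real functions.

```rust
// Build the generateContent URL with the query-string API key
// (not a bearer header).
fn gemini_url(model: &str, api_key: &str) -> String {
    format!(
        "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}"
    )
}

// Map OpenAI-style roles onto Gemini's contents[].role values.
// In this sketch, anything that isn't "assistant" collapses to "user".
fn gemini_role(openai_role: &str) -> &'static str {
    match openai_role {
        "assistant" => "model",
        _ => "user",
    }
}

fn main() {
    println!("{}", gemini_url("gemini-1.5-pro", "KEY"));
    println!("{}", gemini_role("assistant"));
}
```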
claude.rs:
- POST https://api.anthropic.com/v1/messages
- Auth: x-api-key header + anthropic-version: 2023-06-01
- Carries system prompt in top-level `system` field (not
messages[]). Extracts from content[0].text where type=="text"
- 4 tests: key resolution, body serialization with/without
system field, prefix-strip
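A sketch of the system-prompt handling described above: Anthropic's /v1/messages takes the system prompt as a top-level `system` field rather than as a messages[] entry. Types here are simplified stand-ins, not the gateway's real request structs.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Msg {
    role: String,
    content: String,
}

// Lift the first system message into the top-level `system` field;
// everything else stays in messages[].
fn split_system(messages: Vec<Msg>) -> (Option<String>, Vec<Msg>) {
    let mut system = None;
    let mut rest = Vec::with_capacity(messages.len());
    for m in messages {
        if m.role == "system" && system.is_none() {
            system = Some(m.content);
        } else {
            rest.push(m);
        }
    }
    (system, rest)
}

fn main() {
    let (system, rest) = split_system(vec![
        Msg { role: "system".into(), content: "be terse".into() },
        Msg { role: "user".into(), content: "hi".into() },
    ]);
    println!("{:?} {}", system, rest.len());
}
```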
v1/mod.rs:
+ V1State.gemini_key + claude_key Option<String>
+ resolve_provider() strips "gemini/" and "claude/" prefixes
+ /v1/chat dispatcher handles "gemini" + "claude"/"anthropic"
+ 2 new resolve_provider tests (prefix + strip per adapter)
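A sketch of the prefix-strip routing described above. The real resolve_provider lives in v1/mod.rs; this mirrors only its observable shape, and the fallthrough to "openrouter" is an illustrative assumption about the default route, not a claim about the real function.

```rust
// Strip a known provider prefix off the model name and return
// (provider, stripped model name).
fn resolve_provider(model: &str) -> (&'static str, &str) {
    for (prefix, provider) in [("gemini/", "gemini"), ("claude/", "claude")] {
        if let Some(rest) = model.strip_prefix(prefix) {
            return (provider, rest);
        }
    }
    // No recognized prefix: fall through to the default route
    // (illustrative; the real dispatcher decides this).
    ("openrouter", model)
}

fn main() {
    println!("{:?}", resolve_provider("gemini/gemini-1.5-pro"));
    println!("{:?}", resolve_provider("claude/claude-sonnet"));
}
```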
main.rs:
+ Construct both keys at startup via resolve_*_key() helpers.
Missing keys log at debug (not warn) since these are optional
providers — unlike OpenRouter which is the rescue rung.
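A sketch of the startup key wiring described above. The env-var names and the `resolve_optional_key` helper are hypothetical stand-ins for the real resolve_*_key() helpers in main.rs; the point shown is that a missing key only disables the optional provider instead of warning.

```rust
// Read a provider key from the environment; absent or empty keys
// disable the provider quietly (debug-level, not warn).
fn resolve_optional_key(var: &str) -> Option<String> {
    match std::env::var(var) {
        Ok(k) if !k.is_empty() => Some(k),
        _ => {
            eprintln!("debug: {var} not set; provider disabled");
            None
        }
    }
}

fn main() {
    // Env-var names are illustrative, not the gateway's documented ones.
    let gemini_key = resolve_optional_key("GEMINI_API_KEY");
    let claude_key = resolve_optional_key("ANTHROPIC_API_KEY");
    println!("gemini={} claude={}", gemini_key.is_some(), claude_key.is_some());
}
```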
Every /v1/chat error path mirrors the existing pattern:
- 503 SERVICE_UNAVAILABLE when key isn't configured
- 502 BAD_GATEWAY with the provider's error text when the
upstream call fails
- Response shape always the OpenAI-compatible ChatResponse
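The error ladder above can be named as a status mapping. The real handlers build OpenAI-compatible error responses; this sketch only encodes which HTTP status each failure mode maps to, with illustrative types.

```rust
#[derive(Debug, PartialEq)]
enum Upstream {
    NoKey,          // provider key not configured
    Failed(String), // provider's error text, relayed to the client
    Ok,             // OpenAI-compatible ChatResponse goes out
}

fn status_for(u: &Upstream) -> u16 {
    match u {
        Upstream::NoKey => 503,     // SERVICE_UNAVAILABLE
        Upstream::Failed(_) => 502, // BAD_GATEWAY
        Upstream::Ok => 200,
    }
}

fn main() {
    println!("{}", status_for(&Upstream::NoKey));
    println!("{}", status_for(&Upstream::Failed("quota".into())));
}
```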
Workspace warnings still at 0. 9 new tests pass.
Pre-existing test failure `executor_prompt_includes_surfaced_candidates`
at execution_loop/mod.rs:1550 is unrelated (fails on pristine HEAD too;
PR fixture divergence).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>