Architectural snapshot of the lakehouse codebase at the point where the
full matrix-driven agent loop with Mem0 versioning + deletion was
validated end-to-end.
WHAT THIS REPO IS
A clean single-commit snapshot of the lakehouse code. Heavy test data
(.parquet datasets, vector indexes) is excluded; see REPLICATION.md for the
regeneration path. Full lakehouse history lives at git.agentview.dev/profit/lakehouse.
WHAT WAS PROVEN
- Vector retrieval across the multi-corpus matrix (chicago_permits + entity
briefs + sec_tickers + distilled procedural + llm_team runs)
- Observer hand-review (cloud + heuristic fallback) gating each candidate
- Local-model agent loop (qwen3.5:latest) with tool use + scratchpad
- Playbook sealed on success → next-iteration retrieval surfaces it as preamble
- Mem0 versioning + deletion in pathway_memory:
* UPSERT: ADD on new workflow, UPDATE bumps replay_count on identical
* REVISE: chains versions, parent.superseded_at + superseded_by stamped
* RETIRE: marks specific trace retired with reason, excluded from retrieval
* HISTORY: walks chain root→tip, cycle-safe
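The four pathway_memory operations above live in Rust (crates/vectord/src/pathway_memory.rs); the sketch below restates their semantics in-memory in TypeScript for illustration. All names here (`PathwayMemory`, `Trace`, `workflowKey`) are hypothetical, not the repo's actual API.

```typescript
// Illustrative in-memory model of the Mem0 op semantics, NOT the real
// Rust implementation in crates/vectord/src/pathway_memory.rs.
type Trace = {
  id: string;
  workflowKey: string;       // identity UPSERT uses to detect "identical"
  body: string;
  replay_count: number;
  parent_id?: string;        // REVISE chains versions through this
  superseded_at?: number;
  superseded_by?: string;
  retired?: { at: number; reason: string };
};

class PathwayMemory {
  private traces = new Map<string, Trace>();
  private nextId = 1;

  // UPSERT: ADD on a new workflow, UPDATE bumps replay_count on identical.
  upsert(workflowKey: string, body: string): Trace {
    for (const t of this.traces.values()) {
      if (t.workflowKey === workflowKey && t.body === body && !t.retired) {
        t.replay_count += 1;
        return t;
      }
    }
    const t: Trace = { id: `t${this.nextId++}`, workflowKey, body, replay_count: 1 };
    this.traces.set(t.id, t);
    return t;
  }

  // REVISE: new version chained to the parent; parent gets the
  // superseded_at + superseded_by stamps.
  revise(parentId: string, body: string): Trace {
    const parent = this.traces.get(parentId);
    if (!parent) throw new Error(`no such trace: ${parentId}`);
    const child: Trace = {
      id: `t${this.nextId++}`,
      workflowKey: parent.workflowKey,
      body,
      replay_count: 1,
      parent_id: parent.id,
    };
    parent.superseded_at = Date.now();
    parent.superseded_by = child.id;
    this.traces.set(child.id, child);
    return child;
  }

  // RETIRE: mark a specific trace retired with a reason.
  retire(id: string, reason: string): void {
    const t = this.traces.get(id);
    if (t) t.retired = { at: Date.now(), reason };
  }

  // HISTORY: walk the version chain root→tip, refusing to loop on a cycle.
  history(id: string): Trace[] {
    let cur = this.traces.get(id);
    const seen = new Set<string>();
    while (cur?.parent_id && !seen.has(cur.parent_id)) {
      seen.add(cur.id);
      cur = this.traces.get(cur.parent_id); // climb to the root
    }
    const chain: Trace[] = [];
    const visited = new Set<string>();
    while (cur && !visited.has(cur.id)) {
      visited.add(cur.id);
      chain.push(cur); // then follow superseded_by pointers tip-ward
      cur = cur.superseded_by ? this.traces.get(cur.superseded_by) : undefined;
    }
    return chain;
  }

  // Retrieval excludes retired and superseded traces.
  retrievable(): Trace[] {
    return [...this.traces.values()].filter((t) => !t.retired && !t.superseded_by);
  }
}
```

The cycle guard in `history` is what "cycle-safe" means above: a corrupted parent/superseded_by loop terminates the walk instead of hanging the agent loop.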
KEY DIRECTORIES
- crates/vectord/src/pathway_memory.rs — Mem0 ops live here
- crates/vectord/src/playbook_memory.rs — original Mem0 reference
- tests/agent_test/ — local-model agent harness + PRD + session archives
- scripts/dump_raw_corpus.sh — MinIO bucket dump (raw test corpus)
- scripts/vectorize_raw_corpus.ts — corpus → vector indexes
- scripts/analyze_chicago_contracts.ts — real inference pipeline
- scripts/seal_agent_playbook.ts — Mem0 upsert from agent traces
Replication: see REPLICATION.md for Debian 13 clean install + cloud-only
adaptation (no local Ollama).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TypeScript, 53 lines, 1.9 KiB
/**
 * Langfuse tracing for the Lakehouse agent gateway.
 *
 * Every LLM call gets traced: model, prompt, output, latency, tokens.
 * The observer uses this to build a picture of what's working.
 *
 * Langfuse UI: http://localhost:3001
 * Login: j@lakehouse.local / lakehouse2026
 */

import { Langfuse } from "langfuse";

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY || "pk-lf-staffing",
  secretKey: process.env.LANGFUSE_SECRET_KEY || "sk-lf-staffing-secret",
  baseUrl: process.env.LANGFUSE_URL || "http://localhost:3001",
  enabled: true,
});

export type TraceContext = ReturnType<typeof langfuse.trace>;

export function startTrace(name: string, input?: any, metadata?: any) {
  return langfuse.trace({ name, input, metadata });
}

export function logGeneration(
  trace: TraceContext,
  name: string,
  opts: {
    model: string;
    prompt: string;
    completion: string;
    duration_ms: number;
    tokens_in?: number;
    tokens_out?: number;
  },
) {
  trace.generation({
    name,
    model: opts.model,
    input: opts.prompt,
    output: opts.completion,
    usage: { promptTokens: opts.tokens_in, completionTokens: opts.tokens_out },
    metadata: { duration_ms: opts.duration_ms },
  });
}

export function logSpan(trace: TraceContext, name: string, input: any, output: any, duration_ms: number) {
  trace.span({ name, input, output, metadata: { duration_ms } });
}

export function logRetrieval(trace: TraceContext, name: string, query: string, results: any[], duration_ms: number) {
  trace.span({
    name,
    input: { query },
    output: { results_count: results.length },
    metadata: { duration_ms, type: "retrieval" },
  });
}

export function scoreTrace(trace: TraceContext, name: string, value: number, comment?: string) {
  trace.score({ name, value, comment });
}

export async function flush() {
  await langfuse.flushAsync();
}

export async function shutdown() {
  await langfuse.shutdownAsync();
}

export { langfuse };
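A sketch of how the gateway is meant to call these helpers per agent step: a retrieval span, then the LLM generation, then the observer's verdict as a score. The `StubTrace` stand-in below replaces the real Langfuse trace object so the sequence runs without a Langfuse server; the span and score names ("vector-retrieval", "observer-approval") are illustrative, not taken from the repo.

```typescript
// Stand-in for the object returned by langfuse.trace(...), recording only
// the event kinds and names; in the gateway the real client is used via
// startTrace()/logGeneration()/logRetrieval()/scoreTrace() above.
class StubTrace {
  recorded: string[] = [];
  generation(g: { name: string; model: string; input: string; output: string }) {
    this.recorded.push(`generation:${g.name}`);
  }
  span(s: { name: string; input?: unknown; output?: unknown }) {
    this.recorded.push(`span:${s.name}`);
  }
  score(s: { name: string; value: number }) {
    this.recorded.push(`score:${s.name}=${s.value}`);
  }
}

// One agent step as the gateway traces it. Event order mirrors the loop:
// retrieve context, generate with the local model, let the observer score.
function traceAgentStep(trace: StubTrace, query: string, completion: string) {
  trace.span({ name: "vector-retrieval", input: { query } });
  trace.generation({
    name: "agent-step",
    model: "qwen3.5:latest",
    input: query,
    output: completion,
  });
  trace.score({ name: "observer-approval", value: 1 });
}
```

In the real module, the per-request flow would end with `await flush()` so buffered events reach the Langfuse server before the process exits.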