Backend:
- Active run tracking with step/substep/error state
- SSE keepalive heartbeat every 15s to prevent nginx timeout
- Run log (last 100 completed runs with timing/errors)
- Research mode: per-question progress, context caps, graceful failures
- Hard cap on research questions (15), answer truncation (8K chars)

Frontend:
- Real progress bar with step segments, elapsed time, event counter
- Progress shimmer animation, step completion indicators
- Improved error display with timing context
- Green completion state with fade

Admin:
- /admin/monitor — live process dashboard
- Stats: active runs, completed, errors, avg duration
- Active run cards with live progress, substep detail, errors
- Recent run history with error traces
- Auto-polls every 3 seconds
- Full retro-brutalist theme matching main UI

Nginx:
- proxy_read_timeout 600s, proxy_send_timeout 600s
- proxy_buffering off for SSE streaming

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
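The nginx directives listed above fit together in a location block roughly like the following sketch. The `/events` path and the upstream address are illustrative assumptions; only the three timeout/buffering directives come from the commit itself.

```nginx
# Sketch of an SSE-friendly proxy block (path and upstream are assumed).
location /events {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_read_timeout 600s;   # allow long-running orchestration runs
    proxy_send_timeout 600s;
    proxy_buffering off;       # flush each SSE frame to the client immediately
}
```

Disabling `proxy_buffering` matters because nginx otherwise accumulates the response in a buffer, which would delay individual SSE frames until the buffer fills or the stream ends.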
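The SSE keepalive described above can be sketched as a framework-agnostic async generator: when no run event arrives within the heartbeat interval, it emits an SSE comment line (which EventSource clients ignore) so the proxy never sees an idle connection. This is a minimal illustration, not the repo's actual implementation; the queue-based interface and the `None` end-of-run sentinel are assumptions.

```python
import asyncio
from typing import AsyncIterator

KEEPALIVE_INTERVAL = 15  # seconds, matching the commit's heartbeat setting

async def sse_with_keepalive(events: asyncio.Queue) -> AsyncIterator[str]:
    """Yield SSE frames, inserting a comment heartbeat when idle.

    Lines starting with ':' are SSE comments: EventSource clients
    discard them, but they keep bytes flowing so nginx does not hit
    its read timeout on a quiet stream. (Sketch only; the event queue
    and None sentinel are illustrative assumptions.)
    """
    while True:
        try:
            event = await asyncio.wait_for(events.get(), timeout=KEEPALIVE_INTERVAL)
        except asyncio.TimeoutError:
            yield ": keepalive\n\n"  # heartbeat frame, ignored by clients
            continue
        if event is None:  # sentinel: run finished, close the stream
            return
        yield f"data: {event}\n\n"
```

A streaming endpoint would wrap this generator in its framework's streaming response type and push progress events onto the queue from the run loop.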
Description: LLM Team UI - Full-stack local AI orchestration platform

Languages: Python 97.4%, Shell 2.6%