New mode: Deep Analysis — 6-phase autonomous pipeline:

1. Research: all selected models answer in parallel
2. Debate: models challenge each other's findings
3. Consensus: merge research with critiques, identify strong/weak points
4. Self-Eval: structured scoring (accuracy, completeness, actionability, nuance)
5. Final Synthesis: strongest model produces the definitive answer
6. Knowledge Base: result stored for future RAG retrieval

Designed for cloud models (Ollama Cloud, OpenRouter). Every successful run trains the local knowledge base so future adaptive runs benefit. A purple accent in the mode selector distinguishes it from standard modes.

Token tracking fix:
- Added est_tokens, input_chars, output_chars columns to team_runs
- save_run() now calculates and stores token estimates for ALL runs
- Both logged-in and public/demo/showcase runs track tokens
- Enables accurate usage analytics across all users

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
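The six phases above can be sketched as a single orchestration function. This is a minimal illustration only: the `ask` callable, the `DeepAnalysisRun` dataclass, and the prompt wording are all hypothetical assumptions, not the project's actual API, and the "parallel" research phase is shown sequentially for clarity.

```python
# Hypothetical sketch of the Deep Analysis pipeline; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DeepAnalysisRun:
    question: str
    research: dict = field(default_factory=dict)   # model name -> answer
    critiques: dict = field(default_factory=dict)  # model name -> critique

def deep_analysis(question, models, ask):
    """models: list of model names; ask(model, prompt) -> str (assumed interface)."""
    run = DeepAnalysisRun(question)

    # Phase 1: Research — every selected model answers (parallel in the real mode)
    run.research = {m: ask(m, question) for m in models}

    # Phase 2: Debate — each model critiques the others' findings
    def others(m):
        return "\n".join(v for k, v in run.research.items() if k != m)
    run.critiques = {m: ask(m, f"Critique these answers:\n{others(m)}")
                     for m in models}

    # Phase 3: Consensus — merge research with critiques, flag strong/weak points
    consensus = ask(models[0], "Merge, noting strong and weak points:\n"
                    + "\n".join(run.research.values())
                    + "\n".join(run.critiques.values()))

    # Phase 4: Self-Eval — structured scoring on the four stated axes
    scores = {axis: ask(models[0], f"Score 1-10 for {axis}:\n{consensus}")
              for axis in ("accuracy", "completeness", "actionability", "nuance")}

    # Phase 5: Final Synthesis — strongest model produces the definitive answer
    final = ask(models[0], f"Synthesize the definitive answer:\n{consensus}\n{scores}")

    # Phase 6: Knowledge Base — result would be stored here for future RAG retrieval
    return final
```

The first model in `models` stands in for the "strongest model"; how the real mode picks it is not specified here.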
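The token-tracking fix stores character counts plus a token estimate for every run. A sketch of what such an estimate could look like, assuming the common ~4-characters-per-token heuristic (the actual formula used by save_run() is not shown in the commit):

```python
def estimate_run_tokens(input_text: str, output_text: str) -> dict:
    """Rough token estimate for a run.

    Keys mirror the columns added to team_runs (est_tokens, input_chars,
    output_chars); the 4-chars-per-token divisor is an assumption, not the
    project's confirmed formula.
    """
    input_chars = len(input_text)
    output_chars = len(output_text)
    return {
        "input_chars": input_chars,
        "output_chars": output_chars,
        "est_tokens": (input_chars + output_chars) // 4,
    }
```

Because the estimate depends only on the texts, it applies uniformly to logged-in and public/demo/showcase runs alike, which is what enables the cross-user analytics the commit mentions.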
Description: LLM Team UI - full-stack local AI orchestration platform
Languages: Python 97.4%, Shell 2.6%