llm-team-ui/llm_team_config.json
root 1711d33337 LLM Team UI v1.0 — full-stack local AI orchestration platform
Features:
- 20 team modes (brainstorm, debate, consensus, red team, etc.)
- 3 autonomous pipelines (research, model eval, knowledge extraction)
- AutoResearch Lab with ratchet engine (Karpathy-inspired)
- Multi-provider support (Ollama, OpenRouter, OpenAI, Anthropic)
- Admin panel (providers, models, timeouts, OpenRouter browser)
- History panel with copy/iterate/re-pipe workflow
- Context budget system (smart truncation, safe_query, overflow recovery)
- PostgreSQL persistence (team_runs, pipeline_runs, lab_experiments, lab_trials)
- Pure Python + embedded HTML/CSS/JS, no external JS dependencies
- Inline SVG score charts in Lab monitor
- SSE streaming for real-time output
- Systemd service with auto-restart
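The context budget feature above could look something like this minimal sketch. The name `safe_query` comes from the feature list, but the signature and drop-oldest-first strategy are assumptions, not the project's actual implementation:

```python
# Hedged sketch of "smart truncation": keep the prompt intact and drop
# the oldest history turns until the whole request fits the budget.
# budget_chars is a hypothetical parameter; real budgets may be in tokens.
def safe_query(prompt: str, history: list[str], budget_chars: int) -> str:
    kept = list(history)
    while kept and len(prompt) + sum(len(h) for h in kept) > budget_chars:
        kept.pop(0)  # drop the oldest turn first
    return "\n".join(kept + [prompt])
```

A token-based budget with a tokenizer would be more accurate for LLM contexts; character counts are just the simplest stand-in.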

Stack: Flask + Ollama + PostgreSQL + Bun-compatible
Hardware: RTX A4000 (16GB) + 128GB RAM

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 02:51:36 -05:00


{
  "providers": {
    "ollama": {
      "enabled": true,
      "base_url": "http://localhost:11434",
      "timeout": 300
    },
    "openrouter": {
      "enabled": false,
      "base_url": "https://openrouter.ai/api/v1",
      "api_key": "",
      "timeout": 120
    },
    "openai": {
      "enabled": false,
      "base_url": "https://api.openai.com/v1",
      "api_key": "",
      "timeout": 120
    },
    "anthropic": {
      "enabled": false,
      "base_url": "https://api.anthropic.com/v1",
      "api_key": "",
      "timeout": 120
    }
  },
  "disabled_models": [],
  "cloud_models": [],
  "timeouts": {
    "global": 300,
    "per_model": {}
  }
}
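A consumer of this config might resolve settings as in the sketch below. The key names (`providers`, `timeouts`, `per_model`, `global`) mirror the JSON above; the function names are illustrative, not the project's actual API:

```python
import json

# Hedged sketch: read llm_team_config.json and resolve effective settings.
def load_config(path: str = "llm_team_config.json") -> dict:
    with open(path) as f:
        return json.load(f)

def enabled_providers(config: dict) -> list[str]:
    # Providers with "enabled": true; in the default config, only ollama.
    return [name for name, p in config["providers"].items() if p.get("enabled")]

def effective_timeout(config: dict, model_name: str) -> int:
    # A per_model override wins; otherwise fall back to the global timeout.
    timeouts = config.get("timeouts", {})
    return timeouts.get("per_model", {}).get(model_name,
                                             timeouts.get("global", 300))
```

With the config above, `enabled_providers` returns `["ollama"]` and every model gets the 300 s global timeout until a `per_model` entry is added.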