LLM Team UI v1.0 — full-stack local AI orchestration platform

Features:
- 20 team modes (brainstorm, debate, consensus, red team, etc.)
- 3 autonomous pipelines (research, model eval, knowledge extraction)
- AutoResearch Lab with ratchet engine (Karpathy-inspired)
- Multi-provider support (Ollama, OpenRouter, OpenAI, Anthropic)
- Admin panel (providers, models, timeouts, OpenRouter browser)
- History panel with copy/iterate/re-pipe workflow
- Context budget system (smart truncation, safe_query, overflow recovery)
- PostgreSQL persistence (team_runs, pipeline_runs, lab_experiments, lab_trials)
- Pure Python + embedded HTML/CSS/JS, no external JS dependencies
- Inline SVG score charts in Lab monitor
- SSE streaming for real-time output
- Systemd service with auto-restart
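The context budget feature implies logic for trimming an over-long context before it hits a model's window. A minimal sketch of a head-and-tail truncation under a character budget (the helper name truncate_to_budget and the 2:1 head/tail split are illustrative assumptions, not the actual safe_query implementation):

```python
def truncate_to_budget(text, budget_chars, marker="\n...[truncated]...\n"):
    """Keep the head and tail of over-budget context, dropping the middle.
    Biases toward the start, where instructions usually live."""
    if len(text) <= budget_chars:
        return text
    keep = budget_chars - len(marker)
    head = keep * 2 // 3
    tail = keep - head
    return text[:head] + marker + text[-tail:]
```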

Stack: Flask + Ollama + PostgreSQL + Bun-compatible
Hardware: RTX A4000 (16GB) + 128GB RAM
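The SSE streaming feature works by framing each chunk of model output in the Server-Sent Events wire format, which Flask can yield from a generator response. A sketch of the framing (the function sse_event is a hypothetical helper, not quoted from llm_team_ui.py):

```python
import json

def sse_event(payload, event=None):
    """Format one Server-Sent Events message: an optional 'event:' line,
    a 'data:' line carrying a JSON body, and the blank-line terminator."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(payload)}")
    return "\n".join(lines) + "\n\n"
```

A browser-side EventSource then receives each message as it is yielded, giving real-time token output without any external JS dependency.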

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
commit 1711d33337 by root, 2026-03-25 02:51:36 -05:00
5 changed files with 3577 additions and 0 deletions

.gitignore (new file, 4 lines)

@@ -0,0 +1,4 @@
__pycache__/
*.pyc
.env
*.log

llm-team-ui.service (new file, 14 lines)

@@ -0,0 +1,14 @@
[Unit]
Description=LLM Team UI - Multi-model team web interface
After=network.target ollama.service

[Service]
Type=simple
User=root
WorkingDirectory=/root
ExecStart=/usr/bin/python3 /root/llm_team_ui.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
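Installing the unit follows the usual systemd workflow; a sketch, assuming the file is copied verbatim to the system unit directory (paths as in ExecStart above):

```shell
sudo cp llm-team-ui.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now llm-team-ui.service
# Restart=on-failure with RestartSec=5 restarts the app 5s after a crash
journalctl -u llm-team-ui.service -f   # follow live logs
```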

llm_team_config.json (new file, 33 lines)

@@ -0,0 +1,33 @@
{
  "providers": {
    "ollama": {
      "enabled": true,
      "base_url": "http://localhost:11434",
      "timeout": 300
    },
    "openrouter": {
      "enabled": false,
      "base_url": "https://openrouter.ai/api/v1",
      "api_key": "",
      "timeout": 120
    },
    "openai": {
      "enabled": false,
      "base_url": "https://api.openai.com/v1",
      "api_key": "",
      "timeout": 120
    },
    "anthropic": {
      "enabled": false,
      "base_url": "https://api.anthropic.com/v1",
      "api_key": "",
      "timeout": 120
    }
  },
  "disabled_models": [],
  "cloud_models": [],
  "timeouts": {
    "global": 300,
    "per_model": {}
  }
}
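The config separates per-provider timeouts from a global default and per-model overrides. The lookup order is presumably override-first; a minimal sketch against the parsed config dict (the helper name resolve_timeout and the exact precedence are assumptions, not taken from llm_team_ui.py):

```python
def resolve_timeout(config, provider, model=None):
    """Effective timeout in seconds: per-model override if present,
    else the provider's own timeout, else the global default."""
    timeouts = config.get("timeouts", {})
    if model and model in timeouts.get("per_model", {}):
        return timeouts["per_model"][model]
    prov = config.get("providers", {}).get(provider, {})
    return prov.get("timeout", timeouts.get("global", 300))
```

With the file above, an Ollama call would resolve to 300 s unless a per_model entry overrides it.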

llm_team_ui.py (new file, 3472 lines)

Diff suppressed because it is too large.

schema.sql (new file, 54 lines)

@@ -0,0 +1,54 @@
-- LLM Team UI Database Schema
-- Run against PostgreSQL: psql -d knowledge_base -f schema.sql
CREATE TABLE IF NOT EXISTS team_runs (
  id SERIAL PRIMARY KEY,
  mode TEXT NOT NULL,
  prompt TEXT NOT NULL,
  config JSONB,
  responses JSONB NOT NULL DEFAULT '[]',
  models_used TEXT[],
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS pipeline_runs (
  id SERIAL PRIMARY KEY,
  pipeline TEXT NOT NULL,
  topic TEXT NOT NULL,
  status TEXT DEFAULT 'running',
  steps JSONB DEFAULT '[]',
  result JSONB,
  models_used TEXT[],
  duration_ms INTEGER,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  completed_at TIMESTAMPTZ
);

CREATE TABLE IF NOT EXISTS lab_experiments (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  status TEXT DEFAULT 'idle',
  objective TEXT,
  metric TEXT DEFAULT 'quality',
  eval_cases JSONB DEFAULT '[]',
  mutable_config JSONB DEFAULT '{}',
  best_config JSONB,
  best_score FLOAT DEFAULT 0,
  total_trials INTEGER DEFAULT 0,
  improvements INTEGER DEFAULT 0,
  models_pool TEXT[],
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS lab_trials (
  id SERIAL PRIMARY KEY,
  experiment_id INTEGER REFERENCES lab_experiments(id) ON DELETE CASCADE,
  trial_num INTEGER,
  config_diff TEXT,
  config_snapshot JSONB,
  scores JSONB,
  avg_score FLOAT,
  improved BOOLEAN DEFAULT FALSE,
  duration_ms INTEGER,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
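Writing a team run into this schema means serializing the JSONB columns and passing the TEXT[] list through as-is. A sketch of building the parameterized INSERT for team_runs with psycopg2-style placeholders (the helper name team_run_insert is an assumption for illustration):

```python
import json

def team_run_insert(mode, prompt, config, responses, models):
    """Build (sql, params) for inserting one row into team_runs.
    JSONB columns are passed as JSON strings; TEXT[] takes a Python list."""
    sql = ("INSERT INTO team_runs (mode, prompt, config, responses, models_used) "
           "VALUES (%s, %s, %s, %s, %s) RETURNING id")
    params = (mode, prompt, json.dumps(config), json.dumps(responses), models)
    return sql, params
```

The tuple would be handed to cursor.execute(sql, params), letting the driver do the quoting.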