Optimization & History:
- Fix optimization history display on both /history and the main page slide-out panel
- Full card layout with score bars, "Use This" on all variations, A/B compare, export
- Deep Optimize: chain 2-3 rounds, feeding each winner into the next
- Prompt template library: save winners, browse them as quick-start chips
- Mode recommendation engine based on historical scores
- Score calibration: strict anchor examples (scores now spread 4-8 instead of 7-9)

Security Hardening:
- Auto-escalation: 3 violations within 60s trigger an instant ban plus high-alert mode (30s scans)
- Sentinel prompt-injection defense: sanitize log data, adversarial boundary instruction
- XSS fixes: escapeHtml on model names and mode labels in the history panel
- Log redaction: passwords/tokens/secrets auto-redacted from the log display
- Rate-limited /api/admin/logs endpoint (10 req/min)
- HSTS + COOP headers, persistent session secret, HttpOnly + SameSite cookies
- Concurrent ban execution via ThreadPoolExecutor

Prompt Window (pretext integration):
- Canvas particle system: keystroke particles, focus sparkle, paste explosion
- Ghost-text typewriter: cycling placeholder with animated typing
- Pretext-powered line measurement for accurate metrics
- Mode-colored particle cascade on mode switch
- Sample-prompt typewriter effect with spam-click protection
- Live metrics bar: chars, words, lines, est. tokens

Showcase mode now allows the /optimize, /deep-optimize, and /score endpoints.
CSP updated for Google Fonts + esm.sh (pretext).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Description: LLM Team UI - Full-stack local AI orchestration platform
Languages: Python 97.4%, Shell 2.6%