root 266de613b2 llm_team_ui: 2 more scrum WARNs (rate_limit eviction + setup IP gate)
Closes the 2 remaining surgical-fix WARNs from the 2026-04-30
cross-lineage scrum on this codebase. OB-3 (root-running web app
with shell calls to fail2ban-client / systemctl / nginx config) and
the sentinel prompt-injection WARN both need bigger architectural
work and stay deferred.

OB-rate-limit (Opus WARN) — _rate_limit dict unbounded
  Pre-fix: per-worker dict with no eviction; an attacker slowly
  rotating source IPs could grow it without bound. Fix: lazy eviction sweep
  triggered when dict grows beyond 10K entries (cheap because we
  only scan when growth is unusual). Real production wants a
  Redis-backed shared counter; this is the in-process band-aid
  that prevents runaway growth without changing the deploy shape.
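A minimal sketch of the lazy-eviction shape described above. Names and numbers here (`_rate_limit`, `WINDOW`, `SWEEP_THRESHOLD`, the per-IP tuple layout) are illustrative assumptions, not the actual code:

```python
import time

WINDOW = 60              # assumed: seconds a counter stays relevant
SWEEP_THRESHOLD = 10_000  # sweep only when the dict is unusually large

_rate_limit: dict[str, tuple[int, float]] = {}  # ip -> (count, window_start)

def check_rate_limit(ip: str, limit: int = 30) -> bool:
    now = time.monotonic()
    # Lazy eviction: pay the O(n) scan only past the threshold,
    # and only drop entries already outside their window, so an
    # active counter is never removed.
    if len(_rate_limit) > SWEEP_THRESHOLD:
        expired = [k for k, (_, t0) in _rate_limit.items()
                   if now - t0 > WINDOW]
        for k in expired:
            del _rate_limit[k]
    count, t0 = _rate_limit.get(ip, (0, now))
    if now - t0 > WINDOW:   # this IP's window lapsed: start fresh
        count, t0 = 0, now
    _rate_limit[ip] = (count + 1, t0)
    return count + 1 <= limit
```

Because eviction is amortized into the hot path, there is no background timer to manage, which keeps the fix single-function as described.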

OB-auth-setup (Opus WARN) — first-time setup grant from any IP
  Pre-fix: /api/auth/login with setup=true was gated only by
  COUNT(*) FROM users == 0. If the users table was ever truncated
  or restored empty, the next external visitor (ANY IP) claimed
  admin. Fix: also require the source IP to be in ALLOWLIST_IPS
  (typically loopback + LAN gateway). Local operator setup still
  works; remote attackers hitting the endpoint after an empty-
  users state get 403.
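The gate logic can be sketched framework-agnostically; the function name, allowlist contents, and parameters are hypothetical, not the repo's actual identifiers:

```python
def setup_grant_allowed(
    source_ip: str,
    user_count: int,
    allowlist: frozenset[str] = frozenset({"127.0.0.1", "::1", "192.168.1.1"}),
) -> bool:
    """Decide whether a setup=true login may claim the admin account.

    Pre-fix behavior was `user_count == 0` alone, so an empty or
    truncated users table let ANY external visitor claim admin.
    The fix ANDs in a source-IP allowlist (loopback + LAN gateway);
    callers outside it get a 403 from the endpoint.
    """
    return user_count == 0 and source_ip in allowlist
```

In the Flask handler this would map to checking `request.remote_addr` before the empty-table branch and aborting with 403 on a miss.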

Both fixes are surgical — single function, no behavior change for
the happy path. The eviction sweep runs O(n) only when n>10K and
only drops entries already past their useful window, so it never
removes an active rate-limit count.

Outstanding from the scrum (deferred):
- OB-3 root-running web app: needs split into non-root Flask tier
  + privileged sudo wrapper service. 2-4 hr architectural work.
- Sentinel prompt-injection WARN: feeds attacker-controlled UA/
  path into LLM judge prompt. Needs prompt-template hardening or
  output validation gate before LLM verdicts can issue ban actions.
- CSP unsafe-inline WARN: defeats most XSS protection. Removing
  it requires moving inline scripts to external files (HTML
  refactor).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:19:38 -05:00