root 28e641f939 Self-Analysis: AI reports from system's own data + Lab experiments API
Adds 4 one-click self-analysis reports in Lab:
1. Threat Intelligence Report — security logs → attack taxonomy,
   attacker profiling, predictive analysis, recommendations
2. Model Performance Analysis — 96 team runs → usage patterns,
   model workload, response efficiency, optimization opportunities
3. Usage Analytics — nginx access logs → traffic patterns, feature
   usage, user journey mapping, UX recommendations
4. Security Posture Assessment — combined audit of security logs,
   sentinel verdicts, fail2ban, threat intel DB → risk rating
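The four report types above can be sketched as a simple lookup from report type to the data sources each one analyzes. This is an illustrative mapping only; the names (`REPORT_SOURCES`, `sources_for`) are assumptions, not identifiers from the codebase.

```python
# Hypothetical mapping of the four report types to the system data
# each one draws on, per the list above. Names are illustrative.
REPORT_SOURCES = {
    "threat_intel": ["security logs"],
    "model_performance": ["team run history (96 runs)"],
    "access_patterns": ["nginx access logs"],
    "security_posture": [
        "security logs",
        "sentinel verdicts",
        "fail2ban",
        "threat intel DB",
    ],
}

def sources_for(report_type: str) -> list[str]:
    """Return the data sources a given report type analyzes."""
    return REPORT_SOURCES[report_type]
```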

API: POST /api/self-analyze
- type: threat_intel|model_performance|access_patterns|security_posture
- model: which local model to use (default qwen2.5)
- Returns structured report from real system data
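A minimal client sketch for the endpoint, using only the Python standard library. The base URL (`http://localhost:8000`) is an assumption; only the path, the `type` and `model` fields, and the default model come from the commit message. The request is built but not sent here.

```python
import json
import urllib.request

# Report types accepted by POST /api/self-analyze (from the commit message).
VALID_TYPES = {"threat_intel", "model_performance", "access_patterns", "security_posture"}

def build_self_analyze_request(
    report_type: str,
    model: str = "qwen2.5",  # default model per the API description
    base_url: str = "http://localhost:8000",  # ASSUMED host/port
) -> urllib.request.Request:
    """Build (but do not send) a POST /api/self-analyze request."""
    if report_type not in VALID_TYPES:
        raise ValueError(f"unknown report type: {report_type}")
    body = json.dumps({"type": report_type, "model": model}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/self-analyze",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_self_analyze_request("threat_intel")
```

Sending `req` via `urllib.request.urlopen(req)` would return the structured report as the response body.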

Lab UI:
- Green-bordered Self-Analysis card above experiment templates
- Click any report → runs analysis in background → result panel
  expands inline with full report (scrollable, closeable)
- Loading state shows "Analyzing..." during generation

Each report analyzes REAL data: actual security logs, actual run
history, actual nginx access patterns — not synthetic test data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 04:42:07 -05:00
Description
LLM Team UI - Full-stack local AI orchestration platform