Major new AI capabilities for infrastructure monitoring:
Investigation System:
- Autonomous finding investigation with configurable autonomy levels
- Investigation orchestrator with rate limiting and guardrails
- Safety checks for read-only mode enforcement
- Chat-based investigation with approval workflows
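The autonomy levels and read-only guardrails above could be modeled roughly as below. This is a sketch with illustrative names (AutonomyLevel, CheckAction); the orchestrator's actual types are not shown here.

```go
package main

import "fmt"

// AutonomyLevel controls how much the investigation system may do on its
// own. These constants are illustrative, not the project's actual names.
type AutonomyLevel int

const (
	ReadOnly     AutonomyLevel = iota // observe only; mutations are blocked
	ApprovalGate                      // actions are proposed, user must approve
	Autonomous                        // actions run automatically
)

// Decision is the guardrail's verdict on a proposed action.
type Decision int

const (
	Blocked Decision = iota
	NeedsApproval
	Allowed
)

// CheckAction enforces read-only mode and the approval workflow.
// mutating marks actions that would change infrastructure state.
func CheckAction(level AutonomyLevel, mutating bool) Decision {
	if !mutating {
		return Allowed // reads are always safe
	}
	switch level {
	case ReadOnly:
		return Blocked
	case ApprovalGate:
		return NeedsApproval
	default:
		return Allowed
	}
}

func main() {
	fmt.Println(CheckAction(ReadOnly, true) == Blocked)           // true
	fmt.Println(CheckAction(ApprovalGate, true) == NeedsApproval) // true
	fmt.Println(CheckAction(Autonomous, false) == Allowed)        // true
}
```

Keeping the check in one function means every code path that executes an action goes through the same gate, which is what makes the read-only guarantee enforceable.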
Forecasting & Remediation:
- Trend forecasting for resource capacity planning
- Remediation engine for generating fix proposals
- Circuit breaker for AI operation protection
Unified Findings:
- Unified store bridging alerts and AI findings
- Correlation and root cause analysis
- Incident coordinator with metrics recording
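One way to picture the bridge between classic alerts and AI findings: normalize both into a single shape the frontend and incident coordinator can consume. Struct and field names below are illustrative, not the store's actual schema.

```go
package main

import "fmt"

// Source distinguishes where a unified finding came from.
type Source string

const (
	FromAlert Source = "alert"
	FromAI    Source = "ai"
)

// UnifiedFinding is one shape for both origins, so correlation and
// root-cause grouping do not care which system raised it.
type UnifiedFinding struct {
	ID       string
	Source   Source
	Severity string
	Resource string
	Summary  string
}

// Store keeps alert-derived and AI-derived findings together.
type Store struct {
	findings []UnifiedFinding
}

func (s *Store) Add(f UnifiedFinding) { s.findings = append(s.findings, f) }

// ByResource returns every finding touching one resource, the starting
// point for correlating an alert with the AI's observations.
func (s *Store) ByResource(resource string) []UnifiedFinding {
	var out []UnifiedFinding
	for _, f := range s.findings {
		if f.Resource == resource {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	s := &Store{}
	s.Add(UnifiedFinding{ID: "a1", Source: FromAlert, Severity: "warning", Resource: "vm-101", Summary: "CPU high"})
	s.Add(UnifiedFinding{ID: "f1", Source: FromAI, Severity: "info", Resource: "vm-101", Summary: "Load trending up"})
	fmt.Println(len(s.ByResource("vm-101"))) // 2
}
```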
New Frontend:
- AI Intelligence page with patrol controls
- Investigation drawer for finding details
- Unified findings panel with actions
Supporting Infrastructure:
- Learning store for user preference tracking
- Proxmox event ingestion and correlation
- Enhanced patrol with investigation triggers
- Add Start/Stop lifecycle methods to AlertTriggeredAnalyzer
- Periodically clean up the lastAnalyzed map every 30 minutes, preventing memory growth from stale cooldown entries
- Document that ai package feature constants are aliases of license constants
- Call Start() in StartPatrol and Stop() in StopPatrol
- Add tests for Start/Stop lifecycle
Alert-triggered AI analysis was passing nil for lastBackup when analyzing
guests, causing 'Never backed up' findings even when backup data existed.
- Pass actual LastBackup timestamp from VM/Container state in analyzeGuestFromAlert
- Add regression test to verify backup data is correctly passed through
Fixes false-positive 'Never backed up' alerts that appeared whenever a CPU/memory alert fired.
Backend:
- Enhanced buildEnrichedResourceContext to always show learned baselines with
  status indicators (normal/elevated/anomaly), not only when a value is anomalous
- This makes Pulse Pro's 'moat' visible: users can see that the AI understands
  their infrastructure's normal behavior patterns
- Added baseline import to service.go
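The normal/elevated/anomaly indicator could be computed along these lines. This assumes a z-score-style comparison against a learned mean and standard deviation; the thresholds and field names are illustrative, not Pulse's actual model.

```go
package main

import "fmt"

// Baseline is a learned per-metric profile (mean and standard deviation
// of historical samples).
type Baseline struct {
	Mean, StdDev float64
}

// Status labels a current value against the baseline so the enriched
// context can always show it, not only when something is anomalous.
func (b Baseline) Status(value float64) string {
	if b.StdDev == 0 {
		return "normal" // no variance learned yet; nothing to compare against
	}
	z := (value - b.Mean) / b.StdDev
	switch {
	case z > 3:
		return "anomaly"
	case z > 1.5:
		return "elevated"
	default:
		return "normal"
	}
}

func main() {
	cpu := Baseline{Mean: 20, StdDev: 5}
	fmt.Println(cpu.Status(22)) // normal   (z = 0.4)
	fmt.Println(cpu.Status(30)) // elevated (z = 2)
	fmt.Println(cpu.Status(40)) // anomaly  (z = 4)
}
```

Emitting the label unconditionally is the point of the change: a "normal" tag on a high-looking number tells the user the AI knows this value is typical for their workload.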
Frontend (user-facing changes):
- Added incident event type filtering with toggle buttons
- Added resource incident panel to view all incidents for a resource
- Added timeline expand/collapse functionality in alert history
- Added incident note saving with proper incidentId tracking
- Added startedAt parameter for proper incident timeline loading
Implements a comprehensive feedback system that lets the LLM 'remember'
user decisions about findings, preventing repetitive, annoying alerts.
Backend changes:
- Extended Finding struct with dismissed_reason, user_note, times_raised, and suppressed fields
- Added Dismiss(), Suppress(), SetUserNote(), IsSuppressed() methods to FindingsStore
- Added GetDismissedForContext() to format dismissed findings for LLM context
- Enhanced buildPatrolPrompt() to inject user feedback context
- Added POST /api/ai/patrol/dismiss and /api/ai/patrol/suppress endpoints
- Updated IsActive() to exclude suppressed findings
Frontend changes:
- Added Dismiss dropdown with options: Not an Issue, Expected Behavior, Will Fix Later
- Added Never Alert Again option for permanent suppression
- Expected Behavior prompts for optional note to help LLM understand context
- Added visual badges: recurrence count (×N), dismissed status, suppressed indicator
- Display user notes in expanded finding view
Also fixes:
- Fixed 403 error on Run Patrol (compilation errors from partial refactoring)
- Removed non-LLM patrol checks; patrol now uses LLM analysis only
- Fixed function signature mismatches in alert_triggered.go
The LLM now receives context about previously dismissed findings and is
instructed not to re-raise them unless severity has significantly worsened.
- Add alert-triggered AI analysis for real-time incident response
- Implement patrol history persistence across restarts
- Add patrol schedule configuration UI in AI Settings
- Enhance AIChat with patrol status and manual trigger controls
- Add resource store improvements for AI context building
- Expand Alerts page with AI-powered analysis integration
- Add Vite proxy config for AI API endpoints
- Support both Anthropic and OpenAI providers with streaming
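Supporting both providers behind one streaming interface might be structured like this. The interface and the echo stand-in are hypothetical (no vendor SDK calls are shown); the point is that Anthropic and OpenAI backends become interchangeable to the chat layer.

```go
package main

import (
	"context"
	"fmt"
	"strings"
)

// Provider abstracts a streaming chat backend so Anthropic and OpenAI
// implementations are interchangeable. A sketch, not the project's API.
type Provider interface {
	// Stream sends a prompt and delivers response chunks to onChunk.
	Stream(ctx context.Context, prompt string, onChunk func(string)) error
}

// echoProvider is a stand-in so this example runs without network
// access or vendor SDKs; a real implementation would call the
// provider's streaming endpoint here.
type echoProvider struct{ chunks []string }

func (e echoProvider) Stream(ctx context.Context, prompt string, onChunk func(string)) error {
	for _, c := range e.chunks {
		select {
		case <-ctx.Done():
			return ctx.Err() // honor cancellation mid-stream
		default:
			onChunk(c)
		}
	}
	return nil
}

// collect concatenates a streamed response, as a chat UI would while
// rendering tokens incrementally.
func collect(p Provider, prompt string) (string, error) {
	var b strings.Builder
	err := p.Stream(context.Background(), prompt, func(chunk string) {
		b.WriteString(chunk)
	})
	return b.String(), err
}

func main() {
	p := echoProvider{chunks: []string{"Disk ", "usage ", "is trending up."}}
	out, _ := collect(p, "analyze vm-101")
	fmt.Println(out) // Disk usage is trending up.
}
```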