The applyAuthContextHeaders early-return in CheckAuth skipped the OIDC
token refresh block, causing long-lived OIDC sessions to expire instead
of auto-refreshing. Move the refresh trigger into extractAndStoreAuthContext
so it fires at the middleware level before CheckAuth's early return.
Also add a nil guard on mtPersistence in AISettingsHandler.GetAIService
for non-default org paths, preventing a potential panic if background
code carries a non-default org context in v5 single-tenant mode.
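The nil guard described above can be sketched as follows. The struct shapes and method signature here are illustrative stand-ins, not the real Pulse types; the point is only the ordering of the nil check before the non-default org path:

```go
package main

import "fmt"

// Hypothetical, simplified shapes standing in for the real handler types.
type MultiTenantPersistence struct{}

func (m *MultiTenantPersistence) ServiceFor(orgID string) string {
	return "service-for-" + orgID
}

type AISettingsHandler struct {
	mtPersistence *MultiTenantPersistence // may be nil in v5 single-tenant mode
	defaultSvc    string
}

// GetAIService guards the non-default org path: if mtPersistence is nil,
// fall back to the default service instead of dereferencing and panicking.
func (h *AISettingsHandler) GetAIService(orgID string) string {
	if orgID != "default" && h.mtPersistence != nil {
		return h.mtPersistence.ServiceFor(orgID)
	}
	return h.defaultSvc
}

func main() {
	h := &AISettingsHandler{defaultSvc: "default-svc"} // mtPersistence left nil
	// Safe even when background code carries a non-default org context:
	fmt.Println(h.GetAIService("acme")) // default-svc
}
```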
HandleUndismissFinding now checks both the patrol and unified stores
before returning. It returns 404 with an error message when the finding
is not found or not dismissed, instead of silently returning success.
The Undismiss() method existed on FindingsStore but was never exposed
via the API. Users who dismissed findings as "not_an_issue" had no way
to revert them.
- Add HandleUndismissFinding handler and route
- Add Undismiss() to UnifiedStore for parity with FindingsStore
- Also remove matching explicit suppression rules on undismiss
When users enable AI discovery without setting an interval, the
default of 0 silently leaves discovery in manual-only mode. Normalize
0 to 24h on save so discovery actually starts automatically.
Fixes #1225
Some local LLM servers (LM Studio, llama.cpp) expose OpenAI-compatible
APIs but don't support function calling. When tools are sent to these
models, they output raw control tokens instead of proper responses.
This change adds:
- openai_tools_disabled config field in AIConfig
- AreToolsDisabledForProvider() method to check at runtime
- API support to get/set the new setting
- Tests for the new functionality
When enabled and using a custom OpenAI base URL, the chat service will
skip sending tools to the model, allowing basic chat functionality to
work even with models that don't support function calling.
Fixes #1154
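The runtime check can be sketched like this. The config struct is a simplified stand-in; whether the method itself gates on the custom base URL (rather than the caller) is an assumption drawn from the commit's description:

```go
package main

import "fmt"

// Simplified stand-in for AIConfig; the field mirrors the new
// openai_tools_disabled setting, everything else is illustrative.
type AIConfig struct {
	OpenAIBaseURL       string // custom base URL implies a local/compatible server
	OpenAIToolsDisabled bool
}

// AreToolsDisabledForProvider reports whether tool/function-calling payloads
// should be omitted: only when the user opted out AND a custom base URL is
// set (e.g. LM Studio or llama.cpp exposing an OpenAI-compatible API).
func (c *AIConfig) AreToolsDisabledForProvider() bool {
	return c.OpenAIToolsDisabled && c.OpenAIBaseURL != ""
}

func main() {
	local := &AIConfig{
		OpenAIBaseURL:       "http://localhost:1234/v1",
		OpenAIToolsDisabled: true,
	}
	fmt.Println(local.AreToolsDisabledForProvider()) // true: skip sending tools
}
```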
- Sync UserNote, AcknowledgedAt, SnoozedUntil, DismissedReason, Suppressed,
and TimesRaised from ai.Finding to unified store in both callback and
startup sync paths. Mirror note writes to unified store immediately.
- Dim acknowledged findings (opacity-60), add "Acknowledged" badge, hide
acknowledge button once acknowledged, sort below unacknowledged in
severity mode.
- Pass finding_id through frontend chat API → backend ChatRequest →
ExecuteRequest. Look up full finding from unified store (mutex-guarded)
and prepend structured context to the prompt.
- Update license_required error to mention 'Auto-fix' instead of
'Assisted and Full autonomy' for clearer user messaging
- Update full_mode_locked error to reference the UI toggle label
'Auto-fix critical issues' instead of internal field name
Send an SSE comment immediately when a client connects to the patrol
stream endpoint. This flushes HTTP headers so clients receive the
200 response right away, rather than blocking until the first event.
This fixes eval tests where the stream connection would time out
waiting for headers while patrol was still initializing.
- Remove deprecated config.ModelInfo type (use providers.ModelInfo)
- Remove deprecated GetAvailableModels function (always returned nil)
- Remove associated test
- Update AISettingsResponse to use providers.ModelInfo
Major new AI capabilities for infrastructure monitoring:
Investigation System:
- Autonomous finding investigation with configurable autonomy levels
- Investigation orchestrator with rate limiting and guardrails
- Safety checks for read-only mode enforcement
- Chat-based investigation with approval workflows
Forecasting & Remediation:
- Trend forecasting for resource capacity planning
- Remediation engine for generating fix proposals
- Circuit breaker for AI operation protection
Unified Findings:
- Unified store bridging alerts and AI findings
- Correlation and root cause analysis
- Incident coordinator with metrics recording
New Frontend:
- AI Intelligence page with patrol controls
- Investigation drawer for finding details
- Unified findings panel with actions
Supporting Infrastructure:
- Learning store for user preference tracking
- Proxmox event ingestion and correlation
- Enhanced patrol with investigation triggers
- Updated LicenseHandlers and LicenseService to be context/tenant aware
- Refactored API router and middleware to support tenant-scoped license checks
- Updated associated tests for context-aware handlers
Implements Phase 1-2 of multi-tenancy support using a directory-per-tenant
strategy that preserves existing file-based persistence.
Key changes:
- Add MultiTenantPersistence manager for org-scoped config routing
- Add TenantMiddleware for X-Pulse-Org-ID header extraction and context propagation
- Add MultiTenantMonitor for per-tenant monitor lifecycle management
- Refactor handlers (ConfigHandlers, AlertHandlers, AIHandlers, etc.) to be
context-aware with getConfig(ctx)/getMonitor(ctx) helpers
- Add Organization model for future tenant metadata
- Update server and router to wire multi-tenant components
All handlers maintain backward compatibility via legacy field fallbacks
for single-tenant deployments using the "default" org.
Adapts API handlers to use the new native chat service:
ai_handler.go:
- Replace opencode.Service with chat.Service
- Add AIService interface for testability
- Add factory function for service creation (mockable)
- Update provider wiring to use tools package types
ai_handlers.go:
- Add Notable field to model list response
- Simplify command approval - execution handled by agentic loop
- Remove inline command execution from approval endpoint
router.go:
- Update imports: mcp -> tools, opencode -> chat
- Add monitor wrapper types for cleaner dependency injection
- Update patrol wiring for new chat service
agent_profiles:
- Rename agent_profiles_mcp.go -> agent_profiles_tools.go
- Update imports for tools package
monitor_wrappers.go:
- New file with wrapper types for alert/notification monitors
- Enables interface-based dependency injection
The agent was crashing with 'fatal error: concurrent map writes' when
handleCheckUpdatesCommand spawned a goroutine that called collectOnce
concurrently with the main collection loop. Both code paths access
a.prevContainerCPU without synchronization.
Added a.cpuMu mutex to protect all accesses to prevContainerCPU in:
- pruneStaleCPUSamples()
- collectContainer() delete operation
- calculateContainerCPUPercent()
Related to #1063
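The locking pattern can be sketched as below. The `Agent` struct is stripped down to the two relevant fields; method names are illustrative. Without `cpuMu`, the concurrent writers in `main` would trip Go's `fatal error: concurrent map writes` detector:

```go
package main

import (
	"fmt"
	"sync"
)

// Agent sketch: cpuMu serializes every access to prevContainerCPU, which is
// otherwise written concurrently by the main collection loop and the
// goroutine spawned by handleCheckUpdatesCommand.
type Agent struct {
	cpuMu            sync.Mutex
	prevContainerCPU map[string]uint64
}

func (a *Agent) recordCPU(id string, val uint64) {
	a.cpuMu.Lock()
	defer a.cpuMu.Unlock()
	a.prevContainerCPU[id] = val
}

func (a *Agent) dropContainer(id string) {
	a.cpuMu.Lock()
	defer a.cpuMu.Unlock()
	delete(a.prevContainerCPU, id)
}

func main() {
	a := &Agent{prevContainerCPU: make(map[string]uint64)}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func(n int) { // concurrent writers: safe only because of the mutex
			defer wg.Done()
			a.recordCPU(fmt.Sprintf("ct%d", n%5), uint64(n))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(a.prevContainerCPU)) // 5
}
```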
Add ability for users to describe what kind of agent profile they need
in natural language, and have AI generate a suggestion with name,
description, config values, and rationale.
- Add ProfileSuggestionHandler with schema-aware prompting
- Add SuggestProfileModal component with example prompts
- Update AgentProfilesPanel with suggest button and description field
- Streamline ValidConfigKeys to only agent-supported settings
- Update profile validation tests for simplified schema
Allow users to set custom disk usage thresholds per mounted filesystem
on host agents, rather than applying a single threshold to all volumes.
This addresses NAS/NVR use cases where some volumes (e.g., NVR storage)
intentionally run at 99% while others need strict monitoring.
Backend:
- Check for disk-specific overrides before using HostDefaults.Disk
- Override key format: host:<hostId>/disk:<mountpoint>
- Support both custom thresholds and disable per-disk
Frontend:
- Add 'hostDisk' resource type
- Add "Host Disks" collapsible section in Thresholds → Hosts tab
- Group disks by host for easier navigation
Closes #1103
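The override lookup can be sketched as follows; the key format matches the commit, while the function names and the `map[string]float64` override store are illustrative assumptions:

```go
package main

import "fmt"

// diskOverrideKey builds the per-disk override key in the documented format:
// host:<hostId>/disk:<mountpoint>.
func diskOverrideKey(hostID, mountpoint string) string {
	return fmt.Sprintf("host:%s/disk:%s", hostID, mountpoint)
}

// resolveDiskThreshold sketches the lookup order: a disk-specific override
// wins; otherwise fall back to the host-wide default (HostDefaults.Disk).
func resolveDiskThreshold(overrides map[string]float64, hostID, mountpoint string, hostDefault float64) float64 {
	if v, ok := overrides[diskOverrideKey(hostID, mountpoint)]; ok {
		return v
	}
	return hostDefault
}

func main() {
	overrides := map[string]float64{
		"host:nas1/disk:/mnt/nvr": 99, // NVR volume intentionally runs near full
	}
	fmt.Println(resolveDiskThreshold(overrides, "nas1", "/mnt/nvr", 85)) // 99
	fmt.Println(resolveDiskThreshold(overrides, "nas1", "/", 85))        // 85
}
```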
The audit logging feature was showing the UI for Pro users but the
SQLiteLogger was never actually initialized - it fell back to the
ConsoleLogger which only writes to console and returns empty arrays
for queries.
This fix:
- Adds initAuditLoggerIfLicensed() helper to license_handlers.go
- Calls it when loading a persisted license at startup
- Calls it when activating a new license via API
- Creates SQLiteLogger with 90-day default retention when audit_logging
feature is enabled
The audit.db will be created in {dataDir}/audit/ when Pro is licensed.
- Add freebsd-amd64 and freebsd-arm64 to normalizeUnifiedAgentArch()
so the download endpoint serves FreeBSD binaries when requested
- Add FreeBSD/pfSense/OPNsense platform option to agent setup UI
with note about bash installation requirement
- Add FreeBSD test cases to unified_agent_test.go
Fixes installation on pfSense/OPNsense where users were getting 404
errors because the backend didn't recognize the freebsd-amd64 arch
parameter from install.sh.
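A sketch of the extended normalization, assuming a switch-based mapping; the exact alias set accepted for the Linux cases is illustrative, only the new FreeBSD pairs come from the commit:

```go
package main

import "fmt"

// normalizeUnifiedAgentArch maps the platform-arch string install.sh sends to
// the binary name the download endpoint serves. The FreeBSD cases previously
// fell through to the 404 path.
func normalizeUnifiedAgentArch(arch string) (string, bool) {
	switch arch {
	case "linux-amd64", "amd64", "x86_64":
		return "linux-amd64", true
	case "linux-arm64", "arm64", "aarch64":
		return "linux-arm64", true
	case "freebsd-amd64": // pfSense/OPNsense on x86_64
		return "freebsd-amd64", true
	case "freebsd-arm64":
		return "freebsd-arm64", true
	default:
		return "", false // caller responds 404
	}
}

func main() {
	a, ok := normalizeUnifiedAgentArch("freebsd-amd64")
	fmt.Println(a, ok) // freebsd-amd64 true
}
```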
Previous LLM sessions incorrectly inserted fake URLs (pulse.sh/pro and
yourpulse.io/pro) for the Pro upgrade links. Neither domain exists.
Replaced all 34 instances with the correct URL: https://pulserelay.pro/
Fixes #1077
Users providing base URLs like "https://openrouter.ai/api/v1" were
getting HTML error responses because the client used the URL directly
without appending "/chat/completions".
- Normalize baseURL in NewOpenAIClient to ensure it ends with /chat/completions
- Fix modelsEndpoint() to derive /models from the normalized baseURL
- Add tests for URL normalization with various endpoint formats
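The normalization rules can be sketched like this; the handling of trailing slashes and already-complete URLs is an assumption about the commit's intent:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeChatURL ensures a user-supplied base URL ends with
// /chat/completions, so both "https://openrouter.ai/api/v1" and a fully
// specified endpoint URL work.
func normalizeChatURL(baseURL string) string {
	u := strings.TrimRight(baseURL, "/")
	if !strings.HasSuffix(u, "/chat/completions") {
		u += "/chat/completions"
	}
	return u
}

// modelsEndpoint derives the /models URL from the normalized chat URL.
func modelsEndpoint(chatURL string) string {
	return strings.TrimSuffix(chatURL, "/chat/completions") + "/models"
}

func main() {
	c := normalizeChatURL("https://openrouter.ai/api/v1")
	fmt.Println(c)                 // https://openrouter.ai/api/v1/chat/completions
	fmt.Println(modelsEndpoint(c)) // https://openrouter.ai/api/v1/models
}
```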
Implements server-side persistence for AI chat sessions, allowing users
to continue conversations across devices and browser sessions. Related
to #1059.
Backend:
- Add chat session CRUD API endpoints (GET/PUT/DELETE)
- Add persistence layer with per-user session storage
- Support session cleanup for old sessions (90 days)
- Multi-user support via auth context
Frontend:
- Rewrite aiChat store with server sync (debounced)
- Add session management UI (new conversation, switch, delete)
- Local storage as fallback/cache
- Initialize sync on app startup when AI is enabled
- Add persistent volume mounts for Go/npm caches (faster rebuilds)
- Add shell config with helpful aliases and custom prompt
- Add comprehensive devcontainer documentation
- Add pre-commit hooks for Go formatting and linting
- Use go-version-file in CI workflows instead of hardcoded versions
- Simplify docker compose commands with --wait flag
- Add gitignore entries for devcontainer auth files
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add IsAIEnabled() method to AISettingsHandler for consistent checks
- Gate baseline learning, pattern detector, and correlation detector initialization
in StartPatrol() on AI being enabled
- Add AI enabled checks to all /api/ai/intelligence/* endpoints as defense-in-depth
- Return empty results with "AI is not enabled" message when AI is disabled
This ensures no AI-related data is collected, persisted, or returned when AI is disabled,
preventing the "undismissable alerts" issue where old AI findings would appear.
BREAKING CHANGE: AI Patrol now uses EXACT alert thresholds by default
instead of warning 5-15% before the threshold.
Changes:
- Default behavior: Patrol warns at your configured threshold (e.g., 96% = warns at 96%)
- New setting: 'use_proactive_thresholds' enables the old early-warning behavior
- API: Added use_proactive_thresholds to GET/PUT /api/settings/ai
- Backend: Added SetProactiveMode/GetProactiveMode to PatrolService
- Backend: Added GetThresholds to PatrolService for UI display
- Tests: Updated and added tests for both exact and proactive modes
- Also fixed unused imports in dockeragent/agent.go
When proactive mode is disabled (default):
- Watch: threshold - 5% (slight buffer)
- Warning: exact threshold
When proactive mode is enabled:
- Watch: threshold - 15%
- Warning: threshold - 5%
Related to #951
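The two modes reduce to simple arithmetic; a minimal sketch (function name is illustrative):

```go
package main

import "fmt"

// patrolThresholds returns the (watch, warning) levels for a configured alert
// threshold under the two modes described above: exact (default) and
// proactive early-warning.
func patrolThresholds(threshold float64, proactive bool) (watch, warning float64) {
	if proactive {
		return threshold - 15, threshold - 5 // old early-warning behavior
	}
	return threshold - 5, threshold // warn exactly at the configured value
}

func main() {
	w, warn := patrolThresholds(96, false)
	fmt.Println(w, warn) // 91 96
	w, warn = patrolThresholds(96, true)
	fmt.Println(w, warn) // 81 91
}
```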
- patrol.go: Auto-fix now requires both config flag AND ai_autofix license
- service.go: IsAutonomous() checks for ai_autofix license before enabling
- ai_handlers.go: API returns 402 if enabling auto-fix/autonomous without license
Users who accumulated AI patrol findings before the patrol-without-AI
bug was fixed (24c4bb0b) could not dismiss them because the dismiss and
resolve endpoints required a Pro license.
Changes:
- Remove Pro license requirement from /api/ai/patrol/dismiss endpoint
- Remove Pro license requirement from /api/ai/patrol/resolve endpoint
- Add ClearAll() method to FindingsStore for bulk clearing
- Add DELETE /api/ai/patrol/findings endpoint for clearing all findings
- Add "Clear All" button to AI Insights UI
Users can now dismiss or resolve any findings they can see, and admins
can clear all findings at once if needed.
Added request_timeout_seconds field to:
- AISettingsResponse struct (for GET responses)
- AISettingsUpdateRequest struct (for PUT requests)
- HandleUpdateAISettings handler logic (validation + persistence)
- HandleGetAISettings response builder
The frontend was already sending request_timeout_seconds but the
backend was ignoring it. Now the setting persists correctly.
Ensures the AI settings endpoint reports enabled=true and configured=true
when running in demo mode (PULSE_MOCK_MODE=true), even if no provider is
configured. This unlocks the frontend UI to allow interaction with the
mock AI assistant.
Addresses #866 - agents were logging 'WebSocket connection failed' warnings
even during normal reconnection scenarios (server restart, network blip, etc).
Changes:
- Normal close errors (1000, 1001, connection reset) now log at Debug level
- Only log Warning after 3+ consecutive failures
- Changed 'Connecting to Pulse' from Info to Debug to reduce noise
- Successful connections still log at Info level
The WebSocket is only used for AI command execution, not metrics, so
transient disconnections don't affect monitoring functionality.
Bug Fixes:
- Fix boolean fields with 'omitempty' not persisting false values
- AlertTriggeredAnalysis, PatrolAnalyzeNodes/Guests/Docker/Storage
- omitempty causes Go to skip false (zero value) when marshaling JSON
  - On reload, NewDefaultAIConfig() sets true, and the missing field stays true
- Fix model dropdown losing selection after save (SolidJS reactivity issue)
- Added explicit 'selected' attribute to option elements
- Ensures browser maintains selection with optgroups during re-renders
Improvements:
- Change patrol type label from 'Quick' to 'Patrol' in history table
- Add chat_model and patrol_model to AI settings update log
- Add alert_triggered_analysis to AI config load log for debugging
Runbooks were a half-built feature that provided no value:
- Only 3 runbooks existed
- AI dynamic remediation already covers the same ground
- Added UI complexity without benefit
Removed:
- runbooks.go and runbooks_test.go
- Handler functions in ai_handlers.go
- Routes in router.go
- Test cases in ai_handlers_test.go
- Auto-fix call in patrol.go
Kept (dead code but harmless):
- Frontend types/API calls (will 404)
- RecordIncidentRunbook function (unused)
Less code = easier to maintain.
- Create Intelligence struct that aggregates all AI subsystems
- Add /api/ai/intelligence endpoint for system-wide and per-resource insights
- Wire Intelligence into PatrolService as a facade (not replacement)
- Add TypeScript types and API client for frontend
- Add unit tests for Intelligence orchestrator
- Fix pre-existing test failures using diagnostic commands instead of actionable ones
The Intelligence orchestrator provides:
- System-wide health scoring (A-F grades)
- Aggregated findings, predictions, correlations
- Per-resource context generation for AI prompts
- Learning progress tracking
This unifies access to AI subsystems without replacing existing code paths.
Backend:
- Enhanced buildEnrichedResourceContext to ALWAYS show learned baselines with
status indicators (normal/elevated/anomaly) instead of only when anomalous
- This makes Pulse Pro's 'moat' visible - users can see the AI understands
their infrastructure's normal behavior patterns
- Added baseline import to service.go
Frontend (user changes):
- Added incident event type filtering with toggle buttons
- Added resource incident panel to view all incidents for a resource
- Added timeline expand/collapse functionality in alert history
- Added incident note saving with proper incidentId tracking
- Added startedAt parameter for proper incident timeline loading