- patrol.go: Auto-fix now requires both config flag AND ai_autofix license
- service.go: IsAutonomous() checks for ai_autofix license before enabling
- ai_handlers.go: API returns 402 if enabling auto-fix/autonomous without license
This implements full remote configuration for the AI command execution setting:
Backend:
- Add CommandsEnabled field to HostMetadata for persistent storage
- Add GetHostAgentConfig/UpdateHostAgentConfig methods to Monitor
- Add /api/agents/host/{id}/config endpoint (GET for agents, PATCH for UI; sketched below)
- Server includes config in report response for immediate agent application
- Agent parses response and dynamically enables/disables command client
Frontend:
- Add 'AI Commands' toggle column in Managed Agents table
- Toggle immediately updates server config; agent applies on next heartbeat
- Add 'Enable AI command execution' checkbox in agent installer wizard
- Checkbox adds --enable-commands flag to generated install commands
This allows users to:
1. Enable at install time via checkbox in the wizard
2. Toggle remotely via the Managed Agents UI for existing agents
3. Agents apply changes automatically on their next report cycle
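A rough sketch of what that config exchange could look like on the backend, using Go 1.22 route patterns; the struct shape and handler names here are illustrative, not Pulse's actual code:

    package agentcfg

    import (
        "encoding/json"
        "net/http"
        "sync"
    )

    // HostAgentConfig is the payload an agent could receive from
    // GET /api/agents/host/{id}/config or inline in each report response.
    type HostAgentConfig struct {
        CommandsEnabled bool `json:"commandsEnabled"`
    }

    type configStore struct {
        mu     sync.Mutex
        byHost map[string]*HostAgentConfig
    }

    // routes wires GET (agents poll their config) and PATCH (UI toggles it).
    func (s *configStore) routes() *http.ServeMux {
        mux := http.NewServeMux()
        mux.HandleFunc("GET /api/agents/host/{id}/config", s.get)
        mux.HandleFunc("PATCH /api/agents/host/{id}/config", s.patch)
        return mux
    }

    func (s *configStore) get(w http.ResponseWriter, r *http.Request) {
        s.mu.Lock()
        defer s.mu.Unlock()
        cfg, ok := s.byHost[r.PathValue("id")]
        if !ok {
            http.NotFound(w, r)
            return
        }
        json.NewEncoder(w).Encode(cfg)
    }

    func (s *configStore) patch(w http.ResponseWriter, r *http.Request) {
        var req HostAgentConfig
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, "invalid body", http.StatusBadRequest)
            return
        }
        s.mu.Lock()
        defer s.mu.Unlock()
        cfg, ok := s.byHost[r.PathValue("id")]
        if !ok {
            http.NotFound(w, r)
            return
        }
        cfg.CommandsEnabled = req.CommandsEnabled
        json.NewEncoder(w).Encode(cfg)
    }

The agent then only needs to read the commandsEnabled flag from each report response and start or stop its command client accordingly.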
Users who accumulated AI patrol findings before the patrol-without-AI
bug was fixed (24c4bb0b) could not dismiss them because the dismiss and
resolve endpoints required a Pro license.
Changes:
- Remove Pro license requirement from /api/ai/patrol/dismiss endpoint
- Remove Pro license requirement from /api/ai/patrol/resolve endpoint
- Add ClearAll() method to FindingsStore for bulk clearing
- Add DELETE /api/ai/patrol/findings endpoint for clearing all findings
- Add "Clear All" button to AI Insights UI
Users can now dismiss or resolve any findings they can see, and admins
can clear all findings at once if needed.
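For illustration, a bulk ClearAll() on a findings store can be as small as the sketch below; the real FindingsStore almost certainly carries more state (persistence, timestamps) than shown.

    package findings

    import "sync"

    type Finding struct {
        ID      string
        Message string
    }

    type FindingsStore struct {
        mu       sync.Mutex
        findings map[string]Finding
    }

    // ClearAll drops every finding in one pass and reports how many were removed,
    // which the DELETE /api/ai/patrol/findings handler can return to the UI.
    func (s *FindingsStore) ClearAll() int {
        s.mu.Lock()
        defer s.mu.Unlock()
        n := len(s.findings)
        s.findings = make(map[string]Finding)
        return n
    }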
Extended Issue #891 fix to cover manual node addition via the UI:
1. HandleAddNode now checks for duplicates by Host URL (not name)
2. Disambiguator applied to PVE, PBS, and PMG node creation
3. Error message updated: 'host URL already exists' instead of 'name already exists'
This ensures the fix works whether nodes are added via:
- Agent auto-registration ✓
- Manual UI setup ✓
All node creation paths now consistently:
- Match by Host URL only
- Disambiguate duplicate hostnames with IP: 'px1' → 'px1 (10.0.2.224)'
Follow-up to the #891 fix: also match by name+tokenID to handle the case
where the same physical host gets a new IP (DHCP). This ensures:
1. Same hostname + DIFFERENT token = different physical hosts → create separate nodes
2. Same hostname + SAME token = same host with new IP → update existing node
Also updates the host URL when an existing node is matched, so IP changes
are properly reflected in the saved configuration.
PROBLEM:
When two Proxmox hosts have the same hostname (e.g., 'px1' on different networks),
the auto-registration was matching by name and overwriting the first with the second.
This has been a recurring issue (#104) with at least 3 prior fix attempts.
ROOT CAUSE:
The auto-register handler matched existing nodes by BOTH Host URL and Name.
Matching by name is incorrect - different physical hosts can share hostnames.
FIXES:
1. Remove name-based matching in auto-registration - match by Host URL only
2. Add disambiguateNodeName() to append IP when duplicate hostnames exist
3. Add regression tests to prevent this from breaking again
Now when registering two hosts named 'px1':
- First becomes: px1
- Second becomes: px1 (10.0.2.224)
Both are stored as separate nodes with their own credentials.
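A minimal sketch of that disambiguation step; the real helper in Pulse may take different inputs, but the behaviour matches the example above.

    package nodes

    import "fmt"

    // disambiguateNodeName appends the node's IP when another node already uses
    // the same display name, so both hosts remain visible as separate entries.
    func disambiguateNodeName(name, ip string, existingNames map[string]bool) string {
        if !existingNames[name] {
            return name
        }
        return fmt.Sprintf("%s (%s)", name, ip)
    }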
Added request_timeout_seconds field to:
- AISettingsResponse struct (for GET responses)
- AISettingsUpdateRequest struct (for PUT requests)
- HandleUpdateAISettings handler logic (validation + persistence)
- HandleGetAISettings response builder
The frontend was already sending request_timeout_seconds but the
backend was ignoring it. Now the setting persists correctly.
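A minimal sketch of wiring the field through the round trip, assuming a pointer on the update request so "not provided" can be told apart from an explicit zero; surrounding struct fields are omitted.

    package ai

    import "fmt"

    type AISettingsResponse struct {
        RequestTimeoutSeconds int `json:"request_timeout_seconds"`
    }

    type AISettingsUpdateRequest struct {
        RequestTimeoutSeconds *int `json:"request_timeout_seconds,omitempty"`
    }

    // applyTimeout validates and persists the new value; leaving the field out
    // of the PUT request keeps the existing setting untouched.
    func applyTimeout(current *AISettingsResponse, req AISettingsUpdateRequest) error {
        if req.RequestTimeoutSeconds == nil {
            return nil
        }
        if *req.RequestTimeoutSeconds <= 0 {
            return fmt.Errorf("request_timeout_seconds must be positive")
        }
        current.RequestTimeoutSeconds = *req.RequestTimeoutSeconds
        return nil
    }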
When a PVE cluster has unique self-signed certificates on each node, Pulse
would mark secondary nodes as unhealthy because only the primary node's
fingerprint was used for all connections.
Now, during cluster discovery, Pulse captures each node's TLS fingerprint
and uses it when connecting to that specific node. This enables
"Trust On First Use" (TOFU) for clusters with unique per-node certs.
Changes:
- Add Fingerprint field to ClusterEndpoint config
- Add FetchFingerprint() to tlsutil for capturing node certs
- validateNodeAPI() now captures and returns fingerprints during discovery
- NewClusterClient() accepts endpointFingerprints map for per-node certs
- All client creation paths use per-endpoint fingerprints when available
Related to #879
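A hedged sketch of what fingerprint capture can look like with the standard library; the actual FetchFingerprint() in tlsutil may normalise or format the digest differently.

    package tlsutil

    import (
        "crypto/sha256"
        "crypto/tls"
        "encoding/hex"
        "fmt"
    )

    // FetchFingerprint connects without verification (trust is being established,
    // not checked) and returns the SHA-256 of the node's leaf certificate.
    func FetchFingerprint(addr string) (string, error) {
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            return "", err
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            return "", fmt.Errorf("no certificate presented by %s", addr)
        }
        sum := sha256.Sum256(certs[0].Raw)
        return hex.EncodeToString(sum[:]), nil
    }

Discovery can then store one fingerprint per ClusterEndpoint and pin it on later connections to that specific node.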
Whitelists /api/ai/execute in the DemoModeMiddleware so users can
interact with the mock AI assistant while keeping the rest of the
system read-only and hardened.
Ensures the AI settings endpoint reports enabled=true and configured=true
when running in demo mode (PULSE_MOCK_MODE=true), even if no provider is
configured. This unlocks the frontend UI to allow interaction with the
mock AI assistant.
When a user deletes an API token that was migrated from .env, track
the hash in a suppression list to prevent it from being re-migrated
on the next restart.
Changes:
- Add SuppressedEnvMigrations field to Config
- Add env_token_suppressions.json persistence
- Check suppression list during env token migration
- Record suppressed hash when deleting "Migrated from .env" tokens
- Update RemoveAPIToken to return the removed record
Related to #871
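A simplified sketch of the suppression check; the file name follows the commit message, the helper name is made up.

    package config

    // shouldMigrateEnvToken skips tokens whose hash the user already deleted,
    // so a restart does not resurrect a token removed through the UI.
    func shouldMigrateEnvToken(tokenHash string, suppressed []string) bool {
        for _, h := range suppressed {
            if h == tokenHash {
                return false
            }
        }
        return true
    }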
- Fix PVE nodes: buildNodeUrl in ProxmoxNodesSection.tsx now prioritizes
guestURL over host (was ignoring guestURL entirely)
- Add PBS support: GuestURL field added to PBSInstance config, model,
and API handlers
- Add PMG support: GuestURL field added to PMGInstance config, model,
and API handlers
- Update NodeSummaryTable to use guestURL for PBS nodes
- Frontend types updated for PBS/PMG guestURL support
The Guest URL setting in node configuration now works correctly across
all node types. When set, it takes priority over the Host URL when
clicking on node names to navigate to the Proxmox/PBS/PMG web UI.
Closes #870
Addresses #866 - agents were logging 'WebSocket connection failed' warnings
even during normal reconnection scenarios (server restart, network blip, etc).
Changes:
- Normal close errors (1000, 1001, connection reset) now log at Debug level
- Only log Warning after 3+ consecutive failures
- Changed 'Connecting to Pulse' from Info to Debug to reduce noise
- Successful connections still log at Info level
The WebSocket is only used for AI command execution, not metrics, so
transient disconnections don't affect monitoring functionality.
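A hedged sketch of the logging policy, assuming a gorilla/websocket client; the agent may use a different library, in which case the close-code check changes but the thresholds stay the same.

    package agent

    import (
        "errors"
        "log/slog"
        "syscall"

        "github.com/gorilla/websocket"
    )

    // logDialFailure downgrades expected disconnects to Debug and only warns
    // after several consecutive failures.
    func logDialFailure(err error, consecutiveFailures int) {
        normalClose := websocket.IsCloseError(err,
            websocket.CloseNormalClosure, // 1000
            websocket.CloseGoingAway,     // 1001
        ) || errors.Is(err, syscall.ECONNRESET)

        switch {
        case normalClose:
            slog.Debug("WebSocket closed during normal shutdown/reconnect", "err", err)
        case consecutiveFailures < 3:
            slog.Debug("WebSocket connection failed, will retry", "attempt", consecutiveFailures, "err", err)
        default:
            slog.Warn("WebSocket connection failed repeatedly", "attempts", consecutiveFailures, "err", err)
        }
    }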
Bug Fixes:
- Fix boolean fields with 'omitempty' not persisting false values
- AlertTriggeredAnalysis, PatrolAnalyzeNodes/Guests/Docker/Storage
- omitempty causes Go to skip false (zero value) when marshaling JSON
- On reload, NewDefaultAIConfig() defaults the field to true, so the unwritten false silently reverts to true (see the sketch after this message)
- Fix model dropdown losing selection after save (SolidJS reactivity issue)
- Added explicit 'selected' attribute to option elements
- Ensures browser maintains selection with optgroups during re-renders
Improvements:
- Change patrol type label from 'Quick' to 'Patrol' in history table
- Add chat_model and patrol_model to AI settings update log
- Add alert_triggered_analysis to AI config load log for debugging
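The omitempty pitfall is easy to reproduce in isolation; this self-contained example shows why an explicit false never made it to disk and how dropping the tag fixes it.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type buggyConfig struct {
        PatrolAnalyzeNodes bool `json:"patrol_analyze_nodes,omitempty"` // false is the zero value, so it is skipped
    }

    type fixedConfig struct {
        PatrolAnalyzeNodes bool `json:"patrol_analyze_nodes"` // always written, false round-trips correctly
    }

    func main() {
        buggy, _ := json.Marshal(buggyConfig{PatrolAnalyzeNodes: false})
        fixed, _ := json.Marshal(fixedConfig{PatrolAnalyzeNodes: false})
        fmt.Println(string(buggy)) // {}
        fmt.Println(string(fixed)) // {"patrol_analyze_nodes":false}
    }

With the buggy tag, the missing field is then repopulated with the default (true) on the next load.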
Runbooks were a half-built feature that provided no value:
- Only 3 runbooks existed
- AI dynamic remediation already covers the same ground
- Added UI complexity without benefit
Removed:
- runbooks.go and runbooks_test.go
- Handler functions in ai_handlers.go
- Routes in router.go
- Test cases in ai_handlers_test.go
- Auto-fix call in patrol.go
Kept (dead code but harmless):
- Frontend types/API calls (will 404)
- RecordIncidentRunbook function (unused)
Less code = easier to maintain.
Critical fixes to show only actionable insights:
1. Skip stopped VMs/containers from anomaly detection
- '0.0x baseline' for stopped resources is expected, not an anomaly
- Only check anomalies for status='running'
2. Filter correlations by confidence (>=70%)
- Low confidence correlations are likely coincidental
- Only show high-confidence, actionable dependencies
This reduces noise and surfaces genuinely useful intelligence.
Free Features (no license required):
- Anomaly detection - removed license gating, purely statistical analysis
- Learning status endpoint - GET /api/ai/intelligence/learning
Learning Status Response:
- resources_baselined: count of resources with learned baselines
- total_metrics: total metric baselines (cpu + memory + disk)
- metric_breakdown: {cpu: X, memory: Y, disk: Z}
- status: 'waiting' | 'learning' | 'active'
- message: human-readable description
This makes the AI intelligence features visible to all users,
encouraging upgrades for the full LLM-powered patrol experience.
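A possible Go shape for that payload; the field names follow the list above, the Go types are an assumption.

    package ai

    type MetricBreakdown struct {
        CPU    int `json:"cpu"`
        Memory int `json:"memory"`
        Disk   int `json:"disk"`
    }

    type LearningStatusResponse struct {
        ResourcesBaselined int             `json:"resources_baselined"`
        TotalMetrics       int             `json:"total_metrics"`
        MetricBreakdown    MetricBreakdown `json:"metric_breakdown"`
        Status             string          `json:"status"` // "waiting" | "learning" | "active"
        Message            string          `json:"message"`
    }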
Add /api/ai/intelligence/anomalies endpoint that compares live metrics
against learned baselines to surface deviations - all deterministic
(no LLM required).
Backend:
- Add AnomalyReport struct with severity classification
- Add CheckResourceAnomalies method to baseline store
- Add HandleGetAnomalies API handler
- Add GetStateProvider getter to AI service
Frontend:
- Add AnomalyReport and AnomaliesResponse types
- Add getAnomalies API function
- Add AnomalySeverity type
This is the first step toward surfacing deterministic intelligence
directly in the UI without requiring LLM interaction.
- Create Intelligence struct that aggregates all AI subsystems
- Add /api/ai/intelligence endpoint for system-wide and per-resource insights
- Wire Intelligence into PatrolService as a facade (not replacement)
- Add TypeScript types and API client for frontend
- Add unit tests for Intelligence orchestrator
- Fix pre-existing test failures by using diagnostic commands instead of actionable ones
The Intelligence orchestrator provides:
- System-wide health scoring (A-F grades)
- Aggregated findings, predictions, correlations
- Per-resource context generation for AI prompts
- Learning progress tracking
This unifies access to AI subsystems without replacing existing code paths.
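As an illustration of the health scoring, a 0-100 score could map onto the A-F grades like this; the exact thresholds used by the orchestrator are an assumption.

    package intelligence

    // healthGrade converts a system-wide health score into a letter grade.
    func healthGrade(score float64) string {
        switch {
        case score >= 90:
            return "A"
        case score >= 80:
            return "B"
        case score >= 70:
            return "C"
        case score >= 60:
            return "D"
        default:
            return "F"
        }
    }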
Backend:
- Enhanced buildEnrichedResourceContext to ALWAYS show learned baselines with
status indicators (normal/elevated/anomaly) instead of only when anomalous
- This makes Pulse Pro's 'moat' visible - users can see the AI understands
their infrastructure's normal behavior patterns
- Added baseline import to service.go
Frontend (user changes):
- Added incident event type filtering with toggle buttons
- Added resource incident panel to view all incidents for a resource
- Added timeline expand/collapse functionality in alert history
- Added incident note saving with proper incidentId tracking
- Added startedAt parameter for proper incident timeline loading
- Login.tsx: Use apiClient.fetch with skipAuth to avoid auth loops
- router.go: Skip CSRF validation for /api/login endpoint
- hot-dev.sh: Detect encrypted files before generating new key to prevent data loss
When offline_access scope is configured, Pulse now stores and uses
OIDC refresh tokens to automatically extend sessions. Sessions remain
valid as long as the IdP allows token refresh (typically 30-90 days).
Changes:
- Store OIDC tokens (refresh token, expiry, issuer) alongside sessions
- Automatically refresh tokens when access token nears expiry
- Invalidate session if IdP revokes access (forces re-login)
- Add background token refresh with concurrency protection
- Persist OIDC tokens across restarts
Related to #854
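A hedged sketch of the refresh path using golang.org/x/oauth2; Pulse's session store and the exact expiry margin are assumptions.

    package auth

    import (
        "context"
        "sync"
        "time"

        "golang.org/x/oauth2"
    )

    type sessionTokens struct {
        mu      sync.Mutex
        current *oauth2.Token // carries RefreshToken and Expiry
    }

    // refreshIfNeeded renews the access token shortly before it expires.
    // TokenSource uses the stored refresh token; an error here means the IdP
    // revoked access, so the caller should invalidate the session (re-login).
    func (s *sessionTokens) refreshIfNeeded(ctx context.Context, conf *oauth2.Config) error {
        s.mu.Lock()
        defer s.mu.Unlock()

        if time.Until(s.current.Expiry) > 2*time.Minute {
            return nil // still fresh, nothing to do
        }
        tok, err := conf.TokenSource(ctx, s.current).Token()
        if err != nil {
            return err
        }
        s.current = tok // persist alongside the session for restarts
        return nil
    }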
When 'Hide local login form' was toggled in Settings, the change
was saved to disk but not applied to the in-memory config until
restart. Now reloadSystemSettings() also updates config.HideLocalLogin
so the setting takes effect immediately.
- Add HandleLicenseFeatures handler that was missing from license_handlers.go
- Add /api/license/features route to router
- Update AI service and metadata provider
- Update frontend license API and components
- Fix CI build failure caused by tests referencing unimplemented method
The 'Removed Docker Hosts' section was not appearing in Settings -> Agents
even when hosts were blocked from re-enrolling. This prevented users from
using the 'Allow re-enroll' button to unblock their Docker agents.
Root cause: The WebSocket store was missing:
1. The 'removedDockerHosts' property in its initial state
2. A handler to process removedDockerHosts data from WebSocket messages
This meant the backend was correctly sending the data, but the frontend
was completely ignoring it.
Changes:
- Add removedDockerHosts to WebSocket store initial state and message handler
- Add removedDockerHosts to App.tsx fallback state for consistency
- Add missing BroadcastState call after AllowDockerHostReenroll succeeds
Also includes previous fixes from this session:
- Add PULSE_AGENT_URL as alias for PULSE_AGENT_CONNECT_URL (config.go)
- Add runtime Docker/Podman auto-detection in pulse-agent (main.go)
Fixes issue reported by darthrater78 in discussion #845
- Add AgentConnectURL config option to override public URL for agents
- Improve install.sh to diagnose docker detection failures
- Update router to prioritize AgentConnectURL for agent install commands
The /ws endpoint was rate limited to 30 connections/minute. After
prolonged use with WebSocket reconnections (network hiccups, browser
tab throttling, etc.), users with many Docker containers would hit
this limit and get stuck with a 'Connecting...' UI.
WebSocket connections are already authenticated via session/API token
and reconnections are normal behavior, so rate limiting is not needed.
Fixes #859 (second report about WebSocket rate limiting after hours of use).
Fixes issue where /api/security/status reports hasHTTPS=false when accessed
via HTTPS through a reverse proxy like Caddy.
Resolves feedback from discussion #845 (clar2242).
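A minimal sketch of the usual fix, assuming the proxy sets the standard X-Forwarded-Proto header (Caddy, nginx and Traefik do by default):

    package security

    import (
        "net/http"
        "strings"
    )

    // requestIsHTTPS is true either for direct TLS connections or when a
    // trusted reverse proxy reports the original scheme as https.
    func requestIsHTTPS(r *http.Request) bool {
        if r.TLS != nil {
            return true
        }
        return strings.EqualFold(r.Header.Get("X-Forwarded-Proto"), "https")
    }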
- Create reusable UrlEditPopover component with fixed positioning
- Add createUrlEditState hook for managing editing state
- Update DockerHostSummaryTable to use new popover
- Update DockerUnifiedTable (containers & services) to use new popover
- Update GuestRow (Proxmox VMs/containers) to use new popover
- Update HostsOverview (Proxmox hosts) to use new popover
- Add Docker host metadata API for custom URLs
- Consistent styling with save, delete, cancel buttons and keyboard shortcuts
Fixes #858
The patrol interval setting was not being properly applied due to:
1. ReconfigurePatrol() was setting the deprecated QuickCheckInterval field
instead of the preferred Interval field
2. SetConfig() was comparing raw field values instead of using GetInterval()
to compare effective intervals, causing change detection to fail
3. The API response was missing interval_ms, preventing the frontend from
displaying the correct interval
Changes:
- Update StartPatrol() and ReconfigurePatrol() to use the Interval field
- Fix SetConfig() to use GetInterval() for interval comparison
- Add IntervalMs to PatrolStatusResponse and include it in the API response
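The effective-interval logic implied above could look like the sketch below; the fallback default is an assumption, but the precedence (Interval first, deprecated QuickCheckInterval second) follows the description.

    package patrol

    import "time"

    type PatrolConfig struct {
        Interval           time.Duration // preferred field
        QuickCheckInterval time.Duration // deprecated, kept for old configs
    }

    // GetInterval returns the interval actually in effect, which is what
    // SetConfig should compare when deciding whether anything changed.
    func (c PatrolConfig) GetInterval() time.Duration {
        if c.Interval > 0 {
            return c.Interval
        }
        if c.QuickCheckInterval > 0 {
            return c.QuickCheckInterval
        }
        return 30 * time.Minute // assumed default
    }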
Adds IncludeAllDeployments option to show all deployments, not just
problem ones (where replicas don't match desired). This provides parity
with the existing --kube-include-all-pods flag.
- Add IncludeAllDeployments to kubernetesagent.Config
- Add --kube-include-all-deployments flag and PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS env var
- Update collectDeployments to respect the new flag
- Add test for IncludeAllDeployments functionality
- Update UNIFIED_AGENT.md documentation
Addresses feedback from PR #855
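A sketch of the filter using client-go's appsv1 types; the real collectDeployments in the unified agent may structure this differently.

    package kubernetesagent

    import appsv1 "k8s.io/api/apps/v1"

    // shouldReport keeps every deployment when IncludeAllDeployments is set,
    // otherwise only those whose ready replicas don't match the desired count.
    func shouldReport(d appsv1.Deployment, includeAll bool) bool {
        if includeAll {
            return true
        }
        desired := int32(1)
        if d.Spec.Replicas != nil {
            desired = *d.Spec.Replicas
        }
        return d.Status.ReadyReplicas != desired
    }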
The issue was a SolidJS reactivity problem in the Dashboard component.
When guestMetadata signal was accessed inside a For loop callback and
assigned to a plain variable, SolidJS lost reactive tracking.
Changed from:
    const metadata = guestMetadata()[guestId] || ...
    customUrl={metadata?.customUrl}
To:
    const getMetadata = () => guestMetadata()[guestId] || ...
    customUrl={getMetadata()?.customUrl}
This ensures SolidJS properly tracks the signal dependency when the
getter function is called directly in JSX props.
When a specific architecture is requested (e.g., linux-arm64), don't fall
back to the generic pulse-agent binary if the requested arch isn't found.
This was causing ARM64 machines to receive x86-64 binaries that can't run.
Now returns 404 with helpful error message if requested architecture
binary is not available.
Reverts overly strict alert ID validation that was rejecting valid
alert IDs containing special characters. Docker host IDs can contain
user-supplied data like hostnames which may include parentheses,
brackets, or other printable ASCII characters.
The previous validation only allowed alphanumeric + limited punctuation,
which caused 400 errors when acknowledging alerts from Docker hosts
with special characters in their identifiers.
Related to #852
Previously the Retry-After header was hardcoded to "60" seconds
regardless of the rate limiter's actual window duration. Now uses
the limiter's configured window (e.g., 600 seconds for recovery
endpoints, 300 for exports).
Related to #579
- Replace verbose info banner with streamlined layout
- Add collapsible 'Advanced Model Selection' accordion for Chat/Patrol models
- Make AI Patrol Settings section collapsible with inline summary badges
- Compact Cost Controls into single-row inline layout
- Reduce form spacing for tighter presentation
- Remove unused formHelpText import
Also includes:
- OpenAI provider fixes for max_tokens parameters
- Security setup CSRF and 401 fixes
- Minor UI tweaks
- Add setup modal that appears when enabling AI without configured provider
- Modal allows selecting provider (Anthropic, OpenAI, DeepSeek, Ollama)
- Enter API key/URL and enable AI in one smooth flow
- Reorder backend to apply API keys before enabled check
- Fix Ollama to strip 'ollama:' prefix from model names
- Simplify backend error message for unconfigured providers
The enable validation was using the legacy single-provider model which
checked settings.Provider and settings.APIKey. Users configuring Ollama
via the new multi-provider UI (setting ollama_base_url) couldn't enable
AI because settings.Provider defaulted to "anthropic" which required an
API key.
Now checks GetConfiguredProviders() first - if any provider is configured
(Anthropic, OpenAI, DeepSeek, or Ollama), AI can be enabled.
Related to #847
- Add cluster-aware guest ID generation (clusterName-VMID instead of instanceName-VMID)
to prevent duplicate VMs/containers when multiple cluster nodes are monitored (sketched after this message)
- Add cluster deduplication at registration time - when a node is added that belongs
to an already-configured cluster, merge as endpoint instead of creating duplicate
- Add startup consolidation to automatically merge duplicate cluster instances
- Change host agent token binding from agent GUID to hostname, allowing:
- Multiple host agents to share a token (each bound by hostname)
- Agent reinstalls on same host without token conflicts
- Remove 12-character password minimum requirement
- Remove emoji from auto-registration success message
- Fix grouped view node lookup to support both cluster-aware node IDs
(clusterName-nodeName) and legacy guest grouping keys (instance-nodeName)
Fixes duplicate guests appearing when agents are installed on multiple
cluster nodes. Also improves multi-agent UX by allowing shared tokens.
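A sketch of the cluster-aware ID scheme mentioned in the first bullet; the exact formatting is an assumption.

    package monitor

    import "fmt"

    // guestID keys a VM/container by its cluster when one is known, so the same
    // guest reported by two cluster nodes collapses to a single entry.
    func guestID(clusterName, instanceName string, vmid int) string {
        if clusterName != "" {
            return fmt.Sprintf("%s-%d", clusterName, vmid)
        }
        return fmt.Sprintf("%s-%d", instanceName, vmid) // standalone node, legacy form
    }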
When no auth is configured (fresh install), CheckAuth allows all requests.
This creates a race condition where existing agents from a previous setup
can report data before the wizard completes security configuration.
This fix clears all host agents and docker hosts when /api/security/quick-setup
is called, ensuring the wizard shows a clean state after security is configured.
Added:
- State.ClearAllHosts() - removes all host agents
- State.ClearAllDockerHosts() - removes all docker hosts
- Monitor.ClearUnauthenticatedAgents() - clears both and resets token bindings
- Call to ClearUnauthenticatedAgents() in handleQuickSecuritySetupFixed()
- Add GET /api/metrics-store/history endpoint for querying SQLite-backed metrics
- Support flexible time ranges: 1h, 6h, 12h, 24h, 7d, 30d, 90d
- Return aggregated data with min/max values for longer time ranges
- Add TypeScript types and ChartsAPI.getMetricsHistory() client method
This enables frontend charts to visualize long-term trends using the
tiered retention system (raw → minute → hourly → daily averages).
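A sketch of how the endpoint might translate the documented range strings into query windows; names are illustrative, not Pulse's actual code.

    package metricsstore

    import (
        "fmt"
        "time"
    )

    var ranges = map[string]time.Duration{
        "1h": time.Hour, "6h": 6 * time.Hour, "12h": 12 * time.Hour,
        "24h": 24 * time.Hour, "7d": 7 * 24 * time.Hour,
        "30d": 30 * 24 * time.Hour, "90d": 90 * 24 * time.Hour,
    }

    // parseRange returns the window to query; longer windows would be served
    // from the minute/hourly/daily rollups with min/max aggregates attached.
    func parseRange(r string) (time.Duration, error) {
        d, ok := ranges[r]
        if !ok {
            return 0, fmt.Errorf("unsupported range %q", r)
        }
        return d, nil
    }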