Proxmox VM/LXC templates are intentionally excluded from the monitored
guest list, but their backup files exist on storage. The orphan-detection
logic was firing for every template backup because the VMID was never
in the guest lookup maps.
Fix: track template VMID→node pairs in State.templateVMIDs (unexported,
not serialised to API/frontend) during the resources poll loop, expose
via StateSnapshot.TemplateVMIDs, and use in both buildGuestLookups() and
the storage backup node-resolution map so orphan detection treats template
backups as valid. Also preserves the template map through the cluster
health grace-period path (zero-resource preservation), the partial-node
grace-period path, and clears it on instance removal.
Closes #1352
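A minimal sketch of the fixed check, under the assumption that orphan detection boils down to a VMID lookup; `State`, `guests`, and `isOrphanBackup` are illustrative simplifications of the real types, with only `templateVMIDs` named in the commit:

```go
package main

import "fmt"

// Hypothetical simplified state; the real Pulse State carries far more.
type State struct {
	guests        map[int]string // VMID -> node for monitored guests
	templateVMIDs map[int]string // VMID -> node for templates (unexported, not serialised)
}

// isOrphanBackup sketches the fix: a backup is only an orphan when its
// VMID is in neither the guest map nor the template map.
func (s *State) isOrphanBackup(vmid int) bool {
	if _, ok := s.guests[vmid]; ok {
		return false
	}
	if _, ok := s.templateVMIDs[vmid]; ok {
		return false // template backups are valid, not orphans
	}
	return true
}

func main() {
	s := &State{
		guests:        map[int]string{100: "pve1"},
		templateVMIDs: map[int]string{9000: "pve1"},
	}
	fmt.Println(s.isOrphanBackup(100))  // guest: not an orphan
	fmt.Println(s.isOrphanBackup(9000)) // template: not an orphan after the fix
	fmt.Println(s.isOrphanBackup(123))  // unknown VMID: orphan
}
```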
Pulse was generating tag colours from a hash of the tag name instead
of using the colours configured in Proxmox. Now polls /cluster/options
once per PVE instance and merges the tag-style colour map into state,
which the frontend uses as the first-priority colour source for tag
badges. Falls back to the existing special-tag and hash-based colours
when Proxmox hasn't set a custom colour for a tag.
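The fallback chain can be sketched as a single resolver. The map names and the hash-to-hex scheme are assumptions for illustration; the real colour map comes from the tag-style section of `/cluster/options`:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// resolveTagColor returns the first-priority Proxmox-configured colour,
// then any built-in special-tag colour, then a deterministic hash colour.
func resolveTagColor(tag string, proxmoxColors, specialColors map[string]string) string {
	if c, ok := proxmoxColors[tag]; ok {
		return c // first priority: colour configured in Proxmox
	}
	if c, ok := specialColors[tag]; ok {
		return c // second: Pulse's special-tag colours
	}
	// last resort: colour derived from a hash of the tag name
	sum := sha256.Sum256([]byte(tag))
	return fmt.Sprintf("#%02x%02x%02x", sum[0], sum[1], sum[2])
}

func main() {
	proxmox := map[string]string{"prod": "#ff0000"}
	special := map[string]string{"critical": "#aa0000"}
	fmt.Println(resolveTagColor("prod", proxmox, special))     // #ff0000
	fmt.Println(resolveTagColor("critical", proxmox, special)) // #aa0000
	fmt.Println(resolveTagColor("misc", proxmox, special))     // hash-derived
}
```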
Show reclaimable buff/cache as a distinct amber segment between used
(green) and free (gray) in the memory bar. This explains why Pulse's
memory percentage differs from Proxmox: Pulse reports cache-aware
usage (MemAvailable) while Proxmox includes cache as used (Total-Free).
Backend: add Cache field to Memory model, derived from MemInfo
(Available - Free). Only uses MemInfo.Free (not FreeMem fallback) to
avoid inflating cache by the balloon gap on ballooned VMs.
Frontend: StackedMemoryBar renders three segments with tooltip
breakdown. Tooltip Free accounts for balloon limit when active.
Percentage label and alerts remain cache-aware (unchanged).
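The cache derivation above can be sketched in a few lines; `MemInfo` here is a stand-in for the parsed `/proc/meminfo` values, not the real model:

```go
package main

import "fmt"

type MemInfo struct {
	Total, Free, Available uint64 // bytes, as parsed from /proc/meminfo
}

// cacheBytes derives the reclaimable buff/cache segment as
// Available - Free. Using MemInfo.Free directly (not a balloon-adjusted
// free-memory fallback) avoids counting the balloon gap as cache.
func cacheBytes(m MemInfo) uint64 {
	if m.Available <= m.Free {
		return 0
	}
	return m.Available - m.Free
}

func main() {
	m := MemInfo{Total: 16 << 30, Free: 2 << 30, Available: 10 << 30}
	used := m.Total - m.Available    // cache-aware used (how Pulse reports it)
	cache := cacheBytes(m)           // amber segment
	proxmoxUsed := m.Total - m.Free  // Total-Free (how Proxmox reports it)
	fmt.Println(used, cache, proxmoxUsed)
}
```

This also makes the percentage gap concrete: Pulse would report 6/16 GiB used while Proxmox would report 14/16 GiB for the same host.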
Host agents removed from the UI would reappear on the next report cycle
because there was no rejection mechanism — unlike Docker agents which
already had resurrection prevention. Mirror the Docker agent pattern:
- Track removed host IDs in a `removedHosts` map with 24hr TTL
- Persist removal records in `State.RemovedHosts` for frontend display
- Reject reports from removed hosts in `ApplyHostReport()`
- Add `AllowHostReenroll()` + API route to clear the block
- Show removed host agents in the Settings UI with "Allow re-enroll"
- Sync removed-agent maps from state on startup for all agent types
- Fix mock integration snapshot missing `RemovedDockerHosts` field
HostFrontend was missing LinkedVmId and LinkedContainerId fields, so the
frontend dedup aliases for VM/container agents resolved to undefined and
never matched. Also add .trim() to getStatusColor and default host agent
status to 'online' to fix grey status dots.
The --disk-exclude agent flag only filtered local metric collection but
had no effect on server-side Proxmox disk health and SSD wearout alerts,
which poll the Proxmox API directly. Users excluding disks (e.g.
--disk-exclude sda) still received alerts for those disks.
Agent now sends its DiskExclude patterns in each report. The server
stores them on the Host model and consults them during Proxmox disk
polling — excluded disks get a synthetic healthy status passed to
CheckDiskHealth so any existing alerts clear immediately.
Also adds FreeBSD pseudo-filesystem types (fdescfs, devfs, linprocfs,
linsysfs) to the virtual FS filter and /var/run/ to special mount
prefixes, fixing false disk-full alerts on FreeBSD for fdescfs mounts.
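The server-side step can be sketched as a filter pass before `CheckDiskHealth`. Glob matching via `filepath.Match` is an assumption; the real pattern semantics of `--disk-exclude` may be simpler (exact device names):

```go
package main

import (
	"fmt"
	"path/filepath"
)

type DiskStatus struct {
	Device  string
	Healthy bool
}

// applyExcludes gives disks matching any of the agent's DiskExclude
// patterns a synthetic healthy status, so the downstream health check
// clears any existing alerts for them.
func applyExcludes(disks []DiskStatus, excludes []string) []DiskStatus {
	out := make([]DiskStatus, 0, len(disks))
	for _, d := range disks {
		for _, pat := range excludes {
			if ok, _ := filepath.Match(pat, d.Device); ok {
				d.Healthy = true // synthetic healthy status
				break
			}
		}
		out = append(out, d)
	}
	return out
}

func main() {
	disks := []DiskStatus{{"sda", false}, {"nvme0n1", true}}
	fmt.Println(applyExcludes(disks, []string{"sda"}))
	// → [{sda true} {nvme0n1 true}]
}
```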
When two standalone (non-clustered) PVE hosts share the same storage (NFS,
etc.), both instances see the same backup files during polling. Each instance
creates its own StorageBackup entry, causing guests with the same VMID on
different hosts to incorrectly show each other's backups.
Detect shared-storage duplicates by checking if the same volid appears across
multiple instances. When it does AND the VMID is ambiguous (exists on multiple
instances), skip the backup in SyncGuestBackupTimes rather than guessing which
instance owns it. This uses the same ambiguity pattern already applied to PBS
backups.
Fixes #1177
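The two-condition rule can be sketched directly; the lookup maps here are illustrative stand-ins for state built during polling:

```go
package main

import "fmt"

type StorageBackup struct {
	Instance string
	Volid    string
	VMID     int
}

// shouldSkipBackup sketches the dedup rule: skip when the same volid was
// seen from more than one instance AND the VMID exists on more than one
// instance, because ownership cannot be decided safely.
func shouldSkipBackup(b StorageBackup,
	volidInstances map[string]map[string]bool,
	vmidInstances map[int]map[string]bool) bool {
	return len(volidInstances[b.Volid]) > 1 && len(vmidInstances[b.VMID]) > 1
}

func main() {
	volids := map[string]map[string]bool{
		"nfs:backup/vzdump-qemu-100.vma.zst": {"pve-a": true, "pve-b": true},
	}
	vmids := map[int]map[string]bool{100: {"pve-a": true, "pve-b": true}}
	b := StorageBackup{Instance: "pve-a", Volid: "nfs:backup/vzdump-qemu-100.vma.zst", VMID: 100}
	fmt.Println(shouldSkipBackup(b, volids, vmids)) // true: ambiguous, skip
}
```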
Backend: nodes with the same logical identity (cluster+name) are merged
using a health-weighted preference, preserving host-agent links across
node-ID churn.
Frontend: extract buildMentionResources() with alias-based dedup so
docker hosts and standalone host agents sharing an ID/hostname appear
once in the @ mention autocomplete.
- fix(models): filter nodes by instance in UpdateNodesForInstance to prevent
PVE node duplication across poll cycles (#1214, #1192, #1217)
- fix(alerts): sort GetActiveAlerts output for stable ordering, preventing
hostname scrambling in frontend (#1218)
- fix(notifications): add ntfy-specific resolved webhook formatting with
plain-text body and proper headers (#1213)
- fix(frontend): respect "hide Docker update actions" setting in
DockerFilter Update All button (#1219)
- fix(frontend): add missing v prefix to GitHub release tag URLs (#1195)
- fix(monitoring): reduce disk detection warning from Warn to Debug to
eliminate log spam for pass-through disks (#1216)
- chore: bump VERSION to 5.1.5
Alerts previously showed the raw Proxmox node name (e.g., "on pve") even
when users configured a display name (e.g., "SPACEX") via Settings or the
host agent --hostname flag. This affected the alert UI, email notifications,
and webhook payloads.
Add NodeDisplayName field to the alert chain: cache display names in the
alert Manager (populated by CheckNode/CheckHost on every poll), resolve
them at alert creation via preserveAlertState, refresh on metric updates,
and enrich at read time in GetActiveAlerts. Update models.Alert, the
syncAlertsToState conversion, email templates, Apprise body text, webhook
payloads, and all frontend rendering paths.
Related to #1188
Expand the smartctl collector to capture detailed SMART attributes (SATA
and NVMe), propagate them through the full data pipeline, persist them
as time-series metrics, and display them in an interactive disk detail
drawer with historical sparkline charts.
Backend: add SMARTAttributes struct, writeSMARTMetrics for persistent
storage, "disk" resource type in metrics API with live fallback.
Frontend: enhanced DiskList with Power-On column and SMART warnings,
new DiskDetail drawer matching NodeDrawer styling patterns, generic
HistoryChart metric support with proper tooltip formatting.
When namespace matching fails, the VMID-only fallback now checks whether
the VMID appears on multiple PVE instances. If ambiguous, the fallback
is skipped — preventing backups from being falsely attributed to the
wrong guest. Unique VMIDs still fall back as before.
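The fallback condition reduces to a uniqueness check; `instancesByVMID` is an illustrative lookup, not the real data structure:

```go
package main

import "fmt"

// vmidFallbackAllowed sketches the fixed fallback: when namespace
// matching fails, only fall back to VMID-only matching if the VMID is
// unique across PVE instances.
func vmidFallbackAllowed(vmid int, instancesByVMID map[int][]string) bool {
	return len(instancesByVMID[vmid]) == 1
}

func main() {
	lookup := map[int][]string{
		100: {"pve-a", "pve-b"}, // ambiguous: fallback skipped
		200: {"pve-a"},          // unique: fallback allowed as before
	}
	fmt.Println(vmidFallbackAllowed(100, lookup)) // false
	fmt.Println(vmidFallbackAllowed(200, lookup)) // true
}
```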
Implements multi-tenant infrastructure for organization-based data isolation.
Feature is gated behind PULSE_MULTI_TENANT_ENABLED env var and requires
Enterprise license - no impact on existing users.
Core components:
- TenantMiddleware: extracts org ID, validates access, 501/402 responses
- AuthorizationChecker: token/user access validation for organizations
- MultiTenantChecker: WebSocket upgrade gating with license check
- Per-tenant audit logging via LogAuditEventForTenant
- Organization model with membership support
Gating behavior:
- Feature flag disabled: 501 Not Implemented for non-default orgs
- Flag enabled, no license: 402 Payment Required
- Default org always works regardless of flag/license
Documentation added: docs/MULTI_TENANT.md
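The gating table above maps cleanly onto status codes; this is a sketch of the decision order, not the real middleware:

```go
package main

import (
	"fmt"
	"net/http"
)

// gateStatus applies the gating rules in order: the default org always
// passes; otherwise the feature flag is checked before the licence.
func gateStatus(orgID string, flagEnabled, licensed bool) int {
	if orgID == "default" {
		return http.StatusOK // default org works regardless of flag/license
	}
	if !flagEnabled {
		return http.StatusNotImplemented // 501: PULSE_MULTI_TENANT_ENABLED unset
	}
	if !licensed {
		return http.StatusPaymentRequired // 402: no Enterprise license
	}
	return http.StatusOK
}

func main() {
	fmt.Println(gateStatus("default", false, false)) // 200
	fmt.Println(gateStatus("acme", false, false))    // 501
	fmt.Println(gateStatus("acme", true, false))     // 402
	fmt.Println(gateStatus("acme", true, true))      // 200
}
```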
Implements Phases 1-2 of multi-tenancy support using a directory-per-tenant
strategy that preserves existing file-based persistence.
Key changes:
- Add MultiTenantPersistence manager for org-scoped config routing
- Add TenantMiddleware for X-Pulse-Org-ID header extraction and context propagation
- Add MultiTenantMonitor for per-tenant monitor lifecycle management
- Refactor handlers (ConfigHandlers, AlertHandlers, AIHandlers, etc.) to be
context-aware with getConfig(ctx)/getMonitor(ctx) helpers
- Add Organization model for future tenant metadata
- Update server and router to wire multi-tenant components
All handlers maintain backward compatibility via legacy field fallbacks
for single-tenant deployments using the "default" org.
- Add PendingUpdates and PendingUpdatesCheckedAt fields to Node model
- Add GetNodePendingUpdates method to Proxmox client (calls /nodes/{node}/apt/update)
- Add 30-minute polling cache to avoid excessive API calls
- Add pendingUpdates to frontend Node type
- Add color-coded badge in NodeSummaryTable (yellow: 1-9, orange: 10+)
- Update test stubs for interface compliance
Requires Sys.Audit permission on Proxmox API token to read apt updates.
Adds automatic Docker detection for Proxmox LXC containers:
- New HasDocker and DockerCheckedAt fields on Container model
- Docker socket check via connected agents on first run, restart, or start
- Parallel checking with timeouts for efficiency
- Caches results and only re-checks after state transitions
This enables the AI to know which LXC containers are Docker hosts
for better infrastructure guidance.
Add ability for users to describe what kind of agent profile they need
in natural language, and have AI generate a suggestion with name,
description, config values, and rationale.
- Add ProfileSuggestionHandler with schema-aware prompting
- Add SuggestProfileModal component with example prompts
- Update AgentProfilesPanel with suggest button and description field
- Streamline ValidConfigKeys to only agent-supported settings
- Update profile validation tests for simplified schema
The previous commit fixed namespace disambiguation for backup alerts,
but the Overview display uses SyncGuestBackupTimes to populate backup
timestamps on VMs/Containers. This commit extends the same namespace
matching logic to that function.
Also tightened the matching algorithm to use suffix matching instead
of substring matching, preventing false positives like "pve" matching
"pve-nat".
- Add freebsd-amd64 and freebsd-arm64 to normalizeUnifiedAgentArch()
so the download endpoint serves FreeBSD binaries when requested
- Add FreeBSD/pfSense/OPNsense platform option to agent setup UI
with note about bash installation requirement
- Add FreeBSD test cases to unified_agent_test.go
Fixes installation on pfSense/OPNsense where users were getting 404
errors because the backend didn't recognize the freebsd-amd64 arch
parameter from install.sh.
When multiple PVE nodes have the same hostname (e.g., both named "pve"),
auto-linking would incorrectly link all host agents to the first matching
node, causing temperature and sensor data to be mixed/duplicated.
Changes:
- findLinkedProxmoxEntity now detects hostname collisions and refuses
to auto-link, logging a warning instead
- Added manual link API endpoint (POST /api/agents/host/link) so users
can explicitly link agents to the correct nodes
- Added State.LinkHostAgentToNode for bidirectional manual linking
Fixes #1081
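The collision check can be sketched by counting matches; the names here are illustrative simplifications of `findLinkedProxmoxEntity`:

```go
package main

import "fmt"

// findLinkTarget refuses to auto-link unless exactly one node carries the
// reported hostname. On a collision the caller is expected to log a
// warning and wait for a manual link via POST /api/agents/host/link.
func findLinkTarget(hostname string, nodeNames map[string]string) (nodeID string, ok bool) {
	var matches []string
	for id, name := range nodeNames {
		if name == hostname {
			matches = append(matches, id)
		}
	}
	if len(matches) != 1 {
		return "", false // 0 matches: nothing to link; >1: collision
	}
	return matches[0], true
}

func main() {
	nodes := map[string]string{
		"cluster1-pve": "pve", "cluster2-pve": "pve", "c1-nas": "nas",
	}
	if _, ok := findLinkTarget("pve", nodes); !ok {
		fmt.Println("collision: refusing to auto-link")
	}
	id, _ := findLinkTarget("nas", nodes)
	fmt.Println(id) // c1-nas
}
```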
Allows administrators to create configuration profiles and assign them
to agents for centralized fleet management.
- Configuration profiles with customizable settings (Docker, K8s,
Proxmox monitoring, log level, reporting interval)
- Profile assignment to agents by ID
- Agent-side remote config client to fetch settings on startup
- Full CRUD API at /api/admin/profiles
- Settings UI panel in Settings → Agents → Agent Profiles
- Automatic cleanup of assignments when profiles are deleted
Ensures that LinkedHostAgentId, CommandsEnabled, IsLegacy, and LinkedNodeId
are correctly propagated to the frontend. This prevents regressions of the
bugs fixed for #952 and #971.
Related to #952 and #971
Both issues were caused by the backend not sending required fields to the
frontend in the ToFrontend() converters:
Issue #971 (Agent required badge):
- NodeFrontend was missing LinkedHostAgentId field
- Frontend couldn't identify linked host agents, so it fell back to showing
'Agent required' instead of 'Via agent'
Issue #952 (AI Commands toggle stuck):
- HostFrontend was missing CommandsEnabled field
- Frontend couldn't see the actual commandsEnabled state from the backend,
causing the optimistic UI to never receive confirmation that the state
had actually changed
Also added IsLegacy and LinkedNodeId to HostFrontend for completeness.
This resolves issues where snapshots/backups persisted after deletion when the
Instance field didn't match the ID prefix (due to case changes, renames, etc.).
Now consistent with how VMs, Containers, Storage, etc. are filtered.
Also adds Instance field to BackupTask model for completeness.
Addresses #1009 (refs #991)
- Add persistent volume mounts for Go/npm caches (faster rebuilds)
- Add shell config with helpful aliases and custom prompt
- Add comprehensive devcontainer documentation
- Add pre-commit hooks for Go formatting and linting
- Use go-version-file in CI workflows instead of hardcoded versions
- Simplify docker compose commands with --wait flag
- Add gitignore entries for devcontainer auth files
The SMART disk temperature data was being collected by the agent but not
passed through to the frontend. Fixed by:
1. Added SMART field to HostSensorSummaryFrontend and created
HostDiskSMARTFrontend type in models_frontend.go
2. Updated hostSensorSummaryToFrontend() in converters.go to include
SMART data conversion
3. Added HostDiskSMART interface to frontend TypeScript types
4. Updated HostTemperatureCell to display disk temperatures in tooltip
with a 'Disk Temperatures' section and fallback to SMART temps when
no CPU/sensor temps are available
Previously, toggling AI Commands in the Agents view would show a pending state
and wait for the agent to confirm the change (up to 2 minutes). If the agent
was slow to report or the WebSocket update was missed, the toggle would appear
stuck.
Now, UpdateHostAgentConfig also updates the Host model in state immediately,
providing instant UI feedback. The agent will still receive the config on its
next report, but users see the change right away.
Added SetHostCommandsEnabled function to models.State for this purpose.
Allows specifying which IP address the agent should report, useful for:
- Multi-homed systems with separate management networks
- Systems with private monitoring interfaces
- VPN/overlay network scenarios
Usage:
pulse-agent --report-ip 192.168.1.100
PULSE_REPORT_IP=192.168.1.100 pulse-agent
The GuestURL field was missing from NodeFrontend and its converter,
causing configured Guest URLs to be ignored when clicking on cluster
node names. The frontend would fall back to the auto-detected IP
instead of using the user-configured Guest URL.
Related to #940
When the same Ceph cluster is reported from multiple sources (PVE API
and host agent), it showed up twice in the UI. Now we deduplicate by
FSID before converting to frontend format, keeping the cluster entry
with the most complete data (most monitors/managers/pools reported).
Related to #928
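A sketch of the FSID dedup, with a simplified completeness score (sum of monitors, managers, and pools) standing in for the real "most complete data" preference:

```go
package main

import "fmt"

type CephCluster struct {
	FSID                      string
	Monitors, Managers, Pools int
}

// completeness is an illustrative score for "most complete data".
func completeness(c CephCluster) int { return c.Monitors + c.Managers + c.Pools }

// dedupeByFSID keeps one entry per FSID, preferring the most complete one.
func dedupeByFSID(clusters []CephCluster) []CephCluster {
	best := map[string]CephCluster{}
	for _, c := range clusters {
		if cur, ok := best[c.FSID]; !ok || completeness(c) > completeness(cur) {
			best[c.FSID] = c
		}
	}
	out := make([]CephCluster, 0, len(best))
	for _, c := range best {
		out = append(out, c)
	}
	return out
}

func main() {
	in := []CephCluster{
		{FSID: "abc", Monitors: 3, Managers: 1, Pools: 4}, // from PVE API
		{FSID: "abc", Monitors: 3},                        // from host agent
	}
	out := dedupeByFSID(in)
	fmt.Println(len(out), out[0].Pools) // 1 4
}
```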
- Add DELETE /api/agents/unregister endpoint for agent self-unregistration
- Agent now unregisters itself from Pulse server when uninstalled
- Add clarifying note in UnifiedAgents explaining linked agents behavior
- Linked agents are managed via their PVE node; the UI note now makes this clear
- Add LastSeen field to HostAgent model for better agent status tracking
When nodes are updated, now validates that LinkedHostAgentID points to
an existing host agent. References to deleted host agents are automatically
cleared, fixing the 'Agent' tag persistence for users who removed agent
entries before commit c394d24.
Related to #920
When a host agent is deleted via the UI, the LinkedHostAgentID on any
PVE nodes that were linked to it was not being cleared. This caused
the "Agent" tag to persist in the UI after uninstalling the agent.
Related to #920
- Add smartctl package to collect disk temperature and health data
- Add SMART field to agent Sensors struct
- Host agent now runs smartctl to collect disk temps when available
- Backend processes agent SMART data for temperature display
- Graceful fallback when smartctl not installed
- Add CommandsEnabled field to AgentInfo in pkg/agents/host/report.go
- Agent now reports whether AI command execution is enabled
- Server stores and exposes this via Host model
- Frontend can now show which agents have commands enabled
- This provides visibility before implementing remote configuration
- Fix PVE nodes: buildNodeUrl in ProxmoxNodesSection.tsx now prioritizes
guestURL over host (was ignoring guestURL entirely)
- Add PBS support: GuestURL field added to PBSInstance config, model,
and API handlers
- Add PMG support: GuestURL field added to PMGInstance config, model,
and API handlers
- Update NodeSummaryTable to use guestURL for PBS nodes
- Frontend types updated for PBS/PMG guestURL support
The Guest URL setting in node configuration now works correctly across
all node types. When set, it takes priority over the Host URL when
clicking on node names to navigate to the Proxmox/PBS/PMG web UI.
Closes #870
Previously, SyncGuestBackupTimes matched backups to guests using only VMID.
This caused newly created containers to incorrectly show old backup times
from different containers on other Proxmox instances that happened to have
the same VMID.
Now uses composite key (instance+VMID) for PVE storage backups to ensure
proper isolation. PBS backups still use VMID matching (since they aggregate
from multiple sources) but only as a fallback.
Fixes issue where ollama LXC showed 'last backup 3 months ago' despite
being created yesterday.
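The composite key is trivial but worth making concrete; the key format is an assumption for illustration:

```go
package main

import "fmt"

// backupKey sketches the composite key: PVE storage backups are matched
// per instance+VMID, so the same VMID on another instance cannot collide.
func backupKey(instance string, vmid int) string {
	return fmt.Sprintf("%s:%d", instance, vmid)
}

func main() {
	times := map[string]string{
		backupKey("pve-a", 105): "2024-01-02", // old backup on pve-a
	}
	// A new container 105 on pve-b no longer inherits pve-a's timestamp:
	_, found := times[backupKey("pve-b", 105)]
	fmt.Println(found) // false
}
```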
- Add cluster-aware guest ID generation (clusterName-VMID instead of instanceName-VMID)
to prevent duplicate VMs/containers when multiple cluster nodes are monitored
- Add cluster deduplication at registration time - when a node is added that belongs
to an already-configured cluster, merge as endpoint instead of creating duplicate
- Add startup consolidation to automatically merge duplicate cluster instances
- Change host agent token binding from agent GUID to hostname, allowing:
- Multiple host agents to share a token (each bound by hostname)
- Agent reinstalls on same host without token conflicts
- Remove 12-character password minimum requirement
- Remove emoji from auto-registration success message
- Fix grouped view node lookup to support both cluster-aware node IDs
(clusterName-nodeName) and legacy guest grouping keys (instance-nodeName)
Fixes duplicate guests appearing when agents are installed on multiple
cluster nodes. Also improves multi-agent UX by allowing shared tokens.