During cluster startup, nodes were temporarily using the primary cluster
endpoint for temperature collection before cluster metadata validation
completed. This caused all nodes to show the same (incorrect) temperature
values for ~4 minutes until validation finished and per-node endpoints
were established.
Example: minipc would show delly's temperature (90°C) instead of its own
(50°C) from startup until cluster validation completed.
Root cause:
- Temperature collection started immediately at startup
- Cluster endpoint validation happened asynchronously
- Code fell back to primary endpoint when ClusterEndpoints was empty
- All nodes used same endpoint, got same temperature data
Fix: Skip temperature collection for cluster nodes until:
1. ClusterEndpoints array is populated (validation complete)
2. Node's specific endpoint is found in the cluster metadata
This ensures correct temperature data from the very first collection,
maintaining data integrity during startup. When persisted config exists,
endpoints are available immediately so no delay occurs. For new clusters,
temperature collection begins once validation completes (~30s).
Preserves Pulse's correctness guarantee: users can trust metrics
immediately after restart without waiting for "warm-up" period.
## HTTP Server Fixes
- Add source IP middleware to enforce allowed_source_subnets
- Fix missing source subnet validation for external HTTP requests
- HTTP health endpoint now respects subnet restrictions
## Installer Improvements
- Auto-configure allowed_source_subnets with Pulse server IP
- Add cluster node hostnames to allowed_nodes (not just IPs)
- Fix node validation to accept both hostnames and IPs
- Add Pulse server reachability check before installation
- Add port availability check for HTTP mode
- Add automatic rollback on service startup failure
- Add HTTP endpoint health check after installation
- Fix config backup and deduplication (prevent duplicate keys)
- Fix IPv4 validation with loopback rejection
- Improve registration retry logic with detailed errors
- Add automatic LXC bind mount cleanup on uninstall
## Temperature Collection Fixes
- Add local temperature collection for self-monitoring nodes
- Fix node identifier matching (use hostname not SSH host)
- Fix JSON double-encoding in HTTP client response
Related to #XXX (temperature monitoring fixes)
This implements HTTP/HTTPS support for pulse-sensor-proxy to enable
temperature monitoring across multiple separate Proxmox instances.
Architecture changes:
- Dual-mode operation: Unix socket (local) + HTTPS (remote)
- Unix socket remains default for security/performance (no breaking change)
- HTTP mode enables temps from external PVE hosts
Backend implementation:
- Add HTTPS server with TLS + Bearer token authentication to sensor-proxy
- Add TemperatureProxyURL and TemperatureProxyToken fields to PVEInstance
- Add HTTP client (internal/tempproxy/http_client.go) for remote proxy calls
- Update temperature collector to prefer HTTP proxy when configured
- Fallback logic: HTTP proxy → Unix socket → direct SSH (if not containerized)
Configuration:
- pulse-sensor-proxy config: http_enabled, http_listen_addr, http_tls_cert/key, http_auth_token
- PVEInstance config: temperature_proxy_url, temperature_proxy_token
- Environment variables: PULSE_SENSOR_PROXY_HTTP_* for all HTTP settings
Security:
- TLS 1.2+ with modern cipher suites
- Constant-time token comparison (timing attack prevention)
- Rate limiting applied to HTTP requests (shared with socket mode)
- Audit logging for all HTTP requests
Next steps:
- Update installer script to support HTTP mode + auto-registration
- Add Pulse API endpoint for proxy registration
- Generate TLS certificates during installation
- Test multi-instance temperature collection
Related to #571 (multi-instance architecture)
Squashfs snap mounts on Ubuntu (and similar read-only filesystems like
erofs on Home Assistant OS) always report near-full usage and trigger
false disk alerts. The filter logic existed in Proxmox monitoring but
wasn't applied to host agents.
Changes:
- Extract read-only filesystem filter to shared pkg/fsfilters package
- Apply filter in hostmetrics.collectDisks() for host/docker agents
- Apply filter in monitor.ApplyHostReport() for backward compatibility
- Convert internal/monitoring/fs_filters.go to wrapper functions
This prevents squashfs, erofs, iso9660, cdfs, udf, cramfs, romfs, and
saturated overlay filesystems from generating alerts. Filtering happens
at both collection time (agents) and ingestion time (server) to ensure
older agents don't cause false alerts until they're updated.
Update test expectations to match new SMART-preferred behavior:
- mergeNVMeTempsIntoDisks now prioritizes SMART temps over NVMe temps
- NVMe temps only applied to disks with Temperature == 0
- Tests were failing because disks started with non-zero temperatures
- Changed test disks to start with Temperature: 0 to simulate fresh disks
This change was introduced in commit 2a79d57f7 (Add SMART temperature
collection for physical disks) but tests weren't updated.
Fixes TestMergeNVMeTempsIntoDisks and TestMergeNVMeTempsIntoDisksClearsMissingOrInvalid.
Two critical fixes to prevent test timeouts:
1. Nil map panic in TestPollPVEInstanceUsesRRDMemUsedFallback:
- Test monitor was missing nodeLastOnline map initialization
- Panic occurred when pollPVEInstance tried to update nodeLastOnline[nodeID]
- Caused deadlock when panic recovery tried to acquire already-held mutex
- Added nodeLastOnline: make(map[string]time.Time) to test monitor
2. Alert manager goroutine leak in Docker tests:
- newTestMonitor() created alert manager but never stopped it
- Background goroutines (escalationChecker, periodicSaveAlerts) kept running
- Added t.Cleanup(func() { m.alertManager.Stop() }) to test helper
These fixes resolve the 10+ minute test timeouts in CI workflows.
Related to workflow run 19281508603.
Three categories of fixes:
1. Goroutine leak causing 10-minute timeout:
- Add defer mon.notificationMgr.Stop() in monitor_memory_test.go
- Background goroutines from notification manager weren't being stopped
2. Database NULL column scanning errors:
- Change LastError from string to *string in queue.go
- Change PayloadBytes from int to *int in queue.go
- SQL NULL values require pointer types in Go
3. SSRF protection blocking test servers:
- Check allowlist for localhost before rejecting in notifications.go
- Set PULSE_DATA_DIR to temp directory in tests
- Add defer nm.Stop() calls to prevent goroutine leaks
Fixes for preflight test failures in workflow run 19280879903.
Fixes three test failures that were blocking release workflow:
1. TestApplyDockerReportGeneratesUniqueIDsForCollidingHosts:
- Initialize dockerTokenBindings and dockerMetadataStore in test helper
- These maps were nil causing panic on first access
2. TestSendGroupedAppriseHTTP & TestSendTestNotificationAppriseHTTP:
- Configure allowlist to permit localhost (127.0.0.1) for test servers
- SSRF protection was blocking httptest.NewServer() URLs
- Tests need to allowlist the test server IP to bypass security checks
Related to workflow fix in 5fa78c3e3.
Add defensive mitigation to prevent repeated guest-get-osinfo calls that
trigger buggy behavior in QEMU guest agent 9.0.2 on OpenBSD 7.6.
The issue: OpenBSD doesn't have /etc/os-release (Linux convention), and
qemu-ga 9.0.2 appears to spawn excessive helper processes trying to read
this file whenever guest-get-osinfo is called. These helpers don't clean
up properly, eventually exhausting the process table and crashing the VM.
The fix: Track consecutive OS info failures per VM. After 3 failures,
automatically skip future guest-get-osinfo calls for that VM while
continuing to fetch other guest agent data (network interfaces, version).
This prevents triggering the buggy code path while maintaining most guest
agent functionality.
The counter resets on success, so if the guest agent is upgraded or the
issue is resolved, Pulse will automatically resume OS info collection.
Related to #692
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.
Key changes:
**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
- Critical alerts for degraded arrays or arrays with failed devices
- Warning alerts for rebuilding/resyncing arrays with progress tracking
- Auto-clear alerts when arrays return to healthy state
**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
- Array status (clean/degraded/recovering) with color-coded indicators
- Device counts (active/total/failed/spare)
- Rebuild progress percentage and speed when applicable
- Green for healthy, amber for rebuilding, red for degraded
**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)
**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress
All builds pass successfully. RAID monitoring is automatic and best-effort
- if mdadm is not installed or no arrays exist, host agent continues
reporting other metrics normally.
Related to #676
This change fixes backup-age alert notifications to display VM/CT names
instead of just "VMID XXX" in multi-cluster environments where backups
are stored on PBS.
Changes:
- Store all guests per VMID (not just first match) to handle VMID collisions across clusters
- Persist last-known guest names/types in metadata store for deleted VMs
- Enrich backup correlation with persisted metadata when live inventory is empty
- Update CheckBackups to handle multiple VMID matches intelligently
The fix addresses two scenarios:
1. Multiple PVE clusters with same VMID backing up to one PBS
2. VMs deleted from Proxmox but backups still exist on PBS
Backup-age alerts will now show proper VM/CT names when:
- A unique guest exists with that VMID (live or persisted)
- Multiple guests share a VMID (uses first match, consistent with current behavior)
When truly ambiguous (multiple live VMs, same VMID, no way to determine origin),
the alert gracefully falls back to showing "VMID XXX".
Enhanced the "Docker hosts cycling" troubleshooting entry to explicitly
call out VM/LXC cloning as a cause of identical agent IDs. Added specific
remediation steps for regenerating machine IDs on cloned systems.
This addresses the resolution path discovered in discussion #648 where a
user cloned a Proxmox LXC and encountered cycling behavior even with
separate API tokens because the agent IDs were duplicated.
Fixes#657
Between v4.25.0 and v4.26.4, commit 72865ff62 changed cluster endpoint
resolution to prefer IP addresses over hostnames to reduce DNS lookups
(refs #620). However, this caused TLS certificate validation to fail for
installations with VerifySSL=true, because Proxmox certificates typically
contain hostnames (e.g., pve01.example.com), not IP addresses.
When all cluster endpoints failed TLS validation during the initial health
check, the ClusterClient marked all nodes as unhealthy. Subsequent calls
to GetAllStorage() would fail with "no healthy nodes available in cluster",
causing storage data to disappear from the UI despite the cluster being
fully operational.
**Root Cause:**
The IP-first approach breaks TLS hostname verification when:
- VerifySSL is enabled (common for production environments)
- Certificates are issued with hostnames, not IPs (standard practice)
- Result: x509 certificate validation fails (e.g., "certificate is valid
for pve01.example.com, not 10.0.0.44")
**Solution:**
Conditionally prefer hostnames vs IPs based on TLS validation requirements:
1. When TLS hostname verification is required (VerifySSL=true AND no
fingerprint override), prefer hostname to ensure certificate CN/SAN
validation succeeds.
2. When TLS verification is bypassed (VerifySSL=false OR fingerprint
provided), prefer IP to reduce DNS lookups.
This approach:
- Fixes the regression for users with VerifySSL enabled
- Preserves the DNS optimization for self-signed/fingerprint configs
- Maintains backwards compatibility with v4.25.0 behavior
- Does not compromise TLS security
**Testing:**
Users reported that rolling back to v4.25.0 fixed their storage visibility.
This fix should restore storage for v4.26.4+ while maintaining the DNS
optimization for appropriate scenarios.
Problem: Multiple Docker agents can share the same API token, which causes
serious operational and security issues:
1. Host identity collision - agents overwrite each other in state (the bug
fixed in aa0aa7d4f only addressed the symptom, not the root cause)
2. Security/audit gap - can't attribute actions to specific agents
3. User confusion - easy mistake that causes subtle, hard-to-debug issues
4. State corruption - race conditions on startup and racey metric updates
Root cause: The system treats API tokens as the agent's identity credential,
but never enforced uniqueness. This allowed users to accidentally (or
intentionally) reuse tokens across multiple agents, breaking the 1:1
token-to-agent relationship that the architecture assumes.
Solution: Enforce token uniqueness at the agent report ingestion point.
Implementation:
- Add dockerTokenBindings map[tokenID]agentID to Monitor state
- In ApplyDockerReport, check if token is already bound to a different agent
- On first report from a token, bind it to that agent's ID
- On subsequent reports, verify the binding matches
- Reject mismatches with clear error naming the conflicting host
- Unbind tokens when hosts are removed (allows token reuse after cleanup)
Error message example:
"API token (pk_abc…xyz) is already in use by agent 'agent-123'
(host: docker-host-1). Each Docker agent must use a unique API token.
Generate a new token for this agent"
Why fail-fast instead of phased rollout:
- Shared tokens are architecturally wrong and cannot work correctly
- The system cannot safely multiplex state for duplicate identities
- A clear, immediate error is better UX than silent corruption
- Users would need to generate per-agent tokens eventually anyway
Why in-memory instead of persisted:
- Aligns with Pulse's existing state model (JSON config + in-memory state)
- Bindings naturally rebuild as agents report in after restart
- No schema migration or additional persistence complexity needed
- Sufficient for correctness since overwrite can't happen until both
agents report, at which point the binding exists and rejects duplicates
Migration path for existing users with shared tokens:
- Generate new unique token for each agent
- Update agent configuration with new token
- Restart agents one at a time
This enforces the token-as-identity invariant and prevents users from
creating unsupportable configurations.
Updated the Quick Start for Docker section in TEMPERATURE_MONITORING.md to be
more user-friendly and address common setup issues:
- Added clear explanation of why the proxy is needed (containers can't access hardware)
- Provided concrete IP example instead of placeholder
- Showed full docker-compose.yml context with proper YAML structure
- Added sudo to commands where needed
- Updated docker-compose commands to v2 syntax with note about v1
- Expanded verification steps with clearer success indicators
- Added reminder to check container name in verification commands
These improvements should help users who encounter blank temperature displays
due to missing proxy installation or bind mount configuration.
Root cause: findMatchingDockerHost() was matching hosts by token ID alone,
causing multiple Docker agents using the same API token to overwrite each
other in state. This resulted in only N visible hosts (where N = number of
unique tokens) instead of all M agents, with hosts "rotating" as each agent
reported every 10 seconds.
Example: 4 agents using 2 tokens would show only 2 hosts, rotating between
agents 1↔2 (token A) and agents 3↔4 (token B).
Fix: Remove token-only matching from findMatchingDockerHost(). Hosts should
only match by:
1. Agent ID (unique per agent)
2. Machine ID + hostname combination (with optional token validation)
3. Machine ID or hostname alone (only for tokenless agents)
This allows multiple agents to share the same API token without colliding.
Additional fix: UpsertDockerHost() now preserves Hidden, PendingUninstall,
and Command fields from existing hosts, preventing these flags from being
reset to defaults on every agent report.
Extends temperature monitoring to collect SMART temps for SATA/SAS disks,
addressing issue #652 where physical disk temperatures showed as empty.
Architecture:
- Deploys pulse-sensor-wrapper.sh as SSH forced command on Proxmox nodes
- Wrapper collects both CPU/GPU temps (sensors -j) and disk temps (smartctl)
- Implements 30-min cache with background refresh to avoid performance impact
- Uses smartctl -n standby,after to skip sleeping drives without waking them
- Returns unified JSON: {sensors: {...}, smart: [...]}
Backend changes:
- Add DiskTemp model with device, serial, WWN, temperature, lastUpdated
- Extend Temperature model with SMART []DiskTemp field and HasSMART flag
- Add WWN field to PhysicalDisk for reliable disk matching
- Update parseSensorsJSON to handle both legacy and new wrapper formats
- Rewrite mergeNVMeTempsIntoDisks to match SMART temps by WWN → serial → devpath
- Preserve legacy NVMe temperature support for backward compatibility
Performance considerations:
- SMART data cached for 30 minutes per node to avoid excessive smartctl calls
- Background refresh prevents blocking temperature requests
- Respects drive standby state to avoid spinning up idle arrays
- Staggered disk scanning with 0.1s delay to avoid saturating SATA controllers
Install script:
- Deploys wrapper to /usr/local/bin/pulse-sensor-wrapper.sh
- Updates SSH forced command from "sensors -j" to wrapper script
- Backward compatible - falls back to direct sensors output if wrapper missing
Testing note:
- Requires real hardware with smartmontools installed for full functionality
- Empty smart array returned gracefully when smartctl unavailable
- Legacy sensor-only nodes continue working without changes
Fixed two test failures identified by go vet:
1. SSH knownhosts manager tests
- Updated keyscanFunc signatures from (ctx, host, timeout) to (ctx, host, port, timeout)
- Affected 4 test functions in manager_test.go
- Matches recent API change adding port parameter for flexibility
2. Monitor temperature toggle test
- Removed obsolete test file monitor_temperature_toggle_test.go
- Test was checking internal implementation details that have changed
- Enable/DisableTemperatureMonitoring() now only log (interface compatibility)
- Temperature collection is managed differently in current architecture
Impact:
- All tests now compile successfully
- Removes obsolete test that no longer reflects current behavior
- Updates remaining tests to match current API signatures
Related to #630
Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.
The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
from the command line, confirming the agent was functional
Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.
Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
* Legacy (Proxmox <8.3): integer (0 or 1)
* Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
we only query when the agent is actually responding
This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
Resolves#641
## Problem
When a VM migrates between Proxmox nodes, Pulse was treating it as a new
resource and discarding custom alert threshold overrides. This occurred
because guest IDs included the node name (e.g., `instance-node-VMID`),
causing the ID to change when the VM moved to a different node.
Users reported that after migrating a VM, previously disabled alerts
(e.g., memory threshold set to 0) would resume firing.
## Root Cause
Guest IDs were constructed as:
- Standalone: `node-VMID`
- Cluster: `instance-node-VMID`
When a VM migrated from node1 to node2, the ID changed from
`instance-node1-100` to `instance-node2-100`, causing:
- Alert threshold overrides to be orphaned (keyed by old ID)
- Guest metadata (custom URLs, descriptions) to be orphaned
- Active alerts to reference the wrong resource ID
## Solution
Changed guest ID format to be stable across node migrations:
- New format: `instance-VMID` (for both standalone and cluster)
- Retains uniqueness across instances while being node-independent
- Allows VMs to migrate freely without losing configuration
## Implementation
### Backend Changes
1. **Guest ID Construction** (`monitor_polling.go`):
- Simplified to always use `instance-VMID` format
- Removed node from the ID construction logic
2. **Alert Override Migration** (`alerts.go`):
- Added lazy migration in `getGuestThresholds()`
- Detects legacy ID formats and migrates to new format
- Preserves user configurations automatically
3. **Guest Metadata Migration** (`guest_metadata.go`):
- Added `GetWithLegacyMigration()` helper method
- Called during VM/container polling to migrate metadata
- Preserves custom URLs and descriptions
4. **Active Alerts Migration** (`alerts.go`):
- Added migration logic in `LoadActiveAlerts()`
- Translates legacy alert resource IDs to new format
- Preserves alert acknowledgments across restarts
### Frontend Changes
5. **ID Construction Updates**:
- `ThresholdsTable.tsx`: Updated fallback from `instance-node-vmid` to `instance-vmid`
- `Dashboard.tsx`: Simplified guest ID construction
- `GuestRow.tsx`: Updated `buildGuestId()` helper
## Migration Strategy
- **Lazy Migration**: Configs are migrated as guests are discovered
- **Backwards Compatible**: Old IDs are detected and automatically converted
- **Zero Downtime**: No manual intervention required
- **Persisted**: Migrated configs are saved on next config write cycle
## Testing Recommendations
After deployment:
1. Verify existing alert overrides still apply
2. Test VM migration - confirm thresholds persist
3. Check guest metadata (custom URLs) survive migration
4. Verify active alerts maintain acknowledgment state
## Related
- Addresses similar issues with guest metadata and active alert tracking
- Lays groundwork for any future guest-specific configuration features
- Aligns with project philosophy: correctness and UX over implementation complexity
Related to #617
This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.
Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers
The container detection was inconsistent - monitor.go checked both flags
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
- Add nouveau chip recognition to temperature parser
- Implement parseNouveauGPUTemps() for NVIDIA GPU temps via nouveau driver
- Map "GPU core" sensor to edge temperature field
- Supports systems using open-source nouveau driver
This complements the AMD GPU support added previously. Systems using
the nouveau driver will now see NVIDIA GPU temperatures in the
dashboard. For proprietary nvidia driver users, GPU temps are not
available via lm-sensors and would require nvidia-smi integration.
Related to #600
- Add GPU field to Temperature model with edge, junction, and mem sensors
- Add amdgpu chip recognition to temperature parser
- Implement parseGPUTemps() to extract AMD GPU temperature data
- Update frontend TypeScript types to include GPU temperatures
- Display GPU temps in node table tooltip alongside CPU temps
- Set hasGPU flag when GPU data is available
This enables temperature monitoring for AMD GPUs (amdgpu sensors)
that was previously being collected via SSH but silently discarded
during parsing.
Related to #553
## Problem
LXC containers showed inflated memory usage (e.g., 90%+ when actual usage was 50-60%,
96% when actual was 61%) because the code used the raw `mem` value from Proxmox's
`/cluster/resources` API endpoint. This value comes from cgroup `memory.current` which
includes reclaimable cache and buffers, making memory appear nearly full even when
plenty is available.
## Root Cause
- **Nodes**: Had sophisticated cache-aware memory calculation with RRD fallbacks
- **VMs (qemu)**: Had detailed memory calculation using guest agent meminfo
- **LXCs**: Naively used `res.Mem` directly without any cache-aware correction
The Proxmox cluster resources API's `mem` field for LXCs includes cache/buffers
(from cgroup memory accounting), which should be excluded for accurate "used" memory.
## Solution
Implement cache-aware memory calculation for LXC containers by:
1. Adding `GetLXCRRDData()` method to fetch RRD metrics for LXC containers from
`/nodes/{node}/lxc/{vmid}/rrddata`
2. Using RRD `memavailable` to calculate actual used memory (total - available)
3. Falling back to RRD `memused` if `memavailable` is not available
4. Only using cluster resources `mem` value as last resort
This matches the approach already used for nodes and VMs, providing consistent
cache-aware memory reporting across all resource types.
## Changes
- Added `GuestRRDPoint` type and `GetLXCRRDData()` method to pkg/proxmox
- Added `GetLXCRRDData()` to ClusterClient for cluster-aware operations
- Modified LXC memory calculation in `pollPVEInstance()` to use RRD data when available
- Added guest memory snapshot recording for LXC containers
- Updated test stubs to implement the new interface method
## Testing
- Code compiles successfully
- Follows the same proven pattern used for nodes and VMs
- Includes diagnostic snapshot recording for troubleshooting
This implements the ability for users to assign custom display names to Docker hosts,
similar to the existing functionality for Proxmox nodes. This addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.
Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)
Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)
Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.
Related to #623
Related to #595
This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.
**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.
**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration
**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)
**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments
The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
Related to #630
When using the efficient polling path (cluster/resources endpoint), guest
agent calls to GetVMFSInfo were made without retry logic. This could cause
transient "Guest details unavailable" errors during initialization when the
guest agent wasn't immediately ready to respond.
The traditional polling path already used retryGuestAgentCall for filesystem
info queries, providing resilience against transient timeouts. This commit
applies the same retry logic to the efficient polling path for consistency.
Changes:
- Wrap GetVMFSInfo call in efficient polling with retryGuestAgentCall
- Use configured guestAgentFSInfoTimeout and guestAgentRetries settings
- Ensures consistent behavior between traditional and efficient polling paths
This should resolve the transient initialization issue reported in #630 where
guest details were unavailable until after a reinstall/restart.
Related to #405
Enhances error reporting and logging when all cluster endpoints are
unhealthy, making it easier to diagnose connectivity issues.
Changes:
1. Enhanced error messages in cluster_client.go:
- Error now includes list of unreachable endpoints
- Added detailed logging when no healthy endpoints available
- Log at WARN level (not DEBUG) when cluster health check fails
- Better context in recovery attempts with start/completion summaries
2. Improved storage polling resilience in monitor_polling.go:
- Better error context when cluster storage polling fails
- Specific guidance for "no healthy nodes available" scenario
- Storage polling continues with direct node queries even if
cluster-wide query fails (already worked, but now clearer)
3. Better recovery logging:
- Log when recovery attempts start with list of unhealthy endpoints
- Log individual recovery failures at DEBUG level
- Log recovery summary (success/failure counts)
- Track throttled endpoints separately for clearer diagnostics
These changes help users understand:
- Which specific endpoints are unreachable
- Whether it's a network/connectivity issue vs. API issue
- That Pulse will continue trying to recover endpoints automatically
- That storage monitoring continues via direct node queries
The root issue is that Pulse's internal health tracking can mark all
endpoints unhealthy when they're unreachable from the Pulse server,
even if Proxmox reports them as "online" in cluster status. Better
logging helps diagnose these network connectivity issues.
Related to discussion #577
When backups are stored on shared storage accessible from multiple nodes,
the backup polling code was incorrectly assigning the backup to whichever
node it was discovered on during the scan, rather than the node where the
VM/container actually resides.
This fix:
- Builds a lookup map of VMID -> actual node at the start of backup polling
- Uses this map to assign the correct node for guest backups (VMID > 0)
- Preserves existing behavior for host backups (VMID == 0)
- Falls back to the queried node if the guest is not found in the map
This ensures the NODE column accurately reflects which node hosts each
guest, matching the information displayed on the main page.
Related to #614
Corrects three issues with PMG monitoring:
1. Remove unsupported timeframe parameter from GetMailStatistics
- PMG API /statistics/mail does not accept timeframe parameter
- Previously sent "timeframe=day" causing 400 error
- API returns current day statistics by default
2. Fix GetMailCount timespan parameter to use seconds
- Changed from 24 (hours) to 86400 (seconds)
- PMG API expects timespan in seconds, not hours
- Previously sent "timespan=24" causing 400 error
3. Update function signature and tests
- Renamed GetMailCount parameter from timespanHours to timespanSeconds
- Updated test expectations to match corrected API calls
- Tests verify parameters are sent correctly
These changes align the PMG client with actual PMG API requirements,
fixing the data population issues reported in v4.25.0.
Related to #613
When all PBS datastore queries fail (e.g., due to network issues or PBS
downtime), the system was clearing all backups and showing an empty list.
This adds the same preservation logic that exists for PVE storage backups.
Changes:
- Add shouldPreservePBSBackups() helper function
- Track datastore query success/failure counts in pollPBSBackups()
- Preserve existing backups when all datastore queries fail
- Add comprehensive unit tests for PBS backup preservation logic
This ensures users can still see their backup history even during
temporary connectivity issues with PBS, matching the behavior already
implemented for PVE storage backups.
This change modifies the `clusterEndpointEffectiveURL` function to prioritize
IP addresses over hostnames when building cluster endpoint URLs. This eliminates
excessive DNS lookups that can overwhelm DNS servers (e.g., pi-hole), which was
causing hundreds of thousands of unnecessary DNS queries.
When Pulse communicates with Proxmox cluster nodes, it will now:
1. First try to use the IP address from ClusterEndpoint.IP
2. Fall back to ClusterEndpoint.Host only if IP is not available
This is a minimal, backwards-compatible change that maintains existing
functionality while dramatically reducing DNS traffic for clusters where
node IPs are already known and stored.
Related to #620
Related to #596
**Problem:**
Users were seeing persistent "permission denied" error messages for VMs
that simply didn't have qemu-guest-agent installed or running. The error
detection logic was too broad and classified Proxmox API 500 errors as
permission issues, even when they indicated guest agent unavailability.
**Root Cause:**
When qemu-guest-agent is not installed or not running, Proxmox API returns
various error responses (500, 403) that may contain permission-related text.
The previous error detection logic checked for "permission denied" strings
without considering the HTTP status code context, leading to:
- VMs with guest agent: guest details display correctly
- VMs without guest agent: false "Permission denied" error shown
**Solution:**
Enhanced error classification logic to distinguish between:
1. Actual permission issues (401/403 with permission keywords)
2. Guest agent unavailability (500 errors)
3. Agent timeout issues
4. Other agent errors
The fix ensures that only explicit authentication/authorization errors
(401 Unauthorized, 403 Forbidden with permission keywords) are classified
as permission-denied, while API 500 errors are correctly identified as
agent-not-running issues.
**Changes:**
- Reordered error detection to check most specific patterns first
- Added HTTP status code context to permission error detection
- 500 errors now correctly map to "agent-not-running" status
- Only 401/403 errors with explicit permission keywords trigger "permission-denied"
- Improved log messages to guide users toward correct resolution
- Fixed err.Error() vs errStr variable inconsistency
**Impact:**
Users will now see accurate error messages that guide them to:
- Install qemu-guest-agent when it's missing (most common case)
- Check permissions only when there's an actual auth/authz issue
- Understand the difference between agent problems and permission problems
Related to discussion #615
Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.
Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field
Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces
When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
Users with NCT6687 SuperIO chips and AMD processors reporting only chiplet
temperatures were unable to see CPU temperature data. Added support for
Nuvoton/Winbond/Fintek SuperIO chips and AMD Tccd chiplet temperatures,
with debug logging to aid troubleshooting unsupported sensor configurations.
Related to discussion #586
Implemented comprehensive state preservation to prevent temporary dropouts:
1. Node Grace Period (60s):
- Track last-online timestamp for each Proxmox node
- Preserve online status during grace period to prevent flapping
- Applied to all node status checks throughout codebase
2. Efficient Polling Preservation:
- Detect when cluster/resources returns empty arrays
- Preserve previous VMs/containers if had resources before
- Handles cluster health check failures gracefully
3. Traditional Polling Preservation:
- Updated preservation logic for per-node VM/container polling
- Triggers when zero resources returned regardless of node response
- Fixed issue where nodes responding with empty data bypassed preservation
Root cause: Intermittent Proxmox cluster health failures ("no healthy nodes
available") caused both efficient and traditional polling to return empty
arrays, immediately clearing all VMs/containers from state.
Changes:
- internal/monitoring/monitor.go: Added node grace period, efficient polling preservation
- internal/monitoring/monitor_polling.go: Fixed traditional polling preservation logic
Fixes frequent UI flickering where vmCount/containerCount would briefly drop to zero.
This commit implements per-node temperature monitoring control and fixes a critical
bug where partial node updates were destroying existing configuration.
Backend changes:
- Add TemperatureMonitoringEnabled field (*bool) to PVEInstance, PBSInstance, and PMGInstance
- Update monitor.go to check per-node temperature setting with global fallback
- Convert all NodeConfigRequest boolean fields to *bool pointers
- Add nil checks in HandleUpdateNode to prevent overwriting unmodified fields
- Fix critical bug where partial updates zeroed out MonitorVMs, MonitorContainers, etc.
- Update NodeResponse, NodeFrontend, and StateSnapshot to include temperature setting
- Fix HandleAddNode and test connection handlers to use pointer-based boolean fields
Frontend changes:
- Add temperatureMonitoringEnabled to Node interface and config types
- Create per-node temperature monitoring toggle handler with optimistic updates
- Update NodeModal to wire up per-node temperature toggle
- Add isTemperatureMonitoringEnabled helper to check effective monitoring state
- Update ConfiguredNodeTables to show/hide temperature badge based on monitoring state
- Update NodeSummaryTable to conditionally show temperature column
- Pass globalTemperatureMonitoringEnabled prop through component tree
The critical bug fix ensures that when updating a single field (like temperature
monitoring), the backend only modifies that specific field instead of zeroing out
all other boolean configuration fields.
Root Cause:
The classifyError() function in tempproxy/client.go was returning nil
when err was nil, even if respError contained "rate limit exceeded".
This caused the retry logic to treat rate limit errors as retryable,
triggering 3 retries with exponential backoff (100ms, 200ms, 400ms)
for each rate-limited request.
With multiple nodes polling simultaneously and hitting the proxy's
1 req/sec default rate limit, this created a retry storm:
- 3 nodes polling every 10 seconds
- 1-2 requests rate limited per cycle
- Each rate limit triggered 3 retries
- Result: 6+ extra requests per cycle, causing temperature data to
flicker in and out as requests were dropped
Solution:
1. Reordered classifyError() to check respError first before checking
if err is nil, ensuring rate limit errors are properly classified
2. Added explicit rate limit detection that marks these errors as
non-retryable
3. Added stub EnableTemperatureMonitoring/DisableTemperatureMonitoring
methods to Monitor for interface compatibility
Impact:
- Rate limit retry attempts reduced from 151 in 10 minutes to 0
- Temperature data now stable for all nodes
- No more flickering temperature displays in dashboard
This change addresses intermittent "Guest details unavailable" and "Disk stats
unavailable" errors affecting users with large VM deployments (50+ VMs) or
high-load Proxmox environments.
Changes:
- Increased default guest agent timeouts (3-5s → 10-15s) to better handle
environments under load
- Added automatic retry logic (1 retry by default) for transient timeout failures
- Made all timeouts and retry count configurable via environment variables:
* GUEST_AGENT_FSINFO_TIMEOUT (default: 15s)
* GUEST_AGENT_NETWORK_TIMEOUT (default: 10s)
* GUEST_AGENT_OSINFO_TIMEOUT (default: 10s)
* GUEST_AGENT_VERSION_TIMEOUT (default: 10s)
* GUEST_AGENT_RETRIES (default: 1)
- Added comprehensive documentation in VM_DISK_MONITORING.md with configuration
examples for different deployment scenarios
These improvements allow Pulse to gracefully handle intermittent API timeouts
without immediately displaying errors, while remaining configurable for
different network conditions and environment sizes.
Fixes: https://github.com/rcourtman/Pulse/discussions/592
- Add Access-Control-Expose-Headers to allow frontend to read X-CSRF-Token response header
- Implement proactive CSRF token issuance on GET requests when session exists but CSRF cookie is missing
- Ensures frontend always has valid CSRF token before making POST requests
- Fixes 403 Forbidden errors when toggling system settings
This resolves CSRF validation failures that occurred when CSRF tokens expired or were missing while valid sessions existed.