Cache err.Error() result in two locations:
- monitor.go: storage query retry logic (reduced from two calls to one)
- monitor_polling.go: storage timeout handling (reduced from two calls to one)
strconv.Itoa is faster than fmt.Sprintf("%d", ...) because it doesn't
need to parse a format string. Changed 4 occurrences in monitoring
package where integers are converted to strings.
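For illustration, a minimal before/after sketch of the substitution (the vmid variable is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	vmid := 105 // illustrative value

	// Before: fmt.Sprintf has to parse the "%d" format string on every call.
	slow := fmt.Sprintf("%d", vmid)

	// After: strconv.Itoa converts the integer directly.
	fast := strconv.Itoa(vmid)

	fmt.Println(slow == fast) // both produce "105"
}
```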
- firstForwardedValue: strings.Split always returns at least one element
- shouldRunBackupPoll: remaining is always >= 1 by math
- convertContainerDiskInfo: lowerLabel is never empty for non-rootfs
All three functions now at 100% coverage.
Host disk bars were showing virtual filesystems like tmpfs, /dev, /run,
/sys, and Docker overlay mounts. These clutter the UI and don't represent
meaningful disk usage.
Changed from `shouldIgnoreReadOnlyFilesystem` (which covered read-only
filesystems only) to the full `fsfilters.ShouldSkipFilesystem`, which also excludes:
- Virtual FS types: tmpfs, devtmpfs, sysfs, proc, cgroup, etc.
- Special mountpoints: /dev, /proc, /sys, /run, /var/lib/docker, /snap
- Network filesystems: fuse, nfs, cifs, etc.
Related to #790
The error message referenced "Settings -> Docker -> Removed hosts" but
that UI path no longer exists. The correct path is now
"Settings -> Agents -> Removed Docker Hosts".
Related to #778
The backup status indicator feature was incomplete - it added the UI
component but never populated VM/Container LastBackup from actual
backup data. Now SyncGuestBackupTimes() is called after storage
backups and PBS backups are polled, matching each guest's VMID to
its most recent backup timestamp.
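A minimal sketch of that matching step, using simplified stand-in types (the real VM/Container models and the exported SyncGuestBackupTimes signature differ):

```go
package monitoring

import "time"

// Illustrative stand-ins for the real VM/Container and backup records.
type Guest struct {
	VMID       int
	LastBackup time.Time
}

type Backup struct {
	VMID int
	Time time.Time
}

// syncGuestBackupTimes records, for each guest, the timestamp of its most
// recent backup, matched by VMID, mirroring what SyncGuestBackupTimes does
// after storage and PBS backups are polled.
func syncGuestBackupTimes(guests []Guest, backups []Backup) {
	latest := make(map[int]time.Time)
	for _, b := range backups {
		if b.Time.After(latest[b.VMID]) {
			latest[b.VMID] = b.Time
		}
	}
	for i := range guests {
		if t, ok := latest[guests[i].VMID]; ok {
			guests[i].LastBackup = t
		}
	}
}
```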
Fixes #786
Move the inline filesystem skip logic from pollVMsAndContainersEfficient
into a reusable ShouldSkipFilesystem function. This consolidates filtering
for virtual filesystems (tmpfs, cgroup, etc.), network mounts (nfs, cifs,
fuse), and special mountpoints (/dev, /proc, /snap, etc.) into one tested
location.
Reduces cyclomatic complexity of pollVMsAndContainersEfficient and adds
28 test cases covering virtual fs types, network mounts, special mounts,
Windows paths, and edge cases.
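A condensed sketch of what the consolidated filter can look like; the type lists here are illustrative subsets and the real signature may differ:

```go
package fsfilters

import "strings"

// Illustrative subsets; the real package covers more types and mountpoints.
var virtualFSTypes = map[string]bool{
	"tmpfs": true, "devtmpfs": true, "sysfs": true, "proc": true, "cgroup": true,
}

var networkFSPrefixes = []string{"nfs", "cifs", "fuse"}

var specialMountPrefixes = []string{"/dev", "/proc", "/sys", "/run", "/var/lib/docker", "/snap"}

// ShouldSkipFilesystem reports whether a mount should be excluded from disk
// metrics: virtual filesystem types, network mounts, and special mountpoints.
func ShouldSkipFilesystem(fsType, mountpoint string) bool {
	fsType = strings.ToLower(fsType)
	if virtualFSTypes[fsType] {
		return true
	}
	for _, prefix := range networkFSPrefixes {
		if strings.HasPrefix(fsType, prefix) {
			return true
		}
	}
	for _, prefix := range specialMountPrefixes {
		if mountpoint == prefix || strings.HasPrefix(mountpoint, prefix+"/") {
			return true
		}
	}
	return false
}
```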
When api_tokens.json is modified on disk, the ConfigWatcher reloads
the tokens into memory. However, the Monitor's dockerTokenBindings and
hostTokenBindings maps were not synchronized with the new token set,
causing orphaned bindings when agents reconnect after reinstall.
Add SetAPITokenReloadCallback to ConfigWatcher that triggers Monitor's
new RebuildTokenBindings method after token reload. This method
reconstructs the binding maps from current Docker host and host agent
state, keeping only bindings for tokens that still exist in config.
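Roughly how the wiring fits together; ConfigWatcher, Monitor, SetAPITokenReloadCallback, and RebuildTokenBindings are named in this change, while the bodies and surrounding scaffolding are simplified stand-ins:

```go
package main

// Simplified stand-ins; the real Monitor and ConfigWatcher carry much more state.
type Monitor struct{}

// RebuildTokenBindings reconstructs the docker/host token binding maps from
// current agent state, keeping only bindings whose token still exists in config.
func (m *Monitor) RebuildTokenBindings() { /* ... */ }

type ConfigWatcher struct {
	onTokenReload func()
}

// SetAPITokenReloadCallback registers a hook fired after api_tokens.json is reloaded.
func (w *ConfigWatcher) SetAPITokenReloadCallback(fn func()) { w.onTokenReload = fn }

func (w *ConfigWatcher) reloadAPITokens() {
	// ... reload tokens from disk into memory ...
	if w.onTokenReload != nil {
		w.onTokenReload()
	}
}

func main() {
	m := &Monitor{}
	w := &ConfigWatcher{}
	w.SetAPITokenReloadCallback(m.RebuildTokenBindings)
	w.reloadAPITokens() // binding maps are rebuilt immediately after each reload
}
```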
Related to #773
Mark intentionally unused parameters with underscore to:
- Silence unparam warnings for legitimate unused parameters
- Keep function signatures intact for API compatibility
- Remove unused req from serveChecksum helper
- Replace custom maxInt64 helper with Go 1.21+ builtin max()
- Mark unused cfg parameter in newAdaptiveIntervalSelector
- Remove test for deleted helper function
- Fix SA4006 unused value issues in ssh.go, validation.go, generator.go
- Replace deprecated ioutil with io/os in config.go
- Replace deprecated tar.TypeRegA with tar.TypeReg
- Remove deprecated rand.Seed calls (auto-seeded in Go 1.20+)
- Fix always-true nil check in main.go
- Fix impossible nil comparison in tempproxy/client.go
- Add nil check for config in monitor.New()
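For illustration, a few of the mechanical patterns behind these bullets, shown with hypothetical surrounding code (only newAdaptiveIntervalSelector, the max() builtin, os.ReadFile, and tar.TypeReg come from the change itself):

```go
package cleanup

import (
	"archive/tar"
	"os"
)

// Illustrative stand-in types.
type Config struct{}
type intervalSelector struct{}

// Unused parameter marked with underscore: the signature stays intact for API
// compatibility and unparam no longer flags it.
func newAdaptiveIntervalSelector(_ *Config) *intervalSelector {
	return &intervalSelector{}
}

// Go 1.21+ builtin max replaces a hand-rolled maxInt64 helper.
func laterOf(a, b int64) int64 {
	return max(a, b)
}

// readConfigEntry shows the modernized calls: os.ReadFile replaces the
// deprecated ioutil.ReadFile, and tar.TypeReg replaces the deprecated
// tar.TypeRegA. (rand.Seed is simply removed; math/rand auto-seeds since Go 1.20.)
func readConfigEntry(path string) (*tar.Header, []byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, nil, err
	}
	hdr := &tar.Header{Name: path, Typeflag: tar.TypeReg, Size: int64(len(data))}
	return hdr, data, nil
}
```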
Add seamless migration path from legacy agents to unified agent:
- Add AgentType field to report payloads (unified vs legacy detection)
- Update server to detect legacy agents by type instead of version
- Add UI banner showing upgrade command when legacy agents are detected
- Add deprecation notice to install-host-agent.ps1
- Create install-docker-agent.sh stub that redirects to unified installer
Legacy agents (pulse-host-agent, pulse-docker-agent) now show a "Legacy"
badge in the UI with a one-click copy command to upgrade to the unified
agent.
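A simplified sketch of type-based detection; AgentType comes from this change, while the payload fields and the empty-value fallback for older binaries are assumptions:

```go
package agents

// AgentType distinguishes the unified agent from the legacy host/docker agents.
type AgentType string

const (
	AgentTypeUnified AgentType = "unified"
	AgentTypeLegacy  AgentType = "legacy"
)

// ReportPayload is a trimmed-down stand-in for the real report structure.
type ReportPayload struct {
	AgentType AgentType `json:"agentType,omitempty"`
	Version   string    `json:"version"`
}

// isLegacyAgent keys off the explicit type rather than the version string;
// reports from older binaries omit the field, so an empty value is treated
// as legacy too.
func isLegacyAgent(r ReportPayload) bool {
	return r.AgentType == "" || r.AgentType == AgentTypeLegacy
}
```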
- Implemented adaptive layout for NodeSummaryTable with responsive columns and sticky name column.
- Fixed GuestRow background display issues.
- Added IsLegacy field to Host and DockerHost models to flag legacy agents (version < 1.0.0).
- Updated monitor to populate IsLegacy based on agent version.
- Removed global legacySSHDisabled flag that was triggered by any single node auth failure
- Changed disableLegacySSHOnAuthFailure to only log warnings
- Fixed potential context leak in monitor.go
- Updated tests to reflect removal of global disable logic
Fixes #727. Previously, if temperature monitoring was enabled and a node wasn't found in ClusterEndpoints, processing of the entire node was skipped. This change ensures only temperature collection is skipped.
During cluster startup, nodes were temporarily using the primary cluster
endpoint for temperature collection before cluster metadata validation
completed. This caused all nodes to show the same (incorrect) temperature
values for ~4 minutes until validation finished and per-node endpoints
were established.
Example: minipc would show delly's temperature (90°C) instead of its own
(50°C) from startup until cluster validation completed.
Root cause:
- Temperature collection started immediately at startup
- Cluster endpoint validation happened asynchronously
- Code fell back to primary endpoint when ClusterEndpoints was empty
- All nodes used same endpoint, got same temperature data
Fix: Skip temperature collection for cluster nodes until:
1. ClusterEndpoints array is populated (validation complete)
2. Node's specific endpoint is found in the cluster metadata
This ensures correct temperature data from the very first collection,
maintaining data integrity during startup. When persisted config exists,
endpoints are available immediately so no delay occurs. For new clusters,
temperature collection begins once validation completes (~30s).
Preserves Pulse's correctness guarantee: users can trust metrics
immediately after restart without waiting for "warm-up" period.
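A sketch of the gating condition, with simplified cluster metadata types (ClusterEndpoints is from the change; everything else here is illustrative):

```go
package monitoring

// ClusterEndpoint is an illustrative stand-in for the per-node cluster metadata.
type ClusterEndpoint struct {
	NodeName string
	Host     string
}

// shouldCollectTemperature gates temperature collection for cluster nodes:
// skip until cluster validation has populated ClusterEndpoints AND this node
// has its own endpoint, so the code never falls back to the primary endpoint
// and reports another node's sensor readings.
func shouldCollectTemperature(nodeName string, clusterEndpoints []ClusterEndpoint) bool {
	if len(clusterEndpoints) == 0 {
		return false // validation not complete yet
	}
	for _, ep := range clusterEndpoints {
		if ep.NodeName == nodeName {
			return true
		}
	}
	return false // node's endpoint not in cluster metadata yet
}
```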
This implements HTTP/HTTPS support for pulse-sensor-proxy to enable
temperature monitoring across multiple separate Proxmox instances.
Architecture changes:
- Dual-mode operation: Unix socket (local) + HTTPS (remote)
- Unix socket remains default for security/performance (no breaking change)
- HTTP mode enables temperature collection from external PVE hosts
Backend implementation:
- Add HTTPS server with TLS + Bearer token authentication to sensor-proxy
- Add TemperatureProxyURL and TemperatureProxyToken fields to PVEInstance
- Add HTTP client (internal/tempproxy/http_client.go) for remote proxy calls
- Update temperature collector to prefer HTTP proxy when configured
- Fallback logic: HTTP proxy → Unix socket → direct SSH (if not containerized)
Configuration:
- pulse-sensor-proxy config: http_enabled, http_listen_addr, http_tls_cert/key, http_auth_token
- PVEInstance config: temperature_proxy_url, temperature_proxy_token
- Environment variables: PULSE_SENSOR_PROXY_HTTP_* for all HTTP settings
Security:
- TLS 1.2+ with modern cipher suites
- Constant-time token comparison (timing attack prevention)
- Rate limiting applied to HTTP requests (shared with socket mode)
- Audit logging for all HTTP requests
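A minimal sketch of the token check and TLS floor described above, with simplified handler wiring:

```go
package sensorproxy

import (
	"crypto/subtle"
	"crypto/tls"
	"net/http"
	"strings"
)

// requireBearerToken wraps a handler and rejects requests whose bearer token
// does not match, using a constant-time comparison to avoid leaking match
// progress through response timing.
func requireBearerToken(token string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if subtle.ConstantTimeCompare([]byte(got), []byte(token)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// newTLSConfig enforces TLS 1.2 as the minimum protocol version; Go's default
// cipher suite selection already prefers modern suites.
func newTLSConfig() *tls.Config {
	return &tls.Config{MinVersion: tls.VersionTLS12}
}
```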
Next steps:
- Update installer script to support HTTP mode + auto-registration
- Add Pulse API endpoint for proxy registration
- Generate TLS certificates during installation
- Test multi-instance temperature collection
Related to #571 (multi-instance architecture)
Squashfs snap mounts on Ubuntu (and similar read-only filesystems like
erofs on Home Assistant OS) always report near-full usage and trigger
false disk alerts. The filter logic existed in Proxmox monitoring but
wasn't applied to host agents.
Changes:
- Extract read-only filesystem filter to shared pkg/fsfilters package
- Apply filter in hostmetrics.collectDisks() for host/docker agents
- Apply filter in monitor.ApplyHostReport() for backward compatibility
- Convert internal/monitoring/fs_filters.go to wrapper functions
This prevents squashfs, erofs, iso9660, cdfs, udf, cramfs, romfs, and
saturated overlay filesystems from generating alerts. Filtering happens
at both collection time (agents) and ingestion time (server) to ensure
older agents don't cause false alerts until they're updated.
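A condensed sketch of the shared read-only filter; the type list and the saturation threshold shown here are illustrative:

```go
package fsfilters

// Illustrative subset of read-only / media filesystem types that always report
// as (nearly) full and should never drive disk usage alerts.
var readOnlyFSTypes = map[string]bool{
	"squashfs": true, "erofs": true, "iso9660": true, "cdfs": true,
	"udf": true, "cramfs": true, "romfs": true,
}

// ShouldIgnoreReadOnlyFilesystem reports whether a mount should be dropped from
// disk metrics: known read-only types, plus overlay mounts that are effectively
// saturated (the 99% threshold here is illustrative).
func ShouldIgnoreReadOnlyFilesystem(fsType string, usedPercent float64) bool {
	if readOnlyFSTypes[fsType] {
		return true
	}
	if fsType == "overlay" && usedPercent >= 99 {
		return true
	}
	return false
}
```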
Add defensive mitigation to prevent repeated guest-get-osinfo calls that
trigger buggy behavior in QEMU guest agent 9.0.2 on OpenBSD 7.6.
The issue: OpenBSD doesn't have /etc/os-release (Linux convention), and
qemu-ga 9.0.2 appears to spawn excessive helper processes trying to read
this file whenever guest-get-osinfo is called. These helpers don't clean
up properly, eventually exhausting the process table and crashing the VM.
The fix: Track consecutive OS info failures per VM. After 3 failures,
automatically skip future guest-get-osinfo calls for that VM while
continuing to fetch other guest agent data (network interfaces, version).
This prevents triggering the buggy code path while maintaining most guest
agent functionality.
The counter resets on success, so if the guest agent is upgraded or the
issue is resolved, Pulse will automatically resume OS info collection.
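A sketch of the per-VM failure tracking; the threshold of 3 and the reset-on-success behavior come from the change, while the type and method names are illustrative:

```go
package monitoring

import "sync"

const osInfoFailureThreshold = 3

// osInfoTracker counts consecutive guest-get-osinfo failures per VMID so Pulse
// can stop issuing the call to a guest agent that keeps failing (e.g. qemu-ga
// 9.0.2 on OpenBSD 7.6) while other guest agent queries continue as normal.
type osInfoTracker struct {
	mu       sync.Mutex
	failures map[int]int
}

func newOSInfoTracker() *osInfoTracker {
	return &osInfoTracker{failures: make(map[int]int)}
}

// shouldSkip reports whether guest-get-osinfo should be skipped for this VM.
func (t *osInfoTracker) shouldSkip(vmid int) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.failures[vmid] >= osInfoFailureThreshold
}

// recordFailure increments the counter after a failed guest-get-osinfo call.
func (t *osInfoTracker) recordFailure(vmid int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.failures[vmid]++
}

// recordSuccess resets the counter so collection resumes if the agent recovers.
func (t *osInfoTracker) recordSuccess(vmid int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.failures, vmid)
}
```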
Related to #692
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.
Key changes:
**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
  - Critical alerts for degraded arrays or arrays with failed devices
  - Warning alerts for rebuilding/resyncing arrays with progress tracking
  - Auto-clear alerts when arrays return to healthy state
**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
  - Array status (clean/degraded/recovering) with color-coded indicators
  - Device counts (active/total/failed/spare)
  - Rebuild progress percentage and speed when applicable
  - Green for healthy, amber for rebuilding, red for degraded
**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)
**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress
All builds pass successfully. RAID monitoring is automatic and best-effort:
if mdadm is not installed or no arrays exist, the host agent continues
reporting other metrics normally.
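A reduced sketch of parsing a few fields from `mdadm --detail` output; the real parser extracts more (device states, rebuild progress and speed):

```go
package mdadm

import (
	"bufio"
	"strconv"
	"strings"
)

// ArrayStatus holds a few of the fields extracted from `mdadm --detail`.
type ArrayStatus struct {
	State         string
	ActiveDevices int
	FailedDevices int
}

// ParseDetail pulls the "State", "Active Devices" and "Failed Devices" lines
// out of `mdadm --detail /dev/mdX` output.
func ParseDetail(output string) ArrayStatus {
	var st ArrayStatus
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		key, val = strings.TrimSpace(key), strings.TrimSpace(val)
		switch key {
		case "State":
			st.State = val
		case "Active Devices":
			st.ActiveDevices, _ = strconv.Atoi(val)
		case "Failed Devices":
			st.FailedDevices, _ = strconv.Atoi(val)
		}
	}
	return st
}
```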
Related to #676