Related to #617
This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.
Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers
The container detection was inconsistent: monitor.go checked both flags
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
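A minimal sketch of the unified check (the helper names here are hypothetical; the real code lives in temperature.go, config_handlers.go, and monitor.go):

```go
package main

import "os"

// inContainerHeuristics stands in for system.InContainer(), which
// inspects filesystem/cgroup hints; /.dockerenv is one such hint.
func inContainerHeuristics() bool {
	_, err := os.Stat("/.dockerenv")
	return err == nil
}

// runningInContainer mirrors the now-consistent check: the explicit
// PULSE_DOCKER=true flag wins, with heuristics as fallback.
func runningInContainer() bool {
	return os.Getenv("PULSE_DOCKER") == "true" || inContainerHeuristics()
}
```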
- Add nouveau chip recognition to temperature parser
- Implement parseNouveauGPUTemps() for NVIDIA GPU temps via nouveau driver
- Map "GPU core" sensor to edge temperature field
- Supports systems using open-source nouveau driver
This complements the AMD GPU support added previously. Systems using
the nouveau driver will now see NVIDIA GPU temperatures in the
dashboard. For proprietary nvidia driver users, GPU temps are not
available via lm-sensors and would require nvidia-smi integration.
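A sketch of the mapping, assuming typical `sensors -j` output shapes; chip and field names are illustrative rather than the exact production parser:

```go
package monitoring

import "strings"

type gpuTemps struct {
	Edge float64 // the nouveau "GPU core" reading maps here
}

// parseNouveauGPUTemps recognizes nouveau-* chips and maps the
// "GPU core" sensor onto the edge temperature field.
func parseNouveauGPUTemps(chip string, readings map[string]map[string]float64) (gpuTemps, bool) {
	if !strings.HasPrefix(chip, "nouveau-") {
		return gpuTemps{}, false
	}
	core, ok := readings["GPU core"]
	if !ok {
		return gpuTemps{}, false
	}
	v, ok := core["temp1_input"]
	if !ok {
		return gpuTemps{}, false
	}
	return gpuTemps{Edge: v}, true
}
```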
Related to #600
- Add GPU field to Temperature model with edge, junction, and mem sensors
- Add amdgpu chip recognition to temperature parser
- Implement parseGPUTemps() to extract AMD GPU temperature data
- Update frontend TypeScript types to include GPU temperatures
- Display GPU temps in node table tooltip alongside CPU temps
- Set hasGPU flag when GPU data is available
This enables temperature monitoring for AMD GPUs (amdgpu sensors); the
sensor data was previously collected via SSH but silently discarded
during parsing.
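A sketch of the model addition; field tags and exact shapes are assumptions based on the sensor names above:

```go
package models

// GPUTemperature carries the three amdgpu sensor readings.
type GPUTemperature struct {
	Edge     float64 `json:"edge"`     // amdgpu "edge" (and nouveau "GPU core")
	Junction float64 `json:"junction"` // amdgpu hotspot sensor
	Mem      float64 `json:"mem"`      // amdgpu VRAM sensor
}

type Temperature struct {
	CPUPackage float64         `json:"cpuPackage"`
	GPU        *GPUTemperature `json:"gpu,omitempty"`
	HasGPU     bool            `json:"hasGPU"` // set when GPU data is present
}
```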
Related to #637
The sensor-proxy was failing to start on systems with read-only filesystems
because audit logging required a writable /var/log/pulse/sensor-proxy directory.
Changes:
- Modified newAuditLogger() to automatically fall back to stderr (systemd journal)
if the audit log file cannot be opened
- Removed error return from newAuditLogger() since it now always succeeds
- Added warning logs when fallback mode is used to alert operators
- Updated tests to handle the new signature
- Added better debugging to audit log tests
This allows the sensor-proxy to run on:
- Immutable/read-only root filesystems
- Hardened systems with restricted /var mounts
- Containerized environments with limited write access
Audit events are still captured via systemd journal when file logging is
unavailable, maintaining the security audit trail.
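A minimal sketch of the fallback, using stdlib logging as a stand-in for the real implementation; note there is no error return, since the stderr path always succeeds and systemd captures stderr in the journal:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// newAuditLogger opens the audit log file, falling back to stderr
// when the path is not writable.
func newAuditLogger(path string) *log.Logger {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		// Read-only filesystem or restricted /var: warn and fall back.
		fmt.Fprintf(os.Stderr, "audit: file logging unavailable, using stderr: %v\n", err)
		return log.New(os.Stderr, "audit ", log.LstdFlags)
	}
	return log.New(f, "audit ", log.LstdFlags)
}
```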
Related to #553
## Problem
LXC containers showed inflated memory usage (e.g., 90%+ when actual usage was 50-60%,
96% when actual was 61%) because the code used the raw `mem` value from Proxmox's
`/cluster/resources` API endpoint. This value comes from cgroup `memory.current` which
includes reclaimable cache and buffers, making memory appear nearly full even when
plenty is available.
## Root Cause
- **Nodes**: Had sophisticated cache-aware memory calculation with RRD fallbacks
- **VMs (qemu)**: Had detailed memory calculation using guest agent meminfo
- **LXCs**: Naively used `res.Mem` directly without any cache-aware correction
The Proxmox cluster resources API's `mem` field for LXCs includes cache/buffers
(from cgroup memory accounting), which should be excluded for accurate "used" memory.
## Solution
Implement cache-aware memory calculation for LXC containers by:
1. Adding `GetLXCRRDData()` method to fetch RRD metrics for LXC containers from
`/nodes/{node}/lxc/{vmid}/rrddata`
2. Using RRD `memavailable` to calculate actual used memory (total - available)
3. Falling back to RRD `memused` if `memavailable` is not available
4. Only using cluster resources `mem` value as last resort
This matches the approach already used for nodes and VMs, providing consistent
cache-aware memory reporting across all resource types.
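A minimal sketch of that fallback chain; the RRD field names follow the text above, while the helper name and simplified struct are assumptions:

```go
package proxmox

// GuestRRDPoint carries the RRD fields used here (the real type has more).
type GuestRRDPoint struct {
	MemAvailable uint64 // cache-aware available memory
	MemUsed      uint64 // RRD's own used figure
}

func lxcUsedMemory(rrd *GuestRRDPoint, total, clusterMem uint64) uint64 {
	switch {
	case rrd != nil && rrd.MemAvailable > 0 && rrd.MemAvailable <= total:
		return total - rrd.MemAvailable // preferred: total - available
	case rrd != nil && rrd.MemUsed > 0:
		return rrd.MemUsed // fallback: RRD memused
	default:
		return clusterMem // last resort: raw cluster/resources mem (includes cache)
	}
}
```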
## Changes
- Added `GuestRRDPoint` type and `GetLXCRRDData()` method to pkg/proxmox
- Added `GetLXCRRDData()` to ClusterClient for cluster-aware operations
- Modified LXC memory calculation in `pollPVEInstance()` to use RRD data when available
- Added guest memory snapshot recording for LXC containers
- Updated test stubs to implement the new interface method
## Testing
- Code compiles successfully
- Follows the same proven pattern used for nodes and VMs
- Includes diagnostic snapshot recording for troubleshooting
This lets users assign custom display names to Docker hosts, mirroring
the existing functionality for Proxmox nodes. It addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.
Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)
Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)
Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.
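A sketch of the preservation step in UpsertDockerHost, with heavily simplified types; since agent payloads never carry the custom name, the stored value is copied forward on every update:

```go
package models

type DockerHost struct {
	ID                string
	Hostname          string
	CustomDisplayName string // operator-set, never sent by the agent
}

type State struct {
	dockerHosts map[string]DockerHost
}

// UpsertDockerHost carries the custom name forward so agent
// reconnections do not wipe it.
func (s *State) UpsertDockerHost(incoming DockerHost) {
	if existing, ok := s.dockerHosts[incoming.ID]; ok {
		incoming.CustomDisplayName = existing.CustomDisplayName
	}
	s.dockerHosts[incoming.ID] = incoming
}
```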
Related to #623
Addresses issue #635 where users encounter "can't find the SSH key" errors
when enabling temperature monitoring during automated PVE setup with Pulse
running in Docker.
Root cause:
- Setup script embeds SSH keys at generation time (when downloaded)
- For containerized Pulse, keys are empty until pulse-sensor-proxy is installed
- The script auto-installs the proxy, but didn't refresh keys after installation
- This caused temperature monitoring setup to fail with confusing errors
Changes:
1. After successful proxy installation, immediately fetch and populate the
proxy's SSH public key (lines 4068-4080)
2. Update bash variables SSH_SENSORS_PUBLIC_KEY and SSH_SENSORS_KEY_ENTRY
so temperature monitoring setup can proceed in the same script run
3. Improve error messaging when keys aren't available (lines 4424-4453):
- Clear explanation of containerized Pulse requirements
- Step-by-step instructions for container restart and verification
- Separate guidance for bare-metal vs containerized deployments
Flow improvements:
- Initial run: proxy installs → keys fetched → temperature monitoring is configured
- Rerun after container restart: Keys fetched at script start → works
- Both scenarios now handled correctly
Related to #635
Related to #636
When authentication is not configured (hasAuth() returns false), the
Settings tab is now automatically hidden from the web interface. This
provides a cleaner monitoring-only view for unauthenticated deployments
where users only need to check the health of their environment.
The Settings icon beside the Alerts tab will only appear when
authentication is properly configured via PULSE_AUTH_USER/PASS,
API tokens, proxy auth, or OIDC.
Changes:
- Modified utilityTabs in App.tsx to conditionally include Settings
based on hasAuth() signal
- Updated CONFIGURATION.md to document this UI behavior
Add comprehensive documentation for HTTPS/TLS configuration including:
- File ownership and permission requirements (pulse user)
- Common troubleshooting steps for startup failures
- Complete setup examples for systemd and Docker
- Validation commands for certificate/key verification
Related to discussion #634
When Pulse is running in a container and the SSH key is not available,
provide clearer guidance about the pulse-sensor-proxy requirement and
include documentation link for Docker deployments.
This helps users understand that containerized Pulse needs the host-side
sensor proxy to access temperature data from Proxmox hosts.
The build-agents Makefile target was only building host agent binaries,
which meant development builds were missing the architecture-specific
docker agent binaries (pulse-docker-agent-linux-{amd64,arm64,armv7}).
This caused the install script to fail on ARM platforms like Raspberry Pi
because the download endpoint would fall back to the default amd64 binary,
resulting in "Exec format error" when trying to run on ARM.
Related to #633
Addresses issue #567 where selecting "Custom interval..." from the
backup polling dropdown would revert to a preset option if the current
custom minutes value happened to match a predefined interval.
The bug occurred because:
1. User selects "Custom interval..." from dropdown
2. Code sets interval based on current customMinutes value
3. If that value matches a preset (e.g., 60 min = 1 hour), the
computed select value returns the preset instead of 'custom'
4. Dropdown reverts, hiding the custom input field
Fix introduces a dedicated state variable (backupPollingUseCustom)
to explicitly track whether custom mode is active, independent of
whether the current interval value matches a preset option.
Changes:
- Add backupPollingUseCustom signal to track custom mode state
- Update backupIntervalSelectValue() to check custom flag first
- Set/clear custom flag in dropdown onChange handler
- Initialize custom flag when loading settings from API
Related to #567
Added comprehensive documentation for the per-metric alert delay feature
that was requested in issue #433. This feature allows configuring
different alert delays for different metrics (e.g., longer delays for
CPU spikes, shorter delays for memory pressure).
Key additions:
- Detailed explanation of delay precedence hierarchy
- JSON configuration examples for common use cases
- Table of recommended delays by metric type with reasoning
- UI access instructions for the Alert Delay row
Also added example tests demonstrating the feature's functionality
and common configuration patterns.
The feature itself was already fully implemented in both backend
(metricTimeThresholds support) and frontend (per-metric delay inputs
in ResourceTable). This commit surfaces the feature through
documentation so users know it exists and how to use it.
Related to #433
Related to #595
This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.
**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.
**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration
**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)
**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments
The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
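A minimal sketch of the host-field formatting (package name hypothetical), matching what `ssh-keyscan -p <port>` produces:

```go
package sshutil

import "fmt"

// knownHostsName formats the known_hosts host field: bare hostname for
// the default port, [host]:port notation for non-standard ports.
func knownHostsName(host string, port int) string {
	if port == 0 || port == 22 {
		return host
	}
	return fmt.Sprintf("[%s]:%d", host, port)
}
```

For example, `knownHostsName("pve1.example", 2222)` yields `[pve1.example]:2222`, while port 22 keeps the bare hostname form.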
The docker agent binaries built in the Dockerfile were missing version
information in their ldflags, causing them to always report v4.30.0
(the hardcoded default). This created an update loop where agents would
continuously download and restart when checking for updates.
The server's /api/agent/version endpoint returns dockeragent.Version,
which for the bundled binaries was always v4.30.0 instead of the actual
release version (e.g., v4.25.0). When an older agent (e.g., v4.23.0)
checked for updates, it would see v4.30.0 available, download it, restart,
and repeat the cycle continuously.
This fix adds the VERSION file content to the ldflags when building
all docker agent binaries (amd64, arm64, armv7), matching the pattern
already used in scripts/build-release.sh.
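A sketch of why the default leaked through; the variable is real per the commit, the module path below is a placeholder:

```go
package dockeragent

// Version is what /api/agent/version reports for bundled binaries.
// It is meant to be stamped at build time, roughly:
//
//	go build -ldflags "-X <module>/dockeragent.Version=$(cat VERSION)" ...
//
// Without that flag, the hardcoded default below is served, which is
// what created the update loop.
var Version = "4.30.0"
```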
Related to #631
Related to #630
When using the efficient polling path (cluster/resources endpoint), guest
agent calls to GetVMFSInfo were made without retry logic. This could cause
transient "Guest details unavailable" errors during initialization when the
guest agent wasn't immediately ready to respond.
The traditional polling path already used retryGuestAgentCall for filesystem
info queries, providing resilience against transient timeouts. This commit
applies the same retry logic to the efficient polling path for consistency.
Changes:
- Wrap GetVMFSInfo call in efficient polling with retryGuestAgentCall
- Use configured guestAgentFSInfoTimeout and guestAgentRetries settings
- Ensures consistent behavior between traditional and efficient polling paths
This should resolve the transient initialization issue reported in #630 where
guest details were unavailable until after a reinstall/restart.
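A sketch of the shared retry shape; the helper name and the timeout/retry settings come from the commit, while this generic signature is an assumption. The efficient path now wraps GetVMFSInfo in it, as the traditional path already did:

```go
package monitoring

import (
	"context"
	"time"
)

// retryGuestAgentCall retries a guest agent call on failure; retries
// counts additional attempts after the first.
func retryGuestAgentCall[T any](ctx context.Context, retries int,
	timeout time.Duration, call func(context.Context) (T, error)) (T, error) {
	var zero T
	var lastErr error
	for attempt := 0; attempt <= retries; attempt++ {
		callCtx, cancel := context.WithTimeout(ctx, timeout)
		v, err := call(callCtx)
		cancel()
		if err == nil {
			return v, nil
		}
		lastErr = err
	}
	return zero, lastErr
}
```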
Update hardening documentation to include log_level configuration option.
Users can now find examples of controlling logging verbosity through
YAML config and environment variables.
Related to #629
Users can now control logging verbosity through:
- YAML config file: log_level: "debug|info|warn|error"
- Environment variable: PULSE_SENSOR_PROXY_LOG_LEVEL
The default log level is now "info" instead of "debug", reducing verbose output.
Supported levels: trace, debug, info, warn, error, fatal, panic, disabled
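A sketch of the resolution order, assuming the proxy uses zerolog (whose level names match the supported list above); the env var overrides YAML, with "info" as the default:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func resolveLogLevel(yamlLevel string) zerolog.Level {
	levelStr := os.Getenv("PULSE_SENSOR_PROXY_LOG_LEVEL")
	if levelStr == "" {
		levelStr = yamlLevel // fall back to the YAML config value
	}
	if levelStr == "" {
		levelStr = "info" // new default, was debug
	}
	lvl, err := zerolog.ParseLevel(levelStr)
	if err != nil {
		return zerolog.InfoLevel // tolerate bad input rather than fail startup
	}
	return lvl
}
```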
Related to #629
Related to #405
Enhances error reporting and logging when all cluster endpoints are
unhealthy, making it easier to diagnose connectivity issues.
Changes:
1. Enhanced error messages in cluster_client.go:
- Error now includes list of unreachable endpoints
- Added detailed logging when no healthy endpoints available
- Log at WARN level (not DEBUG) when cluster health check fails
- Better context in recovery attempts with start/completion summaries
2. Improved storage polling resilience in monitor_polling.go:
- Better error context when cluster storage polling fails
- Specific guidance for "no healthy nodes available" scenario
- Storage polling continues with direct node queries even if
cluster-wide query fails (already worked, but now clearer)
3. Better recovery logging:
- Log when recovery attempts start with list of unhealthy endpoints
- Log individual recovery failures at DEBUG level
- Log recovery summary (success/failure counts)
- Track throttled endpoints separately for clearer diagnostics
These changes help users understand:
- Which specific endpoints are unreachable
- Whether it's a network/connectivity issue vs. API issue
- That Pulse will continue trying to recover endpoints automatically
- That storage monitoring continues via direct node queries
The root issue is that Pulse's internal health tracking can mark all
endpoints unhealthy when they're unreachable from the Pulse server,
even if Proxmox reports them as "online" in cluster status. Better
logging helps diagnose these network connectivity issues.
Related to #571
This addresses multiple temperature monitoring issues:
1. Fix single-node Proxmox installation failure: Add '|| true' to pvecm status
calls to prevent script exit on standalone (non-clustered) nodes with
'set -euo pipefail'. The script now properly falls through to standalone
node configuration when cluster detection fails.
2. Build pulse-sensor-proxy for all Linux architectures (amd64, arm64, armv7)
in Dockerfile to ensure binaries are available for download on all supported
platforms. This resolves the missing binary issue from v4.23.0.
Note: AMD Tctl sensor support was already implemented in a previous commit.
Related to discussion #577
When backups are stored on shared storage accessible from multiple nodes,
the backup polling code was incorrectly assigning the backup to whichever
node it was discovered on during the scan, rather than the node where the
VM/container actually resides.
This fix:
- Builds a lookup map of VMID -> actual node at the start of backup polling
- Uses this map to assign the correct node for guest backups (VMID > 0)
- Preserves existing behavior for host backups (VMID == 0)
- Falls back to the queried node if the guest is not found in the map
This ensures the NODE column accurately reflects which node hosts each
guest, matching the information displayed on the main page.
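A minimal sketch of the lookup, with simplified types; the map is built once per polling cycle, then guest backups are reassigned:

```go
package monitoring

type guest struct {
	VMID int
	Node string // node where the VM/container actually resides
}

type backup struct {
	VMID int
	Node string // initially: the node the backup was discovered on
}

// assignBackupNodes reassigns guest backups (VMID > 0) to the guest's
// real node; host backups (VMID == 0) and unknown VMIDs keep the
// queried node as fallback.
func assignBackupNodes(backups []backup, guests []guest) {
	vmidToNode := make(map[int]string, len(guests))
	for _, g := range guests {
		vmidToNode[g.VMID] = g.Node
	}
	for i := range backups {
		if backups[i].VMID > 0 {
			if node, ok := vmidToNode[backups[i].VMID]; ok {
				backups[i].Node = node
			}
		}
	}
}
```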
Added explicit onKeyDown handler to stop event propagation when Enter
is pressed in the ignored container prefixes textarea. This ensures
newlines can be properly entered to separate multiple prefixes.
Related to #625
Related to #626
When authentication expires after some time, users see "Connection lost"
and must refresh the page to see "Authentication required". This commit
implements automatic redirect to login when authentication expires.
Changes:
- Add authentication check to WebSocket endpoint to prevent unauthenticated
WebSocket connections
- Handle WebSocket close with code 1008 (policy violation) as auth failure
and redirect to login
- Intercept 401 responses on API calls (except initial auth checks) and
automatically redirect to login page
- Clear stored credentials and set logout flag before redirect to ensure
clean login flow
This provides a better user experience by immediately redirecting to the
login page when the session expires, rather than showing a confusing
"Connection lost" message that requires manual page refresh.
Related to #547 and #622
## Samsung SSD Fix (#547)
Samsung 980 and 990 series SSDs have known firmware bugs that cause them to
report incorrect health status (typically FAILED or critical warnings) even
when the drives are actually healthy. This is commonly due to incorrect
temperature threshold reporting in the firmware.
This change adds special handling to detect these drives and skip health
status alerts while still monitoring wearout metrics, which remain reliable.
The fix also clears any existing false alerts for these drives.
Users experiencing these false alerts should update their Samsung SSD firmware
to the latest version from Samsung, which typically resolves the issue.
## Docker Agent CPU Fix (#622)
Addresses issue where Docker container CPU usage shows 0%. The Docker
agent uses ContainerStatsOneShot which typically doesn't populate
PreCPUStats, requiring manual delta tracking between collection cycles.
Changes:
- Fix logic bug where prevContainerCPU was updated before checking if
previous sample existed, causing incorrect delta calculations
- Add comprehensive debug logging showing which calculation method
succeeded (PreCPUStats, system delta, or time-based fallback)
- Add warning after 10 PreCPUStats failures to inform about manual
tracking mode (normal for one-shot stats)
- Add detailed failure logging when CPU calculation cannot complete
Expected behavior: First collection cycle returns 0% (no previous
sample), subsequent cycles show accurate CPU metrics.
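A sketch of the corrected ordering (names simplified, and the real code has more fallback paths): read the previous sample before storing the new one, which is exactly what the original bug inverted:

```go
package dockeragent

type cpuSample struct {
	total  uint64 // cumulative container CPU time
	system uint64 // cumulative system CPU time
}

type agent struct {
	prevContainerCPU map[string]cpuSample
}

func (a *agent) containerCPUPercent(id string, cur cpuSample) (float64, bool) {
	prev, ok := a.prevContainerCPU[id]
	a.prevContainerCPU[id] = cur // store only after reading prev
	if !ok {
		return 0, false // first cycle: no baseline, report 0%
	}
	cpuDelta := float64(cur.total - prev.total)
	sysDelta := float64(cur.system - prev.system)
	if sysDelta <= 0 {
		return 0, false
	}
	return cpuDelta / sysDelta * 100.0, true
}
```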
Related to #614
Corrects three issues with PMG monitoring:
1. Remove unsupported timeframe parameter from GetMailStatistics
- PMG API /statistics/mail does not accept timeframe parameter
- Previously sent "timeframe=day" causing 400 error
- API returns current day statistics by default
2. Fix GetMailCount timespan parameter to use seconds
- Changed from 24 (hours) to 86400 (seconds)
- PMG API expects timespan in seconds, not hours
- Previously sent "timespan=24" causing 400 error
3. Update function signature and tests
- Renamed GetMailCount parameter from timespanHours to timespanSeconds
- Updated test expectations to match corrected API calls
- Tests verify parameters are sent correctly
These changes align the PMG client with actual PMG API requirements,
fixing the data population issues reported in v4.25.0.
Related to #613
When all PBS datastore queries fail (e.g., due to network issues or PBS
downtime), the system was clearing all backups and showing an empty list.
This adds the same preservation logic that exists for PVE storage backups.
Changes:
- Add shouldPreservePBSBackups() helper function
- Track datastore query success/failure counts in pollPBSBackups()
- Preserve existing backups when all datastore queries fail
- Add comprehensive unit tests for PBS backup preservation logic
This ensures users can still see their backup history even during
temporary connectivity issues with PBS, matching the behavior already
implemented for PVE storage backups.
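A minimal sketch of the guard; the helper name comes from the commit, the exact signature is an assumption:

```go
package monitoring

// shouldPreservePBSBackups preserves history only when every datastore
// query failed this cycle and there are previous backups worth keeping.
func shouldPreservePBSBackups(succeeded, failed, existingBackups int) bool {
	return succeeded == 0 && failed > 0 && existingBackups > 0
}
```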
Improves user experience when physical disks don't appear by providing
clear, context-aware instructions. The empty state now shows:
- When no PVE nodes are configured: prompt to add nodes
- When nodes exist but no disks appear: step-by-step requirements
including enabling SMART monitoring in both Pulse and Proxmox
This addresses confusion from issue #594 where users didn't realize
they needed to enable SMART monitoring in Proxmox itself (not just
in Pulse settings) and wait 5 minutes for data collection.
Related to #594
This change modifies the `clusterEndpointEffectiveURL` function to prioritize
IP addresses over hostnames when building cluster endpoint URLs. This eliminates
excessive DNS lookups that can overwhelm DNS servers (e.g., pi-hole), which was
causing hundreds of thousands of unnecessary DNS queries.
When Pulse communicates with Proxmox cluster nodes, it will now:
1. First try to use the IP address from ClusterEndpoint.IP
2. Fall back to ClusterEndpoint.Host only if IP is not available
This is a minimal, backwards-compatible change that maintains existing
functionality while dramatically reducing DNS traffic for clusters where
node IPs are already known and stored.
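A sketch of the selection logic; the struct fields follow the commit, and the URL assembly is simplified (8006 being the Proxmox default port, with real code taking scheme/port from config):

```go
package monitoring

import "fmt"

type ClusterEndpoint struct {
	IP   string // stored IP, preferred when present
	Host string // hostname, used only as fallback
}

func clusterEndpointEffectiveURL(ep ClusterEndpoint) string {
	host := ep.IP
	if host == "" {
		host = ep.Host // DNS lookup happens only on this path
	}
	return fmt.Sprintf("https://%s:8006", host)
}
```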
Related to #620
Related to #596
**Problem:**
Users were seeing persistent "permission denied" error messages for VMs
that simply didn't have qemu-guest-agent installed or running. The error
detection logic was too broad and classified Proxmox API 500 errors as
permission issues, even when they indicated guest agent unavailability.
**Root Cause:**
When qemu-guest-agent is not installed or not running, Proxmox API returns
various error responses (500, 403) that may contain permission-related text.
The previous error detection logic checked for "permission denied" strings
without considering the HTTP status code context, leading to inconsistent results:
- VMs with the guest agent running: guest details displayed correctly
- VMs without the guest agent: a false "Permission denied" error shown
**Solution:**
Enhanced error classification logic to distinguish between:
1. Actual permission issues (401/403 with permission keywords)
2. Guest agent unavailability (500 errors)
3. Agent timeout issues
4. Other agent errors
The fix ensures that only explicit authentication/authorization errors
(401 Unauthorized, 403 Forbidden with permission keywords) are classified
as permission-denied, while API 500 errors are correctly identified as
agent-not-running issues.
**Changes:**
- Reordered error detection to check most specific patterns first
- Added HTTP status code context to permission error detection
- 500 errors now correctly map to "agent-not-running" status
- Only 401/403 errors with explicit permission keywords trigger "permission-denied"
- Improved log messages to guide users toward correct resolution
- Fixed err.Error() vs errStr variable inconsistency
**Impact:**
Users will now see accurate error messages that guide them to:
- Install qemu-guest-agent when it's missing (most common case)
- Check permissions only when there's an actual auth/authz issue
- Understand the difference between agent problems and permission problems
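An illustrative sketch of the new ordering, with most-specific patterns checked first; function name, status strings, and the exact match keywords are assumptions:

```go
package monitoring

import "strings"

func classifyGuestAgentError(status int, msg string) string {
	lower := strings.ToLower(msg)
	switch {
	case (status == 401 || status == 403) && strings.Contains(lower, "permission"):
		return "permission-denied" // genuine auth/authz failure only
	case status == 500:
		return "agent-not-running" // agent missing or not started
	case strings.Contains(lower, "timeout"):
		return "agent-timeout"
	default:
		return "agent-error"
	}
}
```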
Webhook alert payloads now round Value and Threshold fields to 1 decimal
place before template rendering. This eliminates excessive precision in
webhook messages (e.g., 62.27451680630036 becomes 62.3).
The fix is applied in prepareWebhookData() so all webhook templates
benefit automatically, including Google Space webhooks, generic JSON
webhooks, and custom templates.
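A minimal sketch of the rounding applied in prepareWebhookData:

```go
package notifications

import "math"

// round1 rounds to one decimal place:
// round1(62.27451680630036) == 62.3
func round1(v float64) float64 {
	return math.Round(v*10) / 10
}
```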
Related to #619
Related to discussion #615
Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.
Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field
Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces
When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
Related to #612
This commit addresses the reported Alpine Linux installation issues:
1. The OpenRC init system was not properly detected
2. Manual startup instructions were unclear and used placeholder values
3. The agent didn't validate configuration properly at startup
Changes:
Install Script (install-docker-agent.sh):
- Improved OpenRC detection to check for rc-service and rc-update commands
instead of looking for openrc-run binary in specific paths
- Added specific Alpine Linux detection via /etc/alpine-release and /etc/os-release
- Enhanced manual startup instructions to show actual values instead of placeholders
- Added clearer warnings and guidance when no init system is detected
- Included comprehensive startup command with all required parameters
Agent Startup Validation (pulse-docker-agent):
- Added validation to detect unexpected command-line arguments
- Added helpful note about double-dash flag requirements (--token vs -token)
- Improved error messages to include example usage patterns
- Added warning when defaulting to localhost without explicit URL configuration
- Provide both command-line and environment variable examples in error messages
These improvements ensure that:
- Alpine Linux installations will properly detect and configure OpenRC services
- Users who must start the agent manually get clear, copy-pasteable commands
- Configuration errors are caught early with actionable error messages
- Common mistakes (like missing --url) are clearly explained
- Add support for testing Apprise notifications via /api/notifications/test endpoint
- Users can now test their Apprise configuration (both CLI and HTTP modes) using method="apprise"
- Added comprehensive unit tests for both CLI and HTTP modes
- Tests verify correct behavior when Apprise is enabled/disabled
- Tests validate that notifications are properly sent through Apprise channels
Related to #584
Users with NCT6687 SuperIO chips, or with AMD processors that report only
chiplet temperatures, were unable to see CPU temperature data. Added support
for Nuvoton/Winbond/Fintek SuperIO chips and AMD Tccd chiplet temperatures,
with debug logging to aid in troubleshooting unsupported sensor configurations.
Related to discussion #586
Replace non-functional docs.pulseapp.io URLs with direct GitHub repository
links. The containerized deployment security documentation exists in
SECURITY.md and was previously inaccessible via the external link.
Changes:
- Update SECURITY.md documentation reference
- Fix three documentation links in config_handlers.go (SSH verification,
setup script, and security block error messages)
- All links now point to GitHub repository where docs actually live
Related to #607
Related to #551
Enhanced the PMG connection test to actually validate the metrics
endpoints that Pulse uses for monitoring, rather than only checking
the version endpoint. This provides users with immediate feedback if
their PMG credentials lack the necessary permissions to collect metrics.
Backend changes:
- Test mail statistics, cluster status, and quarantine endpoints during
connection test (internal/api/config_handlers.go:1695-1714)
- Return warnings array in test response when endpoints are unavailable
- Increased timeout from 10s to 15s to accommodate multiple endpoint checks
- Added warning logs for failed endpoint checks
Frontend changes:
- Added showWarning() toast function for warning messages
- Enhanced NodeModal to display warning status with amber styling
- Added warnings list display in test results UI
- Updated Settings.tsx to show warnings from connection tests
This change helps users identify permission issues immediately rather
than discovering later that metrics aren't being collected despite a
"successful" connection.
Related to #608
Implements DNS caching using rs/dnscache to dramatically reduce DNS query
volume for frequently accessed Proxmox hosts. Users were reporting 260,000+
DNS queries in 37 hours for the same hostnames.
Changes:
- Added rs/dnscache dependency for DNS resolution caching
- Created pkg/tlsutil/dnscache.go with DNS cache wrapper
- Updated HTTP client creation to use cached DNS resolver
- Added DNSCacheTimeout configuration option (default: 5 minutes)
- Made DNS cache timeout configurable via:
- system.json: dnsCacheTimeout field (seconds)
- Environment variable: DNS_CACHE_TIMEOUT (duration string)
- DNS cache periodically refreshes to prevent stale entries
Benefits:
- Reduces DNS query load on local DNS servers by ~99%
- Reduces network traffic and DNS query log volume
- Maintains fresh DNS entries through periodic refresh
- Configurable timeout for different network environments
Default behavior: 5-minute cache timeout with automatic refresh
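A sketch of the rs/dnscache wiring, following the library's documented dial pattern; the actual code in pkg/tlsutil/dnscache.go may differ in details:

```go
package tlsutil

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"

	"github.com/rs/dnscache"
)

func newCachedTransport(refreshEvery time.Duration) *http.Transport {
	resolver := &dnscache.Resolver{}

	// Periodic refresh keeps entries from going stale; true evicts
	// hostnames not looked up since the last refresh.
	go func() {
		for range time.Tick(refreshEvery) {
			resolver.Refresh(true)
		}
	}()

	return &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			host, port, err := net.SplitHostPort(addr)
			if err != nil {
				return nil, err
			}
			ips, err := resolver.LookupHost(ctx, host) // cached lookup
			if err != nil {
				return nil, err
			}
			if len(ips) == 0 {
				return nil, fmt.Errorf("no addresses for %s", host)
			}
			var dialer net.Dialer
			for _, ip := range ips {
				conn, dialErr := dialer.DialContext(ctx, network, net.JoinHostPort(ip, port))
				if dialErr == nil {
					return conn, nil
				}
				err = dialErr
			}
			return nil, fmt.Errorf("dial %s: %w", host, err)
		},
	}
}
```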
The previous commit added 4 new %s format specifiers for Docker/LXC
instructions but didn't add the corresponding arguments to fmt.Sprintf.
Added 4 pulseURL arguments to match the new format specifiers in the
'unknown environment' section of the setup script.
This corrects several issues with the temperature proxy configuration
in the example docker-compose.yml:
Issues fixed:
1. **Wrong mount path**: Was using /mnt/pulse-proxy (LXC path) instead of
/run/pulse-sensor-proxy (Docker path). While the client auto-detects both
paths, this was inconsistent with documentation.
2. **Wrong permissions**: Was mounted as :ro (read-only) but needs :rw
(read-write) for the Unix socket to work properly.
3. **Enabled by default**: Would cause container startup issues if the
proxy wasn't installed on the host.
Changes:
- Commented out the bind mount by default (requires manual setup)
- Changed path from /mnt/pulse-proxy to /run/pulse-sensor-proxy
- Changed permissions from :ro to :rw
- Added clear comment explaining it requires setup with --standalone flag
- Points users to documentation
Now matches the documented Docker setup process and won't break
fresh installations where the proxy isn't installed yet.
This addresses the need for users who deploy Pulse via infrastructure-as-code
tools (Ansible, Terraform, Salt, Puppet) to have scriptable, well-documented
installation procedures.
Changes:
**Comprehensive Automation Section:**
- Documented all installer script flags and options
- Required: --ctid (LXC) or --standalone (Docker)
- Optional: --quiet, --pulse-server, --version, --local-binary, --skip-restart
- Documented idempotency, exit codes, and non-interactive behavior
**Real-World Examples:**
- Ansible playbook for LXC deployments
- Ansible playbook for Docker deployments (includes docker-compose.yml management)
- Terraform null_resource example with remote-exec
- Manual step-by-step configuration (no script)
**Configuration Documentation:**
- Complete YAML config file format with all options
- Environment variable overrides (PULSE_SENSOR_PROXY_ALLOWED_SUBNETS, etc.)
- Example systemd service overrides
- Rate limiting, metrics, ACL, and subnet configuration
**Quick Reference:**
- Added link at top of doc for automation users to jump directly to automation section
- Clear examples of re-running after changes (adding nodes, upgrading versions)
**Key Features for Automation:**
- --quiet flag for non-interactive execution
- Idempotent design (safe to re-run)
- Verifiable exit codes
- Environment variable configuration
- Local binary support (no internet required)
This makes it straightforward for infrastructure teams to integrate Pulse
temperature monitoring into their existing automation workflows without
relying on interactive scripts or manual steps.
This addresses confusion around temperature monitoring setup for Docker
deployments where users expected a turnkey experience similar to LXC.
The core issue: The setup script and documentation suggested that
temperature monitoring was "automatically configured" for all containerized
deployments, but in reality only LXC containers have a fully automatic
setup. Docker requires manual steps.
Changes:
**Setup Script (config_handlers.go):**
- Fixed "unknown environment" path to show separate instructions for LXC vs Docker
- Docker instructions now correctly show --standalone flag (was incorrectly showing --ctid)
- Added docker-compose.yml bind mount instructions inline
- Added restart command for Docker deployments
**Documentation (TEMPERATURE_MONITORING.md):**
- Added prominent "Deployment-Specific Setup" callout at the top
- Clarified that LXC is fully automatic, Docker requires manual steps
- Reorganized "Setup (Automatic)" section to clearly distinguish:
- LXC: Fully turnkey (no manual steps)
- Docker: Manual proxy installation required
- Node configuration: Works for both
- Updated "Host-side responsibilities" to specify it's Docker-only
- Fixed architecture benefits to reflect LXC vs Docker differences
Why this matters:
- LXC setup script auto-detects the container and runs install-sensor-proxy.sh --ctid
- Docker deployments can't be auto-detected and require --standalone flag
- Users running Docker were getting incorrect instructions (--ctid instead of --standalone)
- Documentation suggested everything was automatic, leading to confusion
Now the documentation and setup script accurately reflect that:
- LXC = Turnkey (automatic)
- Docker = Manual steps required (but well-documented)
- Native = Direct SSH (no proxy)
Related to GitHub Discussion #605
This addresses GitHub Discussion #605 where users were unclear about
configuring the temperature proxy when running Pulse in Docker.
Changes:
**install-sensor-proxy.sh:**
- Add Docker-specific post-install instructions when --standalone flag is used
- Show required docker-compose.yml bind mount configuration
- Provide verification commands for Docker deployments
- Link to full documentation for troubleshooting
**TEMPERATURE_MONITORING.md:**
- Add prominent "Quick Start for Docker Deployments" section at the top
- Move Docker instructions earlier in the document for better visibility
- Provide complete 4-step setup process with verification commands
These changes ensure Docker users immediately see:
1. How to install the proxy on the Proxmox host
2. What bind mount to add to docker-compose.yml
3. How to restart and verify the setup
4. Where to find detailed troubleshooting
The installer now provides actionable next steps instead of just
confirming installation, reducing confusion for containerized deployments.
* Helm Chart version bump
Signed-off-by: Aleksandrs Markevics <amarkevics@onairent.live>
* fsGroup moved under podSecurityContext
Signed-off-by: Aleksandrs Markevics <amarkevics@onairent.live>
- Build host agent binaries for all platforms (linux/darwin/windows, amd64/arm64/armv7) in Docker
- Add Makefile target for building agent binaries locally
- Add startup validation to check for missing agent binaries
- Improve download endpoint error messages with troubleshooting guidance
- Enhance host details drawer layout with better organization and visual hierarchy
- Update base images to rolling versions (node:20-alpine, golang:1.24-alpine, alpine:3.20)
Implemented comprehensive state preservation to prevent temporary dropouts:
1. Node Grace Period (60s):
- Track last-online timestamp for each Proxmox node
- Preserve online status during grace period to prevent flapping
- Applied to all node status checks throughout codebase
2. Efficient Polling Preservation:
- Detect when cluster/resources returns empty arrays
- Preserve the previous VMs/containers if resources existed before
- Handles cluster health check failures gracefully
3. Traditional Polling Preservation:
- Updated preservation logic for per-node VM/container polling
- Triggers when zero resources are returned, regardless of node response
- Fixed issue where nodes responding with empty data bypassed preservation
Root cause: Intermittent Proxmox cluster health failures ("no healthy nodes
available") caused both efficient and traditional polling to return empty
arrays, immediately clearing all VMs/containers from state.
Changes:
- internal/monitoring/monitor.go: Added node grace period, efficient polling preservation
- internal/monitoring/monitor_polling.go: Fixed traditional polling preservation logic
Fixes frequent UI flickering where vmCount/containerCount would briefly drop to zero.
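A minimal sketch of the grace period, with hypothetical names; a node stays "online" for 60s after its last confirmed online poll so transient cluster health failures do not flap the UI:

```go
package monitoring

import "time"

const nodeOfflineGrace = 60 * time.Second

type nodeTracker struct {
	lastOnline map[string]time.Time
}

func (t *nodeTracker) effectiveNodeOnline(node string, reportedOnline bool) bool {
	now := time.Now()
	if reportedOnline {
		t.lastOnline[node] = now // track last-online timestamp
		return true
	}
	// Never-seen nodes have a zero timestamp and report offline.
	return now.Sub(t.lastOnline[node]) < nodeOfflineGrace
}
```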
- Add flex-1 to all drawer cards for consistent space filling
- Implement single-drawer-open behavior (accordion-style)
- Add text truncation to labels and IP badges to prevent overflow
- Replace Map-based state with reactive signals for better performance
This commit implements per-node temperature monitoring control and fixes a critical
bug where partial node updates were destroying existing configuration.
Backend changes:
- Add TemperatureMonitoringEnabled field (*bool) to PVEInstance, PBSInstance, and PMGInstance
- Update monitor.go to check per-node temperature setting with global fallback
- Convert all NodeConfigRequest boolean fields to *bool pointers
- Add nil checks in HandleUpdateNode to prevent overwriting unmodified fields
- Fix critical bug where partial updates zeroed out MonitorVMs, MonitorContainers, etc.
- Update NodeResponse, NodeFrontend, and StateSnapshot to include temperature setting
- Fix HandleAddNode and test connection handlers to use pointer-based boolean fields
Frontend changes:
- Add temperatureMonitoringEnabled to Node interface and config types
- Create per-node temperature monitoring toggle handler with optimistic updates
- Update NodeModal to wire up per-node temperature toggle
- Add isTemperatureMonitoringEnabled helper to check effective monitoring state
- Update ConfiguredNodeTables to show/hide temperature badge based on monitoring state
- Update NodeSummaryTable to conditionally show temperature column
- Pass globalTemperatureMonitoringEnabled prop through component tree
The critical bug fix ensures that when updating a single field (like temperature
monitoring), the backend only modifies that specific field instead of zeroing out
all other boolean configuration fields.
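A sketch of the nil-check pattern behind that fix, with heavily simplified structs: only fields the client actually sent arrive as non-nil pointers, so a partial update can no longer zero out unrelated booleans:

```go
package api

type PVEInstance struct {
	MonitorVMs                   bool
	MonitorContainers            bool
	TemperatureMonitoringEnabled *bool // nil = inherit the global setting
}

type NodeConfigRequest struct {
	MonitorVMs                   *bool `json:"monitorVMs,omitempty"`
	MonitorContainers            *bool `json:"monitorContainers,omitempty"`
	TemperatureMonitoringEnabled *bool `json:"temperatureMonitoringEnabled,omitempty"`
}

func applyNodeUpdate(cfg *PVEInstance, req NodeConfigRequest) {
	if req.MonitorVMs != nil {
		cfg.MonitorVMs = *req.MonitorVMs
	}
	if req.MonitorContainers != nil {
		cfg.MonitorContainers = *req.MonitorContainers
	}
	if req.TemperatureMonitoringEnabled != nil {
		cfg.TemperatureMonitoringEnabled = req.TemperatureMonitoringEnabled
	}
}
```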
Root Cause:
The classifyError() function in tempproxy/client.go was returning nil
when err was nil, even if respError contained "rate limit exceeded".
This caused the retry logic to treat rate limit errors as retryable,
triggering 3 retries with exponential backoff (100ms, 200ms, 400ms)
for each rate-limited request.
With multiple nodes polling simultaneously and hitting the proxy's
1 req/sec default rate limit, this created a retry storm:
- 3 nodes polling every 10 seconds
- 1-2 requests rate limited per cycle
- Each rate limit triggered 3 retries
- Result: 6+ extra requests per cycle, causing temperature data to
flicker in and out as requests were dropped
Solution:
1. Reordered classifyError() to check respError first before checking
if err is nil, ensuring rate limit errors are properly classified
2. Added explicit rate limit detection that marks these errors as
non-retryable
3. Added stub EnableTemperatureMonitoring/DisableTemperatureMonitoring
methods to Monitor for interface compatibility
Impact:
- Rate limit retry attempts reduced from 151 in 10 minutes to 0
- Temperature data now stable for all nodes
- No more flickering temperature displays in dashboard
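A sketch of the reordered classification; the (retryable, error) shape and parameter names are assumptions, but the key points follow the commit: respError is inspected before the nil transport-error check, and rate limits are non-retryable:

```go
package tempproxy

import (
	"errors"
	"strings"
)

func classifyError(err error, respError string) (retryable bool, out error) {
	if strings.Contains(respError, "rate limit exceeded") {
		return false, errors.New(respError) // retrying only worsens the storm
	}
	if err == nil {
		if respError != "" {
			return true, errors.New(respError)
		}
		return false, nil
	}
	return true, err // transport errors remain retryable
}
```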
This change addresses intermittent "Guest details unavailable" and "Disk stats
unavailable" errors affecting users with large VM deployments (50+ VMs) or
high-load Proxmox environments.
Changes:
- Increased default guest agent timeouts (3-5s → 10-15s) to better handle
environments under load
- Added automatic retry logic (1 retry by default) for transient timeout failures
- Made all timeouts and retry count configurable via environment variables:
* GUEST_AGENT_FSINFO_TIMEOUT (default: 15s)
* GUEST_AGENT_NETWORK_TIMEOUT (default: 10s)
* GUEST_AGENT_OSINFO_TIMEOUT (default: 10s)
* GUEST_AGENT_VERSION_TIMEOUT (default: 10s)
* GUEST_AGENT_RETRIES (default: 1)
- Added comprehensive documentation in VM_DISK_MONITORING.md with configuration
examples for different deployment scenarios
These improvements allow Pulse to gracefully handle intermittent API timeouts
without immediately displaying errors, while remaining configurable for
different network conditions and environment sizes.
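A sketch of the override pattern with defaults, using the variable names from the list above; whether the real code parses Go duration strings ("15s") or bare seconds is an assumption:

```go
package monitoring

import (
	"os"
	"time"
)

func durationFromEnv(key string, def time.Duration) time.Duration {
	if v := os.Getenv(key); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return def
}

var (
	fsInfoTimeout  = durationFromEnv("GUEST_AGENT_FSINFO_TIMEOUT", 15*time.Second)
	networkTimeout = durationFromEnv("GUEST_AGENT_NETWORK_TIMEOUT", 10*time.Second)
	// GUEST_AGENT_OSINFO_TIMEOUT, GUEST_AGENT_VERSION_TIMEOUT, and
	// GUEST_AGENT_RETRIES are handled analogously.
)
```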
Fixes: https://github.com/rcourtman/Pulse/discussions/592