**Host Detection**:
- Now detects localhost by hostname and FQDN, not just IP
- Fixes issue where nodes configured as https://hostname:8006 would skip
localhost cleanup (API tokens, bind mounts, service removal)
**Systemd Sandbox**:
- Added /etc/pve and /etc/systemd/system to ReadWritePaths
- Allows cleanup script to modify Proxmox configs and systemd units
**Uninstaller Improvements**:
- Use UUID for transient unit names (prevents same-second collisions)
- Added --purge flag for complete removal
- Added --wait and --collect flags to capture exit code
- Now fails cleanup if uninstaller exits non-zero
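A minimal Go sketch of the uninstaller launch described above, assuming an uninstall script at /opt/pulse/sensor-proxy/uninstall.sh (the script name is hypothetical; the systemd-run flags are real):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"os/exec"
)

// runUninstaller launches the uninstaller as a transient systemd unit.
// A random 128-bit suffix serves the same purpose as the UUID mentioned
// above: two cleanups started in the same second get distinct unit names.
func runUninstaller() error {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return err
	}
	unit := fmt.Sprintf("pulse-uninstall-%x.service", buf)

	cmd := exec.Command("systemd-run",
		"--unit", unit,
		"--wait",    // block until the unit exits and propagate its exit code
		"--collect", // unload the unit afterwards, even on failure
		"/opt/pulse/sensor-proxy/uninstall.sh", "--purge")
	if err := cmd.Run(); err != nil {
		// A non-zero uninstaller exit now fails the whole cleanup.
		return fmt.Errorf("uninstaller failed: %w", err)
	}
	return nil
}
```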
**Path Migration**:
- Fixed all /usr/local references to use /opt/pulse/sensor-proxy
- Updated forced command in SSH authorized_keys
- Updated self-heal script installer path
- Updated Go backend removal helpers (supports both new and legacy paths)
These fixes address Codex findings: hostname detection, sandbox permissions,
transient unit collisions, incomplete purging, and incomplete path migration.
Related to cleanup implementation testing.
Related to #670, #657
The fix in v4.26.5 (commit 59a97f2e3) attempted to resolve disappearing storage
by preferring hostnames over IPs when TLS hostname verification is required
(VerifySSL=true and no fingerprint). However, that fix was ineffective because
the cluster discovery code was populating BOTH the Host and IP fields with the
IP address.
**Root Cause:**
In internal/api/config_handlers.go, the detectPVECluster function was setting:
- endpoint.Host = schemePrefix + clusterNode.IP (when IP was available)
- endpoint.IP = clusterNode.IP
This meant both fields contained the same IP address. When the monitoring code
tried to prefer endpoint.Host for TLS validation (internal/monitoring/monitor.go:
361-368), it was still getting an IP, causing certificate validation to fail
with "certificate is valid for pve01.example.com, not 10.0.0.44".
**Solution:**
Separate the Host and IP fields properly during cluster discovery:
- endpoint.Host = hostname (e.g., "https://pve01:8006") for TLS validation
- endpoint.IP = IP address (e.g., "10.0.0.44") for DNS-free connections
The existing logic in clusterEndpointEffectiveURL() can now correctly choose
between them based on TLS requirements.
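A minimal sketch of the corrected shape; field and function names follow the commit text, everything else is simplified for illustration:

```go
type ClusterEndpoint struct {
	Host        string // e.g. "https://pve01:8006", satisfies TLS hostname verification
	IP          string // e.g. "10.0.0.44", usable without DNS
	Fingerprint string // pinned certificate fingerprint, if any
}

func clusterEndpointEffectiveURL(ep ClusterEndpoint, verifySSL bool) string {
	// With VerifySSL=true and no pinned fingerprint, the certificate's
	// hostname must match, so prefer the hostname-based URL.
	if verifySSL && ep.Fingerprint == "" && ep.Host != "" {
		return ep.Host
	}
	return "https://" + ep.IP + ":8006" // DNS-free fallback
}
```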
**Impact:**
Users with VerifySSL=true who upgraded to v4.26.1-v4.26.5 and lost storage
visibility should now see storage, VM disks, and backups again after this fix.
The setup script template had 44 %s placeholders, but the fmt.Sprintf call
arguments were out of order starting at position 15. This caused the Pulse
URL to be inserted where the token name should be, resulting in errors like:
Token ID: pulse-monitor@pam!http://192.168.0.44:7655
Instead of the correct format:
Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916
Changes:
- Escaped the literal %s as %%s in the printf helper (line 3949) so fmt.Sprintf doesn't consume an argument for it
- Reordered fmt.Sprintf arguments (lines 4727-4732) to match template order
- Removed 2 extra pulseURL arguments that were causing the shift
This fix ensures all 44 placeholders receive the correct values in order.
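The failure mode is easy to reproduce in miniature. In the toy template below (illustrative, not the real setup script), an unescaped %s inside the embedded printf consumes a Sprintf argument and shifts every later placeholder by one:

```go
package main

import "fmt"

// broken: three %s verbs, so Sprintf feeds the first argument to the
// bash printf and the token name lands one slot too late.
const broken = `printf "version %s\n" "$VER"
Token ID: %s!%s`

// fixed: %%s renders as a literal %s for bash, leaving two verbs.
const fixed = `printf "version %%s\n" "$VER"
Token ID: %s!%s`

func main() {
	out := fmt.Sprintf(fixed, "pulse-monitor@pam", "pulse-192-168-0-44-1762545916")
	fmt.Println(out)
	// printf "version %s\n" "$VER"
	// Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916
}
```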
Addresses two issues preventing configuration backup/restore:
1. Export passphrase validation mismatch: the UI only enforced the 12-character
   minimum for custom passphrases, while the backend enforced it for every
   passphrase. Users with shorter login passwords saw unexplained failures.
- Frontend now validates all passphrases meet 12-char minimum
- Clear error message suggests custom passphrase if login password too short
2. Import data parsing failed silently: Frontend sent `exportData.data`
which was undefined for legacy/CLI backups (raw base64 strings).
Backend rejected these with no logs.
- Frontend now handles both formats: {status, data} and raw strings
- Backend logs validation failures for easier troubleshooting
Related to #646 where user reported "error after entering password" with
no container logs. These changes ensure proper validation feedback and
make the backup system resilient to different export formats.
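The dual-format handling is straightforward to sketch. The project does this parsing in the frontend per the list above; the sketch below is a backend-style Go equivalent with hypothetical names (extractBackupPayload, the logging hook):

```go
package backup

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// extractBackupPayload accepts both export formats: the wrapped
// {status, data} object and a legacy raw base64 string (optionally
// JSON-quoted), logging rejections instead of failing silently.
func extractBackupPayload(body []byte, logf func(string, ...any)) (string, error) {
	var wrapped struct {
		Status string `json:"status"`
		Data   string `json:"data"`
	}
	if err := json.Unmarshal(body, &wrapped); err == nil && wrapped.Data != "" {
		return wrapped.Data, nil
	}
	var quoted string // a JSON string literal, e.g. "AAAA..."
	if err := json.Unmarshal(body, &quoted); err == nil && quoted != "" {
		return quoted, nil
	}
	raw := strings.TrimSpace(string(body)) // bare base64 from CLI exports
	if _, err := base64.StdEncoding.DecodeString(raw); err != nil {
		logf("backup import rejected: neither wrapped JSON nor base64: %v", err)
		return "", fmt.Errorf("unrecognized backup format")
	}
	return raw, nil
}
```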
Related to #617
This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.
Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers
Container detection was inconsistent: monitor.go checked both flags, but
temperature.go and config_handlers.go only checked InContainer(). All
locations now consistently check PULSE_DOCKER || InContainer(), as the
sketch below shows.
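A one-line helper captures the unified check (the helper name is assumed; os is the standard library package and system.InContainer() is the project's existing heuristic):

```go
// runningInContainer treats the explicit PULSE_DOCKER=true override the
// same as the filesystem/cgroup heuristics in system.InContainer().
func runningInContainer() bool {
	return os.Getenv("PULSE_DOCKER") == "true" || system.InContainer()
}
```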
Addresses issue #635 where users encounter "can't find the SSH key" errors
when enabling temperature monitoring during automated PVE setup with Pulse
running in Docker.
Root cause:
- Setup script embeds SSH keys at generation time (when downloaded)
- For containerized Pulse, keys are empty until pulse-sensor-proxy is installed
- Script auto-installs proxy, but didn't refresh keys after installation
- This caused temperature monitoring setup to fail with confusing errors
Changes:
1. After successful proxy installation, immediately fetch and populate the
proxy's SSH public key (lines 4068-4080)
2. Update bash variables SSH_SENSORS_PUBLIC_KEY and SSH_SENSORS_KEY_ENTRY
so temperature monitoring setup can proceed in the same script run
3. Improve error messaging when keys aren't available (lines 4424-4453):
- Clear explanation of containerized Pulse requirements
- Step-by-step instructions for container restart and verification
- Separate guidance for bare-metal vs containerized deployments
Flow improvements:
- Initial run: Proxy installs → keys fetched → temp monitoring configures
- Rerun after container restart: Keys fetched at script start → works
- Both scenarios now handled correctly
Related to #635
When Pulse is running in a container and the SSH key is not available,
provide clearer guidance about the pulse-sensor-proxy requirement and
include documentation link for Docker deployments.
This helps users understand that containerized Pulse needs the host-side
sensor proxy to access temperature data from Proxmox hosts.
Related to #595
This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.
**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.
**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration
**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)
**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments
The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
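The bracket notation reduces to a small helper; a sketch assuming the helper name (the commit's actual code isn't shown here):

```go
// knownHostsName renders the host field of a known_hosts entry: a bare
// hostname on the default port, [host]:port otherwise, matching what
// ssh-keyscan -p <port> prints.
//
//	knownHostsName("pve01", 22)   == "pve01"
//	knownHostsName("pve01", 2222) == "[pve01]:2222"
func knownHostsName(host string, port int) string {
	if port == 22 {
		return host
	}
	return fmt.Sprintf("[%s]:%d", host, port)
}
```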
Related to discussion #615
Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.
Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field
Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces
When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
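A minimal sketch of the selection logic (the Node fields come from the commit; the helper name is illustrative):

```go
type Node struct {
	Host     string // internal management URL, still used for API connections
	GuestURL string // optional guest-facing URL for web UI links
}

// navigationURL picks the link target for UI navigation.
func navigationURL(n Node) string {
	if n.GuestURL != "" {
		return n.GuestURL // e.g. a reverse-proxy domain
	}
	return n.Host
}
```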
Replace non-functional docs.pulseapp.io URLs with direct GitHub repository
links. The containerized deployment security documentation exists in
SECURITY.md and was previously inaccessible via the external link.
Changes:
- Update SECURITY.md documentation reference
- Fix three documentation links in config_handlers.go (SSH verification,
setup script, and security block error messages)
- All links now point to GitHub repository where docs actually live
Related to #607
Related to #551
Enhanced the PMG connection test to actually validate the metrics
endpoints that Pulse uses for monitoring, rather than only checking
the version endpoint. This provides users with immediate feedback if
their PMG credentials lack the necessary permissions to collect metrics.
Backend changes:
- Test mail statistics, cluster status, and quarantine endpoints during
connection test (internal/api/config_handlers.go:1695-1714)
- Return warnings array in test response when endpoints are unavailable
- Increased timeout from 10s to 15s to accommodate multiple endpoint checks
- Added warning logs for failed endpoint checks
Frontend changes:
- Added showWarning() toast function for warning messages
- Enhanced NodeModal to display warning status with amber styling
- Added warnings list display in test results UI
- Updated Settings.tsx to show warnings from connection tests
This change helps users identify permission issues immediately rather
than discovering later that metrics aren't being collected despite a
"successful" connection.
The previous commit added 4 new %s format specifiers for Docker/LXC
instructions but didn't add the corresponding arguments to fmt.Sprintf.
Added 4 pulseURL arguments to match the new format specifiers in the
'unknown environment' section of the setup script.
This addresses confusion around temperature monitoring setup for Docker
deployments where users expected a turnkey experience similar to LXC.
The core issue: The setup script and documentation suggested that
temperature monitoring was "automatically configured" for all containerized
deployments, but in reality only LXC containers have a fully automatic
setup. Docker requires manual steps.
Changes:
**Setup Script (config_handlers.go):**
- Fixed "unknown environment" path to show separate instructions for LXC vs Docker
- Docker instructions now correctly show --standalone flag (was incorrectly showing --ctid)
- Added docker-compose.yml bind mount instructions inline
- Added restart command for Docker deployments
**Documentation (TEMPERATURE_MONITORING.md):**
- Added prominent "Deployment-Specific Setup" callout at the top
- Clarified that LXC is fully automatic, Docker requires manual steps
- Reorganized "Setup (Automatic)" section to clearly distinguish:
- LXC: Fully turnkey (no manual steps)
- Docker: Manual proxy installation required
- Node configuration: Works for both
- Updated "Host-side responsibilities" to specify it's Docker-only
- Fixed architecture benefits to reflect LXC vs Docker differences
Why this matters:
- LXC setup script auto-detects the container and runs install-sensor-proxy.sh --ctid
- Docker deployments can't be auto-detected and require --standalone flag
- Users running Docker were getting incorrect instructions (--ctid instead of --standalone)
- Documentation suggested everything was automatic, leading to confusion
Now the documentation and setup script accurately reflect that:
- LXC = Turnkey (automatic)
- Docker = Manual steps required (but well-documented)
- Native = Direct SSH (no proxy)
Related to GitHub Discussion #605
This commit implements per-node temperature monitoring control and fixes a critical
bug where partial node updates were destroying existing configuration.
Backend changes:
- Add TemperatureMonitoringEnabled field (*bool) to PVEInstance, PBSInstance, and PMGInstance
- Update monitor.go to check per-node temperature setting with global fallback
- Convert all NodeConfigRequest boolean fields to *bool pointers
- Add nil checks in HandleUpdateNode to prevent overwriting unmodified fields
- Fix critical bug where partial updates zeroed out MonitorVMs, MonitorContainers, etc.
- Update NodeResponse, NodeFrontend, and StateSnapshot to include temperature setting
- Fix HandleAddNode and test connection handlers to use pointer-based boolean fields
Frontend changes:
- Add temperatureMonitoringEnabled to Node interface and config types
- Create per-node temperature monitoring toggle handler with optimistic updates
- Update NodeModal to wire up per-node temperature toggle
- Add isTemperatureMonitoringEnabled helper to check effective monitoring state
- Update ConfiguredNodeTables to show/hide temperature badge based on monitoring state
- Update NodeSummaryTable to conditionally show temperature column
- Pass globalTemperatureMonitoringEnabled prop through component tree
The critical bug fix ensures that when updating a single field (like temperature
monitoring), the backend only modifies that specific field instead of zeroing out
all other boolean configuration fields.
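The pointer-based pattern in miniature (type names follow the commit; the helpers and the plain MonitorVMs field are illustrative):

```go
type PVEInstance struct {
	MonitorVMs                   bool
	TemperatureMonitoringEnabled *bool // nil = inherit the global setting
}

type NodeConfigRequest struct {
	MonitorVMs                   *bool `json:"monitorVMs,omitempty"`
	TemperatureMonitoringEnabled *bool `json:"temperatureMonitoringEnabled,omitempty"`
}

// applyUpdate only touches fields the client actually sent. Before the
// fix, absent booleans decoded as false and wiped existing settings.
func applyUpdate(cfg *PVEInstance, req NodeConfigRequest) {
	if req.MonitorVMs != nil {
		cfg.MonitorVMs = *req.MonitorVMs
	}
	if req.TemperatureMonitoringEnabled != nil {
		cfg.TemperatureMonitoringEnabled = req.TemperatureMonitoringEnabled
	}
}

// temperatureEnabled resolves the effective per-node setting with a
// global fallback, as monitor.go now does.
func temperatureEnabled(cfg PVEInstance, global bool) bool {
	if cfg.TemperatureMonitoringEnabled != nil {
		return *cfg.TemperatureMonitoringEnabled
	}
	return global
}
```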
Critical bug fix: The setup script's format string had 33 placeholders
but was only receiving 27 arguments, causing:
- INSTALLER_URL to receive authToken instead of pulseURL
- This made curl try to resolve the token value as a hostname
- Error: 'curl: (6) Could not resolve host: N7AE3P'
- Token ID showed '%!s(MISSING)' in manual setup instructions
Fixed by:
- Added missing tokenName at position 7
- Added literal '%s' strings for version_ge printf placeholders
- Added authToken arguments for Authorization headers (positions 29, 31)
- Ensured all 33 format placeholders have corresponding arguments
Now generates correct URLs:
- INSTALLER_URL: http://192.168.0.160:7655/api/install/install-sensor-proxy.sh
- --pulse-server: http://192.168.0.160:7655
- Token ID: pulse-monitor@pam!pulse-192-168-0-160-[timestamp]
Setup script improvements (config_handlers.go):
- Remove redundant mount configuration and container restart logic
- Let installer handle all mount/restart operations (single source of truth)
- Eliminate hard-coded mp0 assumption
Installer improvements (install-sensor-proxy.sh):
- Add mount configuration persistence validation via pct config check
- Surface pct set errors instead of silencing with 2>/dev/null
- Capture and display curl download errors with temp files
- Check systemd daemon-reload/enable/restart exit codes
- Show journalctl output when service fails to start
- Make socket verification fatal (was warning)
- Provide clear manual steps when hot-plug fails on running container
This makes the installation fail fast with actionable error messages
instead of silently proceeding with broken configuration.
Changes:
- Replace PULSE_SENSOR_PROXY_FALLBACK_URL env export with --pulse-server argument
- Remove --quiet flag from installer invocation to show download progress
- More reliable than environment variable inheritance in subshells
This ensures the proxy installer can reliably download the binary from the
Pulse server fallback when GitHub is unavailable.
The setup script was filtering installer output to only show lines with
✓|⚠️|ERROR, which hid successful download messages like:
'Downloading pulse-sensor-proxy-linux-amd64 from Pulse server...'
This made it appear the installer failed even when the Pulse server
fallback download succeeded. Changed to show all installer output for
better visibility and debugging.
Users will now see the complete installation flow including:
- GitHub download attempt (expected to fail for dev builds)
- Pulse server fallback download (should succeed)
- All setup steps and validations
Improves transparency and reduces confusion during setup
Version check was blocking dev/main builds (e.g., '0.0.0-main-da9da6f')
from using temperature proxy, even though they have the latest code.
Added regex to skip version check for builds matching:
- ^0\.0\.0-main (main branch builds)
- ^dev (dev builds)
- ^main (main version strings)
These builds are assumed to have proxy support since they're built from
the latest codebase.
Fixes testing workflow when installing Pulse with --main flag
The version check was blocking ALL v4.23.0 users from temperature monitoring,
even non-containerized ones who don't need the proxy.
Changed to only check version when PULSE_IS_CONTAINERIZED=true, since:
- Non-containerized Pulse can use direct SSH on any version
- Containerized Pulse requires v4.24.0+ for proxy support
This ensures non-containerized v4.23.0 users can still use temperature monitoring
via direct SSH while properly blocking proxy setup for containerized v4.23.0.
Fixes regression introduced in commit fbe4ab83a
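Both fixes combine into a single gate; a sketch assuming a versionGE helper (hypothetical) for the release comparison:

```go
import "regexp"

var devBuild = regexp.MustCompile(`^0\.0\.0-main|^dev|^main`)

// proxySupported reports whether temperature monitoring may proceed.
func proxySupported(version string, containerized bool) bool {
	if !containerized {
		return true // direct SSH works on any version
	}
	if devBuild.MatchString(version) {
		return true // e.g. "0.0.0-main-da9da6f" is built from latest code
	}
	return versionGE(version, "4.24.0") // proxy support landed in v4.24.0
}
```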
Improves configuration handling and system settings APIs to support
v4.24.0 features including runtime logging controls, adaptive polling
configuration, and enhanced config export/persistence.
Changes:
- Add config override system for discovery service
- Enhance system settings API with runtime logging controls
- Improve config persistence and export functionality
- Update security setup handling
- Refine monitoring and discovery service integration
These changes provide the backend support for the configuration
features documented in the v4.24.0 release.
When the setup script detects TEMPERATURE_PROXY_KEY (proxy is available),
it now shows a clear success message instead of attempting SSH verification.
The verification check doesn't work with proxy-based setups since the
container doesn't have SSH keys - all temperature collection happens via
the Unix socket to pulse-sensor-proxy, which handles SSH.
Now shows:
✓ Temperature monitoring configured via pulse-sensor-proxy
Temperature data will appear in the dashboard within 10 seconds
Instead of the misleading:
⚠️ Unable to verify SSH connectivity.
Temperature data will appear once SSH connectivity is configured.
When pulse-sensor-proxy is available, the setup script now automatically
detects and uses the proxy's SSH public key instead of trying to generate
keys inside the container.
This fixes temperature monitoring setup for Docker deployments where:
- Container has proxy socket mounted at /mnt/pulse-proxy
- Proxy handles SSH connections to nodes
- Setup script needs to distribute the proxy's key, not container's key
The fix queries /api/system/proxy-public-key during setup script generation
and overrides SSH_SENSORS_PUBLIC_KEY if the proxy is available.
Tested with Docker on native Proxmox host (delly) - temperatures collected
successfully via proxy socket.
Changed heredoc delimiter from <<'EOF' to <<EOF to allow bash variable
expansion. Previously $SSH_PUBLIC_KEY and $SSH_RESTRICTED_KEY_ENTRY
were being passed as literal strings instead of their actual values,
so cluster nodes never received the correct SSH keys.
This fixes cluster node ProxyJump setup - now both restricted and
unrestricted keys are properly added to cluster nodes.
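In the Go-side template this is a one-character difference in the delimiter. A hypothetical fragment of the generated script, not the commit's actual code:

```go
// With <<EOF (unquoted) bash expands the variables when the script runs;
// <<'EOF' would append the literal strings "$SSH_PUBLIC_KEY" and
// "$SSH_RESTRICTED_KEY_ENTRY" to authorized_keys.
const appendKeysSnippet = `
ssh root@"$node" "cat >> /root/.ssh/authorized_keys" <<EOF
$SSH_PUBLIC_KEY
$SSH_RESTRICTED_KEY_ENTRY
EOF
`
```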
The setup script now adds both the restricted and unrestricted SSH keys
to ALL cluster nodes, not just the first one. This makes temperature
monitoring truly turnkey - you say 'yes' to configure cluster nodes and
it automatically sets up both keys on each node.
This ensures:
- All nodes can act as ProxyJump hosts if needed
- All nodes can provide temperature data via sensors
- No manual SSH key configuration required
Fixes turnkey cluster temperature monitoring setup.
When using ProxyJump for cluster temperature monitoring, the jump host
(typically the first cluster node) needs an unrestricted SSH key to allow
connection forwarding. Previously only the restricted key with
command="sensors -j" was added, which blocked ProxyJump.
Now the setup script adds TWO keys:
1. Unrestricted key (for ProxyJump/connection forwarding)
2. Restricted key (for running sensors -j directly)
This allows containerized Pulse to:
- Connect through the jump host to other cluster nodes
- Collect temperature data from all cluster members
Fixes cluster temperature monitoring for Docker/LXC deployments.
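The resulting authorized_keys entries look roughly like this (hypothetical key material and comments; only the command= restriction comes from the commit):

```go
const authorizedKeysEntries = `ssh-ed25519 AAAA...jump pulse-proxyjump
command="sensors -j" ssh-ed25519 AAAA...sensors pulse-sensors`
// Line 1: unrestricted, so the host can serve as a ProxyJump hop.
// Line 2: forced command, so this key can only ever run sensors -j.
```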
Added logic to resolve IP addresses for cluster nodes and include them as
HostName entries in the SSH config. Without this, Pulse couldn't connect
to cluster nodes like 'minipc' because the container couldn't resolve
the hostname.
Uses getent to resolve node names to IPs, falling back to the bare hostname
when resolution fails (covering environments where the container's DNS can
resolve it).
- Changed SSH key generation from RSA 2048 to Ed25519 (more secure, faster, smaller)
- Added openssh-client package to Docker image (required for temperature monitoring)
- Updated SSH config template to use id_ed25519
- Removed unused crypto/rsa and crypto/x509 imports
Ed25519 provides better security with shorter keys and faster operations
compared to RSA. The container now has SSH client tools needed to connect
to Proxmox nodes for temperature data collection.
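Key generation with the standard library and golang.org/x/crypto/ssh is compact; a sketch of the likely shape (the commit's actual code isn't shown):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Ed25519 keys are 32 bytes and sign faster than RSA 2048.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// One-line id_ed25519.pub / authorized_keys representation.
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(sshPub))
}
```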
The setup script was generating SSH config with IdentityFile ~/.ssh/id_ed25519
but Pulse generates id_rsa keys. Updated SSH config template to use id_rsa
to match the actual key type generated by the monitoring system.
The setup script was passing pulseURL instead of authToken as the last
parameter, causing 'Authentication required' errors when verifying SSH
connectivity. Fixed parameter order in fmt.Sprintf call.
Make temperature monitoring truly turnkey by automatically configuring
SSH ProxyJump when running in containers without pulse-sensor-proxy.
How it works:
1. Setup script runs on Proxmox host (e.g., delly)
2. Detects Pulse is containerized but proxy unavailable
3. Automatically configures SSH ProxyJump through the current host
4. Writes SSH config to /home/pulse/.ssh/config in container
5. Temperature monitoring "just works" without manual configuration
Changes:
- Track TEMP_MONITORING_AVAILABLE flag during proxy installation
- Auto-configure ProxyJump if proxy installation fails
- Add /api/system/ssh-config endpoint to write SSH config
- Only prompt for temperature monitoring if it can actually work
- Automatic SSH config: ProxyJump through Proxmox host
Before: User had to manually configure ProxyJump or install proxy
After: Temperature monitoring works automatically after setup script
This makes Docker deployments as turnkey as LXC deployments.
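The SSH config the endpoint writes plausibly looks like the fragment below (the hostnames delly and minipc appear earlier in this log; the IP and exact options are hypothetical):

```go
const proxyJumpConfig = `Host minipc
    HostName 192.168.0.51
    ProxyJump root@delly
    User root
    IdentityFile ~/.ssh/id_ed25519
`
```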