# Upgrade to Pulse v5

This is a practical guide for upgrading an existing Pulse install to v5.

## Before You Upgrade

- Create an encrypted config backup: **Settings → System → Backups → Create Backup**
- Confirm you can access the host/container console (for rollback and bootstrap token retrieval)
- Review the v5 release notes on GitHub before upgrading

## Upgrade Paths

### systemd and Proxmox LXC installs

Preferred path:

- **Settings → System → Updates**

If you prefer the CLI, use the official installer for the target version:

```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | \
  sudo bash -s -- --version vX.Y.Z
```

This installer updates the **Pulse server**. Agent updates use the `/install.sh` command generated in **Settings → Agents → Installation commands**.

### Docker

```bash
docker pull rcourtman/pulse:latest
docker compose up -d
```

### Kubernetes (Helm)

```bash
helm repo update
helm upgrade pulse pulse/pulse -n pulse
```

## Post-Upgrade Checklist

- Confirm the version: `GET /api/version`
- Confirm scheduler health: `GET /api/monitoring/scheduler/health`
- Confirm nodes are polling and no breakers are stuck open
- Confirm notifications still send (send a test)
- Confirm agents are connected (if used)

## Notes and Common Gotchas

### Bootstrap token on fresh auth setup

If you reset auth (for example by deleting `.env`), Pulse may require a bootstrap token before you can complete setup.

- Docker: `docker exec pulse /app/pulse bootstrap-token`
- systemd/LXC: `sudo pulse bootstrap-token`

### Sensor proxy removal

The `pulse-sensor-proxy` from v4 is no longer needed; temperature monitoring is now handled by the unified agent.
If you had the sensor proxy installed on your Proxmox hosts, remove it **on each host** after upgrading:

```bash
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/uninstall-sensor-proxy.sh | \
  sudo bash -s -- --uninstall --purge
```

If you deleted the old node from Pulse and want the cleanup to also remove the old `pulse-monitor@pam` API user/tokens before reinstalling, add `--remove-proxmox-access`. See the [Legacy Cleanup](TEMPERATURE_MONITORING.md#legacy-cleanup-if-upgrading) section in the temperature monitoring docs for the full cleanup details.

Skipping this step leaves a self-heal timer running on the host that generates recurring `TASK ERROR` entries in the Proxmox task log.

#### LXC mount entry cleanup

If your Pulse LXC container fails to start after a host reboot with:

```
Failed to mount "/run/pulse-sensor-proxy" onto ".../mnt/pulse-proxy"
TASK ERROR: startup for container '<CTID>' failed
```

this means the v4 installer added a mount entry for `/run/pulse-sensor-proxy` to the container config. After a reboot, `/run` (tmpfs) is cleared and the mount source no longer exists.

**Automatic fix:** Re-run the Pulse installer on the Proxmox host. It detects and removes stale sensor-proxy mount entries from all LXC container configs before proceeding.

**Manual fix:**

```bash
# Check which containers have stale entries
grep -n 'pulse-sensor-proxy' /etc/pve/lxc/*.conf

# Remove mp entries via pct (container must be stopped)
# Replace mp0 with the actual key shown in the grep output (mp0, mp1, etc.)
pct set <CTID> -delete mp0

# Or remove lxc.mount.entry lines directly
sed -i '/lxc\.mount\.entry:.*pulse-sensor-proxy/d' /etc/pve/lxc/<CTID>.conf
```

After removing the stale entry, start the container with `pct start <CTID>`.

### Temperature monitoring in containers

If Pulse runs in a container and you are relying on SSH-based temperature collection, move to the agent or run Pulse on the host.
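For context on the SSH-based path: it reads temperatures from `lm-sensors` JSON output (`sensors -j`). The sketch below shows what that JSON typically looks like and how a value can be pulled out of it; the chip name, key names, and value are illustrative sample output, not Pulse internals.

```shell
# Sample mimicking typical `sensors -j` output (chip/key names vary by
# hardware; this file is illustrative, not real Pulse data).
cat > /tmp/sensors-sample.json <<'EOF'
{
  "coretemp-isa-0000": {
    "Package id 0": { "temp1_input": 45.000 }
  }
}
EOF

# Extract the package temperature value without needing jq.
grep -o '"temp1_input": [0-9.]*' /tmp/sensors-sample.json | awk '{print $2}'
```

On the sample above this prints `45.000`; on a real host the available keys depend on the sensors the hardware exposes.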
SSH-based collection from containers is intended for dev/test only (use `PULSE_DEV_ALLOW_CONTAINER_SSH=true` if you must).

Preferred option:

- Install the unified agent (`pulse-agent`) on Proxmox hosts with `--enable-proxmox`

Alternative option:

- Run Pulse outside a container and use SSH-based temperature collection (restricted `sensors -j` keys)

### Backups not showing (PVE)

If local PVE backups aren't appearing in Pulse, your API token may be missing the `PVEDatastoreAdmin` permission required for backup visibility. This can happen if:

- You upgraded from v4 (older setup scripts didn't include this permission)
- You set up nodes via the unified agent before v5.1.x (the agent wasn't granting this permission)
- You created the API token manually without the storage permission

**Quick fix** (run on each Proxmox host):

```bash
pveum aclmod /storage -user pulse-monitor@pam -role PVEDatastoreAdmin
```

**Alternative** (re-run setup):

1. Delete the node from Pulse Settings
2. Re-run the setup (either the UI-generated script or the agent with `--enable-proxmox`)
3. The new token will have the correct permissions

Note: the "re-run setup" option only works on v5.1.x or later, which includes the fix for agent-based setups.
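Since the right fix depends on the installed version, a quick shell comparison can tell you which path applies. This is a sketch under the assumption that you have already fetched the running version (for example from `GET /api/version`) into a variable; the `5.0.3` value is only an example.

```shell
# Decide which backup-permission fix applies based on the Pulse version.
VERSION="5.0.3"   # example; substitute the value reported by /api/version
MIN="5.1.0"       # first version where re-running setup grants the permission

# sort -V orders version strings numerically; if MIN sorts first,
# then VERSION is at or above the threshold.
if [ "$(printf '%s\n' "$MIN" "$VERSION" | sort -V | head -n1)" = "$MIN" ]; then
  echo "re-run setup supported"
else
  echo "use the pveum quick fix"
fi
```

With the example value `5.0.3` this prints `use the pveum quick fix`, steering you to the one-line `pveum aclmod` command above.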