The aggressive use of the cluster/resources endpoint was breaking storage
collection for setups with multiple standalone nodes or improperly clustered
nodes. Pulse now uses cluster/resources only when the setup is explicitly
configured as a cluster, falling back to traditional node-by-node polling otherwise.
This should fix the missing storage issue where one node's storage
wasn't showing after upgrading to rc5.
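As a minimal sketch of the selection logic described above (the `Config` struct and function names are illustrative, not Pulse's actual code):

```go
package main

import "fmt"

// Config is a hypothetical node configuration; Pulse's real struct differs.
type Config struct {
	IsCluster bool // set only when explicitly configured as a cluster
}

// pollingStrategy picks the storage-collection path: the cluster/resources
// endpoint for explicit clusters, traditional node-by-node polling otherwise.
func pollingStrategy(cfg Config) string {
	if cfg.IsCluster {
		return "cluster/resources"
	}
	return "node-by-node"
}

func main() {
	fmt.Println(pollingStrategy(Config{IsCluster: true}))  // cluster/resources
	fmt.Println(pollingStrategy(Config{IsCluster: false})) // node-by-node
}
```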
The build was failing because Go wasn't in PATH after installation. The script
now exports a PATH with /usr/local/go/bin and /home/pulse/.local/bin prepended
to the standard system directories before running make build.
The install script now correctly builds from source when --main is specified,
even if Pulse is already installed. Previously it would go into the update
prompt instead of building from source.
The issue was that when a node responded to polling but returned empty storage
(e.g., due to API permissions), it was still marked as successfully polled.
This prevented the preservation logic from keeping the existing storage data.
Now, if a node returns empty storage but we already have storage data for that
node, we don't mark it as polled, allowing the preservation logic to keep the data.
This should fix the issue where storage disappears from one node in #448.
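The decision can be sketched as a small predicate (the function name and slice-of-names representation are illustrative, not Pulse's actual code):

```go
package main

import "fmt"

// shouldMarkPolled sketches the fix: an empty storage result from a node
// that previously reported storage is treated as a failed poll, so the
// preservation logic keeps the cached data instead of wiping it.
func shouldMarkPolled(returned, existing []string) bool {
	if len(returned) == 0 && len(existing) > 0 {
		return false // keep cached storage; don't count this as a successful poll
	}
	return true
}

func main() {
	fmt.Println(shouldMarkPolled(nil, []string{"local-lvm"}))  // false
	fmt.Println(shouldMarkPolled([]string{"local-lvm"}, nil))  // true
}
```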
- Send error result to channel when storage query times out so preservation logic works
- Ensures storage data is preserved for nodes that experience timeouts
- Fixes issue where storage/backups would disappear when a node times out
- Checks if system Go version meets minimum requirement (1.21+)
- Downloads and installs Go 1.23 from official releases if needed
- Supports amd64, arm64, and armv6l architectures
- Ensures Go is in PATH for the build process
This fixes build failures on Debian 12 which ships with Go 1.19
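The install script does this check in shell; the version comparison it performs can be sketched in Go as follows (simplified, illustrative names):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// meetsMinimum reports whether a version string like "go1.19" satisfies a
// minimum such as 1.21. A simplified sketch of the install script's check.
func meetsMinimum(version string, minMajor, minMinor int) bool {
	version = strings.TrimPrefix(version, "go")
	parts := strings.SplitN(version, ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return major > minMajor || (major == minMajor && minor >= minMinor)
}

func main() {
	fmt.Println(meetsMinimum("go1.19", 1, 21)) // false: Debian 12's Go is too old
	fmt.Println(meetsMinimum("go1.23", 1, 21)) // true
}
```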
- Added to README.md Quick Start section
- Added comprehensive docs in INSTALL.md
- Explains when and why to use --main option
- Notes build dependencies requirement
- --main is now the primary option (clearer intent)
- --source, --from-source, --branch remain as aliases
- Help text shows --main first for better discoverability
Usage: curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/install.sh | bash -s -- --main
When a node's storage query times out, don't return empty storage which would wipe out existing data. Instead, skip the node entirely so the preservation logic can maintain the existing storage information.
The 15-second timeout introduced to handle unavailable NFS storage was too aggressive and caused legitimate storage queries to time out on nodes with many storage backends or higher latency. This was causing storage to not be displayed for affected nodes.
Increased the timeout to 30 seconds as a better balance between responsiveness and reliability.
- include backend diagnostics data when available
- add node online/offline status (critical for storage issues)
- include physical disks data (addresses missing disks issue)
- add ZFS pool health information
- include alert configuration (helps debug threshold save issues)
- add recent errors and connection health
- show Pulse version information
- prompt user to run diagnostics first for comprehensive data
- helps troubleshoot issues like #448 more effectively
- fix incorrect instructions about diagnostics location
- it's actually in Settings → Diagnostics tab → Export for GitHub
- not 'Download Diagnostics' or 'scroll to bottom'
- make bug report template more concise and flexible
- remove overly detailed instructions
- add helpful tip about diagnostics export for troubleshooting issues
- keep templates simple so they don't restrict users
- add bug report template with instructions for attaching diagnostics
- add feature request template
- mention the 'Export for GitHub' option which provides sanitized diagnostics
- helps users provide better information when reporting issues
Removed the flexible ID matching code that was added for backward compatibility. Since we've fixed the frontend to generate IDs consistently with the backend, we don't need the complexity of trying multiple ID formats.
This keeps the codebase simpler and more maintainable.
The frontend was using guest.name while the backend uses guest.node in the ID pattern. This caused mismatches when saving and loading alert overrides.
Changed frontend ID fallback from:
instance-name-vmid
To match backend:
instance-node-vmid
This ensures consistent ID generation across the application.
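The backend's ID pattern can be sketched as (function name and sample values are illustrative):

```go
package main

import "fmt"

// guestID builds the canonical override ID used by the backend:
// instance-node-vmid. The frontend previously substituted the guest's
// name for the node, producing mismatched keys.
func guestID(instance, node string, vmid int) string {
	return fmt.Sprintf("%s-%s-%d", instance, node, vmid)
}

func main() {
	fmt.Println(guestID("pve-main", "node1", 101)) // pve-main-node1-101
}
```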
The frontend can save alert overrides with different ID formats depending on how the cluster is configured. This fix makes the backend more flexible in matching these overrides by:
1. Trying the exact guest ID first
2. Checking for partial matches that end with -node-vmid
3. Trying alternative ID formats like node-vmid and instance-node-vmid
This ensures custom alert thresholds work correctly regardless of the cluster name format used when saving overrides.
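The three-step matching order above can be sketched as follows (the map representation and helper name are illustrative, not Pulse's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// findOverride tries the exact guest ID, then any stored ID ending in
// -node-vmid (to absorb differing cluster-name prefixes), then the
// alternative formats node-vmid and instance-node-vmid.
func findOverride(overrides map[string]float64, guestID, instance, node string, vmid int) (float64, bool) {
	// 1. Exact guest ID.
	if v, ok := overrides[guestID]; ok {
		return v, true
	}
	// 2. Partial match on the -node-vmid suffix.
	suffix := fmt.Sprintf("-%s-%d", node, vmid)
	for id, v := range overrides {
		if strings.HasSuffix(id, suffix) {
			return v, true
		}
	}
	// 3. Alternative ID formats.
	for _, alt := range []string{
		fmt.Sprintf("%s-%d", node, vmid),
		fmt.Sprintf("%s-%s-%d", instance, node, vmid),
	} {
		if v, ok := overrides[alt]; ok {
			return v, true
		}
	}
	return 0, false
}

func main() {
	overrides := map[string]float64{"my-cluster-node1-101": 90}
	v, ok := findOverride(overrides, "pve-node1-101", "pve", "node1", 101)
	fmt.Println(v, ok) // 90 true: matched via the -node1-101 suffix
}
```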
- Changed min value from 0 to -1 on all threshold input fields
- Updated formatMetricValue to show 'Off' for disabled thresholds
- Added help text explaining that 0 or -1 disables alerts
- Backend already supports threshold.Trigger <= 0 check
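The display behavior can be sketched like this (the function name is illustrative; the `<= 0` condition mirrors the backend's threshold.Trigger check):

```go
package main

import "fmt"

// formatThreshold renders a threshold value: anything <= 0 (i.e. 0 or -1)
// disables the alert and is shown as "Off".
func formatThreshold(trigger float64) string {
	if trigger <= 0 {
		return "Off"
	}
	return fmt.Sprintf("%.0f%%", trigger)
}

func main() {
	fmt.Println(formatThreshold(-1)) // Off
	fmt.Println(formatThreshold(0))  // Off
	fmt.Println(formatThreshold(85)) // 85%
}
```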
- Always try cluster/resources endpoint first (works on standalone nodes too)
- Only fall back to traditional polling for very old Proxmox versions
- Confirmed working on standalone nodes like pimox
- Significantly reduces API calls and improves performance
- Addresses efficiency concerns from #447
PVE polling is hardcoded to 10s since Proxmox cluster/resources endpoint only updates every 10s internally. Setting faster polling intervals was wasteful and provided no benefit.
Removed:
- POLLING_INTERVAL env variable and all references
- pollingInterval from config structs and API responses
- UI settings for polling interval (already removed)
- Dynamic polling interval updates via SIGHUP
- Legacy persistence code for saving polling settings
The monitoring loop now uses a hardcoded 10s interval matching Proxmox's update frequency.
Strip trailing slashes and paths from URLs before parsing host:port
to prevent "invalid port number" errors when users add nodes with
URLs like https://192.168.xxx.xxx:8006/
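A sketch of the normalization, assuming the standard library's net/url (the function name and sample addresses are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeHostPort strips trailing slashes and discards any path from a
// user-supplied URL, returning only host:port for downstream parsing.
func normalizeHostPort(raw string) (string, error) {
	u, err := url.Parse(strings.TrimRight(raw, "/"))
	if err != nil {
		return "", err
	}
	return u.Host, nil // host:port only; any path is dropped
}

func main() {
	h, _ := normalizeHostPort("https://192.168.1.10:8006/")
	fmt.Println(h) // 192.168.1.10:8006
}
```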
Mock VMs and containers were using 'node:qemu/vmid' format but the alert
system expects 'instance-node-vmid' format. This caused custom thresholds
to be ignored for mock guests.