- Updated all tables to match Node Summary Table's cleaner aesthetic
- Consistent rounded corners, shadows, and border styling
- Cleaner header rows with gray-500 text and no background colors
- Added row dividers using divide-y for better visual separation
- Made node group headers more subtle with 50% opacity backgrounds
- Kept row heights compact with py-0.5 padding
- Improved overall visual consistency across the UI
- Add expandable namespace rows to PBS instances table
- Show deduplication factor from PBS GC status (calculated from index-data-bytes/disk-bytes)
- Move deduplication display to bottom left of backup frequency chart
- Add namespace highlighting when filtered (blue background, filtering indicator)
- Fix backup frequency chart to properly handle PBS namespace filters
- Allow clicking namespace again to clear filter (toggle behavior)
- Improve visual feedback for selected namespaces with color changes
PBS doesn't expose the deduplication factor in its standard datastore status endpoint; calculating it properly would require garbage collection stats or chunk store data.
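The calculation mentioned above (index-data-bytes divided by disk-bytes from the PBS GC status) can be sketched as follows; the two field names come from the GC status payload, while the helper names and interface shape are illustrative, not the actual Pulse implementation:

```typescript
// Sketch: deriving a deduplication factor from PBS garbage-collection
// status. "index-data-bytes" is the logical size referenced by backup
// indexes; "disk-bytes" is the actual chunk store size on disk.
interface GCStatus {
  "index-data-bytes"?: number;
  "disk-bytes"?: number;
}

function dedupFactor(gc: GCStatus): number | null {
  const logical = gc["index-data-bytes"];
  const physical = gc["disk-bytes"];
  if (!logical || !physical) return null; // GC hasn't run or fields missing
  return logical / physical;
}

// Render as e.g. "Deduplication: 2.5:1"
function formatDedup(factor: number): string {
  return `Deduplication: ${factor.toFixed(1)}:1`;
}
```

Returning `null` when either field is absent lets the UI simply omit the badge for datastores that haven't completed a garbage-collection run yet.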
- PBS instances with datastores/namespaces now have expand/collapse buttons
- expanded view shows hierarchical structure: instance > datastore > namespace
- clicking a namespace filters the backup list to that specific namespace
- displays datastore storage usage and deduplication factor when available
- namespace filter format: pbs:instanceName:datastoreName:namespace
- capture deduplication_factor from PBS API datastore status endpoint
- display average deduplication ratio in backup frequency chart header
- shows as green 'Deduplication: X.X:1' when PBS datastores provide this data
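The filter format described above (`pbs:instanceName:datastoreName:namespace`) can be parsed with a small helper; the function and type names here are illustrative, not the actual Pulse code:

```typescript
// Sketch: parsing the "pbs:instanceName:datastoreName:namespace"
// filter string into its components.
interface NamespaceFilter {
  instance: string;
  datastore: string;
  namespace: string;
}

function parseNamespaceFilter(filter: string): NamespaceFilter | null {
  const parts = filter.split(":");
  if (parts.length < 4 || parts[0] !== "pbs") return null;
  const [, instance, datastore, ...rest] = parts;
  // Re-join any remainder defensively so extra segments aren't dropped.
  return { instance, datastore, namespace: rest.join(":") };
}
```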
- Removed All/VM Disks/CT Volumes filter buttons from Storage tab that weren't functional
- Removed PBS badge next to PBS instance names in Backups tab (redundant since it's already in PBS table)
- Added subtle scale effect (1.01x) on row hover
- Added shadow and left border indicator on hover
- Smooth 150ms transitions for all animations
- Visual feedback makes it clear rows are clickable
- Applied to both PVE and PBS node tables
- Fixed reactivity issue where PVE node tables weren't showing on hard refresh
- Removed component re-mounting caused by IIFE wrapper in App.tsx
- Added text truncation with ellipsis to prevent row height changes
- Fixed table visibility to properly hide when filtering excludes all nodes
- Added cache-busting headers to ensure browser loads latest JS/CSS files
- When a search matches local/snapshot backups, only display nodes that hold those specific backups
- PBS/remote backups are shared across nodes, so on their own they shouldn't cause every node to be shown
- If a search returns only PBS backups, show the nodes with access to them
- Prevents all nodes from showing when searching for a guest that only exists on one node
- PVE node table now only shows nodes with actual PVE backups (not PBS)
- PBS node table filters based on PBS-specific backups
- Tables correctly hide entirely when no matching nodes after filtering
- Fixes issue where PVE header remained visible with PBS-only search results
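The node-filtering rule described above can be sketched as a predicate: a node appears in the PVE table only if the filtered results include a local/snapshot (PVE) backup stored on that node, since PBS backups are shared and don't pin results to a node. Types and field names here are illustrative:

```typescript
// Sketch: which nodes stay visible in the PVE node table after a search.
interface Backup {
  node?: string;          // set for local/snapshot (PVE) backups
  source: "pve" | "pbs";
}

function visiblePveNodes(allNodes: string[], matches: Backup[]): string[] {
  const pveNodes = new Set(
    matches
      .filter((b) => b.source === "pve" && b.node)
      .map((b) => b.node!)
  );
  // An empty result hides the PVE table entirely when the search
  // matched only PBS backups.
  return allNodes.filter((n) => pveNodes.has(n));
}
```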
- Node tables now dynamically update based on search in all tabs
- Storage tab filters nodes to show only those with matching storage
- Backups tab filters nodes to show only those with matching backups
- Counts update to reflect filtered items in each tab
- Consistent search experience across Dashboard, Storage, and Backups
- Remove max-width constraint on search fields to utilize available space
- Node summary table now updates based on search/filter criteria
- Only show nodes with matching guests when filtering is active
- Calculate node metrics based on filtered guests only
- Show matched guest count in node summary when filtering
- Provides better visual feedback on what the filters are affecting
- Created comprehensive mock data generator for nodes, VMs, containers
- Added toggle scripts for easy switching between real and mock mode
- Integrated with backend-watch.sh for auto-rebuild with mock support
- Modified monitor to skip polling when mock mode is enabled
- Added CLAUDE.md documentation for future sessions
Note: the mock system initializes, but its data isn't fully integrated with GetState() yet; Pulse currently shows mixed real and mock data, which works for UI testing purposes.
addresses #356 - node click filtering now works with:
- 1-4 nodes (regular cards - already worked)
- 5-9 nodes (compact cards - now fixed)
- 10+ nodes (ultra-compact list - now fixed)
clicking any node box filters VMs to that node only, regardless of how many nodes are displayed
- Setup scripts now accept both temporary setup codes and permanent API tokens
- Setup codes (6 chars): For manual setup by others, expire in 5 minutes
- API tokens: For automation and trusted environments, no expiration
- Modified auto-registration endpoint to accept API tokens directly
- Fixed JSON escaping issues with exclamation marks in bash scripts
- Updated README with clear documentation of both authentication methods
- Discovery modal now shows cached results immediately while scanning
This enables both secure manual setup (via temporary codes) and reliable
automation (via API tokens) without compromising security.
The discovery functionality was broken because the router was using a
simple GET-only handler instead of the complete HandleDiscoverServers
function that supports both GET (cached results) and POST (manual scans
with subnet parameters).
Changes:
- Updated router to use configHandlers.HandleDiscoverServers instead of r.handleDiscovery
- Removed the redundant handleDiscovery function
- Discovery endpoint now supports both GET and POST methods as expected by frontend
- Added proper authentication requirement for discovery endpoint
This addresses the discovery being broken in the latest RC releases.
Added detailed VM disk monitoring checks to the diagnostics page:
- Tests actual guest agent connectivity for each node
- Shows how many VMs have agents configured vs working
- Performs a detailed test on one VM and reports the result
- Provides specific recommendations based on the error encountered
- Shows SUCCESS when disk monitoring is working properly
This helps users quickly identify why VM disk monitoring might not be working:
- Guest agent not installed/running
- Permission issues with API tokens
- VM configuration problems
The diagnostics clearly show when everything is working (like the delly.lan cluster showing 19.3% disk usage) vs when there are issues to resolve.
TESTED AND CONFIRMED: API tokens CAN access guest agent data on PVE 9!
- Created test tokens and verified they work
- Guest agent API returns proper disk usage data
- The cluster/resources endpoint shows disk=0 but that's not what Pulse uses
- Pulse correctly fetches data via /nodes/{node}/qemu/{vmid}/agent/get-fsinfo
The claim that PVE 9 doesn't work was simply wrong. It works when properly configured with the PVEAuditor role, which includes the VM.GuestAgent.Audit permission.
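For reference, a minimal sketch of calling the `/nodes/{node}/qemu/{vmid}/agent/get-fsinfo` endpoint mentioned above. The URL path comes from the notes above and the `PVEAPIToken` header is the standard Proxmox API token authorization format; the host, node, vmid, and token values are placeholders and the helper names are illustrative:

```typescript
// Build the guest-agent filesystem-info URL (kept separate so it can
// be unit-tested without network access).
function fsInfoUrl(host: string, node: string, vmid: number): string {
  return `https://${host}:8006/api2/json/nodes/${node}/qemu/${vmid}/agent/get-fsinfo`;
}

// Sketch: fetch filesystem info via an API token. Each entry in the
// response includes total-bytes and used-bytes when the guest agent
// is running inside the VM.
async function getVmFsInfo(
  host: string,
  node: string,
  vmid: number,
  token: string // e.g. "monitor@pam!pulse=<secret-uuid>" (placeholder)
): Promise<unknown> {
  const res = await fetch(fsInfoUrl(host, node, vmid), {
    headers: { Authorization: `PVEAPIToken=${token}` },
  });
  if (!res.ok) throw new Error(`get-fsinfo failed: ${res.status}`);
  const body = (await res.json()) as { data: unknown };
  return body.data;
}
```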
Stop making definitive claims about what works or doesn't work. The reality:
- Some users (like you) have it working fine in cluster configs
- Others report 0% disk usage
- The exact conditions that make it work are unclear
- Results vary between different setups
Updated all docs and messages to reflect this uncertainty rather than making false claims about non-existent workarounds or absolute limitations.
Previous advice was completely wrong. The facts:
- VM.Monitor permission doesn't exist in PVE 9 (was removed)
- It was replaced with VM.GuestAgent.Audit
- But even with correct permissions, API tokens CANNOT access guest agent data on PVE 9
- This is Proxmox bug #1373 with NO working workaround for API tokens
- Users must accept 0% VM disk usage on PVE 9 until Proxmox fixes it upstream
Updated all documentation and error messages to reflect this reality instead of giving false hope about non-existent workarounds.
The root@pam suggestion doesn't actually work since it requires the Linux system root password, not a Proxmox-specific password. Most users don't know or have disabled their Linux root password for security.
Updated all documentation and error messages to correctly advise users to grant VM.Monitor permission to their API token user instead.
addresses #349 - fixed the issue where toggling all Proxmox Node alerts would skip some nodes on subsequent clicks. The problem was that multiple toggle operations in a loop were reading from the same state snapshot.
- implemented batchToggleNodeConnectivity and batchToggleDisabled functions
- these functions collect all changes and apply them atomically
- ensures all resources are properly toggled to the target state
- fixes the issue where individual nodes (like 'pi') weren't toggling correctly
fixed issue where toggling all alerts for Proxmox Nodes would skip some nodes on the second click. The logic now properly checks each node's current state and only toggles those that need to change to reach the target state.
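The batch-toggle fix described above can be sketched as follows: instead of toggling each node in a loop (where each iteration reads a stale state snapshot), compute the target state once, build the full change set, and apply it in a single update. The store shape and function body here are illustrative, not the actual `batchToggleDisabled` implementation:

```typescript
// Sketch: atomic batch toggle. If any node is still enabled, disable
// all of them; otherwise enable all of them. Returned map is applied
// in one state update, so no node reads a stale snapshot.
type DisabledMap = Record<string, boolean>;

function batchToggleDisabled(
  current: DisabledMap,
  nodeIds: string[]
): DisabledMap {
  const target = nodeIds.some((id) => !current[id]);
  const next = { ...current };
  for (const id of nodeIds) next[id] = target;
  return next;
}
```

Because every node is written to the same `target` value, a second click flips all of them back together, which is exactly the behavior that was breaking when toggles were applied one at a time.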
addresses #349 - adds a toggle button in the Alerts column header that allows users to enable/disable all alerts for a resource type with one click. This is especially helpful when managing many VMs, containers, or storage devices.
- added toggle icon in Alerts column header for VMs & Containers, Storage, and Nodes tables
- icon shows current state (eye for enabled, eye-slash for disabled)
- clicking toggles all resources in that table between enabled/disabled
- for nodes, toggles connectivity alerts instead of general disable flag
When the instance name equals the node name (common in single-node setups),
avoid generating redundant IDs like "pve-pve-100" by using just "pve-100".
This fixes alert acknowledgment issues where the UI couldn't match alert
IDs due to the duplicate node name pattern.
Addresses #353
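The de-duplication rule above amounts to a one-line check when generating resource IDs; the function name and signature here are illustrative:

```typescript
// Sketch: skip the node segment when the instance name equals the
// node name, so single-node setups get "pve-100" instead of "pve-pve-100".
function resourceId(instance: string, node: string, vmid: number): string {
  return instance === node
    ? `${instance}-${vmid}`
    : `${instance}-${node}-${vmid}`;
}
```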
The install script now provides convenient management options:
- --reset: Stops Pulse, removes config/data, restarts with fresh config
- --uninstall: Completely removes Pulse from the system
Also simplified the post-install message to show these one-liner commands instead of listing manual steps.
The version comparison function was attempting numeric comparisons on version parts containing RC suffixes (e.g., "0-rc" from "4.8.0-rc.2"), causing an "unbound variable" error due to set -u.
Now properly strips and handles pre-release suffixes separately, allowing correct comparison of RC versions.
Addresses discussion #344 comment from RLSinRFV
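The actual fix lives in the bash install script, but the comparison logic can be sketched in TypeScript: split off a pre-release suffix like `-rc.2` before comparing numeric parts, so `4.8.0-rc.2` never feeds `0-rc` into a numeric comparison, and treat a stable release as newer than any pre-release of the same core version. Names are illustrative:

```typescript
// Sketch: version comparison that handles pre-release suffixes.
function parseVersion(v: string): { parts: number[]; pre: string } {
  const [core, ...preParts] = v.split("-");
  return {
    parts: core.split(".").map((n) => parseInt(n, 10)),
    pre: preParts.join("-"), // "" for stable releases
  };
}

// Returns -1, 0, or 1 as a is older than, equal to, or newer than b.
function compareVersions(a: string, b: string): number {
  const va = parseVersion(a);
  const vb = parseVersion(b);
  for (let i = 0; i < Math.max(va.parts.length, vb.parts.length); i++) {
    const d = (va.parts[i] ?? 0) - (vb.parts[i] ?? 0);
    if (d !== 0) return Math.sign(d);
  }
  // Same core version: a stable release outranks any pre-release.
  if (va.pre === vb.pre) return 0;
  if (va.pre === "") return 1;
  if (vb.pre === "") return -1;
  return va.pre < vb.pre ? -1 : 1; // lexicographic tiebreak, good enough for rc.N < rc.M
}
```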
The setup code section in the modal is no longer shown when the auth token
is already embedded in the setup script URL. Since the token is included
as auth_token parameter, there's no need for users to see or enter it.
Users now see clear instructions for:
- Resetting configuration to start fresh (keeping Pulse installed)
- Complete removal of Pulse (uninstall everything)
This helps users who need to troubleshoot or start over with a clean slate.
- Add verification steps for qemu-guest-agent service status
- Clarify that the service is socket-activated (so systemctl enable isn't needed)
- Add diagnostic commands users can run to verify agent is working
- Update FAQ with correct troubleshooting steps for agent issues
This helps users like @RLSinRFV who were trying to enable the service
when it's actually socket-activated and should start automatically.
The real issue for PVE 8 users seeing 0% disk usage:
- Users who added nodes BEFORE v4.7 don't have VM.Monitor permission
- The setup script always created tokens with privsep=0, so that wasn't the issue
- Solution: Re-run the setup script or manually add VM.Monitor permission
Updated error messages and documentation to reflect the actual cause
and provide the correct fix for users experiencing this issue.
- Add detailed logging when VM disk monitoring fails due to permissions
- Explain Proxmox 9 limitation: API tokens cannot access guest agent data (PVE bug #1373)
- Explain Proxmox 8 requirements: VM.Monitor permission and privsep=0 for tokens
- Update setup script to show appropriate warnings for each PVE version
- Update FAQ with troubleshooting steps for 0% disk usage on VMs
- Log messages now clearly indicate workarounds for each scenario
The core issue: Proxmox 9 removed VM.Monitor permission and the replacement
permissions don't allow API tokens to access guest agent filesystem info.
This is a Proxmox upstream bug that affects their own web UI as well.
For users experiencing this issue:
- PVE 9: Use root@pam credentials or wait for Proxmox to fix upstream
- PVE 8: Ensure token has VM.Monitor and privsep=0
- All versions: QEMU guest agent must be installed in VMs
- LXC containers run as root and don't have sudo installed
- Updated all documentation to remove sudo references
- Updated frontend UI to show correct install command
- Keep sudo mention only in troubleshooting for edge cases
- Changed from showing just percentage to "X.X GB free of Y.Y GB (Z% used)"
- Much more useful for users to see actual available space
- Applies to both Quick and Advanced installation modes
addresses #348
After extensive testing and research:
CONFIRMED: This is a Proxmox 9 API limitation, not a configuration issue
- Guest agent get-fsinfo works when called as root (qm agent <vmid> get-fsinfo)
- API tokens CANNOT access this data even with VM.GuestAgent.Audit permission
- Proxmox's own web UI also shows 0% for VM disk usage (bug #1373)
Updated:
- Setup script now clearly explains this is a known Proxmox limitation
- Changed log level from Warn to Debug for permission errors (expected on PVE 9)
- Added references to Proxmox bug #1373
Workarounds for users:
1. Use root@pam credentials instead of API tokens for full VM disk monitoring
2. Container (LXC) disk usage works correctly with tokens
3. Wait for Proxmox to fix this upstream
The guest agent returns the data (total-bytes, used-bytes) but Proxmox's
API doesn't allow token access to it. This is not something we can fix
in Pulse - it needs to be addressed in Proxmox itself.
addresses #348
After testing on actual PVE 9.0.5 nodes:
- Confirmed VM.Monitor privilege was removed in PVE 9
- PVEAuditor role includes VM.GuestAgent.Audit permission
- Added Sys.Audit permission (replacement for VM.Monitor)
- Added clear warning about known PVE 9 guest agent limitations
The issue appears to be a Proxmox 9 limitation where even with correct
permissions (VM.GuestAgent.Audit + Sys.Audit), the guest agent API may
not return disk usage data for non-root tokens. This is likely a bug or
intentional security restriction in Proxmox 9 that needs to be addressed
upstream.
Updated setup script to:
1. Properly detect PVE 9 and add appropriate permissions
2. Warn users about the known limitation
3. Suggest workarounds (using root credentials if needed)
addresses #348
- Updated setup script to properly detect and handle Proxmox 9 where VM.Monitor was removed
- For PVE 9+, now creates custom role with Sys.Audit permissions (replaces VM.Monitor)
- Attempts to add VM.Agent or Sys.Modify permissions for better guest agent access
- Added better error logging to identify permission issues with guest agent API
- Warns users about PVE 9 permission requirements if disk usage shows 0%
The setup script now:
1. Properly detects PVE version using pveversion command
2. Creates appropriate roles based on PVE version (VM.Monitor for PVE 8, Sys.Audit for PVE 9)
3. Provides clear instructions if guest agent access still doesn't work
The SecurityHeaders middleware was not being applied to the router,
causing the "Allow iframe embedding" setting to not take effect.
This fix properly applies the middleware with the saved settings,
allowing iframe embedding to work when enabled.
addresses #351
addresses #352
quick mode now:
- shows available network bridges and prompts for selection
- shows available storage pools with usage info and prompts for selection
- properly handles cases where defaults (vmbr0, local-lvm) don't exist
- gives clear error messages when no bridges or storage pools are found
this ensures users always see what's available and can make informed choices
even in quick mode, preventing installation failures due to missing defaults
addresses #352
the installer now:
- detects the actual default network interface (not just vmbr*)
- uses the first available bridge if default isn't a bridge
- prompts user to select a bridge when vmbr0 doesn't exist
- shows helpful messages when no bridges are detected
this fixes issues on systems with non-standard network configurations
where vmbr0 doesn't exist or isn't the default gateway
Initialize RC_VERSION to empty string before assignment to prevent
'unbound variable' errors when running with set -u. This ensures
the RC update option is shown when running a stable version.
The version detection regex now captures the full version string including
pre-release suffixes like -rc.1, -beta.2, etc. This prevents the script
from offering to update to a version that's already installed.
- Fixed safe_read function to properly handle TTY availability
- Added proper error handling for compare_versions return codes
- Script no longer exits silently when selecting menu options