Mock mode now properly returns simulated data including PMG host backups.
The monitor's GetState() method now checks for mock mode and returns
mock data when enabled, allowing full testing of UI features without
real Proxmox nodes.
Addresses #359 - PMG host config backups with VMID=0 are now correctly
identified as "Host" type instead of being misidentified as LXC containers.
Added purple color scheme for Host type backups in the UI.
When multiple clusters were added, shared storage from different clusters
would use the same ID (e.g., 'shared-local'), causing storage from one
cluster to overwrite storage from another. Now using instance-specific IDs
for shared storage to ensure each cluster's storage is properly tracked.
PBS often runs in Docker containers, so the container check was preventing
legitimate setups. Also fixed the script to check for proxmox-backup-manager
instead of pveum (which is PVE-only).
- Added required field validation for name, type, and host in node configuration
- Added duplicate node prevention by name (returns 409 Conflict)
- Added IP address format validation to reject invalid IPs
- Added port range validation (1-65535)
- Added validation for negative polling intervals in system settings
- Added HEAD request support for health and version endpoints
- Reduced node addition timeout from 10s to 3s to prevent UI hanging
These validation improvements were discovered through comprehensive testing
and prevent invalid data from being accepted by the API.
- Changed deduplication display from ratio (14.4:1) to multiplier (14.4x)
- Added encryption indicators for PBS backups (lock icon)
- Added owner column showing who created each PBS backup
- Fixed owner display to use separate column instead of cramped next to node name
- Added owner field to PBSBackup model and populated from PBS API
These improvements make it easier to understand backup status at a glance
The setup script no longer mentions VM disk monitoring at all, as requested.
This avoids confusion about what works or doesn't work on different
Proxmox versions. The permissions are still set up correctly behind
the scenes, but users don't need to see confusing information about it.
The setup script was incorrectly claiming that VM disk monitoring works
on Proxmox 9 with API tokens. This is not true due to an upstream
Proxmox limitation where API tokens cannot access guest agent data
even with the correct permissions.
Updated the setup script to clearly explain:
- This is a known Proxmox 9 limitation, not a Pulse issue
- API tokens are blocked from accessing get-fsinfo
- Available workarounds (use root@pam or wait for upstream fix)
- Link to issue #348 for full context
This should prevent further confusion for users running Proxmox 9.
- Add PBS alert monitoring (CPU, memory, offline detection)
- Add storage offline detection with proper cluster awareness
- Remove bulk toggle feature from thresholds UI (unnecessary complexity)
- Add enable/disable buttons for PBS servers in thresholds tab
- Fix storage offline detection to avoid false positives in clusters
(only alert on truly offline storage, not inactive cluster storage)
Alert improvements:
- PBS instances now properly monitored like nodes
- Storage devices generate offline alerts with confirmation system
- All resource types support custom thresholds and disable toggles
- Consistent alert ID format across all resource types
- Proper hysteresis and confirmation counts to prevent flapping
addresses #123 (if there was an issue about missing PBS alerts)
- capture deduplication_factor from PBS API datastore status endpoint
- display average deduplication ratio in backup frequency chart header
- shows as green 'Deduplication: X.X:1' when PBS datastores provide this data
- Fixed reactivity issue where PVE node tables weren't showing on hard refresh
- Removed component re-mounting caused by IIFE wrapper in App.tsx
- Added text truncation with ellipsis to prevent row height changes
- Fixed table visibility to properly hide when filtering excludes all nodes
- Added cache-busting headers to ensure browser loads latest JS/CSS files
- Remove max-width constraint on search fields to utilize available space
- Node summary table now updates based on search/filter criteria
- Only show nodes with matching guests when filtering is active
- Calculate node metrics based on filtered guests only
- Show matched guest count in node summary when filtering
- Provides better visual feedback on what the filters are affecting
- Created comprehensive mock data generator for nodes, VMs, containers
- Added toggle scripts for easy switching between real and mock mode
- Integrated with backend-watch.sh for auto-rebuild with mock support
- Modified monitor to skip polling when mock mode is enabled
- Added CLAUDE.md documentation for future sessions
Note: Mock system initializes but data isn't fully integrated with GetState() yet.
Currently shows mixed real + mock data. Works for UI testing purposes.
- Setup scripts now accept both temporary setup codes and permanent API tokens
- Setup codes (6 chars): For manual setup by others, expire in 5 minutes
- API tokens: For automation and trusted environments, no expiration
- Modified auto-registration endpoint to accept API tokens directly
- Fixed JSON escaping issues with exclamation marks in bash scripts
- Updated README with clear documentation of both authentication methods
- Discovery modal now shows cached results immediately while scanning
This enables both secure manual setup (via temporary codes) and reliable
automation (via API tokens) without compromising security.
The discovery functionality was broken because the router was using a
simple GET-only handler instead of the complete HandleDiscoverServers
function that supports both GET (cached results) and POST (manual scans
with subnet parameters).
Changes:
- Updated router to use configHandlers.HandleDiscoverServers instead of r.handleDiscovery
- Removed the redundant handleDiscovery function
- Discovery endpoint now supports both GET and POST methods as expected by frontend
- Added proper authentication requirement for discovery endpoint
This addresses the discovery being broken in the latest RC releases.
Added detailed VM disk monitoring checks to the diagnostics page:
- Tests actual guest agent connectivity for each node
- Shows how many VMs have agents configured vs working
- Performs a detailed test on one VM and reports the result
- Provides specific recommendations based on the error encountered
- Shows SUCCESS when disk monitoring is working properly
This helps users quickly identify why VM disk monitoring might not be working:
- Guest agent not installed/running
- Permission issues with API tokens
- VM configuration problems
The diagnostics clearly show when everything is working (like the delly.lan cluster showing 19.3% disk usage) vs when there are issues to resolve.
TESTED AND CONFIRMED: API tokens CAN access guest agent data on PVE 9!
- Created test tokens and verified they work
- Guest agent API returns proper disk usage data
- The cluster/resources endpoint shows disk=0 but that's not what Pulse uses
- Pulse correctly fetches data via /nodes/{node}/qemu/{vmid}/agent/get-fsinfo
The misinformation about PVE 9 not working was completely wrong. It does work when properly configured with PVEAuditor role which includes VM.GuestAgent.Audit permission.
Stop making definitive claims about what works or doesn't work. The reality:
- Some users (like you) have it working fine in cluster configs
- Others report 0% disk usage
- The exact conditions that make it work are unclear
- Results vary between different setups
Updated all docs and messages to reflect this uncertainty rather than making false claims about non-existent workarounds or absolute limitations.
Previous advice was completely wrong. The facts:
- VM.Monitor permission doesn't exist in PVE 9 (was removed)
- It was replaced with VM.GuestAgent.Audit
- But even with correct permissions, API tokens CANNOT access guest agent data on PVE 9
- This is Proxmox bug #1373 with NO working workaround for API tokens
- Users must accept 0% VM disk usage on PVE 9 until Proxmox fixes it upstream
Updated all documentation and error messages to reflect this reality instead of giving false hope about non-existent workarounds.
The root@pam suggestion doesn't actually work since it requires the Linux system root password, not a Proxmox-specific password. Most users don't know or have disabled their Linux root password for security.
Updated all documentation and error messages to correctly advise users to grant VM.Monitor permission to their API token user instead.
When the instance name equals the node name (common in single-node setups),
avoid generating redundant IDs like "pve-pve-100" by using just "pve-100".
This fixes alert acknowledgment issues where the UI couldn't match alert
IDs due to the duplicate node name pattern.
Addresses #353
The setup code section in the modal is no longer shown when the auth token
is already embedded in the setup script URL. Since the token is included
as auth_token parameter, there's no need for users to see or enter it.
- Add verification steps for qemu-guest-agent service status
- Clarify that the service is socket-activated (not systemctl enable)
- Add diagnostic commands users can run to verify agent is working
- Update FAQ with correct troubleshooting steps for agent issues
This helps users like @RLSinRFV who were trying to enable the service
when it's actually socket-activated and should start automatically.
The real issue for PVE 8 users seeing 0% disk usage:
- Users who added nodes BEFORE v4.7 don't have VM.Monitor permission
- The setup script always created tokens with privsep=0, so that wasn't the issue
- Solution: Re-run the setup script or manually add VM.Monitor permission
Updated error messages and documentation to reflect the actual cause
and provide the correct fix for users experiencing this issue.
- Add detailed logging when VM disk monitoring fails due to permissions
- Explain Proxmox 9 limitation: API tokens cannot access guest agent data (PVE bug #1373)
- Explain Proxmox 8 requirements: VM.Monitor permission and privsep=0 for tokens
- Update setup script to show appropriate warnings for each PVE version
- Update FAQ with troubleshooting steps for 0% disk usage on VMs
- Log messages now clearly indicate workarounds for each scenario
The core issue: Proxmox 9 removed VM.Monitor permission and the replacement
permissions don't allow API tokens to access guest agent filesystem info.
This is a Proxmox upstream bug that affects their own web UI as well.
For users experiencing this issue:
- PVE 9: Use root@pam credentials or wait for Proxmox to fix upstream
- PVE 8: Ensure token has VM.Monitor and privsep=0
- All versions: QEMU guest agent must be installed in VMs
addresses #348
After extensive testing and research:
CONFIRMED: This is a Proxmox 9 API limitation, not a configuration issue
- Guest agent get-fsinfo works when called as root (qm agent <vmid> get-fsinfo)
- API tokens CANNOT access this data even with VM.GuestAgent.Audit permission
- Proxmox's own web UI also shows 0% for VM disk usage (bug #1373)
Updated:
- Setup script now clearly explains this is a known Proxmox limitation
- Changed log level from Warn to Debug for permission errors (expected on PVE 9)
- Added references to Proxmox bug #1373
Workarounds for users:
1. Use root@pam credentials instead of API tokens for full VM disk monitoring
2. Container (LXC) disk usage works correctly with tokens
3. Wait for Proxmox to fix this upstream
The guest agent returns the data (total-bytes, used-bytes) but Proxmox's
API doesn't allow token access to it. This is not something we can fix
in Pulse - it needs to be addressed in Proxmox itself.
addresses #348
After testing on actual PVE 9.0.5 nodes:
- Confirmed VM.Monitor privilege was removed in PVE 9
- PVEAuditor role includes VM.GuestAgent.Audit permission
- Added Sys.Audit permission (replacement for VM.Monitor)
- Added clear warning about known PVE 9 guest agent limitations
The issue appears to be a Proxmox 9 limitation where even with correct
permissions (VM.GuestAgent.Audit + Sys.Audit), the guest agent API may
not return disk usage data for non-root tokens. This is likely a bug or
intentional security restriction in Proxmox 9 that needs to be addressed
upstream.
Updated setup script to:
1. Properly detect PVE 9 and add appropriate permissions
2. Warn users about the known limitation
3. Suggest workarounds (using root credentials if needed)
addresses #348
- Updated setup script to properly detect and handle Proxmox 9 where VM.Monitor was removed
- For PVE 9+, now creates custom role with Sys.Audit permissions (replaces VM.Monitor)
- Attempts to add VM.Agent or Sys.Modify permissions for better guest agent access
- Added better error logging to identify permission issues with guest agent API
- Warns users about PVE 9 permission requirements if disk usage shows 0%
The setup script now:
1. Properly detects PVE version using pveversion command
2. Creates appropriate roles based on PVE version (VM.Monitor for PVE 8, Sys.Audit for PVE 9)
3. Provides clear instructions if guest agent access still doesn't work
The SecurityHeaders middleware was not being applied to the router,
causing the "Allow iframe embedding" setting to not take effect.
This fix properly applies the middleware with the saved settings,
allowing iframe embedding to work when enabled.
addresses #351
Addresses #222 - Allow Pulse to be embedded in iframes (e.g., Homepage dashboard)
- Add AllowEmbedding and AllowedEmbedOrigins settings to SystemSettings
- Update security headers to respect embedding configuration
- When disabled: X-Frame-Options: DENY, frame-ancestors 'none'
- When enabled (same-origin): X-Frame-Options: SAMEORIGIN, frame-ancestors 'self'
- When enabled with origins: Adds specified origins to frame-ancestors
- Add UI controls in Settings → System → Network Settings
- Properly handle CSP frame-ancestors directive for cross-origin embedding
Users can now enable iframe embedding and specify allowed origins for embedding Pulse in Homepage or other dashboard applications.
Improved logging to help users diagnose why VM disk usage might not be showing:
- Clearly identify when agent is enabled in config but not running in guest OS
- Detect timeout issues with unresponsive agents
- Log when agent returns no filesystem info
- Show which filesystems are included/excluded from calculations
- Distinguish between no agent, agent not running, and agent working
This will help users understand exactly why their VM disk usage isn't showing
and what steps they need to take to fix it (install qemu-guest-agent, restart
the service, etc).
addresses discussion #344
The agent field in Proxmox can have values other than just 0 or 1 when features are enabled, causing the strict equality check (== 1) to fail. Changed to check for any value > 0 to properly detect when the agent is enabled.
addresses discussion #344
The temporary auth tokens generated by authenticated users are now properly
validated even when Pulse has authentication enabled. This fixes the issue
where fresh installs (which are secured by default) couldn't use the
auto-registration feature.
Replaced the two-step setup code process with a simpler token-in-URL approach:
- Auth token is now embedded directly in the setup URL
- No more prompting users for setup codes
- Same security level with better UX
- Backwards compatible with old setupCode field
The new flow generates a command like:
curl -sSL "http://pulse/api/setup-script?...&auth_token=TOKEN" | bash
This makes it much easier for users, especially in Proxmox shell where
interactive prompts can be problematic.
- The generated command now includes PULSE_SETUP_CODE environment variable
- Users can simply copy-paste the command in Proxmox shell without needing to type the code
- Makes the setup process more streamlined for the primary use case
- Add bulk acknowledge and clear operations for alerts
- Support selecting multiple alerts with checkboxes
- Add select all functionality for bulk operations
- Improve Proxmox permission setup to handle both PVE 8 and 9+
- Use PVEAuditor role which includes VM.GuestAgent.Audit for PVE 9+
- Add fallback VM.Monitor role for PVE 8 and below
- Bump version to 4.7.3
- Fixed parsing of pveversion output (uses colon separator not slash)
- Now correctly extracts version number from 'pve-manager: X.Y.Z' format
- addresses #348