Enhancements for OIDC authentication based on user feedback from issue #327:
1. Add OIDC logout URL support
- New OIDC_LOGOUT_URL environment variable
- UI field in OIDC settings panel for logout URL configuration
- Properly redirects to IdP logout endpoint (e.g., Authentik end-session)
- Stored in config and returned via security status API
2. Fix redirect URL help text in UI
- Handle empty defaultRedirect string properly
- Improved help text when PUBLIC_URL is not set
- Clarify when auto-detection vs manual config is needed
3. Documentation improvements
- Add note about using https:// in PUBLIC_URL/OIDC_REDIRECT_URL when behind TLS proxy
- Document OIDC_LOGOUT_URL environment variable
- Clarify X-Forwarded-Proto header behavior in OIDC docs
- Add better guidance for Authentik users on HTTPS setup
4. Frontend improvements
- Add HS256 signature algorithm error message in Login component
- Display OIDC logout URL when available
These changes address the remaining OIDC UX issues reported by users,
particularly around logout functionality and reverse proxy configuration.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes multiple OIDC authentication issues reported in GitHub issue #327:
1. Fix DISABLE_AUTH=true disabling OIDC sessions
- Reorder authentication checks to validate proxy auth and OIDC sessions
before checking the DISABLE_AUTH flag
- Allows OIDC to function even when basic auth is disabled
2. Fix missing username display for OIDC users
- Add GetSessionUsername() function to look up username from session ID
- Set X-Authenticated-User header for OIDC authenticated requests
- Update security status endpoint to return oidcUsername field
- Display OIDC username in UI header alongside logout button
3. Fix missing logout button for OIDC users
- Set hasAuth(true) when an OIDC session is detected in the frontend
- Update security status endpoint to return OIDC info even when
DISABLE_AUTH=true
- Properly initialize WebSocket and load user preferences for OIDC sessions
4. Add documentation for Authentik HS256/RS256 issue
- Document requirement for RSA signing key in Authentik
- Add troubleshooting entry for signature algorithm mismatch
- Provide clear resolution steps in CONFIGURATION.md and OIDC.md
All changes maintain backward compatibility and follow defensive security
practices. X-Forwarded-Proto header handling was verified to be correct.
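The reordering described in item 1 can be sketched as follows (the request struct and return values are illustrative assumptions, not Pulse's actual types):

```go
package main

import "fmt"

// request captures only the bits of an incoming request that matter here.
type request struct {
	hasProxyAuth   bool
	hasOIDCSession bool
}

// authenticate sketches the reordered checks: established identities
// (proxy auth, OIDC session) are honored before the DISABLE_AUTH
// short-circuit, so disabling basic auth no longer breaks OIDC.
func authenticate(r request, disableAuth bool) string {
	if r.hasProxyAuth {
		return "proxy"
	}
	if r.hasOIDCSession {
		return "oidc"
	}
	if disableAuth {
		return "anonymous" // DISABLE_AUTH only governs the no-identity case
	}
	return "unauthenticated"
}

func main() {
	// With the old order, DISABLE_AUTH=true short-circuited first and the
	// OIDC session was effectively ignored.
	fmt.Println(authenticate(request{hasOIDCSession: true}, true)) // prints "oidc"
}
```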
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
addresses #327
- added detailed logging when ID token verification fails
- added better error messages for common OIDC issues
- updated docs with Authentik-specific configuration
- added troubleshooting section for redirect loops and invalid_id_token errors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed from scary warnings to confident, reassuring tone:
Before:
- "⚠️ IMPORTANT: This grants SSH access..."
- Emphasized risks and compromise scenarios
- Made users feel unsafe enabling the feature
After:
- "Works just like Ansible, SaltStack, etc."
- Emphasizes this is industry-standard approach
- Compares to trusted automation tools
- Focuses on what it does, not what could go wrong
- Still transparent about security model
- Removes duplicate/contradictory sections
The feature is secure and follows best practices. The messaging should
reflect confidence in the design while still being transparent.
Users should feel good about enabling it, not scared.
- Make it clear SSH setup is OPTIONAL
- Explain security model upfront before user commits
- Detail exactly what access is being granted (root SSH, sensors only)
- Warn users to only proceed if they trust Pulse server
- Better differentiate public vs private keys
- Show exactly where the key is stored
- Explain how to revoke access
- Add comprehensive security documentation
- Include advanced option for command restrictions in authorized_keys
- Add risk assessment and best practices
This ensures users make informed decisions about SSH access to their
critical Proxmox infrastructure.
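The command-restriction option mentioned above uses standard OpenSSH authorized_keys options; a hedged example (the exact sensors command Pulse runs and the key material are placeholders):

```
command="/usr/bin/sensors -j",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...pulse-public-key... pulse-sensors
```

With this entry, the key can only run the pinned command regardless of what the client requests, and shell, port forwarding, and agent forwarding are all refused.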
Added detailed debug-level logs throughout the OIDC flow:
- Provider initialization (issuer, endpoints, scopes)
- Login flow tracking (client ID, redirect URL)
- Token exchange success/failure details
- Claims extraction (username, email, groups)
- Access control checks (why restrictions passed/failed)
Enhanced error logs to include issuer URL and actual error details in
audit events instead of generic "failed" messages.
Updated docs with Debug Logging section showing example output and
troubleshooting guidance for common issues like group restrictions.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Corrected widespread misinformation claiming API tokens cannot access guest agent data on Proxmox 9.
Changes:
- Rewrote VM_DISK_MONITORING.md with accurate technical explanation
- Deleted VM_DISK_STATS_TROUBLESHOOTING.md (contained false information)
- Updated FAQ.md with correct quick reference and troubleshooting link
- Added comprehensive VM disk troubleshooting section to TROUBLESHOOTING.md
- Fixed README.md troubleshooting reference
- Updated frontend tooltip to show accurate permission requirements
- Corrected backend log messages to remove "known limitation" language
- Updated test-vm-disk.sh diagnostic script with accurate guidance
Key corrections:
- API tokens work fine for guest agent queries on both PVE 8 and 9
- Proxmox API returning disk=0 is normal behavior, not a bug
- Both tokens and passwords work equally well
- Only requirements: guest agent installed + proper permissions
- Permission issues are config problems, not authentication method limitations
Documentation now provides clear user journey: FAQ → Troubleshooting → Full Guide
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added to README.md Quick Start section
- Added comprehensive docs in INSTALL.md
- Explains when and why to use the --main option
- Notes build dependencies requirement
- Always query guest agent for running VMs (cluster/resources API always returns 0)
- Show allocated disk size when guest agent unavailable (instead of misleading 0%)
- Fix duplicate mount point counting issue (#425)
- Add comprehensive logging for guest agent queries
- Include diagnostic script for troubleshooting VM disk issues
- Update both monitor.go and monitor_optimized.go for consistency
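The duplicate mount point fix (#425) amounts to keeping one entry per mountpoint; a sketch with assumed field names (the actual structs in monitor.go may differ):

```go
package main

import "fmt"

// fsInfo mirrors the shape of a guest-agent get-fsinfo entry; field names
// here are illustrative.
type fsInfo struct {
	Mountpoint string
	UsedBytes  uint64
	TotalBytes uint64
}

// dedupeByMountpoint keeps the first entry per mountpoint so bind mounts
// or repeated reports don't double-count usage.
func dedupeByMountpoint(in []fsInfo) []fsInfo {
	seen := make(map[string]bool, len(in))
	var out []fsInfo
	for _, fs := range in {
		if seen[fs.Mountpoint] {
			continue
		}
		seen[fs.Mountpoint] = true
		out = append(out, fs)
	}
	return out
}

func main() {
	fss := []fsInfo{
		{"/", 10, 100},
		{"/", 10, 100}, // same filesystem reported twice
		{"/data", 5, 50},
	}
	fmt.Println(len(dedupeByMountpoint(fss))) // 2
}
```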
- Implement proper API integration with list and detail endpoints
- Add ZFS pool and device status conversion
- Enable by default with PULSE_DISABLE_ZFS_MONITORING opt-out
- Test with real Proxmox nodes and verify functionality
- Add comprehensive error handling and logging
- Document feature configuration and requirements
The feature now properly:
- Fetches ZFS pool status from Proxmox API
- Detects degraded/faulted pools and devices
- Tracks read/write/checksum errors
- Generates appropriate alerts
- Displays issues in the Storage tab UI
Tested and verified working with real Proxmox clusters.
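The degraded/faulted detection described above could look roughly like this (pool and device shapes are assumptions; ZFS state strings such as ONLINE/DEGRADED/FAULTED are standard):

```go
package main

import "fmt"

type zfsDevice struct {
	Name                        string
	State                       string // ONLINE, DEGRADED, FAULTED, ...
	ReadErr, WriteErr, CksumErr uint64
}

type zfsPool struct {
	Name    string
	State   string
	Devices []zfsDevice
}

// poolNeedsAlert flags any pool that is not fully ONLINE, or whose devices
// have accumulated read/write/checksum errors.
func poolNeedsAlert(p zfsPool) bool {
	if p.State != "ONLINE" {
		return true // DEGRADED, FAULTED, UNAVAIL, etc.
	}
	for _, d := range p.Devices {
		if d.State != "ONLINE" || d.ReadErr+d.WriteErr+d.CksumErr > 0 {
			return true
		}
	}
	return false
}

func main() {
	p := zfsPool{Name: "tank", State: "ONLINE", Devices: []zfsDevice{
		{Name: "sda", State: "ONLINE", CksumErr: 3}, // errors on an otherwise ONLINE pool
	}}
	fmt.Println(poolNeedsAlert(p)) // true
}
```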
- Added to CONFIGURATION.md environment variables section
- Added to WEBHOOKS.md for Gotify and ntfy services
- Added to DOCKER.md environment variables reference and compose example
- Explains how to configure the full Pulse URL for webhook notification links
- Removed all complex recovery mechanisms from documentation
- Clear stance: forgotten password = start fresh (takes 2 minutes)
- No recovery mechanisms = no security vulnerabilities
- Pulse's simplicity is a feature, not a bug
- Updated both TROUBLESHOOTING.md and SECURITY.md
- Addresses GitHub discussion #413 with security-first approach
- Added Custom Headers section explaining the new UI feature
- Updated ntfy instructions with security note about topic names
- Added common header examples table for authentication
- Clarified how to add Bearer tokens and API keys
The ProxmoxVE Helper Script is no longer the recommended installation method.
Users should use the official install.sh script instead, which supports
creating LXC containers directly on Proxmox hosts.
For existing users confused about updating (like in discussion #407), they
can use 'pct enter' from the Proxmox host to access their container as root.
- Add DiskStatusReason field to track why disk stats are unavailable
- Show helpful tooltips in UI explaining specific issues:
- Proxmox 9 API token limitation (401 on guest agent endpoints)
- Guest agent not installed/running
- Special filesystems only (Live ISOs)
- Permission issues
- Add comprehensive troubleshooting guide (docs/VM_DISK_STATS_TROUBLESHOOTING.md)
- Document that API tokens cannot access guest agent data on PVE 9
- Tested and confirmed: only password/cookie auth works for guest agent on PVE 9
- Update README with quick reference to VM disk stats issue
This addresses issues #348, #367, and #71 by clearly explaining the root cause
(Proxmox API limitation) and providing actionable guidance to users.
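The DiskStatusReason-to-tooltip mapping might be sketched like this (reason codes and messages are illustrative; the real field values in Pulse may differ):

```go
package main

import "fmt"

// diskStatusTooltip maps a machine-readable reason to the UI tooltip text.
func diskStatusTooltip(reason string) string {
	switch reason {
	case "agent-not-running":
		return "QEMU guest agent is not installed or not running in this VM"
	case "agent-permission-denied":
		return "The configured Proxmox credentials lack permission to query the guest agent"
	case "special-filesystems-only":
		return "Only special filesystems reported (e.g. a Live ISO); no real disk usage available"
	default:
		return "" // unknown reason: show no tooltip
	}
}

func main() {
	fmt.Println(diskStatusTooltip("agent-not-running"))
}
```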
- Add proper JSON code block formatting for the template
- Keep all improvements from PR #401 by @rschoell
- Ensure consistent formatting throughout the document
- Remove chat_id from URL (should be in JSON payload)
- Add requirement to select 'Telegram Bot' service type
- Include custom payload template example
- Clarify that chat_id goes in the JSON body, not URL params
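A minimal custom payload of the kind described, with chat_id in the JSON body (the chat_id value and the {{message}} placeholder syntax are illustrative; check Pulse's actual template variables):

```
{
  "chat_id": "123456789",
  "text": "{{message}}",
  "parse_mode": "HTML"
}
```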
- Update screenshot tool to use MacBook Air resolution (2560x1600)
- Remove empty side borders from screenshots
- Use mock data for all screenshots for privacy
- Fix mobile alert buttons overflowing viewport
- Exempt localhost from API rate limiting for better dev experience
- Update documentation to showcase all features with screenshots
- Reorganize README visual tour into feature sections
- Add high-quality screenshots with 3x device scale factor for crisp text
- Implement mock alert history generator spanning 90 days
- Update documentation with detailed screenshot descriptions
- Add visual tour section to README with key screenshots
- Fix mock mode to properly separate from production data
- Clean up screenshot script to use actual mock data instead of DOM injection
- Enhance FAQ and webhooks docs with relevant screenshots
- Document auto-update feature in README
- Add detailed setup instructions in INSTALL.md
- Include auto-update configuration in CONFIGURATION.md
- Explain systemd timer behavior and controls
- Note that Docker doesn't support auto-updates
TESTED AND CONFIRMED: API tokens CAN access guest agent data on PVE 9!
- Created test tokens and verified they work
- Guest agent API returns proper disk usage data
- The cluster/resources endpoint shows disk=0 but that's not what Pulse uses
- Pulse correctly fetches data via /nodes/{node}/qemu/{vmid}/agent/get-fsinfo
The claim that PVE 9 doesn't work was misinformation. It does work when properly configured with the PVEAuditor role, which includes the VM.GuestAgent.Audit permission.
Stop making definitive claims about what works or doesn't work. The reality:
- Some users (like you) have it working fine in cluster configs
- Others report 0% disk usage
- The exact conditions that make it work are unclear
- Results vary between different setups
Updated all docs and messages to reflect this uncertainty rather than making false claims about non-existent workarounds or absolute limitations.
Previous advice was completely wrong. The facts:
- VM.Monitor permission doesn't exist in PVE 9 (was removed)
- It was replaced with VM.GuestAgent.Audit
- But even with correct permissions, API tokens CANNOT access guest agent data on PVE 9
- This is Proxmox bug #1373 with NO working workaround for API tokens
- Users must accept 0% VM disk usage on PVE 9 until Proxmox fixes it upstream
Updated all documentation and error messages to reflect this reality instead of giving false hope about non-existent workarounds.
The root@pam suggestion doesn't actually work, since it requires the Linux system root password rather than a Proxmox-specific one. Most users don't know their Linux root password, or have disabled it for security.
Updated all documentation and error messages to correctly advise users to grant VM.Monitor permission to their API token user instead.
- Add verification steps for qemu-guest-agent service status
- Clarify that the service is socket-activated (no systemctl enable needed)
- Add diagnostic commands users can run to verify agent is working
- Update FAQ with correct troubleshooting steps for agent issues
This helps users like @RLSinRFV who were trying to enable the service
when it's actually socket-activated and should start automatically.
The real issue for PVE 8 users seeing 0% disk usage:
- Users who added nodes BEFORE v4.7 don't have VM.Monitor permission
- The setup script always created tokens with privsep=0, so that wasn't the issue
- Solution: Re-run the setup script or manually add VM.Monitor permission
Updated error messages and documentation to reflect the actual cause
and provide the correct fix for users experiencing this issue.
- Add detailed logging when VM disk monitoring fails due to permissions
- Explain Proxmox 9 limitation: API tokens cannot access guest agent data (PVE bug #1373)
- Explain Proxmox 8 requirements: VM.Monitor permission and privsep=0 for tokens
- Update setup script to show appropriate warnings for each PVE version
- Update FAQ with troubleshooting steps for 0% disk usage on VMs
- Log messages now clearly indicate workarounds for each scenario
The core issue: Proxmox 9 removed VM.Monitor permission and the replacement
permissions don't allow API tokens to access guest agent filesystem info.
This is a Proxmox upstream bug that affects their own web UI as well.
For users experiencing this issue:
- PVE 9: Use root@pam credentials or wait for Proxmox to fix upstream
- PVE 8: Ensure token has VM.Monitor and privsep=0
- All versions: QEMU guest agent must be installed in VMs
- LXC containers run as root and don't have sudo installed
- Updated all documentation to remove sudo references
- Updated frontend UI to show correct install command
- Keep sudo mention only in troubleshooting for edge cases
Replaced the two-step setup code process with a simpler token-in-URL approach:
- Auth token is now embedded directly in the setup URL
- No more prompting users for setup codes
- Same security level with better UX
- Backwards compatible with old setupCode field
The new flow generates a command like:
curl -sSL "http://pulse/api/setup-script?...&auth_token=TOKEN" | bash
This makes it much easier for users, especially in Proxmox shell where
interactive prompts can be problematic.
Webhooks now stored encrypted (webhooks.enc) instead of plain text:
- Automatic migration from webhooks.json to webhooks.enc
- Uses same AES-256-GCM encryption as nodes and email configs
- Original file backed up as webhooks.json.backup
- Protects sensitive webhook URLs and authentication headers
This addresses the security concern where webhook URLs containing API tokens
(like Telegram bot tokens) were stored in plain text.
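The seal/open pattern described above, using Go's standard AES-GCM (helper names are illustrative; the real implementation shares key handling with the node and email config encryption):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encrypt seals plaintext with AES-256-GCM and prepends the random nonce
// so decrypt can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key => AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the ciphertext, failing on any
// tampering thanks to GCM's authentication tag.
func decrypt(key, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	if len(ciphertext) < n {
		return nil, fmt.Errorf("ciphertext too short")
	}
	return gcm.Open(nil, ciphertext[:n], ciphertext[n:], nil)
}

func main() {
	key := make([]byte, 32) // demo key; real code loads a persistent key
	ct, _ := encrypt(key, []byte(`{"url":"https://api.telegram.org/bot<token>/sendMessage"}`))
	pt, _ := decrypt(key, ct)
	fmt.Println(string(pt))
}
```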
Implements header-based proxy authentication for SSO integration with
Authentik, Authelia, and other authentication proxies.
- Add CheckProxyAuth function to validate proxy headers
- Support for username and role-based access control
- Frontend integration with logout URL support
- Comprehensive documentation with examples
- Backwards compatible - no breaking changes
Addresses #327
Configuration via environment variables:
- PROXY_AUTH_SECRET: Shared secret for validation
- PROXY_AUTH_USER_HEADER: Header containing username
- PROXY_AUTH_ROLE_HEADER: Header containing roles/groups
- PROXY_AUTH_LOGOUT_URL: SSO logout endpoint
- Install script now prompts for custom port (default: 7655)
- Can skip prompt with FRONTEND_PORT environment variable
- Fixed incorrect port configuration instructions in UI
- Updated documentation to reflect new installation options
- Fixed FAQ.md references to pulse-backend (should be pulse)
addresses #110
- Added comprehensive PORT_CONFIGURATION.md guide
- Updated CONFIGURATION.md to clarify .env is for auth only
- Install script no longer loads .env for environment variables
- Documented proper port configuration methods (systemd, system.json)
- Added port config guide to README documentation section
addresses #110 - helps users understand where to configure ports