Implements comprehensive Docker monitoring with a dedicated agent that collects
container metrics and reports them to the main Pulse server. Adds Docker-specific
alert rules and threshold management with a redesigned UI.
Backend changes:
- Add Docker agent binary with container metrics collection
- Implement Docker host and container models with CPU/memory tracking
- Add Docker-specific alert types (offline, state, health)
- Extend threshold system to support Docker resources
- Add WebSocket message types for Docker agent communication
- Implement Docker agent API endpoints for registration and metrics
Frontend changes:
- Add Docker monitoring page with host/container views
- Add Docker agent settings panel for configuration
- Reorganize thresholds page with Proxmox/Docker tabs
- Add Docker-specific alert threshold management
- Improve layout consistency with vertical stacking
- Add defensive null checks and fix TypeScript errors
This change enables monitoring of Docker containers across multiple hosts
with the same alerting and threshold capabilities as Proxmox resources.
Make mock mode configuration part of the repository instead of a local-only
file. This ensures consistent mock mode behavior across all environments
(development, CI/CD, demo server) and makes it work out of the box for
new contributors.
Changes:
- Add mock.env to repository with sensible defaults (mock mode OFF by default)
- Support mock.env.local for personal overrides (gitignored)
- Update .gitignore to allow mock.env but exclude .local variants
- Backend loads mock.env then merges mock.env.local overrides
- hot-dev.sh loads both files in correct order
Benefits:
- New developers can clone and use mock mode immediately
- Demo server gets consistent mock configuration
- Personal preferences stay private in .local file
- No surprises - mock mode disabled by default in fresh clones
- CI/CD can use mock mode without custom configuration
Documentation:
- Updated README.md to explain mock.env is in repo
- Enhanced MOCK_MODE.md with local override instructions
- Updated claude.md with new configuration strategy
- Added mock.env.local.example for quick setup
Example workflow:
git clone <repo>
npm run mock:on # Works immediately with repo defaults
# Or create personal config:
cp docs/development/mock.env.local.example mock.env.local
# Edit mock.env.local with your preferences
Implement a hot-reloadable mock mode system that works seamlessly in both
development and production environments without requiring manual restarts
or port changes.
Key Features:
- Backend watches mock.env and auto-reloads when changed (via fsnotify + polling)
- npm commands for easy toggling: mock:on, mock:off, mock:status, mock:edit
- Works in both hot-dev mode and systemd deployments
- Reload completes in 2-5 seconds with no manual intervention
- No port changes or process restarts required
Implementation:
- Extended ConfigWatcher to monitor both .env and mock.env
- Added callback system to trigger ReloadableMonitor.Reload()
- Enhanced toggle-mock.sh to support both hot-dev and systemd modes
- Updated hot-dev.sh banner to show mock status and commands
- Created comprehensive documentation in docs/development/MOCK_MODE.md
Testing:
- Backend builds successfully
- Watcher initializes and monitors both files
- npm run mock:on/off toggles successfully
- mock.env updates correctly
- Scripts work in both hot-dev and systemd modes
Documentation:
- Added Mock Mode section to README.md
- Created detailed guide in docs/development/MOCK_MODE.md
- Updated claude.md with mock mode architecture and usage
Mock mode continues to return cached data instantly from memory
(no API calls, no locks, no timeouts), ensuring fast /api/state responses.
Enhancements for OIDC authentication based on user feedback from issue #327:
1. Add OIDC logout URL support
- New OIDC_LOGOUT_URL environment variable
- UI field in OIDC settings panel for logout URL configuration
- Properly redirects to IdP logout endpoint (e.g., Authentik end-session)
- Stored in config and returned via security status API
2. Fix redirect URL help text in UI
- Handle empty defaultRedirect string properly
- Improved help text when PUBLIC_URL is not set
- Clarify when auto-detection vs manual config is needed
3. Documentation improvements
- Add note about using https:// in PUBLIC_URL/OIDC_REDIRECT_URL when behind TLS proxy
- Document OIDC_LOGOUT_URL environment variable
- Clarify X-Forwarded-Proto header behavior in OIDC docs
- Add better guidance for Authentik users on HTTPS setup
4. Frontend improvements
- Add HS256 signature algorithm error message in Login component
- Display OIDC logout URL when available
These changes address the remaining OIDC UX issues reported by users,
particularly around logout functionality and reverse proxy configuration.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes multiple OIDC authentication issues reported in GitHub issue #327:
1. Fix DISABLE_AUTH=true disabling OIDC sessions
- Reorder authentication checks to validate proxy auth and OIDC sessions
before checking DISABLE_AUTH flag
- Allows OIDC to function even when basic auth is disabled
2. Fix missing username display for OIDC users
- Add GetSessionUsername() function to look up username from session ID
- Set X-Authenticated-User header for OIDC authenticated requests
- Update security status endpoint to return oidcUsername field
- Display OIDC username in UI header alongside logout button
3. Fix missing logout button for OIDC users
- Set hasAuth(true) when OIDC session is detected in frontend
- Update security status endpoint to return OIDC info even when
DISABLE_AUTH=true
- Properly initialize WebSocket and load user preferences for OIDC sessions
4. Add documentation for Authentik HS256/RS256 issue
- Document requirement for RSA signing key in Authentik
- Add troubleshooting entry for signature algorithm mismatch
- Provide clear resolution steps in CONFIGURATION.md and OIDC.md
All changes maintain backward compatibility and follow defensive security
practices. X-Forwarded-Proto header handling was verified to be correct.
Addresses #327
- Add detailed logging when ID token verification fails
- Add better error messages for common OIDC issues
- Update docs with Authentik-specific configuration
- Add troubleshooting section for redirect loops and invalid_id_token errors
Changed from scary warnings to confident, reassuring tone:
Before:
- "⚠️ IMPORTANT: This grants SSH access..."
- Emphasized risks and compromise scenarios
- Made users feel unsafe enabling the feature
After:
- "Works just like Ansible, Saltstack, etc."
- Emphasizes this is industry-standard approach
- Compares to trusted automation tools
- Focuses on what it does, not what could go wrong
- Still transparent about security model
- Removes duplicate/contradictory sections
The feature is secure and follows best practices. The messaging should
reflect confidence in the design while still being transparent.
Users should feel good about enabling it, not scared.
- Make it clear SSH setup is OPTIONAL
- Explain security model upfront before user commits
- Detail exactly what access is being granted (root SSH, sensors only)
- Warn users to only proceed if they trust Pulse server
- Better differentiate public vs private keys
- Show exactly where the key is stored
- Explain how to revoke access
- Add comprehensive security documentation
- Include advanced option for command restrictions in authorized_keys
- Add risk assessment and best practices
This ensures users make informed decisions about SSH access to their
critical Proxmox infrastructure.
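The command-restriction option mentioned above might look like this in authorized_keys (the key material and sensors command path are placeholders; adjust to your environment):

```
# Example only: restrict this key to one command and disable forwarding,
# agent access, X11, and PTY allocation (see sshd(8) AUTHORIZED_KEYS format).
command="/usr/bin/sensors -j",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... pulse@server
```

With this in place, even a compromised Pulse server could only run the one pinned command over that key.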
Added detailed debug-level logs throughout the OIDC flow:
- Provider initialization (issuer, endpoints, scopes)
- Login flow tracking (client ID, redirect URL)
- Token exchange success/failure details
- Claims extraction (username, email, groups)
- Access control checks (why restrictions passed/failed)
Enhanced error logs to include issuer URL and actual error details in
audit events instead of generic "failed" messages.
Updated docs with Debug Logging section showing example output and
troubleshooting guidance for common issues like group restrictions.
Corrected widespread misinformation claiming API tokens cannot access guest agent data on Proxmox 9.
Changes:
- Rewrote VM_DISK_MONITORING.md with accurate technical explanation
- Deleted VM_DISK_STATS_TROUBLESHOOTING.md (contained false information)
- Updated FAQ.md with correct quick reference and troubleshooting link
- Added comprehensive VM disk troubleshooting section to TROUBLESHOOTING.md
- Fixed README.md troubleshooting reference
- Updated frontend tooltip to show accurate permission requirements
- Corrected backend log messages to remove "known limitation" language
- Updated test-vm-disk.sh diagnostic script with accurate guidance
Key corrections:
- API tokens work fine for guest agent queries on both PVE 8 and 9
- Proxmox API returning disk=0 is normal behavior, not a bug
- Both tokens and passwords work equally well
- Only requirements: guest agent installed + proper permissions
- Permission issues are config problems, not authentication method limitations
Documentation now provides clear user journey: FAQ → Troubleshooting → Full Guide
- Added to README.md Quick Start section
- Added comprehensive docs in INSTALL.md
- Explains when and why to use --main option
- Notes build dependencies requirement
- Always query guest agent for running VMs (cluster/resources API always returns 0)
- Show allocated disk size when guest agent unavailable (instead of misleading 0%)
- Fix duplicate mount point counting issue (#425)
- Add comprehensive logging for guest agent queries
- Include diagnostic script for troubleshooting VM disk issues
- Update both monitor.go and monitor_optimized.go for consistency
- Implement proper API integration with list and detail endpoints
- Add ZFS pool and device status conversion
- Enable by default with PULSE_DISABLE_ZFS_MONITORING opt-out
- Test with real Proxmox nodes and verify functionality
- Add comprehensive error handling and logging
- Document feature configuration and requirements
The feature now properly:
- Fetches ZFS pool status from Proxmox API
- Detects degraded/faulted pools and devices
- Tracks read/write/checksum errors
- Generates appropriate alerts
- Displays issues in the Storage tab UI
Tested and verified working with real Proxmox clusters.
- Added to CONFIGURATION.md environment variables section
- Added to WEBHOOKS.md for Gotify and ntfy services
- Added to DOCKER.md environment variables reference and compose example
- Explains how to configure the full Pulse URL for webhook notification links
- Removed all complex recovery mechanisms from documentation
- Clear stance: forgotten password = start fresh (takes 2 minutes)
- No recovery mechanisms = no security vulnerabilities
- Pulse's simplicity is a feature, not a bug
- Updated both TROUBLESHOOTING.md and SECURITY.md
- Addresses GitHub discussion #413 with security-first approach
- Added Custom Headers section explaining the new UI feature
- Updated ntfy instructions with security note about topic names
- Added common header examples table for authentication
- Clarified how to add Bearer tokens and API keys
The ProxmoxVE Helper Script is no longer the recommended installation method.
Users should use the official install.sh script instead, which supports
creating LXC containers directly on Proxmox hosts.
For existing users confused about updating (like in discussion #407), they
can use 'pct enter' from the Proxmox host to access their container as root.
- Add DiskStatusReason field to track why disk stats are unavailable
- Show helpful tooltips in UI explaining specific issues:
- Proxmox 9 API token limitation (401 on guest agent endpoints)
- Guest agent not installed/running
- Special filesystems only (Live ISOs)
- Permission issues
- Add comprehensive troubleshooting guide (docs/VM_DISK_STATS_TROUBLESHOOTING.md)
- Document that API tokens cannot access guest agent data on PVE 9
- Tested and confirmed: only password/cookie auth works for guest agent on PVE 9
- Update README with quick reference to VM disk stats issue
This addresses issues #348, #367, and #71 by clearly explaining the root cause
(Proxmox API limitation) and providing actionable guidance to users.
- Add proper JSON code block formatting for the template
- Keep all improvements from PR #401 by @rschoell
- Ensure consistent formatting throughout the document
- Remove chat_id from URL (should be in JSON payload)
- Add requirement to select 'Telegram Bot' service type
- Include custom payload template example
- Clarify that chat_id goes in the JSON body, not URL params
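The kind of payload template meant here could look like the following (chat_id and text are the Telegram Bot API sendMessage fields; the chat ID value and the template placeholders are illustrative):

```
{
  "chat_id": "123456789",
  "text": "Pulse alert: {{.Level}} on {{.Node}} - {{.Message}}"
}
```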
- Update screenshot tool to use MacBook Air resolution (2560x1600)
- Remove empty side borders from screenshots
- Use mock data for all screenshots for privacy
- Fix mobile alert buttons overflowing viewport
- Exempt localhost from API rate limiting for better dev experience
- Update documentation to showcase all features with screenshots
- Reorganize README visual tour into feature sections
- Add high-quality screenshots with 3x device scale factor for crisp text
- Implement mock alert history generator spanning 90 days
- Update documentation with detailed screenshot descriptions
- Add visual tour section to README with key screenshots
- Fix mock mode to properly separate from production data
- Clean up screenshot script to use actual mock data instead of DOM injection
- Enhance FAQ and webhooks docs with relevant screenshots
- Document auto-update feature in README
- Add detailed setup instructions in INSTALL.md
- Include auto-update configuration in CONFIGURATION.md
- Explain systemd timer behavior and controls
- Note that Docker doesn't support auto-updates
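The systemd timer behavior described above follows the usual timer/service pattern; a unit along these lines (names and schedule are illustrative, not necessarily what the installer creates) would run the update check daily:

```
# Illustrative only — the actual unit names Pulse installs may differ.
# /etc/systemd/system/pulse-update.timer
[Unit]
Description=Periodic Pulse update check

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

A matching pulse-update.service would perform the actual update; disabling the timer unit turns the feature off.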
TESTED AND CONFIRMED: API tokens CAN access guest agent data on PVE 9!
- Created test tokens and verified they work
- Guest agent API returns proper disk usage data
- The cluster/resources endpoint shows disk=0 but that's not what Pulse uses
- Pulse correctly fetches data via /nodes/{node}/qemu/{vmid}/agent/get-fsinfo
The misinformation about PVE 9 not working was completely wrong. It does work when properly configured with the PVEAuditor role, which includes the VM.GuestAgent.Audit permission.
Stop making definitive claims about what works or doesn't work. The reality:
- Some users (like you) have it working fine in cluster configs
- Others report 0% disk usage
- The exact conditions that make it work are unclear
- Results vary between different setups
Updated all docs and messages to reflect this uncertainty rather than making false claims about non-existent workarounds or absolute limitations.
Previous advice was completely wrong. The facts:
- VM.Monitor permission doesn't exist in PVE 9 (was removed)
- It was replaced with VM.GuestAgent.Audit
- But even with correct permissions, API tokens CANNOT access guest agent data on PVE 9
- This is Proxmox bug #1373 with NO working workaround for API tokens
- Users must accept 0% VM disk usage on PVE 9 until Proxmox fixes it upstream
Updated all documentation and error messages to reflect this reality instead of giving false hope about non-existent workarounds.
The root@pam suggestion doesn't actually work since it requires the Linux system root password, not a Proxmox-specific password. Most users don't know or have disabled their Linux root password for security.
Updated all documentation and error messages to correctly advise users to grant VM.Monitor permission to their API token user instead.
- Add verification steps for qemu-guest-agent service status
- Clarify that the service is socket-activated (so 'systemctl enable' is not needed)
- Add diagnostic commands users can run to verify agent is working
- Update FAQ with correct troubleshooting steps for agent issues
This helps users like @RLSinRFV who were trying to enable the service
when it's actually socket-activated and should start automatically.
The real issue for PVE 8 users seeing 0% disk usage:
- Users who added nodes BEFORE v4.7 don't have VM.Monitor permission
- The setup script always created tokens with privsep=0, so that wasn't the issue
- Solution: Re-run the setup script or manually add VM.Monitor permission
Updated error messages and documentation to reflect the actual cause
and provide the correct fix for users experiencing this issue.
- Add detailed logging when VM disk monitoring fails due to permissions
- Explain Proxmox 9 limitation: API tokens cannot access guest agent data (PVE bug #1373)
- Explain Proxmox 8 requirements: VM.Monitor permission and privsep=0 for tokens
- Update setup script to show appropriate warnings for each PVE version
- Update FAQ with troubleshooting steps for 0% disk usage on VMs
- Log messages now clearly indicate workarounds for each scenario
The core issue: Proxmox 9 removed VM.Monitor permission and the replacement
permissions don't allow API tokens to access guest agent filesystem info.
This is a Proxmox upstream bug that affects their own web UI as well.
For users experiencing this issue:
- PVE 9: Use root@pam credentials or wait for Proxmox to fix upstream
- PVE 8: Ensure token has VM.Monitor and privsep=0
- All versions: QEMU guest agent must be installed in VMs
- LXC containers run as root and don't have sudo installed
- Updated all documentation to remove sudo references
- Updated frontend UI to show correct install command
- Keep sudo mention only in troubleshooting for edge cases