Fixed three P1 goroutine/memory leaks that prevent proper resource cleanup:
1. Recovery Tokens goroutine leak
- Cleanup routine ran forever with no stop mechanism
- Added stopCleanup channel and Stop() method
- Cleanup loop now uses select with stopCleanup case
2. Rate Limiter goroutine leak
- Cleanup routine ran forever with no stop mechanism
- Added stopCleanup channel and Stop() method
- Changed from 'for range ticker.C' to select with stopCleanup case
3. OIDC Service memory leak (DoS vector)
- Abandoned OIDC flows never cleaned up
- State entries accumulate without bound
- Added cleanup routine with 5-minute ticker
- Periodically removes expired state entries (10min TTL)
- Added Stop() method for proper shutdown
All three fixes follow a consistent pattern, sketched after this list:
- Add stopCleanup chan struct{} field
- Initialize in constructor
- Use select with ticker and stopCleanup cases
- Close channel in Stop() method to signal goroutine exit
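A minimal sketch of that shared pattern, with illustrative names (each service uses its own struct and interval):

```go
package cleanup

import "time"

// service stands in for the recovery-token store, rate limiter, and OIDC service.
type service struct {
	stopCleanup chan struct{} // closed by Stop() to end the cleanup goroutine
}

func newService() *service {
	s := &service{stopCleanup: make(chan struct{})}
	go s.cleanupLoop()
	return s
}

func (s *service) cleanupLoop() {
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// expire stale entries here
		case <-s.stopCleanup:
			return // Stop() was called: exit instead of leaking
		}
	}
}

// Stop signals the cleanup goroutine to exit. Closing the channel (rather
// than sending on it) never blocks and wakes every reader.
func (s *service) Stop() {
	close(s.stopCleanup)
}
```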
Impact:
- Prevents goroutine leaks during service restarts/reloads
- Prevents memory exhaustion from abandoned OIDC login attempts
- Enables proper cleanup in tests and graceful shutdown
This commit addresses 5 critical P0 bugs that cause security vulnerabilities, crashes, and data corruption:
**P0-1: Recovery Tokens Replay Attack Vulnerability** (recovery_tokens.go:153-159)
- **SECURITY CRITICAL**: Single-use recovery tokens could be replayed
- **Problem**: Lock upgrade race: two concurrent requests can both pass the initial Used check
1. Both acquire RLock, see token.Used = false
2. Both release RLock
3. Both acquire Lock and mark token.Used = true
4. Both return true - TOKEN REUSED
- **Impact**: Attacker with intercepted token can use it multiple times
- **Fix**: Re-check token.Used after acquiring the write lock (TOCTOU prevention; see the sketch below)
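A hedged sketch of the re-check, assuming a map-backed store behind a sync.RWMutex (names are illustrative, not the actual Pulse code):

```go
package recovery

import "sync"

type token struct{ Used bool }

type tokenStore struct {
	mu     sync.RWMutex
	tokens map[string]*token
}

// consume marks a token used exactly once, even under concurrent calls.
func (s *tokenStore) consume(id string) bool {
	s.mu.RLock()
	tok, ok := s.tokens[id]
	used := ok && tok.Used
	s.mu.RUnlock()
	if !ok || used {
		return false // fast path: unknown or already spent
	}

	s.mu.Lock()
	defer s.mu.Unlock()
	if tok.Used { // re-check: another goroutine may have won the lock race
		return false
	}
	tok.Used = true
	return true
}
```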
**P0-2: WebSocket Hub Concurrent Map Panic** (hub.go:345-347, 376-378)
- **Problem**: Initial state goroutine reads h.clients map without lock
- Line 345: `if _, ok := h.clients[client]` (NO LOCK)
- Main loop writes to h.clients with the lock held (lines 326, 394)
- **Impact**: "fatal error: concurrent map read and write" crashes hub
- **Fix**: Acquire RLock before all client map reads in goroutine
**P0-3: WebSocket Send on Closed Channel Panic** (hub.go:348, 380)
- **Problem**: The hub checks that a client exists, then sends; the channel can be closed in between
- **Impact**: "send on closed channel" panic crashes hub
- **Fix**: Hold RLock during both check and send (defensive select already present)
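Both hub fixes reduce to holding the read lock across the map check and the channel send; a sketch with illustrative names:

```go
package ws

import "sync"

type client struct{ send chan []byte }

type hub struct {
	mu      sync.RWMutex
	clients map[*client]bool
}

// deliver sends msg to a registered client without racing the main loop,
// which registers/unregisters clients (and closes send) under h.mu.Lock().
func (h *hub) deliver(c *client, msg []byte) {
	h.mu.RLock()
	defer h.mu.RUnlock() // held across both the check and the send
	if !h.clients[c] {
		return // already unregistered; its channel may be closed
	}
	select {
	case c.send <- msg:
	default: // defensive: drop rather than block on a full buffer
	}
}
```

Because unregistration and the channel close happen under the write lock, a send made while holding the read lock can never hit a closed channel.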
**P0-4: CSRF Store Shutdown Data Corruption** (csrf_store.go:189-196)
- **Problem**: Stop() calls save() after signaling the worker; both hold only an RLock
- Worker's final save writes to csrf_tokens.json.tmp
- Stop()'s save writes to same file concurrently
- **Impact**: Corrupted/truncated csrf_tokens.json on shutdown
- **Fix**: Added saveMu mutex to serialize all disk writes
**P0-5: CSRF Store Deadlock on Double-Stop** (csrf_store.go:103-108)
- **Problem**: stopChan is unbuffered, has no sync.Once guard, and Stop() sends instead of closing
- **Impact**: Second Stop() call blocks forever waiting for receiver
- **Fix**:
- Added sync.Once field stopOnce
- Changed to close(stopChan) within stopOnce.Do()
- Prevents double-close panic and deadlock (both CSRF store fixes are sketched below)
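A combined sketch of both CSRF store fixes, assuming the JSON-on-disk layout described above (field and path names are illustrative):

```go
package csrf

import (
	"encoding/json"
	"os"
	"sync"
)

type csrfStore struct {
	mu       sync.RWMutex
	saveMu   sync.Mutex // P0-4: serializes all writes to the .tmp file
	stopOnce sync.Once  // P0-5: makes Stop() idempotent
	stopChan chan struct{}
	path     string
	tokens   map[string]string
}

func (s *csrfStore) save() error {
	s.saveMu.Lock() // only one writer touches csrf_tokens.json.tmp at a time
	defer s.saveMu.Unlock()

	s.mu.RLock()
	data, err := json.Marshal(s.tokens)
	s.mu.RUnlock()
	if err != nil {
		return err
	}
	tmp := s.path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	return os.Rename(tmp, s.path) // replace only after a complete write
}

func (s *csrfStore) Stop() {
	s.stopOnce.Do(func() {
		close(s.stopChan) // close, not send: never blocks, wakes every reader
	})
	_ = s.save() // final flush, now serialized against the worker's save
}
```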
All fixes maintain backwards compatibility. The recovery token fix is particularly critical as it closes a security vulnerability allowing replay attacks on password reset flows.
This commit addresses 4 P1 important issues and 1 P2 optimization in infrastructure components:
**P1-1: Missing Panic Recovery in Discovery Service** (service.go:172-195, 499-542)
- **Problem**: No panic recovery in Start(), ForceRefresh(), SetSubnet() goroutines
- **Impact**: Silent service death if a scan panics; discovery breaks with nothing to surface the failure
- **Fix**:
- Wrapped initial scan goroutine with defer/recover (lines 172-182)
- Wrapped scanLoop goroutine with defer/recover (lines 185-195)
- Wrapped ForceRefresh scan with defer/recover (lines 499-509)
- Wrapped SetSubnet scan with defer/recover (lines 532-542)
- All log panics with stack traces for debugging (pattern sketched below)
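The wrapper pattern is roughly the following; safeGo is a hypothetical helper (the commit wraps each goroutine inline):

```go
package discovery

import (
	"log"
	"runtime/debug"
)

// safeGo runs fn in a goroutine and turns a panic into a logged error
// with a stack trace instead of crashing the whole process.
func safeGo(name string, fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				log.Printf("%s panicked: %v\n%s", name, r, debug.Stack())
			}
		}()
		fn()
	}()
}
```

Each scan site would then call something like `safeGo("discovery scan", scanOnce)`, where scanOnce is whatever function the goroutine previously ran bare.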
**P1-2: Missing Panic Recovery in Config Watcher Callback** (watcher.go:546-556)
- **Problem**: User-provided onMockReload callback could panic and crash watcher
- **Impact**: Panicking callback kills watcher goroutine, no config updates
- **Fix**: Wrapped callback invocation with defer/recover and stack trace logging
**P1-3: Session Store Stop() Using Send Instead of Close** (session_store.go:16-84)
- **Problem**: Stop() used a channel send, which blocks if nobody is reading
- **Impact**: Stop() hangs if backgroundWorker already exited
- **Fix**:
- Added sync.Once field stopOnce (line 22)
- Changed Stop() to use close() within stopOnce.Do() (lines 80-84)
- Prevents double-close panic and ensures all readers are signaled
**P2-1: Backup Cleanup Inefficient O(n²) Sort** (persistence.go:1424-1427)
- **Problem**: Backups were ordered by modification time using a hand-rolled bubble sort
- **Impact**: Inefficient for large backup counts (>100 files)
- **Fix**:
- Replaced the bubble sort with sort.Slice(), an O(n log n) algorithm (sketch below)
- Added "sort" import (line 9)
- Maintains same oldest-first ordering for deletion logic
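The replacement is the standard-library one-liner; a sketch assuming fs.FileInfo entries:

```go
package backup

import (
	"io/fs"
	"sort"
)

// oldestFirst orders backups by modification time, oldest first, so the
// existing deletion logic can keep trimming from the front of the slice.
func oldestFirst(backups []fs.FileInfo) {
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].ModTime().Before(backups[j].ModTime())
	})
}
```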
All fixes add defensive programming without changing external behavior. Panic recovery ensures services continue operating even with bugs, while optimization reduces cleanup time for backup-heavy environments.
Backend:
- Add IsEncryptionEnabled() method to ConfigPersistence
- Include encryption status in /api/notifications/health response
- Allows frontend to warn when credentials are stored in plaintext
Frontend:
- Update NotificationHealth type to include encryption.enabled field
- Frontend can now display warnings when encryption is disabled
This addresses the P2 requirement for encryption visibility, allowing
operators to know when notification credentials are not encrypted at rest.
Related to #630
Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.
The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
from the command line, confirming the agent was functional
Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.
Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
* Legacy (Proxmox <8.3): integer (0 or 1)
* Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
we only query when the agent is actually responding
This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
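The dual-format unmarshaler might look roughly like this; a sketch, with the available-over-enabled precedence taken from the description above:

```go
package proxmox

import "encoding/json"

// VMAgentField accepts both the legacy integer and the Proxmox 8.3+ object.
type VMAgentField struct {
	Value int // nonzero when the guest agent can be queried
}

func (a *VMAgentField) UnmarshalJSON(data []byte) error {
	// Legacy format (Proxmox <8.3): a bare 0 or 1.
	var n int
	if err := json.Unmarshal(data, &n); err == nil {
		a.Value = n
		return nil
	}
	// Modern format (Proxmox 8.3+): {"enabled":N,"available":N}.
	var obj struct {
		Enabled   int  `json:"enabled"`
		Available *int `json:"available"`
	}
	if err := json.Unmarshal(data, &obj); err != nil {
		return err
	}
	if obj.Available != nil {
		a.Value = *obj.Available // prioritize "available": query only a responding agent
	} else {
		a.Value = obj.Enabled // older payloads without "available"
	}
	return nil
}
```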
Addresses two issues preventing configuration backup/restore:
1. Export passphrase validation mismatch: the UI only validated the 12+ char
requirement when using a custom passphrase, but the backend always enforced
it. Users with shorter login passwords saw unexplained failures.
- Frontend now validates all passphrases meet 12-char minimum
- Clear error message suggests custom passphrase if login password too short
2. Import data parsing failed silently: the frontend sent `exportData.data`,
which was undefined for legacy/CLI backups (raw base64 strings); the
backend rejected these with no logs.
- Frontend now handles both formats: {status, data} and raw strings
- Backend logs validation failures for easier troubleshooting
Related to #646 where a user reported "error after entering password" with
no container logs. These changes ensure proper validation feedback and
make the backup system resilient to different export formats.
The bootstrap token security requirement was added proactively but
lacked discoverability, causing user friction during first-run setup.
These improvements make the token easier to find while maintaining
the security benefit.
Improvements:
- Display bootstrap token prominently in startup logs with ASCII box
(previously: single line log message)
- Add `pulse bootstrap-token` CLI command to display token on demand
(Docker: docker exec <container> /app/pulse bootstrap-token)
- Improve error messages in quick-setup API to show exact commands
for retrieving token when missing or invalid
- Error messages now include both Docker and bare metal examples
User experience improvements:
- Token visible in `docker logs` output immediately
- Clear instructions printed with token
- Helpful error messages if token is wrong/missing
- CLI helper for operators who need to retrieve token later
Security unchanged:
- Bootstrap token still required for first-run setup
- Token still auto-deleted after successful setup
- No bypass mechanism added
Related to discussion about bootstrap token UX friction.
Adds an automated validation script to prevent the recurring pattern of
patch releases caused by missing files/artifacts.
scripts/validate-release.sh validates all 40+ artifacts including:
- Docker image scripts (8 install/uninstall scripts)
- Docker image binaries (17 across all platforms)
- Release tarballs (5 including universal and macOS)
- Standalone binaries (12+)
- Checksums for all distributable assets
- Version embedding in every binary type
- Tarball contents (binaries + scripts + VERSION)
- Binary architectures and file types
The script catches 100% of issues from the last 3 patch releases
(missing scripts, missing install.sh, missing binaries, broken
version embedding).
Updated RELEASE_CHECKLIST.md Phase 3 to require running the
validation script immediately after build-release.sh and before
proceeding to Docker build/publish phases.
Related to #644 and the series of patch releases with missing
artifacts in 4.26.x.
Hashed static assets (e.g., index-BXHytNQV.js, index-TvhSzimt.css) are
now cached for 1 year with the immutable flag, since the content hash
changes whenever the file changes.
Benefits:
- Faster page loads on subsequent visits
- Reduced server bandwidth
- Better user experience on demo and production instances
Only index.html and non-hashed assets remain uncached to ensure
users always get the latest version.
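A sketch of the policy, assuming Vite-style hashed filenames (the regexp and handler wiring are illustrative):

```go
package web

import (
	"net/http"
	"regexp"
)

// hashedAsset matches names like index-BXHytNQV.js or index-TvhSzimt.css.
var hashedAsset = regexp.MustCompile(`-[A-Za-z0-9_-]{8}\.(js|css)$`)

func withCaching(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if hashedAsset.MatchString(r.URL.Path) {
			// The content hash changes with the file, so caching forever is safe.
			w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		} else {
			// index.html and non-hashed assets must always revalidate.
			w.Header().Set("Cache-Control", "no-cache")
		}
		next.ServeHTTP(w, r)
	})
}
```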
Demo mode now permits login/logout and OIDC authentication endpoints
while still blocking all modification requests. This allows demo
instances to require authentication while remaining read-only.
Authentication endpoints are read-only operations that verify
credentials and issue session tokens without modifying any state.
All POST/PUT/DELETE/PATCH operations remain blocked.
Related to #617
This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.
Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers
The container detection was inconsistent: monitor.go checked both flags
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
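The consolidated check presumably reduces to a single predicate; a sketch (the heuristics stub stands in for system.InContainer()):

```go
package detect

import "os"

// inContainerHeuristics stands in for Pulse's system.InContainer().
func inContainerHeuristics() bool {
	_, err := os.Stat("/.dockerenv") // one common heuristic; illustrative only
	return err == nil
}

// runningInContainer is the unified check now used by monitor.go,
// temperature.go, and config_handlers.go alike.
func runningInContainer() bool {
	return os.Getenv("PULSE_DOCKER") == "true" || inContainerHeuristics()
}
```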
This implements the ability for users to assign custom display names to Docker hosts,
similar to the existing functionality for Proxmox nodes. This addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.
Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)
Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)
Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.
Related to #623
Addresses issue #635 where users encounter "can't find the SSH key" errors
when enabling temperature monitoring during automated PVE setup with Pulse
running in Docker.
Root cause:
- Setup script embeds SSH keys at generation time (when downloaded)
- For containerized Pulse, keys are empty until pulse-sensor-proxy is installed
- Script auto-installs proxy, but didn't refresh keys after installation
- This caused temperature monitoring setup to fail with confusing errors
Changes:
1. After successful proxy installation, immediately fetch and populate the
proxy's SSH public key (lines 4068-4080)
2. Update bash variables SSH_SENSORS_PUBLIC_KEY and SSH_SENSORS_KEY_ENTRY
so temperature monitoring setup can proceed in the same script run
3. Improve error messaging when keys aren't available (lines 4424-4453):
- Clear explanation of containerized Pulse requirements
- Step-by-step instructions for container restart and verification
- Separate guidance for bare-metal vs containerized deployments
Flow improvements:
- Initial run: Proxy installs → keys fetched → temp monitoring configures
- Rerun after container restart: Keys fetched at script start → works
- Both scenarios now handled correctly
Related to #635
When Pulse is running in a container and the SSH key is not available,
provide clearer guidance about the pulse-sensor-proxy requirement and
include documentation link for Docker deployments.
This helps users understand that containerized Pulse needs the host-side
sensor proxy to access temperature data from Proxmox hosts.
Related to #595
This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.
**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.
**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration
**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)
**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments
The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
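A sketch of the naming rule (the helper name is hypothetical):

```go
package sshutil

import "fmt"

// knownHostsName returns the known_hosts entry name for a host: the bare
// hostname on the default port, [host]:port notation otherwise.
func knownHostsName(host string, port int) string {
	if port == 0 || port == 22 {
		return host
	}
	return fmt.Sprintf("[%s]:%d", host, port)
}
```

ssh-keyscan is invoked with the matching -p flag so the scanned key lands under the same entry name.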
Related to #626
When authentication expires after some time, users see "Connection lost"
and must refresh the page to see "Authentication required". This commit
implements automatic redirect to login when authentication expires.
Changes:
- Add authentication check to WebSocket endpoint to prevent unauthenticated
WebSocket connections
- Handle WebSocket close with code 1008 (policy violation) as auth failure
and redirect to login
- Intercept 401 responses on API calls (except initial auth checks) and
automatically redirect to login page
- Clear stored credentials and set logout flag before redirect to ensure
clean login flow
This provides a better user experience by immediately redirecting to the
login page when the session expires, rather than showing a confusing
"Connection lost" message that requires manual page refresh.
Related to discussion #615
Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.
Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field
Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces
When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
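The navigation preference itself is a small fallback; a sketch with illustrative field names:

```go
package models

type Node struct {
	Host     string // internal management URL
	GuestURL string // optional guest-accessible URL
}

// navigationURL prefers the guest-accessible URL when one is configured,
// falling back to the internal Host URL otherwise.
func navigationURL(n Node) string {
	if n.GuestURL != "" {
		return n.GuestURL
	}
	return n.Host
}
```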
Replace non-functional docs.pulseapp.io URLs with direct GitHub repository
links. The containerized deployment security documentation exists in
SECURITY.md and was previously inaccessible via the external link.
Changes:
- Update SECURITY.md documentation reference
- Fix three documentation links in config_handlers.go (SSH verification,
setup script, and security block error messages)
- All links now point to GitHub repository where docs actually live
Related to #607
Related to #551
Enhanced the PMG connection test to actually validate the metrics
endpoints that Pulse uses for monitoring, rather than only checking
the version endpoint. This provides users with immediate feedback if
their PMG credentials lack the necessary permissions to collect metrics.
Backend changes:
- Test mail statistics, cluster status, and quarantine endpoints during
connection test (internal/api/config_handlers.go:1695-1714)
- Return warnings array in test response when endpoints are unavailable
- Increased timeout from 10s to 15s to accommodate multiple endpoint checks
- Added warning logs for failed endpoint checks
Frontend changes:
- Added showWarning() toast function for warning messages
- Enhanced NodeModal to display warning status with amber styling
- Added warnings list display in test results UI
- Updated Settings.tsx to show warnings from connection tests
This change helps users identify permission issues immediately rather
than discovering later that metrics aren't being collected despite a
"successful" connection.
The previous commit added 4 new %s format specifiers for Docker/LXC
instructions but didn't add the corresponding arguments to fmt.Sprintf.
Added 4 pulseURL arguments to match the new format specifiers in the
'unknown environment' section of the setup script.
This addresses confusion around temperature monitoring setup for Docker
deployments where users expected a turnkey experience similar to LXC.
The core issue: The setup script and documentation suggested that
temperature monitoring was "automatically configured" for all containerized
deployments, but in reality only LXC containers have a fully automatic
setup. Docker requires manual steps.
Changes:
**Setup Script (config_handlers.go):**
- Fixed "unknown environment" path to show separate instructions for LXC vs Docker
- Docker instructions now correctly show --standalone flag (was incorrectly showing --ctid)
- Added docker-compose.yml bind mount instructions inline
- Added restart command for Docker deployments
**Documentation (TEMPERATURE_MONITORING.md):**
- Added prominent "Deployment-Specific Setup" callout at the top
- Clarified that LXC is fully automatic, Docker requires manual steps
- Reorganized "Setup (Automatic)" section to clearly distinguish:
- LXC: Fully turnkey (no manual steps)
- Docker: Manual proxy installation required
- Node configuration: Works for both
- Updated "Host-side responsibilities" to specify it's Docker-only
- Fixed architecture benefits to reflect LXC vs Docker differences
Why this matters:
- LXC setup script auto-detects the container and runs install-sensor-proxy.sh --ctid
- Docker deployments can't be auto-detected and require --standalone flag
- Users running Docker were getting incorrect instructions (--ctid instead of --standalone)
- Documentation suggested everything was automatic, leading to confusion
Now the documentation and setup script accurately reflect that:
- LXC = Turnkey (automatic)
- Docker = Manual steps required (but well-documented)
- Native = Direct SSH (no proxy)
Related to GitHub Discussion #605
- Build host agent binaries for all platforms (linux/darwin/windows, amd64/arm64/armv7) in Docker
- Add Makefile target for building agent binaries locally
- Add startup validation to check for missing agent binaries
- Improve download endpoint error messages with troubleshooting guidance
- Enhance host details drawer layout with better organization and visual hierarchy
- Update base images to rolling versions (node:20-alpine, golang:1.24-alpine, alpine:3.20)
This commit implements per-node temperature monitoring control and fixes a critical
bug where partial node updates were destroying existing configuration.
Backend changes:
- Add TemperatureMonitoringEnabled field (*bool) to PVEInstance, PBSInstance, and PMGInstance
- Update monitor.go to check per-node temperature setting with global fallback
- Convert all NodeConfigRequest boolean fields to *bool pointers
- Add nil checks in HandleUpdateNode to prevent overwriting unmodified fields
- Fix critical bug where partial updates zeroed out MonitorVMs, MonitorContainers, etc.
- Update NodeResponse, NodeFrontend, and StateSnapshot to include temperature setting
- Fix HandleAddNode and test connection handlers to use pointer-based boolean fields
Frontend changes:
- Add temperatureMonitoringEnabled to Node interface and config types
- Create per-node temperature monitoring toggle handler with optimistic updates
- Update NodeModal to wire up per-node temperature toggle
- Add isTemperatureMonitoringEnabled helper to check effective monitoring state
- Update ConfiguredNodeTables to show/hide temperature badge based on monitoring state
- Update NodeSummaryTable to conditionally show temperature column
- Pass globalTemperatureMonitoringEnabled prop through component tree
The critical bug fix ensures that when updating a single field (like temperature
monitoring), the backend only modifies that specific field instead of zeroing out
all other boolean configuration fields.
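The pointer-boolean pattern behind the fix, sketched with a subset of illustrative fields:

```go
package api

// NodeConfigRequest uses *bool so an omitted JSON field arrives as nil
// and can be distinguished from an explicit false.
type NodeConfigRequest struct {
	MonitorVMs                   *bool `json:"monitorVMs,omitempty"`
	MonitorContainers            *bool `json:"monitorContainers,omitempty"`
	TemperatureMonitoringEnabled *bool `json:"temperatureMonitoringEnabled,omitempty"`
}

type NodeConfig struct {
	MonitorVMs                   bool
	MonitorContainers            bool
	TemperatureMonitoringEnabled *bool // nil means "fall back to the global setting"
}

// applyUpdate copies only the fields actually present in the request,
// instead of zeroing out everything the caller did not send.
func applyUpdate(cfg *NodeConfig, req NodeConfigRequest) {
	if req.MonitorVMs != nil {
		cfg.MonitorVMs = *req.MonitorVMs
	}
	if req.MonitorContainers != nil {
		cfg.MonitorContainers = *req.MonitorContainers
	}
	if req.TemperatureMonitoringEnabled != nil {
		cfg.TemperatureMonitoringEnabled = req.TemperatureMonitoringEnabled
	}
}
```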
- Add Access-Control-Expose-Headers to allow frontend to read X-CSRF-Token response header
- Implement proactive CSRF token issuance on GET requests when a session exists but the CSRF cookie is missing
- Ensures frontend always has valid CSRF token before making POST requests
- Fixes 403 Forbidden errors when toggling system settings
This resolves CSRF validation failures that occurred when CSRF tokens expired or were missing while valid sessions existed.
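A sketch of the proactive issuance, with hasSession and newCSRFToken as hypothetical stand-ins for the real helpers:

```go
package api

import "net/http"

// ensureCSRF issues a CSRF token on GET requests when a session exists
// but the CSRF cookie has expired or was never set.
func ensureCSRF(next http.Handler, hasSession func(*http.Request) bool, newCSRFToken func() string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet && hasSession(r) {
			if _, err := r.Cookie("csrf_token"); err != nil { // cookie missing
				tok := newCSRFToken()
				http.SetCookie(w, &http.Cookie{Name: "csrf_token", Value: tok, Path: "/"})
				// Readable by the frontend because the server also sets
				// Access-Control-Expose-Headers: X-CSRF-Token.
				w.Header().Set("X-CSRF-Token", tok)
			}
		}
		next.ServeHTTP(w, r)
	})
}
```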
When a Docker host successfully completes a stop command and confirms
it has disabled itself, automatically clear the removal block to allow
immediate re-enrollment.
This fixes the UX issue where users who remove a Docker host cannot
immediately reinstall it with a new token, as the host ID remains
blocked for 24 hours. The block is still needed to prevent zombie
reports from stale agents, but once the agent confirms it stopped
successfully, there's no need to keep the block.
Changes:
- Clear removal block in HandleCommandAck after successful host removal
- Allows remove → reinstall workflow without manual intervention
- Block remains for forced removals or offline hosts (as intended)
API Enhancements:
- Add SHA256 checksum endpoint for binary downloads
- Computes checksum on-the-fly when .sha256 suffix is requested
- Example: /download/pulse-host-agent?platform=linux&arch=amd64.sha256
- Enables installer scripts to verify binary integrity
- Add /uninstall-host-agent.sh endpoint for Linux/macOS uninstall script
- Add endpoint to public paths (no auth required)
Checksum Implementation:
- New serveChecksum() function computes the SHA-256 hash using crypto/sha256 (sketched below)
- Returns plain text checksum in hex format
- Supports all binary download endpoints
- Zero performance impact (only computed when requested)
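That flow likely reduces to a stream-and-hash; a hedged sketch:

```go
package api

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// serveChecksum streams the binary through SHA-256 and returns the hex
// digest as plain text; nothing is precomputed or stored on disk.
func serveChecksum(w http.ResponseWriter, binaryPath string) {
	f, err := os.Open(binaryPath)
	if err != nil {
		http.Error(w, "binary not found", http.StatusNotFound)
		return
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		http.Error(w, "checksum failed", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/plain")
	fmt.Fprintf(w, "%s\n", hex.EncodeToString(h.Sum(nil)))
}
```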
Install Script Updates:
- Add --force/-f flag to skip all interactive prompts
- URL/token prompts skipped with --force
- Reinstall confirmation skipped with --force
- Checksum mismatch still aborts (security first)
- Force mode auto-accepts updates and reinstalls
- Usage: ./install-host-agent.sh --url $URL --token $TOKEN --force
Security Notes:
- Checksum verification protects against:
- Corrupted downloads due to network issues
- Man-in-the-middle binary tampering
- Storage corruption on server
- Force mode maintains security by aborting on checksum mismatch
- No bypass for security-critical validations
These improvements enable:
- Automated deployments (--force flag)
- Binary integrity verification (checksums)
- Better security posture (tamper detection)
- Standardized uninstall process (endpoint)
The /api/version endpoint already exists and returns version info
for update checks (no changes needed).
Windows Host Agent Enhancements:
- Implement native Windows service support using golang.org/x/sys/windows/svc
- Add Windows Event Log integration for troubleshooting
- Create professional PowerShell installation/uninstallation scripts
- Add process termination and retry logic to handle Windows file locking
- Register uninstall endpoint at /uninstall-host-agent.ps1
Host Agent UI Improvements:
- Add expandable drawer to Hosts page (click row to view details)
- Display system info, network interfaces, disks, and temperatures in cards
- Replace status badges with subtle colored indicators
- Remove redundant master-detail sidebar layout
- Add search filtering for hosts
Technical Details:
- service_windows.go: Windows service lifecycle management with graceful shutdown
- service_stub.go: Cross-platform compatibility for non-Windows builds
- install-host-agent.ps1: Full Windows installation with validation
- uninstall-host-agent.ps1: Clean removal with process termination and retries
- HostsOverview.tsx: Expandable row pattern matching Docker/Proxmox pages
Files Added:
- cmd/pulse-host-agent/service_windows.go
- cmd/pulse-host-agent/service_stub.go
- scripts/install-host-agent.ps1
- scripts/uninstall-host-agent.ps1
- frontend-modern/src/components/Hosts/HostsOverview.tsx
- frontend-modern/src/components/Hosts/HostsFilter.tsx
The Windows service now starts reliably with automatic restart on failure,
and the uninstall script handles file locking gracefully without requiring reboots.
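The service lifecycle in service_windows.go presumably follows the standard svc.Handler skeleton; a sketch under that assumption:

```go
//go:build windows

package main

import "golang.org/x/sys/windows/svc"

type agentService struct{ stop chan struct{} }

// Execute implements svc.Handler: it reports state transitions to the
// service control manager and shuts down cleanly on Stop/Shutdown.
func (a *agentService) Execute(args []string, req <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
	const accepted = svc.AcceptStop | svc.AcceptShutdown
	status <- svc.Status{State: svc.StartPending}
	status <- svc.Status{State: svc.Running, Accepts: accepted}
	for c := range req {
		switch c.Cmd {
		case svc.Interrogate:
			status <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			status <- svc.Status{State: svc.StopPending}
			close(a.stop) // signal the agent loop to exit gracefully
			return false, 0
		}
	}
	return false, 0
}

func runAsService() error {
	return svc.Run("pulse-host-agent", &agentService{stop: make(chan struct{})})
}
```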
Introduces granular permission scopes for API tokens (docker:report, docker:manage, host-agent:report, monitoring:read/write, settings:read/write), allowing tokens to be restricted to the minimum required access. Legacy tokens default to full access until scopes are explicitly configured.
Adds a standalone host agent for monitoring Linux, macOS, and Windows servers outside Proxmox/Docker estates. A new Servers workspace in the UI displays uptime, OS metadata, and capacity metrics from enrolled agents.
Includes a comprehensive token-management UI overhaul with scope presets, inline editing, and visual scope indicators.