Modern Proxmox LXC containers (cgroup v2 + systemd) don't expose the CTID
inside the guest namespace. The auto-detection in DetectLXCCTID() works
for older LXC setups and when the hostname is numeric, but fails for most
production containers, where users set custom hostnames.
Changes:
- Added PULSE_LXC_CTID environment variable override in router.go:490-495
- Graceful fallback: auto-detect first, then check env var, then show placeholder
- UI already handles missing CTID by showing "pct exec <ctid>" placeholder
This covers the common deployment scenarios (resolution order sketched below):
- Stock Proxmox LXC: Shows `pct exec <ctid>` placeholder (user substitutes manually)
- Custom hostname containers: Can set PULSE_LXC_CTID=171 in compose/systemd
- Numeric hostname containers: Auto-detected (backwards compatible)
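A minimal sketch of that resolution chain (helper name illustrative; imports os and the project's internal system package):

```go
// resolveCTID is a hypothetical helper mirroring the fallback order in
// router.go: auto-detect first, then the env var, then empty (placeholder).
func resolveCTID() string {
	if ctid := system.DetectLXCCTID(); ctid != "" {
		return ctid // auto-detected (numeric hostname, older LXC setups)
	}
	if ctid := os.Getenv("PULSE_LXC_CTID"); ctid != "" {
		return ctid // explicit override for custom-hostname containers
	}
	return "" // UI shows the "pct exec <ctid>" placeholder
}
```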
Related: FirstRunSetup.tsx already has graceful fallback (line 336-339)
- Added DetectDockerContainerName() to detect container name from hostname
- Extended /api/security/status to expose dockerContainerName field
- Updated FirstRunSetup to show actual container name when detected:
* Before: 'docker exec <container-name> cat /data/.bootstrap_token'
* After: 'docker exec pulse cat /data/.bootstrap_token'
This reduces friction: users no longer need to look up the container name.
Detection works when the Docker container is named (--name flag) and falls
back to the placeholder for auto-generated container IDs.
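A hedged sketch of the heuristic, inferred from the behavior described above rather than copied from the implementation (Docker sets the guest hostname to the short container ID unless overridden; imports os and regexp):

```go
func DetectDockerContainerName() string {
	host, err := os.Hostname()
	if err != nil {
		return ""
	}
	// Auto-generated container IDs look like "3f9c1a2b4d5e"; anything
	// else is assumed to be an operator-assigned name.
	if regexp.MustCompile(`^[0-9a-f]{12}$`).MatchString(host) {
		return "" // caller keeps the <container-name> placeholder
	}
	return host
}
```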
- Added DetectLXCCTID() to internal/system/container.go to detect Proxmox container ID
- Extended /api/security/status to expose inContainer and lxcCtid fields
- Updated FirstRunSetup to show most relevant command based on detected environment:
* LXC with CTID: Shows 'pct exec 171 -- cat /etc/pulse/.bootstrap_token'
* Docker: Shows 'docker exec <container-name> cat /data/.bootstrap_token'
* Bare metal: Shows 'cat /etc/pulse/.bootstrap_token'
- Collapsed alternative methods behind 'Show other retrieval methods' button
This addresses user feedback that showing all options was overwhelming.
Now users see the command most likely to work for their setup first,
with alternatives hidden but still accessible.
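The selection itself happens in FirstRunSetup.tsx from the detected API fields; expressed as a Go sketch for illustration:

```go
func tokenCommand(inContainer bool, lxcCtid string) string {
	switch {
	case lxcCtid != "": // LXC with a detected CTID
		return fmt.Sprintf("pct exec %s -- cat /etc/pulse/.bootstrap_token", lxcCtid)
	case inContainer: // Docker
		return "docker exec <container-name> cat /data/.bootstrap_token"
	default: // bare metal
		return "cat /etc/pulse/.bootstrap_token"
	}
}
```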
**Host Detection**:
- Now detects localhost by hostname and FQDN, not just IP
- Fixes an issue where nodes configured as https://hostname:8006 would skip
  localhost cleanup (API tokens, bind mounts, service removal)
**Systemd Sandbox**:
- Added /etc/pve and /etc/systemd/system to ReadWritePaths
- Allows cleanup script to modify Proxmox configs and systemd units
**Uninstaller Improvements**:
- Use UUID for transient unit names (prevents same-second collisions)
- Added --purge flag for complete removal
- Added --wait and --collect flags to capture exit code
- Now fails cleanup if the uninstaller exits non-zero (invocation sketched below)
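A sketch of that invocation (assuming the github.com/google/uuid package; the uninstaller path is illustrative, the systemd-run flags are real):

```go
unit := fmt.Sprintf("pulse-uninstall-%s", uuid.NewString())
cmd := exec.Command("systemd-run",
	"--unit", unit, // UUID suffix prevents same-second name collisions
	"--wait",       // block until the transient unit exits
	"--collect",    // reap the unit even when it fails
	"/opt/pulse/sensor-proxy/uninstall.sh", "--purge")
if err := cmd.Run(); err != nil {
	// systemd-run --wait propagates the service's exit status,
	// so a non-zero uninstaller exit now aborts the cleanup.
	return fmt.Errorf("uninstaller failed: %w", err)
}
```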
**Path Migration**:
- Fixed all /usr/local references to use /opt/pulse/sensor-proxy
- Updated forced command in SSH authorized_keys
- Updated self-heal script installer path
- Updated Go backend removal helpers (supports both new and legacy paths)
These fixes address Codex findings: hostname detection, sandbox permissions,
transient unit collisions, incomplete purging, and incomplete path migration.
Related to cleanup implementation testing.
## HTTP Server Fixes
- Add source IP middleware to enforce allowed_source_subnets
- Fix missing source subnet validation for external HTTP requests
- HTTP health endpoint now respects subnet restrictions
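A sketch of such a middleware, assuming allowed_source_subnets is parsed into []*net.IPNet at startup (names illustrative; imports net and net/http):

```go
func sourceSubnetMiddleware(allowed []*net.IPNet, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		ip := net.ParseIP(host)
		if err != nil || ip == nil {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		for _, subnet := range allowed {
			if subnet.Contains(ip) {
				next.ServeHTTP(w, r)
				return
			}
		}
		http.Error(w, "forbidden", http.StatusForbidden)
	})
}
```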
## Installer Improvements
- Auto-configure allowed_source_subnets with Pulse server IP
- Add cluster node hostnames to allowed_nodes (not just IPs)
- Fix node validation to accept both hostnames and IPs
- Add Pulse server reachability check before installation
- Add port availability check for HTTP mode
- Add automatic rollback on service startup failure
- Add HTTP endpoint health check after installation
- Fix config backup and deduplication (prevent duplicate keys)
- Fix IPv4 validation with loopback rejection
- Improve registration retry logic with detailed errors
- Add automatic LXC bind mount cleanup on uninstall
## Temperature Collection Fixes
- Add local temperature collection for self-monitoring nodes
- Fix node identifier matching (use hostname not SSH host)
- Fix JSON double-encoding in HTTP client response
Related to #XXX (temperature monitoring fixes)
Implements REST API endpoints to enable automatic registration of
temperature proxies during sensor-proxy installation.
API endpoints:
- POST /api/temperature-proxy/register
  - Accepts: hostname, proxy_url
  - Returns: authentication token
  - Finds matching PVE instance and configures proxy URL/token
  - No authentication required (called during installation)
- DELETE /api/temperature-proxy/unregister?hostname=X
  - Removes proxy configuration from PVE instance
  - Requires admin authentication
Implementation:
- Uses config.ConfigPersistence for loading/saving nodes.enc
- Matches PVE instances by hostname in Host field or ClusterEndpoints
- Generates cryptographically secure random tokens (32 bytes, base64)
- Atomic config updates (load → modify → save)
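The token-generation step maps directly onto the standard library; the function name and encoding variant (standard vs URL-safe base64) are assumptions:

```go
import (
	"crypto/rand"
	"encoding/base64"
)

func generateProxyToken() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf), nil
}
```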
Next step: Update install-sensor-proxy.sh to call registration API
Related to #571
Implements a "Remember Me" option that allows users to stay logged in
for 30 days instead of the default 24 hours. This addresses the pain
point of frequent re-authentication in LAN-only environments while
maintaining authentication security.
Backend changes:
- Add rememberMe field to login request handling
- Support variable session durations (24h default, 30d with Remember Me)
- Implement sliding session expiration that extends sessions on each
authenticated request using the original duration
- Store OriginalDuration in session data for proper sliding window
- Update session cookie MaxAge to match session duration
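A sketch of the sliding window, assuming a session struct that now carries OriginalDuration (names illustrative; imports time):

```go
type Session struct {
	ExpiresAt        time.Time
	OriginalDuration time.Duration
}

// Touch extends the session on each authenticated request.
func (s *Session) Touch(now time.Time) {
	d := s.OriginalDuration
	if d == 0 {
		d = 24 * time.Hour // legacy sessions predate the field
	}
	s.ExpiresAt = now.Add(d)
}
```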
Frontend changes:
- Add "Remember Me for 30 days" checkbox to login form
- Pass rememberMe flag in login request
- Improve UI with clear duration indication
Key features:
- Sessions extend automatically on each request (sliding window)
- Original duration preserved across session extension
- Backward compatible with existing sessions (legacy sessions work)
- Sessions persist across server restarts
This provides a better user experience for LAN deployments without
compromising security by completely disabling authentication.
CRITICAL SECURITY FIX: The /download/pulse-host-agent endpoint was directly
concatenating user-supplied platform and arch query parameters into file paths
without validation, allowing path traversal attacks.
An attacker could request:
/download/pulse-host-agent?platform=../../etc/passwd
to read arbitrary files from the container filesystem.
Fix: Add input validation to only allow alphanumeric characters and hyphens
in platform/arch parameters before using them in file paths.
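A sketch of the check (the exact pattern may differ; the point is to validate before any path construction):

```go
var paramPattern = regexp.MustCompile(`^[a-zA-Z0-9-]+$`)

func validParam(p string) bool {
	return p != "" && paramPattern.MatchString(p)
}

// In the handler, before building the file path:
//	if !validParam(platform) || !validParam(arch) {
//		http.Error(w, "invalid platform/arch", http.StatusBadRequest)
//		return
//	}
```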
Related: Codex security audit identified this during pre-release review
When a request for /login (or any other frontend route) comes in without
proper Accept headers (like from curl or some browsers), the server was
returning 'Authentication required' text instead of serving the frontend HTML.
This is because the router was checking authentication before serving ANY
non-API route, including frontend pages like /login, /dashboard, etc.
The fix: Frontend routes should always be served without backend auth checks.
The authentication logic runs in the frontend JavaScript after the page loads.
Backend auth should only block:
- API endpoints (/api/*)
- WebSocket connections (/ws*, /socket.io/*)
- Download endpoints (/download/*)
- Special scripts (/install-*.sh, etc.)
All other routes are frontend pages that need to be served to everyone so
the login page can load and handle auth in the browser.
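A sketch of that classification using the prefixes listed above (helper name illustrative; imports strings):

```go
func requiresBackendAuth(path string) bool {
	switch {
	case strings.HasPrefix(path, "/api/"),
		strings.HasPrefix(path, "/ws"),
		strings.HasPrefix(path, "/socket.io/"),
		strings.HasPrefix(path, "/download/"),
		strings.HasPrefix(path, "/install-") && strings.HasSuffix(path, ".sh"):
		return true
	}
	return false // frontend route: serve HTML, let the SPA handle auth
}
```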
This fixes the integration tests where Playwright couldn't see the login
form because the server was rejecting the /login request before serving HTML.
Related to #695 (release workflow integration tests)
- Add job queue system to ensure only one update runs at a time
- Add Server-Sent Events (SSE) for real-time push updates
- Increase rate limit from 20/min to 60/min for update endpoints
- Add unit tests for queue and SSE functionality
- Frontend: Update modal now uses SSE with polling fallback
Eliminates: 429 rate limit errors, duplicate modals, race conditions
Related to #671
This commit implements a comprehensive refactoring of the update system
to address race conditions, redundant polling, and rate limiting issues.
Backend changes:
- Add job queue system to ensure only ONE update runs at a time
- Implement Server-Sent Events (SSE) for real-time update progress
- Add rate limiting to /api/updates/status (5-second minimum per client)
- Create SSE broadcaster for push-based status updates
- Integrate job queue with update manager for atomic operations
- Add comprehensive unit tests for queue and SSE components
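The heart of the queue is an atomic claim on a single job slot; a minimal sketch (the real queue also tracks job metadata; imports sync):

```go
type updateQueue struct {
	mu      sync.Mutex
	running bool
}

// tryStart claims the slot; callers that get false report "update in progress".
func (q *updateQueue) tryStart() bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.running {
		return false
	}
	q.running = true
	return true
}

func (q *updateQueue) finish() {
	q.mu.Lock()
	q.running = false
	q.mu.Unlock()
}
```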
Frontend changes:
- Update UpdateProgressModal to use SSE as primary mechanism
- Implement automatic fallback to polling when SSE unavailable
- Maintain backward compatibility with existing update flow
- Clean up SSE connections on component unmount
API changes:
- Add new endpoint: GET /api/updates/stream (SSE)
- Enhance /api/updates/status with client-based rate limiting
- Return cached status with appropriate headers when rate limited
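A sketch of the SSE endpoint; the broadcaster's Subscribe/Unsubscribe API and the payload shape are assumptions:

```go
func handleUpdateStream(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	events := broadcaster.Subscribe() // hypothetical: returns chan string
	defer broadcaster.Unsubscribe(events)
	for {
		select {
		case <-r.Context().Done():
			return // client disconnected
		case status := <-events:
			fmt.Fprintf(w, "data: %s\n\n", status) // SSE wire format
			flusher.Flush()
		}
	}
}
```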
Benefits:
- Eliminates 429 rate limit errors during updates
- Only one update job can run at a time (prevents race conditions)
- Real-time updates via SSE reduce unnecessary polling
- Graceful degradation to polling when SSE unavailable
- Better resource utilization and reduced server load
Testing:
- All existing tests pass
- New unit tests for queue and SSE functionality
- Integration tests verify complete update flow
This commit addresses three recurring issues with the update system:
1. **Checksum mismatches (v4.27.0, v4.28.0):**
- Root cause: Release process uploads checksums.txt first, but if artifacts
are rebuilt after that upload, checksums become stale
- Fix: Update RELEASE_CHECKLIST.md to REQUIRE running validate-release.sh
before publishing (step 9, non-negotiable)
- The validation script exists and catches these errors, but wasn't being
enforced in the release process
2. **Duplicate error modals:**
- Root cause: UpdateProgressModal rendered in both App.tsx
(GlobalUpdateProgressWatcher) and UpdateBanner.tsx
- Fix: Remove UpdateProgressModal from UpdateBanner.tsx
- GlobalUpdateProgressWatcher automatically shows the modal when updates
start, so the banner's modal is redundant
3. **Rate limiting too strict:**
- Root cause: UpdateProgressModal polls /api/updates/status every 2 seconds
(30 req/min), but rate limit was 20/min
- Fix: Increase UpdateEndpoints rate limit from 20/min to 60/min
- Allows modal to poll without hitting rate limits during updates
These were all manual process errors and configuration issues, not code bugs.
The validation script enforcement prevents future checksum mismatches.
The first-run setup UI was displaying incorrect bootstrap token paths for
Docker deployments. It showed `/etc/pulse/.bootstrap_token` regardless of
deployment type, but Docker containers use `/data/.bootstrap_token` by
default (via PULSE_DATA_DIR env var).
Changes:
- Extended `/api/security/status` endpoint to include `bootstrapTokenPath`
and `isDocker` fields when a bootstrap token is active
- Updated FirstRunSetup component to fetch and display the correct path
dynamically based on actual deployment configuration
- For Docker deployments, UI now shows both `docker exec` command and
in-container command
- Falls back to showing both standard and Docker paths if API data
unavailable (backward compatibility)
This fix ensures users always see the correct command for their specific
deployment, including custom PULSE_DATA_DIR configurations.
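A sketch of the path resolution the endpoint now reports (default and env var as described above):

```go
func bootstrapTokenPath() string {
	if dataDir := os.Getenv("PULSE_DATA_DIR"); dataDir != "" {
		return filepath.Join(dataDir, ".bootstrap_token") // Docker: /data by default
	}
	return "/etc/pulse/.bootstrap_token" // bare metal / LXC default
}
```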
Users upgrading from v4.25 (where DISABLE_AUTH actually disabled auth) to
v4.27.1 (where DISABLE_AUTH is ignored but triggers a deprecation warning)
were stuck in a catch-22:
- They had no credentials (old version had auth disabled)
- DISABLE_AUTH detection incorrectly required authentication
- Setup wizard returned 401, preventing first credential creation
- Could not complete setup to create credentials and remove flag
Root cause: When DISABLE_AUTH was detected, the code set forceRequested=true
which triggered the authentication requirement even when authConfigured=false.
Fix: Only require authentication when credentials actually exist. When no
auth is configured, allow the bootstrap token flow regardless of whether
DISABLE_AUTH is detected.
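The corrected gate reduces to a single condition; a sketch with illustrative names:

```go
// Before: authConfigured || forceRequested (DISABLE_AUTH detection locked
// credential-less upgraders out). After: only existing credentials gate setup.
func setupRequiresAuth(authConfigured, forceRequested bool) bool {
	_ = forceRequested // still logged as a deprecation warning, no longer gates
	return authConfigured
}
```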
This lets users upgrade from legacy DISABLE_AUTH deployments by using the
bootstrap token to create their first credentials, then removing the flag.
The diagnostic code was warning ALL deployments using /run/pulse-sensor-proxy
socket path to "remove and re-add" their configuration to use /mnt/pulse-proxy
instead. This was incorrect for Docker deployments where /run is the correct
and documented mount path (see docker-compose.yml line 15).
The warning was only meant for LXC containers where the managed mount at
/mnt/pulse-proxy is preferred over a legacy hand-crafted /run mount.
Fix: Only show the warning in non-Docker environments (check PULSE_DOCKER env).
Docker deployments correctly use /run/pulse-sensor-proxy per compose file.
Impact: Docker users were seeing confusing diagnostic warnings telling them
to reconfigure a correct setup.
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.
Key changes:
**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
- Critical alerts for degraded arrays or arrays with failed devices
- Warning alerts for rebuilding/resyncing arrays with progress tracking
- Auto-clear alerts when arrays return to healthy state
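A hedged sketch of the --detail parsing; the real parser covers more fields (device counts, rebuild speed), but mdadm's "Key : Value" line format keeps the core simple (imports strings, strconv):

```go
func parseDetail(output string) (state string, rebuildPct float64) {
	for _, line := range strings.Split(output, "\n") {
		key, value, found := strings.Cut(strings.TrimSpace(line), " : ")
		if !found {
			continue
		}
		switch key {
		case "State":
			state = value // e.g. "clean", "clean, degraded", "active, recovering"
		case "Rebuild Status":
			// e.g. "45% complete"
			if pct, _, ok := strings.Cut(value, "%"); ok {
				rebuildPct, _ = strconv.ParseFloat(pct, 64)
			}
		}
	}
	return state, rebuildPct
}
```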
**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
- Array status (clean/degraded/recovering) with color-coded indicators
- Device counts (active/total/failed/spare)
- Rebuild progress percentage and speed when applicable
- Green for healthy, amber for rebuilding, red for degraded
**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)
**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress
All builds pass successfully. RAID monitoring is automatic and best-effort:
if mdadm is not installed or no arrays exist, the host agent continues
reporting other metrics normally.
Related to #676
Adds build support for 32-bit x86 (i386/i686) and ARMv6 (older Raspberry Pi models) architectures across all agents and install scripts.
Changes:
- Add linux-386 and linux-armv6 to build-release.sh builds array
- Update Dockerfile to build docker-agent, host-agent, and sensor-proxy for new architectures
- Update all install scripts to detect and handle i386/i686 and armv6l architectures
- Add architecture normalization in router download endpoints
- Update update manager architecture mapping
- Update validate-release.sh to expect 24 binaries (was 18)
This enables Pulse agents to run on older/legacy hardware including 32-bit x86 systems and Raspberry Pi Zero/Zero W devices.
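A sketch of the normalization applied in the download endpoints; the exact alias set is an assumption:

```go
func normalizeArch(arch string) string {
	switch strings.ToLower(arch) {
	case "i386", "i486", "i586", "i686", "x86":
		return "386"
	case "armv6l":
		return "armv6"
	case "x86_64":
		return "amd64"
	case "aarch64":
		return "arm64"
	}
	return arch // already a canonical Go arch name
}
```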
Allow homelab users to send webhooks to internal services while maintaining security defaults.
Changes:
- Add webhookAllowedPrivateCIDRs field to SystemSettings (persistent config)
- Implement CIDR parsing and validation in NotificationManager
- Convert ValidateWebhookURL to instance method to access allowlist
- Add UI controls in System Settings for configuring trusted CIDR ranges
- Maintain strict security by default (block all private IPs)
- Keep localhost, link-local, and cloud metadata services blocked regardless of allowlist
- Re-validate on both config save and webhook delivery (DNS rebinding protection)
- Add comprehensive tests for CIDR parsing and IP matching
Backend:
- UpdateAllowedPrivateCIDRs() parses comma-separated CIDRs with validation
- Support for bare IPs (auto-converts to /32 or /128)
- Thread-safe allowlist updates with RWMutex
- Logging when allowlist is updated or used
- Validation errors prevent invalid CIDRs from being saved
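A sketch of the parse step, assuming comma-separated input with bare IPs gaining a /32 (or /128) suffix as described (imports net, strings, fmt):

```go
func parseAllowlist(raw string) ([]*net.IPNet, error) {
	var nets []*net.IPNet
	for _, entry := range strings.Split(raw, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if !strings.Contains(entry, "/") {
			if strings.Contains(entry, ":") {
				entry += "/128" // bare IPv6 address
			} else {
				entry += "/32" // bare IPv4 address
			}
		}
		_, ipnet, err := net.ParseCIDR(entry)
		if err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %w", entry, err)
		}
		nets = append(nets, ipnet)
	}
	return nets, nil
}
```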
Frontend:
- New "Webhook Security" section in System Settings
- Input field with examples and helpful placeholder text
- Real-time unsaved changes tracking
- Loads and saves allowlist via system settings API
Security:
- Default behavior unchanged (all private IPs blocked)
- Explicit opt-in required via configuration
- Localhost (127/8) always blocked
- Link-local (169.254/16) always blocked
- Cloud metadata services always blocked
- DNS resolution checked at both save and send time
Testing:
- Tests for CIDR parsing (valid/invalid inputs)
- Tests for IP allowlist matching
- Tests for bare IP address handling
- Tests for security boundaries (localhost, link-local remain blocked)
Related to #673
Related to #670, #657
The fix in v4.26.5 (commit 59a97f2e3) attempted to resolve storage disappearing
by preferring hostnames over IPs when TLS hostname verification is required
(VerifySSL=true and no fingerprint). However, that fix was ineffective because
the cluster discovery code was populating BOTH the Host and IP fields with the
IP address.
**Root Cause:**
In internal/api/config_handlers.go, the detectPVECluster function was setting:
- endpoint.Host = schemePrefix + clusterNode.IP (when IP was available)
- endpoint.IP = clusterNode.IP
This meant both fields contained the same IP address. When the monitoring code
tried to prefer endpoint.Host for TLS validation (internal/monitoring/monitor.go:
361-368), it was still getting an IP, causing certificate validation to fail
with "certificate is valid for pve01.example.com, not 10.0.0.44".
**Solution:**
Separate the Host and IP fields properly during cluster discovery:
- endpoint.Host = hostname (e.g., "https://pve01:8006") for TLS validation
- endpoint.IP = IP address (e.g., "10.0.0.44") for DNS-free connections
The existing logic in clusterEndpointEffectiveURL() can now correctly choose
between them based on TLS requirements.
**Impact:**
Users with VerifySSL=true who upgraded to v4.26.1-v4.26.5 and lost storage
visibility should now see storage, VM disks, and backups again after this fix.
The setup script template had 44 %s placeholders, but the fmt.Sprintf call
arguments were out of order starting at position 15. This caused the Pulse
URL to be inserted where the token name should be, resulting in errors like:
Token ID: pulse-monitor@pam!http://192.168.0.44:7655
Instead of the correct format:
Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916
Changes:
- Escaped %s in printf helper (line 3949) so it doesn't consume arguments
- Reordered fmt.Sprintf arguments (lines 4727-4732) to match template order
- Removed 2 extra pulseURL arguments that were causing the shift
This fix ensures all 44 placeholders receive the correct values in order.
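A self-contained illustration of the escaping rule (values taken from the example above):

```go
package main

import "fmt"

func main() {
	// A literal %s destined for the generated shell script must be
	// written %%s, or Sprintf consumes an argument for it and every
	// later placeholder shifts by one position.
	tmpl := "log() { printf '%%s\\n' \"$1\"; }\nToken ID: %s!%s\n"
	fmt.Print(fmt.Sprintf(tmpl, "pulse-monitor@pam", "pulse-192-168-0-44"))
}
```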
The download endpoint had a dangerous fallback that silently served the
wrong binary when the requested platform/arch combination was missing.
If a Docker image shipped without Windows binaries, the installer would
receive a Linux ELF instead of a Windows PE, causing ERROR_BAD_EXE_FORMAT.
Changes:
- Download handler now operates in strict mode when platform+arch are
specified, returning 404 instead of serving mismatched binaries
- PowerShell installer validates PE header (MZ signature)
- PowerShell installer verifies PE machine type matches requested arch
- PowerShell installer fetches and verifies SHA256 checksums
- PowerShell installer shows diagnostic info: OS arch, download URL,
file size for better troubleshooting
This prevents silent failures and provides clear error messages when
binaries are missing or corrupted.
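The installer performs its checks in PowerShell; an equivalent Go sketch of what gets validated (imports encoding/binary, errors, fmt):

```go
func validatePE(b []byte, wantMachine uint16) error {
	if len(b) < 0x40 || b[0] != 'M' || b[1] != 'Z' {
		return errors.New("not a PE file (missing MZ signature)")
	}
	off := binary.LittleEndian.Uint32(b[0x3C:]) // e_lfanew: offset of PE header
	if int(off)+6 > len(b) || string(b[off:off+4]) != "PE\x00\x00" {
		return errors.New("missing PE signature")
	}
	machine := binary.LittleEndian.Uint16(b[off+4:]) // COFF Machine field
	if machine != wantMachine { // e.g. 0x8664 amd64, 0xAA64 arm64, 0x014C i386
		return fmt.Errorf("wrong architecture: machine=0x%04X", machine)
	}
	return nil
}
```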
Fixed three P1 goroutine/memory leaks that prevented proper resource cleanup:
1. Recovery Tokens goroutine leak
- Cleanup routine runs forever without stop mechanism
- Added stopCleanup channel and Stop() method
- Cleanup loop now uses select with stopCleanup case
2. Rate Limiter goroutine leak
- Cleanup routine runs forever without stop mechanism
- Added stopCleanup channel and Stop() method
- Changed from 'for range ticker.C' to select with stopCleanup case
3. OIDC Service memory leak (DoS vector)
- Abandoned OIDC flows never cleaned up
- State entries accumulate unboundedly
- Added cleanup routine with 5-minute ticker
- Periodically removes expired state entries (10min TTL)
- Added Stop() method for proper shutdown
All three fixes follow a consistent pattern (sketched after this list):
- Add stopCleanup chan struct{} field
- Initialize in constructor
- Use select with ticker and stopCleanup cases
- Close channel in Stop() method to signal goroutine exit
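The shared pattern, sketched once (each service embeds its own field and interval; the sync.Once guard reflects the double-Stop hardening described in a later commit):

```go
type cleaner struct {
	stopCleanup chan struct{} // make(chan struct{}) in the constructor
	stopOnce    sync.Once
}

func (c *cleaner) run(interval time.Duration, sweep func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			sweep() // remove expired entries
		case <-c.stopCleanup:
			return // Stop() closed the channel: goroutine exits
		}
	}
}

func (c *cleaner) Stop() {
	c.stopOnce.Do(func() { close(c.stopCleanup) })
}
```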
Impact:
- Prevents goroutine leaks during service restarts/reloads
- Prevents memory exhaustion from abandoned OIDC login attempts
- Enables proper cleanup in tests and graceful shutdown
This commit addresses 5 critical P0 bugs that cause security vulnerabilities, crashes, and data corruption:
**P0-1: Recovery Tokens Replay Attack Vulnerability** (recovery_tokens.go:153-159)
- **SECURITY CRITICAL**: Single-use recovery tokens could be replayed
- **Problem**: Lock upgrade race: two concurrent requests both pass the initial Used check
1. Both acquire RLock, see token.Used = false
2. Both release RLock
3. Both acquire Lock and mark token.Used = true
4. Both return true - TOKEN REUSED
- **Impact**: Attacker with intercepted token can use it multiple times
- **Fix**: Re-check token.Used after acquiring write lock (TOCTOU prevention)
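A sketch of the re-check (assuming the store keeps *Token pointers; expiry handling elided):

```go
func (s *Store) Consume(id string) bool {
	s.mu.RLock()
	t, ok := s.tokens[id]
	used := ok && t.Used
	s.mu.RUnlock()
	if !ok || used {
		return false // cheap rejection on the read path
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	if t.Used { // re-check: a concurrent request may have won the race
		return false
	}
	t.Used = true // decided while holding the exclusive lock
	return true
}
```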
**P0-2: WebSocket Hub Concurrent Map Panic** (hub.go:345-347, 376-378)
- **Problem**: Initial state goroutine reads h.clients map without lock
- Line 345: `if _, ok := h.clients[client]` (NO LOCK)
- Main loop writes to h.clients with lock (line 326, 394)
- **Impact**: "fatal error: concurrent map read and write" crashes hub
- **Fix**: Acquire RLock before all client map reads in goroutine
**P0-3: WebSocket Send on Closed Channel Panic** (hub.go:348, 380)
- **Problem**: Check client exists, then send - channel can close between
- **Impact**: "send on closed channel" panic crashes hub
- **Fix**: Hold RLock during both check and send (defensive select already present)
**P0-4: CSRF Store Shutdown Data Corruption** (csrf_store.go:189-196)
- **Problem**: Stop() calls save() after signaling worker. Both hold only RLock
- Worker's final save writes to csrf_tokens.json.tmp
- Stop()'s save writes to same file concurrently
- **Impact**: Corrupted/truncated csrf_tokens.json on shutdown
- **Fix**: Added saveMu mutex to serialize all disk writes
**P0-5: CSRF Store Deadlock on Double-Stop** (csrf_store.go:103-108)
- **Problem**: stopChan unbuffered, no sync.Once guard, uses send not close
- **Impact**: Second Stop() call blocks forever waiting for receiver
- **Fix**:
- Added sync.Once field stopOnce
- Changed to close(stopChan) within stopOnce.Do()
- Prevents double-close panic and deadlock
All fixes maintain backwards compatibility. The recovery token fix is particularly critical as it closes a security vulnerability allowing replay attacks on password reset flows.
This commit addresses 4 P1 important issues and 1 P2 optimization in infrastructure components:
**P1-1: Missing Panic Recovery in Discovery Service** (service.go:172-195, 499-542)
- **Problem**: No panic recovery in Start(), ForceRefresh(), SetSubnet() goroutines
- **Impact**: Silent service death if scan panics, broken discovery with no monitoring
- **Fix**:
- Wrapped initial scan goroutine with defer/recover (lines 172-182)
- Wrapped scanLoop goroutine with defer/recover (lines 185-195)
- Wrapped ForceRefresh scan with defer/recover (lines 499-509)
- Wrapped SetSubnet scan with defer/recover (lines 532-542)
- All log panics with stack traces for debugging
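The recovery wrapper, sketched as a reusable helper (the commit inlines defer/recover at each site instead; imports log, runtime/debug):

```go
func safeGo(name string, fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				log.Printf("panic in %s: %v\n%s", name, r, debug.Stack())
			}
		}()
		fn()
	}()
}
```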
**P1-2: Missing Panic Recovery in Config Watcher Callback** (watcher.go:546-556)
- **Problem**: User-provided onMockReload callback could panic and crash watcher
- **Impact**: Panicking callback kills watcher goroutine, no config updates
- **Fix**: Wrapped callback invocation with defer/recover and stack trace logging
**P1-3: Session Store Stop() Using Send Instead of Close** (session_store.go:16-84)
- **Problem**: Stop() used channel send which blocks if nobody reads
- **Impact**: Stop() hangs if backgroundWorker already exited
- **Fix**:
- Added sync.Once field stopOnce (line 22)
- Changed Stop() to use close() within stopOnce.Do() (lines 80-84)
- Prevents double-close panic and ensures all readers are signaled
**P2-1: Backup Cleanup Inefficient O(n²) Sort** (persistence.go:1424-1427)
- **Problem**: Bubble sort used to sort backups by modification time
- **Impact**: Inefficient for large backup counts (>100 files)
- **Fix**:
- Replaced bubble sort with sort.Slice() using an O(n log n) algorithm
- Added "sort" import (line 9)
- Maintains same oldest-first ordering for deletion logic
All fixes add defensive programming without changing external behavior. Panic recovery ensures services continue operating even with bugs, while the sort optimization reduces cleanup time for backup-heavy environments.
Backend:
- Add IsEncryptionEnabled() method to ConfigPersistence
- Include encryption status in /api/notifications/health response
- Allows frontend to warn when credentials are stored in plaintext
Frontend:
- Update NotificationHealth type to include encryption.enabled field
- Frontend can now display warnings when encryption is disabled
This addresses the P2 requirement for encryption visibility, allowing
operators to know when notification credentials are not encrypted at rest.
Related to #630
Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.
The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
from the command line, confirming the agent was functional
Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.
Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
* Legacy (Proxmox <8.3): integer (0 or 1)
* Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
we only query when the agent is actually responding
This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
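A sketch of the dual-format unmarshaler, following the field names above (imports encoding/json):

```go
type VMAgentField struct {
	Value int
}

func (a *VMAgentField) UnmarshalJSON(data []byte) error {
	// Legacy format (Proxmox <8.3): plain integer 0 or 1.
	var n int
	if err := json.Unmarshal(data, &n); err == nil {
		a.Value = n
		return nil
	}
	// Modern format (Proxmox 8.3+): {"enabled":N,"available":N}.
	var obj struct {
		Enabled   int  `json:"enabled"`
		Available *int `json:"available"`
	}
	if err := json.Unmarshal(data, &obj); err != nil {
		return err
	}
	if obj.Available != nil {
		a.Value = *obj.Available // prefer "available": agent actually responding
	} else {
		a.Value = obj.Enabled
	}
	return nil
}
```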
Addresses two issues preventing configuration backup/restore:
1. Export passphrase validation mismatch: the UI only validated the 12+ char
   requirement when using a custom passphrase, but the backend always enforced
   it. Users with shorter login passwords saw unexplained failures.
   - Frontend now validates that all passphrases meet the 12-char minimum
   - Clear error message suggests a custom passphrase if the login password is too short
2. Import data parsing failed silently: the frontend sent `exportData.data`,
   which was undefined for legacy/CLI backups (raw base64 strings).
   The backend rejected these with no logs.
   - Frontend now handles both formats: {status, data} and raw strings
   - Backend logs validation failures for easier troubleshooting
Related to #646 where user reported "error after entering password" with
no container logs. These changes ensure proper validation feedback and
make the backup system resilient to different export formats.
The bootstrap token security requirement was added proactively but
lacked discoverability, causing user friction during first-run setup.
These improvements make the token easier to find while maintaining
the security benefit.
Improvements:
- Display bootstrap token prominently in startup logs with ASCII box
(previously: single line log message)
- Add `pulse bootstrap-token` CLI command to display token on demand
(Docker: docker exec <container> /app/pulse bootstrap-token)
- Improve error messages in quick-setup API to show exact commands
for retrieving token when missing or invalid
- Error messages now include both Docker and bare metal examples
User experience improvements:
- Token visible in `docker logs` output immediately
- Clear instructions printed with token
- Helpful error messages if token is wrong/missing
- CLI helper for operators who need to retrieve token later
Security unchanged:
- Bootstrap token still required for first-run setup
- Token still auto-deleted after successful setup
- No bypass mechanism added
Related to discussion about bootstrap token UX friction.
Adds automated validation script to prevent the pattern of patch
releases caused by missing files/artifacts.
scripts/validate-release.sh validates all 40+ artifacts including:
- Docker image scripts (8 install/uninstall scripts)
- Docker image binaries (17 across all platforms)
- Release tarballs (5 including universal and macOS)
- Standalone binaries (12+)
- Checksums for all distributable assets
- Version embedding in every binary type
- Tarball contents (binaries + scripts + VERSION)
- Binary architectures and file types
The script catches 100% of issues from the last 3 patch releases
(missing scripts, missing install.sh, missing binaries, broken
version embedding).
Updated RELEASE_CHECKLIST.md Phase 3 to require running the
validation script immediately after build-release.sh and before
proceeding to Docker build/publish phases.
Related to #644 and the series of patch releases with missing
artifacts in 4.26.x.
Hashed static assets (e.g., index-BXHytNQV.js, index-TvhSzimt.css) are
now cached for 1 year with immutable flag since content hash changes
when files change.
Benefits:
- Faster page loads on subsequent visits
- Reduced server bandwidth
- Better user experience on demo and production instances
Only index.html and non-hashed assets remain uncached to ensure
users always get the latest version.
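A sketch of the header logic; the regexp matches Vite-style hashed filenames like the examples above:

```go
var hashedAsset = regexp.MustCompile(`-[A-Za-z0-9_]{8,}\.(js|css|woff2?|svg)$`)

func setCacheHeaders(w http.ResponseWriter, path string) {
	if hashedAsset.MatchString(path) {
		// Content hash changes whenever the file does, so cache hard.
		w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		return
	}
	w.Header().Set("Cache-Control", "no-cache") // index.html and friends
}
```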
Demo mode now permits login/logout and OIDC authentication endpoints
while still blocking all modification requests. This allows demo
instances to require authentication while remaining read-only.
Authentication endpoints are effectively read-only: they verify credentials
and issue session tokens without modifying monitored or persisted state.
All other POST/PUT/DELETE/PATCH operations remain blocked.
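A sketch of the gate (paths illustrative):

```go
func demoAllowed(r *http.Request) bool {
	if r.Method == http.MethodGet || r.Method == http.MethodHead {
		return true // read-only requests were always permitted
	}
	switch r.URL.Path {
	case "/api/login", "/api/logout", "/api/oidc/callback":
		return true // credential verification and session issuance
	}
	return false // every other modification request stays blocked
}
```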
Related to #617
This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.
Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers
The container detection was inconsistent: monitor.go checked both flags,
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
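The now-shared check, sketched (system.InContainer() is the existing heuristic helper):

```go
func runningInContainer() bool {
	return os.Getenv("PULSE_DOCKER") == "true" || system.InContainer()
}
```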