This change fixes backup-age alert notifications to display VM/CT names
instead of just "VMID XXX" in multi-cluster environments where backups
are stored on PBS.
Changes:
- Store all guests per VMID (not just first match) to handle VMID collisions across clusters
- Persist last-known guest names/types in metadata store for deleted VMs
- Enrich backup correlation with persisted metadata when live inventory is empty
- Update CheckBackups to handle multiple VMID matches intelligently
The fix addresses two scenarios:
1. Multiple PVE clusters with same VMID backing up to one PBS
2. VMs deleted from Proxmox but backups still exist on PBS
Backup-age alerts will now show proper VM/CT names when:
- A unique guest exists with that VMID (live or persisted)
- Multiple guests share a VMID (uses first match, consistent with current behavior)
When truly ambiguous (multiple live VMs, same VMID, no way to determine origin),
the alert gracefully falls back to showing "VMID XXX".
Proxmox VE 9.x removed support for the 'ds' parameter in RRD endpoints
(/nodes/{node}/rrddata and /nodes/{node}/lxc/{vmid}/rrddata). When Pulse
sent RRD requests with ds=memused,memavailable,etc., Proxmox responded with:
API error 400: {"errors":{"ds":"property is not defined in schema..."}}
This caused cluster nodes to be repeatedly marked unhealthy, which cascaded
into storage polling failures showing 'All cluster endpoints are unhealthy'
even though the nodes were actually healthy and reachable.
Changes:
- Added check in cluster_client.go executeWithFailover to recognize the ds
parameter error as a capability issue rather than node health failure
- Nodes with this error no longer get marked unhealthy
- Storage polling and other operations now succeed even when RRD calls fail
- The RRD data will be unavailable but core monitoring continues
This fix maintains backward compatibility with older Proxmox versions while
gracefully handling the API change in Proxmox 9.x.
The config backup export and import functions were incorrectly parsing
the CSRF token from cookies, causing "Export requires authentication"
errors even when users were properly logged in.
Two issues were fixed:
1. Cookie parsing used `.split('=')[1]` which truncated tokens containing
`=` padding characters (common in base64 tokens). Fixed by using
`.split('=').slice(1).join('=')` to preserve the full value.
2. Missing URL decoding of the cookie value. Browsers percent-encode
cookie values, so `=` becomes `%3D`. The backend then failed to match
the encoded token hash. Fixed by adding `decodeURIComponent()`.
Both fixes mirror the pattern already used in apiClient.ts.
The Pushover webhook template now honors user-defined custom fields
for sound, priority, and device. Previously, these fields were
hardcoded based on alert level, ignoring any custom values set by
users in the UI.
Changes:
- sound: Uses CustomFields.sound if provided, otherwise falls back to
level-based default (critical=siren, warning=tugboat, else=pushover)
- priority: Uses CustomFields.priority if provided, otherwise falls back
to level-based default (critical=1, warning=0, else=-1)
- device: Uses CustomFields.device if provided, otherwise falls back to
ResourceName
Updated setup instructions to document optional custom fields for sound,
priority, and device configuration.
This allows users to customize Pushover notification behavior without
editing webhook templates, consistent with Pulse's maintainability goals.
The custom display name feature added in cd627f33c had a critical bug where
the backend successfully stored custom names but the frontend never received
them, making the feature appear non-functional.
Root cause:
- DockerHost.CustomDisplayName was stored in backend state (models.go:201)
- SetDockerHostCustomDisplayName() correctly updated the field
- BUT DockerHostFrontend struct was missing customDisplayName field
- AND ToFrontend() converter didn't copy CustomDisplayName
- Result: WebSocket state broadcasts stripped out the custom name
When users edited a Docker host display name:
- API returned 200 OK ✓
- Success notification appeared ✓
- Edit state cleared ✓
- But subsequent state broadcasts lacked customDisplayName ✗
- UI continued showing original name ✗
Fix:
- Add CustomDisplayName field to DockerHostFrontend (models_frontend.go:105)
- Copy d.CustomDisplayName in ToFrontend() converter (converters.go:204)
- Now custom display names properly propagate to frontend via WebSocket
The feature now works as originally intended - custom names persist across
agent reconnections and display correctly in the UI.
The temperature collection in pulse-host-agent was broken on all Linux
distributions due to an incorrect platform check.
Root cause:
- collectTemperatures() checked `if a.platform != "linux"` at agent.go:316
- normalisePlatform() returns the raw distro name from gopsutil (debian, ubuntu, pve)
- This caused temperature collection to be skipped on ALL Linux hosts
Fix:
- Changed check to `if runtime.GOOS != "linux"` which correctly identifies Linux
- runtime.GOOS returns "linux" regardless of distribution
Also fixed documentation typo:
- Changed "Servers tab" to "Hosts tab" in HOST_AGENT.md and TEMPERATURE_MONITORING.md
- Reported by user in issue #661 comments
Testing:
- Verified build succeeds
- Confirmed runtime.GOOS returns "linux" on Linux systems
Related to #661
The setup script template had 44 %s placeholders, but the fmt.Sprintf call
arguments were out of order starting at position 15. This caused the Pulse
URL to be inserted where the token name should be, resulting in errors like:
Token ID: pulse-monitor@pam!http://192.168.0.44:7655
Instead of the correct format:
Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916
Changes:
- Escaped %s in printf helper (line 3949) so it doesn't consume arguments
- Reordered fmt.Sprintf arguments (lines 4727-4732) to match template order
- Removed 2 extra pulseURL arguments that were causing the shift
This fix ensures all 44 placeholders receive the correct values in order.
Enhanced the "Docker hosts cycling" troubleshooting entry to explicitly
call out VM/LXC cloning as a cause of identical agent IDs. Added specific
remediation steps for regenerating machine IDs on cloned systems.
This addresses the resolution path discovered in discussion #648 where a
user cloned a Proxmox LXC and encountered cycling behavior even with
separate API tokens because the agent IDs were duplicated.
The download endpoint had a dangerous fallback that silently served the
wrong binary when the requested platform/arch combination was missing.
If a Docker image shipped without Windows binaries, the installer would
receive a Linux ELF instead of a Windows PE, causing ERROR_BAD_EXE_FORMAT.
Changes:
- Download handler now operates in strict mode when platform+arch are
specified, returning 404 instead of serving mismatched binaries
- PowerShell installer validates PE header (MZ signature)
- PowerShell installer verifies PE machine type matches requested arch
- PowerShell installer fetches and verifies SHA256 checksums
- PowerShell installer shows diagnostic info: OS arch, download URL,
file size for better troubleshooting
This prevents silent failures and provides clear error messages when
binaries are missing or corrupted.
Implements temperature monitoring in pulse-host-agent to support Docker-in-VM
deployments where the sensor proxy socket cannot cross VM boundaries.
Changes:
- Create internal/sensors package with local collection and parsing
- Add temperature collection to host agent (Linux only, best-effort)
- Support CPU package/core, NVMe, and GPU temperature sensors
- Update TEMPERATURE_MONITORING.md with Docker-in-VM setup instructions
- Update HOST_AGENT.md to document temperature feature
The host agent now automatically collects temperature data on Linux systems
with lm-sensors installed. This provides an alternative path for temperature
monitoring when running Pulse in a VM, avoiding the unix socket limitation.
Temperature collection is best-effort and fails gracefully if lm-sensors is
not available, ensuring other metrics continue to be reported.
Related to #661
Users with 8-11 character passwords could not export/restore config backups
because the export encryption requires 12+ character passphrases for security,
but the password creation UI only enforced an 8-character minimum.
This created a confusing UX where users with short passwords saw validation
errors when trying to export backups, with the only solution being to use a
custom passphrase or change their password.
Root cause:
- FirstRunSetup and ChangePasswordModal allowed 8+ char passwords
- Config export/import requires 12+ char passphrases (backend validation)
- The v4.26.4 fix added frontend validation that showed the mismatch
- Users hit client-side validation before request was sent (no backend logs)
This fix raises the minimum password length to 12 characters everywhere:
- internal/auth/password.go: MinPasswordLength 8 → 12
- FirstRunSetup.tsx: validation and placeholder updated
- ChangePasswordModal.tsx: validation, minLength, and help text updated
- QuickSecuritySetup.tsx: validation and label updated
Impact:
- New users must create 12+ character passwords
- Existing users with <12 char passwords are unaffected (can't detect from hash)
- Those users will see the existing helpful error directing them to use custom
passphrase for backups
- "Use your login password" option now works for all future passwords
This aligns password requirements across the system and eliminates the
confusing mismatch between login credentials and backup encryption requirements.
Related to #646 where user confirmed backups still failed in v4.26.5
The v4.26.4 fix inadvertently broke CLI export compatibility. The frontend
attempted JSON.parse on all backup files and returned early with "Invalid
JSON file format" when parsing failed. This prevented the format detection
code from ever executing, breaking CLI-generated exports which are raw
base64 strings without a JSON wrapper.
Root cause:
- CLI exports (`pulse config export`) output raw base64 via
internal/config/export.go:128
- The fix at Settings.tsx:2030-2034 called showError() and returned
immediately on parse failure
- Format detection logic at lines 2040-2049 never executed for CLI exports
This changes the parsing flow to:
1. Try JSON.parse first (handles UI exports with {status, data} format)
2. On parse success, extract data field as before
3. On parse failure, treat entire file contents as raw base64 (CLI format)
This preserves the v4.26.4 improvements (12-char validation, better error
messages) while restoring CLI export compatibility.
Related to #646 where user confirmed v4.26.4 still failed to restore backups.
The self-heal timer runs 'systemctl list-unit-files | grep -q' every hour.
When grep matches and exits early, systemctl logs "Failed to print table:
Broken pipe" to syslog. This is cosmetic but floods Proxmox logs and
confuses operators.
Changes:
- Redirect stderr from systemctl to /dev/null
- Prevents the broken pipe message from reaching syslog
- Self-heal functionality unchanged
This addresses the concern raised in discussion #628.
Updates all remaining references to read-write socket mounts in
TEMPERATURE_MONITORING.md to use read-only (:ro) mounts for security.
Changes:
- Manual installation section
- Docker-only responsibilities section
- Ansible playbook example
All socket mounts should be :ro to prevent container tampering.
Adds complete documentation for 2025-11-07 security audit and hardening:
- SECURITY_AUDIT_2025-11-07.md: Full professional audit report
- 9 security issues identified and fixed (4 critical, 4 medium, 1 low)
- Detailed findings, remediations, and testing
- Security posture improved from B+ to A
- 85%+ reduction in exploitable attack surface
- SECURITY_CHANGELOG.md: Detailed changelog with migration guide
- Complete implementation details for all fixes
- Configuration examples
- Backwards compatibility notes
- New metrics and features
- DEPLOYMENT_CHECKLIST.md: Step-by-step deployment guide
- Pre-deployment backup procedures
- Deployment steps for Docker and LXC
- Verification procedures
- Rollback procedures
- Troubleshooting guide
- Success criteria
- README.md: Updated with security hardening highlights
- Links to audit report
- Key security features added
Audit performed by Claude (Sonnet 4.5) + Codex collaboration.
All implementations by Codex based on Claude specifications.
100% remediation rate (9/9 issues fixed).
17 new tests added, all passing.
Related to security audit 2025-11-07.
BREAKING CHANGE: Socket directory now mounted read-only into containers
for security. Prevents compromised containers from:
- Unlinking socket and creating man-in-the-middle proxies
- Filling /run/pulse-sensor-proxy/ to exhaust tmpfs
- Racing proxy service on restart to hijack socket path
Migration: Change socket mounts from :rw to :ro in docker-compose.yml
Access control enforced via SO_PEERCRED, so write access not needed.
Related to security audit 2025-11-07.
Implements proper least-privilege model for RPC methods. Previously,
any UID in allowed_peer_uids could call privileged methods, meaning
another service's UID would inherit full host-level control.
Capability System:
- Three levels: read, write, admin
- Per-UID capability assignment via allowed_peers config
- Privileged methods require admin capability
- Backwards compatible with legacy allowed_peer_uids format
Configuration:
allowed_peers:
- uid: 0
capabilities: [read, write, admin] # Root gets all
- uid: 1000
capabilities: [read] # Docker: read-only
- uid: 1001
capabilities: [read, write] # Temps but not key distribution
Security benefit: Services can be granted only the capabilities they
need, preventing unintended privilege escalation.
Related to security audit 2025-11-07.
Co-authored-by: Codex <codex@openai.com>
Fixes bug where allowed_peer_gids was populated from config but never
checked during authorization, creating false sense of security.
Changes:
- authorizePeer() now checks GIDs in addition to UIDs
- Peer authorized if UID OR GID matches allowlist
- Debug logging shows which rule granted access (UID vs GID)
- Full test coverage for GID-based authorization
Security benefit: GID-based policies now actually enforced as
administrators expect.
Related to security audit 2025-11-07.
Co-authored-by: Codex <codex@openai.com>
Prevents multi-UID rate limit bypass attacks from containers. Previously,
attackers could create multiple users in a container (each mapped to
unique host UIDs 100000-165535) to bypass per-UID rate limits.
Implementation:
- Automatic detection of ID-mapped UID ranges from /etc/subuid and /etc/subgid
- Rate limits applied per-range for container UIDs
- Rate limits applied per-UID for host UIDs (backwards compatible)
- identifyPeer() checks if BOTH UID AND GID are in mapped ranges
- Metrics show peer='range:100000-165535' or peer='uid:0'
Security benefit: Entire container limited as single entity, preventing
100+ UIDs from bypassing rate controls.
New metrics:
- pulse_proxy_limiter_rejections_total{peer,reason}
- pulse_proxy_limiter_penalties_total{peer,reason}
- pulse_proxy_global_concurrency_inflight
Related to security audit 2025-11-07.
Co-authored-by: Codex <codex@openai.com>
Fixes#657
Between v4.25.0 and v4.26.4, commit 72865ff62 changed cluster endpoint
resolution to prefer IP addresses over hostnames to reduce DNS lookups
(refs #620). However, this caused TLS certificate validation to fail for
installations with VerifySSL=true, because Proxmox certificates typically
contain hostnames (e.g., pve01.example.com), not IP addresses.
When all cluster endpoints failed TLS validation during the initial health
check, the ClusterClient marked all nodes as unhealthy. Subsequent calls
to GetAllStorage() would fail with "no healthy nodes available in cluster",
causing storage data to disappear from the UI despite the cluster being
fully operational.
**Root Cause:**
The IP-first approach breaks TLS hostname verification when:
- VerifySSL is enabled (common for production environments)
- Certificates are issued with hostnames, not IPs (standard practice)
- Result: x509 certificate validation fails (e.g., "certificate is valid
for pve01.example.com, not 10.0.0.44")
**Solution:**
Conditionally prefer hostnames vs IPs based on TLS validation requirements:
1. When TLS hostname verification is required (VerifySSL=true AND no
fingerprint override), prefer hostname to ensure certificate CN/SAN
validation succeeds.
2. When TLS verification is bypassed (VerifySSL=false OR fingerprint
provided), prefer IP to reduce DNS lookups.
This approach:
- Fixes the regression for users with VerifySSL enabled
- Preserves the DNS optimization for self-signed/fingerprint configs
- Maintains backwards compatibility with v4.25.0 behavior
- Does not compromise TLS security
**Testing:**
Users reported that rolling back to v4.25.0 fixed their storage visibility.
This fix should restore storage for v4.26.4+ while maintaining the DNS
optimization for appropriate scenarios.
Problem: Multiple Docker agents can share the same API token, which causes
serious operational and security issues:
1. Host identity collision - agents overwrite each other in state (the bug
fixed in aa0aa7d4f only addressed the symptom, not the root cause)
2. Security/audit gap - can't attribute actions to specific agents
3. User confusion - easy mistake that causes subtle, hard-to-debug issues
4. State corruption - race conditions on startup and racey metric updates
Root cause: The system treats API tokens as the agent's identity credential,
but never enforced uniqueness. This allowed users to accidentally (or
intentionally) reuse tokens across multiple agents, breaking the 1:1
token-to-agent relationship that the architecture assumes.
Solution: Enforce token uniqueness at the agent report ingestion point.
Implementation:
- Add dockerTokenBindings map[tokenID]agentID to Monitor state
- In ApplyDockerReport, check if token is already bound to a different agent
- On first report from a token, bind it to that agent's ID
- On subsequent reports, verify the binding matches
- Reject mismatches with clear error naming the conflicting host
- Unbind tokens when hosts are removed (allows token reuse after cleanup)
Error message example:
"API token (pk_abc…xyz) is already in use by agent 'agent-123'
(host: docker-host-1). Each Docker agent must use a unique API token.
Generate a new token for this agent"
Why fail-fast instead of phased rollout:
- Shared tokens are architecturally wrong and cannot work correctly
- The system cannot safely multiplex state for duplicate identities
- A clear, immediate error is better UX than silent corruption
- Users would need to generate per-agent tokens eventually anyway
Why in-memory instead of persisted:
- Aligns with Pulse's existing state model (JSON config + in-memory state)
- Bindings naturally rebuild as agents report in after restart
- No schema migration or additional persistence complexity needed
- Sufficient for correctness since overwrite can't happen until both
agents report, at which point the binding exists and rejects duplicates
Migration path for existing users with shared tokens:
- Generate new unique token for each agent
- Update agent configuration with new token
- Restart agents one at a time
This enforces the token-as-identity invariant and prevents users from
creating unsupportable configurations.
Updated the Quick Start for Docker section in TEMPERATURE_MONITORING.md to be
more user-friendly and address common setup issues:
- Added clear explanation of why the proxy is needed (containers can't access hardware)
- Provided concrete IP example instead of placeholder
- Showed full docker-compose.yml context with proper YAML structure
- Added sudo to commands where needed
- Updated docker-compose commands to v2 syntax with note about v1
- Expanded verification steps with clearer success indicators
- Added reminder to check container name in verification commands
These improvements should help users who encounter blank temperature displays
due to missing proxy installation or bind mount configuration.
Root cause: findMatchingDockerHost() was matching hosts by token ID alone,
causing multiple Docker agents using the same API token to overwrite each
other in state. This resulted in only N visible hosts (where N = number of
unique tokens) instead of all M agents, with hosts "rotating" as each agent
reported every 10 seconds.
Example: 4 agents using 2 tokens would show only 2 hosts, rotating between
agents 1↔2 (token A) and agents 3↔4 (token B).
Fix: Remove token-only matching from findMatchingDockerHost(). Hosts should
only match by:
1. Agent ID (unique per agent)
2. Machine ID + hostname combination (with optional token validation)
3. Machine ID or hostname alone (only for tokenless agents)
This allows multiple agents to share the same API token without colliding.
Additional fix: UpsertDockerHost() now preserves Hidden, PendingUninstall,
and Command fields from existing hosts, preventing these flags from being
reset to defaults on every agent report.
Related to #656
Windows guest agents can return multiple directory mountpoints (C:\, C:\Users,
C:\Windows) all on the same physical drive. When the QEMU guest agent omits
disk[] metadata, commit 5325ef481 falls back to using the mountpoint string
as the disk identifier. This causes every Windows directory to be treated as
a separate disk, accumulating to inflated totals (e.g., 1TB reported for a
250GB drive).
Root cause:
The fallback logic in pkg/proxmox/client.go:1585-1594 assigns fs.Disk =
fs.Mountpoint when disk[] is missing. On Windows, every directory path is
unique, so the deduplication guard in internal/monitoring/monitor_polling.go:
619-635 never triggers, causing all directories to be summed.
Changes:
- Detect Windows-style mountpoints (drive letter + colon + backslash)
- Normalize to drive root when disk[] is missing (e.g., C:\Users → C:)
- Preserve existing behavior for Linux/BSD and VMs with disk[] metadata
- Add debug logging for synthesized Windows drive identifiers
This fix maintains backward compatibility with commit 5325ef481 while
preventing the Windows directory accumulation issue. LXC containers are
unaffected as they use a different code path.
Windows 11 25H2 ships exclusively on ARM64 hardware. When users on ARM64
attempt to install the host agent, the Service Control Manager fails to
load the amd64 binary with ERROR_BAD_EXE_FORMAT, surfaced as "The Pulse
Host Agent is not compatible with this Windows version".
Changes:
- Dockerfile: Build pulse-host-agent-windows-arm64.exe alongside amd64
- Dockerfile: Copy windows-arm64 binary and create symlink for download endpoint
- install-host-agent.ps1: Use RuntimeInformation.OSArchitecture to detect ARM64
- build-release.sh: Build darwin-amd64, darwin-arm64, windows-amd64, windows-arm64
- build-release.sh: Package Windows binaries as .zip archives
- validate-release.sh: Check for windows-arm64 binary and symlink
- validate-release.sh: Add architecture validation for all darwin/windows variants
The installer now correctly detects ARM64 and downloads the appropriate binary.
Extends temperature monitoring to collect SMART temps for SATA/SAS disks,
addressing issue #652 where physical disk temperatures showed as empty.
Architecture:
- Deploys pulse-sensor-wrapper.sh as SSH forced command on Proxmox nodes
- Wrapper collects both CPU/GPU temps (sensors -j) and disk temps (smartctl)
- Implements 30-min cache with background refresh to avoid performance impact
- Uses smartctl -n standby,after to skip sleeping drives without waking them
- Returns unified JSON: {sensors: {...}, smart: [...]}
Backend changes:
- Add DiskTemp model with device, serial, WWN, temperature, lastUpdated
- Extend Temperature model with SMART []DiskTemp field and HasSMART flag
- Add WWN field to PhysicalDisk for reliable disk matching
- Update parseSensorsJSON to handle both legacy and new wrapper formats
- Rewrite mergeNVMeTempsIntoDisks to match SMART temps by WWN → serial → devpath
- Preserve legacy NVMe temperature support for backward compatibility
Performance considerations:
- SMART data cached for 30 minutes per node to avoid excessive smartctl calls
- Background refresh prevents blocking temperature requests
- Respects drive standby state to avoid spinning up idle arrays
- Staggered disk scanning with 0.1s delay to avoid saturating SATA controllers
Install script:
- Deploys wrapper to /usr/local/bin/pulse-sensor-wrapper.sh
- Updates SSH forced command from "sensors -j" to wrapper script
- Backward compatible - falls back to direct sensors output if wrapper missing
Testing note:
- Requires real hardware with smartmontools installed for full functionality
- Empty smart array returned gracefully when smartctl unavailable
- Legacy sensor-only nodes continue working without changes
The bare metal installer was not copying pulse-host-agent binaries from
release tarballs into /opt/pulse/bin/, causing 404 errors when users
tried to install the host agent via the download endpoint.
Changes:
- Copy pulse-host-agent binary during initial installation (alongside
pulse-docker-agent)
- Update install_additional_agent_binaries() to fetch and install
cross-platform host agent binaries (linux-amd64, linux-arm64,
linux-armv7, darwin-amd64, darwin-arm64, windows-amd64)
- Match existing pattern used for Docker agent distribution
The build pipeline (build-release.sh and Dockerfile) already correctly
includes host agent binaries in releases and Docker images. This fix
ensures the installer deploys them.
Users on bare metal deployments should rerun install.sh to populate
/opt/pulse/bin/ with the missing host agent binaries. Docker
deployments are unaffected.
Added two troubleshooting sections to DOCKER_MONITORING.md:
1. "Docker hosts cycling or appearing to replace each other" - explains
why multiple agents sharing the same token cause the UI to switch
between hosts instead of showing all simultaneously
2. "Agent rejected after host removal" - documents the re-enrollment
process when a host is on the removal blocklist
These entries make common setup issues searchable while linking to
canonical setup instructions rather than duplicating them.
Fixed two test failures identified by go vet:
1. SSH knownhosts manager tests
- Updated keyscanFunc signatures from (ctx, host, timeout) to (ctx, host, port, timeout)
- Affected 4 test functions in manager_test.go
- Matches recent API change adding port parameter for flexibility
2. Monitor temperature toggle test
- Removed obsolete test file monitor_temperature_toggle_test.go
- Test was checking internal implementation details that have changed
- Enable/DisableTemperatureMonitoring() now only log (interface compatibility)
- Temperature collection is managed differently in current architecture
Impact:
- All tests now compile successfully
- Removes obsolete test that no longer reflects current behavior
- Updates remaining tests to match current API signatures
Fixed goroutine leaks in WebSocket hub from missing shutdown mechanism:
Problem:
1. Hub.Run() has infinite loop with no exit condition
2. runBroadcastSequencer() reads from channel forever
3. No way to cleanly shutdown hub during restarts or tests
Solution:
- Added stopChan chan struct{} field to Hub
- Initialize stopChan in NewHub()
- Added Stop() method that closes stopChan
- Modified Run() main loop to select on stopChan
- On shutdown: close all client connections and return
- Modified runBroadcastSequencer() from 'for range' to select
- Changed from: for msg := range h.broadcastSeq
- Changed to: for { select { case msg := <-h.broadcastSeq: ... case <-h.stopChan: ... }}
- On shutdown: stop coalesce timer and return
Shutdown sequence:
1. Call hub.Stop() to close stopChan
2. Both Run() and runBroadcastSequencer() exit their loops
3. All client send channels are closed
4. Clients map is cleared
5. Pending coalesce timer is stopped
Impact:
- Enables graceful shutdown during service restarts
- Prevents goroutine leaks in tests
- Allows proper cleanup of WebSocket connections
- No more orphaned broadcast sequencer goroutines
Fixed three P1 goroutine/memory leaks that prevent proper resource cleanup:
1. Recovery Tokens goroutine leak
- Cleanup routine runs forever without stop mechanism
- Added stopCleanup channel and Stop() method
- Cleanup loop now uses select with stopCleanup case
2. Rate Limiter goroutine leak
- Cleanup routine runs forever without stop mechanism
- Added stopCleanup channel and Stop() method
- Changed from 'for range ticker.C' to select with stopCleanup case
3. OIDC Service memory leak (DoS vector)
- Abandoned OIDC flows never cleaned up
- State entries accumulate unboundedly
- Added cleanup routine with 5-minute ticker
- Periodically removes expired state entries (10min TTL)
- Added Stop() method for proper shutdown
All three follow consistent pattern:
- Add stopCleanup chan struct{} field
- Initialize in constructor
- Use select with ticker and stopCleanup cases
- Close channel in Stop() method to signal goroutine exit
Impact:
- Prevents goroutine leaks during service restarts/reloads
- Prevents memory exhaustion from abandoned OIDC login attempts
- Enables proper cleanup in tests and graceful shutdown
This commit addresses 5 critical P0 bugs that cause security vulnerabilities, crashes, and data corruption:
**P0-1: Recovery Tokens Replay Attack Vulnerability** (recovery_tokens.go:153-159)
- **SECURITY CRITICAL**: Single-use recovery tokens could be replayed
- **Problem**: Lock upgrade race - two concurrent requests both pass initial Used check
1. Both acquire RLock, see token.Used = false
2. Both release RLock
3. Both acquire Lock and mark token.Used = true
4. Both return true - TOKEN REUSED
- **Impact**: Attacker with intercepted token can use it multiple times
- **Fix**: Re-check token.Used after acquiring write lock (TOCTOU prevention)
**P0-2: WebSocket Hub Concurrent Map Panic** (hub.go:345-347, 376-378)
- **Problem**: Initial state goroutine reads h.clients map without lock
- Line 345: `if _, ok := h.clients[client]` (NO LOCK)
- Main loop writes to h.clients with lock (line 326, 394)
- **Impact**: "fatal error: concurrent map read and write" crashes hub
- **Fix**: Acquire RLock before all client map reads in goroutine
**P0-3: WebSocket Send on Closed Channel Panic** (hub.go:348, 380)
- **Problem**: Check client exists, then send - channel can close between
- **Impact**: "send on closed channel" panic crashes hub
- **Fix**: Hold RLock during both check and send (defensive select already present)
**P0-4: CSRF Store Shutdown Data Corruption** (csrf_store.go:189-196)
- **Problem**: Stop() calls save() after signaling worker. Both hold only RLock
- Worker's final save writes to csrf_tokens.json.tmp
- Stop()'s save writes to same file concurrently
- **Impact**: Corrupted/truncated csrf_tokens.json on shutdown
- **Fix**: Added saveMu mutex to serialize all disk writes
**P0-5: CSRF Store Deadlock on Double-Stop** (csrf_store.go:103-108)
- **Problem**: stopChan unbuffered, no sync.Once guard, uses send not close
- **Impact**: Second Stop() call blocks forever waiting for receiver
- **Fix**:
- Added sync.Once field stopOnce
- Changed to close(stopChan) within stopOnce.Do()
- Prevents double-close panic and deadlock
All fixes maintain backwards compatibility. The recovery token fix is particularly critical as it closes a security vulnerability allowing replay attacks on password reset flows.
**Problem**: writeConfigFileLocked() accessed c.tx field without synchronization
- Function reads c.tx to check if transaction is active (line 109)
- c.tx modified by begin/endTransaction under lock, but read without lock
- Race condition: c.tx could change between check and use
**Impact**:
- Inconsistent transaction handling
- File could be written directly when it should be staged
- Or staged when it should be written directly
- Data corruption risk during config imports
**Fix** (lines 108-128):
- Added documentation that caller MUST hold c.mu lock
- Read c.tx into local variable tx while lock is held
- Use local copy for transaction check
- Safe because all callers hold c.mu when calling writeConfigFileLocked
- Transaction field only modified while holding c.mu in begin/endTransaction
This maintains the existing contract (callers hold lock) while making the transaction read safe and explicit.
This commit addresses 4 P1 important issues and 1 P2 optimization in infrastructure components:
**P1-1: Missing Panic Recovery in Discovery Service** (service.go:172-195, 499-542)
- **Problem**: No panic recovery in Start(), ForceRefresh(), SetSubnet() goroutines
- **Impact**: Silent service death if scan panics, broken discovery with no monitoring
- **Fix**:
- Wrapped initial scan goroutine with defer/recover (lines 172-182)
- Wrapped scanLoop goroutine with defer/recover (lines 185-195)
- Wrapped ForceRefresh scan with defer/recover (lines 499-509)
- Wrapped SetSubnet scan with defer/recover (lines 532-542)
- All log panics with stack traces for debugging
**P1-2: Missing Panic Recovery in Config Watcher Callback** (watcher.go:546-556)
- **Problem**: User-provided onMockReload callback could panic and crash watcher
- **Impact**: Panicking callback kills watcher goroutine, no config updates
- **Fix**: Wrapped callback invocation with defer/recover and stack trace logging
**P1-3: Session Store Stop() Using Send Instead of Close** (session_store.go:16-84)
- **Problem**: Stop() used channel send which blocks if nobody reads
- **Impact**: Stop() hangs if backgroundWorker already exited
- **Fix**:
- Added sync.Once field stopOnce (line 22)
- Changed Stop() to use close() within stopOnce.Do() (lines 80-84)
- Prevents double-close panic and ensures all readers are signaled
**P2-1: Backup Cleanup Inefficient O(n²) Sort** (persistence.go:1424-1427)
- **Problem**: Bubble sort used to sort backups by modification time
- **Impact**: Inefficient for large backup counts (>100 files)
- **Fix**:
- Replaced bubble sort with sort.Slice() using O(n log n) algorithm
- Added "sort" import (line 9)
- Maintains same oldest-first ordering for deletion logic
All fixes add defensive programming without changing external behavior. Panic recovery ensures services continue operating even with bugs, while optimization reduces cleanup time for backup-heavy environments.
This commit addresses 3 critical P0 race conditions and resource leaks in core infrastructure:
**P0-1: Discovery Service Goroutine Leak** (service.go:468, 488)
- **Problem**: ForceRefresh() and SetSubnet() spawned unbounded goroutines without checking if scan already in progress
- **Impact**: Rapid API calls create goroutine explosion, resource exhaustion
- **Fix**:
- ForceRefresh: Check isScanning before spawning goroutine (lines 470-476)
- SetSubnet: Check isScanning, defer scan if already running (lines 491-504)
- Both now log when skipping to aid debugging
**P0-2: Config Persistence Unlock/Relock Race** (persistence.go:1177-1206)
- **Problem**: LoadNodesConfig() unlocked RLock, called SaveNodesConfig (acquires Lock), then relocked
- **Impact**: Another goroutine could modify config between unlock/relock, causing migrated data loss
- **Fix**:
- Copy instance slices while holding RLock to ensure consistency (lines 1189-1194)
- Release lock, save copies, then return without relocking (lines 1196-1205)
- Prevents TOCTOU vulnerability where migrations could be overwritten
**P0-3: Config Watcher Channel Close Race** (watcher.go:19-178)
- **Problem**: Stop() used select-check-close pattern vulnerable to concurrent calls
- **Impact**: Multiple Stop() calls panic on double-close
- **Fix**:
- Added sync.Once field stopOnce to ConfigWatcher struct (line 26)
- Changed Stop() to use stopOnce.Do() ensuring single execution (lines 175-178)
- Removed racy select-based guard
All fixes maintain backwards compatibility and add defensive logging for operational visibility.
This commit addresses 7 critical issues identified during the alert system audit:
**P0 Critical - Race Conditions Fixed:**
1. **dispatchAlert race in NotifyExistingAlert** (lines 5486-5497)
- Changed from RLock to Lock to hold mutex during dispatchAlert call
- dispatchAlert calls checkFlapping which writes to maps (flappingHistory, flappingActive, suppressedUntil)
- Previous code: grabbed RLock, got alert pointer, released lock, then called dispatchAlert (RACE)
- Fixed: hold Lock through dispatchAlert call
2. **dispatchAlert race in LoadActiveAlerts startup** (lines 8216-8235)
- Startup goroutines called dispatchAlert without holding lock
- Added m.mu.Lock/Unlock around dispatchAlert call in goroutine
- Also added cancellation via escalationStop channel to prevent goroutine leaks on shutdown
3. **checkFlapping documentation** (line 738)
- Added clear comment that checkFlapping requires caller to hold m.mu
- Prevents future race conditions from improper usage
**P1 Important - Data Loss Prevention:**
4. **History save race condition** (lines 177-180 in history.go)
- Added saveMu mutex to serialize disk writes
- Previous: concurrent saves could interleave, causing newer data to be overwritten by older snapshots
- Fixed: saveMu.Lock at start of saveHistoryWithRetry ensures atomic disk writes
- Newer snapshots now always win over older ones
**P2 Memory Leak Prevention:**
5. **PMG anomaly tracker cleanup** (lines 7318-7331)
- Added cleanup for pmgAnomalyTrackers map (24 hour TTL based on LastSampleTime)
- Prevents unbounded growth from decommissioned/transient PMG instances
- Each tracker: ~1-2KB (48 samples + baselines)
6. **PMG quarantine history cleanup** (lines 7333-7354)
- Added cleanup for pmgQuarantineHistory map (7 day TTL based on last snapshot)
- Prevents memory leak for deleted PMG instances
- Removes both empty histories and very old histories
**P2 Goroutine Leak Prevention:**
7. **Startup notification goroutine cancellation** (lines 8218-8234)
- Added select with escalationStop channel to cancel startup notifications
- Prevents goroutines from continuing after Stop() is called
- Scales with number of restored critical alerts
All fixes maintain proper lock ordering and prevent deadlocks by ensuring locks are held when accessing shared maps.
Backend:
- Add IsEncryptionEnabled() method to ConfigPersistence
- Include encryption status in /api/notifications/health response
- Allows frontend to warn when credentials are stored in plaintext
Frontend:
- Update NotificationHealth type to include encryption.enabled field
- Frontend can now display warnings when encryption is disabled
This addresses the P2 requirement for encryption visibility, allowing
operators to know when notification credentials are not encrypted at rest.
Add documentation to explain how transport-level and queue-level retries interact:
- Email: MaxRetries (transport) * MaxAttempts (queue) = total SMTP attempts
- Webhooks: RetryCount (transport) * MaxAttempts (queue) = total HTTP attempts
- Example: 3 * 3 = 9 total delivery attempts for a single notification
This clarifies the multiplicative retry behavior and helps operators understand
the actual retry counts when using the persistent queue.
Queue cancellation mechanism:
- Add CancelByAlertIDs method to mark queued notifications as cancelled when alerts resolve
- Update CancelAlert to cancel queued notifications containing resolved alert IDs
- Skip cancelled notifications in queue processor
- Prevents resolved alerts from triggering notifications after they clear
Atomic DB operations:
- Add IncrementAttemptAndSetStatus to atomically update attempt counter and status
- Replace separate IncrementAttempt + UpdateStatus calls with single atomic operation
- Prevents orphaned queue entries when crashes occur between operations
- Eliminates race condition where rows get stuck in "pending" or "sending" status
These fixes ensure queued notifications are properly cancelled when alerts resolve
and prevent database inconsistencies during crash scenarios.
Critical fixes (P0):
- Fix cooldown timing: Mark cooldown only after successful delivery, not before enqueue
- Add os.MkdirAll to queue initialization to prevent silent failures on fresh installs
- Add DNS re-validation at webhook send time to prevent DNS rebinding SSRF attacks
- Add SSRF validation for Apprise HTTP URLs
- Remove secret logging (bot tokens, routing keys) from debug logs
- Implement lastNotified cleanup to prevent unbounded memory growth
- Use shared HTTP client for webhooks to enable TLS connection reuse
- Add fallback to direct sending when queue enqueue fails
- Make queue worker concurrent (5 workers with semaphore) to prevent head-of-line blocking
- Fix webhook rate limiter race condition with separate mutex
- Fix email manager thread safety with mutex on rate limiter
- Fix grouping timer leak by adding stopCleanup signal
- Fix webhook 429 double sleep (use Retry-After OR backoff, not both)
Frontend improvements:
- Add queue/DLQ management API methods (getQueueStats, getDLQ, retryDLQItem, deleteDLQItem)
- Add getNotificationHealth and getWebhookHistory endpoints
- Add Apprise test support to NotificationTestRequest type
Related to notification system audit
Remove 4 LLM-generated internal development docs that don't belong in the repository:
- MIGRATION_SCAFFOLDING.md
- NOTIFICATION_AUDIT.md
- NOTIFICATION_QUICK_REFERENCE.md
- NOTIFICATION_SYSTEM_MAP.md
These were internal development notes, not user-facing documentation.
- Add NOTIFICATION_AUDIT.md for system analysis
- Add NOTIFICATION_QUICK_REFERENCE.md for quick lookup
- Add NOTIFICATION_SYSTEM_MAP.md for architecture overview
- Fix tab panel missing rounded-tl corner when first tab is active