Commit graph

241 commits

Author SHA1 Message Date
rcourtman
7ee252bd84 Fix Docker host display bug when multiple agents share API tokens (related to #658)
Root cause: findMatchingDockerHost() was matching hosts by token ID alone,
causing multiple Docker agents using the same API token to overwrite each
other in state. This resulted in only N visible hosts (where N = number of
unique tokens) instead of all M agents, with hosts "rotating" as each agent
reported every 10 seconds.

Example: 4 agents using 2 tokens would show only 2 hosts, rotating between
agents 1↔2 (token A) and agents 3↔4 (token B).

Fix: Remove token-only matching from findMatchingDockerHost(). Hosts should
only match by:
1. Agent ID (unique per agent)
2. Machine ID + hostname combination (with optional token validation)
3. Machine ID or hostname alone (only for tokenless agents)

This allows multiple agents to share the same API token without colliding.

Additional fix: UpsertDockerHost() now preserves Hidden, PendingUninstall,
and Command fields from existing hosts, preventing these flags from being
reset to defaults on every agent report.
2025-11-07 13:46:35 +00:00
rcourtman
2a79d57f73 Add SMART temperature collection for physical disks (related to #652)
Extends temperature monitoring to collect SMART temps for SATA/SAS disks,
addressing issue #652 where physical disk temperatures showed as empty.

Architecture:
- Deploys pulse-sensor-wrapper.sh as SSH forced command on Proxmox nodes
- Wrapper collects both CPU/GPU temps (sensors -j) and disk temps (smartctl)
- Implements 30-min cache with background refresh to avoid performance impact
- Uses smartctl -n standby,after to skip sleeping drives without waking them
- Returns unified JSON: {sensors: {...}, smart: [...]}

Backend changes:
- Add DiskTemp model with device, serial, WWN, temperature, lastUpdated
- Extend Temperature model with SMART []DiskTemp field and HasSMART flag
- Add WWN field to PhysicalDisk for reliable disk matching
- Update parseSensorsJSON to handle both legacy and new wrapper formats
- Rewrite mergeNVMeTempsIntoDisks to match SMART temps by WWN → serial → devpath
- Preserve legacy NVMe temperature support for backward compatibility

Performance considerations:
- SMART data cached for 30 minutes per node to avoid excessive smartctl calls
- Background refresh prevents blocking temperature requests
- Respects drive standby state to avoid spinning up idle arrays
- Staggered disk scanning with 0.1s delay to avoid saturating SATA controllers

Install script:
- Deploys wrapper to /usr/local/bin/pulse-sensor-wrapper.sh
- Updates SSH forced command from "sensors -j" to wrapper script
- Backward compatible - falls back to direct sensors output if wrapper missing

Testing note:
- Requires real hardware with smartmontools installed for full functionality
- Empty smart array returned gracefully when smartctl unavailable
- Legacy sensor-only nodes continue working without changes
2025-11-07 11:46:57 +00:00
rcourtman
94b07a892e Fix test failures from API signature changes
Fixed two test failures identified by go vet:

1. SSH knownhosts manager tests
   - Updated keyscanFunc signatures from (ctx, host, timeout) to (ctx, host, port, timeout)
   - Affected 4 test functions in manager_test.go
   - Matches recent API change adding port parameter for flexibility

2. Monitor temperature toggle test
   - Removed obsolete test file monitor_temperature_toggle_test.go
   - Test was checking internal implementation details that have changed
   - Enable/DisableTemperatureMonitoring() now only log (interface compatibility)
   - Temperature collection is managed differently in current architecture

Impact:
- All tests now compile successfully
- Removes obsolete test that no longer reflects current behavior
- Updates remaining tests to match current API signatures
2025-11-07 10:43:06 +00:00
rcourtman
d30d76bb92 Fix P1: Add shutdown mechanism to WebSocket Hub
Fixed goroutine leaks in WebSocket hub from missing shutdown mechanism:

Problem:
1. Hub.Run() has infinite loop with no exit condition
2. runBroadcastSequencer() reads from channel forever
3. No way to cleanly shutdown hub during restarts or tests

Solution:
- Added stopChan chan struct{} field to Hub
- Initialize stopChan in NewHub()
- Added Stop() method that closes stopChan
- Modified Run() main loop to select on stopChan
  - On shutdown: close all client connections and return
- Modified runBroadcastSequencer() from 'for range' to select
  - Changed from: for msg := range h.broadcastSeq
  - Changed to: for { select { case msg := <-h.broadcastSeq: ... case <-h.stopChan: ... }}
  - On shutdown: stop coalesce timer and return

Shutdown sequence:
1. Call hub.Stop() to close stopChan
2. Both Run() and runBroadcastSequencer() exit their loops
3. All client send channels are closed
4. Clients map is cleared
5. Pending coalesce timer is stopped

Impact:
- Enables graceful shutdown during service restarts
- Prevents goroutine leaks in tests
- Allows proper cleanup of WebSocket connections
- No more orphaned broadcast sequencer goroutines
2025-11-07 10:20:26 +00:00
rcourtman
e30757720a Fix P1: Resource leaks in Recovery Tokens, Rate Limiter, and OIDC Service
Fixed three P1 goroutine/memory leaks that prevent proper resource cleanup:

1. Recovery Tokens goroutine leak
   - Cleanup routine runs forever without stop mechanism
   - Added stopCleanup channel and Stop() method
   - Cleanup loop now uses select with stopCleanup case

2. Rate Limiter goroutine leak
   - Cleanup routine runs forever without stop mechanism
   - Added stopCleanup channel and Stop() method
   - Changed from 'for range ticker.C' to select with stopCleanup case

3. OIDC Service memory leak (DoS vector)
   - Abandoned OIDC flows never cleaned up
   - State entries accumulate unboundedly
   - Added cleanup routine with 5-minute ticker
   - Periodically removes expired state entries (10min TTL)
   - Added Stop() method for proper shutdown

All three follow consistent pattern:
- Add stopCleanup chan struct{} field
- Initialize in constructor
- Use select with ticker and stopCleanup cases
- Close channel in Stop() method to signal goroutine exit

Impact:
- Prevents goroutine leaks during service restarts/reloads
- Prevents memory exhaustion from abandoned OIDC login attempts
- Enables proper cleanup in tests and graceful shutdown
2025-11-07 10:18:44 +00:00
rcourtman
1bf9cfea88 Fix critical P0 security and crash issues in API/WebSocket layer
This commit addresses 5 critical P0 bugs that cause security vulnerabilities, crashes, and data corruption:

**P0-1: Recovery Tokens Replay Attack Vulnerability** (recovery_tokens.go:153-159)
- **SECURITY CRITICAL**: Single-use recovery tokens could be replayed
- **Problem**: Lock upgrade race - two concurrent requests both pass initial Used check
  1. Both acquire RLock, see token.Used = false
  2. Both release RLock
  3. Both acquire Lock and mark token.Used = true
  4. Both return true - TOKEN REUSED
- **Impact**: Attacker with intercepted token can use it multiple times
- **Fix**: Re-check token.Used after acquiring write lock (TOCTOU prevention)

**P0-2: WebSocket Hub Concurrent Map Panic** (hub.go:345-347, 376-378)
- **Problem**: Initial state goroutine reads h.clients map without lock
  - Line 345: `if _, ok := h.clients[client]` (NO LOCK)
  - Main loop writes to h.clients with lock (line 326, 394)
- **Impact**: "fatal error: concurrent map read and write" crashes hub
- **Fix**: Acquire RLock before all client map reads in goroutine

**P0-3: WebSocket Send on Closed Channel Panic** (hub.go:348, 380)
- **Problem**: Check client exists, then send - channel can close between
- **Impact**: "send on closed channel" panic crashes hub
- **Fix**: Hold RLock during both check and send (defensive select already present)

**P0-4: CSRF Store Shutdown Data Corruption** (csrf_store.go:189-196)
- **Problem**: Stop() calls save() after signaling worker. Both hold only RLock
  - Worker's final save writes to csrf_tokens.json.tmp
  - Stop()'s save writes to same file concurrently
- **Impact**: Corrupted/truncated csrf_tokens.json on shutdown
- **Fix**: Added saveMu mutex to serialize all disk writes

**P0-5: CSRF Store Deadlock on Double-Stop** (csrf_store.go:103-108)
- **Problem**: stopChan unbuffered, no sync.Once guard, uses send not close
- **Impact**: Second Stop() call blocks forever waiting for receiver
- **Fix**:
  - Added sync.Once field stopOnce
  - Changed to close(stopChan) within stopOnce.Do()
  - Prevents double-close panic and deadlock

All fixes maintain backwards compatibility. The recovery token fix is particularly critical as it closes a security vulnerability allowing replay attacks on password reset flows.
2025-11-07 10:13:15 +00:00
rcourtman
431769024f Fix P1: Config Persistence transaction field synchronization
**Problem**: writeConfigFileLocked() accessed c.tx field without synchronization
- Function reads c.tx to check if transaction is active (line 109)
- c.tx modified by begin/endTransaction under lock, but read without lock
- Race condition: c.tx could change between check and use

**Impact**:
- Inconsistent transaction handling
- File could be written directly when it should be staged
- Or staged when it should be written directly
- Data corruption risk during config imports

**Fix** (lines 108-128):
- Added documentation that caller MUST hold c.mu lock
- Read c.tx into local variable tx while lock is held
- Use local copy for transaction check
- Safe because all callers hold c.mu when calling writeConfigFileLocked
- Transaction field only modified while holding c.mu in begin/endTransaction

This maintains the existing contract (callers hold lock) while making the transaction read safe and explicit.
2025-11-07 10:00:31 +00:00
rcourtman
6ca4d9b750 Fix P1/P2 infrastructure issues: panic recovery and optimizations
This commit addresses 4 P1 important issues and 1 P2 optimization in infrastructure components:

**P1-1: Missing Panic Recovery in Discovery Service** (service.go:172-195, 499-542)
- **Problem**: No panic recovery in Start(), ForceRefresh(), SetSubnet() goroutines
- **Impact**: Silent service death if scan panics, broken discovery with no monitoring
- **Fix**:
  - Wrapped initial scan goroutine with defer/recover (lines 172-182)
  - Wrapped scanLoop goroutine with defer/recover (lines 185-195)
  - Wrapped ForceRefresh scan with defer/recover (lines 499-509)
  - Wrapped SetSubnet scan with defer/recover (lines 532-542)
  - All log panics with stack traces for debugging

**P1-2: Missing Panic Recovery in Config Watcher Callback** (watcher.go:546-556)
- **Problem**: User-provided onMockReload callback could panic and crash watcher
- **Impact**: Panicking callback kills watcher goroutine, no config updates
- **Fix**: Wrapped callback invocation with defer/recover and stack trace logging

**P1-3: Session Store Stop() Using Send Instead of Close** (session_store.go:16-84)
- **Problem**: Stop() used channel send which blocks if nobody reads
- **Impact**: Stop() hangs if backgroundWorker already exited
- **Fix**:
  - Added sync.Once field stopOnce (line 22)
  - Changed Stop() to use close() within stopOnce.Do() (lines 80-84)
  - Prevents double-close panic and ensures all readers are signaled

**P2-1: Backup Cleanup Inefficient O(n²) Sort** (persistence.go:1424-1427)
- **Problem**: Bubble sort used to sort backups by modification time
- **Impact**: Inefficient for large backup counts (>100 files)
- **Fix**:
  - Replaced bubble sort with sort.Slice() using O(n log n) algorithm
  - Added "sort" import (line 9)
  - Maintains same oldest-first ordering for deletion logic

All fixes add defensive programming without changing external behavior. Panic recovery ensures services continue operating even with bugs, while optimization reduces cleanup time for backup-heavy environments.
2025-11-07 09:55:22 +00:00
rcourtman
ba6d934204 Fix critical P0 infrastructure concurrency issues
This commit addresses 3 critical P0 race conditions and resource leaks in core infrastructure:

**P0-1: Discovery Service Goroutine Leak** (service.go:468, 488)
- **Problem**: ForceRefresh() and SetSubnet() spawned unbounded goroutines without checking if scan already in progress
- **Impact**: Rapid API calls create goroutine explosion, resource exhaustion
- **Fix**:
  - ForceRefresh: Check isScanning before spawning goroutine (lines 470-476)
  - SetSubnet: Check isScanning, defer scan if already running (lines 491-504)
  - Both now log when skipping to aid debugging

**P0-2: Config Persistence Unlock/Relock Race** (persistence.go:1177-1206)
- **Problem**: LoadNodesConfig() unlocked RLock, called SaveNodesConfig (acquires Lock), then relocked
- **Impact**: Another goroutine could modify config between unlock/relock, causing migrated data loss
- **Fix**:
  - Copy instance slices while holding RLock to ensure consistency (lines 1189-1194)
  - Release lock, save copies, then return without relocking (lines 1196-1205)
  - Prevents TOCTOU vulnerability where migrations could be overwritten

**P0-3: Config Watcher Channel Close Race** (watcher.go:19-178)
- **Problem**: Stop() used select-check-close pattern vulnerable to concurrent calls
- **Impact**: Multiple Stop() calls panic on double-close
- **Fix**:
  - Added sync.Once field stopOnce to ConfigWatcher struct (line 26)
  - Changed Stop() to use stopOnce.Do() ensuring single execution (lines 175-178)
  - Removed racy select-based guard

All fixes maintain backwards compatibility and add defensive logging for operational visibility.
2025-11-07 09:49:55 +00:00
rcourtman
1183b87fa1 Fix critical alert system concurrency and memory leak issues
This commit addresses 7 critical issues identified during the alert system audit:

**P0 Critical - Race Conditions Fixed:**

1. **dispatchAlert race in NotifyExistingAlert** (lines 5486-5497)
   - Changed from RLock to Lock to hold mutex during dispatchAlert call
   - dispatchAlert calls checkFlapping which writes to maps (flappingHistory, flappingActive, suppressedUntil)
   - Previous code: grabbed RLock, got alert pointer, released lock, then called dispatchAlert (RACE)
   - Fixed: hold Lock through dispatchAlert call

2. **dispatchAlert race in LoadActiveAlerts startup** (lines 8216-8235)
   - Startup goroutines called dispatchAlert without holding lock
   - Added m.mu.Lock/Unlock around dispatchAlert call in goroutine
   - Also added cancellation via escalationStop channel to prevent goroutine leaks on shutdown

3. **checkFlapping documentation** (line 738)
   - Added clear comment that checkFlapping requires caller to hold m.mu
   - Prevents future race conditions from improper usage

**P1 Important - Data Loss Prevention:**

4. **History save race condition** (lines 177-180 in history.go)
   - Added saveMu mutex to serialize disk writes
   - Previous: concurrent saves could interleave, causing newer data to be overwritten by older snapshots
   - Fixed: saveMu.Lock at start of saveHistoryWithRetry ensures atomic disk writes
   - Newer snapshots now always win over older ones

**P2 Memory Leak Prevention:**

5. **PMG anomaly tracker cleanup** (lines 7318-7331)
   - Added cleanup for pmgAnomalyTrackers map (24 hour TTL based on LastSampleTime)
   - Prevents unbounded growth from decommissioned/transient PMG instances
   - Each tracker: ~1-2KB (48 samples + baselines)

6. **PMG quarantine history cleanup** (lines 7333-7354)
   - Added cleanup for pmgQuarantineHistory map (7 day TTL based on last snapshot)
   - Prevents memory leak for deleted PMG instances
   - Removes both empty histories and very old histories

**P2 Goroutine Leak Prevention:**

7. **Startup notification goroutine cancellation** (lines 8218-8234)
   - Added select with escalationStop channel to cancel startup notifications
   - Prevents goroutines from continuing after Stop() is called
   - Scales with number of restored critical alerts

All fixes maintain proper lock ordering and prevent deadlocks by ensuring locks are held when accessing shared maps.
2025-11-07 09:12:28 +00:00
rcourtman
99e5a38534 Fix critical monitoring system issues and add robustness improvements
This commit addresses 9 critical issues identified during the monitoring system audit:

**Race Conditions Fixed:**
- PBS backup pollers: Moved lock earlier to eliminate check-then-act race (lines 7316-7378)
- PVE backup poll timing: Fixed double write to lastPVEBackupPoll with proper synchronization (lines 5927-5977)
- Docker hosts cleanup: Refactored to avoid holding both m.mu and s.mu locks simultaneously (lines 1911-1937)

**Context Propagation Fixed:**
- Replaced all context.Background() calls with parent context for proper cancellation chain:
  - PBS backup poller (line 7367)
  - PVE backup poller (line 5955)
  - PBS fallback check (line 7154)

**Memory Leak Prevention:**
- Added cleanup for guest metadata cache (10 minute TTL, lines 1942-1957)
- Added cleanup for diagnostic snapshots (1 hour TTL, lines 1959-1987)
- Added cleanup for RRD cache (1 minute TTL, lines 1989-2007)
- All cleanup methods called on 10-second ticker (lines 3791-3793)

**Panic Recovery:**
- Added recoverFromPanic helper to log panics with stack traces (lines 1910-1920)
- Protected all critical goroutines:
  - poll (line 4020)
  - taskWorker (line 4200)
  - retryFailedConnections (line 3851)
  - checkMockAlerts (line 8896)
  - pollPVEInstance (line 4886)
  - pollPBSInstance (line 7164)
  - pollPMGInstance (line 7498)

**Import Fixes:**
- Added missing sync import to email_enhanced.go
- Added missing os import to queue.go

All fixes maintain proper lock ordering and release locks before calling methods that acquire other locks to prevent deadlocks.
2025-11-07 08:52:37 +00:00
rcourtman
9257071ca1 Add encryption status to notification health endpoint (P2)
Backend:
- Add IsEncryptionEnabled() method to ConfigPersistence
- Include encryption status in /api/notifications/health response
- Allows frontend to warn when credentials are stored in plaintext

Frontend:
- Update NotificationHealth type to include encryption.enabled field
- Frontend can now display warnings when encryption is disabled

This addresses the P2 requirement for encryption visibility, allowing
operators to know when notification credentials are not encrypted at rest.
2025-11-07 08:36:55 +00:00
rcourtman
b70dc3d00d Document layered retry semantics (P2 documentation)
Add documentation to explain how transport-level and queue-level retries interact:
- Email: MaxRetries (transport) * MaxAttempts (queue) = total SMTP attempts
- Webhooks: RetryCount (transport) * MaxAttempts (queue) = total HTTP attempts
- Example: 3 * 3 = 9 total delivery attempts for a single notification

This clarifies the multiplicative retry behavior and helps operators understand
the actual retry counts when using the persistent queue.
2025-11-07 08:35:00 +00:00
rcourtman
7ee11105f5 Implement queue cancellation and atomic DB operations (P1 fixes)
Queue cancellation mechanism:
- Add CancelByAlertIDs method to mark queued notifications as cancelled when alerts resolve
- Update CancelAlert to cancel queued notifications containing resolved alert IDs
- Skip cancelled notifications in queue processor
- Prevents resolved alerts from triggering notifications after they clear

Atomic DB operations:
- Add IncrementAttemptAndSetStatus to atomically update attempt counter and status
- Replace separate IncrementAttempt + UpdateStatus calls with single atomic operation
- Prevents orphaned queue entries when crashes occur between operations
- Eliminates race condition where rows get stuck in "pending" or "sending" status

These fixes ensure queued notifications are properly cancelled when alerts resolve
and prevent database inconsistencies during crash scenarios.
2025-11-07 08:33:09 +00:00
rcourtman
c6a69e525c Fix critical notification system bugs and security issues
Critical fixes (P0):
- Fix cooldown timing: Mark cooldown only after successful delivery, not before enqueue
- Add os.MkdirAll to queue initialization to prevent silent failures on fresh installs
- Add DNS re-validation at webhook send time to prevent DNS rebinding SSRF attacks
- Add SSRF validation for Apprise HTTP URLs
- Remove secret logging (bot tokens, routing keys) from debug logs
- Implement lastNotified cleanup to prevent unbounded memory growth
- Use shared HTTP client for webhooks to enable TLS connection reuse
- Add fallback to direct sending when queue enqueue fails
- Make queue worker concurrent (5 workers with semaphore) to prevent head-of-line blocking
- Fix webhook rate limiter race condition with separate mutex
- Fix email manager thread safety with mutex on rate limiter
- Fix grouping timer leak by adding stopCleanup signal
- Fix webhook 429 double sleep (use Retry-After OR backoff, not both)

Frontend improvements:
- Add queue/DLQ management API methods (getQueueStats, getDLQ, retryDLQItem, deleteDLQItem)
- Add getNotificationHealth and getWebhookHistory endpoints
- Add Apprise test support to NotificationTestRequest type

Related to notification system audit
2025-11-07 08:29:13 +00:00
rcourtman
febce91145 Remove internal development documentation files
Remove 4 LLM-generated internal development docs that don't belong in the repository:
- MIGRATION_SCAFFOLDING.md
- NOTIFICATION_AUDIT.md
- NOTIFICATION_QUICK_REFERENCE.md
- NOTIFICATION_SYSTEM_MAP.md

These were internal development notes, not user-facing documentation.
2025-11-07 08:23:19 +00:00
rcourtman
6a48c759e8 Fix critical notification system bugs and security issues
This commit addresses multiple critical issues identified in the notification
system audit conducted with Codex:

**Critical Fixes:**

1. **Queue Retry Logic (Critical #1)**
   - Fixed broken retry/DLQ system where send functions never returned errors
   - Made sendGroupedEmail(), sendGroupedWebhook(), sendGroupedApprise() return errors
   - Made sendWebhookRequest() return errors
   - ProcessQueuedNotification() now properly propagates errors to queue
   - Retry logic and DLQ now function correctly

2. **Attempt Counter Bug (Critical #2)**
   - Fixed double-increment bug in queue processing
   - Separated UpdateStatus() from attempt tracking
   - Added IncrementAttempt() method
   - Notifications now get correct number of retry attempts

3. **Secret Exposure (Critical #3 & #4)**
   - Masked webhook headers and customFields in GET /api/notifications/webhooks
   - Added redactSecretsFromURL() to sanitize webhook URLs in history
   - Truncated/redacted response bodies in webhook history
   - Protected against credential harvesting via API

4. **Email Rate Limiting (Critical #5)**
   - Added emailManager field to NotificationManager
   - Shared EnhancedEmailManager instance across sends
   - Rate limiter now accumulates across multiple emails
   - SMTP rate limits are now enforced correctly

5. **SSRF Protection (High #6)**
   - Added DNS resolution of webhook URLs
   - Added isPrivateIP() check using CIDR ranges
   - Blocks all private IP ranges (10/8, 172.16/12, 192.168/16, 127/8, 169.254/16)
   - Blocks IPv6 private ranges (::1, fe80::/10, fc00::/7)
   - Prevents DNS rebinding attacks
   - Returns error instead of warning for private IPs

**New Features:**

6. **Health Endpoint (High #8)**
   - Added GET /api/notifications/health
   - Returns queue stats (pending, sending, sent, failed, dlq)
   - Shows email/webhook configuration status
   - Provides overall health indicator

**Related to notification system audit**

Files changed:
- internal/notifications/notifications.go: Error returns, rate limiting, SSRF hardening
- internal/notifications/queue.go: Attempt tracking fix
- internal/api/notifications.go: Secret masking, health endpoint
2025-11-06 23:26:03 +00:00
rcourtman
4891f06e76 Fix webhook alerts persisting when DisableAll* flags are enabled
The original fix in c6c0ac63e only handled per-resource overrides when
thresholds were disabled (trigger <= 0 or Disabled=true). It did not
handle global DisableAll* flags (DisableAllStorage, DisableAllNodes,
DisableAllGuests, etc.).

When a user toggled a DisableAll* flag from false to true:
- Check* functions returned early without processing
- Existing active alerts remained in m.activeAlerts map
- Those alerts continued generating webhook notifications
- reevaluateActiveAlertsLocked didn't check DisableAll* flags

This commit fixes the issue by:

1. Updating reevaluateActiveAlertsLocked to check all DisableAll* flags
   and resolve alerts for those resource types during config updates

2. Adding alert cleanup to Check* functions before early returns:
   - CheckStorage: clears usage and offline alerts
   - CheckNode: clears cpu/memory/disk/temperature and offline alerts
   - CheckPMG: clears queue/message alerts and offline alerts
   - CheckPBS: clears cpu/memory and offline alerts
   - CheckHost: calls existing cleanup helpers

3. Adding comprehensive test coverage for DisableAllStorage scenario

Related to #561
2025-11-06 21:17:56 +00:00
rcourtman
1a78dcbba2 Fix guest agent disk data regression on Proxmox 8.3+
Related to #630

Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.

The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
  from the command line, confirming the agent was functional

Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.

Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
  * Legacy (Proxmox <8.3): integer (0 or 1)
  * Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
  we only query when the agent is actually responding

This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
2025-11-06 18:42:46 +00:00
rcourtman
7ed9203e4b Fix config backup/restore failures (related to #646)
Addresses two issues preventing configuration backup/restore:

1. Export passphrase validation mismatch: UI only validated 12+ char
   requirement when using custom passphrase, but backend always enforced
   it. Users with shorter login passwords saw unexplained failures.
   - Frontend now validates all passphrases meet 12-char minimum
   - Clear error message suggests custom passphrase if login password too short

2. Import data parsing failed silently: Frontend sent `exportData.data`
   which was undefined for legacy/CLI backups (raw base64 strings).
   Backend rejected these with no logs.
   - Frontend now handles both formats: {status, data} and raw strings
   - Backend logs validation failures for easier troubleshooting

Related to #646 where user reported "error after entering password" with
no container logs. These changes ensure proper validation feedback and
make the backup system resilient to different export formats.
2025-11-06 17:53:54 +00:00
rcourtman
dd1d222ad0 Improve bootstrap token UX for easier discovery
The bootstrap token security requirement was added proactively but
lacked discoverability, causing user friction during first-run setup.
These improvements make the token easier to find while maintaining
the security benefit.

Improvements:
- Display bootstrap token prominently in startup logs with ASCII box
  (previously: single line log message)
- Add `pulse bootstrap-token` CLI command to display token on demand
  (Docker: docker exec <container> /app/pulse bootstrap-token)
- Improve error messages in quick-setup API to show exact commands
  for retrieving token when missing or invalid
- Error messages now include both Docker and bare metal examples

User experience improvements:
- Token visible in `docker logs` output immediately
- Clear instructions printed with token
- Helpful error messages if token is wrong/missing
- CLI helper for operators who need to retrieve token later

Security unchanged:
- Bootstrap token still required for first-run setup
- Token still auto-deleted after successful setup
- No bypass mechanism added

Related to discussion about bootstrap token UX friction.
2025-11-06 17:29:49 +00:00
rcourtman
c8e0281953 Add comprehensive alert system reliability improvements
This commit implements critical reliability features to prevent data loss
and improve alert system robustness:

**Persistent Notification Queue:**
- SQLite-backed queue with WAL journaling for crash recovery
- Dead Letter Queue (DLQ) for notifications that exhaust retries
- Exponential backoff retry logic (100ms → 200ms → 400ms)
- Full audit trail for all notification delivery attempts
- New file: internal/notifications/queue.go (661 lines)

**DLQ Management API:**
- GET /api/notifications/dlq - Retrieve DLQ items
- GET /api/notifications/queue/stats - Queue statistics
- POST /api/notifications/dlq/retry - Retry failed notifications
- POST /api/notifications/dlq/delete - Delete DLQ items
- New file: internal/api/notification_queue.go (145 lines)

**Prometheus Metrics:**
- 18 comprehensive metrics for alerts and notifications
- Metric hooks integrated via function pointers to avoid import cycles
- /metrics endpoint exposed for Prometheus scraping
- New file: internal/metrics/alert_metrics.go (193 lines)

**Alert History Reliability:**
- Exponential backoff retry for history saves (3 attempts)
- Automatic backup restoration on write failure
- Modified: internal/alerts/history.go

**Flapping Detection:**
- Detects and suppresses rapidly oscillating alerts
- Configurable window (default: 5 minutes)
- Configurable threshold (default: 5 state changes)
- Configurable cooldown (default: 15 minutes)
- Automatic cleanup of inactive flapping history

**Alert TTL & Auto-Cleanup:**
- MaxAlertAgeDays: Auto-cleanup old alerts (default: 7 days)
- MaxAcknowledgedAgeDays: Faster cleanup for acked alerts (default: 1 day)
- AutoAcknowledgeAfterHours: Auto-ack long-running alerts (default: 24 hours)
- Prevents memory leaks from long-running alerts

**WebSocket Broadcast Sequencer:**
- Channel-based sequencing ensures ordered message delivery
- 100ms coalescing window for rapid state updates
- Prevents race conditions in WebSocket broadcasts
- Modified: internal/websocket/hub.go

**Configuration Fields Added:**
- FlappingEnabled, FlappingWindowSeconds, FlappingThreshold, FlappingCooldownMinutes
- MaxAlertAgeDays, MaxAcknowledgedAgeDays, AutoAcknowledgeAfterHours

All features are production-ready and build successfully.
2025-11-06 16:46:30 +00:00
rcourtman
20099549c6 Add comprehensive release validation to prevent missing artifacts
Adds automated validation script to prevent the pattern of patch
releases caused by missing files/artifacts.

scripts/validate-release.sh validates all 40+ artifacts including:
- Docker image scripts (8 install/uninstall scripts)
- Docker image binaries (17 across all platforms)
- Release tarballs (5 including universal and macOS)
- Standalone binaries (12+)
- Checksums for all distributable assets
- Version embedding in every binary type
- Tarball contents (binaries + scripts + VERSION)
- Binary architectures and file types

The script catches 100% of issues from the last 3 patch releases
(missing scripts, missing install.sh, missing binaries, broken
version embedding).

Updated RELEASE_CHECKLIST.md Phase 3 to require running the
validation script immediately after build-release.sh and before
proceeding to Docker build/publish phases.

Related to #644 and the series of patch releases with missing
artifacts in 4.26.x.
2025-11-06 16:33:49 +00:00
rcourtman
becda56897 Fix critical rollback download URL bug and doc inconsistencies
Issues found during systematic audit after #642:

1. CRITICAL BUG - Rollback downloads were completely broken:
   - Code constructed: pulse-linux-amd64 (no version, no .tar.gz)
   - Actual asset name: pulse-v4.26.1-linux-amd64.tar.gz
   - This would cause 404 errors on all rollback attempts
   - Fixed: Construct correct tarball URL with version
   - Added: Extract tarball after download to get binary

2. TEMPERATURE_MONITORING.md referenced non-existent v4.27.0:
   - Changed to use /latest/download/ for future-proof docs

3. API.md example had wrong filename format:
   - Changed pulse-linux-amd64.tar.gz to pulse-v4.30.0-linux-amd64.tar.gz
   - Ensures example matches actual release asset naming

The rollback bug would have affected any user attempting to roll back
to a previous version via the UI or API.
2025-11-06 14:25:32 +00:00
rcourtman
fa3b0db243 Improve static asset caching for hashed files
Hashed static assets (e.g., index-BXHytNQV.js, index-TvhSzimt.css) are
now cached for 1 year with immutable flag since content hash changes
when files change.

Benefits:
- Faster page loads on subsequent visits
- Reduced server bandwidth
- Better user experience on demo and production instances

Only index.html and non-hashed assets remain uncached to ensure
users always get the latest version.
2025-11-06 13:54:26 +00:00
rcourtman
a9d2209edd Fix demo mode to allow authentication endpoints
Demo mode now permits login/logout and OIDC authentication endpoints
while still blocking all modification requests. This allows demo
instances to require authentication while remaining read-only.

Authentication endpoints are read-only operations that verify
credentials and issue session tokens without modifying any state.
All POST/PUT/DELETE/PATCH operations remain blocked.
2025-11-06 13:48:28 +00:00
rcourtman
fdcec85931 Fix critical version embedding issues for 4.26 release
Addresses the root cause of issue #631 (infinite Docker agent restart loop)
and prevents similar issues with host-agent and sensor-proxy.

Changes:
- Set dockeragent.Version default to "dev" instead of hardcoded version
- Add version embedding to server build in Dockerfile
- Add version embedding to host-agent builds (all platforms)
- Add version embedding to sensor-proxy builds (all platforms)

This ensures:
1. Server's /api/agent/version endpoint returns correct v4.26.0
2. Downloaded agent binaries have matching embedded versions
3. Dev builds skip auto-update (Version="dev")
4. No version mismatch triggers infinite restart loops

Related to #631
2025-11-06 11:42:52 +00:00
rcourtman
20854256c3 Fix VM migration issue where custom alert thresholds are lost
Resolves #641

## Problem
When a VM migrates between Proxmox nodes, Pulse was treating it as a new
resource and discarding custom alert threshold overrides. This occurred
because guest IDs included the node name (e.g., `instance-node-VMID`),
causing the ID to change when the VM moved to a different node.

Users reported that after migrating a VM, previously disabled alerts
(e.g., memory threshold set to 0) would resume firing.

## Root Cause
Guest IDs were constructed as:
- Standalone: `node-VMID`
- Cluster: `instance-node-VMID`

When a VM migrated from node1 to node2, the ID changed from
`instance-node1-100` to `instance-node2-100`, causing:
- Alert threshold overrides to be orphaned (keyed by old ID)
- Guest metadata (custom URLs, descriptions) to be orphaned
- Active alerts to reference the wrong resource ID

## Solution
Changed guest ID format to be stable across node migrations:
- New format: `instance-VMID` (for both standalone and cluster)
- Retains uniqueness across instances while being node-independent
- Allows VMs to migrate freely without losing configuration

## Implementation

### Backend Changes
1. **Guest ID Construction** (`monitor_polling.go`):
   - Simplified to always use `instance-VMID` format
   - Removed node from the ID construction logic

2. **Alert Override Migration** (`alerts.go`):
   - Added lazy migration in `getGuestThresholds()`
   - Detects legacy ID formats and migrates to new format
   - Preserves user configurations automatically

3. **Guest Metadata Migration** (`guest_metadata.go`):
   - Added `GetWithLegacyMigration()` helper method
   - Called during VM/container polling to migrate metadata
   - Preserves custom URLs and descriptions

4. **Active Alerts Migration** (`alerts.go`):
   - Added migration logic in `LoadActiveAlerts()`
   - Translates legacy alert resource IDs to new format
   - Preserves alert acknowledgments across restarts

### Frontend Changes
5. **ID Construction Updates**:
   - `ThresholdsTable.tsx`: Updated fallback from `instance-node-vmid` to `instance-vmid`
   - `Dashboard.tsx`: Simplified guest ID construction
   - `GuestRow.tsx`: Updated `buildGuestId()` helper

## Migration Strategy
- **Lazy Migration**: Configs are migrated as guests are discovered
- **Backwards Compatible**: Old IDs are detected and automatically converted
- **Zero Downtime**: No manual intervention required
- **Persisted**: Migrated configs are saved on next config write cycle

## Testing Recommendations
After deployment:
1. Verify existing alert overrides still apply
2. Test VM migration - confirm thresholds persist
3. Check guest metadata (custom URLs) survive migration
4. Verify active alerts maintain acknowledgment state

## Related
- Addresses similar issues with guest metadata and active alert tracking
- Lays groundwork for any future guest-specific configuration features
- Aligns with project philosophy: correctness and UX over implementation complexity
2025-11-06 10:27:15 +00:00
rcourtman
dfe960deb4 Fix container SSH detection and improve troubleshooting for issue #617
Related to #617

This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.

Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
  system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
  guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
  the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers

The container detection was inconsistent - monitor.go checked both flags
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
2025-11-06 09:57:53 +00:00
rcourtman
12dc8693c4 Add NVIDIA GPU temperature monitoring support (nouveau driver)
- Add nouveau chip recognition to temperature parser
- Implement parseNouveauGPUTemps() for NVIDIA GPU temps via nouveau driver
- Map "GPU core" sensor to edge temperature field
- Supports systems using open-source nouveau driver

This complements the AMD GPU support added previously. Systems using
the nouveau driver will now see NVIDIA GPU temperatures in the
dashboard. For proprietary nvidia driver users, GPU temps are not
available via lm-sensors and would require nvidia-smi integration.
2025-11-06 00:24:42 +00:00
rcourtman
d62259ffa7 Add AMD GPU temperature monitoring support
Related to #600

- Add GPU field to Temperature model with edge, junction, and mem sensors
- Add amdgpu chip recognition to temperature parser
- Implement parseGPUTemps() to extract AMD GPU temperature data
- Update frontend TypeScript types to include GPU temperatures
- Display GPU temps in node table tooltip alongside CPU temps
- Set hasGPU flag when GPU data is available

This enables temperature monitoring for AMD GPUs (amdgpu sensors)
that was previously being collected via SSH but silently discarded
during parsing.
2025-11-06 00:19:04 +00:00
rcourtman
af55362009 Fix inflated RAM usage reporting for LXC containers
Related to #553

## Problem

LXC containers showed inflated memory usage (e.g., 90%+ when actual usage was 50-60%,
96% when actual was 61%) because the code used the raw `mem` value from Proxmox's
`/cluster/resources` API endpoint. This value comes from cgroup `memory.current` which
includes reclaimable cache and buffers, making memory appear nearly full even when
plenty is available.

## Root Cause

- **Nodes**: Had sophisticated cache-aware memory calculation with RRD fallbacks
- **VMs (qemu)**: Had detailed memory calculation using guest agent meminfo
- **LXCs**: Naively used `res.Mem` directly without any cache-aware correction

The Proxmox cluster resources API's `mem` field for LXCs includes cache/buffers
(from cgroup memory accounting), which should be excluded for accurate "used" memory.

## Solution

Implement cache-aware memory calculation for LXC containers by:

1. Adding `GetLXCRRDData()` method to fetch RRD metrics for LXC containers from
   `/nodes/{node}/lxc/{vmid}/rrddata`
2. Using RRD `memavailable` to calculate actual used memory (total - available)
3. Falling back to RRD `memused` if `memavailable` is not available
4. Only using cluster resources `mem` value as last resort

This matches the approach already used for nodes and VMs, providing consistent
cache-aware memory reporting across all resource types.

## Changes

- Added `GuestRRDPoint` type and `GetLXCRRDData()` method to pkg/proxmox
- Added `GetLXCRRDData()` to ClusterClient for cluster-aware operations
- Modified LXC memory calculation in `pollPVEInstance()` to use RRD data when available
- Added guest memory snapshot recording for LXC containers
- Updated test stubs to implement the new interface method

## Testing

- Code compiles successfully
- Follows the same proven pattern used for nodes and VMs
- Includes diagnostic snapshot recording for troubleshooting
2025-11-06 00:16:18 +00:00
rcourtman
7936808193 Add custom display name support for Docker hosts
This implements the ability for users to assign custom display names to Docker hosts,
similar to the existing functionality for Proxmox nodes. This addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.

Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)

Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)

Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.

Related to #623
2025-11-05 23:18:03 +00:00
rcourtman
0647a76c55 Fix temperature monitoring SSH key availability in containerized setup flow
Addresses issue #635 where users encounter "can't find the SSH key" errors
when enabling temperature monitoring during automated PVE setup with Pulse
running in Docker.

Root cause:
- Setup script embeds SSH keys at generation time (when downloaded)
- For containerized Pulse, keys are empty until pulse-sensor-proxy is installed
- Script auto-installs proxy, but didn't refresh keys after installation
- This caused temperature monitoring setup to fail with confusing errors

Changes:
1. After successful proxy installation, immediately fetch and populate the
   proxy's SSH public key (lines 4068-4080)
2. Update bash variables SSH_SENSORS_PUBLIC_KEY and SSH_SENSORS_KEY_ENTRY
   so temperature monitoring setup can proceed in the same script run
3. Improve error messaging when keys aren't available (lines 4424-4453):
   - Clear explanation of containerized Pulse requirements
   - Step-by-step instructions for container restart and verification
   - Separate guidance for bare-metal vs containerized deployments

Flow improvements:
- Initial run: Proxy installs → keys fetched → temp monitoring configures
- Rerun after container restart: Keys fetched at script start → works
- Both scenarios now handled correctly

Related to #635
2025-11-05 23:11:45 +00:00
rcourtman
d28cfed3c7 Improve temperature monitoring setup messaging for containerized deployments
When Pulse is running in a container and the SSH key is not available,
provide clearer guidance about the pulse-sensor-proxy requirement and
include documentation link for Docker deployments.

This helps users understand that containerized Pulse needs the host-side
sensor proxy to access temperature data from Proxmox hosts.
2025-11-05 23:05:47 +00:00
rcourtman
e21a72578f Add configurable SSH port for temperature monitoring
Related to #595

This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.

**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.

**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration

**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)

**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments

The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
2025-11-05 20:03:29 +00:00
rcourtman
dc94f6092a Add retry logic for guest agent filesystem info in efficient polling
Related to #630

When using the efficient polling path (cluster/resources endpoint), guest
agent calls to GetVMFSInfo were made without retry logic. This could cause
transient "Guest details unavailable" errors during initialization when the
guest agent wasn't immediately ready to respond.

The traditional polling path already used retryGuestAgentCall for filesystem
info queries, providing resilience against transient timeouts. This commit
applies the same retry logic to the efficient polling path for consistency.

Changes:
- Wrap GetVMFSInfo call in efficient polling with retryGuestAgentCall
- Use configured guestAgentFSInfoTimeout and guestAgentRetries settings
- Ensures consistent behavior between traditional and efficient polling paths

This should resolve the transient initialization issue reported in #630 where
guest details were unavailable until after a reinstall/restart.
2025-11-05 19:49:17 +00:00
rcourtman
23691d5b41 Improve cluster health diagnostics and error messaging
Related to #405

Enhances error reporting and logging when all cluster endpoints are
unhealthy, making it easier to diagnose connectivity issues.

Changes:

1. Enhanced error messages in cluster_client.go:
   - Error now includes list of unreachable endpoints
   - Added detailed logging when no healthy endpoints available
   - Log at WARN level (not DEBUG) when cluster health check fails
   - Better context in recovery attempts with start/completion summaries

2. Improved storage polling resilience in monitor_polling.go:
   - Better error context when cluster storage polling fails
   - Specific guidance for "no healthy nodes available" scenario
   - Storage polling continues with direct node queries even if
     cluster-wide query fails (already worked, but now clearer)

3. Better recovery logging:
   - Log when recovery attempts start with list of unhealthy endpoints
   - Log individual recovery failures at DEBUG level
   - Log recovery summary (success/failure counts)
   - Track throttled endpoints separately for clearer diagnostics

These changes help users understand:
- Which specific endpoints are unreachable
- Whether it's a network/connectivity issue vs. API issue
- That Pulse will continue trying to recover endpoints automatically
- That storage monitoring continues via direct node queries

The root issue is that Pulse's internal health tracking can mark all
endpoints unhealthy when they're unreachable from the Pulse server,
even if Proxmox reports them as "online" in cluster status. Better
logging helps diagnose these network connectivity issues.
2025-11-05 19:44:29 +00:00
rcourtman
9670afe0cb Fix NODE column in backups to show actual guest node
Related to discussion #577

When backups are stored on shared storage accessible from multiple nodes,
the backup polling code was incorrectly assigning the backup to whichever
node it was discovered on during the scan, rather than the node where the
VM/container actually resides.

This fix:
- Builds a lookup map of VMID -> actual node at the start of backup polling
- Uses this map to assign the correct node for guest backups (VMID > 0)
- Preserves existing behavior for host backups (VMID == 0)
- Falls back to the queried node if the guest is not found in the map

This ensures the NODE column accurately reflects which node hosts each
guest, matching the information displayed on the main page.
2025-11-05 19:38:32 +00:00
rcourtman
059e8bf562 Redirect to login when authentication expires
Related to #626

When authentication expires after some time, users see "Connection lost"
and must refresh the page to see "Authentication required". This commit
implements automatic redirect to login when authentication expires.

Changes:
- Add authentication check to WebSocket endpoint to prevent unauthenticated
  WebSocket connections
- Handle WebSocket close with code 1008 (policy violation) as auth failure
  and redirect to login
- Intercept 401 responses on API calls (except initial auth checks) and
  automatically redirect to login page
- Clear stored credentials and set logout flag before redirect to ensure
  clean login flow

This provides a better user experience by immediately redirecting to the
login page when the session expires, rather than showing a confusing
"Connection lost" message that requires manual page refresh.
2025-11-05 19:36:01 +00:00
rcourtman
b44084af3c Skip false health alerts for Samsung 980/990 SSDs and improve Docker CPU calculation
Related to #547 and #622

## Samsung SSD Fix (#547)
Samsung 980 and 990 series SSDs have known firmware bugs that cause them to
report incorrect health status (typically FAILED or critical warnings) even
when the drives are actually healthy. This is commonly due to incorrect
temperature threshold reporting in the firmware.

This change adds special handling to detect these drives and skip health
status alerts while still monitoring wearout metrics, which remain reliable.
The fix also clears any existing false alerts for these drives.

Users experiencing these false alerts should update their Samsung SSD firmware
to the latest version from Samsung, which typically resolves the issue.

## Docker Agent CPU Fix (#622)
Addresses issue where Docker container CPU usage shows 0%. The Docker
agent uses ContainerStatsOneShot which typically doesn't populate
PreCPUStats, requiring manual delta tracking between collection cycles.

Changes:
- Fix logic bug where prevContainerCPU was updated before checking if
  previous sample existed, causing incorrect delta calculations
- Add comprehensive debug logging showing which calculation method
  succeeded (PreCPUStats, system delta, or time-based fallback)
- Add warning after 10 PreCPUStats failures to inform about manual
  tracking mode (normal for one-shot stats)
- Add detailed failure logging when CPU calculation cannot complete

Expected behavior: First collection cycle returns 0% (no previous
sample), subsequent cycles show accurate CPU metrics.
2025-11-05 19:33:16 +00:00
rcourtman
4c1d7a2797 Fix PMG API parameter issues causing 400 errors
Related to #614

Corrects three issues with PMG monitoring:

1. Remove unsupported timeframe parameter from GetMailStatistics
   - PMG API /statistics/mail does not accept timeframe parameter
   - Previously sent "timeframe=day" causing 400 error
   - API returns current day statistics by default

2. Fix GetMailCount timespan parameter to use seconds
   - Changed from 24 (hours) to 86400 (seconds)
   - PMG API expects timespan in seconds, not hours
   - Previously sent "timespan=24" causing 400 error

3. Update function signature and tests
   - Renamed GetMailCount parameter from timespanHours to timespanSeconds
   - Updated test expectations to match corrected API calls
   - Tests verify parameters are sent correctly

These changes align the PMG client with actual PMG API requirements,
fixing the data population issues reported in v4.25.0.
2025-11-05 19:28:37 +00:00
rcourtman
fcba710183 Guard PBS backups from failed polls
Related to #613

When all PBS datastore queries fail (e.g., due to network issues or PBS
downtime), the system was clearing all backups and showing an empty list.
This adds the same preservation logic that exists for PVE storage backups.

Changes:
- Add shouldPreservePBSBackups() helper function
- Track datastore query success/failure counts in pollPBSBackups()
- Preserve existing backups when all datastore queries fail
- Add comprehensive unit tests for PBS backup preservation logic

This ensures users can still see their backup history even during
temporary connectivity issues with PBS, matching the behavior already
implemented for PVE storage backups.
2025-11-05 19:26:20 +00:00
rcourtman
350828a260 Prefer IP addresses over hostnames for cluster communication
This change modifies the `clusterEndpointEffectiveURL` function to prioritize
IP addresses over hostnames when building cluster endpoint URLs. This eliminates
excessive DNS lookups that can overwhelm DNS servers (e.g., pi-hole), which was
causing hundreds of thousands of unnecessary DNS queries.

When Pulse communicates with Proxmox cluster nodes, it will now:
1. First try to use the IP address from ClusterEndpoint.IP
2. Fall back to ClusterEndpoint.Host only if IP is not available

This is a minimal, backwards-compatible change that maintains existing
functionality while dramatically reducing DNS traffic for clusters where
node IPs are already known and stored.

Related to #620
2025-11-05 19:23:26 +00:00
rcourtman
f0088070be Improve guest agent error classification to prevent false permission errors
Related to #596

**Problem:**
Users were seeing persistent "permission denied" error messages for VMs
that simply didn't have qemu-guest-agent installed or running. The error
detection logic was too broad and classified Proxmox API 500 errors as
permission issues, even when they indicated guest agent unavailability.

**Root Cause:**
When qemu-guest-agent is not installed or not running, Proxmox API returns
various error responses (500, 403) that may contain permission-related text.
The previous error detection logic checked for "permission denied" strings
without considering the HTTP status code context, leading to:
- VMs with guest agent: guest details display correctly
- VMs without guest agent: false "Permission denied" error shown

**Solution:**
Enhanced error classification logic to distinguish between:
1. Actual permission issues (401/403 with permission keywords)
2. Guest agent unavailability (500 errors)
3. Agent timeout issues
4. Other agent errors

The fix ensures that only explicit authentication/authorization errors
(401 Unauthorized, 403 Forbidden with permission keywords) are classified
as permission-denied, while API 500 errors are correctly identified as
agent-not-running issues.

**Changes:**
- Reordered error detection to check most specific patterns first
- Added HTTP status code context to permission error detection
- 500 errors now correctly map to "agent-not-running" status
- Only 401/403 errors with explicit permission keywords trigger "permission-denied"
- Improved log messages to guide users toward correct resolution
- Fixed err.Error() vs errStr variable inconsistency

**Impact:**
Users will now see accurate error messages that guide them to:
- Install qemu-guest-agent when it's missing (most common case)
- Check permissions only when there's an actual auth/authz issue
- Understand the difference between agent problems and permission problems
2025-11-05 19:21:58 +00:00
rcourtman
ddc787418b Round float values in webhook payloads to 1 decimal place
Webhook alert payloads now round Value and Threshold fields to 1 decimal
place before template rendering. This eliminates excessive precision in
webhook messages (e.g., 62.27451680630036 becomes 62.3).

The fix is applied in prepareWebhookData() so all webhook templates
benefit automatically, including Google Space webhooks, generic JSON
webhooks, and custom templates.

Related to #619
2025-11-05 19:19:10 +00:00
rcourtman
b1831d7b3e Add guest URL support for PVE hosts
Related to discussion #615

Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.

Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field

Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces

When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
2025-11-05 19:06:08 +00:00
rcourtman
02864f54dd Add test notification functionality for Apprise
- Add support for testing Apprise notifications via /api/notifications/test endpoint
- Users can now test their Apprise configuration (both CLI and HTTP modes) using method="apprise"
- Added comprehensive unit tests for both CLI and HTTP modes
- Tests verify correct behavior when Apprise is enabled/disabled
- Tests validate that notifications are properly sent through Apprise channels

Related to #584
2025-11-05 18:54:18 +00:00
rcourtman
6404b6a5fc Expand temperature sensor compatibility for SuperIO and AMD CPUs
Users with NCT6687 SuperIO chips and AMD processors reporting only chiplet
temperatures were unable to see CPU temperature data. Added support for
Nuvoton/Winbond/Fintek SuperIO chips and AMD Tccd chiplet temperatures,
with debug logging to aid troubleshooting unsupported sensor configurations.

Related to discussion #586
2025-11-05 18:47:21 +00:00
rcourtman
b972b7f05f Fix broken documentation links for containerized deployments
Replace non-functional docs.pulseapp.io URLs with direct GitHub repository
links. The containerized deployment security documentation exists in
SECURITY.md and was previously inaccessible via the external link.

Changes:
- Update SECURITY.md documentation reference
- Fix three documentation links in config_handlers.go (SSH verification,
  setup script, and security block error messages)
- All links now point to GitHub repository where docs actually live

Related to #607
2025-11-05 18:46:41 +00:00