Commit graph

152 commits

Author SHA1 Message Date
rcourtman
93acb6f564 Merge update service refactor with SSE and job queue
- Add job queue system to ensure only one update runs at a time
- Add Server-Sent Events (SSE) for real-time push updates
- Increase rate limit from 20/min to 60/min for update endpoints
- Add unit tests for queue and SSE functionality
- Frontend: Update modal now uses SSE with polling fallback

Eliminates: 429 rate limit errors, duplicate modals, race conditions
Related to #671
2025-11-11 10:06:16 +00:00
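
A minimal sketch of the single-update guarantee described in the merge above, using a buffered channel of size one as a try-lock. The `UpdateQueue` and `TryEnqueue` names are illustrative, not Pulse's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// UpdateQueue is a hypothetical single-slot job queue: a buffered channel
// of size one acts as a lock that can be tried without blocking, so only
// one update job can run at a time.
type UpdateQueue struct {
	slot chan struct{}
}

func NewUpdateQueue() *UpdateQueue {
	return &UpdateQueue{slot: make(chan struct{}, 1)}
}

var ErrUpdateInProgress = errors.New("an update is already running")

// TryEnqueue runs job if no other job holds the slot; otherwise it returns
// immediately so the caller can report "update in progress".
func (q *UpdateQueue) TryEnqueue(job func()) error {
	select {
	case q.slot <- struct{}{}: // acquired the single slot
	default:
		return ErrUpdateInProgress
	}
	go func() {
		defer func() { <-q.slot }() // release the slot when the job finishes
		job()
	}()
	return nil
}

func main() {
	q := NewUpdateQueue()
	done := make(chan struct{})
	_ = q.TryEnqueue(func() { <-done }) // first job occupies the slot
	if err := q.TryEnqueue(func() {}); err != nil {
		fmt.Println(err) // second job is rejected: only one update at a time
	}
	close(done)
}
```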
Claude
0af921dc23 Refactor update service to eliminate polling and race conditions
This commit implements a comprehensive refactoring of the update system
to address race conditions, redundant polling, and rate limiting issues.

Backend changes:
- Add job queue system to ensure only ONE update runs at a time
- Implement Server-Sent Events (SSE) for real-time update progress
- Add rate limiting to /api/updates/status (5-second minimum per client)
- Create SSE broadcaster for push-based status updates
- Integrate job queue with update manager for atomic operations
- Add comprehensive unit tests for queue and SSE components

Frontend changes:
- Update UpdateProgressModal to use SSE as primary mechanism
- Implement automatic fallback to polling when SSE unavailable
- Maintain backward compatibility with existing update flow
- Clean up SSE connections on component unmount

API changes:
- Add new endpoint: GET /api/updates/stream (SSE)
- Enhance /api/updates/status with client-based rate limiting
- Return cached status with appropriate headers when rate limited

Benefits:
- Eliminates 429 rate limit errors during updates
- Only one update job can run at a time (prevents race conditions)
- Real-time updates via SSE reduce unnecessary polling
- Graceful degradation to polling when SSE unavailable
- Better resource utilization and reduced server load

Testing:
- All existing tests pass
- New unit tests for queue and SSE functionality
- Integration tests verify complete update flow
2025-11-11 09:33:05 +00:00
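
For readers unfamiliar with SSE plumbing, here is a minimal net/http sketch of an endpoint in the spirit of GET /api/updates/stream. The progress channel and JSON payloads are assumptions; the real broadcaster manages per-client subscriptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// streamUpdates is a bare-bones Server-Sent Events handler. The progress
// channel is a stand-in for Pulse's broadcaster; real code would
// subscribe and unsubscribe each client individually.
func streamUpdates(progress <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for {
			select {
			case <-r.Context().Done(): // client went away; clean up
				return
			case msg := <-progress:
				// SSE frames are "data: ...\n\n"; the browser's
				// EventSource (with a polling fallback) consumes them.
				fmt.Fprintf(w, "data: %s\n\n", msg)
				flusher.Flush()
			}
		}
	}
}

func main() {
	progress := make(chan string)
	go func() {
		for i := 0; i <= 100; i += 25 {
			progress <- fmt.Sprintf(`{"phase":"downloading","percent":%d}`, i)
			time.Sleep(time.Second)
		}
	}()
	http.Handle("/api/updates/stream", streamUpdates(progress))
	http.ListenAndServe(":8080", nil)
}
```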
rcourtman
e894bc7b1d Fix recurring update issues (related to #671)
This commit addresses three recurring issues with the update system:

1. **Checksum mismatches (v4.27.0, v4.28.0):**
   - Root cause: Release process uploads checksums.txt first, but if artifacts
     are rebuilt after that upload, checksums become stale
   - Fix: Update RELEASE_CHECKLIST.md to REQUIRE running validate-release.sh
     before publishing (step 9, non-negotiable)
   - The validation script exists and catches these errors, but wasn't being
     enforced in the release process

2. **Duplicate error modals:**
   - Root cause: UpdateProgressModal rendered in both App.tsx
     (GlobalUpdateProgressWatcher) and UpdateBanner.tsx
   - Fix: Remove UpdateProgressModal from UpdateBanner.tsx
   - GlobalUpdateProgressWatcher automatically shows the modal when updates
     start, so the banner's modal is redundant

3. **Rate limiting too strict:**
   - Root cause: UpdateProgressModal polls /api/updates/status every 2 seconds
     (30 req/min), but rate limit was 20/min
   - Fix: Increase UpdateEndpoints rate limit from 20/min to 60/min
   - Allows modal to poll without hitting rate limits during updates

These were all manual process errors and configuration issues, not code bugs.
The validation script enforcement prevents future checksum mismatches.
2025-11-11 09:09:30 +00:00
rcourtman
df185985eb Fix bootstrap token path display for Docker deployments (related to #680)
The first-run setup UI was displaying incorrect bootstrap token paths for
Docker deployments. It showed `/etc/pulse/.bootstrap_token` regardless of
deployment type, but Docker containers use `/data/.bootstrap_token` by
default (via PULSE_DATA_DIR env var).

Changes:
- Extended `/api/security/status` endpoint to include `bootstrapTokenPath`
  and `isDocker` fields when a bootstrap token is active
- Updated FirstRunSetup component to fetch and display the correct path
  dynamically based on actual deployment configuration
- For Docker deployments, UI now shows both `docker exec` command and
  in-container command
- Falls back to showing both standard and Docker paths if API data
  unavailable (backward compatibility)

This fix ensures users always see the correct command for their specific
deployment, including custom PULSE_DATA_DIR configurations.
2025-11-09 23:41:55 +00:00
rcourtman
425ea00ba2 Fix upgrade path when DISABLE_AUTH detected but no credentials exist (fixes #678)
Users upgrading from v4.25 (where DISABLE_AUTH actually disabled auth) to
v4.27.1 (where DISABLE_AUTH is ignored but triggers a deprecation warning)
were stuck in a catch-22:

- They had no credentials (old version had auth disabled)
- DISABLE_AUTH detection incorrectly required authentication
- Setup wizard returned 401, preventing first credential creation
- Could not complete setup to create credentials and remove flag

Root cause: When DISABLE_AUTH was detected, the code set forceRequested=true
which triggered the authentication requirement even when authConfigured=false.

Fix: Only require authentication when credentials actually exist. When no
auth is configured, allow the bootstrap token flow regardless of whether
DISABLE_AUTH is detected.

This lets users upgrade from legacy DISABLE_AUTH deployments by using the
bootstrap token to create their first credentials, then removing the flag.
2025-11-09 20:33:58 +00:00
rcourtman
62a9f40cc7 Fix diagnostics incorrectly warning about /run mount in Docker (related to #600)
The diagnostic code was warning ALL deployments using the /run/pulse-sensor-proxy
socket path to "remove and re-add" their configuration to use /mnt/pulse-proxy
instead. This was incorrect for Docker deployments where /run is the correct
and documented mount path (see docker-compose.yml line 15).

The warning was only meant for LXC containers where the managed mount at
/mnt/pulse-proxy is preferred over a legacy hand-crafted /run mount.

Fix: Only show the warning in non-Docker environments (check PULSE_DOCKER env).
Docker deployments correctly use /run/pulse-sensor-proxy per compose file.

Impact: Docker users were seeing confusing diagnostic warnings telling them
to reconfigure a correct setup.
2025-11-09 16:49:49 +00:00
rcourtman
bb7ca93c18 feat: Add mdadm RAID monitoring support for host agents
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.

Key changes:

**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
  - Critical alerts for degraded arrays or arrays with failed devices
  - Warning alerts for rebuilding/resyncing arrays with progress tracking
  - Auto-clear alerts when arrays return to healthy state

**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
  - Array status (clean/degraded/recovering) with color-coded indicators
  - Device counts (active/total/failed/spare)
  - Rebuild progress percentage and speed when applicable
  - Green for healthy, amber for rebuilding, red for degraded

**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)

**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress

All builds pass successfully. RAID monitoring is automatic and best-effort:
if mdadm is not installed or no arrays exist, the host agent continues
reporting other metrics normally.

Related to #676
2025-11-09 16:36:33 +00:00
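
A sketch of the kind of `mdadm --detail` parsing the mdadm package performs, assuming an illustrative `MDArray` shape rather than the actual report structure:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// MDArray holds a few illustrative fields from `mdadm --detail` output;
// the real report structure is richer (spares, rebuild progress, speed).
type MDArray struct {
	State         string
	ActiveDevices int
	FailedDevices int
}

// parseDetail extracts "Key : Value" lines from mdadm --detail output.
func parseDetail(output string) MDArray {
	var arr MDArray
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		key, val = strings.TrimSpace(key), strings.TrimSpace(val)
		switch key {
		case "State":
			arr.State = val
		case "Active Devices":
			fmt.Sscanf(val, "%d", &arr.ActiveDevices)
		case "Failed Devices":
			fmt.Sscanf(val, "%d", &arr.FailedDevices)
		}
	}
	return arr
}

func main() {
	sample := `/dev/md0:
           State : clean, degraded
  Active Devices : 3
  Failed Devices : 1`
	arr := parseDetail(sample)
	// A degraded state or any failed device would raise a critical alert.
	fmt.Printf("%+v degraded=%v\n", arr, strings.Contains(arr.State, "degraded"))
}
```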
rcourtman
4834dea05b Add support for linux-386 and linux-armv6 architectures (related to #674)
Adds build support for 32-bit x86 (i386/i686) and ARMv6 (older Raspberry Pi models) architectures across all agents and install scripts.

Changes:
- Add linux-386 and linux-armv6 to build-release.sh builds array
- Update Dockerfile to build docker-agent, host-agent, and sensor-proxy for new architectures
- Update all install scripts to detect and handle i386/i686 and armv6l architectures
- Add architecture normalization in router download endpoints
- Update update manager architecture mapping
- Update validate-release.sh to expect 24 binaries (was 18)

This enables Pulse agents to run on older/legacy hardware including 32-bit x86 systems and Raspberry Pi Zero/Zero W devices.
2025-11-09 08:35:24 +00:00
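
The architecture normalization mentioned above might look roughly like this; the mapping table is an assumption, not the exact one in the router:

```go
package main

import "fmt"

// normalizeArch maps uname -m style architecture strings onto the
// GOARCH-style suffixes used in release artifact names (linux-386,
// linux-armv6, ...). The table is illustrative, not Pulse's exact map.
func normalizeArch(raw string) (string, bool) {
	switch raw {
	case "i386", "i486", "i586", "i686":
		return "386", true
	case "armv6l":
		return "armv6", true
	case "armv7l", "armhf":
		return "armv7", true
	case "x86_64", "amd64":
		return "amd64", true
	case "aarch64", "arm64":
		return "arm64", true
	default:
		return "", false
	}
}

func main() {
	for _, raw := range []string{"i686", "armv6l", "x86_64", "riscv64"} {
		if arch, ok := normalizeArch(raw); ok {
			fmt.Printf("%s -> linux-%s\n", raw, arch)
		} else {
			fmt.Printf("%s -> unsupported\n", raw)
		}
	}
}
```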
rcourtman
1b221cca71 feat: Add configurable allowlist for webhook private IP targets (addresses #673)
Allow homelab users to send webhooks to internal services while maintaining security defaults.

Changes:
- Add webhookAllowedPrivateCIDRs field to SystemSettings (persistent config)
- Implement CIDR parsing and validation in NotificationManager
- Convert ValidateWebhookURL to instance method to access allowlist
- Add UI controls in System Settings for configuring trusted CIDR ranges
- Maintain strict security by default (block all private IPs)
- Keep localhost, link-local, and cloud metadata services blocked regardless of allowlist
- Re-validate on both config save and webhook delivery (DNS rebinding protection)
- Add comprehensive tests for CIDR parsing and IP matching

Backend:
- UpdateAllowedPrivateCIDRs() parses comma-separated CIDRs with validation
- Support for bare IPs (auto-converts to /32 or /128)
- Thread-safe allowlist updates with RWMutex
- Logging when allowlist is updated or used
- Validation errors prevent invalid CIDRs from being saved

Frontend:
- New "Webhook Security" section in System Settings
- Input field with examples and helpful placeholder text
- Real-time unsaved changes tracking
- Loads and saves allowlist via system settings API

Security:
- Default behavior unchanged (all private IPs blocked)
- Explicit opt-in required via configuration
- Localhost (127/8) always blocked
- Link-local (169.254/16) always blocked
- Cloud metadata services always blocked
- DNS resolution checked at both save and send time

Testing:
- Tests for CIDR parsing (valid/invalid inputs)
- Tests for IP allowlist matching
- Tests for bare IP address handling
- Tests for security boundaries (localhost, link-local remain blocked)

Related to #673

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 08:31:12 +00:00
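
A sketch of the allowlist behavior described above (comma-separated CIDRs, bare IPs widened to /32 or /128, RWMutex-guarded updates), with illustrative method names:

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
	"sync"
)

// allowlist mimics the described behavior; the real NotificationManager
// carries more state and logs updates.
type allowlist struct {
	mu    sync.RWMutex
	cidrs []netip.Prefix
}

func (a *allowlist) UpdateAllowedPrivateCIDRs(spec string) error {
	var parsed []netip.Prefix
	for _, part := range strings.Split(spec, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		p, err := netip.ParsePrefix(part)
		if err != nil {
			// Bare IP: auto-convert to a single-address prefix.
			ip, ipErr := netip.ParseAddr(part)
			if ipErr != nil {
				return fmt.Errorf("invalid CIDR %q: %w", part, err)
			}
			p = netip.PrefixFrom(ip, ip.BitLen()) // /32 or /128
		}
		parsed = append(parsed, p)
	}
	a.mu.Lock() // validation errors above prevent saving a bad list
	a.cidrs = parsed
	a.mu.Unlock()
	return nil
}

func (a *allowlist) Allows(ip netip.Addr) bool {
	a.mu.RLock()
	defer a.mu.RUnlock()
	for _, p := range a.cidrs {
		if p.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	var a allowlist
	_ = a.UpdateAllowedPrivateCIDRs("192.168.1.0/24, 10.0.0.5")
	fmt.Println(a.Allows(netip.MustParseAddr("192.168.1.42"))) // true
	fmt.Println(a.Allows(netip.MustParseAddr("10.0.0.6")))     // false
}
```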
rcourtman
6bf32f98d6 Fix storage/disk/backup disappearing for clusters with VerifySSL enabled
Related to #670, #657

The fix in v4.26.5 (commit 59a97f2e3) attempted to resolve storage disappearing
by preferring hostnames over IPs when TLS hostname verification is required
(VerifySSL=true and no fingerprint). However, that fix was ineffective because
the cluster discovery code was populating BOTH the Host and IP fields with the
IP address.

**Root Cause:**
In internal/api/config_handlers.go, the detectPVECluster function was setting:
- endpoint.Host = schemePrefix + clusterNode.IP (when IP was available)
- endpoint.IP = clusterNode.IP

This meant both fields contained the same IP address. When the monitoring code
tried to prefer endpoint.Host for TLS validation (internal/monitoring/monitor.go:
361-368), it was still getting an IP, causing certificate validation to fail
with "certificate is valid for pve01.example.com, not 10.0.0.44".

**Solution:**
Separate the Host and IP fields properly during cluster discovery:
- endpoint.Host = hostname (e.g., "https://pve01:8006") for TLS validation
- endpoint.IP = IP address (e.g., "10.0.0.44") for DNS-free connections

The existing logic in clusterEndpointEffectiveURL() can now correctly choose
between them based on TLS requirements.

**Impact:**
Users with VerifySSL=true who upgraded to v4.26.1-v4.26.5 and lost storage
visibility should now see storage, VM disks, and backups again after this fix.
2025-11-08 23:07:49 +00:00
rcourtman
270840801a Fix setup script fmt.Sprintf argument misalignment (related to #663)
The setup script template had 44 %s placeholders, but the fmt.Sprintf call
arguments were out of order starting at position 15. This caused the Pulse
URL to be inserted where the token name should be, resulting in errors like:

  Token ID: pulse-monitor@pam!http://192.168.0.44:7655

Instead of the correct format:

  Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916

Changes:
- Escaped %s in printf helper (line 3949) so it doesn't consume arguments
- Reordered fmt.Sprintf arguments (lines 4727-4732) to match template order
- Removed 2 extra pulseURL arguments that were causing the shift

This fix ensures all 44 placeholders receive the correct values in order.
2025-11-08 07:52:19 +00:00
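
A generic illustration of the escaping fix, not the actual setup-script template: an unescaped %s inside a template consumes an argument and shifts every later placeholder, while %%s stays literal:

```go
package main

import "fmt"

func main() {
	// A literal %s inside the template eats the first argument, so every
	// later placeholder shifts and the URL lands in the Token ID slot.
	broken := `printf "%s"   # helper usage
Token ID: %s`
	fmt.Printf(broken+"\n\n", "pulse-monitor@pam!pulse-token", "http://192.168.0.44:7655")

	// Escaping it as %%s keeps the helper text literal, so the real
	// placeholders line up with their arguments again.
	fixed := `printf "%%s"   # helper usage
Token ID: %s`
	fmt.Printf(fixed+"\n", "pulse-monitor@pam!pulse-token")
}
```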
rcourtman
16c29463f9 Fix Windows host agent installer reliability (related to #654)
The download endpoint had a dangerous fallback that silently served the
wrong binary when the requested platform/arch combination was missing.
If a Docker image shipped without Windows binaries, the installer would
receive a Linux ELF instead of a Windows PE, causing ERROR_BAD_EXE_FORMAT.

Changes:
- Download handler now operates in strict mode when platform+arch are
  specified, returning 404 instead of serving mismatched binaries
- PowerShell installer validates PE header (MZ signature)
- PowerShell installer verifies PE machine type matches requested arch
- PowerShell installer fetches and verifies SHA256 checksums
- PowerShell installer shows diagnostic info: OS arch, download URL,
  file size for better troubleshooting

This prevents silent failures and provides clear error messages when
binaries are missing or corrupted.
2025-11-07 22:55:03 +00:00
rcourtman
e30757720a Fix P1: Resource leaks in Recovery Tokens, Rate Limiter, and OIDC Service
Fixed three P1 goroutine/memory leaks that prevent proper resource cleanup:

1. Recovery Tokens goroutine leak
   - Cleanup routine runs forever without stop mechanism
   - Added stopCleanup channel and Stop() method
   - Cleanup loop now uses select with stopCleanup case

2. Rate Limiter goroutine leak
   - Cleanup routine runs forever without stop mechanism
   - Added stopCleanup channel and Stop() method
   - Changed from 'for range ticker.C' to select with stopCleanup case

3. OIDC Service memory leak (DoS vector)
   - Abandoned OIDC flows never cleaned up
   - State entries accumulate unboundedly
   - Added cleanup routine with 5-minute ticker
   - Periodically removes expired state entries (10min TTL)
   - Added Stop() method for proper shutdown

All three follow consistent pattern:
- Add stopCleanup chan struct{} field
- Initialize in constructor
- Use select with ticker and stopCleanup cases
- Close channel in Stop() method to signal goroutine exit

Impact:
- Prevents goroutine leaks during service restarts/reloads
- Prevents memory exhaustion from abandoned OIDC login attempts
- Enables proper cleanup in tests and graceful shutdown
2025-11-07 10:18:44 +00:00
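
The consistent pattern listed above, sketched as a standalone type; the sync.Once guard mirrors the double-stop fix from the companion commit and is an assumption here:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cleaner shows the pattern: a stopCleanup channel closed exactly once
// by Stop() so the ticker goroutine exits instead of running forever.
type cleaner struct {
	stopCleanup chan struct{}
	stopOnce    sync.Once
}

func newCleaner() *cleaner {
	c := &cleaner{stopCleanup: make(chan struct{})}
	go c.cleanupLoop()
	return c
}

func (c *cleaner) cleanupLoop() {
	ticker := time.NewTicker(100 * time.Millisecond) // e.g. 5 minutes in the OIDC service
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// remove expired entries here
		case <-c.stopCleanup:
			fmt.Println("cleanup goroutine exiting")
			return
		}
	}
}

// Stop is safe to call repeatedly: close() runs once, and a closed
// channel unblocks every reader, unlike a single blocking send.
func (c *cleaner) Stop() {
	c.stopOnce.Do(func() { close(c.stopCleanup) })
}

func main() {
	c := newCleaner()
	time.Sleep(250 * time.Millisecond)
	c.Stop()
	c.Stop() // second call is a no-op, no deadlock
	time.Sleep(50 * time.Millisecond)
}
```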
rcourtman
1bf9cfea88 Fix critical P0 security and crash issues in API/WebSocket layer
This commit addresses 5 critical P0 bugs that cause security vulnerabilities, crashes, and data corruption:

**P0-1: Recovery Tokens Replay Attack Vulnerability** (recovery_tokens.go:153-159)
- **SECURITY CRITICAL**: Single-use recovery tokens could be replayed
- **Problem**: Lock upgrade race - two concurrent requests both pass initial Used check
  1. Both acquire RLock, see token.Used = false
  2. Both release RLock
  3. Both acquire Lock and mark token.Used = true
  4. Both return true - TOKEN REUSED
- **Impact**: Attacker with intercepted token can use it multiple times
- **Fix**: Re-check token.Used after acquiring write lock (TOCTOU prevention)

**P0-2: WebSocket Hub Concurrent Map Panic** (hub.go:345-347, 376-378)
- **Problem**: Initial state goroutine reads h.clients map without lock
  - Line 345: `if _, ok := h.clients[client]` (NO LOCK)
  - Main loop writes to h.clients with lock (lines 326 and 394)
- **Impact**: "fatal error: concurrent map read and write" crashes hub
- **Fix**: Acquire RLock before all client map reads in goroutine

**P0-3: WebSocket Send on Closed Channel Panic** (hub.go:348, 380)
- **Problem**: Check client exists, then send - channel can close between
- **Impact**: "send on closed channel" panic crashes hub
- **Fix**: Hold RLock during both check and send (defensive select already present)

**P0-4: CSRF Store Shutdown Data Corruption** (csrf_store.go:189-196)
- **Problem**: Stop() calls save() after signaling the worker; both hold only an RLock
  - Worker's final save writes to csrf_tokens.json.tmp
  - Stop()'s save writes to same file concurrently
- **Impact**: Corrupted/truncated csrf_tokens.json on shutdown
- **Fix**: Added saveMu mutex to serialize all disk writes

**P0-5: CSRF Store Deadlock on Double-Stop** (csrf_store.go:103-108)
- **Problem**: stopChan unbuffered, no sync.Once guard, uses send not close
- **Impact**: Second Stop() call blocks forever waiting for receiver
- **Fix**:
  - Added sync.Once field stopOnce
  - Changed to close(stopChan) within stopOnce.Do()
  - Prevents double-close panic and deadlock

All fixes maintain backwards compatibility. The recovery token fix is particularly critical as it closes a security vulnerability allowing replay attacks on password reset flows.
2025-11-07 10:13:15 +00:00
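
A sketch of the P0-1 fix: re-checking the flag under the write lock closes the lock-upgrade window. Types and names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

type recoveryToken struct {
	Used bool
}

type tokenStore struct {
	mu     sync.RWMutex
	tokens map[string]*recoveryToken
}

// Consume marks a token used exactly once. The crucial line is the
// re-check of token.Used *after* acquiring the write lock: between the
// RLock check and the Lock, another request may already have consumed
// the token (the TOCTOU race described in P0-1).
func (s *tokenStore) Consume(id string) bool {
	s.mu.RLock()
	t, ok := s.tokens[id]
	if !ok || t.Used {
		s.mu.RUnlock()
		return false
	}
	s.mu.RUnlock()

	s.mu.Lock()
	defer s.mu.Unlock()
	if t.Used { // re-check under the write lock
		return false
	}
	t.Used = true
	return true
}

func main() {
	s := &tokenStore{tokens: map[string]*recoveryToken{"tok": {}}}
	var wg sync.WaitGroup
	wins := make(chan bool, 2)
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			wins <- s.Consume("tok")
		}()
	}
	wg.Wait()
	close(wins)
	succeeded := 0
	for ok := range wins {
		if ok {
			succeeded++
		}
	}
	fmt.Println("successful redemptions:", succeeded) // always 1
}
```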
rcourtman
6ca4d9b750 Fix P1/P2 infrastructure issues: panic recovery and optimizations
This commit addresses 4 important P1 issues and 1 P2 optimization in infrastructure components:

**P1-1: Missing Panic Recovery in Discovery Service** (service.go:172-195, 499-542)
- **Problem**: No panic recovery in Start(), ForceRefresh(), SetSubnet() goroutines
- **Impact**: Silent service death if scan panics, broken discovery with no monitoring
- **Fix**:
  - Wrapped initial scan goroutine with defer/recover (lines 172-182)
  - Wrapped scanLoop goroutine with defer/recover (lines 185-195)
  - Wrapped ForceRefresh scan with defer/recover (lines 499-509)
  - Wrapped SetSubnet scan with defer/recover (lines 532-542)
  - All log panics with stack traces for debugging

**P1-2: Missing Panic Recovery in Config Watcher Callback** (watcher.go:546-556)
- **Problem**: User-provided onMockReload callback could panic and crash watcher
- **Impact**: Panicking callback kills watcher goroutine, no config updates
- **Fix**: Wrapped callback invocation with defer/recover and stack trace logging

**P1-3: Session Store Stop() Using Send Instead of Close** (session_store.go:16-84)
- **Problem**: Stop() used channel send which blocks if nobody reads
- **Impact**: Stop() hangs if backgroundWorker already exited
- **Fix**:
  - Added sync.Once field stopOnce (line 22)
  - Changed Stop() to use close() within stopOnce.Do() (lines 80-84)
  - Prevents double-close panic and ensures all readers are signaled

**P2-1: Backup Cleanup Inefficient O(n²) Sort** (persistence.go:1424-1427)
- **Problem**: Bubble sort used to sort backups by modification time
- **Impact**: Inefficient for large backup counts (>100 files)
- **Fix**:
  - Replaced bubble sort with sort.Slice() using an O(n log n) algorithm
  - Added "sort" import (line 9)
  - Maintains same oldest-first ordering for deletion logic

All fixes add defensive programming without changing external behavior. Panic recovery ensures services continue operating even with bugs, while optimization reduces cleanup time for backup-heavy environments.
2025-11-07 09:55:22 +00:00
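
The P2-1 change in miniature, assuming an illustrative backupFile type: sort.Slice keeps the oldest-first ordering the deletion logic expects:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type backupFile struct {
	Name    string
	ModTime time.Time
}

func main() {
	now := time.Now()
	backups := []backupFile{
		{"b.json", now.Add(-1 * time.Hour)},
		{"a.json", now.Add(-3 * time.Hour)},
		{"c.json", now.Add(-2 * time.Hour)},
	}
	// O(n log n) replacement for the bubble sort: oldest first, so the
	// cleanup loop can delete from the front of the slice.
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].ModTime.Before(backups[j].ModTime)
	})
	for _, b := range backups {
		fmt.Println(b.Name, b.ModTime.Format(time.RFC3339))
	}
}
```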
rcourtman
9257071ca1 Add encryption status to notification health endpoint (P2)
Backend:
- Add IsEncryptionEnabled() method to ConfigPersistence
- Include encryption status in /api/notifications/health response
- Allows frontend to warn when credentials are stored in plaintext

Frontend:
- Update NotificationHealth type to include encryption.enabled field
- Frontend can now display warnings when encryption is disabled

This addresses the P2 requirement for encryption visibility, allowing
operators to know when notification credentials are not encrypted at rest.
2025-11-07 08:36:55 +00:00
rcourtman
6a48c759e8 Fix critical notification system bugs and security issues
This commit addresses multiple critical issues identified in the notification
system audit conducted with Codex:

**Critical Fixes:**

1. **Queue Retry Logic (Critical #1)**
   - Fixed broken retry/DLQ system where send functions never returned errors
   - Made sendGroupedEmail(), sendGroupedWebhook(), sendGroupedApprise() return errors
   - Made sendWebhookRequest() return errors
   - ProcessQueuedNotification() now properly propagates errors to queue
   - Retry logic and DLQ now function correctly

2. **Attempt Counter Bug (Critical #2)**
   - Fixed double-increment bug in queue processing
   - Separated UpdateStatus() from attempt tracking
   - Added IncrementAttempt() method
   - Notifications now get correct number of retry attempts

3. **Secret Exposure (Critical #3 & #4)**
   - Masked webhook headers and customFields in GET /api/notifications/webhooks
   - Added redactSecretsFromURL() to sanitize webhook URLs in history
   - Truncated/redacted response bodies in webhook history
   - Protected against credential harvesting via API

4. **Email Rate Limiting (Critical #5)**
   - Added emailManager field to NotificationManager
   - Shared EnhancedEmailManager instance across sends
   - Rate limiter now accumulates across multiple emails
   - SMTP rate limits are now enforced correctly

5. **SSRF Protection (High #6)**
   - Added DNS resolution of webhook URLs
   - Added isPrivateIP() check using CIDR ranges
   - Blocks all private IP ranges (10/8, 172.16/12, 192.168/16, 127/8, 169.254/16)
   - Blocks IPv6 private ranges (::1, fe80::/10, fc00::/7)
   - Prevents DNS rebinding attacks
   - Returns error instead of warning for private IPs

**New Features:**

6. **Health Endpoint (High #8)**
   - Added GET /api/notifications/health
   - Returns queue stats (pending, sending, sent, failed, dlq)
   - Shows email/webhook configuration status
   - Provides overall health indicator

**Related to notification system audit**

Files changed:
- internal/notifications/notifications.go: Error returns, rate limiting, SSRF hardening
- internal/notifications/queue.go: Attempt tracking fix
- internal/api/notifications.go: Secret masking, health endpoint
2025-11-06 23:26:03 +00:00
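
A sketch of the SSRF check described above. The CIDR list mirrors the ranges named in the commit; the validation function is illustrative and omits the metadata-service checks and the send-time re-validation:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// blockedCIDRs mirrors the ranges named in the commit.
var blockedCIDRs = []string{
	"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
	"127.0.0.0/8", "169.254.0.0/16",
	"::1/128", "fe80::/10", "fc00::/7",
}

func isPrivateIP(ip net.IP) bool {
	for _, cidr := range blockedCIDRs {
		_, network, _ := net.ParseCIDR(cidr)
		if network.Contains(ip) {
			return true
		}
	}
	return false
}

// validateWebhookURL resolves the host (defeating DNS tricks where a
// public hostname points at an internal address) and rejects private
// targets with an error rather than a warning.
func validateWebhookURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	ips, err := net.LookupIP(u.Hostname())
	if err != nil {
		return fmt.Errorf("resolving %s: %w", u.Hostname(), err)
	}
	for _, ip := range ips {
		if isPrivateIP(ip) {
			return fmt.Errorf("webhook target %s resolves to private IP %s", u.Hostname(), ip)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateWebhookURL("http://192.168.1.10/hook")) // rejected
	fmt.Println(validateWebhookURL("https://example.com/hook")) // allowed (public)
}
```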
rcourtman
1a78dcbba2 Fix guest agent disk data regression on Proxmox 8.3+
Related to #630

Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.

The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
  from the command line, confirming the agent was functional

Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.

Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
  * Legacy (Proxmox <8.3): integer (0 or 1)
  * Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
  we only query when the agent is actually responding

This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
2025-11-06 18:42:46 +00:00
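
A sketch of the dual-format unmarshaling described above; the real VMAgentField likely differs in detail:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VMAgentField accepts both encodings of the `agent` field: the legacy
// integer (0/1) and the Proxmox 8.3+ object form.
type VMAgentField struct {
	Value int
}

func (f *VMAgentField) UnmarshalJSON(data []byte) error {
	// Legacy format (Proxmox < 8.3): plain integer.
	var n int
	if err := json.Unmarshal(data, &n); err == nil {
		f.Value = n
		return nil
	}
	// Modern format (Proxmox 8.3+): {"enabled":N,"available":N}.
	// Pointers distinguish "absent" from "zero" so "available" can be
	// preferred, querying only agents that actually respond.
	var obj struct {
		Enabled   *int `json:"enabled"`
		Available *int `json:"available"`
	}
	if err := json.Unmarshal(data, &obj); err != nil {
		return err
	}
	switch {
	case obj.Available != nil:
		f.Value = *obj.Available
	case obj.Enabled != nil:
		f.Value = *obj.Enabled
	}
	return nil
}

type VMStatus struct {
	Agent VMAgentField `json:"agent"`
}

func main() {
	for _, raw := range []string{
		`{"agent":1}`,                           // legacy
		`{"agent":{"enabled":1,"available":1}}`, // Proxmox 8.3+
	} {
		var st VMStatus
		_ = json.Unmarshal([]byte(raw), &st)
		fmt.Printf("%s -> agent=%d\n", raw, st.Agent.Value)
	}
}
```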
rcourtman
7ed9203e4b Fix config backup/restore failures (related to #646)
Addresses two issues preventing configuration backup/restore:

1. Export passphrase validation mismatch: UI only validated 12+ char
   requirement when using custom passphrase, but backend always enforced
   it. Users with shorter login passwords saw unexplained failures.
   - Frontend now validates all passphrases meet 12-char minimum
   - Clear error message suggests custom passphrase if login password too short

2. Import data parsing failed silently: Frontend sent `exportData.data`
   which was undefined for legacy/CLI backups (raw base64 strings).
   Backend rejected these with no logs.
   - Frontend now handles both formats: {status, data} and raw strings
   - Backend logs validation failures for easier troubleshooting

Related to #646 where user reported "error after entering password" with
no container logs. These changes ensure proper validation feedback and
make the backup system resilient to different export formats.
2025-11-06 17:53:54 +00:00
rcourtman
dd1d222ad0 Improve bootstrap token UX for easier discovery
The bootstrap token security requirement was added proactively but
lacked discoverability, causing user friction during first-run setup.
These improvements make the token easier to find while maintaining
the security benefit.

Improvements:
- Display bootstrap token prominently in startup logs with ASCII box
  (previously: single line log message)
- Add `pulse bootstrap-token` CLI command to display token on demand
  (Docker: docker exec <container> /app/pulse bootstrap-token)
- Improve error messages in quick-setup API to show exact commands
  for retrieving token when missing or invalid
- Error messages now include both Docker and bare metal examples

User experience improvements:
- Token visible in `docker logs` output immediately
- Clear instructions printed with token
- Helpful error messages if token is wrong/missing
- CLI helper for operators who need to retrieve token later

Security unchanged:
- Bootstrap token still required for first-run setup
- Token still auto-deleted after successful setup
- No bypass mechanism added

Related to discussion about bootstrap token UX friction.
2025-11-06 17:29:49 +00:00
rcourtman
c8e0281953 Add comprehensive alert system reliability improvements
This commit implements critical reliability features to prevent data loss
and improve alert system robustness:

**Persistent Notification Queue:**
- SQLite-backed queue with WAL journaling for crash recovery
- Dead Letter Queue (DLQ) for notifications that exhaust retries
- Exponential backoff retry logic (100ms → 200ms → 400ms)
- Full audit trail for all notification delivery attempts
- New file: internal/notifications/queue.go (661 lines)

**DLQ Management API:**
- GET /api/notifications/dlq - Retrieve DLQ items
- GET /api/notifications/queue/stats - Queue statistics
- POST /api/notifications/dlq/retry - Retry failed notifications
- POST /api/notifications/dlq/delete - Delete DLQ items
- New file: internal/api/notification_queue.go (145 lines)

**Prometheus Metrics:**
- 18 comprehensive metrics for alerts and notifications
- Metric hooks integrated via function pointers to avoid import cycles
- /metrics endpoint exposed for Prometheus scraping
- New file: internal/metrics/alert_metrics.go (193 lines)

**Alert History Reliability:**
- Exponential backoff retry for history saves (3 attempts)
- Automatic backup restoration on write failure
- Modified: internal/alerts/history.go

**Flapping Detection:**
- Detects and suppresses rapidly oscillating alerts
- Configurable window (default: 5 minutes)
- Configurable threshold (default: 5 state changes)
- Configurable cooldown (default: 15 minutes)
- Automatic cleanup of inactive flapping history

**Alert TTL & Auto-Cleanup:**
- MaxAlertAgeDays: Auto-cleanup old alerts (default: 7 days)
- MaxAcknowledgedAgeDays: Faster cleanup for acked alerts (default: 1 day)
- AutoAcknowledgeAfterHours: Auto-ack long-running alerts (default: 24 hours)
- Prevents memory leaks from long-running alerts

**WebSocket Broadcast Sequencer:**
- Channel-based sequencing ensures ordered message delivery
- 100ms coalescing window for rapid state updates
- Prevents race conditions in WebSocket broadcasts
- Modified: internal/websocket/hub.go

**Configuration Fields Added:**
- FlappingEnabled, FlappingWindowSeconds, FlappingThreshold, FlappingCooldownMinutes
- MaxAlertAgeDays, MaxAcknowledgedAgeDays, AutoAcknowledgeAfterHours

All features are production-ready and build successfully.
2025-11-06 16:46:30 +00:00
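
The retry schedule in miniature (100ms, 200ms, 400ms before a notification lands in the DLQ); the send function here is a stand-in:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// sendWithBackoff retries with doubling delays before the caller would
// move the notification to the dead letter queue.
func sendWithBackoff(send func() error, attempts int) error {
	delay := 100 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = send(); err == nil {
			return nil
		}
		if i < attempts-1 {
			time.Sleep(delay) // 100ms -> 200ms -> 400ms
			delay *= 2
		}
	}
	return fmt.Errorf("exhausted %d attempts, moving to DLQ: %w", attempts, err)
}

func main() {
	calls := 0
	err := sendWithBackoff(func() error {
		calls++
		return errors.New("smtp unavailable")
	}, 3)
	fmt.Println(calls, err) // 3 attempts, then DLQ
}
```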
rcourtman
20099549c6 Add comprehensive release validation to prevent missing artifacts
Adds automated validation script to prevent the pattern of patch
releases caused by missing files/artifacts.

scripts/validate-release.sh validates all 40+ artifacts including:
- Docker image scripts (8 install/uninstall scripts)
- Docker image binaries (17 across all platforms)
- Release tarballs (5 including universal and macOS)
- Standalone binaries (12+)
- Checksums for all distributable assets
- Version embedding in every binary type
- Tarball contents (binaries + scripts + VERSION)
- Binary architectures and file types

The script catches 100% of issues from the last 3 patch releases
(missing scripts, missing install.sh, missing binaries, broken
version embedding).

Updated RELEASE_CHECKLIST.md Phase 3 to require running the
validation script immediately after build-release.sh and before
proceeding to Docker build/publish phases.

Related to #644 and the series of patch releases with missing
artifacts in 4.26.x.
2025-11-06 16:33:49 +00:00
rcourtman
fa3b0db243 Improve static asset caching for hashed files
Hashed static assets (e.g., index-BXHytNQV.js, index-TvhSzimt.css) are
now cached for 1 year with immutable flag since content hash changes
when files change.

Benefits:
- Faster page loads on subsequent visits
- Reduced server bandwidth
- Better user experience on demo and production instances

Only index.html and non-hashed assets remain uncached to ensure
users always get the latest version.
2025-11-06 13:54:26 +00:00
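
A sketch of the caching rule, assuming a Vite-style fingerprint pattern for hashed filenames:

```go
package main

import (
	"net/http"
	"regexp"
)

// hashedAsset matches fingerprinted names like index-BXHytNQV.js;
// the exact pattern is an assumption.
var hashedAsset = regexp.MustCompile(`-[A-Za-z0-9_-]{8}\.(js|css)$`)

func cacheHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if hashedAsset.MatchString(r.URL.Path) {
			// The content hash changes whenever the file changes, so the
			// response can be cached for a year and marked immutable.
			w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		} else {
			// index.html and non-hashed assets must always be revalidated.
			w.Header().Set("Cache-Control", "no-cache")
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	fs := http.FileServer(http.Dir("./frontend-modern/dist"))
	http.ListenAndServe(":8080", cacheHeaders(fs))
}
```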
rcourtman
a9d2209edd Fix demo mode to allow authentication endpoints
Demo mode now permits login/logout and OIDC authentication endpoints
while still blocking all modification requests. This allows demo
instances to require authentication while remaining read-only.

Authentication endpoints are read-only operations that verify
credentials and issue session tokens without modifying any state.
All POST/PUT/DELETE/PATCH operations remain blocked.
2025-11-06 13:48:28 +00:00
rcourtman
dfe960deb4 Fix container SSH detection and improve troubleshooting for issue #617
Related to #617

This fixes a misconfiguration scenario where Docker containers could
attempt direct SSH connections (producing [preauth] log spam) instead
of using the sensor proxy.

Changes:
- Fix container detection to check PULSE_DOCKER=true in addition to
  system.InContainer() heuristics (both temperature.go and config_handlers.go)
- Upgrade temperature collection log from Error to Warn with actionable
  guidance about mounting the proxy socket
- Add Info log when dev mode override is active so operators understand
  the security posture
- Add troubleshooting section to docs for SSH [preauth] logs from containers

The container detection was inconsistent - monitor.go checked both flags
but temperature.go and config_handlers.go only checked InContainer().
Now all locations consistently check PULSE_DOCKER || InContainer().
2025-11-06 09:57:53 +00:00
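
The consistent check the commit settles on, sketched with a stand-in for system.InContainer():

```go
package main

import (
	"fmt"
	"os"
)

// inContainerHeuristics stands in for system.InContainer(); the real
// check inspects markers such as /.dockerenv and cgroup entries.
func inContainerHeuristics() bool {
	_, err := os.Stat("/.dockerenv")
	return err == nil
}

// runningInContainer combines the explicit PULSE_DOCKER flag with the
// runtime heuristics, so Docker images that set the env var are detected
// even when the heuristics fail.
func runningInContainer() bool {
	return os.Getenv("PULSE_DOCKER") == "true" || inContainerHeuristics()
}

func main() {
	if runningInContainer() {
		fmt.Println("containerized: use the sensor proxy, never direct SSH")
	} else {
		fmt.Println("bare metal: direct SSH temperature collection allowed")
	}
}
```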
rcourtman
7936808193 Add custom display name support for Docker hosts
This implements the ability for users to assign custom display names to Docker hosts,
similar to the existing functionality for Proxmox nodes. This addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.

Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)

Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)

Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.

Related to #623
2025-11-05 23:18:03 +00:00
rcourtman
0647a76c55 Fix temperature monitoring SSH key availability in containerized setup flow
Addresses issue #635 where users encounter "can't find the SSH key" errors
when enabling temperature monitoring during automated PVE setup with Pulse
running in Docker.

Root cause:
- Setup script embeds SSH keys at generation time (when downloaded)
- For containerized Pulse, keys are empty until pulse-sensor-proxy is installed
- Script auto-installs proxy, but didn't refresh keys after installation
- This caused temperature monitoring setup to fail with confusing errors

Changes:
1. After successful proxy installation, immediately fetch and populate the
   proxy's SSH public key (lines 4068-4080)
2. Update bash variables SSH_SENSORS_PUBLIC_KEY and SSH_SENSORS_KEY_ENTRY
   so temperature monitoring setup can proceed in the same script run
3. Improve error messaging when keys aren't available (lines 4424-4453):
   - Clear explanation of containerized Pulse requirements
   - Step-by-step instructions for container restart and verification
   - Separate guidance for bare-metal vs containerized deployments

Flow improvements:
- Initial run: Proxy installs → keys fetched → temp monitoring configures
- Rerun after container restart: Keys fetched at script start → works
- Both scenarios now handled correctly

Related to #635
2025-11-05 23:11:45 +00:00
rcourtman
d28cfed3c7 Improve temperature monitoring setup messaging for containerized deployments
When Pulse is running in a container and the SSH key is not available,
provide clearer guidance about the pulse-sensor-proxy requirement and
include documentation link for Docker deployments.

This helps users understand that containerized Pulse needs the host-side
sensor proxy to access temperature data from Proxmox hosts.
2025-11-05 23:05:47 +00:00
rcourtman
e21a72578f Add configurable SSH port for temperature monitoring
Related to #595

This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.

**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.

**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration

**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)

**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments

The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
2025-11-05 20:03:29 +00:00
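
The known_hosts formatting rule in isolation: port 22 entries stay bare while non-standard ports use the bracketed form. A sketch of the rule only, not the manager's actual helper:

```go
package main

import "fmt"

// knownHostsPattern formats the host field of a known_hosts entry.
// Non-standard ports use [host]:port per the sshd conventions.
func knownHostsPattern(host string, port int) string {
	if port == 22 {
		return host
	}
	return fmt.Sprintf("[%s]:%d", host, port)
}

func main() {
	fmt.Println(knownHostsPattern("pve01.example.com", 22))   // pve01.example.com
	fmt.Println(knownHostsPattern("pve01.example.com", 2222)) // [pve01.example.com]:2222
}
```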
rcourtman
059e8bf562 Redirect to login when authentication expires
Related to #626

When authentication expires after some time, users see "Connection lost"
and must refresh the page to see "Authentication required". This commit
implements automatic redirect to login when authentication expires.

Changes:
- Add authentication check to WebSocket endpoint to prevent unauthenticated
  WebSocket connections
- Handle WebSocket close with code 1008 (policy violation) as auth failure
  and redirect to login
- Intercept 401 responses on API calls (except initial auth checks) and
  automatically redirect to login page
- Clear stored credentials and set logout flag before redirect to ensure
  clean login flow

This provides a better user experience by immediately redirecting to the
login page when the session expires, rather than showing a confusing
"Connection lost" message that requires manual page refresh.
2025-11-05 19:36:01 +00:00
rcourtman
b1831d7b3e Add guest URL support for PVE hosts
Related to discussion #615

Add optional GuestURL field to PVE instances and cluster endpoints,
allowing users to specify a separate guest-accessible URL for web UI
navigation that differs from the internal management URL.

Backend changes:
- Add GuestURL field to PVEInstance and ClusterEndpoint structs
- Add GuestURL field to Node model
- Update cluster auto-discovery to preserve existing GuestURL values
- Update node creation logic to populate GuestURL from config
- Update API handlers to accept and persist GuestURL field

Frontend changes:
- Add GuestURL input field to NodeModal for configuration
- Update NodeGroupHeader and NodeSummaryTable to use GuestURL for navigation
- Add GuestURL to Node and PVENodeConfig TypeScript interfaces

When GuestURL is configured, it will be used for navigation links
instead of the Host URL, allowing users to access PVE hosts through
a reverse proxy or different domain while maintaining internal API
connections.
2025-11-05 19:06:08 +00:00
rcourtman
b972b7f05f Fix broken documentation links for containerized deployments
Replace non-functional docs.pulseapp.io URLs with direct GitHub repository
links. The containerized deployment security documentation exists in
SECURITY.md and was previously inaccessible via the external link.

Changes:
- Update SECURITY.md documentation reference
- Fix three documentation links in config_handlers.go (SSH verification,
  setup script, and security block error messages)
- All links now point to GitHub repository where docs actually live

Related to #607
2025-11-05 18:46:41 +00:00
rcourtman
449d77504f Improve PMG connection testing to validate metrics endpoints
Related to #551

Enhanced the PMG connection test to actually validate the metrics
endpoints that Pulse uses for monitoring, rather than only checking
the version endpoint. This provides users with immediate feedback if
their PMG credentials lack the necessary permissions to collect metrics.

Backend changes:
- Test mail statistics, cluster status, and quarantine endpoints during
  connection test (internal/api/config_handlers.go:1695-1714)
- Return warnings array in test response when endpoints are unavailable
- Increased timeout from 10s to 15s to accommodate multiple endpoint checks
- Added warning logs for failed endpoint checks

Frontend changes:
- Added showWarning() toast function for warning messages
- Enhanced NodeModal to display warning status with amber styling
- Added warnings list display in test results UI
- Updated Settings.tsx to show warnings from connection tests

This change helps users identify permission issues immediately rather
than discovering later that metrics aren't being collected despite a
"successful" connection.
2025-11-05 18:40:39 +00:00
rcourtman
f434a7b9e7 Fix fmt.Sprintf argument count in setup script after Docker/LXC changes
The previous commit added 4 new %s format specifiers for Docker/LXC
instructions but didn't add the corresponding arguments to fmt.Sprintf.

Added 4 pulseURL arguments to match the new format specifiers in the
'unknown environment' section of the setup script.
2025-11-05 18:18:04 +00:00
rcourtman
a1fb79ae6a Fix temperature proxy documentation and setup script for Docker vs LXC clarity
This addresses confusion around temperature monitoring setup for Docker
deployments where users expected a turnkey experience similar to LXC.

The core issue: The setup script and documentation suggested that
temperature monitoring was "automatically configured" for all containerized
deployments, but in reality only LXC containers have a fully automatic
setup. Docker requires manual steps.

Changes:

**Setup Script (config_handlers.go):**
- Fixed "unknown environment" path to show separate instructions for LXC vs Docker
- Docker instructions now correctly show --standalone flag (was incorrectly showing --ctid)
- Added docker-compose.yml bind mount instructions inline
- Added restart command for Docker deployments

**Documentation (TEMPERATURE_MONITORING.md):**
- Added prominent "Deployment-Specific Setup" callout at the top
- Clarified that LXC is fully automatic, Docker requires manual steps
- Reorganized "Setup (Automatic)" section to clearly distinguish:
  - LXC: Fully turnkey (no manual steps)
  - Docker: Manual proxy installation required
  - Node configuration: Works for both
- Updated "Host-side responsibilities" to specify it's Docker-only
- Fixed architecture benefits to reflect LXC vs Docker differences

Why this matters:
- LXC setup script auto-detects the container and runs install-sensor-proxy.sh --ctid
- Docker deployments can't be auto-detected and require --standalone flag
- Users running Docker were getting incorrect instructions (--ctid instead of --standalone)
- Documentation suggested everything was automatic, leading to confusion

Now the documentation and setup script accurately reflect that:
- LXC = Turnkey (automatic)
- Docker = Manual steps required (but well-documented)
- Native = Direct SSH (no proxy)

Related to GitHub Discussion #605
2025-11-05 18:18:04 +00:00
rcourtman
fdf0977be2 Add host agent multi-platform binary distribution and improve host details UI
- Build host agent binaries for all platforms (linux/darwin/windows, amd64/arm64/armv7) in Docker
- Add Makefile target for building agent binaries locally
- Add startup validation to check for missing agent binaries
- Improve download endpoint error messages with troubleshooting guidance
- Enhance host details drawer layout with better organization and visual hierarchy
- Update base images to rolling versions (node:20-alpine, golang:1.24-alpine, alpine:3.20)
2025-11-05 17:38:17 +00:00
rcourtman
27f2038dab Add per-node temperature monitoring and fix critical config update bug
This commit implements per-node temperature monitoring control and fixes a critical
bug where partial node updates were destroying existing configuration.

Backend changes:
- Add TemperatureMonitoringEnabled field (*bool) to PVEInstance, PBSInstance, and PMGInstance
- Update monitor.go to check per-node temperature setting with global fallback
- Convert all NodeConfigRequest boolean fields to *bool pointers
- Add nil checks in HandleUpdateNode to prevent overwriting unmodified fields
- Fix critical bug where partial updates zeroed out MonitorVMs, MonitorContainers, etc.
- Update NodeResponse, NodeFrontend, and StateSnapshot to include temperature setting
- Fix HandleAddNode and test connection handlers to use pointer-based boolean fields

Frontend changes:
- Add temperatureMonitoringEnabled to Node interface and config types
- Create per-node temperature monitoring toggle handler with optimistic updates
- Update NodeModal to wire up per-node temperature toggle
- Add isTemperatureMonitoringEnabled helper to check effective monitoring state
- Update ConfiguredNodeTables to show/hide temperature badge based on monitoring state
- Update NodeSummaryTable to conditionally show temperature column
- Pass globalTemperatureMonitoringEnabled prop through component tree

The critical bug fix ensures that when updating a single field (like temperature
monitoring), the backend only modifies that specific field instead of zeroing out
all other boolean configuration fields.
2025-11-05 14:11:53 +00:00
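
The *bool pattern behind the critical bug fix, sketched with a trimmed-down request type; the field set is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeConfigRequest uses *bool so "field absent from the JSON body" is
// distinguishable from "field explicitly set to false".
type NodeConfigRequest struct {
	MonitorVMs                   *bool `json:"monitorVMs,omitempty"`
	TemperatureMonitoringEnabled *bool `json:"temperatureMonitoringEnabled,omitempty"`
}

type PVEInstance struct {
	MonitorVMs                   bool
	TemperatureMonitoringEnabled *bool // nil = inherit the global setting
}

// applyUpdate only touches fields the client actually sent; a plain bool
// would have silently zeroed MonitorVMs on every partial update.
func applyUpdate(inst *PVEInstance, req NodeConfigRequest) {
	if req.MonitorVMs != nil {
		inst.MonitorVMs = *req.MonitorVMs
	}
	if req.TemperatureMonitoringEnabled != nil {
		inst.TemperatureMonitoringEnabled = req.TemperatureMonitoringEnabled
	}
}

func main() {
	inst := PVEInstance{MonitorVMs: true}
	var req NodeConfigRequest
	// Partial update: only the temperature toggle is in the payload.
	_ = json.Unmarshal([]byte(`{"temperatureMonitoringEnabled":false}`), &req)
	applyUpdate(&inst, req)
	fmt.Printf("MonitorVMs=%v temp=%v\n", inst.MonitorVMs, *inst.TemperatureMonitoringEnabled)
	// MonitorVMs stays true instead of being zeroed out.
}
```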
rcourtman
d52ac6d8b5 Fix CSRF token validation and improve token management
- Add Access-Control-Expose-Headers to allow frontend to read X-CSRF-Token response header
- Implement proactive CSRF token issuance on GET requests when session exists but CSRF cookie is missing
- Ensures frontend always has valid CSRF token before making POST requests
- Fixes 403 Forbidden errors when toggling system settings

This resolves CSRF validation failures that occurred when CSRF tokens expired or were missing while valid sessions existed.
2025-11-05 09:23:44 +00:00
rcourtman
10862db4e4 Enhance container detection for temperature SSH safeguards (refs #601) 2025-11-04 22:30:35 +00:00
rcourtman
6eb1a10d9b Refactor: Code cleanup and localStorage consolidation
This commit includes comprehensive codebase cleanup and refactoring:

## Code Cleanup
- Remove dead TypeScript code (types/monitoring.ts, 194 duplicated lines)
- Remove unused Go functions (GetClusterNodes, MigratePassword, GetClusterHealthInfo)
- Clean up commented-out code blocks across multiple files
- Remove unused TypeScript exports (helpTextClass, private tag color helpers)
- Delete obsolete test files and components

## localStorage Consolidation
- Centralize all storage keys into STORAGE_KEYS constant
- Update 5 files to use centralized keys:
  * utils/apiClient.ts (AUTH, LEGACY_TOKEN)
  * components/Dashboard/Dashboard.tsx (GUEST_METADATA)
  * components/Docker/DockerHosts.tsx (DOCKER_METADATA)
  * App.tsx (PLATFORMS_SEEN)
  * stores/updates.ts (UPDATES)
- Benefits: Single source of truth, prevents typos, better maintainability

## Previous Work Committed
- Docker monitoring improvements and disk metrics
- Security enhancements and setup fixes
- API refactoring and cleanup
- Documentation updates
- Build system improvements

## Testing
- All frontend tests pass (29 tests)
- All Go tests pass (15 packages)
- Production build successful
- Zero breaking changes

Total: 186 files changed, 5825 insertions(+), 11602 deletions(-)
2025-11-04 21:50:46 +00:00
rcourtman
5c4be1921c chore: snapshot current changes 2025-11-02 22:47:55 +00:00
rcourtman
6b670a7af3 Auto-clear removal block after successful Docker host stop
When a Docker host successfully completes a stop command and confirms
it has disabled itself, automatically clear the removal block to allow
immediate re-enrollment.

This fixes the UX issue where users who remove a Docker host cannot
immediately reinstall it with a new token, as the host ID remains
blocked for 24 hours. The block is still needed to prevent zombie
reports from stale agents, but once the agent confirms it stopped
successfully, there's no need to keep the block.

Changes:
- Clear removal block in HandleCommandAck after successful host removal
- Allows remove → reinstall workflow without manual intervention
- Block remains for forced removals or offline hosts (as intended)
2025-10-29 12:40:22 +00:00
rcourtman
b3285c05c8 Consolidate pending changes
- Add Docker metadata test comment
- Update alerts configuration and thresholds
- Enhance config file watcher
- Update documentation
- Refine settings UI
2025-10-28 23:20:44 +00:00
rcourtman
99b11760ac Implement Docker metadata API endpoints
Add backend support for storing and managing Docker resource metadata:

- Create DockerMetadataStore for managing Docker container/service metadata
- Implement DockerMetadataHandler with GET/PUT/DELETE operations
- Register /api/docker/metadata routes with proper authentication
- Store metadata in docker_metadata.json file
- Validate custom URLs (http/https scheme, valid host)
- Supports resource IDs in format: {hostId}:container:{containerId}

Enables the frontend Docker URL editing feature to persist data.
2025-10-28 22:56:53 +00:00
rcourtman
e07336dd9f refactor: remove legacy DISABLE_AUTH flag and enhance authentication UX
Major authentication system improvements:

- Remove deprecated DISABLE_AUTH environment variable support
- Update all documentation to remove DISABLE_AUTH references
- Add auth recovery instructions to docs (create .auth_recovery file)
- Improve first-run setup and Quick Security wizard flows
- Enhance login page with better error messaging and validation
- Refactor Docker hosts view with new unified table and tree components
- Add useDebouncedValue hook for better search performance
- Improve Settings page with better security configuration UX
- Update mock mode and development scripts for consistency
- Add ScrollableTable persistence and improved responsive design

Backend changes:
- Remove DISABLE_AUTH flag detection and handling
- Improve auth configuration validation and error messages
- Enhance security status endpoint responses
- Update router integration tests

Frontend changes:
- New Docker components: DockerUnifiedTable, DockerTree, DockerSummaryStats
- Better connection status indicator positioning
- Improved authentication state management
- Enhanced CSRF and session handling
- Better loading states and error recovery

This completes the migration away from the insecure DISABLE_AUTH pattern
toward proper authentication with recovery mechanisms.
2025-10-27 19:46:51 +00:00
rcourtman
334ed3aedc Improve setup script auth usability 2025-10-25 19:08:48 +00:00
rcourtman
5a2d808aa1 Harden setup token flow and enforce encrypted persistence 2025-10-25 16:00:37 +00:00
rcourtman
a279e6720e Add auth enforcement integration tests 2025-10-25 15:02:48 +00:00
rcourtman
7075cef326 Harden API auth and token handling 2025-10-25 14:54:03 +00:00
rcourtman
77282bd3a6 Implement Pulse tag overrides and alert clear persistence 2025-10-25 14:28:32 +00:00