Commit graph

2328 commits

rcourtman
85ffe10aed docs: add Mermaid diagrams to improve visual documentation
Enhance documentation with six Mermaid diagrams to better explain
complex system implementations:

- Adaptive polling lifecycle flowchart showing enqueue→execute→feedback
  cycle with scheduler, priority queue, and worker interactions
- Circuit breaker state machine diagram illustrating Closed↔Open↔Half-open
  transitions with triggers and recovery paths
- Temperature proxy architecture diagram highlighting trust boundaries,
  security controls, and data flow between host/container/cluster
- Sensor proxy request flow sequence diagram showing auth, rate limiting,
  validation, and SSH execution pipeline
- Alert webhook pipeline flowchart detailing template resolution, URL
  rendering, HTTP dispatch, and retry logic
- Script library workflow diagram illustrating dev→test→bundle→distribute
  lifecycle emphasizing modular design

These visualizations make it easier for operators and contributors to
understand Pulse's sophisticated architectural patterns.
2025-10-21 10:40:33 +00:00
rcourtman
66b97333f7 fix: skip update check for source builds and show appropriate UI message
Source builds use commit hashes (main-c147fa1), not semantic versions
(v4.23.0), so update checks would always fail or show a misleading
"Update Available" banner.

Changes:
- Add IsSourceBuild flag to VersionInfo struct
- Detect source builds via BUILD_FROM_SOURCE marker file
- Skip update check for source builds (like Docker)
- Update frontend to show "Built from source" message
- Disable manual update check button for source builds
- Return "source" deployment type for source builds

Backend:
- internal/updates/version.go: Add isSourceBuildEnvironment() detection
- internal/updates/manager.go: Skip check with appropriate message
- internal/api/types.go: Add isSourceBuild to API response
- internal/api/router.go: Include isSourceBuild in version endpoint

Frontend:
- src/api/updates.ts: Add isSourceBuild to VersionInfo type
- src/stores/updates.ts: Don't poll for updates on source builds
- src/components/Settings/Settings.tsx: Show "Built from source" message

Fixes the confusing "Update Available" banner for users who explicitly
chose --source to get latest main branch code.
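
For illustration, the marker check can be as small as the sketch below; the marker's exact location and the surrounding wiring in internal/updates/version.go are assumptions:

```go
package updates

import (
	"os"
	"path/filepath"
)

// isSourceBuildEnvironment reports whether a BUILD_FROM_SOURCE marker
// file exists alongside the running binary. The marker location next
// to the executable is assumed for this sketch.
func isSourceBuildEnvironment() bool {
	exe, err := os.Executable()
	if err != nil {
		return false
	}
	_, err = os.Stat(filepath.Join(filepath.Dir(exe), "BUILD_FROM_SOURCE"))
	return err == nil
}
```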

Co-authored-by: Codex AI
2025-10-21 10:08:00 +00:00
rcourtman
56c6c0cc0c feat: improve discovery with progress tracking, validation, and structured errors
Significantly enhanced the network discovery feature to eliminate false
positives, provide real-time progress updates, and improve error reporting.

Key improvements:
- Require positive Proxmox identification (version data, auth headers, or certificates)
  instead of reporting any service on ports 8006/8007
- Add real-time progress tracking with phase/target counts and completion percentage
- Implement structured error reporting with IP, phase, type, and timestamp details
- Fix TLS timeout handling to prevent hangs on unresponsive hosts
- Expose progress and structured errors via WebSocket for UI consumption
- Reduce log verbosity by moving discovery logs to debug level
- Fix duplicate IP counting to ensure progress reaches 100%

Breaking changes: None (backward compatible with legacy API methods)
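
A hedged sketch of what "positive identification" can look like: probe the public Proxmox version endpoint instead of treating any listener on 8006/8007 as a hit (the real detection also weighs auth headers and certificates):

```go
package discovery

import (
	"context"
	"net/http"
)

// looksLikeProxmox counts a host as Proxmox only if the PVE version
// endpoint answers; a 401 with an auth challenge is still positive
// evidence, unlike an arbitrary TLS service on the same port.
func looksLikeProxmox(ctx context.Context, client *http.Client, host string) bool {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://"+host+":8006/api2/json/version", nil)
	if err != nil {
		return false
	}
	resp, err := client.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized
}
```
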
2025-10-20 22:29:30 +00:00
rcourtman
8194ce9e7a feat: add containerization detection to version endpoint
Added containerized and containerId fields to /api/version endpoint
to enable automatic temperature proxy installation for LXC containers.

Changes:
- Added Containerized bool field to VersionResponse
- Added ContainerId string field to VersionResponse
- Detect containerization by checking /run/systemd/container file
- Extract container ID from hostname for LXC containers
- Set deployment type from container type (lxc/docker)

This allows the PVE setup script to:
1. Detect that Pulse is running in a container
2. Find the container ID by matching IPs
3. Automatically install pulse-sensor-proxy on the host
4. Configure bind mount for secure socket communication

Fixes the issue where the setup script showed 'Proxy not available'
even when Pulse was containerized.
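
For reference, systemd exposes the container type in /run/systemd/container, so the check can be a few lines (a sketch, not the exact code):

```go
package api

import (
	"os"
	"strings"
)

// detectContainer reads /run/systemd/container, which systemd populates
// inside containers with the container type ("lxc", "docker", ...).
// A missing file means no container was detected.
func detectContainer() (bool, string) {
	data, err := os.ReadFile("/run/systemd/container")
	if err != nil {
		return false, ""
	}
	return true, strings.TrimSpace(string(data))
}
```
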
2025-10-20 22:14:03 +00:00
rcourtman
d430efcecb fix: correct fmt.Sprintf argument alignment in PVE setup script
Critical bug fix: The setup script's format string had 33 placeholders
but was only receiving 27 arguments, causing:
- INSTALLER_URL to receive authToken instead of pulseURL
- This made curl try to resolve the token value as a hostname
- Error: 'curl: (6) Could not resolve host: N7AE3P'
- Token ID showed '%!s(MISSING)' in manual setup instructions

Fixed by:
- Added missing tokenName at position 7
- Added literal '%s' strings for version_ge printf placeholders
- Added authToken arguments for Authorization headers (positions 29, 31)
- Ensured all 33 format placeholders have corresponding arguments

Now generates correct URLs:
- INSTALLER_URL: http://192.168.0.160:7655/api/install/install-sensor-proxy.sh
- --pulse-server: http://192.168.0.160:7655
- Token ID: pulse-monitor@pam!pulse-192-168-0-160-[timestamp]
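
The failure mode is easy to reproduce: fmt.Sprintf does not error on a verb/argument mismatch, it embeds a %!s(MISSING) token in the output, which is why the bug only surfaced at runtime:

```go
package main

import "fmt"

func main() {
	// Two %s verbs, one argument: Sprintf substitutes what it has and
	// marks the shortfall instead of failing.
	s := fmt.Sprintf("url=%s token=%s", "http://192.168.0.160:7655")
	fmt.Println(s) // url=http://192.168.0.160:7655 token=%!s(MISSING)
}
```
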
2025-10-20 21:58:37 +00:00
rcourtman
d421f101ba feat: harden temperature proxy installation with better validation and error handling
Setup script improvements (config_handlers.go):
- Remove redundant mount configuration and container restart logic
- Let installer handle all mount/restart operations (single source of truth)
- Eliminate hard-coded mp0 assumption

Installer improvements (install-sensor-proxy.sh):
- Add mount configuration persistence validation via pct config check
- Surface pct set errors instead of silencing with 2>/dev/null
- Capture and display curl download errors with temp files
- Check systemd daemon-reload/enable/restart exit codes
- Show journalctl output when service fails to start
- Make socket verification fatal (was warning)
- Provide clear manual steps when hot-plug fails on running container

This makes the installation fail fast with actionable error messages
instead of silently proceeding with broken configuration.
2025-10-20 21:14:00 +00:00
rcourtman
07f198da63 fix: pass Pulse server URL as argument instead of env var for proxy installer
Changes:
- Replace PULSE_SENSOR_PROXY_FALLBACK_URL env export with --pulse-server argument
- Remove --quiet flag from installer invocation to show download progress
- More reliable than environment variable inheritance in subshells

This ensures the proxy installer can reliably download the binary from the
Pulse server fallback when GitHub is unavailable.
2025-10-20 20:58:25 +00:00
rcourtman
db54233769 fix: show full installer output instead of filtering
The setup script was filtering installer output to only show lines with
✓|⚠️|ERROR, which hid successful download messages like:
'Downloading pulse-sensor-proxy-linux-amd64 from Pulse server...'

This made it appear the installer failed even when the Pulse server
fallback download succeeded. Changed to show all installer output for
better visibility and debugging.

Users will now see the complete installation flow including:
- GitHub download attempt (expected to fail for dev builds)
- Pulse server fallback download (should succeed)
- All setup steps and validations

Improves transparency and reduces confusion during setup.
2025-10-20 20:47:41 +00:00
rcourtman
dcad3a3a27 fix: allow dev/main builds to bypass version check
The version check was blocking dev/main builds (e.g., '0.0.0-main-da9da6f')
from using the temperature proxy, even though they have the latest code.

Added regex to skip version check for builds matching:
- ^0\.0\.0-main (main branch builds)
- ^dev (dev builds)
- ^main (main version strings)

These builds are assumed to have proxy support since they're built from
the latest codebase.

Fixes testing workflow when installing Pulse with --main flag
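
A sketch of the bypass; combining the three patterns into one compiled regexp is an assumption, the code may check them separately:

```go
package api

import "regexp"

// devBuildPattern matches version strings produced by main/dev builds,
// which carry commit hashes rather than semantic versions.
var devBuildPattern = regexp.MustCompile(`^(0\.0\.0-main|dev|main)`)

func skipVersionCheck(version string) bool {
	return devBuildPattern.MatchString(version)
}
```
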
2025-10-20 18:19:31 +00:00
rcourtman
93a601d7c7 fix: only check Pulse version for containerized deployments
The version check was blocking ALL v4.23.0 users from temperature monitoring,
even non-containerized ones who don't need the proxy.

Changed to only check version when PULSE_IS_CONTAINERIZED=true, since:
- Non-containerized Pulse can use direct SSH on any version
- Containerized Pulse requires v4.24.0+ for proxy support

This ensures non-containerized v4.23.0 users can still use temperature monitoring
via direct SSH while properly blocking proxy setup for containerized v4.23.0.

Fixes regression introduced in commit fbe4ab83a
2025-10-20 18:03:09 +00:00
rcourtman
001d7f5f1c fix: comprehensive temperature proxy setup improvements
Addresses multiple issues that prevented successful temperature monitoring setup:

1. **Missing log directory (install-sensor-proxy.sh)**
   - Added LogsDirectory=pulse/sensor-proxy to both systemd service templates
   - Fixes crash: "open /var/log/pulse/sensor-proxy/audit.log: read-only file system"
   - Uses systemd's LogsDirectory directive for proper permissions

2. **Invalid pct restart command (install-sensor-proxy.sh:822)**
   - Changed from `pct restart` (doesn't exist) to `pct stop && sleep 2 && pct start`
   - Fixes container restart failures during proxy setup

3. **Version compatibility check (config_handlers.go)**
   - Added const minProxyReadyVersion = "4.24.0"
   - Setup script now queries /api/version endpoint
   - Blocks proxy setup on Pulse < v4.24.0 with clear upgrade message
   - Prevents users from attempting proxy setup on incompatible versions

4. **Proxy service health validation (config_handlers.go)**
   - Verifies pulse-sensor-proxy service is actually running
   - Checks socket exists at /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
   - Shows journalctl command for troubleshooting on failure
   - Sets TEMP_MONITORING_AVAILABLE=false to skip remaining steps

5. **Interactive LXC restart prompt (config_handlers.go)**
   - Replaced passive "please restart" message with interactive prompt
   - Default action is "yes" for easy acceptance
   - Actually executes pct stop/start on confirmation
   - Handles non-interactive environments gracefully

6. **Post-restart socket verification (config_handlers.go)**
   - Validates socket is accessible inside container after restart
   - Provides clear error if mount didn't work
   - Prevents claiming success when setup is incomplete

All changes tested with fresh LXC installation. Temperature monitoring now
works end-to-end with proper error handling and user guidance.

Fixes temperature proxy setup flow for v4.24.0+
2025-10-20 18:00:21 +00:00
rcourtman
5ebb32ce10 feat: enhance runtime configuration and system settings management
Improves configuration handling and system settings APIs to support
v4.24.0 features including runtime logging controls, adaptive polling
configuration, and enhanced config export/persistence.

Changes:
- Add config override system for discovery service
- Enhance system settings API with runtime logging controls
- Improve config persistence and export functionality
- Update security setup handling
- Refine monitoring and discovery service integration

These changes provide the backend support for the configuration
features documented in the v4.24.0 release.
2025-10-20 17:41:19 +00:00
rcourtman
c91b7874ac docs: comprehensive v4.24.0 documentation audit and updates
Complete documentation overhaul for Pulse v4.24.0 release covering all new
features and operational procedures.

Documentation Updates (19 files):

P0 Release-Critical:
- Operations: Rewrote ADAPTIVE_POLLING_ROLLOUT.md as GA operations runbook
- Operations: Updated ADAPTIVE_POLLING_MANAGEMENT_ENDPOINTS.md with DEFERRED status
- Operations: Enhanced audit-log-rotation.md with scheduler health checks
- Security: Updated proxy hardening docs with rate limit defaults
- Docker: Added runtime logging and rollback procedures

P1 Deployment & Integration:
- KUBERNETES.md: Runtime logging config, adaptive polling, post-upgrade verification
- PORT_CONFIGURATION.md: Service naming, change tracking via update history
- REVERSE_PROXY.md: Rate limit headers, error pass-through, v4.24.0 verification
- PROXY_AUTH.md, OIDC.md, WEBHOOKS.md: Runtime logging integration
- TROUBLESHOOTING.md, VM_DISK_MONITORING.md, zfs-monitoring.md: Updated workflows

Features Documented:
- X-RateLimit-* headers for all API responses
- Updates rollback workflow (UI & CLI)
- Scheduler health API with rich metadata
- Runtime logging configuration (no restart required)
- Adaptive polling (GA, enabled by default)
- Enhanced audit logging
- Circuit breakers and dead-letter queue

Supporting Changes:
- Discovery service enhancements
- Config handlers updates
- Sensor proxy installer improvements

Total Changes: 1,626 insertions(+), 622 deletions(-)
Files Modified: 24 (19 docs, 5 code)

All documentation is production-ready for v4.24.0 release.
2025-10-20 17:20:13 +00:00
rcourtman
73fb9d986f feat: add PBS/PMG stubs to test harness and implement HTTP config fetch
Resolves two remaining TODOs from codebase audit.

## 1. PBS/PMG Test Harness Stubs

**Location:** internal/monitoring/harness_integration.go:149-151

**Changes:**
- Added PBS client stub registration: `monitor.pbsClients[inst.Name] = &pbs.Client{}`
- Added PMG client stub registration: `monitor.pmgClients[inst.Name] = &pmg.Client{}`
- Added imports for pkg/pbs and pkg/pmg

**Purpose:**
Enables integration test scenarios to include PBS and PMG instance types
alongside existing PVE support. Stubs allow scheduler to register and
execute tasks for these instance types during integration testing.

**Testing:**
✓ TestAdaptiveSchedulerIntegration passes (55.5s)
✓ Integration test harness now supports all three instance types

## 2. HTTP Config URL Fetch

**Location:** cmd/pulse/config.go:226-261

**Problem:**
`PULSE_INIT_CONFIG_URL` was recognized but not implemented, returning
"URL import not yet implemented" error.

**Implementation:**
- URL validation (http/https schemes only)
- HTTP client with 15 second timeout
- Status code validation (2xx required)
- Empty response detection
- Base64 decoding with fallback to raw data
- Matches existing env-var behavior for `PULSE_INIT_CONFIG_DATA`

**Security:**
- Both HTTP and HTTPS supported (HTTPS recommended for production)
- URL scheme validation prevents file:// or other protocols
- Timeout prevents hanging on unresponsive servers

**Usage:**
```bash
export PULSE_INIT_CONFIG_URL="https://config-server/encrypted-config"
export PULSE_INIT_CONFIG_PASSPHRASE="secret"
pulse config auto-import
```
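
A sketch of the fetch path under those constraints; the function name and error text are illustrative, not the exact cmd/pulse/config.go code:

```go
package config

import (
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
	"time"
)

func fetchInitConfig(rawURL string) ([]byte, error) {
	u, err := url.Parse(rawURL)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return nil, fmt.Errorf("config URL must be http or https")
	}
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(rawURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode > 299 {
		return nil, fmt.Errorf("config server returned %s", resp.Status)
	}
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if len(data) == 0 {
		return nil, fmt.Errorf("config server returned an empty response")
	}
	// Try base64 first, fall back to raw bytes, mirroring the
	// PULSE_INIT_CONFIG_DATA handling described above.
	if decoded, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(data))); err == nil {
		return decoded, nil
	}
	return data, nil
}
```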

**Testing:**
✓ Code compiles cleanly
✓ Follows same pattern as existing PULSE_INIT_CONFIG_DATA handling

## Impact

- Completes integration test infrastructure for all instance types
- Enables automated config distribution via HTTP(S) for container deployments
- Removes last TODOs from codebase (no TODO/FIXME remaining in Go files)
2025-10-20 16:05:45 +00:00
rcourtman
c1bf03fe39 fix: use proper Monitor constructor in PMG tests to initialize all maps
Fixes panic: assignment to entry in nil map in PMG polling tests.

**Problem:**
Tests were manually creating Monitor structs without initializing internal
maps like pollStatusMap, causing nil map panics when recordTaskResult()
tried to update task status.

**Root Cause:**
- TestPollPMGInstancePopulatesState (line 90)
- TestPollPMGInstanceRecordsAuthFailures (line 189)

Both created Monitor with only partial field initialization, missing:
- pollStatusMap
- dlqInsightMap
- instanceInfoCache
- Other internal state maps

**Solution:**
Changed both tests to use New() constructor which properly initializes all
maps and internal state (monitor.go:1541). This ensures tests match production
initialization and will automatically pick up any future map additions.

**Tests:**
✓ TestPollPMGInstancePopulatesState - now passes
✓ TestPollPMGInstanceRecordsAuthFailures - now passes
✓ All monitoring tests pass (0.125s)

Follows best practice: use constructors instead of manual struct creation
to maintain initialization invariants.
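
The underlying pattern, reduced to a runnable illustration (type and field names simplified from the real Monitor):

```go
package main

type Monitor struct {
	pollStatusMap map[string]string
}

// New initializes every internal map in one place, so callers can never
// observe a nil map: the invariant the tests were violating.
func New() *Monitor {
	return &Monitor{pollStatusMap: make(map[string]string)}
}

func main() {
	broken := &Monitor{} // partial literal: pollStatusMap is nil
	_ = broken           // broken.pollStatusMap["pmg"] = "ok" would panic

	m := New()
	m.pollStatusMap["pmg"] = "ok" // safe: the constructor kept the invariant
}
```
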
2025-10-20 15:22:23 +00:00
rcourtman
039a07b8b0 test: add X-RateLimit-Limit header regression test (#578)
test: add X-RateLimit-Limit header regression test
2025-10-20 16:14:40 +01:00
rcourtman
97871bec82 feat: implement updates rollback logic (Phase 1 follow-up)
Implement complete rollback functionality for systemd/LXC deployments:

**Rollback Strategy:**
- Downloads old binary from GitHub releases
- Restores config from timestamped backups
- Service detection (pulse/pulse-backend/pulse-hot-dev)
- Comprehensive health verification

**Implementation:**

Main rollback flow:
1. Create rollback history entry
2. Detect active service name
3. Download old binary version from GitHub
4. Stop Pulse service
5. Create safety backup of current config
6. Restore config from backup directory
7. Install old binary
8. Start service
9. Wait for health check (30s timeout)
10. Update rollback history (success/failure)

**Helper Functions:**

- detectServiceName(): Auto-detect active service from candidates
- downloadBinary(): Download specific version from GitHub releases
  - Auto-detects architecture (amd64/arm64)
  - Validates download success
  - Sets executable permissions
- stopService/startService(): Systemctl service management
- restoreConfig(): Atomic config restoration
- installBinary(): Safe binary installation with backup
- waitForHealth(): Retry health endpoint with timeout

**Safety Features:**
- Safety backup before restore (rollback-safety timestamp)
- Pre-rollback binary backup (.pre-rollback)
- Health check verification post-rollback
- Comprehensive error logging
- History tracking for audit

**Limitations:**
- Binary backup deleted by install.sh (downloads from GitHub)
- Network dependency for binary retrieval
- Config-only backups from current install.sh

**Testing:**
- Compiles cleanly
- Ready for unit/integration tests

Closes Phase 1 technical debt - rollback capability now functional.
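
A sketch of the health-wait loop as described; the endpoint URL and poll interval are assumptions:

```go
package updates

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealth polls the health endpoint until it returns 200 or the
// deadline passes, mirroring the 30s post-rollback verification above.
func waitForHealth(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service not healthy after %s", timeout)
}
```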

Part of Phase 1 Security Hardening follow-up work
2025-10-20 15:13:38 +00:00
rcourtman
9b1709a05b feat: enhance scheduler health API with rich instance metadata
Add comprehensive instance-level diagnostics to /api/monitoring/scheduler/health

**New Response Structure:**

Enhanced "instances" array with per-instance details:
- Instance metadata: displayName, type, connection URL
- Poll status: last success/error timestamps, error messages, error category
- Circuit breaker: state, timestamps, failure counts, retry windows
- Dead letter: present flag, reason, attempt history, retry schedule

**Implementation:**

Data structures:
- instanceInfo: cache of display names, URLs, types
- pollStatus: tracks successes/errors with timestamps and categories
- dlqInsight: DLQ entry metadata (reason, attempts, schedule)
- circuitBreaker: enhanced with stateSince, lastTransition

Tracking logic:
- buildInstanceInfoCache: populate metadata from config on startup
- recordTaskResult: track poll outcomes, error details, categories
- sendToDeadLetter: capture DLQ insights (reason, timestamps)
- circuitBreaker: record state transitions with timestamps

**Backward Compatible:**
- Existing fields (deadLetter, breakers, staleness) unchanged
- New "instances" array is additive
- Old clients can ignore new fields

**Testing:**
- Unit test: TestSchedulerHealth_EnhancedResponse validates all fields
- Integration tests: still passing (55s)
- All error tracking and breaker history verified

**Operator Benefits:**
- Diagnose issues without log digging
- See error messages directly in API
- Understand breaker states and retry schedules
- Track DLQ entries with full context
- Single API call for complete instance health view

Example: Quickly identify a "401 unauthorized" on a specific PBS instance,
see that it landed in the DLQ after 5 retries, and know when the next retry
is scheduled.
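
A response of roughly this shape would capture that scenario (field names are illustrative, trimmed to the fields listed above):

```json
{
  "instances": [
    {
      "displayName": "pbs-main",
      "type": "pbs",
      "pollStatus": {
        "lastError": "401 unauthorized",
        "errorCategory": "auth",
        "lastSuccess": "2025-10-20T14:58:02Z"
      },
      "circuitBreaker": { "state": "open", "failures": 5 },
      "deadLetter": {
        "present": true,
        "reason": "auth failure after 5 attempts",
        "nextRetry": "2025-10-20T15:20:00Z"
      }
    }
  ]
}
```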

Part of Phase 2 follow-up work to improve observability.
2025-10-20 15:13:38 +00:00
rcourtman
14d06a1654 test: add soak test with runtime instrumentation (Phase 2 Task 9d)
Add comprehensive soak testing capabilities:

**Runtime Instrumentation:**
- Periodic sampling of heap, stack, goroutines, GC count
- Sample every 10s during harness runs
- HarnessReport includes full RuntimeSamples history
- Detect memory leaks (>10% sustained growth)
- Detect goroutine leaks (>20 leaked goroutines)
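
The sampling itself needs nothing beyond the runtime package; a minimal sketch (struct fields are illustrative):

```go
package monitoring

import (
	"runtime"
	"time"
)

type RuntimeSample struct {
	At         time.Time
	HeapBytes  uint64
	StackBytes uint64
	Goroutines int
	NumGC      uint32
}

// sampleRuntime captures one instrumentation data point; the harness
// calls this every 10s and appends it to RuntimeSamples.
func sampleRuntime() RuntimeSample {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return RuntimeSample{
		At:         time.Now(),
		HeapBytes:  m.HeapAlloc,
		StackBytes: m.StackInuse,
		Goroutines: runtime.NumGoroutine(),
		NumGC:      m.NumGC,
	}
}
```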

**Soak Test:**
- TestAdaptiveSchedulerSoak with 15min+ duration
- Skip unless -soak flag or HARNESS_SOAK_MINUTES set
- 80 synthetic instances (60 healthy, 15 transient, 5 permanent)
- Configurable duration via env var
- Validates: heap growth <10%, goroutines stable, queue depth bounded
- Staleness threshold: 45s for long-running tests

**Wrapper Script:**
- testing-tools/run_adaptive_soak.sh for easy execution
- Accepts duration in minutes: ./run_adaptive_soak.sh 30
- Logs to tmp/adaptive_soak_<timestamp>.log
- Sets proper timeout (duration + 5min buffer)

**Test Results (2-minute validation):**
- 80 instances, 17 samples
- Heap: 2.3MB → 3.1MB (healthy)
- Goroutines: 16 → 6 (no leak, actually decreased)
- Circuit breakers: correctly blocking transient failures

Run with: go test -tags=integration ./internal/monitoring -run TestAdaptiveSchedulerSoak -soak -timeout 20m

Part of Phase 2 Task 9 (Integration/Soak Testing)
2025-10-20 15:13:38 +00:00
rcourtman
2636ba9137 test: add comprehensive integration test harness for adaptive polling (Phase 2 Task 9c)
Add PollExecutor seam and integration test infrastructure:

**PollExecutor Interface:**
- Add pluggable executor interface for testability
- Implement realExecutor wrapping existing poll functions
- Add SetExecutor() for test injection
- Zero impact on production behavior

**Integration Test Harness:**
- Build-tagged integration tests (go:build integration)
- Synthetic workload generator with configurable scenarios
- Fake executor simulating latencies, failures, recovery
- Runtime metrics collection (queue depth, staleness, goroutines)

**Comprehensive Assertions:**
- Queue depth bounds: stays within 1.5× instance count
- Staleness: healthy instances <20s, multiple poll cycles
- Circuit breakers: transient failures recover, permanent stay blocked
- Dead-letter queue: only permanent failures routed
- Scheduler health: snapshot consistency validation

**Test Scenarios:**
- 10 healthy PVE instances (rapid polling)
- 1 transient failure instance (fail → recover)
- 1 permanent failure instance (DLQ routing)
- 55s test duration with 3s base intervals
- Validates full adaptive scheduler lifecycle

Runs with: go test -tags=integration ./internal/monitoring -run TestAdaptiveSchedulerIntegration

Part of Phase 2 Task 9 (Integration/Soak Testing)
2025-10-20 15:13:38 +00:00
rcourtman
7d422d2909 feat: add professional logging with runtime configuration and performance optimization
Implements a structured logging package with LOG_LEVEL/LOG_FORMAT env support, debug-level guards for hot paths, enriched error messages with actionable context, and stack trace capture for production debugging. Improves observability and reduces log overhead in high-frequency polling loops.
2025-10-20 15:13:38 +00:00
rcourtman
25b797f18d test: add comprehensive staleness tracker unit tests (Phase 2 Task 9b)
Added 17 test cases covering:
- UpdateSuccess/UpdateError state management
- Staleness scoring (fresh, stale, max-stale, never-succeeded)
- Score normalization and capping (0.0 to 1.0 range)
- SetBounds behavior and defaults
- Snapshot merging logic
- Snapshot() API for full state export
- Nil safety and concurrent access

All tests verify correct freshness calculation based on lastSuccess
timestamps and configurable maxStale bounds.

Phase 2 testing status:
- ✓ Backoff exponential growth and jitter (13 tests)
- ✓ Circuit breaker state machine (10 tests)
- ✓ Staleness tracker scoring (17 tests)
- Total: 40+ unit tests covering core scheduling logic
2025-10-20 15:13:38 +00:00
rcourtman
24ae6d8d78 test: add comprehensive unit tests for backoff and circuit breaker (Phase 2 Task 9a)
Added 30+ test cases covering:

Backoff tests (backoff_test.go):
- Exponential growth with multiplier
- Jitter distribution and bounds
- Max delay capping
- Edge cases (negative attempts, zero config values)
- Realistic production scenarios

Circuit breaker tests (circuit_breaker_test.go):
- State transitions: closed → open → half-open → closed
- Retry interval backoff with bit-shifting (5s << failureCount)
- Half-open window behavior
- Concurrent access safety
- Default parameter validation

All tests pass with proper handling of time-based state transitions
and exponential backoff mechanics (bit-shift based retry intervals).
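
The bit-shift mechanic under test, sketched; the cap and overflow guard are assumptions, the tests above pin the exact behavior:

```go
package monitoring

import "time"

// retryInterval doubles the 5s base per failure via bit-shifting,
// matching the 5s << failureCount behavior described above.
func retryInterval(failureCount uint) time.Duration {
	d := 5 * time.Second << failureCount
	if d > 5*time.Minute || d <= 0 { // overflow guard + cap (assumed)
		d = 5 * time.Minute
	}
	return d
}
```
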
2025-10-20 15:13:38 +00:00
rcourtman
160adeb3b8 feat: add scheduler health API endpoint (Phase 2 Task 8)
Task 8 of 10 complete. Exposes read-only scheduler health data including:
- Queue depth and distribution by instance type
- Dead-letter queue inspection (top 25 tasks with error details)
- Circuit breaker states (instance-level)
- Staleness scores per instance

New API endpoint:
  GET /api/monitoring/scheduler/health (requires authentication)

New snapshot methods:
- StalenessTracker.Snapshot() - exports all staleness data
- TaskQueue.Snapshot() - queue depth & per-type distribution
- TaskQueue.PeekAll() - dead-letter task inspection
- circuitBreaker.State() - exports state, failures, retryAt
- Monitor.SchedulerHealth() - aggregates all health data

Documentation updated with API spec, field descriptions, and usage examples.
2025-10-20 15:13:38 +00:00
rcourtman
b1f445b33d feat: implement error handling with circuit breakers and backoff (Phase 2 Task 7)
Adds comprehensive error resilience:
- Circuit breaker with closed/open/half-open states (3 failures = trip)
- Exponential backoff with jitter (2s initial, 2x multiplier, 5min max)
- Dead-letter queue for tasks exceeding 5 retry attempts
- Error classification (transient vs permanent) using internal/errors helpers
- Per-instance failure tracking and breaker state management
- Integration with staleness tracker for outcome recording

Task 7 of 10 complete (70%). Ready for API surfaces and testing.
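
Those parameters compose into a policy like the following sketch (the jitter width here is illustrative):

```go
package monitoring

import (
	"math/rand"
	"time"
)

// backoffDelay grows 2s -> 4s -> 8s ... up to the 5min cap, then adds
// jitter so retries from many instances do not align.
func backoffDelay(attempt int) time.Duration {
	d := 2 * time.Second
	for i := 0; i < attempt && d < 5*time.Minute; i++ {
		d *= 2
	}
	if d > 5*time.Minute {
		d = 5 * time.Minute
	}
	return d + time.Duration(rand.Int63n(int64(d/10)))
}
```
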
2025-10-20 15:13:37 +00:00
rcourtman
aa5c08ad4a feat: implement priority queue-based task execution (Phase 2 Task 6)
Replaces immediate polling with queue-based scheduling:
- TaskQueue with min-heap (container/heap) for NextRun-ordered execution
- Worker goroutines that block on WaitNext() until tasks are due
- Tasks only execute when NextRun <= now, respecting adaptive intervals
- Automatic rescheduling after execution via scheduler.BuildPlan
- Queue depth tracking for backpressure-aware interval adjustments
- Upsert semantics for updating scheduled tasks without duplicates

Task 6 of 10 complete (60%). Ready for error/backoff policies.
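
A minimal sketch of the NextRun-ordered heap; the real TaskQueue adds locking, the blocking WaitNext, and upsert bookkeeping:

```go
package monitoring

import "time"

type task struct {
	instance string
	nextRun  time.Time
}

// taskHeap satisfies container/heap.Interface, ordered by nextRun, so
// Pop always yields the task that is due soonest.
type taskHeap []*task

func (h taskHeap) Len() int           { return len(h) }
func (h taskHeap) Less(i, j int) bool { return h[i].nextRun.Before(h[j].nextRun) }
func (h taskHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *taskHeap) Push(x any) { *h = append(*h, x.(*task)) }
func (h *taskHeap) Pop() any {
	old := *h
	t := old[len(old)-1]
	*h = old[:len(old)-1]
	return t
}
```
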
2025-10-20 15:13:37 +00:00
rcourtman
c554380cb5 feat: verify adaptive interval logic implementation (Phase 2 Task 5)
Confirms adaptive scheduling logic is fully operational:
- EMA smoothing (alpha=0.6) to prevent interval oscillations
- Staleness-based interpolation between min/max intervals
- Error penalty (0.6x per error) for faster recovery detection
- Queue depth stretch (0.1x per task) for backpressure handling
- ±5% jitter to prevent thundering herd effects
- Per-instance state tracking for smooth transitions

Task 5 of 10 complete. Scheduler foundation ready for queue-based execution.
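
Composing those pieces, the interval computation plausibly looks like the sketch below; only the constants come from the description above, the exact formula and the order the factors apply in are assumptions:

```go
package monitoring

import (
	"math"
	"math/rand"
	"time"
)

// nextInterval: staleness 0 -> max interval, staleness 1 -> min interval;
// errors shrink it (0.6x each), queue depth stretches it (+10% per task);
// EMA with alpha=0.6 smooths, +/-5% jitter de-synchronizes instances.
func nextInterval(prev, min, max time.Duration, staleness float64, errors, queueDepth int) time.Duration {
	const alpha = 0.6
	target := float64(min) + float64(max-min)*(1-staleness)
	target *= math.Pow(0.6, float64(errors))
	target *= 1 + 0.1*float64(queueDepth)
	smoothed := alpha*target + (1-alpha)*float64(prev)
	jitter := 1 + (rand.Float64()*0.10 - 0.05)
	return time.Duration(smoothed * jitter)
}
```
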
2025-10-20 15:13:37 +00:00
rcourtman
c7d1abf874 feat: implement staleness tracker for adaptive polling (Phase 2 Task 4)
Adds freshness metadata tracking for all monitored instances:
- StalenessTracker with per-instance last success/error/mutation timestamps
- Change hash detection using SHA1 for detecting data mutations
- Normalized staleness scoring (0-1 scale) based on age vs maxStale
- Integration with PollMetrics for authoritative last-success data
- Wired into all poll functions (PVE/PBS/PMG) via UpdateSuccess/UpdateError
- Connected to scheduler as StalenessSource implementation

Task 4 of 10 complete. Ready for adaptive interval logic.
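
The scoring reduces to a clamp; a sketch under the semantics described, where never-succeeded instances count as maximally stale:

```go
package monitoring

import "time"

// stalenessScore normalizes age-since-last-success against maxStale into
// [0, 1]; instances that have never succeeded score 1.0.
func stalenessScore(lastSuccess time.Time, maxStale time.Duration, now time.Time) float64 {
	if lastSuccess.IsZero() || maxStale <= 0 {
		return 1.0
	}
	score := float64(now.Sub(lastSuccess)) / float64(maxStale)
	switch {
	case score < 0:
		return 0
	case score > 1:
		return 1
	}
	return score
}
```
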
2025-10-20 15:13:37 +00:00
rcourtman
57429900a6 feat: add adaptive polling scheduler infrastructure (Phase 2 Tasks 1-3)
Implements adaptive scheduling foundation for Phase 2:
- Poll cycle metrics: duration, staleness, queue depth, in-flight counters
- Adaptive scheduler with pluggable staleness/interval/enqueue interfaces
- Config support: ADAPTIVE_POLLING_ENABLED flag + min/max/base intervals
- Feature flag defaults to disabled for safe rollout
- Scheduler wiring into Monitor with conditional instantiation

Tasks 1-3 of 10 complete. Ready for staleness tracker implementation.
2025-10-20 15:13:37 +00:00
rcourtman
524f42cc28 security: complete Phase 1 sensor proxy hardening
Implements comprehensive security hardening for pulse-sensor-proxy:
- Privilege drop from root to unprivileged user (UID 995)
- Hash-chained tamper-evident audit logging with remote forwarding
- Per-UID rate limiting (0.2 QPS, burst 2) with concurrency caps
- Enhanced command validation with 10+ attack pattern tests
- Fuzz testing (7M+ executions, 0 crashes)
- SSH hardening, AppArmor/seccomp profiles, operational runbooks

All 27 Phase 1 tasks complete. Ready for production deployment.
2025-10-20 15:13:37 +00:00
rcourtman
6619dc803e refactor: use strconv.Itoa instead of string(rune()) in test
Replace string(rune(i)) with strconv.Itoa(i) in hub_concurrency_test.go
for generating client IDs. While this is test code and not a production bug,
it uses the same incorrect pattern that caused the PR #575 bug.

This ensures consistent best practices across the codebase and avoids
confusion for developers who might copy this pattern.
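
The difference in one line: string(rune(i)) yields the character with code point i, not the decimal text:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	fmt.Printf("%q\n", string(rune(10))) // "\n": code point 10, a control char
	fmt.Printf("%q\n", strconv.Itoa(10)) // "10": the decimal string
}
```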

Related: #575
2025-10-20 15:12:14 +00:00
rcourtman
8d6346a008 test: add X-RateLimit-Limit header regression test
Add regression test for PR #575 to ensure rate limit headers are formatted
as decimal strings (e.g., "10") instead of Unicode control characters.

Also fixes pre-existing fmt.Sprintf argument count mismatch in PVE setup
script (internal/api/config_handlers.go:3077). The template had 28 format
specifiers (excluding %%s escape sequence) but was only receiving 24
arguments. Added missing pulseURL and tokenName arguments to match template.

Related: #575
2025-10-20 15:10:59 +00:00
rcourtman
20d94f4c90 Fix X-RateLimit-Limit header value (#575)
Fix X-RateLimit-Limit header value
2025-10-20 15:57:28 +01:00
rcourtman
1e25fa572a security: add resilience and error handling to tempproxy client
Implements comprehensive client-side improvements for production reliability:

1. Context Support with Deadlines:
   - Added callWithContext() for context-aware RPC calls
   - Respects context deadlines and cancellation
   - Prevents goroutine pileup under network issues

2. Exponential Backoff with Jitter:
   - Automatic retry with exponential backoff (100ms → 10s)
   - ±10% jitter to prevent thundering herd
   - Max 3 retries for transient failures
   - Smart retry decision based on error classification

3. Error Classification:
   - ProxyError type with classification (Transport, Auth, SSH, Sensor, Timeout)
   - Retryable vs non-retryable error identification
   - Better error messages for debugging
   - Structured error handling throughout

4. Improved Connection Handling:
   - DialContext for cancellable connections
   - Proper deadline propagation
   - Clean separation of single-attempt vs retry logic
   - Legacy call() method preserved for backwards compatibility

Security Notes:
- SSH fallback already blocked in containers (temperature.go:69-77)
- Per-client token auth not needed after method-level authz (commit d55112ac4)
- ID-mapped root blocked from privileged methods

Performance:
- No retry on non-retryable errors (auth, sensor failures)
- Context cancellation short-circuits retry loops
- Jitter prevents synchronized retry storms

Addresses Codex findings #4 and #5 from security audit.
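
A sketch of the classification surface; the class names follow the list above, while the struct shape and Retryable logic are assumptions:

```go
package tempproxy

type ErrorClass int

const (
	ClassTransport ErrorClass = iota
	ClassAuth
	ClassSSH
	ClassSensor
	ClassTimeout
)

type ProxyError struct {
	Class ErrorClass
	Err   error
}

func (e *ProxyError) Error() string { return e.Err.Error() }
func (e *ProxyError) Unwrap() error { return e.Err }

// Retryable: transport and timeout failures may succeed on retry; auth
// and sensor errors will fail identically, so they short-circuit.
func (e *ProxyError) Retryable() bool {
	return e.Class == ClassTransport || e.Class == ClassTimeout
}
```
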
2025-10-19 16:37:11 +00:00
rcourtman
049f79987f feat: add turnkey Docker installer with automatic proxy setup
Adds a one-command Docker deployment flow that:
- Detects if running in LXC and installs Docker if needed
- Automatically installs pulse-sensor-proxy on the Proxmox host
- Configures bind mount for proxy socket into LXC
- Generates optimized docker-compose.yml with proxy socket
- Enables temperature monitoring via host-side proxy

The install-docker.sh script handles the complete setup including:
- Docker installation (if needed)
- ACL configuration for container UIDs
- Bind mount setup
- Automatic apparmor=unconfined for socket access

Accessible via: curl -sSL http://pulse:7655/api/install/install-docker.sh | bash
2025-10-19 15:03:24 +00:00
rcourtman
a841a1a6fe fix: show success message instead of warning when using pulse-sensor-proxy
When the setup script detects TEMPERATURE_PROXY_KEY (proxy is available),
it now shows a clear success message instead of attempting SSH verification.

The verification check doesn't work with proxy-based setups since the
container doesn't have SSH keys - all temperature collection happens via
the Unix socket to pulse-sensor-proxy, which handles SSH.

Now shows:
✓ Temperature monitoring configured via pulse-sensor-proxy
  Temperature data will appear in the dashboard within 10 seconds

Instead of the misleading:
⚠️  Unable to verify SSH connectivity.
   Temperature data will appear once SSH connectivity is configured.
2025-10-19 14:06:18 +00:00
rcourtman
557eedb247 fix: detect and use proxy SSH key in setup script for Docker deployments
When pulse-sensor-proxy is available, the setup script now automatically
detects and uses the proxy's SSH public key instead of trying to generate
keys inside the container.

This fixes temperature monitoring setup for Docker deployments where:
- Container has proxy socket mounted at /mnt/pulse-proxy
- Proxy handles SSH connections to nodes
- Setup script needs to distribute the proxy's key, not container's key

The fix queries /api/system/proxy-public-key during setup script generation
and overrides SSH_SENSORS_PUBLIC_KEY if the proxy is available.

Tested with Docker on native Proxmox host (delly) - temperatures collected
successfully via proxy socket.
2025-10-19 13:50:08 +00:00
Sangar
ce21a6b94f Fix X-RateLimit-Limit header value 2025-10-19 11:43:03 +02:00
rcourtman
21712111e7 fix: enable variable expansion in cluster node SSH key heredoc
Changed heredoc delimiter from <<'EOF' to <<EOF to allow bash variable
expansion. Previously $SSH_PUBLIC_KEY and $SSH_RESTRICTED_KEY_ENTRY
were being passed as literal strings instead of their actual values,
so cluster nodes never received the correct SSH keys.

This fixes cluster node ProxyJump setup - now both restricted and
unrestricted keys are properly added to cluster nodes.
2025-10-19 09:08:00 +00:00
rcourtman
c17059ca8e fix: add ProxyJump key to all cluster nodes automatically
The setup script now adds both the restricted and unrestricted SSH keys
to ALL cluster nodes, not just the first one. This makes temperature
monitoring truly turnkey - you say 'yes' to configure cluster nodes and
it automatically sets up both keys on each node.

This ensures:
- All nodes can act as ProxyJump hosts if needed
- All nodes can provide temperature data via sensors
- No manual SSH key configuration required

Fixes turnkey cluster temperature monitoring setup.
2025-10-19 09:02:28 +00:00
rcourtman
bfde490ad4 fix: add unrestricted SSH key for ProxyJump on jump host
When using ProxyJump for cluster temperature monitoring, the jump host
(typically the first cluster node) needs an unrestricted SSH key to allow
connection forwarding. Previously only the restricted key with
command="sensors -j" was added, which blocked ProxyJump.

Now the setup script adds TWO keys:
1. Unrestricted key (for ProxyJump/connection forwarding)
2. Restricted key (for running sensors -j directly)

This allows containerized Pulse to:
- Connect through the jump host to other cluster nodes
- Collect temperature data from all cluster members

Fixes cluster temperature monitoring for Docker/LXC deployments.
2025-10-19 08:56:52 +00:00
rcourtman
78c2228b89 fix: add HostName entries for cluster nodes in SSH config
Added logic to resolve IP addresses for cluster nodes and include them as
HostName entries in the SSH config. Without this, Pulse couldn't connect
to cluster nodes like 'minipc' because the container couldn't resolve
the hostname.

Uses getent to resolve node names to IPs, with fallback to hostname if
resolution fails (for environments where DNS works).
2025-10-19 08:48:25 +00:00
rcourtman
dd70bdee08 feat: switch to Ed25519 SSH keys and add openssh-client to container
- Changed SSH key generation from RSA 2048 to Ed25519 (more secure, faster, smaller)
- Added openssh-client package to Docker image (required for temperature monitoring)
- Updated SSH config template to use id_ed25519
- Removed unused crypto/rsa and crypto/x509 imports

Ed25519 provides better security with shorter keys and faster operations
compared to RSA. The container now has SSH client tools needed to connect
to Proxmox nodes for temperature data collection.
2025-10-19 08:43:20 +00:00
rcourtman
6acfc3f121 fix: use id_rsa in SSH config instead of id_ed25519
The setup script was generating SSH config with IdentityFile ~/.ssh/id_ed25519
but Pulse generates id_rsa keys. Updated SSH config template to use id_rsa
to match the actual key type generated by the monitoring system.
2025-10-19 08:39:55 +00:00
rcourtman
759a3b7d2f fix: bypass middleware auth for ssh-config with setup token
Added middleware exception for /api/system/ssh-config when a valid setup
token is provided, matching the pattern used for verify-temperature-ssh.

The middleware was blocking ssh-config requests before they reached the
handler, even though the handler had setup token validation logic.
2025-10-19 08:35:39 +00:00
rcourtman
4b1d0013c0 fix: allow setup token auth for SSH config endpoint
The ssh-config endpoint was using RequireAuth which only accepts Pulse
API tokens, but the setup script sends a temporary setup token via the
auth_token parameter. Updated to follow the same pattern as
verify-temperature-ssh: check setup token first, then fall back to API auth.

This fixes the 401 error when the setup script tries to configure ProxyJump
for containerized Pulse deployments.
2025-10-19 08:31:05 +00:00
rcourtman
8c51ba727d fix: pass authToken to verify-temperature-ssh endpoint
The setup script was passing pulseURL instead of authToken as the last
parameter, causing 'Authentication required' errors when verifying SSH
connectivity. Fixed parameter order in fmt.Sprintf call.
2025-10-19 08:23:31 +00:00
rcourtman
74c426b87a feat: implement allowlist-based SSH config validation per Codex review
Security improvements to HandleSSHConfig endpoint:
- Add defer r.Body.Close() for proper resource cleanup
- Return 413 status for oversized requests with errors.As check
- Switch from blocklist to allowlist-based directive validation
- Use case-insensitive parsing with comment stripping via bufio.Scanner
- Add Content-Type: application/json header to response

Codex identified that blocklist approach was insufficient and recommended
allowlist validation to prevent unexpected directives. Only permits the
specific SSH directives Pulse needs for ProxyJump configuration.
2025-10-18 23:27:14 +00:00
rcourtman
71abcb2a37 fix: harden SSH config endpoint per Codex security review
Addressed security concerns identified by Codex code review:

1. **Memory exhaustion protection**
   - Added http.MaxBytesReader with 32KB limit
   - Prevents malicious large POST from killing server

2. **Dangerous directive blocking**
   - Reject ProxyCommand, LocalCommand, RemoteCommand
   - Prevents command injection via SSH config

3. **Improved error handling**
   - Check all error returns properly
   - Return 5xx on failures
   - Log file size and path for debugging

4. **Scoped SSH config (critical fix)**
   - Changed from `Host *` to specific cluster nodes
   - Prevents overriding ALL SSH connections
   - Only affects Proxmox nodes for temperature monitoring
   - Preserves other SSH functionality (git, etc.)

Before: Host * broke all SSH connections from Pulse
After: Only Proxmox cluster nodes use ProxyJump
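
Illustratively, the scoped config is a per-node stanza rather than a wildcard (hostname and IPs here are examples):

```
# Scoped: only the named cluster node is routed through the jump host.
Host minipc
    HostName 192.168.0.161
    User root
    ProxyJump root@192.168.0.160
```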

Credit: Codex code review identified these issues
2025-10-18 23:21:59 +00:00
rcourtman
8595b4c001 feat: automatic ProxyJump for turnkey temperature monitoring
Make temperature monitoring truly turnkey by automatically configuring
SSH ProxyJump when running in containers without pulse-sensor-proxy.

How it works:
1. Setup script runs on Proxmox host (e.g., delly)
2. Detects Pulse is containerized but proxy unavailable
3. Automatically configures SSH ProxyJump through the current host
4. Writes SSH config to /home/pulse/.ssh/config in container
5. Temperature monitoring "just works" without manual configuration

Changes:
- Track TEMP_MONITORING_AVAILABLE flag during proxy installation
- Auto-configure ProxyJump if proxy installation fails
- Add /api/system/ssh-config endpoint to write SSH config
- Only prompt for temperature monitoring if it can actually work
- Automatic SSH config: ProxyJump through Proxmox host

Before: User had to manually configure ProxyJump or install proxy
After: Temperature monitoring works automatically after setup script

This makes Docker deployments as turnkey as LXC deployments.
2025-10-18 23:17:38 +00:00