Source builds use commit hashes (main-c147fa1) not semantic versions
(v4.23.0), so update checks would always fail or show misleading
"Update Available" banners.
Changes:
- Add IsSourceBuild flag to VersionInfo struct
- Detect source builds via BUILD_FROM_SOURCE marker file
- Skip update check for source builds (as already done for Docker)
- Update frontend to show "Built from source" message
- Disable manual update check button for source builds
- Return "source" deployment type for source builds
Backend:
- internal/updates/version.go: Add isSourceBuildEnvironment() detection (sketched below)
- internal/updates/manager.go: Skip check with appropriate message
- internal/api/types.go: Add isSourceBuild to API response
- internal/api/router.go: Include isSourceBuild in version endpoint
Frontend:
- src/api/updates.ts: Add isSourceBuild to VersionInfo type
- src/stores/updates.ts: Don't poll for updates on source builds
- src/components/Settings/Settings.tsx: Show "Built from source" message
Fixes the confusing "Update Available" banner for users who explicitly
chose --source to get latest main branch code.
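A minimal sketch of the marker-file check, assuming an /opt/pulse install path (the real marker location may differ):

```go
package main

import (
	"fmt"
	"os"
)

// isSourceBuildEnvironment reports whether Pulse was built from source by
// probing for the BUILD_FROM_SOURCE marker file. The exact path here is an
// assumption for illustration.
func isSourceBuildEnvironment() bool {
	_, err := os.Stat("/opt/pulse/BUILD_FROM_SOURCE")
	return err == nil
}

func main() {
	if isSourceBuildEnvironment() {
		fmt.Println("source build: skipping update check")
	}
}
```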
Co-authored-by: Codex AI
Significantly enhanced the network discovery feature to eliminate false positives,
provide real-time progress updates, and improve error reporting.
Key improvements:
- Require positive Proxmox identification (version data, auth headers, or certificates)
  instead of reporting any service on ports 8006/8007 (see the sketch below)
- Add real-time progress tracking with phase/target counts and completion percentage
- Implement structured error reporting with IP, phase, type, and timestamp details
- Fix TLS timeout handling to prevent hangs on unresponsive hosts
- Expose progress and structured errors via WebSocket for UI consumption
- Reduce log verbosity by moving discovery logs to debug level
- Fix duplicate IP counting to ensure progress reaches 100%
Breaking changes: None (backward compatible with legacy API methods)
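A sketch of what "positive identification" can look like in Go; the endpoint shape follows the public Proxmox API, while the header check, timeout, and function names are illustrative assumptions:

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"
)

// looksLikeProxmox requires a positive signal (a version payload or a
// Proxmox-identifying header) rather than treating any listener on
// ports 8006/8007 as a hit.
func looksLikeProxmox(host string) bool {
	client := &http.Client{
		Timeout: 3 * time.Second, // bounded so unresponsive hosts cannot hang the scan
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // discovery only; no credentials sent
		},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8006/api2/json/version", host))
	if err != nil {
		return false
	}
	defer resp.Body.Close()

	// Signal 1: a well-formed version payload.
	var body struct {
		Data struct {
			Version string `json:"version"`
		} `json:"data"`
	}
	if json.NewDecoder(resp.Body).Decode(&body) == nil && body.Data.Version != "" {
		return true
	}
	// Signal 2: a 401 whose Server header names the Proxmox API daemon.
	return resp.StatusCode == http.StatusUnauthorized &&
		strings.Contains(resp.Header.Get("Server"), "pve-api-daemon")
}

func main() {
	fmt.Println(looksLikeProxmox("192.168.0.100"))
}
```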
Added containerized and containerId fields to /api/version endpoint
to enable automatic temperature proxy installation for LXC containers.
Changes:
- Added Containerized bool field to VersionResponse
- Added ContainerId string field to VersionResponse
- Detect containerization by checking the /run/systemd/container file (sketched below)
- Extract container ID from hostname for LXC containers
- Set deployment type from container type (lxc/docker)
This allows the PVE setup script to:
1. Detect that Pulse is running in a container
2. Find the container ID by matching IPs
3. Automatically install pulse-sensor-proxy on the host
4. Configure bind mount for secure socket communication
Fixes the issue where the setup script showed 'Proxy not available'
even when Pulse was containerized.
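A minimal sketch of the detection; /run/systemd/container is the documented systemd mechanism (it holds the container type, e.g. "lxc" or "docker"), while using the hostname as the container ID mirrors the commit's approach and is an assumption here:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// detectContainer reads /run/systemd/container, which systemd populates
// only when PID 1 runs inside a container.
func detectContainer() (containerized bool, containerType, containerID string) {
	data, err := os.ReadFile("/run/systemd/container")
	if err != nil {
		return false, "", "" // file absent on bare metal and in VMs
	}
	containerType = strings.TrimSpace(string(data))
	hostname, _ := os.Hostname() // illustrative: hostname stands in for the container ID
	return true, containerType, hostname
}

func main() {
	ok, typ, id := detectContainer()
	fmt.Println(ok, typ, id)
}
```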
Critical bug fix: The setup script's format string had 33 placeholders
but was only receiving 27 arguments, causing:
- INSTALLER_URL to receive authToken instead of pulseURL
- This made curl try to resolve the token value as a hostname
- Error: 'curl: (6) Could not resolve host: N7AE3P'
- Token ID showed '%!s(MISSING)' in manual setup instructions
Fixed by:
- Added missing tokenName at position 7
- Added literal '%s' strings for version_ge printf placeholders
- Added authToken arguments for Authorization headers (positions 29, 31)
- Ensured all 33 format placeholders have corresponding arguments
Now generates correct URLs:
- INSTALLER_URL: http://192.168.0.160:7655/api/install/install-sensor-proxy.sh
- --pulse-server: http://192.168.0.160:7655
- Token ID: pulse-monitor@pam!pulse-192-168-0-160-[timestamp]
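The failure mode is easy to reproduce: when fmt.Sprintf receives fewer arguments than format verbs, later placeholders consume the wrong arguments and the final ones render as %!s(MISSING). (go vet's printf check flags exactly this mismatch for constant format strings.)

```go
package main

import "fmt"

func main() {
	// Three %s verbs but only two arguments: fmt does not panic, it shifts
	// the remaining arguments and marks the shortfall — the same failure
	// mode that fed authToken into INSTALLER_URL.
	out := fmt.Sprintf("url=%s token=%s name=%s", "http://192.168.0.160:7655", "N7AE3P")
	fmt.Println(out) // url=http://192.168.0.160:7655 token=N7AE3P name=%!s(MISSING)
}
```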
Setup script improvements (config_handlers.go):
- Remove redundant mount configuration and container restart logic
- Let installer handle all mount/restart operations (single source of truth)
- Eliminate hard-coded mp0 assumption
Installer improvements (install-sensor-proxy.sh):
- Add mount configuration persistence validation via pct config check
- Surface pct set errors instead of silencing with 2>/dev/null
- Capture and display curl download errors with temp files
- Check systemd daemon-reload/enable/restart exit codes
- Show journalctl output when service fails to start
- Make socket verification fatal (was warning)
- Provide clear manual steps when hot-plug fails on running container
This makes the installation fail fast with actionable error messages
instead of silently proceeding with broken configuration.
Changes:
- Replace PULSE_SENSOR_PROXY_FALLBACK_URL env export with --pulse-server argument
- Remove --quiet flag from installer invocation to show download progress
- More reliable than environment variable inheritance in subshells
This ensures the proxy installer can reliably download the binary from the
Pulse server fallback when GitHub is unavailable.
The setup script was filtering installer output to only show lines with
✓|⚠️|ERROR, which hid successful download messages like:
'Downloading pulse-sensor-proxy-linux-amd64 from Pulse server...'
This made it appear the installer failed even when the Pulse server
fallback download succeeded. Changed to show all installer output for
better visibility and debugging.
Users will now see the complete installation flow including:
- GitHub download attempt (expected to fail for dev builds)
- Pulse server fallback download (should succeed)
- All setup steps and validations
Improves transparency and reduces confusion during setup
The version check was blocking dev/main builds (e.g., '0.0.0-main-da9da6f')
from using the temperature proxy, even though they have the latest code.
Added a regex to skip the version check for builds matching (see the sketch below):
- ^0\.0\.0-main (main branch builds)
- ^dev (dev builds)
- ^main (main version strings)
These builds are assumed to have proxy support since they're built from
the latest codebase.
Fixes testing workflow when installing Pulse with --main flag
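A sketch of the skip logic with the three rules folded into one pattern (an illustrative equivalent, not the actual code):

```go
package main

import (
	"fmt"
	"regexp"
)

// devBuildPattern matches versions built straight from main or a dev build,
// which therefore skip the minimum-version check.
var devBuildPattern = regexp.MustCompile(`^(0\.0\.0-main|dev|main)`)

func main() {
	for _, v := range []string{"0.0.0-main-da9da6f", "dev-1234", "v4.23.0"} {
		fmt.Printf("%-20s skip=%v\n", v, devBuildPattern.MatchString(v))
	}
}
```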
The version check was blocking ALL v4.23.0 users from temperature monitoring,
even non-containerized ones who don't need the proxy.
Changed to only check version when PULSE_IS_CONTAINERIZED=true, since:
- Non-containerized Pulse can use direct SSH on any version
- Containerized Pulse requires v4.24.0+ for proxy support
This ensures non-containerized v4.23.0 users can still use temperature monitoring
via direct SSH while properly blocking proxy setup for containerized v4.23.0.
Fixes regression introduced in commit fbe4ab83a
Improves configuration handling and system settings APIs to support
v4.24.0 features including runtime logging controls, adaptive polling
configuration, and enhanced config export/persistence.
Changes:
- Add config override system for discovery service
- Enhance system settings API with runtime logging controls
- Improve config persistence and export functionality
- Update security setup handling
- Refine monitoring and discovery service integration
These changes provide the backend support for the configuration
features documented in the v4.24.0 release.
Resolves two remaining TODOs from codebase audit.
## 1. PBS/PMG Test Harness Stubs
**Location:** internal/monitoring/harness_integration.go:149-151
**Changes:**
- Added PBS client stub registration: `monitor.pbsClients[inst.Name] = &pbs.Client{}`
- Added PMG client stub registration: `monitor.pmgClients[inst.Name] = &pmg.Client{}`
- Added imports for pkg/pbs and pkg/pmg
**Purpose:**
Enables integration test scenarios to include PBS and PMG instance types
alongside existing PVE support. Stubs allow scheduler to register and
execute tasks for these instance types during integration testing.
**Testing:**
✅ TestAdaptiveSchedulerIntegration passes (55.5s)
✅ Integration test harness now supports all three instance types
## 2. HTTP Config URL Fetch
**Location:** cmd/pulse/config.go:226-261
**Problem:**
`PULSE_INIT_CONFIG_URL` was recognized but not implemented, returning
"URL import not yet implemented" error.
**Implementation:**
- URL validation (http/https schemes only)
- HTTP client with 15 second timeout
- Status code validation (2xx required)
- Empty response detection
- Base64 decoding with fallback to raw data
- Matches existing env-var behavior for `PULSE_INIT_CONFIG_DATA`
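A sketch of the fetch path under those rules; function and variable names are illustrative, not the actual implementation:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// fetchInitConfig mirrors the steps listed above: scheme validation, a
// 15-second timeout, 2xx enforcement, empty-body detection, and base64
// decoding with a raw-data fallback.
func fetchInitConfig(rawURL string) ([]byte, error) {
	u, err := url.Parse(rawURL)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return nil, fmt.Errorf("config URL must be http or https")
	}
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(rawURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode > 299 {
		return nil, fmt.Errorf("config fetch returned %s", resp.Status)
	}
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if len(data) == 0 {
		return nil, fmt.Errorf("config URL returned an empty response")
	}
	// Prefer base64 (matching PULSE_INIT_CONFIG_DATA), fall back to raw bytes.
	if decoded, err := base64.StdEncoding.DecodeString(string(data)); err == nil {
		return decoded, nil
	}
	return data, nil
}

func main() {
	if _, err := fetchInitConfig("https://config-server/encrypted-config"); err != nil {
		fmt.Println(err)
	}
}
```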
**Security:**
- Both HTTP and HTTPS supported (HTTPS recommended for production)
- URL scheme validation prevents file:// or other protocols
- Timeout prevents hanging on unresponsive servers
**Usage:**
```bash
export PULSE_INIT_CONFIG_URL="https://config-server/encrypted-config"
export PULSE_INIT_CONFIG_PASSPHRASE="secret"
pulse config auto-import
```
**Testing:**
✅ Code compiles cleanly
✅ Follows same pattern as existing PULSE_INIT_CONFIG_DATA handling
## Impact
- Completes integration test infrastructure for all instance types
- Enables automated config distribution via HTTP(S) for container deployments
- Removes last TODOs from codebase (no TODO/FIXME remaining in Go files)
Fixes panic: assignment to entry in nil map in PMG polling tests.
**Problem:**
Tests were manually creating Monitor structs without initializing internal
maps like pollStatusMap, causing nil map panics when recordTaskResult()
tried to update task status.
**Root Cause:**
- TestPollPMGInstancePopulatesState (line 90)
- TestPollPMGInstanceRecordsAuthFailures (line 189)
Both created Monitor with only partial field initialization, missing:
- pollStatusMap
- dlqInsightMap
- instanceInfoCache
- Other internal state maps
**Solution:**
Changed both tests to use New() constructor which properly initializes all
maps and internal state (monitor.go:1541). This ensures tests match production
initialization and will automatically pick up any future map additions.
**Tests:**
✅ TestPollPMGInstancePopulatesState - now passes
✅ TestPollPMGInstanceRecordsAuthFailures - now passes
✅ All monitoring tests pass (0.125s)
Follows best practice: use constructors instead of manual struct creation
to maintain initialization invariants.
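The failure and the fix in miniature, with the type cut down to the one offending map:

```go
package main

import "fmt"

// Monitor is a reduced stand-in for the real type; only the map matters here.
type Monitor struct {
	pollStatusMap map[string]string
}

// New initializes every internal map, so callers can never hit a nil map.
func New() *Monitor {
	return &Monitor{pollStatusMap: make(map[string]string)}
}

func main() {
	bad := &Monitor{} // manual construction: pollStatusMap is nil
	_ = bad           // bad.pollStatusMap["pve1"] = "ok" would panic here

	good := New() // constructor: maps are ready
	good.pollStatusMap["pve1"] = "ok"
	fmt.Println(good.pollStatusMap)
}
```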
Implement complete rollback functionality for systemd/LXC deployments:
**Rollback Strategy:**
- Downloads old binary from GitHub releases
- Restores config from timestamped backups
- Service detection (pulse/pulse-backend/pulse-hot-dev)
- Comprehensive health verification
**Implementation:**
Main rollback flow:
1. Create rollback history entry
2. Detect active service name
3. Download old binary version from GitHub
4. Stop Pulse service
5. Create safety backup of current config
6. Restore config from backup directory
7. Install old binary
8. Start service
9. Wait for health check (30s timeout)
10. Update rollback history (success/failure)
**Helper Functions:**
- detectServiceName(): Auto-detect active service from candidates
- downloadBinary(): Download specific version from GitHub releases
  - Auto-detects architecture (amd64/arm64)
  - Validates download success
  - Sets executable permissions
- stopService/startService(): Systemctl service management
- restoreConfig(): Atomic config restoration
- installBinary(): Safe binary installation with backup
- waitForHealth(): Retry health endpoint with timeout
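A sketch of the health-wait helper named above; the endpoint path and retry cadence are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealth polls the health endpoint until it answers 200 OK or the
// timeout elapses, matching the 30s post-rollback verification step.
func waitForHealth(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/api/health")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // retry while the service restarts
	}
	return fmt.Errorf("service did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealth("http://127.0.0.1:7655", 30*time.Second))
}
```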
**Safety Features:**
- Safety backup before restore (rollback-safety timestamp)
- Pre-rollback binary backup (.pre-rollback)
- Health check verification post-rollback
- Comprehensive error logging
- History tracking for audit
**Limitations:**
- Binary backup deleted by install.sh (downloads from GitHub)
- Network dependency for binary retrieval
- Config-only backups from current install.sh
**Testing:**
- Compiles cleanly
- Ready for unit/integration tests
Closes Phase 1 technical debt - rollback capability now functional.
Part of Phase 1 Security Hardening follow-up work
Add comprehensive instance-level diagnostics to /api/monitoring/scheduler/health
**New Response Structure:**
Enhanced "instances" array with per-instance details:
- Instance metadata: displayName, type, connection URL
- Poll status: last success/error timestamps, error messages, error category
- Circuit breaker: state, timestamps, failure counts, retry windows
- Dead letter: present flag, reason, attempt history, retry schedule
**Implementation:**
Data structures:
- instanceInfo: cache of display names, URLs, types
- pollStatus: tracks successes/errors with timestamps and categories
- dlqInsight: DLQ entry metadata (reason, attempts, schedule)
- circuitBreaker: enhanced with stateSince, lastTransition
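In miniature, plausible shapes for two of these records (field names are illustrative, not the exact ones in the code):

```go
package main

import (
	"fmt"
	"time"
)

// pollStatus tracks the most recent poll outcome per instance.
type pollStatus struct {
	LastSuccess   time.Time
	LastError     time.Time
	LastErrorMsg  string
	ErrorCategory string // e.g. "auth", "timeout", "connection"
}

// dlqInsight captures why a task landed in the dead-letter queue.
type dlqInsight struct {
	Reason    string
	Attempts  int
	NextRetry time.Time
}

func main() {
	s := pollStatus{LastError: time.Now(), LastErrorMsg: "401 unauthorized", ErrorCategory: "auth"}
	fmt.Printf("%+v\n", s)
	fmt.Printf("%+v\n", dlqInsight{Reason: "repeated auth failures", Attempts: 5})
}
```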
Tracking logic:
- buildInstanceInfoCache: populate metadata from config on startup
- recordTaskResult: track poll outcomes, error details, categories
- sendToDeadLetter: capture DLQ insights (reason, timestamps)
- circuitBreaker: record state transitions with timestamps
**Backward Compatible:**
- Existing fields (deadLetter, breakers, staleness) unchanged
- New "instances" array is additive
- Old clients can ignore new fields
**Testing:**
- Unit test: TestSchedulerHealth_EnhancedResponse validates all fields
- Integration tests: still passing (55s)
- All error tracking and breaker history verified
**Operator Benefits:**
- Diagnose issues without log digging
- See error messages directly in API
- Understand breaker states and retry schedules
- Track DLQ entries with full context
- Single API call for complete instance health view
Example: Quickly identify a "401 unauthorized" error on a specific PBS instance,
see that it landed in the DLQ after 5 retries, and know when the next retry is scheduled.
Part of Phase 2 follow-up work to improve observability.
Implements a structured logging package with LOG_LEVEL/LOG_FORMAT env support, debug-level guards for hot paths, enriched error messages with actionable context, and stack trace capture for production debugging. Improves observability and reduces log overhead in high-frequency polling loops.
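A minimal sketch of the env wiring using Go's standard log/slog; the actual logging package and field names may differ:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"strings"
)

// newLogger maps LOG_LEVEL and LOG_FORMAT onto a structured logger.
func newLogger() *slog.Logger {
	level := slog.LevelInfo
	switch strings.ToLower(os.Getenv("LOG_LEVEL")) {
	case "debug":
		level = slog.LevelDebug
	case "warn":
		level = slog.LevelWarn
	case "error":
		level = slog.LevelError
	}
	opts := &slog.HandlerOptions{Level: level}
	var handler slog.Handler = slog.NewTextHandler(os.Stderr, opts)
	if os.Getenv("LOG_FORMAT") == "json" {
		handler = slog.NewJSONHandler(os.Stderr, opts)
	}
	return slog.New(handler)
}

func main() {
	log := newLogger()
	// Debug-level guard keeps hot polling loops cheap when debug is off.
	if log.Enabled(context.Background(), slog.LevelDebug) {
		log.Debug("poll tick", "instance", "pve1")
	}
}
```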
Task 8 of 10 complete. Exposes read-only scheduler health data including:
- Queue depth and distribution by instance type
- Dead-letter queue inspection (top 25 tasks with error details)
- Circuit breaker states (instance-level)
- Staleness scores per instance
New API endpoint:
GET /api/monitoring/scheduler/health (requires authentication)
New snapshot methods:
- StalenessTracker.Snapshot() - exports all staleness data
- TaskQueue.Snapshot() - queue depth & per-type distribution
- TaskQueue.PeekAll() - dead-letter task inspection
- circuitBreaker.State() - exports state, failures, retryAt
- Monitor.SchedulerHealth() - aggregates all health data
Documentation updated with API spec, field descriptions, and usage examples.
Replaces immediate polling with queue-based scheduling:
- TaskQueue with min-heap (container/heap) for NextRun-ordered execution
- Worker goroutines that block on WaitNext() until tasks are due
- Tasks only execute when NextRun <= now, respecting adaptive intervals
- Automatic rescheduling after execution via scheduler.BuildPlan
- Queue depth tracking for backpressure-aware interval adjustments
- Upsert semantics for updating scheduled tasks without duplicates
Task 6 of 10 complete (60%). Ready for error/backoff policies.
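The core idea in miniature: a min-heap over NextRun via container/heap, so the queue head is always the next task due (a sketch, not the production TaskQueue):

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

// Task is a scheduled poll; NextRun orders the heap.
type Task struct {
	Instance string
	NextRun  time.Time
}

// taskHeap implements heap.Interface as a min-heap on NextRun.
type taskHeap []*Task

func (h taskHeap) Len() int           { return len(h) }
func (h taskHeap) Less(i, j int) bool { return h[i].NextRun.Before(h[j].NextRun) }
func (h taskHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *taskHeap) Push(x any)        { *h = append(*h, x.(*Task)) }
func (h *taskHeap) Pop() any {
	old := *h
	n := len(old)
	t := old[n-1]
	*h = old[:n-1]
	return t
}

func main() {
	h := &taskHeap{}
	heap.Init(h)
	now := time.Now()
	heap.Push(h, &Task{"pbs1", now.Add(30 * time.Second)})
	heap.Push(h, &Task{"pve1", now.Add(10 * time.Second)})
	next := heap.Pop(h).(*Task) // earliest NextRun surfaces first
	fmt.Println(next.Instance)  // pve1
}
```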
Confirms the adaptive scheduling logic is fully operational:
- EMA smoothing (alpha=0.6) to prevent interval oscillations
- Staleness-based interpolation between min/max intervals
- Error penalty (0.6x per error) for faster recovery detection
- Queue depth stretch (0.1x per task) for backpressure handling
- ±5% jitter to prevent thundering herd effects
- Per-instance state tracking for smooth transitions
Task 5 of 10 complete. Scheduler foundation ready for queue-based execution.
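A sketch of how the adjustments listed above can compose; the constants come from the list, while the composition order and clamping are assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextInterval derives a polling interval from staleness, recent errors,
// and queue depth, then smooths and jitters it.
func nextInterval(prev, min, max time.Duration, staleness float64, errors, queueDepth int) time.Duration {
	const alpha = 0.6

	// Staleness interpolates between min and max: staler data polls sooner.
	target := min + time.Duration(float64(max-min)*(1-staleness))

	// Errors shrink the interval (0.6x per error) to detect recovery sooner;
	// queue depth stretches it (+10% per queued task) for backpressure.
	for i := 0; i < errors; i++ {
		target = time.Duration(float64(target) * 0.6)
	}
	target = time.Duration(float64(target) * (1 + 0.1*float64(queueDepth)))

	// EMA smoothing damps oscillation between successive plans.
	smoothed := time.Duration(alpha*float64(target) + (1-alpha)*float64(prev))

	// ±5% jitter prevents a thundering herd of simultaneous polls.
	out := time.Duration(float64(smoothed) * (1 + (rand.Float64()-0.5)*0.1))
	if out < min {
		out = min
	}
	if out > max {
		out = max
	}
	return out
}

func main() {
	fmt.Println(nextInterval(20*time.Second, 10*time.Second, 60*time.Second, 0.8, 1, 3))
}
```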
Adds freshness metadata tracking for all monitored instances:
- StalenessTracker with per-instance last success/error/mutation timestamps
- Change hash detection using SHA1 for detecting data mutations
- Normalized staleness scoring (0-1 scale) based on age vs maxStale
- Integration with PollMetrics for authoritative last-success data
- Wired into all poll functions (PVE/PBS/PMG) via UpdateSuccess/UpdateError
- Connected to scheduler as StalenessSource implementation
Task 4 of 10 complete. Ready for adaptive interval logic.
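The scoring and change-hash ideas in miniature (names illustrative):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"time"
)

// stalenessScore normalizes age against maxStale onto a 0-1 scale.
func stalenessScore(lastSuccess time.Time, maxStale time.Duration) float64 {
	age := time.Since(lastSuccess)
	if age >= maxStale {
		return 1.0
	}
	return float64(age) / float64(maxStale)
}

// changeHash fingerprints a snapshot so mutations are detected even when a
// poll succeeds (SHA1 here is for change detection, not security).
func changeHash(snapshot []byte) [sha1.Size]byte {
	return sha1.Sum(snapshot)
}

func main() {
	fmt.Printf("%.2f\n", stalenessScore(time.Now().Add(-30*time.Second), time.Minute))
	fmt.Printf("%x\n", changeHash([]byte(`{"node":"pve1"}`)))
}
```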
Replace string(rune(i)) with strconv.Itoa(i) in hub_concurrency_test.go
for generating client IDs. While this is test code and not a production bug,
it uses the same incorrect pattern that caused the PR #575 bug.
This ensures consistent best practices across the codebase and avoids
confusion for developers who might copy this pattern.
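The difference in two lines:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	i := 10
	fmt.Printf("%q\n", string(rune(i))) // "\n" — rune 10 is a control character, not "10"
	fmt.Printf("%q\n", strconv.Itoa(i)) // "10" — the decimal string intended
}
```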
Related: #575
Add regression test for PR #575 to ensure rate limit headers are formatted
as decimal strings (e.g., "10") instead of Unicode control characters.
Also fixes a pre-existing fmt.Sprintf argument-count mismatch in the PVE setup
script (internal/api/config_handlers.go:3077). The template had 28 format
specifiers (excluding the %%s escape sequence) but was receiving only 24
arguments. Added the missing pulseURL and tokenName arguments to match the template.
Related: #575
Adds a one-command Docker deployment flow that:
- Detects if running in LXC and installs Docker if needed
- Automatically installs pulse-sensor-proxy on the Proxmox host
- Configures bind mount for proxy socket into LXC
- Generates optimized docker-compose.yml with proxy socket
- Enables temperature monitoring via host-side proxy
The install-docker.sh script handles the complete setup including:
- Docker installation (if needed)
- ACL configuration for container UIDs
- Bind mount setup
- Automatic apparmor=unconfined for socket access
Accessible via: curl -sSL http://pulse:7655/api/install/install-docker.sh | bash
When the setup script detects TEMPERATURE_PROXY_KEY (proxy is available),
it now shows a clear success message instead of attempting SSH verification.
The verification check doesn't work with proxy-based setups since the
container doesn't have SSH keys - all temperature collection happens via
the Unix socket to pulse-sensor-proxy, which handles SSH.
Now shows:
✓ Temperature monitoring configured via pulse-sensor-proxy
Temperature data will appear in the dashboard within 10 seconds
Instead of the misleading:
⚠️ Unable to verify SSH connectivity.
Temperature data will appear once SSH connectivity is configured.
When pulse-sensor-proxy is available, the setup script now automatically
detects and uses the proxy's SSH public key instead of trying to generate
keys inside the container.
This fixes temperature monitoring setup for Docker deployments where:
- Container has proxy socket mounted at /mnt/pulse-proxy
- Proxy handles SSH connections to nodes
- Setup script needs to distribute the proxy's key, not container's key
The fix queries /api/system/proxy-public-key during setup script generation
and overrides SSH_SENSORS_PUBLIC_KEY if the proxy is available.
Tested with Docker on native Proxmox host (delly) - temperatures collected
successfully via proxy socket.
Changed heredoc delimiter from <<'EOF' to <<EOF to allow bash variable
expansion. Previously $SSH_PUBLIC_KEY and $SSH_RESTRICTED_KEY_ENTRY
were being passed as literal strings instead of their actual values,
so cluster nodes never received the correct SSH keys.
This fixes cluster node ProxyJump setup - now both restricted and
unrestricted keys are properly added to cluster nodes.
The setup script now adds both the restricted and unrestricted SSH keys
to ALL cluster nodes, not just the first one. This makes temperature
monitoring truly turnkey - you say 'yes' to configure cluster nodes and
it automatically sets up both keys on each node.
This ensures:
- All nodes can act as ProxyJump hosts if needed
- All nodes can provide temperature data via sensors
- No manual SSH key configuration required
Fixes turnkey cluster temperature monitoring setup.
When using ProxyJump for cluster temperature monitoring, the jump host
(typically the first cluster node) needs an unrestricted SSH key to allow
connection forwarding. Previously only the restricted key with
command="sensors -j" was added, which blocked ProxyJump.
Now the setup script adds TWO keys:
1. Unrestricted key (for ProxyJump/connection forwarding)
2. Restricted key (for running sensors -j directly)
This allows containerized Pulse to:
- Connect through the jump host to other cluster nodes
- Collect temperature data from all cluster members
Fixes cluster temperature monitoring for Docker/LXC deployments.
Added logic to resolve IP addresses for cluster nodes and include them as
HostName entries in the SSH config. Without this, Pulse couldn't connect
to cluster nodes like 'minipc' because the container couldn't resolve
the hostname.
Uses getent to resolve node names to IPs, falling back to the bare hostname
if resolution fails (for environments where DNS works inside the container).
- Changed SSH key generation from RSA 2048 to Ed25519 (more secure, faster, smaller)
- Added openssh-client package to Docker image (required for temperature monitoring)
- Updated SSH config template to use id_ed25519
- Removed unused crypto/rsa and crypto/x509 imports
Ed25519 provides better security with shorter keys and faster operations
compared to RSA. The container now has SSH client tools needed to connect
to Proxmox nodes for temperature data collection.
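A minimal sketch of the Ed25519 side of this change: generating a keypair and rendering the authorized_keys line for distribution to nodes (error handling trimmed, private-key serialization omitted):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// Yields a line like: ssh-ed25519 AAAA... — much shorter than RSA 2048.
	fmt.Print(string(ssh.MarshalAuthorizedKey(sshPub)))
}
```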
The setup script was generating SSH config with IdentityFile ~/.ssh/id_ed25519
but Pulse generates id_rsa keys. Updated SSH config template to use id_rsa
to match the actual key type generated by the monitoring system.
Added middleware exception for /api/system/ssh-config when a valid setup
token is provided, matching the pattern used for verify-temperature-ssh.
The middleware was blocking ssh-config requests before they reached the
handler, even though the handler had setup token validation logic.
The ssh-config endpoint was using RequireAuth which only accepts Pulse
API tokens, but the setup script sends a temporary setup token via the
auth_token parameter. Updated to follow the same pattern as
verify-temperature-ssh: check setup token first, then fall back to API auth.
This fixes the 401 error when the setup script tries to configure ProxyJump
for containerized Pulse deployments.
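The auth order in miniature; token sourcing and the handler shape here are illustrative assumptions, not the actual handler:

```go
package main

import (
	"crypto/subtle"
	"net/http"
)

// handleSSHConfig checks a temporary setup token before falling back to
// normal API-token auth — the same order described above.
func handleSSHConfig(w http.ResponseWriter, r *http.Request, setupToken string, apiAuth func(*http.Request) bool) {
	supplied := r.URL.Query().Get("auth_token")
	setupOK := setupToken != "" &&
		subtle.ConstantTimeCompare([]byte(supplied), []byte(setupToken)) == 1
	if !setupOK && !apiAuth(r) {
		http.Error(w, "Authentication required", http.StatusUnauthorized)
		return
	}
	// ... write SSH config ...
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	alwaysDeny := func(*http.Request) bool { return false }
	http.HandleFunc("/api/system/ssh-config", func(w http.ResponseWriter, r *http.Request) {
		handleSSHConfig(w, r, "temporary-setup-token", alwaysDeny)
	})
	_ = http.ListenAndServe(":7655", nil)
}
```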
The setup script was passing pulseURL instead of authToken as the last
parameter, causing 'Authentication required' errors when verifying SSH
connectivity. Fixed parameter order in fmt.Sprintf call.
Security improvements to HandleSSHConfig endpoint:
- Add defer r.Body.Close() for proper resource cleanup
- Return 413 status for oversized requests with errors.As check
- Switch from blocklist to allowlist-based directive validation
- Use case-insensitive parsing with comment stripping via bufio.Scanner
- Add Content-Type: application/json header to response
Codex identified that the blocklist approach was insufficient and recommended
allowlist validation to prevent unexpected directives. Only the specific SSH
directives Pulse needs for ProxyJump configuration are permitted.
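A sketch of allowlist validation along these lines; the exact directive set is an assumption:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// allowedDirectives is the allowlist: only directives Pulse itself emits
// for ProxyJump configs pass.
var allowedDirectives = map[string]bool{
	"host": true, "hostname": true, "proxyjump": true,
	"user": true, "identityfile": true, "stricthostkeychecking": true,
}

// validateSSHConfig scans line by line, strips comments, and rejects any
// directive outside the allowlist — case-insensitively, as ssh parses them.
func validateSSHConfig(config string) error {
	scanner := bufio.NewScanner(strings.NewReader(config))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if i := strings.Index(line, "#"); i >= 0 {
			line = strings.TrimSpace(line[:i]) // drop trailing comments
		}
		if line == "" {
			continue
		}
		directive := strings.ToLower(strings.Fields(line)[0])
		if !allowedDirectives[directive] {
			return fmt.Errorf("directive %q is not permitted", directive)
		}
	}
	return scanner.Err()
}

func main() {
	cfg := "Host minipc\n  HostName 192.168.0.161\n  ProxyJump root@delly\n  LocalCommand evil # blocked\n"
	fmt.Println(validateSSHConfig(cfg))
}
```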
Make temperature monitoring truly turnkey by automatically configuring
SSH ProxyJump when running in containers without pulse-sensor-proxy.
How it works:
1. Setup script runs on Proxmox host (e.g., delly)
2. Detects Pulse is containerized but proxy unavailable
3. Automatically configures SSH ProxyJump through the current host
4. Writes SSH config to /home/pulse/.ssh/config in container
5. Temperature monitoring "just works" without manual configuration
Changes:
- Track TEMP_MONITORING_AVAILABLE flag during proxy installation
- Auto-configure ProxyJump if proxy installation fails
- Add /api/system/ssh-config endpoint to write SSH config
- Only prompt for temperature monitoring if it can actually work
- Automatic SSH config: ProxyJump through Proxmox host
Before: User had to manually configure ProxyJump or install proxy
After: Temperature monitoring works automatically after setup script
This makes Docker deployments as turnkey as LXC deployments.