Enhance request ID middleware to support distributed tracing:
- Honor incoming X-Request-ID headers from upstream proxies/load balancers
- Use logging.WithRequestID() for consistent ID generation across codebase
- Return X-Request-ID in response headers for client correlation
- Include request_id in panic recovery logs for debugging
This enables better request tracing across multiple Pulse instances
and integrates with standard distributed tracing practices.
Implement 5 medium/low priority improvements identified in systematic review:
UX IMPROVEMENTS:
- Notify existing critical alerts when activating from pending_review state
Previously: critical alerts during observation window would never notify
Now: users receive notifications for active critical alerts after activation
Implementation: Added NotifyExistingAlert() method and logic in ActivateAlerts()
PERFORMANCE OPTIMIZATIONS:
- Replace per-alert cleanup goroutines with periodic batch cleanup
Prevents spawning thousands of goroutines during alert flapping
recentlyResolved entries are now cleaned up once per minute instead of by one goroutine per alert
- Simplify GetActiveAlerts() implementation
Removed the intermediate map copy; the lock is held slightly longer, but the operation is fast
Cleaner code with reduced memory allocation
CONFIGURATION VALIDATION:
- Validate timezone in quiet hours configuration
Invalid timezones now disable quiet hours with an error log instead of a silent fallback
Prevents unexpected behavior when a timezone is mistyped or invalid
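The timezone validation amounts to checking `time.LoadLocation` and disabling quiet hours on failure. A minimal sketch (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// validateQuietHoursTimezone returns the parsed location, or nil (quiet hours
// disabled) with a logged error when the timezone string is invalid.
func validateQuietHoursTimezone(tz string) *time.Location {
	loc, err := time.LoadLocation(tz)
	if err != nil {
		log.Printf("invalid quiet hours timezone %q, disabling quiet hours: %v", tz, err)
		return nil
	}
	return loc
}

func main() {
	fmt.Println(validateQuietHoursTimezone("UTC") != nil)           // true
	fmt.Println(validateQuietHoursTimezone("Europa/Berlim") != nil) // false: typo disables quiet hours
}
```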
GRACEFUL SHUTDOWN:
- Add 100ms delay in Stop() for background goroutine cleanup
Reduces risk of state corruption during shutdown
Allows escalation checker and periodic save to exit cleanly
Technical details:
- internal/alerts/alerts.go: Added NotifyExistingAlert(), optimized cleanup patterns
- internal/api/alerts.go: Enhanced ActivateAlerts() to notify existing critical alerts
- Removed ~20 lines of goroutine spawning code
- Added periodic cleanup for recentlyResolved map
- All changes preserve backward compatibility
Testing: Verified compilation with 'go build -o /dev/null ./...'
Source builds use commit hashes (main-c147fa1), not semantic versions
(v4.23.0), so update checks would always fail or show misleading
"Update Available" banners.
Changes:
- Add IsSourceBuild flag to VersionInfo struct
- Detect source builds via BUILD_FROM_SOURCE marker file
- Skip update check for source builds (like Docker)
- Update frontend to show "Built from source" message
- Disable manual update check button for source builds
- Return "source" deployment type for source builds
Backend:
- internal/updates/version.go: Add isSourceBuildEnvironment() detection
- internal/updates/manager.go: Skip check with appropriate message
- internal/api/types.go: Add isSourceBuild to API response
- internal/api/router.go: Include isSourceBuild in version endpoint
Frontend:
- src/api/updates.ts: Add isSourceBuild to VersionInfo type
- src/stores/updates.ts: Don't poll for updates on source builds
- src/components/Settings/Settings.tsx: Show "Built from source" message
Fixes the confusing "Update Available" banner for users who explicitly
chose --source to get latest main branch code.
Co-authored-by: Codex AI
Significantly enhanced network discovery feature to eliminate false positives,
provide real-time progress updates, and better error reporting.
Key improvements:
- Require positive Proxmox identification (version data, auth headers, or certificates)
instead of reporting any service on ports 8006/8007
- Add real-time progress tracking with phase/target counts and completion percentage
- Implement structured error reporting with IP, phase, type, and timestamp details
- Fix TLS timeout handling to prevent hangs on unresponsive hosts
- Expose progress and structured errors via WebSocket for UI consumption
- Reduce log verbosity by moving discovery logs to debug level
- Fix duplicate IP counting to ensure progress reaches 100%
Breaking changes: None (backward compatible with legacy API methods)
Added containerized and containerId fields to /api/version endpoint
to enable automatic temperature proxy installation for LXC containers.
Changes:
- Added Containerized bool field to VersionResponse
- Added ContainerId string field to VersionResponse
- Detect containerization by checking /run/systemd/container file
- Extract container ID from hostname for LXC containers
- Set deployment type from container type (lxc/docker)
This allows the PVE setup script to:
1. Detect that Pulse is running in a container
2. Find the container ID by matching IPs
3. Automatically install pulse-sensor-proxy on the host
4. Configure bind mount for secure socket communication
Fixes the issue where setup script showed 'Proxy not available'
even when Pulse was containerized.
Critical bug fix: The setup script's format string had 33 placeholders
but was only receiving 27 arguments, causing:
- INSTALLER_URL to receive authToken instead of pulseURL
- This made curl try to resolve the token value as a hostname
- Error: 'curl: (6) Could not resolve host: N7AE3P'
- Token ID showed '%!s(MISSING)' in manual setup instructions
Fixed by:
- Added missing tokenName at position 7
- Added literal '%s' strings for version_ge printf placeholders
- Added authToken arguments for Authorization headers (positions 29, 31)
- Ensured all 33 format placeholders have corresponding arguments
Now generates correct URLs:
- INSTALLER_URL: http://192.168.0.160:7655/api/install/install-sensor-proxy.sh
- --pulse-server: http://192.168.0.160:7655
- Token ID: pulse-monitor@pam!pulse-192-168-0-160-[timestamp]
Setup script improvements (config_handlers.go):
- Remove redundant mount configuration and container restart logic
- Let installer handle all mount/restart operations (single source of truth)
- Eliminate hard-coded mp0 assumption
Installer improvements (install-sensor-proxy.sh):
- Add mount configuration persistence validation via pct config check
- Surface pct set errors instead of silencing with 2>/dev/null
- Capture and display curl download errors with temp files
- Check systemd daemon-reload/enable/restart exit codes
- Show journalctl output when service fails to start
- Make socket verification fatal (was warning)
- Provide clear manual steps when hot-plug fails on running container
This makes the installation fail fast with actionable error messages
instead of silently proceeding with broken configuration.
Changes:
- Replace PULSE_SENSOR_PROXY_FALLBACK_URL env export with --pulse-server argument
- Remove --quiet flag from installer invocation to show download progress
- More reliable than environment variable inheritance in subshells
This ensures the proxy installer can reliably download the binary from the
Pulse server fallback when GitHub is unavailable.
The setup script was filtering installer output to only show lines with
✓|⚠️|ERROR, which hid successful download messages like:
'Downloading pulse-sensor-proxy-linux-amd64 from Pulse server...'
This made it appear the installer failed even when the Pulse server
fallback download succeeded. Changed to show all installer output for
better visibility and debugging.
Users will now see the complete installation flow including:
- GitHub download attempt (expected to fail for dev builds)
- Pulse server fallback download (should succeed)
- All setup steps and validations
Improves transparency and reduces confusion during setup
Version check was blocking dev/main builds (e.g., '0.0.0-main-da9da6f')
from using temperature proxy, even though they have the latest code.
Added regex to skip version check for builds matching:
- ^0\.0\.0-main (main branch builds)
- ^dev (dev builds)
- ^main (main version strings)
These builds are assumed to have proxy support since they're built from
the latest codebase.
Fixes testing workflow when installing Pulse with --main flag
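The actual check lives in the generated setup script, but the skip pattern translates directly. A Go sketch of the same regex (names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// devBuildPattern matches version strings that should skip the proxy
// version check because they come from the latest codebase.
var devBuildPattern = regexp.MustCompile(`^0\.0\.0-main|^dev|^main`)

func main() {
	for _, v := range []string{"0.0.0-main-da9da6f", "dev-abc123", "v4.23.0"} {
		fmt.Println(v, devBuildPattern.MatchString(v))
	}
	// prints:
	// 0.0.0-main-da9da6f true
	// dev-abc123 true
	// v4.23.0 false
}
```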
The version check was blocking ALL v4.23.0 users from temperature monitoring,
even non-containerized ones who don't need the proxy.
Changed to only check version when PULSE_IS_CONTAINERIZED=true, since:
- Non-containerized Pulse can use direct SSH on any version
- Containerized Pulse requires v4.24.0+ for proxy support
This ensures non-containerized v4.23.0 users can still use temperature monitoring
via direct SSH while properly blocking proxy setup for containerized v4.23.0.
Fixes regression introduced in commit fbe4ab83a
Improves configuration handling and system settings APIs to support
v4.24.0 features including runtime logging controls, adaptive polling
configuration, and enhanced config export/persistence.
Changes:
- Add config override system for discovery service
- Enhance system settings API with runtime logging controls
- Improve config persistence and export functionality
- Update security setup handling
- Refine monitoring and discovery service integration
These changes provide the backend support for the configuration
features documented in the v4.24.0 release.
Implements a structured logging package:
- LOG_LEVEL/LOG_FORMAT environment variable support
- Debug-level guards for hot paths
- Enriched error messages with actionable context
- Stack trace capture for production debugging
Improves observability and reduces log overhead in high-frequency polling loops.
Task 8 of 10 complete. Exposes read-only scheduler health data including:
- Queue depth and distribution by instance type
- Dead-letter queue inspection (top 25 tasks with error details)
- Circuit breaker states (instance-level)
- Staleness scores per instance
New API endpoint:
GET /api/monitoring/scheduler/health (requires authentication)
New snapshot methods:
- StalenessTracker.Snapshot() - exports all staleness data
- TaskQueue.Snapshot() - queue depth & per-type distribution
- TaskQueue.PeekAll() - dead-letter task inspection
- circuitBreaker.State() - exports state, failures, retryAt
- Monitor.SchedulerHealth() - aggregates all health data
Documentation updated with API spec, field descriptions, and usage examples.
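The snapshot methods share one pattern: copy the data out under the lock so the HTTP handler never touches live scheduler state. A minimal sketch of `TaskQueue.Snapshot()` (the struct fields and JSON shape here are assumptions, not the documented API):

```go
package main

import (
	"fmt"
	"sync"
)

// TaskQueue is a minimal stand-in for the scheduler's queue.
type TaskQueue struct {
	mu      sync.Mutex
	pending map[string]int // instance type -> queued task count
}

// QueueSnapshot is a read-only view suitable for the health endpoint.
type QueueSnapshot struct {
	Depth  int            `json:"depth"`
	ByType map[string]int `json:"byType"`
}

// Snapshot copies queue depth and per-type distribution under the lock,
// so callers can serialize it without racing the scheduler.
func (q *TaskQueue) Snapshot() QueueSnapshot {
	q.mu.Lock()
	defer q.mu.Unlock()
	s := QueueSnapshot{ByType: make(map[string]int, len(q.pending))}
	for typ, n := range q.pending {
		s.ByType[typ] = n
		s.Depth += n
	}
	return s
}

func main() {
	q := &TaskQueue{pending: map[string]int{"pve": 3, "pbs": 1}}
	snap := q.Snapshot()
	fmt.Println(snap.Depth, snap.ByType["pve"]) // prints "4 3"
}
```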
Add regression test for PR #575 to ensure rate limit headers are formatted
as decimal strings (e.g., "10") instead of Unicode control characters.
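The bug class under test is easy to reproduce: converting an integer with a string conversion instead of `strconv.Itoa` yields the code point, not the decimal text. `formatLimit` is an illustrative helper:

```go
package main

import (
	"fmt"
	"strconv"
)

// formatLimit renders a rate limit as decimal text for a response header.
func formatLimit(n int) string {
	return strconv.Itoa(n)
}

func main() {
	limit := 10
	// string(rune(10)) converts code point 10 - the newline control
	// character - not the text "10".
	fmt.Printf("%q vs %q\n", string(rune(limit)), formatLimit(limit)) // prints "\n" vs "10"
}
```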
Also fixes pre-existing fmt.Sprintf argument count mismatch in PVE setup
script (internal/api/config_handlers.go:3077). The template had 28 format
specifiers (excluding %%s escape sequence) but was only receiving 24
arguments. Added missing pulseURL and tokenName arguments to match template.
Related: #575
Adds a one-command Docker deployment flow that:
- Detects if running in LXC and installs Docker if needed
- Automatically installs pulse-sensor-proxy on the Proxmox host
- Configures bind mount for proxy socket into LXC
- Generates optimized docker-compose.yml with proxy socket
- Enables temperature monitoring via host-side proxy
The install-docker.sh script handles the complete setup including:
- Docker installation (if needed)
- ACL configuration for container UIDs
- Bind mount setup
- Automatic apparmor=unconfined for socket access
Accessible via: curl -sSL http://pulse:7655/api/install/install-docker.sh | bash
When the setup script detects TEMPERATURE_PROXY_KEY (proxy is available),
it now shows a clear success message instead of attempting SSH verification.
The verification check doesn't work with proxy-based setups since the
container doesn't have SSH keys - all temperature collection happens via
the Unix socket to pulse-sensor-proxy, which handles SSH.
Now shows:
✓ Temperature monitoring configured via pulse-sensor-proxy
Temperature data will appear in the dashboard within 10 seconds
Instead of the misleading:
⚠️ Unable to verify SSH connectivity.
Temperature data will appear once SSH connectivity is configured.
When pulse-sensor-proxy is available, the setup script now automatically
detects and uses the proxy's SSH public key instead of trying to generate
keys inside the container.
This fixes temperature monitoring setup for Docker deployments where:
- Container has proxy socket mounted at /mnt/pulse-proxy
- Proxy handles SSH connections to nodes
- Setup script needs to distribute the proxy's key, not container's key
The fix queries /api/system/proxy-public-key during setup script generation
and overrides SSH_SENSORS_PUBLIC_KEY if the proxy is available.
Tested with Docker on native Proxmox host (delly) - temperatures collected
successfully via proxy socket.
Changed heredoc delimiter from <<'EOF' to <<EOF to allow bash variable
expansion. Previously $SSH_PUBLIC_KEY and $SSH_RESTRICTED_KEY_ENTRY
were being passed as literal strings instead of their actual values,
so cluster nodes never received the correct SSH keys.
This fixes cluster node ProxyJump setup - now both restricted and
unrestricted keys are properly added to cluster nodes.
The setup script now adds both the restricted and unrestricted SSH keys
to ALL cluster nodes, not just the first one. This makes temperature
monitoring truly turnkey - you say 'yes' to configure cluster nodes and
it automatically sets up both keys on each node.
This ensures:
- All nodes can act as ProxyJump hosts if needed
- All nodes can provide temperature data via sensors
- No manual SSH key configuration required
Fixes turnkey cluster temperature monitoring setup.
When using ProxyJump for cluster temperature monitoring, the jump host
(typically the first cluster node) needs an unrestricted SSH key to allow
connection forwarding. Previously only the restricted key with
command="sensors -j" was added, which blocked ProxyJump.
Now the setup script adds TWO keys:
1. Unrestricted key (for ProxyJump/connection forwarding)
2. Restricted key (for running sensors -j directly)
This allows containerized Pulse to:
- Connect through the jump host to other cluster nodes
- Collect temperature data from all cluster members
Fixes cluster temperature monitoring for Docker/LXC deployments.
Added logic to resolve IP addresses for cluster nodes and include them as
HostName entries in the SSH config. Without this, Pulse couldn't connect
to cluster nodes like 'minipc' because the container couldn't resolve
the hostname.
Uses getent to resolve node names to IPs, with fallback to hostname if
resolution fails (for environments where DNS works).
- Changed SSH key generation from RSA 2048 to Ed25519 (more secure, faster, smaller)
- Added openssh-client package to Docker image (required for temperature monitoring)
- Updated SSH config template to use id_ed25519
- Removed unused crypto/rsa and crypto/x509 imports
Ed25519 provides better security with shorter keys and faster operations
compared to RSA. The container now has SSH client tools needed to connect
to Proxmox nodes for temperature data collection.
The setup script was generating SSH config with IdentityFile ~/.ssh/id_ed25519
but Pulse generates id_rsa keys. Updated SSH config template to use id_rsa
to match the actual key type generated by the monitoring system.
Added middleware exception for /api/system/ssh-config when a valid setup
token is provided, matching the pattern used for verify-temperature-ssh.
The middleware was blocking ssh-config requests before they reached the
handler, even though the handler had setup token validation logic.
The ssh-config endpoint was using RequireAuth which only accepts Pulse
API tokens, but the setup script sends a temporary setup token via the
auth_token parameter. Updated to follow the same pattern as
verify-temperature-ssh: check setup token first, then fall back to API auth.
This fixes the 401 error when the setup script tries to configure ProxyJump
for containerized Pulse deployments.
The setup script was passing pulseURL instead of authToken as the last
parameter, causing 'Authentication required' errors when verifying SSH
connectivity. Fixed parameter order in fmt.Sprintf call.
Security improvements to HandleSSHConfig endpoint:
- Add defer r.Body.Close() for proper resource cleanup
- Return 413 status for oversized requests with errors.As check
- Switch from blocklist to allowlist-based directive validation
- Use case-insensitive parsing with comment stripping via bufio.Scanner
- Add Content-Type: application/json header to response
Codex identified that blocklist approach was insufficient and recommended
allowlist validation to prevent unexpected directives. Only permits the
specific SSH directives Pulse needs for ProxyJump configuration.
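The allowlist validation can be sketched as a line-by-line scan. `validateSSHConfig` and the directive set here are illustrative, not the exact production list:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// allowedDirectives permits only the SSH config directives needed for
// ProxyJump configuration (illustrative subset).
var allowedDirectives = map[string]bool{
	"host": true, "hostname": true, "proxyjump": true,
	"user": true, "identityfile": true, "stricthostkeychecking": true,
}

// validateSSHConfig scans line by line, strips comments, lowercases the
// directive, and rejects anything outside the allowlist.
func validateSSHConfig(config string) error {
	sc := bufio.NewScanner(strings.NewReader(config))
	for sc.Scan() {
		line := sc.Text()
		if i := strings.Index(line, "#"); i >= 0 {
			line = line[:i] // strip comments
		}
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue // blank or comment-only line
		}
		if !allowedDirectives[strings.ToLower(fields[0])] {
			return fmt.Errorf("directive %q not permitted", fields[0])
		}
	}
	return sc.Err()
}

func main() {
	ok := "Host minipc\n  HostName 192.168.0.5\n  ProxyJump delly # jump host\n"
	fmt.Println(validateSSHConfig(ok))                         // prints "<nil>"
	fmt.Println(validateSSHConfig("PermitLocalCommand yes\n")) // rejected with an error
}
```

An allowlist fails closed: a new or unexpected directive is rejected by default, whereas a blocklist silently admits anything its authors did not anticipate.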
Make temperature monitoring truly turnkey by automatically configuring
SSH ProxyJump when running in containers without pulse-sensor-proxy.
How it works:
1. Setup script runs on Proxmox host (e.g., delly)
2. Detects Pulse is containerized but proxy unavailable
3. Automatically configures SSH ProxyJump through the current host
4. Writes SSH config to /home/pulse/.ssh/config in container
5. Temperature monitoring "just works" without manual configuration
Changes:
- Track TEMP_MONITORING_AVAILABLE flag during proxy installation
- Auto-configure ProxyJump if proxy installation fails
- Add /api/system/ssh-config endpoint to write SSH config
- Only prompt for temperature monitoring if it can actually work
- Automatic SSH config: ProxyJump through Proxmox host
Before: User had to manually configure ProxyJump or install proxy
After: Temperature monitoring works automatically after setup script
This makes Docker deployments as turnkey as LXC deployments.
Changed the SSH connectivity check failure message from a scary
"FAILED" warning with complex ProxyJump instructions to a simple
informational message.
Before:
- ⚠️ SSH connectivity FAILED for: ...
- Complex multi-line ProxyJump configuration
- Confusing for users who don't need temperature monitoring
After:
- ℹ️ Temperature monitoring will be available once SSH configured
- Simple list of pending nodes
- Brief note about pulse-sensor-proxy for LXC
- Link to docs for details
This makes the setup experience much more turnkey by reducing
noise and focusing on successful completion rather than optional
features that require additional configuration.
Setup Script Improvements:
- Remove confusing "Could not download installer" warning for proxy
- Skip SSH connectivity check in containerized environments without proxy
- Simplify proxy installation prompts (automatic when available)
- Better messaging for containerized setups
These changes make the setup script more turnkey by reducing noise
and warnings that don't apply to test/development environments or
containerized installations.
Discovery Fixes:
- Always update cache even when scan finds no servers (prevents stale data)
- Remove automatic re-add of deleted nodes to discovery (was causing confusion)
- Optimize Docker subnet scanning from 762 IPs to 254 IPs (3x faster)
- Add getHostSubnetFromGateway() to detect host network from container
Frontend Type Fixes:
- Fix ThresholdsTable editScope type errors
- Fix SnapshotAlertConfig index signature
- Remove unused variable in Settings.tsx
These changes make discovery faster, more reliable, and fix the issue where
deleted nodes would persist in the discovery cache or immediately reappear.
Fixes container detection when Docker health checks are enabled.
Previously, the setup script only matched "running" status exactly,
causing it to skip containers showing "running (healthy)" status.
This prevented:
- Proper detection of containerized Pulse installations
- pulse-sensor-proxy installation for temperature monitoring
- Temperature data collection for affected users
The fix captures the full status output and searches for "running"
anywhere in the output, supporting all status variations:
- status: running
- status: running (healthy)
- status: running (unhealthy)
Related to #101