Implements a structured logging package with LOG_LEVEL/LOG_FORMAT env support, debug-level guards for hot paths, enriched error messages with actionable context, and stack-trace capture for production debugging. Improves observability and reduces log overhead in high-frequency polling loops.
Task 8 of 10 complete. Exposes read-only scheduler health data including:
- Queue depth and distribution by instance type
- Dead-letter queue inspection (top 25 tasks with error details)
- Circuit breaker states (instance-level)
- Staleness scores per instance
New API endpoint:
GET /api/monitoring/scheduler/health (requires authentication)
New snapshot methods:
- StalenessTracker.Snapshot() - exports all staleness data
- TaskQueue.Snapshot() - queue depth & per-type distribution
- TaskQueue.PeekAll() - dead-letter task inspection
- circuitBreaker.State() - exports state, failures, retryAt
- Monitor.SchedulerHealth() - aggregates all health data
Documentation updated with API spec, field descriptions, and usage examples.
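The new endpoint can be exercised from the command line. A minimal sketch, assuming Bearer-token auth (the exact auth header Pulse expects may differ) and the default port used elsewhere in these notes:

```shell
#!/bin/sh
# Query the scheduler health endpoint; PULSE_URL/PULSE_TOKEN are placeholders.
PULSE_URL="${PULSE_URL:-http://pulse:7655}"
curl -fsS -H "Authorization: Bearer ${PULSE_TOKEN:-changeme}" \
     "$PULSE_URL/api/monitoring/scheduler/health" \
  || echo "health endpoint unreachable (is Pulse running?)"
```

The response aggregates the snapshot methods listed above (queue depth, dead-letter tasks, breaker states, staleness scores) into one JSON document.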
Add regression test for PR #575 to ensure rate limit headers are formatted
as decimal strings (e.g., "10") instead of Unicode control characters.
Also fixes a pre-existing fmt.Sprintf argument count mismatch in the PVE setup
script (internal/api/config_handlers.go:3077). The template had 28 format
specifiers (excluding the %%s escape sequence) but was only receiving 24
arguments. Added the missing pulseURL and tokenName arguments to match the template.
Related: #575
Adds a one-command Docker deployment flow that:
- Detects if running in LXC and installs Docker if needed
- Automatically installs pulse-sensor-proxy on the Proxmox host
- Configures bind mount for proxy socket into LXC
- Generates optimized docker-compose.yml with proxy socket
- Enables temperature monitoring via host-side proxy
The install-docker.sh script handles the complete setup including:
- Docker installation (if needed)
- ACL configuration for container UIDs
- Bind mount setup
- Automatic apparmor=unconfined for socket access
Accessible via: curl -sSL http://pulse:7655/api/install/install-docker.sh | bash
When the setup script detects TEMPERATURE_PROXY_KEY (proxy is available),
it now shows a clear success message instead of attempting SSH verification.
The verification check doesn't work with proxy-based setups since the
container doesn't have SSH keys - all temperature collection happens via
the Unix socket to pulse-sensor-proxy, which handles SSH.
Now shows:
✓ Temperature monitoring configured via pulse-sensor-proxy
Temperature data will appear in the dashboard within 10 seconds
Instead of the misleading:
⚠️ Unable to verify SSH connectivity.
Temperature data will appear once SSH connectivity is configured.
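The branch can be sketched in shell; the variable name and messages come from this change, while the surrounding script structure (and the demo key value) are illustrative:

```shell
#!/bin/sh
# Demo value; in the real script this is set by the proxy install step.
TEMPERATURE_PROXY_KEY="ssh-ed25519 AAAA...example"

if [ -n "$TEMPERATURE_PROXY_KEY" ]; then
  # Proxy handles SSH on the host; nothing to verify inside the container.
  echo "✓ Temperature monitoring configured via pulse-sensor-proxy"
  echo "  Temperature data will appear in the dashboard within 10 seconds"
else
  echo "verifying SSH connectivity..."   # old path, skipped when the proxy is present
fi
```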
When pulse-sensor-proxy is available, the setup script now automatically
detects and uses the proxy's SSH public key instead of trying to generate
keys inside the container.
This fixes temperature monitoring setup for Docker deployments where:
- Container has proxy socket mounted at /mnt/pulse-proxy
- Proxy handles SSH connections to nodes
- Setup script needs to distribute the proxy's key, not container's key
The fix queries /api/system/proxy-public-key during setup script generation
and overrides SSH_SENSORS_PUBLIC_KEY if the proxy is available.
Tested with Docker on native Proxmox host (delly) - temperatures collected
successfully via proxy socket.
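The query-and-override step can be sketched as follows; the endpoint path comes from this change, while the response handling and fallback message are illustrative:

```shell
#!/bin/sh
# Prefer the proxy's SSH public key when the endpoint answers.
PULSE_URL="${PULSE_URL:-http://pulse:7655}"
PROXY_KEY=$(curl -fsS "$PULSE_URL/api/system/proxy-public-key" 2>/dev/null || true)

if [ -n "$PROXY_KEY" ]; then
  SSH_SENSORS_PUBLIC_KEY="$PROXY_KEY"   # override the container-generated key
fi
echo "key in use: ${SSH_SENSORS_PUBLIC_KEY:-<container key>}"
```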
Changed the heredoc delimiter from <<'EOF' to <<EOF to allow bash variable
expansion. Previously $SSH_PUBLIC_KEY and $SSH_RESTRICTED_KEY_ENTRY
were passed as literal strings instead of their actual values,
so cluster nodes never received the correct SSH keys.
This fixes cluster node ProxyJump setup - now both restricted and
unrestricted keys are properly added to cluster nodes.
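The quoting difference is easy to demonstrate in isolation (demo key value is a placeholder):

```shell
#!/bin/sh
SSH_PUBLIC_KEY="ssh-ed25519 AAAA...demo"

# Quoted delimiter: no expansion — the node would receive the literal string.
cat <<'EOF'
key=$SSH_PUBLIC_KEY
EOF

# Unquoted delimiter: the variable expands, which is what the fix relies on.
cat <<EOF
key=$SSH_PUBLIC_KEY
EOF
```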
The setup script now adds both the restricted and unrestricted SSH keys
to ALL cluster nodes, not just the first one. This makes temperature
monitoring truly turnkey - you say 'yes' to configure cluster nodes and
it automatically sets up both keys on each node.
This ensures:
- All nodes can act as ProxyJump hosts if needed
- All nodes can provide temperature data via sensors
- No manual SSH key configuration required
Fixes turnkey cluster temperature monitoring setup.
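The per-node distribution described above can be sketched as a loop; node names, key material, and variable names here are illustrative, not the script's actual identifiers:

```shell
#!/bin/sh
NODES="pve1 pve2 pve3"
UNRESTRICTED_KEY="ssh-ed25519 AAAA...jump pulse@container"
RESTRICTED_KEY='command="sensors -j",no-port-forwarding ssh-ed25519 AAAA...sense pulse@container'

for node in $NODES; do
  # In the real script these entries are appended to the node's authorized_keys.
  echo "[$node] add: $UNRESTRICTED_KEY"
  echo "[$node] add: $RESTRICTED_KEY"
done
```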
When using ProxyJump for cluster temperature monitoring, the jump host
(typically the first cluster node) needs an unrestricted SSH key to allow
connection forwarding. Previously only the restricted key with
command="sensors -j" was added, which blocked ProxyJump.
Now the setup script adds TWO keys:
1. Unrestricted key (for ProxyJump/connection forwarding)
2. Restricted key (for running sensors -j directly)
This allows containerized Pulse to:
- Connect through the jump host to other cluster nodes
- Collect temperature data from all cluster members
Fixes cluster temperature monitoring for Docker/LXC deployments.
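Illustrative authorized_keys entries for the two-key scheme (key material and comment fields are placeholders; the restriction options beyond command= are an assumption):

```shell
#!/bin/sh
AUTHORIZED_ENTRIES='# 1. Unrestricted: lets this host act as a ProxyJump relay.
ssh-ed25519 AAAA...pulse pulse@container
# 2. Restricted: forced command returns sensor JSON and nothing else.
command="sensors -j",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA...pulse pulse@container'

printf '%s\n' "$AUTHORIZED_ENTRIES"
```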
Added logic to resolve IP addresses for cluster nodes and include them as
HostName entries in the SSH config. Without this, Pulse couldn't connect
to cluster nodes like 'minipc' because the container couldn't resolve
the hostname.
Uses getent to resolve node names to IPs, falling back to the bare hostname
if resolution fails (which still works in environments where the container
can resolve DNS itself).
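A minimal sketch of the resolve-with-fallback logic (the function name is illustrative):

```shell
#!/bin/sh
resolve_node() {
  # Resolve a node name to an IPv4 address via getent; fall back to the
  # bare name when resolution fails so SSH can still try DNS.
  ip=$(getent ahostsv4 "$1" 2>/dev/null | awk '{print $1; exit}')
  echo "${ip:-$1}"
}

resolve_node localhost       # typically prints 127.0.0.1
resolve_node no-such-node    # prints the name unchanged (fallback)
```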
- Changed SSH key generation from RSA 2048 to Ed25519 (more secure, faster, smaller)
- Added openssh-client package to Docker image (required for temperature monitoring)
- Updated SSH config template to use id_ed25519
- Removed unused crypto/rsa and crypto/x509 imports
Ed25519 provides better security with shorter keys and faster operations
compared to RSA. The container now has SSH client tools needed to connect
to Proxmox nodes for temperature data collection.
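The new key generation amounts to a single ssh-keygen invocation; this sketch uses a temp directory rather than the container's real key path, and skips gracefully where ssh-keygen is absent:

```shell
#!/bin/sh
if command -v ssh-keygen >/dev/null 2>&1; then
  keydir=$(mktemp -d)
  ssh-keygen -q -t ed25519 -N "" -C "pulse@container" -f "$keydir/id_ed25519"
  keytype=$(awk '{print $1}' "$keydir/id_ed25519.pub")
  rm -rf "$keydir"
else
  keytype="ssh-ed25519"   # ssh-keygen unavailable; assume the documented type
fi
echo "generated key type: $keytype"
```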
The setup script was generating SSH config with IdentityFile ~/.ssh/id_ed25519
but Pulse generates id_rsa keys. Updated SSH config template to use id_rsa
to match the actual key type generated by the monitoring system.
Added middleware exception for /api/system/ssh-config when a valid setup
token is provided, matching the pattern used for verify-temperature-ssh.
The middleware was blocking ssh-config requests before they reached the
handler, even though the handler had setup token validation logic.
The ssh-config endpoint was using RequireAuth, which only accepts Pulse
API tokens, but the setup script sends a temporary setup token via the
auth_token parameter. Updated it to follow the same pattern as
verify-temperature-ssh: check the setup token first, then fall back to API auth.
This fixes the 401 error when the setup script tries to configure ProxyJump
for containerized Pulse deployments.
The setup script was passing pulseURL instead of authToken as the last
parameter, causing 'Authentication required' errors when verifying SSH
connectivity. Fixed parameter order in fmt.Sprintf call.
Security improvements to HandleSSHConfig endpoint:
- Add defer r.Body.Close() for proper resource cleanup
- Return 413 status for oversized requests with errors.As check
- Switch from blocklist to allowlist-based directive validation
- Use case-insensitive parsing with comment stripping via bufio.Scanner
- Add Content-Type: application/json header to response
Codex identified that the blocklist approach was insufficient and recommended
allowlist validation to prevent unexpected directives. The allowlist permits
only the specific SSH directives Pulse needs for ProxyJump configuration.
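The real check lives in the Go handler; the allowlist idea can be sketched in shell, with an illustrative directive set (the actual permitted directives may differ):

```shell
#!/bin/sh
ALLOWED="host hostname proxyjump user identityfile stricthostkeychecking"

validate_line() {
  line="${1%%#*}"                       # strip trailing comments
  set -- $line                          # word-split: $1 becomes the directive
  [ -z "${1:-}" ] && return 0           # blank/comment-only line
  d=$(echo "$1" | tr 'A-Z' 'a-z')       # case-insensitive comparison
  case " $ALLOWED " in
    *" $d "*) return 0 ;;
    *) echo "rejected directive: $d"; return 1 ;;
  esac
}

validate_line "ProxyJump delly"                          # accepted, silent
validate_line "PermitLocalCommand yes  # not allowed" || true   # rejected
```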
Make temperature monitoring truly turnkey by automatically configuring
SSH ProxyJump when running in containers without pulse-sensor-proxy.
How it works:
1. Setup script runs on Proxmox host (e.g., delly)
2. Detects Pulse is containerized but proxy unavailable
3. Automatically configures SSH ProxyJump through the current host
4. Writes SSH config to /home/pulse/.ssh/config in container
5. Temperature monitoring "just works" without manual configuration
Changes:
- Track TEMP_MONITORING_AVAILABLE flag during proxy installation
- Auto-configure ProxyJump if proxy installation fails
- Add /api/system/ssh-config endpoint to write SSH config
- Only prompt for temperature monitoring if it can actually work
- Automatic SSH config: ProxyJump through Proxmox host
Before: User had to manually configure ProxyJump or install proxy
After: Temperature monitoring works automatically after setup script
This makes Docker deployments as turnkey as LXC deployments.
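An illustrative version of the SSH config the endpoint writes to /home/pulse/.ssh/config (host names, IPs, and the key path are examples, not the generated values):

```shell
#!/bin/sh
SSH_CONFIG='Host minipc
    HostName 192.168.1.50
    ProxyJump root@delly
    User root
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking accept-new'

printf '%s\n' "$SSH_CONFIG"
```

With this in place, `ssh minipc` from inside the container is transparently relayed through the Proxmox host.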
Changed the SSH connectivity check failure message from a scary
"FAILED" warning with complex ProxyJump instructions to a simple
informational message.
Before:
- ⚠️ SSH connectivity FAILED for: ...
- Complex multi-line ProxyJump configuration
- Confusing for users who don't need temperature monitoring
After:
- ℹ️ Temperature monitoring will be available once SSH is configured
- Simple list of pending nodes
- Brief note about pulse-sensor-proxy for LXC
- Link to docs for details
This makes the setup experience much more turnkey by reducing
noise and focusing on successful completion rather than optional
features that require additional configuration.
Setup Script Improvements:
- Remove confusing "Could not download installer" warning for proxy
- Skip SSH connectivity check in containerized environments without proxy
- Simplify proxy installation prompts (automatic when available)
- Better messaging for containerized setups
These changes make the setup script more turnkey by reducing noise
and warnings that don't apply to test/development environments or
containerized installations.
Discovery Fixes:
- Always update cache even when scan finds no servers (prevents stale data)
- Remove automatic re-add of deleted nodes to discovery (was causing confusion)
- Optimize Docker subnet scanning from 762 IPs to 254 IPs (3x faster)
- Add getHostSubnetFromGateway() to detect host network from container
Frontend Type Fixes:
- Fix ThresholdsTable editScope type errors
- Fix SnapshotAlertConfig index signature
- Remove unused variable in Settings.tsx
These changes make discovery faster, more reliable, and fix the issue where
deleted nodes would persist in the discovery cache or immediately reappear.
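The subnet-detection idea behind getHostSubnetFromGateway (a Go helper) can be sketched in shell; the gateway value here is a stand-in for what `ip route` reports inside the container, and the /24 assumption is what shrinks the scan to 254 IPs:

```shell
#!/bin/sh
# In the container this would come from: ip route | awk '/^default/ {print $3}'
gw="172.17.0.1"

# Assume a /24 around the gateway and scan that instead of every bridge range.
subnet="${gw%.*}.0/24"
echo "$subnet"
```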
Fixes container detection when Docker health checks are enabled.
Previously, the setup script only matched "running" status exactly,
causing it to skip containers showing "running (healthy)" status.
This prevented:
- Proper detection of containerized Pulse installations
- pulse-sensor-proxy installation for temperature monitoring
- Temperature data collection for affected users
The fix captures the full status output and searches for "running"
anywhere in the output, supporting all status variations:
- status: running
- status: running (healthy)
- status: running (unhealthy)
Related to #101
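The relaxed match can be demonstrated directly; the status strings are the ones listed above:

```shell
#!/bin/sh
# Before: only "status: running" matched exactly.
# After: "running" anywhere in the status output counts as running.
for status in "status: running" "status: running (healthy)" "status: running (unhealthy)"; do
  case "$status" in
    *running*) echo "detected: $status" ;;
    *)         echo "skipped:  $status" ;;
  esac
done
```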
- Changed temperature monitoring menu from [K/r/s] to [1/2/3]
- Now all multi-choice menus use numbers consistently
- Main menu: [1/2/3]
- Temperature menu: [1/2/3] (was [K/r/s])
- Yes/no questions still use y/n (standard convention)
- Fix script input handling to work with standard curl | bash pattern by prioritizing /dev/tty
- Add Raspberry Pi temperature sensor support (cpu_thermal chip and generic temp sensors)
- Add comprehensive documentation for turnkey standalone node setup
- Fix printf formatting error in setup script
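The /dev/tty fix works because with `curl ... | bash` stdin is the script itself, so prompts must read from the controlling terminal instead. A minimal sketch (the fallback behavior is an assumption):

```shell
#!/bin/sh
# Prefer the controlling terminal for prompts when one is available.
if [ -r /dev/tty ]; then
  input=/dev/tty
else
  input=/dev/stdin   # non-interactive fallback (CI, automation)
fi
echo "prompting via: $input"
# The real script would then do: read -r answer < "$input"
```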
Implements automatic temperature monitoring setup for standalone
Proxmox/Pimox nodes without manual SSH key configuration.
Changes:
- Add /api/system/proxy-public-key endpoint to expose proxy's SSH public key
- Setup script now detects standalone nodes (non-cluster)
- Auto-fetches and installs proxy SSH key with forced commands
- Add Raspberry Pi temperature support via cpu_thermal and /sys/class/thermal
- Enhance setup script with better error handling for lm-sensors installation
- Add RPi detection to skip lm-sensors and use native thermal interface
Security:
- Public key endpoint is safe (public keys are meant to be public)
- All installed keys use forced command="sensors -j" with full restrictions
- No shell access, port forwarding, or other SSH features enabled
Fixes two issues with the sensor proxy installation:
1. Local node IP detection now uses exact matching instead of substring matching to avoid false negatives
2. Removes duplicate output filtering in the setup script wrapper
These changes ensure that the proxy SSH key is correctly configured on the local node during cluster installations.
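The exact-match fix can be sketched with a whole-line fixed-string grep; IPs and the helper name are illustrative:

```shell
#!/bin/sh
LOCAL_IPS="192.168.1.1
192.168.1.100"

is_local() {
  # Exact, whole-line, fixed-string match; a substring check can mis-match
  # prefixes such as 192.168.1.1 against 192.168.1.100.
  printf '%s\n' "$LOCAL_IPS" | grep -qxF "$1"
}

is_local 192.168.1.1  && echo "192.168.1.1 is local"
is_local 192.168.1.10 || echo "192.168.1.10 is not local"
```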
Implements an automated cleanup workflow when nodes are deleted from Pulse, removing all monitoring footprint from the host:
- New RPC handler in the sensor proxy for cleanup requests
- Enhanced node deletion modal with detailed cleanup explanations
- Improved SSH key management with proper tagging for atomic updates
The setup script was restarting the container but never running the
pct set command to configure the bind mount. This meant the socket
was never accessible inside the container.
Now runs: pct set <ctid> -mp0 /run/pulse-sensor-proxy,mp=/mnt/pulse-proxy
before restarting the container to ensure the mount is configured.
Improvements to pulse-sensor-proxy:
- Fix cluster discovery to use pvecm status for IP addresses instead of node names
- Add standalone node support for non-clustered Proxmox hosts
- Enhanced SSH key push with detailed logging, success/failure tracking, and error reporting
- Add --pulse-server flag to installer for custom Pulse URLs
- Configure www-data group membership for Proxmox IPC access
UI and API cleanup:
- Remove unused "Ensure cluster keys" button from Settings
- Remove /api/diagnostics/temperature-proxy/ensure-cluster-keys endpoint
- Remove EnsureClusterKeys method from tempproxy client
The setup script already handles SSH key distribution during initial configuration,
making the manual refresh button redundant.