Commit graph

260 commits

Author SHA1 Message Date
rcourtman
dbf32baaa0 Rebuild agent token bindings on API token config reload
When api_tokens.json is modified on disk, the ConfigWatcher reloads
the tokens into memory. However, the Monitor's dockerTokenBindings and
hostTokenBindings maps were not synchronized with the new token set,
causing orphaned bindings when agents reconnect after reinstall.

Add SetAPITokenReloadCallback to ConfigWatcher that triggers Monitor's
new RebuildTokenBindings method after token reload. This method
reconstructs the binding maps from current Docker host and host agent
state, keeping only bindings for tokens that still exist in config.

Related to #773
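
A minimal self-contained sketch of the rebuild, assuming stand-in types and field names (hostInfo, TokenID, AgentID) and a tokenExists callback; the actual Pulse structs are not shown in this log:

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal stand-ins for Pulse's types; field names are assumptions.
type hostInfo struct{ TokenID, AgentID string }

type Monitor struct {
	mu                  sync.Mutex
	dockerHosts         []hostInfo
	hostAgents          []hostInfo
	dockerTokenBindings map[string]string
	hostTokenBindings   map[string]string
}

// RebuildTokenBindings reconstructs both binding maps from current host
// state, keeping only bindings whose tokens survived the config reload.
func (m *Monitor) RebuildTokenBindings(tokenExists func(tokenID string) bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	rebuild := func(hosts []hostInfo) map[string]string {
		bindings := make(map[string]string)
		for _, h := range hosts {
			if h.TokenID != "" && tokenExists(h.TokenID) {
				bindings[h.TokenID] = h.AgentID
			}
		}
		return bindings
	}
	m.dockerTokenBindings = rebuild(m.dockerHosts)
	m.hostTokenBindings = rebuild(m.hostAgents)
}

func main() {
	m := &Monitor{dockerHosts: []hostInfo{{"tok1", "agent-1"}, {"tok2", "agent-2"}}}
	live := map[string]bool{"tok1": true} // tok2 was removed from api_tokens.json
	m.RebuildTokenBindings(func(id string) bool { return live[id] })
	fmt.Println(m.dockerTokenBindings) // map[tok1:agent-1]: orphaned binding dropped
}
```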
2025-11-29 14:09:30 +00:00
rcourtman
2ff0a0988f refactor: remove unnecessary type conversions
Remove redundant type conversions identified by the unconvert linter:
- Remove int() conversions for already-int VMID fields
- Remove int64() conversions for already-int64 arithmetic results
- Remove uint64() conversions for already-uint64 Disk/MaxDisk fields
- Remove int() on syscall.Stdin (already int constant)
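
For illustration, the shape of change unconvert flags (hypothetical struct; the real fields live across the codebase):

```go
package main

import "fmt"

type Guest struct{ VMID int } // VMID is already an int

func main() {
	g := Guest{VMID: 100}
	vmidOld := int(g.VMID) // unconvert flags this: a no-op conversion
	vmidNew := g.VMID      // the fix: use the value directly
	fmt.Println(vmidOld, vmidNew)
}
```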
2025-11-27 10:33:35 +00:00
rcourtman
8152197207 fix: mark unused parameters to satisfy unparam linter
Mark intentionally unused parameters with underscore to:
- Silence unparam warnings for legitimately unused parameters
- Keep function signatures intact for API compatibility

Also remove the unused req parameter from the serveChecksum helper.
2025-11-27 10:12:48 +00:00
rcourtman
8f6a481cd2 refactor: use builtin max() and fix unused parameter
- Replace custom maxInt64 helper with Go 1.21+ builtin max()
- Mark unused cfg parameter in newAdaptiveIntervalSelector
- Remove test for deleted helper function
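
Side by side, assuming the deleted helper looked like the usual hand-rolled version:

```go
package main

import "fmt"

// maxInt64 is the style of helper the builtin replaces.
func maxInt64(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	var a, b int64 = 3, 7
	fmt.Println(maxInt64(a, b)) // before: custom helper (plus its test)
	fmt.Println(max(a, b))      // after: Go 1.21+ builtin, generic over ordered types
}
```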
2025-11-27 10:08:37 +00:00
rcourtman
6ff345fb6b chore: fix staticcheck SA warnings
- Fix SA4006 unused value issues in ssh.go, validation.go, generator.go
- Replace deprecated ioutil with io/os in config.go
- Replace deprecated tar.TypeRegA with tar.TypeReg
- Remove deprecated rand.Seed calls (auto-seeded in Go 1.20+)
- Fix always-true nil check in main.go
- Fix impossible nil comparison in tempproxy/client.go
- Add nil check for config in monitor.New()
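
The ioutil and rand.Seed migrations in standalone form (file contents here are illustrative, not the actual config.go):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
)

func main() {
	// ioutil.ReadFile is deprecated; os.ReadFile has been the
	// replacement since Go 1.16.
	data, err := os.ReadFile("config.json")
	if err != nil {
		fmt.Println("read failed:", err)
	} else {
		fmt.Println("read", len(data), "bytes")
	}

	// rand.Seed calls are simply deleted: the global source is
	// auto-seeded as of Go 1.20.
	fmt.Println(rand.Intn(10))
}
```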
2025-11-27 09:16:53 +00:00
rcourtman
ea335546fc feat: improve legacy agent detection and migration UX
Add seamless migration path from legacy agents to unified agent:

- Add AgentType field to report payloads (unified vs legacy detection)
- Update server to detect legacy agents by type instead of version
- Add UI banner showing upgrade command when legacy agents are detected
- Add deprecation notice to install-host-agent.ps1
- Create install-docker-agent.sh stub that redirects to unified installer

Legacy agents (pulse-host-agent, pulse-docker-agent) now show a "Legacy"
badge in the UI with a one-click copy command to upgrade to the unified
agent.
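
A sketch of type-based detection under assumed payload names (the actual report structs aren't shown here):

```go
package main

import "fmt"

// ReportPayload stands in for the agent report; the AgentType field and
// the "unified" value are assumptions based on this commit's description.
type ReportPayload struct {
	AgentType string `json:"agentType"`
	Version   string `json:"version"`
}

// isLegacyAgent detects by type rather than version: legacy agents
// predate the field, so anything not self-identifying as unified is legacy.
func isLegacyAgent(p ReportPayload) bool {
	return p.AgentType != "unified"
}

func main() {
	fmt.Println(isLegacyAgent(ReportPayload{AgentType: "unified"})) // false
	fmt.Println(isLegacyAgent(ReportPayload{Version: "0.9.3"}))     // true: no type field
}
```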
2025-11-25 23:26:22 +00:00
courtmanr@gmail.com
1716774e71 feat: adaptive node table layout, guest row fixes, and legacy agent detection
- Implemented adaptive layout for NodeSummaryTable with responsive columns and sticky name column.
- Fixed GuestRow background display issues.
- Added IsLegacy field to Host and DockerHost models to flag legacy agents (version < 1.0.0).
- Updated monitor to populate IsLegacy based on agent version.
2025-11-25 17:19:36 +00:00
courtmanr@gmail.com
584ad94ee5 Refactor: Parallelize PVE node polling 2025-11-25 08:38:03 +00:00
courtmanr@gmail.com
d6addee8b4 Fix: Correct context cancellation in loop (Fixes #727)
- Replaced defer in loop with explicit cancellation to avoid resource leak
- Properly tagged issue #727
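
The pattern behind this fix, as a generic sketch (names hypothetical):

```go
package main

import (
	"context"
	"time"
)

func pollAll(parent context.Context, targets []string) {
	for range targets {
		ctx, cancel := context.WithTimeout(parent, 10*time.Second)
		// Before: `defer cancel()` here accumulates until pollAll returns,
		// leaking a context (and its timer) per iteration.
		poll(ctx)
		cancel() // after: release this iteration's resources immediately
	}
}

func poll(ctx context.Context) { _ = ctx } // placeholder for the real work

func main() {
	pollAll(context.Background(), []string{"node1", "node2"})
}
```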
2025-11-23 22:28:28 +00:00
courtmanr@gmail.com
78308cbc10 Fix: Prevent single node auth failure from disabling global SSH temperature collection
- Removed global legacySSHDisabled flag that was triggered by any single node auth failure
- Changed disableLegacySSHOnAuthFailure to only log warnings
- Fixed potential context leak in monitor.go
- Updated tests to reflect removal of global disable logic
2025-11-23 22:24:15 +00:00
courtmanr@gmail.com
5f3fd17025 Fix temperature collection regression for cluster nodes. Related to #727 2025-11-23 12:13:57 +00:00
courtmanr@gmail.com
83e07969f0 fix: ensure proxmox nodes are displayed even if cluster endpoints are missing
Fixes #727. Previously, if temperature monitoring was enabled and a node wasn't found in ClusterEndpoints, processing of the entire node was skipped. This change ensures only temperature collection is skipped.
2025-11-22 23:31:30 +00:00
rcourtman
a0c27e84ae Persist PVE fallback host after portless retry 2025-11-22 17:06:15 +00:00
rcourtman
9fc019b90c Handle PVE portless fallback when default port fails 2025-11-22 17:01:16 +00:00
rcourtman
255357d2fe Add recovery notifications and grouping controls 2025-11-21 22:07:00 +00:00
rcourtman
61a7c2829c Honor configured PVE polling interval in scheduler 2025-11-20 22:00:56 +00:00
rcourtman
bd0c47ed1b Improve token collision handling and installer subnet support 2025-11-20 09:45:36 +00:00
rcourtman
ce0fb90182 feat: avoid redundant PBS snapshot polling (Related to #717) 2025-11-18 23:10:43 +00:00
rcourtman
15d32adb10 feat: surface LXC mountpoints in UI (related to #715) 2025-11-18 22:57:20 +00:00
rcourtman
51b368ddc1 feat: make PVE polling interval configurable (related to #467) 2025-11-18 21:30:04 +00:00
rcourtman
b474a77b65 Add direct node fallback for storage polling 2025-11-18 19:58:38 +00:00
rcourtman
a03f8115b6 Improve installer temperature proxy and backup polling 2025-11-18 18:42:33 +00:00
rcourtman
a9e751d165 Allow PBS backup poll to finish after poller returns 2025-11-18 16:55:46 +00:00
rcourtman
1abff55feb Improve temperature proxy detection 2025-11-18 14:25:09 +00:00
rcourtman
23d194128d Skip inactive storages during content scans 2025-11-18 09:46:48 +00:00
rcourtman
13daa61d1d Harden turnkey install and proxy auto-registration 2025-11-18 00:24:50 +00:00
rcourtman
f9341ae1fc Improve temperature proxy workflow 2025-11-17 14:25:46 +00:00
rcourtman
47d5c14aef Improve temperature proxy control-plane flow 2025-11-15 21:49:51 +00:00
rcourtman
b1aad303b7 Fix incorrect temperature data during cluster initialization
During cluster startup, nodes were temporarily using the primary cluster
endpoint for temperature collection before cluster metadata validation
completed. This caused all nodes to show the same (incorrect) temperature
values for ~4 minutes until validation finished and per-node endpoints
were established.

Example: minipc would show delly's temperature (90°C) instead of its own
(50°C) from startup until cluster validation completed.

Root cause:
- Temperature collection started immediately at startup
- Cluster endpoint validation happened asynchronously
- Code fell back to primary endpoint when ClusterEndpoints was empty
- All nodes used same endpoint, got same temperature data

Fix: Skip temperature collection for cluster nodes until:
1. ClusterEndpoints array is populated (validation complete)
2. Node's specific endpoint is found in the cluster metadata

This ensures correct temperature data from the very first collection,
maintaining data integrity during startup. When persisted config exists,
endpoints are available immediately so no delay occurs. For new clusters,
temperature collection begins once validation completes (~30s).

Preserves Pulse's correctness guarantee: users can trust metrics
immediately after restart without waiting for a "warm-up" period.
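
The guard condition, reduced to a sketch (the Endpoint type and field names are stand-ins):

```go
package main

import "fmt"

// Endpoint is a stand-in for Pulse's per-node cluster endpoint metadata.
type Endpoint struct{ NodeName, Host string }

// shouldCollectTemperature mirrors the two conditions from the fix:
// ClusterEndpoints populated (validation complete) and this node's own
// endpoint present in the metadata.
func shouldCollectTemperature(endpoints []Endpoint, node string) bool {
	if len(endpoints) == 0 {
		return false // cluster validation still in progress
	}
	for _, ep := range endpoints {
		if ep.NodeName == node {
			return true
		}
	}
	return false
}

func main() {
	eps := []Endpoint{{NodeName: "minipc", Host: "10.0.0.5"}}
	fmt.Println(shouldCollectTemperature(eps, "minipc")) // true: safe to collect
	fmt.Println(shouldCollectTemperature(nil, "delly"))  // false: would mix up temps
}
```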
2025-11-14 23:38:44 +00:00
rcourtman
d49c333283 monitoring: add poll watchdog to prevent worker leaks (refs #696) 2025-11-14 11:24:59 +00:00
rcourtman
2ee693cc63 Add HTTP mode to pulse-sensor-proxy for multi-instance temperature monitoring
This implements HTTP/HTTPS support for pulse-sensor-proxy to enable
temperature monitoring across multiple separate Proxmox instances.

Architecture changes:
- Dual-mode operation: Unix socket (local) + HTTPS (remote)
- Unix socket remains default for security/performance (no breaking change)
- HTTP mode enables temps from external PVE hosts

Backend implementation:
- Add HTTPS server with TLS + Bearer token authentication to sensor-proxy
- Add TemperatureProxyURL and TemperatureProxyToken fields to PVEInstance
- Add HTTP client (internal/tempproxy/http_client.go) for remote proxy calls
- Update temperature collector to prefer HTTP proxy when configured
- Fallback logic: HTTP proxy → Unix socket → direct SSH (if not containerized)

Configuration:
- pulse-sensor-proxy config: http_enabled, http_listen_addr, http_tls_cert/key, http_auth_token
- PVEInstance config: temperature_proxy_url, temperature_proxy_token
- Environment variables: PULSE_SENSOR_PROXY_HTTP_* for all HTTP settings

Security:
- TLS 1.2+ with modern cipher suites
- Constant-time token comparison (timing attack prevention)
- Rate limiting applied to HTTP requests (shared with socket mode)
- Audit logging for all HTTP requests

Next steps:
- Update installer script to support HTTP mode + auto-registration
- Add Pulse API endpoint for proxy registration
- Generate TLS certificates during installation
- Test multi-instance temperature collection

Related to #571 (multi-instance architecture)
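
For the constant-time token check, a common shape using crypto/subtle (a sketch, not necessarily the proxy's exact code; hashing first gives equal-length inputs so the comparison cost doesn't leak token length):

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// tokenEqual compares a presented bearer token against the configured
// one in constant time to resist timing attacks.
func tokenEqual(presented, configured string) bool {
	a := sha256.Sum256([]byte(presented))
	b := sha256.Sum256([]byte(configured))
	return subtle.ConstantTimeCompare(a[:], b[:]) == 1
}

func main() {
	fmt.Println(tokenEqual("secret", "secret")) // true
	fmt.Println(tokenEqual("guess", "secret"))  // false
}
```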
2025-11-13 16:13:53 +00:00
rcourtman
ddac48e640 Ensure agent ID collisions respect token boundaries (Related to #658) 2025-11-12 22:46:56 +00:00
rcourtman
e27d9f7deb Preserve storage backups after partial failures (Related to #704) 2025-11-12 21:10:18 +00:00
rcourtman
d2247d3ad7 Related to #692: Skip unsupported guest OS info calls 2025-11-12 19:17:09 +00:00
rcourtman
2e1ef44ecd Filter read-only filesystems from host agent disk metrics (related to #690)
Squashfs snap mounts on Ubuntu (and similar read-only filesystems like
erofs on Home Assistant OS) always report near-full usage and trigger
false disk alerts. The filter logic existed in Proxmox monitoring but
wasn't applied to host agents.

Changes:
- Extract read-only filesystem filter to shared pkg/fsfilters package
- Apply filter in hostmetrics.collectDisks() for host/docker agents
- Apply filter in monitor.ApplyHostReport() for backward compatibility
- Convert internal/monitoring/fs_filters.go to wrapper functions

This prevents squashfs, erofs, iso9660, cdfs, udf, cramfs, romfs, and
saturated overlay filesystems from generating alerts. Filtering happens
at both collection time (agents) and ingestion time (server) to ensure
older agents don't cause false alerts until they're updated.
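
One plausible shape for the shared filter (the real pkg/fsfilters code isn't reproduced here):

```go
package main

import "fmt"

// readOnlyFilesystems covers the types named above; the saturated-overlay
// check is omitted from this sketch, and the real pkg/fsfilters API may differ.
var readOnlyFilesystems = map[string]bool{
	"squashfs": true, "erofs": true, "iso9660": true,
	"cdfs": true, "udf": true, "cramfs": true, "romfs": true,
}

// shouldSkipFilesystem reports whether a mount's usage should be ignored
// for disk alerting.
func shouldSkipFilesystem(fsType string) bool {
	return readOnlyFilesystems[fsType]
}

func main() {
	fmt.Println(shouldSkipFilesystem("squashfs")) // true: snap mounts always ~100%
	fmt.Println(shouldSkipFilesystem("ext4"))     // false: meaningful usage
}
```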
2025-11-12 09:47:02 +00:00
rcourtman
cc595da28b Fix guest agent OS info calls causing OpenBSD VM crashes (related to #692)
Add defensive mitigation to prevent repeated guest-get-osinfo calls that
trigger buggy behavior in QEMU guest agent 9.0.2 on OpenBSD 7.6.

The issue: OpenBSD doesn't have /etc/os-release (Linux convention), and
qemu-ga 9.0.2 appears to spawn excessive helper processes trying to read
this file whenever guest-get-osinfo is called. These helpers don't clean
up properly, eventually exhausting the process table and crashing the VM.

The fix: Track consecutive OS info failures per VM. After 3 failures,
automatically skip future guest-get-osinfo calls for that VM while
continuing to fetch other guest agent data (network interfaces, version).
This prevents triggering the buggy code path while maintaining most guest
agent functionality.

The counter resets on success, so if the guest agent is upgraded or the
issue is resolved, Pulse will automatically resume OS info collection.

Related to #692
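
The failure-tracking mitigation as a standalone sketch (per-VM counters; names and the threshold constant are taken from the description above, the rest is hypothetical):

```go
package main

import "fmt"

const maxOSInfoFailures = 3

// osInfoFailures tracks consecutive guest-get-osinfo failures per VMID.
var osInfoFailures = map[int]int{}

// shouldQueryOSInfo gates the guest-get-osinfo call for a VM.
func shouldQueryOSInfo(vmid int) bool {
	return osInfoFailures[vmid] < maxOSInfoFailures
}

func recordOSInfoResult(vmid int, err error) {
	if err != nil {
		osInfoFailures[vmid]++ // after 3 failures, skip future calls
		return
	}
	osInfoFailures[vmid] = 0 // success resets the counter, so an upgraded
	// guest agent automatically resumes OS info collection
}

func main() {
	for i := 0; i < 4; i++ {
		recordOSInfoResult(100, fmt.Errorf("guest-get-osinfo failed"))
	}
	fmt.Println(shouldQueryOSInfo(100)) // false: defensive skip engaged
}
```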
2025-11-11 22:27:22 +00:00
rcourtman
bb7ca93c18 feat: Add mdadm RAID monitoring support for host agents
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.

Key changes:

**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
  - Critical alerts for degraded arrays or arrays with failed devices
  - Warning alerts for rebuilding/resyncing arrays with progress tracking
  - Auto-clear alerts when arrays return to healthy state

**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
  - Array status (clean/degraded/recovering) with color-coded indicators
  - Device counts (active/total/failed/spare)
  - Rebuild progress percentage and speed when applicable
  - Green for healthy, amber for rebuilding, red for degraded

**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)

**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress

All builds pass successfully. RAID monitoring is automatic and best-effort: if mdadm is not installed or no arrays exist, the host agent continues reporting other metrics normally.

Related to #676
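
A reduced sketch of the parsing approach for `mdadm --detail` output (the real package also extracts device counts and rebuild progress/speed):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMdadmState pulls the "State :" line out of `mdadm --detail` output.
func parseMdadmState(detail string) string {
	for _, line := range strings.Split(detail, "\n") {
		key, val, ok := strings.Cut(line, ":")
		if ok && strings.TrimSpace(key) == "State" {
			return strings.TrimSpace(val)
		}
	}
	return ""
}

func main() {
	detail := "/dev/md0:\n           State : clean, degraded\n"
	fmt.Println(parseMdadmState(detail)) // "clean, degraded" -> critical alert
}
```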
2025-11-09 16:36:33 +00:00
rcourtman
8f05fc0a57 Improve backup-age alerts to show VM/CT names in multi-cluster setups (related to #668)
This change fixes backup-age alert notifications to display VM/CT names
instead of just "VMID XXX" in multi-cluster environments where backups
are stored on PBS.

Changes:
- Store all guests per VMID (not just first match) to handle VMID collisions across clusters
- Persist last-known guest names/types in metadata store for deleted VMs
- Enrich backup correlation with persisted metadata when live inventory is empty
- Update CheckBackups to handle multiple VMID matches intelligently

The fix addresses two scenarios:
1. Multiple PVE clusters with same VMID backing up to one PBS
2. VMs deleted from Proxmox but backups still exist on PBS

Backup-age alerts will now show proper VM/CT names when:
- A unique guest exists with that VMID (live or persisted)
- Multiple guests share a VMID (uses first match, consistent with current behavior)

When truly ambiguous (multiple live VMs, same VMID, no way to determine origin),
the alert gracefully falls back to showing "VMID XXX".
2025-11-08 18:24:04 +00:00
rcourtman
3ad35976b2 Clarify Docker agent cycling troubleshooting for cloned VMs/LXCs (related to #648)
Enhanced the "Docker hosts cycling" troubleshooting entry to explicitly
call out VM/LXC cloning as a cause of identical agent IDs. Added specific
remediation steps for regenerating machine IDs on cloned systems.

This addresses the resolution path discovered in discussion #648 where a
user cloned a Proxmox LXC and encountered cycling behavior even with
separate API tokens because the agent IDs were duplicated.
2025-11-07 22:59:19 +00:00
rcourtman
59a97f2e3e Fix storage disappearing after upgrade by preserving TLS validation
Fixes #657

Between v4.25.0 and v4.26.4, commit 72865ff62 changed cluster endpoint
resolution to prefer IP addresses over hostnames to reduce DNS lookups
(refs #620). However, this caused TLS certificate validation to fail for
installations with VerifySSL=true, because Proxmox certificates typically
contain hostnames (e.g., pve01.example.com), not IP addresses.

When all cluster endpoints failed TLS validation during the initial health
check, the ClusterClient marked all nodes as unhealthy. Subsequent calls
to GetAllStorage() would fail with "no healthy nodes available in cluster",
causing storage data to disappear from the UI despite the cluster being
fully operational.

**Root Cause:**
The IP-first approach breaks TLS hostname verification when:
- VerifySSL is enabled (common for production environments)
- Certificates are issued with hostnames, not IPs (standard practice)
- Result: x509 certificate validation fails (e.g., "certificate is valid
  for pve01.example.com, not 10.0.0.44")

**Solution:**
Conditionally prefer hostnames vs IPs based on TLS validation requirements:

1. When TLS hostname verification is required (VerifySSL=true AND no
   fingerprint override), prefer hostname to ensure certificate CN/SAN
   validation succeeds.

2. When TLS verification is bypassed (VerifySSL=false OR fingerprint
   provided), prefer IP to reduce DNS lookups.

This approach:
- Fixes the regression for users with VerifySSL enabled
- Preserves the DNS optimization for self-signed/fingerprint configs
- Maintains backwards compatibility with v4.25.0 behavior
- Does not compromise TLS security

**Testing:**
Users reported that rolling back to v4.25.0 fixed their storage visibility.
This fix should restore storage for v4.26.4+ while maintaining the DNS
optimization for appropriate scenarios.
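
The decision rule as a sketch (parameter names assumed):

```go
package main

import "fmt"

// preferHostname mirrors the fix: prefer hostnames when TLS hostname
// verification will run, IPs when verification is bypassed. Certificate
// CN/SAN entries typically carry hostnames, not IPs, so strict
// verification needs the hostname to succeed.
func preferHostname(verifySSL bool, fingerprint string) bool {
	return verifySSL && fingerprint == ""
}

func endpointFor(hostname, ip string, verifySSL bool, fingerprint string) string {
	if preferHostname(verifySSL, fingerprint) {
		return hostname
	}
	return ip // DNS-free path for self-signed / fingerprint-pinned setups
}

func main() {
	fmt.Println(endpointFor("pve01.example.com", "10.0.0.44", true, ""))  // hostname
	fmt.Println(endpointFor("pve01.example.com", "10.0.0.44", false, "")) // IP
}
```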
2025-11-07 15:36:52 +00:00
rcourtman
19091d47c9 Enforce Docker agent API token uniqueness (related to #658)
Problem: Multiple Docker agents can share the same API token, which causes
serious operational and security issues:

1. Host identity collision - agents overwrite each other in state (the bug
   fixed in aa0aa7d4f only addressed the symptom, not the root cause)
2. Security/audit gap - can't attribute actions to specific agents
3. User confusion - easy mistake that causes subtle, hard-to-debug issues
4. State corruption - race conditions on startup and racy metric updates

Root cause: The system treats API tokens as the agent's identity credential,
but never enforced uniqueness. This allowed users to accidentally (or
intentionally) reuse tokens across multiple agents, breaking the 1:1
token-to-agent relationship that the architecture assumes.

Solution: Enforce token uniqueness at the agent report ingestion point.

Implementation:
- Add dockerTokenBindings map[tokenID]agentID to Monitor state
- In ApplyDockerReport, check if token is already bound to a different agent
- On first report from a token, bind it to that agent's ID
- On subsequent reports, verify the binding matches
- Reject mismatches with clear error naming the conflicting host
- Unbind tokens when hosts are removed (allows token reuse after cleanup)

Error message example:
  "API token (pk_abc…xyz) is already in use by agent 'agent-123'
  (host: docker-host-1). Each Docker agent must use a unique API token.
  Generate a new token for this agent"

Why fail-fast instead of phased rollout:
- Shared tokens are architecturally wrong and cannot work correctly
- The system cannot safely multiplex state for duplicate identities
- A clear, immediate error is better UX than silent corruption
- Users would need to generate per-agent tokens eventually anyway

Why in-memory instead of persisted:
- Aligns with Pulse's existing state model (JSON config + in-memory state)
- Bindings naturally rebuild as agents report in after restart
- No schema migration or additional persistence complexity needed
- Sufficient for correctness since overwrite can't happen until both
  agents report, at which point the binding exists and rejects duplicates

Migration path for existing users with shared tokens:
- Generate new unique token for each agent
- Update agent configuration with new token
- Restart agents one at a time

This enforces the token-as-identity invariant and prevents users from
creating unsupportable configurations.
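
The binding check at ingestion, reduced to a sketch (the real error also names the conflicting host, and unbinding on host removal is not shown):

```go
package main

import "fmt"

// bindings maps tokenID -> agentID, standing in for dockerTokenBindings.
var bindings = map[string]string{}

// checkTokenBinding enforces the 1:1 token-to-agent invariant at report
// ingestion time.
func checkTokenBinding(tokenID, agentID string) error {
	if bound, ok := bindings[tokenID]; ok && bound != agentID {
		return fmt.Errorf("API token is already in use by agent %q; each Docker agent must use a unique API token", bound)
	}
	bindings[tokenID] = agentID // first report from a token binds it
	return nil
}

func main() {
	fmt.Println(checkTokenBinding("pk_abc", "agent-123")) // <nil>: token bound
	fmt.Println(checkTokenBinding("pk_abc", "agent-456")) // error: rejected
}
```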
2025-11-07 15:19:42 +00:00
rcourtman
48fabdd827 Improve Docker temperature monitoring documentation for clarity (related to #600)
Updated the Quick Start for Docker section in TEMPERATURE_MONITORING.md to be
more user-friendly and address common setup issues:

- Added clear explanation of why the proxy is needed (containers can't access hardware)
- Provided concrete IP example instead of placeholder
- Showed full docker-compose.yml context with proper YAML structure
- Added sudo to commands where needed
- Updated docker-compose commands to v2 syntax with note about v1
- Expanded verification steps with clearer success indicators
- Added reminder to check container name in verification commands

These improvements should help users who encounter blank temperature displays
due to missing proxy installation or bind mount configuration.
2025-11-07 15:09:42 +00:00
rcourtman
7ee252bd84 Fix Docker host display bug when multiple agents share API tokens (related to #658)
Root cause: findMatchingDockerHost() was matching hosts by token ID alone,
causing multiple Docker agents using the same API token to overwrite each
other in state. This resulted in only N visible hosts (where N = number of
unique tokens) instead of all M agents, with hosts "rotating" as each agent
reported every 10 seconds.

Example: 4 agents using 2 tokens would show only 2 hosts, rotating between
agents 1↔2 (token A) and agents 3↔4 (token B).

Fix: Remove token-only matching from findMatchingDockerHost(). Hosts should
only match by:
1. Agent ID (unique per agent)
2. Machine ID + hostname combination (with optional token validation)
3. Machine ID or hostname alone (only for tokenless agents)

This allows multiple agents to share the same API token without colliding.

Additional fix: UpsertDockerHost() now preserves Hidden, PendingUninstall,
and Command fields from existing hosts, preventing these flags from being
reset to defaults on every agent report.
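
The post-fix priority order, sketched (struct and parameter names are assumptions; token-only matching is gone):

```go
package main

import "fmt"

type dockerHost struct{ AgentID, MachineID, Hostname, TokenID string }

// findMatchingDockerHost checks the three match rules in order.
func findMatchingDockerHost(hosts []dockerHost, agentID, machineID, hostname, tokenID string) *dockerHost {
	for i := range hosts {
		if hosts[i].AgentID == agentID {
			return &hosts[i] // 1. agent ID: unique per agent
		}
	}
	for i := range hosts {
		if hosts[i].MachineID == machineID && hosts[i].Hostname == hostname {
			return &hosts[i] // 2. machine ID + hostname combination
		}
	}
	if tokenID == "" { // 3. looser matching only for tokenless agents
		for i := range hosts {
			if hosts[i].MachineID == machineID || hosts[i].Hostname == hostname {
				return &hosts[i]
			}
		}
	}
	return nil // no match: register as a new host instead of colliding
}

func main() {
	hosts := []dockerHost{{AgentID: "agent-1", MachineID: "m1", Hostname: "web"}}
	fmt.Println(findMatchingDockerHost(hosts, "agent-2", "m2", "db", "tokA")) // <nil>: new host
}
```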
2025-11-07 13:46:35 +00:00
rcourtman
2a79d57f73 Add SMART temperature collection for physical disks (related to #652)
Extends temperature monitoring to collect SMART temps for SATA/SAS disks,
addressing issue #652 where physical disk temperatures showed as empty.

Architecture:
- Deploys pulse-sensor-wrapper.sh as SSH forced command on Proxmox nodes
- Wrapper collects both CPU/GPU temps (sensors -j) and disk temps (smartctl)
- Implements 30-min cache with background refresh to avoid performance impact
- Uses smartctl -n standby,after to skip sleeping drives without waking them
- Returns unified JSON: {sensors: {...}, smart: [...]}

Backend changes:
- Add DiskTemp model with device, serial, WWN, temperature, lastUpdated
- Extend Temperature model with SMART []DiskTemp field and HasSMART flag
- Add WWN field to PhysicalDisk for reliable disk matching
- Update parseSensorsJSON to handle both legacy and new wrapper formats
- Rewrite mergeNVMeTempsIntoDisks to match SMART temps by WWN → serial → devpath
- Preserve legacy NVMe temperature support for backward compatibility

Performance considerations:
- SMART data cached for 30 minutes per node to avoid excessive smartctl calls
- Background refresh prevents blocking temperature requests
- Respects drive standby state to avoid spinning up idle arrays
- Staggered disk scanning with 0.1s delay to avoid saturating SATA controllers

Install script:
- Deploys wrapper to /usr/local/bin/pulse-sensor-wrapper.sh
- Updates SSH forced command from "sensors -j" to wrapper script
- Backward compatible - falls back to direct sensors output if wrapper missing

Testing note:
- Requires real hardware with smartmontools installed for full functionality
- Empty smart array returned gracefully when smartctl unavailable
- Legacy sensor-only nodes continue working without changes
2025-11-07 11:46:57 +00:00
rcourtman
99e5a38534 Fix critical monitoring system issues and add robustness improvements
This commit addresses 9 critical issues identified during the monitoring system audit:

**Race Conditions Fixed:**
- PBS backup pollers: Moved lock earlier to eliminate check-then-act race (lines 7316-7378)
- PVE backup poll timing: Fixed double write to lastPVEBackupPoll with proper synchronization (lines 5927-5977)
- Docker hosts cleanup: Refactored to avoid holding both m.mu and s.mu locks simultaneously (lines 1911-1937)

**Context Propagation Fixed:**
- Replaced all context.Background() calls with parent context for proper cancellation chain:
  - PBS backup poller (line 7367)
  - PVE backup poller (line 5955)
  - PBS fallback check (line 7154)

**Memory Leak Prevention:**
- Added cleanup for guest metadata cache (10 minute TTL, lines 1942-1957)
- Added cleanup for diagnostic snapshots (1 hour TTL, lines 1959-1987)
- Added cleanup for RRD cache (1 minute TTL, lines 1989-2007)
- All cleanup methods called on 10-second ticker (lines 3791-3793)

**Panic Recovery:**
- Added recoverFromPanic helper to log panics with stack traces (lines 1910-1920)
- Protected all critical goroutines:
  - poll (line 4020)
  - taskWorker (line 4200)
  - retryFailedConnections (line 3851)
  - checkMockAlerts (line 8896)
  - pollPVEInstance (line 4886)
  - pollPBSInstance (line 7164)
  - pollPMGInstance (line 7498)

**Import Fixes:**
- Added missing sync import to email_enhanced.go
- Added missing os import to queue.go

All fixes maintain proper lock ordering and release locks before calling methods that acquire other locks to prevent deadlocks.
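
The recovery helper pattern in standalone form (the logging call here is a stand-in for Pulse's logger):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// recoverFromPanic logs a panic with its stack trace so a bug in one
// goroutine can't take down the whole monitor.
func recoverFromPanic(name string) {
	if r := recover(); r != nil {
		fmt.Printf("panic in %s: %v\n%s", name, r, debug.Stack())
	}
}

func pollPVEInstance() {
	defer recoverFromPanic("pollPVEInstance")
	panic("simulated poller bug") // recovered and logged
}

func main() {
	pollPVEInstance()
	fmt.Println("monitor still running")
}
```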
2025-11-07 08:52:37 +00:00
rcourtman
1a78dcbba2 Fix guest agent disk data regression on Proxmox 8.3+
Related to #630

Proxmox 8.3+ changed the VM status API to return the `agent` field as an
object ({"enabled":1,"available":1}) instead of an integer (0 or 1). This
caused Pulse to incorrectly treat VMs as having no guest agent, resulting
in missing disk usage data (disk:-1) even when the guest agent was running
and functional.

The issue manifested as:
- VMs showing "Guest details unavailable" or missing disk data
- Pulse logs showing no "Guest agent enabled, querying filesystem info" messages
- `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo` working correctly
  from the command line, confirming the agent was functional

Root cause:
The VMStatus struct defined `Agent` as an int field. When Proxmox 8.3+ sent
the new object format, JSON unmarshaling silently left the field at zero,
causing Pulse to skip all guest agent queries.

Changes:
- Created VMAgentField type with custom UnmarshalJSON to handle both formats:
  * Legacy (Proxmox <8.3): integer (0 or 1)
  * Modern (Proxmox 8.3+): object {"enabled":N,"available":N}
- Updated VMStatus.Agent from `int` to `VMAgentField`
- Updated all references to `detailedStatus.Agent` to use `.Agent.Value`
- The unmarshaler prioritizes the "available" field over "enabled" to ensure
  we only query when the agent is actually responding

This fix maintains backward compatibility with older Proxmox versions while
supporting the new format introduced in Proxmox 8.3+.
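
A self-contained sketch of the dual-format unmarshaler (simplified relative to the real type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VMAgentField accepts both formats of the VM status `agent` field:
// legacy integer (0/1) and the Proxmox 8.3+ object form.
type VMAgentField struct {
	Value int
}

func (f *VMAgentField) UnmarshalJSON(data []byte) error {
	var n int
	if err := json.Unmarshal(data, &n); err == nil {
		f.Value = n // legacy: plain integer
		return nil
	}
	var obj struct {
		Enabled   int `json:"enabled"`
		Available int `json:"available"`
	}
	if err := json.Unmarshal(data, &obj); err != nil {
		return err
	}
	f.Value = obj.Available // prefer "available": query only a responding agent
	return nil
}

func main() {
	var legacy, modern VMAgentField
	_ = json.Unmarshal([]byte(`1`), &legacy)
	_ = json.Unmarshal([]byte(`{"enabled":1,"available":1}`), &modern)
	fmt.Println(legacy.Value, modern.Value) // 1 1: both formats handled
}
```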
2025-11-06 18:42:46 +00:00
rcourtman
20854256c3 Fix VM migration issue where custom alert thresholds are lost
Resolves #641

## Problem
When a VM migrates between Proxmox nodes, Pulse was treating it as a new
resource and discarding custom alert threshold overrides. This occurred
because guest IDs included the node name (e.g., `instance-node-VMID`),
causing the ID to change when the VM moved to a different node.

Users reported that after migrating a VM, previously disabled alerts
(e.g., memory threshold set to 0) would resume firing.

## Root Cause
Guest IDs were constructed as:
- Standalone: `node-VMID`
- Cluster: `instance-node-VMID`

When a VM migrated from node1 to node2, the ID changed from
`instance-node1-100` to `instance-node2-100`, causing:
- Alert threshold overrides to be orphaned (keyed by old ID)
- Guest metadata (custom URLs, descriptions) to be orphaned
- Active alerts to reference the wrong resource ID

## Solution
Changed guest ID format to be stable across node migrations:
- New format: `instance-VMID` (for both standalone and cluster)
- Retains uniqueness across instances while being node-independent
- Allows VMs to migrate freely without losing configuration

## Implementation

### Backend Changes
1. **Guest ID Construction** (`monitor_polling.go`):
   - Simplified to always use `instance-VMID` format
   - Removed node from the ID construction logic

2. **Alert Override Migration** (`alerts.go`):
   - Added lazy migration in `getGuestThresholds()`
   - Detects legacy ID formats and migrates to new format
   - Preserves user configurations automatically

3. **Guest Metadata Migration** (`guest_metadata.go`):
   - Added `GetWithLegacyMigration()` helper method
   - Called during VM/container polling to migrate metadata
   - Preserves custom URLs and descriptions

4. **Active Alerts Migration** (`alerts.go`):
   - Added migration logic in `LoadActiveAlerts()`
   - Translates legacy alert resource IDs to new format
   - Preserves alert acknowledgments across restarts

### Frontend Changes
5. **ID Construction Updates**:
   - `ThresholdsTable.tsx`: Updated fallback from `instance-node-vmid` to `instance-vmid`
   - `Dashboard.tsx`: Simplified guest ID construction
   - `GuestRow.tsx`: Updated `buildGuestId()` helper

## Migration Strategy
- **Lazy Migration**: Configs are migrated as guests are discovered
- **Backwards Compatible**: Old IDs are detected and automatically converted
- **Zero Downtime**: No manual intervention required
- **Persisted**: Migrated configs are saved on next config write cycle

## Testing Recommendations
After deployment:
1. Verify existing alert overrides still apply
2. Test VM migration - confirm thresholds persist
3. Check guest metadata (custom URLs) survive migration
4. Verify active alerts maintain acknowledgment state

## Related
- Addresses similar issues with guest metadata and active alert tracking
- Lays groundwork for any future guest-specific configuration features
- Aligns with project philosophy: correctness and UX over implementation complexity
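
A naive sketch of the ID migration (instance names that themselves contain hyphens need the smarter matching the real lazy migration performs against known instance and node names):

```go
package main

import (
	"fmt"
	"strings"
)

// migrateGuestID converts a legacy node-qualified ID ("instance-node-VMID")
// to the stable "instance-VMID" format.
func migrateGuestID(legacyID string) string {
	parts := strings.Split(legacyID, "-")
	if len(parts) < 3 {
		return legacyID // already "instance-VMID" (or unrecognized)
	}
	return parts[0] + "-" + parts[len(parts)-1] // drop the node segment
}

func main() {
	fmt.Println(migrateGuestID("pve-node1-100")) // pve-100: survives migration
	fmt.Println(migrateGuestID("pve-100"))       // unchanged
}
```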
2025-11-06 10:27:15 +00:00
rcourtman
af55362009 Fix inflated RAM usage reporting for LXC containers
Related to #553

## Problem

LXC containers showed inflated memory usage (e.g., 90%+ when actual usage was 50-60%,
96% when actual was 61%) because the code used the raw `mem` value from Proxmox's
`/cluster/resources` API endpoint. This value comes from cgroup `memory.current` which
includes reclaimable cache and buffers, making memory appear nearly full even when
plenty is available.

## Root Cause

- **Nodes**: Had sophisticated cache-aware memory calculation with RRD fallbacks
- **VMs (qemu)**: Had detailed memory calculation using guest agent meminfo
- **LXCs**: Naively used `res.Mem` directly without any cache-aware correction

The Proxmox cluster resources API's `mem` field for LXCs includes cache/buffers
(from cgroup memory accounting), which should be excluded for accurate "used" memory.

## Solution

Implement cache-aware memory calculation for LXC containers by:

1. Adding `GetLXCRRDData()` method to fetch RRD metrics for LXC containers from
   `/nodes/{node}/lxc/{vmid}/rrddata`
2. Using RRD `memavailable` to calculate actual used memory (total - available)
3. Falling back to RRD `memused` if `memavailable` is not available
4. Only using cluster resources `mem` value as last resort

This matches the approach already used for nodes and VMs, providing consistent
cache-aware memory reporting across all resource types.

## Changes

- Added `GuestRRDPoint` type and `GetLXCRRDData()` method to pkg/proxmox
- Added `GetLXCRRDData()` to ClusterClient for cluster-aware operations
- Modified LXC memory calculation in `pollPVEInstance()` to use RRD data when available
- Added guest memory snapshot recording for LXC containers
- Updated test stubs to implement the new interface method

## Testing

- Code compiles successfully
- Follows the same proven pattern used for nodes and VMs
- Includes diagnostic snapshot recording for troubleshooting
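
The fallback order, reduced to a sketch (pointers model RRD fields that may be absent; names are assumptions):

```go
package main

import "fmt"

// lxcUsedMemory applies the order described above: RRD memavailable,
// then RRD memused, then the raw cluster `mem` value.
func lxcUsedMemory(total uint64, memAvailable, memUsed *uint64, clusterMem uint64) uint64 {
	if memAvailable != nil && *memAvailable <= total {
		return total - *memAvailable // cache-aware: excludes reclaimable memory
	}
	if memUsed != nil {
		return *memUsed
	}
	return clusterMem // last resort: cgroup value, includes cache/buffers
}

func main() {
	avail := uint64(3 << 30) // 3 GiB reported available by RRD
	used := lxcUsedMemory(8<<30, &avail, nil, 7<<30)
	fmt.Printf("%d GiB used, not 7\n", used>>30) // cache no longer inflates usage
}
```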
2025-11-06 00:16:18 +00:00
rcourtman
7936808193 Add custom display name support for Docker hosts
This implements the ability for users to assign custom display names to Docker hosts,
similar to the existing functionality for Proxmox nodes. This addresses the issue where
multiple Docker hosts with identical hostnames but different IPs/domains cannot be
easily distinguished in the UI.

Backend changes:
- Add CustomDisplayName field to DockerHost model (internal/models/models.go:201)
- Update UpsertDockerHost to preserve custom display names across updates (internal/models/models.go:1110-1113)
- Add SetDockerHostCustomDisplayName method to State for updating names (internal/models/models.go:1221-1235)
- Add SetDockerHostCustomDisplayName method to Monitor (internal/monitoring/monitor.go:1070-1088)
- Add HandleSetCustomDisplayName API handler (internal/api/docker_agents.go:385-426)
- Route /api/agents/docker/hosts/{id}/display-name PUT requests (internal/api/docker_agents.go:117-120)

Frontend changes:
- Add customDisplayName field to DockerHost TypeScript interface (frontend-modern/src/types/api.ts:136)
- Add MonitoringAPI.setDockerHostDisplayName method (frontend-modern/src/api/monitoring.ts:151-187)
- Update getDisplayName function to prioritize custom names (frontend-modern/src/components/Settings/DockerAgents.tsx:84-89)
- Add inline editing UI with save/cancel buttons in Docker Agents settings (frontend-modern/src/components/Settings/DockerAgents.tsx:1349-1413)
- Update sorting to use custom display names (frontend-modern/src/components/Docker/DockerHosts.tsx:58-59)
- Update DockerHostSummaryTable to display custom names (frontend-modern/src/components/Docker/DockerHostSummaryTable.tsx:40-42, 87, 120, 254)

Users can now click the edit icon next to any Docker host name in Settings > Docker Agents
to set a custom display name. The custom name will be preserved across agent reconnections
and takes priority over the hostname reported by the agent.

Related to #623
2025-11-05 23:18:03 +00:00
rcourtman
e21a72578f Add configurable SSH port for temperature monitoring
Related to #595

This change adds support for custom SSH ports when collecting temperature
data from Proxmox nodes, resolving issues for users who run SSH on non-standard
ports.

**Why SSH is still needed:**
Temperature monitoring requires reading /sys/class/hwmon sensors on Proxmox
nodes, which is not exposed via the Proxmox API. Even when using API tokens
for authentication, Pulse needs SSH access to collect temperature data.

**Changes:**
- Add `sshPort` configuration to SystemSettings (system.json)
- Add `SSHPort` field to Config with environment variable support (SSH_PORT)
- Add per-node SSH port override capability for PVE, PBS, and PMG instances
- Update TemperatureCollector to accept and use custom SSH port
- Update SSH known_hosts manager to support non-standard ports
- Add NewTemperatureCollectorWithPort() constructor with port parameter
- Maintain backward compatibility with NewTemperatureCollector() (uses port 22)
- Update frontend TypeScript types for SSH port configuration

**Configuration methods:**
1. Environment variable: SSH_PORT=2222
2. system.json: {"sshPort": 2222}
3. Per-node override in nodes.enc (future UI support)

**Default behavior:**
- Defaults to port 22 if not configured
- Maintains full backward compatibility
- No changes required for existing deployments

The implementation includes proper ssh-keyscan port handling and known_hosts
management for non-standard ports using [host]:port notation per SSH standards.
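
A minimal sketch of the bracket-notation formatting (helper name is hypothetical; OpenSSH's known_hosts convention itself is standard):

```go
package main

import "fmt"

// knownHostsName formats a host for known_hosts entries and ssh-keyscan
// lookups: bare hostname on the default port, [host]:port otherwise.
func knownHostsName(host string, port int) string {
	if port == 0 || port == 22 {
		return host
	}
	return fmt.Sprintf("[%s]:%d", host, port)
}

func main() {
	fmt.Println(knownHostsName("pve01.example.com", 22))   // pve01.example.com
	fmt.Println(knownHostsName("pve01.example.com", 2222)) // [pve01.example.com]:2222
}
```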
2025-11-05 20:03:29 +00:00