Toggling alerts off should only suppress notifications, not lock
users out of the Thresholds, Destinations, and Schedule config tabs.
Removes the redirect-to-overview effect and disabled state from all
sidebar and mobile tab buttons when alerts are inactive.
Pulse was generating tag colours from a hash of the tag name instead
of using the colours configured in Proxmox. Now polls /cluster/options
once per PVE instance and merges the tag-style colour map into state,
which the frontend uses as the first-priority colour source for tag
badges. Falls back to the existing special-tag and hash-based colours
when Proxmox hasn't set a custom colour for a tag.
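A rough sketch of the two pieces involved, with illustrative names throughout; the colour-map syntax ("color-map=<tag>:<background>[:<text>];…") is assumed from the documented datacenter.cfg tag-style format rather than taken from the Pulse code:

```go
package tags

import "strings"

// TagColor holds the colours Proxmox associates with a tag.
type TagColor struct {
	Background string // hex colour, no leading '#'
	Text       string // optional text colour
}

// ParseTagStyle extracts the colour map from a raw tag-style value such as
// "color-map=prod:ff0000:ffffff;backup:0000ff,shape=circle".
func ParseTagStyle(raw string) map[string]TagColor {
	colors := map[string]TagColor{}
	for _, opt := range strings.Split(raw, ",") {
		val, ok := strings.CutPrefix(opt, "color-map=")
		if !ok {
			continue
		}
		for _, entry := range strings.Split(val, ";") {
			fields := strings.Split(entry, ":")
			if len(fields) < 2 || fields[0] == "" {
				continue
			}
			tc := TagColor{Background: fields[1]}
			if len(fields) > 2 {
				tc.Text = fields[2]
			}
			colors[fields[0]] = tc
		}
	}
	return colors
}

// ResolveTagColor mirrors the frontend priority: Proxmox-configured colour
// first, then any special-tag colour, then the hash-based fallback.
func ResolveTagColor(tag string, fromProxmox map[string]TagColor,
	special func(string) (TagColor, bool), hashed func(string) TagColor) TagColor {
	if c, ok := fromProxmox[tag]; ok {
		return c
	}
	if c, ok := special(tag); ok {
		return c
	}
	return hashed(tag)
}
```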
The backend already supported updateAlertDelayHours: -1 to suppress update
alerts, but there was no way to configure it from the UI. Adds a toggle
in Settings → Alerts → Docker tab that maps to that backend field.
Three follow-up fixes:
1. RestartAIChat() now performs the full post-start wiring (MCP providers,
patrol adapter, investigation orchestrator) when the service starts for
the first time via Restart(). Previously these were only wired via
StartAIChat(), leaving first-time configure with a partially wired service.
2. The Ollama→OpenAI-compatible fallback in createProviderForModel is now
guarded by !strings.HasPrefix(modelStr, "ollama:"), so explicit
"ollama:llama3" models are never silently rerouted to a different provider
(see the sketch after this list).
3. Windows install script registration check now uses the $Hostname override
(if set) instead of always looking up $env:COMPUTERNAME, so post-install
verification works correctly when a custom hostname is specified.
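A minimal sketch of the guard from item 2 together with the bare-name fallback; the types, config fields, and constructors below are hypothetical stand-ins, not the real Pulse API:

```go
package ai

import (
	"fmt"
	"strings"
)

type Provider interface{ Name() string }

type Config struct {
	OllamaBaseURL string // empty when Ollama is not explicitly configured
	OpenAIBaseURL string // custom OpenAI-compatible endpoint, if any
}

// parseModelString mimics the "provider:model" split; bare names default to Ollama.
func parseModelString(s string) (provider, model string) {
	if p, m, ok := strings.Cut(s, ":"); ok {
		return p, m
	}
	return "ollama", s
}

func createProviderForModel(modelStr string, cfg Config) (Provider, error) {
	provider, model := parseModelString(modelStr)
	if provider != "ollama" {
		return nil, fmt.Errorf("unhandled provider %q", provider)
	}
	if cfg.OllamaBaseURL != "" {
		return stub{"ollama:" + model}, nil
	}
	// Only bare names that were merely inferred as Ollama may fall through to
	// a custom OpenAI-compatible endpoint; an explicit "ollama:llama3" should
	// fail loudly rather than be silently rerouted.
	if !strings.HasPrefix(modelStr, "ollama:") && cfg.OpenAIBaseURL != "" {
		return stub{"openai-compat:" + model}, nil
	}
	return nil, fmt.Errorf("model %q requires Ollama, which is not configured", modelStr)
}

type stub struct{ name string }

func (s stub) Name() string { return s.name }
```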
Adds a $Hostname / $env:PULSE_HOSTNAME parameter so users can set a
custom display name at install time, matching the Linux install.sh
behaviour. Persists the name to config.json and passes --hostname in the
agent binary's arguments.
Closes discussion #818
When Pulse starts before AI is configured, legacyService is nil.
Saving AI settings called Restart(), which bailed immediately on the
nil check, leaving the service unstarted (503 on /api/ai/sessions)
until a full process restart.
Merged the nil and !IsRunning checks so a first-time configure now
starts the service inline, the same as the already-handled stopped case.
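A minimal sketch of the merged check, assuming a manager type that owns legacyService; only the shape of the condition reflects the fix described above, the surrounding structure is illustrative:

```go
package ai

import "sync"

type chatService interface {
	IsRunning() bool
	Stop() error
}

type Manager struct {
	mu            sync.Mutex
	legacyService chatService
}

// startAIChatLocked stands in for the full start + post-start wiring path.
func (m *Manager) startAIChatLocked() error { return nil }

func (m *Manager) Restart() error {
	m.mu.Lock()
	defer m.mu.Unlock()

	// Previously a bare `if m.legacyService == nil { return nil }` lived here,
	// so a first-time configure never started the service.
	if m.legacyService == nil || !m.legacyService.IsRunning() {
		// First-time configure and the stopped case are handled the same way:
		// start the service inline instead of requiring a process restart.
		return m.startAIChatLocked()
	}

	if err := m.legacyService.Stop(); err != nil {
		return err
	}
	return m.startAIChatLocked()
}
```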
Also, bare model names that ParseModelString routes to Ollama (e.g.
"qwen3-omni") now fall back to a configured custom OpenAI base URL
when Ollama is not explicitly configured; this handles manually typed
model names on self-hosted OpenAI-compatible endpoints.
Fixes #1339, #1296
Rename the amber segment label from "Cache" to "Reclaimable" to avoid
jargon confusion. Add a "Proxmox view: X%" line in the tooltip so
users immediately see why the percentage differs from Proxmox (which
includes reclaimable cache as used memory).
Show reclaimable buff/cache as a distinct amber segment between used
(green) and free (gray) in the memory bar. This explains why Pulse's
memory percentage differs from Proxmox: Pulse reports cache-aware
usage (MemAvailable) while Proxmox includes cache as used (Total-Free).
Backend: add Cache field to Memory model, derived from MemInfo
(Available - Free). Only uses MemInfo.Free (not FreeMem fallback) to
avoid inflating cache by the balloon gap on ballooned VMs.
Frontend: StackedMemoryBar renders three segments with tooltip
breakdown. Tooltip Free accounts for balloon limit when active.
Percentage label and alerts remain cache-aware (unchanged).
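A sketch of the backend derivation with hypothetical field names; the key point is that the amber segment is computed only from the meminfo numbers (Available - Free), so the balloon gap on ballooned VMs can never be counted as cache:

```go
package memory

// MemInfo mirrors the relevant /proc/meminfo values, in bytes.
type MemInfo struct {
	Total     uint64
	Free      uint64 // MemFree
	Available uint64 // MemAvailable
}

// Memory is the model the frontend renders as a stacked bar.
type Memory struct {
	Total uint64
	Used  uint64 // cache-aware used (Total - Available)
	Cache uint64 // reclaimable buff/cache, the amber segment
	Free  uint64
}

func fromMemInfo(mi MemInfo) Memory {
	m := Memory{Total: mi.Total, Free: mi.Free}
	if mi.Available <= mi.Total {
		m.Used = mi.Total - mi.Available
	}
	// Reclaimable = Available - Free, guarded against underflow. Using
	// MemInfo.Free here (and never a hypervisor FreeMem fallback) keeps the
	// balloon gap out of the cache figure.
	if mi.Available > mi.Free {
		m.Cache = mi.Available - mi.Free
	}
	return m
}
```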
Replace the diskUsage <= 0 heuristic with a diskFromAgent bool that is
only set when the guest agent actually returns valid filesystem data.
Prevents carry-forward from firing on a genuine 0% disk reading.
Prevents stale disk data from persisting indefinitely in the efficient
poller when a user disables the guest agent after it had been providing
data. Matches the fallback poller's agent-disabled exclusion.
Carry forward the previous cycle's disk data when the QEMU guest agent
times out or errors, instead of falling back to Proxmox cluster/resources,
which always reports 0 for VM disk usage. Applied to both polling paths
(pollVMsAndContainersEfficient and pollVMsWithNodes) with safety guards
against uint64 underflow and permanent-failure exclusions.
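A sketch combining the carry-forward with the two follow-up fixes above (illustrative names): the previous reading is reused only when the agent produced no usable data this cycle, never for a genuine 0% reading, and never once the agent is disabled or permanently failing:

```go
package polling

type DiskUsage struct {
	Used  uint64
	Total uint64
}

type vmDisk struct {
	Usage     DiskUsage
	FromAgent bool // set only when the guest agent returned valid filesystem data
}

func resolveDisk(agent *DiskUsage, prev *vmDisk, agentEnabled, permanentFailure bool) vmDisk {
	// Valid agent data, including a genuine 0% reading. Requiring
	// Used <= Total keeps later uint64 subtractions from underflowing.
	if agent != nil && agent.Total > 0 && agent.Used <= agent.Total {
		return vmDisk{Usage: *agent, FromAgent: true}
	}
	// Agent timed out or errored: reuse last cycle's value instead of the
	// cluster/resources figure, which always reports 0 for VM disk usage.
	if agentEnabled && !permanentFailure && prev != nil && prev.FromAgent {
		return *prev
	}
	return vmDisk{} // no trustworthy data; report zeros rather than stale values
}
```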
Host agents removed from the UI would reappear on the next report cycle
because there was no rejection mechanism, unlike Docker agents, which
already had resurrection prevention. Mirror the Docker agent pattern:
- Track removed host IDs in a `removedHosts` map with 24hr TTL
- Persist removal records in `State.RemovedHosts` for frontend display
- Reject reports from removed hosts in `ApplyHostReport()`
- Add `AllowHostReenroll()` + API route to clear the block
- Show removed host agents in the Settings UI with "Allow re-enroll"
- Sync removed-agent maps from state on startup for all agent types
- Fix mock integration snapshot missing `RemovedDockerHosts` field
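A sketch of the removal-tracking pattern mirrored from the Docker agents; the Registry type is illustrative, while removedHosts, ApplyHostReport, and AllowHostReenroll follow the names in the list above:

```go
package agents

import (
	"sync"
	"time"
)

const removedHostTTL = 24 * time.Hour

type Registry struct {
	mu           sync.Mutex
	removedHosts map[string]time.Time // host ID -> removal time
}

func (r *Registry) RemoveHost(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.removedHosts == nil {
		r.removedHosts = map[string]time.Time{}
	}
	r.removedHosts[id] = time.Now()
}

// ApplyHostReport reports whether an incoming report should be accepted;
// reports from recently removed hosts are rejected.
func (r *Registry) ApplyHostReport(id string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	removedAt, blocked := r.removedHosts[id]
	if blocked && time.Since(removedAt) < removedHostTTL {
		return false
	}
	delete(r.removedHosts, id) // TTL expired or never blocked: accept again
	return true
}

// AllowHostReenroll clears the block so the agent may report again.
func (r *Registry) AllowHostReenroll(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.removedHosts, id)
}
```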
Two fixes for missing recovery/resolved notifications:
1. API config PUT handler now preserves notifyOnResolve when the client
omits it from the request body. Go decodes a missing bool as false,
which silently disabled recovery notifications on older clients.
2. CancelAlert now always cleans up the cooldown record even when the
alert has already left the pending buffer, preventing stale cooldown
entries from suppressing future alert cycles.
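A sketch of fix 1, using a pointer field to distinguish an omitted value from an explicit false; the struct and function names are illustrative:

```go
package api

import "encoding/json"

type alertConfig struct {
	NotifyOnResolve bool `json:"notifyOnResolve"`
}

// alertConfigPatch decodes the PUT body; a nil pointer means the client
// omitted the field entirely.
type alertConfigPatch struct {
	NotifyOnResolve *bool `json:"notifyOnResolve"`
}

func applyConfigUpdate(current alertConfig, body []byte) (alertConfig, error) {
	var patch alertConfigPatch
	if err := json.Unmarshal(body, &patch); err != nil {
		return current, err
	}
	if patch.NotifyOnResolve != nil {
		current.NotifyOnResolve = *patch.NotifyOnResolve
	}
	// Omitted field: keep the existing value instead of Go's zero-value false.
	return current, nil
}
```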
Move the guest-agent file-read of /proc/meminfo earlier in the memory
fallback chain so it runs before RRD, giving real-time MemAvailable that
correctly excludes reclaimable buff/cache on Linux VMs. Also add
VM.GuestAgent.FileRead permission for PVE 9 and fix install.sh to use
comma-separated privilege strings.
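A sketch of the parsing step only; fetching /proc/meminfo itself goes through the guest agent's file-read call, which is not shown here. /proc/meminfo reports values in kB lines such as "MemAvailable:  123456 kB":

```go
package meminfo

import (
	"bufio"
	"strconv"
	"strings"
)

// parseMemAvailable returns MemAvailable in bytes and whether it was found.
func parseMemAvailable(procMeminfo string) (uint64, bool) {
	sc := bufio.NewScanner(strings.NewReader(procMeminfo))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != "MemAvailable:" {
			continue
		}
		kb, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			return 0, false
		}
		return kb * 1024, true
	}
	return 0, false
}
```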
Two root causes: (1) When Proxmox cluster/resources returns a partial
response (e.g. during migration or a transient API issue), VMs missing
from a responsive node were silently dropped because the node appeared
in nodesWithResources, bypassing grace-period preservation. Now
preserves recently seen guests from online nodes for up to the grace
window. (2) The task queue allowed overlapping polls for the same PVE
instance, so a slower stale poll could overwrite a newer, complete VM
list. Added a per-instance execution lock to skip duplicate scheduled tasks.
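A sketch of the per-instance lock from point (2), with illustrative names; a duplicate scheduled task simply returns instead of racing the in-flight poll:

```go
package polling

import "sync"

type scheduler struct {
	mu       sync.Mutex
	inFlight map[string]bool // PVE instance name -> poll currently running
}

// tryAcquire reports whether the caller may poll the instance right now.
func (s *scheduler) tryAcquire(instance string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.inFlight == nil {
		s.inFlight = map[string]bool{}
	}
	if s.inFlight[instance] {
		return false // a poll for this instance is already running
	}
	s.inFlight[instance] = true
	return true
}

func (s *scheduler) release(instance string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.inFlight, instance)
}

func (s *scheduler) pollInstance(instance string, poll func()) {
	if !s.tryAcquire(instance) {
		return // duplicate scheduled task; skip instead of overlapping
	}
	defer s.release(instance)
	poll()
}
```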
The 10ms goroutine drain pause was insufficient under full parallel
test suite load, causing intermittent failures in
TestPulseMonitorOnlySkipsDispatchButRetainsAlert.
FreeBSD disk discovery now falls back to scanning /dev for ada*, da*,
nvd*, nda*, and other FreeBSD disk names when kern.disks misses them.
The probe order prefers the correct device type first (sat for ada, nvme
for nvd). Standby disks are preserved as valid results instead of
being dropped.
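A rough sketch of the /dev fallback and the probe-order preference; the partition heuristic and helper names are illustrative, not the real implementation:

```go
package disks

import (
	"path/filepath"
	"strings"
)

// Whole-disk node patterns: ATA (ada), SCSI/USB (da), and NVMe (nvd, nda).
var freebsdDiskPatterns = []string{"/dev/ada*", "/dev/da*", "/dev/nvd*", "/dev/nda*"}

// scanDevForDisks returns disk names found under /dev that kern.disks missed.
func scanDevForDisks(known map[string]bool) []string {
	var extra []string
	for _, pattern := range freebsdDiskPatterns {
		matches, _ := filepath.Glob(pattern)
		for _, dev := range matches {
			name := filepath.Base(dev)
			if isPartition(name) || known[name] {
				continue
			}
			known[name] = true
			extra = append(extra, name)
		}
	}
	return extra
}

// isPartition is a rough heuristic: whole disks end in their unit number
// (ada0, da1, nvd0); partitions and slices append p<N> or s<N> after it.
func isPartition(name string) bool {
	trimmed := strings.TrimRight(name, "0123456789")
	return strings.HasSuffix(trimmed, "p") || strings.HasSuffix(trimmed, "s")
}

// preferredSmartType picks which smartctl device type to probe first.
func preferredSmartType(name string) string {
	switch {
	case strings.HasPrefix(name, "nvd"), strings.HasPrefix(name, "nda"):
		return "nvme"
	case strings.HasPrefix(name, "ada"):
		return "sat"
	default:
		return "auto"
	}
}
```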
ZFS zvols (zd*), device-mapper, virtio disks, and other virtual block
devices don't support SMART and were being reported as FAILED. Use lsblk
JSON metadata to filter by device prefix, transport, subsystem, and
vendor/model. Also treat missing smart_status as unknown rather than
failed, and ignore UNKNOWN health in Patrol/AI signals.
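A sketch of the filter (the structure and the exact rules are illustrative); lsblk's JSON output supplies the name, transport, subsystem chain, and vendor/model strings used to rule out devices that can never answer SMART:

```go
package smart

import (
	"encoding/json"
	"os/exec"
	"strings"
)

type lsblkDevice struct {
	Name       string `json:"name"`
	Tran       string `json:"tran"`
	Subsystems string `json:"subsystems"`
	Vendor     string `json:"vendor"`
	Model      string `json:"model"`
}

type lsblkOutput struct {
	Blockdevices []lsblkDevice `json:"blockdevices"`
}

// Name prefixes for block devices that never speak SMART.
var virtualPrefixes = []string{"zd", "dm-", "loop", "md", "ram", "vd"}

func isVirtualDevice(d lsblkDevice) bool {
	for _, p := range virtualPrefixes {
		if strings.HasPrefix(d.Name, p) {
			return true
		}
	}
	if d.Tran == "" && strings.Contains(d.Subsystems, "virtio") {
		return true
	}
	vm := strings.ToLower(d.Vendor + " " + d.Model)
	return strings.Contains(vm, "qemu") || strings.Contains(vm, "virtual")
}

// physicalDisks shells out to lsblk and keeps only devices worth probing.
func physicalDisks() ([]lsblkDevice, error) {
	out, err := exec.Command("lsblk", "-J", "-d", "-o", "NAME,TRAN,SUBSYSTEMS,VENDOR,MODEL").Output()
	if err != nil {
		return nil, err
	}
	var parsed lsblkOutput
	if err := json.Unmarshal(out, &parsed); err != nil {
		return nil, err
	}
	var disks []lsblkDevice
	for _, d := range parsed.Blockdevices {
		if !isVirtualDevice(d) {
			disks = append(disks, d)
		}
	}
	return disks, nil
}
```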
If the license save failed, the in-memory license was cleared, which
could drop a valid existing license. Now snapshots the current license
before activation and restores it if persistence fails.
Two nodes in the same PVE cluster generated identical Proxmox API token
names, so the second node's setup rotated the shared token and broke the
first node. Include the hostname in the token name so each node gets its
own token. Also refresh the stored cluster credential on the server when
a new endpoint merges into an existing cluster entry.
A broken or hung qemu-agent on one VM could stall the entire polling
loop, preventing higher-VMID VMs from being detected. Wrap all guest
agent work in a 10s per-VM budget with panic recovery, and add a 2s
timeout to GetVMStatus in the efficient poller to match the legacy path.
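A sketch of the per-VM budget, with illustrative names; the real change wraps all guest-agent work for a VM, but the shape is the same: a bounded context plus a recover() so one bad agent cannot stall or crash the loop:

```go
package polling

import (
	"context"
	"log"
	"time"
)

const perVMAgentBudget = 10 * time.Second

func pollGuestAgent(ctx context.Context, vmid int, work func(context.Context) error) {
	ctx, cancel := context.WithTimeout(ctx, perVMAgentBudget)
	defer cancel()

	done := make(chan struct{})
	go func() {
		defer close(done)
		defer func() {
			if r := recover(); r != nil {
				log.Printf("guest agent poll for VM %d panicked: %v", vmid, r)
			}
		}()
		if err := work(ctx); err != nil {
			log.Printf("guest agent poll for VM %d failed: %v", vmid, err)
		}
	}()

	select {
	case <-done:
	case <-ctx.Done():
		// Budget exhausted: move on to the next VM; the goroutine exits once
		// its API calls observe the cancelled context.
		log.Printf("guest agent poll for VM %d exceeded %s budget", vmid, perVMAgentBudget)
	}
}
```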
The /api/auto-register endpoint returned a generic "Invalid or expired
setup code" for all auth failures, making cluster registration issues
impossible to diagnose. Now returns specific errors for expired tokens,
wrong scope, invalid API tokens, etc.
Also extend the setup token grace window to /api/auto-register so
multiple cluster nodes can register with the same token within the
1-minute grace period after first use.
The auto-update flow stops the Pulse service before applying updates.
If the update fails, the rollback path restored files but never
restarted the service. Since the main unit was explicitly stopped
(not crashed), systemd's Restart=always didn't rescue it.
Add restart-on-failure guards to both pulse-auto-update.sh and
install.sh so Pulse is always restarted after a failed update attempt.
Normalize SystemSettingsMonitor interface assignments via reflect to
prevent typed-nil-in-interface (the same class of bug as the #1324 fix). Also add
defer/recover to the background OIDC token refresh goroutine so a
panic there cannot take down the process.
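A sketch of the reflect-based normalization as a hypothetical generic helper, instantiated with the interface type being assigned (e.g. SystemSettingsMonitor); the idea is to collapse a typed-nil pointer to a true nil before it lands in an interface field:

```go
package settings

import "reflect"

// normalizeNil must be instantiated with the *interface* type being assigned
// (e.g. normalizeNil[SystemSettingsMonitor](m)); it returns a true nil
// interface when m wraps a nil pointer, map, slice, channel, or function,
// so later `== nil` checks behave as expected.
func normalizeNil[T any](v T) T {
	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func:
		if rv.IsNil() {
			var zero T
			return zero
		}
	}
	return v
}
```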
SystemSettingsHandler.mtMonitor was an interface field. A nil
*MultiTenantMonitor stored in it became a non-nil interface
(Go typed-nil-in-interface), bypassing the nil guard in getMonitor()
and panicking on every settings save in single-tenant mode.
Change mtMonitor to the concrete *monitoring.MultiTenantMonitor type so nil
checks work correctly. Also resolve getMonitor() once per request
instead of calling it repeatedly, eliminating a TOCTOU race.
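For readers unfamiliar with the trap, a self-contained demonstration (stand-in types, not the real Pulse ones) of why the nil guard was bypassed:

```go
package main

import "fmt"

type Monitor interface{ Snapshot() string }

type multiTenantMonitor struct{ name string }

// Snapshot would dereference a nil receiver if called through the trap below.
func (m *multiTenantMonitor) Snapshot() string { return m.name }

func main() {
	var concrete *multiTenantMonitor // nil pointer

	var viaInterface Monitor = concrete // interface now holds (type, nil)
	fmt.Println(viaInterface == nil)    // false: a `== nil` guard is bypassed

	// Keeping the field as the concrete pointer type keeps nil checks honest.
	fmt.Println(concrete == nil) // true
}
```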
Add a singleton watchdog with a lock directory, pidfile tracking, and signal
traps to prevent multiple pulse-agent instances from spawning on QNAP.
Tighten procfs matching to avoid killing unrelated processes.
The applyAuthContextHeaders early-return in CheckAuth skipped the OIDC
token refresh block, causing long-lived OIDC sessions to expire instead
of auto-refreshing. Move the refresh trigger into extractAndStoreAuthContext
so it fires at the middleware level before CheckAuth's early return.
Also add a nil guard on mtPersistence in AISettingsHandler.GetAIService
for non-default org paths, preventing a potential panic if background
code carries a non-default org context in v5 single-tenant mode.
The single-tenant lockdown (499ab812e) set mtPersistence to nil but
only patched AISettingsHandler with a legacy fallback. AIHandler (chat
service) and ConfigProfileHandler were missed, so AI features (Patrol,
Chat) failed with "chat service not available" and config profiles
would panic on nil dereference. Wire legacy persistence into both
handlers and add the same fallback to ProfileSuggestionHandler.
Fixes #1322