Three follow-up fixes:
1. RestartAIChat() now performs the full post-start wiring (MCP providers,
patrol adapter, investigation orchestrator) when the service starts for
the first time via Restart(). Previously these were only wired via
StartAIChat(), leaving first-time configure with a partially wired service.
2. The Ollama→OpenAI-compatible fallback in createProviderForModel is now
guarded by !strings.HasPrefix(modelStr, "ollama:") so explicit
"ollama:llama3" models are never silently rerouted to a different provider.
3. Windows install script registration check now uses the $Hostname override
(if set) instead of always looking up $env:COMPUTERNAME, so post-install
verification works correctly when a custom hostname is specified.
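The prefix guard in fix 2 can be sketched as below. Names (`providerFor`, the return strings) are illustrative stand-ins, not the actual Pulse API; the point is that an explicit "ollama:" prefix must win over any fallback routing.

```go
package main

import (
	"fmt"
	"strings"
)

// providerFor is a hypothetical sketch of the guard described above: only
// fall back to an OpenAI-compatible provider when the user did NOT
// explicitly request Ollama via a model prefix.
func providerFor(modelStr string, ollamaConfigured bool) string {
	if strings.HasPrefix(modelStr, "ollama:") || ollamaConfigured {
		return "ollama" // explicit prefix (or configured Ollama) is never rerouted
	}
	// Bare model names may be routed to a custom OpenAI-compatible endpoint.
	return "openai-compatible"
}

func main() {
	fmt.Println(providerFor("ollama:llama3", false)) // explicit prefix wins
	fmt.Println(providerFor("qwen3-omni", false))    // bare name falls back
}
```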
When Pulse starts before AI is configured, legacyService is nil.
Saving AI settings called Restart() which bailed immediately on the
nil check, leaving the service unstarted (503 on /api/ai/sessions)
until a full process restart.
Merged the nil and !IsRunning checks so first-time configure now
starts the service inline, same as the already-handled stopped case.
Also: bare model names that ParseModelString routes to Ollama (e.g.
"qwen3-omni") now fall back to a configured custom OpenAI base URL
when Ollama is not explicitly configured — handles manually-typed
model names on self-hosted OpenAI-compatible endpoints.
Fixes #1339, #1296
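The merged guard can be sketched as follows. The types and field names here are illustrative, assuming a manager wrapping a restartable chat service; the point is that "never started" (nil) and "stopped" take the same start-inline path.

```go
package main

import "fmt"

// Illustrative stand-ins for the real service types.
type chatService struct{ running bool }

func (s *chatService) IsRunning() bool { return s.running }

type manager struct{ svc *chatService }

// Restart treats a nil service the same as a stopped one, so first-time
// configure starts inline. Previously the nil case returned early here,
// leaving /api/ai/sessions at 503 until a full process restart.
func (m *manager) Restart() {
	if m.svc == nil || !m.svc.IsRunning() {
		m.svc = &chatService{running: true}
		return
	}
	m.svc.running = false // stop the running instance…
	m.svc.running = true  // …then start it again
}

func main() {
	m := &manager{} // AI not yet configured: service is nil
	m.Restart()
	fmt.Println(m.svc.IsRunning()) // started inline instead of bailing
}
```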
Host agents removed from the UI would reappear on the next report cycle
because there was no rejection mechanism — unlike Docker agents which
already had resurrection prevention. Mirror the Docker agent pattern:
- Track removed host IDs in a `removedHosts` map with 24hr TTL
- Persist removal records in `State.RemovedHosts` for frontend display
- Reject reports from removed hosts in `ApplyHostReport()`
- Add `AllowHostReenroll()` + API route to clear the block
- Show removed host agents in the Settings UI with "Allow re-enroll"
- Sync removed-agent maps from state on startup for all agent types
- Fix mock integration snapshot missing `RemovedDockerHosts` field
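The resurrection-prevention pattern above can be sketched minimally as a TTL map consulted on every report. All names here are illustrative, not the actual Pulse types.

```go
package main

import (
	"fmt"
	"time"
)

// hostRegistry sketches the removed-host tracking described above,
// mirroring the Docker-agent pattern.
type hostRegistry struct {
	removed map[string]time.Time // host ID -> removal time
	ttl     time.Duration
}

func newHostRegistry() *hostRegistry {
	return &hostRegistry{removed: make(map[string]time.Time), ttl: 24 * time.Hour}
}

func (r *hostRegistry) Remove(id string) { r.removed[id] = time.Now() }

// AllowReenroll clears the block, like the AllowHostReenroll() route.
func (r *hostRegistry) AllowReenroll(id string) { delete(r.removed, id) }

// Accept is the ApplyHostReport() guard: reject reports from hosts removed
// within the TTL window.
func (r *hostRegistry) Accept(id string) bool {
	t, ok := r.removed[id]
	if !ok {
		return true
	}
	if time.Since(t) > r.ttl {
		delete(r.removed, id) // TTL expired: the block ages out
		return true
	}
	return false
}

func main() {
	reg := newHostRegistry()
	reg.Remove("host-1")
	fmt.Println(reg.Accept("host-1")) // blocked: report rejected
	reg.AllowReenroll("host-1")
	fmt.Println(reg.Accept("host-1")) // unblocked: report accepted
}
```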
Two fixes for missing recovery/resolved notifications:
1. API config PUT handler now preserves notifyOnResolve when the client
omits it from the request body. Go decodes a missing bool as false,
which silently disabled recovery notifications on older clients.
2. CancelAlert now always cleans up the cooldown record even when the
alert has already left the pending buffer, preventing stale cooldown
entries from suppressing future alert cycles.
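The omitted-bool problem in fix 1 is the standard Go JSON gotcha: decoding into a plain bool cannot distinguish an explicit `false` from an absent field. A minimal sketch of the preserve-when-omitted fix, with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A *bool field stays nil when the client omits it, so the handler can
// tell "not sent" apart from "sent as false".
type configPatch struct {
	NotifyOnResolve *bool `json:"notifyOnResolve"`
}

// apply returns the effective setting after a PUT body is decoded.
func apply(stored bool, body []byte) bool {
	var p configPatch
	if err := json.Unmarshal(body, &p); err != nil {
		return stored
	}
	if p.NotifyOnResolve == nil {
		return stored // field omitted: preserve the existing value
	}
	return *p.NotifyOnResolve
}

func main() {
	fmt.Println(apply(true, []byte(`{}`)))                         // preserved
	fmt.Println(apply(true, []byte(`{"notifyOnResolve": false}`))) // explicit off
}
```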
Move the guest-agent file-read of /proc/meminfo earlier in the memory
fallback chain so it runs before RRD, giving real-time MemAvailable that
correctly excludes reclaimable buff/cache on Linux VMs. Also add
VM.GuestAgent.FileRead permission for PVE 9 and fix install.sh to use
comma-separated privilege strings.
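Reading MemAvailable directly is what makes this fallback more accurate than RRD: unlike MemFree, it already excludes reclaimable buff/cache. A sketch of the parse step, run here against sample /proc/meminfo content rather than the live file:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// memAvailableKB extracts the MemAvailable value (in kB) from /proc/meminfo
// content. Returns false when the field is absent (kernels before 3.14).
func memAvailableKB(meminfo string) (int64, bool) {
	sc := bufio.NewScanner(strings.NewReader(meminfo))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "MemAvailable:") {
			continue
		}
		fields := strings.Fields(line) // ["MemAvailable:", "<kB>", "kB"]
		if len(fields) < 2 {
			return 0, false
		}
		kb, err := strconv.ParseInt(fields[1], 10, 64)
		return kb, err == nil
	}
	return 0, false
}

func main() {
	sample := "MemTotal:       16384000 kB\nMemFree:         1024000 kB\nMemAvailable:    8192000 kB\n"
	kb, ok := memAvailableKB(sample)
	fmt.Println(kb, ok)
}
```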
If license save fails, the in-memory license was being cleared, which
could drop a valid existing license. Now snapshots the current license
before activation and restores it if persistence fails.
Two nodes in the same PVE cluster generated identical Proxmox API token
names, so the second node's setup rotated the shared token and broke the
first node. Include the hostname in the token name so each node gets its
own token. Also refresh the stored cluster credential on the server when
a new endpoint merges into an existing cluster entry.
The /api/auto-register endpoint returned a generic "Invalid or expired
setup code" for all auth failures, making cluster registration issues
impossible to diagnose. Now returns specific errors for expired tokens,
wrong scope, invalid API tokens, etc.
Also extend the setup token grace window to /api/auto-register so
multiple cluster nodes can register with the same token within the
1-minute grace period after first use.
Normalize SystemSettingsMonitor interface assignments via reflect to
prevent typed-nil-in-interface (same class as #1324 fix). Also add
defer/recover to the background OIDC token refresh goroutine so a
panic there cannot take down the process.
SystemSettingsHandler.mtMonitor was an interface field. A nil
*MultiTenantMonitor stored in it became a non-nil interface
(Go typed-nil-in-interface), bypassing the nil guard in getMonitor()
and panicking on every settings save in single-tenant mode.
Change mtMonitor to concrete *monitoring.MultiTenantMonitor so nil
checks work correctly. Also resolve getMonitor() once per request
instead of repeated calls to eliminate a TOCTOU race.
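The typed-nil-in-interface gotcha behind both of these commits can be demonstrated in a few lines; the reflect-based check is the normalization trick the earlier commit applies. Type names here are simplified stand-ins.

```go
package main

import (
	"fmt"
	"reflect"
)

type monitor interface{ Name() string }

type multiTenantMonitor struct{}

func (m *multiTenantMonitor) Name() string { return "mt" }

// isNil detects typed nils: an interface holding a nil concrete pointer is
// itself non-nil, so a plain `== nil` guard is silently bypassed.
func isNil(i interface{}) bool {
	if i == nil {
		return true
	}
	v := reflect.ValueOf(i)
	switch v.Kind() {
	case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func:
		return v.IsNil()
	}
	return false
}

func main() {
	var mt *multiTenantMonitor // nil concrete pointer
	var iface monitor = mt     // stored in an interface: becomes non-nil
	fmt.Println(iface == nil)  // false: the naive guard passes nil through
	fmt.Println(isNil(iface))  // true: reflect sees the typed nil
}
```

Storing the concrete `*MultiTenantMonitor` type instead of the interface sidesteps the problem entirely, which is why the second commit changes the field type rather than adding more guards.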
The applyAuthContextHeaders early-return in CheckAuth skipped the OIDC
token refresh block, causing long-lived OIDC sessions to expire instead
of auto-refreshing. Move the refresh trigger into extractAndStoreAuthContext
so it fires at the middleware level before CheckAuth's early return.
Also add a nil guard on mtPersistence in AISettingsHandler.GetAIService
for non-default org paths, preventing a potential panic if background
code carries a non-default org context in v5 single-tenant mode.
The single-tenant lockdown (499ab812e) set mtPersistence to nil but
only patched AISettingsHandler with a legacy fallback. AIHandler (chat
service) and ConfigProfileHandler were missed, so AI features (Patrol,
Chat) failed with "chat service not available" and config profiles
would panic on nil dereference. Wire legacy persistence into both
handlers and add the same fallback to ProfileSuggestionHandler.
Fixes #1322
The alert callback logged at Info level for every alert regardless of
whether patrol was enabled. TriggerPatrolForAlert already has an
enabled/running guard and its own debug logging, so the per-alert Info
log was redundant.
When FetchFingerprint fails during agent auto-registration, set verifySSL
based on whether a fingerprint was captured rather than hardcoding true.
Also heal already-broken nodes (verifySSL=true with empty fingerprint) on
legacy re-register to prevent permanent connection failures with self-signed
Proxmox certs.
The legacy auto-register endpoint captured TLS fingerprints via
FetchFingerprint() but never persisted them to the node config. Nodes
with self-signed certs registered via the agent would fail with
"x509: certificate signed by unknown authority" on subsequent polls.
Store the fingerprint in all add/update paths for both PVE and PBS,
guard updates against empty-fingerprint clobber when FetchFingerprint
fails, and pass the fingerprint to cluster detection configs.
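The empty-fingerprint clobber guard is the key subtlety: a failed FetchFingerprint yields an empty string, which must not wipe a fingerprint stored earlier. A minimal sketch, with an illustrative config type:

```go
package main

import "fmt"

type nodeConfig struct{ Fingerprint string }

// updateFingerprint only overwrites the stored TLS fingerprint when the
// fetch actually captured one; an empty result (fetch failure) keeps the
// existing pin instead of clobbering it.
func updateFingerprint(cfg *nodeConfig, fetched string) {
	if fetched == "" {
		return
	}
	cfg.Fingerprint = fetched
}

func main() {
	cfg := &nodeConfig{Fingerprint: "ab:cd"}
	updateFingerprint(cfg, "") // FetchFingerprint failed
	fmt.Println(cfg.Fingerprint)
	updateFingerprint(cfg, "ef:01") // a real capture still updates
	fmt.Println(cfg.Fingerprint)
}
```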
HandleUndismissFinding now checks both patrol and unified stores
before returning. Returns 404 with error message when the finding
is not found or not dismissed, instead of silently returning success.
Add an actions menu to the hosts overview with a "Remove host from
Pulse" button. Includes permission checks (requires settings:write
scope), confirmation handling, and a security regression test for
the delete endpoint scope enforcement.
The Undismiss() method existed on FindingsStore but was never exposed
via the API. Users who dismissed findings as "not_an_issue" had no way
to revert them.
- Add HandleUndismissFinding handler and route
- Add Undismiss() to UnifiedStore for parity with FindingsStore
- Also remove matching explicit suppression rules on undismiss
Auto-register was running the monitor reload in a background goroutine,
so the HTTP response was sent before the poller picked up the new node.
If reload failed or was slow, the node appeared in Settings > Proxmox
(reads config from disk) but not on the main Proxmox tab (reads from
active polling state).
Changed both auto-register paths to reload synchronously, matching the
manual add path (HandleAddNode).
Patrol runs, evaluation passes, and QuickAnalysis calls were consuming
LLM tokens without recording them in the cost store. This made the
cost_budget_usd_30d budget setting ineffective since enforceBudget()
never saw patrol spend.
- Add RecordUsage() to ai.Service for thread-safe cost recording
- Add recordPatrolUsage() helper to PatrolService, called on both
success and error paths for main patrol and evaluation pass
- Record QuickAnalysis token usage in cost store
- Return partial PatrolResponse (with token counts) on error instead
of nil, so callers can always record consumed tokens
- Propagate partial response through chat_service_adapter on error
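The thread-safe recording in the first bullet can be sketched as a mutex-guarded store that every spend path funnels through, so budget enforcement always sees the full total. Names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// costStore sketches a RecordUsage()-style accumulator: patrol runs,
// evaluation passes, and QuickAnalysis calls all record here, on both
// success and error paths.
type costStore struct {
	mu     sync.Mutex
	tokens int64
}

func (c *costStore) RecordUsage(inTok, outTok int64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.tokens += inTok + outTok
}

func (c *costStore) Total() int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.tokens
}

func main() {
	var cs costStore
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // concurrent patrol/eval/analysis callers
		wg.Add(1)
		go func() {
			defer wg.Done()
			cs.RecordUsage(10, 5)
		}()
	}
	wg.Wait()
	fmt.Println(cs.Total()) // 1500: no spend is lost to races
}
```

Returning a partial response with token counts on error (last two bullets) is what lets the error path call RecordUsage at all; a nil return would discard the counts the LLM already consumed.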
Expose PublicURL from runtime config in the system settings API response
so the frontend displays the actual value instead of the placeholder when
the env var is set.
Add w-full to PVE, PBS, and PMG node tables so they expand to fill the
container in full-width mode.
The printf '%s\n' calls in shell code within the Go Sprintf template
were being counted as format verbs, causing a build failure (10 verbs
but 9 args). Using %%s produces literal %s in the output.
The PVE setup script had three bugs in the temperature monitoring SSH key setup:
- Nested double quotes in SSH_SENSORS_KEY_ENTRY broke the bash string, causing
"No such file or directory" errors for the key options
- The grep/mv pattern to update authorized_keys destroyed the symlink that
Proxmox maintains from /root/.ssh/authorized_keys to /etc/pve/priv/
- The uninstall path grepped for "# pulse-managed-key" but keys were tagged
"# pulse-sensors", so uninstall never cleaned up sensor keys
Fixes: resolve symlinks with readlink -f before operating, create temp files in
/tmp with mv-then-cp fallback for cross-device moves, escape inner quotes, and
broaden the uninstall filter to match all pulse-prefixed keys.
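The symlink-safe rewrite can be sketched as below, run here against a throwaway sandbox rather than a real Proxmox host. The paths and key comments are illustrative; the shape matches the fixes above: resolve with readlink -f, build in /tmp, mv with a cp fallback, and filter on the broadened "# pulse-" tag.

```shell
#!/bin/sh
set -eu

# Sandbox standing in for /etc/pve/priv and /root/.ssh on a PVE node.
WORK=$(mktemp -d)
mkdir -p "$WORK/priv" "$WORK/ssh"
printf '%s\n' \
    'ssh-ed25519 AAAA root@pve' \
    'ssh-ed25519 BBBB pulse # pulse-sensors' \
    > "$WORK/priv/authorized_keys"
ln -s "$WORK/priv/authorized_keys" "$WORK/ssh/authorized_keys"

AUTH_KEYS="$WORK/ssh/authorized_keys"
# Resolve the symlink first so we rewrite the real file and keep the link.
REAL_FILE=$(readlink -f "$AUTH_KEYS")

TMP=$(mktemp)
# Broadened filter: drop every pulse-prefixed tag, not just one variant.
grep -v '# pulse-' "$REAL_FILE" > "$TMP" || true

# mv fails across filesystems (e.g. /tmp -> a cluster fs); fall back to cp.
mv "$TMP" "$REAL_FILE" 2>/dev/null || { cp "$TMP" "$REAL_FILE" && rm -f "$TMP"; }

[ -L "$AUTH_KEYS" ] && echo "symlink preserved"
! grep -q '# pulse-' "$REAL_FILE" && echo "sensor keys removed"
grep -q 'AAAA' "$REAL_FILE" && echo "unmanaged key kept"
```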