QNAP's autorun.sh fires well before encrypted data volumes are
unlocked, so the previous one-line entry that invoked
start-pulse-agent.sh on the encrypted volume failed immediately —
the wrapper did not exist yet, and the agent never started after
reboot.
Replace the entry with a backgrounded waiter that polls for the
wrapper (every 2 s, up to 30 min) and execs it once the volume
comes up. On unencrypted volumes the loop exits on the first
check, so behaviour is unchanged. A timeout message is logged to
/var/log/pulse-agent.log if the volume never unlocks within the
window. The block is uninstall-safe: no internal blank lines, so
the existing sed marker-to-blank-line range still removes it
cleanly.
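The waiter described above can be sketched as a small POSIX-sh function. Names and the exact log wording here are illustrative, not the shipped script; the real block is inlined in autorun.sh with no internal blank lines so the uninstall sed range still matches.

```shell
#!/bin/sh
# Sketch of the backgrounded waiter logic (illustrative helper name).
wait_for_wrapper() {
  wrapper=$1 tries=$2 delay=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    if [ -x "$wrapper" ]; then
      "$wrapper"   # autorun.sh uses `exec "$wrapper"` inside the subshell
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "$(date) pulse-agent: volume not unlocked in time" >> /var/log/pulse-agent.log
  return 1
}
# Boot-time usage, backgrounded so autorun.sh itself is never blocked:
#   ( wait_for_wrapper "$QNAP_VOL/.pulse-agent/start-pulse-agent.sh" 900 2 ) &
# 900 polls x 2 s = the 30 min window; on unencrypted volumes the first check hits.
```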
On QNAP, /usr/local/bin is a tiny RAM disk that gets wiped on every
reboot. The install wrapper stores the real binary under
${QNAP_VOL}/.pulse-agent/<name> and a boot script copies it back into
/usr/local/bin. Without refreshing the stored copy, auto-updates applied
to the RAM disk were silently reverted on the next reboot.
Mirror the Unraid persistence pattern: after the atomic in-place swap,
when running on QNAP, rewrite the stored binary via a temp-file rename.
Skip the rewrite when the running binary is already the persistent copy
(fallback mode), since the rename step has already updated it in place.
The host-side identifier path applies sanitizeDockerHostSuffix before
storing Host.ID, while the docker-side path uses the raw AgentKey(). For a QNAP
unified agent those two derivations can produce different IDs, so the
UnifiedAgents merge keyed on d.id === h.id split the single install
into two rows.
Add a 1:1 hostname fallback: if exactly one unmerged host row and one
unmerged docker row share the same hostname, merge them. The strict
1:1 constraint prevents distinct machines that happen to share a
hostname from being collapsed together.
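The fallback can be sketched as follows; the Row shape and function name are illustrative stand-ins for the actual UnifiedAgents merge code, which carries more fields.

```typescript
interface Row { id: string; hostname: string; }

function mergeRows(hosts: Row[], dockers: Row[]): Array<[Row, Row]> {
  const merged: Array<[Row, Row]> = [];
  const unmergedHosts: Row[] = [];
  const dockerById = new Map<string, Row>();
  for (const d of dockers) dockerById.set(d.id, d);
  const usedDocker = new Set<string>();
  // Primary merge: exact ID match (d.id === h.id).
  for (const h of hosts) {
    const d = dockerById.get(h.id);
    if (d) {
      merged.push([h, d]);
      usedDocker.add(d.id);
    } else {
      unmergedHosts.push(h);
    }
  }
  const unmergedDockers = dockers.filter((d) => !usedDocker.has(d.id));
  // Fallback: merge ONLY when exactly one row remains on each side and the
  // hostnames match, so distinct machines sharing a hostname never collapse.
  if (
    unmergedHosts.length === 1 &&
    unmergedDockers.length === 1 &&
    unmergedHosts[0].hostname === unmergedDockers[0].hostname
  ) {
    merged.push([unmergedHosts[0], unmergedDockers[0]]);
  }
  return merged;
}
```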
Dashboard's group-level <For> iterated over Object.entries(groupedGuests()).sort(...),
which produces brand-new tuple arrays on every refresh. Solid's <For> diffs by
reference, so every tick it destroyed and recreated all child rows — wiping out
GuestDrawer's activeTab signal (snapping Discovery back to Overview), graph
hover tooltips, and scroll position inside the expanded row.
Iterate over a memoized array of instance-ID strings instead. Primitive equality
keeps the outer For stable, so only the guest data inside each group updates
on each tick and the drawer's local state survives.
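The difference can be illustrated in plain TypeScript (helper names are mine, and the reference-keyed diff below is a stand-in for what Solid's <For> does internally): fresh tuple arrays from Object.entries are never === across ticks, while primitive string keys are.

```typescript
type Grouped = Record<string, unknown[]>;

// Old shape: fresh [key, values] tuple arrays on every call.
function entryItems(grouped: Grouped): Array<[string, unknown[]]> {
  return Object.entries(grouped).sort(([a], [b]) => a.localeCompare(b));
}

// New shape: instance-ID strings, stable under primitive equality.
function keyItems(grouped: Grouped): string[] {
  return Object.keys(grouped).sort();
}

// Stand-in for a reference-keyed diff: count items reused from last tick.
function reused<T>(prev: readonly T[], next: readonly T[]): number {
  const seen = new Set(prev);
  return next.filter((x) => seen.has(x)).length;
}
```

In the component, the memoized key array feeds the outer <For>, and each row looks its group's guests up by ID, so refreshes update data without recreating rows.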
Fixes #1427
The infra discovery service auto-started with a hardcoded 5-minute
ticker the moment the AI service initialized, regardless of the user's
Patrol schedule. Each tick called AnalyzeForDiscovery, which hit the
Ollama chat endpoint and reset Ollama's keep_alive (5 min default), so
the model never had a chance to unload between requests.
Default the discovery interval to 24h and align it with the user's
Patrol preset (GetPatrolInterval) when the AI service constructs the
discovery service. With Patrol at its 6h default, the LLM now sits idle
long enough for Ollama to release it.
Fixes #1425
Apply quiet-hours suppression to escalation notifications so offline and other suppressed categories do not bypass the normal notification rules during escalation.
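A minimal sketch of the gate, with assumed names (not Pulse's actual types): the escalation path now runs the same quiet-hours suppression check as the initial send.

```go
package main

type notification struct {
	category   string
	escalation bool
}

// allowed is called for both initial sends and escalations; previously
// escalations bypassed the quiet-hours check.
func allowed(n notification, inQuietHours bool, suppressed map[string]bool) bool {
	if inQuietHours && suppressed[n.category] {
		return false
	}
	return true
}
```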
Fixes #1398
Prefer Podman's reported CPU percentage from the compat stats payload and fall back to Podman's wall-clock calculation instead of Docker's multi-core normalization.
Fixes #1391
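The preference order can be sketched as below. The struct is an assumed stand-in for the parsed stats payload, not the actual compat JSON field names; the point is that the fallback divides CPU time by elapsed wall-clock time rather than applying Docker's multi-core normalization.

```go
package main

type cpuStats struct {
	PercentFromPodman float64 // hypothetical: percentage Podman already computed
	CPUDelta          uint64  // container CPU ns consumed since the last sample
	WallClockDelta    uint64  // wall-clock ns elapsed since the last sample
}

func cpuPercent(s cpuStats) float64 {
	if s.PercentFromPodman > 0 {
		return s.PercentFromPodman // trust Podman's own figure when present
	}
	if s.WallClockDelta == 0 {
		return 0
	}
	// Wall-clock fallback: CPU ns per elapsed ns, NOT multiplied by the
	// online-CPU count the way Docker's calculation is.
	return float64(s.CPUDelta) / float64(s.WallClockDelta) * 100
}
```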