Commit graph

80 commits

Author SHA1 Message Date
rcourtman 830215e4c3 Bill quickstart by Patrol execution 2026-04-03 19:00:40 +01:00
rcourtman 0d25939921 Use a Pulse-owned alias for hosted quickstart models 2026-04-03 10:44:58 +01:00
rcourtman 57cc212f34 Replace Patrol quickstart with server bootstrap 2026-04-02 23:15:06 +01:00
rcourtman 73597f8b1a Forward-port Ollama runtime auth continuity 2026-04-01 14:38:39 +01:00
rcourtman 21fa343fa1 Enable structured AI auto-recovery paths 2026-03-31 09:24:56 +01:00
rcourtman 984bc7c636 Normalize API-backed AI read routing hints 2026-03-31 08:56:22 +01:00
rcourtman c1509103f8 Fix VMware assistant read-only guidance 2026-03-31 00:00:32 +01:00
rcourtman ac9375a34b Tighten VMware control wording boundaries 2026-03-30 23:47:38 +01:00
rcourtman dd5f099cda Lock VMware phase-1 exclusion integrity 2026-03-30 23:42:32 +01:00
rcourtman 16b9e079a6 Implement VMware assistant mention floor 2026-03-30 22:44:34 +01:00
rcourtman 861ac9ab4d fix(ai): use canonical app-container mentions 2026-03-30 10:18:07 +01:00
rcourtman 56c14ca19f feat(ai): add canonical truenas app config reads 2026-03-29 20:36:43 +01:00
rcourtman 298b23626b feat(ai): add canonical truenas app log reads 2026-03-29 20:13:39 +01:00
rcourtman b0ba88d541 feat(ai): add canonical truenas app control 2026-03-29 19:50:31 +01:00
rcourtman 82b24f5d90 Harden AI storage leaf path handling 2026-03-29 13:35:32 +01:00
rcourtman d6536932fc Harden outbound URLs and file-backed storage 2026-03-29 12:47:55 +01:00
rcourtman 90bef80aa5 test(ai): keep recovery storage visible through prompt filtering 2026-03-27 08:43:18 +00:00
rcourtman e8d2d59226 test(ai): cover patrol recovery storage fallbacks 2026-03-27 08:23:14 +00:00
rcourtman fa98e1c6d7 test(ai): cover recovery storage tool calls through agentic chat 2026-03-26 23:25:26 +00:00
rcourtman 5e158d144c test(ai): prove recovery storage tool fallbacks through service 2026-03-26 23:17:22 +00:00
rcourtman 2afb96ee13 fix(release): align api and hostagent rc contracts 2026-03-26 17:08:48 +00:00
rcourtman 2617bb795b fix(ai): support quickstart explore prepass 2026-03-25 17:52:51 +00:00
Canonical fix: keep the hosted quickstart model valid in the explore pre-pass path as well as the main chat execution path.
rcourtman e7a6e05c63 Trim policy summary duplication 2026-03-19 15:13:01 +00:00
rcourtman 8927ca7b78 Derive cloud summary routing from scope 2026-03-19 15:10:48 +00:00
rcourtman e536c635bf Remove dead cloud raw signals field 2026-03-19 15:07:31 +00:00
rcourtman cfd6eb634f Remove raw signals from policy surface 2026-03-19 15:02:38 +00:00
rcourtman cc806171dc Trim dead resource graph surface 2026-03-19 14:26:30 +00:00
rcourtman aabbd85350 Centralize discovery canonicalization helpers 2026-03-19 05:40:04 +00:00
rcourtman ffb59010db Remove local governed mention shim 2026-03-19 05:26:01 +00:00
rcourtman 345a17c1be Centralize governed mention block formatting 2026-03-19 04:35:37 +00:00
rcourtman 3c063730f1 Centralize governed mention copy 2026-03-19 04:33:21 +00:00
rcourtman ad77ea0302 Centralize governed mention summary gate 2026-03-19 04:31:18 +00:00
rcourtman ab44a5d09b Centralize governed label helpers 2026-03-19 03:17:54 +00:00
rcourtman 30e8b164a8 Centralize chat policy cloning 2026-03-19 03:14:19 +00:00
rcourtman 699a81f7a2 Centralize resource policy cloning 2026-03-19 03:11:38 +00:00
rcourtman 1d1c7bf636 Centralize aiSafeSummary policy decisions 2026-03-19 02:37:13 +00:00
rcourtman b321170da1 Centralize governed mention policy summaries 2026-03-19 02:32:59 +00:00
rcourtman 3c62e8e5f5 Persist action audits through tool executor 2026-03-18 17:35:45 +00:00
rcourtman 778a2577b6 feat: Pulse v6 release 2026-03-18 16:06:30 +00:00
rcourtman ae2edbde20 fix(ai): complete wiring on first-time configure; guard Ollama fallback 2026-03-13 12:06:08 +00:00
Three follow-up fixes:

1. RestartAIChat() now performs the full post-start wiring (MCP providers,
   patrol adapter, investigation orchestrator) when the service starts for
   the first time via Restart(). Previously these were only wired via
   StartAIChat(), leaving first-time configure with a partially wired service.

2. The Ollama→OpenAI-compatible fallback in createProviderForModel is now
   guarded by !strings.HasPrefix(modelStr, "ollama:") so explicit
   "ollama:llama3" models are never silently rerouted to a different provider.

3. Windows install script registration check now uses the $Hostname override
   (if set) instead of always looking up $env:COMPUTERNAME, so post-install
   verification works correctly when a custom hostname is specified.
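The guard in fix 2 can be sketched as follows. This is a hypothetical stand-in for the provider-selection logic in createProviderForModel, not Pulse's actual code; resolveProvider and its signature are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveProvider sketches the fallback rule: a bare model name may fall
// back to a configured OpenAI-compatible endpoint, but an explicit
// "ollama:" prefix pins the model to Ollama and is never rerouted.
func resolveProvider(modelStr, customOpenAIBaseURL string) string {
	if strings.HasPrefix(modelStr, "ollama:") {
		return "ollama" // explicit ollama: models are never silently rerouted
	}
	if customOpenAIBaseURL != "" {
		return "openai-compatible" // bare names may fall back to the custom base URL
	}
	return "ollama"
}

func main() {
	fmt.Println(resolveProvider("ollama:llama3", "http://localhost:1234/v1")) // ollama
	fmt.Println(resolveProvider("qwen3-omni", "http://localhost:1234/v1"))    // openai-compatible
}
```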
rcourtman e137f3fbf7 fix(ai): start chat service on first-time configure without restart 2026-03-13 11:13:27 +00:00
When Pulse starts before AI is configured, legacyService is nil.
Saving AI settings called Restart() which bailed immediately on the
nil check, leaving the service unstarted (503 on /api/ai/sessions)
until a full process restart.

Merged the nil and !IsRunning checks so first-time configure now
starts the service inline, same as the already-handled stopped case.

Also: bare model names that ParseModelString routes to Ollama (e.g.
"qwen3-omni") now fall back to a configured custom OpenAI base URL
when Ollama is not explicitly configured — handles manually-typed
model names on self-hosted OpenAI-compatible endpoints.

Fixes #1339, #1296
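The merged check can be sketched like this. The types and method names here (Manager, chatService) are minimal illustrative stand-ins, assuming a nil legacyService means the service was never started.

```go
package main

import "fmt"

// chatService is a minimal stand-in for the AI chat service.
type chatService struct{ running bool }

func (s *chatService) IsRunning() bool { return s.running }

// Manager holds a legacyService pointer that is nil when Pulse starts
// before AI is configured.
type Manager struct{ legacyService *chatService }

// RestartAIChat merges the nil and !IsRunning checks: a never-started
// (nil) service is handled like a stopped one and started inline,
// instead of bailing early on the nil check.
func (m *Manager) RestartAIChat() string {
	if m.legacyService == nil || !m.legacyService.IsRunning() {
		m.legacyService = &chatService{running: true} // start inline
		return "started"
	}
	return "restarted"
}

func main() {
	m := &Manager{} // AI not yet configured: legacyService is nil
	fmt.Println(m.RestartAIChat()) // started
	fmt.Println(m.RestartAIChat()) // restarted
}
```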
rcourtman 82c615b3b9 Filter virtual disks from SMART checks to prevent false positives (#1329) 2026-03-08 22:16:24 +00:00
ZFS zvols (zd*), device-mapper, virtio disks, and other virtual block
devices don't support SMART and were being reported as FAILED. Use lsblk
JSON metadata to filter by device prefix, transport, subsystem, and
vendor/model. Also treat missing smart_status as unknown rather than
failed, and ignore UNKNOWN health in Patrol/AI signals.
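A sketch of that filter, assuming lsblk metadata from something like `lsblk -J -o NAME,TRAN,SUBSYSTEMS,VENDOR`. The prefix and vendor lists below are illustrative examples of the heuristic, not Pulse's exact rules.

```go
package main

import (
	"fmt"
	"strings"
)

// blockDevice mirrors a subset of one lsblk JSON entry.
type blockDevice struct {
	Name       string // e.g. "sda", "zd0", "dm-3", "vda"
	Transport  string // lsblk TRAN, e.g. "sata", "virtio"
	Subsystems string // lsblk SUBSYSTEMS chain
	Vendor     string
}

// isVirtualDisk reports whether a device should be skipped for SMART
// checks: virtual block devices don't support SMART and would otherwise
// show up as FAILED.
func isVirtualDisk(d blockDevice) bool {
	// Name prefixes: ZFS zvols (zd*), device-mapper (dm-*), loop, md, virtio (vd*).
	for _, prefix := range []string{"zd", "dm-", "loop", "md", "vd"} {
		if strings.HasPrefix(d.Name, prefix) {
			return true
		}
	}
	// Transport / subsystem hints for paravirtual disks.
	if d.Transport == "virtio" || strings.Contains(d.Subsystems, "virtio") {
		return true
	}
	// Vendor strings typical of hypervisor-backed disks.
	if v := strings.ToUpper(d.Vendor); strings.Contains(v, "QEMU") || strings.Contains(v, "VMWARE") {
		return true
	}
	return false
}

func main() {
	fmt.Println(isVirtualDisk(blockDevice{Name: "zd0"}))                    // true
	fmt.Println(isVirtualDisk(blockDevice{Name: "sda", Transport: "sata"})) // false
}
```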
rcourtman d46b5fc84b fix(ai): route OpenRouter slash-delimited models to OpenAI provider (#1296) 2026-03-01 22:29:45 +00:00
createProviderForModel() only handled "provider:model" colon format.
Models like "google/gemini-2.5-flash" or "google/gemini-2.0-flash:free"
(OpenRouter format) failed because the colon split produced invalid
provider names.

Now uses config.ParseModelString() which correctly detects slash-
delimited models as OpenRouter (routed via OpenAI-compatible API).
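The routing rule can be sketched as below: the slash check must run before splitting on ":", otherwise a variant suffix like ":free" yields a bogus provider name. detectProvider is an illustrative stand-in, not the real config.ParseModelString.

```go
package main

import (
	"fmt"
	"strings"
)

// detectProvider sketches the order-of-checks fix: slash-delimited names
// are OpenRouter models (served via the OpenAI-compatible API); only
// slash-free names are split on ":" as "provider:model".
func detectProvider(modelStr string) string {
	if strings.Contains(modelStr, "/") {
		// "google/gemini-2.5-flash" or "google/gemini-2.0-flash:free"
		return "openrouter"
	}
	if provider, _, ok := strings.Cut(modelStr, ":"); ok {
		return provider // colon format, e.g. "ollama:llama3" -> "ollama"
	}
	return "default"
}

func main() {
	fmt.Println(detectProvider("google/gemini-2.0-flash:free")) // openrouter
	fmt.Println(detectProvider("ollama:llama3"))                // ollama
}
```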
rcourtman d852964696 fix(ai): record patrol and QuickAnalysis token usage in cost store for budget enforcement 2026-03-01 19:19:47 +00:00
Patrol runs, evaluation passes, and QuickAnalysis calls were consuming
LLM tokens without recording them in the cost store. This made the
cost_budget_usd_30d budget setting ineffective since enforceBudget()
never saw patrol spend.

- Add RecordUsage() to ai.Service for thread-safe cost recording
- Add recordPatrolUsage() helper to PatrolService, called on both
  success and error paths for main patrol and evaluation pass
- Record QuickAnalysis token usage in cost store
- Return partial PatrolResponse (with token counts) on error instead
  of nil, so callers can always record consumed tokens
- Propagate partial response through chat_service_adapter on error
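The record-on-both-paths pattern can be sketched like this. CostStore, RecordUsage, and runPatrol are illustrative shapes assumed from the commit message, not Pulse's real types.

```go
package main

import (
	"fmt"
	"sync"
)

// CostStore accumulates token usage under a mutex so concurrent patrol,
// evaluation, and QuickAnalysis calls can record safely.
type CostStore struct {
	mu          sync.Mutex
	totalTokens int64
}

// RecordUsage is thread-safe cost recording, as the commit adds to ai.Service.
func (c *CostStore) RecordUsage(inputTokens, outputTokens int64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.totalTokens += inputTokens + outputTokens
}

func (c *CostStore) Total() int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.totalTokens
}

// runPatrol sketches the "record on both paths" rule: even on error a
// partial response with token counts comes back, so usage is always
// recorded before the error is returned.
func runPatrol(store *CostStore) error {
	inTokens, outTokens, err := int64(1200), int64(300), error(nil) // pretend LLM call
	store.RecordUsage(inTokens, outTokens)                          // success AND error paths record
	return err
}

func main() {
	store := &CostStore{}
	_ = runPatrol(store)
	fmt.Println(store.Total()) // 1500
}
```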
rcourtman 24f5b1cb31 fix(patrol): cap per-run tokens and reset patrol session history 2026-02-24 11:29:47 +00:00
rcourtman 8bb89c4031 test: add memory regression coverage for AI stores 2026-02-04 19:56:12 +00:00
rcourtman 8720708e70 fix: address AI patrol concurrency and streaming issues 2026-02-03 14:39:00 +00:00
- HIGH: Create per-request AgenticLoop instead of sharing one across
  concurrent sessions. This prevents race conditions where ExecuteStream
  calls would overwrite each other's FSM, knowledge accumulator, and
  other session-specific state.

- MEDIUM: TriggerManager.GetStatus now recomputes adaptive interval after
  pruning old events. Previously, currentInterval could remain stuck in
  busy/quiet mode after events aged out of the window.

- MEDIUM: Patrol stream phases are now broadcast to subscribers. Fixed
  setStreamPhase() to emit phase events and SubscribeToStream() to send
  phase events to late joiners. UI was stuck on 'Starting patrol...'
  because phase events were never emitted.

- LOW: Fixed TriggerStatus.CurrentInterval JSON serialization. Changed
  from time.Duration (serializes as nanoseconds) to int64 milliseconds
  to match the 'current_interval_ms' tag.
rcourtman a55ae78715 Revert "Add config option to disable tools for OpenAI-compatible endpoints" 2026-02-03 13:26:26 +00:00
This reverts commit 81229f206f.
rcourtman 81229f206f Add config option to disable tools for OpenAI-compatible endpoints 2026-02-03 13:21:44 +00:00
Some local LLM servers (LM Studio, llama.cpp) expose OpenAI-compatible
APIs but don't support function calling. When tools are sent to these
models, they output raw control tokens instead of proper responses.

This change adds:
- openai_tools_disabled config field in AIConfig
- AreToolsDisabledForProvider() method to check at runtime
- API support to get/set the new setting
- Tests for the new functionality

When enabled and using a custom OpenAI base URL, the chat service will
skip sending tools to the model, allowing basic chat functionality to
work even with models that don't support function calling.

Fixes #1154
rcourtman 900e05025a Fix OpenAI-compatible endpoint support for chat 2026-02-03 12:03:06 +00:00
Two issues fixed:

1. Custom base URL wasn't being passed to the OpenAI client in
   createProviderForModel() - requests went to api.openai.com instead
   of the configured endpoint (e.g., LM Studio, llama.cpp)

2. Tool schemas were missing the "properties" field when tools had no
   parameters. OpenAI API requires "properties" to always be present
   as an object, even if empty.

Fixes #1154