Model name detection used substring matching (.includes('gemini')) which
falsely required Gemini provider config for OpenRouter model IDs like
"google/gemini-2.5-flash". Now only known provider prefixes are treated
as explicit delimiters, slash-containing names route to OpenAI (OpenRouter
convention), and colons in model names (e.g. "llama3.2:latest") are no
longer misinterpreted as provider prefixes.
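The routing rules above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, the provider set, and the return shape are all hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// knownProviders is a hypothetical set of explicit provider prefixes;
// the real list would live in the provider registry.
var knownProviders = map[string]bool{
	"openai": true, "anthropic": true, "gemini": true, "ollama": true,
}

// detectProvider sketches the fixed routing: only a known "provider:"
// prefix is treated as an explicit selector, slash-containing IDs follow
// the OpenRouter convention and route to OpenAI, and colons with unknown
// prefixes (Ollama tags like "llama3.2:latest") are left alone.
func detectProvider(model string) (provider, name string) {
	if i := strings.Index(model, ":"); i > 0 {
		if prefix := model[:i]; knownProviders[prefix] {
			return prefix, model[i+1:]
		}
		// Colon with an unknown prefix: a model tag, not a provider.
	}
	if strings.Contains(model, "/") {
		// OpenRouter-style ID such as "google/gemini-2.5-flash".
		return "openai", model
	}
	return "", model
}

func main() {
	fmt.Println(detectProvider("google/gemini-2.5-flash"))
	fmt.Println(detectProvider("llama3.2:latest"))
	fmt.Println(detectProvider("gemini:gemini-2.5-flash"))
}
```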
Some local LLM servers (LM Studio, llama.cpp) expose OpenAI-compatible
APIs but don't support function calling. When tools are sent to these
models, they output raw control tokens instead of proper responses.
This change adds:
- openai_tools_disabled config field in AIConfig
- AreToolsDisabledForProvider() method to check at runtime
- API support to get/set the new setting
- Tests for the new functionality
When enabled and using a custom OpenAI base URL, the chat service will
skip sending tools to the model, allowing basic chat functionality to
work even with models that don't support function calling.
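A sketch of the runtime check, under assumptions: the field names and the exact gating condition (OpenAI provider, custom base URL, opt-out flag set) are my reading of the description above, not the confirmed implementation.

```go
package main

import "fmt"

// AIConfig sketch: only the two fields relevant here; the real struct
// carries many more settings.
type AIConfig struct {
	OpenAIBaseURL       string `json:"openai_base_url,omitempty"`
	OpenAIToolsDisabled bool   `json:"openai_tools_disabled,omitempty"`
}

// AreToolsDisabledForProvider reports whether tool definitions should be
// omitted from chat requests. Hypothetical rule: tools are only skipped
// for the OpenAI provider when a custom base URL (a local server such as
// LM Studio or llama.cpp) is configured and the user opted out.
func (c *AIConfig) AreToolsDisabledForProvider(provider string) bool {
	return provider == "openai" && c.OpenAIToolsDisabled && c.OpenAIBaseURL != ""
}

func main() {
	cfg := &AIConfig{
		OpenAIBaseURL:       "http://localhost:1234/v1",
		OpenAIToolsDisabled: true,
	}
	fmt.Println(cfg.AreToolsDisabledForProvider("openai"))
}
```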
Fixes #1154
The "Every" dropdown on the Patrol page was not being respected. Setting
15 min would show "Runs every 6 hours" and the countdown timer was wrong.
Root cause: PatrolSchedulePreset and PatrolIntervalMinutes had omitempty
JSON tags. When the API handler cleared the preset to "", json.Marshal
dropped the field. On reload, NewDefaultAIConfig() re-introduced "6hr"
as the preset, which took priority over the user's custom minutes.
Additional fixes in the same area:
- Track nextScheduledAt explicitly in the patrol loop so next_patrol_at
reflects the actual ticker schedule, not a stale lastPatrol + interval
calculation that diverges when the interval changes mid-cycle.
- Refetch patrol status in the frontend after an interval change so the
countdown timer updates immediately.
- Seed lastPatrol from persisted run history on startup so the header
countdown timer appears immediately after a backend restart.
- Remove deprecated config.ModelInfo type (use providers.ModelInfo)
- Remove deprecated GetAvailableModels function (always returned nil)
- Remove associated test
- Update AISettingsResponse to use providers.ModelInfo
When collecting reports, the runtime re-detection was passing RuntimeAuto
instead of the user's configured preference. This caused the runtime to
flip from podman back to docker on systems like CoreOS, where podman
provides a docker-compatible socket at /var/run/docker.sock.
Now the current runtime (set at init from user's --docker-runtime flag)
is passed as the preference, preventing spurious runtime switching.
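The behavior can be sketched as below. The function and constants are hypothetical stand-ins for the real runtime package; the point is only the preference-passing logic.

```go
package main

import "fmt"

// Runtime identifiers; the real code would use the container runtime
// package's own constants.
type Runtime string

const (
	RuntimeAuto   Runtime = "auto"
	RuntimeDocker Runtime = "docker"
	RuntimePodman Runtime = "podman"
)

// detectRuntime is a hypothetical sketch of re-detection. With
// RuntimeAuto, a docker-compatible socket (as podman provides on CoreOS
// at /var/run/docker.sock) wins and the runtime flips to docker; passing
// the currently configured runtime keeps the user's choice sticky.
func detectRuntime(preferred Runtime, dockerSocketPresent bool) Runtime {
	if preferred != RuntimeAuto {
		return preferred
	}
	if dockerSocketPresent {
		return RuntimeDocker
	}
	return RuntimePodman
}

func main() {
	// Before the fix: RuntimeAuto was passed during report collection.
	fmt.Println(detectRuntime(RuntimeAuto, true))
	// After the fix: the current runtime is passed as the preference.
	fmt.Println(detectRuntime(RuntimePodman, true))
}
```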
Related to #1022
Users who haven't enabled AI were seeing AI patrol findings from
heuristic analysis that they couldn't dismiss (dismissal is license-gated).
- IsPatrolEnabled() now checks if Enabled is true
- IsAlertTriggeredAnalysisEnabled() also checks Enabled
- Updated tests to reflect new behavior
AI patrol and alert-triggered analysis require AI to be enabled
as a master switch. This prevents confusing UX where users see
AI features without having configured them.
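The master-switch gating can be sketched as follows. Field and method names follow the description above, but the struct is a simplification of the real AIConfig.

```go
package main

import "fmt"

// Simplified AIConfig: Enabled acts as the master switch for all AI
// features; the per-feature flags only take effect when it is set.
type AIConfig struct {
	Enabled       bool
	PatrolEnabled bool
	AlertAnalysis bool
}

// IsPatrolEnabled now requires the master switch in addition to the
// patrol-specific flag.
func (c *AIConfig) IsPatrolEnabled() bool {
	return c.Enabled && c.PatrolEnabled
}

// IsAlertTriggeredAnalysisEnabled is gated the same way.
func (c *AIConfig) IsAlertTriggeredAnalysisEnabled() bool {
	return c.Enabled && c.AlertAnalysis
}

func main() {
	cfg := &AIConfig{Enabled: false, PatrolEnabled: true}
	// Patrol stays off until AI itself is enabled.
	fmt.Println(cfg.IsPatrolEnabled())
}
```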