Commit graph

9 commits

rcourtman
af712006c9 fix(ai): allow Gemini and other models via OpenRouter without false provider warning (#1296)
Model name detection used substring matching (.includes('gemini')) which
falsely required Gemini provider config for OpenRouter model IDs like
"google/gemini-2.5-flash". Now only known provider prefixes are treated
as explicit delimiters, slash-containing names route to OpenAI (OpenRouter
convention), and colons in model names (e.g. "llama3.2:latest") are no
longer misinterpreted as provider prefixes.
2026-02-26 20:49:10 +00:00
rcourtman
a55ae78715 Revert "Add config option to disable tools for OpenAI-compatible endpoints"
This reverts commit 81229f206f.
2026-02-03 13:26:26 +00:00
rcourtman
81229f206f Add config option to disable tools for OpenAI-compatible endpoints
Some local LLM servers (LM Studio, llama.cpp) expose OpenAI-compatible
APIs but don't support function calling. When tools are sent to these
models, they output raw control tokens instead of proper responses.

This change adds:
- openai_tools_disabled config field in AIConfig
- AreToolsDisabledForProvider() method to check at runtime
- API support to get/set the new setting
- Tests for the new functionality

When enabled and using a custom OpenAI base URL, the chat service will
skip sending tools to the model, allowing basic chat functionality to
work even with models that don't support function calling.

Fixes #1154
2026-02-03 13:21:44 +00:00
rcourtman
eed80e2883 Fix: patrol interval not applied — omitempty caused preset to persist across reloads
The "Every" dropdown on the Patrol page was not being respected. Setting
15 min would show "Runs every 6 hours" and the countdown timer was wrong.

Root cause: PatrolSchedulePreset and PatrolIntervalMinutes had omitempty
JSON tags. When the API handler cleared the preset to "", json.Marshal
dropped the field. On reload, NewDefaultAIConfig() re-introduced "6hr"
as the preset, which took priority over the user's custom minutes.

Additional fixes in the same area:
- Track nextScheduledAt explicitly in the patrol loop so next_patrol_at
  reflects the actual ticker schedule, not a stale lastPatrol + interval
  calculation that diverges when the interval changes mid-cycle.
- Refetch patrol status in the frontend after an interval change so the
  countdown timer updates immediately.
- Seed lastPatrol from persisted run history on startup so the header
  countdown timer appears immediately after a backend restart.
2026-02-02 22:53:24 +00:00
rcourtman
de2cb7a29b chore: remove deprecated GetAvailableModels and ModelInfo
- Remove deprecated config.ModelInfo type (use providers.ModelInfo)
- Remove deprecated GetAvailableModels function (always returned nil)
- Remove associated test
- Update AISettingsResponse to use providers.ModelInfo
2026-01-24 23:00:16 +00:00
rcourtman
ed78509f92 Fix flaky tests and improve coverage across alerts, api, and config packages
- Fix deadlock and race conditions in internal/alerts
- Add comprehensive error path tests for internal/config
- Fix 401 handling in internal/api
- Fix Docker Swarm task filtering test logic
2026-01-03 18:36:17 +00:00
rcourtman
a47c7803bb fix: Preserve configured runtime preference during report collection
When collecting reports, the runtime re-detection was passing RuntimeAuto
instead of the user's configured preference. This caused podman to switch
back to docker on systems like CoreOS where podman provides a
docker-compatible socket at /var/run/docker.sock.

Now the current runtime (set at init from user's --docker-runtime flag)
is passed as the preference, preventing spurious runtime switching.

Related to #1022
2026-01-03 11:30:25 +00:00
rcourtman
e86998ec58 fix: AI Patrol only runs when AI is enabled. Related to #885
Users who haven't enabled AI were seeing AI patrol findings from
heuristic analysis that they couldn't dismiss (license-gated).

- IsPatrolEnabled() now checks if Enabled is true
- IsAlertTriggeredAnalysisEnabled() also checks Enabled
- Updated tests to reflect new behavior

AI patrol and alert-triggered analysis require AI to be enabled
as a master switch. This prevents confusing UX where users see
AI features without having configured them.
2025-12-24 16:05:07 +00:00
rcourtman
65e38fac91 test: improve test coverage for AI, license, config, and monitoring packages
New test files:
- internal/ai/providers/gemini_test.go: Comprehensive Gemini provider tests
- internal/api/ai_intelligence_handlers_test.go: AI intelligence endpoint tests
- internal/api/ai_patrol_handlers_test.go: AI patrol endpoint tests
- internal/api/license_handlers_test.go: License API handler tests
- internal/api/security_oidc_response_test.go: OIDC response formatting tests
- internal/config/ai_config_test.go: AI configuration function tests
- internal/config/persistence_ai_test.go: AI config persistence tests
- internal/config/persistence_extended_test.go: Extended persistence tests
- internal/license/persistence_test.go: License persistence tests
- internal/license/pubkey_test.go: Public key handling tests
- internal/monitoring/host_agent_temps_test.go: Temperature processing tests

Enhanced existing files:
- internal/api/updates_test.go: Added update handler tests
- internal/license/license_test.go: Added Service method tests

Coverage improvements:
- ai/providers: 57.3% -> 73.0% (+15.7%)
- license: 78.3% -> 85.9% (+7.6%)
- config: 49.7% -> 53.9% (+4.2%)
- monitoring: 49.8% -> 50.8% (+1.0%)
- api: 28.4% -> 29.8% (+1.4%)
2025-12-19 22:49:30 +00:00