When local LLM servers (LM Studio, llama.cpp) receive tool definitions
but the loaded model doesn't support function calling, they emit internal
control tokens such as <|channel|> and <|im_start|> instead of proper
responses.
This change detects these control tokens during streaming and returns
a clear error message explaining that the model doesn't support function
calling and recommending compatible models (Llama 3.1+, Mistral, Qwen).
This is better than the previous approach of offering a "disable tools"
option, which would have crippled Pulse Assistant/Patrol functionality.
Users must switch to a compatible model for the AI features to work properly.
Related to #1154
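A minimal sketch of the detection idea in Go; the names (controlTokens, checkChunk) and the exact token list are illustrative assumptions, not the shipped implementation:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// controlTokens are chat-template markers that indicate the model is
// leaking its internal formatting instead of producing a real completion.
// Hypothetical list; the real detector may match more or fewer tokens.
var controlTokens = []string{"<|channel|>", "<|im_start|>", "<|im_end|>"}

// errNoFunctionCalling is the clear error surfaced to the user.
var errNoFunctionCalling = errors.New(
	"model does not support function calling; try a compatible model such as Llama 3.1+, Mistral, or Qwen")

// checkChunk inspects one streamed chunk and fails fast on control tokens.
func checkChunk(chunk string) error {
	for _, tok := range controlTokens {
		if strings.Contains(chunk, tok) {
			return errNoFunctionCalling
		}
	}
	return nil
}

func main() {
	if err := checkChunk("<|channel|>analysis<|im_start|>assistant"); err != nil {
		fmt.Println("stream aborted:", err)
	}
}
```

A real detector would also need to buffer across chunk boundaries, since a control token can arrive split between two streamed chunks.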
Provider updates across all supported backends:
- Anthropic: Better tool call handling
- OpenAI: Improved response parsing
- Gemini: Enhanced compatibility
- Ollama: Local model support improvements
Includes test updates for OpenAI provider.
- Update AI providers and tests for context/tenant awareness
- Refactor tool executor for multi-tenant state handling
- Add new tests for Docker control and update tools
- Add comprehensive tests for internal/api/config_handlers.go (Phases 1-3)
- Improve test coverage for AI tools, chat service, and session management
- Enhance alert and notification tests (ResolvedAlert, Webhook)
- Add frontend unit tests for utils (searchHistory, tagColors, temperature, url)
- Add proximity client API tests
Users providing base URLs like "https://openrouter.ai/api/v1" were
getting HTML error responses because the client used the URL directly
without appending "/chat/completions".
- Normalize baseURL in NewOpenAIClient to ensure it ends with /chat/completions (see the sketch after this list)
- Fix modelsEndpoint() to derive /models from the normalized baseURL
- Add tests for URL normalization with various endpoint formats
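For illustration, a minimal Go sketch of the normalization rules described above; normalizeBaseURL and modelsEndpoint here are stand-ins for the logic inside the client, not its actual code:

```go
package main

import (
	"fmt"
	"strings"
)

const chatPath = "/chat/completions"

// normalizeBaseURL ensures the URL ends with /chat/completions,
// tolerating a trailing slash and an already-complete URL.
func normalizeBaseURL(baseURL string) string {
	u := strings.TrimRight(baseURL, "/")
	if strings.HasSuffix(u, chatPath) {
		return u
	}
	return u + chatPath
}

// modelsEndpoint derives the /models URL from the normalized chat URL.
func modelsEndpoint(normalized string) string {
	return strings.TrimSuffix(normalized, chatPath) + "/models"
}

func main() {
	n := normalizeBaseURL("https://openrouter.ai/api/v1")
	fmt.Println(n)                 // https://openrouter.ai/api/v1/chat/completions
	fmt.Println(modelsEndpoint(n)) // https://openrouter.ai/api/v1/models
}
```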
The DisableDockerUpdateActions setting was being saved to disk but not
updated in h.config, causing the UI toggle to appear to revert on page
refresh since the API returned the stale runtime value.
Related to #1023
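A minimal sketch of the fix pattern, with hypothetical names (Handlers, saveConfig) standing in for the real handler and persistence code:

```go
package main

import "fmt"

type Config struct {
	DisableDockerUpdateActions bool
}

type Handlers struct {
	config *Config // the runtime value the API serves back to the UI
}

// saveConfig stands in for the real write-to-disk path.
func saveConfig(c *Config) error { return nil }

// setDisableDockerUpdateActions persists the new value AND updates the
// in-memory config, so a page refresh reads the fresh value instead of
// the stale one that made the toggle appear to revert.
func (h *Handlers) setDisableDockerUpdateActions(v bool) error {
	updated := *h.config
	updated.DisableDockerUpdateActions = v
	if err := saveConfig(&updated); err != nil {
		return err
	}
	h.config.DisableDockerUpdateActions = v // the previously missing step
	return nil
}

func main() {
	h := &Handlers{config: &Config{}}
	_ = h.setDisableDockerUpdateActions(true)
	fmt.Println(h.config.DisableDockerUpdateActions) // true
}
```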
Adds RequestTimeoutSeconds to AI config (default 300s / 5 min).
Users with low-power hardware running Ollama can increase this
value in Settings to prevent timeouts on slower inference.
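A minimal sketch of how the setting might be resolved and applied; the AIConfig struct and requestTimeout helper are assumptions for illustration, not the project's actual types:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

type AIConfig struct {
	RequestTimeoutSeconds int // 0 means "use the default"
}

const defaultRequestTimeout = 300 * time.Second // 5 minutes

// requestTimeout resolves the configured timeout, falling back to the default.
func (c AIConfig) requestTimeout() time.Duration {
	if c.RequestTimeoutSeconds > 0 {
		return time.Duration(c.RequestTimeoutSeconds) * time.Second
	}
	return defaultRequestTimeout
}

func main() {
	cfg := AIConfig{RequestTimeoutSeconds: 600} // raised for slow local inference
	ctx, cancel := context.WithTimeout(context.Background(), cfg.requestTimeout())
	defer cancel()
	fmt.Println(ctx.Err()) // <nil> until the deadline passes
}
```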
- Add integration tests for Ollama provider (17 tests against real API)
- Add unit tests for baseline, correlation, patterns, memory, knowledge, cost packages
- Add context formatter and builder tests
- Add factory tests for provider initialization
- Add Makefile targets: test-integration, test-all
- Clean up test theatre (removed struct field tests)
Integration tests require Ollama at OLLAMA_URL (default: 192.168.0.124:11434)
Run with: make test-integration
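A minimal sketch of how an integration test can be gated behind a build tag and the OLLAMA_URL variable; the file name, package, and reachability guard shown here are assumptions, not the project's actual tests:

```go
// ollama_integration_test.go (hypothetical)
//go:build integration

package ollama_test

import (
	"net/http"
	"os"
	"testing"
)

func TestOllamaReachable(t *testing.T) {
	url := os.Getenv("OLLAMA_URL")
	if url == "" {
		url = "http://192.168.0.124:11434" // default from the notes above
	}
	// Ollama's root endpoint responds with "Ollama is running".
	resp, err := http.Get(url)
	if err != nil {
		t.Skipf("Ollama not reachable at %s: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status %d from %s", resp.StatusCode, url)
	}
}
```

Under this layout, the test-integration target would presumably pass -tags integration to go test, keeping these tests out of the default test run.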