Enhance devcontainer and CI workflows

- Add persistent volume mounts for Go/npm caches (faster rebuilds)
- Add shell config with helpful aliases and custom prompt
- Add comprehensive devcontainer documentation
- Add pre-commit hooks for Go formatting and linting
- Use go-version-file in CI workflows instead of hardcoded versions
- Simplify docker compose commands with --wait flag
- Add gitignore entries for devcontainer auth files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rcourtman 2026-01-01 22:29:15 +00:00
parent cb99673b7c
commit 3fdf753a5b
106 changed files with 1648 additions and 1373 deletions

.devcontainer/.bashrc (new file, 47 lines)

@ -0,0 +1,47 @@
# Pulse Dev Container Shell Configuration
# Better prompt showing git branch and mock mode
parse_git_branch() {
git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}
get_mock_status() {
if [ -f /workspaces/pulse/mock.env ] && grep -q "PULSE_MOCK_MODE=true" /workspaces/pulse/mock.env 2>/dev/null; then
echo " [MOCK]"
fi
}
export PS1='\[\033[01;32m\]\u@pulse-dev\[\033[00m\]:\[\033[01;34m\]\w\[\033[33m\]$(parse_git_branch)\[\033[35m\]$(get_mock_status)\[\033[00m\]\$ '
# Useful aliases
alias pd='cd /workspaces/pulse && ./scripts/hot-dev.sh'
alias ptest='go test ./...'
alias plint='golangci-lint run ./...'
alias pfmt='gofmt -w -s .'
alias plog='tail -f /tmp/pulse-dev.log'
alias mock-on='cd /workspaces/pulse && npm run mock:on'
alias mock-off='cd /workspaces/pulse && npm run mock:off'
alias mock-edit='cd /workspaces/pulse && npm run mock:edit'
# Helpful shortcuts
alias ll='ls -lah'
alias gs='git status'
alias gp='git pull'
alias gc='git commit'
# Show helpful info on shell start
echo ""
echo "🚀 Pulse Dev Container"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Shortcuts:"
echo " pd - Start hot-reload dev server"
echo " ptest - Run all Go tests"
echo " plint - Run Go linter"
echo " pfmt - Format Go code"
echo " plog - View dev server logs"
echo " mock-on/off - Toggle mock mode"
echo ""
echo "Debug: Press F5 in VS Code to start debugger"
echo "Tasks: Cmd+Shift+P → 'Tasks: Run Task'"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

.devcontainer/README.md (new file, 264 lines)

@ -0,0 +1,264 @@
# Pulse Dev Container Setup
This dev container provides a complete, reproducible development environment for Pulse with hot-reload, debugging, and testing capabilities.
## What's Included
### Development Tools
- **Go 1.24** - Backend development
- **Node.js 20** - Frontend development
- **gopls v0.17.0** - Go language server
- **Delve** - Go debugger
- **inotify-tools** - File watching for hot reload
- **lsof** - Port management
- **Gemini CLI** - AI-assisted coding
### Features
- ✅ Hot reload for both frontend (Vite) and backend (Go)
- ✅ VS Code debugging with breakpoints
- ✅ Test Explorer integration
- ✅ Pre-commit hooks (formatting, linting)
- ✅ Persistent build caches (fast rebuilds)
- ✅ Mock data mode for safe development
## Quick Start
### First Time Setup
1. **Open in VS Code**:
```bash
# On your Mac, connect to dev-containers VM via Remote-SSH
# File → Open Folder → /root/pulse
```
2. **Reopen in Container**:
- VS Code will prompt: "Reopen in Container"
- Or: Cmd+Shift+P → "Dev Containers: Reopen in Container"
3. **Wait for build** (first time takes ~5 minutes):
- Downloads base image
- Installs Node.js
- Installs Go tools
- Runs npm install
### Daily Development
**Start the dev server:**
```bash
pd # Alias for ./scripts/hot-dev.sh
```
Or use VS Code task: `Cmd+Shift+P` → "Tasks: Run Task" → "Start Pulse Dev Server"
**Access the app:**
- Frontend: http://localhost:7655
- Backend API: http://localhost:7656
- Metrics: http://localhost:9091
## Development Workflows
### Hot Reload
**Frontend (instant):**
- Edit any file in `frontend-modern/src/`
- Save → Browser updates automatically
**Backend (3-5 seconds):**
- Edit any `.go` file
- Save → Terminal shows:
```
🔄 Change detected: yourfile.go
Rebuilding backend...
✓ Build successful, restarting backend...
```
### Debugging
**Debug backend:**
1. Set breakpoints in Go files (click left of line number)
2. Press `F5` or click "Run and Debug" → "Debug Pulse Backend"
3. App starts in debug mode
4. Execution pauses at breakpoints
**Debug tests:**
1. Open a test file
2. Press `F5` → "Debug Current Go Test"
3. Or use Test Explorer sidebar (beaker icon)
### Testing
**Run all tests:**
```bash
ptest # Alias for go test ./...
```
**Run specific test:**
- Use Test Explorer (beaker icon in sidebar)
- Click play button next to test name
- See results inline with code coverage
**Test with coverage:**
```bash
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```
### Mock vs Real Data
**Mock mode (default):**
```bash
mock-on # Enable mock data
mock-edit # Configure number of nodes/VMs
pd # Restart dev server
```
Mock mode creates fake Proxmox nodes/VMs/containers for safe testing without touching real infrastructure.
**Real infrastructure mode:**
```bash
mock-off # Connect to real Proxmox/PBS
pd # Restart dev server
```
⚠️ Use carefully - this connects to the production hosts minipc, debian-go, and pulse-relay.
### Git Workflow
**Pre-commit hooks automatically run:**
- Go code formatting (`gofmt`)
- Go linting (`golangci-lint`)
- Frontend linting (ESLint)
**Make a commit:**
```bash
git add .
git commit -m "Your message" # Hooks run automatically
git push
```
If hooks fail, fix the issues and commit again.
## Useful Aliases
| Alias | Command | Description |
|-------|---------|-------------|
| `pd` | `./scripts/hot-dev.sh` | Start dev server |
| `ptest` | `go test ./...` | Run all tests |
| `plint` | `golangci-lint run ./...` | Run linter |
| `pfmt` | `gofmt -w -s .` | Format Go code |
| `plog` | `tail -f /tmp/pulse-dev.log` | View logs |
| `mock-on` | `npm run mock:on` | Enable mock data |
| `mock-off` | `npm run mock:off` | Use real infrastructure |
| `ll` | `ls -lah` | List files |
| `gs` | `git status` | Git status |
## Persistence & Rebuilds
### What Persists
- ✅ Your code (on VM disk at `/root/pulse`)
- ✅ Git commits and branches
- ✅ Go build cache (Docker volume)
- ✅ npm cache (Docker volume)
- ✅ VS Code extensions
### What Gets Reset on Rebuild
- ❌ Running processes (dev server stops)
- ❌ Terminal history
- ❌ Uncommitted environment changes
### When to Rebuild
**Rebuild needed when:**
- You change `Dockerfile` or `devcontainer.json`
- You want to update base image
- Container is corrupted
**How to rebuild:**
`Cmd+Shift+P` → "Dev Containers: Rebuild Container"
**Rebuilds are fast** (~30 seconds) thanks to:
- Persistent Go build cache
- Persistent npm cache
- Docker layer caching
## Troubleshooting
### Port already in use
```bash
# Kill processes on ports 7655, 7656
lsof -ti:7655,7656 | xargs kill -9
pd # Restart
```
### Out of disk space
```bash
# On dev-containers VM (via SSH)
docker system prune -af --volumes
```
### gopls not working
```bash
# Reinstall Go tools
go install golang.org/x/tools/gopls@v0.17.0
```
### Hot reload not working
Check terminal for file watcher errors. Restart dev server with `Ctrl+C` then `pd`.
### Container won't start
1. Check Docker is running: `docker ps`
2. Check VM has resources: `df -h` and `free -h`
3. Rebuild: `Cmd+Shift+P` → "Rebuild Container"
## Environment Variables
Set in `devcontainer.json` under `containerEnv`:
| Variable | Value | Purpose |
|----------|-------|---------|
| `PULSE_DEV_API_HOST` | `localhost` | Backend API host |
| `FRONTEND_DEV_HOST` | `0.0.0.0` | Frontend bind address |
| `LAN_IP` | `localhost` | LAN IP for URLs |
Custom overrides: Create `.env.devcontainer` (gitignored)
## Resources
- **VM Specs**: 8GB RAM, 30GB disk, 2 CPU cores
- **Base Image**: `golang:1.24` (Ubuntu-based)
- **Caches**: ~2-3GB for Go modules and build artifacts
## Tips & Tricks
1. **Split terminals**: Click + in terminal to run dev server in one, commands in another
2. **Quick test**: Click ▶️ next to test function name in editor
3. **Go to definition**: Cmd+click on function/type
4. **Find references**: Right-click → "Find All References"
5. **Refactor**: Right-click → "Rename Symbol" (renames everywhere)
6. **Format on save**: Already enabled for Go and frontend files
7. **Problems panel**: See all errors/warnings in one place
8. **Git sidebar**: View changes, stage, commit without terminal
## Architecture
```
MacBook (VS Code)
↓ Remote-SSH
dev-containers VM (Proxmox)
↓ Docker
Dev Container
├── Go 1.24 + tools
├── Node 20 + npm
├── Your code (/workspaces/pulse)
├── Hot reload watchers
└── Running dev server
```
The code lives on the VM disk and the container mounts it. VS Code connects remotely and forwards ports to your Mac.
## Next Steps
- Read `CONTRIBUTING.md` for contribution guidelines
- Check `ARCHITECTURE.md` to understand the codebase
- Run `ptest` to ensure all tests pass
- Start coding! Changes hot-reload automatically 🚀


@ -9,6 +9,10 @@
"version": "20"
}
},
"mounts": [
"source=pulse-go-build-cache,target=/go/pkg,type=volume",
"source=pulse-npm-cache,target=/home/vscode/.npm,type=volume"
],
"customizations": {
"vscode": {
"extensions": [
@ -19,7 +23,9 @@
],
"settings": {
"go.gopath": "/go",
"go.toolsManagement.autoUpdate": true
"go.toolsManagement.autoUpdate": true,
"go.useLanguageServer": true,
"go.testExplorer.enable": true
}
}
},
@ -39,6 +45,7 @@
"FRONTEND_DEV_HOST": "0.0.0.0",
"LAN_IP": "localhost"
},
"updateContentCommand": "sudo chown -R vscode:vscode /workspaces/pulse && cd frontend-modern && npm install",
"updateContentCommand": "sudo chown -R vscode:vscode /workspaces/pulse /go /home/vscode/.npm && cd frontend-modern && npm install",
"postCreateCommand": "echo source /workspaces/pulse/.devcontainer/.bashrc >> ~/.bashrc",
"remoteUser": "vscode"
}


@ -49,7 +49,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.24'
go-version-file: go.mod
cache: true
- name: Run backend tests
@ -90,13 +90,7 @@ jobs:
MOCK_RATE_LIMIT: "false"
MOCK_STALE_RELEASE: "false"
run: |
docker-compose -f docker-compose.test.yml up -d
echo "Waiting for mock-github to be healthy..."
timeout 60 sh -c 'until docker inspect --format="{{json .State.Health.Status}}" pulse-mock-github | grep -q "healthy"; do sleep 2; done'
echo "Waiting for pulse-test-server to be healthy..."
timeout 60 sh -c 'until docker inspect --format="{{json .State.Health.Status}}" pulse-test-server | grep -q "healthy"; do sleep 2; done'
docker compose -f docker-compose.test.yml up -d --wait
echo "Verifying Pulse API is reachable..."
timeout 60 sh -c 'until curl -fsS http://localhost:7655/api/health > /dev/null; do sleep 2; done'
@ -107,9 +101,9 @@ jobs:
echo "Running API-level update integration test..."
UPDATE_API_BASE_URL=http://localhost:7655 go test ../../tests/integration/api -run TestUpdateFlowIntegration -count=1
docker-compose -f docker-compose.test.yml down -v
docker compose -f docker-compose.test.yml down -v
- name: Cleanup integration environment
if: always()
working-directory: tests/integration
run: docker-compose -f docker-compose.test.yml down -v || true
run: docker compose -f docker-compose.test.yml down -v || true


@ -38,7 +38,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
go-version-file: go.mod
cache: true
- name: Set up Node.js
@ -82,8 +82,7 @@ jobs:
MOCK_RATE_LIMIT: "false"
MOCK_STALE_RELEASE: "false"
run: |
docker compose -f docker-compose.test.yml up -d
sleep 15 # Wait for services to be ready
docker compose -f docker-compose.test.yml up -d --wait
npx playwright test tests/00-diagnostic.spec.ts --reporter=list,html
UPDATE_API_BASE_URL=http://localhost:7655 go test ../../tests/integration/api -run TestUpdateFlowIntegration -count=1
docker compose -f docker-compose.test.yml down -v

.gitignore (4 lines)

@ -178,3 +178,7 @@ landing-page/
.gemini/
deployment/
start-pulse-agent.sh
# Dev container extension auth
.devcontainer/.claude/
.devcontainer/.config/

.husky/pre-commit (new executable file, 25 lines)

@ -0,0 +1,25 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
# Run Go formatting
echo "Running Go formatter..."
gofmt -w -s .
# Run Go linting (if golangci-lint is available)
if command -v golangci-lint >/dev/null 2>&1; then
echo "Running golangci-lint..."
golangci-lint run ./...
fi
# Run frontend linting (if package.json has lint script)
if [ -f frontend-modern/package.json ]; then
echo "Running frontend linter..."
cd frontend-modern
npm run lint --if-present || true
cd ..
fi
# Stage formatted files
git add -u
echo "Pre-commit checks passed!"


@ -162,7 +162,6 @@ func DefaultPolicy() *CommandPolicy {
`>\s*/dev/sd`,
`>\s*/dev/nvme`,
// System destruction
`shutdown`,
`reboot`,


@ -14,28 +14,28 @@ const (
MsgTypeCommandResult MessageType = "command_result"
// Server -> Agent messages
MsgTypeRegistered MessageType = "registered"
MsgTypePong MessageType = "pong"
MsgTypeExecuteCmd MessageType = "execute_command"
MsgTypeReadFile MessageType = "read_file"
MsgTypeRegistered MessageType = "registered"
MsgTypePong MessageType = "pong"
MsgTypeExecuteCmd MessageType = "execute_command"
MsgTypeReadFile MessageType = "read_file"
)
// Message is the envelope for all WebSocket messages
type Message struct {
Type MessageType `json:"type"`
ID string `json:"id,omitempty"` // Unique message ID for request/response correlation
ID string `json:"id,omitempty"` // Unique message ID for request/response correlation
Timestamp time.Time `json:"timestamp"`
Payload interface{} `json:"payload,omitempty"`
}
// AgentRegisterPayload is sent by agent on connection
type AgentRegisterPayload struct {
AgentID string `json:"agent_id"`
Hostname string `json:"hostname"`
Version string `json:"version"`
Platform string `json:"platform"` // "linux", "windows", "darwin"
Tags []string `json:"tags,omitempty"`
Token string `json:"token"` // API token for authentication
AgentID string `json:"agent_id"`
Hostname string `json:"hostname"`
Version string `json:"version"`
Platform string `json:"platform"` // "linux", "windows", "darwin"
Tags []string `json:"tags,omitempty"`
Token string `json:"token"` // API token for authentication
}
// RegisteredPayload is sent by server after successful registration
@ -48,9 +48,9 @@ type RegisteredPayload struct {
type ExecuteCommandPayload struct {
RequestID string `json:"request_id"`
Command string `json:"command"`
TargetType string `json:"target_type"` // "host", "container", "vm"
TargetType string `json:"target_type"` // "host", "container", "vm"
TargetID string `json:"target_id,omitempty"` // VMID for container/VM
Timeout int `json:"timeout,omitempty"` // seconds, 0 = default
Timeout int `json:"timeout,omitempty"` // seconds, 0 = default
}
// ReadFilePayload is sent by server to request file content
@ -75,10 +75,10 @@ type CommandResultPayload struct {
// ConnectedAgent represents an agent connected via WebSocket
type ConnectedAgent struct {
AgentID string
Hostname string
Version string
Platform string
Tags []string
AgentID string
Hostname string
Version string
Platform string
Tags []string
ConnectedAt time.Time
}


@ -33,21 +33,21 @@ const (
)
var (
maxBinarySizeBytes int64 = maxBinarySize
runtimeGOOS = runtime.GOOS
runtimeGOARCH = runtime.GOARCH
unameCommand = func() ([]byte, error) { return exec.Command("uname", "-m").Output() }
unraidVersionPath = "/etc/unraid-version"
unraidPersistentPathFn = unraidPersistentPath
restartProcessFn = restartProcess
osExecutableFn = os.Executable
evalSymlinksFn = filepath.EvalSymlinks
createTempFn = os.CreateTemp
chmodFn = os.Chmod
renameFn = os.Rename
closeFileFn = func(f *os.File) error { return f.Close() }
readFileFn = os.ReadFile
writeFileFn = os.WriteFile
maxBinarySizeBytes int64 = maxBinarySize
runtimeGOOS = runtime.GOOS
runtimeGOARCH = runtime.GOARCH
unameCommand = func() ([]byte, error) { return exec.Command("uname", "-m").Output() }
unraidVersionPath = "/etc/unraid-version"
unraidPersistentPathFn = unraidPersistentPath
restartProcessFn = restartProcess
osExecutableFn = os.Executable
evalSymlinksFn = filepath.EvalSymlinks
createTempFn = os.CreateTemp
chmodFn = os.Chmod
renameFn = os.Rename
closeFileFn = func(f *os.File) error { return f.Close() }
readFileFn = os.ReadFile
writeFileFn = os.WriteFile
)
// Config holds the configuration for the updater.


@ -191,4 +191,3 @@ func TestAlertConversions_Nil(t *testing.T) {
t.Fatalf("expected empty AlertInfo from nil models alert, got %+v", info)
}
}


@ -198,18 +198,18 @@ func formatTimeAgo(t time.Time) string {
// AlertInvestigationRequest represents a request to investigate an alert
type AlertInvestigationRequest struct {
AlertID string `json:"alert_id"`
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name"`
ResourceType string `json:"resource_type"` // guest, node, storage, docker
AlertType string `json:"alert_type"` // cpu, memory, disk, offline, etc.
Level string `json:"level"` // warning, critical
AlertID string `json:"alert_id"`
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name"`
ResourceType string `json:"resource_type"` // guest, node, storage, docker
AlertType string `json:"alert_type"` // cpu, memory, disk, offline, etc.
Level string `json:"level"` // warning, critical
Value float64 `json:"value"`
Threshold float64 `json:"threshold"`
Message string `json:"message"`
Duration string `json:"duration"` // How long the alert has been active
Node string `json:"node,omitempty"`
VMID int `json:"vmid,omitempty"`
Message string `json:"message"`
Duration string `json:"duration"` // How long the alert has been active
Node string `json:"node,omitempty"`
VMID int `json:"vmid,omitempty"`
}
// GenerateAlertInvestigationPrompt creates a focused prompt for alert investigation


@ -9,7 +9,6 @@ import (
"github.com/rcourtman/pulse-go-rewrite/internal/models"
)
func TestAlertTriggeredAnalyzer_AnalyzeNodeFromAlert(t *testing.T) {
// Create a mock state provider
stateProvider := &mockStateProvider{
@ -439,7 +438,6 @@ func TestAlertTriggeredAnalyzer_AnalyzeGenericResourceFromAlert(t *testing.T) {
}
}
func TestAlertTriggeredAnalyzer_ResourceKeyFromAlert(t *testing.T) {
analyzer := NewAlertTriggeredAnalyzer(nil, nil)
@ -495,7 +493,7 @@ func TestAlertTriggeredAnalyzer_CleanupOldCooldowns(t *testing.T) {
// Add some cooldown entries - one old, one recent
analyzer.mu.Lock()
analyzer.lastAnalyzed["old-resource"] = time.Now().Add(-2 * time.Hour) // > 1 hour old
analyzer.lastAnalyzed["recent-resource"] = time.Now() // Recent
analyzer.lastAnalyzed["recent-resource"] = time.Now() // Recent
analyzer.mu.Unlock()
// Cleanup
@ -829,9 +827,9 @@ func TestAlertTriggeredAnalyzer_AnalyzeResourceByAlert(t *testing.T) {
// Test cpu alert with node resource ID
alertNodeCPU := &alerts.Alert{
ID: "node-cpu",
Type: "cpu",
ResourceID: "cluster/node/pve1",
ID: "node-cpu",
Type: "cpu",
ResourceID: "cluster/node/pve1",
ResourceName: "pve1",
}
findingsNodeCPU := analyzer.analyzeResourceByAlert(context.Background(), alertNodeCPU)


@ -17,11 +17,11 @@ import (
// MetricBaseline represents learned "normal" behavior for a single metric
type MetricBaseline struct {
Mean float64 `json:"mean"` // Average value
StdDev float64 `json:"stddev"` // Standard deviation
Percentiles map[int]float64 `json:"percentiles"` // 5, 25, 50, 75, 95
SampleCount int `json:"sample_count"` // Number of samples used
Mean float64 `json:"mean"` // Average value
StdDev float64 `json:"stddev"` // Standard deviation
Percentiles map[int]float64 `json:"percentiles"` // 5, 25, 50, 75, 95
SampleCount int `json:"sample_count"` // Number of samples used
// Time-of-day patterns (future enhancement)
HourlyMeans [24]float64 `json:"hourly_means,omitempty"`
}
@ -38,12 +38,12 @@ type ResourceBaseline struct {
type Store struct {
mu sync.RWMutex
baselines map[string]*ResourceBaseline // resourceID -> baseline
// Configuration
learningWindow time.Duration // How far back to learn from (default: 7 days)
minSamples int // Minimum samples needed (default: 50)
updateInterval time.Duration // How often to recompute (default: 1 hour)
learningWindow time.Duration // How far back to learn from (default: 7 days)
minSamples int // Minimum samples needed (default: 50)
updateInterval time.Duration // How often to recompute (default: 1 hour)
// Persistence
dataDir string
persistence Persistence
@ -57,22 +57,21 @@ type Persistence interface {
// StoreConfig configures the baseline store
type StoreConfig struct {
LearningWindow time.Duration
MinSamples int
UpdateInterval time.Duration
DataDir string
LearningWindow time.Duration
MinSamples int
UpdateInterval time.Duration
DataDir string
}
// DefaultConfig returns sensible defaults
func DefaultConfig() StoreConfig {
return StoreConfig{
LearningWindow: 14 * 24 * time.Hour, // 14 days to capture weekly patterns
MinSamples: 50,
UpdateInterval: 1 * time.Hour,
LearningWindow: 14 * 24 * time.Hour, // 14 days to capture weekly patterns
MinSamples: 50,
UpdateInterval: 1 * time.Hour,
}
}
// NewStore creates a new baseline store
func NewStore(cfg StoreConfig) *Store {
if cfg.LearningWindow == 0 {
@ -84,7 +83,7 @@ func NewStore(cfg StoreConfig) *Store {
if cfg.UpdateInterval == 0 {
cfg.UpdateInterval = 1 * time.Hour
}
s := &Store{
baselines: make(map[string]*ResourceBaseline),
learningWindow: cfg.LearningWindow,
@ -92,7 +91,7 @@ func NewStore(cfg StoreConfig) *Store {
updateInterval: cfg.UpdateInterval,
dataDir: cfg.DataDir,
}
// Try to load existing baselines from disk
if cfg.DataDir != "" {
if err := s.loadFromDisk(); err != nil {
@ -101,7 +100,7 @@ func NewStore(cfg StoreConfig) *Store {
log.Info().Int("count", len(s.baselines)).Msg("Loaded baselines from disk")
}
}
return s
}
@ -122,13 +121,13 @@ func (s *Store) Learn(resourceID, resourceType, metric string, points []MetricPo
Msg("Insufficient data for baseline learning")
return nil // Not an error, just not enough data yet
}
// Extract values
values := make([]float64, len(points))
for i, p := range points {
values[i] = p.Value
}
// Compute statistics
baseline := &MetricBaseline{
Mean: computeMean(values),
@ -136,10 +135,10 @@ func (s *Store) Learn(resourceID, resourceType, metric string, points []MetricPo
Percentiles: computePercentiles(values),
SampleCount: len(values),
}
s.mu.Lock()
defer s.mu.Unlock()
// Get or create resource baseline
rb, exists := s.baselines[resourceID]
if !exists {
@ -150,10 +149,10 @@ func (s *Store) Learn(resourceID, resourceType, metric string, points []MetricPo
}
s.baselines[resourceID] = rb
}
rb.Metrics[metric] = baseline
rb.LastUpdated = time.Now()
log.Debug().
Str("resource", resourceID).
Str("metric", metric).
@ -161,7 +160,7 @@ func (s *Store) Learn(resourceID, resourceType, metric string, points []MetricPo
Float64("stddev", baseline.StdDev).
Int("samples", baseline.SampleCount).
Msg("Baseline learned")
return nil
}
@ -169,12 +168,12 @@ func (s *Store) Learn(resourceID, resourceType, metric string, points []MetricPo
func (s *Store) GetBaseline(resourceID, metric string) (*MetricBaseline, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
rb, exists := s.baselines[resourceID]
if !exists {
return nil, false
}
mb, exists := rb.Metrics[metric]
return mb, exists
}
@ -183,12 +182,12 @@ func (s *Store) GetBaseline(resourceID, metric string) (*MetricBaseline, bool) {
func (s *Store) GetResourceBaseline(resourceID string) (*ResourceBaseline, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
rb, exists := s.baselines[resourceID]
if !exists {
return nil, false
}
// Return a copy to prevent mutation
copy := &ResourceBaseline{
ResourceID: rb.ResourceID,
@ -209,15 +208,15 @@ func (s *Store) IsAnomaly(resourceID, metric string, value float64) (bool, float
if !ok || baseline.SampleCount < s.minSamples {
return false, 0 // Not enough data to determine
}
// Calculate absolute difference
absDiff := math.Abs(value - baseline.Mean)
// Don't flag small absolute changes as anomalies
if absDiff < 3.0 {
return false, 0
}
if baseline.StdDev == 0 {
// No variance - only flag if change is significant (> 5 percentage points)
if absDiff > 5.0 {
@ -225,19 +224,19 @@ func (s *Store) IsAnomaly(resourceID, metric string, value float64) (bool, float
}
return false, 0
}
// Apply minimum stddev floor
effectiveStdDev := baseline.StdDev
if effectiveStdDev < 1.0 {
effectiveStdDev = 1.0
}
zScore := (value - baseline.Mean) / effectiveStdDev
// Consider anything > 2 standard deviations as anomalous
// (covers ~95% of normal distribution)
isAnomaly := math.Abs(zScore) > 2.0
return isAnomaly, zScore
}
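The thresholds in `IsAnomaly` can be condensed into a standalone sketch: ignore changes under 3 percentage points, handle the zero-variance case with a 5-point cutoff, floor the stddev at 1.0, and flag |z| > 2. This is an illustrative re-implementation of the logic above, not the package's exported function.

```go
package main

import (
	"fmt"
	"math"
)

// isAnomaly mirrors the decision flow in the diff: (1) small absolute
// changes are never anomalies, (2) zero historical variance falls back
// to an absolute 5-point threshold, (3) otherwise a z-score with a
// floored stddev is compared against 2 standard deviations.
func isAnomaly(mean, stddev, value float64) (bool, float64) {
	absDiff := math.Abs(value - mean)
	if absDiff < 3.0 {
		return false, 0
	}
	if stddev == 0 {
		return absDiff > 5.0, 0
	}
	if stddev < 1.0 {
		stddev = 1.0 // floor prevents tiny variance from inflating z
	}
	z := (value - mean) / stddev
	return math.Abs(z) > 2.0, z
}

func main() {
	ok1, z1 := isAnomaly(40, 5, 55) // z = 3.0, flagged
	ok2, z2 := isAnomaly(40, 5, 42) // diff of 2pp, ignored
	fmt.Println(ok1, z1, ok2, z2)   // true 3 false 0
}
```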
@ -258,10 +257,10 @@ func (s *Store) CheckAnomaly(resourceID, metric string, value float64) (AnomalyS
if !ok || baseline.SampleCount < s.minSamples {
return AnomalyNone, 0, nil
}
// Calculate absolute difference for threshold checks
absDiff := math.Abs(value - baseline.Mean)
// Handle zero stddev case more intelligently
// When values have been completely stable, small variations aren't anomalies
if baseline.StdDev == 0 {
@ -275,23 +274,23 @@ func (s *Store) CheckAnomaly(resourceID, metric string, value float64) (AnomalyS
// since we don't have historical variance data to judge severity
return AnomalyMedium, 0, baseline
}
// Apply minimum stddev floor to prevent tiny variations from appearing extreme
// If historical stddev is < 1%, use 1% as the floor for z-score calculation
effectiveStdDev := baseline.StdDev
if effectiveStdDev < 1.0 {
effectiveStdDev = 1.0
}
zScore := (value - baseline.Mean) / effectiveStdDev
absZ := math.Abs(zScore)
// Also require a minimum absolute difference for practical significance
// Don't flag anomalies for changes < 3 percentage points regardless of z-score
if absDiff < 3.0 {
return AnomalyNone, zScore, baseline
}
var severity AnomalySeverity
switch {
case absZ < 2.0:
@ -305,47 +304,47 @@ func (s *Store) CheckAnomaly(resourceID, metric string, value float64) (AnomalyS
default:
severity = AnomalyCritical
}
return severity, zScore, baseline
}
// AnomalyReport represents a detected anomaly for a single metric
type AnomalyReport struct {
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name,omitempty"`
ResourceType string `json:"resource_type,omitempty"`
Metric string `json:"metric"`
CurrentValue float64 `json:"current_value"`
BaselineMean float64 `json:"baseline_mean"`
BaselineStdDev float64 `json:"baseline_std_dev"`
ZScore float64 `json:"z_score"`
Severity AnomalySeverity `json:"severity"`
Description string `json:"description"`
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name,omitempty"`
ResourceType string `json:"resource_type,omitempty"`
Metric string `json:"metric"`
CurrentValue float64 `json:"current_value"`
BaselineMean float64 `json:"baseline_mean"`
BaselineStdDev float64 `json:"baseline_std_dev"`
ZScore float64 `json:"z_score"`
Severity AnomalySeverity `json:"severity"`
Description string `json:"description"`
}
// CheckResourceAnomalies checks multiple metrics for a resource and returns all anomalies
func (s *Store) CheckResourceAnomalies(resourceID string, metrics map[string]float64) []AnomalyReport {
var anomalies []AnomalyReport
for metric, value := range metrics {
severity, zScore, baseline := s.CheckAnomaly(resourceID, metric, value)
if severity != AnomalyNone && baseline != nil {
// Compute ratio: current value / baseline mean
ratio := value / baseline.Mean
// Apply metric-specific filters to reduce noise
// Different metrics have different thresholds for what's "actionable"
shouldReport := false
switch metric {
case "disk":
// Disk is critical - report if:
// 1. Usage is above 85% (absolute threshold), OR
// 2. Usage increased by more than 15 percentage points from baseline
if value >= 85.0 || (value - baseline.Mean) >= 15.0 {
if value >= 85.0 || (value-baseline.Mean) >= 15.0 {
shouldReport = true
}
case "cpu":
// CPU fluctuates a lot - only report if:
// 1. Current usage is above 70% (actually busy), AND
@ -353,7 +352,7 @@ func (s *Store) CheckResourceAnomalies(resourceID string, metrics map[string]flo
if value >= 70.0 && ratio >= 2.0 {
shouldReport = true
}
case "memory":
// Memory is more stable - report if:
// 1. Current usage is above 80% (getting tight), OR
@ -361,44 +360,43 @@ func (s *Store) CheckResourceAnomalies(resourceID string, metrics map[string]flo
if value >= 80.0 || (ratio >= 1.5 && value >= 60.0) {
shouldReport = true
}
default:
// For other metrics (network, etc), use 2x threshold
if ratio >= 2.0 || ratio <= 0.5 {
shouldReport = true
}
}
if !shouldReport {
continue
}
report := AnomalyReport{
ResourceID: resourceID,
Metric: metric,
CurrentValue: value,
ZScore: zScore,
Severity: severity,
BaselineMean: baseline.Mean,
ResourceID: resourceID,
Metric: metric,
CurrentValue: value,
ZScore: zScore,
Severity: severity,
BaselineMean: baseline.Mean,
BaselineStdDev: baseline.StdDev,
}
// Generate human-readable description
direction := "above"
if zScore < 0 {
direction = "below"
}
report.Description = formatAnomalyDescription(metric, ratio, direction, severity)
anomalies = append(anomalies, report)
}
}
return anomalies
}
// formatAnomalyDescription generates a human-readable anomaly description
func formatAnomalyDescription(metric string, ratio float64, direction string, severity AnomalySeverity) string {
metricLabel := metric
@ -414,7 +412,7 @@ func formatAnomalyDescription(metric string, ratio float64, direction string, se
case "network_out":
metricLabel = "Network outbound"
}
severityLabel := ""
switch severity {
case AnomalyCritical:
@ -426,7 +424,7 @@ func formatAnomalyDescription(metric string, ratio float64, direction string, se
case AnomalyLow:
severityLabel = "Minor anomaly: "
}
return severityLabel + metricLabel + " is " + formatRatio(ratio) + " " + direction + " normal baseline"
}
@ -462,7 +460,7 @@ func (s *Store) GetAllAnomalies(metricsProvider func(resourceID string) map[stri
resourceIDs = append(resourceIDs, id)
}
s.mu.RUnlock()
var allAnomalies []AnomalyReport
for _, resourceID := range resourceIDs {
metrics := metricsProvider(resourceID)
@ -471,22 +469,22 @@ func (s *Store) GetAllAnomalies(metricsProvider func(resourceID string) map[stri
allAnomalies = append(allAnomalies, anomalies...)
}
}
return allAnomalies
}
// TrendPrediction represents a forecast for when a resource might be exhausted
type TrendPrediction struct {
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name,omitempty"`
ResourceType string `json:"resource_type,omitempty"`
Metric string `json:"metric"`
CurrentValue float64 `json:"current_value"` // Current % usage
DailyChange float64 `json:"daily_change"` // Average change per day
DaysToFull int `json:"days_to_full"` // Estimated days until 100% (or -1 if decreasing/stable)
Severity string `json:"severity"` // "critical", "warning", "info"
Description string `json:"description"`
ConfidenceNote string `json:"confidence_note,omitempty"`
ResourceID string `json:"resource_id"`
ResourceName string `json:"resource_name,omitempty"`
ResourceType string `json:"resource_type,omitempty"`
Metric string `json:"metric"`
CurrentValue float64 `json:"current_value"` // Current % usage
DailyChange float64 `json:"daily_change"` // Average change per day
DaysToFull int `json:"days_to_full"` // Estimated days until 100% (or -1 if decreasing/stable)
Severity string `json:"severity"` // "critical", "warning", "info"
Description string `json:"description"`
ConfidenceNote string `json:"confidence_note,omitempty"`
}
// CalculateTrend analyzes a time series of values and predicts future exhaustion
@ -497,10 +495,10 @@ func CalculateTrend(samples []float64, currentValue float64) *TrendPrediction {
if len(samples) < 5 {
return nil // Not enough data for meaningful trend
}
// Simple linear regression to find slope
n := float64(len(samples))
// Calculate means
sumX := 0.0
sumY := 0.0
@ -510,7 +508,7 @@ func CalculateTrend(samples []float64, currentValue float64) *TrendPrediction {
}
meanX := sumX / n
meanY := sumY / n
// Calculate slope (least squares)
numerator := 0.0
denominator := 0.0
@ -519,27 +517,27 @@ func CalculateTrend(samples []float64, currentValue float64) *TrendPrediction {
numerator += (x - meanX) * (v - meanY)
denominator += (x - meanX) * (x - meanX)
}
slope := numerator / denominator
// slope is the change per sample; convert to a daily rate by multiplying
// by the sample rate. Samples are assumed to be taken at regular intervals;
// hourly samples give 24 per day.
samplesPerDay := 24.0
dailyChange := slope * samplesPerDay
prediction := &TrendPrediction{
CurrentValue: currentValue,
DailyChange: dailyChange,
}
// Calculate days to full if trending upward
if dailyChange > 0.1 { // More than 0.1% increase per day
remaining := 100.0 - currentValue
if remaining > 0 {
daysToFull := remaining / dailyChange
prediction.DaysToFull = int(math.Ceil(daysToFull))
// Set severity based on time to full
if prediction.DaysToFull <= 7 {
prediction.Severity = "critical"
@ -568,7 +566,7 @@ func CalculateTrend(samples []float64, currentValue float64) *TrendPrediction {
prediction.Severity = "info"
prediction.Description = "Usage stable - no significant trend detected"
}
return prediction
}
@ -581,7 +579,7 @@ func formatTrendDescription(daysToFull int, dailyChange float64, severity string
} else {
changeDesc = " (+" + floatToStr(dailyChange, 2) + "% per day)"
}
switch severity {
case "critical":
return "⚠️ Resource will be full in " + timeFrame + changeDesc
@ -625,7 +623,7 @@ func floatToStr(f float64, precision int) string {
// Simple implementation for small numbers
intPart := int(f)
fracPart := f - float64(intPart)
if precision == 1 {
fracPart = math.Round(fracPart*10) / 10
if fracPart < 0.1 {
@ -634,7 +632,7 @@ func floatToStr(f float64, precision int) string {
d := byte('0' + int(fracPart*10))
return string([]byte{'0' + byte(intPart), '.', d})
}
fracPart = math.Round(fracPart*100) / 100
if fracPart < 0.01 {
return string([]byte{'0' + byte(intPart)})
@ -651,7 +649,6 @@ func (s *Store) ResourceCount() int {
return len(s.baselines)
}
// FlatBaseline is a flattened representation of a single metric baseline for API responses
type FlatBaseline struct {
ResourceID string `json:"resource_id"`
@ -668,7 +665,7 @@ type FlatBaseline struct {
func (s *Store) GetAllBaselines() map[string]*FlatBaseline {
s.mu.RLock()
defer s.mu.RUnlock()
result := make(map[string]*FlatBaseline)
for resourceID, rb := range s.baselines {
for metric, mb := range rb.Metrics {
@ -701,10 +698,10 @@ func (s *Store) Save() error {
if s.dataDir == "" {
return nil
}
s.mu.RLock()
defer s.mu.RUnlock()
return s.saveToDisk()
}
@ -713,27 +710,27 @@ func (s *Store) saveToDisk() error {
if s.dataDir == "" {
return nil
}
path := filepath.Join(s.dataDir, "baselines.json")
data, err := json.MarshalIndent(s.baselines, "", " ")
if err != nil {
return err
}
// Write to temp file first, then rename for atomicity
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, data, 0600); err != nil {
return err
}
return os.Rename(tmpPath, path)
}
// loadFromDisk reads baselines from JSON file
func (s *Store) loadFromDisk() error {
path := filepath.Join(s.dataDir, "baselines.json")
data, err := os.ReadFile(path)
if err != nil {
if os.IsNotExist(err) {
@ -741,7 +738,7 @@ func (s *Store) loadFromDisk() error {
}
return err
}
return json.Unmarshal(data, &s.baselines)
}
@ -776,12 +773,12 @@ func computePercentiles(values []float64) map[int]float64 {
if len(values) == 0 {
return nil
}
// Sort a copy
sorted := make([]float64, len(values))
copy(sorted, values)
sort.Float64s(sorted)
percentiles := map[int]float64{
5: percentile(sorted, 5),
25: percentile(sorted, 25),
@ -789,7 +786,7 @@ func computePercentiles(values []float64) map[int]float64 {
75: percentile(sorted, 75),
95: percentile(sorted, 95),
}
return percentiles
}
@ -797,16 +794,16 @@ func percentile(sorted []float64, p int) float64 {
if len(sorted) == 0 {
return 0
}
// Use linear interpolation
rank := float64(p) / 100.0 * float64(len(sorted)-1)
lower := int(rank)
upper := lower + 1
if upper >= len(sorted) {
return sorted[len(sorted)-1]
}
// Interpolate
weight := rank - float64(lower)
return sorted[lower]*(1-weight) + sorted[upper]*weight
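The rank-and-interpolate logic above can be condensed into one function; `percentileOf` is an illustrative name, and this sketch sorts internally rather than expecting pre-sorted input.

```go
package main

import (
	"fmt"
	"sort"
)

// percentileOf returns the p-th percentile of values using linear
// interpolation between the two nearest ranks.
func percentileOf(values []float64, p int) float64 {
	sorted := append([]float64(nil), values...) // sort a copy
	sort.Float64s(sorted)
	rank := float64(p) / 100.0 * float64(len(sorted)-1)
	lower := int(rank)
	if lower+1 >= len(sorted) {
		return sorted[len(sorted)-1]
	}
	weight := rank - float64(lower)
	return sorted[lower]*(1-weight) + sorted[lower+1]*weight
}

func main() {
	values := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	// rank = 0.50 * 9 = 4.5, midway between the 5th and 6th values.
	fmt.Println(percentileOf(values, 50)) // prints: 5.5
}
```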


@ -8,7 +8,7 @@ import (
func TestLearn_Basic(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create 50 data points with mean ~50 and some variance
points := make([]MetricPoint, 50)
now := time.Now()
@ -18,27 +18,27 @@ func TestLearn_Basic(t *testing.T) {
Timestamp: now.Add(-time.Duration(50-i) * time.Minute),
}
}
err := store.Learn("test-vm", "vm", "cpu", points)
if err != nil {
t.Fatalf("Learn failed: %v", err)
}
baseline, ok := store.GetBaseline("test-vm", "cpu")
if !ok {
t.Fatal("Baseline not found after learning")
}
// Check mean is around 50
if math.Abs(baseline.Mean-50) > 1 {
t.Errorf("Expected mean ~50, got %f", baseline.Mean)
}
// Check stddev is reasonable (should be ~3 for our data)
if baseline.StdDev < 1 || baseline.StdDev > 5 {
t.Errorf("Expected stddev ~3, got %f", baseline.StdDev)
}
if baseline.SampleCount != 50 {
t.Errorf("Expected 50 samples, got %d", baseline.SampleCount)
}
@ -46,18 +46,18 @@ func TestLearn_Basic(t *testing.T) {
func TestLearn_InsufficientData(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 50})
// Only 10 points, not enough
points := make([]MetricPoint, 10)
for i := 0; i < 10; i++ {
points[i] = MetricPoint{Value: float64(i)}
}
err := store.Learn("test-vm", "vm", "cpu", points)
if err != nil {
t.Fatalf("Learn should not error on insufficient data: %v", err)
}
_, ok := store.GetBaseline("test-vm", "cpu")
if ok {
t.Error("Should not have baseline with insufficient data")
@ -66,7 +66,7 @@ func TestLearn_InsufficientData(t *testing.T) {
func TestIsAnomaly(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create stable data around 50 with low variance
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
@ -74,27 +74,27 @@ func TestIsAnomaly(t *testing.T) {
Value: 50 + float64(i%3) - 1, // Values 49, 50, 51
}
}
store.Learn("test-vm", "vm", "cpu", points)
// Test normal value
isAnomaly, zScore := store.IsAnomaly("test-vm", "cpu", 50)
if isAnomaly {
t.Errorf("50 should not be anomaly, zScore=%f", zScore)
}
// Test slightly high - with stddev ~0.82, 51 is within 2 std devs
isAnomaly, zScore = store.IsAnomaly("test-vm", "cpu", 51)
if isAnomaly {
t.Errorf("51 should not be anomaly with this variance, zScore=%f", zScore)
}
// Test very high (should be anomaly)
isAnomaly, zScore = store.IsAnomaly("test-vm", "cpu", 60)
if !isAnomaly {
t.Errorf("60 should be anomaly, zScore=%f", zScore)
}
// Test very low (should be anomaly)
isAnomaly, zScore = store.IsAnomaly("test-vm", "cpu", 40)
if !isAnomaly {
@ -104,7 +104,7 @@ func TestIsAnomaly(t *testing.T) {
func TestCheckAnomaly_Severity(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create data with larger stddev to allow meaningful tests
// Mean = 50, and we'll create values with wider variance
points := make([]MetricPoint, 100)
@ -112,15 +112,15 @@ func TestCheckAnomaly_Severity(t *testing.T) {
// Values from 45 to 55 to give stddev ~3
points[i] = MetricPoint{Value: 50 + float64(i%11) - 5}
}
store.Learn("test-vm", "vm", "cpu", points)
baseline, _ := store.GetBaseline("test-vm", "cpu")
// The stddev should be around 3.0
if baseline.StdDev < 2 || baseline.StdDev > 4 {
t.Logf("Stddev is %f (expected ~3)", baseline.StdDev)
}
testCases := []struct {
value float64
expectedSeverity AnomalySeverity
@ -129,11 +129,11 @@ func TestCheckAnomaly_Severity(t *testing.T) {
// Values at or near mean - no anomaly
{50, AnomalyNone, "Mean value"},
{52, AnomalyNone, "Within 1 std dev and <3 point diff"},
// Small statistical deviations but below minimum absolute threshold (3 points)
// should be filtered out
{52.5, AnomalyNone, "Small absolute difference"},
// Larger deviations that meet both thresholds
// With stddev ~3.2: z-score = diff/stddev
// 44: 6pts/3.2 = 1.87z - below 2.0 threshold = None
@ -147,15 +147,15 @@ func TestCheckAnomaly_Severity(t *testing.T) {
// 35: 15pts/3.2 = 4.7z - critical range (>4.0)
{35, AnomalyCritical, ">4 std devs with 15 point diff"},
}
for _, tc := range testCases {
severity, zScore, _ := store.CheckAnomaly("test-vm", "cpu", tc.value)
if severity != tc.expectedSeverity {
t.Errorf("%s: Value %f (z=%.2f): expected severity %s, got %s",
t.Errorf("%s: Value %f (z=%.2f): expected severity %s, got %s",
tc.description, tc.value, zScore, tc.expectedSeverity, severity)
}
}
// Test that small differences are filtered even with high z-scores
// Create very stable data (stddev near 0)
stablePoints := make([]MetricPoint, 100)
@ -163,13 +163,13 @@ func TestCheckAnomaly_Severity(t *testing.T) {
stablePoints[i] = MetricPoint{Value: 50}
}
store.Learn("stable-vm", "vm", "disk", stablePoints)
// Small change from perfectly stable baseline should NOT be anomaly
severity, _, _ := store.CheckAnomaly("stable-vm", "disk", 51)
if severity != AnomalyNone {
t.Error("1% change from stable baseline should not be anomaly")
}
// Larger change from stable baseline SHOULD be anomaly (medium severity)
severity, _, _ = store.CheckAnomaly("stable-vm", "disk", 56)
if severity == AnomalyNone {
@ -179,7 +179,7 @@ func TestCheckAnomaly_Severity(t *testing.T) {
func TestGetResourceBaseline(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Learn multiple metrics
cpuPoints := make([]MetricPoint, 50)
memPoints := make([]MetricPoint, 50)
@ -187,27 +187,27 @@ func TestGetResourceBaseline(t *testing.T) {
cpuPoints[i] = MetricPoint{Value: 30}
memPoints[i] = MetricPoint{Value: 70}
}
store.Learn("test-vm", "vm", "cpu", cpuPoints)
store.Learn("test-vm", "vm", "memory", memPoints)
rb, ok := store.GetResourceBaseline("test-vm")
if !ok {
t.Fatal("Resource baseline not found")
}
if rb.ResourceType != "vm" {
t.Errorf("Expected resource type 'vm', got '%s'", rb.ResourceType)
}
if len(rb.Metrics) != 2 {
t.Errorf("Expected 2 metrics, got %d", len(rb.Metrics))
}
if rb.Metrics["cpu"] == nil {
t.Error("CPU metric baseline missing")
}
if rb.Metrics["memory"] == nil {
t.Error("Memory metric baseline missing")
}
@ -216,17 +216,17 @@ func TestGetResourceBaseline(t *testing.T) {
func TestPercentiles(t *testing.T) {
values := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
percentiles := computePercentiles(values)
// P50 should be ~5.5 for 1-10
if percentiles[50] < 5 || percentiles[50] > 6 {
t.Errorf("P50 should be ~5.5, got %f", percentiles[50])
}
// P5 should be close to 1
if percentiles[5] < 1 || percentiles[5] > 2 {
t.Errorf("P5 should be ~1, got %f", percentiles[5])
}
// P95 should be close to 10
if percentiles[95] < 9 || percentiles[95] > 10 {
t.Errorf("P95 should be ~10, got %f", percentiles[95])
@ -236,12 +236,12 @@ func TestPercentiles(t *testing.T) {
func TestComputeStats(t *testing.T) {
// Test mean and stddev with known values
values := []float64{2, 4, 4, 4, 5, 5, 7, 9} // Mean = 5; population stddev = 2, sample stddev ≈ 2.14
mean := computeMean(values)
if mean != 5 {
t.Errorf("Expected mean 5, got %f", mean)
}
stddev := computeStdDev(values)
// Sample stddev of [2,4,4,4,5,5,7,9] is approximately 2.14, not exactly 2
if math.Abs(stddev-2.14) > 0.1 {
@ -265,17 +265,17 @@ func TestCalculateTrend_IncreasingTrend(t *testing.T) {
for i := 0; i < 48; i++ {
samples[i] = 50 + float64(i) // 50, 51, 52, ...
}
result := CalculateTrend(samples, 97) // Currently at 97%
if result == nil {
t.Fatal("Expected non-nil result for increasing trend")
}
// Should be trending toward full
if result.DaysToFull <= 0 {
t.Errorf("Expected positive DaysToFull for increasing trend, got %d", result.DaysToFull)
}
// With 24% increase per day and 3% remaining, should be full very soon
if result.Severity != "critical" && result.Severity != "warning" {
t.Errorf("Expected critical or warning severity, got %s", result.Severity)
@ -288,17 +288,17 @@ func TestCalculateTrend_DecreasingTrend(t *testing.T) {
for i := 0; i < 48; i++ {
samples[i] = 80 - float64(i)*0.5 // 80, 79.5, 79, ...
}
result := CalculateTrend(samples, 56)
if result == nil {
t.Fatal("Expected non-nil result for decreasing trend")
}
// Should indicate decreasing (DaysToFull = -1)
if result.DaysToFull != -1 {
t.Errorf("Expected DaysToFull=-1 for decreasing trend, got %d", result.DaysToFull)
}
if result.Severity != "info" {
t.Errorf("Expected info severity for decreasing trend, got %s", result.Severity)
}
@ -310,12 +310,12 @@ func TestCalculateTrend_StableTrend(t *testing.T) {
for i := 0; i < 48; i++ {
samples[i] = 50 + float64(i%3-1)*0.01 // Tiny fluctuations
}
result := CalculateTrend(samples, 50)
if result == nil {
t.Fatal("Expected non-nil result for stable trend")
}
// Should indicate stable (DaysToFull = -1)
if result.DaysToFull != -1 {
t.Errorf("Expected DaysToFull=-1 for stable trend, got %d", result.DaysToFull)
@ -336,7 +336,7 @@ func TestFormatDays(t *testing.T) {
{60, "~2 months"},
{400, ">1 year"},
}
for _, tc := range testCases {
result := formatDays(tc.days)
if result != tc.expected {
@ -347,28 +347,28 @@ func TestFormatDays(t *testing.T) {
func TestCheckResourceAnomalies_Disk(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create stable data with mean ~60% disk usage
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
points[i] = MetricPoint{Value: 60 + float64(i%5) - 2} // 58-62
}
store.Learn("test-vm", "vm", "disk", points)
// Test: disk above 85% should be reported
metrics := map[string]float64{"disk": 90}
anomalies := store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected disk anomaly to be reported for 90% usage")
}
// Test: disk increase >15 points from baseline should be reported
metrics = map[string]float64{"disk": 80}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected disk anomaly to be reported for 20 point increase from baseline")
}
// Test: disk at baseline should not be reported (no significant deviation)
metrics = map[string]float64{"disk": 60}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
@ -379,28 +379,28 @@ func TestCheckResourceAnomalies_Disk(t *testing.T) {
func TestCheckResourceAnomalies_CPU(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create stable data with mean ~20% CPU usage
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
points[i] = MetricPoint{Value: 20 + float64(i%3) - 1} // 19-21
}
store.Learn("test-vm", "vm", "cpu", points)
// Test: CPU at 80% (above 70% and >2x baseline) should be reported
metrics := map[string]float64{"cpu": 80}
anomalies := store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected CPU anomaly to be reported for 80% (>70% and 4x baseline)")
}
// Test: CPU at 50% should NOT be reported (below 70% threshold)
metrics = map[string]float64{"cpu": 50}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) != 0 {
t.Errorf("Expected no anomaly for CPU at 50%%, got %d", len(anomalies))
}
// Test: CPU at 20% (baseline) should not be reported
metrics = map[string]float64{"cpu": 20}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
@ -411,28 +411,28 @@ func TestCheckResourceAnomalies_CPU(t *testing.T) {
func TestCheckResourceAnomalies_Memory(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create stable data with mean ~40% memory usage
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
points[i] = MetricPoint{Value: 40 + float64(i%3) - 1} // 39-41
}
store.Learn("test-vm", "vm", "memory", points)
// Test: Memory at 85% should be reported (above 80% threshold)
metrics := map[string]float64{"memory": 85}
anomalies := store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected memory anomaly to be reported for 85% (above 80%)")
}
// Test: Memory at 70% with 1.75x baseline should be reported (>1.5x and >60%)
metrics = map[string]float64{"memory": 70}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected memory anomaly to be reported for 70% (1.75x baseline, >60%)")
}
// Test: Memory at 50% should NOT be reported (not >1.5x enough or >80%)
metrics = map[string]float64{"memory": 50}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
@ -443,21 +443,21 @@ func TestCheckResourceAnomalies_Memory(t *testing.T) {
func TestCheckResourceAnomalies_OtherMetrics(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Create stable network data with mean ~100
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
points[i] = MetricPoint{Value: 100 + float64(i%5) - 2} // 98-102
}
store.Learn("test-vm", "vm", "network_in", points)
// Test: network_in at 2x baseline should be reported
metrics := map[string]float64{"network_in": 250}
anomalies := store.CheckResourceAnomalies("test-vm", metrics)
if len(anomalies) == 0 {
t.Error("Expected network anomaly to be reported for 2.5x baseline")
}
// Test: network_in at 0.3x baseline should be reported (below 0.5x)
metrics = map[string]float64{"network_in": 30}
anomalies = store.CheckResourceAnomalies("test-vm", metrics)
@ -468,7 +468,7 @@ func TestCheckResourceAnomalies_OtherMetrics(t *testing.T) {
func TestCheckResourceAnomalies_NoBaseline(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// No baselines learned - should return empty
metrics := map[string]float64{"cpu": 90, "memory": 85}
anomalies := store.CheckResourceAnomalies("unknown-vm", metrics)
@ -492,7 +492,7 @@ func TestFormatRatio(t *testing.T) {
{4.0, "3x"},
{6.0, "~6x"},
}
for _, tc := range testCases {
result := formatRatio(tc.ratio)
if result != tc.expected {
@ -515,7 +515,7 @@ func TestFormatAnomalyDescription(t *testing.T) {
{"network_in", 2.0, "below", AnomalyLow, "Minor anomaly: Network inbound"},
{"network_out", 1.5, "above", AnomalyNone, "Network outbound"},
}
for _, tc := range testCases {
result := formatAnomalyDescription(tc.metric, tc.ratio, tc.direction, tc.severity)
if !contains(result, tc.contains) {
@ -540,7 +540,7 @@ func containsHelper(s, substr string) bool {
func TestGetAllAnomalies(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
// Learn baselines for multiple resources
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
@ -548,13 +548,13 @@ func TestGetAllAnomalies(t *testing.T) {
}
store.Learn("vm-1", "vm", "cpu", points)
store.Learn("vm-2", "vm", "cpu", points)
diskPoints := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
diskPoints[i] = MetricPoint{Value: 50 + float64(i%3) - 1}
}
store.Learn("vm-1", "vm", "disk", diskPoints)
// Create a metrics provider that returns anomalous values
metricsProvider := func(resourceID string) map[string]float64 {
switch resourceID {
@ -566,14 +566,14 @@ func TestGetAllAnomalies(t *testing.T) {
return nil
}
}
anomalies := store.GetAllAnomalies(metricsProvider)
// Should have anomalies for vm-1 (cpu 4x baseline + disk at 90%)
if len(anomalies) < 1 {
t.Errorf("Expected at least 1 anomaly, got %d", len(anomalies))
}
// vm-2 should not have anomalies
for _, a := range anomalies {
if a.ResourceID == "vm-2" {
@ -584,11 +584,11 @@ func TestGetAllAnomalies(t *testing.T) {
func TestGetAllAnomalies_EmptyStore(t *testing.T) {
store := NewStore(StoreConfig{MinSamples: 10})
metricsProvider := func(resourceID string) map[string]float64 {
return map[string]float64{"cpu": 90}
}
anomalies := store.GetAllAnomalies(metricsProvider)
if len(anomalies) != 0 {
t.Errorf("Expected no anomalies from empty store, got %d", len(anomalies))
@ -608,7 +608,7 @@ func TestFloatToStr(t *testing.T) {
{0.5, 1, "0.5"},
{0.05, 2, "0.05"},
}
for _, tc := range testCases {
result := floatToStr(tc.value, tc.precision)
if result != tc.expected {
@ -616,4 +616,3 @@ func TestFloatToStr(t *testing.T) {
}
}
}


@ -22,12 +22,12 @@ func (a *BaselineStoreAdapter) CheckAnomaly(resourceID, metric string, value flo
if a.store == nil {
return "", 0, 0, 0, false
}
s, z, b := a.store.CheckAnomaly(resourceID, metric, value)
if b == nil {
return "", 0, 0, 0, false
}
return string(s), z, b.Mean, b.StdDev, true
}
@ -36,11 +36,11 @@ func (a *BaselineStoreAdapter) GetBaseline(resourceID, metric string) (mean floa
if a.store == nil {
return 0, 0, 0, false
}
b, exists := a.store.GetBaseline(resourceID, metric)
if !exists || b == nil {
return 0, 0, 0, false
}
return b.Mean, b.StdDev, b.SampleCount, true
}
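The z-score check the adapter forwards to can be sketched as follows. The 2.0/3.0/4.0 cut-offs and the severity names are assumptions for illustration (the tests above only pin down that z ≤ 2 is no anomaly and z > 4 is critical), not the store's exact thresholds.

```go
package main

import (
	"fmt"
	"math"
)

// severityFor maps a value's distance from the learned mean, in standard
// deviations, onto a severity band. Thresholds here are illustrative.
func severityFor(value, mean, stddev float64) string {
	if stddev == 0 {
		return "none" // no variance learned; z-score undefined
	}
	z := math.Abs(value-mean) / stddev
	switch {
	case z > 4.0:
		return "critical"
	case z > 3.0:
		return "high"
	case z > 2.0:
		return "medium"
	default:
		return "none"
	}
}

func main() {
	fmt.Println(severityFor(60, 50, 2)) // z = 5.0 → prints: critical
	fmt.Println(severityFor(51, 50, 2)) // z = 0.5 → prints: none
}
```

A production version would also apply a minimum absolute difference (as the tests above do with their 3-point floor) so that near-zero stddev baselines don't flag trivial changes.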


@ -22,17 +22,17 @@ After comprehensive analysis of your infrastructure, I identified several issues
1. **Critical CPU overload on Tower host**`
result := cleanThinkingTokens(input)
// Should NOT contain thinking markers
if contains(result, "<end▁of▁thinking>") {
t.Errorf("cleanThinkingTokens() should have removed DeepSeek thinking markers")
}
// Should NOT contain internal reasoning
if contains(result, "Now, also consider") || contains(result, "Let's add an info") {
t.Errorf("cleanThinkingTokens() should have removed internal reasoning")
}
// Should still contain the actual content
if !contains(result, "## Analysis Summary") {
t.Errorf("cleanThinkingTokens() removed header")
@ -53,7 +53,7 @@ Now, let's check something.
## Real Content`
result := cleanThinkingTokens(input)
if result != "## Real Content" {
t.Errorf("cleanThinkingTokens() failed for ASCII variant: got %q", result)
}
@ -71,12 +71,12 @@ Now, I need to look at memory.
- Issue 1`
result := cleanThinkingTokens(input)
// Should remove the reasoning lines but keep the findings
if !contains(result, "## Analysis") || !contains(result, "### Findings") || !contains(result, "- Issue 1") {
t.Errorf("cleanThinkingTokens() removed too much: got %q", result)
}
if contains(result, "Let's check") || contains(result, "Now, I need") {
t.Errorf("cleanThinkingTokens() should have removed reasoning: got %q", result)
}
@ -100,7 +100,7 @@ This is a normal response without any thinking tokens.
2. Issue two`
result := cleanThinkingTokens(input)
// Should be mostly unchanged (just trimmed)
if result != input {
t.Errorf("cleanThinkingTokens() modified clean content:\nGot: %q\nExpected: %q", result, input)


@ -129,24 +129,24 @@ func (b *Builder) BuildForInfrastructure(state models.StateSnapshot) *Infrastruc
continue
}
trends := b.computeGuestTrends(ct.ID)
// Determine container type - OCI containers are treated specially
containerType := "container"
if ct.IsOCI {
containerType = "oci_container"
}
resourceCtx := FormatGuestForContext(
ct.ID, ct.Name, ct.Node, containerType, ct.Status,
ct.VMID,
ct.CPU, ct.Memory.Usage, ct.Disk.Usage,
ct.Uptime, ct.LastBackup, trends,
)
// Add raw metric samples for LLM interpretation
// This lets the LLM see actual patterns without pre-computed heuristics
resourceCtx.MetricSamples = b.computeGuestMetricSamples(ct.ID)
// Add OCI image info for AI context
if ct.IsOCI && ct.OSTemplate != "" {
if resourceCtx.Metadata == nil {
@ -154,7 +154,7 @@ func (b *Builder) BuildForInfrastructure(state models.StateSnapshot) *Infrastruc
}
resourceCtx.Metadata["oci_image"] = ct.OSTemplate
}
b.enrichWithNotes(&resourceCtx)
b.enrichWithAnomalies(&resourceCtx)
ctx.Containers = append(ctx.Containers, resourceCtx)
@ -164,13 +164,13 @@ func (b *Builder) BuildForInfrastructure(state models.StateSnapshot) *Infrastruc
for _, storage := range state.Storage {
trends := b.computeStorageTrends(storage.ID)
resourceCtx := FormatStorageForContext(storage, trends)
// Add capacity predictions for storage
if predictions := b.computeStoragePredictions(storage, trends); len(predictions) > 0 {
resourceCtx.Predictions = predictions
ctx.Predictions = append(ctx.Predictions, predictions...)
}
ctx.Storage = append(ctx.Storage, resourceCtx)
}
@ -204,7 +204,7 @@ func (b *Builder) BuildForInfrastructure(state models.StateSnapshot) *Infrastruc
// computeNodeTrends computes trends for a node's metrics
func (b *Builder) computeNodeTrends(nodeID string) map[string]Trend {
trends := make(map[string]Trend)
if b.metricsHistory == nil || !b.includeTrends {
return trends
}
@ -233,26 +233,26 @@ func (b *Builder) computeNodeTrends(nodeID string) map[string]Trend {
// computeGuestTrends computes trends for a guest's metrics
func (b *Builder) computeGuestTrends(guestID string) map[string]Trend {
trends := make(map[string]Trend)
if b.metricsHistory == nil || !b.includeTrends {
return trends
}
// Get all metrics at once for efficiency
allMetrics := b.metricsHistory.GetAllGuestMetrics(guestID, b.trendWindow7d)
for metric, points := range allMetrics {
if len(points) < 3 {
continue
}
// Compute 24h trend
recent := filterRecentPoints(points, b.trendWindow24h)
if len(recent) >= 3 {
trend := ComputeTrend(recent, metric, b.trendWindow24h)
trends[metric+"_24h"] = trend
}
// Compute 7d trend if enough data
if len(points) >= 10 {
trend := ComputeTrend(points, metric, b.trendWindow7d)
@ -266,13 +266,13 @@ func (b *Builder) computeGuestTrends(guestID string) map[string]Trend {
// computeStorageTrends computes trends for storage
func (b *Builder) computeStorageTrends(storageID string) map[string]Trend {
trends := make(map[string]Trend)
if b.metricsHistory == nil || !b.includeTrends {
return trends
}
allMetrics := b.metricsHistory.GetAllStorageMetrics(storageID, b.trendWindow7d)
// Focus on usage metric for storage
if points, ok := allMetrics["usage"]; ok && len(points) >= 3 {
recent := filterRecentPoints(points, b.trendWindow24h)
@ -325,15 +325,15 @@ func (b *Builder) computeStoragePredictions(storage models.Storage, trends map[s
if daysUntil > 0 && daysUntil <= 30 { // Only predict within 30 days
predictions = append(predictions, Prediction{
ResourceID: storage.ID,
Metric: "usage",
Event: threshold.event,
ETA: time.Now().Add(time.Duration(daysUntil*24) * time.Hour),
DaysUntil: daysUntil,
Confidence: trend.Confidence,
Basis: formatPredictionBasis(trend),
GrowthRate: trend.RatePerDay,
CurrentPct: currentPct,
ResourceID: storage.ID,
Metric: "usage",
Event: threshold.event,
ETA: time.Now().Add(time.Duration(daysUntil*24) * time.Hour),
DaysUntil: daysUntil,
Confidence: trend.Confidence,
Basis: formatPredictionBasis(trend),
GrowthRate: trend.RatePerDay,
CurrentPct: currentPct,
})
}
}
@ -343,7 +343,7 @@ func (b *Builder) computeStoragePredictions(storage models.Storage, trends map[s
// formatPredictionBasis creates explanation for a prediction
func formatPredictionBasis(trend Trend) string {
return "Growing " + formatRate(trend.RatePerDay) + " based on " +
return "Growing " + formatRate(trend.RatePerDay) + " based on " +
formatDuration(trend.Period) + " of data"
}
@ -440,12 +440,12 @@ func (b *Builder) enrichWithAnomalies(ctx *ResourceContext) {
}
anomaly := Anomaly{
Metric: metric,
Current: value,
Expected: mean,
Deviation: zScore,
Severity: severity,
Since: time.Now(), // We don't track onset time yet
Metric: metric,
Current: value,
Expected: mean,
Deviation: zScore,
Severity: severity,
Since: time.Now(), // We don't track onset time yet
Description: formatAnomalyDescription(metric, value, mean, stddev, severity, direction),
}
ctx.Anomalies = append(ctx.Anomalies, anomaly)
@ -523,11 +523,11 @@ func filterRecentPoints(points []MetricPoint, duration time.Duration) []MetricPo
func (b *Builder) MergeContexts(target *ResourceContext, infrastructure *InfrastructureContext) string {
// For targeted requests, highlight the target first, then add relevant related context
var result strings.Builder
result.WriteString("# Target Resource\n")
result.WriteString(FormatResourceContext(*target))
result.WriteString("\n")
// Add related resources (same node, dependencies, etc.)
// This could be expanded with dependency mapping in the future
if target.Node != "" {
@ -544,6 +544,6 @@ func (b *Builder) MergeContexts(target *ResourceContext, infrastructure *Infrast
}
}
}
return result.String()
}


@ -406,8 +406,8 @@ func TestFilterRecentPoints_AllRecent(t *testing.T) {
func TestFilterRecentPoints_FilterOld(t *testing.T) {
now := time.Now()
points := []MetricPoint{
{Timestamp: now.Add(-3 * time.Hour), Value: 1.0}, // Old
{Timestamp: now.Add(-2 * time.Hour), Value: 2.0}, // Old
{Timestamp: now.Add(-3 * time.Hour), Value: 1.0}, // Old
{Timestamp: now.Add(-2 * time.Hour), Value: 2.0}, // Old
{Timestamp: now.Add(-30 * time.Minute), Value: 3.0}, // Recent
}
@ -423,33 +423,33 @@ func TestFilterRecentPoints_FilterOld(t *testing.T) {
func TestFormatAnomalyDescription(t *testing.T) {
tests := []struct {
name string
metric string
current float64
mean float64
stddev float64
severity string
direction string
name string
metric string
current float64
mean float64
stddev float64
severity string
direction string
wantContains []string
}{
{
name: "cpu high",
metric: "cpu",
current: 95.0,
mean: 50.0,
stddev: 10.0,
severity: "significantly",
direction: "above",
name: "cpu high",
metric: "cpu",
current: 95.0,
mean: 50.0,
stddev: 10.0,
severity: "significantly",
direction: "above",
wantContains: []string{"Cpu", "significantly", "above", "95%", "50%"},
},
{
name: "memory low",
metric: "memory",
current: 20.0,
mean: 60.0,
stddev: 15.0,
severity: "slightly",
direction: "below",
name: "memory low",
metric: "memory",
current: 20.0,
mean: 60.0,
stddev: 15.0,
severity: "slightly",
direction: "below",
wantContains: []string{"Memory", "slightly", "below", "20%", "60%"},
},
}
@ -481,4 +481,3 @@ func TestFormatPredictionBasis(t *testing.T) {
t.Errorf("Expected 'based on' in result, got %q", result)
}
}


@ -134,7 +134,7 @@ func formatTrendLine(metric string, trend Trend) string {
}
metricLabel := strings.Title(metric)
// Direction with rate
var directionStr string
switch trend.Direction {
@ -188,7 +188,7 @@ func formatMetricSamples(metric string, points []MetricPoint) string {
}
metricLabel := strings.Title(metric)
// Build compact arrow-separated value list
var values []string
prevValue := -1.0
@ -486,9 +486,9 @@ func FormatGuestForContext(
ResourceName: name,
Node: node,
VMID: vmid,
CurrentCPU: cpu * 100, // Convert from 0-1 to percentage
CurrentMemory: memUsage, // Already 0-100 percentage from Memory.Usage
CurrentDisk: diskUsage, // Already 0-100 percentage from Disk.Usage
CurrentCPU: cpu * 100, // Convert from 0-1 to percentage
CurrentMemory: memUsage, // Already 0-100 percentage from Memory.Usage
CurrentDisk: diskUsage, // Already 0-100 percentage from Disk.Usage
Status: status,
Uptime: time.Duration(uptime) * time.Second,
Trends: trends,


@ -234,9 +234,9 @@ func TestFormatTrendLine(t *testing.T) {
expected string
}{
{
name: "insufficient data",
metric: "cpu",
trend: Trend{DataPoints: 2},
name: "insufficient data",
metric: "cpu",
trend: Trend{DataPoints: 2},
expected: "",
},
{
@ -385,11 +385,11 @@ func TestFormatGuestForContext(t *testing.T) {
"pve-1",
"vm",
"running",
100, // VMID
0.35, // CPU (0-1)
65.0, // Memory (0-100)
45.0, // Disk (0-100)
3600, // 1 hour uptime
100, // VMID
0.35, // CPU (0-1)
65.0, // Memory (0-100)
45.0, // Disk (0-100)
3600, // 1 hour uptime
lastBackup,
trends,
)
@ -427,9 +427,9 @@ func TestFormatStorageForContext(t *testing.T) {
Name: "local-zfs",
Node: "pve-1",
Status: "available",
Used: 500 * 1024 * 1024 * 1024, // 500GB
Used: 500 * 1024 * 1024 * 1024, // 500GB
Total: 1000 * 1024 * 1024 * 1024, // 1TB
Usage: 0, // Will be calculated
Usage: 0, // Will be calculated
}
trends := map[string]Trend{
@ -721,7 +721,7 @@ func TestFormatMetricSamples_InsufficientData(t *testing.T) {
func TestDownsampleMetrics(t *testing.T) {
now := time.Now()
// Create 100 points
points := make([]MetricPoint, 100)
for i := 0; i < 100; i++ {
@ -752,7 +752,7 @@ func TestDownsampleMetrics(t *testing.T) {
func TestDownsampleMetrics_SmallInput(t *testing.T) {
now := time.Now()
// Create 5 points - less than target
points := []MetricPoint{
{Value: 10, Timestamp: now.Add(-4 * time.Minute)},
@ -773,11 +773,11 @@ func TestDownsampleMetrics_SmallInput(t *testing.T) {
func TestFormatResourceContext_WithMetricSamples(t *testing.T) {
now := time.Now()
ctx := ResourceContext{
ResourceID: "ct-105",
ResourceType: "container",
ResourceName: "frigate",
Status: "running",
CurrentDisk: 30.7,
ResourceID: "ct-105",
ResourceType: "container",
ResourceName: "frigate",
Status: "running",
CurrentDisk: 30.7,
MetricSamples: map[string][]MetricPoint{
"disk": {
{Value: 26.2, Timestamp: now.Add(-3 * time.Hour)},
@ -794,4 +794,3 @@ func TestFormatResourceContext_WithMetricSamples(t *testing.T) {
t.Error("Expected result to contain History section with metric samples")
}
}


@ -97,7 +97,7 @@ func applyTrendSanityChecks(trend Trend, actualSpan time.Duration, metricName st
// For percentage metrics (0-100 range), apply physical limits
// A metric bounded 0-100 can't grow more than 100% per day
isPercentageMetric := metricName == "cpu" || metricName == "memory" || metricName == "disk" ||
metricName == "usage" || mean <= 100
if isPercentageMetric {
@ -169,7 +169,7 @@ func linearRegression(points []MetricPoint) LinearRegressionResult {
}
n := float64(len(points))
// Use time relative to first point for numerical stability
baseTime := points[0].Timestamp
@ -277,19 +277,19 @@ func ComputePercentiles(points []MetricPoint, percentiles ...int) map[int]float6
if p < 0 || p > 100 {
continue
}
// Calculate index for percentile
idx := float64(p) / 100.0 * float64(len(values)-1)
lower := int(math.Floor(idx))
upper := int(math.Ceil(idx))
if lower >= len(values) {
lower = len(values) - 1
}
if upper >= len(values) {
upper = len(values) - 1
}
if lower == upper {
result[p] = values[lower]
} else {
@ -388,7 +388,7 @@ func trimTrailingZeros(s string) string {
if dotIdx == -1 {
return s // No decimal point
}
// Trim trailing zeros after decimal
end := len(s)
for end > dotIdx+1 && s[end-1] == '0' {

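The ComputePercentiles hunk above computes `idx = p/100 * (n-1)` and returns `values[lower]` when the index lands on a whole rank; its `else` branch is truncated in the diff, so the linear interpolation below is an assumption about that branch. A minimal standalone sketch (sorting is done locally here; the real function presumably sorts earlier):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the p-th percentile of values using the same
// nearest-rank index arithmetic as the hunk above, with linear
// interpolation between the two bracketing ranks (assumed behaviour
// of the truncated else branch).
func percentile(values []float64, p int) float64 {
	sorted := append([]float64(nil), values...)
	sort.Float64s(sorted)
	idx := float64(p) / 100.0 * float64(len(sorted)-1)
	lower := int(math.Floor(idx))
	upper := int(math.Ceil(idx))
	if lower == upper {
		return sorted[lower]
	}
	frac := idx - float64(lower)
	return sorted[lower] + frac*(sorted[upper]-sorted[lower])
}

func main() {
	vals := []float64{10, 20, 30, 40}
	fmt.Println(percentile(vals, 50)) // idx = 1.5 → 25
	fmt.Println(percentile(vals, 95)) // idx = 2.85 → 38.5
}
```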

@ -39,7 +39,7 @@ func TestComputeTrend_Stable(t *testing.T) {
points := make([]MetricPoint, 24)
for i := 0; i < 24; i++ {
// Small random-looking variation around 50%, but no trend
offset := float64(i%3-1) * 0.2
points[i] = MetricPoint{
Value: 50 + offset,
Timestamp: now.Add(time.Duration(-24+i) * time.Hour),
@ -92,7 +92,7 @@ func TestComputeTrend_Volatile(t *testing.T) {
trend := ComputeTrend(points, "cpu", 24*time.Hour)
if trend.Direction != TrendVolatile {
t.Errorf("Expected TrendVolatile, got %s (stddev: %.2f, mean: %.2f)",
trend.Direction, trend.StdDev, trend.Average)
}
}
@ -282,13 +282,13 @@ func TestComputeTrend_PercentageCapping(t *testing.T) {
// Even with a long time span, if the raw rate comes out absurdly high
// (which shouldn't happen with good data, but let's test the cap)
now := time.Now()
// Create data that would naively produce a >100%/day rate
// 5 points over 2 hours with aggressive growth
points := make([]MetricPoint, 5)
for i := 0; i < 5; i++ {
points[i] = MetricPoint{
Value: 20 + float64(i)*10, // 20, 30, 40, 50, 60
Timestamp: now.Add(time.Duration(-4+i) * 30 * time.Minute), // 30 min apart
}
}
@ -320,11 +320,11 @@ func TestComputeTrend_MediumTimeSpan(t *testing.T) {
if trend.RatePerHour == 0 {
t.Errorf("Medium time span should have non-zero hourly rate")
}
// But daily extrapolation should be constrained
observedChange := 1.5 * 6 // ~9% change
if trend.RatePerDay > observedChange*15 {
t.Errorf("Daily rate %.2f should not vastly exceed observed change %.2f",
trend.RatePerDay, observedChange)
}
}
@ -348,7 +348,7 @@ func TestComputeTrend_LongTimeSpanNoChange(t *testing.T) {
if trend.Direction == TrendGrowing {
t.Errorf("Stable oscillating data should not be classified as Growing")
}
// Rate should be tiny
if trend.RatePerDay > 1 || trend.RatePerDay < -1 {
t.Errorf("Stable data should have near-zero rate, got %.2f/day", trend.RatePerDay)
@ -439,5 +439,3 @@ func TestTrimTrailingZeros(t *testing.T) {
}
}
}


@ -65,15 +65,15 @@ type Anomaly struct {
// Prediction represents a forecasted future event
type Prediction struct {
ResourceID string // Which resource this prediction is for
Metric string // Which metric
Event string // Type of predicted event (capacity_full, oom, etc.)
ETA time.Time // When the event is predicted to occur
DaysUntil float64 // Days until event
Confidence float64 // 0-1 confidence level
Basis string // Explanation of how prediction was made
GrowthRate float64 // Rate of change used for projection
CurrentPct float64 // Current usage percentage
}
// Change represents a detected configuration or state change
@ -91,23 +91,23 @@ type Change struct {
type ChangeType string
const (
ChangeCreated ChangeType = "created" // New resource appeared
ChangeDeleted ChangeType = "deleted" // Resource disappeared
ChangeConfig ChangeType = "config" // Configuration change (RAM, CPU)
ChangeStatus ChangeType = "status" // Status change (started, stopped)
ChangeMigrated ChangeType = "migrated" // Moved to different node
ChangePerformance ChangeType = "performance" // Significant performance shift
)
// ResourceTrends contains all trend data for a single resource
type ResourceTrends struct {
ResourceID string // Unique identifier
ResourceType string // node, vm, container, storage, docker_host
ResourceName string // Display name
Trends map[string]Trend // Metric name -> trend data
DataAvailable bool // Whether we have historical data for this resource
OldestData time.Time // Timestamp of oldest data point
NewestData time.Time // Timestamp of newest data point
}
// ResourceContext contains all context for a single resource
@ -119,11 +119,11 @@ type ResourceContext struct {
VMID int // Proxmox VMID for VMs/containers (0 if not applicable)
// Current state (point-in-time)
CurrentCPU float64
CurrentMemory float64
CurrentDisk float64
Status string
Uptime time.Duration
// Historical analysis
Trends map[string]Trend // metric -> trend (24h and 7d)
@ -139,10 +139,10 @@ type ResourceContext struct {
Predictions []Prediction
// Operational memory
UserNotes []string // User-provided annotations
PastIssues []string // Summary of past findings
LastRemediation string // What was done last time
RecentChanges []Change // Recent configuration changes
// Additional metadata (e.g., OCI image for OCI containers)
Metadata map[string]interface{}
@ -152,11 +152,11 @@ type ResourceContext struct {
type InfrastructureContext struct {
// Timestamp of this context snapshot
GeneratedAt time.Time
// Summary statistics
TotalResources int
ResourcesWithData int // Resources with historical data available
// Categorized resources with their context
Nodes []ResourceContext
VMs []ResourceContext
@ -164,7 +164,7 @@ type InfrastructureContext struct {
Storage []ResourceContext
DockerHosts []ResourceContext
Hosts []ResourceContext
// Global insights
Anomalies []Anomaly // Cross-infrastructure anomalies
Predictions []Prediction // Capacity and failure predictions
@ -173,12 +173,12 @@ type InfrastructureContext struct {
// Stats contains summary statistics for a metric
type Stats struct {
Count int
Min float64
Max float64
Sum float64
Mean float64
StdDev float64
}
// LinearRegressionResult contains the results of linear regression


@ -81,7 +81,7 @@ func TestDetector_CorrelationDetection(t *testing.T) {
// Check correlations
correlations := d.GetCorrelations()
found := false
for _, c := range correlations {
if c.SourceID == "storage-1" && c.TargetID == "vm-100" {
@ -92,7 +92,7 @@ func TestDetector_CorrelationDetection(t *testing.T) {
break
}
}
if !found {
t.Error("Expected correlation between storage-1 and vm-100")
}
@ -230,7 +230,7 @@ func TestDetector_FormatForContext(t *testing.T) {
if context == "" {
t.Error("Expected non-empty context")
}
if !contains(context, "Correlation") {
t.Errorf("Expected context to mention correlations: %s", context)
}
@ -333,4 +333,3 @@ func TestIntToStr(t *testing.T) {
}
}
}


@ -139,10 +139,10 @@ func TestEstimateUSD_AnthropicModels(t *testing.T) {
model string
known bool
}{
{"claude-sonnet-4-20250514", true}, // matches claude-sonnet*
{"claude-opus-4-20250514", true}, // matches claude-opus*
{"claude-haiku-something", true}, // matches claude-haiku*
{"unknown-claude", false}, // no match
}
for _, tt := range tests {
@ -174,8 +174,8 @@ func TestEstimateUSD_OpenAIModels(t *testing.T) {
func TestEstimateUSD_DeepSeekModels(t *testing.T) {
tests := []struct {
model string
known bool
}{
{"deepseek-chat", true},
{"deepseek-coder", true},
@ -236,8 +236,6 @@ func TestSummary_RetentionInfo(t *testing.T) {
}
}
func TestClear_MultipleTimes(t *testing.T) {
store := NewStore(30)


@ -327,4 +327,3 @@ func TestSetPersistence_TrimsOldEventsOnLoad(t *testing.T) {
t.Fatalf("expected old events to be trimmed, got %d provider models", len(summary.ProviderModels))
}
}


@ -18,10 +18,10 @@ func IsDemoMode() bool {
// mockResourcePatterns contains name patterns that indicate mock/demo resources
var mockResourcePatterns = []string{
"pve1", "pve2", "pve3", "pve4", "pve5", "pve6", "pve7", // mock PVE nodes
"mock-cluster", "mock-", // generic mock prefixes
"Ceres", "Atlas", "Nova", "Orion", "Vega", "Rigel", // mock host agent names
"docker-host-", "k8s-cluster-", // mock Docker/K8s names
"demo-", // demo prefixes
}
// IsMockResource returns true if the resource name/ID appears to be mock data
@ -31,7 +31,7 @@ func IsMockResource(resourceID, resourceName, node string) bool {
if IsDemoMode() {
return false
}
// Check against mock patterns
toCheck := []string{resourceID, resourceName, node}
for _, value := range toCheck {
@ -47,7 +47,6 @@ func IsMockResource(resourceID, resourceName, node string) bool {
return false
}
// InjectDemoFindings populates the patrol service with realistic mock findings
// This is used for demo instances to showcase AI features without actual AI API calls
func (p *PatrolService) InjectDemoFindings() {
@ -80,7 +79,7 @@ func (p *PatrolService) InjectDemoFindings() {
**Long-term:**
- Add additional storage or migrate VMs to other pools
- Enable ZFS compression if not already enabled`,
DetectedAt: now.Add(-2 * time.Hour),
LastSeenAt: now.Add(-5 * time.Minute),
TimesRaised: 3,
Source: "patrol",
@ -101,7 +100,7 @@ func (p *PatrolService) InjectDemoFindings() {
2. Check for memory leaks in Jellyfin: restart the service
3. Limit transcoding to reduce memory pressure
4. Review Jellyfin cache settings in the dashboard`,
DetectedAt: now.Add(-6 * time.Hour),
LastSeenAt: now.Add(-10 * time.Minute),
TimesRaised: 5,
Source: "patrol",
@ -124,7 +123,7 @@ func (p *PatrolService) InjectDemoFindings() {
**Manual backup:**
` + "`vzdump 105 --storage pbs --mode snapshot`",
DetectedAt: now.Add(-24 * time.Hour),
LastSeenAt: now.Add(-15 * time.Minute),
TimesRaised: 2,
Source: "patrol",
@ -147,7 +146,7 @@ func (p *PatrolService) InjectDemoFindings() {
**Consider:**
- Live-migrate a VM to another node: ` + "`qm migrate <vmid> pve1 --online`" + `
- Set CPU limits on high-usage VMs`,
DetectedAt: now.Add(-2 * time.Hour),
LastSeenAt: now.Add(-8 * time.Minute),
TimesRaised: 4,
Source: "patrol",
@ -170,7 +169,7 @@ func (p *PatrolService) InjectDemoFindings() {
- OOM kills: check ` + "`docker stats uptime-kuma`" + `
- Configuration errors in environment variables
- Database corruption (check data volume)`,
DetectedAt: now.Add(-12 * time.Hour),
LastSeenAt: now.Add(-20 * time.Minute),
TimesRaised: 3,
Source: "patrol",
@ -228,10 +227,10 @@ func (p *PatrolService) injectDemoRunHistory() {
for i := 1; i <= 12; i++ {
offset := time.Duration(i*6) * time.Hour
startTime := now.Add(-offset)
// Vary the duration slightly
duration := time.Duration(40+(i%30)) * time.Second
// Outcomes vary over time
var summary string
var status string
@ -444,29 +443,29 @@ EVIDENCE: PermitRootLogin yes found in config
// GenerateDemoAIStream acts like GenerateDemoAIResponse but streams content via callback
func GenerateDemoAIStream(prompt string, callback StreamCallback) (*ExecuteResponse, error) {
resp := GenerateDemoAIResponse(prompt)
// Simulate streaming by sending chunks
chunkSize := 10
content := resp.Content
for i := 0; i < len(content); i += chunkSize {
end := i + chunkSize
if end > len(content) {
end = len(content)
}
callback(StreamEvent{
Type: "content",
Data: content[i:end],
})
// Tiny sleep to simulate generation speed
time.Sleep(10 * time.Millisecond)
}
callback(StreamEvent{
Type: "done",
})
return resp, nil
}


@ -89,4 +89,3 @@ func (a *FindingsPersistenceAdapter) LoadFindings() (map[string]*Finding, error)
}
return findings, nil
}


@ -239,7 +239,7 @@ func TestFindingsStore_SetUserNote(t *testing.T) {
func TestFindingsStore_Suppression(t *testing.T) {
store := NewFindingsStore()
// Add a finding
finding := &Finding{
ID: "f1",
@ -276,7 +276,7 @@ func TestFindingsStore_Suppression(t *testing.T) {
Category: FindingCategoryPerformance,
}
isNew := store.Add(finding2)
if isNew {
t.Error("Expected Add to return false for suppressed finding")
}
@ -291,7 +291,7 @@ func TestFindingsStore_Suppression(t *testing.T) {
if len(rules) == 0 {
t.Fatal("Should have at least one suppression rule")
}
if !store.DeleteSuppressionRule(rules[0].ID) {
t.Fatal("DeleteSuppressionRule should return true")
}
@ -303,7 +303,7 @@ func TestFindingsStore_Suppression(t *testing.T) {
func TestFindingsStore_AddSuppressionRule(t *testing.T) {
store := NewFindingsStore()
rule := store.AddSuppressionRule("res-1", "Res 1", FindingCategoryCapacity, "Manual suppression")
if rule == nil {
t.Fatal("AddSuppressionRule returned nil")
@ -329,12 +329,12 @@ func TestFindingsStore_AddSuppressionRule(t *testing.T) {
func TestFindingsStore_Cleanup(t *testing.T) {
store := NewFindingsStore()
now := time.Now()
// Add an active finding (should NOT be cleaned up)
store.Add(&Finding{ID: "active", Title: "Active"})
// Add an old resolved finding (should BE cleaned up)
resolvedOld := &Finding{
ID: "resolved-old",
@ -342,7 +342,7 @@ func TestFindingsStore_Cleanup(t *testing.T) {
ResolvedAt: timePtr(now.Add(-48 * time.Hour)),
}
store.Add(resolvedOld)
// Add a recent resolved finding (should NOT be cleaned up if maxAge is 24h)
resolvedRecent := &Finding{
ID: "resolved-recent",
@ -369,7 +369,7 @@ func TestFindingsStore_Cleanup(t *testing.T) {
func TestFindingsStore_GetDismissedForContext(t *testing.T) {
store := NewFindingsStore()
// Add dismissed finding
finding1 := &Finding{
ID: "f1",
@ -395,7 +395,7 @@ func TestFindingsStore_GetDismissedForContext(t *testing.T) {
store.Suppress("f2")
ctx := store.GetDismissedForContext()
if !strings.Contains(ctx, "High CPU on web-1") {
t.Error("Context should contain dismissed finding title")
}
@ -413,7 +413,7 @@ func TestFindingsStore_GetDismissedForContext(t *testing.T) {
func TestFindingsStore_Persistence(t *testing.T) {
store := NewFindingsStore()
mockP := &mockPersistence{}
err := store.SetPersistence(mockP)
if err != nil {
t.Fatalf("SetPersistence failed: %v", err)
@ -421,7 +421,7 @@ func TestFindingsStore_Persistence(t *testing.T) {
// Add a finding - should trigger save (debounced, but we can ForceSave)
store.Add(&Finding{ID: "f1", Title: "Persist me"})
err = store.ForceSave()
if err != nil {
t.Fatalf("ForceSave failed: %v", err)
@ -846,7 +846,7 @@ func timePtr(t time.Time) *time.Time {
func TestFindingsStore_DeleteSuppressionRule_NotFound(t *testing.T) {
store := NewFindingsStore()
// Try to delete non-existent rule
if store.DeleteSuppressionRule("nonexistent-id") {
t.Error("DeleteSuppressionRule should return false for non-existent rule")
@ -855,7 +855,7 @@ func TestFindingsStore_DeleteSuppressionRule_NotFound(t *testing.T) {
func TestFindingsStore_SetPersistence_Nil(t *testing.T) {
store := NewFindingsStore()
// Setting nil persistence should succeed (disables persistence)
err := store.SetPersistence(nil)
if err != nil {
@ -865,7 +865,7 @@ func TestFindingsStore_SetPersistence_Nil(t *testing.T) {
func TestFindingsStore_Acknowledge_NotFound(t *testing.T) {
store := NewFindingsStore()
if store.Acknowledge("nonexistent") {
t.Error("Acknowledge should return false for non-existent finding")
}
@ -873,7 +873,7 @@ func TestFindingsStore_Acknowledge_NotFound(t *testing.T) {
func TestFindingsStore_Dismiss_NotFound(t *testing.T) {
store := NewFindingsStore()
if store.Dismiss("nonexistent", "reason", "note") {
t.Error("Dismiss should return false for non-existent finding")
}
@ -881,7 +881,7 @@ func TestFindingsStore_Dismiss_NotFound(t *testing.T) {
func TestFindingsStore_SetUserNote_NotFound(t *testing.T) {
store := NewFindingsStore()
if store.SetUserNote("nonexistent", "note") {
t.Error("SetUserNote should return false for non-existent finding")
}
@ -889,7 +889,7 @@ func TestFindingsStore_SetUserNote_NotFound(t *testing.T) {
func TestFindingsStore_Suppress_NotFound(t *testing.T) {
store := NewFindingsStore()
if store.Suppress("nonexistent") {
t.Error("Suppress should return false for non-existent finding")
}
@ -897,7 +897,7 @@ func TestFindingsStore_Suppress_NotFound(t *testing.T) {
func TestFindingsStore_Resolve_NotFound(t *testing.T) {
store := NewFindingsStore()
if store.Resolve("nonexistent", false) {
t.Error("Resolve should return false for non-existent finding")
}
@ -905,7 +905,7 @@ func TestFindingsStore_Resolve_NotFound(t *testing.T) {
func TestFindingsStore_Resolve_AlreadyResolved(t *testing.T) {
store := NewFindingsStore()
finding := &Finding{
ID: "f1",
ResourceID: "res-1",
@ -914,13 +914,13 @@ func TestFindingsStore_Resolve_AlreadyResolved(t *testing.T) {
}
store.Add(finding)
store.Resolve("f1", false)
// Verify it's resolved
f := store.Get("f1")
if !f.IsResolved() {
t.Error("Finding should be resolved after Resolve call")
}
// Try to resolve again - should return false
if store.Resolve("f1", false) {
t.Error("Resolve should return false for already-resolved finding")
@ -929,7 +929,7 @@ func TestFindingsStore_Resolve_AlreadyResolved(t *testing.T) {
func TestFindingsStore_GetActive_Empty(t *testing.T) {
store := NewFindingsStore()
active := store.GetActive(FindingSeverityInfo)
if len(active) != 0 {
t.Errorf("Expected 0 active findings from empty store, got %d", len(active))
@ -938,15 +938,15 @@ func TestFindingsStore_GetActive_Empty(t *testing.T) {
func TestFindingsStore_GetSummary(t *testing.T) {
store := NewFindingsStore()
// Add findings of each severity
store.Add(&Finding{ID: "crit", Severity: FindingSeverityCritical, ResourceID: "r1", Title: "Critical"})
store.Add(&Finding{ID: "warn", Severity: FindingSeverityWarning, ResourceID: "r2", Title: "Warning"})
store.Add(&Finding{ID: "watch", Severity: FindingSeverityWatch, ResourceID: "r3", Title: "Watch"})
store.Add(&Finding{ID: "info", Severity: FindingSeverityInfo, ResourceID: "r4", Title: "Info"})
summary := store.GetSummary()
if summary.Critical != 1 {
t.Errorf("Expected 1 critical, got %d", summary.Critical)
}
@ -969,7 +969,7 @@ func TestFindingsStore_GetSummary(t *testing.T) {
func TestFindingsStore_GetDismissedForContext_Empty(t *testing.T) {
store := NewFindingsStore()
ctx := store.GetDismissedForContext()
if ctx != "" {
t.Errorf("Expected empty context for empty store, got: %s", ctx)
@ -985,12 +985,12 @@ func TestFinding_Status(t *testing.T) {
if !resolved.IsResolved() {
t.Error("Finding with ResolvedAt set should be resolved")
}
notResolved := Finding{ID: "not-resolved"}
if notResolved.IsResolved() {
t.Error("Finding without ResolvedAt should not be resolved")
}
// Test IsDismissed
dismissed := Finding{
ID: "dismissed",
@ -1005,15 +1005,15 @@ func TestFinding_Status(t *testing.T) {
func TestFindingsStore_Add_EmptyID(t *testing.T) {
store := NewFindingsStore()
finding := &Finding{
ResourceID: "res-1",
Title: "Test",
}
// Should generate ID if empty
store.Add(finding)
// Verify something was added
all := store.GetAll(nil)
if len(all) != 1 {
@ -1023,7 +1023,7 @@ func TestFindingsStore_Add_EmptyID(t *testing.T) {
func TestFindingsStore_GetSuppressionRules_Empty(t *testing.T) {
store := NewFindingsStore()
rules := store.GetSuppressionRules()
if len(rules) != 0 {
t.Errorf("Expected 0 rules from new store, got %d", len(rules))


@ -21,12 +21,12 @@ func TestNewIntelligence(t *testing.T) {
func TestIntelligence_GetSummary_NoSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
summary := intel.GetSummary()
if summary == nil {
t.Fatal("Expected non-nil summary")
}
// Should have default healthy state
if summary.OverallHealth.Score != 100 {
t.Errorf("Expected health score 100, got %f", summary.OverallHealth.Score)
@ -41,16 +41,16 @@ func TestIntelligence_GetSummary_NoSubsystems(t *testing.T) {
func TestIntelligence_GetResourceIntelligence_NoSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
resourceIntel := intel.GetResourceIntelligence("test-resource")
if resourceIntel == nil {
t.Fatal("Expected non-nil resource intelligence")
}
if resourceIntel.ResourceID != "test-resource" {
t.Errorf("Expected resource ID 'test-resource', got %s", resourceIntel.ResourceID)
}
// Should have default healthy state
if resourceIntel.Health.Score != 100 {
t.Errorf("Expected health score 100, got %f", resourceIntel.Health.Score)
@ -59,7 +59,7 @@ func TestIntelligence_GetResourceIntelligence_NoSubsystems(t *testing.T) {
func TestIntelligence_FormatContext_NoSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// With no subsystems, context should be empty
ctx := intel.FormatContext("test-resource")
if ctx != "" {
@ -69,7 +69,7 @@ func TestIntelligence_FormatContext_NoSubsystems(t *testing.T) {
func TestIntelligence_CreatePredictionFinding(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
pred := patterns.FailurePrediction{
ResourceID: "vm-100",
EventType: patterns.EventHighCPU, // Use the constant instead of cast
@ -77,17 +77,17 @@ func TestIntelligence_CreatePredictionFinding(t *testing.T) {
Confidence: 0.85, // High confidence
Basis: "Pattern detected",
}
finding := intel.CreatePredictionFinding(pred)
if finding == nil {
t.Fatal("Expected non-nil finding")
}
// High confidence + < 1 day should be critical
if finding.Severity != FindingSeverityCritical {
t.Errorf("Expected critical severity for imminent high-confidence prediction, got %s", finding.Severity)
}
if finding.ResourceID != "vm-100" {
t.Errorf("Expected resource ID 'vm-100', got %s", finding.ResourceID)
}
@ -95,31 +95,31 @@ func TestIntelligence_CreatePredictionFinding(t *testing.T) {
func TestIntelligence_SetSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create a findings store
findings := NewFindingsStore()
// Add a finding
findings.Add(&Finding{
ID: "test-finding",
Key: "test:finding",
Severity: FindingSeverityWarning,
Category: FindingCategoryPerformance,
ResourceID: "vm-100",
ResourceName: "test-vm",
ResourceType: "vm",
Title: "Test Finding",
DetectedAt: time.Now(),
LastSeenAt: time.Now(),
Source: "test",
})
// Set subsystems with just findings
intel.SetSubsystems(findings, nil, nil, nil, nil, nil, nil, nil)
// Get summary
summary := intel.GetSummary()
// Should have 1 warning
if summary.FindingsCount.Warning != 1 {
t.Errorf("Expected 1 warning, got %d", summary.FindingsCount.Warning)
@ -127,7 +127,7 @@ func TestIntelligence_SetSubsystems(t *testing.T) {
if summary.FindingsCount.Total != 1 {
t.Errorf("Expected 1 total finding, got %d", summary.FindingsCount.Total)
}
// Health should be reduced due to warning
if summary.OverallHealth.Score >= 100 {
t.Error("Expected health score < 100 due to warning finding")
@ -151,7 +151,7 @@ func TestIntelligence_HealthGrades(t *testing.T) {
{30, HealthGradeF},
{0, HealthGradeF},
}
for _, tt := range tests {
grade := scoreToGrade(tt.score)
if grade != tt.grade {
@ -162,13 +162,13 @@ func TestIntelligence_HealthGrades(t *testing.T) {
func TestIntelligence_CheckBaselinesForResource(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// With no baseline store, should return nil
anomalies := intel.CheckBaselinesForResource("vm-100", map[string]float64{
"cpu": 85.0,
"memory": 90.0,
})
if anomalies != nil {
t.Error("Expected nil anomalies when baseline store not configured")
}
@ -178,29 +178,29 @@ func TestIntelligence_CheckBaselinesForResource(t *testing.T) {
func TestIntelligence_SetStateProvider(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
mockSP := &mockStateProvider{}
intel.SetStateProvider(mockSP)
// State provider should be set (we can't directly access it, but no panic = success)
}
func TestIntelligence_FormatGlobalContext(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// With no subsystems, should return empty
ctx := intel.FormatGlobalContext()
if ctx != "" {
t.Errorf("Expected empty context with no subsystems, got: %s", ctx)
}
// Set up knowledge store
knowledgeStore, err := knowledge.NewStore(t.TempDir())
if err != nil {
t.Fatalf("Failed to create knowledge store: %v", err)
}
intel.SetSubsystems(nil, nil, nil, nil, nil, knowledgeStore, nil, nil)
// Should still be empty (no knowledge saved)
ctx = intel.FormatGlobalContext()
// Empty or with headers only is fine
@ -208,17 +208,17 @@ func TestIntelligence_FormatGlobalContext(t *testing.T) {
func TestIntelligence_FormatGlobalContext_WithSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Set up incident store with some data
incidentStore := memory.NewIncidentStore(memory.IncidentStoreConfig{
MaxIncidents: 10,
})
// Create some incidents to format
incidentStore.RecordAnalysis("alert-1", "Test analysis", nil)
intel.SetSubsystems(nil, nil, nil, nil, incidentStore, nil, nil, nil)
ctx := intel.FormatGlobalContext()
// Having set incidents, there should be some context
// (may be empty if no actual incidents, but shouldn't panic)
@ -227,20 +227,20 @@ func TestIntelligence_FormatGlobalContext_WithSubsystems(t *testing.T) {
func TestIntelligence_RecordLearning(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Without knowledge store, should return nil
err := intel.RecordLearning("vm-100", "test-vm", "vm", "Test Title", "Test content")
if err != nil {
t.Errorf("Expected nil error without knowledge store, got: %v", err)
}
// With knowledge store
knowledgeStore, err := knowledge.NewStore(t.TempDir())
if err != nil {
t.Fatalf("Failed to create knowledge store: %v", err)
}
intel.SetSubsystems(nil, nil, nil, nil, nil, knowledgeStore, nil, nil)
err = intel.RecordLearning("vm-100", "test-vm", "vm", "Test Title", "Test content")
if err != nil {
t.Errorf("Expected nil error, got: %v", err)
@ -258,7 +258,7 @@ func TestSeverityOrder(t *testing.T) {
{FindingSeverityInfo, 3},
{FindingSeverity("unknown"), 4},
}
for _, tt := range tests {
result := severityOrder(tt.severity)
if result != tt.expected {
@ -269,24 +269,24 @@ func TestSeverityOrder(t *testing.T) {
func TestIntelligence_CheckBaselinesForResource_WithBaselines(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create baseline store with learned data
baselineStore := baseline.NewStore(baseline.StoreConfig{MinSamples: 10})
// Learn baseline for CPU at ~20%
points := make([]baseline.MetricPoint, 100)
for i := 0; i < 100; i++ {
points[i] = baseline.MetricPoint{Value: 20 + float64(i%3) - 1} // 19-21
}
baselineStore.Learn("vm-100", "vm", "cpu", points)
intel.SetSubsystems(nil, nil, nil, baselineStore, nil, nil, nil, nil)
// Check with anomalous value (80% is 4x baseline)
anomalies := intel.CheckBaselinesForResource("vm-100", map[string]float64{
"cpu": 80.0,
})
// Should detect anomaly for CPU (80% with baseline of 20% is 4x = anomalous)
if len(anomalies) == 0 {
t.Error("Expected anomaly for CPU at 80% with baseline of 20%")
@ -295,7 +295,7 @@ func TestIntelligence_CheckBaselinesForResource_WithBaselines(t *testing.T) {
func TestIntelligence_GetResourceIntelligence_WithSubsystems(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create findings store with a finding for the resource
findings := NewFindingsStore()
findings.Add(&Finding{
@ -311,27 +311,27 @@ func TestIntelligence_GetResourceIntelligence_WithSubsystems(t *testing.T) {
LastSeenAt: time.Now(),
Source: "test",
})
// Create correlation detector
correlationDetector := correlation.NewDetector(correlation.DefaultConfig())
intel.SetSubsystems(findings, nil, correlationDetector, nil, nil, nil, nil, nil)
resourceIntel := intel.GetResourceIntelligence("vm-200")
if len(resourceIntel.ActiveFindings) != 1 {
t.Errorf("Expected 1 active finding, got %d", len(resourceIntel.ActiveFindings))
}
if resourceIntel.ResourceName != "critical-vm" {
t.Errorf("Expected resource name 'critical-vm', got %s", resourceIntel.ResourceName)
}
// Health should be reduced due to critical finding
if resourceIntel.Health.Score >= 100 {
t.Error("Expected reduced health score due to critical finding")
}
// Grade should be less than A
if resourceIntel.Health.Grade == HealthGradeA {
t.Error("Expected health grade less than A due to critical finding")
@ -340,18 +340,18 @@ func TestIntelligence_GetResourceIntelligence_WithSubsystems(t *testing.T) {
func TestIntelligence_FormatContext_WithKnowledge(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create knowledge store with data
knowledgeStore, err := knowledge.NewStore(t.TempDir())
if err != nil {
t.Fatalf("Failed to create knowledge store: %v", err)
}
knowledgeStore.SaveNote("vm-300", "test-vm", "vm", "general", "Test Note", "This is test content")
intel.SetSubsystems(nil, nil, nil, nil, nil, knowledgeStore, nil, nil)
ctx := intel.FormatContext("vm-300")
// Should contain the knowledge context
if ctx == "" {
t.Error("Expected non-empty context with knowledge")
@ -360,7 +360,7 @@ func TestIntelligence_FormatContext_WithKnowledge(t *testing.T) {
func TestIntelligence_CreatePredictionFinding_LowSeverity(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Prediction far in the future with low confidence
pred := patterns.FailurePrediction{
ResourceID: "vm-100",
@@ -369,9 +369,9 @@ func TestIntelligence_CreatePredictionFinding_LowSeverity(t *testing.T) {
Confidence: 0.3, // Low confidence
Basis: "Pattern detected",
}
finding := intel.CreatePredictionFinding(pred)
// Should be watch severity (not critical or warning)
if finding.Severity != FindingSeverityWatch {
t.Errorf("Expected watch severity for far-off low-confidence prediction, got %s", finding.Severity)
@@ -380,18 +380,18 @@ func TestIntelligence_CreatePredictionFinding_LowSeverity(t *testing.T) {
func TestIntelligence_CreatePredictionFinding_WarningSeverity(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Prediction soon but low confidence
pred := patterns.FailurePrediction{
ResourceID: "vm-100",
EventType: patterns.EventHighCPU,
DaysUntil: 0.5, // Soon
Confidence: 0.6, // Medium confidence (not > 0.8)
DaysUntil: 0.5, // Soon
Confidence: 0.6, // Medium confidence (not > 0.8)
Basis: "Pattern detected",
}
finding := intel.CreatePredictionFinding(pred)
// Should be warning (soon but not high confidence)
if finding.Severity != FindingSeverityWarning {
t.Errorf("Expected warning severity for imminent medium-confidence prediction, got %s", finding.Severity)
@@ -400,9 +400,9 @@ func TestIntelligence_CreatePredictionFinding_WarningSeverity(t *testing.T) {
func TestIntelligence_GetSummary_WithCriticalFindings(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
findings := NewFindingsStore()
// Add multiple critical findings
for i := 0; i < 3; i++ {
findings.Add(&Finding{
@@ -417,22 +417,22 @@ func TestIntelligence_GetSummary_WithCriticalFindings(t *testing.T) {
Source: "test",
})
}
intel.SetSubsystems(findings, nil, nil, nil, nil, nil, nil, nil)
summary := intel.GetSummary()
// Should have 3 critical
if summary.FindingsCount.Critical != 3 {
t.Errorf("Expected 3 critical findings, got %d", summary.FindingsCount.Critical)
}
// Health should be significantly reduced
// Note: critical impact is capped at 40 points, so score = 100 - 40 = 60
if summary.OverallHealth.Score > 60 {
t.Errorf("Expected health score <= 60 with 3 critical findings, got %f", summary.OverallHealth.Score)
}
// Prediction text should mention critical issues
if summary.OverallHealth.Prediction == "" {
t.Error("Expected non-empty prediction text")
@@ -449,7 +449,7 @@ func TestAbsFloatIntel(t *testing.T) {
{0, 0},
{-0, 0},
}
for _, tt := range tests {
result := absFloatIntel(tt.input)
if result != tt.expected {
@@ -460,15 +460,15 @@ func TestAbsFloatIntel(t *testing.T) {
func TestIntelligence_GetSummary_WithPatterns(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create pattern detector with predictions
patternDetector := patterns.NewDetector(patterns.DefaultConfig())
// Set up the subsystems
intel.SetSubsystems(nil, patternDetector, nil, nil, nil, nil, nil, nil)
summary := intel.GetSummary()
// Predictions count should be available
if summary.PredictionsCount < 0 {
t.Error("PredictionsCount should not be negative")
@@ -477,10 +477,10 @@ func TestIntelligence_GetSummary_WithPatterns(t *testing.T) {
func TestIntelligence_GetResourceIntelligence_WithBaselines(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create baseline store with learned data
baselineStore := baseline.NewStore(baseline.StoreConfig{MinSamples: 10})
// Learn baseline for CPU
points := make([]baseline.MetricPoint, 100)
for i := 0; i < 100; i++ {
@@ -488,16 +488,16 @@ func TestIntelligence_GetResourceIntelligence_WithBaselines(t *testing.T) {
}
baselineStore.Learn("vm-with-baseline", "vm", "cpu", points)
baselineStore.Learn("vm-with-baseline", "vm", "memory", points)
intel.SetSubsystems(nil, nil, nil, baselineStore, nil, nil, nil, nil)
resourceIntel := intel.GetResourceIntelligence("vm-with-baseline")
// Should have baselines
if resourceIntel.Baselines == nil || len(resourceIntel.Baselines) == 0 {
t.Error("Expected baselines to be populated")
}
// Check CPU baseline exists
if _, ok := resourceIntel.Baselines["cpu"]; !ok {
t.Error("Expected CPU baseline")
@@ -509,19 +509,19 @@ func TestIntelligence_GetResourceIntelligence_WithBaselines(t *testing.T) {
func TestIntelligence_GetResourceIntelligence_WithIncidents(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create incident store with some incidents
incidentStore := memory.NewIncidentStore(memory.IncidentStoreConfig{
MaxIncidents: 10,
})
// Record an incident for the resource
incidentStore.RecordAnalysis("alert-vm-500", "Analysis for vm-500", nil)
intel.SetSubsystems(nil, nil, nil, nil, incidentStore, nil, nil, nil)
resourceIntel := intel.GetResourceIntelligence("vm-500")
// Should have the resource ID
if resourceIntel.ResourceID != "vm-500" {
t.Errorf("Expected resource ID 'vm-500', got %s", resourceIntel.ResourceID)
@@ -530,7 +530,7 @@ func TestIntelligence_GetResourceIntelligence_WithIncidents(t *testing.T) {
func TestIntelligence_calculateResourceHealth_WithAnomalies(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create resource intelligence with anomalies
resourceIntel := &ResourceIntelligence{
ResourceID: "test-vm",
@@ -553,14 +553,14 @@ func TestIntelligence_calculateResourceHealth_WithAnomalies(t *testing.T) {
},
},
}
health := intel.calculateResourceHealth(resourceIntel)
// Health should be reduced due to anomalies
if health.Score >= 100 {
t.Error("Expected reduced health score due to anomalies")
}
// Should have factors for anomalies
hasAnomalyFactor := false
for _, f := range health.Factors {
@@ -576,7 +576,7 @@ func TestIntelligence_calculateResourceHealth_WithAnomalies(t *testing.T) {
func TestIntelligence_calculateResourceHealth_WithPredictions(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create resource intelligence with predictions
resourceIntel := &ResourceIntelligence{
ResourceID: "test-vm",
@@ -590,14 +590,14 @@ func TestIntelligence_calculateResourceHealth_WithPredictions(t *testing.T) {
},
},
}
health := intel.calculateResourceHealth(resourceIntel)
// Health should be reduced due to predictions
if health.Score >= 100 {
t.Error("Expected reduced health score due to predictions")
}
// Should have a prediction factor
hasPredictionFactor := false
for _, f := range health.Factors {
@@ -613,20 +613,20 @@ func TestIntelligence_calculateResourceHealth_WithPredictions(t *testing.T) {
func TestIntelligence_calculateResourceHealth_WithNotes(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create resource intelligence with notes (bonus for documentation)
resourceIntel := &ResourceIntelligence{
ResourceID: "test-vm",
NoteCount: 3,
}
health := intel.calculateResourceHealth(resourceIntel)
// Health should have a bonus for having notes
if health.Score < 100 {
t.Error("Expected health score >= 100 with only notes (bonus)")
}
// Should have a learning factor
hasLearningFactor := false
for _, f := range health.Factors {
@@ -642,23 +642,23 @@ func TestIntelligence_calculateResourceHealth_WithNotes(t *testing.T) {
func TestIntelligence_GetSummary_WithLearningBonus(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create knowledge store with many resources
knowledgeStore, err := knowledge.NewStore(t.TempDir())
if err != nil {
t.Fatalf("Failed to create knowledge store: %v", err)
}
// Add knowledge for 6+ resources to trigger learning bonus
for i := 0; i < 7; i++ {
resourceID := "vm-" + string(rune('A'+i))
knowledgeStore.SaveNote(resourceID, "VM "+string(rune('A'+i)), "vm", "general", "Note", "Content")
}
intel.SetSubsystems(nil, nil, nil, nil, nil, knowledgeStore, nil, nil)
summary := intel.GetSummary()
// With 6+ resources learned, should have learning bonus factor
hasLearningFactor := false
for _, f := range summary.OverallHealth.Factors {
@@ -674,12 +674,12 @@ func TestIntelligence_GetSummary_WithLearningBonus(t *testing.T) {
func TestIntelligence_FormatContext_WithCorrelation(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create correlation detector
correlationDetector := correlation.NewDetector(correlation.DefaultConfig())
intel.SetSubsystems(nil, nil, correlationDetector, nil, nil, nil, nil, nil)
// Should not panic even without correlations
ctx := intel.FormatContext("vm-test")
_ = ctx
@@ -687,12 +687,12 @@ func TestIntelligence_FormatContext_WithCorrelation(t *testing.T) {
func TestIntelligence_FormatContext_WithPatterns(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create pattern detector
patternDetector := patterns.NewDetector(patterns.DefaultConfig())
intel.SetSubsystems(nil, patternDetector, nil, nil, nil, nil, nil, nil)
// Should not panic
ctx := intel.FormatContext("vm-test")
_ = ctx
@@ -700,29 +700,29 @@ func TestIntelligence_FormatContext_WithPatterns(t *testing.T) {
func TestIntelligence_FormatContext_WithIncidents(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Create incident store
incidentStore := memory.NewIncidentStore(memory.IncidentStoreConfig{
MaxIncidents: 10,
})
intel.SetSubsystems(nil, nil, nil, nil, incidentStore, nil, nil, nil)
ctx := intel.FormatContext("vm-test")
_ = ctx
}
func TestIntelligence_FormatGlobalContext_Full(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
// Set up all context-contributing subsystems
knowledgeStore, _ := knowledge.NewStore(t.TempDir())
incidentStore := memory.NewIncidentStore(memory.IncidentStoreConfig{MaxIncidents: 10})
correlationDetector := correlation.NewDetector(correlation.DefaultConfig())
patternDetector := patterns.NewDetector(patterns.DefaultConfig())
intel.SetSubsystems(nil, patternDetector, correlationDetector, nil, incidentStore, knowledgeStore, nil, nil)
ctx := intel.FormatGlobalContext()
// May be empty if no data, but shouldn't panic
_ = ctx
@@ -730,20 +730,20 @@ func TestIntelligence_FormatGlobalContext_Full(t *testing.T) {
func TestIntelligence_generateHealthPrediction_Warnings(t *testing.T) {
intel := NewIntelligence(IntelligenceConfig{})
health := HealthScore{
Score: 80,
Grade: HealthGradeB,
}
summary := &IntelligenceSummary{
FindingsCount: FindingsCounts{
Warning: 3,
},
}
prediction := intel.generateHealthPrediction(health, summary)
if prediction == "" {
t.Error("Expected non-empty prediction")
}


@@ -160,7 +160,7 @@ func (s *Store) SaveNote(guestID, guestName, guestType, category, title, content
GuestType: guestType,
Notes: []Note{},
}
// Check for existing file
filePath := s.guestFilePath(guestID)
if data, err := os.ReadFile(filePath); err == nil {
@@ -365,8 +365,8 @@ func (s *Store) ListGuests() ([]string, error) {
// This is used when no specific target is selected to give the AI full context
// To prevent context bloat, it limits output to maxGuests and maxBytes
func (s *Store) FormatAllForContext() string {
const maxGuests = 10 // Only include the 10 most recently updated guests
const maxBytes = 8000 // Cap total output at ~8KB to leave room for other context
const maxGuests = 10 // Only include the 10 most recently updated guests
const maxBytes = 8000 // Cap total output at ~8KB to leave room for other context
guests, err := s.ListGuests()
if err != nil || len(guests) == 0 {
@@ -450,7 +450,7 @@ func (s *Store) FormatAllForContext() string {
content = content[:2] + "****" + content[len(content)-2:]
}
noteLine := fmt.Sprintf("\n- **%s**: %s", note.Title, content)
// Check if adding this note would exceed our byte limit
if currentBytes+len(guestSection)+len(noteLine) > maxBytes {
// Stop adding notes, we've hit the limit


@@ -327,4 +327,3 @@ func TestListGuests(t *testing.T) {
t.Errorf("Expected 2 guests, got %d", len(guests))
}
}


@@ -7,8 +7,8 @@ import (
"testing"
"time"
"github.com/rcourtman/pulse-go-rewrite/internal/config"
"github.com/rcourtman/pulse-go-rewrite/internal/ai/providers"
"github.com/rcourtman/pulse-go-rewrite/internal/config"
"github.com/rcourtman/pulse-go-rewrite/internal/models"
)
@@ -687,22 +687,22 @@ func TestSummarizeKubernetesDeployments_Truncates(t *testing.T) {
func TestBuildKubernetesClusterContext(t *testing.T) {
now := time.Now().Add(-5 * time.Minute)
cluster := models.KubernetesCluster{
ID: "cluster-1",
Name: "prod",
Status: "healthy",
Version: "1.27",
Server: "https://kube.local",
Context: "prod",
AgentVersion: "v1",
IntervalSeconds: 60,
LastSeen: now,
PendingUninstall: true,
Nodes: []models.KubernetesNode{{Name: "node-1", Ready: false, Unschedulable: true}},
ID: "cluster-1",
Name: "prod",
Status: "healthy",
Version: "1.27",
Server: "https://kube.local",
Context: "prod",
AgentVersion: "v1",
IntervalSeconds: 60,
LastSeen: now,
PendingUninstall: true,
Nodes: []models.KubernetesNode{{Name: "node-1", Ready: false, Unschedulable: true}},
Pods: []models.KubernetesPod{
{Name: "pod-1", Namespace: "default", Phase: "Pending"},
{Name: "pod-2", Namespace: "default", Phase: "Running", Restarts: 2},
},
Deployments: []models.KubernetesDeployment{{Namespace: "default", Name: "dep", DesiredReplicas: 1}},
Deployments: []models.KubernetesDeployment{{Namespace: "default", Name: "dep", DesiredReplicas: 1}},
}
ctx := buildKubernetesClusterContext(cluster)


@@ -657,4 +657,3 @@ func TestIncidentStore_RecordNote_NonexistentIncident(t *testing.T) {
t.Error("expected false for non-existent incident")
}
}


@@ -794,7 +794,7 @@ func TestIncidentStore_LoadFromDisk_Scenarios(t *testing.T) {
}
store := &IncidentStore{
filePath: path,
filePath: path,
maxIncidents: 1,
maxAge: 24 * time.Hour,
}


@@ -335,8 +335,8 @@ func TestFormatDuration(t *testing.T) {
input time.Duration
expected string
}{
{30 * time.Second, "just now"}, // < 1 minute returns "just now"
{1 * time.Second, "just now"}, // < 1 minute returns "just now"
{30 * time.Second, "just now"}, // < 1 minute returns "just now"
{1 * time.Second, "just now"}, // < 1 minute returns "just now"
{5 * time.Minute, "5 minutes"},
{1 * time.Minute, "1 minute"},
{2 * time.Hour, "2 hours"},
@@ -380,7 +380,7 @@ func TestTruncateOutput(t *testing.T) {
expected string
}{
{"short", 10, "short"},
{"longer string", 5, "lo..."}, // truncates at maxLen-3 + "..."
{"longer string", 5, "lo..."}, // truncates at maxLen-3 + "..."
{"", 10, ""},
}
@@ -394,8 +394,8 @@ func TestExtractKeywords(t *testing.T) {
func TestExtractKeywords(t *testing.T) {
tests := []struct {
input string
minKeywords int
input string
minKeywords int
}{
{"High memory usage causing OOM", 3},
{"CPU spike detected", 2},


@@ -238,32 +238,32 @@ func TestRemediationLog_GetRecentRemediationStats(t *testing.T) {
now := time.Now()
r.Log(RemediationRecord{
Timestamp: now.Add(-1 * time.Hour),
Problem: "p1",
Action: "a1",
Outcome: OutcomeResolved,
Automatic: true,
Timestamp: now.Add(-1 * time.Hour),
Problem: "p1",
Action: "a1",
Outcome: OutcomeResolved,
Automatic: true,
})
r.Log(RemediationRecord{
Timestamp: now.Add(-2 * time.Hour),
Problem: "p2",
Action: "a2",
Outcome: OutcomePartial,
Automatic: false,
Timestamp: now.Add(-2 * time.Hour),
Problem: "p2",
Action: "a2",
Outcome: OutcomePartial,
Automatic: false,
})
r.Log(RemediationRecord{
Timestamp: now.Add(-30 * time.Minute),
Problem: "p3",
Action: "a3",
Outcome: OutcomeFailed,
Automatic: true,
Timestamp: now.Add(-30 * time.Minute),
Problem: "p3",
Action: "a3",
Outcome: OutcomeFailed,
Automatic: true,
})
r.Log(RemediationRecord{
Timestamp: now.Add(-48 * time.Hour),
Problem: "old",
Action: "old",
Outcome: OutcomeResolved,
Automatic: false,
Timestamp: now.Add(-48 * time.Hour),
Problem: "old",
Action: "old",
Outcome: OutcomeResolved,
Automatic: false,
})
// Get stats for last 24 hours
@@ -333,4 +333,3 @@ func TestChangeDetector_Limit(t *testing.T) {
t.Errorf("Expected max 5 changes due to limit, got %d", len(allChanges))
}
}


@@ -106,8 +106,8 @@ func CalculatePatrolThresholdsWithMode(provider ThresholdProvider, proactiveMode
// Exact mode (default): use exact alert thresholds
// Watch is slightly below warning, warning is at threshold
return PatrolThresholds{
NodeCPUWatch: clampThreshold(nodeCPU - 5), // Watch slightly before threshold
NodeCPUWarning: nodeCPU, // Warning at exact threshold
NodeCPUWatch: clampThreshold(nodeCPU - 5), // Watch slightly before threshold
NodeCPUWarning: nodeCPU, // Warning at exact threshold
NodeMemWatch: clampThreshold(nodeMem - 5),
NodeMemWarning: nodeMem,
GuestMemWatch: clampThreshold(guestMem - 5),
@@ -256,8 +256,8 @@ type PatrolService struct {
intelligence *Intelligence
// Cached thresholds (recalculated when thresholdProvider changes)
thresholds PatrolThresholds
proactiveMode bool // When true, warn before thresholds; when false, use exact thresholds
thresholds PatrolThresholds
proactiveMode bool // When true, warn before thresholds; when false, use exact thresholds
// Runtime state
running bool
@@ -891,19 +891,19 @@ func (p *PatrolService) runPatrol(ctx context.Context) {
}
errorFinding := &Finding{
ID: generateFindingID("ai-service", "reliability", "ai-patrol-error"),
Key: "ai-patrol-error",
Severity: "warning",
Category: "reliability",
ResourceID: "ai-service",
ResourceName: "AI Patrol Service",
ResourceType: "service",
Title: title,
Description: description,
ID: generateFindingID("ai-service", "reliability", "ai-patrol-error"),
Key: "ai-patrol-error",
Severity: "warning",
Category: "reliability",
ResourceID: "ai-service",
ResourceName: "AI Patrol Service",
ResourceType: "service",
Title: title,
Description: description,
Recommendation: recommendation,
Evidence: fmt.Sprintf("Error: %s", errMsg),
DetectedAt: time.Now(),
LastSeenAt: time.Now(),
Evidence: fmt.Sprintf("Error: %s", errMsg),
DetectedAt: time.Now(),
LastSeenAt: time.Now(),
}
trackFinding(errorFinding)
} else if aiResult != nil {
@@ -1000,7 +1000,6 @@ func (p *PatrolService) runPatrol(ctx context.Context) {
Status: status,
}
// Add AI analysis details if available
if runStats.aiAnalysis != nil {
runRecord.AIAnalysis = runStats.aiAnalysis.Response
@@ -1487,7 +1486,6 @@ func (p *PatrolService) analyzeDockerHost(host models.DockerHost) []*Finding {
return findings
}
// analyzeStorage checks storage for issues
func (p *PatrolService) analyzeStorage(storage models.Storage) []*Finding {
var findings []*Finding
@@ -1647,7 +1645,6 @@ func (p *PatrolService) GetRunHistory(limit int) []PatrolRunRecord {
func (p *PatrolService) GetAllFindings() []*Finding {
findings := p.findings.GetActive(FindingSeverityWarning)
// Sort by severity (critical first) then by time
severityOrder := map[FindingSeverity]int{
FindingSeverityCritical: 0,
@@ -2179,16 +2176,16 @@ func cleanThinkingTokens(content string) string {
if content == "" {
return content
}
// Remove DeepSeek thinking markers and everything before them on the same line
// These appear as: <end▁of▁thinking> or <|end_of_thinking|>
thinkingMarkers := []string{
"<end▁of▁thinking>", // DeepSeek Unicode variant
"<|end_of_thinking|>", // ASCII variant
"<|end▁of▁thinking|>", // Mixed variant
"</think>", // Generic thinking block end
"<end▁of▁thinking>", // DeepSeek Unicode variant
"<|end_of_thinking|>", // ASCII variant
"<|end▁of▁thinking|>", // Mixed variant
"</think>", // Generic thinking block end
}
for _, marker := range thinkingMarkers {
for strings.Contains(content, marker) {
idx := strings.Index(content, marker)
@@ -2211,59 +2208,58 @@ func cleanThinkingTokens(content string) string {
}
}
}
// Also remove any lines that look like internal reasoning
// These typically start with patterns like "Now, " or "Let's " after a blank line
lines := strings.Split(content, "\n")
var cleanedLines []string
skipUntilContent := false
for i, line := range lines {
trimmed := strings.TrimSpace(line)
// Skip lines that look like internal reasoning
if skipUntilContent {
// Resume when we hit actual content (markdown headers, findings, etc.)
if strings.HasPrefix(trimmed, "#") ||
strings.HasPrefix(trimmed, "[FINDING]") ||
strings.HasPrefix(trimmed, "**") ||
strings.HasPrefix(trimmed, "-") ||
strings.HasPrefix(trimmed, "1.") {
if strings.HasPrefix(trimmed, "#") ||
strings.HasPrefix(trimmed, "[FINDING]") ||
strings.HasPrefix(trimmed, "**") ||
strings.HasPrefix(trimmed, "-") ||
strings.HasPrefix(trimmed, "1.") {
skipUntilContent = false
} else {
continue
}
}
// Detect reasoning patterns (typically after empty lines)
if trimmed == "" && i+1 < len(lines) {
nextTrimmed := strings.TrimSpace(lines[i+1])
if strings.HasPrefix(nextTrimmed, "Now, ") ||
strings.HasPrefix(nextTrimmed, "Let's ") ||
strings.HasPrefix(nextTrimmed, "Let me ") ||
strings.HasPrefix(nextTrimmed, "I should ") ||
strings.HasPrefix(nextTrimmed, "I'll ") ||
strings.HasPrefix(nextTrimmed, "I need to ") ||
strings.HasPrefix(nextTrimmed, "Checking ") ||
strings.HasPrefix(nextTrimmed, "Looking at ") {
strings.HasPrefix(nextTrimmed, "Let's ") ||
strings.HasPrefix(nextTrimmed, "Let me ") ||
strings.HasPrefix(nextTrimmed, "I should ") ||
strings.HasPrefix(nextTrimmed, "I'll ") ||
strings.HasPrefix(nextTrimmed, "I need to ") ||
strings.HasPrefix(nextTrimmed, "Checking ") ||
strings.HasPrefix(nextTrimmed, "Looking at ") {
skipUntilContent = true
continue
}
}
cleanedLines = append(cleanedLines, line)
}
// Clean up excessive blank lines
content = strings.Join(cleanedLines, "\n")
for strings.Contains(content, "\n\n\n") {
content = strings.ReplaceAll(content, "\n\n\n", "\n\n")
}
return strings.TrimSpace(content)
}
// runAIAnalysis uses the LLM to analyze infrastructure and identify issues
func (p *PatrolService) runAIAnalysis(ctx context.Context, state models.StateSnapshot) (*AIAnalysisResult, error) {
if p.aiService == nil {
@@ -2325,10 +2321,10 @@ func (p *PatrolService) runAIAnalysis(ctx context.Context, state models.StateSna
if finalContent == "" {
finalContent = contentBuffer.String()
}
// Clean any thinking tokens that might have leaked through from the provider
finalContent = cleanThinkingTokens(finalContent)
inputTokens = resp.InputTokens
outputTokens = resp.OutputTokens
@@ -2415,7 +2411,6 @@ BEFORE CREATING A FINDING, ASK YOURSELF:
If everything looks healthy, respond with NO findings. An empty report is the BEST report.`
if autoFix {
return basePrompt + `
@@ -2593,7 +2588,6 @@ func (p *PatrolService) buildInfrastructureSummary(state models.StateSnapshot) s
return sb.String()
}
// buildEnrichedContext creates context with historical trends and predictions
// Falls back to basic summary if metrics history is not available
func (p *PatrolService) buildEnrichedContext(state models.StateSnapshot) string {


@@ -10,9 +10,6 @@ import (
"github.com/rcourtman/pulse-go-rewrite/internal/models"
)
func TestDefaultPatrolThresholds(t *testing.T) {
thresholds := DefaultPatrolThresholds()
@@ -93,13 +90,13 @@ func TestClampThreshold(t *testing.T) {
input float64
expected float64
}{
{50, 50}, // Normal value passes through
{5, 10}, // Below minimum, clamped to 10
{-5, 10}, // Negative, clamped to 10
{100, 99}, // Above maximum, clamped to 99
{150, 99}, // Way above, clamped to 99
{10, 10}, // Exactly at minimum
{99, 99}, // Exactly at maximum
{50, 50}, // Normal value passes through
{5, 10}, // Below minimum, clamped to 10
{-5, 10}, // Negative, clamped to 10
{100, 99}, // Above maximum, clamped to 99
{150, 99}, // Way above, clamped to 99
{10, 10}, // Exactly at minimum
{99, 99}, // Exactly at maximum
}
for _, tt := range tests {
@@ -478,12 +475,12 @@ func TestPatrolService_GetCurrentStreamOutput(t *testing.T) {
func TestPatrolService_SetMemoryProviders(t *testing.T) {
ps := NewPatrolService(nil, nil)
// Test SetChangeDetector
// Test SetChangeDetector
changeDetector := &ChangeDetector{} // Would need proper initialization
ps.mu.Lock()
ps.changeDetector = changeDetector
ps.mu.Unlock()
if ps.GetChangeDetector() != changeDetector {
t.Error("Expected change detector to be set")
}
@@ -493,7 +490,7 @@ func TestPatrolService_SetMemoryProviders(t *testing.T) {
ps.mu.Lock()
ps.remediationLog = remLog
ps.mu.Unlock()
if ps.GetRemediationLog() != remLog {
t.Error("Expected remediation log to be set")
}
@@ -528,7 +525,7 @@ func TestPatrolRunRecord(t *testing.T) {
func TestPatrolStatus_Fields(t *testing.T) {
now := time.Now()
next := now.Add(15 * time.Minute)
status := PatrolStatus{
Running: true,
Enabled: true,
@@ -564,7 +561,7 @@ func TestFormatDurationPatrol(t *testing.T) {
{30 * time.Minute, "30m"},
{59 * time.Minute, "59m"},
{60 * time.Minute, "1h"},
{90 * time.Minute, "1h"}, // Less than 24h, shows hours
{90 * time.Minute, "1h"}, // Less than 24h, shows hours
{2 * time.Hour, "2h"},
{23 * time.Hour, "23h"},
{24 * time.Hour, "1d"},
@@ -608,7 +605,7 @@ func TestFormatBytesInt64(t *testing.T) {
input int64
expected string
}{
{-100, "0 B"}, // Negative values return "0 B"
{-100, "0 B"}, // Negative values return "0 B"
{0, "0 B"},
{1024, "1.0 KB"},
{1073741824, "1.0 GB"},


@@ -70,7 +70,7 @@ func TestDetector_GetPredictions(t *testing.T) {
}
predictions := d.GetPredictions()
// Should have a prediction for OOM
found := false
for _, p := range predictions {
@@ -83,7 +83,7 @@ func TestDetector_GetPredictions(t *testing.T) {
break
}
}
if !found {
t.Error("Expected OOM prediction for vm-100")
}
@@ -160,7 +160,7 @@ func TestDetector_FormatForContext(t *testing.T) {
if context == "" {
t.Error("Expected non-empty context")
}
if !contains(context, "OOM") && !contains(context, "oom") {
t.Errorf("Expected context to mention OOM: %s", context)
}
@@ -315,4 +315,3 @@ func TestIntToStr(t *testing.T) {
}
}
}


@@ -414,4 +414,3 @@ func TestAnthropicOAuthClient_ListModels_UsesConfiguredHost(t *testing.T) {
t.Fatalf("unexpected models: %+v", models)
}
}


@@ -51,7 +51,6 @@ func TestNewFromConfig_UnknownProvider(t *testing.T) {
}
}
func TestNewFromConfig_AnthropicWithAPIKey(t *testing.T) {
cfg := &config.AIConfig{
Enabled: true,


@@ -57,10 +57,10 @@ func (c *GeminiClient) Name() string {
// geminiRequest is the request body for the Gemini API
type geminiRequest struct {
Contents []geminiContent `json:"contents"`
SystemInstruction *geminiContent `json:"systemInstruction,omitempty"`
GenerationConfig *geminiGenerationConfig `json:"generationConfig,omitempty"`
Tools []geminiToolDef `json:"tools,omitempty"`
Contents []geminiContent `json:"contents"`
SystemInstruction *geminiContent `json:"systemInstruction,omitempty"`
GenerationConfig *geminiGenerationConfig `json:"generationConfig,omitempty"`
Tools []geminiToolDef `json:"tools,omitempty"`
}
type geminiContent struct {
@@ -69,8 +69,8 @@ type geminiContent struct {
}
type geminiPart struct {
Text string `json:"text,omitempty"`
FunctionCall *geminiFunctionCall `json:"functionCall,omitempty"`
Text string `json:"text,omitempty"`
FunctionCall *geminiFunctionCall `json:"functionCall,omitempty"`
FunctionResponse *geminiFunctionResponse `json:"functionResponse,omitempty"`
}
@@ -103,15 +103,15 @@ type geminiFunctionDeclaration struct {
// geminiResponse is the response from the Gemini API
type geminiResponse struct {
Candidates []geminiCandidate `json:"candidates"`
UsageMetadata *geminiUsageMetadata `json:"usageMetadata"`
Candidates []geminiCandidate `json:"candidates"`
UsageMetadata *geminiUsageMetadata `json:"usageMetadata"`
PromptFeedback *geminiPromptFeedback `json:"promptFeedback,omitempty"`
}
type geminiCandidate struct {
Content geminiContent `json:"content"`
FinishReason string `json:"finishReason"`
SafetyRatings []geminySafety `json:"safetyRatings,omitempty"`
Content geminiContent `json:"content"`
FinishReason string `json:"finishReason"`
SafetyRatings []geminySafety `json:"safetyRatings,omitempty"`
}
type geminySafety struct {
@@ -464,9 +464,9 @@ func (c *GeminiClient) ListModels(ctx context.Context) ([]ModelInfo, error) {
var result struct {
Models []struct {
Name string `json:"name"`
DisplayName string `json:"displayName"`
Description string `json:"description"`
Name string `json:"name"`
DisplayName string `json:"displayName"`
Description string `json:"description"`
SupportedGenerationMethods []string `json:"supportedGenerationMethods"`
} `json:"models"`
}


@@ -7,11 +7,11 @@ import (
// Message represents a chat message
type Message struct {
Role string `json:"role"` // "user", "assistant", "system"
Content string `json:"content"` // Text content (simple case)
ReasoningContent string `json:"reasoning_content,omitempty"` // DeepSeek thinking mode
ToolCalls []ToolCall `json:"tool_calls,omitempty"` // For assistant messages with tool calls
ToolResult *ToolResult `json:"tool_result,omitempty"` // For user messages with tool results
Role string `json:"role"` // "user", "assistant", "system"
Content string `json:"content"` // Text content (simple case)
ReasoningContent string `json:"reasoning_content,omitempty"` // DeepSeek thinking mode
ToolCalls []ToolCall `json:"tool_calls,omitempty"` // For assistant messages with tool calls
ToolResult *ToolResult `json:"tool_result,omitempty"` // For user messages with tool results
}
// ToolCall represents a tool invocation from the AI
@@ -30,11 +30,11 @@ type ToolResult struct {
// Tool represents an AI tool definition
type Tool struct {
Type string `json:"type,omitempty"` // "web_search_20250305" for web search, empty for regular tools
Type string `json:"type,omitempty"` // "web_search_20250305" for web search, empty for regular tools
Name string `json:"name"`
Description string `json:"description,omitempty"`
InputSchema map[string]interface{} `json:"input_schema,omitempty"`
MaxUses int `json:"max_uses,omitempty"` // For web search: limit searches per request
MaxUses int `json:"max_uses,omitempty"` // For web search: limit searches per request
}
// ChatRequest represents a request to the AI provider
@@ -49,13 +49,13 @@ type ChatRequest struct {
// ChatResponse represents a response from the AI provider
type ChatResponse struct {
Content string `json:"content"`
ReasoningContent string `json:"reasoning_content,omitempty"` // DeepSeek thinking mode
Model string `json:"model"`
StopReason string `json:"stop_reason,omitempty"` // "end_turn", "tool_use"
ToolCalls []ToolCall `json:"tool_calls,omitempty"` // Tool invocations
InputTokens int `json:"input_tokens,omitempty"`
OutputTokens int `json:"output_tokens,omitempty"`
Content string `json:"content"`
ReasoningContent string `json:"reasoning_content,omitempty"` // DeepSeek thinking mode
Model string `json:"model"`
StopReason string `json:"stop_reason,omitempty"` // "end_turn", "tool_use"
ToolCalls []ToolCall `json:"tool_calls,omitempty"` // Tool invocations
InputTokens int `json:"input_tokens,omitempty"`
OutputTokens int `json:"output_tokens,omitempty"`
}
// ModelInfo represents information about an available model


@@ -15,14 +15,14 @@ type ResourceProvider interface {
GetWorkloads() []resources.Resource
GetByType(t resources.ResourceType) []resources.Resource
GetStats() resources.StoreStats
// Cross-platform query methods
GetTopByCPU(limit int, types []resources.ResourceType) []resources.Resource
GetTopByMemory(limit int, types []resources.ResourceType) []resources.Resource
GetTopByDisk(limit int, types []resources.ResourceType) []resources.Resource
GetRelated(resourceID string) map[string][]resources.Resource
GetResourceSummary() resources.ResourceSummary
// AI Routing support
FindContainerHost(containerNameOrID string) string
}
@@ -243,7 +243,7 @@ func (s *Service) buildUnifiedResourceContext() string {
if summary.WithAlerts > 0 {
sections = append(sections, fmt.Sprintf("- Resources with alerts: %d", summary.WithAlerts))
}
// Show average resource usage by type
if len(summary.ByType) > 0 {
sections = append(sections, "- Average utilization by type:")


@@ -146,7 +146,7 @@ func (s *Service) routeToAgent(req ExecuteRequest, command string, agents []agen
s.mu.RLock()
rp := s.resourceProvider
s.mu.RUnlock()
if rp != nil {
// Try to find the host for this workload
resourceName := ""
@@ -157,7 +157,7 @@ func (s *Service) routeToAgent(req ExecuteRequest, command string, agents []agen
} else if name, ok := req.Context["guestName"].(string); ok && name != "" {
resourceName = name
}
if resourceName != "" {
if host := rp.FindContainerHost(resourceName); host != "" {
result.TargetNode = strings.ToLower(host)
@ -252,7 +252,7 @@ func (s *Service) routeToAgent(req ExecuteRequest, command string, agents []agen
return nil, &RoutingError{
TargetNode: result.TargetNode,
AvailableAgents: agentHostnames,
Reason: fmt.Sprintf("No agent connected to node %q", result.TargetNode),
Reason: fmt.Sprintf("No agent connected to node %q", result.TargetNode),
Suggestion: fmt.Sprintf("Install pulse-agent on %q, or ensure it's in a cluster with %s",
result.TargetNode, strings.Join(agentHostnames, ", ")),
}
@ -325,7 +325,7 @@ func (s *Service) findClusterPeerAgent(targetNode string, agents []agentexec.Con
if s.persistence == nil {
return ""
}
// Load nodes config to check cluster membership
nodesConfig, err := s.persistence.LoadNodesConfig()
if err != nil || nodesConfig == nil {

File diff suppressed because it is too large


@ -17,7 +17,7 @@ func TestService_Remediation(t *testing.T) {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
persistence := config.NewConfigPersistence(tmpDir)
svc := NewService(persistence, nil)
patrol := NewPatrolService(svc, nil)
@ -62,7 +62,7 @@ func TestService_Remediation(t *testing.T) {
func TestService_Remediation_NoPatrolService(t *testing.T) {
svc := NewService(nil, nil)
// patrolService is nil, logRemediation should handle gracefully
req := ExecuteRequest{
TargetID: "vm-102",
TargetType: "vm",
@ -78,13 +78,13 @@ func TestService_Remediation_NoRemediationLog(t *testing.T) {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
persistence := config.NewConfigPersistence(tmpDir)
svc := NewService(persistence, nil)
patrol := NewPatrolService(svc, nil)
svc.patrolService = patrol
// remediationLog is nil on patrol
req := ExecuteRequest{
TargetID: "vm-103",
TargetType: "vm",
@ -96,7 +96,7 @@ func TestService_Remediation_NoRemediationLog(t *testing.T) {
func TestService_BuildRemediationContext_Empty(t *testing.T) {
svc := NewService(nil, nil)
// With no remediationLog set, should return empty
ctx := svc.buildRemediationContext("unknown", "Unknown problem")
if ctx != "" {
@ -122,12 +122,12 @@ func TestTruncateString(t *testing.T) {
{"exactly at max minus ellipsis", "1234", 4, "1234"},
{"just over", "12345", 4, "1..."},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := truncateString(tt.input, tt.max)
if result != tt.expected {
t.Errorf("truncateString(%q, %d) = %q, expected %q",
t.Errorf("truncateString(%q, %d) = %q, expected %q",
tt.input, tt.max, result, tt.expected)
}
})
@ -145,14 +145,11 @@ func TestContainsString(t *testing.T) {
{"", "test", false},
{"test", "", true},
}
for _, tt := range tests {
if containsString(tt.haystack, tt.needle) != tt.expected {
t.Errorf("containsString(%q, %q) expected %v",
t.Errorf("containsString(%q, %q) expected %v",
tt.haystack, tt.needle, tt.expected)
}
}
}
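The helper semantics pinned down by the `TestTruncateString` and `TestContainsString` tables above can be sketched as follows. These are hypothetical implementations reconstructed from the expected values in the test cases (e.g. `"12345"` at max 4 yielding `"1..."`, and an empty needle matching anything), not the actual source:

```go
package main

import (
	"fmt"
	"strings"
)

// truncateString shortens s to at most max characters, replacing the
// tail with "..." when truncation happens. Strings at or under max are
// returned unchanged, matching the ("1234", 4) -> "1234" case.
func truncateString(s string, max int) string {
	if len(s) <= max {
		return s
	}
	if max <= 3 {
		return s[:max] // no room for an ellipsis
	}
	return s[:max-3] + "..."
}

// containsString reports whether needle occurs in haystack; an empty
// needle matches anything, matching the ("test", "") -> true case.
func containsString(haystack, needle string) bool {
	return strings.Contains(haystack, needle)
}

func main() {
	fmt.Println(truncateString("12345", 4)) // 1...
	fmt.Println(containsString("test", "")) // true
}
```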


@ -9,8 +9,8 @@ import (
"github.com/rcourtman/pulse-go-rewrite/internal/ai/providers"
"github.com/rcourtman/pulse-go-rewrite/internal/config"
"github.com/rcourtman/pulse-go-rewrite/internal/resources"
"github.com/rcourtman/pulse-go-rewrite/internal/models"
"github.com/rcourtman/pulse-go-rewrite/internal/resources"
)
func TestNewService(t *testing.T) {
@ -288,11 +288,11 @@ func TestService_LookupNodeForVMID(t *testing.T) {
func TestExtractVMIDFromCommand(t *testing.T) {
tests := []struct {
name string
command string
expectedVMID int
expectedOwner bool
expectedFound bool
name string
command string
expectedVMID int
expectedOwner bool
expectedFound bool
}{
{
name: "pct exec",
@ -465,7 +465,7 @@ func TestService_Execute(t *testing.T) {
persistence := config.NewConfigPersistence(tmpDir)
svc := NewService(persistence, nil)
// Set enabled config
svc.cfg = &config.AIConfig{
Enabled: true,
@ -503,7 +503,7 @@ func TestService_Execute_Error(t *testing.T) {
persistence := config.NewConfigPersistence(tmpDir)
svc := NewService(persistence, nil)
svc.cfg = &config.AIConfig{Enabled: true}
mockP := &mockProvider{
chatFunc: func(ctx context.Context, req providers.ChatRequest) (*providers.ChatResponse, error) {
return nil, errors.New("API error")
@ -526,7 +526,7 @@ func TestService_ExecuteStream(t *testing.T) {
persistence := config.NewConfigPersistence(tmpDir)
svc := NewService(persistence, nil)
svc.cfg = &config.AIConfig{Enabled: true}
mockP := &mockProvider{
chatFunc: func(ctx context.Context, req providers.ChatRequest) (*providers.ChatResponse, error) {
return &providers.ChatResponse{
@ -749,7 +749,7 @@ func TestService_SetMetricsHistoryProvider(t *testing.T) {
func TestService_LicenseGating(t *testing.T) {
svc := NewService(nil, nil)
// Default should be true when no checker is set (dev mode)
if !svc.HasLicenseFeature("test") {
t.Error("Expected true for no license checker (dev mode)")
@ -757,11 +757,11 @@ func TestService_LicenseGating(t *testing.T) {
mockLC := &mockLicenseChecker{hasFeature: true}
svc.SetLicenseChecker(mockLC)
if !svc.HasLicenseFeature("test") {
t.Error("Expected true with mock license checker")
}
tier, ok := svc.GetLicenseState()
if tier != "active" || !ok {
t.Errorf("Expected active tier from mock, got %s, %v", tier, ok)
@ -774,7 +774,7 @@ func TestService_IsAutonomous(t *testing.T) {
if !svc.IsAutonomous() {
t.Error("Expected true")
}
svc.cfg.AutonomousMode = false
if svc.IsAutonomous() {
t.Error("Expected false")


@ -104,7 +104,7 @@ func TestIsBlockedFetchIP(t *testing.T) {
t.Errorf("isBlockedFetchIP(%s) = %v, want %v", tt.ip, got, tt.blocked)
}
}
if !isBlockedFetchIP(nil) {
t.Error("nil IP should be blocked")
}


@ -18,12 +18,12 @@ func TestRouteToAgent_TargetHostExplicit(t *testing.T) {
}
tests := []struct {
name string
req ExecuteRequest
command string
wantAgentID string
wantHostname string
wantMethod string
name string
req ExecuteRequest
command string
wantAgentID string
wantHostname string
wantMethod string
}{
{
name: "explicit node in context routes correctly",


@ -723,7 +723,6 @@ func TestCheckSnapshotsRespectsOverrides(t *testing.T) {
}
}
func TestCheckSnapshotsForInstanceTriggersOnSnapshotSize(t *testing.T) {
m := newTestManager(t)
m.ClearActiveAlerts()
@ -1043,7 +1042,6 @@ func TestCheckBackupsRespectsOverrides(t *testing.T) {
}
}
func TestCheckBackupsHandlesPbsOnlyGuests(t *testing.T) {
m := newTestManager(t)
m.ClearActiveAlerts()


@ -98,9 +98,9 @@ func (hm *HistoryManager) AddAlert(alert Alert) {
hm.history = append(hm.history, entry)
callbacks := hm.callbacks
hm.mu.Unlock()
log.Debug().Str("alertID", alert.ID).Msg("Added alert to history")
// Call callbacks outside the lock
for _, cb := range callbacks {
cb(alert)


@ -172,11 +172,11 @@ type AISettingsResponse struct {
AuthMethod string `json:"auth_method"` // "api_key" or "oauth"
OAuthConnected bool `json:"oauth_connected"` // true if OAuth tokens are configured
// Patrol settings for token efficiency
PatrolSchedulePreset string `json:"patrol_schedule_preset"` // DEPRECATED: legacy preset
PatrolIntervalMinutes int `json:"patrol_interval_minutes"` // Patrol interval in minutes (0 = disabled)
PatrolAutoFix bool `json:"patrol_auto_fix"` // true if patrol can auto-fix issues
AlertTriggeredAnalysis bool `json:"alert_triggered_analysis"` // true if AI analyzes when alerts fire
UseProactiveThresholds bool `json:"use_proactive_thresholds"` // true if patrol warns before thresholds (false = use exact thresholds)
PatrolSchedulePreset string `json:"patrol_schedule_preset"` // DEPRECATED: legacy preset
PatrolIntervalMinutes int `json:"patrol_interval_minutes"` // Patrol interval in minutes (0 = disabled)
PatrolAutoFix bool `json:"patrol_auto_fix"` // true if patrol can auto-fix issues
AlertTriggeredAnalysis bool `json:"alert_triggered_analysis"` // true if AI analyzes when alerts fire
UseProactiveThresholds bool `json:"use_proactive_thresholds"` // true if patrol warns before thresholds (false = use exact thresholds)
AvailableModels []config.ModelInfo `json:"available_models"` // List of models for current provider
// Multi-provider credentials - shows which providers are configured
AnthropicConfigured bool `json:"anthropic_configured"` // true if Anthropic API key or OAuth is set
@ -207,11 +207,11 @@ type AISettingsUpdateRequest struct {
CustomContext *string `json:"custom_context,omitempty"` // user-provided infrastructure context
AuthMethod *string `json:"auth_method,omitempty"` // "api_key" or "oauth"
// Patrol settings for token efficiency
PatrolSchedulePreset *string `json:"patrol_schedule_preset,omitempty"` // DEPRECATED: use patrol_interval_minutes
PatrolIntervalMinutes *int `json:"patrol_interval_minutes,omitempty"` // Custom interval in minutes (0 = disabled, minimum 10)
PatrolAutoFix *bool `json:"patrol_auto_fix,omitempty"` // true if patrol can auto-fix issues
AlertTriggeredAnalysis *bool `json:"alert_triggered_analysis,omitempty"` // true if AI analyzes when alerts fire
UseProactiveThresholds *bool `json:"use_proactive_thresholds,omitempty"` // true if patrol warns before thresholds (default: false = exact thresholds)
PatrolSchedulePreset *string `json:"patrol_schedule_preset,omitempty"` // DEPRECATED: use patrol_interval_minutes
PatrolIntervalMinutes *int `json:"patrol_interval_minutes,omitempty"` // Custom interval in minutes (0 = disabled, minimum 10)
PatrolAutoFix *bool `json:"patrol_auto_fix,omitempty"` // true if patrol can auto-fix issues
AlertTriggeredAnalysis *bool `json:"alert_triggered_analysis,omitempty"` // true if AI analyzes when alerts fire
UseProactiveThresholds *bool `json:"use_proactive_thresholds,omitempty"` // true if patrol warns before thresholds (default: false = exact thresholds)
// Multi-provider credentials
AnthropicAPIKey *string `json:"anthropic_api_key,omitempty"` // Set Anthropic API key
OpenAIAPIKey *string `json:"openai_api_key,omitempty"` // Set OpenAI API key
@ -273,18 +273,18 @@ func (h *AISettingsHandler) HandleGetAISettings(w http.ResponseWriter, r *http.R
AuthMethod: authMethod,
OAuthConnected: settings.OAuthAccessToken != "",
// Patrol settings
PatrolSchedulePreset: settings.PatrolSchedulePreset,
PatrolIntervalMinutes: settings.PatrolIntervalMinutes,
PatrolAutoFix: settings.PatrolAutoFix,
AlertTriggeredAnalysis: settings.AlertTriggeredAnalysis,
UseProactiveThresholds: settings.UseProactiveThresholds,
AvailableModels: nil, // Now populated via /api/ai/models endpoint
PatrolSchedulePreset: settings.PatrolSchedulePreset,
PatrolIntervalMinutes: settings.PatrolIntervalMinutes,
PatrolAutoFix: settings.PatrolAutoFix,
AlertTriggeredAnalysis: settings.AlertTriggeredAnalysis,
UseProactiveThresholds: settings.UseProactiveThresholds,
AvailableModels: nil, // Now populated via /api/ai/models endpoint
// Multi-provider configuration
AnthropicConfigured: settings.HasProvider(config.AIProviderAnthropic),
OpenAIConfigured: settings.HasProvider(config.AIProviderOpenAI),
DeepSeekConfigured: settings.HasProvider(config.AIProviderDeepSeek),
GeminiConfigured: settings.HasProvider(config.AIProviderGemini),
OllamaConfigured: settings.HasProvider(config.AIProviderOllama),
AnthropicConfigured: settings.HasProvider(config.AIProviderAnthropic),
OpenAIConfigured: settings.HasProvider(config.AIProviderOpenAI),
DeepSeekConfigured: settings.HasProvider(config.AIProviderDeepSeek),
GeminiConfigured: settings.HasProvider(config.AIProviderGemini),
OllamaConfigured: settings.HasProvider(config.AIProviderOllama),
OllamaBaseURL: settings.GetBaseURLForProvider(config.AIProviderOllama),
OpenAIBaseURL: settings.OpenAIBaseURL,
ConfiguredProviders: settings.GetConfiguredProviders(),
@ -573,18 +573,18 @@ func (h *AISettingsHandler) HandleUpdateAISettings(w http.ResponseWriter, r *htt
CustomContext: settings.CustomContext,
AuthMethod: authMethod,
OAuthConnected: settings.OAuthAccessToken != "",
PatrolSchedulePreset: settings.PatrolSchedulePreset,
PatrolIntervalMinutes: settings.PatrolIntervalMinutes,
PatrolAutoFix: settings.PatrolAutoFix,
AlertTriggeredAnalysis: settings.AlertTriggeredAnalysis,
UseProactiveThresholds: settings.UseProactiveThresholds,
AvailableModels: nil, // Now populated via /api/ai/models endpoint
PatrolSchedulePreset: settings.PatrolSchedulePreset,
PatrolIntervalMinutes: settings.PatrolIntervalMinutes,
PatrolAutoFix: settings.PatrolAutoFix,
AlertTriggeredAnalysis: settings.AlertTriggeredAnalysis,
UseProactiveThresholds: settings.UseProactiveThresholds,
AvailableModels: nil, // Now populated via /api/ai/models endpoint
// Multi-provider configuration
AnthropicConfigured: settings.HasProvider(config.AIProviderAnthropic),
OpenAIConfigured: settings.HasProvider(config.AIProviderOpenAI),
DeepSeekConfigured: settings.HasProvider(config.AIProviderDeepSeek),
GeminiConfigured: settings.HasProvider(config.AIProviderGemini),
OllamaConfigured: settings.HasProvider(config.AIProviderOllama),
AnthropicConfigured: settings.HasProvider(config.AIProviderAnthropic),
OpenAIConfigured: settings.HasProvider(config.AIProviderOpenAI),
DeepSeekConfigured: settings.HasProvider(config.AIProviderDeepSeek),
GeminiConfigured: settings.HasProvider(config.AIProviderGemini),
OllamaConfigured: settings.HasProvider(config.AIProviderOllama),
OllamaBaseURL: settings.GetBaseURLForProvider(config.AIProviderOllama),
OpenAIBaseURL: settings.OpenAIBaseURL,
ConfiguredProviders: settings.GetConfiguredProviders(),


@ -1081,4 +1081,3 @@ func TestAISettingsHandler_SetCorrelationDetector(t *testing.T) {
// Set nil correlation detector should not panic
handler.SetCorrelationDetector(nil)
}


@ -658,11 +658,11 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
// Get all baselines and check current metrics
allBaselines := baselineStore.GetAllBaselines()
// Group by resource ID
resourceMetrics := make(map[string]map[string]float64)
resourceInfo := make(map[string]struct{ name, rtype string })
for _, baseline := range allBaselines {
if resourceID != "" && baseline.ResourceID != resourceID {
continue
@ -674,18 +674,18 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
// Get current state to extract live metrics
state := stateProvider.GetState()
// Check VMs
for _, vm := range state.VMs {
if vm.Template {
continue // Skip templates
}
// Skip VMs that aren't running - stopped VMs with 0% usage is expected, not an anomaly
if vm.Status != "running" {
continue
}
// Skip if we don't have baselines for this resource
if _, ok := resourceMetrics[vm.ID]; !ok {
if resourceID == "" {
@ -696,15 +696,14 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
}
}
metrics := map[string]float64{
"cpu": vm.CPU * 100, // CPU is already 0-1, convert to percentage
"cpu": vm.CPU * 100, // CPU is already 0-1, convert to percentage
"memory": vm.Memory.Usage, // Memory.Usage is already in percentage
}
if vm.Disk.Usage > 0 {
metrics["disk"] = vm.Disk.Usage
}
anomalies := baselineStore.CheckResourceAnomalies(vm.ID, metrics)
for _, anomaly := range anomalies {
result = append(result, map[string]interface{}{
@ -720,22 +719,22 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
"description": anomaly.Description,
})
}
// Store info for any additional processing
resourceInfo[vm.ID] = struct{ name, rtype string }{vm.Name, "vm"}
}
// Check Containers
for _, ct := range state.Containers {
if ct.Template {
continue // Skip templates
}
// Skip containers that aren't running - stopped containers with 0% usage is expected, not an anomaly
if ct.Status != "running" {
continue
}
// Skip if we don't have baselines for this resource
if _, ok := resourceMetrics[ct.ID]; !ok {
if resourceID == "" {
@ -746,15 +745,14 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
}
}
metrics := map[string]float64{
"cpu": ct.CPU * 100, // CPU is already 0-1, convert to percentage
"cpu": ct.CPU * 100, // CPU is already 0-1, convert to percentage
"memory": ct.Memory.Usage, // Memory.Usage is already in percentage
}
if ct.Disk.Usage > 0 {
metrics["disk"] = ct.Disk.Usage
}
anomalies := baselineStore.CheckResourceAnomalies(ct.ID, metrics)
for _, anomaly := range anomalies {
result = append(result, map[string]interface{}{
@ -770,15 +768,15 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
"description": anomaly.Description,
})
}
// Store info for any additional processing
resourceInfo[ct.ID] = struct{ name, rtype string }{ct.Name, "container"}
}
// Check nodes
for _, node := range state.Nodes {
nodeID := node.ID
// Skip if we don't have baselines for this resource
if _, ok := resourceMetrics[nodeID]; !ok {
if resourceID == "" {
@ -788,12 +786,12 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
continue
}
}
metrics := map[string]float64{
"cpu": node.CPU * 100, // CPU is already 0-1, convert to percentage
"cpu": node.CPU * 100, // CPU is already 0-1, convert to percentage
"memory": node.Memory.Usage, // Memory.Usage is already in percentage
}
anomalies := baselineStore.CheckResourceAnomalies(nodeID, metrics)
for _, anomaly := range anomalies {
result = append(result, map[string]interface{}{
@ -812,7 +810,7 @@ func (h *AISettingsHandler) HandleGetAnomalies(w http.ResponseWriter, r *http.Re
}
count := len(result)
// Count by severity for summary
severityCounts := map[string]int{
"critical": 0,
@ -890,12 +888,12 @@ func (h *AISettingsHandler) HandleGetLearningStatus(w http.ResponseWriter, r *ht
// Get all baselines and count metrics
baselines := baselineStore.GetAllBaselines()
resourceCount := baselineStore.ResourceCount()
// Count unique resources and total metrics
resourceIDs := make(map[string]bool)
totalMetrics := 0
metricCounts := make(map[string]int) // cpu, memory, disk counts
for _, baseline := range baselines {
resourceIDs[baseline.ResourceID] = true
totalMetrics++
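The per-resource anomaly check in the hunks above follows the same pattern for VMs, containers, and nodes: normalize CPU from a 0-1 fraction to a percentage, pass memory usage through unchanged, and include disk only when it is actually reported. A minimal sketch of that metrics-map construction (the `resource` struct here is a stand-in for the VM/container/node types, not the actual models):

```go
package main

import "fmt"

// resource is a stand-in for the VM/container/node types in the diff:
// CPU as a 0-1 fraction, memory and disk usage already as percentages.
type resource struct {
	CPU         float64
	MemoryUsage float64
	DiskUsage   float64
}

// metricsFor builds the map handed to CheckResourceAnomalies,
// converting CPU to a percentage and omitting unreported disk usage.
func metricsFor(r resource) map[string]float64 {
	m := map[string]float64{
		"cpu":    r.CPU * 100, // CPU is 0-1, convert to percentage
		"memory": r.MemoryUsage,
	}
	if r.DiskUsage > 0 {
		m["disk"] = r.DiskUsage
	}
	return m
}

func main() {
	fmt.Println(metricsFor(resource{CPU: 0.5, MemoryUsage: 63.5}))
}
```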


@ -161,9 +161,9 @@ func TestMetricPointStructure(t *testing.T) {
t.Parallel()
tests := []struct {
name string
point MetricPoint
wantJSON string
name string
point MetricPoint
wantJSON string
}{
{
name: "positive values",
@ -201,8 +201,8 @@ func TestTimeRangeConversion(t *testing.T) {
// Test the time range conversion logic used in handleCharts
tests := []struct {
rangeStr string
expectedDur time.Duration
rangeStr string
expectedDur time.Duration
}{
{"5m", 5 * time.Minute},
{"15m", 15 * time.Minute},
@ -344,10 +344,10 @@ func TestDockerContainerDiskPercentCalculation(t *testing.T) {
// Test disk percentage calculation for Docker containers
// Mirrors: float64(container.WritableLayerBytes) / float64(container.RootFilesystemBytes) * 100
tests := []struct {
name string
writableLayerBytes uint64
rootFilesystemBytes uint64
expectedDiskPercent float64
name string
writableLayerBytes uint64
rootFilesystemBytes uint64
expectedDiskPercent float64
}{
{"50% usage", 500, 1000, 50.0},
{"100% usage", 1000, 1000, 100.0},
@ -379,12 +379,12 @@ func TestChartStatsOldestTimestamp(t *testing.T) {
t.Parallel()
now := time.Now().Unix() * 1000
oneHourAgo := now - 3600000 // 1 hour in ms
oneHourAgo := now - 3600000 // 1 hour in ms
fourHoursAgo := now - 14400000 // 4 hours in ms
// Simulate finding oldest timestamp
timestamps := []int64{now, oneHourAgo, fourHoursAgo, now - 1800000}
oldestTimestamp := now
for _, ts := range timestamps {
if ts < oldestTimestamp {
@ -398,7 +398,7 @@ func TestChartStatsOldestTimestamp(t *testing.T) {
stats := ChartStats{OldestDataTimestamp: oldestTimestamp}
if stats.OldestDataTimestamp != fourHoursAgo {
t.Errorf("ChartStats.OldestDataTimestamp: got %d, want %d",
t.Errorf("ChartStats.OldestDataTimestamp: got %d, want %d",
stats.OldestDataTimestamp, fourHoursAgo)
}
}
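The two calculations these chart tests mirror — the Docker writable-layer disk percentage and the oldest-timestamp scan — reduce to a few lines. A sketch using the formula quoted in the test comment (`WritableLayerBytes / RootFilesystemBytes * 100`); function names and the zero-denominator guard are illustrative, not the actual source:

```go
package main

import "fmt"

// diskPercent mirrors the calculation quoted in the test: writable
// layer bytes over root filesystem bytes, expressed as a percentage.
func diskPercent(writableLayerBytes, rootFilesystemBytes uint64) float64 {
	if rootFilesystemBytes == 0 {
		return 0 // guard against hosts that report no root FS size
	}
	return float64(writableLayerBytes) / float64(rootFilesystemBytes) * 100
}

// oldestTimestamp returns the smallest (oldest) of the given
// millisecond timestamps, defaulting to now when none are provided.
func oldestTimestamp(now int64, timestamps []int64) int64 {
	oldest := now
	for _, ts := range timestamps {
		if ts < oldest {
			oldest = ts
		}
	}
	return oldest
}

func main() {
	fmt.Println(diskPercent(500, 1000))                     // 50
	fmt.Println(oldestTimestamp(100, []int64{100, 40, 70})) // 40
}
```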


@ -383,4 +383,3 @@ func (h *HostAgentHandlers) HandleUnlink(w http.ResponseWriter, r *http.Request)
log.Error().Err(err).Msg("Failed to serialize host unlink response")
}
}


@ -40,15 +40,15 @@ func TestMakeOIDCResponse_EnabledWithSecret(t *testing.T) {
t.Parallel()
cfg := &config.OIDCConfig{
Enabled: true,
IssuerURL: "https://auth.example.com",
ClientID: "pulse-client",
ClientSecret: "super-secret-value",
RedirectURL: "https://pulse.example.com/auth/callback",
Scopes: []string{"openid", "profile", "email"},
Enabled: true,
IssuerURL: "https://auth.example.com",
ClientID: "pulse-client",
ClientSecret: "super-secret-value",
RedirectURL: "https://pulse.example.com/auth/callback",
Scopes: []string{"openid", "profile", "email"},
UsernameClaim: "preferred_username",
EmailClaim: "email",
GroupsClaim: "groups",
EmailClaim: "email",
GroupsClaim: "groups",
}
resp := makeOIDCResponse(cfg, "https://pulse.example.com")


@ -56,4 +56,3 @@ func TestHandleDownloadUnifiedAgentSetsChecksumAndInvalidatesOnChange(t *testing
t.Fatalf("unexpected response body after update")
}
}


@ -23,17 +23,17 @@ func NewUpdateDetectionHandlers(monitor *monitoring.Monitor) *UpdateDetectionHan
// ContainerUpdateInfo represents a container with an available update
type ContainerUpdateInfo struct {
HostID string `json:"hostId"`
HostName string `json:"hostName"`
ContainerID string `json:"containerId"`
ContainerName string `json:"containerName"`
Image string `json:"image"`
CurrentDigest string `json:"currentDigest,omitempty"`
LatestDigest string `json:"latestDigest,omitempty"`
UpdateAvailable bool `json:"updateAvailable"`
LastChecked int64 `json:"lastChecked,omitempty"`
Error string `json:"error,omitempty"`
ResourceType string `json:"resourceType"`
HostID string `json:"hostId"`
HostName string `json:"hostName"`
ContainerID string `json:"containerId"`
ContainerName string `json:"containerName"`
Image string `json:"image"`
CurrentDigest string `json:"currentDigest,omitempty"`
LatestDigest string `json:"latestDigest,omitempty"`
UpdateAvailable bool `json:"updateAvailable"`
LastChecked int64 `json:"lastChecked,omitempty"`
Error string `json:"error,omitempty"`
ResourceType string `json:"resourceType"`
}
// HandleGetInfraUpdates returns all tracked infrastructure updates with optional filtering.
@ -238,7 +238,6 @@ func (h *UpdateDetectionHandlers) HandleTriggerInfraUpdateCheck(w http.ResponseW
writeErrorResponse(w, http.StatusBadRequest, "missing_params", "Either hostId or resourceId is required", nil)
}
// HandleGetInfraUpdatesForHost returns all updates for a specific host.
// GET /api/infra-updates/host/{hostId}
func (h *UpdateDetectionHandlers) HandleGetInfraUpdatesForHost(w http.ResponseWriter, r *http.Request, hostID string) {


@ -11,7 +11,6 @@ import (
"github.com/rcourtman/pulse-go-rewrite/internal/monitoring"
)
func TestHandleGetInfraUpdates(t *testing.T) {
// We can't easily create a real Monitor, so we'll test the core logic
t.Run("collectDockerUpdates filters correctly", func(t *testing.T) {


@ -40,16 +40,16 @@ type AIConfig struct {
OAuthExpiresAt time.Time `json:"oauth_expires_at,omitempty"` // Token expiration time
// Patrol settings for background AI monitoring
PatrolEnabled bool `json:"patrol_enabled"` // Enable background AI health patrol
PatrolIntervalMinutes int `json:"patrol_interval_minutes,omitempty"` // How often to run quick patrols (default: 360 = 6 hours)
PatrolSchedulePreset string `json:"patrol_schedule_preset,omitempty"` // User-friendly preset: "15min", "1hr", "6hr", "12hr", "daily", "disabled"
PatrolAnalyzeNodes bool `json:"patrol_analyze_nodes"` // Include Proxmox nodes in patrol
PatrolAnalyzeGuests bool `json:"patrol_analyze_guests"` // Include VMs/containers in patrol
PatrolAnalyzeDocker bool `json:"patrol_analyze_docker"` // Include Docker hosts in patrol
PatrolAnalyzeStorage bool `json:"patrol_analyze_storage"` // Include storage in patrol
PatrolAutoFix bool `json:"patrol_auto_fix,omitempty"` // When true, patrol can attempt automatic remediation (default: false, observe only)
UseProactiveThresholds bool `json:"use_proactive_thresholds,omitempty"` // When true, patrol warns 5-15% BEFORE alert thresholds (default: false, use exact thresholds)
AutoFixModel string `json:"auto_fix_model,omitempty"` // Model for automatic remediation (defaults to PatrolModel, may want more capable model)
PatrolEnabled bool `json:"patrol_enabled"` // Enable background AI health patrol
PatrolIntervalMinutes int `json:"patrol_interval_minutes,omitempty"` // How often to run quick patrols (default: 360 = 6 hours)
PatrolSchedulePreset string `json:"patrol_schedule_preset,omitempty"` // User-friendly preset: "15min", "1hr", "6hr", "12hr", "daily", "disabled"
PatrolAnalyzeNodes bool `json:"patrol_analyze_nodes"` // Include Proxmox nodes in patrol
PatrolAnalyzeGuests bool `json:"patrol_analyze_guests"` // Include VMs/containers in patrol
PatrolAnalyzeDocker bool `json:"patrol_analyze_docker"` // Include Docker hosts in patrol
PatrolAnalyzeStorage bool `json:"patrol_analyze_storage"` // Include storage in patrol
PatrolAutoFix bool `json:"patrol_auto_fix,omitempty"` // When true, patrol can attempt automatic remediation (default: false, observe only)
UseProactiveThresholds bool `json:"use_proactive_thresholds,omitempty"` // When true, patrol warns 5-15% BEFORE alert thresholds (default: false, use exact thresholds)
AutoFixModel string `json:"auto_fix_model,omitempty"` // Model for automatic remediation (defaults to PatrolModel, may want more capable model)
// Alert-triggered AI analysis - analyze specific resources when alerts fire
AlertTriggeredAnalysis bool `json:"alert_triggered_analysis"` // Enable AI analysis when alerts fire (token-efficient)
@ -77,7 +77,7 @@ const (
DefaultAIModelAnthropic = "claude-opus-4-5-20251101"
DefaultAIModelOpenAI = "gpt-4o"
DefaultAIModelOllama = "llama3"
DefaultAIModelDeepSeek = "deepseek-chat" // V3.2 with tool-use support
DefaultAIModelDeepSeek = "deepseek-chat" // V3.2 with tool-use support
DefaultAIModelGemini = "gemini-2.5-flash" // Latest stable Gemini model
DefaultOllamaBaseURL = "http://localhost:11434"
DefaultDeepSeekBaseURL = "https://api.deepseek.com/chat/completions"


@ -71,14 +71,14 @@ func IsPasswordHashed(password string) bool {
// NOTE: The envconfig tags are legacy and not used - configuration is loaded from encrypted JSON files
type Config struct {
// Server settings
BackendHost string `envconfig:"BACKEND_HOST" default:""`
BackendPort int `envconfig:"BACKEND_PORT" default:"3000"`
FrontendHost string `envconfig:"FRONTEND_HOST" default:""`
FrontendPort int `envconfig:"FRONTEND_PORT" default:"7655"`
ConfigPath string `envconfig:"CONFIG_PATH"`
DataPath string `envconfig:"DATA_DIR"`
AppRoot string `json:"-"` // Root directory of the application (where binary lives)
PublicURL string `envconfig:"PULSE_PUBLIC_URL" default:""` // Full URL to access Pulse (e.g., http://192.168.1.100:7655)
BackendHost string `envconfig:"BACKEND_HOST" default:""`
BackendPort int `envconfig:"BACKEND_PORT" default:"3000"`
FrontendHost string `envconfig:"FRONTEND_HOST" default:""`
FrontendPort int `envconfig:"FRONTEND_PORT" default:"7655"`
ConfigPath string `envconfig:"CONFIG_PATH"`
DataPath string `envconfig:"DATA_DIR"`
AppRoot string `json:"-"` // Root directory of the application (where binary lives)
PublicURL string `envconfig:"PULSE_PUBLIC_URL" default:""` // Full URL to access Pulse (e.g., http://192.168.1.100:7655)
AgentConnectURL string `envconfig:"PULSE_AGENT_CONNECT_URL" default:""` // Dedicated direct connect URL for agents (e.g. http://192.168.1.5:7655)
// Proxmox VE connections
@ -129,17 +129,17 @@ type Config struct {
LogCompress bool `envconfig:"LOG_COMPRESS" default:"true"`
// Security settings
APIToken string `envconfig:"API_TOKEN"`
APITokenEnabled bool `envconfig:"API_TOKEN_ENABLED" default:"false"`
APITokens []APITokenRecord `json:"-"`
SuppressedEnvMigrations []string `json:"-"` // Hashes of env tokens deleted by user (prevent re-migration)
AuthUser string `envconfig:"PULSE_AUTH_USER"`
AuthPass string `envconfig:"PULSE_AUTH_PASS"`
DisableAuthEnvDetected bool `json:"-"`
DemoMode bool `envconfig:"DEMO_MODE" default:"false"` // Read-only demo mode
AllowedOrigins string `envconfig:"ALLOWED_ORIGINS" default:"*"`
IframeEmbeddingAllow string `envconfig:"IFRAME_EMBEDDING_ALLOW" default:"SAMEORIGIN"`
HideLocalLogin bool `envconfig:"PULSE_AUTH_HIDE_LOCAL_LOGIN" default:"false"`
APIToken string `envconfig:"API_TOKEN"`
APITokenEnabled bool `envconfig:"API_TOKEN_ENABLED" default:"false"`
APITokens []APITokenRecord `json:"-"`
SuppressedEnvMigrations []string `json:"-"` // Hashes of env tokens deleted by user (prevent re-migration)
AuthUser string `envconfig:"PULSE_AUTH_USER"`
AuthPass string `envconfig:"PULSE_AUTH_PASS"`
DisableAuthEnvDetected bool `json:"-"`
DemoMode bool `envconfig:"DEMO_MODE" default:"false"` // Read-only demo mode
AllowedOrigins string `envconfig:"ALLOWED_ORIGINS" default:"*"`
IframeEmbeddingAllow string `envconfig:"IFRAME_EMBEDDING_ALLOW" default:"SAMEORIGIN"`
HideLocalLogin bool `envconfig:"PULSE_AUTH_HIDE_LOCAL_LOGIN" default:"false"`
// Proxy authentication settings
ProxyAuthSecret string `envconfig:"PROXY_AUTH_SECRET"`


@ -20,23 +20,23 @@ import (
// ConfigPersistence handles saving and loading configuration
type ConfigPersistence struct {
mu sync.RWMutex
tx *importTransaction
configDir string
alertFile string
emailFile string
webhookFile string
appriseFile string
nodesFile string
systemFile string
oidcFile string
mu sync.RWMutex
tx *importTransaction
configDir string
alertFile string
emailFile string
webhookFile string
appriseFile string
nodesFile string
systemFile string
oidcFile string
apiTokensFile string
envTokenSuppressionsFile string
aiFile string
aiFindingsFile string
aiPatrolRunsFile string
aiUsageHistoryFile string
crypto *crypto.CryptoManager
aiFindingsFile string
aiPatrolRunsFile string
aiUsageHistoryFile string
crypto *crypto.CryptoManager
}
// NewConfigPersistence creates a new config persistence manager.
@ -69,21 +69,21 @@ func newConfigPersistence(configDir string) (*ConfigPersistence, error) {
}
cp := &ConfigPersistence{
configDir: configDir,
alertFile: filepath.Join(configDir, "alerts.json"),
emailFile: filepath.Join(configDir, "email.enc"),
webhookFile: filepath.Join(configDir, "webhooks.enc"),
appriseFile: filepath.Join(configDir, "apprise.enc"),
nodesFile: filepath.Join(configDir, "nodes.enc"),
systemFile: filepath.Join(configDir, "system.json"),
oidcFile: filepath.Join(configDir, "oidc.enc"),
apiTokensFile: filepath.Join(configDir, "api_tokens.json"),
envTokenSuppressionsFile: filepath.Join(configDir, "env_token_suppressions.json"),
aiFile: filepath.Join(configDir, "ai.enc"),
aiFindingsFile: filepath.Join(configDir, "ai_findings.json"),
aiPatrolRunsFile: filepath.Join(configDir, "ai_patrol_runs.json"),
aiUsageHistoryFile: filepath.Join(configDir, "ai_usage_history.json"),
crypto: cryptoMgr,
configDir: configDir,
alertFile: filepath.Join(configDir, "alerts.json"),
emailFile: filepath.Join(configDir, "email.enc"),
webhookFile: filepath.Join(configDir, "webhooks.enc"),
appriseFile: filepath.Join(configDir, "apprise.enc"),
nodesFile: filepath.Join(configDir, "nodes.enc"),
systemFile: filepath.Join(configDir, "system.json"),
oidcFile: filepath.Join(configDir, "oidc.enc"),
apiTokensFile: filepath.Join(configDir, "api_tokens.json"),
envTokenSuppressionsFile: filepath.Join(configDir, "env_token_suppressions.json"),
aiFile: filepath.Join(configDir, "ai.enc"),
aiFindingsFile: filepath.Join(configDir, "ai_findings.json"),
aiPatrolRunsFile: filepath.Join(configDir, "ai_patrol_runs.json"),
aiUsageHistoryFile: filepath.Join(configDir, "ai_usage_history.json"),
crypto: cryptoMgr,
}
log.Debug().


@ -17,10 +17,10 @@ func TestAIConfigPersistence(t *testing.T) {
}
cfg := config.AIConfig{
Enabled: true,
Enabled: true,
Provider: "anthropic",
APIKey: "test-key",
Model: "claude-3-opus",
APIKey: "test-key",
Model: "claude-3-opus",
}
if err := cp.SaveAIConfig(cfg); err != nil {
@ -46,9 +46,9 @@ func TestAIFindingsPersistence(t *testing.T) {
findings := map[string]*config.AIFindingRecord{
"f1": {
ID: "f1",
Title: "Test Finding",
Severity: "warning",
ID: "f1",
Title: "Test Finding",
Severity: "warning",
ResourceID: "res-1",
},
}
@ -70,7 +70,7 @@ func TestAIFindingsPersistence(t *testing.T) {
func TestIsEncryptionEnabled(t *testing.T) {
tempDir := t.TempDir()
cp := config.NewConfigPersistence(tempDir)
// NewConfigPersistence always enables encryption by generating a key if missing
if !cp.IsEncryptionEnabled() {
t.Error("Encryption should be enabled by default")
@@ -94,7 +94,7 @@ func TestMetadataPersistence(t *testing.T) {
Notes: []string{"Important guest"},
},
}
// Create the file manually since SaveGuestMetadata doesn't exist in ConfigPersistence (it's in GuestMetadataStore)
// but LoadGuestMetadata is in ConfigPersistence.
// This tests the LoadGuestMetadata method in persistence.go
@@ -113,7 +113,7 @@ func TestMetadataPersistence(t *testing.T) {
// 2. Docker Metadata
dockerMeta := map[string]*config.DockerMetadata{
"docker-1": {
ID: "docker-1",
ID: "docker-1",
Notes: []string{"Worker node"},
},
}


@@ -23,7 +23,7 @@ func TestConfigPersistence_DataDir(t *testing.T) {
func TestConfigPersistence_MigrateWebhooksIfNeeded(t *testing.T) {
tempDir := t.TempDir()
cp := config.NewConfigPersistence(tempDir)
// 1. Create legacy file
legacyFile := filepath.Join(tempDir, "webhooks.json")
legacyWebhooks := []notifications.WebhookConfig{
@@ -91,7 +91,7 @@ func TestConfigPersistence_PatrolRunHistory(t *testing.T) {
func TestConfigPersistence_UpdateEnvFile(t *testing.T) {
tempDir := t.TempDir()
envFile := filepath.Join(tempDir, ".env")
initialContent := `UPDATE_CHANNEL=stable
AUTO_UPDATE_ENABLED=false
POLLING_INTERVAL=10
@@ -99,7 +99,7 @@ CUSTOM_VAR=value`
os.WriteFile(envFile, []byte(initialContent), 0644)
cp := config.NewConfigPersistence(tempDir)
settings := config.SystemSettings{
UpdateChannel: "beta",
AutoUpdateEnabled: true,


@@ -13,7 +13,7 @@ func TestNewConfigPersistenceFailsWhenEncryptedDataPresentWithoutKey(t *testing.
// We need to temporarily rename it if it exists to properly test this scenario
systemKeyPath := "/etc/pulse/.encryption.key"
backupKeyPath := "/etc/pulse/.encryption.key.test-backup"
if _, err := os.Stat(systemKeyPath); err == nil {
// Key exists - temporarily rename it
if err := os.Rename(systemKeyPath, backupKeyPath); err != nil {


@@ -43,7 +43,7 @@ func (a *Agent) cleanupOrphanedBackups(ctx context.Context) {
if len(parts) < 2 {
continue
}
timestampStr := parts[len(parts)-1]
backupTime, err := time.Parse("20060102_150405", timestampStr)
if err != nil {


@@ -174,15 +174,15 @@ func (a *Agent) updateContainerWithProgress(ctx context.Context, containerID str
// Only set the first network here; we'll connect to others after creation
for netName, netConfig := range inspect.NetworkSettings.Networks {
networkingConfig.EndpointsConfig[netName] = &network.EndpointSettings{
Aliases: netConfig.Aliases,
IPAMConfig: netConfig.IPAMConfig,
Links: netConfig.Links,
NetworkID: netConfig.NetworkID,
EndpointID: "", // Will be assigned
Gateway: "", // Will be assigned
IPAddress: "", // Will be assigned
MacAddress: netConfig.MacAddress,
DriverOpts: netConfig.DriverOpts,
Aliases: netConfig.Aliases,
IPAMConfig: netConfig.IPAMConfig,
Links: netConfig.Links,
NetworkID: netConfig.NetworkID,
EndpointID: "", // Will be assigned
Gateway: "", // Will be assigned
IPAddress: "", // Will be assigned
MacAddress: netConfig.MacAddress,
DriverOpts: netConfig.DriverOpts,
}
break // Only set one network during creation
}


@@ -50,7 +50,7 @@ func TestRegistryChecker_CheckImageUpdate_Behavior(t *testing.T) {
}
}),
}
result := checker.CheckImageUpdate(context.Background(), "", "sha256:current", "", "", "")
if result == nil {
t.Fatal("Expected result for empty image")
@@ -210,7 +210,6 @@ func TestParseImageReference_EdgeCases(t *testing.T) {
}
}
func TestImageUpdateResult_Fields(t *testing.T) {
result := ImageUpdateResult{
Image: "nginx:latest",


@@ -2,24 +2,24 @@ package dockeragent
import (
"context"
"github.com/rs/zerolog"
"net/http"
"testing"
"github.com/rs/zerolog"
)
func TestRegistryChecker_ResolveManifestList(t *testing.T) {
logger := zerolog.Nop()
logger := zerolog.Nop()
t.Run("resolve manifest list", func(t *testing.T) {
checker := NewRegistryChecker(logger)
checker.httpClient = &http.Client{
Transport: roundTripFunc(func(req *http.Request) (*http.Response, error) {
if req.Method == "HEAD" {
return newStringResponse(http.StatusOK, map[string]string{
"Content-Type": "application/vnd.docker.distribution.manifest.list.v2+json",
}, ""), nil
}
// GET request for body
body := `{
if req.Method == "HEAD" {
return newStringResponse(http.StatusOK, map[string]string{
"Content-Type": "application/vnd.docker.distribution.manifest.list.v2+json",
}, ""), nil
}
// GET request for body
body := `{
"manifests": [
{
"digest": "sha256:armv7",
@@ -31,20 +31,20 @@ func TestRegistryChecker_ResolveManifestList(t *testing.T) {
}
]
}`
return newStringResponse(http.StatusOK, nil, body), nil
return newStringResponse(http.StatusOK, nil, body), nil
}),
}
// Test matching amd64
result := checker.CheckImageUpdate(context.Background(), "image:tag", "sha256:current", "amd64", "linux", "")
if result.LatestDigest != "sha256:amd64" {
t.Errorf("Expected sha256:amd64, got %s", result.LatestDigest)
}
if result.LatestDigest != "sha256:amd64" {
t.Errorf("Expected sha256:amd64, got %s", result.LatestDigest)
}
// Test matching arm/v7
result = checker.CheckImageUpdate(context.Background(), "image:tag", "sha256:current", "arm", "linux", "v7")
if result.LatestDigest != "sha256:armv7" {
t.Errorf("Expected sha256:armv7, got %s", result.LatestDigest)
}
// Test matching arm/v7
result = checker.CheckImageUpdate(context.Background(), "image:tag", "sha256:current", "arm", "linux", "v7")
if result.LatestDigest != "sha256:armv7" {
t.Errorf("Expected sha256:armv7, got %s", result.LatestDigest)
}
})
}
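The test above exercises picking a platform-specific digest out of a Docker manifest list (amd64 vs arm/v7). A standalone sketch of that matching logic, with illustrative type and function names that are not the package's actual API:

```go
package main

import "fmt"

// manifestEntry is a minimal stand-in for one entry of a Docker
// manifest list; field names here are illustrative.
type manifestEntry struct {
	Digest  string
	OS      string
	Arch    string
	Variant string
}

// pickDigest returns the digest of the first manifest whose platform
// matches os/arch, and, when a variant is requested, that variant too.
func pickDigest(entries []manifestEntry, os, arch, variant string) string {
	for _, e := range entries {
		if e.OS == os && e.Arch == arch && (variant == "" || e.Variant == variant) {
			return e.Digest
		}
	}
	return ""
}

func main() {
	entries := []manifestEntry{
		{Digest: "sha256:armv7", OS: "linux", Arch: "arm", Variant: "v7"},
		{Digest: "sha256:amd64", OS: "linux", Arch: "amd64"},
	}
	fmt.Println(pickDigest(entries, "linux", "amd64", ""))  // sha256:amd64
	fmt.Println(pickDigest(entries, "linux", "arm", "v7"))  // sha256:armv7
}
```

This mirrors what the test asserts: the same manifest list yields different latest digests depending on the agent's reported architecture and variant.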


@@ -87,4 +87,3 @@ func TestAgent_flushBuffer_RetryAfterTransientFailure(t *testing.T) {
t.Fatalf("expected buffer to be empty, has %d items", a.reportBuffer.Len())
}
}


@@ -44,11 +44,11 @@ func TestAgent_collectTemperatures_MapsKeys(t *testing.T) {
got := a.collectTemperatures(context.Background())
want := map[string]float64{
"cpu_package": 55.5,
"cpu_core_0": 44,
"cpu_core_1": 45,
"nvme0": 40,
"amdgpu-pci-0100": 60,
"cpu_package": 55.5,
"cpu_core_0": 44,
"cpu_core_1": 45,
"nvme0": 40,
"amdgpu-pci-0100": 60,
}
if got.TemperatureCelsius == nil {


@@ -28,9 +28,9 @@ type CommandClient struct {
insecureSkipVerify bool
logger zerolog.Logger
conn *websocket.Conn
connMu sync.Mutex
done chan struct{}
conn *websocket.Conn
connMu sync.Mutex
done chan struct{}
}
// NewCommandClient creates a new command execution client


@@ -87,14 +87,14 @@ func TestCommandClient_connectAndHandle_ExecutesCommandAndReturnsResult(t *testi
defer cancel()
client := &CommandClient{
pulseURL: strings.TrimRight(server.URL, "/"),
apiToken: "token",
agentID: "agent-1",
hostname: "host-1",
platform: "linux",
version: "1.2.3",
logger: zerolog.Nop(),
done: make(chan struct{}),
pulseURL: strings.TrimRight(server.URL, "/"),
apiToken: "token",
agentID: "agent-1",
hostname: "host-1",
platform: "linux",
version: "1.2.3",
logger: zerolog.Nop(),
done: make(chan struct{}),
}
errCh := make(chan error, 1)
@@ -129,4 +129,3 @@ func TestCommandClient_connectAndHandle_ExecutesCommandAndReturnsResult(t *testi
t.Fatalf("timed out waiting for connectAndHandle to return")
}
}


@@ -148,4 +148,3 @@ func TestCommandClient_executeCommand_TruncatesLargeOutput(t *testing.T) {
t.Fatalf("stdout len=%d, expected <= %d", len(result.Stdout), 1024*1024+64)
}
}


@@ -18,15 +18,15 @@ import (
// System call wrappers for testing
var (
cpuCounts = gocpu.CountsWithContext
cpuPercent = gocpu.PercentWithContext
loadAvg = goload.AvgWithContext
virtualMemory = gomem.VirtualMemoryWithContext
diskPartitions = godisk.PartitionsWithContext
diskUsage = godisk.UsageWithContext
diskIOCounters = godisk.IOCountersWithContext
netInterfaces = gonet.InterfacesWithContext
netIOCounters = gonet.IOCountersWithContext
cpuCounts = gocpu.CountsWithContext
cpuPercent = gocpu.PercentWithContext
loadAvg = goload.AvgWithContext
virtualMemory = gomem.VirtualMemoryWithContext
diskPartitions = godisk.PartitionsWithContext
diskUsage = godisk.UsageWithContext
diskIOCounters = godisk.IOCountersWithContext
netInterfaces = gonet.InterfacesWithContext
netIOCounters = gonet.IOCountersWithContext
)
// Snapshot represents a host resource utilisation sample.


@@ -145,12 +145,12 @@ func fallbackZFSDisks(bestDatasets map[string]zfsDatasetUsage, mountpoints map[s
// zpool in different locations that might not be in the agent's PATH.
// This helps fix issue #718 where TrueNAS reports inflated storage.
var commonZpoolPaths = []string{
"/usr/sbin/zpool", // TrueNAS SCALE, Debian, Ubuntu
"/sbin/zpool", // FreeBSD, older Linux
"/usr/local/sbin/zpool", // FreeBSD ports, custom builds
"/usr/local/bin/zpool", // Custom installations
"/opt/zfs/bin/zpool", // Some enterprise Linux
"/usr/bin/zpool", // Some distributions
"/usr/sbin/zpool", // TrueNAS SCALE, Debian, Ubuntu
"/sbin/zpool", // FreeBSD, older Linux
"/usr/local/sbin/zpool", // FreeBSD ports, custom builds
"/usr/local/bin/zpool", // Custom installations
"/opt/zfs/bin/zpool", // Some enterprise Linux
"/usr/bin/zpool", // Some distributions
}
// findZpool returns the path to the zpool binary by first trying exec.LookPath,


@@ -44,11 +44,11 @@ type Config struct {
KubeContext string
// Report shaping
IncludeNamespaces []string
ExcludeNamespaces []string
IncludeAllPods bool // Include all non-succeeded pods (still capped)
IncludeAllDeployments bool // Include all deployments, not just problem ones
MaxPods int // Max pods included in the report
IncludeNamespaces []string
ExcludeNamespaces []string
IncludeAllPods bool // Include all non-succeeded pods (still capped)
IncludeAllDeployments bool // Include all deployments, not just problem ones
MaxPods int // Max pods included in the report
}
type Agent struct {


@@ -293,9 +293,9 @@ func TestSendReport_SetsHeadersAndHandlesStatus(t *testing.T) {
defer server.Close()
a := &Agent{
cfg: Config{APIToken: "token"},
httpClient: server.Client(),
pulseURL: server.URL,
cfg: Config{APIToken: "token"},
httpClient: server.Client(),
pulseURL: server.URL,
agentVersion: "1.2.3",
}
@@ -306,4 +306,3 @@ func TestSendReport_SetsHeadersAndHandlesStatus(t *testing.T) {
t.Fatalf("expected server to receive request")
}
}


@@ -448,14 +448,14 @@ func ValidateLicense(licenseKey string) (*License, error) {
// Grace period: 7 days after expiration
gracePeriodDuration := 7 * 24 * time.Hour
gracePeriodEnd := expirationTime.Add(gracePeriodDuration)
if time.Now().Before(gracePeriodEnd) {
// Within grace period - allow activation but mark as in grace period
license.GracePeriodEnd = &gracePeriodEnd
// License is still valid during grace period
} else {
// Past grace period - reject
return nil, fmt.Errorf("%w: expired on %s (grace period ended %s)",
return nil, fmt.Errorf("%w: expired on %s (grace period ended %s)",
ErrExpiredLicense,
expirationTime.Format("2006-01-02"),
gracePeriodEnd.Format("2006-01-02"))


@@ -14,7 +14,6 @@ func init() {
os.Setenv("PULSE_LICENSE_DEV_MODE", "true")
}
func TestTierHasFeature(t *testing.T) {
tests := []struct {
name string
@@ -133,27 +132,27 @@ func TestLicenseExpiration(t *testing.T) {
IssuedAt: time.Now().Add(-33 * 24 * time.Hour).Unix(),
ExpiresAt: expiredAt,
}
license := &License{
Raw: testKey,
Claims: claims,
}
// License is technically expired
if !license.IsExpired() {
t.Error("License should be expired")
}
// But with grace period set, it should still work
gracePeriodEnd := time.Now().Add(4 * 24 * time.Hour)
license.GracePeriodEnd = &gracePeriodEnd
// Service should recognize grace period
service := NewService()
service.mu.Lock()
service.license = license
service.mu.Unlock()
// Should still have features during grace period
if !service.HasFeature(FeatureAIPatrol) {
t.Error("Should have feature during grace period")
@@ -161,7 +160,7 @@ func TestLicenseExpiration(t *testing.T) {
if !service.IsValid() {
t.Error("Should be valid during grace period")
}
// Status should show grace period
status := service.Status()
if !status.InGracePeriod {
@@ -390,9 +389,9 @@ func TestPublicKeyRequiredWithoutDevMode(t *testing.T) {
func TestStatusSetsGracePeriodDynamically(t *testing.T) {
// Test that Status() dynamically sets GracePeriodEnd when license expires
// without requiring HasFeature() to be called first
service := NewService()
// Create a license that expired 3 days ago (within 7-day grace)
expiredAt := time.Now().Add(-3 * 24 * time.Hour)
lic := &License{
@@ -406,25 +405,25 @@ func TestStatusSetsGracePeriodDynamically(t *testing.T) {
ValidatedAt: time.Now().Add(-33 * 24 * time.Hour),
// Note: GracePeriodEnd is NOT set - simulating runtime expiration
}
// Manually set the license without grace period
service.mu.Lock()
service.license = lic
service.mu.Unlock()
// Verify GracePeriodEnd is nil initially
if lic.GracePeriodEnd != nil {
t.Fatal("GracePeriodEnd should be nil initially")
}
// Call Status() - this should set GracePeriodEnd dynamically
status := service.Status()
// Verify Status() set the grace period
if lic.GracePeriodEnd == nil {
t.Fatal("Status() should have set GracePeriodEnd")
}
// Status should show as valid during grace period
if !status.Valid {
t.Error("Status should be valid during grace period")
@@ -435,7 +434,7 @@ func TestStatusSetsGracePeriodDynamically(t *testing.T) {
if status.GracePeriodEnd == nil {
t.Error("Status should include GracePeriodEnd")
}
// Verify HasFeature also works during grace
if !service.HasFeature(FeatureAIPatrol) {
t.Error("HasFeature should return true during grace period")
@@ -696,7 +695,7 @@ func TestValidateLicense_RealSignature(t *testing.T) {
header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"EdDSA","typ":"JWT"}`))
payloadBytes, _ := json.Marshal(claims)
payload := base64.RawURLEncoding.EncodeToString(payloadBytes)
signedData := header + "." + payload
signature := ed25519.Sign(priv, []byte(signedData))
sigEncoded := base64.RawURLEncoding.EncodeToString(signature)


@@ -68,7 +68,7 @@ func TestPersistence(t *testing.T) {
t.Run("Load non-existent", func(t *testing.T) {
tmpDirEmpty, _ := os.MkdirTemp("", "pulse-license-test-empty-*")
defer os.RemoveAll(tmpDirEmpty)
pEmpty, _ := NewPersistence(tmpDirEmpty)
key, err := pEmpty.Load()
if err != nil {
@@ -115,26 +115,26 @@ func TestPersistence(t *testing.T) {
if string(data) == testLicenseKey {
t.Error("License file should be encrypted, not raw text")
}
// Ensure it's not JSON either in raw form
if data[0] == '{' {
t.Error("License file should be encrypted, not raw JSON")
}
})
t.Run("Decrypt with wrong key material", func(t *testing.T) {
err := p.Save(testLicenseKey)
if err != nil {
t.Fatalf("Failed to save license: %v", err)
}
// Create a new persistence with different encryption key
pWrong := &Persistence{
configDir: tmpDir,
encryptionKey: "different-encryption-key",
machineID: "different-machine-id",
}
_, err = pWrong.Load()
if err == nil {
t.Error("Expected error when decrypting with wrong key material")
@@ -200,4 +200,3 @@ func TestPersistence(t *testing.T) {
}
})
}


@@ -61,7 +61,7 @@ func InitPublicKey() {
func decodePublicKey(encoded string) (ed25519.PublicKey, error) {
// Remove any whitespace
encoded = strings.TrimSpace(encoded)
// Try standard base64 first, then URL-safe
decoded, err := base64.StdEncoding.DecodeString(encoded)
if err != nil {


@@ -13,10 +13,10 @@ func TestInitPublicKey(t *testing.T) {
base64Pub := base64.StdEncoding.EncodeToString(pub)
tests := []struct {
name string
envKey string
embeddedKey string
devMode string
name string
envKey string
embeddedKey string
devMode string
expectedLoaded bool
}{
{
@@ -78,7 +78,7 @@ func TestInitPublicKey(t *testing.T) {
func TestDecodePublicKey(t *testing.T) {
pub, _, _ := ed25519.GenerateKey(nil)
tests := []struct {
name string
input string


@@ -191,8 +191,8 @@ type Host struct {
IsLegacy bool `json:"isLegacy,omitempty"`
// Linking: When this host agent is running on a known PVE node/VM/container
LinkedNodeID string `json:"linkedNodeId,omitempty"` // ID of the PVE node this agent is running on
LinkedVMID string `json:"linkedVmId,omitempty"` // ID of the VM this agent is running inside
LinkedNodeID string `json:"linkedNodeId,omitempty"` // ID of the PVE node this agent is running on
LinkedVMID string `json:"linkedVmId,omitempty"` // ID of the VM this agent is running inside
LinkedContainerID string `json:"linkedContainerId,omitempty"` // ID of the container this agent is running inside
}


@@ -241,27 +241,27 @@ type RemovedKubernetesClusterFrontend struct {
// DockerContainerFrontend represents a Docker container for the frontend
type DockerContainerFrontend struct {
ID string `json:"id"`
Name string `json:"name"`
Image string `json:"image"`
State string `json:"state"`
Status string `json:"status"`
Health string `json:"health,omitempty"`
CPUPercent float64 `json:"cpuPercent"`
MemoryUsage int64 `json:"memoryUsageBytes"`
MemoryLimit int64 `json:"memoryLimitBytes"`
MemoryPercent float64 `json:"memoryPercent"`
UptimeSeconds int64 `json:"uptimeSeconds"`
RestartCount int `json:"restartCount"`
ExitCode int `json:"exitCode"`
CreatedAt int64 `json:"createdAt"`
StartedAt *int64 `json:"startedAt,omitempty"`
FinishedAt *int64 `json:"finishedAt,omitempty"`
Ports []DockerContainerPortFrontend `json:"ports,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
Networks []DockerContainerNetworkFrontend `json:"networks,omitempty"`
WritableLayerBytes int64 `json:"writableLayerBytes,omitempty"`
RootFilesystemBytes int64 `json:"rootFilesystemBytes,omitempty"`
ID string `json:"id"`
Name string `json:"name"`
Image string `json:"image"`
State string `json:"state"`
Status string `json:"status"`
Health string `json:"health,omitempty"`
CPUPercent float64 `json:"cpuPercent"`
MemoryUsage int64 `json:"memoryUsageBytes"`
MemoryLimit int64 `json:"memoryLimitBytes"`
MemoryPercent float64 `json:"memoryPercent"`
UptimeSeconds int64 `json:"uptimeSeconds"`
RestartCount int `json:"restartCount"`
ExitCode int `json:"exitCode"`
CreatedAt int64 `json:"createdAt"`
StartedAt *int64 `json:"startedAt,omitempty"`
FinishedAt *int64 `json:"finishedAt,omitempty"`
Ports []DockerContainerPortFrontend `json:"ports,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
Networks []DockerContainerNetworkFrontend `json:"networks,omitempty"`
WritableLayerBytes int64 `json:"writableLayerBytes,omitempty"`
RootFilesystemBytes int64 `json:"rootFilesystemBytes,omitempty"`
BlockIO *DockerContainerBlockIOFrontend `json:"blockIo,omitempty"`
Mounts []DockerContainerMountFrontend `json:"mounts,omitempty"`
Podman *DockerPodmanContainerFrontend `json:"podman,omitempty"`
@@ -277,7 +277,6 @@ type DockerContainerUpdateStatusFrontend struct {
Error string `json:"error,omitempty"` // e.g., "rate limited", "auth required"
}
// DockerContainerPortFrontend represents a container port mapping
type DockerContainerPortFrontend struct {
PrivatePort int `json:"privatePort"`
@@ -440,10 +439,10 @@ type HostFrontend struct {
// HostSensorSummaryFrontend mirrors HostSensorSummary with primitives for the frontend.
type HostSensorSummaryFrontend struct {
TemperatureCelsius map[string]float64 `json:"temperatureCelsius,omitempty"`
FanRPM map[string]float64 `json:"fanRpm,omitempty"`
Additional map[string]float64 `json:"additional,omitempty"`
SMART []HostDiskSMARTFrontend `json:"smart,omitempty"` // S.M.A.R.T. disk data
TemperatureCelsius map[string]float64 `json:"temperatureCelsius,omitempty"`
FanRPM map[string]float64 `json:"fanRpm,omitempty"`
Additional map[string]float64 `json:"additional,omitempty"`
SMART []HostDiskSMARTFrontend `json:"smart,omitempty"` // S.M.A.R.T. disk data
}
// HostDiskSMARTFrontend represents S.M.A.R.T. data for a disk from a host agent.


@@ -62,4 +62,3 @@ func TestEnrichContainerMetadata_DetectsOCIForStoppedContainer(t *testing.T) {
t.Fatalf("expected container.Type oci, got %q", container.Type)
}
}


@@ -239,7 +239,6 @@ func TestMakeGuestID(t *testing.T) {
}
}
func TestConvertPoolInfoToModel(t *testing.T) {
t.Parallel()


@@ -267,21 +267,21 @@ func mergeTemperatureData(hostAgentTemp, proxyTemp *models.Temperature) *models.
// Start with host agent data as base since it's more reliable
result := &models.Temperature{
CPUPackage: hostAgentTemp.CPUPackage,
CPUMax: hostAgentTemp.CPUMax,
CPUMin: proxyTemp.CPUMin, // Preserve historical min
CPUPackage: hostAgentTemp.CPUPackage,
CPUMax: hostAgentTemp.CPUMax,
CPUMin: proxyTemp.CPUMin, // Preserve historical min
CPUMaxRecord: math.Max(hostAgentTemp.CPUPackage, proxyTemp.CPUMaxRecord), // Update historical max
MinRecorded: proxyTemp.MinRecorded,
MaxRecorded: proxyTemp.MaxRecorded,
Cores: hostAgentTemp.Cores,
GPU: hostAgentTemp.GPU,
NVMe: hostAgentTemp.NVMe,
Available: true,
HasCPU: hostAgentTemp.HasCPU,
HasGPU: hostAgentTemp.HasGPU,
HasNVMe: hostAgentTemp.HasNVMe,
HasSMART: hostAgentTemp.HasSMART || proxyTemp.HasSMART,
LastUpdate: hostAgentTemp.LastUpdate,
MinRecorded: proxyTemp.MinRecorded,
MaxRecorded: proxyTemp.MaxRecorded,
Cores: hostAgentTemp.Cores,
GPU: hostAgentTemp.GPU,
NVMe: hostAgentTemp.NVMe,
Available: true,
HasCPU: hostAgentTemp.HasCPU,
HasGPU: hostAgentTemp.HasGPU,
HasNVMe: hostAgentTemp.HasNVMe,
HasSMART: hostAgentTemp.HasSMART || proxyTemp.HasSMART,
LastUpdate: hostAgentTemp.LastUpdate,
}
// Use host agent CPU data if available, fall back to proxy


@@ -78,10 +78,10 @@ func TestConvertHostSensorsToTemperature_NVMe(t *testing.T) {
func TestConvertHostSensorsToTemperature_GPU(t *testing.T) {
sensors := models.HostSensorSummary{
TemperatureCelsius: map[string]float64{
"cpu_package": 45.0,
"gpu_edge": 60.0,
"cpu_package": 45.0,
"gpu_edge": 60.0,
"gpu_junction": 65.0,
"gpu_mem": 55.0,
"gpu_mem": 55.0,
},
}
result := convertHostSensorsToTemperature(sensors, time.Now())


@@ -99,4 +99,3 @@ func TestSeedMockMetricsHistory_PopulatesSeries(t *testing.T) {
t.Fatalf("expected last docker cpu point to match current, got=%v want=%v", got, want)
}
}


@@ -119,15 +119,15 @@ func TestFromNode(t *testing.T) {
func TestFromVM(t *testing.T) {
vm := models.VM{
ID: "pve1/qemu/100",
VMID: 100,
Name: "webserver",
Node: "node1",
Instance: "pve1",
Status: "running",
Type: "qemu",
CPU: 0.15,
CPUs: 4,
ID: "pve1/qemu/100",
VMID: 100,
Name: "webserver",
Node: "node1",
Instance: "pve1",
Status: "running",
Type: "qemu",
CPU: 0.15,
CPUs: 4,
Memory: models.Memory{
Total: 8 * 1024 * 1024 * 1024,
Used: 4 * 1024 * 1024 * 1024,
@@ -391,17 +391,17 @@ func TestFromHost(t *testing.T) {
func TestFromDockerHost(t *testing.T) {
dh := models.DockerHost{
ID: "docker-host-1",
AgentID: "agent-xyz",
Hostname: "docker-server",
DisplayName: "Docker Server",
MachineID: "machine-id-123",
OS: "linux",
Architecture: "amd64",
Runtime: "docker",
ID: "docker-host-1",
AgentID: "agent-xyz",
Hostname: "docker-server",
DisplayName: "Docker Server",
MachineID: "machine-id-123",
OS: "linux",
Architecture: "amd64",
Runtime: "docker",
DockerVersion: "24.0.5",
CPUs: 16,
CPUUsage: 35.0,
CPUs: 16,
CPUUsage: 35.0,
Memory: models.Memory{
Total: 64 * 1024 * 1024 * 1024,
Used: 32 * 1024 * 1024 * 1024,


@@ -185,4 +185,3 @@ func TestPopulateFromSnapshotRemovesStaleResources(t *testing.T) {
t.Logf("SUCCESS: Removed resources are correctly cleaned up!")
t.Logf("After second snapshot: %d container(s) - 'container-to-remove' was properly removed", len(containers))
}


@@ -777,4 +777,3 @@ func TestGetResourceSummary(t *testing.T) {
t.Errorf("Expected 2 VMs, got %d", vmStats.Count)
}
}


@@ -263,7 +263,7 @@ func TestCounterWraparound(t *testing.T) {
// Test wraparound case (energy2 < energy1 means counter wrapped)
energy1 = uint64(18446744073709551610) // Close to max uint64
energy2 = uint64(100) // After wrap
energy2 = uint64(100) // After wrap
if energy2 >= energy1 {
deltaUJ = energy2 - energy1


@@ -88,15 +88,15 @@ func (m *Manager) ProcessDockerContainerUpdate(
// Create or update the update entry
updateID := "docker:" + hostID + ":" + containerID
update := &UpdateInfo{
ID: updateID,
ResourceID: containerID,
ResourceType: "docker",
ResourceName: containerName,
HostID: hostID,
Type: UpdateTypeDockerImage,
CurrentDigest: updateStatus.CurrentDigest,
LatestDigest: updateStatus.LatestDigest,
LastChecked: updateStatus.LastChecked,
ID: updateID,
ResourceID: containerID,
ResourceType: "docker",
ResourceName: containerName,
HostID: hostID,
Type: UpdateTypeDockerImage,
CurrentDigest: updateStatus.CurrentDigest,
LatestDigest: updateStatus.LatestDigest,
LastChecked: updateStatus.LastChecked,
CurrentVersion: image,
}


@@ -17,7 +17,7 @@ import (
// RegistryConfig holds authentication for a container registry.
type RegistryConfig struct {
Host string `json:"host"` // e.g., "registry-1.docker.io", "ghcr.io"
Host string `json:"host"` // e.g., "registry-1.docker.io", "ghcr.io"
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"` // token or password
Insecure bool `json:"insecure,omitempty"` // Skip TLS verification
@@ -59,10 +59,10 @@ func NewRegistryChecker(logger zerolog.Logger) *RegistryChecker {
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
},
MaxIdleConns: 10,
IdleConnTimeout: 90 * time.Second,
DisableCompression: false,
DisableKeepAlives: false,
MaxIdleConns: 10,
IdleConnTimeout: 90 * time.Second,
DisableCompression: false,
DisableKeepAlives: false,
},
},
configs: make(map[string]RegistryConfig),
@@ -130,7 +130,7 @@ func ParseImageReference(image string) (registry, repository, tag string) {
// CheckImageUpdate compares current digest with registry's latest.
func (r *RegistryChecker) CheckImageUpdate(ctx context.Context, image, currentDigest string) (*ImageUpdateInfo, error) {
registry, repository, tag := ParseImageReference(image)
// Skip digest-pinned images
if registry == "" {
return &ImageUpdateInfo{
@@ -168,7 +168,7 @@ func (r *RegistryChecker) CheckImageUpdate(ctx context.Context, image, currentDi
if err != nil {
// Cache the error to avoid hammering the registry
r.cacheError(cacheKey, err.Error())
r.logger.Debug().
Str("image", image).
Str("registry", registry).
@@ -278,7 +278,7 @@ func (r *RegistryChecker) getAuthToken(ctx context.Context, registry, repository
// Docker Hub requires auth token even for public images
if registry == "registry-1.docker.io" {
tokenURL := fmt.Sprintf("https://auth.docker.io/token?service=registry.docker.io&scope=repository:%s:pull", repository)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, tokenURL, nil)
if err != nil {
return "", err

Some files were not shown because too many files have changed in this diff.