Advanced Configuration
Performance tuning, debugging, and advanced features.
Performance Tuning
Concurrency Control
# Max concurrent database operations (default: 5)
# Increase: Faster processing, more conflicts
# Decrease: Slower, fewer conflicts
SURREAL_COMMANDS_MAX_TASKS=5
Guidelines:
- CPU: 2 cores → 2-3 tasks
- CPU: 4 cores → 5 tasks (default)
- CPU: 8+ cores → 10-20 tasks
Higher concurrency = more throughput but more database conflicts (retries handle this).
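The sizing table above can be expressed as a small helper; this is a sketch of the guideline only (`suggest_max_tasks` is a name invented here, and the thresholds are this page's recommendations, not project defaults):

```shell
# Map CPU core count to the SURREAL_COMMANDS_MAX_TASKS guideline above.
suggest_max_tasks() {
  cores="$1"
  if [ "$cores" -le 2 ]; then
    echo 3          # 2 cores -> 2-3 tasks
  elif [ "$cores" -le 4 ]; then
    echo 5          # 4 cores -> default
  else
    echo 10         # 8+ cores -> 10-20 tasks (start at the low end)
  fi
}

# Example: derive a starting value from this machine's core count
# (falls back to 4 cores if nproc is unavailable)
echo "SURREAL_COMMANDS_MAX_TASKS=$(suggest_max_tasks "$(nproc 2>/dev/null || echo 4)")"
```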
Retry Strategy
# How to wait between retries
SURREAL_COMMANDS_RETRY_WAIT_STRATEGY=exponential_jitter
# Options:
# - exponential_jitter (recommended)
# - exponential
# - fixed
# - random
For high-concurrency deployments, use exponential_jitter to prevent thundering herd.
Timeout Tuning
# Client timeout (default: 300 seconds)
API_CLIENT_TIMEOUT=300
# LLM timeout (default: 60 seconds)
ESPERANTO_LLM_TIMEOUT=60
Guideline: Set API_CLIENT_TIMEOUT > ESPERANTO_LLM_TIMEOUT + buffer
Example:
ESPERANTO_LLM_TIMEOUT=120
API_CLIENT_TIMEOUT=180 # 120 + 60 second buffer
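The invariant can be checked mechanically before deploying; a minimal shell sketch using the example's values (the 60-second buffer is this page's suggestion, not a hard requirement):

```shell
# Sanity-check the rule: API_CLIENT_TIMEOUT >= ESPERANTO_LLM_TIMEOUT + buffer.
ESPERANTO_LLM_TIMEOUT=120
API_CLIENT_TIMEOUT=180
BUFFER=60
if [ "$API_CLIENT_TIMEOUT" -ge $((ESPERANTO_LLM_TIMEOUT + BUFFER)) ]; then
  echo "timeouts ok"
else
  echo "API_CLIENT_TIMEOUT too low; raise it to at least $((ESPERANTO_LLM_TIMEOUT + BUFFER))" >&2
fi
```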
Batching
TTS Batch Size
For podcast generation, control concurrent TTS requests:
# Default is 5; lower it for rate-limited providers
TTS_BATCH_SIZE=2
Providers and recommendations:
- OpenAI: 5 (can handle many concurrent)
- Google: 4 (good concurrency)
- ElevenLabs: 2 (limited concurrent requests)
- Local TTS: 1 (single-threaded)
Lower = slower but more stable. Higher = faster but more load on provider.
Logging & Debugging
Enable Detailed Logging
# Start with debug logging
RUST_LOG=debug # For Rust components
LOGLEVEL=DEBUG # For Python components
Debug Specific Components
# Only surreal operations
RUST_LOG=surrealdb=debug
# Only langchain
LOGLEVEL=langchain:debug
# Only specific module
RUST_LOG=open_notebook::database=debug
LangSmith Tracing
For debugging LLM workflows:
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY=your-key
LANGCHAIN_PROJECT="Open Notebook"
Then visit https://smith.langchain.com to see traces.
Port Configuration
Default Ports
Frontend: 8502 (Docker deployment)
Frontend: 3000 (Development from source)
API: 5055
SurrealDB: 8000
Changing Frontend Port
Edit docker-compose.yml:
services:
  open-notebook:
    ports:
      - "8001:8502" # Change host port from 8502 to 8001
Access at: http://localhost:8001
API auto-detects to: http://localhost:5055 ✓
Changing API Port
services:
  open-notebook:
    ports:
      - "127.0.0.1:8502:8502" # Frontend
      - "5056:5055" # Change API host port from 5055 to 5056
    environment:
      - API_URL=http://localhost:5056 # Update API_URL
Access API directly: http://localhost:5056/docs
Note: When changing API port, you must set API_URL explicitly since auto-detection assumes port 5055.
Changing SurrealDB Port
services:
  surrealdb:
    ports:
      - "8001:8000" # Change host port from 8000 to 8001
Important: The mapping only changes the host-side port. Inside the Docker network, services reach SurrealDB by container name and container port, so SURREAL_URL stays ws://surrealdb:8000/rpc.
SSL/TLS Configuration
Custom CA Certificate
For self-signed certs on local providers:
ESPERANTO_SSL_CA_BUNDLE=/path/to/ca-bundle.pem
Disable Verification (Development Only)
# WARNING: Only for testing/development
# Vulnerable to MITM attacks
ESPERANTO_SSL_VERIFY=false
Multi-Provider Setup
Use Different Providers for Different Tasks
Configure multiple AI providers via Settings → API Keys. Each provider gets its own credential:
- Add a credential for your main language model provider (e.g., OpenAI, Anthropic)
- Add a credential for embeddings (e.g., Voyage AI, or use the same provider)
- Add a credential for TTS (e.g., ElevenLabs, or OpenAI-Compatible for local Speaches)
- Each credential's models are registered and available independently
Multiple Endpoints for OpenAI-Compatible
When using OpenAI-Compatible providers, you can configure per-service URLs in a single credential:
- Go to Settings → API Keys
- Click Add Credential → Select OpenAI-Compatible
- Configure separate URLs for LLM, Embedding, TTS, and STT
- Click Save, then Test Connection
Security Hardening
Change Default Credentials
# Don't use defaults in production
SURREAL_USER=your_secure_username
SURREAL_PASSWORD=$(openssl rand -base64 32) # Generate secure password
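The two steps above can be combined into a one-shot secrets bootstrap; a sketch, assuming a `.env` file read by docker compose (the username is a placeholder, and using `openssl rand` for OPEN_NOTEBOOK_ENCRYPTION_KEY is an assumption — any sufficiently long random string works):

```shell
# Write freshly generated secrets to a .env file for docker compose.
cat > .env <<EOF
SURREAL_USER=notebook_admin
SURREAL_PASSWORD=$(openssl rand -base64 32)
OPEN_NOTEBOOK_ENCRYPTION_KEY=$(openssl rand -base64 32)
EOF
chmod 600 .env   # keep secrets readable only by the owner
```

Keep a copy of OPEN_NOTEBOOK_ENCRYPTION_KEY somewhere safe: stored credentials cannot be decrypted without it.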
Add Password Protection
# Protect your Open Notebook instance
OPEN_NOTEBOOK_PASSWORD=your_secure_password
Use HTTPS
# Always use HTTPS in production
API_URL=https://mynotebook.example.com
Firewall Rules
Restrict access to your Open Notebook:
- Port 8502 (frontend): Only from your IP
- Port 5055 (API): Only from frontend
- Port 8000 (SurrealDB): Never expose to internet
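Beyond host firewall rules, the same effect can come from Docker itself; a compose sketch that keeps the API host-local and gives SurrealDB no host mapping at all (service names follow the examples on this page):

```yaml
services:
  open-notebook:
    ports:
      - "8502:8502"            # frontend: reachable; restrict via firewall
      - "127.0.0.1:5055:5055"  # API: host-local only
  surrealdb:
    ports: []                  # SurrealDB: internal Docker network only
```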
Web Scraping & Content Extraction
Open Notebook uses multiple services for content extraction:
Firecrawl
For advanced web scraping:
FIRECRAWL_API_KEY=your-key
Get key from: https://firecrawl.dev/
Jina AI
Alternative web extraction:
JINA_API_KEY=your-key
Get key from: https://jina.ai/
Environment Variable Groups
Credential Storage (Required)
OPEN_NOTEBOOK_ENCRYPTION_KEY # Required for storing credentials
AI provider API keys are configured via Settings → API Keys (not environment variables).
Database
SURREAL_URL
SURREAL_USER
SURREAL_PASSWORD
SURREAL_NAMESPACE
SURREAL_DATABASE
Performance
SURREAL_COMMANDS_MAX_TASKS
SURREAL_COMMANDS_RETRY_ENABLED
SURREAL_COMMANDS_RETRY_MAX_ATTEMPTS
SURREAL_COMMANDS_RETRY_WAIT_STRATEGY
SURREAL_COMMANDS_RETRY_WAIT_MIN
SURREAL_COMMANDS_RETRY_WAIT_MAX
API Settings
API_URL
INTERNAL_API_URL
API_CLIENT_TIMEOUT
ESPERANTO_LLM_TIMEOUT
Audio/TTS
TTS_BATCH_SIZE
Note: ELEVENLABS_API_KEY is deprecated. Configure ElevenLabs via Settings → API Keys.
Debugging
LANGCHAIN_TRACING_V2
LANGCHAIN_ENDPOINT
LANGCHAIN_API_KEY
LANGCHAIN_PROJECT
Testing Configuration
Quick Test
# Test API health
curl http://localhost:5055/health
# Test with sample (requires configured credential and registered models)
curl -X POST http://localhost:5055/api/chat \
-H "Content-Type: application/json" \
-d '{"message":"Hello"}'
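For scripted checks it helps to wait until the API is actually up before hitting endpoints; a sketch (`wait_for_health` is a helper invented here; `/health` is the endpoint shown above):

```shell
# Poll a health URL until it answers, or give up after N tries.
wait_for_health() {
  url="$1"
  tries="${2:-30}"   # default: 30 tries x 2s = ~60s
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}
```

Usage: `wait_for_health http://localhost:5055/health && echo "API up"`.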
Validate Config
# Check environment variables are set
env | grep OPEN_NOTEBOOK_ENCRYPTION_KEY
# Print the configured database URL (does not test connectivity)
python -c "import os; print(os.getenv('SURREAL_URL'))"
Troubleshooting Performance
High Memory Usage
# Reduce concurrency
SURREAL_COMMANDS_MAX_TASKS=2
# Reduce TTS batch size
TTS_BATCH_SIZE=1
High CPU Usage
# Check worker count
SURREAL_COMMANDS_MAX_TASKS
# Reduce below the default (5) if CPU is maxed out:
SURREAL_COMMANDS_MAX_TASKS=3
Slow Responses
# Check timeout settings
API_CLIENT_TIMEOUT=300
# Check retry config
SURREAL_COMMANDS_RETRY_MAX_ATTEMPTS=3
Database Conflicts
# Reduce concurrency
SURREAL_COMMANDS_MAX_TASKS=3
# Use jitter strategy
SURREAL_COMMANDS_RETRY_WAIT_STRATEGY=exponential_jitter
Backup & Restore
Data Locations
| Path | Contents |
|---|---|
| ./data or /app/data | Uploads, podcasts, checkpoints |
| ./surreal_data or /mydata | SurrealDB database files |
Quick Backup
# Stop services (recommended for consistency)
docker compose down
# Create timestamped backup
tar -czf backup-$(date +%Y%m%d-%H%M%S).tar.gz \
notebook_data/ surreal_data/
# Restart services
docker compose up -d
Automated Backup Script
#!/bin/bash
# backup.sh - Run daily via cron
BACKUP_DIR="/path/to/backups"
DATE=$(date +%Y%m%d-%H%M%S)
# Create backup
tar -czf "$BACKUP_DIR/open-notebook-$DATE.tar.gz" \
/path/to/notebook_data \
/path/to/surreal_data
# Keep only last 7 days
find "$BACKUP_DIR" -name "open-notebook-*.tar.gz" -mtime +7 -delete
echo "Backup complete: open-notebook-$DATE.tar.gz"
Add to cron:
# Daily backup at 2 AM
0 2 * * * /path/to/backup.sh >> /var/log/open-notebook-backup.log 2>&1
Restore
# Stop services
docker compose down
# Remove old data (careful!)
rm -rf notebook_data/ surreal_data/
# Extract backup
tar -xzf backup-20240115-120000.tar.gz
# Restart services
docker compose up -d
Migration Between Servers
# On source server
docker compose down
tar -czf open-notebook-migration.tar.gz notebook_data/ surreal_data/
# Transfer to new server
scp open-notebook-migration.tar.gz user@newserver:/path/
# On new server
tar -xzf open-notebook-migration.tar.gz
docker compose up -d
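A transfer can silently truncate or corrupt a large archive; a checksum round-trip catches this (filename matches the example above — record on the source, verify on the destination after copying both files):

```shell
# Record and verify the migration archive's checksum.
ARCHIVE=open-notebook-migration.tar.gz
# Placeholder so this sketch runs standalone; in practice the archive exists:
[ -f "$ARCHIVE" ] || tar -czf "$ARCHIVE" --files-from /dev/null

# On the source server:
sha256sum "$ARCHIVE" > "$ARCHIVE.sha256"

# On the destination, after transferring both files:
sha256sum -c "$ARCHIVE.sha256"
```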
Container Management
Common Commands
# Start services
docker compose up -d
# Stop services
docker compose down
# View logs (all services)
docker compose logs -f
# View logs (specific service)
docker compose logs -f api
# Restart specific service
docker compose restart api
# Update to latest version
docker compose down
docker compose pull
docker compose up -d
# Check resource usage
docker stats
# Check service health
docker compose ps
Clean Up
# Remove stopped containers
docker compose rm
# Remove unused images
docker image prune
# Full cleanup (careful!)
docker system prune -a
Summary
Most deployments need:
- One AI provider API key
- Default database settings
- Default timeouts
Tune performance only if:
- You have specific bottlenecks
- High-concurrency workload
- Custom hardware (very fast or very slow)
Advanced features:
- Firecrawl for better web scraping
- LangSmith for debugging workflows
- Custom CA bundles for self-signed certs