# Complete Environment Reference

Comprehensive list of all environment variables available in Open Notebook.
## API Configuration

| Variable | Required? | Default | Description |
|---|---|---|---|
| `API_URL` | No | Auto-detected | URL where the frontend reaches the API (e.g., `http://localhost:5055`) |
| `INTERNAL_API_URL` | No | `http://localhost:5055` | Internal API URL for Next.js server-side proxying |
| `API_CLIENT_TIMEOUT` | No | `300` | Client timeout in seconds (how long to wait for an API response) |
| `OPEN_NOTEBOOK_PASSWORD` | No | None | Password to protect the Open Notebook instance |
| `OPEN_NOTEBOOK_ENCRYPTION_KEY` | Yes | None | Secret string used to encrypt credentials stored in the database (any string works). Required for the credential system. Supports Docker secrets via the `_FILE` suffix. |
| `HOSTNAME` | No | `0.0.0.0` (in Docker) | Network interface for Next.js to bind to. The default `0.0.0.0` ensures accessibility from reverse proxies. |
**Important:** `OPEN_NOTEBOOK_ENCRYPTION_KEY` is required for storing AI provider credentials via the Settings UI. Without it, you cannot save credentials. If you change or lose this key, all stored credentials become unreadable.
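Although any string works, a long random value is safer than a memorable phrase. A minimal sketch using `openssl` (the `export` line is just one way to supply the key to the services):

```shell
# Generate a 64-character random hex string for use as the encryption key
openssl rand -hex 32

# One way to supply it: export it before starting the services
export OPEN_NOTEBOOK_ENCRYPTION_KEY="$(openssl rand -hex 32)"
echo "key length: ${#OPEN_NOTEBOOK_ENCRYPTION_KEY}"   # → key length: 64
```

Persist the generated value somewhere safe; regenerating it on each start would make previously stored credentials unreadable.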
## Database: SurrealDB

| Variable | Required? | Default | Description |
|---|---|---|---|
| `SURREAL_URL` | Yes | `ws://surrealdb:8000/rpc` | SurrealDB WebSocket connection URL |
| `SURREAL_USER` | Yes | `root` | SurrealDB username |
| `SURREAL_PASSWORD` | Yes | `root` | SurrealDB password |
| `SURREAL_NAMESPACE` | Yes | `open_notebook` | SurrealDB namespace |
| `SURREAL_DATABASE` | Yes | `open_notebook` | SurrealDB database name |
## Database: Retry Configuration

| Variable | Required? | Default | Description |
|---|---|---|---|
| `SURREAL_COMMANDS_RETRY_ENABLED` | No | `true` | Enable retries on failure |
| `SURREAL_COMMANDS_RETRY_MAX_ATTEMPTS` | No | `3` | Maximum retry attempts |
| `SURREAL_COMMANDS_RETRY_WAIT_STRATEGY` | No | `exponential_jitter` | Retry wait strategy (`exponential_jitter`/`exponential`/`fixed`/`random`) |
| `SURREAL_COMMANDS_RETRY_WAIT_MIN` | No | `1` | Minimum wait time between retries (seconds) |
| `SURREAL_COMMANDS_RETRY_WAIT_MAX` | No | `30` | Maximum wait time between retries (seconds) |
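For intuition: with the defaults above, an exponential strategy roughly doubles the wait after each failed attempt, capped at `SURREAL_COMMANDS_RETRY_WAIT_MAX`; jitter then randomizes the actual wait below that cap. An illustrative sketch of the bounds only, not the actual retry implementation:

```shell
# Illustrative only: upper bound of the wait before each successive retry
min=1; max=30
wait=$min
for attempt in 1 2 3; do
  echo "attempt $attempt: wait up to ${wait}s (jitter randomizes below this cap)"
  wait=$(( wait * 2 ))
  [ "$wait" -gt "$max" ] && wait=$max
done
```

With `WAIT_MIN=1` and `WAIT_MAX=30`, the caps for the first few retries are 1s, 2s, 4s, and so on.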
## Database: Concurrency

| Variable | Required? | Default | Description |
|---|---|---|---|
| `SURREAL_COMMANDS_MAX_TASKS` | No | `5` | Maximum concurrent database tasks |
## LLM Timeouts

| Variable | Required? | Default | Description |
|---|---|---|---|
| `ESPERANTO_LLM_TIMEOUT` | No | `60` | LLM inference timeout in seconds |
| `ESPERANTO_SSL_VERIFY` | No | `true` | Verify SSL certificates (`false` is for development only) |
| `ESPERANTO_SSL_CA_BUNDLE` | No | None | Path to a custom CA certificate bundle |
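For example, behind a TLS-intercepting corporate proxy, keep verification enabled and point at your own CA bundle rather than setting `ESPERANTO_SSL_VERIFY=false` (the bundle path below is a placeholder):

```shell
# Keep certificate verification enabled; trust the corporate CA instead
ESPERANTO_SSL_VERIFY=true
ESPERANTO_SSL_CA_BUNDLE=/etc/ssl/certs/corp-ca-bundle.pem   # placeholder path
```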
## Text-to-Speech (TTS)

| Variable | Required? | Default | Description |
|---|---|---|---|
| `TTS_BATCH_SIZE` | No | `5` | Concurrent TTS requests (1-5, depending on the provider) |
## Content Extraction

| Variable | Required? | Default | Description |
|---|---|---|---|
| `FIRECRAWL_API_KEY` | No | None | Firecrawl API key for advanced web scraping |
| `JINA_API_KEY` | No | None | Jina AI API key for web extraction |

Setup:

- Firecrawl: https://firecrawl.dev/
- Jina: https://jina.ai/
## Network / Proxy

| Variable | Required? | Default | Description |
|---|---|---|---|
| `HTTP_PROXY` | No | None | HTTP proxy URL for outbound HTTP requests |
| `HTTPS_PROXY` | No | None | HTTPS proxy URL for outbound HTTPS requests |
| `NO_PROXY` | No | None | Comma-separated list of hosts that bypass the proxy |

These variables route all outbound HTTP requests through a proxy server, which is useful in corporate or firewalled environments. The underlying libraries (esperanto, content-core, podcast-creator) automatically detect proxy settings from these standard environment variables.

Affects:

- AI provider API calls (OpenAI, Anthropic, Google, Groq, etc.)
- Content extraction from URLs (web scraping, YouTube transcripts)
- Podcast generation (LLM and TTS provider calls)

Format: `http://[user:pass@]host:port` or `https://[user:pass@]host:port`
Examples:

```shell
# Basic proxy
HTTP_PROXY=http://proxy.corp.com:8080
HTTPS_PROXY=http://proxy.corp.com:8080

# Authenticated proxy
HTTP_PROXY=http://user:password@proxy.corp.com:8080
HTTPS_PROXY=http://user:password@proxy.corp.com:8080

# Bypass proxy for local hosts
NO_PROXY=localhost,127.0.0.1,.local
```
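Because the libraries read these settings from the process environment, a quick way to confirm what is actually set is to inspect the environment directly (safe to run anywhere; both upper- and lowercase spellings are matched):

```shell
# List whichever proxy variables are set, in any casing
env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"
```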
## Debugging & Monitoring

| Variable | Required? | Default | Description |
|---|---|---|---|
| `LANGCHAIN_TRACING_V2` | No | `false` | Enable LangSmith tracing |
| `LANGCHAIN_ENDPOINT` | No | `https://api.smith.langchain.com` | LangSmith endpoint |
| `LANGCHAIN_API_KEY` | No | None | LangSmith API key |
| `LANGCHAIN_PROJECT` | No | `Open Notebook` | LangSmith project name |

Setup: https://smith.langchain.com/
## Environment Variables by Use Case

### Minimal Setup (New Installation)

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=my-secret-key
SURREAL_URL=ws://surrealdb:8000/rpc
SURREAL_USER=root
SURREAL_PASSWORD=password
SURREAL_NAMESPACE=open_notebook
SURREAL_DATABASE=open_notebook
```

Then configure AI providers via Settings → API Keys in the browser.

### Production Deployment

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=your-strong-secret-key
OPEN_NOTEBOOK_PASSWORD=your-secure-password
API_URL=https://mynotebook.example.com
SURREAL_USER=production_user
SURREAL_PASSWORD=secure_password
```

### Self-Hosted Behind Reverse Proxy

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=your-secret-key
API_URL=https://mynotebook.example.com
```

### Corporate Environment (Behind Proxy)

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=your-secret-key
HTTP_PROXY=http://proxy.corp.com:8080
HTTPS_PROXY=http://proxy.corp.com:8080
NO_PROXY=localhost,127.0.0.1
```

### High-Performance Deployment

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=your-secret-key
SURREAL_COMMANDS_MAX_TASKS=10
TTS_BATCH_SIZE=5
API_CLIENT_TIMEOUT=600
```

### Debugging

```shell
OPEN_NOTEBOOK_ENCRYPTION_KEY=your-secret-key
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-key
```
## Validation

Check whether a variable is set:

```shell
# Check a single variable
echo $OPEN_NOTEBOOK_ENCRYPTION_KEY

# Check several at once
env | grep -E "OPEN_NOTEBOOK|API_URL"

# Print all config
env | grep -E "^[A-Z_]+=" | sort
```
## Notes

- **Case-sensitive:** `OPEN_NOTEBOOK_ENCRYPTION_KEY` ≠ `open_notebook_encryption_key`
- **No spaces:** `OPEN_NOTEBOOK_ENCRYPTION_KEY=my-key`, not `OPEN_NOTEBOOK_ENCRYPTION_KEY = my-key`
- **Quote values:** Use quotes for values with spaces: `API_URL="http://my server:5055"`
- **Restart required:** Changes take effect after restarting services
- **Secrets:** Don't commit encryption keys or passwords to git
- **AI providers:** Configure via Settings → API Keys in the browser (not via env vars)
- **Migration:** Use the Settings UI to migrate existing env vars to the credential system. See API Configuration.
## Quick Setup Checklist

- Set `OPEN_NOTEBOOK_ENCRYPTION_KEY` in `docker-compose.yml`
- Set database credentials (`SURREAL_*`)
- Start services
- Open the browser → go to Settings → API Keys
- Add a credential for your AI provider
- Test the connection to verify
- Discover & register models
- Set `API_URL` if behind a reverse proxy
- Change `SURREAL_PASSWORD` in production
- Try a test chat

Done!
## Legacy: AI Provider Environment Variables (Deprecated)

**Deprecated:** The following AI provider API key environment variables are deprecated. Configure providers via the Settings UI instead. These variables may still work as a fallback but are no longer recommended.

If you have these variables configured from a previous installation, click the **Migrate to Database** button in Settings → API Keys to import them into the credential system, then remove them from your configuration.

| Variable | Provider | Replacement |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | Settings → API Keys → Add OpenAI Credential |
| `ANTHROPIC_API_KEY` | Anthropic | Settings → API Keys → Add Anthropic Credential |
| `GOOGLE_API_KEY` | Google Gemini | Settings → API Keys → Add Google Credential |
| `GEMINI_API_BASE_URL` | Google Gemini | Configure in Google Gemini credential |
| `VERTEX_PROJECT` | Vertex AI | Settings → API Keys → Add Vertex AI Credential |
| `VERTEX_LOCATION` | Vertex AI | Configure in Vertex AI credential |
| `GOOGLE_APPLICATION_CREDENTIALS` | Vertex AI | Configure in Vertex AI credential |
| `GROQ_API_KEY` | Groq | Settings → API Keys → Add Groq Credential |
| `MISTRAL_API_KEY` | Mistral | Settings → API Keys → Add Mistral Credential |
| `DEEPSEEK_API_KEY` | DeepSeek | Settings → API Keys → Add DeepSeek Credential |
| `XAI_API_KEY` | xAI | Settings → API Keys → Add xAI Credential |
| `OLLAMA_API_BASE` | Ollama | Settings → API Keys → Add Ollama Credential |
| `OPENROUTER_API_KEY` | OpenRouter | Settings → API Keys → Add OpenRouter Credential |
| `OPENROUTER_BASE_URL` | OpenRouter | Configure in OpenRouter credential |
| `VOYAGE_API_KEY` | Voyage AI | Settings → API Keys → Add Voyage AI Credential |
| `ELEVENLABS_API_KEY` | ElevenLabs | Settings → API Keys → Add ElevenLabs Credential |
| `OPENAI_COMPATIBLE_BASE_URL` | OpenAI-Compatible | Settings → API Keys → Add OpenAI-Compatible Credential |
| `OPENAI_COMPATIBLE_API_KEY` | OpenAI-Compatible | Configure in OpenAI-Compatible credential |
| `OPENAI_COMPATIBLE_BASE_URL_LLM` | OpenAI-Compatible | Configure per-service URL in credential |
| `OPENAI_COMPATIBLE_API_KEY_LLM` | OpenAI-Compatible | Configure per-service key in credential |
| `OPENAI_COMPATIBLE_BASE_URL_EMBEDDING` | OpenAI-Compatible | Configure per-service URL in credential |
| `OPENAI_COMPATIBLE_API_KEY_EMBEDDING` | OpenAI-Compatible | Configure per-service key in credential |
| `OPENAI_COMPATIBLE_BASE_URL_STT` | OpenAI-Compatible | Configure per-service URL in credential |
| `OPENAI_COMPATIBLE_API_KEY_STT` | OpenAI-Compatible | Configure per-service key in credential |
| `OPENAI_COMPATIBLE_BASE_URL_TTS` | OpenAI-Compatible | Configure per-service URL in credential |
| `OPENAI_COMPATIBLE_API_KEY_TTS` | OpenAI-Compatible | Configure per-service key in credential |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI | Settings → API Keys → Add Azure OpenAI Credential |
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI | Configure in Azure OpenAI credential |
| `AZURE_OPENAI_API_VERSION` | Azure OpenAI | Configure in Azure OpenAI credential |
| `AZURE_OPENAI_API_KEY_LLM` | Azure OpenAI | Configure per-service in credential |
| `AZURE_OPENAI_ENDPOINT_LLM` | Azure OpenAI | Configure per-service in credential |
| `AZURE_OPENAI_API_VERSION_LLM` | Azure OpenAI | Configure per-service in credential |
| `AZURE_OPENAI_API_KEY_EMBEDDING` | Azure OpenAI | Configure per-service in credential |
| `AZURE_OPENAI_ENDPOINT_EMBEDDING` | Azure OpenAI | Configure per-service in credential |
| `AZURE_OPENAI_API_VERSION_EMBEDDING` | Azure OpenAI | Configure per-service in credential |