* feat: replace provider config with credential-based system (#477)
Introduce a new credential management system replacing the old
ProviderConfig singleton and standalone Models page. Each credential
stores encrypted API keys and provider-specific configuration with
full CRUD support via a unified settings UI.
Backend:
- Add Credential domain model with encrypted API key storage
- Add credentials API router (CRUD, discovery, registration, testing)
- Add encryption utilities for secure key storage
- Add key_provider for DB-first env-var fallback provisioning
- Add connection tester and model discovery services
- Integrate ModelManager with credential-based config
- Add provider name normalization for Esperanto compatibility
- Add database migrations 11-12 for credential schema
Frontend:
- Rewrite settings/api-keys page with credential management UI
- Add model discovery dialog with search and custom model support
- Add compact default model assignments (primary/advanced layout)
- Add inline model testing and credential connection testing
- Add env-var migration banner
- Update navigation to unified settings page
- Remove standalone models page and old settings components
i18n:
- Update all 7 locale files with credential and model management keys
Closes #477
Co-Authored-By: JFMD <git@jfmd.us>
Co-Authored-By: OraCatQAQ <570768706@qq.com>
* fix: address PR #540 review comments
- Fix docs referencing removed Models page
- Fix error-handler returning raw messages instead of i18n keys
- Fix auth.py misleading docstring and missing no-password guard
- Fix connection_tester using wrong env var for openai_compatible
- Add provision_provider_keys before model discovery/sync
- Update CLAUDE.md to reflect credential-based system
- Fix missing closing brace in api-keys page useEffect
* fix: add logging to credential migration and surface errors in UI
- Add comprehensive logging to migrate-from-env and
migrate-from-provider-config endpoints (start, per-provider
progress, success/failure with stack traces, final summary)
- Fix frontend migration hooks ignoring errors array from response
- Show error toast when migration fails instead of "nothing to migrate"
- Invalidate status/envStatus queries after migration so banner updates
* docs: update CLAUDE.md files for credential system
Replace stale ProviderConfig and /api-keys/ references across 8 CLAUDE.md
files to reflect the new Credential-based system from PR #540.
* docs: update user documentation for credential-based system
Replace env var API key instructions with Settings UI credential
workflow across all user-facing documentation. The new flow is:
set OPEN_NOTEBOOK_ENCRYPTION_KEY → start services → add credential
in Settings UI → test → discover models → register.
- Rewrite ai-providers.md, api-configuration.md, environment-reference.md
- Update all quick-start guides and installation docs
- Update ollama.md, openai-compatible.md, local-tts/stt networking sections
- Update reverse-proxy.md, development-setup.md, security.md
- Fix broken links to non-existent docs/deployment/ paths
- Add credentials endpoints to api-reference.md
- Move all API key env vars to deprecated/legacy sections
* chore: bump version to 1.7.0-rc1
Release candidate for credential-based provider management system.
* fix: initialize provider before try block in test_credential
Prevents UnboundLocalError when Credential.get() throws (e.g.,
invalid credential_id) before provider is assigned.
* fix: reorder down migration to drop index before table
Removes duplicate REMOVE FIELD statement and reorders so the index
is dropped before the table, preventing rollback failures.
* refactor: simplify encryption key to always derive via SHA-256
Remove the dual code path in _ensure_fernet_key() that detected native
Fernet keys. Since the credential system is new, always deriving via
SHA-256 removes unnecessary complexity. Also removes the generate_key()
function and Fernet.generate_key() references from docs.
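The single remaining derivation path can be sketched as follows (the helper name is illustrative; the real code lives in `_ensure_fernet_key()`):

```python
import base64
import hashlib

def derive_fernet_key(passphrase: str) -> bytes:
    """Derive a Fernet-compatible key from an arbitrary passphrase.

    SHA-256 always yields 32 bytes, and urlsafe-base64 encoding those
    bytes produces the 44-character key format Fernet expects, so any
    string (e.g. OPEN_NOTEBOOK_ENCRYPTION_KEY) is a valid input.
    """
    digest = hashlib.sha256(passphrase.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)
```

The result can be passed directly to `cryptography.fernet.Fernet(...)`; because the derivation is deterministic, the same environment value always decrypts previously stored API keys.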
* fix: correct mock patch targets in embedding tests and URL validation
Fix embedding tests patching wrong module path for model_manager
(was targeting open_notebook.utils.embedding.model_manager but it's
imported locally from open_notebook.ai.models). Also fix URL validation
to allow unresolvable hostnames since they may be valid in the
deployment environment (e.g., Azure endpoints, internal DNS).
* feat: add global setup banner for encryption and migration status
Show a persistent banner in AppShell when encryption key is missing
(red) or env var API keys can be migrated (amber), so users see
these prompts on every page instead of only on Settings > API Keys.
Includes a docs link for the encryption banner and i18n support
across all 7 locales.
* docs: several improvements to docker-compose and env examples
* Update README.md
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* docs: fix env var format in README and update model setup instructions
Align the encryption key snippet in README Step 2 with the list
format used in the compose file. Replace deprecated "Settings →
Models" instructions with credential-based Discover Models flow.
* fix: address credential system review issues
- Fix SSRF bypass via IPv4-mapped IPv6 addresses (::ffff:169.254.x.x)
- Fix TTS connection test missing config parameter
- Add Azure-specific model discovery using api-key auth header
- Add Vertex static model list for credential-based discovery
- Fix PROVIDER_DISCOVERY_FUNCTIONS incorrect azure/vertex mapping
- Extract business logic to api/credentials_service.py (service layer)
- Move credential Pydantic schemas to api/models.py
- Update tests to use new service imports and ValueError assertions
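The IPv4-mapped IPv6 bypass is closed by unwrapping the embedded IPv4 address before running the private-range checks. A stdlib-only sketch of the idea (the actual guard lives in the connection tester):

```python
import ipaddress

def is_blocked_target(host: str) -> bool:
    """Reject loopback/link-local/private IP literals, including IPv4
    addresses smuggled inside IPv6 notation (::ffff:169.254.x.x)."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not an IP literal; checked after DNS resolution
    # IPv6Address exposes .ipv4_mapped; unwrap it so 169.254.x.x cannot
    # slip past the checks below dressed up as an IPv6 address.
    mapped = getattr(addr, "ipv4_mapped", None)
    if mapped is not None:
        addr = mapped
    return addr.is_loopback or addr.is_link_local or addr.is_private
```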
* fix: sanitize error responses and migrate key_provider to Credential
- Replace raw exception messages in all credential router 500 responses
with generic error strings (internal details logged server-side only)
- Refactor key_provider.py to use Credential.get_by_provider() instead
of deprecated ProviderConfig.get_instance()
- Remove unused functions (get_provider_configs, get_default_api_key,
get_provider_config) that were dead code
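The DB-first, env-var-fallback lookup in key_provider.py reduces to a pattern like this (the mapping and helper names here are illustrative, not the real API):

```python
import os
from typing import Callable, Optional

# Illustrative mapping; the real table lives in key_provider.py.
ENV_FALLBACKS = {"openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY"}

def resolve_api_key(provider: str,
                    db_lookup: Callable[[str], Optional[str]]) -> Optional[str]:
    """Prefer the encrypted credential from the database; fall back to
    the legacy environment variable only when no credential exists."""
    key = db_lookup(provider)  # e.g. Credential.get_by_provider(provider)
    if key:
        return key
    env_var = ENV_FALLBACKS.get(provider)
    return os.environ.get(env_var) if env_var else None
```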
---------
Co-authored-by: JFMD <git@jfmd.us>
Co-authored-by: OraCatQAQ <570768706@qq.com>
Add comprehensive documentation for setting up local speech-to-text
using Speaches with faster-whisper. Includes model recommendations,
GPU acceleration, Docker networking, and troubleshooting.
Also updates related docs with cross-references to the new guide.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* chore: force env file on api
* docs: add reverse proxy timeout configuration for all proxies
- Add timeout configuration to Caddy examples (read_timeout, write_timeout)
- Add Traefik serversTransport timeout configuration
- Increase nginx timeout from 300s to 600s to match frontend
- Expand timeout troubleshooting section with symptoms and solutions
- Reference API_CLIENT_TIMEOUT and ESPERANTO_LLM_TIMEOUT env vars
Fixes #447
* chore: force env file on api
* fix: Docker networking and permissions documentation
- Add HOSTNAME=0.0.0.0 as default in Dockerfiles for reverse proxy compatibility
- Document Linux extra_hosts requirement for host.docker.internal
- Add user: root to SurrealDB examples for bind mount permissions on Linux
- Update environment reference and reverse proxy docs
Fixes #483, fixes #485, fixes #409
Update proxy configuration to use industry-standard environment variables
(HTTP_PROXY, HTTPS_PROXY, NO_PROXY) instead of custom variables.
The underlying libraries (esperanto, content-core, podcast-creator)
now automatically detect proxy settings from these standard variables.
- Bump content-core>=1.14.1 (fixes #494)
- Bump esperanto>=2.18
- Bump podcast-creator>=0.9
- Update documentation with new proxy configuration
* docs: update CHANGELOG for v1.6.0 release
* fix: improve error logging for chat model configuration issues (#358)
- Add detailed error logging in provision.py when model lookup fails
- Add warning logging in models.py when default model is not configured
- Add traceback logging in chat router exception handler
- Update Ollama docs with model name configuration guidance
- Update troubleshooting docs with "Failed to send message" solutions
- Bump version to 1.6.1
* chore: uvlock
* docs: add conda installation instructions to README
* docs: add conda environment setup as an alternative to uv
- Upgrade Next.js from 15.4.10 to 16.1.1
- Upgrade React from 19.1.0 to 19.2.3
- Rename middleware.ts → proxy.ts (Next.js 16 requirement)
- Update function name: middleware → proxy
- Enable proxyClientMaxBodySize configuration (now supported in Next.js 16)
- Update documentation to reference Next.js 16 requirement
- Fix upload size limit issue for files >10MB
This upgrade fixes GitHub issue #361 where users cannot upload files
larger than 10MB. The proxyClientMaxBodySize configuration option was
introduced in Next.js 16.1+ and allows configuring the proxy body size
limit to 100MB.
Fixes #361
Related to PR #405
- Added custom exception handler to ensure CORS headers are included in
all HTTP error responses from the API
- Added documentation for 413 (Payload Too Large) errors when behind
reverse proxies (nginx, traefik, kubernetes ingress)
- Added client_max_body_size to nginx configuration examples
- Documented how to configure CORS headers for proxy-level error responses
Fixes #401
Changed SurrealDB configuration from in-memory storage to RocksDB with
bind mounts for data persistence. Users' data will now survive container
restarts.
Fixes #398
Add pull_policy: always to all open_notebook service definitions
in docker-compose examples across documentation. This ensures
Docker always checks for and pulls newer images when running
docker compose up.
Closes #393
- Replace old docs structure with new comprehensive documentation
- Organize into 8 major sections (0-START-HERE through 7-DEVELOPMENT)
- Convert CONFIGURATION.md, CONTRIBUTING.md, MAINTAINER_GUIDE.md to redirects
- Remove outdated MIGRATION.md and DESIGN_PRINCIPLES.md
- Fix all internal documentation links and cross-references
- Add progressive disclosure paths for different user types
- Include 44 focused guides covering all features
- Update README.md to remove v1.0 breaking changes notice
- Add fix for quotes in environment variables causing empty URL errors
- Add SurrealDB configuration section (single container includes DB, v2 only)
- Add network timeout fix for slow connections and Chinese users
- Update table of contents with new sections
Addresses feedback from issue #316
* chore: improve podcast transcripts
* docs: update Azure OpenAI documentation to reflect STT and TTS support
Updated documentation to show that Azure OpenAI now supports all four
modalities (Language, Embedding, STT, TTS) with mode-specific configuration.
Changes:
- Updated Provider Support Matrix table to show Azure supports STT and TTS
- Added recommended STT and TTS models to Azure OpenAI section
- Added clarifying comment for STT and TTS environment variables
Related to modality-specific Azure configuration implementation.
* chore: improve podcast transcripts
* fix: remove date from insight - fixes #241
* fix: improve scrolling on source and insights - fixes #237
* chore: update esperanto to fix #234
* chore: update esperanto to fix #226
* fix: process vectorization as subcommands to handle larger documents more gracefully - fixes #229
* feat: enable background job retry capabilities
* feat: re-enable content types that were disabled during the alpha version
* fix: remove unnecessary model caching that caused many issues
* feat: support multiple azure endpoints and keys, just like openai_compatible. Fixes #215
* docs: update azure variables
* chore: bump and update dependencies
* feat: simplify reverse proxy configuration with Next.js rewrites
Add Next.js API rewrites to proxy /api/* requests internally from port 8502
to the FastAPI backend on port 5055. This eliminates the need for complex
reverse proxy configurations with multiple upstreams and location blocks.
Changes:
- Add rewrites to next.config.ts proxying /api/* to INTERNAL_API_URL
- Introduce INTERNAL_API_URL env var (defaults to http://localhost:5055)
- Update supervisord configs to pass INTERNAL_API_URL to Next.js
- Document INTERNAL_API_URL in .env.example with usage examples
- Add simplified reverse proxy examples for nginx, Traefik, Caddy, Coolify
- Update README architecture diagram to show internal proxying
- Add explanatory comments to _config route handler
Benefits:
- Reduces reverse proxy config from 12 lines to 3 (75% reduction)
- Single-port deployment (8502 only) for 95% of use cases
- Zero breaking changes - backward compatible with existing setups
- Zero performance overhead (validated through testing)
- Preserves proxy headers (X-Forwarded-*) for rate limiting/SSL
Resolves: #179
Related: OSS-321
* fix: rename _config to config to fix production routing
CRITICAL BUG FIX: The /_config endpoint has never worked in production builds
because Next.js treats folders starting with underscore as "private folders"
and excludes them from routing entirely.
This endpoint is critical for:
- Providing API_URL to the browser at runtime
- Enabling zero-config deployments with auto-detection
- Supporting reverse proxy scenarios where API URL differs from frontend URL
Changes:
- Rename frontend/src/app/_config/ → frontend/src/app/config/
- Update client code references (/_config → /config)
- Update documentation with correct endpoint path
- Bump version to 1.1.0 (minor version for new rewrites feature + bug fix)
Impact:
- Runtime configuration now works in production builds
- /config returns {"apiUrl":"http://localhost:5055"} correctly
- Auto-detection for reverse proxy deployments now functional
Related: #179, OSS-321
* fix: resolve React hook exhaustive-deps warning in AddExistingSourceDialog
Wrap performSearch function in useCallback to properly memoize it and satisfy
React Hook exhaustive-deps rule. This prevents unnecessary re-renders and
ensures the useEffect dependency array is correctly specified.
Changes:
- Import useCallback from React
- Wrap performSearch with useCallback([debouncedSearchQuery, allSources])
- Add performSearch to useEffect dependency array
* final fixes
Move runtime configuration endpoint from /api/runtime-config to /_config to avoid
conflicts with reverse proxies that route all /api/* requests to the FastAPI backend.
This fixes an issue where users with reverse proxies would see port 5055 incorrectly
appended to their API_URL even when explicitly set via environment variable.
Changes:
- Move frontend/src/app/api/runtime-config/route.ts to frontend/src/app/_config/route.ts
- Update config.ts to fetch from /_config instead of /api/runtime-config
- Add troubleshooting documentation for reverse proxy users
- Update all reverse proxy examples to show correct routing (catch-all handles /_config)
- Bump version to 1.0.11
The new /_config endpoint is automatically handled by standard reverse proxy catch-all
rules (location / { proxy_pass http://frontend; }), requiring no additional configuration
for most users.
Fixes issue where API_URL environment variable was being ignored in reverse proxy setups,
causing CORS errors with "Status code: (null)" and incorrect port 5055 being added.
* fix: increase API client timeouts for transformation operations
- Increase frontend timeout from 30s to 300s (5 minutes)
- Increase Streamlit API client timeout from 30s to 300s
- Add API_CLIENT_TIMEOUT environment variable for configurability
- Add ESPERANTO_LLM_TIMEOUT environment variable documentation
- Update .env.example with comprehensive timeout documentation
Fixes #131 - API timeout errors during transformation generation
Transformations now have sufficient time to complete on slower
hardware (Ollama, LM Studio) without frontend timeout errors.
Users can now configure timeouts for both the API client layer
(API_CLIENT_TIMEOUT) and the LLM provider layer (ESPERANTO_LLM_TIMEOUT)
to accommodate their specific hardware and network conditions.
* docs: add timeout configuration documentation
- Add comprehensive timeout troubleshooting section to common-issues.md
- Add FAQ entry about timeout errors during transformations
- Document API_CLIENT_TIMEOUT and ESPERANTO_LLM_TIMEOUT usage
- Provide specific timeout recommendations for different hardware/network scenarios
- Link to GitHub issue #131 for reference
* chore: bump
* refactor: improve timeout configuration with validation and consistency
Based on PR review feedback, this commit addresses several improvements:
**Timeout Validation:**
- Add validation to ensure timeout values are between 30s and 3600s
- Invalid values fall back to default 300s with warning logs
- Handles edge cases (negative, zero, invalid strings)
**Fix Hard-coded Timeouts:**
- Replace all hard-coded timeout values in api/client.py
- ask_simple: 300s → self.timeout
- execute_transformation: 120s → self.timeout
- embed_content: 120s → self.timeout
- create_source: 300s → self.timeout
- rebuild_embeddings: Uses smart logic (2x timeout, max 3600s)
**Improved Documentation:**
- Add clarifying comments about ms vs seconds (frontend vs backend)
- Document that frontend uses 300000ms = backend 300s
- Add inline documentation for rebuild_embeddings timeout logic
**Development Dependencies:**
- Add pytest>=8.0.0 to dev dependencies for future test coverage
This makes timeout configuration more robust, consistent, and user-friendly
while maintaining backward compatibility.
* fix text
* remove lint from docker publish workflow
* gemini base url docs
* feat: add multimodal support for openai-compatible providers
- Add helper function to check OpenAI-compatible provider availability per mode
- Update provider detection to support language, embedding, STT, and TTS modalities
- Implement mode-specific environment variable detection (LLM, EMBEDDING, STT, TTS)
- Maintain backward compatibility with generic OPENAI_COMPATIBLE_BASE_URL
- Add comprehensive unit tests for all configuration scenarios
- Update .env.example with mode-specific environment variables
- Update provider support matrix in ai-models.md
- Create comprehensive openai-compatible.md setup guide
This enables users to configure different OpenAI-compatible endpoints for
different AI capabilities (e.g., LM Studio for language models, dedicated
server for embeddings) while maintaining full backward compatibility.
* upgrade
* chore: change docker release strategy
- Add Docker image registry section explaining both Docker Hub and GHCR options
- Include GHCR alternative in Quick Start examples
- Add comments showing how to use GHCR in docker-compose examples
- Help users understand they can use either registry interchangeably
- New front-end
- Launch Chat API
- Manage Sources
- Enable re-embedding of all contents
- Sources can now be added without a notebook
- Improved settings
- Enable model selector on all chats
- Background processing for a better experience
- Dark mode
- Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files
* fix: prevent project failing to start when it cannot talk to github - fixes #128
* improve ollama documentation - see #127
* chore: update esperanto library to enable gpt-5 - see #107; update podcast-creator library to enable TTS_BATCH_SIZE - fixes #125
* add info on ollama env variables
* chore: ignore dev logs
* chore: bump