Broad 'except Exception' could silently swallow unexpected failures.
URLError and ConnectionError are both subclasses of OSError, so
'except (ImportError, OSError)' captures all real offline/not-installed
cases while letting genuine programming errors propagate.
Also include the exception detail in the warning message so failures
are diagnosable in logs.
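The narrowed handler can be sketched as follows (a minimal stand-in, not the repo's exact code: stdlib `logging` replaces the project's loguru, and the words × 1.3 fallback mirrors the earlier token_count commit):

```python
import logging

logger = logging.getLogger(__name__)

def token_count(text: str) -> int:
    """Count tokens with tiktoken, falling back to a word-count estimate."""
    try:
        import tiktoken
        return len(tiktoken.get_encoding("o200k_base").encode(text))
    except (ImportError, OSError) as e:
        # URLError and ConnectionError both subclass OSError, so this covers
        # "not installed" and "offline" while letting genuine programming
        # errors propagate. Include the detail so logs are diagnosable.
        logger.warning("tiktoken unavailable (%r); using word estimate", e)
        return int(len(text.split()) * 1.3)
```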
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
In air-gapped / offline Docker deployments, tiktoken.get_encoding() tries
to download the encoding file from openaipublic.blob.core.windows.net.
When that request fails it raises a URLError / OSError — not an ImportError
— so the previous except clause silently missed it and the crash surfaced in
the UI.
Widened `except ImportError` to `except Exception` so all failures —
"not installed" and "network unreachable" — fall through to the word-count
fallback (words × 1.3). Added a loguru WARNING so operators can see when
the fallback is active.
TIKTOKEN_CACHE_DIR now reads from the environment with a blank-safe
fallback (`or` guard prevents os.makedirs("") on empty env var). This lets
Docker images redirect the cache to a path outside /app/data/ so user-data
volume mounts cannot shadow the pre-baked encoding.
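The blank-safe guard can be sketched like this (function name and default path are illustrative, not the repo's exact identifiers):

```python
import os

def resolve_tiktoken_cache_dir(default: str = "data/tiktoken-cache") -> str:
    # `or` guards against a *blank* env var as well as an unset one,
    # so os.makedirs("") can never be reached.
    cache_dir = os.environ.get("TIKTOKEN_CACHE_DIR") or default
    os.makedirs(cache_dir, exist_ok=True)
    os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
    return cache_dir
```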
Both images now pre-download the o200k_base encoding during the builder
stage (internet is available at build time) and copy it into the runtime
image at /app/tiktoken-cache. ENV TIKTOKEN_CACHE_DIR=/app/tiktoken-cache
is set in the runtime stage so no network call is ever needed at runtime.
Added test_token_count_network_error_fallback in tests/test_utils.py:
patches tiktoken.get_encoding with a URLError and asserts token_count()
returns a positive int instead of raising.
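The shape of that test, sketched self-contained (a stub `tiktoken` module is installed so the sketch runs even where the package is absent; the real test patches the installed package and imports the project's `token_count`):

```python
import sys
import types
from urllib.error import URLError

# Stub tiktoken whose get_encoding simulates an air-gapped host.
stub = types.ModuleType("tiktoken")
def _offline_get_encoding(name):
    raise URLError("blob.core.windows.net unreachable")
stub.get_encoding = _offline_get_encoding
sys.modules["tiktoken"] = stub

def token_count(text: str) -> int:
    # Stand-in for open_notebook.utils.token_utils.token_count.
    try:
        import tiktoken
        return len(tiktoken.get_encoding("o200k_base").encode(text))
    except (ImportError, OSError):
        return int(len(text.split()) * 1.3)

def test_token_count_network_error_fallback():
    result = token_count("some sample text for counting")
    assert isinstance(result, int)
    assert result > 0
```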
Fixes #264
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add batching to generate_embeddings() (50 texts per batch with per-batch retry)
to prevent 413 Payload Too Large errors on large documents
- Add 413 error classification rule for user-friendly error messages
- Fix misleading "Created 0 embedded chunks" log in process_source_command
by removing premature get_embedded_chunks() call (embedding is fire-and-forget)
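The batching pattern, as a minimal sketch (`embed_fn` is a hypothetical stand-in for the provider call; the real function lives in open_notebook/utils/embedding.py and its signature may differ):

```python
def batched(items, size=50):
    # Yield successive slices of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def generate_embeddings(texts, embed_fn, batch_size=50, retries=3):
    # Embed in batches so one oversized request cannot trigger a
    # 413 Payload Too Large; retry each batch independently.
    vectors = []
    for batch in batched(texts, batch_size):
        for attempt in range(retries):
            try:
                vectors.extend(embed_fn(batch))
                break
            except Exception:
                if attempt == retries - 1:
                    raise
    return vectors
```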
Closes #594
Resolve conflicts in ask.py and chat.py by merging the try/except error
handling from main with the extract_text_content helper from the PR.
Also apply the same fix to source_chat.py and transformation.py which
had the same vulnerable isinstance/str() pattern for structured LLM
response content (e.g. Gemini's envelope format).
- Catch ConfigurationError alongside ValueError in get_default_model()
to preserve graceful fallback after ValueError→ConfigurationError migration
- Add _truncate() helper to error_classifier to cap pass-through and
default error messages at 200 chars, avoiding verbose internal details
Replace generic "An unexpected error occurred" messages with descriptive,
user-friendly error messages when LLM operations fail. Errors like invalid
API keys, wrong model names, and rate limits now surface clearly in the UI.
Adds error classification utility, global FastAPI exception handlers, and
frontend getApiErrorMessage() helper. Bumps version to 1.7.2.
* feat: replace provider config with credential-based system (#477)
Introduce a new credential management system replacing the old
ProviderConfig singleton and standalone Models page. Each credential
stores encrypted API keys and provider-specific configuration with
full CRUD support via a unified settings UI.
Backend:
- Add Credential domain model with encrypted API key storage
- Add credentials API router (CRUD, discovery, registration, testing)
- Add encryption utilities for secure key storage
- Add key_provider for DB-first env-var fallback provisioning
- Add connection tester and model discovery services
- Integrate ModelManager with credential-based config
- Add provider name normalization for Esperanto compatibility
- Add database migrations 11-12 for credential schema
Frontend:
- Rewrite settings/api-keys page with credential management UI
- Add model discovery dialog with search and custom model support
- Add compact default model assignments (primary/advanced layout)
- Add inline model testing and credential connection testing
- Add env-var migration banner
- Update navigation to unified settings page
- Remove standalone models page and old settings components
i18n:
- Update all 7 locale files with credential and model management keys
Closes #477
Co-Authored-By: JFMD <git@jfmd.us>
Co-Authored-By: OraCatQAQ <570768706@qq.com>
* fix: address PR #540 review comments
- Fix docs referencing removed Models page
- Fix error-handler returning raw messages instead of i18n keys
- Fix auth.py misleading docstring and missing no-password guard
- Fix connection_tester using wrong env var for openai_compatible
- Add provision_provider_keys before model discovery/sync
- Update CLAUDE.md to reflect credential-based system
- Fix missing closing brace in api-keys page useEffect
* fix: add logging to credential migration and surface errors in UI
- Add comprehensive logging to migrate-from-env and
migrate-from-provider-config endpoints (start, per-provider
progress, success/failure with stack traces, final summary)
- Fix frontend migration hooks ignoring errors array from response
- Show error toast when migration fails instead of "nothing to migrate"
- Invalidate status/envStatus queries after migration so banner updates
* docs: update CLAUDE.md files for credential system
Replace stale ProviderConfig and /api-keys/ references across 8 CLAUDE.md
files to reflect the new Credential-based system from PR #540.
* docs: update user documentation for credential-based system
Replace env var API key instructions with Settings UI credential
workflow across all user-facing documentation. The new flow is:
set OPEN_NOTEBOOK_ENCRYPTION_KEY → start services → add credential
in Settings UI → test → discover models → register.
- Rewrite ai-providers.md, api-configuration.md, environment-reference.md
- Update all quick-start guides and installation docs
- Update ollama.md, openai-compatible.md, local-tts/stt networking sections
- Update reverse-proxy.md, development-setup.md, security.md
- Fix broken links to non-existent docs/deployment/ paths
- Add credentials endpoints to api-reference.md
- Move all API key env vars to deprecated/legacy sections
* chore: bump version to 1.7.0-rc1
Release candidate for credential-based provider management system.
* fix: initialize provider before try block in test_credential
Prevents UnboundLocalError when Credential.get() throws (e.g.,
invalid credential_id) before provider is assigned.
* fix: reorder down migration to drop index before table
Removes duplicate REMOVE FIELD statement and reorders so the index
is dropped before the table, preventing rollback failures.
* refactor: simplify encryption key to always derive via SHA-256
Remove the dual code path in _ensure_fernet_key() that detected native
Fernet keys. Since the credential system is new, always deriving via
SHA-256 removes unnecessary complexity. Also removes the generate_key()
function and Fernet.generate_key() references from docs.
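The single derivation path can be sketched with the stdlib alone (function name is illustrative; Fernet expects a urlsafe-base64-encoded 32-byte key, which SHA-256 provides directly):

```python
import base64
import hashlib

def derive_fernet_key(secret: str) -> bytes:
    # Always derive: SHA-256 of the configured secret, urlsafe-base64
    # encoded, yields the 32-byte key format Fernet expects. No second
    # code path for "native" Fernet keys.
    digest = hashlib.sha256(secret.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)
```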
* fix: correct mock patch targets in embedding tests and URL validation
Fix embedding tests patching wrong module path for model_manager
(was targeting open_notebook.utils.embedding.model_manager but it's
imported locally from open_notebook.ai.models). Also fix URL validation
to allow unresolvable hostnames since they may be valid in the
deployment environment (e.g., Azure endpoints, internal DNS).
* feat: add global setup banner for encryption and migration status
Show a persistent banner in AppShell when encryption key is missing
(red) or env var API keys can be migrated (amber), so users see
these prompts on every page instead of only on Settings > API Keys.
Includes a docs link for the encryption banner and i18n support
across all 7 locales.
* docs: several improvements to docker-compose and env examples
* Update README.md
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* docs: fix env var format in README and update model setup instructions
Align the encryption key snippet in README Step 2 with the list
format used in the compose file. Replace deprecated "Settings →
Models" instructions with credential-based Discover Models flow.
* fix: address credential system review issues
- Fix SSRF bypass via IPv4-mapped IPv6 addresses (::ffff:169.254.x.x)
- Fix TTS connection test missing config parameter
- Add Azure-specific model discovery using api-key auth header
- Add Vertex static model list for credential-based discovery
- Fix PROVIDER_DISCOVERY_FUNCTIONS incorrect azure/vertex mapping
- Extract business logic to api/credentials_service.py (service layer)
- Move credential Pydantic schemas to api/models.py
- Update tests to use new service imports and ValueError assertions
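The IPv4-mapped bypass fix can be sketched with the stdlib `ipaddress` module (function name and blocklist policy are illustrative; the repo's validator may check more ranges):

```python
import ipaddress

def is_blocked_address(host: str) -> bool:
    # Unwrap IPv4-mapped IPv6 (::ffff:169.254.x.x) before classifying,
    # so the mapped form cannot bypass private/link-local filtering.
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostname, not a literal address
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return addr.is_private or addr.is_link_local or addr.is_loopback
```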
* fix: sanitize error responses and migrate key_provider to Credential
- Replace raw exception messages in all credential router 500 responses
with generic error strings (internal details logged server-side only)
- Refactor key_provider.py to use Credential.get_by_provider() instead
of deprecated ProviderConfig.get_instance()
- Remove unused functions (get_provider_configs, get_default_api_key,
get_provider_config) that were dead code
---------
Co-authored-by: JFMD <git@jfmd.us>
Co-authored-by: OraCatQAQ <570768706@qq.com>
Some LLM providers (notably Gemini, DeepSeek via OpenAI-compatible
proxies) return ai_message.content as a list of content parts:
[{'type': 'text', 'text': '...', 'extras': {...}}]
The current code uses str() on non-string content, which produces the
Python repr of the entire list — not valid JSON. This breaks
PydanticOutputParser.parse() with OutputParserException.
This adds extract_text_content() to properly unwrap text from both
string and structured content formats, applied in ask.py, chat.py,
and prompt.py.
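A minimal sketch of the helper's behavior (the real implementation in the PR may handle additional part shapes):

```python
def extract_text_content(content) -> str:
    # Gemini-style structured content arrives as a list of parts:
    # [{'type': 'text', 'text': '...', 'extras': {...}}];
    # plain providers send a bare str.
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, dict) and part.get("type") == "text":
                parts.append(part.get("text", ""))
            elif isinstance(part, str):
                parts.append(part)
        return "".join(parts)
    return str(content)
```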
Fixes #329
Adds OPEN_NOTEBOOK_CHUNK_SIZE and OPEN_NOTEBOOK_CHUNK_OVERLAP environment
variables to allow users to configure chunking behavior for different
embedding models with varying context window limits.
Key changes:
- CHUNK_SIZE is now configurable via OPEN_NOTEBOOK_CHUNK_SIZE (default: 1200)
- CHUNK_OVERLAP is configurable via OPEN_NOTEBOOK_CHUNK_OVERLAP (default: 15%)
- Validation with warnings for invalid or out-of-range values
- Updated documentation with configuration examples
This enables users of models like mxbai-embed-large with limited context
windows to reduce chunk size accordingly.
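The read-and-validate pattern can be sketched as follows (the range bounds and stdlib `logging` are assumptions; the commit only specifies the default and "warnings for invalid or out-of-range values"):

```python
import logging
import os

logger = logging.getLogger(__name__)

def read_chunk_size(default: int = 1200, lo: int = 100, hi: int = 100_000) -> int:
    # Invalid or out-of-range values fall back to the default with a warning.
    raw = os.environ.get("OPEN_NOTEBOOK_CHUNK_SIZE")
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        logger.warning("OPEN_NOTEBOOK_CHUNK_SIZE=%r is not an integer; using %d", raw, default)
        return default
    if not lo <= value <= hi:
        logger.warning("OPEN_NOTEBOOK_CHUNK_SIZE=%d out of range; using %d", value, default)
        return default
    return value
```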
Closes #510
SqliteSaver does not support async methods like aget_state().
Use asyncio.to_thread() to run the sync get_state() call from
async context, maintaining compatibility with the existing
sync graph invocations.
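The wrapper pattern, sketched with a fake graph object standing in for a compiled LangGraph graph:

```python
import asyncio

class FakeGraph:
    # Stand-in for a graph compiled with SqliteSaver.
    def get_state(self, config):
        return {"thread_id": config["thread_id"], "values": []}

async def get_state_async(graph, config):
    # SqliteSaver does not implement aget_state(); asyncio.to_thread
    # runs the sync get_state() in a worker thread so the event loop
    # is never blocked.
    return await asyncio.to_thread(graph.get_state, config)
```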
Closes #509
* fix: filter empty content in rebuild embeddings queries
Update collect_items_for_rebuild() to properly filter out items with
empty or whitespace-only content before submitting embedding jobs.
Changes:
- Sources: add string::trim(full_text) != '' filter
- Notes: add string::trim(content) != '' filter
- Insights: add content != none AND string::trim(content) != '' filter
(previously had no content filter at all)
This prevents unnecessary job submissions that would fail validation
in the individual embed commands.
Ref #513
* feat: add command_id to embedding error logs
Add get_command_id() helper to extract command_id from execution context.
Include command_id in error logs for all embedding commands:
- embed_note_command
- embed_insight_command
- embed_source_command
- create_insight_command
This makes it easier to trace failed embedding jobs back to specific
command records in the database.
Ref #513
* fix: improve logging for embedding commands
Log improvements:
- Add command_id to all embedding error logs for traceability
- Transaction conflicts in repo_insert now log at DEBUG (not ERROR)
- Embedding API errors log at DEBUG, only ERROR when retries exhausted
- Friendlier retry messages: "This will be retried automatically"
- Include model name and command_id in generate_embeddings errors
Files changed:
- commands/embedding_commands.py: command_id in logs, friendlier messages
- open_notebook/database/repository.py: DEBUG for transaction conflicts
- open_notebook/utils/embedding.py: DEBUG logging, pass-through command_id
Ref #513
* fix: correct field names in rebuild embeddings status endpoint
The API status endpoint was looking for wrong field names:
- sources_processed → sources_submitted
- notes_processed → notes_submitted
- insights_processed → insights_submitted
- processed_items → jobs_submitted
- failed_items → failed_submissions
The command outputs "_submitted" because embedding happens async
(we count jobs submitted, not items processed).
Ref #513
* fix: update rebuild UI text to reflect async job submission
Changed terminology from "Completed/processed" to "Jobs Submitted"
since the rebuild command submits embedding jobs for async processing,
not completing them synchronously.
Updated in all locales: en-US, pt-BR, zh-CN, zh-TW, ja-JP
Ref #513
* refactor: migrate retry strategy from allowlist to blocklist
- Change from `retry_on: [RuntimeError, ...]` to `stop_on: [ValueError]`
- This is more resilient: new exception types auto-retry by default
- Simplified exception handling: ValueError = permanent, else = retry
- Transient errors logged at DEBUG (surreal-commands logs final failure)
- Permanent errors (ValueError) logged at ERROR
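The classification logic reduces to a small predicate (a sketch; the real handler lives in the surreal-commands retry layer and its name is an assumption):

```python
import logging

logger = logging.getLogger(__name__)

def handle_command_error(exc: Exception) -> bool:
    """Return True if the command should be retried."""
    if isinstance(exc, ValueError):
        # Blocklisted: bad input is permanent and will not improve on retry.
        logger.error("permanent failure: %s", exc)
        return False
    # Everything else is transient by default, so new exception types
    # auto-retry; surreal-commands logs the final failure.
    logger.debug("transient failure, will retry: %s", exc)
    return True
```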
Ref #513
Update proxy configuration to use industry-standard environment variables
(HTTP_PROXY, HTTPS_PROXY, NO_PROXY) instead of custom variables.
The underlying libraries (esperanto, content-core, podcast-creator)
now automatically detect proxy settings from these standard variables.
- Bump content-core>=1.14.1 (fixes #494)
- Bump esperanto>=2.18
- Bump podcast-creator>=0.9
- Update documentation with new proxy configuration
* feat: decrease chunking size for maximum ollama compatibility
* docs: improve i18n info on CLAUDE.md
* feat: add cascade deletion for notebooks with delete preview
- Add Notebook.get_delete_preview() to show counts of affected items
- Add Notebook.delete(delete_exclusive_sources) for cascade deletion
- Always delete notes when notebook is deleted
- Allow user to choose: delete or keep exclusive sources
- Shared sources are always unlinked but never deleted
- Add NotebookDeleteDialog component with radio button options
- Add delete-preview API endpoint
- Update delete endpoint with delete_exclusive_sources param
- Add i18n support for all 5 locales
Closes #77
* docs: remove hardcoded config settings
* feat: content-type aware chunking and unified embedding
- Add chunking.py with HTML, Markdown, and plain text detection
- Add embedding.py with mean pooling for large content
- Create dedicated commands: embed_note, embed_insight, embed_source
- Use fire-and-forget pattern for embedding via submit_command()
- Refactor rebuild_embeddings_command to delegate to individual commands
- Remove legacy commands and needs_embedding() methods
- Reduce chunk size to 1500 chars for Ollama compatibility
- Update CLAUDE.md documentation for new architecture
Fixes #350, #142
* fix: address code review issues
- Note.save() now returns command_id for tracking embedding jobs
- Add length check after generate_embeddings() to fail fast on mismatch
- Add numpy as explicit dependency (was transitive)
- Remove hardcoded chunk sizes from docstrings
* docs: address code review comments
- Rename "SYNC PATH" to "DOMAIN MODEL PATH" in embedding router
- Add test_chunking.py and test_embedding.py to Testing Strategy
- Clarify auto-embedding behavior for each domain model
* fix: clean thinking tags from prompt graph output
Adds clean_thinking_content() to prompt.py to handle extended thinking
models that return <think>...</think> tags. This fixes empty titles
when saving notes from chat.
* chore: remove local docker-compose from git
* fix(frontend): handle null parent_id in search results
Add defensive check for null parent_id in search results to prevent
"Cannot read properties of null (reading 'split')" error. This can
happen with orphaned records in the database.
* fix: cascade delete embeddings and insights when source is deleted
When deleting a Source, now also deletes associated:
- source_embedding records
- source_insight records
This prevents orphaned records that cause null parent_id errors
in vector search results.
* fix: add cleanup for orphan embedding/insight records in migration 10
Deletes source_embedding and source_insight records where the
linked source no longer exists (source.id = NONE).
* chore: bump esperanto to 2.16
Increases ctx_num for Ollama models to accommodate larger notebook
context windows. See: https://github.com/lfnovo/esperanto/pull/69
Add thinking content cleaning to notebook and source chat graphs.
Previously, models that output <think>...</think> tags (like DeepSeek)
or malformed variants without opening tags (like Nemotron) would leak
reasoning content into user-visible responses.
Changes:
- chat.py: Clean AI response content before returning messages
- source_chat.py: Same fix for source-specific chat
- text_utils.py: Handle malformed output where opening <think> tag
is missing but </think> is present
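The two cases above can be sketched in one function (a minimal version; the repo's text_utils.py may handle nesting and streaming differently):

```python
import re

def clean_thinking_content(text: str) -> str:
    # Well-formed case: drop <think>...</think> blocks entirely.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Malformed case (e.g. Nemotron): a closing </think> with no opener
    # means everything before it is reasoning; keep only what follows.
    if "</think>" in cleaned:
        cleaned = cleaned.split("</think>")[-1]
    return cleaned.strip()
```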
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit addresses two related issues in the chat interface:
1. **Fix broken reference links (OSS-310)**
- Completely rewrote convertReferencesToMarkdownLinks() with greedy pattern matching
- Now handles all edge cases: references after commas, nested brackets, bold markdown
- Added visual icon indicators (FileText, Lightbulb, FileEdit) for reference types
- Implemented proper error handling with toast notifications
- Added validation for reference types and ID lengths
2. **Fix long URL/text overflow (#172)**
- Added break-words and overflow-wrap classes to chat messages
- Long URLs and text now wrap properly within chat bubbles
- Applied fix consistently across source chat, notebook chat, and search results
**Technical Details:**
- Enhanced reference detection algorithm processes from end to start to preserve indices
- Context analysis (50 chars before/after) determines original formatting
- Icons are 12px, accessible, and themed appropriately
- All changes pass linting and build successfully
**Files Modified:**
- frontend/src/lib/utils/source-references.tsx (core algorithm rewrite)
- frontend/src/components/source/ChatPanel.tsx (error handling + text wrapping)
- frontend/src/components/search/StreamingResponse.tsx (error handling + text wrapping)
- open_notebook/utils/token_utils.py (ruff formatting fix)
Fixes #172
Configure tiktoken to cache tokenizer encodings in ./data/tiktoken-cache
instead of using system temp directory. This prevents re-downloading
encoding files on every container restart and improves startup time.
Changes:
- Add TIKTOKEN_CACHE_DIR configuration in config.py
- Set TIKTOKEN_CACHE_DIR environment variable in token_utils.py
- Bump version to 1.0.7
- New front-end
- Launch Chat API
- Manage Sources
- Enable re-embedding of all contents
- Sources can be added without a notebook now
- Improved settings
- Enable model selector on all chats
- Background processing for better experience
- Dark mode
- Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files