SqliteSaver does not support async methods like aget_state().
Use asyncio.to_thread() to run the sync get_state() call from
async context, maintaining compatibility with the existing
sync graph invocations.
Closes #509
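The workaround can be sketched as follows (the graph object and config shape are illustrative, not the project's actual names):

```python
import asyncio

async def get_state_async(graph, config):
    # SqliteSaver only implements the sync checkpoint API; calling the
    # sync graph.get_state() directly from async code would block the
    # event loop, so run it in a worker thread via asyncio.to_thread().
    return await asyncio.to_thread(graph.get_state, config)
```

Because `to_thread()` just wraps the sync call, the existing sync graph invocations keep working unchanged.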
* fix: filter empty content in rebuild embeddings queries
Update collect_items_for_rebuild() to properly filter out items with
empty or whitespace-only content before submitting embedding jobs.
Changes:
- Sources: add string::trim(full_text) != '' filter
- Notes: add string::trim(content) != '' filter
- Insights: add content != none AND string::trim(content) != '' filter
(previously had no content filter at all)
This prevents unnecessary job submissions that would fail validation
in the individual embed commands.
Ref #513
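The same guard, expressed in Python (a sketch; the actual filtering happens inside the SurrealQL queries via `string::trim`):

```python
def has_embeddable_content(content):
    # Mirrors the SurrealQL filter string::trim(content) != '':
    # skip None, empty, and whitespace-only content before
    # submitting an embedding job that would fail validation.
    return content is not None and content.strip() != ""
```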
* feat: add command_id to embedding error logs
Add get_command_id() helper to extract command_id from execution context.
Include command_id in error logs for all embedding commands:
- embed_note_command
- embed_insight_command
- embed_source_command
- create_insight_command
This makes it easier to trace failed embedding jobs back to specific
command records in the database.
Ref #513
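A hedged sketch of what such a helper might look like (the real execution-context object in surreal-commands may expose the ID differently):

```python
def get_command_id(context):
    # Pull command_id off the execution context, if present, so error
    # logs can reference the originating command record; fall back to
    # a placeholder when running outside a command context.
    return getattr(context, "command_id", None) or "unknown"
```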
* fix: improve logging for embedding commands
Log improvements:
- Add command_id to all embedding error logs for traceability
- Transaction conflicts in repo_insert now log at DEBUG (not ERROR)
- Embedding API errors log at DEBUG, only ERROR when retries exhausted
- Friendlier retry messages: "This will be retried automatically"
- Include model name and command_id in generate_embeddings errors
Files changed:
- commands/embedding_commands.py: command_id in logs, friendlier messages
- open_notebook/database/repository.py: DEBUG for transaction conflicts
- open_notebook/utils/embedding.py: DEBUG logging, pass-through command_id
Ref #513
* fix: correct field names in rebuild embeddings status endpoint
The API status endpoint was looking for wrong field names:
- sources_processed → sources_submitted
- notes_processed → notes_submitted
- insights_processed → insights_submitted
- processed_items → jobs_submitted
- failed_items → failed_submissions
The command reports "_submitted" counts because embedding happens
asynchronously (we count jobs submitted, not items processed).
Ref #513
* fix: update rebuild UI text to reflect async job submission
Changed terminology from "Completed/processed" to "Jobs Submitted"
since the rebuild command submits embedding jobs for asynchronous
processing rather than completing them synchronously.
Updated in all locales: en-US, pt-BR, zh-CN, zh-TW, ja-JP
Ref #513
* refactor: migrate retry strategy from allowlist to blocklist
- Change from `retry_on: [RuntimeError, ...]` to `stop_on: [ValueError]`
- This is more resilient: new exception types auto-retry by default
- Simplified exception handling: ValueError = permanent, else = retry
- Transient errors logged at DEBUG (surreal-commands logs final failure)
- Permanent errors (ValueError) logged at ERROR
Ref #513
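The allowlist-to-blocklist change boils down to treating ValueError as permanent and everything else as transient. A minimal sketch of that classification (function and logger names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def classify_and_log(exc):
    # Blocklist strategy: ValueError means bad input that will never
    # succeed, so fail permanently at ERROR. Any other exception type
    # is assumed transient and retried by default, so new exception
    # types auto-retry without code changes; log those at DEBUG.
    if isinstance(exc, ValueError):
        logger.error("Permanent embedding failure: %s", exc)
        return "permanent"
    logger.debug("Transient embedding error, will be retried: %s", exc)
    return "retry"
```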
Update proxy configuration to use industry-standard environment variables
(HTTP_PROXY, HTTPS_PROXY, NO_PROXY) instead of custom variables.
The underlying libraries (esperanto, content-core, podcast-creator)
now automatically detect proxy settings from these standard variables.
- Bump content-core>=1.14.1 (fixes #494)
- Bump esperanto>=2.18
- Bump podcast-creator>=0.9
- Update documentation with new proxy configuration
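Most HTTP clients, including Python's stdlib, pick these standard variables up automatically; for example:

```python
import os
import urllib.request

# Standard proxy variables are read straight from the environment by
# urllib, requests, httpx, and similar clients -- no custom config
# needed. The proxy host below is illustrative.
os.environ["HTTP_PROXY"] = "http://proxy.example:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.example:3128"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

proxies = urllib.request.getproxies()
```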
* feat: decrease chunking size for maximum Ollama compatibility
* docs: improve i18n info on Claude.md
* feat: add cascade deletion for notebooks with delete preview
- Add Notebook.get_delete_preview() to show counts of affected items
- Add Notebook.delete(delete_exclusive_sources) for cascade deletion
- Always delete notes when notebook is deleted
- Allow user to choose: delete or keep exclusive sources
- Shared sources are always unlinked but never deleted
- Add NotebookDeleteDialog component with radio button options
- Add delete-preview API endpoint
- Update delete endpoint with delete_exclusive_sources param
- Add i18n support for all 5 locales
Closes #77
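The exclusive-vs-shared decision can be sketched as follows (data shapes and names are illustrative, not the actual domain model):

```python
def plan_source_actions(sources, delete_exclusive):
    # sources: list of (source_id, notebook_count) pairs. A source
    # linked only to this notebook is "exclusive" and may be deleted
    # if the user chose to; shared sources (linked elsewhere) are
    # always unlinked, never deleted.
    to_delete, to_unlink = [], []
    for source_id, notebook_count in sources:
        if notebook_count == 1 and delete_exclusive:
            to_delete.append(source_id)
        else:
            to_unlink.append(source_id)
    return to_delete, to_unlink
```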
* docs: remove hardcoded config settings
* feat: content-type aware chunking and unified embedding
- Add chunking.py with HTML, Markdown, and plain text detection
- Add embedding.py with mean pooling for large content
- Create dedicated commands: embed_note, embed_insight, embed_source
- Use fire-and-forget pattern for embedding via submit_command()
- Refactor rebuild_embeddings_command to delegate to individual commands
- Remove legacy commands and needs_embedding() methods
- Reduce chunk size to 1500 chars for Ollama compatibility
- Update CLAUDE.md documentation for new architecture
Fixes #350, #142
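Mean pooling over per-chunk embeddings can be sketched with numpy (chunking and model details aside):

```python
import numpy as np

def mean_pool(chunk_embeddings):
    # When content exceeds the chunk limit, each chunk is embedded
    # separately and the vectors are averaged into a single
    # document-level embedding of the same dimensionality.
    vectors = np.asarray(chunk_embeddings, dtype=np.float64)
    return vectors.mean(axis=0)
```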
* fix: address code review issues
- Note.save() now returns command_id for tracking embedding jobs
- Add length check after generate_embeddings() to fail fast on mismatch
- Add numpy as explicit dependency (was transitive)
- Remove hardcoded chunk sizes from docstrings
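The fail-fast length check amounts to something like this sketch (names are illustrative):

```python
def validate_embeddings(chunks, embeddings):
    # Fail fast if the provider returned a different number of vectors
    # than chunks submitted, instead of silently mispairing them.
    if len(embeddings) != len(chunks):
        raise ValueError(
            f"Expected {len(chunks)} embeddings, got {len(embeddings)}"
        )
    return embeddings
```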
* docs: address code review comments
- Rename "SYNC PATH" to "DOMAIN MODEL PATH" in embedding router
- Add test_chunking.py and test_embedding.py to Testing Strategy
- Clarify auto-embedding behavior for each domain model
* fix: clean thinking tags from prompt graph output
Adds clean_thinking_content() to prompt.py to handle extended thinking
models that return <think>...</think> tags. This fixes empty titles
when saving notes from chat.
* chore: remove local docker-compose from git
* fix(frontend): handle null parent_id in search results
Add defensive check for null parent_id in search results to prevent
"Cannot read properties of null (reading 'split')" error. This can
happen with orphaned records in the database.
* fix: cascade delete embeddings and insights when source is deleted
When deleting a Source, now also deletes associated:
- source_embedding records
- source_insight records
This prevents orphaned records that cause null parent_id errors
in vector search results.
* fix: add cleanup for orphan embedding/insight records in migration 10
Deletes source_embedding and source_insight records where the
linked source no longer exists (source.id = NONE).
* chore: bump esperanto to 2.16
Increases ctx_num for Ollama models to accommodate larger notebook
context windows. See: https://github.com/lfnovo/esperanto/pull/69
Add thinking content cleaning to notebook and source chat graphs.
Previously, models that output <think>...</think> tags (like DeepSeek)
or malformed variants without opening tags (like Nemotron) would leak
reasoning content into user-visible responses.
Changes:
- chat.py: Clean AI response content before returning messages
- source_chat.py: Same fix for source-specific chat
- text_utils.py: Handle malformed output where opening <think> tag
is missing but </think> is present
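A regex-based sketch of the cleaning, covering both well-formed blocks and the malformed variant where only the closing tag appears (the project's actual implementation may differ):

```python
import re

def clean_thinking_content(text):
    # Strip well-formed <think>...</think> reasoning blocks first.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Malformed output (opening tag missing, as seen with Nemotron):
    # if a stray </think> remains, everything before it is reasoning
    # content -- keep only what follows the last closing tag.
    if "</think>" in cleaned:
        cleaned = cleaned.split("</think>")[-1]
    return cleaned.strip()
```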
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit addresses two related issues in the chat interface:
1. **Fix broken reference links (OSS-310)**
- Completely rewrote convertReferencesToMarkdownLinks() with greedy pattern matching
- Now handles all edge cases: references after commas, nested brackets, bold markdown
- Added visual icon indicators (FileText, Lightbulb, FileEdit) for reference types
- Implemented proper error handling with toast notifications
- Added validation for reference types and ID lengths
2. **Fix long URL/text overflow (#172)**
- Added break-words and overflow-wrap classes to chat messages
- Long URLs and text now wrap properly within chat bubbles
- Applied fix consistently across source chat, notebook chat, and search results
**Technical Details:**
- Enhanced reference detection algorithm processes from end to start to preserve indices
- Context analysis (50 chars before/after) determines original formatting
- Icons are 12px, accessible, and themed appropriately
- All changes pass linting and build successfully
**Files Modified:**
- frontend/src/lib/utils/source-references.tsx (core algorithm rewrite)
- frontend/src/components/source/ChatPanel.tsx (error handling + text wrapping)
- frontend/src/components/search/StreamingResponse.tsx (error handling + text wrapping)
- open_notebook/utils/token_utils.py (ruff formatting fix)
fixes #172
Configure tiktoken to cache tokenizer encodings in ./data/tiktoken-cache
instead of using system temp directory. This prevents re-downloading
encoding files on every container restart and improves startup time.
Changes:
- Add TIKTOKEN_CACHE_DIR configuration in config.py
- Set TIKTOKEN_CACHE_DIR environment variable in token_utils.py
- Bump version to 1.0.7
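The key detail is that the environment variable must be set before tiktoken loads an encoding; a sketch (the path is illustrative):

```python
import os
from pathlib import Path

# Point tiktoken at a persistent cache directory inside the data
# volume so encoding files survive container restarts instead of
# being re-downloaded into the system temp directory each time.
cache_dir = Path("./data/tiktoken-cache")
cache_dir.mkdir(parents=True, exist_ok=True)
os.environ["TIKTOKEN_CACHE_DIR"] = str(cache_dir)
```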
New front-end
Launch Chat API
Manage Sources
Enable re-embedding of all contents
Sources can now be added without a notebook
Improved settings
Enable model selector on all chats
Background processing for better experience
Dark mode
Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files