Add thinking content cleaning to notebook and source chat graphs.
Previously, models that output <think>...</think> tags (like DeepSeek)
or malformed variants without opening tags (like Nemotron) would leak
reasoning content into user-visible responses.
Changes:
- chat.py: Clean AI response content before returning messages
- source_chat.py: Same fix for source-specific chat
- text_utils.py: Handle malformed output where opening <think> tag
is missing but </think> is present
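The cleaning logic described above can be sketched as follows. This is a minimal illustration, not the actual `text_utils.py` implementation; the function name `clean_thinking_content` is assumed for the example.

```python
import re

def clean_thinking_content(text: str) -> str:
    """Strip model reasoning content from a response string.

    Handles both well-formed <think>...</think> blocks and malformed
    output where the opening <think> tag is missing but </think> is
    present (everything before the closing tag is reasoning).
    """
    # Well-formed case: remove complete <think>...</think> blocks.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Malformed case: no opening tag, but a closing tag remains;
    # drop everything up to and including the first </think>.
    if "</think>" in cleaned:
        cleaned = cleaned.split("</think>", 1)[1]
    return cleaned.strip()
```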
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit addresses two related issues in the chat interface:
1. **Fix broken reference links (OSS-310)**
- Completely rewrote convertReferencesToMarkdownLinks() with greedy pattern matching
- Now handles all edge cases: references after commas, nested brackets, bold markdown
- Added visual icon indicators (FileText, Lightbulb, FileEdit) for reference types
- Implemented proper error handling with toast notifications
- Added validation for reference types and ID lengths
2. **Fix long URL/text overflow (#172)**
- Added break-words and overflow-wrap classes to chat messages
- Long URLs and text now wrap properly within chat bubbles
- Applied fix consistently across source chat, notebook chat, and search results
**Technical Details:**
- Enhanced reference detection algorithm processes from end to start to preserve indices
- Context analysis (50 chars before/after) determines original formatting
- Icons are 12px, accessible, and themed appropriately
- All changes pass linting and build successfully
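The end-to-start processing mentioned above is a general index-preservation trick: replacing matches from the last to the first means a replacement that changes the string length never invalidates the offsets of earlier matches. A sketch of the idea (in Python for brevity; the real implementation is the TypeScript in `source-references.tsx`, and the `[ref:TYPE:ID]` marker format here is hypothetical):

```python
import re

def replace_references(text: str, make_link) -> str:
    """Replace [ref:TYPE:ID] markers with markdown links.

    Matches are processed from end to start so that replacing one
    match (which may change the string length) never shifts the
    offsets of matches earlier in the string.
    """
    pattern = re.compile(r"\[ref:(\w+):([\w-]+)\]")
    matches = list(pattern.finditer(text))
    for m in reversed(matches):  # end -> start
        replacement = make_link(m.group(1), m.group(2))
        text = text[:m.start()] + replacement + text[m.end():]
    return text
```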
**Files Modified:**
- frontend/src/lib/utils/source-references.tsx (core algorithm rewrite)
- frontend/src/components/source/ChatPanel.tsx (error handling + text wrapping)
- frontend/src/components/search/StreamingResponse.tsx (error handling + text wrapping)
- open_notebook/utils/token_utils.py (ruff formatting fix)
Fixes #172
Configure tiktoken to cache tokenizer encodings in ./data/tiktoken-cache
instead of using system temp directory. This prevents re-downloading
encoding files on every container restart and improves startup time.
Changes:
- Add TIKTOKEN_CACHE_DIR configuration in config.py
- Set TIKTOKEN_CACHE_DIR environment variable in token_utils.py
- Bump version to 1.0.7
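The gist of the tiktoken change can be sketched as below. tiktoken reads the `TIKTOKEN_CACHE_DIR` environment variable to locate its encoding cache; the function name here is illustrative, and the exact wiring in `config.py`/`token_utils.py` may differ.

```python
import os
from pathlib import Path

# Cache location from the commit message; in the app this lives under
# a bind-mounted ./data directory so it survives container restarts.
TIKTOKEN_CACHE_DIR = Path("./data/tiktoken-cache")

def configure_tiktoken_cache() -> None:
    """Point tiktoken at a persistent cache directory.

    Setting TIKTOKEN_CACHE_DIR before tiktoken loads an encoding keeps
    downloaded encoding files across restarts instead of re-fetching
    them into the system temp directory on every boot.
    """
    TIKTOKEN_CACHE_DIR.mkdir(parents=True, exist_ok=True)
    os.environ.setdefault("TIKTOKEN_CACHE_DIR", str(TIKTOKEN_CACHE_DIR))
```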
New front-end
- Launch Chat API
- Manage Sources
- Enable re-embedding of all content
- Sources can now be added without a notebook
- Improved settings
- Enable model selector on all chats
- Background processing for a better experience
- Dark mode
- Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files