Append os.sep to the directory path before the startswith() check so that
paths like /app/data/uploads_evil/ cannot bypass the uploads-directory
validation.
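The os.sep trick described above can be reduced to a self-contained sketch. The helper name `is_within_directory` is illustrative, not the project's actual function:

```python
import os

def is_within_directory(base_dir: str, target_path: str) -> bool:
    """Illustrative helper: True only if target_path resolves inside base_dir."""
    # realpath() resolves symlinks and ".." segments before comparing.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(target_path)
    # Appending os.sep is the key step: without it,
    # "/app/data/uploads_evil/x".startswith("/app/data/uploads") is True.
    return target == base or target.startswith(base + os.sep)

assert is_within_directory("/app/data/uploads", "/app/data/uploads/file.pdf")
assert not is_within_directory("/app/data/uploads", "/app/data/uploads_evil/x")
```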
- Bump ai-prompter to >=0.4.0 which uses Jinja2 SandboxedEnvironment,
preventing arbitrary code execution via user-provided transformation prompts
- Sanitize uploaded filenames with os.path.basename() and validate resolved
path stays within upload directory to prevent path traversal
- Validate file_path in source creation is within UPLOADS_FOLDER to prevent
arbitrary file read via Local File Inclusion
- Bump esperanto dependency to >=2.20.0 for new provider profiles
- Register both providers in credentials, key provider, connection tester, model discovery, and models router
- Add frontend provider entries (display names, modalities, docs links)
- Add documentation sections for both providers in ai-providers.md, environment-reference.md, and provider comparison
- #627: Set source.asset (URL/file_path) before save() in async creation
path so failed sources are identifiable and retry works
- #670: Only overwrite source title if it's a placeholder ("Processing...")
or empty, preserving user-set custom titles
- #651: Cascade-delete linked models when credential is deleted instead of
returning 409 Conflict; remove unused delete_models parameter
- Add tests for all three fixes (12 new tests)
- Add .harness and .mcp.json to .gitignore
* feat(podcasts): integrate model registry for profiles and credential passthrough
Replace loose provider/model string fields with record<model> references
in podcast profiles, enabling credential passthrough to podcast-creator.
Backend:
- EpisodeProfile: outline_llm, transcript_llm (record<model>) replace
outline_provider/outline_model strings. New language field (BCP 47).
- SpeakerProfile: voice_model (record<model>) replaces tts_provider/
tts_model strings. Per-speaker voice_model override support.
- Migration 14: schema changes making legacy fields optional, adding new
record<model> fields.
- Data migration (migration.py): auto-converts legacy profiles to model
registry references on startup. Idempotent.
- podcast_commands.py: resolves credentials for ALL profiles before
calling podcast-creator.
- New /api/languages endpoint (pycountry + babel) with BCP 47 locale
codes (pt-BR, en-US, etc.).
Frontend:
- Episode/speaker profile forms use ModelSelector instead of manual
provider/model dropdowns.
- Language dropdown with BCP 47 codes in episode profile form.
- Per-speaker TTS voice model override in speaker profile form.
- "Templates" tab renamed to "Profiles".
- Setup required badge on unconfigured profiles.
- i18n updated across all 8 locales.
Closes #486, closes #552
* fix(i18n): remove unused legacy podcast provider/model keys
Remove 10 orphaned i18n keys across all 8 locales that were left behind
after replacing manual provider/model dropdowns with ModelSelector.
* fix: address review violations in podcast model registry
- P1: Remove profiles with failed model resolution from dicts to prevent
podcast-creator validation errors on unrelated profiles
- P2: Use centralized QUERY_KEYS.languages instead of inline key
- P3: Fix ISO 639-1 → BCP 47 in model field description and CLAUDE.md
- P3: Update "templates" → "profiles" in locale string values (all 8)
* chore: bump version to 1.8.0
* docs: update CLAUDE.md and user docs for error handling and podcast retry
Add missing documentation for features introduced in v1.7.2 (#590) and
v1.7.3 (#595): error classification system, global exception handlers,
ConfigurationError, podcast failure recovery, and retry endpoint.
* chore: update uv.lock
* fix: surface podcast errors and enable retry for failed episodes
Fixes #335, #300
Re-raise exceptions in podcast command so surreal-commands marks jobs as
failed instead of completed. Surface error_message in API responses and
add a retry endpoint that deletes the failed episode and re-submits the
generation job. Frontend shows error details on failed episodes with a
retry button. Translations added for all 8 locales.
* fix: bump podcast-creator to >= 0.10
Fixes #302
* chore: release 1.7.3 - podcast failure recovery and retry
Bump podcast-creator to >= 0.11.2, disable automatic retries for
podcast generation to prevent duplicate episodes, and bump version
to 1.7.3.
Fixes #211, #218, #185, #355, #300, #302
* fix: resolve TypeScript error in handleRetry return type
Replace generic "An unexpected error occurred" messages with descriptive,
user-friendly error messages when LLM operations fail. Errors like invalid
API keys, wrong model names, and rate limits now surface clearly in the UI.
Adds error classification utility, global FastAPI exception handlers, and
frontend getApiErrorMessage() helper. Bumps version to 1.7.2.
Add break-all to SourceCard title and InlineEdit display text so long
unbroken strings wrap instead of overflowing the container. Add min-w-0
to NoteEditorDialog form to prevent grid item expansion.
Also fix RecordID type error in notes API by converting command_id to
string before passing to NoteResponse (fixes 500 on note create/update).
* feat: expose embed command_id in note API responses
Note.save() already returns the command_id from the embed_note
background job, but the API routes discarded it. This surfaces
the command_id in NoteResponse for both POST and PUT endpoints,
enabling callers to poll GET /api/commands/jobs/{command_id} to
know when embedding has completed.
* Add tests for note API command_id response
* feat: replace provider config with credential-based system (#477)
Introduce a new credential management system replacing the old
ProviderConfig singleton and standalone Models page. Each credential
stores encrypted API keys and provider-specific configuration with
full CRUD support via a unified settings UI.
Backend:
- Add Credential domain model with encrypted API key storage
- Add credentials API router (CRUD, discovery, registration, testing)
- Add encryption utilities for secure key storage
- Add key_provider for DB-first env-var fallback provisioning
- Add connection tester and model discovery services
- Integrate ModelManager with credential-based config
- Add provider name normalization for Esperanto compatibility
- Add database migrations 11-12 for credential schema
Frontend:
- Rewrite settings/api-keys page with credential management UI
- Add model discovery dialog with search and custom model support
- Add compact default model assignments (primary/advanced layout)
- Add inline model testing and credential connection testing
- Add env-var migration banner
- Update navigation to unified settings page
- Remove standalone models page and old settings components
i18n:
- Update all 7 locale files with credential and model management keys
Closes #477
Co-Authored-By: JFMD <git@jfmd.us>
Co-Authored-By: OraCatQAQ <570768706@qq.com>
* fix: address PR #540 review comments
- Fix docs referencing removed Models page
- Fix error-handler returning raw messages instead of i18n keys
- Fix auth.py misleading docstring and missing no-password guard
- Fix connection_tester using wrong env var for openai_compatible
- Add provision_provider_keys before model discovery/sync
- Update CLAUDE.md to reflect credential-based system
- Fix missing closing brace in api-keys page useEffect
* fix: add logging to credential migration and surface errors in UI
- Add comprehensive logging to migrate-from-env and
migrate-from-provider-config endpoints (start, per-provider
progress, success/failure with stack traces, final summary)
- Fix frontend migration hooks ignoring errors array from response
- Show error toast when migration fails instead of "nothing to migrate"
- Invalidate status/envStatus queries after migration so banner updates
* docs: update CLAUDE.md files for credential system
Replace stale ProviderConfig and /api-keys/ references across 8 CLAUDE.md
files to reflect the new Credential-based system from PR #540.
* docs: update user documentation for credential-based system
Replace env var API key instructions with Settings UI credential
workflow across all user-facing documentation. The new flow is:
set OPEN_NOTEBOOK_ENCRYPTION_KEY → start services → add credential
in Settings UI → test → discover models → register.
- Rewrite ai-providers.md, api-configuration.md, environment-reference.md
- Update all quick-start guides and installation docs
- Update ollama.md, openai-compatible.md, local-tts/stt networking sections
- Update reverse-proxy.md, development-setup.md, security.md
- Fix broken links to non-existent docs/deployment/ paths
- Add credentials endpoints to api-reference.md
- Move all API key env vars to deprecated/legacy sections
* chore: bump version to 1.7.0-rc1
Release candidate for credential-based provider management system.
* fix: initialize provider before try block in test_credential
Prevents UnboundLocalError when Credential.get() throws (e.g.,
invalid credential_id) before provider is assigned.
* fix: reorder down migration to drop index before table
Removes duplicate REMOVE FIELD statement and reorders so the index
is dropped before the table, preventing rollback failures.
* refactor: simplify encryption key to always derive via SHA-256
Remove the dual code path in _ensure_fernet_key() that detected native
Fernet keys. Since the credential system is new, always deriving via
SHA-256 removes unnecessary complexity. Also removes the generate_key()
function and Fernet.generate_key() references from docs.
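The always-derive approach can be sketched as follows. The function name is illustrative (the commit refers to `_ensure_fernet_key()`), but the mechanics are standard: SHA-256 yields exactly 32 bytes, and urlsafe base64 encoding produces the key format Fernet expects:

```python
import base64
import hashlib

def derive_fernet_key(secret: str) -> bytes:
    # SHA-256 of the user-provided secret gives exactly 32 bytes;
    # urlsafe base64 encoding yields the format Fernet expects, so
    # any passphrase works without a special key-generation step.
    digest = hashlib.sha256(secret.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)

key = derive_fernet_key("any passphrase works, no special format required")
assert len(key) == 44  # 32 raw bytes -> 44 base64 characters
```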
* fix: correct mock patch targets in embedding tests and URL validation
Fix embedding tests patching wrong module path for model_manager
(was targeting open_notebook.utils.embedding.model_manager but it's
imported locally from open_notebook.ai.models). Also fix URL validation
to allow unresolvable hostnames since they may be valid in the
deployment environment (e.g., Azure endpoints, internal DNS).
* feat: add global setup banner for encryption and migration status
Show a persistent banner in AppShell when encryption key is missing
(red) or env var API keys can be migrated (amber), so users see
these prompts on every page instead of only on Settings > API Keys.
Includes a docs link for the encryption banner and i18n support
across all 7 locales.
* docs: several improvements to docker-compose and env examples
* Update README.md
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* docs: fix env var format in README and update model setup instructions
Align the encryption key snippet in README Step 2 with the list
format used in the compose file. Replace deprecated "Settings →
Models" instructions with credential-based Discover Models flow.
* fix: address credential system review issues
- Fix SSRF bypass via IPv4-mapped IPv6 addresses (::ffff:169.254.x.x)
- Fix TTS connection test missing config parameter
- Add Azure-specific model discovery using api-key auth header
- Add Vertex static model list for credential-based discovery
- Fix PROVIDER_DISCOVERY_FUNCTIONS incorrect azure/vertex mapping
- Extract business logic to api/credentials_service.py (service layer)
- Move credential Pydantic schemas to api/models.py
- Update tests to use new service imports and ValueError assertions
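The IPv4-mapped IPv6 fix above can be sketched with the stdlib `ipaddress` module. This is a simplified stand-in for the project's actual SSRF filter: unwrap `::ffff:a.b.c.d` to its IPv4 form before applying the private/link-local checks, so the mapped notation cannot bypass them:

```python
import ipaddress

def is_blocked_address(host: str) -> bool:
    # Simplified SSRF filter sketch: treat IPv4-mapped IPv6 addresses
    # (e.g. ::ffff:169.254.169.254) like their IPv4 equivalents so the
    # metadata/link-local block cannot be bypassed.
    addr = ipaddress.ip_address(host)
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return addr.is_private or addr.is_link_local or addr.is_loopback

assert is_blocked_address("169.254.169.254")
assert is_blocked_address("::ffff:169.254.169.254")  # mapped form also caught
assert not is_blocked_address("8.8.8.8")
```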
* fix: sanitize error responses and migrate key_provider to Credential
- Replace raw exception messages in all credential router 500 responses
with generic error strings (internal details logged server-side only)
- Refactor key_provider.py to use Credential.get_by_provider() instead
of deprecated ProviderConfig.get_instance()
- Remove unused functions (get_provider_configs, get_default_api_key,
get_provider_config) that were dead code
---------
Co-authored-by: JFMD <git@jfmd.us>
Co-authored-by: OraCatQAQ <570768706@qq.com>
* fix: use sync get_state() for SqliteSaver in chat routers
Replace async aget_state() calls with sync get_state() wrapped in
asyncio.to_thread() to fix SqliteSaver compatibility issues.
SqliteSaver does not support async methods, so we need to run
the sync version in a separate thread.
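The wrapping pattern can be shown with a minimal stand-in (the class below is illustrative, not LangGraph's actual SqliteSaver):

```python
import asyncio

class SyncOnlySaver:
    # Stand-in for a SqliteSaver-backed graph: exposes only a sync API.
    def get_state(self, thread_id: str) -> dict:
        return {"thread_id": thread_id, "messages": []}

async def fetch_state(saver: SyncOnlySaver, thread_id: str) -> dict:
    # asyncio.to_thread() runs the blocking sync call in a worker
    # thread, keeping the event loop responsive instead of calling
    # an unsupported aget_state() coroutine.
    return await asyncio.to_thread(saver.get_state, thread_id)

state = asyncio.run(fetch_state(SyncOnlySaver(), "chat:1"))
assert state["thread_id"] == "chat:1"
```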
This is a follow-up to #519 which fixed the same issue in
graph_utils.py but missed four locations:
- chat.py: get_session() and execute_chat()
- source_chat.py: get_source_chat_session() and stream_source_chat_response()
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* chore: translate comments to English
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Luis Novo <lfnovo@gmail.com>
This happened when a file inclusion was requested via the API.
Tests done:
- Requested the same file again; no error was thrown.
- Requested the inclusion of many other files; no issues occurred.
Co-authored-by: Luiza Carneiro <luiza.carneiro@cloudera.com>
* fix: filter empty content in rebuild embeddings queries
Update collect_items_for_rebuild() to properly filter out items with
empty or whitespace-only content before submitting embedding jobs.
Changes:
- Sources: add string::trim(full_text) != '' filter
- Notes: add string::trim(content) != '' filter
- Insights: add content != none AND string::trim(content) != '' filter
(previously had no content filter at all)
This prevents unnecessary job submissions that would fail validation
in the individual embed commands.
Ref #513
* feat: add command_id to embedding error logs
Add get_command_id() helper to extract command_id from execution context.
Include command_id in error logs for all embedding commands:
- embed_note_command
- embed_insight_command
- embed_source_command
- create_insight_command
This makes it easier to trace failed embedding jobs back to specific
command records in the database.
Ref #513
* fix: improve logging for embedding commands
Log improvements:
- Add command_id to all embedding error logs for traceability
- Transaction conflicts in repo_insert now log at DEBUG (not ERROR)
- Embedding API errors log at DEBUG, only ERROR when retries exhausted
- Friendlier retry messages: "This will be retried automatically"
- Include model name and command_id in generate_embeddings errors
Files changed:
- commands/embedding_commands.py: command_id in logs, friendlier messages
- open_notebook/database/repository.py: DEBUG for transaction conflicts
- open_notebook/utils/embedding.py: DEBUG logging, pass-through command_id
Ref #513
* fix: correct field names in rebuild embeddings status endpoint
The API status endpoint was looking for wrong field names:
- sources_processed → sources_submitted
- notes_processed → notes_submitted
- insights_processed → insights_submitted
- processed_items → jobs_submitted
- failed_items → failed_submissions
The command outputs "_submitted" because embedding happens async
(we count jobs submitted, not items processed).
Ref #513
* fix: update rebuild UI text to reflect async job submission
Changed terminology from "Completed/processed" to "Jobs Submitted"
since the rebuild command submits embedding jobs for async processing,
not completing them synchronously.
Updated in all locales: en-US, pt-BR, zh-CN, zh-TW, ja-JP
Ref #513
* refactor: migrate retry strategy from allowlist to blocklist
- Change from `retry_on: [RuntimeError, ...]` to `stop_on: [ValueError]`
- This is more resilient: new exception types auto-retry by default
- Simplified exception handling: ValueError = permanent, else = retry
- Transient errors logged at DEBUG (surreal-commands logs final failure)
- Permanent errors (ValueError) logged at ERROR
Ref #513
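The allowlist-to-blocklist migration can be sketched as follows (a standalone retry loop standing in for surreal-commands' retry machinery; names are illustrative):

```python
# Blocklist retry sketch: every exception retries except types known to
# be permanent (stop_on), so new transient error types retry by default.
STOP_ON = (ValueError,)

def run_with_retry(fn, attempts: int = 3):
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except STOP_ON:
            raise  # permanent error: fail immediately
        except Exception:
            if attempt == attempts:
                raise  # transient, but retries exhausted

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient conflict")
    return "ok"

assert run_with_retry(flaky) == "ok"
assert calls["n"] == 3  # two transient failures, then success
```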
Migrate insight creation to the command system with automatic retry logic
to prevent SurrealDB transaction conflicts during batch imports.
Changes:
- Add create_insight_command with retry logic for transaction conflicts
- Add run_transformation_command for async transformation execution
- Make Source.add_insight() fire-and-forget (returns command_id)
- Update POST /sources/{id}/insights to return 202 Accepted immediately
- Frontend polls command status until complete, then refreshes
- Auto-update notebook page icon when source gains insights
- Add i18n keys for insight generation feedback
Related to #489
This PR fixes a potential UnboundLocalError in the API router
by ensuring file_path is initialized before the try block.
## Issue
In the create_source endpoint, file_path was initialized inside the
try block. If an exception occurred before this initialization,
exception handlers that reference file_path would crash with
UnboundLocalError.
## Fix
- Initialize file_path = None before the try block (line 289)
- Add explanatory comment for future maintainers
- Remove duplicate initialization inside the try block
This ensures exception handlers on line 415 can safely reference
file_path without causing runtime errors.
## Testing
- Verified exception handler path no longer crashes
- Confirmed file cleanup works correctly in error cases
Co-authored-by: POWERFULMOVES <POWERFULMOVES@users.noreply.github.com>
Co-authored-by: Claude Code <noreply@anthropic.com>
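The initialize-before-try pattern reduces to a self-contained sketch (all names and paths here are illustrative, not the endpoint's actual code):

```python
def create_source(fail_early: bool) -> str:
    file_path = None  # initialized before try so handlers can reference it
    try:
        if fail_early:
            raise RuntimeError("failure before file_path is assigned")
        file_path = "/tmp/upload.bin"  # hypothetical saved-file path
        return file_path
    except RuntimeError:
        # Without the initialization above, this reference would raise
        # UnboundLocalError whenever the failure happens early.
        return file_path or "cleanup skipped: no file written"

assert create_source(False) == "/tmp/upload.bin"
assert create_source(True) == "cleanup skipped: no file written"
```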
* feat: decrease chunking size for maximum ollama compatibility
* docs: improve i18n info on Claude.md
* feat: add cascade deletion for notebooks with delete preview
- Add Notebook.get_delete_preview() to show counts of affected items
- Add Notebook.delete(delete_exclusive_sources) for cascade deletion
- Always delete notes when notebook is deleted
- Allow user to choose: delete or keep exclusive sources
- Shared sources are always unlinked but never deleted
- Add NotebookDeleteDialog component with radio button options
- Add delete-preview API endpoint
- Update delete endpoint with delete_exclusive_sources param
- Add i18n support for all 5 locales
Closes #77
* docs: remove hardcoded config settings
When async_processing=False, the sync path calls execute_command_sync()
which internally uses asyncio.run(). This fails when called from FastAPI's
already-running event loop with 'asyncio.run() cannot be called from a
running event loop'.
Wrapping the call in asyncio.to_thread() runs it in a thread pool executor,
avoiding the event loop conflict while preserving the synchronous behavior
from the API consumer's perspective.
Fixes #453
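The event-loop conflict and its fix can be reproduced in miniature (the helper below stands in for execute_command_sync(); names are illustrative):

```python
import asyncio

def execute_command_sync_like() -> str:
    # Stand-in for execute_command_sync(): internally calls asyncio.run(),
    # which raises if invoked from an already-running event loop.
    return asyncio.run(asyncio.sleep(0, result="done"))

async def api_handler() -> str:
    # asyncio.to_thread() gives the sync helper its own worker thread,
    # where asyncio.run() can create a fresh event loop safely.
    return await asyncio.to_thread(execute_command_sync_like)

assert asyncio.run(api_handler()) == "done"
```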
* docs: update CHANGELOG for v1.6.0 release
* fix: improve error logging for chat model configuration issues (#358)
- Add detailed error logging in provision.py when model lookup fails
- Add warning logging in models.py when default model is not configured
- Add traceback logging in chat router exception handler
- Update Ollama docs with model name configuration guidance
- Update troubleshooting docs with "Failed to send message" solutions
- Bump version to 1.6.1
* chore: update uv.lock
* feat: content-type aware chunking and unified embedding
- Add chunking.py with HTML, Markdown, and plain text detection
- Add embedding.py with mean pooling for large content
- Create dedicated commands: embed_note, embed_insight, embed_source
- Use fire-and-forget pattern for embedding via submit_command()
- Refactor rebuild_embeddings_command to delegate to individual commands
- Remove legacy commands and needs_embedding() methods
- Reduce chunk size to 1500 chars for Ollama compatibility
- Update CLAUDE.md documentation for new architecture
Fixes #350, #142
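The mean-pooling step for large content can be sketched in a few lines (a generic illustration of the technique, assuming all chunk vectors share the same dimensionality, not the project's embedding.py):

```python
def mean_pool(chunk_vectors: list[list[float]]) -> list[float]:
    # Average each dimension across chunk embeddings to produce a single
    # vector for content larger than the embedding model's input limit.
    n = len(chunk_vectors)
    return [sum(dim) / n for dim in zip(*chunk_vectors)]

assert mean_pool([[1.0, 2.0], [3.0, 4.0]]) == [2.0, 3.0]
```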
* fix: address code review issues
- Note.save() now returns command_id for tracking embedding jobs
- Add length check after generate_embeddings() to fail fast on mismatch
- Add numpy as explicit dependency (was transitive)
- Remove hardcoded chunk sizes from docstrings
* docs: address code review comments
- Rename "SYNC PATH" to "DOMAIN MODEL PATH" in embedding router
- Add test_chunking.py and test_embedding.py to Testing Strategy
- Clarify auto-embedding behavior for each domain model
* fix: clean thinking tags from prompt graph output
Adds clean_thinking_content() to prompt.py to handle extended thinking
models that return <think>...</think> tags. This fixes empty titles
when saving notes from chat.
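A sketch of such a cleaner (the function name matches the commit; the body is an assumed regex implementation, not necessarily the project's):

```python
import re

def clean_thinking_content(text: str) -> str:
    # Remove <think>...</think> blocks (DOTALL so newlines inside the
    # tags are covered) before using model output as a note title.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

assert clean_thinking_content("<think>reasoning...</think>Note Title") == "Note Title"
assert clean_thinking_content("Plain title") == "Plain title"
```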
* chore: remove local docker-compose from git
* fix(frontend): handle null parent_id in search results
Add defensive check for null parent_id in search results to prevent
"Cannot read properties of null (reading 'split')" error. This can
happen with orphaned records in the database.
* fix: cascade delete embeddings and insights when source is deleted
When deleting a Source, now also deletes associated:
- source_embedding records
- source_insight records
This prevents orphaned records that cause null parent_id errors
in vector search results.
* fix: add cleanup for orphan embedding/insight records in migration 10
Deletes source_embedding and source_insight records where the
linked source no longer exists (source.id = NONE).
* chore: bump esperanto to 2.16
Increases ctx_num for Ollama models to accommodate larger notebook
context windows. See: https://github.com/lfnovo/esperanto/pull/69
Change embedded check from `!= NONE` to `!= []` because SurrealDB's
SELECT VALUE returns an empty array [] when no results found, not NONE.
The comparison `[] != NONE` evaluates to true, causing all sources to
incorrectly show as embedded.
Fixes #397
* fix(i18n): resolve podcast dialog translation infinite loop and profile issues
- Remove incorrect translation keys for user-defined episode profiles
- Cache translation strings in ContentSelectionPanel to avoid repeated
Proxy accesses that triggered infinite loop detection
- Stabilize useEffect dependencies with dataKey pattern to prevent
re-initialization on every keystroke
- Replace unstable sourcesQueries prop with stable fetchingNotebookIds set
- Clean up unused getSourceModes function and TranslationKeys import
* chore: bump lock
* chore: bump version to 1.5.1 and update CHANGELOG
* fix: guard .join() call in dataKey when query data is undefined
* fix(api): use FETCH command instead of async status lookups for sources list
Replace N async calls to surreal-commands with SurrealDB FETCH clause
to resolve command status in a single query. This eliminates the
command status cascade bottleneck.
* perf(db): add indexes on source field for insights and embeddings
Add migration #10 that creates indexes on the `source` field of
`source_insight` and `source_embedding` tables. These indexes
dramatically improve the performance of source listing queries
that use subqueries to count insights and check embedding existence.
Performance improvement: ~8.5s -> ~0.3s for 30 sources (28x faster)
* perf(db): make index concurrent
* fix: add IF NOT EXISTS to index definitions for idempotency
* fix: address code review feedback
- Add IF EXISTS to rollback migration for safer rollbacks
- Add fallback for unresolved command references (status = "unknown")
- Added custom exception handler to ensure CORS headers are included in
all HTTP error responses from the API
- Added documentation for 413 (Payload Too Large) errors when behind
reverse proxies (nginx, traefik, kubernetes ingress)
- Added client_max_body_size to nginx configuration examples
- Documented how to configure CORS headers for proxy-level error responses
Fixes #401
The model uniqueness constraint now considers (provider, name, type)
instead of just (provider, name). This allows users to add the same
model name for different purposes (e.g., language vs embedding).
Fixes #391
- Use QUERY_KEYS.sourcesInfinite for infinite scroll query key
Starting with ['sources', ...] ensures mutations that invalidate
['sources'] will also invalidate the infinite scroll cache
- Use httpx.Timeout for chat service with short connect (10s) and
long read (600s) timeouts. Prevents 10 min wait on connection errors
Users with Ollama reported timeout errors on notebook chat while the
backend was still processing. The answer would appear after refresh.
- Frontend axios timeout: 5 min → 10 min
- Backend chat service timeout: 2 min → 10 min
Local LLMs can take several minutes for complex questions with large
contexts, especially on slower hardware.
* fix: add missing overflow wrapper to notebooks list page
Adds flex-1 overflow-y-auto wrapper to enable proper scrolling
when notebook list exceeds viewport height. Matches the layout
pattern used by all other dashboard pages.
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: reorder transformation routes to prevent dynamic route interception
Moved static routes (/transformations/execute and /transformations/default-prompt)
before dynamic routes (/transformations/{transformation_id}) to ensure FastAPI
matches them correctly. Previously, requests to static routes were incorrectly
captured by the dynamic route handler.
Fixes #250
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: bump to 1.2.1
---------
Co-authored-by: Claude <noreply@anthropic.com>
* chore: improve podcast transcripts
* fix: remove date from insight - fixes #241
* fix: improve scrolling on source and insights - fixes #237
* chore: update esperanto to fix #234
* chore: update esperanto to fix #226
* fix: process vectorization as subcommands to handle larger documents more gracefully - fixes #229
* feat: enable background job retry capabilities
* feat: reenable content types that were disabled during alpha version
* fix: remove unnecessary model caching that was causing many issues
* feat: support multiple azure endpoints and keys just like openai compatible. Fixes #215
* docs: update azure variables
* chore: bump and update dependencies
* feat: prevent duplicate model names under same provider
Implement case-insensitive validation to prevent users from creating
duplicate model names under the same provider. This validation is
implemented both in the backend API and the frontend UI.
Changes:
- Backend: Add duplicate check in create_model endpoint (case-insensitive)
- Frontend: Add client-side validation in AddModelForm
- Frontend: Improve error message display from backend
- Tests: Add unit tests for duplicate model validation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor: optimize duplicate model validation and improve error handling
- Replace O(n) model iteration with efficient SurrealDB query for duplicate check
- Improve error message to include model name and provider for better UX
- Remove frontend duplicate validation (backend-only enforcement)
- Fix test authentication by setting OPEN_NOTEBOOK_PASSWORD before imports
- Update test mocking to use repo_query instead of Model.get_all()
- Add pytest fixture for TestClient to ensure proper test isolation
All 11 tests passing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* remove unnecessary package
* fix: replace any with unknown type in error handler
- Change error type from 'any' to 'unknown' to satisfy ESLint
- Add proper type assertion for error object structure
- Maintains same runtime behavior with better type safety
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: small issue where users can't change podcast segments
* chore: remove playwright mcp from git
* feat: add ability to link existing sources to notebooks (OSS-311)
Implemented bidirectional source-notebook linking functionality:
Backend changes:
- Add POST endpoint to link sources to notebooks
- Include notebook associations in source detail response
- Implement idempotent linking with proper RecordID handling
Frontend changes:
- Add AddExistingSourceDialog with search and multi-select
- Add NotebookAssociations component for source detail view
- Add dropdown menu to "Add Source" button (new/existing)
- Implement useAddSourcesToNotebook hook with graceful error handling
- Fix dialog pointer-events during close animation
- Add loading states and disable checkboxes for linked sources
- Optimize dialog width with proper responsive breakpoints
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: address PR review feedback
- Fix sources.py query to use correct reference direction (OUT where IN)
- Remove debug console.log statements
- Add truncation warning for 100+ source lists
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: small issue where users can't change podcast segments
* feat: display source and note counts on notebook cards (OSS-312)
Add item counters to notebook listing page showing the number of sources
and notes in each notebook. Counts are displayed in a footer section with
FileText and StickyNote icons for visual consistency with ContextIndicator.
Backend changes:
- Add source_count and note_count to NotebookResponse model
- Update /notebooks endpoint to use SurrealDB graph traversal query
- Query: count(<-reference.in) for sources, count(<-artifact.in) for notes
- Update all notebook endpoints to include counts
Frontend changes:
- Add source_count and note_count to TypeScript NotebookResponse interface
- Add footer section to NotebookCard component
- Display counts with FileText and StickyNote icons (h-3 w-3)
- Use border-top separator and muted-foreground styling
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* style: use colorful badges for notebook counts matching ContextIndicator
Update notebook card counts to use Badge components with primary color
styling instead of plain text, matching the visual style of the
ContextIndicator component in the chat window.
Changes:
- Replace plain text divs with Badge components
- Apply text-primary and border-primary/50 styling
- Use same spacing (gap-1.5, px-1.5, py-0.5) as ContextIndicator
- Remove bullet separator (not needed with badge layout)
Visual result matches the colorful badges shown in chat context.
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix: increase API client timeouts for transformation operations
- Increase frontend timeout from 30s to 300s (5 minutes)
- Increase Streamlit API client timeout from 30s to 300s
- Add API_CLIENT_TIMEOUT environment variable for configurability
- Add ESPERANTO_LLM_TIMEOUT environment variable documentation
- Update .env.example with comprehensive timeout documentation
Fixes #131 - API timeout errors during transformation generation
Transformations now have sufficient time to complete on slower
hardware (Ollama, LM Studio) without frontend timeout errors.
Users can now configure timeouts for both the API client layer
(API_CLIENT_TIMEOUT) and the LLM provider layer (ESPERANTO_LLM_TIMEOUT)
to accommodate their specific hardware and network conditions.
* docs: add timeout configuration documentation
- Add comprehensive timeout troubleshooting section to common-issues.md
- Add FAQ entry about timeout errors during transformations
- Document API_CLIENT_TIMEOUT and ESPERANTO_LLM_TIMEOUT usage
- Provide specific timeout recommendations for different hardware/network scenarios
- Link to GitHub issue #131 for reference
* chore: bump
* refactor: improve timeout configuration with validation and consistency
Based on PR review feedback, this commit addresses several improvements:
**Timeout Validation:**
- Add validation to ensure timeout values are between 30s and 3600s
- Invalid values fall back to default 300s with warning logs
- Handles edge cases (negative, zero, invalid strings)
**Fix Hard-coded Timeouts:**
- Replace all hard-coded timeout values in api/client.py
- ask_simple: 300s → self.timeout
- execute_transformation: 120s → self.timeout
- embed_content: 120s → self.timeout
- create_source: 300s → self.timeout
- rebuild_embeddings: Uses smart logic (2x timeout, max 3600s)
**Improved Documentation:**
- Add clarifying comments about ms vs seconds (frontend vs backend)
- Document that frontend uses 300000ms = backend 300s
- Add inline documentation for rebuild_embeddings timeout logic
**Development Dependencies:**
- Add pytest>=8.0.0 to dev dependencies for future test coverage
This makes timeout configuration more robust, consistent, and user-friendly
while maintaining backward compatibility.
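The clamp-or-fallback validation described above can be sketched as follows (function name is illustrative; the range and 300s default come from the commit):

```python
import logging

def validate_timeout(raw: str, default: float = 300.0) -> float:
    # Values must parse as numbers and fall within 30..3600 seconds;
    # anything else falls back to the default with a warning log.
    try:
        value = float(raw)
    except (TypeError, ValueError):
        logging.warning("invalid timeout %r, using default %ss", raw, default)
        return default
    if not 30.0 <= value <= 3600.0:
        logging.warning("timeout %s out of range, using default %ss", value, default)
        return default
    return value

assert validate_timeout("120") == 120.0
assert validate_timeout("5") == 300.0      # below the 30s minimum
assert validate_timeout("oops") == 300.0   # unparseable string
```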
* fix text
* remove lint from docker publish workflow
* gemini base url docs
* feat: add multimodal support for openai-compatible providers
- Add helper function to check OpenAI-compatible provider availability per mode
- Update provider detection to support language, embedding, STT, and TTS modalities
- Implement mode-specific environment variable detection (LLM, EMBEDDING, STT, TTS)
- Maintain backward compatibility with generic OPENAI_COMPATIBLE_BASE_URL
- Add comprehensive unit tests for all configuration scenarios
- Update .env.example with mode-specific environment variables
- Update provider support matrix in ai-models.md
- Create comprehensive openai-compatible.md setup guide
This enables users to configure different OpenAI-compatible endpoints for
different AI capabilities (e.g., LM Studio for language models, dedicated
server for embeddings) while maintaining full backward compatibility.
* upgrade
* chore: change docker release strategy
Changed create_source() timeout from default 30s to 300s (5 minutes) to handle
long-running operations like PDF processing with OCR.
Issue:
- PDF imports were timing out after 30 seconds with "Failed to connect to API: timed out"
- PDF processing (especially with OCR/parsing) takes longer than the default timeout
- Users were unable to import PDF documents
Solution:
- Increased timeout to 300 seconds (5 minutes), matching the timeout used by ask_simple()
- This gives sufficient time for document processing operations to complete
- Prevents premature connection timeout errors
Technical Details:
- Modified api/client.py create_source() method
- Added timeout=300.0 parameter to _make_request() call
- Consistent with existing long-running operations (ask_simple uses same timeout)
Testing:
- Users should now be able to import PDFs without timeout errors
- Smaller PDFs will still complete quickly
- Larger PDFs have sufficient time to process
- New front-end
- Launch Chat API
- Manage Sources
- Enable re-embedding of all contents
- Sources can now be added without a notebook
- Improved settings
- Enable model selector on all chats
- Background processing for better experience
- Dark mode
- Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files
- Creates the API layer for Open Notebook
- Creates a services API gateway for the Streamlit front-end
- Migrates the SurrealDB SDK to the official one
- Changes all database calls to async
- New podcast framework supporting multiple speaker configurations
- Implements the surreal-commands library for async processing
- Improves docker image and docker-compose configurations