* docs: update CHANGELOG for v1.6.0 release
* fix: improve error logging for chat model configuration issues (#358)
- Add detailed error logging in provision.py when model lookup fails
- Add warning logging in models.py when default model is not configured
- Add traceback logging in chat router exception handler
- Update Ollama docs with model name configuration guidance
- Update troubleshooting docs with "Failed to send message" solutions
- Bump version to 1.6.1
* chore: uvlock
* docs: add conda installation instructions to README
docs: add conda installation instructions to README
* docs: add conda environment setup as an alternative to uv
docs: add conda environment setup as an alternative to uv
- Upgrade Next.js from 15.4.10 to 16.1.1
- Upgrade React from 19.1.0 to 19.2.3
- Rename middleware.ts → proxy.ts (Next.js 16 requirement)
- Update function name: middleware → proxy
- Enable proxyClientMaxBodySize configuration (now supported in Next.js 16)
- Update documentation to reference Next.js 16 requirement
- Fix upload size limit issue for files >10MB
This upgrade fixes GitHub issue #361 where users cannot upload files
larger than 10MB. The proxyClientMaxBodySize configuration option was
introduced in Next.js 16.1+ and allows configuring the proxy body size
limit to 100MB.
Fixes#361
Related to PR #405
- Added custom exception handler to ensure CORS headers are included in
all HTTP error responses from the API
- Added documentation for 413 (Payload Too Large) errors when behind
reverse proxies (nginx, traefik, kubernetes ingress)
- Added client_max_body_size to nginx configuration examples
- Documented how to configure CORS headers for proxy-level error responses
Fixes#401
Changed SurrealDB configuration from in-memory storage to RocksDB with
bind mounts for data persistence. Users' data will now survive container
restarts.
Fixes#398
Add pull_policy: always to all open_notebook service definitions
in docker-compose examples across documentation. This ensures
Docker always checks for and pulls newer images when running
docker compose up.
Closes#393
- Replace old docs structure with new comprehensive documentation
- Organize into 8 major sections (0-START-HERE through 7-DEVELOPMENT)
- Convert CONFIGURATION.md, CONTRIBUTING.md, MAINTAINER_GUIDE.md to redirects
- Remove outdated MIGRATION.md and DESIGN_PRINCIPLES.md
- Fix all internal documentation links and cross-references
- Add progressive disclosure paths for different user types
- Include 44 focused guides covering all features
- Update README.md to remove v1.0 breaking changes notice
- Add fix for quotes in environment variables causing empty URL errors
- Add SurrealDB configuration section (single container includes DB, v2 only)
- Add network timeout fix for slow connections and Chinese users
- Update table of contents with new sections
Addresses feedback from issue #316
* chore: improve podcast transcripts
* docs: update Azure OpenAI documentation to reflect STT and TTS support
Updated documentation to show that Azure OpenAI now supports all four
modalities (Language, Embedding, STT, TTS) with mode-specific configuration.
Changes:
- Updated Provider Support Matrix table to show Azure supports STT and TTS
- Added recommended STT and TTS models to Azure OpenAI section
- Added clarifying comment for STT and TTS environment variables
Related to modality-specific Azure configuration implementation.
* chore: improve podcast transcripts
* fix: remove date from insight - fixes#241
* fix: improve scrolling on source and insights - fixes#237
* chore: update esperanto to fix: #234
* chore: update esperanto to fix#226
* fix: process vectorization as subcommands to handle larger documents more gracefully - fix: #229
* feat: enable background job retry capabilities
* feat: reenable content types that were disabled during alpha version
* fix: remove unnecessary model caching causing many issues.
* feat: support multiple azure endpoints and keys just like openai compatible. Fixes#215
* docs: update azure variables
* chore: bump and update dependencies
* feat: simplify reverse proxy configuration with Next.js rewrites
Add Next.js API rewrites to proxy /api/* requests internally from port 8502
to the FastAPI backend on port 5055. This eliminates the need for complex
reverse proxy configurations with multiple upstreams and location blocks.
Changes:
- Add rewrites to next.config.ts proxying /api/* to INTERNAL_API_URL
- Introduce INTERNAL_API_URL env var (defaults to http://localhost:5055)
- Update supervisord configs to pass INTERNAL_API_URL to Next.js
- Document INTERNAL_API_URL in .env.example with usage examples
- Add simplified reverse proxy examples for nginx, Traefik, Caddy, Coolify
- Update README architecture diagram to show internal proxying
- Add explanatory comments to _config route handler
Benefits:
- Reduces reverse proxy config from 12 lines to 3 (75% reduction)
- Single-port deployment (8502 only) for 95% of use cases
- Zero breaking changes - backward compatible with existing setups
- Zero performance overhead (validated through testing)
- Preserves proxy headers (X-Forwarded-*) for rate limiting/SSL
Resolves: #179
Related: OSS-321
* fix: rename _config to config to fix production routing
CRITICAL BUG FIX: The /_config endpoint has never worked in production builds
because Next.js treats folders starting with underscore as "private folders"
and excludes them from routing entirely.
This endpoint is critical for:
- Providing API_URL to the browser at runtime
- Enabling zero-config deployments with auto-detection
- Supporting reverse proxy scenarios where API URL differs from frontend URL
Changes:
- Rename frontend/src/app/_config/ → frontend/src/app/config/
- Update client code references (/_config → /config)
- Update documentation with correct endpoint path
- Bump version to 1.1.0 (minor version for new rewrites feature + bug fix)
Impact:
- Runtime configuration now works in production builds
- /config returns {"apiUrl":"http://localhost:5055"} correctly
- Auto-detection for reverse proxy deployments now functional
Related: #179, OSS-321
* fix: resolve React hook exhaustive-deps warning in AddExistingSourceDialog
Wrap performSearch function in useCallback to properly memoize it and satisfy
React Hook exhaustive-deps rule. This prevents unnecessary re-renders and
ensures the useEffect dependency array is correctly specified.
Changes:
- Import useCallback from React
- Wrap performSearch with useCallback([debouncedSearchQuery, allSources])
- Add performSearch to useEffect dependency array
* final fixes
Move runtime configuration endpoint from /api/runtime-config to /_config to avoid
conflicts with reverse proxies that route all /api/* requests to the FastAPI backend.
This fixes an issue where users with reverse proxies would see port 5055 incorrectly
appended to their API_URL even when explicitly set via environment variable.
Changes:
- Move frontend/src/app/api/runtime-config/route.ts to frontend/src/app/_config/route.ts
- Update config.ts to fetch from /_config instead of /api/runtime-config
- Add troubleshooting documentation for reverse proxy users
- Update all reverse proxy examples to show correct routing (catch-all handles /_config)
- Bump version to 1.0.11
The new /_config endpoint is automatically handled by standard reverse proxy catch-all
rules (location / { proxy_pass http://frontend; }), requiring no additional configuration
for most users.
Fixes issue where API_URL environment variable was being ignored in reverse proxy setups,
causing CORS errors with "Status code: (null)" and incorrect port 5055 being added.
* fix: increase API client timeouts for transformation operations
- Increase frontend timeout from 30s to 300s (5 minutes)
- Increase Streamlit API client timeout from 30s to 300s
- Add API_CLIENT_TIMEOUT environment variable for configurability
- Add ESPERANTO_LLM_TIMEOUT environment variable documentation
- Update .env.example with comprehensive timeout documentation
Fixes#131 - API timeout errors during transformation generation
Transformations now have sufficient time to complete on slower
hardware (Ollama, LM Studio) without frontend timeout errors.
Users can now configure timeouts for both the API client layer
(API_CLIENT_TIMEOUT) and the LLM provider layer (ESPERANTO_LLM_TIMEOUT)
to accommodate their specific hardware and network conditions.
* docs: add timeout configuration documentation
- Add comprehensive timeout troubleshooting section to common-issues.md
- Add FAQ entry about timeout errors during transformations
- Document API_CLIENT_TIMEOUT and ESPERANTO_LLM_TIMEOUT usage
- Provide specific timeout recommendations for different hardware/network scenarios
- Link to GitHub issue #131 for reference
* chore: bump
* refactor: improve timeout configuration with validation and consistency
Based on PR review feedback, this commit addresses several improvements:
**Timeout Validation:**
- Add validation to ensure timeout values are between 30s and 3600s
- Invalid values fall back to default 300s with warning logs
- Handles edge cases (negative, zero, invalid strings)
**Fix Hard-coded Timeouts:**
- Replace all hard-coded timeout values in api/client.py
- ask_simple: 300s → self.timeout
- execute_transformation: 120s → self.timeout
- embed_content: 120s → self.timeout
- create_source: 300s → self.timeout
- rebuild_embeddings: Uses smart logic (2x timeout, max 3600s)
**Improved Documentation:**
- Add clarifying comments about ms vs seconds (frontend vs backend)
- Document that frontend uses 300000ms = backend 300s
- Add inline documentation for rebuild_embeddings timeout logic
**Development Dependencies:**
- Add pytest>=8.0.0 to dev dependencies for future test coverage
This makes timeout configuration more robust, consistent, and user-friendly
while maintaining backward compatibility.
* fix text
* remove lint from docker publish workflow
* gemini base url docs
* feat: add multimodal support for openai-compatible providers
- Add helper function to check OpenAI-compatible provider availability per mode
- Update provider detection to support language, embedding, STT, and TTS modalities
- Implement mode-specific environment variable detection (LLM, EMBEDDING, STT, TTS)
- Maintain backward compatibility with generic OPENAI_COMPATIBLE_BASE_URL
- Add comprehensive unit tests for all configuration scenarios
- Update .env.example with mode-specific environment variables
- Update provider support matrix in ai-models.md
- Create comprehensive openai-compatible.md setup guide
This enables users to configure different OpenAI-compatible endpoints for
different AI capabilities (e.g., LM Studio for language models, dedicated
server for embeddings) while maintaining full backward compatibility.
* upgrade
* chore: change docker release strategy
- Add Docker image registry section explaining both Docker Hub and GHCR options
- Include GHCR alternative in Quick Start examples
- Add comments showing how to use GHCR in docker-compose examples
- Help users understand they can use either registry interchangeably
New front-end
Launch Chat API
Manage Sources
Enable re-embedding of all contents
Sources can be added without a notebook now
Improved settings
Enable model selector on all chats
Background processing for better experience
Dark mode
Improved Notes
Improved Docs:
- Remove all Streamlit references from documentation
- Update deployment guides with React frontend setup
- Fix Docker environment variables format (SURREAL_URL, SURREAL_PASSWORD)
- Update docker image tag from :latest to :v1-latest
- Change navigation references (Settings → Models to just Models)
- Update development setup to include frontend npm commands
- Add MIGRATION.md guide for users upgrading from Streamlit
- Update quick-start guide with correct environment variables
- Add port 5055 documentation for API access
- Update project structure to reflect frontend/ directory
- Remove outdated source-chat documentation files
* fix: prevent project failing to start when cannot talk to github - fixes#128
* improve ollama documentation - see #127
* chore: update esperanto library to enable gpt-5 - see #107; update podcast-creator library to enable TTS_BATCH_SIZE - fixes#125
* add info on ollama env variables
* chore: ignore dev logs
* chore: bump
Creates the API layer for Open Notebook
Creates a services API gateway for the Streamlit front-end
Migrates the SurrealDB SDK to the official one
Change all database calls to async
New podcast framework supporting multiple speaker configurations
Implement the surreal-commands library for async processing
Improve docker image and docker-compose configurations