Deployment Guide
This section provides comprehensive guides for deploying Open Notebook in different environments, from simple local setups to production deployments.
🚀 Quick Start
New to Open Notebook? Start with the Docker Setup Guide - it's the fastest way to get up and running.
📋 Deployment Options
1. Docker Deployment
Recommended for most users
- Complete beginner-friendly guide
- Single-container and multi-container options
- Supports all major AI providers
- Perfect for local development and testing
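The multi-container option pairs the application with a SurrealDB service. As a rough sketch only — the image name (`lfnovo/open_notebook`), port `8502`, and volume paths here are assumptions, so check the Docker Setup Guide for the project's authoritative compose file:

```yaml
# docker-compose.yml — illustrative sketch, not the official file.
# Image names, ports, and paths are assumptions; verify against
# the Docker Setup Guide before deploying.
services:
  surrealdb:
    image: surrealdb/surrealdb:latest
    command: start --user root --pass root
    volumes:
      - ./surreal_data:/mydata

  open_notebook:
    image: lfnovo/open_notebook:latest   # assumed image name
    ports:
      - "8502:8502"                      # assumed UI port
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - SURREAL_URL=ws://surrealdb:8000/rpc
    depends_on:
      - surrealdb
```

Running the database as its own service lets you scale, back up, or restart each piece independently.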
2. Single Container Deployment
Best for platforms like PikaPods
- All-in-one container solution
- Simplified deployment process
- Ideal for cloud hosting platforms
- Lower resource requirements
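For platforms that only accept one container, a single `docker run` along these lines is typical. The image tag and port below are hypothetical placeholders — follow the Single Container guide for the exact values:

```bash
# Single-container sketch. The image tag, port, and volume path
# are assumptions; see the Single Container guide for specifics.
docker run -d \
  --name open-notebook \
  -p 8502:8502 \
  -e OPENAI_API_KEY=sk-... \
  -v ./notebook_data:/app/data \
  lfnovo/open_notebook:latest
```

The volume mount keeps your notebooks and database on the host, so the container can be replaced during upgrades without losing data.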
3. Development Setup
For contributors and advanced users
- Local development environment
- Source code installation
- Development tools and debugging
- Contributing to the project
4. Reverse Proxy Configuration
For production deployments with custom domains
- nginx, Caddy, Traefik configurations
- Custom domain setup
- SSL/HTTPS configuration
- Runtime API URL configuration
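Of the supported proxies, Caddy needs the least configuration because it provisions HTTPS certificates automatically. A minimal sketch, assuming the app listens on port `8502` (a placeholder — substitute your actual port and domain):

```text
# Caddyfile sketch — domain and upstream port are placeholders.
# Caddy obtains and renews TLS certificates automatically.
notebook.example.com {
    reverse_proxy localhost:8502
}
```

An equivalent nginx setup needs a `server` block plus separate certificate management (e.g. certbot); see the Reverse Proxy guide for full configurations.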
5. Security Configuration
Essential for public deployments
- Password protection setup
- Security best practices
- Production deployment considerations
- Troubleshooting security issues
6. Retry Configuration
For reliable background job processing
- Automatic retry for transient failures
- Database transaction conflict handling
- Embedding provider failure recovery
- Performance tuning and monitoring
🎯 Choose Your Deployment Method
Use Docker Setup if:
- You're new to Open Notebook
- You want the easiest setup experience
- You need multiple AI provider support
- You're running locally or on a private server
Use Single Container if:
- You're deploying on PikaPods, Railway, or similar platforms
- You want the simplest possible deployment
- You have resource constraints
- You don't need to scale services independently
Use Reverse Proxy Setup if:
- You're deploying with a custom domain
- You need HTTPS/SSL encryption
- You're using nginx, Caddy, or Traefik
- You want to expose only specific ports publicly
Use Development Setup if:
- You want to contribute to the project
- You need to modify the source code
- You're developing integrations or plugins
- You want to understand the codebase
📚 Additional Resources
Before You Start
- System Requirements - Hardware and software needs
- API Keys Guide - Getting keys from AI providers
- Environment Variables - Configuration reference
After Deployment
- First Notebook Guide - Create your first research project
- User Guide - Learn all the features
- Troubleshooting - Common issues and solutions
🔧 System Requirements
Minimum Requirements
- Memory: 2GB RAM
- CPU: 2 cores
- Storage: 10GB free space
- Network: Internet connection for AI providers
Recommended Requirements
- Memory: 4GB+ RAM
- CPU: 4+ cores
- Storage: 50GB+ free space
- Network: Stable high-speed internet
Platform Support
- Linux: Ubuntu 20.04+, CentOS 7+, or similar
- Windows: Windows 10+ with WSL2 (for Docker)
- macOS: macOS 10.14+
- Docker: Version 20.10+ required
🔑 API Keys
Open Notebook supports multiple AI providers. You'll need at least one:
Required for Basic Functionality
- OpenAI: For GPT models, embeddings, and TTS
- Get your key at platform.openai.com
- Provides: Language models, embeddings, speech services
Optional Providers
- Anthropic: For Claude models
- Google: For Gemini models
- Groq: For fast inference
- Ollama: For local models (no API key needed)
See the Model Providers Guide for detailed setup instructions.
🌍 Environment Variables
Core Configuration
```bash
# Database (auto-configured in Docker)
SURREAL_URL=ws://localhost:8000/rpc
SURREAL_USER=root
SURREAL_PASSWORD=root
SURREAL_NAMESPACE=open_notebook
SURREAL_DATABASE=production

# Security (optional)
OPEN_NOTEBOOK_PASSWORD=your_secure_password
```
AI Provider Keys
```bash
# OpenAI (recommended)
OPENAI_API_KEY=sk-...

# Additional providers (optional)
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIzaSy...
GROQ_API_KEY=gsk_...
OLLAMA_API_BASE=http://localhost:11434
```
🆘 Getting Help
Community Support
- Discord Server - Real-time help and discussion
- GitHub Issues - Bug reports and feature requests
- GitHub Discussions - Questions and ideas
Documentation
- User Guide - Complete feature documentation
- Troubleshooting - Common issues and solutions
- API Reference - REST API documentation
📞 Support
Having trouble with deployment? Here's how to get help:
- Check the troubleshooting section in each deployment guide
- Search existing issues on GitHub
- Ask on Discord for real-time help
- Create a GitHub issue for bugs or feature requests
Remember to include:
- Your operating system and version
- Deployment method used
- Error messages (if any)
- Steps to reproduce the issue
Ready to deploy? Choose your deployment method above and follow the step-by-step guide!