Installation Guide
Choose your installation route based on your setup and use case.
Quick Decision: Which Route?
🚀 I want the easiest setup (Recommended for most)
→ Docker Compose - Multi-container setup, production-ready
- ✅ All features working
- ✅ Clear separation of services
- ✅ Easy to scale
- ✅ Works on Mac, Windows, Linux
- ⏱️ 5 minutes to running
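The Compose route roughly looks like the sketch below. The service name, image repository, and volume path are illustrative placeholders, not the project's actual docker-compose.yml; only the v1-latest tag and the port 8502/5055 split come from this guide:

```yaml
# Hypothetical sketch — "yourorg/app" and the paths are placeholders,
# not the project's real compose file.
services:
  app:
    image: yourorg/app:v1-latest   # multi-container tag (the -single tag is deprecated)
    ports:
      - "8502:8502"   # frontend (matches the URL in this guide)
      - "5055:5055"   # API
    volumes:
      - ./data:/app/data   # persist notebooks and documents across restarts
```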
🏠 I want everything in one container (Deprecated)
→ Single Container - Deprecated, will be removed in v2
- ⚠️ Deprecated — please use Docker Compose instead
- Still supported until v2 release
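If you are currently on the single-container image, migration is mostly a matter of switching the image tag. The repository name below is a placeholder; the tag names come from the deprecation notice:

```yaml
# Before (deprecated single-container tag — placeholder repo name):
#   image: yourorg/app:v1-latest-single
# After (supported tag):
image: yourorg/app:v1-latest
```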
👨‍💻 I want to develop/contribute (Developers only)
→ From Source - Clone repo, set up locally
- ✅ Full control over code
- ✅ Easy to debug
- ✅ Can modify and test
- ⚠️ Requires Python 3.11+, Node.js
- ⏱️ 10 minutes to running
System Requirements
Minimum
- RAM: 4GB
- Storage: 2GB for app + space for documents
- CPU: Any modern processor
- Network: Internet connection (optional if you run fully offline with Ollama)
Recommended
- RAM: 8GB+
- Storage: 10GB+ for documents and models
- CPU: Multi-core processor
- GPU: Optional (speeds up local AI models)
AI Provider Options
Cloud-Based (Pay-as-you-go)
- OpenAI - GPT-4, GPT-4o, fast and capable
- Anthropic (Claude) - Claude 3.5 Sonnet, excellent reasoning
- Google Gemini - Multimodal, cost-effective
- Groq - Ultra-fast inference
- Others: Mistral, DeepSeek, xAI, OpenRouter
- Cost: Usually $0.01-$0.10 per 1K tokens
- Speed: Fast (sub-second)
- Privacy: Your data is sent to the cloud
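As a rough back-of-envelope, the cost of processing a document scales linearly with its token count. The rates below are the illustrative $0.01-$0.10 per 1K tokens range from this guide, not any provider's actual pricing, and the page-to-token ratio is an assumption:

```python
# Rough cloud cost estimate. Rates are the illustrative range from this
# guide, not real provider pricing.

def estimate_cost(tokens: int, rate_per_1k: float) -> float:
    """Return the estimated cost in dollars for a given token count."""
    return tokens / 1000 * rate_per_1k

# Assume a 50-page PDF is ~25K tokens (~500 tokens/page, an assumption):
low = estimate_cost(25_000, 0.01)
high = estimate_cost(25_000, 0.10)
print(f"${low:.2f}-${high:.2f} per pass")  # prints "$0.25-$2.50 per pass"
```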
Local (Free, Private)
- Ollama - Run open-source models locally
- LM Studio - Desktop app for local models
- Hugging Face models - Download and run
- Cost: $0 (just electricity)
- Speed: Depends on your hardware (slow to medium)
- Privacy: 100% offline
Choose a Route
Already know which way to go? Pick your installation path:
- Docker Compose - Most users
- Single Container - Deprecated
- From Source - Developers
Privacy-first? Any installation method works with Ollama for 100% local AI. See Local Quick Start.
Pre-Installation Checklist
Before installing, you'll need:
- Docker (for Docker routes) or Python 3.11+ and Node.js 18+ (for source)
- AI Provider API key (OpenAI, Anthropic, etc.) OR willingness to use free local models
- At least 4GB RAM available
- Stable internet (or offline setup with Ollama)
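You can sanity-check the checklist above with a small script like this one. It is a sketch: it only verifies that the tools are on your PATH and that the running Python is new enough, not that the versions of docker or node themselves are sufficient:

```python
# Quick pre-installation check for the routes described in this guide.
# Only checks tool presence on PATH and the running Python version.
import shutil
import sys

def check_prereqs(route: str = "docker") -> list[str]:
    """Return a list of missing prerequisites for the chosen route."""
    missing = []
    if route == "docker":
        if shutil.which("docker") is None:
            missing.append("docker")
    else:  # from-source route
        if sys.version_info < (3, 11):
            missing.append("Python 3.11+")
        for tool in ("node", "git", "make"):
            if shutil.which(tool) is None:
                missing.append(tool)
    return missing

if __name__ == "__main__":
    missing = check_prereqs("docker")
    print("All set!" if not missing else f"Missing: {', '.join(missing)}")
```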
Detailed Installation Instructions
For Docker Users
- Install Docker Desktop
- Follow Docker Compose installation
- Follow the step-by-step guide
- Access at http://localhost:8502
For Source Installation (Developers)
- Have Python 3.11+, Node.js 18+, Git installed
- Follow From Source
- Run make start-all
- Access at http://localhost:8502 (frontend) or http://localhost:5055 (API)
After Installation
Once you're up and running:
- Configure Models - Choose your AI provider in Settings
- Create First Notebook - Start organizing research
- Add Sources - PDFs, web links, documents
- Explore Features - Chat, search, transformations
- Read Full Guide - User Guide
Troubleshooting During Installation
Having issues? Check the troubleshooting section in your chosen installation guide, or see Quick Fixes.
Need Help?
- Discord: Join community
- GitHub Issues: Report problems
- Docs: See Full Documentation
Production Deployment
Installing for production use? See the additional resources in the full documentation.
Ready to install? Pick a route above! ⬆️