
Quick Start - Local & Private (5 minutes)

Get Open Notebook running with 100% local AI using Ollama. No cloud API keys needed, and everything stays completely private.

Prerequisites

  1. Docker Desktop installed (Docker Engine + Compose also works on Linux)

  2. Local LLM - Choose one:
    • Ollama (used in this guide; runs in Docker alongside Open Notebook)
    • LM Studio (GUI alternative; see the end of this guide)

Step 1: Choose Your Setup (1 min)

Local Machine (Same Computer)

Everything runs on your machine. Recommended for testing/learning.

Remote Server (Raspberry Pi, NAS, Cloud VM)

Run Open Notebook on one machine and access it from another. Requires additional network configuration.


Step 2: Create Configuration (1 min)

Create a new folder open-notebook-local and add this file:

docker-compose.yml:

services:
  surrealdb:
    image: surrealdb/surrealdb:v2
    command: start --user root --pass password --bind 0.0.0.0:8000 rocksdb:/mydata/mydatabase.db
    ports:
      - "8000:8000"
    volumes:
      - ./surreal_data:/mydata

  open_notebook:
    image: lfnovo/open_notebook:v1-latest-single
    pull_policy: always
    ports:
      - "8502:8502"  # Web UI (React frontend)
      - "5055:5055"  # API (required!)
    environment:
      # Encryption key for credential storage (required)
      - OPEN_NOTEBOOK_ENCRYPTION_KEY=change-me-to-a-secret-string

      # Database (required)
      - SURREAL_URL=ws://surrealdb:8000/rpc
      - SURREAL_USER=root
      - SURREAL_PASSWORD=password
      - SURREAL_NAMESPACE=open_notebook
      - SURREAL_DATABASE=open_notebook
    volumes:
      - ./notebook_data:/app/data
      - ./surreal_data:/mydata
    depends_on:
      - surrealdb
    restart: always

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama_models:/root/.ollama
    environment:
      # Optional: set GPU support if available
      - OLLAMA_NUM_GPU=0
    restart: always

Edit the file:

  • Replace change-me-to-a-secret-string with your own secret (any string works)
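
Any string works, but a long random value is safer. You can generate one with OpenSSL, for example:

openssl rand -hex 32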

Step 3: Start Services (1 min)

Open terminal in your open-notebook-local folder:

docker compose up -d

Wait 10-15 seconds for all services to start.
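
To confirm all three services came up, check their status:

docker compose ps

surrealdb, open_notebook, and ollama should all show as running.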


Step 4: Download a Model (2-3 min)

Ollama needs at least one language model. Pick one:

# Fast and relatively small (recommended for testing)
docker exec open_notebook-ollama-1 ollama pull mistral

# OR: Better quality but slower
docker exec open_notebook-ollama-1 ollama pull neural-chat

# OR: Even better quality, more VRAM needed
docker exec open_notebook-ollama-1 ollama pull llama2

This downloads the model; expect 1-5 minutes depending on your connection speed.
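
To confirm the model actually runs, you can send it a one-off prompt directly:

docker exec open_notebook-ollama-1 ollama run mistral "Say hello"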


Step 5: Access Open Notebook (instant)

Open your browser:

http://localhost:8502

You should see the Open Notebook interface.


Step 6: Configure Ollama Provider (1 min)

  1. Go to Settings → API Keys
  2. Click Add Credential
  3. Select provider: Ollama
  4. Give it a name (e.g., "Local Ollama")
  5. Enter the base URL: http://ollama:11434
  6. Click Save
  7. Click Test Connection — should show success
  8. Click Discover Models → Register Models
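
If Test Connection fails, verify Ollama is reachable first. Inside the compose network the URL is http://ollama:11434; from your host, the same service is exposed on localhost:11434:

curl http://localhost:11434/api/tags

This should return a JSON list of the models you have pulled.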

Step 7: Configure Local Model (1 min)

  1. On the same Settings → API Keys page, find the default model assignments
  2. Set:
    • Language Model: ollama/mistral (or whichever model you downloaded)
    • Embedding Model: ollama/nomic-embed-text (pull it first if it isn't listed; see below)
  3. Click Save
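
Ollama only lists models that have been pulled, so if ollama/nomic-embed-text doesn't appear, download it first:

docker exec open_notebook-ollama-1 ollama pull nomic-embed-text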

Step 8: Create Your First Notebook (1 min)

  1. Click New Notebook
  2. Name: "My Private Research"
  3. Click Create

Step 9: Add Local Content (1 min)

  1. Click Add Source
  2. Choose Text
  3. Paste some text or a local document
  4. Click Add

Step 10: Chat With Your Content (1 min)

  1. Go to Chat
  2. Type: "What did you learn from this?"
  3. Click Send
  4. Watch as the local Ollama model responds!

Verification Checklist

  • Docker is running
  • You can access http://localhost:8502
  • Ollama credential is configured and tested
  • Models are registered
  • You created a notebook
  • Chat works with local model

All checked? You have a completely private, offline research assistant!


Advantages of Local Setup

  • No API costs - Free forever
  • No internet required - Works fully offline once models are downloaded
  • Privacy first - Your data never leaves your machine
  • No subscriptions - No monthly bills

Trade-off: Slower than cloud models (depends on your CPU/GPU)


Troubleshooting

"ollama: command not found"

The Ollama container name may differ (Docker Compose derives it from your project folder name). Find the actual name:

docker ps  # Find the Ollama container name
docker exec <container_name> ollama pull mistral

Model Download Stuck

Check internet connection and restart:

docker compose restart ollama

Then retry the model pull command.
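
You can also watch the download progress in the Ollama logs:

docker compose logs -f ollama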

"Address already in use" Error

Usually a leftover container is still holding one of the ports. Restart the stack:

docker compose down
docker compose up -d
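
If the error comes back, another application is holding one of the ports (8000, 8502, 5055, or 11434). Find the culprit, or change the host side of the mapping in docker-compose.yml (e.g., "8503:8502"):

# Show which process is using port 8502 (macOS/Linux)
lsof -nP -i :8502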

Low Performance

Check if GPU is available:

# Show loaded models and whether they run on GPU or CPU
docker exec open_notebook-ollama-1 ollama ps

# Enable GPU in docker-compose.yml:
# - OLLAMA_NUM_GPU=1

Then restart: docker compose restart ollama
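
Note that the container also needs access to the GPU hardware itself. For NVIDIA cards, a minimal sketch (assuming the NVIDIA Container Toolkit is installed on the host) is to add a device reservation to the ollama service in docker-compose.yml:

  ollama:
    image: ollama/ollama:latest
    deploy:
      resources:
        reservations:
          devices:
            # Expose all NVIDIA GPUs to the container
            - driver: nvidia
              count: all
              capabilities: [gpu]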

Adding More Models

# List available models
docker exec open_notebook-ollama-1 ollama list

# Pull additional model
docker exec open_notebook-ollama-1 ollama pull neural-chat
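
To free disk space, you can remove models you no longer use:

docker exec open_notebook-ollama-1 ollama rm neural-chat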

Next Steps

Now that it's running:

  1. Add Your Own Content: PDFs, documents, articles (see 3-USER-GUIDE)
  2. Explore Features: Podcasts, transformations, search
  3. Full Documentation: See all features
  4. Scale Up: Deploy to a server with better hardware for faster responses
  5. Benchmark Models: Try different models to find the speed/quality tradeoff you prefer

Alternative: Using LM Studio Instead of Ollama

Prefer a GUI? LM Studio is easier for non-technical users:

  1. Download LM Studio: https://lmstudio.ai
  2. Open the app, download a model from the library
  3. Go to "Local Server" tab, start server (port 1234)
  4. In Open Notebook, go to Settings → API Keys
  5. Click Add Credential → Select OpenAI-Compatible
  6. Enter base URL: http://host.docker.internal:1234/v1
  7. Enter API key: lm-studio (placeholder)
  8. Click Save, then Test Connection
  9. Click Discover Models → Register Models, then set your LM Studio model as a default in the model assignments

Note: LM Studio runs outside Docker, so the Open Notebook container must reach it via host.docker.internal instead of localhost.
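
To verify the LM Studio server is up before adding the credential, query its OpenAI-compatible endpoint from your host:

curl http://localhost:1234/v1/models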


Going Further

  • Switch models: Change the default model assignments in Settings → API Keys anytime
  • Add more models:
    • Ollama: Run ollama pull <model>, then re-discover models from the credential
    • LM Studio: Download from the app library
  • Deploy to server: Same docker-compose.yml works anywhere
  • Use cloud hybrid: Keep some local models, add cloud provider credentials for complex tasks

Common Model Choices

Model        Speed      Quality  VRAM  Best For
mistral      Fast       Good     4GB   Testing, general use
neural-chat  Medium     Better   6GB   Balanced, recommended
llama2       Slow       Best     8GB+  Complex reasoning
phi          Very Fast  Fair     2GB   Minimal hardware

Need Help? Join our Discord community - many users run local setups!