Local Text-to-Speech Setup

Run text-to-speech locally for free, private podcast generation using OpenAI-compatible TTS servers.


Why Local TTS?

| Benefit | Description |
|-----------|-------------|
| Free | No per-character costs after setup |
| Private | Audio never leaves your machine |
| Unlimited | No rate limits or quotas |
| Offline | Works without internet |

Quick Start with Speaches

Speaches is an open-source, OpenAI-compatible TTS server.

💡 Ready-made Docker Compose files are available, with complete setup instructions and configuration examples. Just copy and run!

Step 1: Create Docker Compose File

Create a folder and add docker-compose.yml:

services:
  speaches:
    image: ghcr.io/speaches-ai/speaches:latest-cpu
    container_name: speaches
    ports:
      - "8969:8000"
    volumes:
      - hf-hub-cache:/home/ubuntu/.cache/huggingface/hub
    restart: unless-stopped

volumes:
  hf-hub-cache:

Step 2: Start and Download Model

# Start Speaches
docker compose up -d

# Wait for startup
sleep 10

# Download voice model (~500MB)
docker compose exec speaches uv tool run speaches-cli model download speaches-ai/Kokoro-82M-v1.0-ONNX

Step 3: Test

curl "http://localhost:8969/v1/audio/speech" -s \
  -H "Content-Type: application/json" \
  --output test.mp3 \
  --data '{
    "input": "Hello! Local TTS is working.",
    "model": "speaches-ai/Kokoro-82M-v1.0-ONNX",
    "voice": "af_bella"
  }'

Play test.mp3 to verify.
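
Any command-line audio player works; for example:

# macOS
afplay test.mp3

# Linux (mpv is one option; use any player you have)
mpv test.mp3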

Step 4: Configure Open Notebook

Via Settings UI (Recommended):

  1. Go to Settings → API Keys
  2. Click Add Credential → Select OpenAI-Compatible
  3. Enter base URL for TTS: http://host.docker.internal:8969/v1 (Docker) or http://localhost:8969/v1 (local)
  4. Click Save, then Test Connection

Legacy (Deprecated) — Environment variables:

# Docker: in your Open Notebook docker-compose.yml
environment:
  - OPENAI_COMPATIBLE_BASE_URL_TTS=http://host.docker.internal:8969/v1

# Local development (shell)
export OPENAI_COMPATIBLE_BASE_URL_TTS=http://localhost:8969/v1

Step 5: Add Model in Open Notebook

  1. Go to Settings → API Keys and open your OpenAI-Compatible credential
  2. Click Discover Models and select speaches-ai/Kokoro-82M-v1.0-ONNX, or add it as a custom model if discovery doesn't list it
  3. Register it as a Text-to-Speech model:
    • Provider: openai_compatible
    • Model Name: speaches-ai/Kokoro-82M-v1.0-ONNX (must match the ID Speaches reports; see the check below)
    • Display Name: Local TTS
  4. Set as default if desired
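
To confirm the exact model ID, you can list what Speaches reports (the same endpoint used under Troubleshooting below):

curl http://localhost:8969/v1/models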

Available Voices

The Kokoro model includes multiple voices:

Female Voices

| Voice ID | Description |
|-----------|-------------|
| af_bella | Clear, professional |
| af_sarah | Warm, friendly |
| af_nicole | Energetic, expressive |

Male Voices

| Voice ID | Description |
|-----------|-------------|
| am_adam | Deep, authoritative |
| am_michael | Friendly, conversational |

British Accents

| Voice ID | Description |
|-----------|-------------|
| bf_emma | British female, professional |
| bm_george | British male, formal |

Test Different Voices

for voice in af_bella af_sarah am_adam am_michael; do
  curl "http://localhost:8969/v1/audio/speech" -s \
    -H "Content-Type: application/json" \
    --output "test_${voice}.mp3" \
    --data "{
      \"input\": \"Hello, this is the ${voice} voice.\",
      \"model\": \"speaches-ai/Kokoro-82M-v1.0-ONNX\",
      \"voice\": \"${voice}\"
    }"
done

GPU Acceleration

For faster generation with NVIDIA GPUs:

services:
  speaches:
    image: ghcr.io/speaches-ai/speaches:latest-cuda
    container_name: speaches
    ports:
      - "8969:8000"
    volumes:
      - hf-hub-cache:/home/ubuntu/.cache/huggingface/hub
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  hf-hub-cache:
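
After starting, you can confirm the container actually sees the GPU (this assumes the NVIDIA Container Toolkit is installed on the host):

docker compose exec speaches nvidia-smi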

Docker Networking

When configuring your OpenAI-Compatible credential in Settings → API Keys, use the appropriate TTS base URL for your setup:

Open Notebook in Docker (macOS/Windows)

TTS Base URL: http://host.docker.internal:8969/v1

Open Notebook in Docker (Linux)

TTS Base URL (Option 1 — Docker bridge IP): http://172.17.0.1:8969/v1

Option 2: Use host networking mode (docker run --network host ...), then use: http://localhost:8969/v1
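
Option 3: Make host.docker.internal resolve on Linux via Docker's host-gateway alias. A minimal sketch for the Open Notebook docker-compose.yml (assuming the service is named open-notebook):

services:
  open-notebook:
    # ... other config
    extra_hosts:
      - "host.docker.internal:host-gateway"

Then use http://host.docker.internal:8969/v1, as on macOS/Windows.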

Remote Server

Run Speaches on a different machine:

TTS Base URL: http://server-ip:8969/v1 (replace with your server's IP)
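
Before adding the credential, verify the server is reachable from the machine running Open Notebook (if this fails, check that port 8969 is open in the server's firewall):

curl http://server-ip:8969/v1/models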


Multi-Speaker Podcasts

Configure different voices for each speaker:

Speaker 1 (Host):
  Model: speaches-ai/Kokoro-82M-v1.0-ONNX
  Voice: af_bella

Speaker 2 (Guest):
  Model: speaches-ai/Kokoro-82M-v1.0-ONNX
  Voice: am_adam

Speaker 3 (Narrator):
  Model: speaches-ai/Kokoro-82M-v1.0-ONNX
  Voice: bf_emma

Troubleshooting

Service Won't Start

# Check logs
docker compose logs speaches

# Check whether something else is using the port
lsof -i :8969

# Restart
docker compose down && docker compose up -d

Connection Refused

# Test Speaches is running
curl http://localhost:8969/v1/models

# From inside Open Notebook container
docker exec -it open-notebook curl http://host.docker.internal:8969/v1/models

Model Not Found

# List downloaded models
docker compose exec speaches uv tool run speaches-cli model list

# Download if missing
docker compose exec speaches uv tool run speaches-cli model download speaches-ai/Kokoro-82M-v1.0-ONNX

Poor Audio Quality

  • Try different voices
  • Adjust speed: "speed": 0.9 to 1.2 (see the example below)
  • Check that the model downloaded completely
  • Allocate more memory
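
For example, re-run the Step 3 request with a speed field (0.9 slows narration slightly, 1.2 speeds it up):

curl "http://localhost:8969/v1/audio/speech" -s \
  -H "Content-Type: application/json" \
  --output test_slow.mp3 \
  --data '{
    "input": "Testing a slower narration speed.",
    "model": "speaches-ai/Kokoro-82M-v1.0-ONNX",
    "voice": "af_bella",
    "speed": 0.9
  }'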

Slow Generation

| Solution | How |
|----------|-----|
| Use GPU | Switch to the latest-cuda image |
| More CPU | Allocate more cores in Docker |
| Faster model | Use smaller/quantized models |
| SSD storage | Move Docker volumes to SSD |

Performance Tips

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| CPU | 2 cores | 4+ cores |
| RAM | 2 GB | 4+ GB |
| Storage | 5 GB | 10 GB (for multiple models) |
| GPU | None | NVIDIA (optional) |

Resource Limits

services:
  speaches:
    # ... other config
    mem_limit: 4g
    cpus: 2

Monitor Usage

docker stats speaches

Comparison: Local vs Cloud

| Aspect | Local (Speaches) | Cloud (OpenAI/ElevenLabs) |
|--------|------------------|---------------------------|
| Cost | Free | $0.015-0.10/min |
| Privacy | Complete | Data sent to provider |
| Speed | Depends on hardware | Usually faster |
| Quality | Good | Excellent |
| Setup | Moderate | Simple API key |
| Offline | Yes | No |
| Voices | Limited | Many options |

When to Use Local

  • Privacy-sensitive content
  • High-volume generation
  • Development/testing
  • Offline environments
  • Cost control

When to Use Cloud

  • Premium quality needs
  • Multiple languages
  • Time-sensitive projects
  • Limited hardware

Other Local TTS Options

Any OpenAI-compatible TTS server works, as long as it implements the /v1/audio/speech endpoint. To use one:

  1. Add an OpenAI-Compatible credential in Settings → API Keys with the server's TTS base URL
  2. Register the model with provider openai_compatible
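
To check a candidate server before adding it, hit its speech endpoint directly (host, port, model, and voice here are placeholders; substitute your server's values):

curl "http://server-ip:8969/v1/audio/speech" -s \
  -H "Content-Type: application/json" \
  --output check.mp3 \
  --data '{"input": "Test.", "model": "your-model", "voice": "your-voice"}'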