readest/docker
Huang Xin 6e7c9d1395
feat(sync): bundled settings replica kind for cross-device prefs and credentials (#4094)
* feat(sync): add bundled `settings` replica kind for cross-device prefs and credentials

Adds a single-row `settings` replica that syncs a whitelist of
`SystemSettings` fields across devices via per-field LWW (one entry
per dot-namespaced path). Plaintext for theme / highlight colour /
TTS configuration; encrypted (AES-GCM under the user's sync
passphrase) for kosync / Readwise / Hardcover credentials.

Highlights:
- Push-side diff against an in-memory snapshot for plaintext paths
  and a localStorage SHA-256 hash for encrypted paths, so a refresh
  doesn't re-publish or re-prompt for the passphrase.
- Pull-side cipher-fingerprint dedupe + per-row passphrase gate;
  decryption failures surface as toasts (wrong passphrase / orphan
  cipher) instead of silent drops.
- Auto-recovery for orphaned ciphers: when a row references a
  saltId no longer in `replica_keys`, clear the local hash and
  re-encrypt under the current salt on the next save.
- Single in-flight `/sync/replica-keys` fetch with a value cache
  to coalesce the boot-time burst of concurrent unlock callers.

* fix(sync): guard settings dot-path helpers against prototype-polluting keys

Reject `__proto__`, `constructor`, and `prototype` segments in the
settings adapter's `readPath` / `writePath`. Every caller currently
passes a constant from `SETTINGS_WHITELIST`, so the guard is purely
defensive — but it silences the CodeQL prototype-pollution warning
on PR #4094 and keeps the helpers safe if a future call site ever
forwards an untrusted path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

2026-05-08 19:03:23 +02:00

Self-Hosting with Docker/Podman Compose

Stack

| Service | Image | Description |
|---|---|---|
| client | built from `../Dockerfile` | Readest frontend |
| db | `supabase/postgres` | Postgres with Supabase extensions |
| kong | `kong:2.8.1` | API gateway routing requests to the Supabase services |
| auth | `supabase/gotrue:v2.185.0` | auth service (email, JWT) |
| rest | `postgrest/postgrest:v14.3` | REST API over Postgres (PostgREST) |
| minio | `minio/minio` | S3-compatible object storage |
| minio-setup | `minio/mc` | helper container that creates the S3 buckets |

Exposed ports

| Port | Service |
|---|---|
| 3000 | Readest app |
| 7000 | Kong API gateway |
| 9000 | MinIO S3 API |
| 9001 | MinIO console UI |

Running with Docker/Podman Compose

1. Set up `.env`

Copy the example file:

```shell
cp docker/.env.example docker/.env
```

Then update `docker/.env`:

- set `POSTGRES_PASSWORD` to a strong password (32+ chars)
- set `JWT_SECRET` to a random secret (32+ chars)
- regenerate `ANON_KEY` and `SERVICE_ROLE_KEY` as HS256 JWTs signed with your `JWT_SECRET` (use jwt.io or a similar tool):
  - `ANON_KEY` payload: `{"role": "anon"}`
  - `SERVICE_ROLE_KEY` payload: `{"role": "service_role"}`
- set `MINIO_ROOT_PASSWORD` to a strong password
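If you'd rather not paste secrets into jwt.io, the two keys can also be minted on the command line. A minimal sketch, assuming a POSIX shell with `openssl` installed; note that real Supabase keys often carry additional claims such as `iss` and `exp`, which this omits:

```shell
# Base64url-encode stdin (standard base64, URL-safe alphabet, no padding).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

JWT_SECRET="change-me-to-your-32-plus-char-secret"  # use your real JWT_SECRET
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)

for role in anon service_role; do
  payload=$(printf '{"role":"%s"}' "$role" | b64url)
  # HS256 = HMAC-SHA256 over "<header>.<payload>", base64url-encoded.
  sig=$(printf '%s.%s' "$header" "$payload" \
    | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
  echo "$role: $header.$payload.$sig"
done
```

Paste the token from the `anon` line into `ANON_KEY` and the one from the `service_role` line into `SERVICE_ROLE_KEY`.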

2. Start the Stack

Run from the `docker/` directory:

```shell
cd docker
docker compose up --build -d
```

The client image is built locally on the first run; subsequent starts reuse the cached image.

3. Access

- Readest app: http://localhost:3000
- MinIO console: http://localhost:9001 (log in with `MINIO_ROOT_USER` / `MINIO_ROOT_PASSWORD`)

Hot Reload (development)

To develop against the compose stack, set the `client` service's build target to `development-stage`, which runs the Next.js dev server. To enable hot reload, also uncomment the `volumes` block in the `client` service in `compose.yaml`:

```yaml
volumes:
  - ../:/app
  - /app/node_modules
  - /app/apps/readest-app/node_modules
  - /app/apps/readest-app/public/vendor
  - /app/apps/readest-app/.next
  - /app/packages/foliate-js/node_modules
```

The first mount overlays your local repo into the container. The remaining anonymous volumes shadow the directories that were pre-built inside the image, so the container's installed dependencies and vendor assets are used instead of whatever is on your host.
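Put together, the dev configuration of the `client` service might look roughly like this. This is a hypothetical excerpt for orientation only, not a copy of `compose.yaml` — the real service definition has more fields, and the `context` value here is an assumption:

```yaml
services:
  client:
    build:
      context: ..                 # assumed: repo root relative to docker/
      target: development-stage   # runs the Next.js dev server
    volumes:
      - ../:/app                  # overlay the local repo
      - /app/node_modules         # keep the image's installed deps
      - /app/apps/readest-app/node_modules
      - /app/apps/readest-app/public/vendor
      - /app/apps/readest-app/.next
      - /app/packages/foliate-js/node_modules
```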

Stop the Stack

```shell
cd docker
docker compose down
```

To also remove the volumes (database and storage data):

```shell
cd docker
docker compose down -v
```

Building the Dockerfile standalone

The Dockerfile requires build args for the Next.js public env vars, since they are inlined at build time:

```shell
docker build \
  --target production-stage \
  --build-arg NEXT_PUBLIC_SUPABASE_URL=http://localhost:7000 \
  --build-arg NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon-key> \
  --build-arg NEXT_PUBLIC_APP_PLATFORM=web \
  --build-arg NEXT_PUBLIC_API_BASE_URL=http://localhost:3000 \
  --build-arg NEXT_PUBLIC_OBJECT_STORAGE_TYPE=s3 \
  --build-arg NEXT_PUBLIC_STORAGE_FIXED_QUOTA=1073741824 \
  --build-arg NEXT_PUBLIC_TRANSLATION_FIXED_QUOTA=50000 \
  -t readest-client \
  .
```
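As a sanity check on the storage quota above: `NEXT_PUBLIC_STORAGE_FIXED_QUOTA` is a byte count, and `1073741824` is exactly 1 GiB:

```shell
# NEXT_PUBLIC_STORAGE_FIXED_QUOTA: 1 GiB expressed in bytes.
echo $((1024 * 1024 * 1024))  # → 1073741824
```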

Run the built image:

```shell
docker run -p 3000:3000 \
  -e SUPABASE_URL=http://kong:8000 \
  -e SUPABASE_ANON_KEY=<anon-key> \
  -e SUPABASE_ADMIN_KEY=<service-role-key> \
  -e S3_ENDPOINT=http://localhost:9000 \
  -e S3_REGION=us-east-1 \
  -e S3_BUCKET_NAME=readest-files \
  -e S3_ACCESS_KEY_ID=<minio-user> \
  -e S3_SECRET_ACCESS_KEY=<minio-password> \
  readest-client
```