
# Lambda Cloud

Launches agents on Lambda GPU Cloud instances via the REST API. Instances come up with the `ubuntu` user, and tools are installed manually at launch (Lambda does not support cloud-init).
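For reference, a sketch of the kind of launch request made against the Lambda Cloud API v1 `instance-operations/launch` endpoint. The region, instance type, SSH key name, and instance name below are placeholder values, and whether the launch scripts send exactly this shape is an assumption:

```shell
# Sketch of a Lambda Cloud API v1 launch request (all values are placeholders).
launch_payload() {
  cat <<'JSON'
{
  "region_name": "us-west-1",
  "instance_type_name": "gpu_1x_a10",
  "ssh_key_names": ["my-key"],
  "name": "dev-mk1"
}
JSON
}

# The API key is passed as the basic-auth username:
#   curl -su "$LAMBDA_API_KEY:" -H 'Content-Type: application/json' \
#     -d "$(launch_payload)" \
#     https://cloud.lambdalabs.com/api/v1/instance-operations/launch
launch_payload
```

The response includes the new instance's ID, which the scripts can poll until the instance is reachable over SSH.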

## Agents

| Agent | Launch |
| --- | --- |
| Claude Code | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/claude.sh)` |
| OpenClaw | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/openclaw.sh)` |
| NanoClaw | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/nanoclaw.sh)` |
| Aider | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/aider.sh)` |
| Goose | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/goose.sh)` |
| Codex CLI | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/codex.sh)` |
| Open Interpreter | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/interpreter.sh)` |
| Gemini CLI | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/gemini.sh)` |
| Amazon Q CLI | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/amazonq.sh)` |
| Cline | `bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/cline.sh)` |

## Non-Interactive Mode

Set the instance name and API keys up front to skip the interactive prompts:

```bash
LAMBDA_SERVER_NAME=dev-mk1 \
LAMBDA_API_KEY=your-key \
OPENROUTER_API_KEY=sk-or-v1-xxxxx \
  bash <(curl -fsSL https://openrouter.ai/lab/spawn/lambda/claude.sh)
```