Mirror of https://github.com/OpenRouterTeam/spawn.git (synced 2026-04-30).
# RunPod

Spawn agents on RunPod GPU cloud pods via the GraphQL API.
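The same GraphQL API can be queried directly to inspect pods the scripts create. A minimal sketch; the endpoint and the `myself { pods ... }` fields are assumed from RunPod's public schema and may change:

```bash
# List your pods; requires RUNPOD_API_KEY to be exported.
# Endpoint and field names are assumptions based on RunPod's public GraphQL schema.
curl -s "https://api.runpod.io/graphql?api_key=$RUNPOD_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"query": "query { myself { pods { id name desiredStatus } } }"}'
```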
## Prerequisites
- A RunPod account with an API key (created under Settings)
- An SSH public key added to your RunPod account (same settings page)
## Agents
### Claude Code

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/claude.sh)
```

### OpenClaw

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/openclaw.sh)
```

### NanoClaw

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/nanoclaw.sh)
```

### Aider

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/aider.sh)
```

### Goose

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/goose.sh)
```

### Codex CLI

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/codex.sh)
```

### Open Interpreter

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/interpreter.sh)
```

### Gemini CLI

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/gemini.sh)
```

### Amazon Q CLI

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/amazonq.sh)
```

### Cline

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/cline.sh)
```

### gptme

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/gptme.sh)
```

### OpenCode

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/opencode.sh)
```

### Plandex

```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/plandex.sh)
```
## Non-Interactive Mode

Set the required variables up front to skip all prompts:

```bash
RUNPOD_SERVER_NAME=dev-gpu \
RUNPOD_API_KEY=your-api-key \
OPENROUTER_API_KEY=sk-or-v1-xxxxx \
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/claude.sh)
```
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `RUNPOD_API_KEY` | RunPod API key | (prompted) |
| `RUNPOD_SERVER_NAME` | Pod name | (prompted) |
| `RUNPOD_GPU_TYPE` | GPU type ID | `NVIDIA RTX A4000` |
| `RUNPOD_GPU_COUNT` | Number of GPUs | `1` |
| `RUNPOD_IMAGE` | Docker image | `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04` |
| `RUNPOD_VOLUME_GB` | Persistent volume size (GB) | `50` |
| `RUNPOD_CONTAINER_DISK_GB` | Container disk size (GB) | `20` |
| `RUNPOD_CLOUD_TYPE` | Cloud type (`ALL`, `COMMUNITY`, `SECURE`) | `ALL` |
| `OPENROUTER_API_KEY` | OpenRouter API key | (prompted via OAuth) |
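Any of these variables can be combined in one non-interactive invocation. For example, a sketch that requests a two-GPU secure-cloud pod with a larger volume (the pod name and credential values are placeholders; the GPU type string must match an ID listed in RunPod's console):

```bash
# Spawn Claude Code on a 2-GPU SECURE-cloud pod with a 100 GB volume.
RUNPOD_SERVER_NAME=train-box \
RUNPOD_GPU_COUNT=2 \
RUNPOD_CLOUD_TYPE=SECURE \
RUNPOD_VOLUME_GB=100 \
RUNPOD_API_KEY=your-api-key \
OPENROUTER_API_KEY=sk-or-v1-xxxxx \
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/claude.sh)
```

Unset variables fall back to the defaults in the table above or are prompted for interactively.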
## Notes
- RunPod is a GPU cloud provider; pods come with NVIDIA GPUs and CUDA pre-installed
- SSH keys must be added via the RunPod web console (they cannot be set via the API)
- Pods run as Docker containers; base tools are installed automatically on first run
- SSH access uses either a direct TCP port mapping or RunPod's SSH proxy