mirror of
https://github.com/OpenRouterTeam/spawn.git
synced 2026-04-30 12:59:32 +00:00
RunPod
RunPod GPU cloud pods via the GraphQL API.
Prerequisites
- A RunPod account with an API key (created under Settings)
- An SSH public key added to your RunPod account (same settings page)
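Under the hood, the scripts drive RunPod's GraphQL API for pod creation and termination. As an illustration only, a pod-creation payload might be assembled like this; the `podFindAndDeployOnDemand` mutation name and its input fields are assumptions for this sketch, not copied from `runpod/lib/common.sh`:

```shell
# Assemble a pod-creation GraphQL mutation from the same variables the
# scripts accept. The mutation name and field names are assumptions;
# consult runpod/lib/common.sh for the query the scripts actually send.
gpu_type="${RUNPOD_GPU_TYPE:-NVIDIA RTX A4000}"
gpu_count="${RUNPOD_GPU_COUNT:-1}"
query=$(cat <<EOF
mutation {
  podFindAndDeployOnDemand(input: {
    name: "dev-gpu",
    gpuTypeId: "$gpu_type",
    gpuCount: $gpu_count,
    cloudType: ALL
  }) { id }
}
EOF
)
printf '%s\n' "$query"
```

The assembled query would then be POSTed to the API with `curl`, authenticated with `RUNPOD_API_KEY`.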
Agents
Claude Code
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/claude.sh)
OpenClaw
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/openclaw.sh)
NanoClaw
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/nanoclaw.sh)
Aider
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/aider.sh)
Goose
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/goose.sh)
Codex CLI
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/codex.sh)
Open Interpreter
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/interpreter.sh)
Gemini CLI
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/gemini.sh)
Amazon Q CLI
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/amazonq.sh)
Cline
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/cline.sh)
gptme
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/gptme.sh)
OpenCode
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/opencode.sh)
Plandex
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/plandex.sh)
Non-Interactive Mode
RUNPOD_SERVER_NAME=dev-gpu \
RUNPOD_API_KEY=your-api-key \
OPENROUTER_API_KEY=sk-or-v1-xxxxx \
bash <(curl -fsSL https://openrouter.ai/lab/spawn/runpod/claude.sh)
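For CI or other unattended runs, it can help to fail fast when a required variable is missing instead of letting the script fall back to a prompt. A minimal pre-flight check (the `check_env` helper is a sketch, not part of the scripts; the variable names match the table below):

```shell
# check_env VAR... : print "env ok" if every named variable is set,
# otherwise list the missing ones. Illustrative helper, not shipped code.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v:-}"
    [ -n "$val" ] || missing="$missing $v"
  done
  if [ -z "$missing" ]; then
    echo "env ok"
  else
    echo "missing:$missing"
  fi
}

# Demo with placeholder values:
RUNPOD_API_KEY=dummy
RUNPOD_SERVER_NAME=dev-gpu
OPENROUTER_API_KEY=dummy
check_env RUNPOD_API_KEY RUNPOD_SERVER_NAME OPENROUTER_API_KEY   # env ok
```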
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `RUNPOD_API_KEY` | RunPod API key | (prompted) |
| `RUNPOD_SERVER_NAME` | Pod name | (prompted) |
| `RUNPOD_GPU_TYPE` | GPU type ID | `NVIDIA RTX A4000` |
| `RUNPOD_GPU_COUNT` | Number of GPUs | `1` |
| `RUNPOD_IMAGE` | Docker image | `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04` |
| `RUNPOD_VOLUME_GB` | Persistent volume size (GB) | `50` |
| `RUNPOD_CONTAINER_DISK_GB` | Container disk size (GB) | `20` |
| `RUNPOD_CLOUD_TYPE` | Cloud type (`ALL`, `COMMUNITY`, `SECURE`) | `ALL` |
| `OPENROUTER_API_KEY` | OpenRouter API key | (prompted via OAuth) |
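Defaults like these are conventionally resolved with shell `${VAR:-default}` expansion, so any value you export wins over the documented default. Whether `runpod/lib/common.sh` uses exactly this pattern is an assumption; this sketch only mirrors the table:

```shell
# Apply the documented defaults only when the caller has not set a value.
RUNPOD_GPU_TYPE="${RUNPOD_GPU_TYPE:-NVIDIA RTX A4000}"
RUNPOD_GPU_COUNT="${RUNPOD_GPU_COUNT:-1}"
RUNPOD_VOLUME_GB="${RUNPOD_VOLUME_GB:-50}"
RUNPOD_CONTAINER_DISK_GB="${RUNPOD_CONTAINER_DISK_GB:-20}"
RUNPOD_CLOUD_TYPE="${RUNPOD_CLOUD_TYPE:-ALL}"
echo "$RUNPOD_GPU_COUNT x $RUNPOD_GPU_TYPE, ${RUNPOD_VOLUME_GB}GB volume, $RUNPOD_CLOUD_TYPE"
```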
Notes
- RunPod is a GPU cloud provider -- pods come with NVIDIA GPUs and CUDA pre-installed
- SSH keys must be added via the RunPod web console (not via API)
- Pods use Docker containers; base tools are installed automatically on first run
- SSH access is via direct TCP port mapping or RunPod's SSH proxy
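The last note can be made concrete: when a pod exposes a public IP with a mapped SSH port, you connect directly; otherwise you go through the proxy at `ssh.runpod.io`. This helper is a sketch only; the `pod-id@ssh.runpod.io` username convention and the fallback logic are assumptions to check against `runpod/lib/common.sh`:

```shell
# Prefer direct TCP when the pod has a public IP and a mapped SSH port;
# otherwise fall back to RunPod's SSH proxy. Prints the command to run.
ssh_command() {
  pod_id="$1"; public_ip="$2"; ssh_port="$3"
  if [ -n "$public_ip" ] && [ -n "$ssh_port" ]; then
    echo "ssh -p $ssh_port root@$public_ip"
  else
    echo "ssh $pod_id@ssh.runpod.io"
  fi
}

ssh_command abc123 203.0.113.10 10022   # ssh -p 10022 root@203.0.113.10
ssh_command abc123 "" ""                # ssh abc123@ssh.runpod.io
```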