# Spawn
Spawn is a matrix of agents x clouds. Every script provisions a cloud server, installs an agent, injects OpenRouter credentials, and drops the user into an interactive session.
## The Matrix
`manifest.json` is the source of truth. It tracks:
- `agents` — coding agents / AI tools (Claude Code, OpenClaw, NanoClaw, ...)
- `clouds` — cloud providers to run them on (Sprite, Hetzner, ...)
- `matrix` — which cloud/agent combinations are `"implemented"` vs `"missing"`
## How to Improve Spawn
When run via `./improve.sh`, your job is to pick ONE of these tasks and execute it:
### 1. Fill a missing matrix entry
Look at `manifest.json` → `matrix` for any `"missing"` entry. To implement it:
- Find the cloud's `lib/common.sh` — it has all the provider-specific primitives (create server, run command, upload file, interactive session)
- Find the agent's existing script on another cloud — it shows the install steps, config files, env vars, and launch command
- Combine them: use the cloud's primitives to execute the agent's setup steps
- The script goes at `{cloud}/{agent}.sh`
Pattern for every script:
1. Source `{cloud}/lib/common.sh` (local or remote fallback)
2. Authenticate with cloud provider
3. Provision server/VM
4. Wait for readiness
5. Install the agent
6. Get OpenRouter API key (env var or OAuth)
7. Inject env vars into shell config
8. Write agent-specific config files
9. Launch interactive session
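As a concrete sketch, the nine steps map onto a script shaped like this. The function names stand in for the primitives a real `{cloud}/lib/common.sh` provides; the bodies here are local stubs (and the key is a placeholder) so the skeleton is self-contained rather than an actual deployment:

```shell
#!/bin/bash
set -eo pipefail

# Stub stand-ins for the cloud's primitives — a real script sources
# {cloud}/lib/common.sh instead of defining these locally.
create_server()       { SERVER_IP="203.0.113.10"; }        # step 3: provision
wait_for_ready()      { :; }                               # step 4: readiness
run_server()          { echo "remote($SERVER_IP): $*"; }   # exec on the VM
upload_file()         { echo "upload: $1 -> $2"; }         # push a config file
interactive_session() { echo "attached to $SERVER_IP"; }   # step 9

create_server
wait_for_ready
run_server "pipx install gptme"                            # step 5: install agent
API_KEY="${OPENROUTER_API_KEY:-sk-or-demo}"                # step 6: placeholder key
run_server "echo 'export OPENROUTER_API_KEY=$API_KEY' >> ~/.bashrc"  # step 7
upload_file "gptme.toml" "~/.config/gptme/config.toml"     # step 8: agent config
interactive_session
```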
OpenRouter injection is mandatory. Every agent script MUST:
- Set `OPENROUTER_API_KEY` in the shell environment
- Set provider-specific env vars (e.g., `ANTHROPIC_BASE_URL=https://openrouter.ai/api`)
- These come from the agent's `env` field in `manifest.json`
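Step 7 usually amounts to appending exports to the remote shell rc file. A minimal local sketch (the key value is a placeholder and a temp file stands in for the remote `~/.bashrc`):

```shell
#!/bin/bash
set -eo pipefail

# Placeholder key — real scripts read it from the environment or the OAuth flow.
OPENROUTER_API_KEY="${OPENROUTER_API_KEY:-sk-or-demo}"
rcfile="$(mktemp)"   # stands in for the remote ~/.bashrc

{
  printf 'export OPENROUTER_API_KEY=%s\n' "$OPENROUTER_API_KEY"
  # Provider-specific override for Anthropic-style agents:
  printf 'export ANTHROPIC_BASE_URL=%s\n' "https://openrouter.ai/api"
} >> "$rcfile"
```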
### 2. Add a new agent
Research coding agents, AI CLI tools, or AI-powered dev tools. To add one:
- Add an entry to `manifest.json` → `agents` with: name, description, url, install command, launch command, and env vars needed for OpenRouter
- Add `"missing"` entries to the matrix for every existing cloud
- Implement the script for at least one cloud
- Update `README.md`
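An agent entry might look roughly like the sketch below. The exact field names are an assumption based on the list above — copy the shape of an existing agent in `manifest.json` rather than this sketch, and note that `someagent` and its URLs are invented:

```json
{
  "name": "someagent",
  "description": "Hypothetical terminal coding agent",
  "url": "https://example.com/someagent",
  "install": "curl -fsSL https://example.com/install.sh | bash",
  "launch": "someagent",
  "env": {
    "OPENROUTER_API_KEY": "<injected>",
    "ANTHROPIC_BASE_URL": "https://openrouter.ai/api"
  }
}
```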
Where to find new agents:
- GitHub trending in AI/coding categories
- OpenRouter's ecosystem
- HuggingFace agent frameworks
- CLI tools that accept `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` (these work with OpenRouter via base URL override)
### 3. Add a new cloud provider
Research cloud providers with API-based provisioning. To add one:
- Create `{cloud}/lib/common.sh` with the provider's primitives:
  - Auth/token management (env var → config file → prompt)
  - Server creation (API call or CLI)
  - SSH/exec connectivity
  - File upload
  - Interactive session
  - Server destruction
- Add an entry to `manifest.json` → `clouds`
- Add `"missing"` entries to the matrix for every existing agent
- Implement at least one agent script
- Update `README.md`
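A skeleton for those primitives might look like this — `examplecloud`, its env var, and its endpoints are invented for illustration, not a real provider, and the stubs only echo the calls they would make:

```shell
#!/bin/bash
# Hypothetical {cloud}/lib/common.sh skeleton. A real file starts with the
# shared/common.sh fallback source; stubs below sketch each primitive.

get_api_token() {                 # auth: env var -> config file -> (prompt)
  if [ -n "${EXAMPLECLOUD_TOKEN:-}" ]; then
    echo "$EXAMPLECLOUD_TOKEN"
  elif [ -f "$HOME/.config/examplecloud/token" ]; then
    cat "$HOME/.config/examplecloud/token"
  else
    return 1                      # real code would prompt via safe_read
  fi
}

create_server()  { echo "POST /v1/servers name=$1 region=$2"; }  # API call
run_server()     { echo "ssh root@$SERVER_IP -- $*"; }           # exec over SSH
upload_file()    { echo "scp $1 root@$SERVER_IP:$2"; }           # file upload
destroy_server() { echo "DELETE /v1/servers/$1"; }               # teardown
```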
Good candidate clouds have:
- REST API or simple CLI for provisioning
- SSH access to the created server
- Cloud-init or similar userdata support
- Pay-per-hour pricing (so users can destroy after use)
### 4. Extend tests
`test/run.sh` contains the test harness. When adding a new cloud or agent:
- Add mock functions for the cloud's CLI/API calls
- Add per-script assertions matching the agent's setup steps
- Run `bash test/run.sh` to verify
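For example, a cloud-CLI mock can shadow the real binary inside the test shell and record each invocation so assertions can check the call sequence. The shape here is illustrative — follow the mock conventions already in `test/run.sh`:

```shell
#!/bin/bash
# Illustrative mock of a cloud CLI (flyctl here): the shell function shadows
# the binary, returns canned responses, and logs calls into CALLS.
CALLS=""

flyctl() {
  CALLS="$CALLS flyctl:$1"
  case "$1" in
    machine) echo '{"id":"m-123","state":"started"}' ;;
    ssh)     echo "mock-ssh" ;;
    *)       echo "flyctl $*" ;;
  esac
}

flyctl machine create >/dev/null
flyctl ssh console   >/dev/null
```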
## File Structure Convention
```
spawn/
  cli/
    src/index.ts     # CLI entry point (bun/TypeScript)
    src/manifest.ts  # Manifest fetch + cache logic
    src/commands.ts  # All subcommands (interactive, list, run, etc.)
    src/version.ts   # Version constant
    package.json     # npm package (@openrouter/spawn)
    install.sh       # One-liner installer (bun → npm → bash fallback)
  spawn.sh           # Bash fallback CLI (no bun/node required)
  shared/
    common.sh        # Provider-agnostic shared utilities
  {cloud}/
    lib/common.sh    # Cloud-specific functions (sources shared/common.sh)
    {agent}.sh       # Agent deployment scripts
  manifest.json      # The matrix (source of truth)
  improve.sh         # Run this to trigger one improvement cycle
  test/run.sh        # Test harness
  README.md          # User-facing docs
  CLAUDE.md          # This file - contributor guide
```
## Architecture: Shared Library Pattern
`shared/common.sh` — core utilities used by all clouds:
- Logging: `log_info`, `log_warn`, `log_error` (colored output)
- Input handling: `safe_read` (works in interactive and piped contexts)
- OAuth flow: `try_oauth_flow`, `get_openrouter_api_key_oauth` (browser-based auth)
- Network utilities: `nc_listen` (cross-platform netcat wrapper), `open_browser`
- SSH helpers: `generate_ssh_key_if_missing`, `get_ssh_fingerprint`, `generic_ssh_wait`
- Security: `validate_model_id`, `json_escape`
`{cloud}/lib/common.sh` — cloud-specific extensions:
- Sources `shared/common.sh` at the top
- Adds provider-specific functions:
  - Sprite: `ensure_sprite_installed`, `get_sprite_name`, `run_sprite`, etc.
  - Hetzner: API wrappers for server creation, SSH key management, etc.
  - DigitalOcean: Droplet provisioning, API calls, etc.
  - Vultr: Instance management via REST API
  - Linode: Linode-specific provisioning functions
Agent scripts (`{cloud}/{agent}.sh`):
- Source their cloud's `lib/common.sh` (which auto-sources `shared/common.sh`)
- Use shared functions for logging, OAuth, SSH setup
- Use cloud functions for provisioning and connecting to servers
- Deploy the specific agent with its configuration
### Why This Structure?
- DRY principle: OAuth, logging, SSH logic written once in `shared/common.sh`
- Consistency: all scripts use the same authentication and error handling patterns
- Maintainability: Bug fixes in shared code benefit all providers automatically
- Extensibility: New clouds only need to implement provider-specific logic
- Testability: Shared functions can be tested independently
### Source Pattern
Every cloud's `lib/common.sh` starts with:

```bash
#!/bin/bash
# Cloud-specific functions for {provider}

# Source shared provider-agnostic functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../shared/common.sh" || {
    echo "ERROR: Failed to load shared/common.sh" >&2
    exit 1
}

# ... cloud-specific functions below ...
```
This pattern ensures:
- Shared utilities are always available
- Path resolution works when sourced from any location
- Script fails fast if shared library is missing
## Shell Script Rules
These rules are non-negotiable — violating them breaks remote execution for all users.
### curl|bash Compatibility
Every script MUST work when executed via `bash <(curl -fsSL URL)`:
- NEVER use relative paths for sourcing (`source ./lib/...`, `source ../shared/...`)
- NEVER rely on `$0`, `dirname $0`, or `BASH_SOURCE` resolving to a real filesystem path
- ALWAYS use the local-or-remote fallback pattern:

```bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
if [[ -f "$SCRIPT_DIR/lib/common.sh" ]]; then
  source "$SCRIPT_DIR/lib/common.sh"
else
  eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/{cloud}/lib/common.sh)"
fi
```

- Similarly, `{cloud}/lib/common.sh` MUST use the same fallback for `shared/common.sh`
### macOS bash 3.x Compatibility
macOS ships bash 3.2. All scripts MUST work on it:
- NO `echo -e` — use `printf` for escape sequences
- NO `source <(cmd)` inside `bash <(curl ...)` — use `eval "$(cmd)"` instead
- NO `((var++))` with `set -e` — use `var=$((var + 1))` (avoids falsy-zero exit)
- NO `local` keyword inside `( ... ) &` subshells — it is not function scope
- NO `set -u` (nounset) — use `${VAR:-}` for optional env var checks instead
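The safe replacements, side by side (each comment shows the banned form):

```shell
#!/bin/bash
set -eo pipefail

printf 'line1\nline2\n' >/dev/null   # instead of: echo -e "line1\nline2"

count=0
count=$((count + 1))                 # instead of: ((count++)) under set -e

eval "$(printf 'x=5')"               # instead of: source <(printf 'x=5')

name="${UNSET_VAR:-default}"         # instead of relying on set -u to catch it
```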
### Conventions
- `#!/bin/bash` + `set -eo pipefail` (no `u` flag)
- Use `${VAR:-}` for all optional env var checks (`OPENROUTER_API_KEY`, cloud tokens, etc.)
- Remote fallback URL: `https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/{path}`
- All env vars documented in the cloud's README.md
## Autonomous Loops
When running autonomous improvement/refactoring loops (`./improve.sh --loop`):
- Run `bash -n` on every changed `.sh` file before committing — syntax errors break everything
- NEVER revert a prior fix — if `shared/common.sh` was changed to fix macOS compat, don't undo it
- NEVER re-introduce deleted functions — if `write_oauth_response_file` was removed, don't call it
- NEVER change the source/eval fallback pattern in `lib/common.sh` files — it's load-bearing for curl|bash
- Test after EACH iteration — don't batch multiple changes without verification
- If a change breaks tests, STOP — revert and ask for guidance rather than compounding the regression
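A minimal syntax sweep in the spirit of the first rule. In a real hook the file list would come from `git diff --cached --name-only -- '*.sh'`; a temp file stands in here so the sketch runs outside a repo:

```shell
#!/bin/bash
set -eo pipefail

tmp="$(mktemp)"                    # stands in for one changed .sh file
echo 'echo ok' > "$tmp"
changed="$tmp"                     # real hook: git diff --cached --name-only

status=pass
for f in $changed; do
  bash -n "$f" || status=fail      # reject scripts that fail the syntax check
done
rm -f "$tmp"
```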
## Git Workflow
- Always work on a feature branch — never commit directly to main (except urgent one-line fixes)
- Before creating a PR, check `git status` and `git log` to verify branch state
- Use `gh pr create` from the feature branch, then `gh pr merge --squash`
- Never rebase main or use `--force` unless explicitly asked
## After Each Change
- `bash -n {file}` syntax check on all modified scripts
- Update `manifest.json` matrix status to `"implemented"`
- Update the cloud's README.md with usage instructions
- Commit with a descriptive message