refactor: Remove 8 GPU cloud providers

Spawn agents use remote LLM APIs for inference — they need cheap CPU
instances, not expensive GPU VMs. Removed:

- Lambda Cloud
- RunPod
- Vast.ai
- Hyperstack
- FluidStack
- Genesis Cloud
- Paperspace
- Crusoe Cloud

This removes 112 matrix entries and ~8700 lines of GPU-specific code.
Remaining: 25 clouds, 350 matrix entries — all affordable CPU compute.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-02-11 00:28:50 +00:00
parent 3d0ac5e562
commit f29f6946cd
107 changed files with 2 additions and 8699 deletions

@@ -1,56 +0,0 @@
# Vast.ai
Spawn agents on the Vast.ai GPU marketplace via the `vastai` CLI. [Vast.ai](https://vast.ai/)
## Prerequisites
1. A Vast.ai account with API key from [Account Settings](https://cloud.vast.ai/account/)
2. Python 3 with pip (for installing the `vastai` CLI)
## Agents
#### Claude Code
```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/vastai/claude.sh)
```
#### Aider
```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/vastai/aider.sh)
```
#### Codex CLI
```bash
bash <(curl -fsSL https://openrouter.ai/lab/spawn/vastai/codex.sh)
```
## Non-Interactive Mode
```bash
VASTAI_SERVER_NAME=dev-gpu \
VASTAI_API_KEY=your-api-key \
OPENROUTER_API_KEY=sk-or-v1-xxxxx \
bash <(curl -fsSL https://openrouter.ai/lab/spawn/vastai/claude.sh)
```
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `VASTAI_API_KEY` | Vast.ai API key | _(prompted)_ |
| `VASTAI_SERVER_NAME` | Instance label | _(prompted)_ |
| `VASTAI_GPU_TYPE` | GPU type to search for | `RTX_4090` |
| `VASTAI_DISK_GB` | Disk size in GB | `40` |
| `VASTAI_IMAGE` | Docker image | `nvidia/cuda:12.1.0-devel-ubuntu22.04` |
| `OPENROUTER_API_KEY` | OpenRouter API key | _(prompted via OAuth)_ |
## Notes
- Vast.ai is a GPU marketplace; instances come with NVIDIA GPUs and CUDA pre-installed
- The `vastai` CLI is installed automatically if not present (`pip install vastai`)
- Instances are Docker containers; base tools are installed automatically on first run
- SSH access is via dynamic port mapping (non-standard ports)
- Pricing is per-hour, varies by GPU type and availability
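Because the mapped port is dynamic, connecting by hand means pulling the host and port out of the `vastai ssh-url` output first. A minimal sketch of that parsing, using a made-up URL (real values come from `vastai ssh-url <instance-id>`):

```bash
# Hypothetical URL in the form the CLI returns; a real one comes from:
#   vastai ssh-url <instance-id>
ssh_url="ssh://root@ssh4.vast.ai:12345"

# Host is the text between '@' and the next ':' or space
host=$(printf '%s' "${ssh_url}" | sed -n 's/.*@\([^: ]*\).*/\1/p')
# Port is the run of digits after the last ':'
port=$(printf '%s' "${ssh_url}" | sed -n 's/.*:\([0-9]*\)$/\1/p')

echo "${host} ${port}"  # → ssh4.vast.ai 12345
# With these in hand: ssh -p "${port}" "root@${host}"
```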

@@ -1,55 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Aider on Vast.ai"
echo ""
ensure_vastai_cli
ensure_vastai_token
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
verify_server_connectivity
install_base_tools
log_warn "Installing Aider..."
run_server "${VASTAI_INSTANCE_ID}" "pip install aider-chat 2>/dev/null || pip3 install aider-chat"
if ! run_server "${VASTAI_INSTANCE_ID}" "command -v aider &> /dev/null && aider --version &> /dev/null"; then
log_error "Aider installation verification failed"
exit 1
fi
log_info "Aider installation verified successfully"
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
MODEL_ID=$(get_model_id_interactive "openrouter/auto" "Aider") || exit 1
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
log_warn "Starting Aider..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && aider --model openrouter/${MODEL_ID}"

@@ -1,58 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Amazon Q on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Amazon Q CLI
log_warn "Installing Amazon Q CLI..."
run_server "${VASTAI_INSTANCE_ID}" "curl -fsSL https://desktop-release.q.us-east-1.amazonaws.com/latest/amazon-q-cli-install.sh | bash"
log_info "Amazon Q CLI installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_BASE_URL=https://openrouter.ai/api/v1"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Amazon Q interactively
log_warn "Starting Amazon Q..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && q chat"

@@ -1,69 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Claude Code on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Verify Claude Code is installed (fallback to manual install)
log_warn "Verifying Claude Code installation..."
if ! run_server "${VASTAI_INSTANCE_ID}" "command -v claude" >/dev/null 2>&1; then
log_warn "Claude Code not found, installing manually..."
run_server "${VASTAI_INSTANCE_ID}" "curl -fsSL https://claude.ai/install.sh | bash"
fi
log_info "Claude Code is installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"ANTHROPIC_BASE_URL=https://openrouter.ai/api" \
"ANTHROPIC_AUTH_TOKEN=${OPENROUTER_API_KEY}" \
"ANTHROPIC_API_KEY=" \
"CLAUDE_CODE_SKIP_ONBOARDING=1" \
"CLAUDE_CODE_ENABLE_TELEMETRY=0"
# 7. Configure Claude Code settings
setup_claude_code_config "${OPENROUTER_API_KEY}" \
"upload_file ${VASTAI_INSTANCE_ID}" \
"run_server ${VASTAI_INSTANCE_ID}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 8. Start Claude Code interactively
log_warn "Starting Claude Code..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && claude"
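The variable list handed to `inject_env_vars_ssh` above is what points Claude Code at OpenRouter instead of Anthropic's API. Assuming the shared helper (its body lives in `shared/common.sh` and isn't shown here) ultimately surfaces each `KEY=VALUE` pair as an export line in the remote shell profile, the mapping can be sketched locally with a placeholder key:

```bash
# Hypothetical placeholder; a real key comes from the OAuth flow or the env
OPENROUTER_API_KEY="sk-or-v1-example"

# Mirror the KEY=VALUE pairs passed to inject_env_vars_ssh above
for kv in \
  "ANTHROPIC_BASE_URL=https://openrouter.ai/api" \
  "ANTHROPIC_AUTH_TOKEN=${OPENROUTER_API_KEY}" \
  "CLAUDE_CODE_SKIP_ONBOARDING=1"; do
  printf 'export %s\n' "${kv}"
done
```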

@@ -1,58 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Cline on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Cline
log_warn "Installing Cline..."
run_server "${VASTAI_INSTANCE_ID}" "npm install -g cline"
log_info "Cline installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_BASE_URL=https://openrouter.ai/api/v1"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Cline interactively
log_warn "Starting Cline..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && cline"

@@ -1,49 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Codex CLI on Vast.ai"
echo ""
ensure_vastai_cli
ensure_vastai_token
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
verify_server_connectivity
install_base_tools
log_warn "Installing Codex CLI..."
run_server "${VASTAI_INSTANCE_ID}" "npm install -g @openai/codex"
log_info "Codex CLI installed"
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_BASE_URL=https://openrouter.ai/api/v1"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
log_warn "Starting Codex..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && codex"

@@ -1,59 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Gemini CLI on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Gemini CLI
log_warn "Installing Gemini CLI..."
run_server "${VASTAI_INSTANCE_ID}" "npm install -g @google/gemini-cli"
log_info "Gemini CLI installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"GEMINI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_BASE_URL=https://openrouter.ai/api/v1"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Gemini CLI interactively
log_warn "Starting Gemini..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && gemini"

@@ -1,57 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Goose on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Goose
log_warn "Installing Goose..."
run_server "${VASTAI_INSTANCE_ID}" "curl -fsSL https://github.com/block/goose/releases/latest/download/download_cli.sh | CONFIGURE=false bash"
log_info "Goose installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"GOOSE_PROVIDER=openrouter" \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Goose interactively
log_warn "Starting Goose..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && goose"

@@ -1,65 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "gptme on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install gptme
log_warn "Installing gptme..."
run_server "${VASTAI_INSTANCE_ID}" "pip install gptme 2>/dev/null || pip3 install gptme"
# Verify installation succeeded
if ! run_server "${VASTAI_INSTANCE_ID}" "command -v gptme && gptme --version" >/dev/null 2>&1; then
log_error "gptme installation verification failed"
exit 1
fi
log_info "gptme installation verified successfully"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# Get model preference
MODEL_ID=$(get_model_id_interactive "openrouter/auto" "gptme") || exit 1
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start gptme interactively
log_warn "Starting gptme..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && gptme -m openrouter/${MODEL_ID}"

@@ -1,58 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Open Interpreter on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Open Interpreter
log_warn "Installing Open Interpreter..."
run_server "${VASTAI_INSTANCE_ID}" "pip install open-interpreter 2>/dev/null || pip3 install open-interpreter"
log_info "Open Interpreter installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_API_KEY=${OPENROUTER_API_KEY}" \
"OPENAI_BASE_URL=https://openrouter.ai/api/v1"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Open Interpreter interactively
log_warn "Starting Open Interpreter..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && interpreter"

@@ -1,58 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Kilo Code on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Kilo Code
log_warn "Installing Kilo Code..."
run_server "${VASTAI_INSTANCE_ID}" "npm install -g @kilocode/cli"
log_info "Kilo Code installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"KILO_PROVIDER_TYPE=openrouter" \
"KILO_OPEN_ROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Kilo Code interactively
log_warn "Starting Kilo Code..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && kilocode"

@@ -1,362 +0,0 @@
#!/bin/bash
# Common bash functions for Vast.ai spawn scripts
# Uses Vast.ai CLI (vastai) — https://vast.ai/docs/
# Bash safety flags
set -eo pipefail
# ============================================================
# Provider-agnostic functions
# ============================================================
# Source shared provider-agnostic functions (local or remote fallback)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
if [[ -n "${SCRIPT_DIR}" && -f "${SCRIPT_DIR}/../../shared/common.sh" ]]; then
source "${SCRIPT_DIR}/../../shared/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/shared/common.sh)"
fi
# ============================================================
# Vast.ai specific functions
# ============================================================
# SSH_OPTS is defined in shared/common.sh
# Configurable timeout/delay constants
INSTANCE_STATUS_POLL_DELAY=${INSTANCE_STATUS_POLL_DELAY:-10}
SSH_RETRY_DELAY=${SSH_RETRY_DELAY:-5}
# Ensure vastai CLI is installed
ensure_vastai_cli() {
if command -v vastai &>/dev/null; then
return 0
fi
log_warn "Installing Vast.ai CLI..."
pip install vastai 2>/dev/null || pip3 install vastai 2>/dev/null || {
log_error "Failed to install vastai CLI"
log_error "Please install manually: pip install vastai"
return 1
}
log_info "Vast.ai CLI installed"
}
# Test Vast.ai API key validity
test_vastai_token() {
local result
result=$(vastai show instances 2>&1) || true
if printf '%s' "${result}" | grep -qi "invalid\|unauthorized\|forbidden\|error.*key\|error.*auth"; then
log_error "Invalid Vast.ai API key"
log_warn "Get your API key from: https://cloud.vast.ai/account/"
return 1
fi
return 0
}
# Ensure VASTAI_API_KEY is available (env var -> config file -> prompt+save)
ensure_vastai_token() {
# Vast.ai CLI reads from ~/.vast_api_key, so check that too
if [[ -z "${VASTAI_API_KEY:-}" ]] && [[ -f "${HOME}/.vast_api_key" ]]; then
VASTAI_API_KEY=$(cat "${HOME}/.vast_api_key" 2>/dev/null)
if [[ -n "${VASTAI_API_KEY}" ]]; then
log_info "Using Vast.ai API key from ~/.vast_api_key"
export VASTAI_API_KEY
fi
fi
ensure_api_token_with_provider \
"Vast.ai" \
"VASTAI_API_KEY" \
"${HOME}/.config/spawn/vastai.json" \
"https://cloud.vast.ai/account/" \
"test_vastai_token"
# Also set the key for the vastai CLI
vastai set api-key "${VASTAI_API_KEY}" >/dev/null 2>&1 || true
}
get_server_name() {
local server_name
server_name=$(get_resource_name "VASTAI_SERVER_NAME" "Enter instance name: ") || return 1
if ! validate_server_name "${server_name}"; then
return 1
fi
echo "${server_name}"
}
# Validate Vast.ai create_server parameters
# Usage: _validate_vastai_params DISK_GB IMAGE GPU_TYPE
_validate_vastai_params() {
local disk_gb="${1}" image="${2}" gpu_type="${3}"
if [[ ! "${disk_gb}" =~ ^[0-9]+$ ]]; then
log_error "Invalid VASTAI_DISK_GB: must be numeric"
return 1
fi
if [[ "${image}" =~ [\"\`\$\\] ]]; then
log_error "Invalid VASTAI_IMAGE: contains unsafe characters"
return 1
fi
if [[ "${gpu_type}" =~ [\"\`\$\\] ]]; then
log_error "Invalid VASTAI_GPU_TYPE: contains unsafe characters"
return 1
fi
}
# Search for the cheapest available GPU offer on Vast.ai
# Prints the offer ID on success
# Usage: _find_cheapest_offer GPU_TYPE
_find_cheapest_offer() {
local gpu_type="${1}"
log_warn "Searching for available ${gpu_type} offers..."
local offer_id
offer_id=$(vastai search offers "gpu_name=${gpu_type} num_gpus=1 rentable=true inet_down>100 reliability>0.95" -o "dph_total" --raw 2>/dev/null | python3 -c "
import json, sys
data = json.loads(sys.stdin.read())
if not data:
sys.exit(1)
print(data[0]['id'])
" 2>/dev/null) || {
log_error "No available offers found for GPU type: ${gpu_type}"
log_warn "Try a different GPU type with VASTAI_GPU_TYPE (e.g., RTX_3090, RTX_4080)"
log_warn "Browse available GPUs at: https://cloud.vast.ai/create/"
return 1
}
log_info "Found offer: ${offer_id}"
printf '%s' "${offer_id}"
}
# Create a Vast.ai instance from an offer and extract its ID
# Sets: VASTAI_INSTANCE_ID
# Usage: _create_vastai_instance OFFER_ID NAME IMAGE DISK_GB
_create_vastai_instance() {
local offer_id="${1}" name="${2}" image="${3}" disk_gb="${4}"
local create_output
create_output=$(vastai create instance "${offer_id}" \
--image "${image}" \
--disk "${disk_gb}" \
--ssh \
--direct \
--label "${name}" \
--onstart-cmd "apt-get update -y && apt-get install -y curl unzip git zsh" \
2>&1) || {
log_error "Failed to create Vast.ai instance"
log_error "${create_output}"
log_warn "Common issues:"
log_warn " - Insufficient account balance (add funds at https://cloud.vast.ai/billing/)"
log_warn " - GPU type unavailable"
return 1
}
# Extract instance ID from create output
VASTAI_INSTANCE_ID=$(printf '%s' "${create_output}" | grep -oP "new instance is \K[0-9]+" 2>/dev/null || \
printf '%s' "${create_output}" | python3 -c "
import sys, re
text = sys.stdin.read()
m = re.search(r'(\d{5,})', text)
if m:
print(m.group(1))
else:
sys.exit(1)
" 2>/dev/null) || {
log_error "Could not extract instance ID from create output"
log_error "Output: ${create_output}"
return 1
}
export VASTAI_INSTANCE_ID
log_info "Instance created: ID=${VASTAI_INSTANCE_ID}"
}
# Search for an available offer and create an instance
# Sets: VASTAI_INSTANCE_ID
create_server() {
local name="${1}"
local gpu_type="${VASTAI_GPU_TYPE:-RTX_4090}"
local disk_gb="${VASTAI_DISK_GB:-40}"
local image="${VASTAI_IMAGE:-nvidia/cuda:12.1.0-devel-ubuntu22.04}"
_validate_vastai_params "${disk_gb}" "${image}" "${gpu_type}" || return 1
local offer_id
offer_id=$(_find_cheapest_offer "${gpu_type}") || return 1
log_warn "Creating instance '${name}' (GPU: ${gpu_type}, image: ${image})..."
_create_vastai_instance "${offer_id}" "${name}" "${image}" "${disk_gb}" || return 1
wait_for_instance_ready "${VASTAI_INSTANCE_ID}"
}
# Wait for a Vast.ai instance to become ready and set SSH connection vars
# Sets: VASTAI_SSH_HOST, VASTAI_SSH_PORT
wait_for_instance_ready() {
local instance_id="${1}"
local max_attempts=${2:-60}
local attempt=1
log_warn "Waiting for instance to become ready..."
while [[ "${attempt}" -le "${max_attempts}" ]]; do
local status
status=$(vastai show instances --raw 2>/dev/null | python3 -c "
import json, sys
data = json.loads(sys.stdin.read())
for inst in data:
if str(inst.get('id')) == '${instance_id}':
print(inst.get('actual_status', 'unknown'))
sys.exit(0)
print('not_found')
" 2>/dev/null || printf '%s' "unknown")
if [[ "${status}" == "running" ]]; then
# Get SSH connection info
local ssh_url
ssh_url=$(vastai ssh-url "${instance_id}" 2>/dev/null) || {
log_warn "Instance running but SSH URL not yet available, retrying..."
sleep "${INSTANCE_STATUS_POLL_DELAY}"
attempt=$((attempt + 1))
continue
}
# Parse SSH URL: ssh -p PORT root@HOST or ssh://root@HOST:PORT
VASTAI_SSH_HOST=$(printf '%s' "${ssh_url}" | grep -oP '@\K[^ :]+' 2>/dev/null || \
printf '%s' "${ssh_url}" | sed -n 's/.*@\([^: ]*\).*/\1/p')
VASTAI_SSH_PORT=$(printf '%s' "${ssh_url}" | grep -oP '\-p\s*\K[0-9]+' 2>/dev/null || \
printf '%s' "${ssh_url}" | grep -oP ':(\K[0-9]+)' 2>/dev/null || printf '%s' "22")
if [[ -z "${VASTAI_SSH_HOST}" ]]; then
log_warn "Could not parse SSH URL: ${ssh_url}, retrying..."
sleep "${INSTANCE_STATUS_POLL_DELAY}"
attempt=$((attempt + 1))
continue
fi
export VASTAI_SSH_HOST VASTAI_SSH_PORT
log_info "Instance ready: SSH at ${VASTAI_SSH_HOST}:${VASTAI_SSH_PORT}"
return 0
fi
log_warn "Instance status: ${status} (${attempt}/${max_attempts})"
sleep "${INSTANCE_STATUS_POLL_DELAY}"
attempt=$((attempt + 1))
done
log_error "Instance did not become ready after ${max_attempts} attempts"
return 1
}
# Build SSH options string for Vast.ai (uses non-standard port)
_vastai_ssh_opts() {
printf '%s' "${SSH_OPTS} -o ConnectTimeout=10 -p ${VASTAI_SSH_PORT}"
}
verify_server_connectivity() {
local max_attempts=${1:-30}
local attempt=1
local ssh_target="root@${VASTAI_SSH_HOST}"
log_warn "Waiting for SSH connectivity to ${ssh_target}:${VASTAI_SSH_PORT}..."
while [[ "${attempt}" -le "${max_attempts}" ]]; do
# shellcheck disable=SC2086
if ssh $(_vastai_ssh_opts) "${ssh_target}" "echo ok" >/dev/null 2>&1; then
log_info "SSH connection established"
return 0
fi
log_warn "Waiting for SSH... (${attempt}/${max_attempts})"
sleep "${SSH_RETRY_DELAY}"
attempt=$((attempt + 1))
done
log_error "Instance failed to respond via SSH after ${max_attempts} attempts"
return 1
}
# Install base tools (Vast.ai instances are Docker containers)
install_base_tools() {
local ssh_target="root@${VASTAI_SSH_HOST}"
log_warn "Installing base tools..."
# shellcheck disable=SC2086
ssh $(_vastai_ssh_opts) "${ssh_target}" "apt-get update -y && apt-get install -y curl unzip git zsh npm" >/dev/null 2>&1 || true
# Install Bun
log_warn "Installing Bun..."
# shellcheck disable=SC2086
ssh $(_vastai_ssh_opts) "${ssh_target}" "curl -fsSL https://bun.sh/install | bash" >/dev/null 2>&1 || true
# Install Claude Code
log_warn "Installing Claude Code..."
# shellcheck disable=SC2086
ssh $(_vastai_ssh_opts) "${ssh_target}" "curl -fsSL https://claude.ai/install.sh | bash" >/dev/null 2>&1 || true
# Configure PATH in .bashrc and .zshrc
# shellcheck disable=SC2086
ssh $(_vastai_ssh_opts) "${ssh_target}" "grep -q '.bun/bin' ~/.bashrc 2>/dev/null || printf '%s\n' 'export PATH=\"\${HOME}/.claude/local/bin:\${HOME}/.bun/bin:\${PATH}\"' >> ~/.bashrc; grep -q '.bun/bin' ~/.zshrc 2>/dev/null || printf '%s\n' 'export PATH=\"\${HOME}/.claude/local/bin:\${HOME}/.bun/bin:\${PATH}\"' >> ~/.zshrc" >/dev/null 2>&1 || true
log_info "Base tools installed"
}
# Vast.ai uses root user
# These functions follow the IP-first arg pattern for compatibility with inject_env_vars_ssh
# The "ip" arg is the instance ID (used for consistency, not for SSH target)
# shellcheck disable=SC2086
run_server() {
local _ip="${1}"
local cmd="${2}"
ssh $(_vastai_ssh_opts) "root@${VASTAI_SSH_HOST}" "${cmd}"
}
# shellcheck disable=SC2086
upload_file() {
local _ip="${1}"
local local_path="${2}"
local remote_path="${3}"
scp $(_vastai_ssh_opts) "${local_path}" "root@${VASTAI_SSH_HOST}:${remote_path}"
}
# shellcheck disable=SC2086
interactive_session() {
local _ip="${1}"
local cmd="${2}"
ssh -t $(_vastai_ssh_opts) "root@${VASTAI_SSH_HOST}" "${cmd}"
}
destroy_server() {
local instance_id="${1}"
log_warn "Destroying instance ${instance_id}..."
if vastai destroy instance "${instance_id}" >/dev/null 2>&1; then
log_info "Instance ${instance_id} destroyed"
else
log_error "Failed to destroy instance ${instance_id}"
return 1
fi
}
list_servers() {
vastai show instances --raw 2>/dev/null | python3 -c "
import json, sys
data = json.loads(sys.stdin.read())
if not data:
print('No instances found')
sys.exit(0)
fmt = '{:<25} {:<12} {:<15} {:<12} {:<30}'
print(fmt.format('LABEL', 'ID', 'STATUS', 'GPU', 'SSH'))
print('-' * 94)
for inst in data:
label = inst.get('label', 'N/A') or 'N/A'
iid = str(inst.get('id', 'N/A'))
status = inst.get('actual_status', 'N/A')
gpu = inst.get('gpu_name', 'N/A')
ssh_host = inst.get('ssh_host', '')
ssh_port = inst.get('ssh_port', '')
ssh_info = 'N/A'
if ssh_host and ssh_port:
ssh_info = '{}:{}'.format(ssh_host, ssh_port)
print(fmt.format(label[:25], iid[:12], status[:15], gpu[:12], ssh_info[:30]))
" || {
log_error "Failed to list instances"
return 1
}
}
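`_find_cheapest_offer` leans on `vastai search offers -o dph_total` to return offers already sorted by price, so the embedded Python only has to take the first element. That selection step can be exercised on its own with canned data (the JSON below is a made-up two-offer sample, not real marketplace output):

```bash
# Hypothetical offers, pre-sorted cheapest-first as `-o dph_total` would return
offers='[{"id": 11111, "dph_total": 0.22}, {"id": 22222, "dph_total": 0.31}]'

cheapest=$(printf '%s' "${offers}" | python3 -c "
import json, sys
data = json.loads(sys.stdin.read())
if not data:
    sys.exit(1)          # no offers: caller falls back to an error message
print(data[0]['id'])     # list is pre-sorted, so index 0 is the cheapest
")
echo "${cheapest}"  # → 11111
```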

@@ -1,73 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "NanoClaw on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Node.js deps and clone nanoclaw
log_warn "Installing tsx..."
run_server "${VASTAI_INSTANCE_ID}" "source ~/.bashrc && bun install -g tsx"
log_warn "Cloning and building nanoclaw..."
run_server "${VASTAI_INSTANCE_ID}" "git clone https://github.com/gavrielc/nanoclaw.git ~/nanoclaw && cd ~/nanoclaw && npm install && npm run build"
log_info "NanoClaw installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"ANTHROPIC_API_KEY=${OPENROUTER_API_KEY}" \
"ANTHROPIC_BASE_URL=https://openrouter.ai/api"
# 7. Create nanoclaw .env file
log_warn "Configuring nanoclaw..."
DOTENV_TEMP=$(mktemp)
trap 'rm -f "${DOTENV_TEMP}"' EXIT
chmod 600 "${DOTENV_TEMP}"
cat > "${DOTENV_TEMP}" << EOF
ANTHROPIC_API_KEY=${OPENROUTER_API_KEY}
EOF
upload_file "${VASTAI_INSTANCE_ID}" "${DOTENV_TEMP}" "/root/nanoclaw/.env"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 8. Start nanoclaw
log_warn "Starting nanoclaw..."
log_warn "You will need to scan a WhatsApp QR code to authenticate."
echo ""
interactive_session "${VASTAI_INSTANCE_ID}" "cd ~/nanoclaw && source ~/.zshrc && npm run dev"
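`inject_env_vars_ssh` is defined in lib/common.sh; conceptually it persists `KEY=VALUE` pairs into the remote shell profile. A local sketch of that pattern (illustrative only — the helper name, profile path handling, and quoting strategy here are assumptions, not the real implementation):

```bash
# Sketch: append safely quoted export lines to a profile file with
# restrictive permissions, one per KEY=VALUE argument.
write_env_profile() {
    local profile="${1}"; shift
    touch "${profile}"
    chmod 600 "${profile}"
    local pair
    for pair in "$@"; do
        printf 'export %q=%q\n' "${pair%%=*}" "${pair#*=}" >> "${profile}"
    done
}
```

The `%q` directive shell-quotes values, so keys containing spaces or `$` survive being sourced later.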


@@ -1,66 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "OpenClaw on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install openclaw via bun
log_warn "Installing openclaw..."
run_server "${VASTAI_INSTANCE_ID}" "source ~/.bashrc && bun install -g openclaw"
log_info "OpenClaw installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# Get model preference
MODEL_ID=$(get_model_id_interactive "openrouter/auto" "Openclaw") || exit 1
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" \
"ANTHROPIC_API_KEY=${OPENROUTER_API_KEY}" \
"ANTHROPIC_BASE_URL=https://openrouter.ai/api"
# 7. Configure openclaw
setup_openclaw_config "${OPENROUTER_API_KEY}" "${MODEL_ID}" \
"upload_file ${VASTAI_INSTANCE_ID}" \
"run_server ${VASTAI_INSTANCE_ID}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 8. Start openclaw gateway in background and launch TUI
log_warn "Starting openclaw..."
run_server "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && nohup openclaw gateway > /tmp/openclaw-gateway.log 2>&1 &"
sleep 2
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && openclaw tui"
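The fixed `sleep 2` between starting the gateway and attaching the TUI is a race. A bounded polling helper is more robust; as a sketch (the log path matches the `nohup` redirect above, but the readiness marker and helper name are assumptions, not known openclaw output):

```bash
# Sketch: wait until a file contains a pattern, retrying up to a bounded
# number of one-second attempts, instead of sleeping a fixed interval.
wait_for_line() {
    local file="${1}" pattern="${2}" tries="${3:-15}"
    local i
    for ((i = 0; i < tries; i++)); do
        grep -q "${pattern}" "${file}" 2>/dev/null && return 0
        sleep 1
    done
    return 1
}
```

On the remote side this could run via `run_server` against `/tmp/openclaw-gateway.log` before launching the TUI.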


@@ -1,56 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "OpenCode on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install OpenCode
log_warn "Installing OpenCode..."
run_server "${VASTAI_INSTANCE_ID}" "$(opencode_install_cmd)"
log_info "OpenCode installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start OpenCode interactively
log_warn "Starting OpenCode..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && opencode"
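`interactive_session` hands a single command string to `ssh -t`, so anything with spaces or `$` must be pre-quoted by the caller. A sketch of a quoting helper for building such strings (the `remote_quote` name is illustrative, not part of lib/common.sh):

```bash
# Sketch: shell-quote each argument with printf %q and join them into
# one string safe to pass through ssh's remote-command argument.
remote_quote() {
    local out="" arg
    for arg in "$@"; do
        out+=" $(printf '%q' "${arg}")"
    done
    printf '%s' "${out# }"
}
```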


@@ -1,62 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2154
set -eo pipefail
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
# shellcheck source=vastai/lib/common.sh
if [[ -f "${SCRIPT_DIR}/lib/common.sh" ]]; then
source "${SCRIPT_DIR}/lib/common.sh"
else
eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/vastai/lib/common.sh)"
fi
log_info "Plandex on Vast.ai"
echo ""
# 1. Ensure vastai CLI and API key are configured
ensure_vastai_cli
ensure_vastai_token
# 2. Get instance name and create instance
SERVER_NAME=$(get_server_name)
create_server "${SERVER_NAME}"
# 3. Wait for SSH connectivity and install base tools
verify_server_connectivity
install_base_tools
# 4. Install Plandex
log_warn "Installing Plandex..."
run_server "${VASTAI_INSTANCE_ID}" "curl -sL https://plandex.ai/install.sh | bash"
# Verify installation succeeded
if ! run_server "${VASTAI_INSTANCE_ID}" "command -v plandex &> /dev/null && plandex version &> /dev/null"; then
log_error "Plandex installation verification failed"
exit 1
fi
log_info "Plandex installation verified successfully"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
log_info "Using OpenRouter API key from environment"
else
OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Inject environment variables
log_warn "Setting up environment variables..."
inject_env_vars_ssh "${VASTAI_INSTANCE_ID}" upload_file run_server \
"OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Vast.ai instance setup completed successfully!"
log_info "Instance: ${SERVER_NAME} (ID: ${VASTAI_INSTANCE_ID})"
echo ""
# 7. Start Plandex interactively
log_warn "Starting Plandex..."
sleep 1
clear
interactive_session "${VASTAI_INSTANCE_ID}" "source ~/.zshrc && plandex"
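The Plandex script is the only one that verifies its install (step 4) before proceeding. That check generalizes to any remote tool; as a sketch (`run_server` and `VASTAI_INSTANCE_ID` come from the surrounding script, and the helper name is illustrative):

```bash
# Sketch: verify a tool is on PATH on the remote instance, mirroring the
# Plandex verification above but parameterized by tool name.
verify_remote_tool() {
    local tool="${1}"
    if run_server "${VASTAI_INSTANCE_ID}" "command -v ${tool} >/dev/null 2>&1"; then
        log_info "${tool} installation verified"
    else
        log_error "${tool} installation verification failed"
        return 1
    fi
}
```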