spawn/modal/gptme.sh
Sprite 6fdfe1b014 refactor: Extract ENV_TEMP pattern to provider-specific inject functions
Completed ENV_TEMP pattern extraction across remaining providers:

1. Modal: gptme.sh (1 script) - uses inject_env_vars_local
2. GCP: all 10 agent scripts - use inject_env_vars_ssh
3. Fly.io: all 11 agent scripts - use the new inject_env_vars_fly
   - Added inject_env_vars_fly() to fly/lib/common.sh
   - Handles both .bashrc and .zshrc (Fly-specific requirement)
4. Sprite: amazonq, cline, gemini (3 scripts) - use inject_env_vars_sprite
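Because Fly images ship both bash and zsh, the Fly variant has to register the env file with both rc files. A hedged sketch of just that step (the function name and the idempotency check are assumptions for illustration, not the actual fly/lib/common.sh code):

```shell
#!/bin/bash
# Hypothetical: how the Fly variant might register the env file with both
# shells. The real logic lives in fly/lib/common.sh; this only illustrates
# the "both .bashrc and .zshrc" requirement.
append_source_line() {
    local line="$1" rc
    for rc in "$HOME/.bashrc" "$HOME/.zshrc"; do
        touch "$rc"
        # Idempotent: skip if the exact line is already present.
        grep -qxF "$line" "$rc" || printf '%s\n' "$line" >>"$rc"
    done
}
```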

Total scripts converted in this commit: 25
Total scripts converted in Round 25 Task #1: 78 scripts

Each conversion replaces 11-15 lines of temp file management with a single
function call that handles creation, permissions, content generation, upload,
sourcing, and cleanup.
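For reference, a minimal sketch of what such a helper might look like. The function name, the upload/run callback protocol, and the remote temp path are all assumptions; the real implementations live in each provider's lib/common.sh and their signatures may differ.

```shell
#!/bin/bash
# Hypothetical sketch of the shared pattern, not the actual helper.
inject_env_vars_sketch() {
    local upload_cmd="$1" run_cmd="$2"
    shift 2
    # Creation + permissions: temp file readable only by the current user.
    local tmp
    tmp=$(mktemp)
    chmod 600 "$tmp"
    # Content generation: one export line per VAR=value argument.
    local pair
    for pair in "$@"; do
        printf 'export %s\n' "$pair" >>"$tmp"
    done
    # Upload, source into the remote shell rc, then clean up both copies.
    "$upload_cmd" "$tmp" /tmp/spawn_env.sh
    "$run_cmd" "cat /tmp/spawn_env.sh >> ~/.zshrc && rm -f /tmp/spawn_env.sh"
    rm -f "$tmp"
}
```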

The only remaining ENV_TEMP patterns are DOTENV_TEMP in nanoclaw scripts,
which are agent-specific .env files and should remain as-is.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-08 04:15:02 +00:00


#!/bin/bash
set -e
# Source common functions - try local file first, fall back to remote
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" 2>/dev/null && pwd)"
if [[ -f "$SCRIPT_DIR/lib/common.sh" ]]; then
    source "$SCRIPT_DIR/lib/common.sh"
else
    eval "$(curl -fsSL https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/modal/lib/common.sh)"
fi
log_info "gptme on Modal"
echo ""
# 1. Ensure Modal CLI
ensure_modal_cli
# 2. Get sandbox name and create sandbox
SERVER_NAME=$(get_server_name)
create_server "$SERVER_NAME"
# 3. Wait for base tools
wait_for_cloud_init
# 4. Install gptme
log_warn "Installing gptme..."
run_server "pip install gptme 2>/dev/null || pip3 install gptme"
log_info "gptme installed"
# 5. Get OpenRouter API key
echo ""
if [[ -n "${OPENROUTER_API_KEY:-}" ]]; then
    log_info "Using OpenRouter API key from environment"
else
    OPENROUTER_API_KEY=$(get_openrouter_api_key_oauth 5180)
fi
# 6. Get model preference
echo ""
log_warn "Browse models at: https://openrouter.ai/models"
log_warn "Which model would you like to use with gptme?"
MODEL_ID=$(safe_read "Enter model ID [auto]: ") || MODEL_ID=""
# Default to "auto"; the openrouter/ provider prefix is added at launch,
# so defaulting to "openrouter/auto" here would double the prefix.
MODEL_ID="${MODEL_ID:-auto}"
# 7. Inject environment variables into ~/.zshrc
log_warn "Setting up environment variables..."
inject_env_vars_local upload_file run_server \
    "OPENROUTER_API_KEY=${OPENROUTER_API_KEY}"
echo ""
log_info "Modal sandbox setup completed successfully!"
log_info "Sandbox: $SERVER_NAME (ID: $MODAL_SANDBOX_ID)"
echo ""
# 8. Start gptme interactively
log_warn "Starting gptme..."
sleep 1
clear
interactive_session "source ~/.zshrc && gptme -m openrouter/${MODEL_ID}"