The get_provision_timeout and get_agent_timeout functions used printenv with
dynamically constructed variable names, which is fragile across shells and
platforms. Replace with eval-based parameter expansion using the already-
sanitized safe_agent variable (restricted to [A-Za-z0-9_]).
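A runnable sketch of the pattern (function name, variable name, and the 1800s default are illustrative stand-ins, not the repo's actual code):

```shell
# Hypothetical sketch of the eval-based lookup. The agent name is
# sanitized to [A-Za-z0-9_] first, so the eval can only ever expand a
# well-formed variable name.
get_agent_timeout() {
  local agent="$1"
  local safe_agent="${agent//[^A-Za-z0-9_]/}"
  local override
  # Expand _AGENT_TIMEOUT_<agent> without printenv; sees unexported
  # shell variables too, and behaves the same everywhere bash runs.
  eval "override=\"\${_AGENT_TIMEOUT_${safe_agent}:-}\""
  echo "${override:-1800}"
}
```

Unlike printenv, parameter expansion also sees non-exported shell variables, so per-agent overrides set earlier in the same script are picked up.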
Fixes #3234

Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(security): array-based agent detection and GCP instance name validation
Replace shell string concatenation in detectAgent() with individual
`command -v` calls per agent, eliminating the compound shell command.
Add _gcp_validate_instance_name() to validate GCP instance names match
[a-z][a-z0-9-]*[a-z0-9] before passing to gcloud commands.
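An illustrative version of the check, using the pattern stated above (the real helper lives in the GCP e2e driver and feeds gcloud afterwards):

```shell
# Validate a GCE instance name against the commit's stated pattern:
# starts with a lowercase letter, ends with a letter or digit, only
# [a-z0-9-] in between.
_gcp_validate_instance_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z][a-z0-9-]*[a-z0-9]$'
}
```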
Fixes #3151
Fixes #3149
Agent: security-auditor
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: add instance name validation in _gcp_cleanup_stale()
Defense-in-depth: validate instance names from GCP API before passing
to gcloud delete, consistent with validation at other call sites.
Agent: pr-maintainer
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Add a billing pre-check to _gcp_validate_env so the E2E orchestrator
skips GCP gracefully ("skipped — credentials not configured") instead
of failing every agent individually when billing is disabled.
Fixes #3091
Agent: test-engineer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The previous grep -o '"id":[0-9]*' pattern matched all numeric id fields
in the droplets JSON response (including nested image/region/size ids),
overcounting droplets by 2x and falsely reporting quota exhaustion.
Replace with jq '.droplets | length' which correctly counts only top-level
droplet objects. This restores DigitalOcean capacity detection so e2e runs
can use available droplet slots.
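A minimal repro of the overcounting on hand-written JSON (assumes jq is installed):

```shell
# Two droplets, each with nested image/size ids: the grep pattern
# matches every numeric "id" field, while jq counts only the top-level
# droplet objects.
droplets_json='{"droplets":[
  {"id":1,"image":{"id":100},"region":{"sizes":[]},"size":{"id":200}},
  {"id":2,"image":{"id":101},"size":{"id":200}}
]}'

grep_count=$(printf '%s' "$droplets_json" | grep -o '"id":[0-9]*' | wc -l | tr -d ' ')
jq_count=$(printf '%s' "$droplets_json" | jq '.droplets | length')

echo "grep counted ${grep_count}, jq counted ${jq_count}"
```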
-- qa/e2e-tester
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
_digitalocean_max_parallel() called log_warn which writes colored output
to stdout, polluting the captured return value when invoked via
cloud_max=$(cloud_max_parallel). The downstream integer comparison
[ "${effective_parallel}" -gt "${cloud_max}" ] then fails with
'integer expression expected', silently leaving the droplet limit cap
unapplied. Fix: redirect log_warn output to stderr so only the numeric
value is captured.
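A minimal reproduction of the capture rule (names are illustrative; the droplet limit of 3 is a stand-in value):

```shell
# Anything a helper prints to stdout ends up inside $(...) command
# substitution, so diagnostics must go to stderr.
log_warn() {
  printf 'WARN: %s\n' "$*" >&2   # stderr: never pollutes captured output
}

_digitalocean_max_parallel() {
  log_warn "API unreachable, assuming default droplet limit"
  echo 3                          # only the numeric value reaches stdout
}

cloud_max=$(_digitalocean_max_parallel)
[ "${cloud_max}" -gt 0 ] && echo "cap applied: ${cloud_max}"
```

With the warning on stdout instead, `cloud_max` would contain the colored warning text and the integer comparison would fail exactly as described.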
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
GCP, Sprite, and DigitalOcean had commented-out code `# local agent="$2"`
in their `_headless_env` functions. Hetzner already used the cleaner style
`# $2 = agent (unused but part of the interface)`. Normalize to match.
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Replaces all references to DO_API_TOKEN with DIGITALOCEAN_ACCESS_TOKEN,
matching DigitalOcean's official CLI and API documentation. This includes
TypeScript source, tests, shell scripts, Packer config, CI workflows,
and documentation.
Supersedes #3068 (rebased onto current main).
Agent: pr-maintainer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes #3070
The port_check / port_check_r variables stored executable shell code as
strings and expanded them via ${port_check} inside cloud_exec commands.
This is an eval-equivalent pattern: if any part of the variable were ever
derived from dynamic input, it would be directly exploitable as command
injection.
Replace the pattern with _check_port_18789() remote function definitions
inside each cloud_exec call. The function is defined and called entirely
on the remote side — no shell code is stored in local bash variables.
Affected functions:
- _openclaw_ensure_gateway (2 usages)
- _openclaw_restart_gateway (1 usage)
- _openclaw_verify_gateway_resilience (3 usages)
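A sketch of the pattern under these assumptions: `cloud_exec` is replaced with a local `bash -c` stand-in, and the probe body is illustrative (the real check targets the gateway port):

```shell
# The check is a function defined and called inside the remote script
# itself, so no executable shell code is stored in a local variable.
cloud_exec() { bash -c "$2"; }   # stand-in; real version runs over ssh

remote_script='
_check_port_18789() {
  # Illustrative probe: report listening if anything holds port 18789.
  if command -v ss >/dev/null 2>&1 && ss -ltn 2>/dev/null | grep -q ":18789 "; then
    echo listening
  else
    echo closed
  fi
}
_check_port_18789
'
status=$(cloud_exec my-app "$remote_script")
echo "gateway port: ${status}"
```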
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Clean up three remaining stale references to ~/.cursor/bin that were
not caught in the #3058 path migration:
- manifest.json: update notes field to reflect ~/.local/bin/agent
- sh/e2e/lib/provision.sh: remove ~/.cursor/bin from path_prefix
- sh/e2e/lib/verify.sh: remove ~/.cursor/bin from binary check PATH
Fixes #3065
Agent: issue-fixer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
- E2E: _digitalocean_max_parallel() now returns 0 (not 1) when no capacity
- E2E: run_agents_for_cloud() skips cloud with actionable error when capacity is 0
- CLI: checkAccountStatus() includes droplet names in limit-reached error message
Fixes #3059
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The cursor installer changed its binary install location from
~/.cursor/bin/agent to ~/.local/bin/agent (as of 2026-03-25 release).
Updates:
- agent-setup.ts: fix PATH in install, launchCmd, updateCmd, and
the pathScript written to ~/.bashrc/~/.zshrc
- verify.sh: fix E2E binary check to look in ~/.local/bin first
- Bump CLI to 0.27.3
-- qa/e2e-tester
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Add cursor to ALL_AGENTS, verify_cursor, input_test_cursor, and their
dispatch cases so e2e sweeps cover the cursor agent.
Fixes #3042
Agent: issue-fixer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace StrictHostKeyChecking=no with accept-new across all E2E cloud
drivers (aws, gcp, digitalocean, hetzner), the shared SSH_BASE_OPTS
constant, and pull-history.ts. accept-new trusts new hosts on first
connection (needed for freshly provisioned VMs) but verifies on
subsequent connections, preventing MITM attacks on reconnect.
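The shared constant after the change might look like this (illustrative fragment; `UserKnownHostsFile` and `ConnectTimeout` are assumed companions, not confirmed by this commit):

```shell
# accept-new records an unknown host key on first contact (fresh VM),
# then enforces it: a changed key on reconnect aborts the connection
# instead of silently trusting it, unlike StrictHostKeyChecking=no.
SSH_BASE_OPTS="-o StrictHostKeyChecking=accept-new \
  -o UserKnownHostsFile=${HOME}/.ssh/known_hosts \
  -o ConnectTimeout=10"
```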
Fixes #3031
Agent: style-reviewer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(e2e): ensure agent binary available after spawnrc fallback
When the provision timeout kills the CLI before agent install completes
(common in --fast mode on Sprite), the manual .spawnrc fallback creates
credentials but does not verify the agent binary is present. This causes
"openclaw not found" failures in E2E verification.
Add _ensure_agent_binary() that runs after the manual .spawnrc fallback:
1. Checks if the agent binary exists on the remote VM
2. If missing, runs the agent's install command directly
3. Verifies the binary is available after install
Also adds cursor agent to the env vars fallback and binary check.
Fixes #3028
Agent: ux-engineer
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(security): add --proto '=https' to cursor install curl command
Agent: pr-maintainer
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace shell interpolation of base64-encoded commands in SSH invocations
with stdin piping. Previously the encoded command was interpolated into the
remote shell string; now it is passed via stdin to `base64 -d | bash`,
making the approach structurally immune to command injection regardless
of the encoded content.
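A runnable sketch of the stdin approach, with ssh replaced by plain `bash -c` so it executes locally:

```shell
remote_cmd='echo "hello from remote"'
encoded=$(printf '%s' "$remote_cmd" | base64 | tr -d '\n')

# Before (fragile): the encoded string was interpolated into the remote
# shell command:  ssh host "printf '%s' '${encoded}' | base64 -d | bash"
# After: the encoded data travels on stdin and never touches the command
# string, so no payload content can alter the command structure.
out=$(printf '%s' "$encoded" | bash -c 'base64 -d | bash')
echo "$out"
```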
Fixes #3029
Fixes #3022
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The GCP E2E cloud driver defaulted to us-central1-a when GCP_ZONE was
not set in the environment. The QA VM stores zone config in
~/.config/spawn/gcp.json (alongside GCP_PROJECT) but _gcp_validate_env
only read GCP_PROJECT from the environment — it never loaded GCP_ZONE.
This caused E2E failures when us-central1-a had insufficient resources:
3 agents (openclaw, opencode, kilocode) failed with "SSH port never
opened" because GCP couldn't provision instances in that zone.
Fix: load both GCP_PROJECT and GCP_ZONE from the config file in
_gcp_validate_env when they are not already set in the environment,
matching how key-request.sh loads GCP_PROJECT for provisioning.
Verified: all 3 previously failing agents now pass on europe-west1-b.
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
On interactive provision failure, save the harness log to a persistent
path (/tmp/spawn-interactive-harness-last.log) for post-mortem inspection,
and filter output to only show [harness] prefixed lines (30 lines) instead
of dumping 50 raw lines of mixed output.
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Ahmed Abushagur <ahmed@abushagur.com>
Hermes installs a Python virtualenv which takes 20+ min on fresh VMs.
The previous 300s install timeout caused the CLI to give up before
writing .spawnrc, leading to 30-min E2E timeouts on Hetzner, DigitalOcean,
and GCP (but not Sprite, which has a manual .spawnrc fallback).
Changes:
- agent-setup.ts: hermes installAgent timeout 300s → 600s
- common.sh: add hermes per-agent overrides (_PROVISION_TIMEOUT_hermes=720,
_AGENT_TIMEOUT_hermes=3600) to give the install enough headroom
- package.json: bump CLI version 0.25.26 → 0.25.27
-- qa/e2e-tester
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
The QA account's primary IP limit is ~3, so running 5 agents in parallel
exhausted the quota, causing codex and zeroclaw to fail with
resource_limit_exceeded. Reducing _hetzner_max_parallel to 3 keeps
provisioning within quota while still running agents concurrently.
Verified: zeroclaw and codex both PASS on Hetzner after this fix.
-- qa/e2e-tester
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
- hetzner.sh: Pipe base64-encoded command via stdin to SSH instead of
embedding it in the SSH command string via variable expansion. The
remote bash reads stdin, base64-decodes, and executes.
- verify.sh: Add remote-side re-validation of base64 and timeout values
in _stage_prompt_remotely and _stage_timeout_remotely. Values are
assigned to remote shell variables and validated before writing to
temp files, providing defense-in-depth against injection.
- provision.sh: Add explicit early rejection of dangerous shell chars
($, `, \) in env var values from cloud_headless_env, and add
remote-side re-validation of base64 payload before writing.
Fixes #2937
Fixes #2938
Fixes #2939
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
- fix misplaced interactive_provision comment block in interactive.sh:
the comment was positioned before _report_ux_issues but described the
interactive_provision function; moved it to be adjacent to its function
- apply interactive E2E improvements already in main working tree:
e2e.sh: add verify_agent call after interactive_provision to wait for
.spawnrc before running input tests (aligns interactive with headless flow)
-- qa/code-quality
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Three fixes for Sprite E2E failures in long-running batches (73+ min):
1. Retry `_sprite_provision_verify`: list failures now retry 3x with
exponential backoff (5s, 10s, 20s) instead of failing immediately.
Fixes kilocode batch 6 "Could not list Sprite instances" errors.
2. Increase `CREATE_TIMEOUT_SECS` default from 300s to 600s and add
`Client.Timeout`, `request canceled`, and `authentication failed`
to the transient error retry pattern in `spriteRetry`. Also uses
linear backoff (3s * attempt) instead of fixed 3s delay.
Fixes hermes batch 7 HTTP timeout errors.
3. Add `_sprite_refresh_auth` + `cloud_refresh_auth` interface. The
E2E orchestrator calls `cloud_refresh_auth` before each provisioning
batch. For Sprite, this re-validates the token via `sprite org list`
and attempts `sprite auth refresh` if expired.
Fixes junie batch 8 "authentication failed" errors.
Fixes #2934
Agent: ux-engineer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Hetzner E2E runs fail with `resource_limit_exceeded` when stale primary
IPs from previous test runs consume the account quota. This adds proactive
cleanup at two levels:
1. E2E shell driver: `_hetzner_cleanup_orphaned_ips()` deletes unattached
primary IPs during pre-batch stale cleanup, freeing quota before any
new servers are provisioned.
2. TypeScript CLI: `hetzner/main.ts` calls `cleanupOrphanedPrimaryIps()`
before `createServer()` in headless/non-interactive mode, ensuring
each agent provisioning attempt starts with a clean IP quota.
The existing reactive cleanup (retry after failure) in `hetzner.ts`
remains as a fallback.
Fixes #2933
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): harden pkill regex escaping against all metacharacters (#2911)
The sed character class `[.[\*^$]` was malformed and missed several
extended regex metacharacters (+, ?, (, ), {, }, |). Replace with a
correct bracket expression that escapes all POSIX ERE metacharacters.
Although app_name is already validated to [A-Za-z0-9._-], fixing the
escaping is defense-in-depth against future changes to the validation.
Agent: security-auditor
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): correct sed bracket expression to escape ] character
Place ] first in character class so it's treated as literal.
Use \\ to match literal backslash.
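The corrected expression can be exercised directly (hypothetical wrapper name; the bracket expression is the one described in the two commits above):

```shell
# `]` comes first in the class so it is literal, `\\` covers a literal
# backslash, and all POSIX ERE metacharacters . [ \ * ^ $ + ? ( ) { } |
# are escaped.
escape_ere() {
  printf '%s' "$1" | sed 's/[][\\.*^$+?(){}|]/\\&/g'
}
escape_ere 'my.app+v1'
```

The escaped result matches only the literal input when used as an ERE, which is the property the pkill call depends on.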
Agent: pr-maintainer
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): pass SPAWN_NAME + SPAWN_ENABLED_STEPS to interactive harness
Without SPAWN_NAME, cmdRun prompts 'Name your spawn' interactively.
The AI driver (Claude Haiku) can't respond because ANTHROPIC_AUTH_TOKEN
is an OpenRouter key — every Anthropic API call returns 401, so the harness
returns <wait> indefinitely until the 20-min SESSION_TIMEOUT_MS fires.
SPAWN_ENABLED_STEPS=auto-update bypasses the setup options multiselect,
ensuring the harness only tests the provisioning/installation UX.
* fix(e2e): fix _stage_timeout_remotely stdin pipe issue on Hetzner
Same root cause as _stage_prompt_remotely: _hetzner_exec runs commands via
"printf | base64 -d | bash", which makes bash's stdin the decode pipe.
So piped data from the outer SSH call never reaches subcommands.
"printf '%s' 'VALUE' | cloud_exec APP 'cat > /tmp/.e2e-timeout'" always
creates an empty file, causing "timeout: invalid time interval ''" when
the input test runs.
Fix: embed the validated numeric timeout value directly in the printf
command string (safe — _validate_timeout ensures only [0-9] digits).
* test(e2e): add claude PATH diagnostics to input_test_claude
Temporary debug output to trace where claude is installed
after interactive provision completes.
* test(e2e): save harness transcript JSON on success for debugging
* fix(e2e): remove 'is ready' from harness success pattern
'SSH is ready' (emitted ~15s into provision when SSH connects but before
any agent installation) matched the /is ready/ pattern, triggering false
success detection. The harness killed the spawn CLI during cloud-init wait,
leaving a VM with no agent installed.
Fix: use the same precise patterns as the main repo's harness:
/Starting agent\.\.\.|setup completed successfully/i
Both only fire after orchestrate.ts completes the full setup.
* chore(e2e): remove temporary debug instrumentation
* feat(e2e): add ai-powered ux review after interactive provision
After each successful interactive E2E run, the harness sends the full
terminal transcript to Claude (via OpenRouter) with a UX reviewer prompt.
It looks for confusing messages, noisy output, missing context in spinners,
and unhelpful errors that don't explain next steps.
Findings are returned as uxIssues[] in the harness JSON result.
interactive.sh then files a GitHub issue per run listing each problem
with a verbatim example and concrete suggestion.
Uses OPENROUTER_API_KEY (already in env) so it works on the QA VM
where ANTHROPIC_API_KEY is an OpenRouter key.
* refactor(e2e): throttle ux issue filing — 33% chance, 3+ issues required
- Random 33% gate: UX review runs on ~1 in 3 successful interactive
provisions, not every run
- Minimum bar: only surface findings when AI found 3+ clear issues
(filters one-off nits)
- Tighter system prompt: only flag obvious problems (repeated messages,
debug leaks, cryptic errors), not minor style preferences
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(e2e): replace random throttle with stricter ux review prompt
Instead of Math.random() to suppress issues, make the AI self-regulate:
the system prompt now instructs it to only flag genuinely bad problems
(repeated messages, raw stack traces, no-feedback waits) and treat
zero findings as a good outcome, not a failure.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The stdin piping approach was broken: _hetzner_exec runs remote commands via
"printf '%s' 'ENCODED_CMD' | base64 -d | bash", which connects bash's stdin to
the base64 pipe rather than SSH's outer stdin. So `cat > /tmp/.e2e-prompt` read
from EOF — the encoded prompt was never written to the remote file.
Fix: embed the validated base64 prompt directly in the command string using
printf. This is safe because _validate_base64 ensures the prompt contains only
[A-Za-z0-9+/=] — no characters that can break out of single quotes or inject
shell metacharacters.
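A runnable reproduction of the stdin loss (`fake_exec` stands in for `_hetzner_exec`; the temp file is illustrative):

```shell
# When the exec wrapper feeds bash through its own decode pipe, data
# piped by the outer caller never arrives.
fake_exec() {
  local encoded
  encoded=$(printf '%s' "$1" | base64 | tr -d '\n')
  printf '%s' "$encoded" | base64 -d | bash
}

tmpfile=$(mktemp)
# bash's stdin is already the decode pipe (at EOF after the command),
# so the remote `cat` reads nothing and the file stays empty.
printf 'THE PROMPT' | fake_exec "cat > $tmpfile"
echo "bytes written: $(wc -c < "$tmpfile" | tr -d ' ')"
```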
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
* chore: update agent GitHub star counts
* fix(qa): load ANTHROPIC_AUTH_TOKEN as ANTHROPIC_API_KEY for interactive E2E
QA VMs store the Anthropic key as ANTHROPIC_AUTH_TOKEN in
/etc/spawn-qa-auth.env, but the e2e-interactive handler only looked for
ANTHROPIC_API_KEY — causing the 6am cron to fail immediately with
"ANTHROPIC_API_KEY not set". Accept either name when loading from the
auth env file.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): bump interactive harness timeout to 20min, fix zombie VM teardown
- SESSION_TIMEOUT_MS: 10min → 20min — provisioning a VM takes 3-4 min
before onboarding even starts; 10min wasn't enough headroom
- interactive.sh: call cloud_provision_verify even on harness failure so
teardown can find and delete any VM that was partially created (e.g.
on timeout mid-provision) — previously left zombie VMs with no .meta file
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
AI log review now includes the git diff since the last fully passing
E2E run, enabling causal analysis like "this 404 likely caused by
commit abc123 which deleted file Y". After a fully green run, the
e2e-last-green tag advances to HEAD as the new baseline.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(security): harden remote command construction in provision.sh
Split the .spawnrc upload fallback into two separate cloud_exec calls
to separate data from commands. Step 1 writes the validated base64
payload to a remote temp file. Step 2 decodes from that file and
sets up shell rc sourcing using a static command string with no
interpolated variables.
This eliminates command injection risk in the control-flow portion
of the remote command (for loop, grep, etc.) even if the base64
validation were ever bypassed, since user-controlled data never
appears in the same command string as shell control flow.
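A sketch of the two-step split under these assumptions: `cloud_exec` is a local `bash -c` stand-in and the file paths are illustrative:

```shell
cloud_exec() { bash -c "$2"; }   # stand-in for the real cloud driver

env_b64=$(printf '%s' 'export OPENROUTER_API_KEY=sk-demo' | base64 | tr -d '\n')

# Step 1: only data crosses the boundary (the real code ships it via a
# separate cloud_exec call).
printf '%s' "$env_b64" > /tmp/spawnrc.b64

# Step 2: a fully static command string with no interpolated variables,
# so shell control flow can never mix with user-controlled data.
cloud_exec app 'base64 -d < /tmp/spawnrc.b64 > /tmp/spawnrc.demo' || exit 1
cat /tmp/spawnrc.demo
```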
Fixes #2882
Agent: complexity-hunter
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: correct error handling + use mktemp for temp file
- Return 1 (not 0) when step 1 fails to avoid masking provisioning failures
- Use mktemp -t spawnrc.b64 to avoid race conditions on concurrent provisions
Agent: pr-maintainer
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: propagate step 2 failure in provision.sh (return 1)
The else branch for step 2 (decode + shell rc setup) logged an error
but the function still returned 0, masking the failure. Now returns 1
so provisioning failures are correctly propagated.
Agent: pr-maintainer
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The env value whitelist allowed the characters @, %, +, =, :, and comma,
which are unnecessary for cloud resource names (server names, regions,
sizes) and can act as shell metacharacters in certain contexts. Restrict
to only [A-Za-z0-9._/-] which matches all legitimate cloud resource
identifiers.
Fixes #2883
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Prevent shell metacharacter interpretation in test prompt handling
by staging INPUT_TEST_TIMEOUT and attempt number to remote temp files
instead of interpolating them into remote command strings.
Previously, _TIMEOUT='${INPUT_TEST_TIMEOUT}' and --session-id
e2e-test-${attempt} were interpolated directly into double-quoted
remote command strings. While _validate_timeout enforces digits-only,
the structural pattern of local-to-remote variable interpolation is
inherently risky. Now all dynamic values (prompt, timeout, attempt)
are piped to remote temp files via stdin and read back on the remote
side, eliminating the injection surface entirely.
Fixes #2884
Agent: test-engineer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): update input tests for latest agent CLI interfaces + auto-load email creds
claude: add --dangerously-skip-permissions --no-session-persistence to bypass
trust dialog when running in /tmp/e2e-test (not in ~/.claude.json trusted
projects list written during install)
codex: replace `codex exec --full-auto` (removed in new @openai/codex) with
`codex -q -a full-auto` — quiet mode + full-auto approval, no exec subcommand
email: auto-load RESEND_API_KEY + KEY_REQUEST_EMAIL from
/etc/spawn-key-server-auth.env (QA VM) or ~/.config/spawn/resend.env (local)
so send_matrix_email fires on every e2e run, not just QA-cycle runs
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(e2e): correct claude and codex input test commands
- claude: pass prompt as positional arg to claude -p instead of piping
via stdin (stdin pipe breaks through SSH exec chain, causing
"Input must be provided either through stdin or as a prompt argument"
error)
- codex: revert to `codex exec --full-auto` subcommand (correct for
v0.116.0 — previous -q -a full-auto flags don't exist)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(e2e): add AI-powered log review after provisioning
Feeds provision stderr/stdout logs to an LLM after each agent deploys.
Catches non-fatal issues that binary pass/fail checks miss: silent 404s,
failed component installs, connection instability, swallowed warnings.
This would have caught the keep-alive 404 and the sprite idle shutdown
that the existing E2E tests missed because installSpriteKeepAlive() is
non-fatal and the binary checks only verify final state.
- Uses gemini-flash-lite-2.0 via OpenRouter (cheap, fast)
- Advisory only — never fails the test, reports findings as warnings
- Truncates logs to last 200 lines to stay within token limits
- Skips gracefully if OPENROUTER_API_KEY is missing or API fails
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(e2e): add AI log review and --fast mode testing
AI log review:
- After each agent provisions, feeds stderr/stdout to gemini-flash-lite
to catch non-fatal issues binary checks miss (404s, failed installs,
connection drops, swallowed warnings)
- Advisory only — never fails the test, surfaces findings as warnings
- Would have caught the keep-alive 404 and sprite idle shutdown
--fast mode E2E:
- Add --fast flag to e2e.sh, passed through to spawn CLI during provision
- Update QA e2e-tester protocol to run both normal and --fast passes
- --fast enables images + tarballs + parallel boot
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
The manual .spawnrc fallback in provision.sh was using `printf '%s' "${env_b64}" | cloud_exec ...`,
which works for SSH-based clouds (Hetzner, GCP, AWS) where stdin is passed through the SSH
connection. However, Sprite's exec driver replaces stdin with the command pipe:
`printf '%s' "${cmd}" | sprite exec -s NAME -- bash`
This causes the outer env_b64 pipe to be lost — `base64 -d` receives no input and writes an
empty .spawnrc, which then fails the OPENROUTER_API_KEY and openrouter.ai verification checks.
Fix: embed the base64 data directly in the command string using `printf '%s' '${env_b64}'`.
This is safe because env_b64 is validated to contain only [A-Za-z0-9+/=] — the standard
base64 alphabet — which cannot break out of single quotes or cause shell injection.
Confirmed by E2E run where sprite/claude and sprite/openclaw both failed with:
[FAIL] OPENROUTER_API_KEY not found in .spawnrc
[FAIL] Failed to create manual .spawnrc
Co-authored-by: spawn-qa-bot <qa@openrouter.ai>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Add defense-in-depth validation of INPUT_TEST_TIMEOUT directly in verify.sh
(not just relying on common.sh). Each input test function now calls
_validate_timeout() to ensure the value contains only digits before use.
Additionally, instead of interpolating INPUT_TEST_TIMEOUT directly into
remote command strings passed to cloud_exec, the timeout value is now
assigned to a single-quoted remote variable (_TIMEOUT) and referenced via
"$_TIMEOUT" on the remote side. This eliminates the injection surface even
if validation were somehow bypassed.
Affected functions: input_test_claude(), input_test_codex(),
input_test_openclaw(), input_test_zeroclaw().
Fixes #2849
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Replaces command string interpolation with stdin piping for the base64
prompt in verify.sh. Also anchors the _validate_base64 regex.
Fixes #2833
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes #2823: npm installs kilocode to /usr/local/bin when running as
root on GCP, but the E2E binary verify step didn't include /usr/local/bin
in PATH, causing false "binary not found" failures.
The .spawnrc PATH (generated by generateEnvConfig) already includes
/usr/local/bin, but verify_kilocode used a hardcoded PATH that omitted
it. This aligns kilocode and codex verify checks with openclaw and junie
which already include /usr/local/bin.
Also fixes the same latent issue in verify_codex.
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: never-give-up resilience layer — retry every failure instead of exiting
Add retryOrQuit() helper to shared/ui.ts that prompts "Try again? (Y/n)"
after any recoverable failure. Wrap all fatal exit points with retry loops:
- Cloud auth (Hetzner, DigitalOcean, AWS, GCP): retry after 3 failed tokens
- API key acquisition: retry after 3 failed OAuth+manual attempts
- Server creation: retry on any createServer failure (both fast & sequential)
- SSH readiness: retry on waitForReady timeout
- Agent install: retry on install failure
- Pre-launch hooks: retry on preLaunch failure
Non-interactive mode (SPAWN_NON_INTERACTIVE=1) still throws immediately.
Ctrl+C at any retry prompt exits cleanly.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(e2e): add AI-driven interactive test harness
Add --interactive mode to the E2E test framework. Instead of running spawn
in headless mode (SPAWN_NON_INTERACTIVE=1), this spawns the CLI in a real
PTY and uses Claude Haiku to respond to prompts like a human user would.
New files:
- sh/e2e/interactive-harness.ts — Bun script that drives the PTY + AI loop
- sh/e2e/lib/interactive.sh — Bash integration with the E2E framework
Usage:
e2e.sh --cloud hetzner claude --interactive
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(qa): wire interactive E2E into scheduled QA pipeline
- Add `e2e-interactive` option to workflow_dispatch in qa.yml
- Add `e2e-interactive` run mode to qa.sh (loads cloud creds + ANTHROPIC_API_KEY)
- Runs `e2e.sh --cloud hetzner claude --interactive` directly (no Claude Code needed)
- Defaults to hetzner (cheapest), overridable via E2E_INTERACTIVE_CLOUD/AGENT env vars
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(qa): schedule interactive E2E daily at 6am UTC
Runs one agent (claude) on one cloud (hetzner) with AI-driven prompts.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(qa): offset soak cron to avoid GitHub Actions schedule dedup
GitHub Actions deduplicates overlapping cron schedules into one run,
making `github.event.schedule` unpredictable. The soak test at `0 3 * * 1`
was getting absorbed by the `0 */4 * * *` quality sweep and never firing
as reason=soak.
Move soak to `30 1 * * 1` (Monday 1:30am UTC) — safely between the
0am and 4am quality sweep slots. Interactive E2E at `0 6 * * *` is
already safe (between the 4am and 8am slots).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(qa): add e2e-interactive to trigger server valid reasons
The trigger server validates reason query params against an allowlist.
Without this, the `e2e-interactive` dispatch returns 400.
Also note: `soak` is already in VALID_REASONS in the repo but the running
service on the QA VM is stale — needs a restart to pick up both soak and
e2e-interactive reasons.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Apply the same base64 encoding mitigation used by all other cloud
drivers (aws, hetzner, digitalocean, gcp). The command is encoded
locally, validated for safe characters, then decoded and executed
on the remote side via `base64 -d | bash`.
Fixes #2800
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The pre-run stale cleanup (added in #2789) used the same 30-minute max_age
as the post-run cleanup. Orphaned instances from recently-failed runs (< 30 min
old) were not cleaned, causing quota exhaustion on DigitalOcean and other clouds.
Pre-run cleanup now uses _CLEANUP_MAX_AGE=300 (5 min) to aggressively reclaim
orphaned e2e instances before provisioning new ones. Post-run cleanup retains
the 30-minute default. All 5 cloud drivers respect the override.
Fixes #2793
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes #2797. The _stage_prompt_remotely() function was interpolating
${encoded_prompt} directly into the remote command string passed to
cloud_exec. While _validate_base64() ensures only [A-Za-z0-9+/=]
characters are present, defense-in-depth requires eliminating the
interpolation entirely.
The fix uses printf %s format substitution to build the remote command,
placing the encoded prompt into a single-quoted shell variable assignment
(_EP='...') on the remote side. Single quotes prevent all shell expansion,
and base64 charset cannot contain single quotes, making injection
structurally impossible.
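A runnable sketch of the construction, with `bash -c` standing in for cloud_exec and a toy prompt:

```shell
encoded_prompt=$(printf '%s' 'write a haiku' | base64 | tr -d '\n')

# Defense-in-depth: abort on anything outside the base64 alphabet.
case "$encoded_prompt" in
  *[!A-Za-z0-9+/=]*) echo "invalid base64" >&2; exit 1 ;;
esac

# %s drops the payload inside single quotes on the remote side; the
# base64 alphabet cannot contain a single quote, so no input can break
# out of the _EP='...' assignment.
remote_cmd=$(printf "_EP='%s'; printf '%%s' \"\$_EP\" | base64 -d" "$encoded_prompt")
decoded=$(bash -c "$remote_cmd")
echo "$decoded"
```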
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Replaces the pattern of embedding base64-encoded prompts directly into
remote command strings via shell variable interpolation with a two-step
approach: stage the encoded prompt to a remote temp file first, then
read from that file in the agent command. This eliminates RCE risk if
the prompt source ever becomes user-controlled.
Changes:
- Add _stage_prompt_remotely() helper that writes encoded prompt to
/tmp/.e2e-prompt on the remote host via an isolated cloud_exec call
- input_test_claude(): read prompt from temp file instead of _ENCODED_PROMPT var
- input_test_codex(): same
- input_test_openclaw(): same
- input_test_zeroclaw(): same
- Update _validate_base64() comment to reflect defense-in-depth role
Closes #2788
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Add explicit validation that encoded_prompt only contains safe base64
characters ([A-Za-z0-9+/=]) in all input_test_* functions in verify.sh.
This makes the safety assumption explicit in code rather than relying
on documentation — if the base64 output ever contains unexpected chars,
the test aborts immediately instead of injecting them into a remote
command string.
Fixes #2775
Agent: security-auditor
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
On GCP VMs (running as root), npm installs openclaw to /usr/local/bin
instead of ~/.npm-global/bin because the system npm prefix is writable
and already in PATH. The E2E verify_openclaw() and related gateway
helper functions only explicitly listed ~/.npm-global/bin, ~/.bun/bin,
and ~/.local/bin — missing /usr/local/bin when .spawnrc sourcing
silently fails in the piped-bash SSH exec context.
Add /usr/local/bin explicitly to all openclaw-related PATH exports in
verify.sh so the binary check succeeds regardless of .spawnrc state.
Fixes #2732
Agent: test-engineer
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The E2E framework's run_single_agent function had no overall timeout.
When provision/verify/input_test steps hung (e.g. cloud_exec blocking
on sprite-zeroclaw or digitalocean-opencode), the process would stall
indefinitely without writing a .result file, causing silent test failures.
Add a per-agent wall-clock timeout (default 1800s, 2400s for junie) that
wraps the core provision/verify/input_test logic in a killable subshell.
If the timeout expires, the subshell is killed and a "fail" result is
written, ensuring E2E batches always complete.
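A minimal sketch of the wall-clock wrapper (real code uses 1800s, 2400s for junie, and writes a .result file; here the body just sleeps and the helper name is illustrative):

```shell
# Run the work in a background subshell, poll with a deadline, and kill
# the subshell on expiry so the caller can record a "fail" result.
run_with_timeout() {
  local limit="$1"; shift
  ( "$@" ) &
  local pid=$! waited=0
  while kill -0 "$pid" 2>/dev/null; do
    if [ "$waited" -ge "$limit" ]; then
      kill -TERM "$pid" 2>/dev/null
      wait "$pid" 2>/dev/null
      return 1                    # caller writes the "fail" result
    fi
    sleep 1; waited=$((waited + 1))
  done
  wait "$pid"
}

run_with_timeout 5 sleep 1 && echo "pass"
run_with_timeout 1 sleep 30 || echo "fail (timed out)"
```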
Fixes #2714
Agent: code-health
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>