mirror of https://github.com/OpenRouterTeam/spawn.git
synced 2026-04-28 03:49:31 +00:00

feat: recursive spawn (--beta recursive) (#2978)

* feat: add recursive spawn (--beta recursive)

  Enables VMs to spawn child VMs. When --beta recursive is active:
  - Injects SPAWN_PARENT_ID, SPAWN_DEPTH, SPAWN_BETA=recursive into .spawnrc
  - Installs spawn CLI on the VM via install.sh
  - Delegates cloud + OpenRouter credentials to the VM
  - Tracks parent_id and depth on SpawnRecord for tree relationships
  - Adds `spawn tree` command for a full recursive tree view
  - Adds `spawn history export` for pulling child history via SSH
  - Adds `spawn list --json` and `spawn list --flat` flags
  - Adds tree rendering in `spawn list` when parent-child relationships exist
  - Adds cascade delete support in delete.ts
  - Adds mergeChildHistory() for backward-pass history sync

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add recursive spawn to README

  Add --beta recursive to the beta features table, new commands (spawn tree, spawn history export, spawn list --flat/--json) to the commands table, and a dedicated Recursive Spawn section with usage examples for tree view and cascade delete.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: add cmdTree coverage tests to fix mock test CI

  The CI coverage threshold (90% functions, 80% lines) was failing because tree.ts had 0% coverage. Added tests that exercise cmdTree with empty history, tree rendering, JSON output, flat records, and deleted/depth labels. tree.ts now has 100% coverage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(security): validate cloudName and use valibot in pullChildHistory

  - Add cloudName validation against ^[a-z0-9-]+$ to prevent command injection in delegateCloudCredentials
  - Export SpawnRecordSchema from history.ts and replace the loose type guard with valibot schema validation in pullChildHistory
  - Resolve merge conflicts with main (include both docker and recursive beta features)

  Agent: pr-maintainer
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* test: add installSpawnCli and delegateCloudCredentials coverage

  Export and test installSpawnCli (success + timeout failure paths) and delegateCloudCredentials (no creds, with creds, write failure, mkdir failure paths) to improve orchestrate.ts function coverage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: gritQL rule false positives and delete.ts coverage

  - Use TsAsExpression() AST node instead of a backtick pattern to avoid matching import aliases as type assertions
  - Export and test findDescendants() and pullChildHistory() to bring delete.ts line coverage above the 35% threshold
  - Add 8 new tests for descendant finding and history-pull edge cases

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

--------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: B <6723574+louisgv@users.noreply.github.com>
Co-authored-by: A <258483684+la14-1@users.noreply.github.com>

This commit is contained in:
parent 76bdaf2042
commit b0674550c6

12 changed files with 1261 additions and 15 deletions
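The security fix above validates `cloudName` before it is interpolated into shell commands run on the VM. A minimal sketch of that kind of allow-list check, assuming only the `^[a-z0-9-]+$` pattern stated in the commit message (the helper name is hypothetical, not spawn's actual API):

```typescript
// Hypothetical sketch of the allow-list check described in the commit message.
// Only lowercase letters, digits, and hyphens pass; anything else (spaces,
// `;`, `$(...)`) is rejected before the name can reach a shell command.
const CLOUD_NAME_RE = /^[a-z0-9-]+$/;

function assertValidCloudName(cloudName: string): string {
  if (!CLOUD_NAME_RE.test(cloudName)) {
    throw new Error(`Invalid cloud name: ${JSON.stringify(cloudName)}`);
  }
  return cloudName;
}

console.log(assertValidCloudName("hetzner")); // passes through unchanged
try {
  assertValidCloudName("hetzner; rm -rf /"); // shell metacharacters rejected
} catch (e) {
  console.log((e as Error).message);
}
```

Rejecting at the boundary like this is simpler to audit than trying to escape the value later.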
36 README.md
@@ -61,7 +61,12 @@ spawn delete -c hetzner # Delete a server on Hetzner
| `spawn list <filter>` | Filter history by agent or cloud name |
| `spawn list -a <agent>` | Filter history by agent |
| `spawn list -c <cloud>` | Filter history by cloud |
| `spawn list --flat` | Show flat list (disable tree view) |
| `spawn list --json` | Output history as JSON |
| `spawn list --clear` | Clear all spawn history |
| `spawn tree` | Show recursive spawn tree (parent/child relationships) |
| `spawn tree --json` | Output spawn tree as JSON |
| `spawn history export` | Dump history as JSON to stdout (used by parent VMs) |
| `spawn fix` | Re-run agent setup on an existing VM (re-inject credentials, reinstall) |
| `spawn fix <spawn-id>` | Fix a specific spawn by name or ID |
| `spawn link <ip>` | Register an existing VM by IP |
@@ -149,8 +154,37 @@ spawn claude gcp --beta tarball --beta parallel
| `tarball` | Use pre-built tarball for agent install (faster, skips live install) |
| `images` | Use pre-built cloud images/snapshots (faster boot) |
| `parallel` | Parallelize server boot with setup prompts |
| `recursive` | Install spawn CLI on VM so it can spawn child VMs |

-`--fast` enables all three.
+`--fast` enables `tarball`, `images`, and `parallel` (not `recursive`).

#### Recursive Spawn

Use `--beta recursive` to let spawned VMs create their own child VMs:

```bash
spawn claude hetzner --beta recursive
```

What this does:
- **Installs spawn CLI** on the remote VM
- **Delegates credentials** (cloud + OpenRouter) so child VMs can authenticate
- **Injects parent tracking** (`SPAWN_PARENT_ID`, `SPAWN_DEPTH`) into the VM environment
- **Passes `--beta recursive`** to children so they can also spawn recursively

View the spawn tree:

```bash
spawn tree
# spawn-abc  Claude Code / Hetzner  2m ago
# ├─ spawn-def  Codex CLI / Hetzner  1m ago
# └─ spawn-ghi  OpenClaw / Hetzner  30s ago
#    └─ spawn-jkl  Claude Code / Hetzner  10s ago
```

Tear down an entire tree:

```bash
spawn delete --cascade <id>   # Delete a VM and all its children
```

### Without the CLI
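Because children also receive `--beta recursive`, a spawn tree can in principle grow without bound. One way an operator's wrapper could cap recursion using the injected `SPAWN_DEPTH` variable (the cap value and helper are illustrative only, not part of spawn):

```typescript
// Illustrative guard, not part of the spawn CLI: read the SPAWN_DEPTH value
// that the parent injected into the environment and refuse to recurse past a cap.
const MAX_DEPTH = 3; // hypothetical limit chosen by the operator

function canSpawnChild(env: Record<string, string | undefined>): boolean {
  // A VM spawned with --beta recursive sees SPAWN_DEPTH >= 1; the top-level
  // machine has no SPAWN_DEPTH at all, which we treat as depth 0.
  const depth = Number(env.SPAWN_DEPTH ?? "0");
  return Number.isFinite(depth) && depth < MAX_DEPTH;
}

console.log(canSpawnChild({}));                   // true  — top-level machine
console.log(canSpawnChild({ SPAWN_DEPTH: "2" })); // true  — still under the cap
console.log(canSpawnChild({ SPAWN_DEPTH: "3" })); // false — at the cap
```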
@@ -1,6 +1,6 @@
language js(typescript)

-`$value as $type` as $expr where {
+TsAsExpression() as $expr where {
  ! $expr <: `$_ as const`,
  ! $expr <: JsNamedImportSpecifier(),
  register_diagnostic(span=$expr, message="Type assertions (`as`) are banned. Use schema validation (parseJsonWith), type guards, or `satisfies` instead.", severity="error")
@@ -1,6 +1,6 @@
{
  "name": "@openrouter/spawn",
-  "version": "0.25.32",
+  "version": "0.26.0",
  "type": "module",
  "bin": {
    "spawn": "cli.js"
724 packages/cli/src/__tests__/recursive-spawn.test.ts Normal file
@@ -0,0 +1,724 @@
import type { SpawnRecord } from "../history.js";

import { afterEach, beforeEach, describe, expect, it, spyOn } from "bun:test";
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { findDescendants, pullChildHistory } from "../commands/delete.js";
import { cmdTree } from "../commands/tree.js";
import { exportHistory, HISTORY_SCHEMA_VERSION, loadHistory, mergeChildHistory, saveSpawnRecord } from "../history.js";

describe("recursive spawn", () => {
  let testDir: string;
  let originalEnv: NodeJS.ProcessEnv;

  beforeEach(() => {
    testDir = join(process.env.HOME ?? "", `.spawn-test-recursive-${Date.now()}-${Math.random()}`);
    mkdirSync(testDir, { recursive: true });
    originalEnv = { ...process.env };
    process.env.SPAWN_HOME = testDir;
  });

  afterEach(() => {
    process.env = originalEnv;
    if (existsSync(testDir)) {
      rmSync(testDir, { recursive: true, force: true });
    }
  });

  // ── SpawnRecord parent_id and depth ─────────────────────────────────────

  describe("parent tracking", () => {
    it("saves and loads records with parent_id and depth", () => {
      const record: SpawnRecord = {
        id: "child-1",
        agent: "claude",
        cloud: "hetzner",
        timestamp: "2026-03-24T00:00:00.000Z",
        parent_id: "parent-1",
        depth: 1,
      };
      saveSpawnRecord(record);
      const loaded = loadHistory();
      expect(loaded).toHaveLength(1);
      expect(loaded[0].parent_id).toBe("parent-1");
      expect(loaded[0].depth).toBe(1);
    });

    it("loads records without parent_id (backwards compat)", () => {
      const data = {
        version: HISTORY_SCHEMA_VERSION,
        records: [
          { id: "old-record", agent: "claude", cloud: "hetzner", timestamp: "2026-01-01T00:00:00.000Z" },
        ],
      };
      writeFileSync(join(testDir, "history.json"), JSON.stringify(data));
      const loaded = loadHistory();
      expect(loaded).toHaveLength(1);
      expect(loaded[0].parent_id).toBeUndefined();
      expect(loaded[0].depth).toBeUndefined();
    });
  });

  // ── mergeChildHistory ──────────────────────────────────────────────────

  describe("mergeChildHistory", () => {
    it("merges child records into local history", () => {
      // Save a parent record first
      saveSpawnRecord({ id: "parent-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });

      const childRecords: SpawnRecord[] = [
        { id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z" },
        { id: "child-2", agent: "openclaw", cloud: "hetzner", timestamp: "2026-03-24T02:00:00.000Z" },
      ];

      mergeChildHistory("parent-1", childRecords);

      const loaded = loadHistory();
      expect(loaded).toHaveLength(3);

      // Child records should have parent_id set
      const child1 = loaded.find((r) => r.id === "child-1");
      expect(child1).toBeDefined();
      expect(child1!.parent_id).toBe("parent-1");

      const child2 = loaded.find((r) => r.id === "child-2");
      expect(child2).toBeDefined();
      expect(child2!.parent_id).toBe("parent-1");
    });

    it("deduplicates by spawn ID", () => {
      saveSpawnRecord({ id: "parent-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });

      const childRecords: SpawnRecord[] = [
        { id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z" },
      ];

      // Merge twice — should not create duplicates
      mergeChildHistory("parent-1", childRecords);
      mergeChildHistory("parent-1", childRecords);

      const loaded = loadHistory();
      expect(loaded).toHaveLength(2); // parent + 1 child (not 3)
    });

    it("preserves existing parent_id on child records", () => {
      saveSpawnRecord({ id: "grandparent", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });

      const childRecords: SpawnRecord[] = [
        { id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "some-other-parent" },
      ];

      mergeChildHistory("grandparent", childRecords);

      const loaded = loadHistory();
      const child = loaded.find((r) => r.id === "child-1");
      // Existing parent_id should be preserved
      expect(child!.parent_id).toBe("some-other-parent");
    });

    it("does nothing with empty child records", () => {
      saveSpawnRecord({ id: "parent-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });

      mergeChildHistory("parent-1", []);

      const loaded = loadHistory();
      expect(loaded).toHaveLength(1);
    });
  });

  // ── exportHistory ─────────────────────────────────────────────────────

  describe("exportHistory", () => {
    it("exports history as JSON string", () => {
      saveSpawnRecord({
        id: "record-1",
        agent: "claude",
        cloud: "hetzner",
        timestamp: "2026-03-24T00:00:00.000Z",
        parent_id: "parent-1",
        depth: 1,
      });

      const json = exportHistory();
      const parsed: unknown = JSON.parse(json);
      expect(Array.isArray(parsed)).toBe(true);
      const records = Array.isArray(parsed) ? parsed : [];
      expect(records).toHaveLength(1);
      expect(records[0].parent_id).toBe("parent-1");
      expect(records[0].depth).toBe(1);
    });

    it("returns empty array when no history", () => {
      const json = exportHistory();
      expect(JSON.parse(json)).toEqual([]);
    });
  });

  // ── installSpawnCli ────────────────────────────────────────────────

  describe("installSpawnCli", () => {
    it("runs install script on the remote runner", async () => {
      const { installSpawnCli } = await import("../shared/orchestrate.js");
      const commands: string[] = [];
      const mockRunner = {
        runServer: async (cmd: string) => {
          commands.push(cmd);
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      await installSpawnCli(mockRunner);

      expect(commands.length).toBeGreaterThan(0);
      expect(commands[0]).toContain("install.sh");
    });

    it("handles install failure gracefully", async () => {
      const { installSpawnCli } = await import("../shared/orchestrate.js");
      const mockRunner = {
        runServer: async () => {
          // Throw a timeout error so withRetry doesn't retry (timeouts are non-retryable)
          throw new Error("command timed out");
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      // Should not throw — installSpawnCli catches errors gracefully
      await installSpawnCli(mockRunner);
    });
  });

  // ── delegateCloudCredentials ──────────────────────────────────────

  describe("delegateCloudCredentials", () => {
    it("skips when no credential files exist", async () => {
      const { delegateCloudCredentials } = await import("../shared/orchestrate.js");
      const commands: string[] = [];
      const mockRunner = {
        runServer: async (cmd: string) => {
          commands.push(cmd);
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      // No credential files exist in test sandbox, so should warn and return
      await delegateCloudCredentials(mockRunner, "hetzner");

      // Should not have run mkdir since there are no files to delegate
      expect(commands.length).toBe(0);
    });

    it("delegates credentials when files exist", async () => {
      const { delegateCloudCredentials } = await import("../shared/orchestrate.js");
      const home = process.env.HOME ?? "";
      const configDir = join(home, ".config", "spawn");
      mkdirSync(configDir, { recursive: true });
      writeFileSync(join(configDir, "hetzner.json"), '{"token":"test-token"}');
      writeFileSync(join(configDir, "openrouter.json"), '{"key":"test-key"}');

      const commands: string[] = [];
      const mockRunner = {
        runServer: async (cmd: string) => {
          commands.push(cmd);
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      await delegateCloudCredentials(mockRunner, "hetzner");

      // Should have run mkdir + 2 file writes
      expect(commands.length).toBe(3);
      expect(commands[0]).toContain("mkdir -p ~/.config/spawn");
      expect(commands[1]).toContain("hetzner.json");
      expect(commands[2]).toContain("openrouter.json");
    });

    it("handles file write failure gracefully", async () => {
      const { delegateCloudCredentials } = await import("../shared/orchestrate.js");
      const home = process.env.HOME ?? "";
      const configDir = join(home, ".config", "spawn");
      mkdirSync(configDir, { recursive: true });
      writeFileSync(join(configDir, "hetzner.json"), '{"token":"test"}');

      let callCount = 0;
      const mockRunner = {
        runServer: async (_cmd: string) => {
          callCount += 1;
          // First call (mkdir) succeeds (returns void), second call (file write) fails
          if (callCount >= 2) {
            throw new Error("write failed");
          }
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      // Should not throw
      await delegateCloudCredentials(mockRunner, "hetzner");
      // At least 2 calls: mkdir + file write(s) that fail
      expect(callCount).toBeGreaterThanOrEqual(2);
    });

    it("handles mkdir failure gracefully", async () => {
      const { delegateCloudCredentials } = await import("../shared/orchestrate.js");
      const home = process.env.HOME ?? "";
      const configDir = join(home, ".config", "spawn");
      mkdirSync(configDir, { recursive: true });
      writeFileSync(join(configDir, "hetzner.json"), '{"token":"test"}');

      let callCount = 0;
      const mockRunner = {
        runServer: async () => {
          callCount += 1;
          throw new Error("SSH failed");
        },
        uploadFile: async () => {},
        downloadFile: async () => {},
      };

      // Should not throw, just warn
      await delegateCloudCredentials(mockRunner, "hetzner");
      // mkdir was called and failed
      expect(callCount).toBe(1);
    });
  });

  // ── Recursive env vars ───────────────────────────────────────────────

  describe("recursive env vars", () => {
    it("appendRecursiveEnvVars adds parent tracking vars", async () => {
      // Import the function dynamically to avoid ESM issues
      const { appendRecursiveEnvVars } = await import("../shared/orchestrate.js");
      const envPairs: string[] = ["EXISTING_VAR=value"];

      appendRecursiveEnvVars(envPairs, "test-spawn-id");

      expect(envPairs).toContain("SPAWN_PARENT_ID=test-spawn-id");
      expect(envPairs).toContain("SPAWN_DEPTH=1");
      expect(envPairs).toContain("SPAWN_BETA=recursive");
    });

    it("increments depth from SPAWN_DEPTH env var", async () => {
      const origDepth = process.env.SPAWN_DEPTH;
      process.env.SPAWN_DEPTH = "3";

      const { appendRecursiveEnvVars } = await import("../shared/orchestrate.js");
      const envPairs: string[] = [];
      appendRecursiveEnvVars(envPairs, "test-id");

      expect(envPairs).toContain("SPAWN_DEPTH=4");

      if (origDepth === undefined) {
        delete process.env.SPAWN_DEPTH;
      } else {
        process.env.SPAWN_DEPTH = origDepth;
      }
    });
  });

  // ── cmdTree ────────────────────────────────────────────────────────

  describe("cmdTree", () => {
    it("shows empty message when no history", async () => {
      const logs: string[] = [];
      const origLog = console.log;
      console.log = (...args: unknown[]) => {
        logs.push(args.map(String).join(" "));
      };

      await cmdTree();

      console.log = origLog;
      // p.log.info writes to stderr, not captured — but cmdTree should not throw
    });

    it("renders tree with parent-child relationships", async () => {
      saveSpawnRecord({ id: "root-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z", name: "my-root" });
      saveSpawnRecord({ id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "root-1", depth: 1, name: "my-child" });
      saveSpawnRecord({ id: "child-2", agent: "openclaw", cloud: "hetzner", timestamp: "2026-03-24T02:00:00.000Z", parent_id: "root-1", depth: 1 });
      saveSpawnRecord({ id: "grandchild-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T03:00:00.000Z", parent_id: "child-1", depth: 2 });

      const logs: string[] = [];
      const origLog = console.log;
      console.log = (...args: unknown[]) => {
        logs.push(args.map(String).join(" "));
      };

      // Mock loadManifest to avoid network calls
      const manifestMod = await import("../manifest.js");
      const manifestSpy = spyOn(manifestMod, "loadManifest").mockRejectedValue(new Error("no network"));

      await cmdTree();

      console.log = origLog;
      manifestSpy.mockRestore();

      // Should have output with tree characters
      const output = logs.join("\n");
      expect(output).toContain("my-root");
      expect(output).toContain("my-child");
      // Tree connectors
      expect(output).toContain("├─");
      expect(output).toContain("└─");
    });

    it("outputs JSON when --json flag is set", async () => {
      saveSpawnRecord({ id: "root-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });
      saveSpawnRecord({ id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "root-1", depth: 1 });

      const logs: string[] = [];
      const origLog = console.log;
      console.log = (...args: unknown[]) => {
        logs.push(args.map(String).join(" "));
      };

      const manifestMod = await import("../manifest.js");
      const manifestSpy = spyOn(manifestMod, "loadManifest").mockRejectedValue(new Error("no network"));

      await cmdTree(true);

      console.log = origLog;
      manifestSpy.mockRestore();

      const output = logs.join("\n");
      const parsed: unknown = JSON.parse(output);
      expect(Array.isArray(parsed)).toBe(true);
      const records = Array.isArray(parsed) ? parsed : [];
      expect(records).toHaveLength(2);
    });

    it("shows flat message when no parent-child relationships", async () => {
      saveSpawnRecord({ id: "a", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });
      saveSpawnRecord({ id: "b", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z" });

      const logs: string[] = [];
      const origLog = console.log;
      console.log = (...args: unknown[]) => {
        logs.push(args.map(String).join(" "));
      };

      const manifestMod = await import("../manifest.js");
      const manifestSpy = spyOn(manifestMod, "loadManifest").mockRejectedValue(new Error("no network"));

      await cmdTree();

      console.log = origLog;
      manifestSpy.mockRestore();
    });

    it("renders deleted and depth labels", async () => {
      saveSpawnRecord({
        id: "root-1",
        agent: "claude",
        cloud: "hetzner",
        timestamp: "2026-03-24T00:00:00.000Z",
        connection: { ip: "1.2.3.4", user: "root", deleted: true, deleted_at: "2026-03-24T05:00:00.000Z" },
      });
      saveSpawnRecord({ id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "root-1", depth: 1 });

      const logs: string[] = [];
      const origLog = console.log;
      console.log = (...args: unknown[]) => {
        logs.push(args.map(String).join(" "));
      };

      const manifestMod = await import("../manifest.js");
      const manifestSpy = spyOn(manifestMod, "loadManifest").mockRejectedValue(new Error("no network"));

      await cmdTree();

      console.log = origLog;
      manifestSpy.mockRestore();

      const output = logs.join("\n");
      expect(output).toContain("deleted");
      expect(output).toContain("depth=1");
    });
  });

  // ── findDescendants ──────────────────────────────────────────────────

  describe("findDescendants", () => {
    it("finds direct children", () => {
      saveSpawnRecord({ id: "parent-1", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });
      saveSpawnRecord({ id: "child-1", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "parent-1" });
      saveSpawnRecord({ id: "child-2", agent: "openclaw", cloud: "hetzner", timestamp: "2026-03-24T02:00:00.000Z", parent_id: "parent-1" });

      const descendants = findDescendants("parent-1");
      expect(descendants).toHaveLength(2);
      expect(descendants.map((d) => d.id).sort()).toEqual(["child-1", "child-2"]);
    });

    it("finds transitive descendants", () => {
      saveSpawnRecord({ id: "root", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });
      saveSpawnRecord({ id: "child", agent: "codex", cloud: "hetzner", timestamp: "2026-03-24T01:00:00.000Z", parent_id: "root" });
      saveSpawnRecord({ id: "grandchild", agent: "openclaw", cloud: "hetzner", timestamp: "2026-03-24T02:00:00.000Z", parent_id: "child" });

      const descendants = findDescendants("root");
      expect(descendants).toHaveLength(2);
      expect(descendants.map((d) => d.id)).toContain("child");
      expect(descendants.map((d) => d.id)).toContain("grandchild");
    });

    it("returns empty array when no children", () => {
      saveSpawnRecord({ id: "lonely", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });

      const descendants = findDescendants("lonely");
      expect(descendants).toHaveLength(0);
    });

    it("excludes deleted descendants", () => {
      saveSpawnRecord({ id: "parent", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" });
      saveSpawnRecord({
        id: "deleted-child",
        agent: "codex",
        cloud: "hetzner",
        timestamp: "2026-03-24T01:00:00.000Z",
        parent_id: "parent",
        connection: { ip: "1.2.3.4", user: "root", deleted: true, deleted_at: "2026-03-24T05:00:00.000Z" },
      });

      const descendants = findDescendants("parent");
      expect(descendants).toHaveLength(0);
    });
  });

  // ── pullChildHistory ─────────────────────────────────────────────────

  describe("pullChildHistory", () => {
    it("skips records without connection", async () => {
      const record: SpawnRecord = { id: "no-conn", agent: "claude", cloud: "hetzner", timestamp: "2026-03-24T00:00:00.000Z" };
      // Should not throw
      await pullChildHistory(record);
    });

    it("skips local cloud records", async () => {
      const record: SpawnRecord = {
        id: "local-1",
        agent: "claude",
        cloud: "local",
        timestamp: "2026-03-24T00:00:00.000Z",
        connection: { ip: "127.0.0.1", user: "me", cloud: "local" },
      };
      await pullChildHistory(record);
    });

    it("skips sprite-console records", async () => {
      const record: SpawnRecord = {
        id: "sprite-1",
        agent: "claude",
        cloud: "sprite",
        timestamp: "2026-03-24T00:00:00.000Z",
        connection: { ip: "sprite-console", user: "root", cloud: "sprite" },
      };
      await pullChildHistory(record);
    });

    it("skips records without IP", async () => {
      const record: SpawnRecord = {
        id: "no-ip",
        agent: "claude",
        cloud: "hetzner",
        timestamp: "2026-03-24T00:00:00.000Z",
        connection: { ip: "", user: "root", cloud: "hetzner" },
      };
      await pullChildHistory(record);
    });
  });
});
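The merge semantics the recursive-spawn tests exercise — deduplicate by spawn ID, stamp the importing parent onto records that lack a parent, and preserve an existing `parent_id` — can be sketched independently of spawn's internals. All names below are illustrative, not spawn's actual implementation:

```typescript
// Standalone sketch of the merge behavior exercised by the tests:
// dedupe by ID, default parent_id to the importing parent, keep an existing one.
interface Rec {
  id: string;
  parent_id?: string;
}

function mergeChildren(existing: Rec[], parentId: string, children: Rec[]): Rec[] {
  const seen = new Set(existing.map((r) => r.id));
  const merged = [...existing];
  for (const child of children) {
    if (seen.has(child.id)) continue; // dedupe by spawn ID — merging twice is a no-op
    seen.add(child.id);
    // Preserve an existing parent_id; otherwise stamp the importing parent.
    merged.push({ ...child, parent_id: child.parent_id ?? parentId });
  }
  return merged;
}

const history = mergeChildren(
  [{ id: "parent-1" }],
  "parent-1",
  [{ id: "child-1" }, { id: "child-1" }, { id: "child-2", parent_id: "other" }],
);
console.log(history.length);                                     // 3
console.log(history.find((r) => r.id === "child-1")?.parent_id); // "parent-1"
console.log(history.find((r) => r.id === "child-2")?.parent_id); // "other"
```

Making the merge idempotent is what lets a parent pull the same child history repeatedly (e.g. on retry) without corrupting its records.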
@@ -4,6 +4,7 @@ import type { Manifest } from "../manifest.js";
import * as p from "@clack/prompts";
import { isString } from "@openrouter/spawn-shared";
import pc from "picocolors";
+import * as v from "valibot";
import { authenticate as awsAuthenticate, destroyServer as awsDestroyServer, ensureAwsCli } from "../aws/aws.js";
import { destroyServer as doDestroyServer, ensureDoToken } from "../digitalocean/digitalocean.js";
import {
@@ -13,7 +14,7 @@ import {
  resolveProject as gcpResolveProject,
} from "../gcp/gcp.js";
import { ensureHcloudToken, destroyServer as hetznerDestroyServer } from "../hetzner/hetzner.js";
-import { getActiveServers, markRecordDeleted } from "../history.js";
+import { getActiveServers, loadHistory, markRecordDeleted, mergeChildHistory, SpawnRecordSchema } from "../history.js";
import { loadManifest } from "../manifest.js";
import { validateMetadataValue, validateServerIdentifier } from "../security.js";
import { getHistoryPath } from "../shared/paths.js";
@ -240,6 +241,115 @@ export async function confirmAndDelete(
|
|||
return success;
|
||||
}
|
||||
|
||||
/** Pull child history from a remote VM via SSH before deleting it. */
|
||||
export async function pullChildHistory(record: SpawnRecord): Promise<void> {
  const conn = record.connection;
  if (!conn?.ip || !conn.user || conn.cloud === "local" || conn.ip === "sprite-console") {
    return;
  }

  const { ensureSshKeys, getSshKeyOpts } = await import("../shared/ssh-keys.js");
  const { SSH_BASE_OPTS } = await import("../shared/ssh.js");

  const pullResult = await asyncTryCatch(async () => {
    const keys = await ensureSshKeys();
    const keyOpts = getSshKeyOpts(keys);
    const proc = Bun.spawn(
      [
        "ssh",
        ...SSH_BASE_OPTS,
        ...keyOpts,
        `${conn.user}@${conn.ip}`,
        "spawn history export 2>/dev/null",
      ],
      {
        stdout: "pipe",
        stderr: "ignore",
        stdin: "ignore",
      },
    );
    const output = await new Response(proc.stdout).text();
    await proc.exited;
    return output.trim();
  });

  if (!pullResult.ok || !pullResult.data) {
    // Non-fatal: VM might already be unreachable
    return;
  }

  await asyncTryCatch(async () => {
    const parsed: unknown = JSON.parse(pullResult.data);
    if (!Array.isArray(parsed)) {
      return;
    }
    const childRecords: SpawnRecord[] = [];
    for (const el of parsed) {
      const result = v.safeParse(SpawnRecordSchema, el);
      if (result.success) {
        childRecords.push(result.output);
      }
    }
    if (childRecords.length > 0) {
      mergeChildHistory(record.id, childRecords);
      p.log.info(`Merged ${childRecords.length} child record(s) from ${conn.server_name || conn.ip}`);
    }
  });
}

/** Find all children of a given spawn record (direct and transitive). */
export function findDescendants(parentId: string): SpawnRecord[] {
  const history = loadHistory();
  const descendants: SpawnRecord[] = [];
  const queue = [parentId];

  while (queue.length > 0) {
    const currentId = queue.shift()!;
    for (const r of history) {
      if (r.parent_id === currentId && !r.connection?.deleted) {
        descendants.push(r);
        queue.push(r.id);
      }
    }
  }

  return descendants;
}
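The breadth-first walk in findDescendants() can be sketched in isolation. This is a hedged toy version — the record shape is reduced to the three fields involved and `findDescendantIds` is illustrative, not the real API:

```typescript
// Toy model of findDescendants(): repeated full scans of history, queueing
// each match so grandchildren are picked up on a later pass. Deleted
// records are skipped, which also prunes their subtrees.
interface Rec {
  id: string;
  parent_id?: string;
  deleted?: boolean;
}

function findDescendantIds(history: Rec[], parentId: string): string[] {
  const out: string[] = [];
  const queue: string[] = [parentId];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const r of history) {
      if (r.parent_id === current && !r.deleted) {
        out.push(r.id);
        queue.push(r.id);
      }
    }
  }
  return out;
}

// root -> a -> b, plus a deleted child c whose subtree is pruned.
const sample: Rec[] = [
  { id: "root" },
  { id: "a", parent_id: "root" },
  { id: "b", parent_id: "a" },
  { id: "c", parent_id: "root", deleted: true },
];
```

One consequence worth noting: because a deleted child is never enqueued, any live grandchildren under it would not be found either.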
/** Delete a spawn and all its descendants (depth-first). */
export async function cascadeDelete(record: SpawnRecord, manifest: Manifest | null): Promise<boolean> {
  const descendants = findDescendants(record.id);

  if (descendants.length > 0) {
    const totalCount = descendants.length + 1;
    const confirmed = await p.confirm({
      message: `This will delete ${totalCount} server(s) (1 parent + ${descendants.length} child${descendants.length !== 1 ? "ren" : ""}). Continue?`,
      initialValue: false,
    });

    if (p.isCancel(confirmed) || !confirmed) {
      p.log.info("Cascade delete cancelled.");
      return false;
    }

    // Delete children first. The BFS in findDescendants() lists shallow
    // nodes before deep ones, so reversing deletes the deepest first.
    descendants.reverse();
    for (const child of descendants) {
      if (!child.connection?.deleted) {
        p.log.step(`Deleting child: ${child.connection?.server_name || child.id}`);
        await pullChildHistory(child);
        await execDeleteServer(child);
      }
    }
  }

  // Delete the parent
  await pullChildHistory(record);
  return confirmAndDelete(record, manifest);
}

export async function cmdDelete(agentFilter?: string, cloudFilter?: string): Promise<void> {
  const resolved = await resolveListFilters(agentFilter, cloudFilter);
  agentFilter = resolved.agentFilter;
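The ordering argument above is small enough to check directly. A hedged sketch (`deletionOrder` is illustrative, not part of the codebase):

```typescript
// Toy model of the ordering in cascadeDelete(): the BFS result lists
// shallow descendants first, so reversing it deletes the deepest first,
// and the parent is deleted last of all.
function deletionOrder(bfsDescendants: string[], parentId: string): string[] {
  const order = [...bfsDescendants].reverse();
  order.push(parentId);
  return order;
}

// findDescendants() on a chain root -> a -> b -> c returns ["a", "b", "c"].
const order = deletionOrder(["a", "b", "c"], "root");
```

So no child is ever orphaned mid-cascade: by the time a node is deleted, its whole subtree is already gone.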
@@ -1,7 +1,7 @@
 // Barrel re-export — all command modules re-exported from this index.

-// delete.ts — cmdDelete
-export { cmdDelete } from "./delete.js";
+// delete.ts — cmdDelete, cascadeDelete
+export { cascadeDelete, cmdDelete } from "./delete.js";
 // feedback.ts — cmdFeedback
 export { cmdFeedback } from "./feedback.js";
 // fix.ts — cmdFix, fixSpawn, buildFixScript
@@ -22,10 +22,11 @@ export {
 export { cmdAgentInteractive, cmdInteractive } from "./interactive.js";
 // link.ts — cmdLink
 export { cmdLink } from "./link.js";
-// list.ts — cmdList, cmdLast, cmdListClear, history display
+// list.ts — cmdList, cmdLast, cmdListClear, cmdHistoryExport, history display
 export {
   buildRecordLabel,
   buildRecordSubtitle,
+  cmdHistoryExport,
   cmdLast,
   cmdList,
   cmdListClear,
@@ -67,6 +68,8 @@ export {
 } from "./shared.js";
 // status.ts — cmdStatus
 export { cmdStatus } from "./status.js";
+// tree.ts — cmdTree (recursive spawn tree view)
+export { cmdTree } from "./tree.js";
 // uninstall.ts — cmdUninstall
 export { cmdUninstall } from "./uninstall.js";
 // update.ts — cmdUpdate
@@ -6,6 +6,7 @@ import * as p from "@clack/prompts";
 import pc from "picocolors";
 import {
   clearHistory,
+  exportHistory,
   filterHistory,
   getActiveServers,
   markRecordDeleted,
@@ -196,6 +197,81 @@ function showListFooter(records: SpawnRecord[], agentFilter?: string, cloudFilte
   console.log();
 }
+
+// ── Tree rendering ──────────────────────────────────────────────────────────
+
+interface TreeNode {
+  record: SpawnRecord;
+  children: TreeNode[];
+}
+
+/** Build a tree structure from records that have parent_id. */
+function buildTree(records: SpawnRecord[]): TreeNode[] {
+  const nodeMap = new Map<string, TreeNode>();
+  const roots: TreeNode[] = [];
+
+  // Create nodes for all records
+  for (const r of records) {
+    nodeMap.set(r.id, {
+      record: r,
+      children: [],
+    });
+  }
+
+  // Link children to parents
+  for (const r of records) {
+    const node = nodeMap.get(r.id);
+    if (!node) {
+      continue;
+    }
+    if (r.parent_id && nodeMap.has(r.parent_id)) {
+      nodeMap.get(r.parent_id)!.children.push(node);
+    } else {
+      roots.push(node);
+    }
+  }
+
+  return roots;
+}
+
+/** Render a tree node with indentation and tree-drawing characters. */
+function renderTreeNode(
+  node: TreeNode,
+  manifest: Manifest | null,
+  prefix: string,
+  isLast: boolean,
+  isRoot: boolean,
+): void {
+  const r = node.record;
+  const name = r.name || r.connection?.server_name || "unnamed";
+  const connector = isRoot ? "" : isLast ? "└─ " : "├─ ";
+  const line1 = `${prefix}${connector}${pc.bold(name)}`;
+  console.log(line1);
+  console.log(`${prefix}${isRoot ? "" : isLast ? "   " : "│  "} ${pc.dim(buildRecordSubtitle(r, manifest))}`);
+
+  const childPrefix = isRoot ? "" : `${prefix}${isLast ? "   " : "│  "}`;
+  for (let i = 0; i < node.children.length; i++) {
+    renderTreeNode(node.children[i], manifest, childPrefix, i === node.children.length - 1, false);
+  }
+}
+
+/** Render records as a tree when parent_id relationships exist. */
+function renderTreeTable(records: SpawnRecord[], manifest: Manifest | null): void {
+  console.log();
+  const roots = buildTree(records);
+  for (let i = 0; i < roots.length; i++) {
+    renderTreeNode(roots[i], manifest, "", i === roots.length - 1, true);
+    if (i < roots.length - 1) {
+      console.log();
+    }
+  }
+  console.log();
+}
+
+/** Check if any records have parent_id (indicating a tree structure). */
+function hasTreeStructure(records: SpawnRecord[]): boolean {
+  return records.some((r) => r.parent_id);
+}
+
 function renderListTable(records: SpawnRecord[], manifest: Manifest | null): void {
   console.log();
   for (let i = 0; i < records.length; i++) {
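The linking pass in buildTree() has one subtle behavior worth spelling out: a record only becomes a child if its parent is present in the same record set; otherwise it surfaces as a root. A hedged standalone sketch (`buildRoots` and its record shape are illustrative):

```typescript
// Toy model of buildTree()'s two passes: first map every record to a node,
// then attach each node under its parent if — and only if — the parent is
// in the same set. Orphans (parent filtered out or never synced) become roots.
interface Node {
  id: string;
  children: Node[];
}

function buildRoots(records: { id: string; parent_id?: string }[]): Node[] {
  const nodeMap = new Map<string, Node>(records.map((r) => [r.id, { id: r.id, children: [] }]));
  const roots: Node[] = [];
  for (const r of records) {
    const node = nodeMap.get(r.id)!;
    if (r.parent_id && nodeMap.has(r.parent_id)) {
      nodeMap.get(r.parent_id)!.children.push(node);
    } else {
      roots.push(node);
    }
  }
  return roots;
}

const roots = buildRoots([
  { id: "p" },
  { id: "c1", parent_id: "p" },
  { id: "c2", parent_id: "gone" }, // orphan: parent not in the set
]);
```

This keeps the view total: filtering the record list can never make records disappear from the tree, only promote them to roots.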
@@ -805,16 +881,31 @@ export async function cmdList(agentFilter?: string, cloudFilter?: string): Promi
   }

   // Non-interactive: show full history table
+  const flat = process.argv.includes("--flat");
   const records = filterHistory(agentFilter, cloudFilter);
   if (records.length === 0) {
     await showEmptyListMessage(agentFilter, cloudFilter);
     return;
   }

-  renderListTable(records, manifest);
+  if (process.argv.includes("--json")) {
+    console.log(JSON.stringify(records, null, 2));
+    return;
+  }
+
+  if (!flat && hasTreeStructure(records)) {
+    renderTreeTable(records, manifest);
+  } else {
+    renderListTable(records, manifest);
+  }
   showListFooter(records, agentFilter, cloudFilter);
 }
+
+export function cmdHistoryExport(): void {
+  const json = exportHistory();
+  console.log(json);
+}

 export async function cmdLast(): Promise<void> {
   const records = filterHistory();
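After this change cmdList has three output modes with a fixed precedence: `--json` wins outright, then `--flat` forces the flat table, otherwise the tree view kicks in whenever any record carries a parent_id. A hedged sketch of just that decision (`pickMode` is illustrative, not the real API):

```typescript
// Output-mode precedence in the non-interactive cmdList path.
function pickMode(argv: string[], anyParentId: boolean): "json" | "flat" | "tree" {
  if (argv.includes("--json")) return "json"; // machine output short-circuits rendering
  if (!argv.includes("--flat") && anyParentId) return "tree";
  return "flat";
}
```

Note `--json` also returns before the footer is printed, so piping `spawn list --json` yields clean JSON with no trailing summary text.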
@@ -665,6 +665,8 @@ export async function execScript(
 ): Promise<void> {
   // Generate a unique spawn ID and record the spawn before execution
   const spawnId = generateSpawnId();
+  const parentId = process.env.SPAWN_PARENT_ID || undefined;
+  const depth = process.env.SPAWN_DEPTH ? Number(process.env.SPAWN_DEPTH) : undefined;
   const saveResult = tryCatchIf(isFileError, () =>
     saveSpawnRecord({
       id: spawnId,
@@ -681,6 +683,16 @@ export async function execScript(
           prompt,
         }
       : {}),
+      ...(parentId
+        ? {
+            parent_id: parentId,
+          }
+        : {}),
+      ...(depth !== undefined && !Number.isNaN(depth)
+        ? {
+            depth,
+          }
+        : {}),
     }),
   );
   if (!saveResult.ok && debug) {
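The two env reads above are guarded so that a missing or garbled `SPAWN_DEPTH` never lands on the record. A hedged standalone sketch of that parsing (`parseSpawnEnv` is illustrative):

```typescript
// Toy model of how execScript() turns the recursion env vars into optional
// record fields: parent_id passes through when set; depth is included only
// when the string parses to a real number.
function parseSpawnEnv(env: Record<string, string | undefined>): { parent_id?: string; depth?: number } {
  const parentId = env.SPAWN_PARENT_ID || undefined;
  const depth = env.SPAWN_DEPTH ? Number(env.SPAWN_DEPTH) : undefined;
  return {
    ...(parentId ? { parent_id: parentId } : {}),
    ...(depth !== undefined && !Number.isNaN(depth) ? { depth } : {}),
  };
}
```

The conditional spreads keep the keys entirely absent — rather than set to `undefined` — which matters for the optional fields in SpawnRecordSchema and for clean JSON in the history file.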
packages/cli/src/commands/tree.ts (new file, 111 lines)

@@ -0,0 +1,111 @@
+// commands/tree.ts — `spawn tree` command: shows the full recursive spawn tree
+
+import type { SpawnRecord } from "../history.js";
+import type { Manifest } from "../manifest.js";
+
+import * as p from "@clack/prompts";
+import pc from "picocolors";
+import { loadHistory } from "../history.js";
+import { loadManifest } from "../manifest.js";
+import { asyncTryCatch } from "../shared/result.js";
+import { formatRelativeTime } from "./list.js";
+import { resolveDisplayName } from "./shared.js";
+
+interface TreeNode {
+  record: SpawnRecord;
+  children: TreeNode[];
+}
+
+/** Build a tree from all history records using parent_id. */
+function buildFullTree(records: SpawnRecord[]): TreeNode[] {
+  const nodeMap = new Map<string, TreeNode>();
+  const roots: TreeNode[] = [];
+
+  for (const r of records) {
+    nodeMap.set(r.id, {
+      record: r,
+      children: [],
+    });
+  }
+
+  for (const r of records) {
+    const node = nodeMap.get(r.id);
+    if (!node) {
+      continue;
+    }
+    if (r.parent_id && nodeMap.has(r.parent_id)) {
+      nodeMap.get(r.parent_id)!.children.push(node);
+    } else {
+      roots.push(node);
+    }
+  }
+
+  return roots;
+}
+
+/** Render a tree node to console with tree-drawing characters. */
+function printNode(node: TreeNode, manifest: Manifest | null, prefix: string, isLast: boolean, isRoot: boolean): void {
+  const r = node.record;
+  const name = r.name || r.connection?.server_name || r.id.slice(0, 8);
+  const agentDisplay = resolveDisplayName(manifest, r.agent, "agent");
+  const cloudDisplay = resolveDisplayName(manifest, r.cloud, "cloud");
+  const time = formatRelativeTime(r.timestamp);
+  const depthLabel = r.depth !== undefined ? pc.dim(` depth=${r.depth}`) : "";
+  const deletedLabel = r.connection?.deleted ? pc.red(" [deleted]") : "";
+
+  const connector = isRoot ? "" : isLast ? "└─ " : "├─ ";
+  const line = `${prefix}${connector}${pc.bold(name)} ${pc.dim(`${agentDisplay}/${cloudDisplay}`)} ${pc.dim(time)}${depthLabel}${deletedLabel}`;
+  console.log(line);
+
+  const childPrefix = isRoot ? "" : `${prefix}${isLast ? "   " : "│  "}`;
+  for (let i = 0; i < node.children.length; i++) {
+    printNode(node.children[i], manifest, childPrefix, i === node.children.length - 1, false);
+  }
+}
+
+/** Count total nodes in a tree. */
+function countNodes(nodes: TreeNode[]): number {
+  let count = 0;
+  for (const n of nodes) {
+    count += 1;
+    count += countNodes(n.children);
+  }
+  return count;
+}
+
+export async function cmdTree(jsonOutput?: boolean): Promise<void> {
+  const records = loadHistory();
+
+  if (records.length === 0) {
+    p.log.info("No spawn history found.");
+    p.log.info(`Run ${pc.cyan("spawn <agent> <cloud>")} to create your first spawn.`);
+    return;
+  }
+
+  const manifestResult = await asyncTryCatch(() => loadManifest());
+  const manifest: Manifest | null = manifestResult.ok ? manifestResult.data : null;
+
+  const roots = buildFullTree(records);
+
+  if (jsonOutput) {
+    console.log(JSON.stringify(records, null, 2));
+    return;
+  }
+
+  console.log();
+  for (let i = 0; i < roots.length; i++) {
+    printNode(roots[i], manifest, "", i === roots.length - 1, true);
+    if (i < roots.length - 1) {
+      console.log();
+    }
+  }
+  console.log();
+
+  const total = countNodes(roots);
+  const treeCount = roots.filter((r) => r.children.length > 0).length;
+  if (treeCount > 0) {
+    p.log.info(`${total} spawn(s) across ${treeCount} tree(s)`);
+  } else {
+    p.log.info(`${total} spawn(s), no parent-child relationships`);
+  }
+}
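The connector logic in printNode() composes two pieces of state: the connector for the current node ("├─ " for middle siblings, "└─ " for the last) and a prefix that extends with "│  " only while some ancestor still has siblings below it. A hedged sketch that renders names into an array instead of the console (types and spacing here are illustrative):

```typescript
// Toy model of the prefix/connector recursion used by printNode().
interface T {
  name: string;
  children: T[];
}

function render(node: T, prefix = "", isLast = true, isRoot = true, out: string[] = []): string[] {
  const connector = isRoot ? "" : isLast ? "└─ " : "├─ ";
  out.push(`${prefix}${connector}${node.name}`);
  // Continue the vertical rule only under non-last siblings.
  const childPrefix = isRoot ? "" : `${prefix}${isLast ? "   " : "│  "}`;
  node.children.forEach((c, i) => render(c, childPrefix, i === node.children.length - 1, false, out));
  return out;
}

const lines = render({
  name: "root",
  children: [
    { name: "a", children: [{ name: "b", children: [] }] },
    { name: "c", children: [] },
  ],
});
```

Rendered, the toy tree comes out as `root`, `├─ a`, `│  └─ b`, `└─ c` — the rule under `a` persists because `c` is still pending below it.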
@@ -37,6 +37,8 @@ export interface SpawnRecord {
   name?: string;
   prompt?: string;
   connection?: VMConnection;
+  parent_id?: string;
+  depth?: number;
 }

 /** Simplified cloud instance info returned by each provider's listServers(). */
@@ -63,7 +65,7 @@ const VMConnectionSchema = v.object({
   metadata: v.optional(v.record(v.string(), v.string())),
 });

-const SpawnRecordSchema = v.object({
+export const SpawnRecordSchema = v.object({
   id: v.optional(v.string()), // optional for backwards compat with pre-migration records on disk
   agent: v.string(),
   cloud: v.string(),
@@ -71,6 +73,8 @@ const SpawnRecordSchema = v.object({
   name: v.optional(v.string()),
   prompt: v.optional(v.string()),
   connection: v.optional(VMConnectionSchema),
+  parent_id: v.optional(v.string()),
+  depth: v.optional(v.number()),
 });

 /** v1 history file format: { version: 1, records: SpawnRecord[] } */
@@ -550,6 +554,43 @@ export function getActiveServers(): SpawnRecord[] {
   return records.filter((r) => r.connection?.cloud && r.connection.cloud !== "local" && !r.connection.deleted);
 }
+
+/** Merge child spawn records into local history.
+ * Sets parent_id on each child record and deduplicates by spawn ID. */
+export function mergeChildHistory(parentSpawnId: string, childRecords: SpawnRecord[]): void {
+  if (childRecords.length === 0) {
+    return;
+  }
+
+  withHistoryLock(() => {
+    const history = loadHistory();
+    const existingIds = new Set(history.map((r) => r.id));
+
+    for (const child of childRecords) {
+      if (!child.id) {
+        child.id = generateSpawnId();
+      }
+      // Skip duplicates
+      if (existingIds.has(child.id)) {
+        continue;
+      }
+      // Ensure parent_id is set
+      if (!child.parent_id) {
+        child.parent_id = parentSpawnId;
+      }
+      history.push(child);
+      existingIds.add(child.id);
+    }
+
+    writeHistory(history);
+  });
+}
+
+/** Export history records as JSON string (for `spawn history export`). */
+export function exportHistory(): string {
+  const records = loadHistory();
+  return JSON.stringify(records, null, 2);
+}

 export function filterHistory(agentFilter?: string, cloudFilter?: string): SpawnRecord[] {
   let records = loadHistory();
   if (agentFilter) {
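The dedup-and-link pass inside mergeChildHistory() can be sketched as a pure function. A hedged toy version (record shape reduced to two fields; `merge` is illustrative, the real function also takes the history lock and persists):

```typescript
// Toy model of the backward-pass merge: ids already in local history are
// skipped, so repeated pulls from the same child VM are idempotent; new
// records get parent_id stamped only when the child didn't set one itself.
interface Rec {
  id: string;
  parent_id?: string;
}

function merge(history: Rec[], parentId: string, children: Rec[]): Rec[] {
  const existing = new Set(history.map((r) => r.id));
  const out = [...history];
  for (const child of children) {
    if (existing.has(child.id)) continue; // already merged on a previous pull
    if (!child.parent_id) child.parent_id = parentId;
    out.push(child);
    existing.add(child.id);
  }
  return out;
}

const merged = merge([{ id: "p" }, { id: "x", parent_id: "p" }], "p", [
  { id: "x" }, // duplicate → skipped
  { id: "y" }, // new → parent_id stamped
]);
```

Preserving an existing parent_id matters for grandchildren: a record exported by a child VM may already point at one of the child's own spawns, and overwriting it would flatten the tree.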
@@ -15,6 +15,7 @@ import {
   cmdFeedback,
   cmdFix,
   cmdHelp,
+  cmdHistoryExport,
   cmdInteractive,
   cmdLast,
   cmdLink,
@@ -25,6 +26,7 @@ import {
   cmdRun,
   cmdRunHeadless,
   cmdStatus,
+  cmdTree,
   cmdUninstall,
   cmdUpdate,
   findClosestKeyByNameOrKey,
@@ -719,7 +721,21 @@ async function dispatchCommand(
     return;
   }

+  if (cmd === "tree") {
+    if (hasTrailingHelpFlag(filteredArgs)) {
+      cmdHelp();
+      return;
+    }
+    const jsonFlag = filteredArgs.slice(1).includes("--json");
+    await cmdTree(jsonFlag);
+    return;
+  }
   if (LIST_COMMANDS.has(cmd)) {
+    // Handle "history export" subcommand
+    if (cmd === "history" && filteredArgs[1] === "export") {
+      cmdHistoryExport();
+      return;
+    }
     await dispatchListCommand(filteredArgs);
     return;
   }
@@ -857,16 +873,18 @@ async function main(): Promise<void> {
     "images",
     "parallel",
     "docker",
+    "recursive",
   ]);
   const betaFeatures = extractAllFlagValues(filteredArgs, "--beta", "spawn <agent> <cloud> --beta parallel");
   for (const flag of betaFeatures) {
     if (!VALID_BETA_FEATURES.has(flag)) {
       console.error(pc.red(`Unknown beta feature: ${pc.bold(flag)}`));
       console.error("\nAvailable beta features:");
       console.error(`  ${pc.cyan("tarball")}    Use pre-built tarball for agent installation`);
       console.error(`  ${pc.cyan("images")}     Use pre-built DO marketplace images (faster boot)`);
       console.error(`  ${pc.cyan("parallel")}   Parallelize server boot with setup prompts`);
       console.error(`  ${pc.cyan("docker")}     Use Docker CE app image on Hetzner/GCP (faster boot)`);
+      console.error(`  ${pc.cyan("recursive")}  Install spawn CLI on VM for recursive spawning`);
       process.exit(1);
     }
   }
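Both the `--beta` CLI values above and the `SPAWN_BETA` env var consumed later in postInstall() reduce to the same shape: split on commas, drop empty segments, collect into a Set, then validate membership. A hedged sketch of that shared pattern (`parseBeta` is illustrative, not the real API):

```typescript
// Toy model of beta-feature parsing: filter(Boolean) discards the empty
// strings that a trailing or doubled comma would otherwise produce.
const VALID = new Set(["tarball", "images", "parallel", "docker", "recursive"]);

function parseBeta(raw: string): { features: Set<string>; unknown: string[] } {
  const features = new Set(raw.split(",").filter(Boolean));
  const unknown = [...features].filter((f) => !VALID.has(f));
  return { features, unknown };
}

const parsed = parseBeta("recursive,docker,");
```

Using a Set also makes repeated flags harmless: `--beta recursive --beta recursive` collapses to a single feature.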
@@ -6,7 +6,7 @@ import type { CloudRunner } from "./agent-setup.js";
 import type { AgentConfig } from "./agents.js";
 import type { SshTunnelHandle } from "./ssh.js";

-import { readFileSync } from "node:fs";
+import { existsSync, readFileSync } from "node:fs";
 import { getErrorMessage } from "@openrouter/spawn-shared";
 import * as v from "valibot";
 import { generateSpawnId, saveLaunchCmd, saveMetadata, saveSpawnRecord } from "../history.js";
@@ -14,7 +14,7 @@ import { offerGithubAuth, setupAutoUpdate, wrapSshCall } from "./agent-setup.js"
 import { tryTarballInstall } from "./agent-tarball.js";
 import { generateEnvConfig } from "./agents.js";
 import { getOrPromptApiKey } from "./oauth.js";
-import { getSpawnPreferencesPath } from "./paths.js";
+import { getSpawnCloudConfigPath, getSpawnPreferencesPath, getUserHome } from "./paths.js";
 import { asyncTryCatch, asyncTryCatchIf, isOperationalError, tryCatch } from "./result.js";
 import { isWindows } from "./shell.js";
 import { sleep, startSshTunnel } from "./ssh.js";
@@ -106,6 +106,95 @@ function wrapWithRestartLoop(cmd: string): string {
   ].join("\n");
 }
+
+// ── Recursive spawn helpers ──────────────────────────────────────────────────
+
+/** Install the spawn CLI on a remote VM. */
+export async function installSpawnCli(runner: CloudRunner): Promise<void> {
+  logStep("Installing spawn CLI on VM...");
+  const result = await asyncTryCatch(() =>
+    withRetry(
+      "spawn CLI install",
+      () => wrapSshCall(runner.runServer("curl -fsSL https://openrouter.ai/labs/spawn/cli/install.sh | bash")),
+      2,
+      5,
+    ),
+  );
+  if (!result.ok) {
+    logWarn("Spawn CLI install failed — recursive spawning will not be available on this VM");
+  } else {
+    logInfo("Spawn CLI installed on VM");
+  }
+}
+
+/** Copy local cloud credentials to the remote VM for recursive spawning. */
+export async function delegateCloudCredentials(runner: CloudRunner, cloudName: string): Promise<void> {
+  logStep("Delegating cloud credentials to VM...");
+
+  // Validate cloudName to prevent command injection via crafted cloud names
+  if (!/^[a-z0-9-]+$/.test(cloudName)) {
+    logWarn(`Invalid cloud name for credential delegation: ${cloudName}`);
+    return;
+  }
+
+  const filesToDelegate: {
+    localPath: string;
+    remotePath: string;
+  }[] = [];
+
+  // Current cloud's credentials
+  const cloudConfigPath = getSpawnCloudConfigPath(cloudName);
+  if (existsSync(cloudConfigPath)) {
+    filesToDelegate.push({
+      localPath: cloudConfigPath,
+      remotePath: `~/.config/spawn/${cloudName}.json`,
+    });
+  }
+
+  // OpenRouter credentials (always needed for child spawns)
+  const orConfigPath = `${getUserHome()}/.config/spawn/openrouter.json`;
+  if (existsSync(orConfigPath)) {
+    filesToDelegate.push({
+      localPath: orConfigPath,
+      remotePath: "~/.config/spawn/openrouter.json",
+    });
+  }
+
+  if (filesToDelegate.length === 0) {
+    logWarn("No credentials to delegate — child spawns may require manual auth");
+    return;
+  }
+
+  // Ensure config dir exists on VM
+  const mkdirResult = await asyncTryCatch(() =>
+    runner.runServer("mkdir -p ~/.config/spawn && chmod 700 ~/.config/spawn"),
+  );
+  if (!mkdirResult.ok) {
+    logWarn("Could not create config directory on VM");
+    return;
+  }
+
+  for (const file of filesToDelegate) {
+    const content = readFileSync(file.localPath, "utf-8");
+    const b64 = Buffer.from(content).toString("base64");
+    const writeResult = await asyncTryCatch(() =>
+      runner.runServer(`printf '%s' '${b64}' | base64 -d > ${file.remotePath} && chmod 600 ${file.remotePath}`),
+    );
+    if (!writeResult.ok) {
+      logWarn(`Could not delegate ${file.remotePath}`);
+    }
+  }
+
+  logInfo("Cloud credentials delegated to VM");
+}
+
+/** Append recursive-spawn env vars to the envPairs array when --beta recursive is active. */
+export function appendRecursiveEnvVars(envPairs: string[], spawnId: string): void {
+  const currentDepth = Number(process.env.SPAWN_DEPTH) || 0;
+  envPairs.push(`SPAWN_PARENT_ID=${spawnId}`);
+  envPairs.push(`SPAWN_DEPTH=${currentDepth + 1}`);
+  envPairs.push("SPAWN_BETA=recursive");
+}

 /** Options for runOrchestration (used in tests to inject mock dependencies). */
 export interface OrchestrationOptions {
   tryTarball?: (runner: CloudRunner, agentName: string) => Promise<boolean>;
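The base64 hop in delegateCloudCredentials() is what makes arbitrary JSON safe to embed in the single-quoted remote `printf` command: the base64 alphabet (A–Z, a–z, 0–9, `+`, `/`, `=`) contains no quotes or shell metacharacters. A hedged round-trip sketch mirroring the encode side and the VM's `base64 -d` decode side (function names are illustrative):

```typescript
// Encode locally, decode as the VM would; the wire form is shell-inert.
function encodeForShell(content: string): string {
  return Buffer.from(content, "utf-8").toString("base64");
}

function decodeOnVm(b64: string): string {
  return Buffer.from(b64, "base64").toString("utf-8");
}

const secret = JSON.stringify({ token: "t0k3n with 'quotes' & $pecials" });
const wire = encodeForShell(secret);
```

Without the encoding, a credential file containing a single quote would terminate the `'${b64}'` argument early and corrupt — or inject into — the remote command, which is also why cloudName itself is regex-validated before being interpolated into the remote path.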
@@ -257,6 +346,9 @@ export async function runOrchestration(
   if (modelId && agent.modelEnvVar) {
     envPairs.push(`${agent.modelEnvVar}=${modelId}`);
   }
+  if (betaFeatures.has("recursive")) {
+    appendRecursiveEnvVars(envPairs, spawnId);
+  }
   const envContent = generateEnvConfig(envPairs);

   // Install agent — remote tarball, fallback to live install
@@ -356,6 +448,9 @@ export async function runOrchestration(
   if (modelId && agent.modelEnvVar) {
     envPairs.push(`${agent.modelEnvVar}=${modelId}`);
   }
+  if (betaFeatures.has("recursive")) {
+    appendRecursiveEnvVars(envPairs, spawnId);
+  }
   const envContent = generateEnvConfig(envPairs);

   // 8. Install agent
@@ -451,6 +546,13 @@ async function postInstall(
     await setupAutoUpdate(cloud.runner, agentName, agent.updateCmd);
   }

+  // Recursive spawn setup — install spawn CLI and delegate credentials
+  const betaFeaturesPost = new Set((process.env.SPAWN_BETA ?? "").split(",").filter(Boolean));
+  if (betaFeaturesPost.has("recursive") && cloud.cloudName !== "local") {
+    await installSpawnCli(cloud.runner);
+    await delegateCloudCredentials(cloud.runner, cloud.cloudName);
+  }
+
   // Pre-launch hooks (retry loop)
   if (agent.preLaunch) {
     for (;;) {