fix: migrate agent runtime config

Peter Steinberger 2026-04-26 07:58:48 +01:00
parent 9d6e79019f
commit 5b9be2cdb1
61 changed files with 873 additions and 335 deletions


@@ -78,6 +78,10 @@ Docs: https://docs.openclaw.ai
### Fixes
- Plugins/uninstall: remove tracked plugin files from their recorded managed extensions root even when the current state directory points somewhere else, so `openclaw plugins uninstall --force` does not leave the plugin discoverable. Thanks @shakkernerd.
- Agents/runtime: add `agentRuntime.id` as the canonical config key, migrate
legacy runtime-policy configs with `openclaw doctor --fix`, and route
canonical Anthropic models through `claude-cli` without passing CLI backend
aliases to embedded harness selection. Fixes #71957. Thanks @WolvenRA.
- CLI/update: guard Windows scheduled-task stops by state and timeout so auto-update restart cannot hang indefinitely on `schtasks /End` before stale-listener cleanup. Fixes #69970. Thanks @yangswld and @sherlock-huang.
- Gateway/install: refresh loaded gateway service installs when the current service embeds stale gateway auth instead of returning already-installed, avoiding LaunchAgent token-mismatch loops after token rotation. Fixes #70752. Thanks @hyspacex.
- Update: ignore bundled plugin `.openclaw-install-stage` directories during global install verification and packaged dist pruning so leftover runtime-dep staging files do not turn successful updates into `unexpected packaged dist file` failures. Fixes #71752. Thanks @waynegault.


@@ -1,4 +1,4 @@
f1eefb91a486188915373b09199959f0f1a7cd01dc75ef923832741f72a12543 config-baseline.json
9f0e386d5118cbca785a2e8e9c8b170d844faf1b7ef5e82e6b15d9e1c39f3796 config-baseline.core.json
d8e7866e0c3f633222f75a35defed3c3a03d849f4aa4f70871e3436e80074e76 config-baseline.json
5f5fb87fd46f9cbb84d8af17e00ae3c4b74062e8ad517bc2260ba83da2e9014f config-baseline.core.json
7cd9c908f066c143eab2a201efbc9640f483ab28bba92ddeca1d18cc2b528bc3 config-baseline.channel.json
a5479c182ec987bb21e814b8a4e7b3bda7190ae5c2b35fd5ca403dfa48afa115 config-baseline.plugin.json


@@ -162,7 +162,7 @@ configured OpenClaw model. If no configured model is usable yet, it can fall
back to local runtimes already present on the machine:
- Claude Code CLI: `claude-cli/claude-opus-4-7`
- Codex app-server harness: `openai/gpt-5.5` with `embeddedHarness.runtime: "codex"`
- Codex app-server harness: `openai/gpt-5.5` with `agentRuntime.id: "codex"`
- Codex CLI: `codex-cli/gpt-5.5`
The model-assisted planner cannot mutate config directly. It must translate the


@@ -18,14 +18,24 @@ configuration. They are different layers:
| ------------- | ------------------------------------- | ------------------------------------------------------------------- |
| Provider | `openai`, `anthropic`, `openai-codex` | How OpenClaw authenticates, discovers models, and names model refs. |
| Model | `gpt-5.5`, `claude-opus-4-6` | The model selected for the agent turn. |
| Agent runtime | `pi`, `codex`, ACP-backed runtimes | The low level loop that executes the prepared turn. |
| Agent runtime | `pi`, `codex`, `claude-cli` | The low-level loop or backend that executes the prepared turn. |
| Channel | Telegram, Discord, Slack, WhatsApp | Where messages enter and leave OpenClaw. |
You will also see the word **harness** in code and config. A harness is the
implementation that provides an agent runtime. For example, the bundled Codex
harness implements the `codex` runtime. The config key is still named
`embeddedHarness` for compatibility, but user-facing docs and status output
should generally say runtime.
You will also see the word **harness** in code. A harness is the implementation
that provides an agent runtime. For example, the bundled Codex harness
implements the `codex` runtime. Public config uses `agentRuntime.id`; `openclaw
doctor --fix` rewrites older runtime-policy keys to that shape.
There are two runtime families:
- **Embedded harnesses** run inside OpenClaw's prepared agent loop. Today this
is the built-in `pi` runtime plus registered plugin harnesses such as
`codex`.
- **CLI backends** run a local CLI process while keeping the model ref
canonical. For example, `anthropic/claude-opus-4-7` with
`agentRuntime.id: "claude-cli"` means "select the Anthropic model and execute
it through the Claude CLI." `claude-cli` is not an embedded harness id and must not
be passed to AgentHarness selection.
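
The CLI-backend family can be sketched as a minimal config fragment (surrounding config omitted; the model ref is the example used above):

```json5
{
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7", // canonical provider/model ref
      agentRuntime: { id: "claude-cli" }, // CLI backend, not an embedded harness id
    },
  },
}
```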
## Three things named Codex
@@ -34,7 +44,7 @@ Most confusion comes from three different surfaces sharing the Codex name:
| Surface | OpenClaw name/config | What it does |
| ---------------------------------------------------- | ------------------------------------ | --------------------------------------------------------------------------------------------------- |
| Codex OAuth provider route | `openai-codex/*` model refs | Uses ChatGPT/Codex subscription OAuth through the normal OpenClaw PI runner. |
| Native Codex app-server runtime | `embeddedHarness.runtime: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
| Native Codex app-server runtime | `agentRuntime.id: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
| Codex ACP adapter | `runtime: "acp"`, `agentId: "codex"` | Runs Codex through the external ACP/acpx control plane. Use only when ACP/acpx is explicitly asked. |
| Native Codex chat-control command set | `/codex ...` | Binds, resumes, steers, stops, and inspects Codex app-server threads from chat. |
| OpenAI Platform API route for GPT/Codex-style models | `openai/*` model refs | Uses OpenAI API-key auth unless a runtime override, such as `runtime: "codex"`, runs the turn. |
@@ -52,8 +62,8 @@ The common Codex setup uses the `openai` provider with the `codex` runtime:
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
},
@@ -76,7 +86,7 @@ This is the agent-facing decision tree:
1. If the user asks for **Codex bind/control/thread/resume/steer/stop**, use the
native `/codex` command surface when the bundled `codex` plugin is enabled.
2. If the user asks for **Codex as the embedded runtime**, use
`openai/<model>` with `embeddedHarness.runtime: "codex"`.
`openai/<model>` with `agentRuntime.id: "codex"`.
3. If the user asks for **Codex OAuth/subscription auth on the normal OpenClaw
runner**, use `openai-codex/<model>` and leave the runtime as PI.
4. If the user explicitly says **ACP**, **acpx**, or **Codex ACP adapter**, use
@@ -87,7 +97,7 @@ This is the agent-facing decision tree:
| You mean... | Use... |
| --------------------------------------- | -------------------------------------------- |
| Codex app-server chat/thread control | `/codex ...` from the bundled `codex` plugin |
| Codex app-server embedded agent runtime | `embeddedHarness.runtime: "codex"` |
| Codex app-server embedded agent runtime | `agentRuntime.id: "codex"` |
| OpenAI Codex OAuth on the PI runner | `openai-codex/*` model refs |
| Claude Code or other external harness | ACP/acpx |
@@ -122,9 +132,9 @@ OpenClaw chooses an embedded runtime after provider and model resolution:
1. A session's recorded runtime wins. Config changes do not hot-switch an
existing transcript to a different native thread system.
2. `OPENCLAW_AGENT_RUNTIME=<id>` forces that runtime for new or reset sessions.
3. `agents.defaults.embeddedHarness.runtime` or
`agents.list[].embeddedHarness.runtime` can set `auto`, `pi`, or a registered
runtime id such as `codex`.
3. `agents.defaults.agentRuntime.id` or `agents.list[].agentRuntime.id` can set
`auto`, `pi`, a registered embedded harness id such as `codex`, or a
supported CLI backend alias such as `claude-cli`.
4. In `auto` mode, registered plugin runtimes can claim supported provider/model
pairs.
5. If no runtime claims a turn in `auto` mode and `fallback: "pi"` is set
@@ -137,6 +147,24 @@ Explicit plugin runtimes fail closed by default. For example,
a broader fallback setting, so an agent-level `runtime: "codex"` is not silently
routed back to PI just because defaults used `fallback: "pi"`.
CLI backend aliases are different from embedded harness ids. The preferred
Claude CLI form is:
```json5
{
agents: {
defaults: {
model: "anthropic/claude-opus-4-7",
agentRuntime: { id: "claude-cli" },
},
},
}
```
Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for
compatibility, but new config should keep the provider/model canonical and put
the execution backend in `agentRuntime.id`.
`auto` mode is intentionally conservative. Plugin runtimes can claim
provider/model pairs they understand, but the Codex plugin does not claim the
`openai-codex` provider in `auto` mode. That keeps
@@ -146,7 +174,7 @@ moving subscription-auth configs onto the native app-server harness.
If `openclaw doctor` warns that the `codex` plugin is enabled while
`openai-codex/*` still routes through PI, treat that as a diagnosis, not a
migration. Keep the config unchanged when PI Codex OAuth is what you want.
Switch to `openai/<model>` plus `runtime: "codex"` only when you want native
Switch to `openai/<model>` plus `agentRuntime.id: "codex"` only when you want native
Codex app-server execution.
## Compatibility contract


@@ -24,17 +24,17 @@ Reference for **LLM/model providers** (not chat channels like WhatsApp/Telegram)
- `openai/<model>` uses the direct OpenAI API-key provider in PI.
- `openai-codex/<model>` uses Codex OAuth in PI.
- `openai/<model>` plus `agents.defaults.embeddedHarness.runtime: "codex"` uses the native Codex app-server harness.
- `openai/<model>` plus `agents.defaults.agentRuntime.id: "codex"` uses the native Codex app-server harness.
See [OpenAI](/providers/openai) and [Codex harness](/plugins/codex-harness). If the provider/runtime split is confusing, read [Agent runtimes](/concepts/agent-runtimes) first.
Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `embeddedHarness.runtime: "codex"` or legacy `codex/<model>` refs.
Plugin auto-enable follows the same boundary: `openai-codex/<model>` belongs to the OpenAI plugin, while the Codex plugin is enabled by `agentRuntime.id: "codex"` or legacy `codex/<model>` refs.
GPT-5.5 is available through `openai/gpt-5.5` for direct API-key traffic, `openai-codex/gpt-5.5` in PI for Codex OAuth, and the native Codex app-server harness when `embeddedHarness.runtime: "codex"` is set.
GPT-5.5 is available through `openai/gpt-5.5` for direct API-key traffic, `openai-codex/gpt-5.5` in PI for Codex OAuth, and the native Codex app-server harness when `agentRuntime.id: "codex"` is set.
</Accordion>
<Accordion title="CLI runtimes">
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.embeddedHarness.runtime` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
CLI runtimes use the same split: choose canonical model refs such as `anthropic/claude-*`, `google/gemini-*`, or `openai/gpt-*`, then set `agents.defaults.agentRuntime.id` to `claude-cli`, `google-gemini-cli`, or `codex-cli` when you want a local CLI backend.
Legacy `claude-cli/*`, `google-gemini-cli/*`, and `codex-cli/*` refs migrate back to canonical provider refs with the runtime recorded separately.
@@ -108,6 +108,10 @@ OpenClaw ships with the piai catalog. These providers require **no** `models.
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice apiKey`
- Direct public Anthropic requests support the shared `/fast` toggle and `params.fastMode`, including API-key and OAuth-authenticated traffic sent to `api.anthropic.com`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
- Preferred Claude CLI config keeps the model ref canonical and selects the CLI
backend separately: `anthropic/claude-opus-4-7` with
`agents.defaults.agentRuntime.id: "claude-cli"`. Legacy
`claude-cli/claude-opus-4-7` refs still work for compatibility.
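Expressed as config, the preferred shape might look like this (a sketch built from the keys above; surrounding config omitted):

```json5
{
  agents: {
    defaults: {
      // Keep the model ref canonical; select the CLI backend separately.
      model: "anthropic/claude-opus-4-7",
      agentRuntime: { id: "claude-cli" },
    },
  },
}
```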
<Note>
Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so OpenClaw treats Claude CLI reuse and `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy. Anthropic setup-token remains available as a supported OpenClaw token path, but OpenClaw now prefers Claude CLI reuse and `claude -p` when available.
@@ -124,7 +128,7 @@ Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so Ope
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- PI model ref: `openai-codex/gpt-5.5`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.embeddedHarness.runtime: "codex"`
- Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.agentRuntime.id: "codex"`
- Native Codex app-server harness docs: [Codex harness](/plugins/codex-harness)
- Legacy model refs: `codex/gpt-*`
- Plugin boundary: `openai-codex/*` loads the OpenAI plugin; the native Codex app-server plugin is selected only by the Codex harness runtime or legacy `codex/*` refs.


@@ -13,7 +13,7 @@ Quick provider overview + examples: [/concepts/model-providers](/concepts/model-
Model refs choose a provider and model. They do not usually choose the
low-level agent runtime. For example, `openai/gpt-5.5` can run through the
normal OpenAI provider path or through the Codex app-server runtime, depending
on `agents.defaults.embeddedHarness.runtime`. See
on `agents.defaults.agentRuntime.id`. See
[/concepts/agent-runtimes](/concepts/agent-runtimes).
## How model selection works


@@ -316,8 +316,8 @@ Time format in system prompt. Default: `auto` (OS preference).
fallbacks: ["openai/gpt-5.4-mini"],
},
params: { cacheRetention: "long" }, // global default provider params
embeddedHarness: {
runtime: "pi", // pi | auto | registered harness id, e.g. codex
agentRuntime: {
id: "pi", // pi | auto | registered harness id, e.g. codex
fallback: "pi", // pi | none
},
pdfMaxBytesMb: 10,
@@ -373,25 +373,25 @@ Time format in system prompt. Default: `auto` (OS preference).
- `params.extra_body`/`params.extraBody`: advanced pass-through JSON merged into `api: "openai-completions"` request bodies for OpenAI-compatible proxies. If it collides with generated request keys, the extra body wins; non-native completions routes still strip OpenAI-only `store` afterward.
- `params.chat_template_kwargs`: vLLM/OpenAI-compatible chat-template arguments merged into top-level `api: "openai-completions"` request bodies. For `vllm/nemotron-3-*` with thinking off, OpenClaw automatically sends `enable_thinking: false` and `force_nonempty_content: true`; explicit `chat_template_kwargs` override those defaults, and `extra_body.chat_template_kwargs` still has final precedence.
- `params.preserveThinking`: Z.AI-only opt-in for preserved thinking. When enabled and thinking is on, OpenClaw sends `thinking.clear_thinking: false` and replays prior `reasoning_content`; see [Z.AI thinking and preserved thinking](/providers/zai#thinking-and-preserved-thinking).
- `embeddedHarness`: default low-level embedded agent runtime policy. Omitted runtime defaults to OpenClaw Pi. Use `runtime: "pi"` to force the built-in PI harness, `runtime: "auto"` to let registered plugin harnesses claim supported models, or a registered harness id such as `runtime: "codex"`. Set `fallback: "none"` to disable automatic PI fallback. Explicit plugin runtimes such as `codex` fail closed by default unless you set `fallback: "pi"` in the same override scope. Keep model refs canonical as `provider/model`; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See [Agent runtimes](/concepts/agent-runtimes) for how this differs from provider/model selection.
- `agentRuntime`: default low-level agent runtime policy. Omitted id defaults to OpenClaw Pi. Use `id: "pi"` to force the built-in PI harness, `id: "auto"` to let registered plugin harnesses claim supported models, a registered harness id such as `id: "codex"`, or a supported CLI backend alias such as `id: "claude-cli"`. Set `fallback: "none"` to disable automatic PI fallback. Explicit plugin runtimes such as `codex` fail closed by default unless you set `fallback: "pi"` in the same override scope. Keep model refs canonical as `provider/model`; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See [Agent runtimes](/concepts/agent-runtimes) for how this differs from provider/model selection.
- Config writers that mutate these fields (for example `/models set`, `/models set-image`, and fallback add/remove commands) save canonical object form and preserve existing fallback lists when possible.
- `maxConcurrent`: max parallel agent runs across sessions (each session still serialized). Default: 4.
### `agents.defaults.embeddedHarness`
### `agents.defaults.agentRuntime`
`embeddedHarness` controls which low-level executor runs embedded agent turns.
Most deployments should keep the default OpenClaw Pi runtime.
Use it when a trusted plugin provides a native harness, such as the bundled
Codex app-server harness. For the mental model, see
[Agent runtimes](/concepts/agent-runtimes).
`agentRuntime` controls which low-level executor runs agent turns. Most
deployments should keep the default OpenClaw Pi runtime. Use it when a trusted
plugin provides a native harness, such as the bundled Codex app-server harness,
or when you want a supported CLI backend such as Claude CLI. For the mental
model, see [Agent runtimes](/concepts/agent-runtimes).
```json5
{
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},
@@ -399,12 +399,14 @@ Codex app-server harness. For the mental model, see
}
```
- `runtime`: `"auto"`, `"pi"`, or a registered plugin harness id. The bundled Codex plugin registers `codex`.
- `fallback`: `"pi"` or `"none"`. In `runtime: "auto"`, omitted fallback defaults to `"pi"` so old configs can keep using PI when no plugin harness claims a run. In explicit plugin runtime mode, such as `runtime: "codex"`, omitted fallback defaults to `"none"` so a missing harness fails instead of silently using PI. Runtime overrides do not inherit fallback from a broader scope; set `fallback: "pi"` alongside the explicit runtime when you intentionally want that compatibility fallback. Selected plugin harness failures always surface directly.
- Environment overrides: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `runtime`; `OPENCLAW_AGENT_HARNESS_FALLBACK=pi|none` overrides fallback for that process.
- For Codex-only deployments, set `model: "openai/gpt-5.5"` and `embeddedHarness.runtime: "codex"`. You may also set `embeddedHarness.fallback: "none"` explicitly for readability; it is the default for explicit plugin runtimes.
- `id`: `"auto"`, `"pi"`, a registered plugin harness id, or a supported CLI backend alias. The bundled Codex plugin registers `codex`; the bundled Anthropic plugin provides the `claude-cli` CLI backend.
- `fallback`: `"pi"` or `"none"`. In `id: "auto"`, omitted fallback defaults to `"pi"` so old configs can keep using PI when no plugin harness claims a run. In explicit plugin runtime mode, such as `id: "codex"`, omitted fallback defaults to `"none"` so a missing harness fails instead of silently using PI. Runtime overrides do not inherit fallback from a broader scope; set `fallback: "pi"` alongside the explicit runtime when you intentionally want that compatibility fallback. Selected plugin harness failures always surface directly.
- Environment overrides: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `id`; `OPENCLAW_AGENT_HARNESS_FALLBACK=pi|none` overrides fallback for that process.
- For Codex-only deployments, set `model: "openai/gpt-5.5"` and `agentRuntime.id: "codex"`. You may also set `agentRuntime.fallback: "none"` explicitly for readability; it is the default for explicit plugin runtimes.
- For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in `agentRuntime.id`.
- Older runtime-policy keys are rewritten to `agentRuntime` by `openclaw doctor --fix`.
- Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy sessions with transcript history but no recorded pin are treated as PI-pinned. `/status` reports the effective runtime, for example `Runtime: OpenClaw Pi Default` or `Runtime: OpenAI Codex`.
- This only controls the embedded chat harness. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
- This only controls text agent-turn execution. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
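
The rewrite performed by `openclaw doctor --fix` amounts, roughly, to renaming the legacy policy object into the canonical shape (a sketch; doctor may adjust other keys as well):

```json5
// Before (legacy key, still read for compatibility):
//   agents: { defaults: { embeddedHarness: { runtime: "codex", fallback: "none" } } }
// After `openclaw doctor --fix` (canonical shape):
{
  agents: {
    defaults: {
      agentRuntime: { id: "codex", fallback: "none" },
    },
  },
}
```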
**Built-in alias shorthands** (only apply when the model is in `agents.defaults.models`):
@@ -923,7 +925,7 @@ for provider examples and precedence.
thinkingDefault: "high", // per-agent thinking level override
reasoningDefault: "on", // per-agent reasoning visibility override
fastModeDefault: false, // per-agent fast mode override
embeddedHarness: { runtime: "auto", fallback: "pi" },
agentRuntime: { id: "auto", fallback: "pi" },
params: { cacheRetention: "none" }, // overrides matching defaults.models params by key
tts: {
providers: {
@@ -970,7 +972,7 @@ for provider examples and precedence.
- `thinkingDefault`: optional per-agent default thinking level (`off | minimal | low | medium | high | xhigh | adaptive | max`). Overrides `agents.defaults.thinkingDefault` for this agent when no per-message or session override is set. The selected provider/model profile controls which values are valid; for Google Gemini, `adaptive` keeps provider-owned dynamic thinking (`thinkingLevel` omitted on Gemini 3/3.1, `thinkingBudget: -1` on Gemini 2.5).
- `reasoningDefault`: optional per-agent default reasoning visibility (`on | off | stream`). Applies when no per-message or session reasoning override is set.
- `fastModeDefault`: optional per-agent default for fast mode (`true | false`). Applies when no per-message or session fast-mode override is set.
- `embeddedHarness`: optional per-agent low-level harness policy override. Use `{ runtime: "codex" }` to make one agent Codex-only while other agents keep the default PI fallback in `auto` mode.
- `agentRuntime`: optional per-agent low-level runtime policy override. Use `{ id: "codex" }` to make one agent Codex-only while other agents keep the default PI fallback in `auto` mode.
- `runtime`: optional per-agent runtime descriptor. Use `type: "acp"` with `runtime.acp` defaults (`agent`, `backend`, `mode`, `cwd`) when the agent should default to ACP harness sessions.
- `identity.avatar`: workspace-relative path, `http(s)` URL, or `data:` URI.
- `identity` derives defaults: `ackReaction` from `emoji`, `mentionPatterns` from `name`/`emoji`.
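
A per-agent runtime override next to conservative auto-mode defaults might look like this (a sketch; the other required per-agent fields are omitted from the hypothetical list entry):

```json5
{
  agents: {
    defaults: {
      agentRuntime: { id: "auto", fallback: "pi" }, // defaults keep PI fallback
    },
    list: [
      {
        // ...other per-agent fields omitted...
        agentRuntime: { id: "codex" }, // this agent is Codex-only and fails closed
      },
    ],
  },
}
```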


@@ -85,6 +85,7 @@ cat ~/.openclaw/openclaw.json
- Legacy on-disk state migration (sessions/agent dir/WhatsApp auth).
- Legacy plugin manifest contract key migration (`speechProviders`, `realtimeTranscriptionProviders`, `realtimeVoiceProviders`, `mediaUnderstandingProviders`, `imageGenerationProviders`, `videoGenerationProviders`, `webFetchProviders`, `webSearchProviders` → `contracts`).
- Legacy cron store migration (`jobId`, `schedule.cron`, top-level delivery/payload fields, payload `provider`, simple `notify: true` webhook fallback jobs).
- Legacy agent runtime-policy migration to `agents.defaults.agentRuntime` and `agents.list[].agentRuntime`.
</Accordion>
<Accordion title="State and integrity">
- Session lock file inspection and stale lock cleanup.
@@ -237,7 +238,7 @@ That stages grounded durable candidates into the short-term dreaming store while
If you previously added legacy OpenAI transport settings under `models.providers.openai-codex`, they can shadow the built-in Codex OAuth provider path that newer releases use automatically. Doctor warns when it sees those old transport settings alongside Codex OAuth so you can remove or rewrite the stale transport override and get the built-in routing/fallback behavior back. Custom proxies and header-only overrides are still supported and do not trigger this warning.
</Accordion>
<Accordion title="2f. Codex plugin route warnings">
When the bundled Codex plugin is enabled, doctor also checks whether `openai-codex/*` primary model refs still resolve through the default PI runner. That combination is valid when you want Codex OAuth/subscription auth through PI, but it is easy to confuse with the native Codex app-server harness. Doctor warns and points to the explicit app-server shape: `openai/*` plus `embeddedHarness.runtime: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex`.
When the bundled Codex plugin is enabled, doctor also checks whether `openai-codex/*` primary model refs still resolve through the default PI runner. That combination is valid when you want Codex OAuth/subscription auth through PI, but it is easy to confuse with the native Codex app-server harness. Doctor warns and points to the explicit app-server shape: `openai/*` plus `agentRuntime.id: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex`.
Doctor does not repair this automatically because both routes are valid:


@@ -597,7 +597,7 @@ and troubleshooting see the main [FAQ](/help/faq).
`openai-codex/gpt-5.5` for Codex OAuth through the default PI runner. Use
`openai/gpt-5.5` for direct OpenAI API-key access. GPT-5.5 can also use
subscription/OAuth via `openai-codex/gpt-5.5` or native Codex app-server
runs with `openai/gpt-5.5` and `embeddedHarness.runtime: "codex"`.
runs with `openai/gpt-5.5` and `agentRuntime.id: "codex"`.
See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>
@@ -607,7 +607,7 @@ and troubleshooting see the main [FAQ](/help/faq).
- `openai/gpt-5.5` = current direct OpenAI API-key route in PI
- `openai-codex/gpt-5.5` = Codex OAuth route in PI
- `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` = native Codex app-server route
- `openai/gpt-5.5` + `agentRuntime.id: "codex"` = native Codex app-server route
- `openai-codex:...` = auth profile id, not a model ref
If you want the direct OpenAI Platform billing/limit path, set


@@ -26,7 +26,7 @@ The bundled `codex` plugin contributes several separate capabilities:
| Capability | How you use it | What it does |
| --------------------------------- | --------------------------------------------------- | ----------------------------------------------------------------------------- |
| Native embedded runtime | `embeddedHarness.runtime: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
@@ -69,7 +69,7 @@ and [Plugin guard behavior](/tools/plugin).
The harness is off by default. New configs should keep OpenAI model refs
canonical as `openai/gpt-*` and explicitly force
`embeddedHarness.runtime: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they
`agentRuntime.id: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they
want native app-server execution. Legacy `codex/*` model refs still auto-select
the harness for compatibility, but runtime-backed legacy provider prefixes are
not shown as normal model/provider choices.
@@ -87,14 +87,14 @@ Use this table before changing config:
| ------------------------------------------- | -------------------------- | -------------------------------------- | --------------------------- | ------------------------------ |
| OpenAI API through normal OpenClaw runner | `openai/gpt-*` | omitted or `runtime: "pi"` | OpenAI provider | `Runtime: OpenClaw Pi Default` |
| Codex OAuth/subscription through PI | `openai-codex/gpt-*` | omitted or `runtime: "pi"` | OpenAI Codex OAuth provider | `Runtime: OpenClaw Pi Default` |
| Native Codex app-server embedded turns | `openai/gpt-*` | `embeddedHarness.runtime: "codex"` | `codex` plugin | `Runtime: OpenAI Codex` |
| Mixed providers with conservative auto mode | provider-specific refs | `runtime: "auto", fallback: "pi"` | Optional plugin runtimes | Depends on selected runtime |
| Native Codex app-server embedded turns | `openai/gpt-*` | `agentRuntime.id: "codex"` | `codex` plugin | `Runtime: OpenAI Codex` |
| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Optional plugin runtimes | Depends on selected runtime |
| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | healthy `acpx` backend | ACP task/session status |
The important split is provider versus runtime:
- `openai-codex/*` answers "which provider/auth route should PI use?"
- `embeddedHarness.runtime: "codex"` answers "which loop should execute this
- `agentRuntime.id: "codex"` answers "which loop should execute this
embedded turn?"
- `/codex ...` answers "which native Codex conversation should this chat bind
or control?"
@@ -106,11 +106,11 @@ OpenAI-family routes are prefix-specific. Use `openai-codex/*` when you want
Codex OAuth through PI; use `openai/*` when you want direct OpenAI API access or
when you are forcing the native Codex app-server harness:
| Model ref | Runtime path | Use when |
| ----------------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
| Model ref | Runtime path | Use when |
| --------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `agentRuntime.id: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
GPT-5.5 is currently subscription/OAuth-only in OpenClaw. Use
`openai-codex/gpt-5.5` for PI OAuth, or `openai/gpt-5.5` with the Codex
@@ -123,7 +123,7 @@ refs and records the runtime policy separately, while fallback-only legacy refs
are left unchanged because runtime is configured for the whole agent container.
New PI Codex OAuth configs should use `openai-codex/gpt-*`; new native
app-server harness configs should use `openai/gpt-*` plus
`embeddedHarness.runtime: "codex"`.
`agentRuntime.id: "codex"`.
`agents.defaults.imageModel` follows the same prefix split. Use
`openai-codex/gpt-*` when image understanding should run through the OpenAI
@ -152,14 +152,14 @@ means:
- **No change is required** if you intended ChatGPT/Codex OAuth through PI.
- Change the model to `openai/<model>` and set
`embeddedHarness.runtime: "codex"` if you intended native app-server
`agentRuntime.id: "codex"` if you intended native app-server
execution.
- Existing sessions still need `/new` or `/reset` after a runtime change,
because session runtime pins are sticky.
Harness selection is not a live session control. When an embedded turn runs,
OpenClaw records the selected harness id on that session and keeps using it for
later turns in the same session id. Change `embeddedHarness` config or
later turns in the same session id. Change `agentRuntime` config or
`OPENCLAW_AGENT_RUNTIME` when you want future sessions to use another harness;
use `/new` or `/reset` to start a fresh session before switching an existing
conversation between PI and Codex. This avoids replaying one transcript through
@ -205,8 +205,8 @@ Use `openai/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
},
@ -230,11 +230,11 @@ If your config uses `plugins.allow`, include `codex` there too:
Legacy configs that set `agents.defaults.model` or an agent model to
`codex/<model>` still auto-enable the bundled `codex` plugin. New configs should
prefer `openai/<model>` plus the explicit `embeddedHarness` entry above.
prefer `openai/<model>` plus the explicit `agentRuntime` entry above.
## Add Codex alongside other models
Do not set `runtime: "codex"` globally if the same agent should freely switch
Do not set `agentRuntime.id: "codex"` globally if the same agent should freely switch
between Codex and non-Codex provider models. A forced runtime applies to every
embedded turn for that agent or session. If you select an Anthropic model while
that runtime is forced, OpenClaw still tries the Codex harness and fails closed
@ -242,8 +242,8 @@ instead of silently routing that turn through PI.
Use one of these shapes instead:
- Put Codex on a dedicated agent with `embeddedHarness.runtime: "codex"`.
- Keep the default agent on `runtime: "auto"` and PI fallback for normal mixed
- Put Codex on a dedicated agent with `agentRuntime.id: "codex"`.
- Keep the default agent on `agentRuntime.id: "auto"` and PI fallback for normal mixed
provider usage.
- Use legacy `codex/*` refs only for compatibility. New configs should prefer
`openai/*` plus an explicit Codex runtime policy.
@ -262,8 +262,8 @@ adds a separate Codex agent:
},
agents: {
defaults: {
embeddedHarness: {
runtime: "auto",
agentRuntime: {
id: "auto",
fallback: "pi",
},
},
@ -277,8 +277,8 @@ adds a separate Codex agent:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
],
@ -302,7 +302,7 @@ Agents should route user requests by intent, not by the word "Codex" alone:
| "Bind this chat to Codex" | `/codex bind` |
| "Resume Codex thread `<id>` here" | `/codex resume <id>` |
| "Show Codex threads" | `/codex threads` |
| "Use Codex as the runtime for this agent" | config change to `embeddedHarness.runtime` |
| "Use Codex as the runtime for this agent" | config change to `agentRuntime.id` |
| "Use my ChatGPT/Codex subscription with normal OpenClaw" | `openai-codex/*` model refs |
| "Run Codex through ACP/acpx" | ACP `sessions_spawn({ runtime: "acp", ... })` |
| "Start Claude Code/Gemini/OpenCode/Cursor in a thread" | ACP/acpx, not `/codex` and not native sub-agents |
@ -323,8 +323,8 @@ uses Codex. Explicit plugin runtimes default to no PI fallback, so
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},
@ -352,8 +352,8 @@ auto-selection:
{
agents: {
defaults: {
embeddedHarness: {
runtime: "auto",
agentRuntime: {
id: "auto",
fallback: "pi",
},
},
@ -367,8 +367,8 @@ auto-selection:
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},
@ -565,8 +565,8 @@ Codex-only harness validation:
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
},
@ -772,15 +772,15 @@ understanding continue to use the matching provider/model settings such as
**Codex does not appear as a normal `/model` provider:** that is expected for
new configs. Select an `openai/gpt-*` model with
`embeddedHarness.runtime: "codex"` (or a legacy `codex/*` ref), enable
`agentRuntime.id: "codex"` (or a legacy `codex/*` ref), enable
`plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes
`codex`.
**OpenClaw uses PI instead of Codex:** `runtime: "auto"` can still use PI as the
**OpenClaw uses PI instead of Codex:** `agentRuntime.id: "auto"` can still use PI as the
compatibility backend when no Codex harness claims the run. Set
`embeddedHarness.runtime: "codex"` to force Codex selection while testing. A
`agentRuntime.id: "codex"` to force Codex selection while testing. A
forced Codex runtime now fails instead of falling back to PI unless you
explicitly set `embeddedHarness.fallback: "pi"`. Once Codex app-server is
explicitly set `agentRuntime.fallback: "pi"`. Once Codex app-server is
selected, its failures surface directly without extra fallback config.
**The app-server is rejected:** upgrade Codex so the app-server handshake
@ -795,9 +795,9 @@ or disable discovery.
and that the remote app-server speaks the same Codex app-server protocol version.
**A non-Codex model uses PI:** that is expected unless you forced
`embeddedHarness.runtime: "codex"` for that agent or selected a legacy
`agentRuntime.id: "codex"` for that agent or selected a legacy
`codex/*` ref. Plain `openai/gpt-*` and other provider refs stay on their normal
provider path in `auto` mode. If you force `runtime: "codex"`, every embedded
provider path in `auto` mode. If you force `agentRuntime.id: "codex"`, every embedded
turn for that agent must be a Codex-supported OpenAI model.
## Related

View file

@ -82,8 +82,8 @@ Current compatibility records include:
- bundled plugin allowlist and enablement behavior
- legacy provider/channel env-var manifest metadata
- activation hints that are being replaced by manifest contribution ownership
- `embeddedHarness` and `agent-harness` naming aliases while public naming moves
toward `agentRuntime`
- legacy runtime-policy config keys while doctor migrates operators to
`agentRuntime`
- generated bundled channel config metadata fallback while registry-first
`channelConfigs` metadata lands
- the persisted plugin registry disable env while repair flows migrate operators

View file

@ -142,7 +142,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- preferred user model refs: `openai/gpt-5.5` plus
`embeddedHarness.runtime: "codex"`
`agentRuntime.id: "codex"`
- compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new
configs should not use them as normal provider/model refs
- harness id: `codex`
@ -153,7 +153,7 @@ The bundled Codex plugin follows this pattern:
The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
normal OpenClaw provider path unless you force the Codex harness with
`embeddedHarness.runtime: "codex"`. Older `codex/gpt-*` refs still select the
`agentRuntime.id: "codex"`. Older `codex/gpt-*` refs still select the
Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
@ -194,14 +194,14 @@ intentional silent replies such as `NO_REPLY` unclassified.
The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
`plugins.allow` if your config uses a restrictive allowlist. Native app-server
configs should use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`.
configs should use `openai/gpt-*` with `agentRuntime.id: "codex"`.
Use `openai-codex/*` for Codex OAuth through PI instead. Legacy `codex/*`
model refs remain compatibility aliases for the native harness.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
selection. Use `embeddedHarness.runtime: "codex"` without a `fallback` override
selection. Use `agentRuntime.id: "codex"` without a `fallback` override
when you need to prove that only the Codex app-server path can claim the run.
Explicit plugin runtimes already fail closed by default. Set `fallback: "pi"`
only when you intentionally want PI to handle missing harness selection. Codex
@ -209,8 +209,8 @@ app-server failures already fail directly instead of retrying through PI.
## Disable PI fallback
By default, OpenClaw runs embedded agents with `agents.defaults.embeddedHarness`
set to `{ runtime: "auto", fallback: "pi" }`. In `auto` mode, registered plugin
By default, OpenClaw runs embedded agents with `agents.defaults.agentRuntime`
set to `{ id: "auto", fallback: "pi" }`. In `auto` mode, registered plugin
harnesses can claim a provider/model pair. If none match, OpenClaw falls back
to PI.
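Written out explicitly, that default policy is equivalent to:

```json
{
  "agents": {
    "defaults": {
      "agentRuntime": {
        "id": "auto",
        "fallback": "pi"
      }
    }
  }
}
```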
@ -228,8 +228,8 @@ For Codex-only embedded runs:
"agents": {
"defaults": {
"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex"
"agentRuntime": {
"id": "codex"
}
}
}
@ -244,8 +244,8 @@ the fallback:
{
"agents": {
"defaults": {
"embeddedHarness": {
"runtime": "auto",
"agentRuntime": {
"id": "auto",
"fallback": "none"
}
}
@ -259,8 +259,8 @@ Per-agent overrides use the same shape:
{
"agents": {
"defaults": {
"embeddedHarness": {
"runtime": "auto",
"agentRuntime": {
"id": "auto",
"fallback": "pi"
}
},
@ -268,8 +268,8 @@ Per-agent overrides use the same shape:
{
"id": "codex-only",
"model": "openai/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"agentRuntime": {
"id": "codex",
"fallback": "none"
}
}

View file

@ -97,6 +97,25 @@ Anthropic's current public docs:
Setup and runtime details for the Claude CLI backend are in [CLI Backends](/gateway/cli-backends).
</Note>
### Config example
Prefer the canonical Anthropic model ref plus a CLI runtime override:
```json5
{
agents: {
defaults: {
model: { primary: "anthropic/claude-opus-4-7" },
agentRuntime: { id: "claude-cli" },
},
},
}
```
Legacy `claude-cli/claude-opus-4-7` model refs still work for
compatibility, but new configs should keep provider/model selection as
`anthropic/*` and put the execution backend in `agentRuntime.id`.
<Tip>
If you want the clearest billing path, use an Anthropic API key instead. OpenClaw also supports subscription-style options from [OpenAI Codex](/providers/openai), [Qwen Cloud](/providers/qwen), [MiniMax](/providers/minimax), and [Z.AI / GLM](/providers/glm).
</Tip>

View file

@ -13,7 +13,7 @@ Gemini Grounding.
- Provider: `google`
- Auth: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- API: Google Gemini API
- Runtime option: `agents.defaults.embeddedHarness.runtime: "google-gemini-cli"`
- Runtime option: `agents.defaults.agentRuntime.id: "google-gemini-cli"`
reuses Gemini CLI OAuth while keeping model refs canonical as `google/*`.
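A minimal config sketch of that runtime option (the `gemini-3-pro` model id here is illustrative, not a confirmed model name):

```json5
{
  agents: {
    defaults: {
      // Model refs stay canonical google/*; only the execution backend changes.
      model: "google/gemini-3-pro", // illustrative model id
      agentRuntime: { id: "google-gemini-cli" },
    },
  },
}
```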
## Getting started

View file

@ -17,7 +17,7 @@ embedded agent loop:
- **API key** — direct OpenAI Platform access with usage-based billing (`openai/*` models)
- **Codex subscription through PI** — ChatGPT/Codex sign-in with subscription access (`openai-codex/*` models)
- **Codex app-server harness** — native Codex app-server execution (`openai/*` models plus `agents.defaults.embeddedHarness.runtime: "codex"`)
- **Codex app-server harness** — native Codex app-server execution (`openai/*` models plus `agents.defaults.agentRuntime.id: "codex"`)
OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
@ -27,13 +27,13 @@ changing config.
## Quick choice
| Goal | Use | Notes |
| --------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------- |
| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"` | Forces the Codex app-server harness for that model ref. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
| Goal | Use | Notes |
| --------------------------------------------- | ------------------------------------------------ | ---------------------------------------------------------------------------- |
| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `agentRuntime.id: "codex"` | Forces the Codex app-server harness for that model ref. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
## Naming map
@ -44,7 +44,7 @@ The names are similar but not interchangeable:
| `openai` | Provider prefix | Direct OpenAI Platform API route. |
| `openai-codex` | Provider prefix | OpenAI Codex OAuth/subscription route through the normal OpenClaw PI runner. |
| `codex` plugin | Plugin | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls. |
| `embeddedHarness.runtime: codex` | Agent runtime | Force the native Codex app-server harness for embedded turns. |
| `agentRuntime.id: codex` | Agent runtime | Force the native Codex app-server harness for embedded turns. |
| `/codex ...` | Chat command set | Bind/control Codex app-server threads from a conversation. |
| `runtime: "acp", agentId: "codex"` | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |
@ -57,7 +57,7 @@ combination so you can confirm it is intentional; it does not rewrite it.
GPT-5.5 is available through both direct OpenAI Platform API-key access and
subscription/OAuth routes. Use `openai/gpt-5.5` for direct `OPENAI_API_KEY`
traffic, `openai-codex/gpt-5.5` for Codex OAuth through PI, or
`openai/gpt-5.5` with `embeddedHarness.runtime: "codex"` for the native Codex
`openai/gpt-5.5` with `agentRuntime.id: "codex"` for the native Codex
app-server harness.
</Note>
@ -65,7 +65,7 @@ app-server harness.
Enabling the OpenAI plugin, or selecting an `openai-codex/*` model, does not
enable the bundled Codex app-server plugin. OpenClaw enables that plugin only
when you explicitly select the native Codex harness with
`embeddedHarness.runtime: "codex"` or use a legacy `codex/*` model ref.
`agentRuntime.id: "codex"` or use a legacy `codex/*` model ref.
If the bundled `codex` plugin is enabled but `openai-codex/*` still resolves
through PI, `openclaw doctor` warns and leaves the route unchanged.
</Note>
@ -76,7 +76,7 @@ through PI, `openclaw doctor` warns and leaves the route unchanged.
| ------------------------- | ---------------------------------------------------------- | ------------------------------------------------------ |
| Chat / Responses | `openai/<model>` model provider | Yes |
| Codex subscription models | `openai-codex/<model>` with `openai-codex` OAuth | Yes |
| Codex app-server harness | `openai/<model>` with `embeddedHarness.runtime: codex` | Yes |
| Codex app-server harness | `openai/<model>` with `agentRuntime.id: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
@ -120,15 +120,15 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Runtime config | Route | Auth |
| ---------------------- | -------------------------- | --------------------------- | ---------------- |
| `openai/gpt-5.5` | omitted / `runtime: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.4-mini` | omitted / `runtime: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5` | `runtime: "codex"` | Codex app-server harness | Codex app-server |
| `openai/gpt-5.5` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.4-mini` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server |
<Note>
`openai/*` is the direct OpenAI API-key route unless you explicitly force
the Codex app-server harness. Use `openai-codex/*` for Codex OAuth through
the default PI runner, or use `openai/gpt-5.5` with
`embeddedHarness.runtime: "codex"` for native Codex app-server execution.
`agentRuntime.id: "codex"` for native Codex app-server execution.
</Note>
### Config example
@ -185,7 +185,7 @@ Choose your preferred auth method and follow the setup steps.
|-----------|----------------|-------|------|
| `openai-codex/gpt-5.5` | omitted / `agentRuntime.id: "pi"` | ChatGPT/Codex OAuth through PI | Codex sign-in |
| `openai-codex/gpt-5.5` | `agentRuntime.id: "auto"` | Still PI unless a plugin explicitly claims `openai-codex` | Codex sign-in |
| `openai/gpt-5.5` | `embeddedHarness.runtime: "codex"` | Codex app-server harness | Codex app-server auth |
| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server auth |
<Note>
Keep using the `openai-codex` provider id for auth/profile commands. The
@ -211,7 +211,7 @@ Choose your preferred auth method and follow the setup steps.
The default PI harness appears as `Runtime: OpenClaw Pi Default`. When the
bundled Codex app-server harness is selected, `/status` shows
`Runtime: OpenAI Codex`. Existing sessions keep their recorded harness id, so use
`/new` or `/reset` after changing `embeddedHarness` if you want `/status` to
`/new` or `/reset` after changing `agentRuntime` if you want `/status` to
reflect a new PI/Codex choice.
### Doctor warning
@ -220,7 +220,7 @@ Choose your preferred auth method and follow the setup steps.
`openai-codex/*` route is selected, `openclaw doctor` warns that the model
still resolves through PI. Keep the config unchanged when that is the
intended subscription-auth route. Switch to `openai/<model>` plus
`embeddedHarness.runtime: "codex"` only when you want native Codex
`agentRuntime.id: "codex"` only when you want native Codex
app-server execution.
### Context window cap
@ -380,7 +380,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `embeddedHarness.runtime: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `agentRuntime.id: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.
@ -766,7 +766,7 @@ the Server-side compaction accordion below.
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`
- Default `compact_threshold`: 70% of `contextWindow` (or `80000` when unavailable)
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with `agents.defaults.embeddedHarness.runtime`.
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with `agents.defaults.agentRuntime.id`.
<Tabs>
<Tab title="Enable explicitly">

View file

@ -20,7 +20,7 @@ Codex has two OpenClaw routes:
| Route | Config/command | Setup page |
| -------------------------- | ------------------------------------------------------ | --------------------------------------- |
| Native Codex app-server | `/codex ...`, `embeddedHarness.runtime: "codex"` | [Codex harness](/plugins/codex-harness) |
| Native Codex app-server | `/codex ...`, `agentRuntime.id: "codex"` | [Codex harness](/plugins/codex-harness) |
| Explicit Codex ACP adapter | `/acp spawn codex`, `runtime: "acp", agentId: "codex"` | This page |
Prefer the native route unless you explicitly need ACP/acpx behavior.

View file

@ -20,7 +20,7 @@ Each ACP session spawn is tracked as a [background task](/automation/tasks).
<Note>
**ACP is the external-harness path, not the default Codex path.** The
native Codex app-server plugin owns `/codex ...` controls and the
`embeddedHarness.runtime: "codex"` embedded runtime; ACP owns
`agentRuntime.id: "codex"` embedded runtime; ACP owns
`/acp ...` controls and `sessions_spawn({ runtime: "acp" })` sessions.
If you want Codex or Claude Code to connect as an external MCP client
@ -172,7 +172,7 @@ Quick `/acp` flow from chat:
</Accordion>
<Accordion title="Model / provider / runtime selection cheat sheet">
- `openai-codex/*` — PI Codex OAuth/subscription route.
- `openai/*` plus `embeddedHarness.runtime: "codex"` — native Codex app-server embedded runtime.
- `openai/*` plus `agentRuntime.id: "codex"` — native Codex app-server embedded runtime.
- `/codex ...` — native Codex conversation control.
- `/acp ...` or `runtime: "acp"` — explicit ACP/acpx control.
</Accordion>

View file

@ -212,7 +212,7 @@ OpenClaw scans for plugins in this order (first match wins):
runtime
- OpenAI-family Codex routes keep separate plugin boundaries:
`openai-codex/*` belongs to the OpenAI plugin, while the bundled Codex
app-server plugin is selected by `embeddedHarness.runtime: "codex"` or legacy
app-server plugin is selected by `agentRuntime.id: "codex"` or legacy
`codex/*` model refs
## Troubleshooting runtime hooks

View file

@ -122,7 +122,7 @@ describe("anthropic cli migration", () => {
primary: "anthropic/claude-opus-4-7",
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.2"],
},
embeddedHarness: { runtime: "claude-cli" },
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-opus-4-7": { alias: "Opus" },
"anthropic/claude-sonnet-4-6": {},
@ -153,7 +153,7 @@ describe("anthropic cli migration", () => {
expect(result.configPatch).toEqual({
agents: {
defaults: {
embeddedHarness: { runtime: "claude-cli" },
agentRuntime: { id: "claude-cli" },
models: {
"openai/gpt-5.2": {},
"anthropic/claude-opus-4-7": {},
@ -184,7 +184,7 @@ describe("anthropic cli migration", () => {
agents: {
defaults: {
model: { primary: "anthropic/claude-opus-4-7" },
embeddedHarness: { runtime: "claude-cli" },
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-opus-4-7": {},
"anthropic/claude-sonnet-4-6": {},
@ -325,7 +325,7 @@ describe("anthropic cli migration", () => {
primary: "anthropic/claude-opus-4-7",
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.2"],
},
embeddedHarness: { runtime: "claude-cli" },
agentRuntime: { id: "claude-cli" },
models: {
"anthropic/claude-opus-4-7": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },

View file

@ -12,9 +12,9 @@ import { CLAUDE_CLI_BACKEND_ID, CLAUDE_CLI_DEFAULT_ALLOWLIST_REFS } from "./cli-
type AgentDefaultsModel = NonNullable<NonNullable<OpenClawConfig["agents"]>["defaults"]>["model"];
type AgentDefaultsModels = NonNullable<NonNullable<OpenClawConfig["agents"]>["defaults"]>["models"];
type AgentDefaultsEmbeddedHarness = NonNullable<
type AgentDefaultsRuntimePolicy = NonNullable<
NonNullable<OpenClawConfig["agents"]>["defaults"]
>["embeddedHarness"];
>["agentRuntime"];
type ClaudeCliCredential = NonNullable<ReturnType<typeof readClaudeCliCredentialsForSetup>>;
function toAnthropicModelRef(raw: string): string | null {
@ -125,16 +125,14 @@ function seedClaudeCliAllowlist(
return next;
}
function selectClaudeCliRuntime(
embeddedHarness: AgentDefaultsEmbeddedHarness | undefined,
): AgentDefaultsEmbeddedHarness {
const currentRuntime = embeddedHarness?.runtime?.trim();
function selectClaudeCliRuntime(agentRuntime: AgentDefaultsRuntimePolicy | undefined) {
const currentRuntime = agentRuntime?.id?.trim();
if (currentRuntime && currentRuntime !== "auto") {
return embeddedHarness;
return agentRuntime;
}
return {
...embeddedHarness,
runtime: CLAUDE_CLI_BACKEND_ID,
...agentRuntime,
id: CLAUDE_CLI_BACKEND_ID,
};
}
@ -198,7 +196,7 @@ export function buildAnthropicCliMigrationResult(
agents: {
defaults: {
...(rewrittenModel.changed ? { model: rewrittenModel.value } : {}),
embeddedHarness: selectClaudeCliRuntime(defaults?.embeddedHarness),
agentRuntime: selectClaudeCliRuntime(defaults?.agentRuntime),
models: nextModels,
},
},

View file

@ -140,7 +140,7 @@ function isAnthropicCacheRetentionTarget(
}
function usesClaudeCliModelSelection(config: OpenClawConfig): boolean {
if (config.agents?.defaults?.embeddedHarness?.runtime === CLAUDE_CLI_BACKEND_ID) {
if (config.agents?.defaults?.agentRuntime?.id === CLAUDE_CLI_BACKEND_ID) {
return true;
}
const primary = resolveModelPrimaryValue(

View file

@ -176,7 +176,7 @@ describe("anthropic provider replay hooks", () => {
},
agents: {
defaults: {
embeddedHarness: { runtime: "claude-cli" },
agentRuntime: { id: "claude-cli" },
model: { primary: "anthropic/claude-opus-4-7" },
models: {
"anthropic/claude-opus-4-7": {},

View file

@ -196,6 +196,12 @@ function buildDiscordModelPickerCurrentModel(
return `${defaultProvider}/${defaultModel}`;
}
function resolveConfiguredAgentRuntimeId(value: {
agentRuntime?: { id?: unknown };
}): string | undefined {
return normalizeOptionalString(value.agentRuntime?.id);
}
function buildDiscordModelPickerAllowedModelRefs(
data: Awaited<ReturnType<typeof loadDiscordModelPickerData>>,
): Set<string> {
@ -386,15 +392,15 @@ function resolveDiscordModelPickerCurrentRuntime(params: {
// Fall through to configured defaults when the session store is unavailable.
}
const agentRuntime = normalizeOptionalString(
const agentRuntime = resolveConfiguredAgentRuntimeId(
params.cfg.agents?.list?.find(
(entry) => normalizeOptionalString(entry.id) === params.route.agentId,
)?.embeddedHarness?.runtime,
) ?? {},
);
if (agentRuntime) {
return agentRuntime;
}
return normalizeOptionalString(params.cfg.agents?.defaults?.embeddedHarness?.runtime) ?? "auto";
return resolveConfiguredAgentRuntimeId(params.cfg.agents?.defaults ?? {}) ?? "auto";
}
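The configured-id precedence above (per-agent entry, then agent defaults, then `"auto"`) can be sketched standalone; the session-store check is elided and `normalize` stands in for `normalizeOptionalString`:

```typescript
type RuntimePolicy = { id?: string };
type AgentEntry = { id?: string; agentRuntime?: RuntimePolicy };
type Config = {
  agents?: { list?: AgentEntry[]; defaults?: { agentRuntime?: RuntimePolicy } };
};

// Trim and drop empty strings, mirroring normalizeOptionalString.
const normalize = (v: unknown): string | undefined =>
  typeof v === "string" && v.trim() ? v.trim() : undefined;

// Per-agent runtime id wins; otherwise agent defaults, then "auto".
function currentRuntime(cfg: Config, agentId: string): string {
  const entry = cfg.agents?.list?.find((e) => normalize(e.id) === agentId);
  return (
    normalize(entry?.agentRuntime?.id) ??
    normalize(cfg.agents?.defaults?.agentRuntime?.id) ??
    "auto"
  );
}

console.log(currentRuntime({}, "main")); // "auto"
```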
export async function replyWithDiscordModelPickerProviders(params: {

View file

@ -85,7 +85,7 @@ export function buildGoogleGeminiCliProvider(): ProviderPlugin {
configPatch: {
agents: {
defaults: {
embeddedHarness: { runtime: PROVIDER_ID },
agentRuntime: { id: PROVIDER_ID },
models: {
[DEFAULT_MODEL]: {},
},

View file

@ -0,0 +1,19 @@
import type { AgentRuntimePolicyConfig } from "../config/types.agents-shared.js";
type AgentRuntimePolicyContainer = {
agentRuntime?: AgentRuntimePolicyConfig;
};
export function resolveAgentRuntimePolicy(
container: AgentRuntimePolicyContainer | undefined,
): AgentRuntimePolicyConfig | undefined {
const preferred = container?.agentRuntime;
if (hasAgentRuntimePolicy(preferred)) {
return preferred;
}
return undefined;
}
function hasAgentRuntimePolicy(value: AgentRuntimePolicyConfig | undefined): boolean {
return Boolean(value?.id?.trim() || value?.fallback);
}
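Reimplemented as a standalone sketch (local types only, so it runs outside the repo), the resolver returns a policy only when it carries a non-blank `id` or a `fallback`:

```typescript
type AgentRuntimePolicyConfig = { id?: string; fallback?: string };
type AgentRuntimePolicyContainer = { agentRuntime?: AgentRuntimePolicyConfig };

// Return the configured policy only when it has substance; treat
// whitespace-only ids as unset.
function resolveAgentRuntimePolicy(
  container: AgentRuntimePolicyContainer | undefined,
): AgentRuntimePolicyConfig | undefined {
  const preferred = container?.agentRuntime;
  if (preferred && (preferred.id?.trim() || preferred.fallback)) {
    return preferred;
  }
  return undefined;
}

console.log(resolveAgentRuntimePolicy({ agentRuntime: { id: "  " } })); // undefined
console.log(resolveAgentRuntimePolicy({ agentRuntime: { id: "codex" } })?.id); // "codex"
```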

View file

@ -302,7 +302,7 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
cfg: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
} as OpenClawConfig,
@ -385,7 +385,7 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
cfg: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
} as OpenClawConfig,
@ -408,7 +408,7 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
cfg: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
} as OpenClawConfig,
@ -434,7 +434,7 @@ describe("Auth profile runtime contract - Pi and CLI adapter", () => {
list: [
{
id: "main",
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
],
},

View file

@ -469,6 +469,60 @@ describe("CLI attempt execution", () => {
}),
);
});
it("routes canonical Anthropic models through the configured Claude CLI runtime", async () => {
const sessionKey = "agent:main:direct:canonical-claude-cli";
const sessionEntry: SessionEntry = {
sessionId: "openclaw-session-canonical-cli",
updatedAt: Date.now(),
};
const sessionStore: Record<string, SessionEntry> = { [sessionKey]: sessionEntry };
await fs.writeFile(storePath, JSON.stringify(sessionStore, null, 2), "utf-8");
runCliAgentMock.mockResolvedValueOnce(makeCliResult("canonical cli"));
await runAgentAttempt({
providerOverride: "anthropic",
modelOverride: "claude-opus-4-7",
cfg: {
agents: {
defaults: {
agentRuntime: { id: "claude-cli", fallback: "none" },
},
},
} as OpenClawConfig,
sessionEntry,
sessionId: sessionEntry.sessionId,
sessionKey,
sessionAgentId: "main",
sessionFile: path.join(tmpDir, "session.jsonl"),
workspaceDir: tmpDir,
body: "route this",
isFallbackRetry: false,
resolvedThinkLevel: "medium",
timeoutMs: 1_000,
runId: "run-canonical-claude-cli",
opts: { senderIsOwner: false } as Parameters<typeof runAgentAttempt>[0]["opts"],
runContext: {} as Parameters<typeof runAgentAttempt>[0]["runContext"],
spawnedBy: undefined,
messageChannel: "telegram",
skillsSnapshot: undefined,
resolvedVerboseLevel: undefined,
agentDir: tmpDir,
onAgentEvent: vi.fn(),
authProfileProvider: "anthropic",
sessionStore,
storePath,
sessionHasHistory: false,
});
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
expect(runCliAgentMock).toHaveBeenCalledWith(
expect.objectContaining({
provider: "claude-cli",
model: "claude-opus-4-7",
}),
);
});
});
describe("embedded attempt harness pinning", () => {
@ -476,6 +530,7 @@ describe("embedded attempt harness pinning", () => {
beforeEach(async () => {
tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-embedded-attempt-"));
runCliAgentMock.mockReset();
runEmbeddedPiAgentMock.mockReset();
});
@ -541,7 +596,7 @@ describe("embedded attempt harness pinning", () => {
cfg: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
} as OpenClawConfig,
@@ -617,4 +672,54 @@ describe("embedded attempt harness pinning", () => {
}),
);
});
it("does not pass CLI runtime aliases as embedded harness ids for fallback providers", async () => {
const sessionEntry: SessionEntry = {
sessionId: "fallback-session",
updatedAt: Date.now(),
};
runEmbeddedPiAgentMock.mockResolvedValueOnce({
meta: { durationMs: 1 },
} satisfies EmbeddedPiRunResult);
await runAgentAttempt({
providerOverride: "openai",
modelOverride: "gpt-5.4",
cfg: {
agents: {
defaults: {
agentRuntime: { id: "claude-cli", fallback: "none" },
},
},
} as OpenClawConfig,
sessionEntry,
sessionId: sessionEntry.sessionId,
sessionKey: "agent:main:main",
sessionAgentId: "main",
sessionFile: path.join(tmpDir, "session.jsonl"),
workspaceDir: tmpDir,
body: "fallback",
isFallbackRetry: true,
resolvedThinkLevel: "medium",
timeoutMs: 1_000,
runId: "run-openai-fallback-with-cli-runtime",
opts: { senderIsOwner: false } as Parameters<typeof runAgentAttempt>[0]["opts"],
runContext: {} as Parameters<typeof runAgentAttempt>[0]["runContext"],
spawnedBy: undefined,
messageChannel: undefined,
skillsSnapshot: undefined,
resolvedVerboseLevel: undefined,
agentDir: tmpDir,
onAgentEvent: vi.fn(),
authProfileProvider: "openai",
sessionHasHistory: false,
});
expect(runCliAgentMock).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).toHaveBeenCalledOnce();
expect(runEmbeddedPiAgentMock.mock.calls[0]?.[0]).not.toHaveProperty(
"agentHarnessId",
"claude-cli",
);
});
});
@@ -15,6 +15,7 @@ import { runCliAgent } from "../cli-runner.js";
import { getCliSessionBinding, setCliSessionBinding } from "../cli-session.js";
import { FailoverError } from "../failover-error.js";
import { resolveAgentHarnessPolicy } from "../harness/selection.js";
import { isCliRuntimeAlias, resolveCliRuntimeExecutionProvider } from "../model-runtime-aliases.js";
import { isCliProvider } from "../model-selection.js";
import { prepareSessionManagerForRun } from "../pi-embedded-runner/session-manager-init.js";
import { runEmbeddedPiAgent, type EmbeddedPiRunResult } from "../pi-embedded.js";
@@ -276,6 +277,14 @@ export function runAgentAttempt(params: {
sessionId: params.sessionId,
sessionKey: params.sessionKey ?? params.sessionId,
});
const agentRuntimeOverride = params.sessionEntry?.agentRuntimeOverride?.trim();
const cliExecutionProvider =
resolveCliRuntimeExecutionProvider({
provider: params.providerOverride,
cfg: params.cfg,
agentId: params.sessionAgentId,
runtimeOverride: agentRuntimeOverride,
}) ?? params.providerOverride;
const agentHarnessPolicy = resolveAgentHarnessPolicy({
provider: params.providerOverride,
modelId: params.modelOverride,
@@ -291,14 +300,14 @@
workspaceDir: params.workspaceDir,
harnessId: sessionPinnedAgentHarnessId,
harnessRuntime: agentHarnessPolicy.runtime,
allowHarnessAuthProfileForwarding: !isCliProvider(params.providerOverride, params.cfg),
allowHarnessAuthProfileForwarding: !isCliProvider(cliExecutionProvider, params.cfg),
});
const authProfileId = runtimeAuthPlan.forwardedAuthProfileId;
if (isCliProvider(params.providerOverride, params.cfg)) {
const cliSessionBinding = getCliSessionBinding(params.sessionEntry, params.providerOverride);
if (isCliProvider(cliExecutionProvider, params.cfg)) {
const cliSessionBinding = getCliSessionBinding(params.sessionEntry, cliExecutionProvider);
const resolveReusableCliSessionBinding = async () => {
if (
!isClaudeCliProvider(params.providerOverride) ||
!isClaudeCliProvider(cliExecutionProvider) ||
!cliSessionBinding?.sessionId ||
(await claudeCliSessionTranscriptHasContent({ sessionId: cliSessionBinding.sessionId }))
) {
@@ -306,13 +315,13 @@
}
log.warn(
`cli session reset: provider=${sanitizeForLog(params.providerOverride)} reason=transcript-missing sessionKey=${params.sessionKey ?? params.sessionId}`,
`cli session reset: provider=${sanitizeForLog(cliExecutionProvider)} reason=transcript-missing sessionKey=${params.sessionKey ?? params.sessionId}`,
);
if (params.sessionKey && params.sessionStore && params.storePath) {
params.sessionEntry =
(await clearCliSessionInStore({
provider: params.providerOverride,
provider: cliExecutionProvider,
sessionKey: params.sessionKey,
sessionStore: params.sessionStore,
storePath: params.storePath,
@@ -334,7 +343,7 @@
workspaceDir: params.workspaceDir,
config: params.cfg,
prompt: effectivePrompt,
provider: params.providerOverride,
provider: cliExecutionProvider,
model: params.modelOverride,
thinkLevel: params.resolvedThinkLevel,
timeoutMs: params.timeoutMs,
@@ -370,12 +379,12 @@
params.storePath
) {
log.warn(
`CLI session expired, clearing from session store: provider=${sanitizeForLog(params.providerOverride)} sessionKey=${params.sessionKey}`,
`CLI session expired, clearing from session store: provider=${sanitizeForLog(cliExecutionProvider)} sessionKey=${params.sessionKey}`,
);
params.sessionEntry =
(await clearCliSessionInStore({
provider: params.providerOverride,
provider: cliExecutionProvider,
sessionKey: params.sessionKey,
sessionStore: params.sessionStore,
storePath: params.storePath,
@@ -393,7 +402,7 @@
const updatedEntry = { ...entry };
setCliSessionBinding(
updatedEntry,
params.providerOverride,
cliExecutionProvider,
result.meta.agentMeta.cliSessionBinding,
);
updatedEntry.updatedAt = Date.now();
@@ -503,7 +512,10 @@ function resolveConfiguredAgentHarnessId(params: {
agentId: params.sessionAgentId,
sessionKey: params.sessionKey,
});
return policy.runtime === "auto" ? undefined : policy.runtime;
if (policy.runtime === "auto" || isCliRuntimeAlias(policy.runtime)) {
return undefined;
}
return policy.runtime;
}
export function buildAcpResult(params: {
@@ -1,6 +1,7 @@
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { normalizeOptionalLowercaseString } from "../shared/string-coerce.js";
import { isRecord } from "../utils.js";
import { resolveAgentRuntimePolicy } from "./agent-runtime-policy.js";
export function collectConfiguredAgentHarnessRuntimes(
config: OpenClawConfig,
@@ -18,13 +19,13 @@
runtimes.add(normalized);
};
pushRuntime(config.agents?.defaults?.embeddedHarness?.runtime);
pushRuntime(resolveAgentRuntimePolicy(config.agents?.defaults)?.id);
if (Array.isArray(config.agents?.list)) {
for (const agent of config.agents.list) {
if (!isRecord(agent)) {
continue;
}
pushRuntime((agent.embeddedHarness as Record<string, unknown> | undefined)?.runtime);
pushRuntime(resolveAgentRuntimePolicy(agent)?.id);
}
}
pushRuntime(env.OPENCLAW_AGENT_RUNTIME);
@@ -124,7 +124,7 @@ describe("runAgentHarnessAttemptWithFallback", () => {
await expect(
runAgentHarnessAttemptWithFallback(
createAttemptParams({ agents: { defaults: { embeddedHarness: { fallback: "pi" } } } }),
createAttemptParams({ agents: { defaults: { agentRuntime: { fallback: "pi" } } } }),
),
).rejects.toThrow('Requested agent harness "codex" is not registered');
expect(piRunAttempt).not.toHaveBeenCalled();
@@ -132,7 +132,7 @@
it("falls back to the PI harness in auto mode when no plugin harness matches", async () => {
const result = await runAgentHarnessAttemptWithFallback(
createAttemptParams({ agents: { defaults: { embeddedHarness: { runtime: "auto" } } } }),
createAttemptParams({ agents: { defaults: { agentRuntime: { id: "auto" } } } }),
);
expect(result.sessionIdUsed).toBe("pi");
@@ -144,7 +144,7 @@
await expect(
runAgentHarnessAttemptWithFallback(
createAttemptParams({ agents: { defaults: { embeddedHarness: { runtime: "auto" } } } }),
createAttemptParams({ agents: { defaults: { agentRuntime: { id: "auto" } } } }),
),
).rejects.toThrow("codex startup failed");
expect(piRunAttempt).not.toHaveBeenCalled();
@@ -164,7 +164,7 @@
await expect(
runAgentHarnessAttemptWithFallback(
createAttemptParams({ agents: { defaults: { embeddedHarness: { runtime: "codex" } } } }),
createAttemptParams({ agents: { defaults: { agentRuntime: { id: "codex" } } } }),
),
).rejects.toThrow("codex startup failed");
expect(piRunAttempt).not.toHaveBeenCalled();
@@ -185,7 +185,7 @@
);
const params = createAttemptParams({
agents: { defaults: { embeddedHarness: { runtime: "auto" } } },
agents: { defaults: { agentRuntime: { id: "auto" } } },
});
const result = await runAgentHarnessAttemptWithFallback(params);
@@ -205,7 +205,7 @@
await expect(
runAgentHarnessAttemptWithFallback(
createAttemptParams({
agents: { defaults: { embeddedHarness: { runtime: "auto", fallback: "pi" } } },
agents: { defaults: { agentRuntime: { id: "auto", fallback: "pi" } } },
}),
),
).rejects.toThrow("PI fallback is disabled");
@@ -215,7 +215,7 @@
it("fails for config-forced plugin harnesses when fallback is omitted", async () => {
await expect(
runAgentHarnessAttemptWithFallback(
createAttemptParams({ agents: { defaults: { embeddedHarness: { runtime: "codex" } } } }),
createAttemptParams({ agents: { defaults: { agentRuntime: { id: "codex" } } } }),
),
).rejects.toThrow('Requested agent harness "codex" is not registered');
expect(piRunAttempt).not.toHaveBeenCalled();
@@ -224,7 +224,7 @@
it("allows config-forced plugin harnesses to opt into PI fallback", async () => {
const result = await runAgentHarnessAttemptWithFallback(
createAttemptParams({
agents: { defaults: { embeddedHarness: { runtime: "codex", fallback: "pi" } } },
agents: { defaults: { agentRuntime: { id: "codex", fallback: "pi" } } },
}),
);
@@ -237,8 +237,8 @@
runAgentHarnessAttemptWithFallback({
...createAttemptParams({
agents: {
defaults: { embeddedHarness: { fallback: "pi" } },
list: [{ id: "strict", embeddedHarness: { runtime: "codex" } }],
defaults: { agentRuntime: { fallback: "pi" } },
list: [{ id: "strict", agentRuntime: { id: "codex" } }],
},
}),
sessionKey: "agent:strict:session-1",
@@ -251,8 +251,8 @@
const result = await runAgentHarnessAttemptWithFallback({
...createAttemptParams({
agents: {
defaults: { embeddedHarness: { fallback: "none" } },
list: [{ id: "strict", embeddedHarness: { runtime: "codex", fallback: "pi" } }],
defaults: { agentRuntime: { fallback: "none" } },
list: [{ id: "strict", agentRuntime: { id: "codex", fallback: "pi" } }],
},
}),
sessionKey: "agent:strict:session-1",
@@ -328,7 +328,7 @@ describe("selectAgentHarness", () => {
const harness = selectAgentHarness({
provider: "codex",
modelId: "gpt-5.4",
config: { agents: { defaults: { embeddedHarness: { runtime: "auto" } } } },
config: { agents: { defaults: { agentRuntime: { id: "auto" } } } },
});
expect(harness.id).toBe("codex-high");
@@ -362,20 +362,20 @@
provider: "anthropic",
modelId: "sonnet-4.6",
config: {
agents: { defaults: { embeddedHarness: { runtime: "auto", fallback: "none" } } },
agents: { defaults: { agentRuntime: { id: "auto", fallback: "none" } } },
},
}),
).toThrow("PI fallback is disabled");
expect(piRunAttempt).not.toHaveBeenCalled();
});
it("allows per-agent embedded harness policy overrides", () => {
it("allows per-agent runtime policy overrides", () => {
const config: OpenClawConfig = {
agents: {
defaults: { embeddedHarness: { fallback: "pi" } },
defaults: { agentRuntime: { fallback: "pi" } },
list: [
{ id: "main", default: true },
{ id: "strict", embeddedHarness: { runtime: "auto", fallback: "none" } },
{ id: "strict", agentRuntime: { id: "auto", fallback: "none" } },
],
},
};
@@ -393,6 +393,46 @@ describe("selectAgentHarness", () => {
);
});
it("uses agentRuntime as the runtime policy source", () => {
const config: OpenClawConfig = {
agents: {
defaults: {
agentRuntime: { id: "auto", fallback: "none" },
},
},
};
expect(() =>
selectAgentHarness({
provider: "anthropic",
modelId: "sonnet-4.6",
config,
}),
).toThrow("PI fallback is disabled");
});
it("does not treat CLI runtime aliases as embedded harness ids", async () => {
const config: OpenClawConfig = {
agents: {
defaults: {
agentRuntime: { id: "claude-cli", fallback: "none" },
},
},
};
expect(selectAgentHarness({ provider: "openai", modelId: "gpt-5.4", config }).id).toBe("pi");
await expect(
runAgentHarnessAttemptWithFallback({
...createAttemptParams(config),
provider: "openai",
modelId: "gpt-5.4",
}),
).resolves.toMatchObject({
sessionIdUsed: "pi",
});
});
it("keeps an existing session pinned to PI even when config now forces a plugin harness", () => {
registerFailingCodexHarness();
@@ -401,7 +441,7 @@
provider: "codex",
modelId: "gpt-5.4",
agentHarnessId: "pi",
config: { agents: { defaults: { embeddedHarness: { runtime: "codex" } } } },
config: { agents: { defaults: { agentRuntime: { id: "codex" } } } },
}).id,
).toBe("pi");
});
@@ -1,9 +1,11 @@
import type { AgentEmbeddedHarnessConfig } from "../../config/types.agents-shared.js";
import type { AgentRuntimePolicyConfig } from "../../config/types.agents-shared.js";
import type { OpenClawConfig } from "../../config/types.openclaw.js";
import { formatErrorMessage } from "../../infra/errors.js";
import { createSubsystemLogger } from "../../logging/subsystem.js";
import { normalizeAgentId } from "../../routing/session-key.js";
import { resolveAgentRuntimePolicy } from "../agent-runtime-policy.js";
import { listAgentEntries, resolveSessionAgentIds } from "../agent-scope.js";
import { isCliRuntimeAlias } from "../model-runtime-aliases.js";
import type { CompactEmbeddedPiSessionParams } from "../pi-embedded-runner/compact.types.js";
import type {
EmbeddedRunAttemptParams,
@@ -314,10 +316,16 @@ export function resolveAgentHarnessPolicy(params: {
agentId: params.agentId,
sessionKey: params.sessionKey,
});
const defaultsPolicy = params.config?.agents?.defaults?.embeddedHarness;
const defaultsPolicy = resolveAgentRuntimePolicy(params.config?.agents?.defaults);
const runtime = env.OPENCLAW_AGENT_RUNTIME?.trim()
? resolveEmbeddedAgentRuntime(env)
: normalizeEmbeddedAgentRuntime(agentPolicy?.runtime ?? defaultsPolicy?.runtime);
: normalizeEmbeddedAgentRuntime(agentPolicy?.id ?? defaultsPolicy?.id);
if (isCliRuntimeAlias(runtime)) {
return {
runtime: "pi",
fallback: "pi",
};
}
return {
runtime,
fallback: resolveAgentHarnessFallbackPolicy({
@@ -332,8 +340,8 @@
function resolveAgentHarnessFallbackPolicy(params: {
env: NodeJS.ProcessEnv;
runtime: EmbeddedAgentRuntime;
agentPolicy?: AgentEmbeddedHarnessConfig;
defaultsPolicy?: AgentEmbeddedHarnessConfig;
agentPolicy?: AgentRuntimePolicyConfig;
defaultsPolicy?: AgentRuntimePolicyConfig;
}): EmbeddedAgentHarnessFallback {
const envFallback = resolveEmbeddedAgentHarnessFallback(params.env);
if (envFallback) {
@@ -345,7 +353,7 @@
return normalizeAgentHarnessFallback(undefined, params.runtime);
}
if (params.agentPolicy?.runtime) {
if (params.agentPolicy?.id) {
return normalizeAgentHarnessFallback(params.agentPolicy.fallback, params.runtime);
}
@@ -362,7 +370,7 @@ function isPluginAgentRuntime(runtime: EmbeddedAgentRuntime): boolean {
function resolveAgentEmbeddedHarnessConfig(
config: OpenClawConfig | undefined,
params: { agentId?: string; sessionKey?: string },
): AgentEmbeddedHarnessConfig | undefined {
): AgentRuntimePolicyConfig | undefined {
if (!config) {
return undefined;
}
@@ -371,12 +379,13 @@
agentId: params.agentId,
sessionKey: params.sessionKey,
});
return listAgentEntries(config).find((entry) => normalizeAgentId(entry.id) === sessionAgentId)
?.embeddedHarness;
return resolveAgentRuntimePolicy(
listAgentEntries(config).find((entry) => normalizeAgentId(entry.id) === sessionAgentId),
);
}
function normalizeAgentHarnessFallback(
value: AgentEmbeddedHarnessConfig["fallback"] | undefined,
value: AgentRuntimePolicyConfig["fallback"] | undefined,
runtime: EmbeddedAgentRuntime,
): EmbeddedAgentHarnessFallback {
if (value) {
@@ -1,5 +1,6 @@
import type { OpenClawConfig } from "../config/types.openclaw.js";
import { normalizeAgentId } from "../routing/session-key.js";
import { resolveAgentRuntimePolicy } from "./agent-runtime-policy.js";
import { normalizeProviderId } from "./provider-id.js";
export type LegacyRuntimeModelProviderAlias = {
@@ -39,6 +40,12 @@ const CLI_RUNTIME_BY_PROVIDER = new Map(
]),
);
const CLI_RUNTIME_ALIASES = new Set(
LEGACY_RUNTIME_MODEL_PROVIDER_ALIASES.filter((entry) => entry.cli).map((entry) =>
normalizeProviderId(entry.runtime),
),
);
export function listLegacyRuntimeModelProviderAliases(): readonly LegacyRuntimeModelProviderAlias[] {
return LEGACY_RUNTIME_MODEL_PROVIDER_ALIASES;
}
@@ -84,6 +91,11 @@ export function isLegacyRuntimeModelProvider(provider: string): boolean {
return Boolean(resolveLegacyRuntimeModelProviderAlias(provider));
}
export function isCliRuntimeAlias(runtime: string | undefined): boolean {
const normalized = runtime?.trim();
return normalized ? CLI_RUNTIME_ALIASES.has(normalizeProviderId(normalized)) : false;
}
function resolveConfiguredRuntime(params: {
cfg?: OpenClawConfig;
agentId?: string;
@@ -94,14 +106,15 @@ function resolveConfiguredRuntime(params: {
return normalizeProviderId(override);
}
if (params.agentId) {
const agentRuntime = params.cfg?.agents?.list
?.find((entry) => normalizeAgentId(entry.id) === normalizeAgentId(params.agentId ?? ""))
?.embeddedHarness?.runtime?.trim();
const agentEntry = params.cfg?.agents?.list?.find(
(entry) => normalizeAgentId(entry.id) === normalizeAgentId(params.agentId ?? ""),
);
const agentRuntime = resolveAgentRuntimePolicy(agentEntry)?.id?.trim();
if (agentRuntime) {
return normalizeProviderId(agentRuntime);
}
}
const defaults = params.cfg?.agents?.defaults?.embeddedHarness?.runtime?.trim();
const defaults = resolveAgentRuntimePolicy(params.cfg?.agents?.defaults)?.id?.trim();
if (defaults) {
return normalizeProviderId(defaults);
}
@@ -264,7 +264,7 @@ describe("runEmbeddedPiAgent overflow compaction trigger routing", () => {
config: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
},
@@ -336,7 +336,7 @@ describe("runEmbeddedPiAgent overflow compaction trigger routing", () => {
config: {
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
},
},
@@ -18,12 +18,12 @@ describe("agents_list tool", () => {
loadConfigMock.mockReset();
});
it("returns model and embedded harness metadata for allowed agents", async () => {
it("returns model and agent runtime metadata for allowed agents", async () => {
loadConfigMock.mockReturnValue({
agents: {
defaults: {
model: "anthropic/claude-opus-4.5",
embeddedHarness: { runtime: "pi", fallback: "pi" },
agentRuntime: { id: "pi", fallback: "pi" },
subagents: { allowAgents: ["codex"] },
},
list: [
@@ -32,7 +32,7 @@ describe("agents_list tool", () => {
id: "codex",
name: "Codex",
model: "openai/gpt-5.5",
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
},
],
},
@@ -51,14 +51,14 @@
id: "main",
configured: true,
model: "anthropic/claude-opus-4.5",
embeddedHarness: { runtime: "pi", source: "defaults" },
agentRuntime: { id: "pi", source: "defaults" },
},
{
id: "codex",
name: "Codex",
configured: true,
model: "openai/gpt-5.5",
embeddedHarness: { runtime: "codex", fallback: "none", source: "agent" },
agentRuntime: { id: "codex", fallback: "none", source: "agent" },
},
],
});
@@ -85,7 +85,7 @@ describe("agents_list tool", () => {
agents: [
{
id: "main",
embeddedHarness: { runtime: "codex", source: "env" },
agentRuntime: { id: "codex", source: "env" },
},
],
});
@@ -5,6 +5,7 @@ import {
normalizeAgentId,
parseAgentSessionKey,
} from "../../routing/session-key.js";
import { resolveAgentRuntimePolicy } from "../agent-runtime-policy.js";
import {
listAgentEntries,
resolveAgentConfig,
@@ -21,8 +22,8 @@ type AgentListEntry = {
name?: string;
configured: boolean;
model?: string;
embeddedHarness?: {
runtime: string;
agentRuntime?: {
id: string;
fallback?: "pi" | "none";
source: "env" | "agent" | "defaults" | "implicit";
};
@@ -32,39 +33,41 @@
return typeof value === "string" && value.trim() ? value.trim().toLowerCase() : undefined;
}
function resolveAgentEmbeddedHarnessMetadata(
function resolveAgentRuntimeMetadata(
cfg: ReturnType<typeof loadConfig>,
agentId: string,
): AgentListEntry["embeddedHarness"] {
): NonNullable<AgentListEntry["agentRuntime"]> {
const envRuntime = normalizeRuntimeValue(process.env.OPENCLAW_AGENT_RUNTIME);
if (envRuntime) {
return {
runtime: envRuntime,
id: envRuntime,
source: "env",
};
}
const agentEntry = listAgentEntries(cfg).find((entry) => normalizeAgentId(entry.id) === agentId);
const agentRuntime = normalizeRuntimeValue(agentEntry?.embeddedHarness?.runtime);
const agentPolicy = resolveAgentRuntimePolicy(agentEntry);
const agentRuntime = normalizeRuntimeValue(agentPolicy?.id);
if (agentRuntime) {
return {
runtime: agentRuntime,
fallback: agentEntry?.embeddedHarness?.fallback,
id: agentRuntime,
fallback: agentPolicy?.fallback,
source: "agent",
};
}
const defaultsRuntime = normalizeRuntimeValue(cfg.agents?.defaults?.embeddedHarness?.runtime);
const defaultsPolicy = resolveAgentRuntimePolicy(cfg.agents?.defaults);
const defaultsRuntime = normalizeRuntimeValue(defaultsPolicy?.id);
if (defaultsRuntime) {
return {
runtime: defaultsRuntime,
fallback: cfg.agents?.defaults?.embeddedHarness?.fallback,
id: defaultsRuntime,
fallback: defaultsPolicy?.fallback,
source: "defaults",
};
}
return {
runtime: "pi",
id: "pi",
source: "implicit",
};
}
@@ -136,13 +139,16 @@ export function createAgentsListTool(opts?: {
.filter((id) => id !== requesterAgentId)
.toSorted((a, b) => a.localeCompare(b));
const ordered = [requesterAgentId, ...rest];
const agents: AgentListEntry[] = ordered.map((id) => ({
id,
name: configuredNameMap.get(id),
configured: configuredIds.includes(id),
model: resolveAgentEffectiveModelPrimary(cfg, id),
embeddedHarness: resolveAgentEmbeddedHarnessMetadata(cfg, id),
}));
const agents: AgentListEntry[] = ordered.map((id) => {
const agentRuntime = resolveAgentRuntimeMetadata(cfg, id);
return {
id,
name: configuredNameMap.get(id),
configured: configuredIds.includes(id),
model: resolveAgentEffectiveModelPrimary(cfg, id),
agentRuntime,
};
});
return jsonResult({

@@ -417,6 +417,50 @@ describe("runAgentTurnWithFallback", () => {
);
});
it("does not pass CLI runtime overrides as embedded harness ids for fallback providers", async () => {
state.isCliProviderMock.mockImplementation((provider: unknown) => provider === "claude-cli");
state.runWithModelFallbackMock.mockImplementationOnce(async (params: FallbackRunnerParams) => ({
result: await params.run("openai", "gpt-5.4"),
provider: "openai",
model: "gpt-5.4",
attempts: [],
}));
state.runEmbeddedPiAgentMock.mockResolvedValueOnce({
payloads: [{ text: "fallback" }],
meta: {},
});
const runAgentTurnWithFallback = await getRunAgentTurnWithFallback();
const followupRun = createFollowupRun();
followupRun.run.provider = "anthropic";
followupRun.run.model = "claude-opus-4-7";
followupRun.run.config = {
agents: {
defaults: {
agentRuntime: { id: "claude-cli", fallback: "none" },
},
},
};
const result = await runAgentTurnWithFallback({
...createMinimalRunAgentTurnParams({ followupRun }),
getActiveSessionEntry: () =>
({
sessionId: "session",
updatedAt: Date.now(),
agentRuntimeOverride: "claude-cli",
}) as SessionEntry,
});
expect(result.kind).toBe("success");
expect(state.runCliAgentMock).not.toHaveBeenCalled();
expect(state.runEmbeddedPiAgentMock).toHaveBeenCalledOnce();
expect(state.runEmbeddedPiAgentMock.mock.calls[0]?.[0]).not.toHaveProperty(
"agentHarnessId",
"claude-cli",
);
});
it("forwards media-only tool results without typing text", async () => {
const onToolResult = vi.fn();
state.runEmbeddedPiAgentMock.mockImplementationOnce(async (params: EmbeddedAgentParams) => {
@@ -13,7 +13,10 @@ import { runCliAgent } from "../../agents/cli-runner.js";
import { getCliSessionBinding } from "../../agents/cli-session.js";
import { LiveSessionModelSwitchError } from "../../agents/live-model-switch-error.js";
import { runWithModelFallback, isFallbackSummaryError } from "../../agents/model-fallback.js";
import { resolveCliRuntimeExecutionProvider } from "../../agents/model-runtime-aliases.js";
import {
isCliRuntimeAlias,
resolveCliRuntimeExecutionProvider,
} from "../../agents/model-runtime-aliases.js";
import { isCliProvider } from "../../agents/model-selection.js";
import {
BILLING_ERROR_USER_MESSAGE,
@@ -1122,7 +1125,8 @@ export async function runAgentTurnWithFallback(params: {
...runBaseParams,
...(agentRuntimeOverride &&
agentRuntimeOverride !== "auto" &&
agentRuntimeOverride !== "default"
agentRuntimeOverride !== "default" &&
!isCliRuntimeAlias(agentRuntimeOverride)
? { agentHarnessId: agentRuntimeOverride }
: {}),
sandboxSessionKey: params.runtimePolicySessionKey,
@@ -1667,7 +1667,7 @@ describe("runReplyAgent claude-cli routing", () => {
messageProvider: "webchat",
sessionFile: "/tmp/session.jsonl",
workspaceDir: "/tmp",
config: { agents: { defaults: { embeddedHarness: { runtime: "claude-cli" } } } },
config: { agents: { defaults: { agentRuntime: { id: "claude-cli" } } } },
skillsSnapshot: {},
provider: "anthropic",
model: "claude-opus-4-7",
@@ -489,7 +489,7 @@ describe("buildStatusReply subagent summary", () => {
...baseCfg,
agents: {
defaults: {
embeddedHarness: { runtime: "codex" },
agentRuntime: { id: "codex" },
},
},
},
@@ -529,7 +529,7 @@
...baseCfg,
agents: {
defaults: {
embeddedHarness: { runtime: "codex" },
agentRuntime: { id: "codex" },
},
},
},
@@ -427,11 +427,11 @@ describe("normalizeCompatibilityConfigValues", () => {
expect(res.changes).toEqual([]);
});
it("migrates legacy Codex primary refs to OpenAI refs plus explicit Codex harness", () => {
it("migrates legacy Codex primary refs to OpenAI refs plus explicit Codex runtime", () => {
const res = normalizeCompatibilityConfigValues({
agents: {
defaults: {
embeddedHarness: { runtime: "auto", fallback: "pi" },
agentRuntime: { id: "auto", fallback: "pi" },
model: {
primary: "codex/gpt-5.5",
fallbacks: ["anthropic/claude-sonnet-4-6", "codex/gpt-5.4-mini"],
@@ -455,8 +455,8 @@
primary: "openai/gpt-5.5",
fallbacks: ["anthropic/claude-sonnet-4-6", "openai/gpt-5.4-mini"],
});
expect(res.config.agents?.defaults?.embeddedHarness).toEqual({
runtime: "codex",
expect(res.config.agents?.defaults?.agentRuntime).toEqual({
id: "codex",
fallback: "pi",
});
expect(res.config.agents?.defaults?.models).toEqual({
@@ -465,7 +465,7 @@
});
expect(res.config.agents?.list?.[0]).toMatchObject({
id: "reviewer",
embeddedHarness: { runtime: "codex" },
agentRuntime: { id: "codex" },
model: "openai/gpt-5.4-mini",
});
expect(res.changes).toEqual(
@@ -518,7 +518,7 @@
primary: "anthropic/claude-opus-4-7",
fallbacks: ["anthropic/claude-sonnet-4-6"],
});
expect(res.config.agents?.defaults?.embeddedHarness).toEqual({ runtime: "claude-cli" });
expect(res.config.agents?.defaults?.agentRuntime).toEqual({ id: "claude-cli" });
expect(res.config.agents?.defaults?.models).toEqual({
"anthropic/claude-opus-4-7": { alias: "Anthropic Opus" },
});
@@ -544,7 +544,7 @@
primary: "openai/gpt-5.5",
fallbacks: ["openai/gpt-5.4-mini"],
});
expect(res.config.agents?.defaults?.embeddedHarness).toEqual({ runtime: "codex-cli" });
expect(res.config.agents?.defaults?.agentRuntime).toEqual({ id: "codex-cli" });
expect(res.config.agents?.defaults?.models).toEqual({
"openai/gpt-5.5": { alias: "OpenAI GPT" },
});
@@ -570,8 +570,8 @@
primary: "google/gemini-3.1-pro-preview",
fallbacks: ["google/gemini-3-flash-preview"],
});
expect(res.config.agents?.defaults?.embeddedHarness).toEqual({
runtime: "google-gemini-cli",
expect(res.config.agents?.defaults?.agentRuntime).toEqual({
id: "google-gemini-cli",
});
expect(res.config.agents?.defaults?.models).toEqual({
"google/gemini-3.1-pro-preview": { alias: "Gemini API" },
@@ -28,7 +28,7 @@ describe("collectCodexRouteWarnings", () => {
expect(warnings).toEqual([expect.stringContaining("Codex plugin is enabled")]);
expect(warnings[0]).toContain("agents.defaults.model");
expect(warnings[0]).toContain('runtime "pi"');
expect(warnings[0]).toContain('embeddedHarness.runtime: "codex"');
expect(warnings[0]).toContain('agentRuntime.id: "codex"');
});
it("does not warn when the native Codex runtime is selected", () => {
@@ -38,8 +38,8 @@
agents: {
defaults: {
model: "openai-codex/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
},
},
},
@@ -1,6 +1,6 @@
import type {
AgentEmbeddedHarnessConfig,
AgentModelConfig,
AgentRuntimePolicyConfig,
} from "../../../config/types.agents-shared.js";
import type { OpenClawConfig } from "../../../config/types.openclaw.js";
@@ -44,13 +44,13 @@ function isCodexPluginEnabled(cfg: OpenClawConfig): boolean {
function resolveRuntime(params: {
env?: NodeJS.ProcessEnv;
agentHarness?: AgentEmbeddedHarnessConfig;
defaultsHarness?: AgentEmbeddedHarnessConfig;
agentRuntime?: AgentRuntimePolicyConfig;
defaultsRuntime?: AgentRuntimePolicyConfig;
}): string {
return (
normalizeString(params.env?.OPENCLAW_AGENT_RUNTIME) ??
normalizeString(params.agentHarness?.runtime) ??
normalizeString(params.defaultsHarness?.runtime) ??
normalizeString(params.agentRuntime?.id) ??
normalizeString(params.defaultsRuntime?.id) ??
"pi"
);
}
@@ -60,10 +60,10 @@
env?: NodeJS.ProcessEnv,
): CodexPiRouteHit[] {
const defaults = cfg.agents?.defaults;
const defaultsHarness = defaults?.embeddedHarness;
const defaultsRuntime = defaults?.agentRuntime;
const hits: CodexPiRouteHit[] = [];
const defaultModel = normalizeModelRef(defaults?.model);
const defaultRuntime = resolveRuntime({ env, defaultsHarness });
const defaultRuntime = resolveRuntime({ env, defaultsRuntime });
if (isOpenAICodexModelRef(defaultModel) && defaultRuntime !== "codex") {
hits.push({ path: "agents.defaults.model", model: defaultModel, runtime: defaultRuntime });
}
@@ -75,8 +75,8 @@
}
const runtime = resolveRuntime({
env,
agentHarness: agent.embeddedHarness,
defaultsHarness,
agentRuntime: agent.agentRuntime,
defaultsRuntime,
});
if (runtime === "codex") {
continue;
@@ -101,11 +101,11 @@ export function collectCodexRouteWarnings(params: {
}
return [
[
"- Codex plugin is enabled, but `openai-codex/*` model refs still use the OpenClaw PI runner unless `embeddedHarness.runtime` is `codex`.",
"- Codex plugin is enabled, but `openai-codex/*` model refs still use the OpenClaw PI runner unless `agentRuntime.id` is `codex`.",
...hits.map(
(hit) => `- ${hit.path}: ${hit.model} currently resolves with runtime "${hit.runtime}".`,
),
'- To use native Codex app-server, set the model to `openai/<model>` and set `agents.defaults.embeddedHarness.runtime: "codex"` (or the agent-level equivalent).',
'- To use native Codex app-server, set the model to `openai/<model>` and set `agents.defaults.agentRuntime.id: "codex"` (or the agent-level equivalent).',
"- Leave this unchanged if you intentionally want Codex OAuth/subscription auth through PI.",
].join("\n"),
];

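Taken on its own, the precedence chain in `resolveRuntime` above — environment override first, then the legacy `embeddedHarness.runtime` keys, then the canonical `agentRuntime.id`, and finally the built-in `pi` default — can be sketched as a standalone function. This is an illustrative reconstruction, not the shipped helper; `sketchResolveRuntime` and its parameter shapes are hypothetical names for this sketch:

```typescript
// Illustrative sketch of the runtime-resolution precedence shown above.
// Only the ordering mirrors resolveRuntime; the names are hypothetical.
type RuntimePolicy = { id?: string };
type HarnessPolicy = { runtime?: string };

function normalizeString(value: unknown): string | undefined {
  if (typeof value !== "string") return undefined;
  const trimmed = value.trim();
  return trimmed.length > 0 ? trimmed : undefined;
}

function sketchResolveRuntime(params: {
  env?: Record<string, string | undefined>;
  agentHarness?: HarnessPolicy;
  defaultsHarness?: HarnessPolicy;
  agentRuntime?: RuntimePolicy;
  defaultsRuntime?: RuntimePolicy;
}): string {
  return (
    // Environment override wins over all config.
    normalizeString(params.env?.OPENCLAW_AGENT_RUNTIME) ??
    // Legacy harness keys still resolve until doctor --fix rewrites them.
    normalizeString(params.agentHarness?.runtime) ??
    normalizeString(params.defaultsHarness?.runtime) ??
    // Canonical agentRuntime.id, agent-level before defaults.
    normalizeString(params.agentRuntime?.id) ??
    normalizeString(params.defaultsRuntime?.id) ??
    "pi"
  );
}

// Canonical key resolves once the legacy harness keys are gone:
console.log(sketchResolveRuntime({ agentRuntime: { id: "codex" } })); // "codex"
// Environment override beats everything:
console.log(
  sketchResolveRuntime({
    env: { OPENCLAW_AGENT_RUNTIME: "pi" },
    agentRuntime: { id: "codex" },
  }),
); // "pi"
```

Note that the legacy keys sit ahead of `agentRuntime.id` in the chain, so an unmigrated `embeddedHarness.runtime` keeps working until the doctor rewrite removes it.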
View file

@@ -193,8 +193,8 @@ type ModelProviderEntry = Partial<
>;
type ModelsConfigPatch = Partial<NonNullable<OpenClawConfig["models"]>>;
type ModelDefinitionEntry = NonNullable<ModelProviderEntry["models"]>[number];
type AgentEmbeddedHarnessPatch = NonNullable<
NonNullable<NonNullable<OpenClawConfig["agents"]>["defaults"]>["embeddedHarness"]
type AgentRuntimePolicyPatch = NonNullable<
NonNullable<NonNullable<OpenClawConfig["agents"]>["defaults"]>["agentRuntime"]
>;
function mergeModelEntry(legacyEntry: unknown, currentEntry: unknown): unknown {
@@ -273,24 +273,24 @@ function normalizeLegacyRuntimeAllowlistModels(
return { value: next, changed };
}
function ensureEmbeddedHarnessRuntime(
function ensureAgentRuntimePolicy(
raw: unknown,
selectedRuntime: string,
): {
value: AgentEmbeddedHarnessPatch;
value: AgentRuntimePolicyPatch;
changed: boolean;
} {
if (!isRecord(raw)) {
return { value: { runtime: selectedRuntime }, changed: true };
return { value: { id: selectedRuntime }, changed: true };
}
const currentRuntime = normalizeOptionalLowercaseString(raw.runtime);
const currentRuntime = normalizeOptionalLowercaseString(raw.id);
if (!currentRuntime || currentRuntime === "auto") {
return {
value: { ...raw, runtime: selectedRuntime } as AgentEmbeddedHarnessPatch,
value: { ...raw, id: selectedRuntime } as AgentRuntimePolicyPatch,
changed: currentRuntime !== selectedRuntime,
};
}
return { value: raw as AgentEmbeddedHarnessPatch, changed: false };
return { value: raw as AgentRuntimePolicyPatch, changed: false };
}
function normalizeLegacyRuntimeAgentContainer(
@@ -321,9 +321,9 @@ function normalizeLegacyRuntimeAgentContainer(
}
if (model.selectedRuntime) {
const harness = ensureEmbeddedHarnessRuntime(raw.embeddedHarness, model.selectedRuntime);
if (harness.changed) {
next.embeddedHarness = harness.value;
const agentRuntime = ensureAgentRuntimePolicy(raw.agentRuntime, model.selectedRuntime);
if (agentRuntime.changed) {
next.agentRuntime = agentRuntime.value;
changed = true;
}
}

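The branch logic in `ensureAgentRuntimePolicy` above reduces to one rule: a missing record or an `id` of `auto` is rewritten to the runtime selected by the allowlist migration, while any explicit `id` is left alone. A condensed sketch, with `sketchEnsurePolicy` as a hypothetical stand-in for the real helper:

```typescript
// Hypothetical condensation of ensureAgentRuntimePolicy: only a missing
// record or an "auto" id is overwritten with the selected runtime.
type RuntimePolicyPatch = { id?: string; fallback?: "pi" | "none" };

function sketchEnsurePolicy(
  raw: unknown,
  selectedRuntime: string,
): { value: RuntimePolicyPatch; changed: boolean } {
  // Non-record input: synthesize a fresh policy for the selected runtime.
  if (typeof raw !== "object" || raw === null || Array.isArray(raw)) {
    return { value: { id: selectedRuntime }, changed: true };
  }
  const record = raw as RuntimePolicyPatch;
  const current =
    typeof record.id === "string" ? record.id.trim().toLowerCase() : undefined;
  // Missing or "auto" id: adopt the selected runtime, keep other fields.
  if (!current || current === "auto") {
    return {
      value: { ...record, id: selectedRuntime },
      changed: current !== selectedRuntime,
    };
  }
  // An explicit id is respected and never rewritten.
  return { value: record, changed: false };
}

console.log(sketchEnsurePolicy({ id: "auto" }, "codex"));
// → { value: { id: "codex" }, changed: true }
console.log(sketchEnsurePolicy({ id: "pi", fallback: "none" }, "codex"));
// explicit ids pass through unchanged
```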
View file

@@ -98,6 +98,49 @@ describe("legacy migrate mention routing", () => {
});
describe("legacy migrate sandbox scope aliases", () => {
it("moves legacy embeddedHarness runtime policy into agentRuntime", () => {
const res = migrateLegacyConfigForTest({
agents: {
defaults: {
embeddedHarness: {
runtime: "claude-cli",
fallback: "none",
},
},
list: [
{
id: "reviewer",
agentRuntime: { fallback: "pi" },
embeddedHarness: {
runtime: "codex",
fallback: "none",
},
},
],
},
});
expect(res.changes).toEqual(
expect.arrayContaining([
"Moved agents.defaults.embeddedHarness → agents.defaults.agentRuntime.",
"Moved agents.list.0.embeddedHarness → agents.list.0.agentRuntime.",
]),
);
expect(res.config?.agents?.defaults).toEqual({
agentRuntime: {
id: "claude-cli",
fallback: "none",
},
});
expect(res.config?.agents?.list?.[0]).toEqual({
id: "reviewer",
agentRuntime: {
id: "codex",
fallback: "pi",
},
});
});
it("moves agents.defaults.sandbox.perSession into scope", () => {
const res = migrateLegacyConfigForTest({
agents: {

View file

@@ -54,6 +54,21 @@ const LEGACY_SANDBOX_SCOPE_RULES: LegacyConfigRule[] = [
},
];
const LEGACY_AGENT_RUNTIME_POLICY_RULES: LegacyConfigRule[] = [
{
path: ["agents", "defaults", "embeddedHarness"],
message:
'agents.defaults.embeddedHarness is legacy; use agents.defaults.agentRuntime instead. Run "openclaw doctor --fix".',
match: (value) => getRecord(value) !== null,
},
{
path: ["agents", "list"],
message:
'agents.list[].embeddedHarness is legacy; use agents.list[].agentRuntime instead. Run "openclaw doctor --fix".',
match: (value) => hasLegacyAgentListEmbeddedHarness(value),
},
];
function sandboxScopeFromPerSession(perSession: boolean): "session" | "shared" {
return perSession ? "session" : "shared";
}
@@ -124,6 +139,13 @@ function hasLegacyAgentListSandboxPerSession(value: unknown): boolean {
return value.some((agent) => hasLegacySandboxPerSession(getRecord(agent)?.sandbox));
}
function hasLegacyAgentListEmbeddedHarness(value: unknown): boolean {
if (!Array.isArray(value)) {
return false;
}
return value.some((agent) => getRecord(getRecord(agent)?.embeddedHarness) !== null);
}
function migrateLegacySandboxPerSession(
sandbox: Record<string, unknown>,
pathLabel: string,
@@ -145,7 +167,56 @@ function migrateLegacySandboxPerSession(
delete sandbox.perSession;
}
function migrateLegacyAgentRuntimePolicy(
container: Record<string, unknown>,
pathLabel: string,
changes: string[],
): void {
const legacy = getRecord(container.embeddedHarness);
if (!legacy) {
return;
}
const existing = getRecord(container.agentRuntime);
const next = existing ? structuredClone(existing) : {};
if (next.id === undefined && legacy.runtime !== undefined) {
next.id = legacy.runtime;
}
if (next.fallback === undefined && legacy.fallback !== undefined) {
next.fallback = legacy.fallback;
}
if (Object.keys(next).length > 0) {
container.agentRuntime = next;
}
delete container.embeddedHarness;
changes.push(`Moved ${pathLabel}.embeddedHarness → ${pathLabel}.agentRuntime.`);
}
export const LEGACY_CONFIG_MIGRATIONS_RUNTIME_AGENTS: LegacyConfigMigrationSpec[] = [
defineLegacyConfigMigration({
id: "agents.embeddedHarness->agentRuntime",
describe: "Move legacy embeddedHarness runtime policy to agentRuntime",
legacyRules: LEGACY_AGENT_RUNTIME_POLICY_RULES,
apply: (raw, changes) => {
const agents = getRecord(raw.agents);
const defaults = getRecord(agents?.defaults);
if (defaults) {
migrateLegacyAgentRuntimePolicy(defaults, "agents.defaults", changes);
}
if (!Array.isArray(agents?.list)) {
return;
}
for (const [index, agent] of agents.list.entries()) {
const agentRecord = getRecord(agent);
if (!agentRecord) {
continue;
}
migrateLegacyAgentRuntimePolicy(agentRecord, `agents.list.${index}`, changes);
}
},
}),
defineLegacyConfigMigration({
id: "agents.sandbox.perSession->scope",
describe: "Move legacy agent sandbox perSession aliases to sandbox.scope",

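In isolation, the move performed by `migrateLegacyAgentRuntimePolicy` can be sketched like this (a simplified stand-in, not the shipped helper): legacy `runtime`/`fallback` fields are copied into `agentRuntime` only where the new key does not already set them, then the legacy block is deleted.

```typescript
// Simplified stand-in for the migration above: copy legacy embeddedHarness
// fields into agentRuntime without clobbering values already set there.
type AgentContainer = {
  embeddedHarness?: { runtime?: string; fallback?: "pi" | "none" };
  agentRuntime?: { id?: string; fallback?: "pi" | "none" };
};

function sketchMigrate(container: AgentContainer): string[] {
  const changes: string[] = [];
  const legacy = container.embeddedHarness;
  if (!legacy) return changes;
  const next = { ...(container.agentRuntime ?? {}) };
  // Existing agentRuntime values win; legacy fields only fill gaps.
  if (next.id === undefined && legacy.runtime !== undefined) {
    next.id = legacy.runtime;
  }
  if (next.fallback === undefined && legacy.fallback !== undefined) {
    next.fallback = legacy.fallback;
  }
  if (Object.keys(next).length > 0) {
    container.agentRuntime = next;
  }
  delete container.embeddedHarness;
  changes.push("Moved embeddedHarness → agentRuntime.");
  return changes;
}

const agent: AgentContainer = {
  agentRuntime: { fallback: "pi" },
  embeddedHarness: { runtime: "codex", fallback: "none" },
};
sketchMigrate(agent);
// agent.agentRuntime is now { fallback: "pi", id: "codex" }: the existing
// fallback wins over the legacy one, as the migration tests expect.
```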
View file

@@ -289,8 +289,8 @@ describe("applyPluginAutoEnable core", () => {
agents: {
defaults: {
model: "openai/gpt-5.5",
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},
@@ -316,13 +316,13 @@
]);
});
it("auto-enables an opt-in plugin when an embedded agent harness runtime is configured", () => {
it("auto-enables an opt-in plugin when an agent runtime is configured", () => {
const result = applyPluginAutoEnable({
config: {
agents: {
defaults: {
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},

View file

@@ -3184,27 +3184,47 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
},
additionalProperties: {},
},
embeddedHarness: {
agentRuntime: {
type: "object",
properties: {
runtime: {
id: {
type: "string",
title: "Default Agent Runtime",
description:
"Embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime uses built-in OpenClaw Pi.",
"Agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id uses built-in OpenClaw Pi.",
},
fallback: {
type: "string",
enum: ["pi", "none"],
title: "Default Embedded Harness Fallback",
title: "Default Agent Runtime Fallback",
description:
"Embedded harness fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
"Agent runtime fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
},
},
additionalProperties: false,
title: "Default Agent Runtime Settings",
description:
"Default embedded agent harness policy. Omitted runtime uses built-in OpenClaw Pi. Use runtime=auto for plugin harness selection, or a registered harness id such as codex.",
"Default agent runtime policy. Omitted id uses built-in OpenClaw Pi. Use id=auto for plugin harness selection, a registered harness id such as codex, or a supported CLI backend alias such as claude-cli.",
},
embeddedHarness: {
type: "object",
properties: {
runtime: {
type: "string",
title: "Default Legacy Embedded Harness Runtime",
description: "Legacy input for agents.defaults.agentRuntime.id.",
},
fallback: {
type: "string",
enum: ["pi", "none"],
title: "Default Legacy Embedded Harness Fallback",
description: "Legacy input for agents.defaults.agentRuntime.fallback.",
},
},
additionalProperties: false,
title: "Default Legacy Embedded Harness Settings",
description:
"Legacy input for agents.defaults.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
},
model: {
anyOf: [
@@ -5992,27 +6012,47 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
systemPromptOverride: {
type: "string",
},
agentRuntime: {
type: "object",
properties: {
id: {
type: "string",
title: "Agent Runtime",
description:
"Per-agent agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id inherits the default OpenClaw Pi behavior.",
},
fallback: {
type: "string",
enum: ["pi", "none"],
title: "Agent Runtime Fallback",
description:
"Per-agent agent runtime fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
},
},
additionalProperties: false,
title: "Agent Runtime",
description:
"Per-agent agent runtime policy override. Use id=codex to force Codex for one agent while defaults stay in auto mode.",
},
embeddedHarness: {
type: "object",
properties: {
runtime: {
type: "string",
title: "Agent Runtime",
description:
"Per-agent embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime inherits the default OpenClaw Pi behavior.",
title: "Agent Legacy Embedded Harness Runtime",
description: "Legacy input for agents.list.*.agentRuntime.id.",
},
fallback: {
type: "string",
enum: ["pi", "none"],
title: "Agent Embedded Harness Fallback",
description:
"Per-agent embedded harness fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
title: "Agent Legacy Embedded Harness Fallback",
description: "Legacy input for agents.list.*.agentRuntime.fallback.",
},
},
additionalProperties: false,
title: "Agent Embedded Harness",
title: "Agent Legacy Embedded Harness",
description:
"Per-agent embedded harness policy override. Use runtime=codex to force Codex for one agent while defaults stay in auto mode.",
"Legacy input for agents.list.*.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
},
model: {
anyOf: [
@@ -24220,19 +24260,34 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
help: "Default max characters retained from AGENTS.md during post-compaction context refresh injection. Lower this to make compaction recovery cheaper, or raise it for agents that depend on longer startup guidance.",
tags: ["performance"],
},
"agents.defaults.embeddedHarness": {
"agents.defaults.agentRuntime": {
label: "Default Agent Runtime Settings",
help: "Default embedded agent harness policy. Omitted runtime uses built-in OpenClaw Pi. Use runtime=auto for plugin harness selection, or a registered harness id such as codex.",
help: "Default agent runtime policy. Omitted id uses built-in OpenClaw Pi. Use id=auto for plugin harness selection, a registered harness id such as codex, or a supported CLI backend alias such as claude-cli.",
tags: ["advanced"],
},
"agents.defaults.agentRuntime.id": {
label: "Default Agent Runtime",
help: "Agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id uses built-in OpenClaw Pi.",
tags: ["advanced"],
},
"agents.defaults.agentRuntime.fallback": {
label: "Default Agent Runtime Fallback",
help: "Agent runtime fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
tags: ["reliability"],
},
"agents.defaults.embeddedHarness": {
label: "Default Legacy Embedded Harness Settings",
help: "Legacy input for agents.defaults.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
tags: ["advanced"],
},
"agents.defaults.embeddedHarness.runtime": {
label: "Default Agent Runtime",
help: "Embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime uses built-in OpenClaw Pi.",
label: "Default Legacy Embedded Harness Runtime",
help: "Legacy input for agents.defaults.agentRuntime.id.",
tags: ["advanced"],
},
"agents.defaults.embeddedHarness.fallback": {
label: "Default Embedded Harness Fallback",
help: "Embedded harness fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
label: "Default Legacy Embedded Harness Fallback",
help: "Legacy input for agents.defaults.agentRuntime.fallback.",
tags: ["reliability"],
},
"agents.list": {
@@ -24275,19 +24330,34 @@ export const GENERATED_BASE_CONFIG_SCHEMA: BaseConfigSchemaResponse = {
help: "Per-agent override for the post-compaction AGENTS.md excerpt budget.",
tags: ["performance"],
},
"agents.list.*.agentRuntime": {
label: "Agent Runtime",
help: "Per-agent agent runtime policy override. Use id=codex to force Codex for one agent while defaults stay in auto mode.",
tags: ["advanced"],
},
"agents.list.*.agentRuntime.id": {
label: "Agent Runtime",
help: "Per-agent agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id inherits the default OpenClaw Pi behavior.",
tags: ["advanced"],
},
"agents.list.*.agentRuntime.fallback": {
label: "Agent Runtime Fallback",
help: "Per-agent agent runtime fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
tags: ["reliability"],
},
"agents.list.*.embeddedHarness": {
label: "Agent Embedded Harness",
help: "Per-agent embedded harness policy override. Use runtime=codex to force Codex for one agent while defaults stay in auto mode.",
label: "Agent Legacy Embedded Harness",
help: "Legacy input for agents.list.*.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
tags: ["advanced"],
},
"agents.list.*.embeddedHarness.runtime": {
label: "Agent Runtime",
help: "Per-agent embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime inherits the default OpenClaw Pi behavior.",
label: "Agent Legacy Embedded Harness Runtime",
help: "Legacy input for agents.list.*.agentRuntime.id.",
tags: ["advanced"],
},
"agents.list.*.embeddedHarness.fallback": {
label: "Agent Embedded Harness Fallback",
help: "Per-agent embedded harness fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
label: "Agent Legacy Embedded Harness Fallback",
help: "Legacy input for agents.list.*.agentRuntime.fallback.",
tags: ["reliability"],
},
"gateway.port": {

View file

@@ -1181,18 +1181,27 @@ export const FIELD_HELP: Record<string, string> = {
"agents.defaults.model.primary": "Primary model (provider/model).",
"agents.defaults.model.fallbacks":
"Ordered fallback models (provider/model). Used when the primary model fails.",
"agents.defaults.agentRuntime":
"Default agent runtime policy. Omitted id uses built-in OpenClaw Pi. Use id=auto for plugin harness selection, a registered harness id such as codex, or a supported CLI backend alias such as claude-cli.",
"agents.defaults.agentRuntime.id":
"Agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id uses built-in OpenClaw Pi.",
"agents.defaults.agentRuntime.fallback":
"Agent runtime fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
"agents.defaults.embeddedHarness":
"Default embedded agent harness policy. Omitted runtime uses built-in OpenClaw Pi. Use runtime=auto for plugin harness selection, or a registered harness id such as codex.",
"agents.defaults.embeddedHarness.runtime":
"Embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime uses built-in OpenClaw Pi.",
"Legacy input for agents.defaults.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
"agents.defaults.embeddedHarness.runtime": "Legacy input for agents.defaults.agentRuntime.id.",
"agents.defaults.embeddedHarness.fallback":
"Embedded harness fallback when no plugin harness matches. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings. Selected plugin harness failures surface directly.",
"Legacy input for agents.defaults.agentRuntime.fallback.",
"agents.list.*.agentRuntime":
"Per-agent agent runtime policy override. Use id=codex to force Codex for one agent while defaults stay in auto mode.",
"agents.list.*.agentRuntime.id":
"Per-agent agent runtime id: pi, auto, a registered plugin harness id such as codex, or a supported CLI backend alias such as claude-cli. Omitted id inherits the default OpenClaw Pi behavior.",
"agents.list.*.agentRuntime.fallback":
"Per-agent agent runtime fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
"agents.list.*.embeddedHarness":
"Per-agent embedded harness policy override. Use runtime=codex to force Codex for one agent while defaults stay in auto mode.",
"agents.list.*.embeddedHarness.runtime":
"Per-agent embedded harness runtime: pi, auto, or a registered plugin harness id such as codex. Omitted runtime inherits the default OpenClaw Pi behavior.",
"agents.list.*.embeddedHarness.fallback":
"Per-agent embedded harness fallback. Auto mode defaults to pi; explicit plugin runtimes default to none and do not inherit broader fallback settings.",
"Legacy input for agents.list.*.agentRuntime. Run openclaw doctor --fix to rewrite it to agentRuntime.",
"agents.list.*.embeddedHarness.runtime": "Legacy input for agents.list.*.agentRuntime.id.",
"agents.list.*.embeddedHarness.fallback": "Legacy input for agents.list.*.agentRuntime.fallback.",
"agents.defaults.imageModel.primary":
"Optional image model (provider/model) used when the primary model lacks image input.",
"agents.defaults.imageModel.fallbacks": "Ordered fallback image models (provider/model).",

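The fallback semantics repeated in these help strings — auto mode falls back to `pi`, while an explicitly selected plugin runtime defaults to `none` unless a fallback is set — can be expressed as a tiny decision function. This is illustrative only; `effectiveFallback` is a hypothetical name, not an OpenClaw export:

```typescript
// Illustrative-only: effective fallback per the help text above. An explicit
// fallback wins; otherwise "auto" falls back to pi and explicitly selected
// runtimes fail hard ("none").
function effectiveFallback(
  id: string,
  explicit?: "pi" | "none",
): "pi" | "none" {
  if (explicit) return explicit;
  return id === "auto" ? "pi" : "none";
}

console.log(effectiveFallback("auto")); // "pi"
console.log(effectiveFallback("codex")); // "none"
console.log(effectiveFallback("codex", "pi")); // "pi"
```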
View file

@@ -82,9 +82,12 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.defaults.contextLimits.memoryGetDefaultLines": "Default memory_get Line Window",
"agents.defaults.contextLimits.toolResultMaxChars": "Default Tool Result Max Chars",
"agents.defaults.contextLimits.postCompactionMaxChars": "Default Post-compaction Max Chars",
"agents.defaults.embeddedHarness": "Default Agent Runtime Settings",
"agents.defaults.embeddedHarness.runtime": "Default Agent Runtime",
"agents.defaults.embeddedHarness.fallback": "Default Embedded Harness Fallback",
"agents.defaults.agentRuntime": "Default Agent Runtime Settings",
"agents.defaults.agentRuntime.id": "Default Agent Runtime",
"agents.defaults.agentRuntime.fallback": "Default Agent Runtime Fallback",
"agents.defaults.embeddedHarness": "Default Legacy Embedded Harness Settings",
"agents.defaults.embeddedHarness.runtime": "Default Legacy Embedded Harness Runtime",
"agents.defaults.embeddedHarness.fallback": "Default Legacy Embedded Harness Fallback",
"agents.list": "Agent List",
"agents.list[].skillsLimits": "Agent Skills Limits",
"agents.list[].skillsLimits.maxSkillsPromptChars": "Agent Skills Prompt Max Chars",
@@ -93,9 +96,12 @@ export const FIELD_LABELS: Record<string, string> = {
"agents.list[].contextLimits.memoryGetDefaultLines": "Agent memory_get Line Window",
"agents.list[].contextLimits.toolResultMaxChars": "Agent Tool Result Max Chars",
"agents.list[].contextLimits.postCompactionMaxChars": "Agent Post-compaction Max Chars",
"agents.list.*.embeddedHarness": "Agent Embedded Harness",
"agents.list.*.embeddedHarness.runtime": "Agent Runtime",
"agents.list.*.embeddedHarness.fallback": "Agent Embedded Harness Fallback",
"agents.list.*.agentRuntime": "Agent Runtime",
"agents.list.*.agentRuntime.id": "Agent Runtime",
"agents.list.*.agentRuntime.fallback": "Agent Runtime Fallback",
"agents.list.*.embeddedHarness": "Agent Legacy Embedded Harness",
"agents.list.*.embeddedHarness.runtime": "Agent Legacy Embedded Harness Runtime",
"agents.list.*.embeddedHarness.fallback": "Agent Legacy Embedded Harness Fallback",
gateway: "Gateway",
"gateway.port": "Gateway Port",
"gateway.mode": "Gateway Mode",

View file

@@ -5,6 +5,7 @@ import type {
import type {
AgentEmbeddedHarnessConfig,
AgentModelConfig,
AgentRuntimePolicyConfig,
AgentSandboxConfig,
} from "./types.agents-shared.js";
import type {
@@ -178,7 +179,9 @@ export type CliBackendConfig = {
export type AgentDefaultsConfig = {
/** Global default provider params applied to all models before per-model and per-agent overrides. */
params?: Record<string, unknown>;
/** Default embedded agent harness policy. */
/** Default agent runtime policy. */
agentRuntime?: AgentRuntimePolicyConfig;
/** @deprecated Use agentRuntime. */
embeddedHarness?: AgentEmbeddedHarnessConfig;
/** Primary model and fallbacks (provider/model). Accepts string or {primary,fallbacks}. */
model?: AgentModelConfig;

View file

@@ -23,6 +23,13 @@ export type AgentEmbeddedHarnessConfig = {
fallback?: "pi" | "none";
};
export type AgentRuntimePolicyConfig = {
/** Agent runtime id. Omitted uses "pi"; "auto" opts into plugin harness auto-selection. */
id?: string;
/** Fallback when no plugin harness matches or an auto-selected plugin harness fails. */
fallback?: "pi" | "none";
};
export type AgentSandboxConfig = {
mode?: "off" | "non-main" | "all";
/** Sandbox runtime backend id. Default: "docker". */

View file

@@ -7,6 +7,7 @@ import type {
import type {
AgentEmbeddedHarnessConfig,
AgentModelConfig,
AgentRuntimePolicyConfig,
AgentSandboxConfig,
} from "./types.agents-shared.js";
import type { DmScope, HumanDelayConfig, IdentityConfig } from "./types.base.js";
@@ -80,7 +81,9 @@ export type AgentConfig = {
agentDir?: string;
/** Optional per-agent full system prompt replacement. */
systemPromptOverride?: AgentDefaultsConfig["systemPromptOverride"];
/** Optional per-agent embedded harness policy override. */
/** Optional per-agent agent runtime policy override. */
agentRuntime?: AgentRuntimePolicyConfig;
/** @deprecated Use agentRuntime. */
embeddedHarness?: AgentEmbeddedHarnessConfig;
model?: AgentModelConfig;
/** Optional per-agent default thinking level (overrides agents.defaults.thinkingDefault). */

View file

@@ -6,6 +6,7 @@ import {
AgentSandboxSchema,
AgentContextLimitsSchema,
AgentEmbeddedHarnessSchema,
AgentRuntimePolicySchema,
AgentModelSchema,
MemorySearchSchema,
} from "./zod-schema.agent-runtime.js";
@@ -39,6 +40,7 @@ export const AgentDefaultsSchema = z
.object({
/** Global default provider params applied to all models before per-model and per-agent overrides. */
params: z.record(z.string(), z.unknown()).optional(),
agentRuntime: AgentRuntimePolicySchema,
embeddedHarness: AgentEmbeddedHarnessSchema,
model: AgentModelSchema.optional(),
imageModel: AgentModelSchema.optional(),

View file

@@ -811,6 +811,14 @@ export const AgentEmbeddedHarnessSchema = z
.strict()
.optional();
export const AgentRuntimePolicySchema = z
.object({
id: z.string().optional(),
fallback: z.enum(["pi", "none"]).optional(),
})
.strict()
.optional();
export const AgentEntrySchema = z
.object({
id: z.string(),
@@ -819,6 +827,7 @@ export const AgentEntrySchema = z
workspace: z.string().optional(),
agentDir: z.string().optional(),
systemPromptOverride: z.string().optional(),
agentRuntime: AgentRuntimePolicySchema,
embeddedHarness: AgentEmbeddedHarnessSchema,
model: AgentModelSchema.optional(),
thinkingDefault: z

View file

@@ -71,7 +71,7 @@ function buildCodexAppServerPlannerConfig(workspaceDir: string): OpenClawConfig
agents: {
defaults: {
workspace: workspaceDir,
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
model: { primary: `openai/${CRESTODIAN_CODEX_MODEL}` },
},
},

View file

@@ -159,7 +159,7 @@ describe("Crestodian assistant", () => {
agents: {
defaults: {
workspace: "/tmp/workspace",
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
model: { primary: "openai/gpt-5.5" },
},
},
@@ -220,7 +220,7 @@ describe("Crestodian assistant", () => {
expect(firstEmbeddedCall.config).toMatchObject({
agents: {
defaults: {
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
model: { primary: "openai/gpt-5.5" },
},
},

View file

@@ -295,7 +295,7 @@ describeLive("gateway live (cli backend)", () => {
[modelKey]: {},
...(modelSwitchTarget ? { [modelSwitchTarget]: {} } : {}),
},
embeddedHarness: { runtime: "pi", fallback: "pi" },
agentRuntime: { id: "pi", fallback: "pi" },
cliBackends: {
...existingBackends,
[providerId]: {

View file

@@ -288,7 +288,7 @@ async function writeGatewayConfig(params: {
agents: {
defaults: {
workspace: params.workspace,
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
model: { primary: `codex/${params.model}` },
skipBootstrap: true,
sandbox: { mode: "off" },

View file

@@ -195,7 +195,7 @@ async function writeLiveGatewayConfig(params: {
agents: {
defaults: {
workspace: params.workspace,
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
skipBootstrap: true,
timeoutSeconds: CODEX_HARNESS_AGENT_TIMEOUT_SECONDS,
model: { primary: params.modelKey },

View file

@@ -58,7 +58,7 @@ async function writeLiveGatewayConfig(params: {
list: [{ id: "dev", default: true }],
defaults: {
workspace: params.workspace,
embeddedHarness: { runtime: "codex", fallback: "none" },
agentRuntime: { id: "codex", fallback: "none" },
skipBootstrap: true,
model: { primary: params.modelKey },
models: { [params.modelKey]: {} },

View file

@@ -252,8 +252,8 @@ function createStartupConfig(params: {
enabledPluginIds?: string[];
providerIds?: string[];
modelId?: string;
embeddedHarnessRuntime?: string;
agentEmbeddedHarnessRuntimes?: string[];
agentRuntimeId?: string;
agentRuntimeIds?: string[];
channelIds?: string[];
allowPluginIds?: string[];
noConfiguredChannels?: boolean;
@@ -316,10 +316,10 @@
agents: {
defaults: {
model: { primary: params.modelId },
...(params.embeddedHarnessRuntime
...(params.agentRuntimeId
? {
embeddedHarness: {
runtime: params.embeddedHarnessRuntime,
agentRuntime: {
id: params.agentRuntimeId,
fallback: "none",
},
}
@@ -328,32 +328,32 @@
[params.modelId]: {},
},
},
...(params.agentEmbeddedHarnessRuntimes?.length
...(params.agentRuntimeIds?.length
? {
list: params.agentEmbeddedHarnessRuntimes.map((runtime, index) => ({
list: params.agentRuntimeIds.map((runtime, index) => ({
id: `agent-${index + 1}`,
embeddedHarness: { runtime },
agentRuntime: { id: runtime },
})),
}
: {}),
},
}
: params.embeddedHarnessRuntime || params.agentEmbeddedHarnessRuntimes?.length
: params.agentRuntimeId || params.agentRuntimeIds?.length
? {
agents: {
defaults: params.embeddedHarnessRuntime
defaults: params.agentRuntimeId
? {
embeddedHarness: {
runtime: params.embeddedHarnessRuntime,
agentRuntime: {
id: params.agentRuntimeId,
fallback: "none",
},
}
: {},
...(params.agentEmbeddedHarnessRuntimes?.length
...(params.agentRuntimeIds?.length
? {
list: params.agentEmbeddedHarnessRuntimes.map((runtime, index) => ({
list: params.agentRuntimeIds.map((runtime, index) => ({
id: `agent-${index + 1}`,
embeddedHarness: { runtime },
agentRuntime: { id: runtime },
})),
}
: {}),
@@ -645,7 +645,7 @@ describe("resolveGatewayStartupPluginIds", () => {
it("includes required agent harness owner plugins when the default runtime is forced", () => {
expectStartupPluginIdsCase({
config: createStartupConfig({
embeddedHarnessRuntime: "codex",
agentRuntimeId: "codex",
enabledPluginIds: ["codex"],
}),
expected: ["demo-channel", "browser", "codex"],
@@ -655,7 +655,7 @@
it("includes required agent harness owner plugins when an agent override forces the runtime", () => {
expectStartupPluginIdsCase({
config: createStartupConfig({
agentEmbeddedHarnessRuntimes: ["codex"],
agentRuntimeIds: ["codex"],
enabledPluginIds: ["codex"],
}),
expected: ["demo-channel", "browser", "codex"],
@@ -677,8 +677,8 @@
config: {
agents: {
defaults: {
embeddedHarness: {
runtime: "codex",
agentRuntime: {
id: "codex",
fallback: "none",
},
},