docs: improve 2026.4.23 release docs

This commit is contained in:
Peter Steinberger 2026-04-24 17:54:54 +01:00
parent 51c11cfd90
commit b2352c3e24
No known key found for this signature in database
6 changed files with 75 additions and 0 deletions


@ -9,6 +9,16 @@ title: "Chat channels"
OpenClaw can talk to you on any chat app you already use. Each channel connects via the Gateway.
Text is supported everywhere; media and reactions vary by channel.
## Delivery notes
- Telegram replies that contain markdown image syntax, such as `![alt](url)`,
are converted into media replies on the final outbound path when possible.
- Slack multi-person DMs route as group chats, so group policy, mention
behavior, and group-session rules apply to MPIM conversations.
- WhatsApp setup is install-on-demand: onboarding can show the setup flow before
Baileys runtime dependencies are staged, and the Gateway loads the WhatsApp
runtime only when the channel is actually active.
## Supported channels
- [BlueBubbles](/channels/bluebubbles) — **Recommended for iMessage**; uses the BlueBubbles macOS server REST API with full feature support via the bundled plugin (edit, unsend, effects, reactions, group management); note that edit is currently broken on macOS 26 Tahoe.


@ -461,6 +461,10 @@ Two built-in tools can make persistent control-plane changes:
The owner-only `gateway` runtime tool still refuses to rewrite
`tools.exec.ask` or `tools.exec.security`; legacy `tools.bash.*` aliases are
normalized to the same protected exec paths before the write.
Agent-driven `gateway config.apply` and `gateway config.patch` edits are
fail-closed by default: only a narrow set of prompt, model, and mention-gating
paths are agent-tunable. New sensitive config trees are therefore protected
unless they are deliberately added to the allowlist.
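As an illustration of the fail-closed behavior, an agent-issued patch that touches only an allowlisted prompt path would be accepted, while the same call against a protected exec path would be refused. This is a sketch: the prompt path shown is a hypothetical placeholder, and only `tools.exec.security` is a path named in this document.

```json
{
  "accepted": {
    "op": "gateway config.patch",
    "set": { "agents.defaults.prompt.preamble": "Be concise." }
  },
  "refused": {
    "op": "gateway config.patch",
    "set": { "tools.exec.security": "off" }
  }
}
```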
For any agent/surface that handles untrusted content, deny these by default:
@ -843,6 +847,10 @@ Plaintext `ws://` is loopback-only by default. For trusted private-network
paths, set `OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1` on the client process as
break-glass. This is intentionally process environment only, not an
`openclaw.json` config key.
Mobile pairing and Android manual or scanned gateway routes are stricter:
cleartext is accepted for loopback, but private-LAN, link-local, `.local`, and
dotless hostnames must use TLS unless you explicitly opt into the trusted
private-network cleartext path.
Local device pairing:


@ -71,6 +71,18 @@ ReadWritePaths=/var/lib/openclaw /home/openclaw/.openclaw /tmp
If `OPENCLAW_PLUGIN_STAGE_DIR` is not set, OpenClaw uses `$STATE_DIRECTORY` when
systemd provides it, then falls back to `~/.openclaw/plugin-runtime-deps`.
### Bundled plugin runtime dependencies
Packaged installs keep bundled plugin runtime dependencies out of the read-only
package tree. On startup and during `openclaw doctor --fix`, OpenClaw repairs
runtime dependencies only for bundled plugins that are active in config, active
through legacy channel config, or enabled by their bundled manifest default.
Explicit disablement wins. A disabled plugin or channel does not get its
runtime dependencies repaired just because it exists in the package. External
plugins and custom load paths still use `openclaw plugins install` or
`openclaw plugins update`.
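For example, explicitly disabling a bundled plugin keeps its runtime dependencies from being repaired even though the plugin ships in the package. The `plugins.whatsapp.enabled` path below is a hypothetical illustration of such a switch, not a confirmed config key:

```json
{
  "plugins": {
    "whatsapp": { "enabled": false }
  }
}
```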
## Auto-updater
The auto-updater is off by default. Enable it in `~/.openclaw/openclaw.json`:


@ -15,6 +15,15 @@ OpenAI provides developer APIs for GPT models. OpenClaw supports three OpenAI-fa
OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
## Quick choice
| Goal | Use | Notes |
| --------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------- |
| Direct API-key billing | `openai/gpt-5.4` | Set `OPENAI_API_KEY` or run OpenAI API-key onboarding. |
| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` plus `embeddedHarness.runtime: "codex"` | Uses the Codex app-server harness, not the public OpenAI API route. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth. |
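A minimal sketch of the third row, pairing `openai/gpt-5.5` with the Codex app-server harness. The `embeddedHarness.runtime: "codex"` key comes from the table above; the surrounding nesting and placement in `openclaw.json` are assumptions:

```json
{
  "model": "openai/gpt-5.5",
  "embeddedHarness": { "runtime": "codex" }
}
```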
<Note>
GPT-5.5 is currently available in OpenClaw through subscription/OAuth routes:
`openai-codex/gpt-5.5` with the PI runner, or `openai/gpt-5.5` with the
@ -200,6 +209,14 @@ Choose your preferred auth method and follow the setup steps.
Use `contextWindow` to declare native model metadata. Use `contextTokens` to limit the runtime context budget.
</Note>
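To illustrate the distinction, a model entry might declare a large native window while capping the runtime budget lower. Only the `contextWindow` and `contextTokens` keys come from the note above; the `models` nesting and the specific numbers are assumptions for illustration:

```json
{
  "models": {
    "openai/gpt-5.5": {
      "contextWindow": 400000,
      "contextTokens": 128000
    }
  }
}
```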
### Catalog recovery
OpenClaw uses upstream Codex catalog metadata for `gpt-5.5` when it is
present. If live Codex discovery omits the `openai-codex/gpt-5.5` row while
the account is authenticated, OpenClaw synthesizes that OAuth model row so
cron, sub-agent, and configured default-model runs do not fail with
`Unknown model`.
</Tab>
</Tabs>


@ -44,6 +44,21 @@ image endpoints remain blocked by default.
The agent calls `image_generate` automatically. No tool allow-listing needed — it's enabled by default when a provider is available.
## Common routes
| Goal | Model ref | Auth |
| ---------------------------------------------------- | -------------------------------------------------- | ------------------------------------ |
| OpenAI image generation with API billing | `openai/gpt-image-2` | `OPENAI_API_KEY` |
| OpenAI image generation with Codex subscription auth | `openai/gpt-image-2` | OpenAI Codex OAuth |
| OpenRouter image generation | `openrouter/google/gemini-3.1-flash-image-preview` | `OPENROUTER_API_KEY` |
| Google Gemini image generation | `google/gemini-3.1-flash-image-preview` | `GEMINI_API_KEY` or `GOOGLE_API_KEY` |
The same `image_generate` tool handles text-to-image and reference-image
editing. Use `image` for one reference or `images` for multiple references.
Provider-supported output hints such as `quality`, `outputFormat`, and
OpenAI-specific `background` are forwarded when available and reported as
ignored when a provider does not support them.
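As a sketch, a single `image_generate` call might combine one reference image with output hints. The parameter names (`image`, `quality`, `outputFormat`, `background`) come from the text above; the overall argument shape and values are assumptions:

```json
{
  "prompt": "Remove the background and keep the product centered",
  "image": "https://example.com/product.png",
  "quality": "high",
  "outputFormat": "png",
  "background": "transparent"
}
```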
## Supported providers
| Provider | Default model | Edit support | Auth |


@ -75,6 +75,19 @@ higher-quality model. You can configure this via `agents.defaults.subagents.mode
overrides. When a child genuinely needs the requester's current transcript, the agent can request
`context: "fork"` on that one spawn.
## Context modes
Native sub-agents start isolated unless the caller explicitly asks to fork the
current transcript.
| Mode | When to use it | Behavior |
| ---------- | -------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| `isolated` | Fresh research, independent implementation, slow tool work, or anything that can be briefed in the task text | Creates a clean child transcript. This is the default and keeps token use lower. |
| `fork` | Work that depends on the current conversation, prior tool results, or nuanced instructions already present in the requester transcript | Branches the requester transcript into the child session before the child starts. |
Use `fork` sparingly. It is for context-sensitive delegation, not a replacement
for writing a clear task prompt.
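A context-sensitive spawn that branches the requester transcript might look like the sketch below; `context: "fork"` matches the mode described above, while the `task` field name is a hypothetical placeholder:

```json
{
  "task": "Apply the refactor we just discussed to the remaining modules",
  "context": "fork"
}
```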
## Tool
Use `sessions_spawn`: