feat(channels): add dispatch modes and prompt lifecycle hooks

Add three dispatch modes for handling concurrent messages:
- steer (default): cancel current prompt and start new one
- collect: buffer messages and coalesce into follow-up prompt
- followup: queue messages for sequential processing

Introduce onPromptStart/onPromptEnd lifecycle hooks for working
indicators. These fire only when a prompt actually begins processing,
not for buffered (collect mode) or gated/blocked messages.

Refactor Telegram, WeChat, and DingTalk adapters to use the new hooks
instead of overriding handleInbound, simplifying the working indicator
pattern and ensuring correct behavior with dispatch modes.

This enables better UX for async workflows and prevents indicator
leaks when messages are buffered or cancelled.
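The hook contract can be sketched roughly as follows. Only `onPromptStart`/`onPromptEnd` come from this commit; the `WorkingIndicator` class, `sessionId` parameter, and `isWorking` helper are illustrative names, not the actual adapter API.

```typescript
// Hypothetical sketch of the lifecycle-hook contract. Only the two hook
// names are from the commit; everything else here is illustrative.
interface PromptLifecycleHooks {
  // Fires only when a prompt actually begins processing — not for
  // buffered (collect mode) or gated/blocked messages.
  onPromptStart?(sessionId: string): void;
  // Fires when the prompt finishes or is cancelled, so indicators
  // are always cleared and cannot leak.
  onPromptEnd?(sessionId: string): void;
}

// A working-indicator helper built on the hooks, instead of each
// adapter overriding handleInbound to track state itself.
class WorkingIndicator implements PromptLifecycleHooks {
  private active = new Set<string>();

  onPromptStart(sessionId: string): void {
    this.active.add(sessionId); // e.g. show a "typing…" indicator
  }

  onPromptEnd(sessionId: string): void {
    this.active.delete(sessionId); // clear the indicator
  }

  isWorking(sessionId: string): boolean {
    return this.active.has(sessionId);
  }
}
```

Because the indicator is driven entirely by the hooks, a message that is buffered or cancelled never turns the indicator on in the first place.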
tanzhenxin 2026-03-28 06:19:02 +00:00
parent 9fc2abbed2
commit 7251da0152
10 changed files with 649 additions and 124 deletions


@@ -47,23 +47,24 @@ Channels are configured under the `channels` key in `settings.json`. Each channe
### Options
| Option | Required | Description |
| ------------------------ | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| `type` | Yes | Channel type: `telegram`, `weixin`, `dingtalk`, or a custom type from an extension (see [Plugins](./plugins)) |
| `token` | Telegram | Bot token. Supports `$ENV_VAR` syntax to read from environment variables. Not needed for WeChat or DingTalk |
| `clientId` | DingTalk | DingTalk AppKey. Supports `$ENV_VAR` syntax |
| `clientSecret` | DingTalk | DingTalk AppSecret. Supports `$ENV_VAR` syntax |
| `model` | No | Model to use for this channel (e.g., `qwen3.5-plus`). Overrides the default model. Useful for multimodal models that support image input |
| `senderPolicy` | No | Who can talk to the bot: `allowlist` (default), `open`, or `pairing` |
| `allowedUsers` | No | List of user IDs allowed to use the bot (used by `allowlist` and `pairing` policies) |
| `sessionScope` | No | How sessions are scoped: `user` (default), `thread`, or `single` |
| `cwd` | No | Working directory for the agent. Defaults to the current directory |
| `instructions` | No | Custom instructions prepended to the first message of each session |
| `groupPolicy` | No | Group chat access: `disabled` (default), `allowlist`, or `open`. See [Group Chats](#group-chats) |
| `groups` | No | Per-group settings. Keys are group chat IDs or `"*"` for defaults. See [Group Chats](#group-chats) |
| `dispatchMode` | No | What happens when you send a message while the bot is busy: `steer` (default), `collect`, or `followup`. See [Dispatch Modes](#dispatch-modes) |
| `blockStreaming` | No | Progressive response delivery: `on` or `off` (default). See [Block Streaming](#block-streaming) |
| `blockStreamingChunk` | No | Chunk size bounds: `{ "minChars": 400, "maxChars": 1000 }`. See [Block Streaming](#block-streaming) |
| `blockStreamingCoalesce` | No | Idle flush: `{ "idleMs": 1500 }`. See [Block Streaming](#block-streaming) |
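Putting a few of these options together, a minimal channel entry might look like the following (the option names and value shapes are from the table above; the channel name, token variable, and user ID are placeholders):

```json
{
  "channels": {
    "my-channel": {
      "type": "telegram",
      "token": "$TELEGRAM_BOT_TOKEN",
      "senderPolicy": "allowlist",
      "allowedUsers": ["123456789"],
      "sessionScope": "user"
    }
  }
}
```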
### Sender Policy
@@ -222,6 +223,37 @@ Files work with any model — no multimodal support required.
| Files | Direct download via Bot API (20MB limit) | CDN download with AES decryption | downloadCode API (two-step) |
| Captions | Photo/file captions included as message text | Not applicable | Rich text: mixed text + images in one message |
## Dispatch Modes
The `dispatchMode` option controls what happens when you send a new message while the bot is still processing a previous one.
- **`steer`** (default) — The bot cancels the current request and starts working on your new message. Best for normal chat, where a follow-up usually means you want to correct or redirect the bot.
- **`collect`** — Your new messages are buffered. When the current request finishes, all buffered messages are combined into a single follow-up prompt. Good for async workflows where you want to queue up thoughts.
- **`followup`** — Each message is queued and processed as its own separate turn, in order. Useful for batch workflows where each message is independent.
```json
{
"channels": {
"my-channel": {
"type": "telegram",
"dispatchMode": "steer",
...
}
}
}
```
You can also set dispatch mode per group, overriding the channel default:
```json
{
"groups": {
"*": { "requireMention": true, "dispatchMode": "steer" },
"-100123456": { "dispatchMode": "collect" }
}
}
```
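The three modes' queueing rules described above can be modeled as a small simulation. This is not the channel runtime's scheduler, just an illustrative sketch of the semantics: `send` returns the prompt that starts (or `null` if the message was deferred), and `finish` returns the next prompt to run when the current one completes.

```typescript
type DispatchMode = "steer" | "collect" | "followup";

// Illustrative model of dispatch-mode semantics; all names are
// hypothetical, not the real scheduler API.
class Dispatcher {
  private busy = false;
  private buffer: string[] = []; // collect: coalesced into one follow-up
  private queue: string[] = [];  // followup: one turn per message
  cancelled = false;

  constructor(private mode: DispatchMode) {}

  // Returns the prompt that begins processing now, or null if deferred.
  send(msg: string): string | null {
    if (!this.busy) {
      this.busy = true;
      return msg; // idle: the prompt starts immediately
    }
    switch (this.mode) {
      case "steer": // cancel the current request, start on the new one
        this.cancelled = true;
        return msg;
      case "collect": // buffer; no new prompt starts yet
        this.buffer.push(msg);
        return null;
      case "followup": // queue as its own separate turn
        this.queue.push(msg);
        return null;
    }
  }

  // Called when the current request finishes; returns the next prompt.
  finish(): string | null {
    if (this.buffer.length > 0) {
      const combined = this.buffer.join("\n"); // one coalesced follow-up
      this.buffer = [];
      return combined;
    }
    if (this.queue.length > 0) {
      return this.queue.shift()!; // next independent turn, in order
    }
    this.busy = false;
    return null;
  }
}
```

In this model, `collect` produces exactly one follow-up prompt no matter how many messages were buffered, while `followup` replays each message as its own turn.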
## Block Streaming
By default, the agent works for a while and then sends one large response. With block streaming enabled, the response arrives as multiple shorter messages while the agent is still working — similar to how ChatGPT or Claude show progressive output.
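Combining the three block-streaming options from the table above, an enabled configuration might look like this (the channel name is a placeholder; the numeric values are the documented examples):

```json
{
  "channels": {
    "my-channel": {
      "type": "telegram",
      "blockStreaming": "on",
      "blockStreamingChunk": { "minChars": 400, "maxChars": 1000 },
      "blockStreamingCoalesce": { "idleMs": 1500 }
    }
  }
}
```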