Commit graph

188 commits

Author SHA1 Message Date
Fu Yuchen
93cbad24b1
fix(core): preserve reasoning_content during session resume and active sessions (GH#3579) (#3590)
* fix(core): preserve reasoning_content during session resume and active sessions (GH#3579)

* chore(core): remove dead thinkingThresholdMinutes config after latch removal (GH#3579)
2026-04-24 17:49:05 +08:00
顾盼
aeeb2976d6
feat(web-search): remove built-in web_search tool, replace with MCP-based approach (#3502)
* feat(web-search): add GLM (ZhipuAI) web search provider

- Add GlmProvider class implementing BaseWebSearchProvider using the
  ZhipuAI Web Search API (https://open.bigmodel.cn/api/paas/v4/web_search)
- Support multiple search engines: search_std, search_pro, search_pro_sogou,
  search_pro_quark
- Support optional config: maxResults, searchIntent, searchRecencyFilter,
  contentSize, searchDomainFilter
- Truncate query to 70 characters per API limit
- Register 'glm' in the provider discriminated union (types.ts) and
  createProvider() switch (index.ts)
- Add GlmProviderConfig to settingsSchema, ConfigParams, and Config class
- Add --glm-api-key CLI flag and GLM_API_KEY env var support in webSearch.ts
- Forward GLM_API_KEY in sandbox environment
- Update provider priority list: Tavily > Google > GLM > DashScope
- Add 17 unit tests for GlmProvider and 4 integration tests in index.test.ts
- Update docs/developers/tools/web-search.md with GLM configuration,
  env vars, CLI args, pricing, and corrected DashScope billing info
- Fix stale OAuth/free-tier references in web-search.md

Closes #3496

* docs(web-search): fix DashScope note and add GLM server-side limitations

* fix(web-search): make DashScope provider work with standard API key, remove qwen-oauth dependency

- DashScopeProvider.isAvailable() now checks config.apiKey instead of authType
- Remove OAuth credential file reading and resource_url requirement
- Use standard DashScope endpoint: dashscope.aliyuncs.com/api/v1/indices/plugin/web_search
- Read DASHSCOPE_API_KEY env var and --dashscope-api-key CLI flag
- Forward DASHSCOPE_API_KEY into sandbox environment
- Update integration test to detect DASHSCOPE_API_KEY
- Update docs to reflect new API key based configuration

* feat(web-search): remove built-in web search tool

The web_search tool and all related provider implementations are removed.
Web search functionality will be provided via MCP integrations instead,
which is the direction the broader agent ecosystem is moving.

Removed:
- packages/core/src/tools/web-search/ (entire directory)
- packages/cli/src/config/webSearch.ts
- integration-tests/cli/web_search.test.ts
- ToolNames.WEB_SEARCH, ToolErrorCode.WEB_SEARCH_FAILED
- webSearch config in ConfigParams, Config class, settingsSchema
- CLI options: --tavily-api-key, --google-api-key, --google-search-engine-id,
  --glm-api-key, --dashscope-api-key, --web-search-default
- Sandbox env forwarding for TAVILY/GLM/DASHSCOPE/GOOGLE search keys
- web_search from rule-parser, permission-manager, speculation gate,
  microcompact tool set, and builtin-agents tool list

* fix: remove websearch reference

* docs: remove websearch tool

* docs: add breaking-change guide

* address review feedback
2026-04-24 11:29:02 +08:00
顾盼
2710bdec0d
feat(cli): Phase 2 — slash command multi-mode expansion, ACP fixes, and UX improvements (#3377)
* refactor(cli): replace slash command whitelist with capability-based filtering (Phase 1)

## Summary

Replace the hardcoded ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE whitelist with a
unified, capability-based command metadata model. This is Phase 1 of the slash
command architecture refactor described in docs/design/slash-command/.

## Key changes

### New types (types.ts)
- Add ExecutionMode ('interactive' | 'non_interactive' | 'acp')
- Add CommandSource ('builtin-command' | 'bundled-skill' | 'skill-dir-command' |
  'plugin-command' | 'mcp-prompt')
- Add CommandType ('prompt' | 'local' | 'local-jsx')
- Extend SlashCommand interface with: source, sourceLabel, commandType,
  supportedModes, userInvocable, modelInvocable, argumentHint, whenToUse,
  examples (all optional, backward-compatible)

### New module (commandUtils.ts + commandUtils.test.ts)
- getEffectiveSupportedModes(): 3-priority inference
  (explicit supportedModes > commandType > CommandKind fallback)
- filterCommandsForMode(): replaces filterCommandsForNonInteractive()
- 18 unit tests
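The 3-priority inference can be sketched roughly as follows. This is a hedged reconstruction from the annotations described in this commit, not the actual commandUtils.ts implementation; the exact mode sets per commandType are assumptions.

```typescript
// Hedged sketch of getEffectiveSupportedModes; mode sets per commandType
// are assumptions pieced together from this commit's description.
type ExecutionMode = 'interactive' | 'non_interactive' | 'acp';
type CommandType = 'prompt' | 'local' | 'local-jsx';

interface CommandMeta {
  supportedModes?: ExecutionMode[]; // priority 1: explicit declaration
  commandType?: CommandType;        // priority 2: inferred from type
}

const ALL_MODES: ExecutionMode[] = ['interactive', 'non_interactive', 'acp'];

function getEffectiveSupportedModes(cmd: CommandMeta): ExecutionMode[] {
  if (cmd.supportedModes) return cmd.supportedModes;  // explicit wins
  if (cmd.commandType === 'prompt') return ALL_MODES; // prompt commands run anywhere
  // Priority 3: conservative CommandKind fallback — local / local-jsx /
  // unannotated built-ins stay interactive-only unless they opt in.
  return ['interactive'];
}
```

This shape also explains the MCP-prompt bug fix below: once MCP prompts carry commandType='prompt', they clear the second check instead of falling through to the interactive-only fallback.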

### Whitelist removal (nonInteractiveCliCommands.ts)
- Remove ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE constant
- Remove filterCommandsForNonInteractive() function
- Replace with CommandService.getCommandsForMode(mode)

### CommandService enhancements (CommandService.ts)
- Add getCommandsForMode(mode: ExecutionMode): filters by mode, excludes hidden
- Add getModelInvocableCommands(): reserved for Phase 3 model tool-call use

### Built-in command annotations (41 files)
Annotate every built-in command with commandType:
- commandType='local' + supportedModes all-modes: btw, bug, compress, context,
  init, summary (replaces the 6-command whitelist)
- commandType='local' interactive-only: export, memory, plan, insight
- commandType='local-jsx' interactive-only: all remaining ~31 commands

### Loader metadata injection (4 files)
Each loader stamps source/sourceLabel/commandType/modelInvocable on every
command it emits:
- BuiltinCommandLoader: source='builtin-command', modelInvocable=false
- BundledSkillLoader: source='bundled-skill', commandType='prompt',
  modelInvocable=true
- command-factory (FileCommandLoader): source per extension/user origin,
  commandType='prompt', modelInvocable=!extensionName
- McpPromptLoader: source='mcp-prompt', commandType='prompt', modelInvocable=true

### Bug fix
MCP_PROMPT commands were incorrectly excluded from non-interactive/ACP modes by
the old whitelist logic. commandType='prompt' now correctly allows them in all
modes.

### Session.ts / nonInteractiveHelpers.ts
- ACP session calls getAvailableCommands with explicit 'acp' mode
- Remove allowedBuiltinCommandNames parameter from buildSystemMessage() —
  capability filtering is now self-contained in CommandService

* fix test ci

* feat(cli): Phase 2 slash command expansion + ACP fixes + UX improvements

Phase 2.1 - Command mode expansion:
- Extend 13 built-in commands to support non_interactive/acp modes
- A class: export, plan, statusline - supportedModes only
- A+ class: language, copy, restore - add non-interactive branches
- A' class: model, approvalMode - handle dialog paths in non-interactive
- B class: about, stats, insight, docs, clear - full non-interactive branches
- context: format output as readable Markdown instead of raw JSON
- export: use HTML as default format when no subcommand given

Phase 2.2 - SkillTool integration:
- SkillTool now consumes CommandService.getModelInvocableCommands()

Phase 2.3 - Mid-input slash ghost text:
- Replace mid-input dropdown completion with inline ghost text
- Match Claude Code behavior: gray dimmed completion hint in input box
- Tab accepts the ghost text completion
- Add findMidInputSlashCommand() and getBestSlashCommandMatch() utilities

ACP session bug fixes:
- Fix executionMode undefined in interactive mode (slashCommandProcessor)
- Fix slash command output not visible in Zed (use emitAgentMessage)
- Fix newline rendering in Zed (Markdown hard line-break)
- Fix history replay merging consecutive user messages (recordSlashCommand)
- Fix /clear not clearing model context (dynamic chat reference)

* feat: inline complete only for modelInvocable

* fix memory command

* fix: pass 'non_interactive' mode explicitly to getAvailableCommands

- Fix critical bug in nonInteractiveHelpers.ts: loadSlashCommandNames was
  calling getAvailableCommands without specifying mode, causing it to default
  to 'acp' instead of 'non_interactive'. Commands with supportedModes that
  include 'non_interactive' but not 'acp' would be silently excluded.
- Apply the same fix in systemController.ts for the same reason.
- Update test mock to delegate filtering to production filterCommandsForMode()
  instead of duplicating the logic inline, preventing divergence.

Fixes review comments by wenshao and tanzhenxin on PR #3283.

* fix: resolve TypeScript type error in nonInteractiveHelpers.test.ts

* fix test ci

* fix mcp prompt in skill manager

* revert pr#3345

* fix test ci

* feat(cli): adapt /insight for non_interactive mode with message return

- non_interactive: run generateStaticInsight() synchronously with no-op
  progress callback, return { type: 'message' } with output path
- acp: keep existing stream_messages path with progress streaming
- interactive: unchanged

Add tests for non_interactive success and error paths.

Update phase2-technical-design.md and roadmap.md to reflect the
three-way mode split and clarify that MCP prompts do not need
modelInvocable (they are called via the native MCP tool-call mechanism).

* fix(cli): ghost text only shown when cursor is at end of slash token

Use strict equality (!==) instead of > in findMidInputSlashCommand so that
ghost text is only computed and Tab-accepted when the cursor sits exactly at
the trailing edge of the partial command token.

Previously, with the cursor inside an already-typed token (e.g. /re|view),
the ghost text suffix would still be shown and pressing Tab would insert it
at the cursor position, producing a duplicated tail. Using strict equality
makes ghost text disappear as soon as the cursor moves inside the token.

Add unit tests for findMidInputSlashCommand covering cursor-at-end,
cursor-inside-token, cursor-past-token, start-of-line, and
no-space-before-slash cases.
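The cursor check can be sketched as below. The function name and return shape are illustrative, not the actual findMidInputSlashCommand signature; only the strict-equality rule is taken from the commit.

```typescript
// Hypothetical sketch of the "cursor at trailing edge" rule. Returns the
// ghost-text suffix to display, or null when no ghost text should show.
function ghostSuffix(
  text: string,
  cursor: number,
  commands: string[],
): string | null {
  // Walk back from the cursor to find a '/' starting a command token.
  let slash = cursor - 1;
  while (slash >= 0 && /[A-Za-z-]/.test(text[slash])) slash--;
  if (slash < 0 || text[slash] !== '/') return null;
  // The slash must be at line start or preceded by whitespace.
  if (slash > 0 && !/\s/.test(text[slash - 1])) return null;
  // Walk forward to find where the typed token actually ends.
  let end = slash + 1;
  while (end < text.length && /[A-Za-z-]/.test(text[end])) end++;
  // Strict equality: only offer ghost text when the cursor sits exactly
  // at the trailing edge of the token (inside the token → no ghost text).
  if (cursor !== end) return null;
  const partial = text.slice(slash + 1, end);
  if (partial.length === 0) return null;
  const match = commands.find((c) => c.startsWith(partial) && c !== partial);
  return match ? match.slice(partial.length) : null;
}
```

With `/re|view` (cursor inside the token), `end` is past the cursor, so the strict-equality check returns null and Tab cannot insert a duplicated tail.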

* fix(cli): support /model <model-id> in non-interactive and ACP modes

Previously, /model <model-id> (without --fast) fell through to the
non-interactive branch that only returned the current model info and
incorrectly told users to use --fast. Now:

- /model <model-id>  → sets the main model via settings + config.setModel()
- /model             → shows current model with correct usage hint
- /model --fast <id> → unchanged (sets fast model)

Fixes the inconsistency flagged in PR review: the help text said to use
'/model <model-id>' but the command returned a dialog action which is
unsupported in non-interactive mode.

* fix(cli): declare supportedModes on doctorCommand to enable non-interactive and ACP

The command's action already had non-interactive handling (returns a JSON
message with check results), but without supportedModes declared the
BUILT_IN fallback restricted it to interactive-only so it was never
registered in non_interactive or acp sessions.

* feat(skills): add SkillCommandLoader for user/project/extension skills as slash commands

- New SkillCommandLoader loads user, project, and extension level SKILL.md
  files as slash commands (previously only bundled skills were slash-invocable)
- Extension skills follow plugin-command rules: modelInvocable only when
  description or whenToUse is present
- User/project skills are always modelInvocable (matching bundled behavior)
- skill-manager now injects extensionName when loading extension-level skills
- Add when_to_use and disable-model-invocation frontmatter support to SKILL.md
  and .md command files (SkillConfig, markdown-command-parser, command-factory,
  BundledSkillLoader, FileCommandLoader)
- SkillTool filters out skills with disableModelInvocation and includes
  whenToUse in the skill description shown to the model
- 16 unit tests for SkillCommandLoader covering all cases

* docs: update phase2 design doc to reflect final decisions on plan/statusline/copy/restore

These four commands are intentionally kept as interactive-only by design:
- /plan and /statusline: tightly coupled with interactive multi-turn UI
- /copy and /restore: clipboard and snapshot restore are inherently interactive

Update design doc classification table, section 4.2, 4.3, 5.2, 5.3,
file change summary, test requirements, behavior analysis table,
and implementation batch descriptions to reflect this decision.

* feat(cli): re-implement slashCommands.disabled denylist based on current refactored code

Adapts the feature originally introduced in pr#3445 to the current
CommandService / Phase-2 refactored code.

Sources (merged, de-duplicated, case-insensitive):
  - settings key slashCommands.disabled (string[], UNION merge)
  - --disabled-slash-commands CLI flag (comma-separated or repeated)
  - QWEN_DISABLED_SLASH_COMMANDS environment variable

Enforcement points:
  - CommandService.create() accepts optional disabledNames: ReadonlySet<string>
    and removes matching commands post-rename, so disabled commands never appear
    in autocomplete, mid-input ghost text, or model-invocable commands list.
  - slashCommandProcessor (interactive TUI) passes the denylist to
    CommandService.create so disabled commands are absent from dropdown/ghost text.
  - nonInteractiveCliCommands.handleSlashCommand() keeps allCommands unfiltered
    to distinguish disabled vs unknown; disabled commands return unsupported with
    a "disabled by the current configuration" reason (not no_command).
  - getAvailableCommands() (ACP) passes the denylist to CommandService.create.

Config plumbing:
  - core/Config: ConfigParameters.disabledSlashCommands + getDisabledSlashCommands()
  - cli/config: CliArgs.disabledSlashCommands + yargs option + loadCliConfig merge
  - settingsSchema: slashCommands.disabled (MergeStrategy.UNION)
  - settings.schema.json: regenerated

Tests: 28 pass (CommandService x4, nonInteractiveCliCommands x3 new cases)
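The three-source merge above can be sketched as one helper. The source names come from the commit text; the helper itself and its signature are hypothetical.

```typescript
// Hedged sketch: merge settings, CLI flags, and env var into one
// case-insensitive, de-duplicated denylist set.
function buildDisabledSet(
  settingsList: string[], // slashCommands.disabled (UNION-merged)
  cliValues: string[],    // --disabled-slash-commands, repeated or comma-separated
  envValue: string,       // QWEN_DISABLED_SLASH_COMMANDS
): ReadonlySet<string> {
  const names = [
    ...settingsList,
    ...cliValues.flatMap((v) => v.split(',')),
    ...envValue.split(','),
  ];
  // Lowercase for case-insensitive matching; drop empty entries.
  return new Set(
    names.map((n) => n.trim().toLowerCase()).filter((n) => n.length > 0),
  );
}
```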

* feat(cli): complete slashCommands.disabled coverage from pr#3445

Fill in the three items that were missing from the initial re-implementation:

- packages/cli/src/config/settings.test.ts: add UNION-merge test for
  slashCommands.disabled across user and workspace scopes
- packages/cli/src/nonInteractiveCli.test.ts: add getDisabledSlashCommands
  mock to the shared mockConfig fixture
- docs/users/configuration/settings.md: add slashCommands section (table +
  example + note) and --disabled-slash-commands row in the CLI args table

* fix(cli): match disabled slash commands by alias as well as primary name

The denylist previously only checked cmd.name (the primary/canonical name),
so disabling a command by its alias (e.g. 'about' for the 'status' command)
had no effect. Fix both CommandService.create() and the isDisabled() helper
in nonInteractiveCliCommands.ts to also check altNames.

Also improve the user-facing error message to show the token the user actually
typed (e.g. /about) instead of always showing the primary name (/status).
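The alias-aware check can be sketched as follows. The isDisabled name and the name/altNames fields follow the commit text; the rest (including the assumption that the denylist holds lowercase entries) is illustrative.

```typescript
// Sketch of alias-aware denylist matching.
interface SlashCmd {
  name: string;        // primary/canonical name, e.g. 'status'
  altNames?: string[]; // aliases, e.g. ['about']
}

function isDisabled(cmd: SlashCmd, disabled: ReadonlySet<string>): boolean {
  // Match case-insensitively against the primary name AND every alias,
  // so disabling 'about' also disables the 'status' command it aliases.
  return [cmd.name, ...(cmd.altNames ?? [])].some((n) =>
    disabled.has(n.toLowerCase()),
  );
}
```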
2026-04-22 19:12:44 +08:00
Shaojin Wen
d71f2fab70
feat(cli): cap inline shell output with configurable line limit (#3508)
* feat(cli): cap inline shell output with configurable line limit

Long-running shell commands (npm install, find /, build logs) currently
fill the viewport with the full visible PTY buffer (up to availableHeight,
~24 lines on a typical terminal). The output dominates the screen and
pushes prior context off the top.

This caps inline ANSI shell output to a small window (default 5 lines,
matching Claude Code's ShellProgressMessage). The hidden line count is
already surfaced via the existing `+N lines` indicator in
`ShellStatsBar`, so users still know how much was elided.

The cap applies only when nothing in the existing escape-hatch set is
true:
  - `forceShowResult` (errors, !-prefix user-initiated commands,
    tools awaiting confirmation, agents pending confirmation)
  - `isThisShellFocused` (ctrl+f focus on a running embedded PTY shell)
  - `ui.shellOutputMaxLines = 0` (user opt-out)

Also adds a new `ui.shellOutputMaxLines` setting (default 5) so users
can adjust or disable the cap. The SettingsDialog renders it
automatically via the existing `type: 'number'` schema path.

Notes on scope:
  - Only the `'ansi'` display branch is capped. `'string'`, `'diff'`,
    `'todo'`, `'plan'`, `'task'` renderers are untouched.
  - `AnsiOutputDisplay` is only produced by shell tools (`shell.ts`,
    `shellCommandProcessor.ts`), so other tool outputs are unaffected.
  - The `+N lines` count is bounded by the headless xterm buffer height
    (~30 rows) — a pre-existing limitation of the buffer-based stats,
    not introduced here.

Tests:
  - 4 new ToolMessage tests cover cap default, forceShowResult bypass,
    settings disable (cap=0), and custom cap value.
  - The existing `MockAnsiOutputText` / `MockShellStatsBar` mocks were
    extended to print `availableTerminalHeight` / `displayHeight` so
    the cap behavior is asserted at the prop level.

* fix(cli): apply shell output cap to completed string display too

Initial PR caught only the streaming ANSI branch. AI shell tools emit
the final completed result through `shell.ts:returnDisplayMessage =
result.output`, which is a plain string. That string went through
`StringResultRenderer` with the unmodified `availableHeight`, so the
cap was effectively bypassed for the steady-state display the user
actually sees most of the time.

Verified manually in tmux: a `seq 1 30` invocation by the AI now
collapses to "first 26 lines hidden ... 27 28 29 30" instead of
listing all 30 rows. `!`-prefix `seq 1 30` still expands fully via
the existing `isUserInitiated → forceShowResult` bypass.

Changes:
  - Detect shell tool by name (matches existing `SHELL_COMMAND_NAME` /
    `SHELL_NAME` checks already used in this file)
  - Rename `ansiAvailableHeight` → `shellCapHeight` since it now
    governs the string branch as well
  - Pass `shellCapHeight` to `StringResultRenderer`; the value
    falls back to `availableHeight` for non-shell tools so other
    tools' string output is unaffected
  - Two new tests: shell completed string is capped; non-shell
    string is not
  - Two existing tests updated to use `name="Shell"` so they actually
    exercise the cap path (would previously have passed by accident
    since the original code didn't check tool name)

Also picks up the auto-regenerated VSCode IDE companion settings
schema entry for `ui.shellOutputMaxLines`.

* fix(cli): symmetrize ANSI/string row counts and clamp shell cap input

Addresses two non-blocking review observations on #3508.

Off-by-one between paths:
  MaxSizedBox reserves one row for its overflow banner when content
  exceeds maxHeight (visibleContentHeight = max - 1). The ANSI path
  pre-slices to N in AnsiOutputText so MaxSizedBox sees exactly N
  rows and renders all N — plus the separate ShellStatsBar line.
  The string path passes the raw cap and lets MaxSizedBox handle
  overflow, so it shows N-1 content rows + the banner.

  Result with cap=5: ANSI showed 5+stats, string showed 4+banner.
  Pass shellCapHeight + 1 to StringResultRenderer when capping so
  both paths render N visible content rows. Verified in tmux: the
  completed Shell tool box now reports `... first 25 lines hidden ...`
  followed by lines 26-30 (was 26 + lines 27-30).
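The off-by-one can be modeled in a few lines. This is a toy model: MaxSizedBox's behavior is paraphrased from the commit, not taken from the real component.

```typescript
// Toy model: when content overflows, MaxSizedBox reserves one row for its
// "... first N lines hidden ..." banner.
function visibleContentRows(totalRows: number, maxHeight: number): number {
  return totalRows <= maxHeight ? totalRows : maxHeight - 1;
}

const cap = 5;
const ansiShown = visibleContentRows(cap, cap);      // ANSI path pre-slices to cap
const stringBefore = visibleContentRows(30, cap);    // raw cap: one row lost to the banner
const stringAfter = visibleContentRows(30, cap + 1); // the fix: pass cap + 1
```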

Setting validation:
  Schema accepts any number; the dialog only rejects NaN. Negatives
  silently disabled the cap (only 0 is documented as off) and
  fractional values produced fractional slice counts. Added
  Math.max(0, Math.floor(value || 0)) at the use site so:
   - negatives → 0 → cap disabled (matches the documented opt-out)
   - fractions → floor → whole-row cap
   - non-numeric (raw settings.json edits) → 0 → cap disabled
  Schema-level minimum/integer constraints aren't supported by the
  current settings infrastructure (no other number setting uses
  them either), so the guard lives at the use site.
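The use-site guard amounts to the following (the wrapping function is hypothetical; the clamp expression is from the commit):

```typescript
// Normalize ui.shellOutputMaxLines at the use site, since the settings
// schema cannot express minimum/integer constraints.
function normalizeShellCap(value: unknown): number {
  const n = typeof value === 'number' ? value : 0; // non-numeric → 0 → cap disabled
  return Math.max(0, Math.floor(n || 0));          // negatives → 0; fractions → floor; NaN → 0
}
```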

Tests:
  - Updated string-cap test to assert lines 26-30 visible (catches
    the +1 fix; was lines 27-30 before)
  - New parameterized test covers -1, 1.5, and a non-numeric value
2026-04-22 14:37:13 +08:00
Shaojin Wen
afbb5e71db
fix(cli): rework session recap rendering and add blur threshold setting (#3482)
* feat(cli): make recap away-threshold configurable

The 5-minute blur threshold was hard-coded. Confirmed from Claude
Code's own binary (v2.1.113) that 5 minutes is their default as well
(and that they shift to 60 minutes when 1h prompt-cache is active) —
so the default stays, but expose it as
`general.sessionRecapAwayThresholdMinutes` for users who briefly
alt-tab often and don't want
recaps piling up, or who want to lower it for testing.

Non-positive / unset values fall back to the 5-minute default, so
dropping the key has the same behavior as before.

* fix(core): align recap prompt with Claude Code (1-2 sentences, ≤40 words)

The earlier "exactly one sentence, 80-char cap" was an over-correction
to a single in-the-moment ask. Walking that back: the natural shape of
"current task + next action" is two clauses, and forcing them into a
single sentence either crams them with a semicolon or drops the next
action entirely on complex sessions.

Adopt Claude Code's prompt verbatim (extracted from the v2.1.113
binary): "under 40 words, 1-2 plain sentences, no markdown. Lead with
the overall goal and current task, then the one next action. Skip
root-cause narrative, fix internals, secondary to-dos, and em-dash
tangents." Add a Chinese-budget note (~80 chars) and keep the
<recap>...</recap> wrapping that protects against reasoning-model
preambles leaking into the UI.

The sticky banner already re-measures controls height when the
recap toggles, so a 2-line render lays out cleanly.

Sweep "one-line" out of user-facing copy (settings description,
slash-command description, feature docs, design doc) so the
documentation matches the new shape.

* fix(cli): restore "one-line" in user-facing recap copy

Verified from the Claude Code v2.1.113 binary that the slash-command
description IS literally "Generate a one-line session recap now" even
though the underlying prompt allows 1-2 sentences. Claude Code is
deliberately setting a tighter user expectation than the prompt
guarantees, which keeps the surface feel "glanceable".

Mirror that asymmetry: keep the prompt at 1-2 sentences (the previous
commit) for behavioral parity, but put "one-line" back in the user-
visible copy (slash-command description, settings description, user
docs). Internal design doc keeps the accurate "1-2 sentence" wording.

* fix(cli): render recap inline in history to match Claude Code

Earlier I read the user's complaint that the recap "scrolled away" as
"the recap should be sticky above the input box," and built a sticky
banner accordingly. Disassembly of the Claude Code v2.1.113 binary
shows the actual behavior is the opposite: their away_summary is a
plain `type:"system", subtype:"away_summary"` message dispatched
through the standard message renderer (no Static, no anchor, no
flexbox pinning) — it scrolls with the conversation like every other
system message.

Tear out the sticky-banner machinery so recap matches that:

- Recap is back in the `HistoryItemWithoutId` union and `addItem`'d
  into history (both from `/recap` and from auto-trigger), so it
  serializes into session saves and behaves like every other history
  item — no special clear paths, no resume-wrapper, no layout-effect
  re-measure dance.
- `useAwaySummary` takes `addItem` again instead of a setter callback.
- `AwayRecapMessage` renders the way Claude Code does: a 2-column
  gutter with `※`, then bold "recap: " and italic content, all in
  dim color. Drop the prior `StatusMessage`-shaped layout that fused
  prefix and label into "※ recap:".
- Remove the AppContainer plumbing, the slashCommandProcessor state,
  the UIStateContext fields, the DefaultAppLayout / ScreenReader
  placement blocks, the test-utils mocks, and the noninteractive
  stub. Restore `useResumeCommand.handleResume` to a void return
  since callers no longer need the success boolean.

Sweep the design doc so the architecture diagram, files table, and
hook deps reflect the inline-history flow.

* fix(cli): dedupe back-to-back auto-recaps with no new user turns between

Two consecutive blur cycles, each over the threshold but with no new
user activity in between, would each fire their own auto-recap and
add two near-duplicate entries to history (same task, slightly
different wording from temperature-driven LLM variance). Reported
case: leaving the terminal twice while a /review of one PR was
still on screen produced two recaps both about that same review.

Add a `shouldFireRecap` gate before kicking off the LLM call:

- Need at least 3 user messages in history total (don't fire on a
  near-empty session).
- If a previous away_recap is already in history, need at least 2
  new user messages since that one before another can fire.

Same shape as Claude Code's `Ic1` gate (`Sc1=3`, `Rc1=2`). Read
history through a ref so this isn't in the effect's deps and the
effect doesn't re-run on every message.
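The gate can be sketched as below; the thresholds 3 and 2 are from the commit text, while the history-item shape is an assumption.

```typescript
// Hedged sketch of the shouldFireRecap dedup gate.
interface HistoryItem {
  type: string; // e.g. 'user', 'away_recap', ...
}

const MIN_USER_MESSAGES = 3;   // don't fire on a near-empty session (Sc1=3)
const MIN_NEW_SINCE_RECAP = 2; // require fresh user turns between recaps (Rc1=2)

function shouldFireRecap(history: HistoryItem[]): boolean {
  const userCount = history.filter((h) => h.type === 'user').length;
  if (userCount < MIN_USER_MESSAGES) return false;
  const lastRecap = history.map((h) => h.type).lastIndexOf('away_recap');
  if (lastRecap === -1) return true;
  const newUsers = history
    .slice(lastRecap + 1)
    .filter((h) => h.type === 'user').length;
  return newUsers >= MIN_NEW_SINCE_RECAP;
}
```

Two back-to-back blur cycles with no user turns in between now produce zero new user messages after the first away_recap, so the second cycle never reaches the LLM call.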

* fix(cli): type useResumeCommand.handleResume as Promise<void>

Per gemini review on #3482: the interface declared this as `() => void`
but the implementation is `async` and returns `Promise<void>`. The
mismatch silently lost the chainable promise — tests had to launder
it through `as unknown as Promise<void> | undefined` just to await.

Tighten the interface to `Promise<void>` and drop the cast in the
"closes the dialog immediately" test.

* fix(cli): persist auto-fired recap to chat recording so /resume keeps it

Per yiliang114 review on #3482: the manual `/recap` path persists across
`/resume` because the slash-command processor records every output
history item via `chatRecorder.recordSlashCommand({ phase: 'result',
outputHistoryItems })`, but the auto path called `addItem` directly
and bypassed that recorder. The result was an asymmetry where users
who triggered recap manually saw it after `/resume`, while users whose
recap fired automatically lost it.

Mirror the manual recording from useAwaySummary's `.then` callback —
record only the `result` phase (not invocation, since we don't want
a fake `> /recap` user line replayed) with the away-recap item as the
single output. Wrapped in try/catch because recap is best-effort and
must never surface a failure to the user.

Add useAwaySummary.test.ts covering:
- the recording path is taken on a successful auto-trigger
- the dedup gate (`shouldFireRecap`) suppresses the LLM call entirely,
  including the recording, when no new user turns happened since the
  last recap

* fix(cli): cast recap item via spread to satisfy strict tsc --build

CI's `tsc --build` (stricter than local `tsc --noEmit`) rejected the
direct `item as Record<string, unknown>` cast: HistoryItemAwayRecap's
literal `type: 'away_recap'` field doesn't overlap with `unknown`,
TS2352. Use the `{ ...item } as Record<string, unknown>` spread
pattern that the rest of the codebase (arenaCommand,
slashCommandProcessor's serializer) already uses for the same
SlashCommandRecordPayload field.
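The cast pattern in isolation looks like this (the type name is from the commit; the field list is trimmed for illustration):

```typescript
// A literal-typed field like type: 'away_recap' can make a direct cast to
// Record<string, unknown> fail under strict tsc --build (TS2352, per the
// commit). The spread produces a fresh object whose inferred type widens
// cleanly, so the cast is accepted.
interface HistoryItemAwayRecap {
  type: 'away_recap';
  text: string;
}

const item: HistoryItemAwayRecap = { type: 'away_recap', text: 'resume here' };

// Rejected by CI's stricter build:
//   const payload = item as Record<string, unknown>;
const payload = { ...item } as Record<string, unknown>;
```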
2026-04-21 14:39:13 +08:00
Shaojin Wen
52c7a3d0ed
fix(cli): pin /recap above input and align defaults with fastModel (#3478)
* fix(cli): pin /recap above input box and align defaults with fastModel

The recap rendered as a regular history item, so as soon as the model
streamed a new reply the "where you left off" reminder scrolled out of
view. Move it to a sticky banner anchored just above the Composer
(matching how btwItem is rendered) so it stays visible across turns.

While reworking the surface, also:
- Replace the chevron prefix with `※ recap:` so it reads as a labeled
  recap line instead of a generic dim message.
- Mirror the placement in ScreenReaderAppLayout so screen-reader users
  see it in the same logical position.
- Drop HistoryItemAwayRecap from the HistoryItemWithoutId union — it
  is no longer addItem-able, and leaving it in invited silent no-op
  bugs where addItem(awayRecap) would compile but render nothing.
- Clear the banner on /clear, /reset, /new and on /resume into a
  different session, so a recap from a previous context doesn't bleed
  into a freshly started one.
- Re-measure the controls box when the banner appears or disappears
  (its height changes by a couple of lines) so the main content area
  recomputes availableTerminalHeight and stays laid out correctly.

Auto-trigger now defaults to "on iff fastModel is configured" rather
than unconditionally on. Running an ambient background recap on the
main coding model is too costly and slow to be a sane default; tying
it to fastModel means the feature is silently opt-in for users who
have set up a cheap fast model. An explicit `general.showSessionRecap`
override still wins either way, and `/recap` itself is unaffected.

Sharpen the slash-command description to match the new behavior.

* fix(core): silence AbortSignal listener-leak warning in OpenAI pipeline

Every chat.completions.create call wires up an abort listener on the
incoming AbortSignal, and several layers — retryWithBackoff, the
LoggingContentGenerator wrapper, the SDK's own internal stream/fetch
plumbing — register their own listeners against the same signal. Five
retry attempts plus those layers comfortably exceed Node's default
10-listener cap and produce a MaxListenersExceededWarning. With
features that share or compose signals (e.g., recap + followup
speculation firing on the same response cycle), even a higher cap
gets blown past.

The signals here are per-request and short-lived, so the accumulation
is structural rather than a real memory leak — they get GC'd as soon
as the request settles. setMaxListeners(0, signal) at the SDK boundary
disables the warning for these specific signals only, without masking
any genuine leak elsewhere in the process. Idempotent and confined to
the one place where retry-bound API calls cross into the SDK.

* fix(core): tighten recap to a single sentence within 80 chars

The 1-3 sentence budget reliably wrapped onto two lines in the sticky
banner above the input box, which made it visually heavy for what is
supposed to be a glanceable reminder. Constrain the prompt to exactly
one sentence with a hard 80-char cap, and merge the "high-level task
+ next step" rule into a single sentence instead of two adjacent ones.

Also sweep the docs (settings, commands, design) so the user-facing
copy and the internal design notes match the new format.

* fix(cli): apply review feedback for recap PR

Two issues from review:

- The schema description for `general.showSessionRecap` still said
  "1-3 sentence summary" while the prompt, docs, and slash-command
  copy already say "one-line". Aligns the text in settingsSchema.ts
  and the regenerated VSCode JSON schema.

- The /resume wrapper cleared the sticky recap synchronously, before
  the inner handler had a chance to discover that no session data
  was available. On a no-op resume the user would still lose the
  current recap. Make `useResumeCommand.handleResume` return
  Promise<boolean> reporting whether a session actually loaded, and
  only clear the recap on a confirmed switch.

* fix(cli): default showSessionRecap to false and drop fastModel heuristic

The earlier "enabled iff fastModel is configured" default made it hard
for users to answer the simple question "is auto-recap on for me right
now?" — the answer depended on a setting from a different category,
and setting/unsetting fastModel silently changed recap behavior.

Revert to a plain boolean with a conservative off-by-default:

- Auto-trigger fires only when the user explicitly sets
  `general.showSessionRecap: true`.
- Manual `/recap` keeps working regardless (that's a user-initiated
  call, not an ambient one).
- Users never get ambient LLM calls billed to their main coding model
  without having opted in.

Aligns settings.md, design doc, and the regenerated JSON schema.
2026-04-20 23:58:19 +08:00
ihubanov
0b8b3da836
feat(cli): add slashCommands.disabled setting to gate slash commands (#3445)
* feat(cli): add slashCommands.disabled setting to gate slash commands

Introduces a first-class way for operators to hide and refuse to execute
specific slash commands. Useful for multi-tenant / enterprise / sandboxed
deployments where different users should see different command subsets.

The denylist is sourced from three unioned inputs:

  * `slashCommands.disabled` settings key (string[], UNION merge), so
    workspace scopes can only add to a denylist set at user or system
    scope, never shrink it — matching the shape already used by
    `permissions.deny`.
  * `--disabled-slash-commands` CLI flag (comma-separated or repeated).
  * `QWEN_DISABLED_SLASH_COMMANDS` environment variable.

Matching is case-insensitive against the final (post-rename) command
name, so extension commands are addressable by their disambiguated
form (e.g. `firebase.deploy`). Disabled commands are removed from
`CommandService`'s output, so they disappear from autocomplete and
produce the standard unknown-command path in both interactive TUI and
non-interactive (`--prompt`) modes.
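The union-and-match rule above can be sketched as follows (function names and plumbing are illustrative, not the actual CommandService internals):

```typescript
// Union of the three denylist sources, normalized once to lowercase.
function buildDenylist(
  settings: string[],
  cliFlag: string[],
  envVar: string | undefined,
): Set<string> {
  const fromEnv = envVar ? envVar.split(',') : [];
  return new Set(
    [...settings, ...cliFlag, ...fromEnv]
      .map((name) => name.trim().toLowerCase())
      .filter((name) => name.length > 0),
  );
}

// Case-insensitive match against the final (post-rename) command name,
// so extension commands like `firebase.deploy` stay addressable.
function isDisabled(commandName: string, denylist: Set<string>): boolean {
  return denylist.has(commandName.toLowerCase());
}

const denylist = buildDenylist(['Help'], ['firebase.deploy'], 'init,HELP');
```

Because the sources are unioned into one set, a workspace scope can only grow the denylist established at user or system scope, never shrink it.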

The scope of this change is slash commands only: it does not affect
tool permissions (still `permissions.deny`) or keyboard shortcuts.

* chore(cli): regenerate settings.schema.json for slashCommands.disabled

Regenerates the companion JSON schema consumed by the VS Code extension
after adding the `slashCommands.disabled` entry to the TS schema in the
previous commit. Required by the "Check settings schema is up-to-date"
CI lint step.

* fix(cli): route disabled slash commands to unsupported, not no_command

handleSlashCommand was passing the disabled denylist straight into
CommandService.create, so disabled commands disappeared from
`allCommands` too. The fallback existence check that distinguishes
"known but not allowed in non-interactive mode" from "truly unknown"
then failed, and disabled commands like `/help` fell through to
`no_command` — causing the caller to forward them to the model as
plain prompt text.

Keep `allCommands` unfiltered and apply the denylist only when
constructing the executable set and when producing the unsupported
response. A disabled command now returns `unsupported` with a
"disabled by the current configuration" reason and never reaches the
model. Added three regression tests covering the primary case,
case-insensitive match, and the preserved no_command path for
genuinely unknown input.
2026-04-20 11:06:26 +08:00
Shaojin Wen
60a6dfc14c
feat(cli): add session recap with /recap and auto-show on return (#3434)
* feat(cli): add session recap with /recap and auto-show on return

Users often open an old session days later and need to scroll through
pages to remember where they left off. This change adds a short
"where did I leave off" recap — a 1-3 sentence summary generated by
the fast model — so they can resume without re-reading the history.

Two triggers:
- /recap: manual slash command.
- Auto: when the terminal has been blurred for 5+ minutes and gets
  focused again (uses the existing DECSET 1004 focus protocol via
  useFocus). Gated on streamingState === Idle so it never interrupts
  an active turn. Only fires once per blur cycle.

The recap is rendered in dim color with a chevron prefix, visually
distinct from assistant replies. A new `general.showSessionRecap`
setting controls the auto-trigger (default on). /recap works
independently of the setting.

Implementation notes:
- generateSessionRecap uses fastModel (falls back to main model),
  tools: [], maxOutputTokens: 300, and a tight system prompt. It
  strips tool calls / responses from history before sending — tool
  responses can hold 10K+ tokens of file content that drown the recap
  in irrelevant detail. The 30-message window respects turn boundaries
  (slice never starts on a dangling model/tool response).
- Output is wrapped in <recap>...</recap> tags; the extractor returns
  empty (skips render) if the tag is missing, preventing model
  reasoning from leaking into the UI.
- All failures are silent (return null) and logged via a scoped
  debugLogger; recap is best-effort and must never break main flow.
- /recap refuses to run while a turn is pending.
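The tag-based extraction described above reduces to something like this (an illustrative reduction — the real extractor in the CLI package has more tiers):

```typescript
// Return the recap text between <recap>...</recap>, or '' when the tag
// is absent. An empty result means "skip rendering", so stray model
// reasoning outside the tags never reaches the UI.
function extractRecap(raw: string): string {
  const match = raw.match(/<recap>([\s\S]*?)<\/recap>/);
  return match ? match[1].trim() : '';
}
```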

* fix(cli): abort in-flight recap when showSessionRecap is disabled

If the user disables showSessionRecap while an auto-recap LLM call is
already in flight, the previous code returned early without aborting.
The pending .then would still pass its idle/abort guards and append the
recap, producing an unwanted message after the user has opted out.

Abort the controller and clear it eagerly so the resolved promise no
longer adds to history.

* fix(cli): gate /recap and auto-recap on streaming idle state

Two related issues from review:

1. /recap was only refusing when ui.pendingItem was set, but a normal
   model reply runs with streamingState === Responding and a null
   pendingItem. Invoking /recap mid-stream would generate a recap from
   a partial conversation and insert it between the user prompt and
   the assistant reply.

2. useAwaySummary cleared blurredAtRef before checking isIdle, so if
   focus returned during a still-streaming turn (after a >5min blur)
   the recap was permanently dropped — there was no later retry when
   the turn became idle, because isIdle was not in the effect deps.

Fixes:
- Expose isIdleRef on CommandContext.ui (mirrors btwAbortControllerRef
  pattern). Plumb it from AppContainer through useSlashCommandProcessor.
- recapCommand now refuses when isIdleRef.current is false OR
  pendingItem is non-null.
- useAwaySummary preserves blurredAtRef on the !isIdle bail and adds
  isIdle to the effect deps, so the trigger re-evaluates when the
  current turn finishes.
- Brief blurs (< AWAY_THRESHOLD_MS) still reset blurredAtRef.

Also seeds isIdleRef in nonInteractiveUi and mockCommandContext so the
new field has a sensible default outside the interactive UI.

* docs: document /recap command, showSessionRecap setting, and design

- User docs: add /recap to the Session and Project Management table in
  features/commands.md and a dedicated subsection covering manual use,
  the auto-trigger, the dim-color rendering, and the fast-model tip.
- User docs: add general.showSessionRecap row to the configuration
  settings reference.
- Design doc: docs/design/session-recap/session-recap-design.md covers
  motivation, the two trigger paths, the per-file architecture, prompt
  design with the <recap> tag and three-tier extractor, history
  filtering rationale (functionResponse can be 10K+ tokens), the
  useAwaySummary state machine, the isIdleRef gating for /recap, model
  selection, observability, and out-of-scope items.

* fix(core): exclude thought parts from session recap context

filterToDialog kept any non-empty text part, but @google/genai's Part
type also marks model reasoning with part.thought / part.thoughtSignature.
That hidden chain-of-thought was being fed to the recap LLM and could
get summarized as if it were user-visible dialogue.

Drop parts where either flag is set. Update the "History 过滤"
(History Filtering) section of the design doc to call this out
alongside the existing tool-call/response rationale.
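A minimal sketch of the filter (the `Part` shape mirrors the relevant fields of `@google/genai`'s Part type; the helper name is illustrative):

```typescript
interface Part {
  text?: string;
  thought?: boolean;
  thoughtSignature?: string;
}

// Keep only non-empty visible text; drop model reasoning marked with
// either thought flag so hidden chain-of-thought never reaches the
// recap prompt.
function filterToDialogParts(parts: Part[]): Part[] {
  return parts.filter(
    (p) =>
      typeof p.text === 'string' &&
      p.text.length > 0 &&
      !p.thought &&
      p.thoughtSignature === undefined,
  );
}
```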

* docs(session-recap): correct debug-logging guidance, fill in state machine, sharpen UX wording

Audit of the session recap docs against the implementation found three
issues worth fixing:

- Design doc claimed debug logs were enabled via a QWEN_CODE_DEBUG_LOGGING
  env var. That var does not exist; debug logs are written to
  ~/.qwen/debug/<sessionId>.txt by default, gated by QWEN_DEBUG_LOG_FILE.
  Replace with the accurate path + opt-out behavior, and tell the reader
  to grep for the [SESSION_RECAP] tag.
- Design doc's useAwaySummary state machine table was missing the
  isFocused && blurredAtRef === null path (taken on first render and
  right after a brief-blur reset). Add the row.
- User doc's "Refuses to run ... failures are silent" line conflated the
  inline-error refusal with silent generation failures, and "(when the
  conversation is idle)" used internal jargon. Split the two cases and
  spell out what "idle" means, including the wait-then-fire behavior
  when focus returns mid-turn.

* docs(session-recap): correctly describe /recap vs auto-trigger failure modes

The previous wording said "Generation/network failures are silent — the
recap simply does not appear", but recapCommand returns a user-facing
info message ("Not enough conversation context for a recap yet.") in
exactly that path, and also returns inline messages for the
config-not-loaded and busy-turn guards.

Only the auto-trigger path is truly silent (it just skips addItem when
generateSessionRecap returns null). Split the two paths in the doc so
the manual command's "always responds with something" behavior is
distinguished from the auto-trigger's no-op-on-failure behavior.

* docs(session-recap): align prompt-rules section with the actual prompt

Two doc-vs-code mismatches in the design doc's "System Prompt" section,
caught with the same lens as yiliang114's failure-mode review:

- The bullet list claimed RECAP_SYSTEM_PROMPT forbids "speculating
  about user intent" and "addressing the user as 'you'". Those rules
  existed in an early draft but were dropped when the <recap> tag
  rules were added; the current prompt has no such restrictions.
  Replace with the actual rules and add a "1:1 with
  RECAP_SYSTEM_PROMPT" marker so future edits stay in sync.
- The doc said systemInstruction "overrides" the main agent prompt.
  True for the agent prompt portion, but GeminiClient.generateContent
  internally calls getCustomSystemPrompt which appends user memory
  (QWEN.md / auto memory) as a suffix. Spell that out — the final
  system prompt is recap prompt + user memory, which is actually
  useful project context for the recap.

* docs(session-recap): translate design doc to English

The repo convention for docs/design is English (7 of 8 existing files;
auto-memory/memory-system.md is the only Chinese one). The first version
of this design doc followed the auto-memory example, which turned out
to be the wrong sample.

Translate to English while preserving the existing structure, the
state-machine table, the prompt-vs-doc 1:1 alignment, the
QWEN_DEBUG_LOG_FILE description, and the failure-mode notes added in
prior commits.

* fix(cli): drop empty info return from /recap interactive success path

The interactive success path inserts the away_recap history item
directly via ui.addItem and then returned `{type: 'message',
messageType: 'info', content: ''}`. The slash-command processor's
'message' case unconditionally calls addMessage, which adds another
HistoryItemInfo with empty text. The empty info renders as nothing
(StatusMessage early-returns null), but it still bloats the in-memory
history list and shows up in /export and saved sessions.

Return void on the interactive success path and on the abort path so
the processor's `if (result)` check skips the message-handler branch
entirely. Widen the action's return type to `void | SlashCommandActionReturn`
to match (same shape as btwCommand).
2026-04-19 21:38:48 +08:00
Shaojin Wen
4bf5bf22de
feat(cli): support refreshInterval in statusLine for periodic refresh (#3383)
* feat(cli): support refreshInterval in statusLine for periodic refresh

The statusLine (#3311) re-runs only when Agent state changes (token count,
model, git branch, etc.). Commands that display *external* data — a clock,
rate-limit counters, CI build status — have no Agent event to hook into
and go stale between messages.

Add an optional `ui.statusLine.refreshInterval` field (seconds, minimum 1)
that schedules a setInterval alongside the existing event-driven updates.
Overlap with state-change debounce is safe: `doUpdate` kills any in-flight
child and bumps the generation counter, so only the most recent output
reaches the footer.

Validation lives in `getStatusLineConfig`:
- Must be `number`, `Number.isFinite(...)`, `>= 1`
- Anything else is silently dropped (no interval scheduled)

No changes to the default behavior — configs without `refreshInterval`
behave exactly as before.
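The validation reduces to a three-part guard, sketched here with `getStatusLineConfig`'s surrounding plumbing omitted (the helper name is illustrative):

```typescript
// Accept only finite numbers >= 1 (seconds); anything else — missing,
// wrong type, NaN, Infinity, sub-1 — is silently dropped so no
// interval gets scheduled.
function parseRefreshInterval(
  raw: Record<string, unknown>,
): number | undefined {
  const value = raw['refreshInterval'];
  if (typeof value === 'number' && Number.isFinite(value) && value >= 1) {
    return value;
  }
  return undefined;
}
```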

* fix(cli): yield periodic statusLine tick when previous exec is in flight

Review feedback on #3383: with `refreshInterval: 1` and a command whose
real exec time exceeds 1s, each tick was unconditionally calling
`doUpdate()` — which kills the in-flight child and bumps the generation
counter — so the prior exec's callback was always discarded as stale.
`setOutput` was never reached and the statusline stayed empty until
`refreshInterval` was removed or the command became faster.

Guard the interval callback with an `activeChildRef` check so a pending
exec is allowed to finish. State-change triggers (model switch, token
count, branch, etc.) still go through `scheduleUpdate` → `doUpdate`
directly and legitimately preempt stale children; only the periodic
tick yields. The existing 5s exec timeout is still the hard ceiling.

Also drop the redundant `'refreshInterval' in raw` check — the `typeof
raw.refreshInterval === 'number'` guard already excludes missing /
undefined values.

Tests:
- Add regression test `'skips periodic ticks while a previous exec is
  still running'` — three ticks during one unfinished exec trigger zero
  new spawns; the next tick after callback completion does spawn.
- Update two existing tests to resolve the mount exec before expecting
  subsequent ticks (the old tests implicitly relied on the starvation
  behavior being tolerated).

* test(cli): assert user-visible lines state in starvation regression

Self-review insight: the existing `skips periodic ticks while a previous
exec is still running` test only counted `exec` calls — it confirmed the
guard prevents redundant spawns, but would have silently passed even if
the eventual callback was still being discarded as stale (which is the
actual user-visible symptom of the starvation bug).

Add `expect(result.current.lines).toEqual(['done'])` after resolving the
mount's pending callback. Without the guard, generationRef would have
bumped 3 times during the yielded ticks, the callback's captured gen
would fail the stale check, `setOutput` would never fire, and `lines`
would stay empty — now caught explicitly.

* perf(cli): dedupe statusLine output to skip unchanged Footer re-renders

Review feedback on #3383 (narrow terminal stacking): when
`refreshInterval` fires at 1s and the command output is unchanged, the
mount-and-setOutput cycle still allocates a new array and triggers a
Footer re-render. Under certain narrow-terminal conditions, Ink's
erase-line accounting mis-counts wrapped rows and stale content
accumulates on screen.

The Footer-layout root cause is in #3311's narrow-mode flex setup and
Ink's truncate semantics, which is out of scope for this PR. But we
can cut the re-render surface here by preserving the `lines` array
reference when the command produces identical output — a strict
Pareto improvement for any caller (clock-style statuslines with
second-precision still re-render; rate-limit / branch / CI-status
style statuslines that change infrequently stop triggering work every
tick).
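The dedup is a reference-preserving comparison, roughly (illustrative sketch, not the hook's exact code):

```typescript
// Return the previous array when contents are identical so React sees
// a stable reference and the Footer skips the re-render entirely.
function dedupeLines(prev: string[], next: string[]): string[] {
  if (
    prev.length === next.length &&
    prev.every((line, i) => line === next[i])
  ) {
    return prev;
  }
  return next;
}
```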

Tests:
- `preserves the same lines array reference when output is unchanged`
  asserts referential equality after a re-exec with identical stdout.
- `produces a new reference when output changes` guards against
  over-eager dedup that would miss legitimate updates.

* fix(cli): stabilize Footer rendering in narrow terminals

Narrow-terminal E2E feedback on #3383: with `refreshInterval` at 1s,
empty lines were accumulating above the input prompt each tick. Root
cause is in the Footer flex layout — originally from #3311 — where Ink
miscounts logical rows vs the physical rows the terminal actually uses.

Two adjustments, both idiomatic (used elsewhere in the repo already):

1. Left column — `minWidth={0}`. Without this, Yoga's `min-width: auto`
   default keeps the Box at its natural content width, so a statusline
   wider than the terminal doesn't engage `<Text wrap="truncate">`; the
   text renders at content-width and the terminal wraps it physically.
   `minWidth={0}` lets the column shrink so the text child can truncate
   at container width.

2. Right section — `flexWrap="wrap"`. With multiple indicators (sandbox
   label, debug badge, dream, context-usage) the row can exceed a narrow
   terminal's width. Without `flexWrap` Ink lays them out in a single
   logical row, but the terminal physically wraps to two — Ink's erase
   sequence (`\e[2K\e[1A…` per logical row) then clears one row while
   two exist, and the extra row ghosts every re-render. With `wrap` Ink
   tracks the second row explicitly and erases correctly.

Together these make the Footer's row count match between Ink's logical
view and the terminal's physical view, so frequent re-renders (as
`refreshInterval` enables) stop accumulating ghost rows.

Needs verification in a real narrow TTY — from this environment I can
reason about the flex semantics and confirm both props are supported by
Ink's Box, but actually observing ghost-row elimination requires
process.stdout.columns on a real terminal.

* Revert "fix(cli): stabilize Footer rendering in narrow terminals"

This reverts commit 9758cda85f. Reason: I could not reproduce BZ-D's
reported ghost-row stacking in tmux (40x25, 2-line statusline + real
exec + Static history + refreshInterval: 1) over 14+ ticks. Both
`minWidth={0}` and `flexWrap="wrap"` are legitimate defensive idioms,
but without a failing repro I can't verify they address the reported
bug, and I shouldn't ship a speculative layout change as "the fix".

Keeping the output-dedup commit (e1d321186) — that one is a strict
improvement regardless of the underlying Ink behavior. Will request
BZ-D's specific terminal setup and reopen with a verified fix (or
confirm the issue is specific to a particular emulator, not flex/Ink).
2026-04-19 11:12:16 +08:00
ChiGao
9e26424aa7
feat(cli): add dual-output sidecar mode for TUI (#3352)
* feat(cli): add dual-output sidecar mode for TUI

Adds an optional **dual-output** mode for the interactive TUI: while Qwen
Code keeps rendering normally on stdout, it concurrently emits a structured
JSON event stream on a second channel (--json-fd / --json-file) and
optionally watches a JSONL command file (--input-file) for prompts and
tool-permission responses written by an external program.

This unlocks programmatic embedding of the TUI from IDE extensions, web
frontends, CI agents, or automation scripts without forcing them to give
up the rich interactive UI in favor of --output-format=stream-json.

## Design

The TUI already has a battle-tested JSON event emitter
(`StreamJsonOutputAdapter`). This change makes that adapter pluggable on
its output stream and wires a small `DualOutputBridge` that forwards TUI
events to a second instance of the adapter writing to fd / file.

For tool approvals, when a tool enters awaiting_approval the bridge emits
`control_request` (subtype `can_use_tool`); whichever side resolves first
(TUI's native UI or `confirmation_response` via --input-file) wins, and a
`control_response` is mirrored back so all observers stay in sync.

`session_start` is announced once when the bridge is constructed so
consumers can correlate the channel with a session before any other event
arrives.

## CLI surface

- `--json-fd <n>` — write JSON events to fd n (n >= 3; provided via spawn
  stdio).
- `--json-file <path>` — write JSON events to a file / FIFO / /dev/fd/N.
- `--input-file <path>` — watch this file for JSONL commands.

`--json-fd` and `--json-file` are mutually exclusive. fds 0/1/2 are
rejected to prevent corrupting the TUI.

## Wire protocol

Output: existing stream-json schema with `includePartialMessages` always
enabled, plus:

- `system` / `subtype: session_start` — emitted once on bridge
  construction.
- `control_request` / `subtype: can_use_tool` — pending tool approval.
- `control_response` — final approval outcome (mirrors TUI-native or
  external resolution).

Input (--input-file):

    {"type":"submit","text":"What does this function do?"}
    {"type":"confirmation_response","request_id":"...","allowed":true}

`submit` is queued and retried when the TUI returns to idle.
`confirmation_response` is dispatched immediately — a pending tool call
is blocking and the response cannot wait behind earlier submits.
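The dispatch rule above can be sketched as follows (types and handler names are illustrative, not the actual RemoteInputWatcher API):

```typescript
type RemoteCommand =
  | { type: 'submit'; text: string }
  | { type: 'confirmation_response'; request_id: string; allowed: boolean };

// submit → queued until the TUI is idle; confirmation_response →
// dispatched immediately, since a pending tool call is blocking.
function routeLine(
  line: string,
  queue: RemoteCommand[],
  dispatchNow: (cmd: RemoteCommand) => void,
): void {
  let cmd: RemoteCommand;
  try {
    cmd = JSON.parse(line) as RemoteCommand;
  } catch {
    return; // malformed JSONL lines are tolerated and skipped
  }
  if (cmd.type === 'confirmation_response') {
    dispatchNow(cmd);
  } else if (cmd.type === 'submit') {
    queue.push(cmd);
  }
}
```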

See `docs/users/features/dual-output.md` for the full schema, latency
notes, failure modes, and a spawn example.

## What changes when the flags are absent

Nothing. The bridge and watcher are constructed only when the relevant
flags are set; otherwise the React Context providers carry `null` and
every callsite short-circuits. No overhead, no behavioral change for
existing users.

## Failure handling

- Bad fd / unopenable path → warning on stderr, dual output stays
  disabled, TUI launches normally.
- Consumer disconnect (EPIPE) → bridge silently disables itself, TUI
  keeps running.
- Any exception inside the adapter → caught, logged, bridge disabled.
  The TUI is never crashed by a dual-output failure.

## Files

New:
- packages/cli/src/dualOutput/{DualOutputBridge,DualOutputContext,index}.{ts,tsx}
- packages/cli/src/remoteInput/{RemoteInputWatcher,RemoteInputContext,index}.{ts,tsx}
- packages/cli/src/nonInteractive/io/index.ts
- docs/users/features/dual-output.md

Modified:
- packages/core/src/config/config.ts — 3 new ConfigParameters fields + getters
- packages/cli/src/config/config.ts — yargs options + mutex validation
- packages/cli/src/gemini.tsx — instantiate bridge / watcher in
  startInteractiveUI, wrap with Context Providers, register cleanup
- packages/cli/src/ui/AppContainer.tsx — connect RemoteInput to
  submitQuery, bridge tool confirmations
- packages/cli/src/ui/hooks/useGeminiStream.ts — call
  dualOutput?.processEvent(...) at five existing event points
- packages/cli/src/nonInteractive/io/{Base,Stream}JsonOutputAdapter.ts —
  StreamJsonOutputAdapter accepts an injected output stream; base adapter
  exposes emitPermissionRequest / emitControlResponse through a new
  emitControlMessageImpl hook (default no-op in batch mode).

## Tests

- packages/cli/src/dualOutput/DualOutputBridge.test.ts — fd validation,
  auto session_start, control-event routing, post-shutdown safety.
- packages/cli/src/remoteInput/RemoteInputWatcher.test.ts — submit
  forwarding, immediate confirmation dispatch, busy/idle retry,
  malformed-line tolerance, shutdown.
- packages/cli/src/nonInteractive/io/StreamJsonOutputAdapter.dualOutput.test.ts —
  custom outputStream injection and new emitPermissionRequest /
  emitControlResponse paths.

tsc --noEmit -p packages/cli/tsconfig.json is clean.
vitest run src/nonInteractive src/dualOutput src/remoteInput → 297 passed,
1 skipped, 11 files.

* feat(cli): dual-output capability handshake, session_end, control_error, settings.json

Incremental improvements on top of the initial dual-output PR based on
reviewer feedback. All extensions are additive; older consumers that
ignore unknown fields keep working.

## Capability handshake in session_start

`session_start.data` now carries three new fields so consumers can
feature-detect without sniffing the stream:

- `protocol_version` (integer, currently 1) — bumped on any protocol
  change consumers might care about.
- `version` (string) — the Qwen Code CLI version, threaded in from
  `gemini.tsx`.
- `supported_events` (string[]) — the event kinds this bridge version
  is known to emit, exported as `SUPPORTED_EVENTS` from the module.

## session_end on bridge shutdown

DualOutputBridge.shutdown() now emits a final
`system` / `session_end` event carrying `session_id` before closing the
stream. Gives consumers a definitive termination signal rather than
requiring them to infer it from EPIPE. Idempotent — calling shutdown
twice emits exactly one session_end.

## control_error emission path

`ControlErrorResponse` (already defined in types.ts) now has a first-
class emission path: `BaseJsonOutputAdapter.emitControlError(requestId,
message)` → `control_response` with `subtype: 'error'`. Wired into
AppContainer's remote-input confirmation handler so that a
`confirmation_response` referencing an unknown / already-resolved
request_id produces a structured error reply instead of silently
dropping, letting consumers retry or surface the error.

## settings.json support

New `dualOutput` top-level settings block with `jsonFile` and
`inputFile` properties. `--json-fd` has no settings equivalent (fd
passing is a spawn-time concern). CLI flag wins over settings when
both are present, so scripted one-off runs still work unchanged.
`requiresRestart: true` since the bridge is constructed once at
startup.

## Documentation

`docs/users/features/dual-output.md` gains three major sections:

- **Use cases** — concrete integration scenarios (terminal+chat dual
  sync, IDE extensions, web frontends, CI observers, multi-agent
  orchestration, session replay, observability, QA).
- **Why two output flags?** — detailed rationale for coexisting
  `--json-fd` and `--json-file`, including the PTY constraint
  (`node-pty` / `bun-pty` expose no stdio array, and `forkpty(3)` /
  `login_tty` actively close fds >= 3 before exec).
- **Comparison with Claude Code's stream-json** — schema-parity
  matrix, transport-topology differences, permission-control-plane
  behavioral notes, and a "room to improve" section as a design
  horizon.
- **Runnable demos** — seven copy-paste POCs: event observer, remote
  submit, permission bridge, Node embedder with capability
  feature-detection, session_end handling, failure drills.
- **Settings-based configuration** — example settings.json snippet and
  precedence rules.

## Tests

- DualOutputBridge.test.ts: new cases for capability handshake shape,
  session_end on shutdown, shutdown idempotency, and emitControlError.
- StreamJsonOutputAdapter.dualOutput.test.ts: new case for
  emitControlError at the adapter level.

302 passed, 1 skipped, 11 files. tsc --noEmit -p packages/cli is clean.

* docs(dual-output): shrink Claude Code comparison to one honest sentence

After actually reading the Claude Code source (src/cli/structuredIO.ts,
src/bridge/*, src/utils/messages/systemInit.ts), the previous
"Comparison with Claude Code's stream-json" section was overstated:

- Claude Code has no equivalent of TUI + sidecar running simultaneously.
  Its stream-json only works with --print (non-interactive); the bridge
  in src/bridge/* is Anthropic's own remote worker protocol, not a
  local embedding surface.
- CC uses `system/init` (not `session_start`) and has no session_end in
  the wire protocol, so the schema-parity table contained false ticks.
- Framing this PR as "parity with Claude Code" is therefore inaccurate;
  it's filling a gap Claude Code does not address.

Replace the whole multi-section comparison (schema matrix, transport
table, permission notes, borrow list, roadmap) with a single sentence
stating the accurate relation: same event format in spirit, different
topology — CC's is non-interactive only.

* fix(cli): address review feedback on dual-output sidecar mode

- Fix control_response mirror: external-initiated confirmations now
  emit control_response via the same mirror useEffect as TUI-native
  resolutions, making the emission path symmetric for all observers.
- Fix ENOENT: --json-file with a non-existent path now falls back to
  createWriteStream (auto-creates the file) instead of throwing.
- Fix race: add reading guard to RemoteInputWatcher.readNewLines()
  preventing duplicate command processing on rapid appends.
- Refactor confirmationHandler to use refs (pendingToolCallsRef,
  dualOutputRef) and register once (deps: [remoteInput]) to eliminate
  teardown/re-registration churn.
- Add debug logging to shutdown bare catch for ops correlation.
- Add ENOENT fallback test case for DualOutputBridge.
- Regenerate settings.schema.json for dualOutput section.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): make RemoteInputWatcher poll interval configurable for CI reliability

RemoteInputWatcher.test.ts was timing out in CI (5s default) because
fs.watchFile's 500ms poll interval is unreliable under load. Fix:

- Accept optional `pollIntervalMs` in constructor (default 500ms).
- Tests use 100ms poll interval for faster feedback.
- Increase per-test timeout to 15s and waitFor timeout to 10s.
- Increase "TUI busy" wait from 800ms to 1500ms for CI headroom.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): eliminate fs.watchFile timing dependency in RemoteInputWatcher tests

Tests were flaky across all CI platforms (macOS/ubuntu/windows) because
fs.watchFile polling (even at 100ms) is unreliable under CI load.

Fix: expose checkForNewInput() as a public method that directly triggers
file reading and returns a Promise. Tests now call it synchronously after
writing to the input file — no polling, no timeouts, deterministic.

Also fixes:
- Windows ENOTEMPTY: add delay in afterEach before rmSync
- Add active check in readNewLines to respect shutdown state
- readNewLines now returns Promise<void> for awaitable reads

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-18 02:14:53 +08:00
顾盼
9e2f63a1ca
feat(memory): managed auto-memory and auto-dream system (#3087)
* docs: add auto-memory implementation log

* feat(core): add managed auto-memory storage scaffold

* feat(core): load managed auto-memory index

* feat(core): add managed auto-memory recall

* feat(core): add managed auto-memory extraction

* feat(cli): add managed auto-memory dream commands

* feat(core): add auxiliary side-query foundation

* feat(memory): add model-driven recall selection

* feat(memory): add model-driven extraction planner

* feat(core): add background task runtime foundation

* feat(memory): schedule auto dream in background

* feat(core): add background agent runner foundation

* feat(memory): add extraction agent planner

* feat(core): add dream agent planner

* feat(core): rebuild managed memory index

* feat(memory): add governance status commands

* feat(memory): add managed forget flow

* feat(core): harden background agent planning

* feat(memory): complete managed parity closure

* test(memory): add managed lifecycle integration coverage

* feat: align behaviour with Claude Code

* feat(memory-ui): add memory saved notification and memory count badge

Feature 3 - Memory Saved Notification:
- Add HistoryItemMemorySaved type to types.ts
- Create MemorySavedMessage component for rendering '● Saved/Updated N memories'
- In useGeminiStream: detect in-turn memory writes via mapToDisplay's
  memoryWriteCount field and emit 'memory_saved' history item after turn
- In client.ts: capture background dream/extract promises and expose
  via consumePendingMemoryTaskPromises(); useGeminiStream listens
  post-turn and emits 'Updated N memories' notification for background tasks

Feature 4 - Memory Count Badge:
- Add isMemoryOp field to IndividualToolCallDisplay
- Add memoryWriteCount/memoryReadCount to HistoryItemToolGroup
- Add detectMemoryOp() in useReactToolScheduler using isAutoMemPath
- ToolGroupMessage renders '● Recalled N memories, Wrote N memories' badge
  at the top of tool groups that touch memory files

Fix: process.env bracket-access in paths.ts (noPropertyAccessFromIndexSignature)
Fix: MemoryDialog.test.tsx mock useSettings to satisfy SettingsProvider requirement

* fix(memory-ui): auto-approve memory writes, collapse memory tool groups, fix MEMORY.md path

Problem 1 - Auto-approve memory file operations:
- write-file.ts: getDefaultPermission() checks isAutoMemPath; returns 'allow'
  for managed auto-memory files, 'ask' for all other files
- edit.ts: same pattern

Problem 2 - Feature 4 UX: collapse memory-only tool groups:
- ToolGroupMessage: detect when all tool calls have isMemoryOp set (pure memory
  group) and all are complete; render compact '● Recalled/Wrote N memories
  (ctrl+o to expand)' instead of individual tool call rows
- ctrl+o toggles expand/collapse when isFocused and group is memory-only
- Mixed groups (memory + other tools) keep badge-at-top behaviour
- Expanded state shows individual tool calls with '● Memory operations
  (ctrl+o to collapse)' header

Problem 3 - MEMORY.md path mismatch:
- prompt.ts: Step 2 now references full absolute path ${memoryDir}/MEMORY.md
  so the model writes to the correct location inside the memory directory,
  not to the parent project directory

Fix tests:
- write-file.test.ts: add getProjectRoot to mockConfigInternal
- prompt.test.ts: update assertion to match full-path section header

* fix(memory-ui): fix duplicate notification, broken ctrl+o, and Edit tool detection

- Remove duplicate 'Saved N memories' notification: the tool group badge already
  shows 'Wrote N memories'; the separate HistoryItemMemorySaved addItem after
  onComplete was double-counting. Keep only the background-task path
  (consumePendingMemoryTaskPromises).

- Remove ctrl+o expand: Ink's Static area freezes items on first render and
  cannot respond to user input. useInput/useState(isExpanded) in a Static item
  is a no-op. Removed the dead code; memory-only groups now always render as
  the compact summary (no fake interactive hint).

- Fix Edit tool detection: detectMemoryOp was checking for 'edit_file' but the
  real tool name constant is 'edit'. Also removed non-existent 'create_file'
  (write_file covers all writes). Now editing MEMORY.md is correctly identified
  as a memory write op, collapses to 'Wrote N memories', and is auto-approved.

* fix(dream): run /dream as a visible submit_prompt turn, not a silent background agent

The previous implementation ran an AgentHeadless background agent that could
take 5+ minutes with zero UI feedback — user saw a blank screen for the entire
duration and then at most one line of text.

Fix: /dream now returns submit_prompt with the consolidation task prompt so it
runs as a regular AI conversation turn. Tool calls (read_file, write_file, edit,
grep_search, list_directory, glob) are immediately visible as collapsed tool
groups as the model works through the memory files — identical UX to Claude Code.

Also export buildConsolidationTaskPrompt from dreamAgentPlanner so dreamCommand
can reuse the same detailed consolidation prompt that was already written.

* fix(memory): auto-allow ls/glob/grep on memory base directory

Add getMemoryBaseDir() to getDefaultPermission() allow list in ls.ts,
glob.ts, and grep.ts — mirrors the existing pattern in read-file.ts.

Without this, ListFiles/Glob/Grep on ~/.qwen/* would trigger an
approval dialog, blocking /dream at its very first step.

* fix(background): prevent permission prompt hangs in background agents

Match Claude Code's headless-agent intent: background memory agents must never
block on interactive permission prompts.

Wrap background runtime config so getApprovalMode() returns YOLO, ensuring any
ask decision is auto-approved instead of hanging forever. Add regression test
covering the wrapped approval mode.
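The wrapping described above can be sketched as a thin proxy over the runtime config; the types and names here are illustrative, not the actual Config surface:

```typescript
// Sketch of the approval-mode wrapper described above. Types and names
// are illustrative, not the actual Config implementation.
type ApprovalMode = 'default' | 'yolo';

interface RuntimeConfig {
  getApprovalMode(): ApprovalMode;
}

function withAutoApproval<T extends RuntimeConfig>(config: T): T {
  // Only getApprovalMode is overridden; everything else passes through,
  // so background agents can never block on an interactive prompt.
  return new Proxy(config, {
    get(target, prop, receiver) {
      if (prop === 'getApprovalMode') {
        return () => 'yolo' as ApprovalMode;
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}
```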

* fix(memory): run auto extract through forked agent

Make managed auto-memory extraction follow the Claude Code architecture:
background extraction now uses a forked agent to read/write memory files
directly, instead of planning patches and applying them with a separate
filesystem pipeline.

Keep the old patch/model path only as fallback if the forked agent fails.
Add regression tests covering the new execution path and tool whitelist.

* refactor(memory): remove legacy extract fallback pipeline

Delete the old patch/model/heuristic extraction path entirely.
Managed auto-memory extract now runs only through the forked-agent
execution flow, with no planner/apply fallback stages remaining.

Also remove obsolete exports/tests and update scheduler/integration
coverage to use the forked-agent-only architecture.

* refactor(memory): move auxiliary files out of memory/ directory

meta.json, extract-cursor.json, and consolidation.lock are internal
bookkeeping files, not user-visible memories. Move them one level up
to the project state dir (parent of memory/) so that the memory/
directory contains only MEMORY.md and topic files, matching the
clean layout of the upstream reference implementation.

Add getAutoMemoryProjectStateDir() helper in paths.ts and update the
three path accessors + store.test.ts path assertions accordingly.

* fix(memory): record lastDreamAt after manual /dream run

The /dream command submits a prompt to the main agent (submit_prompt),
which writes memory files directly. Because it bypasses dreamScheduler,
meta.json was never updated and /memory always showed 'never'.

Fix by:
- Exporting writeDreamManualRunToMetadata() from dream.ts
- Adding optional onComplete callback to SubmitPromptActionReturn and
  SubmitPromptResult (types.ts / commands/types.ts)
- Propagating onComplete through slashCommandProcessor.ts
- Firing onComplete after turn completion in useGeminiStream.ts
- Providing the callback in dreamCommand.ts to write lastDreamAt

* fix(memory): remove scope params from /remember in managed auto-memory mode

--global/--project are legacy save_memory tool concepts. In managed
auto-memory mode the forked agent decides the appropriate type
(user/feedback/project/reference) based on the content of the fact.

Also improve the prompt wording to explicitly ask the agent to choose
the correct type, reducing the tendency to default to 'project'.

* feat(ui): show '✦ dreaming' indicator in footer during background dream

Subscribe to getManagedAutoMemoryDreamTaskRegistry() in Footer via a
useDreamRunning() hook. While any dream task for the current project is
pending or running, display '✦ dreaming' in the right section of the
footer bar, between Debug Mode and context usage.

* refactor(memory): align dream/extract infrastructure with Claude Code patterns

Five improvements based on Claude Code parity audit:

1. Memoize getAutoMemoryRoot (paths.ts)
   - Add _autoMemoryRootCache Map, keyed by projectRoot
   - findCanonicalGitRoot() walks the filesystem per call; memoize avoids
     repeated git-tree traversal on hot-path schedulers/scanners
   - Expose clearAutoMemoryRootCache() for test teardown

2. Lock file stores PID + isProcessRunning reclaim (dreamScheduler.ts)
   - acquireDreamLock() writes process.pid to the lock file body
   - lockExists() reads PID and calls process.kill(pid, 0); dead/missing
     PID reclaims the lock immediately instead of waiting 2h
   - Stale threshold reduced to 1h (PID-reuse guard, same as CC)

3. Session scan throttle (dreamScheduler.ts)
   - Add SESSION_SCAN_INTERVAL_MS = 10min (same as CC)
   - Add lastSessionScanAt Map<projectRoot, number> to ManagedAutoMemoryDreamRuntime
   - When time-gate passes but session-gate doesn't, throttle prevents
     re-scanning the filesystem on every user turn

4. mtime-based session counting (dreamScheduler.ts)
   - Replace fragile recentSessionIdsSinceDream Set in meta.json with
     filesystem mtime scan (listSessionsTouchedSince)
   - Mirrors Claude Code's listSessionsTouchedSince: reads session JSONL
     files from Storage.getProjectDir()/chats/, filters by mtime > lastDreamAt
   - Immune to meta.json corruption/loss; no per-turn metadata write
   - ManagedAutoMemoryDreamRuntime accepts injectable SessionScannerFn
     for clean unit testing without real session files

5. Extraction mutual exclusion extended to write_file/edit (extractScheduler.ts)
   - historySliceUsesMemoryTool() now checks write_file/edit/replace/create_file
     tool calls whose file_path is within isAutoMemPath()
   - Previously only detected save_memory; missed direct file writes by
     the main agent, causing redundant background extraction
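The PID-liveness reclaim in item 2 can be sketched as follows; helper names are illustrative, and `process.kill(pid, 0)` performs an existence check without actually sending a signal:

```typescript
// Sketch of the lock-reclaim check from item 2 above.
// Helper names are illustrative; the real dreamScheduler.ts may differ.
function isProcessRunning(pid: number): boolean {
  try {
    // Signal 0 performs existence/permission checks without signalling.
    process.kill(pid, 0);
    return true;
  } catch {
    return false; // ESRCH: no such process (EPERM treated as dead here)
  }
}

// A lock whose recorded PID is missing or dead can be reclaimed
// immediately instead of waiting for the stale-age threshold.
function lockIsHeld(lockFileBody: string): boolean {
  const pid = Number.parseInt(lockFileBody.trim(), 10);
  if (!Number.isFinite(pid)) return false; // malformed body: reclaim
  return isProcessRunning(pid);
}
```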

* docs(memory): add user-facing memory docs, i18n for all locales, simplify /forget

- Add docs/users/features/memory.md: comprehensive user-facing guide covering
  QWEN.md instructions, auto-memory behaviour, all memory commands, and
  troubleshooting; replaces the placeholder auto-memory.md
- Update docs/users/features/_meta.ts: rename entry auto-memory → memory
- Update docs/users/features/commands.md: add /init, /remember, /forget,
  /dream rows; fix /memory description; remove /init duplicate
- Update docs/users/configuration/settings.md: add memory.* settings section
  (enableManagedAutoMemory, enableManagedAutoDream) between tools and permissions
- Remove /forget --apply flag: preview-then-apply flow replaced with direct
  deletion; update forgetCommand.ts, en.js, zh.js accordingly
- Add all auto-memory i18n keys to de, ja, pt, ru locales (18 keys each):
  Open auto-memory folder, Auto-memory/Auto-dream status lines, never/on/off,
  ✦ dreaming, /forget and /remember usage strings, all managed-memory messages
- Remove dead save_memory branch from extractScheduler.partWritesToMemory()
- Add ✦ dreaming indicator to Footer.tsx with i18n; fix Footer.test.tsx mocks
- Refactor MemoryDialog.tsx auto-dream status line to use i18n
- Remove save_memory tool (memoryTool.ts/test); clean up webui references
- Add extractionPlanner.ts, const.ts and associated tests
- Delete stale docs/users/configuration/memory.md and
  docs/developers/tools/memory.md (content superseded)

* refactor(memory): remove all Claude Code references from comments and test names

* test(memory): remove empty placeholder test files that cause vitest to fail

* fix eslint

* fix test on Windows

* fix test

* fix(memory): address critical review findings from PR #3087

- fix(read-file): narrow auto-allow from getMemoryBaseDir() (~/.qwen) to
  isAutoMemPath(projectRoot) to prevent exposing settings.json / OAuth
  credentials without user approval (wenshao review)

- fix(forget): per-entry deletion instead of whole-file unlink
  - assign stable per-entry IDs (relativePath:index for multi-entry files)
    so the model can target individual entries without removing siblings
  - rewrite file keeping unmatched entries; only unlink when file becomes
    empty (wenshao review)

- fix(entries): round-trip correctness for multi-entry new-format bodies
  - parseAutoMemoryEntries: plain-text line closes current entry and opens
    a new one (was silently ignored when current was already set)
  - renderAutoMemoryBody: emit blank line between adjacent entries so the
    parser can detect entry boundaries on re-read (wenshao review)

- fix(entries): resolve two CodeQL polynomial-regex alerts
  - indentedMatch: \s{2,}(?:[-*]\s+)? → [\t ]{2,}(?:[-*][\t ]+)?
  - topLevelMatch: :\s*(.+)$ → :[ \t]*(\S.*)$
  (github-advanced-security review)

- fix(scan.test): use forward-slash literal for relativePath expectation
  since listMarkdownFiles() normalises all separators to '/' on all
  platforms including Windows

* fix(memory): replace isAutoMemPath startsWith with path.relative()

Using path.relative() instead of string startsWith() is more robust
across platforms — it correctly handles Windows path-separator
differences and avoids potential edge cases where a path prefix match
could succeed on non-separator boundaries.

Addresses github-actions review item 3 (PR #3087).
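The boundary-safe containment check described above might look like this (a sketch with illustrative parameters; the real isAutoMemPath signature may differ):

```typescript
import * as path from 'node:path';

// Sketch of the path.relative() containment check described above.
// Parameter names are illustrative.
function isAutoMemPath(autoMemoryRoot: string, candidate: string): boolean {
  const rel = path.relative(autoMemoryRoot, candidate);
  // Containment holds iff the relative path neither escapes upward nor
  // jumps to a different root (an absolute result, e.g. another drive).
  return (
    rel === '' ||
    (rel !== '..' && !rel.startsWith(`..${path.sep}`) && !path.isAbsolute(rel))
  );
}
```

Note that a plain `startsWith` prefix check would wrongly accept a sibling like `memory-extra/` when the root is `memory/`; `path.relative` avoids that.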

* feat(telemetry): add auto-memory telemetry instrumentation

Add OpenTelemetry logs + metrics for the five auto-memory lifecycle
events: extract, dream, recall, forget, and remember.

Telemetry layer (packages/core/src/telemetry/):
- constants.ts: 5 new event-name constants
  (qwen-code.memory.{extract,dream,recall,forget,remember})
- types.ts: 5 new event classes with typed constructor params
  (MemoryExtractEvent, MemoryDreamEvent, MemoryRecallEvent,
   MemoryForgetEvent, MemoryRememberEvent)
- metrics.ts: 8 new OTel instruments (5 Counters + 3 Histograms)
  with recordMemoryXxx() helpers; registered inside initializeMetrics()
- loggers.ts: logMemoryExtract/Dream/Recall/Forget/Remember() — each
  emits a structured log record and calls its recordXxx() counterpart
- index.ts: re-exports all new symbols

Instrumentation call-sites:
- extractScheduler.ts ManagedAutoMemoryExtractRuntime.runTask():
  emits extract event with trigger=auto, completed/failed status,
  patches_count, touched_topics, and wall-clock duration
- dream.ts runManagedAutoMemoryDream():
  emits dream event with trigger=auto, updated/noop status,
  deduped_entries, touched_topics, and duration; covers both
  agent-planner and mechanical fallback paths
- recall.ts resolveRelevantAutoMemoryPromptForQuery():
  emits recall event with strategy, docs_scanned/selected, and
  duration; covers model, heuristic, and none paths
- forget.ts forgetManagedAutoMemoryEntries():
  emits forget event with removed_entries_count, touched_topics,
  and selection_strategy (model/heuristic/none)
- rememberCommand.ts action():
  emits remember event with topic=managed|legacy at command
  invocation time (before agent decides the actual memory type)

* refactor(telemetry): remove memory forget/remember telemetry events

Remove EVENT_MEMORY_FORGET and EVENT_MEMORY_REMEMBER along with all
associated infrastructure that is no longer needed:

- constants.ts: remove EVENT_MEMORY_FORGET, EVENT_MEMORY_REMEMBER
- types.ts: remove MemoryForgetEvent, MemoryRememberEvent classes
- metrics.ts: remove MEMORY_FORGET_COUNT, MEMORY_REMEMBER_COUNT constants,
  memoryForgetCounter, memoryRememberCounter module vars,
  their initialization in initializeMetrics(), and
  recordMemoryForgetMetrics(), recordMemoryRememberMetrics() functions
- loggers.ts: remove logMemoryForget(), logMemoryRemember() functions
  and their imports
- index.ts: remove all re-exports for the above symbols
- memory/forget.ts: remove logMemoryForget call-site and import
- cli/rememberCommand.ts: remove logMemoryRemember call-sites and import

* change default value

* fix forked agent

* refactor(background): unify fork primitives into runForkedAgent + cleanup

- Merge runForkedQuery into runForkedAgent via TypeScript overloads:
  with cacheSafeParams → GeminiChat single-turn path (ForkedQueryResult)
  without cacheSafeParams → AgentHeadless multi-turn path (ForkedAgentResult)
- Delete forkedQuery.ts; move its test to background/forkedAgent.cache.test.ts
- Remove forkedQuery export from followup/index.ts
- Migrate all callers (suggestionGenerator, speculation, btwCommand, client)
  to import from background/forkedAgent
- Add getFastModel() / setFastModel() to Config; expose in CLI config init
  and ModelDialog / modelCommand
- Remove resolveFastModel() from AppContainer — now delegated to config.getFastModel()
- Strip Claude Code references from code comments

* fix(memory): address wenshao's critical review findings

- dream.ts: writeDreamManualRunToMetadata now persists lastDreamSessionId
  and resets recentSessionIdsSinceDream, preventing auto-dream from firing
  again in the same session after a manual /dream
- config.ts: gate managed auto-memory injection on getManagedAutoMemoryEnabled();
  when disabled, previously saved memories are no longer injected into new sessions
- rememberCommand.ts: remove legacy save_memory branch (tool was removed);
  fall back to submit_prompt directing agent to write to QWEN.md instead
- BuiltinCommandLoader.ts: only register /dream and /forget when managed
  auto-memory is enabled, matching the feature's runtime availability
- forget.ts: return early in forgetManagedAutoMemoryMatches when matches is
  empty, avoiding unnecessary directory scaffolding as a side effect

* fix test

* fix ci test

* feat(memory): align extract/dream agents to Claude Code patterns

- fix(client): move saveCacheSafeParams before early-return paths so
  extract agents always have cache params available (fixes extract never
  triggering in skipNextSpeakerCheck mode)

- feat(extract): add read-only shell tool + memory-scoped write
  permissions; create inline createMemoryScopedAgentConfig() with
  PermissionManager wrapper (isToolEnabled + evaluate) that allows only
  read-only shell commands and write/edit within the auto-memory dir

- feat(extract): align prompt to Claude Code patterns — manifest block
  listing existing files, parallel read-then-write strategy, two-step
  save (memory file then index)

- feat(dream): remove mechanical fallback; runManagedAutoMemoryDream is
  now agent-only and throws without config

- feat(dream): align prompt to Claude Code 4-phase structure
  (Orient/Gather/Consolidate/Prune+Index); add narrow transcript grep,
  relative→absolute date conversion, stale index pruning, index size cap

- fix(permissions): add isToolEnabled() to MemoryScopedPermissionManager
  to prevent TypeError crash in CoreToolScheduler._schedule

- test: update dreamScheduler tests to mock dream.js; replace removed
  mechanical-dedup test with scheduler infrastructure verification

* move doc to design

* refactor(memory): unify extract+dream background task management into MemoryBackgroundTaskHub

- Add memoryTaskHub.ts: single BackgroundTaskRegistry + BackgroundTaskDrainer shared
  by all memory background tasks; exposes listExtractTasks() / listDreamTasks()
  typed query helpers and a unified drain() method
- extractScheduler: ManagedAutoMemoryExtractRuntime accepts hub via constructor
  (defaults to defaultMemoryTaskHub); test factory gets isolated fresh hub
- dreamScheduler: same pattern — sessionScanner + hub injection; BackgroundTask-
  Scheduler initialized from injected hub; test factory gets isolated hub
- status.ts: replace two separate getRegistry() calls with defaultMemoryTaskHub
  typed query methods
- Footer.tsx (useDreamRunning): subscribe to shared registry, filter by
  DREAM_TASK_TYPE so extract tasks do not trigger the dream spinner
- index.ts: re-export memoryTaskHub.ts so defaultMemoryTaskHub/DREAM_TASK_TYPE/
  EXTRACT_TASK_TYPE are available as top-level package exports

* refactor(background): introduce general-purpose BackgroundTaskHub

Replace memory-specific MemoryBackgroundTaskHub with a domain-agnostic
BackgroundTaskHub in the background/ layer. Any future background task
runtime (3rd, 4th, …) plugs in by accepting a hub via constructor
injection — no new infrastructure required.

Changes:
- Add background/taskHub.ts: BackgroundTaskHub (registry + drainer +
  createScheduler() + listByType(taskType, projectRoot?)) and the
  globalBackgroundTaskHub singleton. Zero knowledge of any task type.
- Delete memory/memoryTaskHub.ts: its narrow listExtractTasks /
  listDreamTasks helpers are replaced by the generic listByType() call.
- Move EXTRACT_TASK_TYPE to extractScheduler.ts (owned by the runtime
  that defines it); replace 3 hardcoded string literals with the const.
- Move DREAM_TASK_TYPE to dreamScheduler.ts; use hub.createScheduler()
  instead of manually wiring new BackgroundTaskScheduler(reg, drain).
- status.ts: globalBackgroundTaskHub.listByType(EXTRACT_TASK_TYPE, ...)
- Footer.tsx: globalBackgroundTaskHub.registry (shared, filtered by type)
- index.ts: export background/taskHub.js; drop memory/memoryTaskHub.js

* test(background): add BackgroundTaskHub unit tests and hub isolation checks

- background/taskHub.test.ts (11 tests):
  - createScheduler(): tasks registered via scheduler appear in hub registry;
    multiple calls return distinct scheduler instances
  - listByType(): filters by taskType, filters by projectRoot, returns []
    for unknown types, two types co-exist in registry but stay separated
  - drain(): resolves false on timeout, resolves true when tasks complete,
    resolves true immediately when no tasks in flight
  - isolation: tasks in hubA do not appear in hubB
  - globalBackgroundTaskHub: is a BackgroundTaskHub instance with registry/drainer

- extractScheduler.test.ts (+1 test):
  - factory-created runtimes have isolated registries; tasks in runtimeA
    are invisible to runtimeB; all tasks carry EXTRACT_TASK_TYPE

- dreamScheduler.test.ts (+1 test):
  - factory-created runtimes have isolated registries; tasks in runtimeA
    are invisible to runtimeB; all tasks carry DREAM_TASK_TYPE

* refactor(memory): consolidate all memory state into MemoryManager

Replace BackgroundTaskRegistry/Drainer/Scheduler/Hub helper classes and
module-level globals with a single MemoryManager class owned by Config.

## Changes

### New
- packages/core/src/memory/manager.ts — MemoryManager with:
  - scheduleExtract / scheduleDream (inline queuing + deduplication logic)
  - recall / forget / selectForgetCandidates / forgetMatches
  - getStatus / drain / appendToUserMemory
  - subscribe(listener) compatible with useSyncExternalStore
  - storeWith() atomic record registration (no double-notify)
  - Distinct skippedReason 'scan_throttled' vs 'min_sessions' for dream
- packages/core/src/utils/forkedAgent.ts — pure cache util (moved from background/)
- packages/core/src/utils/sideQuery.ts — pure util (moved from auxiliary/)

### Deleted
- background/taskRegistry, taskDrainer, taskScheduler, taskHub and all tests
- background/forkedAgent (moved to utils/)
- auxiliary/sideQuery (moved to utils/)
- memory/extractScheduler, dreamScheduler, state and all tests

### Modified
- config/config.ts — Config owns MemoryManager instance; getMemoryManager()
- core/client.ts — all memory ops via config.getMemoryManager()
- core/client.test.ts — mock MemoryManager instead of individual modules
- memory/status.ts — accepts MemoryManager param, drops globalBackgroundTaskHub
- index.ts — memory exports reduced from 14 modules to 5 (manager/types/paths/store/const)
- cli/commands/dreamCommand.ts — via config.getMemoryManager()
- cli/commands/forgetCommand.ts — via config.getMemoryManager()
- cli/components/Footer.tsx — useSyncExternalStore replacing setInterval polling
- cli/components/Footer.test.tsx — add getMemoryManager mock
2026-04-16 20:05:45 +08:00
Reid
07475026f6
fix(cli): remember "Start new chat session" until summary changes (#3308)
* fix(cli): remember "Start new chat session" until summary changes

  Persist a project-scoped Welcome Back restart choice keyed to the
  current PROJECT_SUMMARY fingerprint.

  This suppresses the Welcome Back dialog after choosing "Start new chat
  session", while still showing it again after the project summary is
  updated.
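The fingerprint-keyed suppression described above can be sketched like this (function names are illustrative, not the actual CLI implementation):

```typescript
import { createHash } from 'node:crypto';

// Sketch of the fingerprint-keyed restart-choice persistence described
// above. Names are illustrative.
function summaryFingerprint(projectSummary: string): string {
  return createHash('sha256').update(projectSummary).digest('hex');
}

function shouldShowWelcomeBack(
  savedChoiceFingerprint: string | undefined,
  currentSummary: string,
): boolean {
  // Suppress the dialog only while the saved "Start new chat session"
  // choice still matches the current PROJECT_SUMMARY content.
  return savedChoiceFingerprint !== summaryFingerprint(currentSummary);
}
```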

* fix conflict
2026-04-16 13:54:14 +08:00
DennisYu07
b5115e731e
feat(hooks): Add HTTP Hook, Function Hook and Async Hook support (#2827)
* add http/async/function type

* fix url error

* resolve comment

* align cc non blocking error

* fix hookRunner for async

* fix(hooks): update hook type validation to support http and function types

- Change validated hook types from ['command', 'plugin'] to ['command', 'http', 'function']
- Add validation for HTTP hooks requiring url field
- Add validation for function hooks requiring callback field
- Add comprehensive test coverage for all hook type validations

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(hooks): align SSRF protection with Claude Code behavior

- Allow 127.0.0.0/8 (loopback) for local dev hooks
- Allow localhost hostname for local dev hooks
- Allow ::1 (IPv6 loopback) for local dev hooks
- Add 100.64.0.0/10 (CGNAT) to blocked ranges (RFC 6598)
- Update tests to match Claude Code's ssrfGuard.ts behavior

This fixes HTTP hooks failing to connect to local dev servers.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(hooks): align HTTP hook security with Claude Code behavior

- Add CRLF/NUL sanitization for env var interpolation (header injection)
- Implement combined abort signal (external signal + timeout)
- Upgrade SSRF protection to DNS-level with ssrfGuard
  - Allow loopback (127.0.0.0/8, ::1) for local dev hooks
  - Block CGNAT (100.64.0.0/10) and IPv6 private ranges
- Increase default HTTP hook timeout to 10 minutes
- Fix VS Code hooks schema to support http type
  - Add url, headers, allowedEnvVars, async, once, statusMessage, shell fields
  - Note: "function" type is SDK-only (callback cannot be serialized to JSON)

* feat(hooks): enhance Function Hook with messages, skillRoot, shell, and matcher support

- Add MessagesProvider for automatic conversation history passing to function hooks
- Add FunctionHookContext with messages, toolUseID, and signal
- Add skillRoot support for skill-scoped session hooks
- Add shell parameter support for command hooks (bash/powershell)
- Add regex matcher support for hook pattern matching
- Add statusMessage to CommandHookConfig
- Change default function hook timeout from 60s to 5s
- Add comprehensive unit tests for all new features

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* add session hook for skill

* fix function hook parsing

* refactor ui for http hook/async hook/function hook

* update doc and add integration test

* change telemetry type and refactor SSRF

* fix project level bug

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-16 10:10:33 +08:00
tanzhenxin
4daf7f9353
feat(core): add microcompaction for idle context cleanup (#3006)
* feat(core): add microcompaction for idle context cleanup

Clear old tool result content from chat history when the user returns
after an idle period (default 60 min). Replaces functionResponse output
with a sentinel string for compactable tools (read_file, shell, grep,
glob, web_fetch, web_search, edit, write_file), keeping the N most
recent results intact (default 5). Runs before full compression so it
can shed tokens cheaply without an API call.

- Time-based trigger reuses lastApiCompletionTimestamp from thinking cleanup
- Per-part counting so keepRecent applies to individual tool results
  even when batched in parallel
- Preserves tool error responses (only clears successful outputs)
- Configurable via settings.json (context.microcompaction) with env var
  overrides for E2E testing
- Enabled by default
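The keep-most-recent cleanup above can be sketched as a reverse walk over tool-result parts; the types, sentinel text, and tool list here are illustrative, not the actual core implementation:

```typescript
// Sketch of the microcompaction pass described above. Types and the
// sentinel string are illustrative.
interface ToolResultPart {
  tool: string;
  output: string;
  isError?: boolean;
}

const SENTINEL = '[old tool result cleared by microcompaction]';
const COMPACTABLE = new Set(['read_file', 'shell', 'grep', 'glob']);

function microcompact(parts: ToolResultPart[], keepRecent = 5): ToolResultPart[] {
  let seen = 0;
  const out: ToolResultPart[] = [];
  // Walk newest-to-oldest so the N most recent compactable results survive.
  for (let i = parts.length - 1; i >= 0; i--) {
    const p = parts[i];
    const compactable = COMPACTABLE.has(p.tool) && !p.isError;
    if (compactable && seen < keepRecent) {
      seen++;
      out.push(p);
    } else if (compactable) {
      out.push({ ...p, output: SENTINEL });
    } else {
      out.push(p); // errors and non-compactable tools are preserved
    }
  }
  return out.reverse();
}
```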

* refactor(config): unify idle cleanup settings under clearContextOnIdle

Consolidate thinking block cleanup and tool results microcompaction
config into a single `context.clearContextOnIdle` settings group:

  {
    "context": {
      "clearContextOnIdle": {
        "thinkingThresholdMinutes": 5,
        "toolResultsThresholdMinutes": 60,
        "toolResultsNumToKeep": 5
      }
    }
  }

- Use -1 on either threshold to disable that cleanup (no enabled bool)
- Remove separate `microcompaction` and `gapThresholdMinutes` settings
- Thinking cleanup: 5 min default (unchanged)
- Tool results cleanup: 60 min default
- Preserve tool error responses (only clear successful outputs)

* feat(vscode-ide-companion): add clearContextOnIdle settings configuration

- Add gapThresholdMinutes settings for thinking blocks, tool results, and retention count
- Remove deprecated gapThresholdMinutes from root settings level

This reorganizes the context clearing settings into a dedicated clearContextOnIdle object with configurable thresholds for thinking blocks and tool results.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): restrict microcompaction to user-initiated messages only

Move microcompactHistory() inside the UserQuery/Cron guard so model
latency during tool-call loops doesn't count as user idle time.

* docs: update settings docs for clearContextOnIdle config rename

Replace stale `context.gapThresholdMinutes` entry with the new
`context.clearContextOnIdle.*` settings group introduced in the
microcompaction feature.

* fix(core): address review comments on microcompaction PR

- Guard against NaN in toolResultsNumToKeep with Number.isFinite()
- Report effective keepRecent (after Math.max) in meta, not raw config
- Fix comment to mention cron messages alongside user messages

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-13 18:51:35 +08:00
jinye
1557d93043
feat(cli): support tools.sandboxImage in settings (#3146)
Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
2026-04-13 09:43:34 +08:00
Shaojin Wen
5482044e59
fix: improve /model --fast description clarity and prevent accidental activation (#3077)
Replace vague "background tasks" with specific "prompt suggestions and speculative
execution" in the --fast flag description across all i18n locales, docs, and VS Code
schema. Update example model name from qwen3.5-flash to qwen3-coder-flash. Also fix
completion logic to require a non-empty partial arg before suggesting --fast, preventing
Tab+Enter from accidentally entering fast model mode.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 12:09:46 +08:00
Shaojin Wen
746f67f436
refactor: rename verboseMode to compactMode for better UX clarity (#3075)
The "Compact Mode" label is more intuitive than "Verbose Mode" for users,
as it directly describes the default compact view experience. This change
inverts the boolean semantics (compactMode=false means show full output)
and exposes the setting in the /settings dialog (showInDialog: true).

- Rename ui.verboseMode → ui.compactMode with inverted default (false)
- Rename VerboseModeContext → CompactModeContext (file and exports)
- Rename TOGGLE_VERBOSE_MODE → TOGGLE_COMPACT_MODE in key bindings
- Update all consumer components with inverted logic
- Update i18n keys across 6 locales (verbose → compact)
- Update VS Code settings schema
- Add ui.compactMode documentation to settings.md
- Fix Ctrl+O description in keyboard-shortcuts.md
2026-04-10 11:55:50 +08:00
wenshao
a1c33cdb5e refactor(status-line): remove padding config
The status line is now inlined in the footer's left section,
so horizontal padding is no longer applicable. Remove padding
from StatusLineConfig, settings schema, JSON schema, and docs.
2026-04-08 20:24:33 +08:00
wenshao
0be4d32cb0 Merge remote-tracking branch 'origin/main' into feature/status-line-customization 2026-04-08 18:50:10 +08:00
tanzhenxin
d9a1275913
Merge pull request #2954 from QwenLM/fix/disable-followup-suggestions-default
fix(cli): disable follow-up suggestions by default
2026-04-08 18:02:00 +08:00
tanzhenxin
3c23952ef7
Merge pull request #2897 from QwenLM/feat/thinking-cross-turn-retention-idle-cleanup
feat(core): thinking block cross-turn retention with idle cleanup
2026-04-08 15:26:53 +08:00
wenshao
6a55a9aeea feat(config): make thinking idle threshold configurable and lower default to 5min
Align with observed provider prompt-cache TTL (~5 min). Add
`context.gapThresholdMinutes` setting so users can tune the threshold
for providers with different cache TTLs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 14:21:06 +08:00
wenshao
51964fa4b9 Merge remote-tracking branch 'origin/main' into feature/status-line-customization
# Conflicts:
#	packages/cli/src/ui/components/Footer.tsx
2026-04-08 05:05:04 +08:00
tanzhenxin
03fdaf2faa fix(cli): disable follow-up suggestions by default
Most Qwen OAuth users don't have a fast model configured for this
feature, so it fires a wasted API request on every turn with no
visible benefit. Default to off; users can opt in via settings.
2026-04-07 12:50:27 +00:00
tanzhenxin
b632541629
Merge pull request #2770 from chiga0/feat/add-verbose-mode-switcher
feat: to #2767, support verbose and compact mode switcher with Ctrl+O
2026-04-07 15:48:41 +08:00
chiga0
1d639c97fa fix: revert default to verbose mode (true) and force-show on Error status
Per maintainer review (tanzhenxin): default verboseMode reverted to true
to preserve existing behavior — compact mode is opt-in via Ctrl+O.

Also addresses wenshao's security concern: in compact mode, tool groups
now force-expand on Error status (in addition to existing Confirming
handling), and ToolMessage force-shows result for both Confirming and
Error statuses so users always see diffs before approval and error
details for debugging.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 22:04:31 +08:00
chiga0
8d1866ca55 fix: address PR #2770 review feedback for verbose/compact mode toggle
- Fix default value: compact mode (verboseMode=false) is now the default,
  matching PR description and intended UX
- Extract shared ToolStatusIndicator component to eliminate duplicate
  status icon rendering between ToolMessage and CompactToolGroupDisplay
- Memoize VerboseModeProvider context value to prevent unnecessary
  re-renders of all consumer components
- Clear frozenSnapshot on WaitingForConfirmation state to ensure tool
  confirmation UI remains interactive during mid-stream toggle
- Replace magic string 'Shell' with SHELL_NAME constant in ToolMessage
- Remove unused i18n translation keys (verbose/compact mode messages)
- Update snapshots for Footer and ToolGroupMessage tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 15:07:59 +08:00
wenshao
8d85492913 feat(ui): rewrite customizable status line
Rewrite the status line feature (originally by Gemini 3.1 Pro) to align
with the upstream design:

- Settings: change from plain string to object `{ type, command, padding? }`
- Hook: event-driven with 300ms debounce instead of 5s polling; pass
  structured JSON context (session, model, tokens, vim) via stdin;
  generation counter to ignore stale exec callbacks; EPIPE guard on stdin
- Footer: render status line as dedicated row with dimColor + truncate;
  suppress "? for shortcuts" hint when status line is active
- Add `/statusline` slash command that delegates to a statusline-setup agent
- Add `statusline-setup` built-in agent with PS1 conversion instructions
- Remove unrelated changes (whitespace, formatting, package-lock, test file)
- Fix copyright headers (Google LLC → Qwen)
- Fix config path references (~/.qwen-code → ~/.qwen)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 08:04:20 +08:00
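The event-driven status line hook above pairs a 300ms debounce with a generation counter so that a slow, superseded command cannot overwrite a newer status line. A minimal sketch of that pattern; the function name and signatures here are illustrative, not the actual hook API:

```typescript
// Each trigger bumps `generation`; results from runs started under an older
// generation are dropped, so only the latest exec result is ever rendered.
function createStatusLineRunner(
  exec: (input: string, cb: (output: string) => void) => void,
  render: (output: string) => void,
  debounceMs = 300,
) {
  let generation = 0;
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (contextJson: string) => {
    const myGen = ++generation; // invalidate any in-flight run
    clearTimeout(timer);
    timer = setTimeout(() => {
      exec(contextJson, (output) => {
        if (myGen === generation) {
          render(output); // stale callbacks (myGen < generation) are ignored
        }
      });
    }, debounceMs);
  };
}
```

The same generation check also covers the EPIPE/stale-callback case the commit mentions: a callback that fires after a newer trigger simply fails the guard.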
wenshao
6784f0c02c feat(ui): add customizable status line
Allow users to configure a custom shell command to display in the UI footer status line.
2026-04-06 07:10:50 +08:00
chiga0
6fd29b698b fix: address PR review feedback for verbose/compact mode toggle
- Change default verboseMode to true (preserving current UX behavior)
- Fix compact mode hiding active shell output (add forceShowResult + isUserInitiated)
- Fix asymmetric frozen snapshot (freeze on ANY toggle during streaming)
- Fix copyright header in VerboseModeContext.tsx (Google LLC → Qwen)
- Add proper translations for all 6 locales (de/ja/pt/ru/zh/en)
- Rewrite CompactToolGroupDisplay with bordered box, i18n hint, shell detection
- Fix Pending status color (theme.text.secondary instead of theme.status.success)
- Fix description casing: ctrl+o → Ctrl+O
- Add explanatory comment for useCallback settings dependency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 20:43:06 +08:00
Shaojin Wen
3bce84d5da
feat(cli, webui): add follow-up suggestions feature (#2525)
* feat(cli, webui): add follow-up suggestions feature

Implement context-aware follow-up suggestions that appear after task
completion, suggesting relevant next actions like "commit this", "run
tests", etc.

- Add `followup/` module with types, generator, and rule-based provider
- Export follow-up types and functions from core index
- 8 default suggestion rules covering common workflows

- Add `useFollowupSuggestionsCLI` hook for Ink/React
- Integrate suggestion generation in AppContainer when streaming completes
- Add Tab key to accept, arrow keys to cycle through suggestions
- Display suggestions as ghost text in input prompt

- Add `useFollowupSuggestions` hook for React
- Update InputForm to display suggestions as placeholder
- Add CSS styling for suggestion appearance with counter
- Add keyboard handlers (Tab, arrow keys)

- After streaming completes with tool calls, suggestions appear
- Tab accepts the current suggestion
- Left/Right arrows cycle through multiple suggestions
- Typing or pasting dismisses the suggestion

- Shell command rules (tests, git, npm install) don't work yet due to
  history not storing tool arguments
- VSCode extension integration pending
- Web UI needs parent app integration for suggestion generation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve merge conflicts and build errors

- Rebased on upstream main (5d02260c8)
- Fixed JSX structure in InputPrompt.tsx
- Changed `return;` to `return true;` in follow-up handlers
- Added @agentclientprotocol/sdk to core package dependencies
- Restored correct BaseTextInput usage (self-closing, no children)
- Follow-up suggestions now shown via placeholder prop only

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove @agentclientprotocol/sdk from core package.json

The types are imported in fileSystemService.ts but the package
should not be a runtime dependency of core. It's provided by
the CLI package which depends on core. This was causing
package-lock.json sync issues on Node.js 24.x CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore alphabetical order of dependencies in core/package.json

* fix: restore package-lock.json from upstream to fix Node 24.x CI

* fix: resolve acpConnection test failure and ESLint warning

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style: apply prettier formatting after merge

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(followup): address review issues in follow-up suggestions

- Export followupState.ts from core index (was dead code)
- Refactor CLI and WebUI hooks to use shared followupReducers (eliminate duplication)
- Move side effects out of setState updaters via queueMicrotask
- Fix AppContainer useEffect dependency on unstable historyManager.history reference
- Reorder matchesRule to check pattern before condition (cheaper first)
- Make RuleBasedProvider collect from all matching rules with dedup and limit
- Add missing resetGenerator export for testing
- Add explicit implements SuggestionProvider to RuleBasedProvider
- Fix unstable followup object in useEffect dependency arrays
- Merge duplicate imports to fix eslint import/no-duplicates warnings
- Standardize copyright year to 2025
- Add test files for followupState, ruleBasedProvider, suggestionGenerator

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address review feedback from PR #2525

- Fix acceptingRef race: set lock synchronously before queueMicrotask
- Derive hasError/wasCancelled from actual tool call statuses
- Incorporate rule priority into suggestion priority calculation
- Clear suggestions immediately when setSuggestions([]) is called
- Add !completion.showSuggestions guard to Tab handler
- Fix onAcceptFollowup type from (string) => void to () => void
- Fix ToolCallInfo.name doc examples to match display names
- Scope CSS counter ::after to data-has-suggestion + empty conditions
- Reset regex lastIndex before test() for g/y flag safety
- Stabilize hook return with useMemo + onAcceptRef pattern
- Add @qwen-code/qwen-code-core as webui external + peerDependency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address second round of review feedback

- Scope CSS max-width to match counter condition (not count=1)
- Only dismiss followup on printable character input, not navigation keys
- Restrict tool_group scan to most recent contiguous block (current turn)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear suggestions on new turn, add search guards

- Clear followupSuggestions when streaming starts (Idle → Responding)
  to prevent stale suggestions from previous turns
- Add !reverseSearchActive && !commandSearchActive guards to Tab handler
  to avoid keybinding conflicts with search modes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address third round of review feedback

- Fix string pattern asymmetry: only match tool names when matchMessage=false
- Collect tool_groups from last user message boundary, not contiguous tail
- Flatten to individual tool calls before slicing to cap at 10 actual calls

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix arrow cycling guard and align rule conditions with patterns

- Remove unreliable textContent check for arrow cycling in WebUI InputForm;
  rely on inputText state which already accounts for zero-width spaces
- Add 'error' to fix/bug rule condition to match its regex pattern
- Add 'clean up' to refactor rule condition to match its regex pattern

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset acceptingRef in clear() to prevent deadlock

If clear() is called during accept debounce window, acceptingRef
could remain stuck true permanently. Now reset in clear().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): cancel pending timeout in dismiss() and accept()

Prevents stale suggestion timeout from re-showing suggestions
after user dismisses or accepts during the 300ms delay window.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset lastIndex in removeRules() for g/y flag safety

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
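The `lastIndex` resets in the commits above guard against a standard JavaScript pitfall: a RegExp created with the `g` or `y` flag keeps matching state on the instance between calls. A small illustration:

```typescript
// With /g or /y, RegExp stores its position in `lastIndex`, so repeated
// test() calls on the same instance resume mid-string and can miss matches.
const pattern = /run tests/g;
const text = "please run tests now";

const first = pattern.test(text);  // true; advances lastIndex past the match
const second = pattern.test(text); // false; resumed from lastIndex

// Resetting lastIndex before each use restores stateless behavior.
pattern.lastIndex = 0;
const third = pattern.test(text);  // true again
```

This is why reusable rule patterns are reset before every `test()` call rather than trusting the instance to start from position 0.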

* fix(vscode-ide-companion): mark @qwen-code/qwen-code-core as external in webview esbuild

The webui package now declares @qwen-code/qwen-code-core as external in its
vite build config. Without this change, the vscode-ide-companion webview
esbuild (platform: 'browser') would try to bundle core's Node.js-only
dependencies (undici, @grpc/grpc-js, fs, stream, etc.), causing 562 build
errors during `npm ci`.

* fix: restore node_modules/@google/gemini-cli-test-utils workspace link in lockfile

The top-level workspace symlink entry was accidentally removed by a local
npm install in commit 004baaeb, which replaced it with a nested
packages/cli/node_modules/ entry. npm ci requires the top-level link entry
to be present in the lockfile, otherwise it fails with:
  "Missing: @google/gemini-cli-test-utils@0.13.0 from lock file"

Also syncs @qwen-code/qwen-code-core peerDependency into the lockfile
to match the updated packages/webui/package.json.

* refactor(followup): extract controller and improve rule matching

- Extract createFollowupController for unified state management across CLI and WebUI
- Refactor rule-based provider to match via assistant message keywords instead of tool arguments
- Add enableFollowupSuggestions user setting in UI category
- Decouple WebUI from @qwen-code/qwen-code-core by copying browser-safe state logic
- Add followupHistory.ts for extracting suggestion context from CLI history
- Add comprehensive tests for controller and rule matching scenarios
- Use --app-primary CSS variable for consistency

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(webui): import followup state from core package

- Remove followupState.ts from webui (moved to core)
- Import FollowupSuggestion, FollowupState types from core
- Add @qwen-code/qwen-code-core as peerDependency
- Add core to vite external list
- Update test to include id field in HistoryItem

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(followup): simplify generator, revert unrelated changes

- Collapse FollowupSuggestionsGenerator class into a single
  generateFollowupSuggestions() function (152 → 26 lines)
- Inline extractSuggestionContext into followupHistory.ts
- Remove unused RuleBasedProvider.addRule/removeRules methods
- Revert unrelated acpConnection.test.ts refactor
- Fix followupHistory.test.ts HistoryItem missing id field
- Reduce test verbosity (162 → 36 lines for generator tests)

* fix(followup): fix accept() deadlock and restore UMD globals mapping

- Wrap queueMicrotask callback in try/catch/finally to prevent accepting
  lock from being permanently held when onAccept throws
- Restore '@qwen-code/qwen-code-core': 'QwenCodeCore' in webui
  vite.config.ts globals (regression from d0f38a5f)
- Add test case verifying accept() recovers after callback exception

* fix(followup): log accept callback errors instead of swallowing them

Replace empty catch {} with console.error to ensure onAccept errors
remain visible for debugging while still preventing deadlock via finally.
Update test to verify error is logged.
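The two commits above describe a locking pattern around accept(): take the lock synchronously (before the microtask, to close the race window), run the callback in a microtask, and use try/catch/finally so a throwing callback is logged without leaving the lock held. A hypothetical sketch of that shape; `createAcceptGate` and its signature are illustrative, not the actual code:

```typescript
type AcceptCallback = () => void;

function createAcceptGate(onAccept: AcceptCallback) {
  let accepting = false;
  return {
    accept(): boolean {
      if (accepting) return false; // debounce double-accept
      accepting = true;            // lock set synchronously, before the microtask
      queueMicrotask(() => {
        try {
          onAccept();
        } catch (err) {
          // Surface the error instead of swallowing it...
          console.error("accept callback failed:", err);
        } finally {
          accepting = false;       // ...while still always releasing the lock
        }
      });
      return true;
    },
  };
}
```

Without the `finally`, a throwing `onAccept` would leave `accepting` stuck at `true` and every later accept() would be silently dropped, which is the deadlock the earlier commit fixed.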

* refactor(webui): move followup hook to separate subpath entry

Move useFollowupSuggestions from the root entry to a dedicated
'@qwen-code/webui/followup' subpath so that consumers who only need
UI components are not forced to install @qwen-code/qwen-code-core.

- Add src/followup.ts as separate Vite lib entry
- Remove followup exports from src/index.ts
- Add ./followup exports map in package.json
- Mark @qwen-code/qwen-code-core as optional peerDependency
- Switch build from single-entry UMD to multi-entry ESM/CJS

* fix(webui): restore UMD build and isolate core from root type boundary

- Restore UMD output for root entry (used by CDN demos, export-html, etc.)
- Build followup subpath via separate vite.config.followup.ts to avoid
  Vite's multi-entry + UMD limitation
- Replace FollowupState import in InputForm.tsx with a local structural
  type (InputFormFollowupState) so root .d.ts no longer references
  @qwen-code/qwen-code-core
- Root entry (JS + UMD + .d.ts) is now fully free of core dependency;
  core is only required by '@qwen-code/webui/followup' subpath

* refactor(followup): replace rule-based suggestions with LLM-based prompt suggestion

Replace the hardcoded rule-based follow-up suggestion engine with an LLM-based
prompt suggestion system, aligned with Claude Code's NES (Next-step Suggestion)
architecture.

Core changes:
- Replace ruleBasedProvider with generatePromptSuggestion using BaseLlmClient.generateJson()
- Port Claude Code's SUGGESTION_PROMPT and 14 filter rules (shouldFilterSuggestion)
- Simplify state from multi-suggestion array to single string (FollowupState)
- Add framework-agnostic controller with Object.freeze'd initial state

Guard conditions (9 checks):
- Settings toggle, non-interactive/SDK mode, plan mode
- Permission/confirmation/loop-detection dialogs, elicitation requests
- API error response detection, conversation history limit (slice -40)

UI interaction (CLI + WebUI):
- Tab: fill suggestion into input
- Enter: accept and submit
- Right Arrow: fill without submitting
- Typing/paste: dismiss suggestion
- Autocomplete conflict prevention

Telemetry (PromptSuggestionEvent):
- outcome (accepted/ignored/suppressed), accept_method (tab/enter/right)
- time_to_accept_ms, time_to_ignore_ms, time_to_first_keystroke_ms
- suggestion_length, similarity, was_focused_when_shown, prompt_id
- Per-rule suppression logging with reason strings

Deleted files:
- ruleBasedProvider.ts/test, followupHistory.ts/test, types.ts (dead FollowupSuggestion type)

13 rounds of adversarial audit, 17 issues found and fixed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address qwen3.6-plus-preview review findings

P0: Fix API error detection — check pendingGeminiHistoryItems for error
items (API errors go to pending items, not historyManager.history).

P1: Don't log abort as 'error' in telemetry — aborts are normal user
behavior (user started typing), not errors.

P3: Early return in dismiss() when state already cleared, avoiding
redundant applyState call after accept().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(settings): update suggestion feature description to match current behavior

Remove outdated "arrow keys to cycle" text — the feature now uses
Tab/Right Arrow to accept and Enter to accept+submit (no cycling).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix WebUI Enter submitting empty text + defend onOutcome

P0/P1: WebUI Enter handler now passes suggestion text explicitly via
onSubmit(e, followupSuggestion) instead of relying on React setState
(which is async and would leave inputText as "" in the closure).

P3: Wrap onOutcome callbacks in try/catch in both accept() and dismiss()
so telemetry errors cannot block state transitions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): allow setSuggestion(null) when disabled + fix dts clobber

- setSuggestion(null) now always clears state/timers even when disabled,
  preventing stale suggestions from lingering after feature toggle.
- Set insertTypesEntry: false in followup vite config to prevent
  overwriting the main build's index.d.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(webui): thread explicitText through submit chain for Enter accept

handleSubmit and handleSubmitWithScroll now accept an optional
explicitText parameter. When provided (e.g., from prompt suggestion
Enter accept), it is used instead of the closure-captured inputText,
fixing the React setState race where onSubmit reads stale empty text.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — 4 fixes

- Enter accept: use buffer.text.length === 0 instead of !trim() to
  prevent whitespace-only input from triggering suggestion accept
- Move ref tracking from render body to useEffect to avoid
  render-time side effects in StrictMode/concurrent rendering
- Align PromptSuggestionEvent event.name to 'qwen-code.prompt_suggestion'
  matching the EVENT_PROMPT_SUGGESTION constant used by the logger
- Fix onOutcome JSDoc: remove mention of 'suppressed' (handled separately)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

- Use curated history (getChat().getHistory(true)) to avoid invalid
  entries causing API 400 errors in suggestion generation
- Use method signature for onSubmit in InputFormProps to maintain
  bivariant compatibility with existing consumers under strictFunctionTypes
- Tighten @qwen-code/qwen-code-core peer dependency to >=0.13.1

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add prompt cache sharing + speculation engine

Phase 1 — Forked Query (cache sharing):
- CacheSafeParams: snapshot of generationConfig (systemInstruction + tools)
  + curated history + model + version, saved after each successful main turn
- createForkedChat: isolated GeminiChat sharing the same cache prefix for
  DashScope cache_control hit
- runForkedQuery: single-turn request via forked chat with JSON schema support
- suggestionGenerator: uses forked query when CacheSafeParams available,
  falls back to BaseLlmClient.generateJson otherwise
- GeminiChat.getGenerationConfig(): new getter for cache param snapshots
- Feature flag: enableCacheSharing (default: false)

Phase 2 — Speculation (predictive execution):
- OverlayFs: copy-on-write filesystem for speculation file isolation
  (/tmp/qwen-speculation/{pid}/{id}/), handles new files + existing files
- speculationToolGate: tool boundary enforcement using AST-based shell
  checker (not deprecated regex), write tools gated by ApprovalMode
  (only auto-edit/yolo allow overlay writes)
- speculation.ts: startSpeculation (on suggestion display), acceptSpeculation
  (on Tab/Enter — copies overlay to real FS, injects history via addHistory),
  abortSpeculation (on user input/new turn — cleanup overlay)
- Custom execution loop: toolRegistry.getTool → tool.build → invocation.execute
  (bypasses CoreToolScheduler — permission handled by toolGate)
- ensureToolResultPairing: strips unpaired functionCalls at boundary
- Boundary-aware tool result preservation: keeps executed tool results
  even when boundary truncates remaining calls
- Feature flag: enableSpeculation (default: false)

Telemetry:
- SpeculationEvent: outcome, turns_used, files_written, tool_use_count,
  duration_ms, boundary_type, had_pipelined_suggestion
- logSpeculation logger function

Security:
- Write tools only allowed in auto-edit/yolo mode during speculation
- Shell commands gated by isShellCommandReadOnlyAST (AST parser)
- Unknown/MCP tools always hit boundary (safe default)
- All structuredClone for cache param isolation

4 rounds of adversarial audit, 20+ issues found and fixed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
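The copy-on-write overlay in Phase 2 above redirects speculative writes into a temp directory, resolves reads through the overlay first with fallback to the real filesystem, and copies overlay contents back on accept. A much-reduced sketch assuming a flat file layout; the real OverlayFs additionally handles new vs. existing files, cleanup, and path-traversal checks:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

class MiniOverlay {
  private readonly root = fs.mkdtempSync(path.join(os.tmpdir(), "spec-"));
  private readonly written = new Map<string, string>(); // real path -> overlay path

  // Writes are redirected into the overlay; the real file is untouched.
  write(realPath: string, content: string): void {
    const overlayPath = path.join(
      this.root,
      `${this.written.size}-${path.basename(realPath)}`,
    );
    fs.writeFileSync(overlayPath, content);
    this.written.set(path.resolve(realPath), overlayPath);
  }

  // Reads see overlay-written files first, falling back to the real FS.
  read(realPath: string): string {
    const overlayPath = this.written.get(path.resolve(realPath));
    return fs.readFileSync(overlayPath ?? realPath, "utf8");
  }

  // On accept, copy overlay contents onto the real filesystem.
  apply(): void {
    for (const [realPath, overlayPath] of this.written) {
      fs.copyFileSync(overlayPath, realPath);
    }
  }
}
```

Aborting a speculation then reduces to deleting the overlay directory; nothing on the real filesystem was ever modified.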

* fix(followup): address Copilot review — curated history, type compat, peer version

- Move web_fetch/web_search from SAFE_READ_ONLY to BOUNDARY tools
  (they require user confirmation for network requests)
- Add overlay read path resolution for read tools (resolveReadPaths)
  so speculative reads see overlay-written files
- Wire enableCacheSharing setting into generatePromptSuggestion
- Fix esbuild comment to not hardcode webui version

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): use index-based tracking for boundary tool pairing

Track executed function calls by order (first N matching
functionResponses.length) instead of by name. Fixes incorrect
pairing when model emits multiple calls with the same tool name.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): handle undefined functionCall.name + wrap rewritePathArgs

- Skip functionCall parts with missing name instead of non-null assertion
- Wrap rewritePathArgs in try/catch — treat path rewrite failure as boundary

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): pipelined suggestion, UI rendering, dismiss abort

- Pipelined suggestion: after speculation completes, generate next
  suggestion using augmented context. Promoted on accept.
- UI rendering: completed speculation results rendered via historyManager.
- Dismiss abort: typing/pasting calls dismissPromptSuggestion → clears
  promptSuggestion → useEffect aborts running speculation immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear cache on reset, truncate history, fix test + comment

- Clear CacheSafeParams on startChat/resetChat to prevent cross-session leakage
- Truncate history to 40 entries before deep clone in saveCacheSafeParams
  to reduce CPU/memory overhead on long sessions
- Update stale comment about speculation dismiss lifecycle
- Add onAccept assertion to accept test with proper microtask flush

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): add prompt suggestion design documentation

- prompt-suggestion-design.md: architecture, generation, filtering, state
  management, keyboard interaction, telemetry, feature flags
- speculation-design.md: copy-on-write overlay, tool gate security, boundary
  handling, pipelined suggestion, forked query cache sharing
- prompt-suggestion-implementation.md: implementation status, test coverage,
  audit history, Claude Code alignment tracking

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): align catch comment with silent behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): wire augmented context into pipelined suggestion + guard Tab/Right

- Pipelined suggestion now includes the accepted suggestion text and
  speculated model response as context for the next prediction
- Tab/ArrowRight handlers only preventDefault when onAcceptFollowup
  is provided, preventing key interception without a wired callback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): filter thought parts + add filePath to path keys

- Skip thought/reasoning parts from model responses to prevent leaking
  internal reasoning into speculated history
- Add 'filePath' to path rewrite key list for LSP and other tools that
  use camelCase argument names

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): resolve relative paths against realCwd not process.cwd

Relative tool paths are now resolved against the overlay's realCwd
before computing the relative path, preventing incorrect outside-cwd
detection when process.cwd() differs from config.getCwd().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix 4 doc-code inconsistencies

- Guard conditions: clarify 13 code checks vs 11 table categories,
  separate feature flags from guard block, add streaming transition
- Filter rules: 14 → 12 (actual count in code and table)
- BOUNDARY_TOOLS: add todo_write + exit_plan_mode to doc table
- SpeculationEvent: 8 → 7 fields (matching code)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): turns_used metric + reuse SUGGESTION_PROMPT + reduce clones

- turns_used: count only model messages (not all Content entries)
  to accurately reflect LLM round-trips instead of inflated 3x count
- Pipelined suggestion: reuse exported SUGGESTION_PROMPT from
  suggestionGenerator instead of a degraded local copy, ensuring
  consistent quality (EXAMPLES, NEVER SUGGEST rules included)
- createForkedChat: replace redundant structuredClone with shallow
  copies since params are already deep-cloned snapshots

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): speculation UI tool rendering + speculationModel setting

- Speculation UI: render tool calls as tool_group HistoryItems with
  structured name/description/result instead of plain text only
- speculationModel setting: allows using a cheaper/faster model for
  speculation and pipelined suggestion. Leave empty to use main model.
  Passed through startSpeculation → runSpeculativeLoop → pipelined.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): sync docs with latest code changes

- Add speculationModel setting to feature flags table
- Document tool_group UI rendering in speculation accept flow
- Fix createForkedChat: deep clone → shallow copy (already cloned snapshots)
- Document pipelined suggestion SUGGESTION_PROMPT reuse
- Add Model Override and UI Rendering sections to speculation-design
- Update line counts to match actual file sizes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): add unit tests for overlayFs, toolGate, forkedQuery

overlayFs (15 tests): COW write, read resolution, apply, cleanup, path traversal
speculationToolGate (24 tests): tool categories, approval mode gating, shell AST, path rewrite
forkedQuery (6 tests): cache params save/get/clear, deep clone, version detection

Total: 27 → 173 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): P0-P2 test coverage for speculation + controller + toolGate

speculation.test.ts (7 tests):
- ensureToolResultPairing: empty, no calls, paired, unpaired text+call,
  unpaired call-only, user-ending, empty parts

followupState.test.ts (+8 tests = 15 total):
- onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared
- clear(): resets accepting lock allowing re-accept
- double accept blocked by debounce
- setSuggestion replaces pending timer

speculationToolGate.test.ts (+3 tests = 27 total):
- resolveReadPaths: overlay path after write, unchanged when not written
- rewritePathArgs: path key coverage

Total: 173 → 190 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): smoke tests + P0-P2 coverage gaps

smoke.test.ts (21 tests): E2E verification across modules
- Filter against realistic LLM outputs (9 good + 7 bad + reason check)
- OverlayFs full round-trip (write → read → apply → verify)
- ToolGate → OverlayFs integration (write redirect → read resolve)
- CacheSafeParams lifecycle (save → mutate → isolation → clear)
- ensureToolResultPairing orphaned functionCalls

followupState.test.ts (+8 tests):
- onOutcome: accepted/tab, ignored/dismiss, error caught, no-op cleared
- clear(): resets accepting lock
- double accept debounce
- setSuggestion replaces pending timer

speculationToolGate.test.ts (+3 tests):
- resolveReadPaths through overlay after write
- path key coverage for rewritePathArgs

Export ensureToolResultPairing for testing.

Total: 190 → 211 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): dismiss aborts suggestion, boundary skip inject, parentSignal check

- dismissPromptSuggestion now also aborts suggestionAbortRef to prevent
  race between dismiss and in-flight startSpeculation
- Boundary speculation: skip acceptSpeculation (which injects history),
  fall through to normal addMessage to avoid duplicate user turns
- startSpeculation: check parentSignal.aborted upfront before starting
- Speculation rendering: use index-based loop instead of indexOf O(n²)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix speculation accept diagram — boundary skips inject

The architecture diagram now shows the branching logic: completed
speculations go through acceptSpeculation (inject + render), while
boundary speculations are discarded and the query is submitted fresh
via addMessage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): enable cache sharing by default

enableCacheSharing now defaults to true. This is a pure cost
optimization with no behavioral change — suggestion generation
uses the forked query path (sharing the main conversation's
prompt cache prefix) when CacheSafeParams are available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): aborted parent skips loop, acceptSpeculation try/finally, doc sync

- startSpeculation: return aborted state immediately when parentSignal
  is already aborted, without creating overlay or starting loop
- acceptSpeculation: wrap in try/finally to guarantee overlay cleanup
  even if applyToReal or addHistory throws
- Doc: enableCacheSharing default false → true (matches code)
- Doc: update test count table (7 → 15 followupState, add 6 new files)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): remove debug logs, add function calling fallback for non-FC models

- Remove all followup-debug process.stderr.write logs
- Add direct text fallback in generateViaBaseLlm when generateJson
  returns {} (model doesn't support function calling, e.g., glm-5.1)
- Add CJK text support in filter: skip whitespace-based word count
  for Chinese/Japanese/Korean text, use character count instead

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
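The CJK handling described in this commit could be sketched roughly as follows; the function names and the exact Unicode ranges here are illustrative assumptions, not the project's actual filter code:

```typescript
// Rough sketch of a CJK-aware length check for suggestion filtering.
// Whitespace tokenization undercounts CJK text (no spaces between words),
// so fall back to counting characters when CJK characters are present.
const CJK_RE = /[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uac00-\ud7af]/;

function hasCjk(text: string): boolean {
  return CJK_RE.test(text);
}

function effectiveWordCount(text: string): number {
  const trimmed = text.trim();
  if (trimmed.length === 0) return 0;
  if (hasCjk(trimmed)) return trimmed.length; // character count for CJK
  return trimmed.split(/\s+/).length; // whitespace tokens otherwise
}
```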

* feat(followup): add suggestionModel setting for faster suggestion generation

New setting `suggestionModel` allows using a smaller/faster model
(e.g., qwen-turbo) for prompt suggestion generation instead of the
main conversation model. Reduces suggestion latency significantly.

Passed through: settings → AppContainer → generatePromptSuggestion
→ generateViaForkedQuery / generateViaBaseLlm (both paths).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): suggestionModel setting, /stats tracking, /about display

- suggestionModel: new setting to use a faster model for suggestion
  generation (e.g., qwen3.5-flash instead of main model glm-5.1)
- /stats: suggestion API calls now report usage to UiTelemetryService
  so token consumption appears in /stats model breakdown
- /about: shows Suggestion Model field (configured or main model)

Also:
- Function calling fallback for non-FC models (direct text generation)
- CJK text support in word count filter (character-based for Chinese)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: add Suggestion Model translations for /about display

en: Suggestion Model | zh: 建议模型 | ja: 提案モデル
de: Vorschlagsmodell | pt: Modelo de Sugestão | ru: Модель предложений

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): always use generateContent for suggestion (not generateJson)

generateJson doesn't expose usageMetadata, so /stats can't track
suggestion model tokens. Switch to direct generateContent which
always returns usage data. Also simplifies the code by removing
the function-calling + fallback dual path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix /stats tracking — use ApiResponseEvent constructor

Use ApiResponseEvent class constructor with proper response_id and
override event.name to match UiEvent type for UiTelemetryService
switch statement. This ensures suggestion model token usage appears
in /stats model output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: fix Chinese translation for Suggestion Model

"建议模型" → "提示建议模型" to avoid ambiguity.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(followup): merge suggestionModel + speculationModel into fastModel

Single unified setting for all background tasks: suggestion generation,
speculation, pipelined suggestions, and future background tasks.

Users only need to understand one concept: main model for conversation,
fast model for background tasks.

- Remove: suggestionModel, speculationModel
- Add: fastModel (ui.fastModel in settings.json)
- Update /about display: "Fast Model" with i18n translations
- Update all 6 locale files (en/zh/ja/de/pt/ru)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(settings): move fastModel to top-level (parallel to model)

fastModel is an independent model concept, not a property of the
main model. Move from model.fastModel to top-level settings.fastModel.

Config: { "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): report usage in both forkedQuery and baseLlm paths

The forkedQuery path (used when enableCacheSharing=true) was not
reporting token usage to UiTelemetryService, so /stats model didn't
show the fast model. Now both paths report usage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add /model --fast command to set fast model

Usage:
  /model --fast qwen3.5-flash  — set fast model
  /model --fast                — show current fast model
  /model                      — open model selection dialog (unchanged)

Saves to user settings (SettingScope.User).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): update to fastModel (replace suggestionModel/speculationModel)

- prompt-suggestion-design.md: speculationModel → fastModel (top-level)
- speculation-design.md: Model Override → Fast Model, update description
- prompt-suggestion-implementation.md: update settings description

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): /model --fast opens model selection dialog for fast model

When called without a model name, /model --fast now opens the same
model selection dialog used by /model, but selecting a model saves
it as fastModel instead of switching the main model.

- useModelCommand: add isFastModelMode state
- ModelDialog: intercept selection in fast model mode, save to fastModel
- DialogManager: pass isFastModelMode prop to ModelDialog
- types.ts: add 'fast-model' dialog type

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): pass resolved model (not undefined) to runForkedQuery

model: modelOverride → model: model (which has the fallback applied)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): /model --fast defaults to current fast model in dialog

When opening the model selection dialog via /model --fast, the
currently configured fastModel is pre-selected instead of the
main model.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add --fast tab completion for /model command

/model <Tab> now shows --fast as a completion option with description.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(schema): regenerate settings.schema.json with new followup settings

Adds enableCacheSharing, enableSpeculation, and fastModel to the
generated JSON schema so CI validation passes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(test): update tests for new Fast Model field in system info

Add "Fast Model" to expected labels in systemInfoFields and bugCommand
tests to match the new field added to /about and bug report output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: trigger PR synchronize event

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 4)

- modelCommand: use getPersistScopeForModelSelection for fastModel,
  return meaningful info message instead of empty content
- ModelDialog: handle $runtime|authType|modelId format in fast-model mode
- forkedQuery: return structuredClone from getCacheSafeParams
- client: fix stale comment about history truncation order
- speculation: detect abort in .then() handler, set 'aborted' status
  and cleanup overlay to prevent leaks
- docs: update test count table

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): add followup suggestions user manual

- New feature page: followup-suggestions.md covering usage, keybindings,
  fast model configuration, settings, and quality filters
- commands.md: add /model --fast command reference
- settings.md: add enableFollowupSuggestions, enableCacheSharing,
  enableSpeculation, and fastModel settings documentation
- _meta.ts: register new page in navigation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): audit fixes for followup suggestions documentation

- followup-suggestions.md: add 300ms delay, WebUI support, plan mode
  guard, non-interactive guard, slash commands as single-word, meta/error
  filters, character limit
- settings.md: move fastModel next to model section, add /model --fast
  cross-reference and link to feature page
- overview.md: add followup suggestions to feature list
- i18n: add missing translations for 'Set fast model for background
  tasks' and 'Fast model updated.' in all 6 locales

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 5)

- modelCommand: remove duplicate info message (keep addItem only)
- followup-suggestions.md: clarify WebUI requires host app wiring
- speculation-design.md: fix abort telemetry description
- i18n: add missing translations for fast model strings

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): remove duplicate message in /model --fast command

Use return message instead of addItem + empty return to avoid
blank INFO line in history. Also handle missing settings service.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(i18n): remove unused 'Fast model updated.' translations

The /model --fast command now returns the model name directly
instead of using this string. Remove dead translations.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): disable thinking mode for suggestion and speculation

Forked queries inherit the main conversation's generationConfig which
may have thinkingConfig enabled. This wastes tokens and adds latency
for background tasks that don't need reasoning. Explicitly set
thinkingConfig.includeThoughts=false in both paths:
- createForkedChat (covers forked query + speculation)
- generateViaBaseLlm (non-cache-sharing fallback)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
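A minimal sketch of the override described above, with an assumed (not the project's actual) GenerationConfig shape:

```typescript
// Illustrative types only; the real generationConfig shape may differ.
interface GenerationConfig {
  temperature?: number;
  thinkingConfig?: { includeThoughts?: boolean };
}

// Background tasks inherit the main conversation's generationConfig but
// force thoughts off. Only the thinking flag changes; the prompt prefix
// is untouched, which is why cache hits are unaffected.
function forBackgroundTask(main: GenerationConfig): GenerationConfig {
  return {
    ...main,
    thinkingConfig: { ...main.thinkingConfig, includeThoughts: false },
  };
}
```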

* docs: document thinking mode auto-disable for background tasks

- User docs: note that thinking is auto-disabled for suggestions/speculation
- Design docs: detail thinkingConfig override in both forked query and
  BaseLlm paths, explain why cache hits are unaffected

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: jinjing.zzj <jinjing.zzj@alibaba-inc.com>
Co-authored-by: yiliang114 <1204183885@qq.com>
2026-04-03 20:07:23 +08:00
tanzhenxin
b2f04418fa
Merge pull request #2628 from QwenLM/feat/channels-telegram
feat(channels): add extensible Channels platform with plugin system and Telegram/WeChat/DingTalk channels
2026-04-01 16:19:08 +08:00
tanzhenxin
76d64c9464
Merge pull request #2731 from QwenLM/feat/in-session-cron-loops
feat(cron): add in-session loop scheduling with cron tools
2026-04-01 16:18:46 +08:00
DennisYu07
9c26c7fe85 add more notes 2026-04-01 12:04:37 +08:00
DennisYu07
5221002831 remove hooks experimental and refactor hook Config 2026-04-01 11:50:23 +08:00
秦奇
b9c17d13ff feat: to #2767, support verbose and compact mode switcher with ctrl-o 2026-03-31 19:00:13 +08:00
tanzhenxin
439a1a46e2 feat(cron): make cron tools opt-in via experimental settings
Change cron/loop tools from opt-out to opt-in. Cron tools are now
disabled by default and can be enabled via:
- settings.json: { "experimental": { "cron": true } }
- Environment variable: QWEN_CODE_ENABLE_CRON=1

This ensures experimental features are explicitly enabled by users
who want to try them.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
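The opt-in gate above could be sketched as follows (settings shape and function name assumed for illustration):

```typescript
// Hypothetical sketch: cron tools register only when explicitly enabled
// via settings.json or the QWEN_CODE_ENABLE_CRON environment variable.
interface ExperimentalSettings {
  cron?: boolean;
}

function cronToolsEnabled(
  experimental: ExperimentalSettings | undefined,
  env: Record<string, string | undefined>,
): boolean {
  // Default is off; either opt-in path flips it on.
  return experimental?.cron === true || env['QWEN_CODE_ENABLE_CRON'] === '1';
}
```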
2026-03-29 02:25:28 +00:00
tanzhenxin
811ccdd02d Merge remote-tracking branch 'origin/main' into feat/channels-telegram 2026-03-27 08:21:41 +00:00
DennisYu07
12eb0f8f8d correct hooks JSON schema 2026-03-26 14:07:36 +08:00
tanzhenxin
3eedc43238 feat(channels): add Telegram channel integration with ACP bridge
Implements the channels infrastructure for connecting external messaging
platforms to Qwen Code via ACP. Phase 1 supports plain text round-trip:
Telegram user sends message -> AcpBridge -> qwen-code --acp -> response
back to Telegram.

New packages:
- @qwen-code/channel-base: AcpBridge, SessionRouter, SenderGate, ChannelBase
- @qwen-code/channel-telegram: TelegramAdapter using telegraf

CLI: `qwen channel start <name>` reads from settings.json channels config,
spawns ACP agent, connects to Telegram via polling.
2026-03-24 06:33:36 +00:00
易良
fbf5ed57d6
feat(storage): support configurable runtime output directory (#2127)
* feat(storage): support configurable runtime output directory (#2014)

Add `advanced.runtimeOutputDir` setting and `QWEN_RUNTIME_DIR` env var
to redirect runtime output (temp files, debug logs, session data, todos,
insights) to a custom directory while keeping config files at ~/.qwen.

- Introduce `Storage.setRuntimeBaseDir()` / `getRuntimeBaseDir()` with
  tilde expansion and relative path resolution
- Add `AsyncLocalStorage`-based `runWithRuntimeBaseDir()` for concurrent
  session isolation in ACP integration
- Update all runtime path methods to use `getRuntimeBaseDir()` instead
  of `getGlobalQwenDir()` (temp, debug, ide, projects, history dirs)
- Config paths (settings, oauth, installation_id, etc.) remain pinned
  to `~/.qwen` regardless of runtime dir configuration
- Add comprehensive tests covering path resolution, env var priority,
  async context isolation, and config path stability
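The tilde expansion and relative-path resolution in the bullets above might look roughly like this (an illustrative sketch assuming Node's os/path; the real setRuntimeBaseDir logic may differ):

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// Sketch: expand a leading "~" (Unix "~/" or Windows "~\") against the
// home directory, and resolve relative paths to absolute ones.
function resolveRuntimeBaseDir(input: string): string {
  if (input === '~') return os.homedir();
  if (input.startsWith('~/') || input.startsWith('~\\')) {
    // Strip "~" plus separator, then normalize both separator styles
    // with a unified split before joining onto the home directory.
    const rest = input.slice(2).split(/[\\/]+/).join(path.sep);
    return path.join(os.homedir(), rest);
  }
  return path.resolve(input); // relative paths resolve against cwd
}
```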

* fix(core/storage): support Windows-style tilde paths

Extend setRuntimeBaseDir to support Windows-style tilde paths (~\),
using unified path-splitting logic to handle both Unix- and
Windows-style path separators

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core/debugLogger): create a new debug directory when the runtime base dir changes

Add an ensuredDebugDirPath tracking variable so that when the runtime
base dir changes, the debug subdirectory is created under the new
directory

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli/acp): support ACP runtime output dir configuration

Add a runWithAcpRuntimeOutputDir helper function that applies the
configured runtimeOutputDir during the ACP Agent's loadSession and
listSessions operations

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(vscode-ide-companion/acpConnection): document the this-alias usage

Add an explanatory comment for the self = this pattern, noting that
this is needed inside nested callbacks

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): add runtime output directory configuration support

* fix(core): update test to use getUserSkillsDirs method

Update storage.test.ts to call getUserSkillsDirs() instead of the
non-existent getUserSkillsDir() method. The method was renamed to
return an array of skill directories.

* fix(core/todoWrite): use path.join for cross-platform path assertion in test

Replace hardcoded forward-slash path `.qwen/todos/` with `path.join('.qwen', 'todos')` to fix Windows CI failure where paths use backslashes.

Made-with: Cursor

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-20 13:53:05 +08:00
LaZzyMan
b59864f554 Merge branch 'main' into feat/support-permission 2026-03-19 19:08:55 +08:00
DennisYu07
6f914e4f4e merge main 2026-03-19 17:12:19 +08:00
LaZzyMan
f9d9a985ce Merge branch 'main' into feat/support-permission 2026-03-19 11:24:30 +08:00
DennisYu07
b236e4152f Merge branch 'main' into feat/hook_sessionstart_sessionend 2026-03-17 20:34:13 -07:00
tanzhenxin
edd8388b27 Merge branch 'main' into feature/arena-agent-collaboration 2026-03-17 14:00:47 +08:00
LaZzyMan
d129ddc489 Merge branch 'main' into feat/support-permission 2026-03-16 11:42:37 +08:00
LaZzyMan
02ea2ed70c fix settings 2026-03-16 11:28:05 +08:00
tanzhenxin
58bee3dec9
Merge pull request #2388 from QwenLM/fix/remove-enableToolOutputTruncation-setting
fix(core): improve shell tool truncation, simplify tool output handling, and remove summarization
2026-03-16 09:51:37 +08:00
tanzhenxin
8161ac4523 fix(hooks): correct JSON schema type for hooks configuration
- Add 'array' type support to SettingItemDefinition
- Change hooks field from object to array type
- Add additionalProperties constraint for env fields
- Fix additionalProperties generation to only apply for object types

This ensures the hooks configuration schema correctly represents hooks as an array
and properly validates environment variable objects.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-15 20:32:56 +08:00