* feat(cli, webui): add follow-up suggestions feature

  Implement context-aware follow-up suggestions that appear after task completion, suggesting relevant next actions like "commit this", "run tests", etc.

  - Add `followup/` module with types, generator, and rule-based provider
  - Export follow-up types and functions from core index
  - 8 default suggestion rules covering common workflows
  - Add `useFollowupSuggestionsCLI` hook for Ink/React
  - Integrate suggestion generation in AppContainer when streaming completes
  - Add Tab key to accept, arrow keys to cycle through suggestions
  - Display suggestions as ghost text in input prompt
  - Add `useFollowupSuggestions` hook for React
  - Update InputForm to display suggestions as placeholder
  - Add CSS styling for suggestion appearance with counter
  - Add keyboard handlers (Tab, arrow keys)
  - After streaming completes with tool calls, suggestions appear
  - Tab accepts the current suggestion
  - Left/Right arrows cycle through multiple suggestions
  - Typing or pasting dismisses the suggestion
  - Shell command rules (tests, git, npm install) don't work yet due to history not storing tool arguments
  - VSCode extension integration pending
  - Web UI needs parent app integration for suggestion generation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve merge conflicts and build errors

  - Rebased on upstream main (5d02260c8)
  - Fixed JSX structure in InputPrompt.tsx
  - Changed `return;` to `return true;` in follow-up handlers
  - Added @agentclientprotocol/sdk to core package dependencies
  - Restored correct BaseTextInput usage (self-closing, no children)
  - Follow-up suggestions now shown via placeholder prop only

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove @agentclientprotocol/sdk from core package.json

  The types are imported in fileSystemService.ts, but the package should not be a runtime dependency of core. It's provided by the CLI package, which depends on core.
  This was causing package-lock.json sync issues on Node.js 24.x CI.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore alphabetical order of dependencies in core/package.json

* fix: restore package-lock.json from upstream to fix Node 24.x CI

* fix: resolve acpConnection test failure and ESLint warning

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style: apply prettier formatting after merge

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(followup): address review issues in follow-up suggestions

  - Export followupState.ts from core index (was dead code)
  - Refactor CLI and WebUI hooks to use shared followupReducers (eliminates duplication)
  - Move side effects out of setState updaters via queueMicrotask
  - Fix AppContainer useEffect dependency on unstable historyManager.history reference
  - Reorder matchesRule to check pattern before condition (cheaper first)
  - Make RuleBasedProvider collect from all matching rules with dedup and limit
  - Add missing resetGenerator export for testing
  - Add explicit `implements SuggestionProvider` to RuleBasedProvider
  - Fix unstable followup object in useEffect dependency arrays
  - Merge duplicate imports to fix eslint import/no-duplicates warnings
  - Standardize copyright year to 2025
  - Add test files for followupState, ruleBasedProvider, suggestionGenerator

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address review feedback from PR #2525

  - Fix acceptingRef race: set lock synchronously before queueMicrotask
  - Derive hasError/wasCancelled from actual tool call statuses
  - Incorporate rule priority into suggestion priority calculation
  - Clear suggestions immediately when setSuggestions([]) is called
  - Add !completion.showSuggestions guard to Tab handler
  - Fix onAcceptFollowup type from (string) => void to () => void
  - Fix ToolCallInfo.name doc examples to match display names
  - Scope CSS counter ::after to data-has-suggestion + empty conditions
  - Reset regex lastIndex before test() for g/y flag safety
  - Stabilize hook return with useMemo + onAcceptRef pattern
  - Add @qwen-code/qwen-code-core as webui external + peerDependency

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address second round of review feedback

  - Scope CSS max-width to match counter condition (not count=1)
  - Only dismiss followup on printable character input, not navigation keys
  - Restrict tool_group scan to the most recent contiguous block (current turn)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear suggestions on new turn, add search guards

  - Clear followupSuggestions when streaming starts (Idle → Responding) to prevent stale suggestions from previous turns
  - Add !reverseSearchActive && !commandSearchActive guards to the Tab handler to avoid keybinding conflicts with search modes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address third round of review feedback

  - Fix string pattern asymmetry: only match tool names when matchMessage=false
  - Collect tool_groups from the last user message boundary, not the contiguous tail
  - Flatten to individual tool calls before slicing, to cap at 10 actual calls

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix arrow cycling guard and align rule conditions with patterns

  - Remove unreliable textContent check for arrow cycling in WebUI InputForm; rely on inputText state, which already accounts for zero-width spaces
  - Add 'error' to the fix/bug rule condition to match its regex pattern
  - Add 'clean up' to the refactor rule condition to match its regex pattern

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset acceptingRef in clear() to prevent deadlock

  If clear() is called during the accept debounce window, acceptingRef could remain stuck true permanently. Now reset in clear().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): cancel pending timeout in dismiss() and accept()

  Prevents a stale suggestion timeout from re-showing suggestions after the user dismisses or accepts during the 300ms delay window.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset lastIndex in removeRules() for g/y flag safety

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(vscode-ide-companion): mark @qwen-code/qwen-code-core as external in webview esbuild

  The webui package now declares @qwen-code/qwen-code-core as external in its vite build config. Without this change, the vscode-ide-companion webview esbuild (platform: 'browser') would try to bundle core's Node.js-only dependencies (undici, @grpc/grpc-js, fs, stream, etc.), causing 562 build errors during `npm ci`.

* fix: restore node_modules/@google/gemini-cli-test-utils workspace link in lockfile

  The top-level workspace symlink entry was accidentally removed by a local npm install in commit 004baaeb, which replaced it with a nested packages/cli/node_modules/ entry. npm ci requires the top-level link entry to be present in the lockfile; otherwise it fails with: "Missing: @google/gemini-cli-test-utils@0.13.0 from lock file"

  Also syncs the @qwen-code/qwen-code-core peerDependency into the lockfile to match the updated packages/webui/package.json.
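The lastIndex resets mentioned above guard against a classic pitfall: a regex created with the g or y flag is stateful, so a shared rule pattern can silently skip matches on reuse. A minimal sketch (the helper name is illustrative, not the project's actual code):

```typescript
// A /g or /y regex keeps state: test() starts searching at lastIndex and
// advances it, so calling test() twice on the same string can return
// true, then false.
function testPattern(pattern: RegExp, text: string): boolean {
  pattern.lastIndex = 0; // reset before each use; harmless for non-g/y regexes
  return pattern.test(text);
}
```

Without the reset, reusing the same /g instance returns true, then false, because the second search starts past the first match.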
* refactor(followup): extract controller and improve rule matching

  - Extract createFollowupController for unified state management across CLI and WebUI
  - Refactor the rule-based provider to match via assistant message keywords instead of tool arguments
  - Add enableFollowupSuggestions user setting in the UI category
  - Decouple WebUI from @qwen-code/qwen-code-core by copying browser-safe state logic
  - Add followupHistory.ts for extracting suggestion context from CLI history
  - Add comprehensive tests for controller and rule matching scenarios
  - Use the --app-primary CSS variable for consistency

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(webui): import followup state from core package

  - Remove followupState.ts from webui (moved to core)
  - Import FollowupSuggestion, FollowupState types from core
  - Add @qwen-code/qwen-code-core as peerDependency
  - Add core to the vite external list
  - Update test to include the id field in HistoryItem

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(followup): simplify generator, revert unrelated changes

  - Collapse the FollowupSuggestionsGenerator class into a single generateFollowupSuggestions() function (152 → 26 lines)
  - Inline extractSuggestionContext into followupHistory.ts
  - Remove unused RuleBasedProvider.addRule/removeRules methods
  - Revert unrelated acpConnection.test.ts refactor
  - Fix followupHistory.test.ts HistoryItem missing id field
  - Reduce test verbosity (162 → 36 lines for generator tests)

* fix(followup): fix accept() deadlock and restore UMD globals mapping

  - Wrap the queueMicrotask callback in try/catch/finally to prevent the accepting lock from being permanently held when onAccept throws
  - Restore '@qwen-code/qwen-code-core': 'QwenCodeCore' in webui vite.config.ts globals (regression from d0f38a5f)
  - Add a test case verifying accept() recovers after a callback exception

* fix(followup): log accept callback errors instead of swallowing them

  Replace the empty catch {} with console.error so that onAccept errors remain visible for debugging while still preventing deadlock via finally. Update the test to verify the error is logged.

* refactor(webui): move followup hook to separate subpath entry

  Move useFollowupSuggestions from the root entry to a dedicated '@qwen-code/webui/followup' subpath so that consumers who only need UI components are not forced to install @qwen-code/qwen-code-core.

  - Add src/followup.ts as a separate Vite lib entry
  - Remove followup exports from src/index.ts
  - Add a ./followup exports map in package.json
  - Mark @qwen-code/qwen-code-core as an optional peerDependency
  - Switch the build from single-entry UMD to multi-entry ESM/CJS

* fix(webui): restore UMD build and isolate core from root type boundary

  - Restore UMD output for the root entry (used by CDN demos, export-html, etc.)
  - Build the followup subpath via a separate vite.config.followup.ts to avoid Vite's multi-entry + UMD limitation
  - Replace the FollowupState import in InputForm.tsx with a local structural type (InputFormFollowupState) so the root .d.ts no longer references @qwen-code/qwen-code-core
  - The root entry (JS + UMD + .d.ts) is now fully free of the core dependency; core is only required by the '@qwen-code/webui/followup' subpath

* refactor(followup): replace rule-based suggestions with LLM-based prompt suggestion

  Replace the hardcoded rule-based follow-up suggestion engine with an LLM-based prompt suggestion system, aligned with Claude Code's NES (Next-step Suggestion) architecture.

  Core changes:
  - Replace ruleBasedProvider with generatePromptSuggestion using BaseLlmClient.generateJson()
  - Port Claude Code's SUGGESTION_PROMPT and 14 filter rules (shouldFilterSuggestion)
  - Simplify state from a multi-suggestion array to a single string (FollowupState)
  - Add a framework-agnostic controller with an Object.freeze'd initial state

  Guard conditions (9 checks):
  - Settings toggle, non-interactive/SDK mode, plan mode
  - Permission/confirmation/loop-detection dialogs, elicitation requests
  - API error response detection, conversation history limit (slice -40)

  UI interaction (CLI + WebUI):
  - Tab: fill suggestion into input
  - Enter: accept and submit
  - Right Arrow: fill without submitting
  - Typing/paste: dismiss suggestion
  - Autocomplete conflict prevention

  Telemetry (PromptSuggestionEvent):
  - outcome (accepted/ignored/suppressed), accept_method (tab/enter/right)
  - time_to_accept_ms, time_to_ignore_ms, time_to_first_keystroke_ms
  - suggestion_length, similarity, was_focused_when_shown, prompt_id
  - Per-rule suppression logging with reason strings

  Deleted files:
  - ruleBasedProvider.ts/test, followupHistory.ts/test, types.ts (dead FollowupSuggestion type)

  13 rounds of adversarial audit; 17 issues found and fixed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address qwen3.6-plus-preview review findings

  - P0: Fix API error detection — check pendingGeminiHistoryItems for error items (API errors go to pending items, not historyManager.history).
  - P1: Don't log abort as 'error' in telemetry — aborts are normal user behavior (the user started typing), not errors.
  - P3: Early return in dismiss() when state is already cleared, avoiding a redundant applyState call after accept().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(settings): update suggestion feature description to match current behavior

  Remove outdated "arrow keys to cycle" text — the feature now uses Tab/Right Arrow to accept and Enter to accept+submit (no cycling).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix WebUI Enter submitting empty text + defend onOutcome

  - P0/P1: The WebUI Enter handler now passes the suggestion text explicitly via onSubmit(e, followupSuggestion) instead of relying on React setState (which is async and would leave inputText as "" in the closure).
  - P3: Wrap onOutcome callbacks in try/catch in both accept() and dismiss() so telemetry errors cannot block state transitions.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): allow setSuggestion(null) when disabled + fix dts clobber

  - setSuggestion(null) now always clears state/timers even when disabled, preventing stale suggestions from lingering after a feature toggle.
  - Set insertTypesEntry: false in the followup vite config to prevent overwriting the main build's index.d.ts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(webui): thread explicitText through submit chain for Enter accept

  handleSubmit and handleSubmitWithScroll now accept an optional explicitText parameter. When provided (e.g., from a prompt suggestion Enter accept), it is used instead of the closure-captured inputText, fixing the React setState race where onSubmit reads stale empty text.
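The setState race behind the explicitText change reduces to one decision: React state updates are asynchronous, so an Enter handler's closure may still see the pre-accept input text. A hedged sketch (names illustrative, not the actual InputForm API):

```typescript
// When a suggestion is accepted via Enter, the handler passes the text
// explicitly; otherwise it falls back to the (possibly stale) value the
// closure captured at render time.
function resolveSubmitText(closureInputText: string, explicitText?: string): string {
  return explicitText ?? closureInputText;
}
```

Here `closureInputText` stands in for the inputText captured at render time, which is still "" when setState has not flushed yet.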
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — 4 fixes

  - Enter accept: use buffer.text.length === 0 instead of !trim() to prevent whitespace-only input from triggering suggestion accept
  - Move ref tracking from the render body to useEffect to avoid render-time side effects in StrictMode/concurrent rendering
  - Align PromptSuggestionEvent event.name to 'qwen-code.prompt_suggestion', matching the EVENT_PROMPT_SUGGESTION constant used by the logger
  - Fix onOutcome JSDoc: remove mention of 'suppressed' (handled separately)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

  - Use curated history (getChat().getHistory(true)) to avoid invalid entries causing API 400 errors in suggestion generation
  - Use a method signature for onSubmit in InputFormProps to maintain bivariant compatibility with existing consumers under strictFunctionTypes
  - Tighten the @qwen-code/qwen-code-core peer dependency to >=0.13.1

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add prompt cache sharing + speculation engine

  Phase 1 — Forked Query (cache sharing):
  - CacheSafeParams: snapshot of generationConfig (systemInstruction + tools) + curated history + model + version, saved after each successful main turn
  - createForkedChat: isolated GeminiChat sharing the same cache prefix for a DashScope cache_control hit
  - runForkedQuery: single-turn request via the forked chat with JSON schema support
  - suggestionGenerator: uses the forked query when CacheSafeParams are available, falls back to BaseLlmClient.generateJson otherwise
  - GeminiChat.getGenerationConfig(): new getter for cache param snapshots
  - Feature flag: enableCacheSharing (default: false)

  Phase 2 — Speculation (predictive execution):
  - OverlayFs: copy-on-write filesystem for speculation file isolation (/tmp/qwen-speculation/{pid}/{id}/); handles new files + existing files
  - speculationToolGate: tool boundary enforcement using the AST-based shell checker (not the deprecated regex); write tools gated by ApprovalMode (only auto-edit/yolo allow overlay writes)
  - speculation.ts: startSpeculation (on suggestion display), acceptSpeculation (on Tab/Enter — copies the overlay to the real FS, injects history via addHistory), abortSpeculation (on user input/new turn — cleans up the overlay)
  - Custom execution loop: toolRegistry.getTool → tool.build → invocation.execute (bypasses CoreToolScheduler — permissions handled by the toolGate)
  - ensureToolResultPairing: strips unpaired functionCalls at the boundary
  - Boundary-aware tool result preservation: keeps executed tool results even when the boundary truncates remaining calls
  - Feature flag: enableSpeculation (default: false)

  Telemetry:
  - SpeculationEvent: outcome, turns_used, files_written, tool_use_count, duration_ms, boundary_type, had_pipelined_suggestion
  - logSpeculation logger function

  Security:
  - Write tools only allowed in auto-edit/yolo mode during speculation
  - Shell commands gated by isShellCommandReadOnlyAST (AST parser)
  - Unknown/MCP tools always hit the boundary (safe default)
  - structuredClone throughout for cache param isolation

  4 rounds of adversarial audit; 20+ issues found and fixed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — tool gating, overlay reads, cache-sharing wiring

  - Move web_fetch/web_search from SAFE_READ_ONLY to BOUNDARY tools (they require user confirmation for network requests)
  - Add overlay read path resolution for read tools (resolveReadPaths) so speculative reads see overlay-written files
  - Wire the enableCacheSharing setting into generatePromptSuggestion
  - Fix the esbuild comment to not hardcode the webui version

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): use index-based tracking for boundary tool pairing

  Track executed function calls by order (the first N matching functionResponses.length) instead of by name. Fixes incorrect pairing when the model emits multiple calls with the same tool name.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): handle undefined functionCall.name + wrap rewritePathArgs

  - Skip functionCall parts with a missing name instead of using a non-null assertion
  - Wrap rewritePathArgs in try/catch — treat path rewrite failure as a boundary

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): pipelined suggestion, UI rendering, dismiss abort

  - Pipelined suggestion: after speculation completes, generate the next suggestion using the augmented context. Promoted on accept.
  - UI rendering: completed speculation results are rendered via historyManager.
  - Dismiss abort: typing/pasting calls dismissPromptSuggestion → clears promptSuggestion → a useEffect aborts the running speculation immediately.
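The index-based pairing fix above can be sketched as follows (shapes simplified, not the real ensureToolResultPairing implementation): with N functionResponses present, the first N functionCalls are considered executed and the unpaired tail is dropped.

```typescript
interface FunctionCall {
  name: string;
  args: Record<string, unknown>;
}

// Pair calls with responses by position, not by name: the model may emit
// the same tool name twice, so name-based matching would mispair them.
function keepExecutedCalls(calls: FunctionCall[], responseCount: number): FunctionCall[] {
  return calls.slice(0, Math.max(0, responseCount));
}
```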
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear cache on reset, truncate history, fix test + comment

  - Clear CacheSafeParams on startChat/resetChat to prevent cross-session leakage
  - Truncate history to 40 entries before deep clone in saveCacheSafeParams to reduce CPU/memory overhead on long sessions
  - Update stale comment about speculation dismiss lifecycle
  - Add onAccept assertion to accept test with proper microtask flush

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): add prompt suggestion design documentation

  - prompt-suggestion-design.md: architecture, generation, filtering, state management, keyboard interaction, telemetry, feature flags
  - speculation-design.md: copy-on-write overlay, tool gate security, boundary handling, pipelined suggestion, forked query cache sharing
  - prompt-suggestion-implementation.md: implementation status, test coverage, audit history, Claude Code alignment tracking

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): align catch comment with silent behavior

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): wire augmented context into pipelined suggestion + guard Tab/Right

  - Pipelined suggestion now includes the accepted suggestion text and speculated model response as context for the next prediction
  - Tab/ArrowRight handlers only preventDefault when onAcceptFollowup is provided, preventing key interception without a wired callback

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): filter thought parts + add filePath to path keys

  - Skip thought/reasoning parts from model responses to prevent leaking internal reasoning into speculated history
  - Add 'filePath' to path rewrite key list for LSP and other tools that use camelCase argument names

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): resolve relative paths against realCwd, not process.cwd()

  Relative tool paths are now resolved against the overlay's realCwd before computing the relative path, preventing incorrect outside-cwd detection when process.cwd() differs from config.getCwd().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix 4 doc-code inconsistencies

  - Guard conditions: clarify 13 code checks vs 11 table categories, separate feature flags from guard block, add streaming transition
  - Filter rules: 14 → 12 (actual count in code and table)
  - BOUNDARY_TOOLS: add todo_write + exit_plan_mode to doc table
  - SpeculationEvent: 8 → 7 fields (matching code)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): turns_used metric + reuse SUGGESTION_PROMPT + reduce clones

  - turns_used: count only model messages (not all Content entries) to accurately reflect LLM round-trips instead of an inflated 3x count
  - Pipelined suggestion: reuse the exported SUGGESTION_PROMPT from suggestionGenerator instead of a degraded local copy, ensuring consistent quality (EXAMPLES, NEVER SUGGEST rules included)
  - createForkedChat: replace redundant structuredClone with shallow copies, since params are already deep-cloned snapshots

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): speculation UI tool rendering + speculationModel setting

  - Speculation UI: render tool calls as tool_group HistoryItems with structured name/description/result instead of plain text only
  - speculationModel setting: allows using a cheaper/faster model for speculation and pipelined suggestion. Leave empty to use the main model. Passed through startSpeculation → runSpeculativeLoop → pipelined.
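The realCwd fix above boils down to anchoring relative paths before the outside-cwd check. A minimal sketch (the function name is hypothetical; the real OverlayFs logic does more):

```typescript
import * as path from 'node:path';

// Resolve a tool-supplied path against the overlay's realCwd (not
// process.cwd()), then reject anything that escapes the workspace.
function toOverlayRelative(toolPath: string, realCwd: string): string | null {
  const absolute = path.resolve(realCwd, toolPath);
  const relative = path.relative(realCwd, absolute);
  if (relative.startsWith('..') || path.isAbsolute(relative)) {
    return null; // outside the workspace — treat as a boundary
  }
  return relative;
}
```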
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): sync docs with latest code changes

  - Add speculationModel setting to feature flags table
  - Document tool_group UI rendering in speculation accept flow
  - Fix createForkedChat: deep clone → shallow copy (already cloned snapshots)
  - Document pipelined suggestion SUGGESTION_PROMPT reuse
  - Add Model Override and UI Rendering sections to speculation-design
  - Update line counts to match actual file sizes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): add unit tests for overlayFs, toolGate, forkedQuery

  - overlayFs (15 tests): COW write, read resolution, apply, cleanup, path traversal
  - speculationToolGate (24 tests): tool categories, approval mode gating, shell AST, path rewrite
  - forkedQuery (6 tests): cache params save/get/clear, deep clone, version detection

  Total: 27 → 173 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): P0-P2 test coverage for speculation + controller + toolGate

  speculation.test.ts (7 tests):
  - ensureToolResultPairing: empty, no calls, paired, unpaired text+call, unpaired call-only, user-ending, empty parts

  followupState.test.ts (+8 tests = 15 total):
  - onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared
  - clear(): resets accepting lock allowing re-accept
  - double accept blocked by debounce
  - setSuggestion replaces pending timer

  speculationToolGate.test.ts (+3 tests = 27 total):
  - resolveReadPaths: overlay path after write, unchanged when not written
  - rewritePathArgs: path key coverage

  Total: 173 → 190 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): smoke tests + P0-P2 coverage gaps

  smoke.test.ts (21 tests): E2E verification across modules
  - Filter against realistic LLM outputs (9 good + 7 bad + reason check)
  - OverlayFs full round-trip (write → read → apply → verify)
  - ToolGate → OverlayFs integration (write redirect → read resolve)
  - CacheSafeParams lifecycle (save → mutate → isolation → clear)
  - ensureToolResultPairing orphaned functionCalls

  followupState.test.ts (+8 tests):
  - onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared
  - clear(): resets accepting lock
  - double accept debounce
  - setSuggestion replaces pending timer

  speculationToolGate.test.ts (+3 tests):
  - resolveReadPaths through overlay after write
  - path key coverage for rewritePathArgs

  Export ensureToolResultPairing for testing.

  Total: 190 → 211 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): dismiss aborts suggestion, boundary skip inject, parentSignal check

  - dismissPromptSuggestion now also aborts suggestionAbortRef to prevent a race between dismiss and an in-flight startSpeculation
  - Boundary speculation: skip acceptSpeculation (which injects history) and fall through to normal addMessage to avoid duplicate user turns
  - startSpeculation: check parentSignal.aborted upfront before starting
  - Speculation rendering: use an index-based loop instead of indexOf O(n²)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix speculation accept diagram — boundary skips inject

  The architecture diagram now shows the branching logic: completed speculations go through acceptSpeculation (inject + render), while boundary speculations are discarded and the query is submitted fresh via addMessage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): enable cache sharing by default

  enableCacheSharing now defaults to true. This is a pure cost optimization with no behavioral change — suggestion generation uses the forked query path (sharing the main conversation's prompt cache prefix) when CacheSafeParams are available.
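The CacheSafeParams lifecycle exercised by the tests above can be sketched as a small snapshot module (shapes simplified; per the log, the real params also carry generationConfig and a version):

```typescript
interface CacheSafeParams {
  model: string;
  history: Array<{ role: string }>;
}

let snapshot: CacheSafeParams | null = null;

// Deep-clone on save so later mutation of the live history can't leak in.
function saveCacheSafeParams(params: CacheSafeParams): void {
  snapshot = structuredClone(params);
}

// Deep-clone on read so callers can't mutate the stored snapshot.
function getCacheSafeParams(): CacheSafeParams | null {
  return snapshot ? structuredClone(snapshot) : null;
}

// Called on startChat/resetChat to prevent cross-session leakage.
function clearCacheSafeParams(): void {
  snapshot = null;
}
```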
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): aborted parent skips loop, acceptSpeculation try/finally, doc sync

  - startSpeculation: return aborted state immediately when parentSignal is already aborted, without creating an overlay or starting the loop
  - acceptSpeculation: wrap in try/finally to guarantee overlay cleanup even if applyToReal or addHistory throws
  - Doc: enableCacheSharing default false → true (matches code)
  - Doc: update test count table (7 → 15 followupState, add 6 new files)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): remove debug logs, add function calling fallback for non-FC models

  - Remove all followup-debug process.stderr.write logs
  - Add direct text fallback in generateViaBaseLlm when generateJson returns {} (model doesn't support function calling, e.g., glm-5.1)
  - Add CJK text support in filter: skip whitespace-based word count for Chinese/Japanese/Korean text, use character count instead

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add suggestionModel setting for faster suggestion generation

  New setting `suggestionModel` allows using a smaller/faster model (e.g., qwen-turbo) for prompt suggestion generation instead of the main conversation model. Reduces suggestion latency significantly.

  Passed through: settings → AppContainer → generatePromptSuggestion → generateViaForkedQuery / generateViaBaseLlm (both paths).
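The CJK fallback above exists because whitespace tokenization undercounts Chinese/Japanese/Korean text (a six-character Chinese sentence is one "word"). A hedged sketch with an assumed character-range check, not the project's actual filter:

```typescript
// Rough CJK detection: Hiragana/Katakana, common CJK ideographs, Hangul.
const CJK_PATTERN = /[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uac00-\ud7af]/;

// Word count for whitespace-delimited text; character count for CJK.
function effectiveLength(text: string): number {
  const trimmed = text.trim();
  if (CJK_PATTERN.test(trimmed)) {
    return trimmed.length;
  }
  return trimmed.split(/\s+/).filter(Boolean).length;
}
```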
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): suggestionModel setting, /stats tracking, /about display

  - suggestionModel: new setting to use a faster model for suggestion generation (e.g., qwen3.5-flash instead of main model glm-5.1)
  - /stats: suggestion API calls now report usage to UiTelemetryService, so token consumption appears in the /stats model breakdown
  - /about: shows a Suggestion Model field (configured or main model)

  Also:
  - Function calling fallback for non-FC models (direct text generation)
  - CJK text support in word count filter (character-based for Chinese)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: add Suggestion Model translations for /about display

  en: Suggestion Model | zh: 建议模型 | ja: 提案モデル
  de: Vorschlagsmodell | pt: Modelo de Sugestão | ru: Модель предложений

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): always use generateContent for suggestion (not generateJson)

  generateJson doesn't expose usageMetadata, so /stats can't track suggestion model tokens. Switch to direct generateContent, which always returns usage data. Also simplifies the code by removing the function-calling + fallback dual path.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix /stats tracking — use ApiResponseEvent constructor

  Use the ApiResponseEvent class constructor with a proper response_id and override event.name to match the UiEvent type for the UiTelemetryService switch statement. This ensures suggestion model token usage appears in the /stats model output.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: fix Chinese translation for Suggestion Model

  "建议模型" → "提示建议模型" to avoid ambiguity.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(followup): merge suggestionModel + speculationModel into fastModel

  A single unified setting for all background tasks: suggestion generation, speculation, pipelined suggestions, and future background tasks. Users only need to understand one concept: the main model for conversation, the fast model for background tasks.

  - Remove: suggestionModel, speculationModel
  - Add: fastModel (ui.fastModel in settings.json)
  - Update /about display: "Fast Model" with i18n translations
  - Update all 6 locale files (en/zh/ja/de/pt/ru)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(settings): move fastModel to top-level (parallel to model)

  fastModel is an independent model concept, not a property of the main model. Move it from model.fastModel to the top-level settings.fastModel.

  Config: { "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): report usage in both forkedQuery and baseLlm paths

  The forkedQuery path (used when enableCacheSharing=true) was not reporting token usage to UiTelemetryService, so /stats model didn't show the fast model. Now both paths report usage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add /model --fast command to set fast model

  Usage:
  - /model --fast qwen3.5-flash — set fast model
  - /model --fast — show current fast model
  - /model — open model selection dialog (unchanged)

  Saves to user settings (SettingScope.User).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): update to fastModel (replace suggestionModel/speculationModel)

  - prompt-suggestion-design.md: speculationModel → fastModel (top-level)
  - speculation-design.md: Model Override → Fast Model, update description
  - prompt-suggestion-implementation.md: update settings description

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): /model --fast opens model selection dialog for fast model

  When called without a model name, /model --fast now opens the same model selection dialog used by /model, but selecting a model saves it as fastModel instead of switching the main model.

  - useModelCommand: add isFastModelMode state
  - ModelDialog: intercept selection in fast model mode, save to fastModel
  - DialogManager: pass isFastModelMode prop to ModelDialog
  - types.ts: add 'fast-model' dialog type

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): pass resolved model (not undefined) to runForkedQuery

  model: modelOverride → model: model (which has the fallback applied)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): /model --fast defaults to current fast model in dialog

  When opening the model selection dialog via /model --fast, the currently configured fastModel is pre-selected instead of the main model.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add --fast tab completion for /model command

  /model <Tab> now shows --fast as a completion option with a description.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(schema): regenerate settings.schema.json with new followup settings

  Adds enableCacheSharing, enableSpeculation, and fastModel to the generated JSON schema so CI validation passes.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(test): update tests for new Fast Model field in system info Add "Fast Model" to expected labels in systemInfoFields and bugCommand tests to match the new field added to /about and bug report output. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * ci: trigger PR synchronize event Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address Copilot review comments (batch 4) - modelCommand: use getPersistScopeForModelSelection for fastModel, return meaningful info message instead of empty content - ModelDialog: handle $runtime|authType|modelId format in fast-model mode - forkedQuery: return structuredClone from getCacheSafeParams - client: fix stale comment about history truncation order - speculation: detect abort in .then() handler, set 'aborted' status and cleanup overlay to prevent leaks - docs: update test count table Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs(users): add followup suggestions user manual - New feature page: followup-suggestions.md covering usage, keybindings, fast model configuration, settings, and quality filters - commands.md: add /model --fast command reference - settings.md: add enableFollowupSuggestions, enableCacheSharing, enableSpeculation, and fastModel settings documentation - _meta.ts: register new page in navigation Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs(users): audit fixes for followup suggestions documentation - followup-suggestions.md: add 300ms delay, WebUI support, plan mode guard, non-interactive guard, slash commands as single-word, meta/error filters, character limit - settings.md: move fastModel next to model section, add /model --fast cross-reference and link to feature page - overview.md: add followup suggestions to feature list - i18n: add missing translations for 'Set fast model for background tasks' and 'Fast model updated.' 
in all 6 locales Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address Copilot review comments (batch 5) - modelCommand: remove duplicate info message (keep addItem only) - followup-suggestions.md: clarify WebUI requires host app wiring - speculation-design.md: fix abort telemetry description - i18n: add missing translations for fast model strings Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(cli): remove duplicate message in /model --fast command Use return message instead of addItem + empty return to avoid blank INFO line in history. Also handle missing settings service. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(i18n): remove unused 'Fast model updated.' translations The /model --fast command now returns the model name directly instead of using this string. Remove dead translations. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(followup): disable thinking mode for suggestion and speculation Forked queries inherit the main conversation's generationConfig which may have thinkingConfig enabled. This wastes tokens and adds latency for background tasks that don't need reasoning. 
Explicitly set thinkingConfig.includeThoughts=false in both paths: - createForkedChat (covers forked query + speculation) - generateViaBaseLlm (non-cache-sharing fallback) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: document thinking mode auto-disable for background tasks - User docs: note that thinking is auto-disabled for suggestions/speculation - Design docs: detail thinkingConfig override in both forked query and BaseLlm paths, explain why cache hits are unaffected Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> Co-authored-by: jinjing.zzj <jinjing.zzj@alibaba-inc.com> Co-authored-by: yiliang114 <1204183885@qq.com>
22 KiB
Commands
This document details all commands supported by Qwen Code, helping you efficiently manage sessions, customize the interface, and control its behavior.
Qwen Code commands are triggered through specific prefixes and fall into three categories:
| Prefix Type | Function Description | Typical Use Case |
|---|---|---|
| Slash Commands (`/`) | Meta-level control of Qwen Code itself | Managing sessions, modifying settings, getting help |
| At Commands (`@`) | Quickly inject local file content into the conversation | Letting the AI analyze specified files or directories |
| Exclamation Commands (`!`) | Direct interaction with the system shell | Executing system commands like `git status`, `ls`, etc. |
1. Slash Commands (/)
Slash commands are used to manage Qwen Code sessions, interface, and basic behavior.
1.1 Session and Project Management
These commands help you save, restore, and summarize work progress.
| Command | Description | Usage Examples |
|---|---|---|
| `/init` | Analyze the current directory and create an initial context file | `/init` |
| `/summary` | Generate a project summary based on conversation history | `/summary` |
| `/compress` | Replace chat history with a summary to save tokens | `/compress` |
| `/resume` | Resume a previous conversation session | `/resume` |
| `/restore` | Restore files to their state before tool execution | `/restore` (list) or `/restore <ID>` |
1.2 Interface and Workspace Control
Commands for adjusting interface appearance and work environment.
| Command | Description | Usage Examples |
|---|---|---|
| `/clear` | Clear the terminal screen | `/clear` (shortcut: Ctrl+L) |
| `/context` | Show a context window usage breakdown | `/context` |
| `/theme` | Change the Qwen Code visual theme | `/theme` |
| `/vim` | Toggle Vim editing mode in the input area | `/vim` |
| `/directory` | Manage the multi-directory workspace | `/dir add ./src,./tests` |
| `/editor` | Open a dialog to select a supported editor | `/editor` |
1.3 Language Settings
Commands specifically for controlling interface and output language.
| Command | Description | Usage Examples |
|---|---|---|
| `/language` | View or change language settings | `/language` |
| → `ui [language]` | Set the UI interface language | `/language ui zh-CN` |
| → `output [language]` | Set the LLM output language | `/language output Chinese` |

- Available built-in UI languages: `zh-CN` (Simplified Chinese), `en-US` (English), `ru-RU` (Russian), `de-DE` (German)
- Output language examples: `Chinese`, `English`, `Japanese`, etc.
1.4 Tool and Model Management
Commands for managing AI tools and models.
| Command | Description | Usage Examples |
|---|---|---|
| `/mcp` | List configured MCP servers and tools | `/mcp`, `/mcp desc` |
| `/tools` | Display the currently available tool list | `/tools`, `/tools desc` |
| `/skills` | List and run available skills | `/skills`, `/skills <name>` |
| `/approval-mode` | Change the approval mode for tool usage | `/approval-mode auto-edit --project` |
| → `plan` | Analysis only, no execution | Secure review |
| → `default` | Require approval for edits | Daily use |
| → `auto-edit` | Automatically approve edits | Trusted environments |
| → `yolo` | Automatically approve everything | Quick prototyping |
| `/model` | Switch the model used in the current session | `/model` |
| `/model --fast` | Set or select the fast model for background tasks | `/model --fast qwen3.5-flash` |
| `/extensions` | List all active extensions in the current session | `/extensions` |
| `/memory` | Manage the AI's instruction context | `/memory add Important Info` |
1.5 Side Question (/btw)
The /btw command allows you to ask quick side questions without interrupting or affecting the main conversation flow.
| Command | Description |
|---|---|
| `/btw <your question>` | Ask a quick side question |
| `?btw <your question>` | Alternative syntax for side questions |
How It Works:
- The side question is sent as a separate API call with recent conversation context (up to the last 20 messages)
- The response is displayed above the Composer — you can continue typing while waiting
- The main conversation is not blocked — it continues independently
- The side question response does not become part of the main conversation history
- Answers are rendered with full Markdown support (code blocks, lists, tables, etc.)
Keyboard Shortcuts (Interactive Mode):
| Shortcut | Action |
|---|---|
| Escape | Cancel (while loading) or dismiss (after completion) |
| Space or Enter | Dismiss the answer (when the input is empty) |
| Ctrl+C or Ctrl+D | Cancel an in-flight side question |
Example:
(While the main conversation is about refactoring code)
> /btw What's the difference between let and var in JavaScript?
╭──────────────────────────────────────────╮
│ /btw What's the difference between let │
│ and var in JavaScript? │
│ │
│ + Answering... │
│ Press Escape, Ctrl+C, or Ctrl+D to cancel│
╰──────────────────────────────────────────╯
> (Composer remains active — keep typing)
(After the answer arrives)
╭──────────────────────────────────────────╮
│ /btw What's the difference between let │
│ and var in JavaScript? │
│ │
│ `let` is block-scoped, while `var` is │
│ function-scoped. `let` was introduced │
│ in ES6 and doesn't hoist the same way. │
│ │
│ Press Space, Enter, or Escape to dismiss │
╰──────────────────────────────────────────╯
> (Composer still active)
Supported Execution Modes:
| Mode | Behavior |
|---|---|
| Interactive | Shows above Composer with Markdown rendering |
| Non-interactive | Returns text result: btw> question\nanswer |
| ACP (Agent Protocol) | Returns stream_messages async generator |
Tip
Use `/btw` when you need a quick answer without derailing your main task. It's especially useful for clarifying concepts, checking facts, or getting quick explanations while staying focused on your primary workflow.
1.6 Information, Settings, and Help
Commands for obtaining information and performing system settings.
| Command | Description | Usage Examples |
|---|---|---|
| `/help` | Display help information for available commands | `/help` or `/?` |
| `/about` | Display version information | `/about` |
| `/stats` | Display detailed statistics for the current session | `/stats` |
| `/settings` | Open the settings editor | `/settings` |
| `/auth` | Change the authentication method | `/auth` |
| `/bug` | Submit an issue about Qwen Code | `/bug Button click unresponsive` |
| `/copy` | Copy the last output to the clipboard | `/copy` |
| `/quit` | Exit Qwen Code immediately | `/quit` or `/exit` |
1.7 Common Shortcuts
| Shortcut | Function | Note |
|---|---|---|
| Ctrl/Cmd+L | Clear screen | Equivalent to `/clear` |
| Ctrl/Cmd+T | Toggle tool descriptions | MCP tool management |
| Ctrl/Cmd+C ×2 | Exit confirmation | Safe exit mechanism |
| Ctrl/Cmd+Z | Undo input | Text editing |
| Ctrl/Cmd+Shift+Z | Redo input | Text editing |
1.8 CLI Auth Subcommands
In addition to the in-session /auth slash command, Qwen Code provides standalone CLI subcommands for managing authentication directly from the terminal:
| Command | Description |
|---|---|
| `qwen auth` | Interactive authentication setup |
| `qwen auth qwen-oauth` | Authenticate with Qwen OAuth |
| `qwen auth coding-plan` | Authenticate with Alibaba Cloud Coding Plan |
| `qwen auth coding-plan --region china --key sk-sp-…` | Non-interactive Coding Plan setup (for scripting) |
| `qwen auth status` | Show current authentication status |
Tip
These commands run outside of a Qwen Code session. Use them to configure authentication before starting a session, or in scripts and CI environments. See the Authentication page for full details.
2. @ Commands (Introducing Files)
@ commands are used to quickly add local file or directory content to the conversation.
| Command Format | Description | Examples |
|---|---|---|
| `@<file path>` | Inject the content of the specified file | `@src/main.py Please explain this code` |
| `@<directory path>` | Recursively read all text files in the directory | `@docs/ Summarize the content of these documents` |
| Standalone `@` | Used when discussing the `@` symbol itself | `@ What is this symbol used for in programming?` |
Note: Spaces in paths must be escaped with a backslash (e.g., `@My\ Documents/file.txt`).
3. Exclamation Commands (!) - Shell Command Execution
Exclamation commands allow you to execute system commands directly within Qwen Code.
| Command Format | Description | Examples |
|---|---|---|
| `!<shell command>` | Execute the command in a sub-shell | `!ls -la`, `!git status` |
| Standalone `!` | Toggle shell mode, where any input is executed directly as a shell command | `!` (enter) → type commands → `!` (exit) |
Environment Variables: Commands executed via `!` run with the `QWEN_CODE=1` environment variable set.
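A script or shell rc file can check this variable to behave differently inside Qwen Code. The sketch below assumes only the `QWEN_CODE` variable name documented above; the echoed messages are illustrative:

```shell
# Detect whether this shell was launched from inside Qwen Code.
# QWEN_CODE=1 is set for commands run via the ! prefix (see above);
# the messages are purely illustrative.
if [ "${QWEN_CODE:-}" = "1" ]; then
  echo "running inside Qwen Code"
else
  echo "running in a regular shell"
fi
```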
4. Custom Commands
Save frequently used prompts as shortcut commands to improve work efficiency and ensure consistency.
Note
Custom commands now use Markdown format with optional YAML frontmatter. TOML format is deprecated but still supported for backwards compatibility. When TOML files are detected, an automatic migration prompt will be displayed.
Quick Overview
| Function | Description | Advantages | Priority | Applicable Scenarios |
|---|---|---|---|---|
| Namespace | Subdirectory creates colon-named commands | Better command organization | | |
| Global Commands | `~/.qwen/commands/` | Available in all projects | Low | Personal frequently used commands, cross-project use |
| Project Commands | `<project root>/.qwen/commands/` | Project-specific, version-controllable | High | Team sharing, project-specific commands |
Priority Rules: Project commands > user commands (the project command is used when names collide)
Command Naming Rules
File Path to Command Name Mapping Table
| File Location | Generated Command | Example Call |
|---|---|---|
| `~/.qwen/commands/test.md` | `/test` | `/test Parameter` |
| `<project>/.qwen/commands/git/commit.md` | `/git:commit` | `/git:commit Message` |
Naming Rules: Path separators (`/` or `\`) are converted to colons (`:`)
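For example, the steps below create a project-scoped command that the mapping above turns into `/docs:update` (the command name, description, and prompt body are illustrative):

```shell
# Create <project>/.qwen/commands/docs/update.md, which becomes /docs:update.
# The description and prompt body are illustrative.
mkdir -p .qwen/commands/docs
cat > .qwen/commands/docs/update.md <<'EOF'
---
description: Refresh the README table of contents
---
Update the table of contents in README.md to match the current headings.
EOF
```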
Markdown File Format Specification (Recommended)
Custom commands use Markdown files with optional YAML frontmatter:

```markdown
---
description: Optional description (displayed in /help)
---
Your prompt content here.
Use {{args}} for parameter injection.
```
| Field | Required | Description | Example |
|---|---|---|---|
| `description` | Optional | Command description (displayed in `/help`) | `description: Code analysis tool` |
| Prompt body | Required | Prompt content sent to the model | Any Markdown content after the frontmatter |
TOML File Format (Deprecated)
Warning
Deprecated: TOML format is still supported but will be removed in a future version. Please migrate to Markdown format.
| Field | Required | Description | Example |
|---|---|---|---|
| `prompt` | Required | Prompt content sent to the model | `prompt = "Please analyze code: {{args}}"` |
| `description` | Optional | Command description (displayed in `/help`) | `description = "Code analysis tool"` |
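For migration reference, a legacy TOML command file using the fields above might look like this (the file name is illustrative; prefer the Markdown format for new commands):

```shell
# Write a legacy TOML command file (deprecated format) into the user
# command directory. The file name is illustrative.
mkdir -p ~/.qwen/commands
cat > ~/.qwen/commands/analyze.toml <<'EOF'
description = "Code analysis tool"
prompt = "Please analyze code: {{args}}"
EOF
```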
Parameter Processing Mechanism
| Processing Method | Syntax | Applicable Scenarios | Security Features |
|---|---|---|---|
| Context-aware Injection | `{{args}}` | Precise parameter control needed | Automatic shell escaping |
| Default Parameter Processing | No special marking | Simple commands, parameter appending | Appended as-is |
| Shell Command Injection | `!{command}` | Dynamic content needed | Confirmation required before execution |
1. Context-aware Injection ({{args}})
| Scenario | TOML Configuration | Call Method | Actual Effect |
|---|---|---|---|
| Raw Injection | `prompt = "Fix: {{args}}"` | `/fix "Button issue"` | `Fix: "Button issue"` |
| In Shell Command | `prompt = "Search: !{grep {{args}} .}"` | `/search "hello"` | Executes `grep "hello" .` |
2. Default Parameter Processing
| Input Situation | Processing Method | Example |
|---|---|---|
| Has parameters | Appended to the end of the prompt (separated by two line breaks) | `/cmd parameter` → original prompt + parameter |
| No parameters | Prompt sent as-is | `/cmd` → original prompt |
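The append rule can be simulated in plain shell: the prompt and the user's arguments are joined with a blank line (two line breaks). The prompt and argument text here are illustrative:

```shell
# Simulate default parameter processing: arguments are appended to the
# prompt, separated by a blank line. Prompt and args are illustrative.
prompt='Summarize the changes.'
args='focus on error handling'
printf '%s\n\n%s\n' "$prompt" "$args"
```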
🚀 Dynamic Content Injection
| Injection Type | Syntax | Processing Order | Purpose |
|---|---|---|---|
| File Content | `@{file path}` | Processed first | Inject static reference files |
| Shell Commands | `!{command}` | Processed second | Inject dynamic execution results |
| Parameter Replacement | `{{args}}` | Processed last | Inject user parameters |
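A single command file can combine all three injection types, relying on the documented expansion order (`@{...}`, then `!{...}`, then `{{args}}`). The file path and prompt content below are illustrative:

```shell
# Create a command that injects a static file, a dynamic shell result,
# and the user's arguments. Path and prompt content are illustrative.
mkdir -p .qwen/commands
cat > .qwen/commands/review-diff.md <<'EOF'
---
description: Review staged changes against the project style guide
---
Style guide:
@{docs/code-standards.md}

Staged diff:
!{git diff --staged}

Focus area: {{args}}
EOF
```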
3. Shell Command Execution (!{...})
| Operation | User Interaction |
|---|---|
| 1. Parse command and parameters | - |
| 2. Automatic Shell escaping | - |
| 3. Show confirmation dialog | ✅ User confirmation |
| 4. Execute command | - |
| 5. Inject output to prompt | - |
Example: Git Commit Message Generation

````markdown
---
description: Generate a commit message based on staged changes
---
Please generate a commit message based on the following diff:

```diff
!{git diff --staged}
```
````
4. File Content Injection (@{...})
| File Type | Support Status | Processing Method |
|---|---|---|
| Text Files | ✅ Full Support | Directly inject content |
| Images/PDF | ✅ Multi-modal Support | Encode and inject |
| Binary Files | ⚠️ Limited Support | May be skipped or truncated |
| Directory | ✅ Recursive Injection | Follow .gitignore rules |
Example: Code Review Command

```markdown
---
description: Code review based on best practices
---
Review {{args}}, reference standards:
@{docs/code-standards.md}
```
Practical Creation Example
"Pure Function Refactoring" Command Creation Steps Table
| Operation | Command/Code |
|---|---|
| 1. Create directory structure | `mkdir -p ~/.qwen/commands/refactor` |
| 2. Create command file | `touch ~/.qwen/commands/refactor/pure.md` |
| 3. Edit command content | Refer to the complete code below. |
| 4. Test command | `@file.js` → `/refactor:pure` |
```markdown
---
description: Refactor code to a pure function
---
Please analyze the code in the current context and refactor it into a pure function.
Requirements:
1. Provide the refactored code
2. Explain the key changes and how the pure-function properties are achieved
3. Keep the function's behavior unchanged
```
Custom Command Best Practices Summary
Command Design Recommendations Table
| Practice Points | Recommended Approach | Avoid |
|---|---|---|
| Command Naming | Use namespaces for organization | Overly generic names |
| Parameter Processing | Use `{{args}}` explicitly | Relying on default appending (easy to confuse) |
| Error Handling | Utilize shell error output | Ignoring execution failures |
| File Organization | Organize by function in directories | Putting all commands in the root directory |
| Description Field | Always provide a clear description | Relying on auto-generated descriptions |
Security Features Reminder Table
| Security Mechanism | Protection Effect | User Operation |
|---|---|---|
| Shell Escaping | Prevent command injection | Automatic processing |
| Execution Confirmation | Avoid accidental execution | Dialog confirmation |
| Error Reporting | Help diagnose issues | View error information |