* feat(cli, webui): add follow-up suggestions feature

  Implement context-aware follow-up suggestions that appear after task
  completion, suggesting relevant next actions like "commit this",
  "run tests", etc.

  - Add `followup/` module with types, generator, and rule-based provider
  - Export follow-up types and functions from core index
  - 8 default suggestion rules covering common workflows
  - Add `useFollowupSuggestionsCLI` hook for Ink/React
  - Integrate suggestion generation in AppContainer when streaming completes
  - Add Tab key to accept, arrow keys to cycle through suggestions
  - Display suggestions as ghost text in input prompt
  - Add `useFollowupSuggestions` hook for React
  - Update InputForm to display suggestions as placeholder
  - Add CSS styling for suggestion appearance with counter
  - Add keyboard handlers (Tab, arrow keys)
  - After streaming completes with tool calls, suggestions appear
  - Tab accepts the current suggestion
  - Left/Right arrows cycle through multiple suggestions
  - Typing or pasting dismisses the suggestion
  - Shell command rules (tests, git, npm install) don't work yet due to
    history not storing tool arguments
  - VSCode extension integration pending
  - Web UI needs parent app integration for suggestion generation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve merge conflicts and build errors

  - Rebased on upstream main (5d02260c8)
  - Fixed JSX structure in InputPrompt.tsx
  - Changed `return;` to `return true;` in follow-up handlers
  - Added @agentclientprotocol/sdk to core package dependencies
  - Restored correct BaseTextInput usage (self-closing, no children)
  - Follow-up suggestions now shown via placeholder prop only

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove @agentclientprotocol/sdk from core package.json

  The types are imported in fileSystemService.ts but the package should not
  be a runtime dependency of core. It's provided by the CLI package, which
  depends on core.
  This was causing package-lock.json sync issues on Node.js 24.x CI.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore alphabetical order of dependencies in core/package.json

* fix: restore package-lock.json from upstream to fix Node 24.x CI

* fix: resolve acpConnection test failure and ESLint warning

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style: apply prettier formatting after merge

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(followup): address review issues in follow-up suggestions

  - Export followupState.ts from core index (was dead code)
  - Refactor CLI and WebUI hooks to use shared followupReducers
    (eliminates duplication)
  - Move side effects out of setState updaters via queueMicrotask
  - Fix AppContainer useEffect dependency on the unstable
    historyManager.history reference
  - Reorder matchesRule to check pattern before condition (cheaper first)
  - Make RuleBasedProvider collect from all matching rules with dedup and limit
  - Add missing resetGenerator export for testing
  - Add explicit `implements SuggestionProvider` to RuleBasedProvider
  - Fix unstable followup object in useEffect dependency arrays
  - Merge duplicate imports to fix eslint import/no-duplicates warnings
  - Standardize copyright year to 2025
  - Add test files for followupState, ruleBasedProvider, suggestionGenerator

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address review feedback from PR #2525

  - Fix acceptingRef race: set lock synchronously before queueMicrotask
  - Derive hasError/wasCancelled from actual tool call statuses
  - Incorporate rule priority into suggestion priority calculation
  - Clear suggestions immediately when setSuggestions([]) is called
  - Add !completion.showSuggestions guard to Tab handler
  - Fix onAcceptFollowup type from (string) => void to () => void
  - Fix ToolCallInfo.name doc examples to match display names
  - Scope CSS counter ::after to data-has-suggestion + empty conditions
  - Reset regex lastIndex before test() for g/y flag safety
  - Stabilize hook return with useMemo + onAcceptRef pattern
  - Add @qwen-code/qwen-code-core as webui external + peerDependency

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address second round of review feedback

  - Scope CSS max-width to match counter condition (not count=1)
  - Only dismiss followup on printable character input, not navigation keys
  - Restrict tool_group scan to most recent contiguous block (current turn)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear suggestions on new turn, add search guards

  - Clear followupSuggestions when streaming starts (Idle → Responding)
    to prevent stale suggestions from previous turns
  - Add !reverseSearchActive && !commandSearchActive guards to the Tab
    handler to avoid keybinding conflicts with search modes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address third round of review feedback

  - Fix string pattern asymmetry: only match tool names when matchMessage=false
  - Collect tool_groups from last user message boundary, not contiguous tail
  - Flatten to individual tool calls before slicing to cap at 10 actual calls

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix arrow cycling guard and align rule conditions with patterns

  - Remove unreliable textContent check for arrow cycling in WebUI InputForm;
    rely on inputText state, which already accounts for zero-width spaces
  - Add 'error' to fix/bug rule condition to match its regex pattern
  - Add 'clean up' to refactor rule condition to match its regex pattern

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset acceptingRef in clear() to prevent deadlock

  If clear() is called during the accept debounce window, acceptingRef could
  remain stuck true permanently. Now it is reset in clear().
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): cancel pending timeout in dismiss() and accept()

  Prevents a stale suggestion timeout from re-showing suggestions after the
  user dismisses or accepts during the 300ms delay window.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset lastIndex in removeRules() for g/y flag safety

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(vscode-ide-companion): mark @qwen-code/qwen-code-core as external in webview esbuild

  The webui package now declares @qwen-code/qwen-code-core as external in
  its vite build config. Without this change, the vscode-ide-companion
  webview esbuild (platform: 'browser') would try to bundle core's
  Node.js-only dependencies (undici, @grpc/grpc-js, fs, stream, etc.),
  causing 562 build errors during `npm ci`.

* fix: restore node_modules/@google/gemini-cli-test-utils workspace link in lockfile

  The top-level workspace symlink entry was accidentally removed by a local
  npm install in commit 004baaeb, which replaced it with a nested
  packages/cli/node_modules/ entry. npm ci requires the top-level link entry
  to be present in the lockfile, otherwise it fails with:

    "Missing: @google/gemini-cli-test-utils@0.13.0 from lock file"

  Also syncs the @qwen-code/qwen-code-core peerDependency into the lockfile
  to match the updated packages/webui/package.json.
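As an aside for reviewers, the lastIndex pitfall behind the two regex fixes in this log can be reduced to a short sketch (`matchesPattern` is a hypothetical helper for illustration, not code from this PR):

```typescript
// Illustrative sketch: a g/y-flagged RegExp keeps lastIndex between test()
// calls, so a reused rule pattern can silently fail to match. Resetting
// lastIndex first makes test() stateless.
function matchesPattern(pattern: RegExp, text: string): boolean {
  pattern.lastIndex = 0; // reset so g/y-flagged patterns scan from the start
  return pattern.test(text);
}

// Without the reset, a second raw .test() call on the same pattern starts
// scanning after the previous match and can return false on the same input.
const rule = /run tests?/g;
```

The same reset is needed anywhere a stored g/y-flagged pattern is reused across inputs, which is why it appears in both the matching path and removeRules().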
* refactor(followup): extract controller and improve rule matching

  - Extract createFollowupController for unified state management across
    CLI and WebUI
  - Refactor rule-based provider to match via assistant message keywords
    instead of tool arguments
  - Add enableFollowupSuggestions user setting in UI category
  - Decouple WebUI from @qwen-code/qwen-code-core by copying browser-safe
    state logic
  - Add followupHistory.ts for extracting suggestion context from CLI history
  - Add comprehensive tests for controller and rule matching scenarios
  - Use --app-primary CSS variable for consistency

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(webui): import followup state from core package

  - Remove followupState.ts from webui (moved to core)
  - Import FollowupSuggestion, FollowupState types from core
  - Add @qwen-code/qwen-code-core as peerDependency
  - Add core to vite external list
  - Update test to include id field in HistoryItem

  Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(followup): simplify generator, revert unrelated changes

  - Collapse the FollowupSuggestionsGenerator class into a single
    generateFollowupSuggestions() function (152 → 26 lines)
  - Inline extractSuggestionContext into followupHistory.ts
  - Remove unused RuleBasedProvider.addRule/removeRules methods
  - Revert unrelated acpConnection.test.ts refactor
  - Fix followupHistory.test.ts HistoryItem missing id field
  - Reduce test verbosity (162 → 36 lines for generator tests)

* fix(followup): fix accept() deadlock and restore UMD globals mapping

  - Wrap the queueMicrotask callback in try/catch/finally to prevent the
    accepting lock from being permanently held when onAccept throws
  - Restore '@qwen-code/qwen-code-core': 'QwenCodeCore' in webui
    vite.config.ts globals (regression from d0f38a5f)
  - Add test case verifying accept() recovers after a callback exception

* fix(followup): log accept callback errors instead of swallowing them

  Replace the empty catch {} with console.error so that onAccept errors
  remain visible for debugging while still preventing deadlock via finally.
  Update the test to verify the error is logged.

* refactor(webui): move followup hook to separate subpath entry

  Move useFollowupSuggestions from the root entry to a dedicated
  '@qwen-code/webui/followup' subpath so that consumers who only need UI
  components are not forced to install @qwen-code/qwen-code-core.

  - Add src/followup.ts as separate Vite lib entry
  - Remove followup exports from src/index.ts
  - Add ./followup exports map in package.json
  - Mark @qwen-code/qwen-code-core as optional peerDependency
  - Switch build from single-entry UMD to multi-entry ESM/CJS

* fix(webui): restore UMD build and isolate core from root type boundary

  - Restore UMD output for root entry (used by CDN demos, export-html, etc.)
  - Build followup subpath via a separate vite.config.followup.ts to avoid
    Vite's multi-entry + UMD limitation
  - Replace the FollowupState import in InputForm.tsx with a local
    structural type (InputFormFollowupState) so the root .d.ts no longer
    references @qwen-code/qwen-code-core
  - Root entry (JS + UMD + .d.ts) is now fully free of the core dependency;
    core is only required by the '@qwen-code/webui/followup' subpath

* refactor(followup): replace rule-based suggestions with LLM-based prompt suggestion

  Replace the hardcoded rule-based follow-up suggestion engine with an
  LLM-based prompt suggestion system, aligned with Claude Code's NES
  (Next-step Suggestion) architecture.
  Core changes:
  - Replace ruleBasedProvider with generatePromptSuggestion using
    BaseLlmClient.generateJson()
  - Port Claude Code's SUGGESTION_PROMPT and 14 filter rules
    (shouldFilterSuggestion)
  - Simplify state from a multi-suggestion array to a single string
    (FollowupState)
  - Add framework-agnostic controller with Object.freeze'd initial state

  Guard conditions (9 checks):
  - Settings toggle, non-interactive/SDK mode, plan mode
  - Permission/confirmation/loop-detection dialogs, elicitation requests
  - API error response detection, conversation history limit (slice -40)

  UI interaction (CLI + WebUI):
  - Tab: fill suggestion into input
  - Enter: accept and submit
  - Right Arrow: fill without submitting
  - Typing/paste: dismiss suggestion
  - Autocomplete conflict prevention

  Telemetry (PromptSuggestionEvent):
  - outcome (accepted/ignored/suppressed), accept_method (tab/enter/right)
  - time_to_accept_ms, time_to_ignore_ms, time_to_first_keystroke_ms
  - suggestion_length, similarity, was_focused_when_shown, prompt_id
  - Per-rule suppression logging with reason strings

  Deleted files:
  - ruleBasedProvider.ts/test, followupHistory.ts/test, types.ts
    (dead FollowupSuggestion type)

  13 rounds of adversarial audit, 17 issues found and fixed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address qwen3.6-plus-preview review findings

  P0: Fix API error detection — check pendingGeminiHistoryItems for error
  items (API errors go to pending items, not historyManager.history).

  P1: Don't log abort as 'error' in telemetry — aborts are normal user
  behavior (the user started typing), not errors.

  P3: Early return in dismiss() when state is already cleared, avoiding a
  redundant applyState call after accept().
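To make the filtering idea concrete, here is a minimal sketch of this style of suggestion filter. The rule names, thresholds, and regex are invented for illustration; the real shouldFilterSuggestion ports a larger rule set with different checks:

```typescript
// Hypothetical sketch of suggestion quality filtering, in the spirit of
// shouldFilterSuggestion. The actual rules and thresholds differ.
type FilterVerdict = { filtered: boolean; reason?: string };

function sketchShouldFilter(suggestion: string): FilterVerdict {
  const s = suggestion.trim();
  if (s.length === 0) return { filtered: true, reason: "empty" };
  if (s.length > 200) return { filtered: true, reason: "too-long" };
  // Meta commentary about the assistant is not a usable next prompt.
  if (/^(as an ai|i cannot|sorry)/i.test(s)) {
    return { filtered: true, reason: "meta" };
  }
  return { filtered: false };
}
```

Returning a reason string alongside the verdict is what enables the per-rule suppression logging mentioned under Telemetry.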
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(settings): update suggestion feature description to match current behavior

  Remove the outdated "arrow keys to cycle" text — the feature now uses
  Tab/Right Arrow to accept and Enter to accept+submit (no cycling).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix WebUI Enter submitting empty text + defend onOutcome

  P0/P1: The WebUI Enter handler now passes the suggestion text explicitly
  via onSubmit(e, followupSuggestion) instead of relying on React setState
  (which is async and would leave inputText as "" in the closure).

  P3: Wrap onOutcome callbacks in try/catch in both accept() and dismiss()
  so telemetry errors cannot block state transitions.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): allow setSuggestion(null) when disabled + fix dts clobber

  - setSuggestion(null) now always clears state/timers even when disabled,
    preventing stale suggestions from lingering after a feature toggle.
  - Set insertTypesEntry: false in the followup vite config to prevent
    overwriting the main build's index.d.ts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(webui): thread explicitText through submit chain for Enter accept

  handleSubmit and handleSubmitWithScroll now accept an optional
  explicitText parameter. When provided (e.g., from a prompt suggestion
  Enter accept), it is used instead of the closure-captured inputText,
  fixing the React setState race where onSubmit reads stale empty text.
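The stale-closure race fixed here can be sketched without React (helper names invented for illustration):

```typescript
// Sketch of the explicitText fix: when Enter accepts a suggestion, the text
// is threaded explicitly instead of being read from state, because React's
// setState has not flushed yet inside the same event handler.
function makeSubmitHandler(getInputText: () => string) {
  return (explicitText?: string): string => {
    // Prefer the explicitly threaded text; fall back to current input state.
    return explicitText ?? getInputText();
  };
}
```

The fallback branch keeps ordinary typed submissions working unchanged; only the accept path supplies explicitText.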
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — 4 fixes

  - Enter accept: use buffer.text.length === 0 instead of !trim() to prevent
    whitespace-only input from triggering suggestion accept
  - Move ref tracking from render body to useEffect to avoid render-time
    side effects in StrictMode/concurrent rendering
  - Align PromptSuggestionEvent event.name to 'qwen-code.prompt_suggestion',
    matching the EVENT_PROMPT_SUGGESTION constant used by the logger
  - Fix onOutcome JSDoc: remove mention of 'suppressed' (handled separately)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

  - Use curated history (getChat().getHistory(true)) to avoid invalid
    entries causing API 400 errors in suggestion generation
  - Use a method signature for onSubmit in InputFormProps to maintain
    bivariant compatibility with existing consumers under strictFunctionTypes
  - Tighten the @qwen-code/qwen-code-core peer dependency to >=0.13.1

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add prompt cache sharing + speculation engine

  Phase 1 — Forked Query (cache sharing):
  - CacheSafeParams: snapshot of generationConfig (systemInstruction +
    tools) + curated history + model + version, saved after each successful
    main turn
  - createForkedChat: isolated GeminiChat sharing the same cache prefix for
    a DashScope cache_control hit
  - runForkedQuery: single-turn request via the forked chat with JSON schema
    support
  - suggestionGenerator: uses the forked query when CacheSafeParams are
    available, falls back to BaseLlmClient.generateJson otherwise
  - GeminiChat.getGenerationConfig(): new getter for cache param snapshots
  - Feature flag: enableCacheSharing (default: false)

  Phase 2 — Speculation (predictive execution):
  - OverlayFs: copy-on-write filesystem for speculation file isolation
    (/tmp/qwen-speculation/{pid}/{id}/), handles new files + existing files
  - speculationToolGate: tool boundary enforcement using the AST-based shell
    checker (not the deprecated regex), write tools gated by ApprovalMode
    (only auto-edit/yolo allow overlay writes)
  - speculation.ts: startSpeculation (on suggestion display),
    acceptSpeculation (on Tab/Enter — copies overlay to real FS, injects
    history via addHistory), abortSpeculation (on user input/new turn —
    cleans up the overlay)
  - Custom execution loop: toolRegistry.getTool → tool.build →
    invocation.execute (bypasses CoreToolScheduler — permission is handled
    by the tool gate)
  - ensureToolResultPairing: strips unpaired functionCalls at the boundary
  - Boundary-aware tool result preservation: keeps executed tool results
    even when the boundary truncates remaining calls
  - Feature flag: enableSpeculation (default: false)

  Telemetry:
  - SpeculationEvent: outcome, turns_used, files_written, tool_use_count,
    duration_ms, boundary_type, had_pipelined_suggestion
  - logSpeculation logger function

  Security:
  - Write tools only allowed in auto-edit/yolo mode during speculation
  - Shell commands gated by isShellCommandReadOnlyAST (AST parser)
  - Unknown/MCP tools always hit the boundary (safe default)
  - structuredClone throughout for cache param isolation

  4 rounds of adversarial audit, 20+ issues found and fixed.
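The copy-on-write contract can be illustrated with an in-memory sketch. The real OverlayFs works on disk under /tmp/qwen-speculation/; this Map-based version only shows the write-isolation, read-resolution, and apply/discard behavior, and all names are simplified:

```typescript
// In-memory sketch of the copy-on-write overlay contract (illustrative,
// not the real OverlayFs): writes go to the overlay, reads prefer the
// overlay, accept applies the overlay to the "real" store, abort discards.
class SketchOverlayFs {
  private overlay = new Map<string, string>();
  constructor(private real: Map<string, string>) {}

  write(path: string, content: string): void {
    this.overlay.set(path, content); // speculative write, real fs untouched
  }

  read(path: string): string | undefined {
    // Reads resolve overlay-written files first, then fall back to real.
    return this.overlay.get(path) ?? this.real.get(path);
  }

  applyToReal(): void {
    // Accept path: copy overlay contents into the real store.
    for (const [p, c] of this.overlay) this.real.set(p, c);
    this.overlay.clear();
  }

  discard(): void {
    this.overlay.clear(); // abort path: speculative writes vanish
  }
}
```

This is the contract the smoke tests later in this log exercise end to end (write → read → apply → verify).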
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

  - Move web_fetch/web_search from SAFE_READ_ONLY to BOUNDARY tools (they
    require user confirmation for network requests)
  - Add overlay read path resolution for read tools (resolveReadPaths) so
    speculative reads see overlay-written files
  - Wire the enableCacheSharing setting into generatePromptSuggestion
  - Fix the esbuild comment to not hardcode the webui version

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): use index-based tracking for boundary tool pairing

  Track executed function calls by order (the first N, where N matches
  functionResponses.length) instead of by name. Fixes incorrect pairing
  when the model emits multiple calls with the same tool name.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): handle undefined functionCall.name + wrap rewritePathArgs

  - Skip functionCall parts with a missing name instead of using a non-null
    assertion
  - Wrap rewritePathArgs in try/catch — treat a path rewrite failure as a
    boundary

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): pipelined suggestion, UI rendering, dismiss abort

  - Pipelined suggestion: after speculation completes, generate the next
    suggestion using the augmented context. Promoted on accept.
  - UI rendering: completed speculation results are rendered via
    historyManager.
  - Dismiss abort: typing/pasting calls dismissPromptSuggestion → clears
    promptSuggestion → useEffect aborts the running speculation immediately.
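The index-based pairing idea, reduced to a sketch (the Part shape is simplified from the real content types, and the function name is invented):

```typescript
// Sketch of index-based call/response pairing: the first N functionCalls
// (N = number of functionResponses observed) count as executed. This stays
// correct even when several calls share the same tool name, which is where
// name-based matching went wrong.
interface SketchPart {
  functionCall?: { name?: string };
}

function executedCalls(parts: SketchPart[], responseCount: number): SketchPart[] {
  // Skip malformed parts with a missing name instead of asserting non-null.
  const calls = parts.filter((p) => p.functionCall?.name !== undefined);
  return calls.slice(0, responseCount);
}
```

With two calls both named "edit" and two responses, name-based matching cannot tell them apart; order-based slicing pairs them unambiguously.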
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear cache on reset, truncate history, fix test + comment

  - Clear CacheSafeParams on startChat/resetChat to prevent cross-session
    leakage
  - Truncate history to 40 entries before the deep clone in
    saveCacheSafeParams to reduce CPU/memory overhead on long sessions
  - Update the stale comment about the speculation dismiss lifecycle
  - Add an onAccept assertion to the accept test with a proper microtask
    flush

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): add prompt suggestion design documentation

  - prompt-suggestion-design.md: architecture, generation, filtering, state
    management, keyboard interaction, telemetry, feature flags
  - speculation-design.md: copy-on-write overlay, tool gate security,
    boundary handling, pipelined suggestion, forked query cache sharing
  - prompt-suggestion-implementation.md: implementation status, test
    coverage, audit history, Claude Code alignment tracking

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): align catch comment with silent behavior

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): wire augmented context into pipelined suggestion + guard Tab/Right

  - The pipelined suggestion now includes the accepted suggestion text and
    the speculated model response as context for the next prediction
  - Tab/ArrowRight handlers only preventDefault when onAcceptFollowup is
    provided, preventing key interception without a wired callback

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): filter thought parts + add filePath to path keys

  - Skip thought/reasoning parts from model responses to prevent leaking
    internal reasoning into speculated history
  - Add 'filePath' to the path rewrite key list for LSP and other tools that
    use camelCase argument names

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): resolve relative paths against realCwd, not process.cwd()

  Relative tool paths are now resolved against the overlay's realCwd before
  computing the relative path, preventing incorrect outside-cwd detection
  when process.cwd() differs from config.getCwd().

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix 4 doc-code inconsistencies

  - Guard conditions: clarify 13 code checks vs 11 table categories,
    separate feature flags from the guard block, add streaming transition
  - Filter rules: 14 → 12 (actual count in code and table)
  - BOUNDARY_TOOLS: add todo_write + exit_plan_mode to the doc table
  - SpeculationEvent: 8 → 7 fields (matching code)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): turns_used metric + reuse SUGGESTION_PROMPT + reduce clones

  - turns_used: count only model messages (not all Content entries) to
    accurately reflect LLM round-trips instead of an inflated 3x count
  - Pipelined suggestion: reuse the exported SUGGESTION_PROMPT from
    suggestionGenerator instead of a degraded local copy, ensuring
    consistent quality (EXAMPLES, NEVER SUGGEST rules included)
  - createForkedChat: replace a redundant structuredClone with shallow
    copies, since the params are already deep-cloned snapshots

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): speculation UI tool rendering + speculationModel setting

  - Speculation UI: render tool calls as tool_group HistoryItems with
    structured name/description/result instead of plain text only
  - speculationModel setting: allows using a cheaper/faster model for
    speculation and the pipelined suggestion. Leave empty to use the main
    model. Passed through startSpeculation → runSpeculativeLoop → pipelined.
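The realCwd resolution fix, as a standalone sketch (`isInsideCwd` is an invented name; POSIX path helpers are used so the illustration is platform-independent):

```typescript
import * as path from "node:path";

// Sketch of the outside-cwd check: a relative tool path is resolved against
// the configured realCwd, not process.cwd(), so the two differing no longer
// produces a false "outside cwd" verdict.
function isInsideCwd(realCwd: string, toolPath: string): boolean {
  const resolved = path.posix.resolve(realCwd, toolPath);
  const rel = path.posix.relative(realCwd, resolved);
  // A path is inside iff the relative form neither climbs out ("..") nor
  // remains absolute (different root).
  return !rel.startsWith("..") && !path.posix.isAbsolute(rel);
}
```

The bug being fixed is exactly the case where `path.resolve(toolPath)` with no base silently uses process.cwd() as the anchor.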
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): sync docs with latest code changes

  - Add the speculationModel setting to the feature flags table
  - Document tool_group UI rendering in the speculation accept flow
  - Fix createForkedChat: deep clone → shallow copy (already cloned snapshots)
  - Document the pipelined suggestion SUGGESTION_PROMPT reuse
  - Add Model Override and UI Rendering sections to speculation-design
  - Update line counts to match actual file sizes

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): add unit tests for overlayFs, toolGate, forkedQuery

  - overlayFs (15 tests): COW write, read resolution, apply, cleanup,
    path traversal
  - speculationToolGate (24 tests): tool categories, approval mode gating,
    shell AST, path rewrite
  - forkedQuery (6 tests): cache params save/get/clear, deep clone,
    version detection

  Total: 27 → 173 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): P0-P2 test coverage for speculation + controller + toolGate

  speculation.test.ts (7 tests):
  - ensureToolResultPairing: empty, no calls, paired, unpaired text+call,
    unpaired call-only, user-ending, empty parts

  followupState.test.ts (+8 tests = 15 total):
  - onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when
    cleared
  - clear(): resets the accepting lock, allowing re-accept
  - double accept blocked by debounce
  - setSuggestion replaces the pending timer

  speculationToolGate.test.ts (+3 tests = 27 total):
  - resolveReadPaths: overlay path after write, unchanged when not written
  - rewritePathArgs: path key coverage

  Total: 173 → 190 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): smoke tests + P0-P2 coverage gaps

  smoke.test.ts (21 tests): E2E verification across modules
  - Filter against realistic LLM outputs (9 good + 7 bad + reason check)
  - OverlayFs full round-trip (write → read → apply → verify)
  - ToolGate → OverlayFs integration (write redirect → read resolve)
  - CacheSafeParams lifecycle (save → mutate → isolation → clear)
  - ensureToolResultPairing orphaned functionCalls

  followupState.test.ts (+8 tests):
  - onOutcome: accepted/tab, ignored/dismiss, error caught, no-op cleared
  - clear(): resets the accepting lock
  - double accept debounce
  - setSuggestion replaces the pending timer

  speculationToolGate.test.ts (+3 tests):
  - resolveReadPaths through the overlay after a write
  - path key coverage for rewritePathArgs

  Export ensureToolResultPairing for testing.

  Total: 190 → 211 tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): dismiss aborts suggestion, boundary skip inject, parentSignal check

  - dismissPromptSuggestion now also aborts suggestionAbortRef to prevent a
    race between dismiss and an in-flight startSpeculation
  - Boundary speculation: skip acceptSpeculation (which injects history);
    fall through to the normal addMessage to avoid duplicate user turns
  - startSpeculation: check parentSignal.aborted upfront before starting
  - Speculation rendering: use an index-based loop instead of O(n²) indexOf

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix speculation accept diagram — boundary skips inject

  The architecture diagram now shows the branching logic: completed
  speculations go through acceptSpeculation (inject + render), while
  boundary speculations are discarded and the query is submitted fresh via
  addMessage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): enable cache sharing by default

  enableCacheSharing now defaults to true. This is a pure cost optimization
  with no behavioral change — suggestion generation uses the forked query
  path (sharing the main conversation's prompt cache prefix) when
  CacheSafeParams are available.
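The CacheSafeParams isolation property exercised by those tests can be sketched in a few lines (shapes simplified; the real snapshot also carries generationConfig, model, and version):

```typescript
// Sketch of snapshot isolation: truncate first to bound the deep-copy cost,
// then structuredClone so later mutation of the live history cannot leak
// into the saved snapshot. Names are illustrative.
interface SketchContent {
  role: string;
  text: string;
}

let cacheSafeHistory: SketchContent[] | undefined;

function saveCacheSafeParams(history: SketchContent[]): void {
  // slice(-40) mirrors the 40-entry truncation before cloning.
  cacheSafeHistory = structuredClone(history.slice(-40));
}

function clearCacheSafeParams(): void {
  cacheSafeHistory = undefined; // e.g. on startChat/resetChat
}
```

The save → mutate → isolation → clear lifecycle above is exactly the smoke-test sequence.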
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): aborted parent skips loop, acceptSpeculation try/finally, doc sync

  - startSpeculation: return the aborted state immediately when parentSignal
    is already aborted, without creating an overlay or starting the loop
  - acceptSpeculation: wrap in try/finally to guarantee overlay cleanup even
    if applyToReal or addHistory throws
  - Doc: enableCacheSharing default false → true (matches code)
  - Doc: update the test count table (7 → 15 followupState, add 6 new files)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): remove debug logs, add function calling fallback for non-FC models

  - Remove all followup-debug process.stderr.write logs
  - Add a direct text fallback in generateViaBaseLlm when generateJson
    returns {} (model doesn't support function calling, e.g. glm-5.1)
  - Add CJK text support in the filter: skip the whitespace-based word count
    for Chinese/Japanese/Korean text and use character count instead

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add suggestionModel setting for faster suggestion generation

  The new setting `suggestionModel` allows using a smaller/faster model
  (e.g., qwen-turbo) for prompt suggestion generation instead of the main
  conversation model. Reduces suggestion latency significantly.

  Passed through: settings → AppContainer → generatePromptSuggestion →
  generateViaForkedQuery / generateViaBaseLlm (both paths).
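The CJK-aware count can be sketched as follows (the character ranges, threshold, and function name here are illustrative, not the exact ones in the filter):

```typescript
// Sketch of a CJK-aware length check: whitespace-delimited word counts
// undercount Chinese, Japanese, and Korean text, so mostly-CJK strings are
// measured by character count instead.
const CJK_RE = /[\u3040-\u30ff\u3400-\u9fff\uac00-\ud7af]/g;

function effectiveWordCount(text: string): number {
  const cjkChars = text.match(CJK_RE)?.length ?? 0;
  const nonSpace = text.replace(/\s/g, "").length;
  if (nonSpace > 0 && cjkChars / nonSpace > 0.5) {
    return cjkChars; // mostly CJK: each character counts as a "word"
  }
  return text.trim().split(/\s+/).filter(Boolean).length;
}
```

A sentence like 修复登录页面的错误 ("fix the login page bug") has zero whitespace, so a naive split would count it as one word and a min-word-count filter would wrongly reject it.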
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): suggestionModel setting, /stats tracking, /about display

  - suggestionModel: new setting to use a faster model for suggestion
    generation (e.g., qwen3.5-flash instead of the main model glm-5.1)
  - /stats: suggestion API calls now report usage to UiTelemetryService so
    token consumption appears in the /stats model breakdown
  - /about: shows a Suggestion Model field (configured or main model)

  Also:
  - Function calling fallback for non-FC models (direct text generation)
  - CJK text support in the word count filter (character-based for Chinese)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: add Suggestion Model translations for /about display

  en: Suggestion Model | zh: 建议模型 | ja: 提案モデル
  de: Vorschlagsmodell | pt: Modelo de Sugestão | ru: Модель предложений

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): always use generateContent for suggestion (not generateJson)

  generateJson doesn't expose usageMetadata, so /stats can't track
  suggestion model tokens. Switch to direct generateContent, which always
  returns usage data. Also simplifies the code by removing the
  function-calling + fallback dual path.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix /stats tracking — use ApiResponseEvent constructor

  Use the ApiResponseEvent class constructor with a proper response_id and
  override event.name to match the UiEvent type for the UiTelemetryService
  switch statement. This ensures suggestion model token usage appears in the
  /stats model output.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: fix Chinese translation for Suggestion Model

  "建议模型" → "提示建议模型" to avoid ambiguity.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(followup): merge suggestionModel + speculationModel into fastModel

  A single unified setting for all background tasks: suggestion generation,
  speculation, pipelined suggestions, and future background tasks. Users
  only need to understand one concept: the main model for conversation, the
  fast model for background tasks.

  - Remove: suggestionModel, speculationModel
  - Add: fastModel (ui.fastModel in settings.json)
  - Update /about display: "Fast Model" with i18n translations
  - Update all 6 locale files (en/zh/ja/de/pt/ru)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(settings): move fastModel to top-level (parallel to model)

  fastModel is an independent model concept, not a property of the main
  model. Move it from model.fastModel to the top-level settings.fastModel.

  Config: { "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): report usage in both forkedQuery and baseLlm paths

  The forkedQuery path (used when enableCacheSharing=true) was not reporting
  token usage to UiTelemetryService, so /stats model didn't show the fast
  model. Now both paths report usage.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add /model --fast command to set fast model

  Usage:
    /model --fast qwen3.5-flash — set the fast model
    /model --fast               — show the current fast model
    /model                      — open the model selection dialog (unchanged)

  Saves to user settings (SettingScope.User).
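The three dispatch cases can be sketched as below (the parser and action names are invented for illustration; the real command also wires dialogs and persists via SettingScope.User):

```typescript
// Sketch of /model argument dispatch: plain /model opens the dialog,
// --fast with a name sets the fast model, bare --fast reports it.
type ModelCommandAction =
  | { kind: "set-fast"; model: string }
  | { kind: "show-fast" }
  | { kind: "open-dialog" };

function parseModelArgs(args: string): ModelCommandAction {
  const parts = args.trim().split(/\s+/).filter(Boolean);
  if (parts[0] !== "--fast") {
    return { kind: "open-dialog" }; // plain /model: selection dialog
  }
  return parts.length > 1
    ? { kind: "set-fast", model: parts[1] } // /model --fast <name>
    : { kind: "show-fast" }; // /model --fast: report current fast model
}
```

(A later commit in this log changes the bare `--fast` case to open the selection dialog in fast-model mode instead of just reporting.)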
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): update to fastModel (replace suggestionModel/speculationModel)

  - prompt-suggestion-design.md: speculationModel → fastModel (top-level)
  - speculation-design.md: Model Override → Fast Model, update description
  - prompt-suggestion-implementation.md: update the settings description

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): /model --fast opens model selection dialog for fast model

  When called without a model name, /model --fast now opens the same model
  selection dialog used by /model, but selecting a model saves it as
  fastModel instead of switching the main model.

  - useModelCommand: add isFastModelMode state
  - ModelDialog: intercept selection in fast model mode, save to fastModel
  - DialogManager: pass the isFastModelMode prop to ModelDialog
  - types.ts: add a 'fast-model' dialog type

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): pass resolved model (not undefined) to runForkedQuery

  model: modelOverride → model: model (which has the fallback applied)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): /model --fast defaults to current fast model in dialog

  When opening the model selection dialog via /model --fast, the currently
  configured fastModel is pre-selected instead of the main model.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add --fast tab completion for /model command

  /model <Tab> now shows --fast as a completion option with a description.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(schema): regenerate settings.schema.json with new followup settings

  Adds enableCacheSharing, enableSpeculation, and fastModel to the generated
  JSON schema so CI validation passes.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(test): update tests for new Fast Model field in system info Add "Fast Model" to expected labels in systemInfoFields and bugCommand tests to match the new field added to /about and bug report output. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * ci: trigger PR synchronize event Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address Copilot review comments (batch 4) - modelCommand: use getPersistScopeForModelSelection for fastModel, return meaningful info message instead of empty content - ModelDialog: handle $runtime|authType|modelId format in fast-model mode - forkedQuery: return structuredClone from getCacheSafeParams - client: fix stale comment about history truncation order - speculation: detect abort in .then() handler, set 'aborted' status and cleanup overlay to prevent leaks - docs: update test count table Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs(users): add followup suggestions user manual - New feature page: followup-suggestions.md covering usage, keybindings, fast model configuration, settings, and quality filters - commands.md: add /model --fast command reference - settings.md: add enableFollowupSuggestions, enableCacheSharing, enableSpeculation, and fastModel settings documentation - _meta.ts: register new page in navigation Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs(users): audit fixes for followup suggestions documentation - followup-suggestions.md: add 300ms delay, WebUI support, plan mode guard, non-interactive guard, slash commands as single-word, meta/error filters, character limit - settings.md: move fastModel next to model section, add /model --fast cross-reference and link to feature page - overview.md: add followup suggestions to feature list - i18n: add missing translations for 'Set fast model for background tasks' and 'Fast model updated.' 
in all 6 locales Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address Copilot review comments (batch 5) - modelCommand: remove duplicate info message (keep addItem only) - followup-suggestions.md: clarify WebUI requires host app wiring - speculation-design.md: fix abort telemetry description - i18n: add missing translations for fast model strings Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(cli): remove duplicate message in /model --fast command Use return message instead of addItem + empty return to avoid blank INFO line in history. Also handle missing settings service. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(i18n): remove unused 'Fast model updated.' translations The /model --fast command now returns the model name directly instead of using this string. Remove dead translations. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(followup): disable thinking mode for suggestion and speculation Forked queries inherit the main conversation's generationConfig which may have thinkingConfig enabled. This wastes tokens and adds latency for background tasks that don't need reasoning. 
Explicitly set thinkingConfig.includeThoughts=false in both paths: - createForkedChat (covers forked query + speculation) - generateViaBaseLlm (non-cache-sharing fallback) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: document thinking mode auto-disable for background tasks - User docs: note that thinking is auto-disabled for suggestions/speculation - Design docs: detail thinkingConfig override in both forked query and BaseLlm paths, explain why cache hits are unaffected Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> Co-authored-by: jinjing.zzj <jinjing.zzj@alibaba-inc.com> Co-authored-by: yiliang114 <1204183885@qq.com>
Qwen Code Configuration
Tip
Authentication / API keys: Authentication (Qwen OAuth, Alibaba Cloud Coding Plan, or API Key) and auth-related environment variables (like `OPENAI_API_KEY`) are documented in Authentication.
Note
Note on New Configuration Format: The format of the `settings.json` file has been updated to a new, more organized structure. The old format will be migrated automatically.

Qwen Code offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.
Configuration layers
Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):
| Level | Configuration Source | Description |
|---|---|---|
| 1 | Default values | Hardcoded defaults within the application |
| 2 | System defaults file | System-wide default settings that can be overridden by other settings files |
| 3 | User settings file | Global settings for the current user |
| 4 | Project settings file | Project-specific settings |
| 5 | System settings file | System-wide settings that override all other settings files |
| 6 | Environment variables | System-wide or session-specific variables, potentially loaded from .env files |
| 7 | Command-line arguments | Values passed when launching the CLI |
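For instance, suppose the user settings file and the project settings file both set a theme (the second theme name below is illustrative). The project value wins because project settings sit higher in the precedence order:

`~/.qwen/settings.json`:

```json
{ "ui": { "theme": "GitHub" } }
```

`.qwen/settings.json` (project):

```json
{ "ui": { "theme": "GitHub Dark" } }
```

With both files present, the effective theme is the project's.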
Settings files
Qwen Code uses JSON settings files for persistent configuration. There are four locations for these files:
| File Type | Location | Scope |
|---|---|---|
| System defaults file | Linux: `/etc/qwen-code/system-defaults.json`; Windows: `C:\ProgramData\qwen-code\system-defaults.json`; macOS: `/Library/Application Support/QwenCode/system-defaults.json`. The path can be overridden using the `QWEN_CODE_SYSTEM_DEFAULTS_PATH` environment variable. | Provides a base layer of system-wide default settings. These settings have the lowest precedence and are intended to be overridden by user, project, or system override settings. |
| User settings file | `~/.qwen/settings.json` (where `~` is your home directory). | Applies to all Qwen Code sessions for the current user. |
| Project settings file | `.qwen/settings.json` within your project's root directory. | Applies only when running Qwen Code from that specific project. Project settings override user settings. |
| System settings file | Linux: `/etc/qwen-code/settings.json`; Windows: `C:\ProgramData\qwen-code\settings.json`; macOS: `/Library/Application Support/QwenCode/settings.json`. The path can be overridden using the `QWEN_CODE_SYSTEM_SETTINGS_PATH` environment variable. | Applies to all Qwen Code sessions on the system, for all users. System settings override user and project settings. May be useful for enterprise system administrators to control users' Qwen Code setups. |
Note
Note on environment variables in settings: String values within your `settings.json` files can reference environment variables using either `$VAR_NAME` or `${VAR_NAME}` syntax. These variables are resolved automatically when the settings are loaded. For example, if you have an environment variable `MY_API_TOKEN`, you could use it in `settings.json` like this: `"apiKey": "$MY_API_TOKEN"`.
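As a concrete sketch, assuming the environment variables `TAVILY_API_KEY` and `TEAM_NAME` are set (the custom header name here is illustrative), both syntaxes resolve at load time:

```json
{
  "advanced": {
    "tavilyApiKey": "$TAVILY_API_KEY"
  },
  "model": {
    "generationConfig": {
      "customHeaders": {
        "X-Team": "${TEAM_NAME}-qwen"
      }
    }
  }
}
```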
The .qwen directory in your project
In addition to a project settings file, a project's .qwen directory can contain other project-specific files related to Qwen Code's operation, such as:
- Custom sandbox profiles (e.g. `.qwen/sandbox-macos-custom.sb`, `.qwen/sandbox.Dockerfile`).
- Agent Skills under `.qwen/skills/` (each Skill is a directory containing a `SKILL.md`).
Configuration migration
Qwen Code automatically migrates legacy configuration settings to the new format. Old settings files are backed up before migration. The following settings have been renamed from negative (disable*) to positive (enable*) naming:
| Old Setting | New Setting | Notes |
|---|---|---|
| `disableAutoUpdate` + `disableUpdateNag` | `general.enableAutoUpdate` | Consolidated into a single setting |
| `disableLoadingPhrases` | `ui.accessibility.enableLoadingPhrases` | |
| `disableFuzzySearch` | `context.fileFiltering.enableFuzzySearch` | |
| `disableCacheControl` | `model.generationConfig.enableCacheControl` | |
Note
Boolean value inversion: When migrating, boolean values are inverted (e.g., `disableAutoUpdate: true` becomes `enableAutoUpdate: false`).
Consolidation policy for disableAutoUpdate and disableUpdateNag
When both legacy settings are present with different values, the migration follows this policy: if either disableAutoUpdate or disableUpdateNag is true, then enableAutoUpdate becomes false:
| `disableAutoUpdate` | `disableUpdateNag` | Migrated `enableAutoUpdate` |
|---|---|---|
| `false` | `false` | `true` |
| `false` | `true` | `false` |
| `true` | `false` | `false` |
| `true` | `true` | `false` |
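To make the rename, inversion, and consolidation concrete, a legacy file such as this (values chosen for illustration):

```json
{
  "disableAutoUpdate": false,
  "disableUpdateNag": true,
  "disableFuzzySearch": true
}
```

would be migrated to:

```json
{
  "general": {
    "enableAutoUpdate": false
  },
  "context": {
    "fileFiltering": {
      "enableFuzzySearch": false
    }
  }
}
```

`enableAutoUpdate` becomes `false` because `disableUpdateNag` was `true`, per the consolidation policy above.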
Available settings in settings.json
Settings are organized into categories. All settings should be placed within their corresponding top-level category object in your settings.json file.
general
| Setting | Type | Description | Default |
|---|---|---|---|
| `general.preferredEditor` | string | The preferred editor to open files in. | `undefined` |
| `general.vimMode` | boolean | Enable Vim keybindings. | `false` |
| `general.enableAutoUpdate` | boolean | Enable automatic update checks and installations on startup. | `true` |
| `general.gitCoAuthor` | boolean | Automatically add a `Co-authored-by` trailer to git commit messages when commits are made through Qwen Code. | `true` |
| `general.checkpointing.enabled` | boolean | Enable session checkpointing for recovery. | `false` |
| `general.defaultFileEncoding` | string | Default encoding for new files. Use `"utf-8"` (default) for UTF-8 without BOM, or `"utf-8-bom"` for UTF-8 with BOM. Only change this if your project specifically requires BOM. | `"utf-8"` |
output
| Setting | Type | Description | Default | Possible Values |
|---|---|---|---|---|
| `output.format` | string | The format of the CLI output. | `"text"` | `"text"`, `"json"` |
ui
| Setting | Type | Description | Default |
|---|---|---|---|
| `ui.theme` | string | The color theme for the UI. See Themes for available options. | `undefined` |
| `ui.customThemes` | object | Custom theme definitions. | `{}` |
| `ui.hideWindowTitle` | boolean | Hide the window title bar. | `false` |
| `ui.hideTips` | boolean | Hide helpful tips in the UI. | `false` |
| `ui.hideBanner` | boolean | Hide the application banner. | `false` |
| `ui.hideFooter` | boolean | Hide the footer from the UI. | `false` |
| `ui.showMemoryUsage` | boolean | Display memory usage information in the UI. | `false` |
| `ui.showLineNumbers` | boolean | Show line numbers in code blocks in the CLI output. | `true` |
| `ui.showCitations` | boolean | Show citations for generated text in the chat. | `true` |
| `enableWelcomeBack` | boolean | Show the welcome back dialog when returning to a project with conversation history. When enabled, Qwen Code automatically detects if you're returning to a project with a previously generated project summary (`.qwen/PROJECT_SUMMARY.md`) and shows a dialog allowing you to continue your previous conversation or start fresh. This feature integrates with the `/summary` command and the quit confirmation dialog. | `true` |
| `ui.accessibility.enableLoadingPhrases` | boolean | Enable loading phrases (disable for accessibility). | `true` |
| `ui.accessibility.screenReader` | boolean | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | `false` |
| `ui.customWittyPhrases` | array of strings | A list of custom phrases to display during loading states. When provided, the CLI cycles through these phrases instead of the default ones. | `[]` |
| `ui.enableFollowupSuggestions` | boolean | Enable follow-up suggestions that predict what you want to type next after the model responds. Suggestions appear as ghost text and can be accepted with Tab, Enter, or Right Arrow. | `true` |
| `ui.enableCacheSharing` | boolean | Use cache-aware forked queries for suggestion generation. Reduces cost on providers that support prefix caching (experimental). | `true` |
| `ui.enableSpeculation` | boolean | Speculatively execute accepted suggestions before submission. Results appear instantly when you accept (experimental). | `false` |
ide
| Setting | Type | Description | Default |
|---|---|---|---|
| `ide.enabled` | boolean | Enable IDE integration mode. | `false` |
| `ide.hasSeenNudge` | boolean | Whether the user has seen the IDE integration nudge. | `false` |
privacy
| Setting | Type | Description | Default |
|---|---|---|---|
| `privacy.usageStatisticsEnabled` | boolean | Enable collection of usage statistics. | `true` |
model
| Setting | Type | Description | Default |
|---|---|---|---|
| `model.name` | string | The Qwen model to use for conversations. | `undefined` |
| `model.maxSessionTurns` | number | Maximum number of user/model/tool turns to keep in a session. `-1` means unlimited. | `-1` |
| `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, `enableCacheControl`, `contextWindowSize` (override the model's context window size), `modalities` (override auto-detected input modalities), `customHeaders` (custom HTTP headers for API requests), and `extra_body` (additional body parameters for OpenAI-compatible API requests only), along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
| `model.chatCompression.contextPercentageThreshold` | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` triggers compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely. | `0.7` |
| `model.skipNextSpeakerCheck` | boolean | Skip the next speaker check. | `false` |
| `model.skipLoopDetection` | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false-positive loop detection interruptions. | `false` |
| `model.skipStartupContext` | boolean | Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup. | `false` |
| `model.enableOpenAILogging` | boolean | Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files. | `false` |
| `model.openAILoggingDir` | string | Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from the current working directory), and `~` expansion (home directory). | `undefined` |
Example `model.generationConfig`:

```json
{
  "model": {
    "generationConfig": {
      "timeout": 60000,
      "contextWindowSize": 128000,
      "modalities": {
        "image": true
      },
      "enableCacheControl": true,
      "customHeaders": {
        "X-Client-Request-ID": "req-123"
      },
      "extra_body": {
        "enable_thinking": true
      },
      "samplingParams": {
        "temperature": 0.2,
        "top_p": 0.8,
        "max_tokens": 1024
      }
    }
  }
}
```
`contextWindowSize`:
Overrides the default context window size for the selected model. Qwen Code determines the context window using built-in defaults based on model-name matching, with a constant fallback value. Use this setting when a provider's effective context limit differs from Qwen Code's default. This value defines the model's assumed maximum context capacity, not a per-request token limit.
`modalities`:
Overrides the auto-detected input modalities for the selected model. Qwen Code automatically detects supported modalities (image, PDF, audio, video) based on model-name pattern matching. Use this setting when the auto-detection is incorrect (for example, to enable `pdf` for a model that supports it but isn't recognized). Format: `{ "image": true, "pdf": true, "audio": true, "video": true }`. Omit a key or set it to `false` for unsupported types.
`customHeaders`:
Allows you to add custom HTTP headers to all API requests. This is useful for request tracing, monitoring, API gateway routing, or when different models require different headers. If `customHeaders` is defined in `modelProviders[].generationConfig.customHeaders`, it is used directly; otherwise, headers from `model.generationConfig.customHeaders` are used. No merging occurs between the two levels.
`extra_body`:
The `extra_body` field allows you to add custom parameters to the request body sent to the API. This is useful for provider-specific options that are not covered by the standard configuration fields. Note: this field is only supported for OpenAI-compatible providers (`openai`, `qwen-oauth`); it is ignored for Anthropic and Gemini providers. If `extra_body` is defined in `modelProviders[].generationConfig.extra_body`, it is used directly; otherwise, values from `model.generationConfig.extra_body` are used.
`model.openAILoggingDir` examples:

- `"~/qwen-logs"`: logs to the `~/qwen-logs` directory
- `"./custom-logs"`: logs to `./custom-logs` relative to the current directory
- `"/tmp/openai-logs"`: logs to the absolute path `/tmp/openai-logs`
fastModel
| Setting | Type | Description | Default |
|---|---|---|---|
| `fastModel` | string | Model for background tasks (suggestion generation, speculation). Leave empty to use the main model. A smaller/faster model (e.g., `qwen3.5-flash`) reduces latency and cost. Can also be set via `/model --fast`. | `""` |
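`fastModel` sits at the top level of `settings.json`, parallel to `model` (the model names below are illustrative):

```json
{
  "fastModel": "qwen3.5-flash",
  "model": {
    "name": "qwen3-coder-plus"
  }
}
```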
context
| Setting | Type | Description | Default |
|---|---|---|---|
| `context.fileName` | string or array of strings | The name of the context file(s). | `undefined` |
| `context.importFormat` | string | The format to use when importing memory. | `undefined` |
| `context.includeDirectories` | array | Additional directories to include in the workspace context. Specifies an array of additional absolute or relative paths. Missing directories are skipped with a warning by default. Paths can use `~` to refer to the user's home directory. This setting can be combined with the `--include-directories` command-line flag. | `[]` |
| `context.loadFromIncludeDirectories` | boolean | Controls the behavior of the `/memory refresh` command. If `true`, `QWEN.md` files are loaded from all directories that are added. If `false`, `QWEN.md` is only loaded from the current directory. | `false` |
| `context.fileFiltering.respectGitIgnore` | boolean | Respect `.gitignore` files when searching. | `true` |
| `context.fileFiltering.respectQwenIgnore` | boolean | Respect `.qwenignore` files when searching. | `true` |
| `context.fileFiltering.enableRecursiveFileSearch` | boolean | Whether to enable searching recursively for filenames under the current tree when completing `@` prefixes in the prompt. | `true` |
| `context.fileFiltering.enableFuzzySearch` | boolean | When `true`, enables fuzzy search capabilities when searching for files. Set to `false` to improve performance on projects with a large number of files. | `true` |
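As a sketch (the second context file name and the directory paths are illustrative):

```json
{
  "context": {
    "fileName": ["QWEN.md", "AGENTS.md"],
    "includeDirectories": ["~/shared-docs", "../sibling-project"],
    "loadFromIncludeDirectories": true
  }
}
```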
Troubleshooting File Search Performance
If you are experiencing performance issues with file searching (e.g., with @ completions), especially in projects with a very large number of files, here are a few things you can try in order of recommendation:
- Use `.qwenignore`: Create a `.qwenignore` file in your project root to exclude directories that contain a large number of files you don't need to reference (e.g., build artifacts, logs, `node_modules`). Reducing the total number of files crawled is the most effective way to improve performance.
- Disable fuzzy search: If ignoring files is not enough, you can disable fuzzy search by setting `enableFuzzySearch` to `false` in your `settings.json` file. This uses a simpler, non-fuzzy matching algorithm, which can be faster.
- Disable recursive file search: As a last resort, you can disable recursive file search entirely by setting `enableRecursiveFileSearch` to `false`. This is the fastest option, as it avoids a recursive crawl of your project. However, it means you will need to type the full path to files when using `@` completions.
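The last two steps translate to the following `settings.json` fragment (enable only what you actually need):

```json
{
  "context": {
    "fileFiltering": {
      "enableFuzzySearch": false,
      "enableRecursiveFileSearch": false
    }
  }
}
```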
tools
| Setting | Type | Description | Default | Notes |
|---|---|---|---|---|
| `tools.sandbox` | boolean or string | Sandbox execution environment (can be a boolean or a path string). | `undefined` | |
| `tools.shell.enableInteractiveShell` | boolean | Use `node-pty` for an interactive shell experience. Fallback to `child_process` still applies. | `false` | |
| `tools.core` | array of strings | Deprecated; will be removed in the next version. Use `permissions.allow` + `permissions.deny` instead. Restricts built-in tools to an allowlist; all tools not in the list are disabled. | `undefined` | |
| `tools.exclude` | array of strings | Deprecated. Use `permissions.deny` instead. Tool names to exclude from discovery. Automatically migrated to the `permissions` format on first load. | `undefined` | |
| `tools.allowed` | array of strings | Deprecated. Use `permissions.allow` instead. Tool names that bypass the confirmation dialog. Automatically migrated to the `permissions` format on first load. | `undefined` | |
| `tools.approvalMode` | string | Sets the default approval mode for tool usage. | `default` | Possible values: `plan` (analyze only; do not modify files or execute commands), `default` (require approval before file edits or shell commands run), `auto-edit` (automatically approve file edits), `yolo` (automatically approve all tool calls) |
| `tools.discoveryCommand` | string | Command to run for tool discovery. | `undefined` | |
| `tools.callCommand` | string | Defines a custom shell command for calling a specific tool that was discovered using `tools.discoveryCommand`. The shell command must take the function name (exactly as in the function declaration) as its first command-line argument, read function arguments as JSON on stdin (analogous to `functionCall.args`), and return function output as JSON on stdout (analogous to `functionResponse.response.content`). | `undefined` | |
| `tools.useRipgrep` | boolean | Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance. | `true` | |
| `tools.useBuiltinRipgrep` | boolean | Use the bundled ripgrep binary. When set to `false`, the system-level `rg` command is used instead. Only effective when `tools.useRipgrep` is `true`. | `true` | |
| `tools.truncateToolOutputThreshold` | number | Truncate tool output if it is larger than this many characters. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | `25000` | Requires restart: yes |
| `tools.truncateToolOutputLines` | number | Maximum lines or entries kept when truncating tool output. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | `1000` | Requires restart: yes |
Note
Migrating from `tools.core` / `tools.exclude` / `tools.allowed`: These legacy settings are deprecated and automatically migrated to the new `permissions` format on first load. Prefer configuring `permissions.allow` / `permissions.deny` directly. Use `/permissions` to manage rules interactively.
permissions
The permissions system provides fine-grained control over which tools can run, which require confirmation, and which are blocked.
Decision priority (highest first): `deny` > `ask` > `allow` > (default/interactive mode)
The first matching rule wins. Rules use the format `"ToolName"` or `"ToolName(specifier)"`.
| Setting | Type | Description | Default |
|---|---|---|---|
| `permissions.allow` | array of strings | Rules for auto-approved tool calls (no confirmation needed). Merged across all scopes (user + project + system). | `undefined` |
| `permissions.ask` | array of strings | Rules for tool calls that always require user confirmation. Takes priority over `allow`. | `undefined` |
| `permissions.deny` | array of strings | Rules for blocked tool calls. Highest priority; overrides both `allow` and `ask`. | `undefined` |
Tool name aliases (any of these work in rules):
| Alias | Canonical tool | Notes |
|---|---|---|
| `Bash`, `Shell` | `run_shell_command` | |
| `Read`, `ReadFile` | `read_file` | Meta-category (see below) |
| `Edit`, `EditFile` | `edit` | Meta-category (see below) |
| `Write`, `WriteFile` | `write_file` | |
| `Grep`, `SearchFiles` | `grep_search` | |
| `Glob`, `FindFiles` | `glob` | |
| `ListFiles` | `list_directory` | |
| `WebFetch` | `web_fetch` | |
| `Agent` | `task` | |
| `Skill` | `skill` | |
Meta-categories:
Some rule names automatically cover multiple tools:
| Rule name | Tools covered |
|---|---|
| `Read` | `read_file`, `grep_search`, `glob`, `list_directory` |
| `Edit` | `edit`, `write_file` |
Important
`Read(/path/**)` matches all four read tools (file read, grep, glob, and directory listing). To restrict only file reading, use `ReadFile(/path/**)` or `read_file(/path/**)`.
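For example, the following fragment (paths illustrative) blocks every kind of read under `./secrets/`, but under `./logs/` only blocks direct file reads while still allowing grep, glob, and directory listing:

```json
{
  "permissions": {
    "deny": ["Read(./secrets/**)", "ReadFile(./logs/**)"]
  }
}
```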
Rule syntax examples:
| Rule | Meaning |
|---|---|
| `"Bash"` | All shell commands |
| `"Bash(git *)"` | Shell commands starting with `git` (word boundary: NOT `gitk`) |
| `"Bash(git push *)"` | Shell commands like `git push origin main` |
| `"Bash(npm run *)"` | Any `npm run` script |
| `"Read"` | All file read operations (read, grep, glob, list) |
| `"Read(./secrets/**)"` | Read any file under `./secrets/` recursively |
| `"Edit(/src/**/*.ts)"` | Edit TypeScript files under project root `/src/` |
| `"WebFetch(api.example.com)"` | Fetch from `api.example.com` and all its subdomains |
| `"mcp__puppeteer"` | All tools from the `puppeteer` MCP server |
Path pattern prefixes:
| Prefix | Meaning | Example |
|---|---|---|
| `//` | Absolute path from filesystem root | `//etc/passwd` |
| `~/` | Relative to home directory | `~/Documents/*.pdf` |
| `/` | Relative to project root | `/src/**/*.ts` |
| `./` | Relative to current working directory | `./secrets/**` |
| (none) | Same as `./` | `secrets/**` |
Shell command bypass prevention:
Permission rules for Read, Edit, and WebFetch are also enforced when the agent runs equivalent shell commands. For example, if Read(./.env) is in deny, the agent cannot bypass it via cat .env in a shell command. Supported shell commands include cat, grep, curl, wget, cp, mv, rm, chmod, and many more. Unknown/safe commands (e.g. git) are unaffected by file/network rules.
Migrating from legacy settings:
| Legacy setting | Equivalent permissions rule | Notes |
|---|---|---|
| `tools.allowed` | `permissions.allow` | Auto-migrated on first load |
| `tools.exclude` | `permissions.deny` | Auto-migrated on first load |
| `tools.core` | `permissions.allow` (allowlist) | Auto-migrated; unlisted tools are disabled at the registry level |
Example configuration:

```json
{
  "permissions": {
    "allow": ["Bash(git *)", "Bash(npm run *)", "Read(//Users/alice/code/**)"],
    "ask": ["Bash(git push *)", "Edit"],
    "deny": ["Bash(rm -rf *)", "Read(.env)", "WebFetch(malicious.com)"]
  }
}
```
Tip
Use `/permissions` in the interactive CLI to view, add, and remove rules without editing `settings.json` directly.
mcp
| Setting | Type | Description | Default |
|---|---|---|---|
| `mcp.serverCommand` | string | Command to start an MCP server. | `undefined` |
| `mcp.allowed` | array of strings | An allowlist of MCP server names that should be made available to the model. Can be used to restrict the set of MCP servers to connect to. Ignored if `--allowed-mcp-server-names` is set. | `undefined` |
| `mcp.excluded` | array of strings | A denylist of MCP servers to exclude. A server listed in both `mcp.excluded` and `mcp.allowed` is excluded. Ignored if `--allowed-mcp-server-names` is set. | `undefined` |
Note
Security note for MCP servers: These settings use simple string matching on MCP server names, which can be modified. If you're a system administrator looking to prevent users from bypassing this, consider configuring `mcpServers` at the system settings level so that users cannot configure any MCP servers of their own. This should not be relied on as an airtight security mechanism.
lsp
Warning
Experimental feature: LSP support is currently experimental and disabled by default. Enable it using the `--experimental-lsp` command-line flag.
Language Server Protocol (LSP) provides code intelligence features like go-to-definition, find references, and diagnostics.
LSP server configuration is done through .lsp.json files in your project root directory, not through settings.json. See the LSP documentation for configuration details and examples.
security
| Setting | Type | Description | Default |
|---|---|---|---|
| `security.folderTrust.enabled` | boolean | Tracks whether folder trust is enabled. | `false` |
| `security.auth.selectedType` | string | The currently selected authentication type. | `undefined` |
| `security.auth.enforcedType` | string | The required auth type (useful for enterprises). | `undefined` |
| `security.auth.useExternal` | boolean | Whether to use an external authentication flow. | `undefined` |
advanced
| Setting | Type | Description | Default |
|---|---|---|---|
| `advanced.autoConfigureMemory` | boolean | Automatically configure Node.js memory limits. | `false` |
| `advanced.dnsResolutionOrder` | string | The DNS resolution order. | `undefined` |
| `advanced.excludedEnvVars` | array of strings | Environment variables to exclude from being loaded from project `.env` files. This prevents project-specific environment variables (like `DEBUG=true`) from interfering with CLI behavior. Variables from `.qwen/.env` files are never excluded. | `["DEBUG", "DEBUG_MODE"]` |
| `advanced.bugCommand` | object | Configuration for the bug report command. Overrides the default URL for the `/bug` command. Properties: `urlTemplate` (string), a URL that can contain `{title}` and `{info}` placeholders. Example: `"bugCommand": { "urlTemplate": "https://bug.example.com/new?title={title}&info={info}" }` | `undefined` |
| `advanced.tavilyApiKey` | string | API key for the Tavily web search service. Used to enable the `web_search` tool functionality. | `undefined` |
Note
Note about `advanced.tavilyApiKey`: This is a legacy configuration format. For Qwen OAuth users, the DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
mcpServers
Configures connections to one or more Model Context Protocol (MCP) servers for discovering and using custom tools. Qwen Code attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names are prefixed with the server alias you defined in the configuration (e.g., `serverAlias__actualToolName`) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of `command`, `url`, or `httpUrl` must be provided. If multiple are specified, the order of precedence is `httpUrl`, then `url`, then `command`.
| Property | Type | Description | Optional |
|---|---|---|---|
| `mcpServers.<SERVER_NAME>.command` | string | The command to execute to start the MCP server via standard I/O. | Yes |
| `mcpServers.<SERVER_NAME>.args` | array of strings | Arguments to pass to the command. | Yes |
| `mcpServers.<SERVER_NAME>.env` | object | Environment variables to set for the server process. | Yes |
| `mcpServers.<SERVER_NAME>.cwd` | string | The working directory in which to start the server. | Yes |
| `mcpServers.<SERVER_NAME>.url` | string | The URL of an MCP server that uses Server-Sent Events (SSE) for communication. | Yes |
| `mcpServers.<SERVER_NAME>.httpUrl` | string | The URL of an MCP server that uses streamable HTTP for communication. | Yes |
| `mcpServers.<SERVER_NAME>.headers` | object | A map of HTTP headers to send with requests to `url` or `httpUrl`. | Yes |
| `mcpServers.<SERVER_NAME>.timeout` | number | Timeout in milliseconds for requests to this MCP server. | Yes |
| `mcpServers.<SERVER_NAME>.trust` | boolean | Trust this server and bypass all tool call confirmations. | Yes |
| `mcpServers.<SERVER_NAME>.description` | string | A brief description of the server, which may be used for display purposes. | Yes |
| `mcpServers.<SERVER_NAME>.includeTools` | array of strings | List of tool names to include from this MCP server. When specified, only the tools listed here are available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default. | Yes |
| `mcpServers.<SERVER_NAME>.excludeTools` | array of strings | List of tool names to exclude from this MCP server. Tools listed here are not available to the model, even if the server exposes them. Note: `excludeTools` takes precedence over `includeTools`; if a tool appears in both lists, it is excluded. | Yes |
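As a sketch of the transport options and tool filtering described above (the server names, URL, port, token, and tool names are illustrative, not real servers):

```json
{
  "mcpServers": {
    "httpServer": {
      "httpUrl": "http://localhost:8000/mcp",
      "headers": { "Authorization": "Bearer your-token-here" },
      "timeout": 30000,
      "includeTools": ["run_script", "format_code"]
    },
    "stdioServer": {
      "command": "python",
      "args": ["-m", "my_mcp_server"],
      "env": { "LOG_LEVEL": "info" },
      "cwd": "./mcp-servers"
    }
  }
}
```

If both servers exposed a tool named `run_script`, the prefixing rule above would surface them as `httpServer__run_script` and `stdioServer__run_script`.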
telemetry
Configures logging and metrics collection for Qwen Code. For more information, see telemetry.
| Setting | Type | Description | Default |
|---|---|---|---|
| `telemetry.enabled` | boolean | Whether or not telemetry is enabled. | |
| `telemetry.target` | string | The destination for collected telemetry. Supported values are `local` and `gcp`. | |
| `telemetry.otlpEndpoint` | string | The endpoint for the OTLP Exporter. | |
| `telemetry.otlpProtocol` | string | The protocol for the OTLP Exporter (`grpc` or `http`). | |
| `telemetry.logPrompts` | boolean | Whether or not to include the content of user prompts in the logs. | |
| `telemetry.outfile` | string | The file to write telemetry to when `target` is `local`. | |
| `telemetry.useCollector` | boolean | Whether to use an external OTLP collector. | |
Example settings.json
Here is an example of a settings.json file using the nested structure introduced in v0.3.0:
{
  "general": {
    "vimMode": true,
    "preferredEditor": "code"
  },
  "ui": {
    "theme": "GitHub",
    "hideTips": false,
    "customWittyPhrases": [
      "You forget a thousand things every day. Make sure this is one of 'em",
      "Connecting to AGI"
    ]
  },
  "tools": {
    "approvalMode": "yolo",
    "sandbox": "docker",
    "discoveryCommand": "bin/get_tools",
    "callCommand": "bin/call_tool",
    "exclude": ["write_file"]
  },
  "mcpServers": {
    "mainServer": {
      "command": "bin/mcp_server.py"
    },
    "anotherServer": {
      "command": "node",
      "args": ["mcp_server.js", "--verbose"]
    }
  },
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": true
  },
  "privacy": {
    "usageStatisticsEnabled": true
  },
  "model": {
    "name": "qwen3-coder-plus",
    "maxSessionTurns": 10,
    "enableOpenAILogging": false,
    "openAILoggingDir": "~/qwen-logs"
  },
  "context": {
    "fileName": ["CONTEXT.md", "QWEN.md"],
    "includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
    "loadFromIncludeDirectories": true,
    "fileFiltering": {
      "respectGitIgnore": false
    }
  },
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
  }
}
Shell History
The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user's home folder.
- Location: `~/.qwen/tmp/<project_hash>/shell_history`
  - `<project_hash>` is a unique identifier generated from your project's root path.
  - The history is stored in a file named `shell_history`.
Environment Variables & .env Files
Environment variables are a common way to configure applications, especially for sensitive information (like tokens) or for settings that might change between environments.
Qwen Code can automatically load environment variables from .env files.
For authentication-related variables (like OPENAI_*) and the recommended .qwen/.env approach, see Authentication.
Tip
Environment Variable Exclusion: Some environment variables (like `DEBUG` and `DEBUG_MODE`) are automatically excluded from project `.env` files by default to prevent interference with the CLI behavior. Variables from `.qwen/.env` files are never excluded. You can customize this behavior using the `advanced.excludedEnvVars` setting in your `settings.json` file.
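For example, a settings.json fragment that customizes the exclusion list (this sketch assumes the setting replaces the default list, so listing only `DEBUG_MODE` would allow `DEBUG` through from project `.env` files):

```json
{
  "advanced": {
    "excludedEnvVars": ["DEBUG_MODE"]
  }
}
```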
Environment Variables Table
| Variable | Description | Notes |
|---|---|---|
| `QWEN_TELEMETRY_ENABLED` | Set to `true` or `1` to enable telemetry. Any other value is treated as disabling it. | Overrides the `telemetry.enabled` setting. |
| `QWEN_TELEMETRY_TARGET` | Sets the telemetry target (`local` or `gcp`). | Overrides the `telemetry.target` setting. |
| `QWEN_TELEMETRY_OTLP_ENDPOINT` | Sets the OTLP endpoint for telemetry. | Overrides the `telemetry.otlpEndpoint` setting. |
| `QWEN_TELEMETRY_OTLP_PROTOCOL` | Sets the OTLP protocol (`grpc` or `http`). | Overrides the `telemetry.otlpProtocol` setting. |
| `QWEN_TELEMETRY_LOG_PROMPTS` | Set to `true` or `1` to enable logging of user prompts. Any other value is treated as disabling it. | Overrides the `telemetry.logPrompts` setting. |
| `QWEN_TELEMETRY_OUTFILE` | Sets the file path to write telemetry to when the target is `local`. | Overrides the `telemetry.outfile` setting. |
| `QWEN_TELEMETRY_USE_COLLECTOR` | Set to `true` or `1` to enable using an external OTLP collector. Any other value is treated as disabling it. | Overrides the `telemetry.useCollector` setting. |
| `QWEN_SANDBOX` | Alternative to the `sandbox` setting in `settings.json`. | Accepts `true`, `false`, `docker`, `podman`, or a custom command string. |
| `SEATBELT_PROFILE` | (macOS specific) Switches the Seatbelt (`sandbox-exec`) profile on macOS. | `permissive-open`: (Default) Restricts writes to the project folder (and a few other folders; see `packages/cli/src/utils/sandbox-macos-permissive-open.sb`) but allows other operations. `strict`: Uses a strict profile that declines operations by default. `<profile_name>`: Uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project's `.qwen/` directory (e.g., `my-project/.qwen/sandbox-macos-custom.sb`). |
| `DEBUG` or `DEBUG_MODE` | (Often used by underlying libraries or the CLI itself) Set to `true` or `1` to enable verbose debug logging, which can be helpful for troubleshooting. | Note: These variables are automatically excluded from project `.env` files by default to prevent interference with the CLI behavior. Use `.qwen/.env` files if you need to set these for Qwen Code specifically. |
| `NO_COLOR` | Set to any value to disable all color output in the CLI. | |
| `CLI_TITLE` | Set to a string to customize the title of the CLI. | |
| `CODE_ASSIST_ENDPOINT` | Specifies the endpoint for the code assist server. | This is useful for development and testing. |
| `TAVILY_API_KEY` | Your API key for the Tavily web search service. | Used to enable the `web_search` tool functionality. Example: `export TAVILY_API_KEY="tvly-your-api-key-here"` |
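As a sketch, the telemetry-related variables above can override the corresponding settings.json values for a single shell session (the output file path is illustrative):

```shell
# Override telemetry.* settings for this session only; these take
# precedence over the values in settings.json.
export QWEN_TELEMETRY_ENABLED=true
export QWEN_TELEMETRY_TARGET=local
export QWEN_TELEMETRY_OUTFILE="$HOME/.qwen/telemetry.log"
echo "$QWEN_TELEMETRY_TARGET"   # prints "local"
```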
Command-Line Arguments
Arguments passed directly when running the CLI can override other configurations for that specific session.
Command-Line Arguments Table
| Argument | Alias | Description | Possible Values | Notes |
|---|---|---|---|---|
| `--model` | `-m` | Specifies the Qwen model to use for this session. | Model name | Example: `npm start -- --model qwen3-coder-plus` |
| `--prompt` | `-p` | Passes a prompt directly to the command. This invokes Qwen Code in non-interactive mode. | Your prompt text | For scripting, use the `--output-format json` flag to get structured output. |
| `--prompt-interactive` | `-i` | Starts an interactive session with the provided prompt as the initial input. | Your prompt text | The prompt is processed within the interactive session, not before it. Cannot be used when piping input from stdin. Example: `qwen -i "explain this code"` |
| `--system-prompt` | | Overrides the built-in main session system prompt for this run. | Your prompt text | Loaded context files such as `QWEN.md` are still appended after this override. Can be combined with `--append-system-prompt`. |
| `--append-system-prompt` | | Appends extra instructions to the main session system prompt for this run. | Your prompt text | Applied after the built-in prompt and loaded context files. Can be combined with `--system-prompt`. See Headless Mode for examples. |
| `--output-format` | `-o` | Specifies the format of the CLI output for non-interactive mode. | `text`, `json`, `stream-json` | `text`: (Default) The standard human-readable output. `json`: A machine-readable JSON output emitted at the end of execution. `stream-json`: Streaming JSON messages emitted as they occur during execution. For structured output and scripting, use `--output-format json` or `--output-format stream-json`. See Headless Mode for detailed information. |
| `--input-format` | | Specifies the format consumed from standard input. | `text`, `stream-json` | `text`: (Default) Standard text input from stdin or command-line arguments. `stream-json`: JSON message protocol via stdin for bidirectional communication. Requirement: `--input-format stream-json` requires `--output-format stream-json` to be set. When using `stream-json`, stdin is reserved for protocol messages. See Headless Mode for detailed information. |
| `--include-partial-messages` | | Includes partial assistant messages when using the `stream-json` output format. When enabled, emits stream events (`message_start`, `content_block_delta`, etc.) as they occur during streaming. | | Default: `false`. Requirement: requires `--output-format stream-json` to be set. See Headless Mode for detailed information about stream events. |
| `--sandbox` | `-s` | Enables sandbox mode for this session. | | |
| `--sandbox-image` | | Sets the sandbox image URI. | | |
| `--debug` | `-d` | Enables debug mode for this session, providing more verbose output. | | |
| `--all-files` | `-a` | If set, recursively includes all files within the current directory as context for the prompt. | | |
| `--help` | `-h` | Displays help information about command-line arguments. | | |
| `--show-memory-usage` | | Displays the current memory usage. | | |
| `--yolo` | | Enables YOLO mode, which automatically approves all tool calls. | | |
| `--approval-mode` | | Sets the approval mode for tool calls. | `plan`, `default`, `auto-edit`, `yolo` | Supported modes: `plan`: analyze only; do not modify files or execute commands. `default`: require approval for file edits or shell commands (default behavior). `auto-edit`: automatically approve edit tools (`edit`, `write_file`) while prompting for others. `yolo`: automatically approve all tool calls (equivalent to `--yolo`). Cannot be used together with `--yolo`; use `--approval-mode=yolo` instead of `--yolo` for the new unified approach. Example: `qwen --approval-mode auto-edit`. See more about Approval Mode. |
| `--allowed-tools` | | A comma-separated list of tool names that will bypass the confirmation dialog. | Tool names | Example: `qwen --allowed-tools "Shell(git status)"` |
| `--telemetry` | | Enables telemetry. | | |
| `--telemetry-target` | | Sets the telemetry target. | | See telemetry for more information. |
| `--telemetry-otlp-endpoint` | | Sets the OTLP endpoint for telemetry. | | See telemetry for more information. |
| `--telemetry-otlp-protocol` | | Sets the OTLP protocol for telemetry (`grpc` or `http`). | | Defaults to `grpc`. See telemetry for more information. |
| `--telemetry-log-prompts` | | Enables logging of prompts for telemetry. | | See telemetry for more information. |
| `--checkpointing` | | Enables checkpointing. | | |
| `--acp` | | Enables ACP mode (Agent Client Protocol). Useful for IDE/editor integrations like Zed. | | Stable. Replaces the deprecated `--experimental-acp` flag. |
| `--experimental-lsp` | | Enables the experimental LSP (Language Server Protocol) feature for code intelligence (go-to-definition, find references, diagnostics, etc.). | | Experimental. Requires language servers to be installed. |
| `--extensions` | `-e` | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use the special term `qwen -e none` to disable all extensions. Example: `qwen -e my-extension -e my-other-extension` |
| `--list-extensions` | `-l` | Lists all available extensions and exits. | | |
| `--proxy` | | Sets the proxy for the CLI. | Proxy URL | Example: `--proxy http://localhost:7890` |
| `--include-directories` | | Includes additional directories in the workspace for multi-directory support. | Directory paths | Can be specified multiple times or as comma-separated values. A maximum of 5 directories can be added. Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2` |
| `--screen-reader` | | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | | |
| `--version` | | Displays the version of the CLI. | | |
| `--openai-logging` | | Enables logging of OpenAI API calls for debugging and analysis. | | This flag overrides the `enableOpenAILogging` setting in `settings.json`. |
| `--openai-logging-dir` | | Sets a custom directory path for OpenAI API logs. | Directory path | This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion. Example: `qwen --openai-logging-dir "~/qwen-logs" --openai-logging` |
| `--tavily-api-key` | | Sets the Tavily API key for web search functionality for this session. | API key | Example: `qwen --tavily-api-key tvly-your-api-key-here` |
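As a sketch of how several of the flags above combine for a scripted run (the paths and prompt are illustrative; the command string is echoed rather than executed, since `qwen` may not be on the PATH here):

```shell
# Compose a non-interactive invocation: auto-approve edits, add extra
# workspace directories, and request machine-readable output.
QWEN_CMD="qwen --approval-mode auto-edit \
  --include-directories /path/to/project1,/path/to/project2 \
  --output-format json \
  -p 'summarize recent changes'"
echo "$QWEN_CMD"
```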
Context Files (Hierarchical Instructional Context)
While not strictly configuration for the CLI's behavior, context files (defaulting to QWEN.md but configurable via the context.fileName setting) are crucial for configuring the instructional context (also referred to as "memory"). This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.
- Purpose: These Markdown files contain instructions, guidelines, or context that you want the Qwen model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
Example Context File Content (e.g. QWEN.md)
Here's a conceptual example of what a context file at the root of a TypeScript project might contain:
# Project: My Awesome TypeScript Library
## General Instructions:
- When generating new TypeScript code, please follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 20+.
## Coding Style:
- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (e.g., `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).
## Specific Component: `src/api/client.ts`
- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.
## Regarding Dependencies:
- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, please state the reason.
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- Hierarchical Loading and Precedence: The CLI implements a hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
  1. Global Context File:
     - Location: `~/.qwen/<configured-context-filename>` (e.g., `~/.qwen/QWEN.md` in your user home directory).
     - Scope: Provides default instructions for all your projects.
  2. Project Root & Ancestors Context Files:
     - Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a `.git` folder) or your home directory.
     - Scope: Provides context relevant to the entire project or a significant portion of it.
- Concatenation & UI Indication: The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- Importing Content: You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the Memory Import Processor documentation.
- Commands for Memory Management:
  - Use `/memory refresh` to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context.
  - Use `/memory show` to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
  - See the Commands documentation for full details on the `/memory` command and its sub-commands (`show` and `refresh`).
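As a sketch of the `@path/to/file.md` import syntax, a root context file can pull shared guidelines out into separate Markdown files (the file paths below are illustrative):

```markdown
# Project: My Awesome TypeScript Library

@./docs/coding-style.md
@./docs/api-conventions.md

## Project-specific notes
- Prefer functional programming paradigms where appropriate.
```

This keeps per-topic instructions in their own files while the root file stays a short index.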
By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI's memory and tailor Qwen Code's responses to your specific needs and projects.
Sandbox
Qwen Code can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.
Sandbox is disabled by default, but you can enable it in a few ways:
- Using the `--sandbox` or `-s` flag.
- Setting the `QWEN_SANDBOX` environment variable.
- Sandbox is enabled by default when using `--yolo` or `--approval-mode=yolo`.
By default, it uses a pre-built qwen-code-sandbox Docker image.
For project-specific sandboxing needs, you can create a custom Dockerfile at .qwen/sandbox.Dockerfile in your project's root directory. This Dockerfile can be based on the base sandbox image:
FROM qwen-code-sandbox
# Add your custom dependencies or configurations here
# For example:
# RUN apt-get update && apt-get install -y some-package
# COPY ./my-config /app/my-config
When .qwen/sandbox.Dockerfile exists, you can set the BUILD_SANDBOX environment variable when running Qwen Code to automatically build the custom sandbox image:
BUILD_SANDBOX=1 qwen -s
Usage Statistics
To help us improve Qwen Code, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.
What we collect:
- Tool Calls: We log the names of the tools that are called, whether they succeed or fail, and how long they take to execute. We do not collect the arguments passed to the tools or any data returned by them.
- API Requests: We log the model used for each request, the duration of the request, and whether it was successful. We do not collect the content of the prompts or responses.
- Session Information: We collect information about the configuration of the CLI, such as the enabled tools and the approval mode.
What we DON'T collect:
- Personally Identifiable Information (PII): We do not collect any personal information, such as your name, email address, or API keys.
- Prompt and Response Content: We do not log the content of your prompts or the responses from the model.
- File Content: We do not log the content of any files that are read or written by the CLI.
How to opt out:
You can opt out of usage statistics collection at any time by setting the usageStatisticsEnabled property to false under the privacy category in your settings.json file:
{
"privacy": {
"usageStatisticsEnabled": false
}
}
Note
When usage statistics are enabled, events are sent to an Alibaba Cloud RUM collection endpoint.