feat(core): event monitor tool with throttled stdout streaming (Phase C) (#3684)
* feat(core): event monitor tool with throttled stdout streaming (Phase C)

Add a new Monitor tool that spawns a long-running shell command and streams
its stdout lines back to the agent as event notifications. This is Phase C
from the background task management roadmap (#3634, #3666).

What changes:
- New MonitorRegistry (services/monitorRegistry.ts): per-monitor entry with
  lifecycle (running/completed/failed/cancelled), idle timeout auto-stop,
  max events auto-stop, AbortController-based cancellation. Follows the
  same structural pattern as BackgroundTaskRegistry.
- New Monitor tool (tools/monitor.ts): spawns via child_process.spawn with
  independent AbortController (Ctrl+C won't kill monitors), separate
  stdout/stderr line buffers, token-bucket throttling (burst=5, sustain=1/s).
  Returns immediately with monitor ID; events stream as notifications.
- Sleep interception in shell.ts: detectBlockedSleepPattern() blocks
  foreground `sleep N` (N>=2) and guides model to use Monitor or
  is_background instead.
- Config integration: MonitorRegistry instantiation, accessor, shutdown
  cleanup (abortAll), lazy tool registration.
- CLI wiring: notification callbacks in useGeminiStream.ts (interactive)
  and nonInteractiveCli.ts (headless), including hold-back loop abort on
  exit and SIGINT cleanup.
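The token-bucket throttle above (burst=5, sustain=1/s) can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual monitor.ts code:

```typescript
// Minimal token-bucket sketch (hypothetical names, not the monitor.ts code):
// burst capacity 5, refilling at 1 token per second, capped at the burst size.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private readonly capacity = 5,
    private readonly refillPerSec = 1,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  /** Returns true if a line may be emitted now; false means it is dropped. */
  tryConsume(now = Date.now()): boolean {
    const elapsedSec = Math.max(0, now - this.last) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec,
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The effect: a monitor may emit up to 5 lines immediately, then is held to roughly one line per second, so a chatty child process cannot flood the agent with event notifications.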

What this PR doesn't do (gated on #3471/#3488):
- Footer pill / dialog integration
- task_stop / send_message integration

Test plan:
- 21 MonitorRegistry unit tests (lifecycle, idle timeout, max events,
  XML escaping, nonexistent ID guard, callback clearing)
- 20 Monitor tool unit tests (validation, spawn, line buffering, separate
  stdout/stderr buffers, throttling, signal-killed path, turn isolation)
- 7 detectBlockedSleepPattern unit tests
- 2 E2E tests (monitor invocation, sleep interception)
- Full core suite: 248 files / 6151 passed

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): hold-back loop waits for monitors + emit task_started for SDK

Two fixes from Codex review:

P1: The non-interactive hold-back loop now includes monitorRegistry.getRunning()
in its wait condition, so monitors can stream events before the CLI exits.
Previously monitors were aborted immediately after the agent's first reply.

P2: MonitorRegistry gains setRegisterCallback(), and nonInteractiveCli wires
it to emit task_started system messages. Stream-json/SDK consumers now see
a task_started for each monitor, matching the backgroundTaskRegistry contract.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): Windows process kill + pipeline sleep false-positive

Two fixes from Codex review:

P1: Monitor abort handler now uses `taskkill /f /t` on Windows instead
of POSIX-only `process.kill(-pid)`. Follows the existing pattern in
ShellExecutionService.childProcessFallback.

P2: detectBlockedSleepPattern no longer uses splitCommands (which splits
on `|` pipes). Replaced with a regex that only matches sleep followed by
sequential separators (&&, ||, ;, &, newline), not pipes. `sleep 5 | cat`
is now correctly allowed.
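The separator-only matching can be sketched with a regex along these lines. This is illustrative, not the shipped detectBlockedSleepPattern (which also handles shell wrappers, trailing comments, and background `&`, per later commits in this PR):

```typescript
// Illustrative only, not the shipped detectBlockedSleepPattern.
// Matches `sleep N` (N >= 2) at a command boundary, followed only by
// end-of-input or a sequential separator (&&, ||, ;, newline), never a pipe.
const BLOCKED_SLEEP = /(?:^|&&|\|\||;|\n)\s*sleep\s+(\d+)\s*(?=$|&&|\|\||;|\n)/;

function isBlockedSleep(command: string): boolean {
  const m = BLOCKED_SLEEP.exec(command);
  return m !== null && Number(m[1]) >= 2;
}
```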

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(test): resolve TS errors in monitor.test.ts mock types

Use Object.defineProperty for readonly ChildProcess.pid and proper
Readable type for stdout/stderr mocks to satisfy strict tsc builds.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): remove false notification promise + add early-abort guard

P1: Sleep interception guidance no longer promises "completion notification"
for is_background — that wiring doesn't exist yet (follow-up from #3642).

P2: Monitor.execute() now checks _signal.aborted before spawning, preventing
a race where cancellation during tool scheduling still launches a monitor.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(test): add getMonitorRegistry mock to useGeminiStream tests

The useGeminiStream hook now calls config.getMonitorRegistry() to wire
up monitor notification callbacks. The test mock config was missing this
method, causing 64 test failures with "config.getMonitorRegistry is not
a function".

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(test): add getMonitorRegistry mock to nonInteractiveCli tests

Same fix as useGeminiStream.test.tsx — the mock config needs
getMonitorRegistry to avoid "is not a function" errors (29 failures).

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address PR review — CORE_TOOLS, directory param, test fix

1. Add 'monitor' to PermissionManager.CORE_TOOLS so coreTools allowlist
   correctly gates the monitor tool (same as run_shell_command).

2. Add optional 'directory' parameter to MonitorTool with workspace
   validation, mirroring ShellTool's directory support for multi-root
   workspaces.

3. Fix sleep-interception E2E test: readToolLogs() doesn't expose
   toolResult, so the old assertion was dead code. Now verifies via
   the model's output text instead.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address MonitorTool review #4186888042

Addresses three [Critical] review comments on packages/core/src/tools/monitor.ts:

1. Partial-line buffer unbounded growth (processLines)
   MAX_LINE_LENGTH was only enforced after a newline, so a command emitting
   a long stream without newlines would grow buffer.value without bound and
   re-split the entire accumulated string on every chunk. Now, when the
   buffer has no newline and exceeds MAX_LINE_LENGTH, we force-emit a single
   truncated event through the throttled path and reset the buffer.

2. Missing type guard on params.command
   validateToolParamValues called params.command.trim() without a typeof
   check. Schema validation normally catches this, but SDK/direct callers
   could bypass it and hit an uncaught TypeError. Added typeof === 'string'
   guard, matching the pattern used for max_events / idle_timeout_ms.

3. Workspace check bypass via raw startsWith
   The directory validator used workspaceDirs.some(d => params.directory
   .startsWith(d)), which allowed prefix collisions (e.g. /tmp/project-evil
   against a /tmp/project workspace) and skipped canonicalisation / symlink
   resolution. Switched to WorkspaceContext.isPathWithinWorkspace, which
   already does fullyResolvedPath + segment-aware isPathWithinRoot matching
   and is the standard used elsewhere in the codebase.

Test coverage: added 6 unit tests covering non-string command guard,
non-absolute directory rejection, prefix-collision rejection, traversal
rejection, workspace acceptance, and partial-line cap behaviour
(including buffer reset). All 26 monitor.test.ts cases pass.

The same startsWith pattern also exists in ShellTool and is tracked as a
separate follow-up to keep this PR focused on Phase C scope.
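The prefix collision in fix 3 is easy to reproduce, and a segment-aware containment check closes it. The sketch below is illustrative; the real WorkspaceContext.isPathWithinWorkspace also canonicalises and resolves symlinks:

```typescript
import * as path from 'node:path';

// Raw startsWith treats a sibling directory as inside the workspace:
//   '/tmp/project-evil'.startsWith('/tmp/project')  // true -> bypass
//
// A segment-aware containment check (illustrative; the real
// WorkspaceContext.isPathWithinWorkspace also resolves symlinks):
function isWithin(root: string, p: string): boolean {
  const rel = path.relative(path.resolve(root), path.resolve(p));
  return (
    rel === '' ||
    (rel !== '..' && !rel.startsWith('..' + path.sep) && !path.isAbsolute(rel))
  );
}
```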

* fix(core): scope monitor always-allow permissions

Populate Monitor confirmation permissionRules using the same command-rule
extraction path as ShellTool, so ProceedAlways persists command-scoped
Bash(...) rules instead of a broad monitor-level allow. Also add unit
coverage for command-scoped rules, filtering already-allowed subcommands,
and extractor fallback behavior.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): decouple monitor permission scope from Bash rules

Remove pm.isCommandAllowed() from MonitorToolInvocation.getConfirmationDetails()
to prevent existing Bash(...) allow rules from shrinking the monitor confirmation
scope. Monitor is a long-running background process with a different risk profile
than one-shot shell execution and should maintain its own permission boundary.
Only AST-based read-only filtering is retained.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): unify monitor error/exit cleanup to prevent resource leaks

Extract a shared cleanup() helper called from both the `exit` and
`error` event handlers. Previously the `error` handler did not flush
buffers, clear buffer values, remove the abort listener, or log
dropped-line stats — causing potential memory leaks when `error` fires
without a subsequent `exit` (e.g. ENOENT for missing commands).

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): add user-skills-directory guard to monitor directory validation

Mirror ShellTool's getUserSkillsDirs() check in MonitorTool's
validateToolParamValues() to prevent monitor commands from running
inside user skills directories.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(core): add Monitor(...) permission namespace for monitor tool (#3726)

Introduce a dedicated Monitor(...) permission namespace so monitor and
shell tools have independent permission boundaries. Previously monitor
emitted Bash(...) rules, causing "Always Allow" to fail for future
monitor invocations while unintentionally granting run_shell_command.

Changes:
- rule-parser.ts: add Monitor alias, SHELL_TOOL_NAMES entry,
  CANONICAL_TO_RULE_DISPLAY, DISPLAY_NAME_TO_VERB
- permission-manager.ts: extract SHELL_LIKE_TOOLS set so evaluate(),
  evaluateSingle(), hasRelevantRules(), hasMatchingAskRule() handle
  both run_shell_command and monitor
- monitor.ts: emit Monitor(...) instead of Bash(...) in permissionRules
- Tests: parseRule, matchesRule, cross-tool isolation regression,
  buildPermissionRules, buildHumanReadableRuleLabel for Monitor

Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): decouple headless monitor lifetime from final result

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): stabilize stream-json monitor session shutdown

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): deny monitor in headless approval defaults

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): honor tool aliases in headless allow checks

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address opus review — sleep regex, monitor cap, non-interactive cleanup

- Fix sleep interception false positive for backgrounded sleep (`sleep 5 &
  echo done`). Remove bare `&` from separator character class so the
  background operator is not treated as a sequential separator.
- Add MAX_CONCURRENT_MONITORS (16) check in MonitorRegistry.register()
  and early rejection in MonitorTool.execute() to prevent unbounded
  process spawning.
- Widen monitorId from 8 to 16 hex chars to reduce birthday collision risk.
- Abort all running monitors in nonInteractiveCli.ts success-path finally
  so piped stdio refs don't keep the Node event loop alive after result
  emission in one-shot (--print) mode.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): abort monitors and background shells on /clear

Without this, long-running monitors from a previous session survive
/clear and continue pushing events into the new session's notification
queue. This enables cross-session prompt injection where a malicious
monitor persists across the user's escape hatch.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): abort monitors on stream-json session shutdown

Call monitorRegistry.abortAll() in both shutdown() and
drainAndShutdown() so detached monitor child processes don't survive
session termination.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(cli): use content event type in stream tests

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): isolate session cleanup on clear and shutdown

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): finalize session cleanup after drain

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): close remaining monitor review gaps

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): preserve shell cwd in virtual permission checks

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): normalize trailing background ampersands

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): align monitor permission and wrapper handling

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(core): make monitor CI assertions cross-platform

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): align monitor wrapper normalization

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): normalize wrapped monitor commands

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): harden monitor headless edge cases

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): preserve monitor spawn errors

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): harden monitor register cleanup

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): parse monitor wrapper script token

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address PR review comments for monitor tool

- Make Bash(...) permission rules cover monitor via toolMatchesRuleToolName,
  so deny rules like Bash(rm *) also block monitor({command: "rm ..."})
- Remove dead `normalizeRuleToolName` mock reference in config.test.ts
- Fix tool description to mention stdout/stderr instead of just stdout
- Export MAX_CONCURRENT_MONITORS from monitorRegistry and use it in
  monitor.ts instead of hardcoded 16
- Rename ambiguous MAX_LINE_LENGTH constants: PARTIAL_LINE_BUFFER_CAP
  (4096, monitor.ts) and EVENT_LINE_TRUNCATE (2000, monitorRegistry.ts)
- Fix schema description text: "Max 80 characters" → "Truncated to 80
  characters in display"
- Add .unref() to SIGTERM→SIGKILL escalation timer to prevent 200ms
  exit delay

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): resolve clear command typecheck issues

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): preserve background tasks across shutdown abort

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): close monitor review gaps

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address latest monitor review comments

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): handle monitors across session switches

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(core): cover aborted monitor startup

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): address remaining monitor PR review comments

Addresses four unresolved review threads on PR #3684:

* shell: trim top-level trailing comments before validating sleep
  separator so 'sleep 5 # wait' no longer bypasses
  detectBlockedSleepPattern.
* monitor: add sanitizeMonitorLine to strip C0/C1 control chars
  (except tab) and defang structural envelope tag names with U+200B
  before forwarding output to the model, blocking prompt-injection
  attempts hidden in monitored stdout/stderr.
* monitor: declare line buffers and throttledEmit before abortHandler
  to avoid TDZ on synchronous abort paths, and add
  flushPartialLineBuffers called from both abortHandler (before kill)
  and cleanup (natural exit/error) so partial-line data is no longer
  silently dropped on cancel.
* permissions: document that normalizePermissionContext relies on
  buildPermissionCheckContext to forward monitor's directory as cwd,
  and add regression tests proving relative-path Read(./...) allow
  and deny rules resolve against the monitor's explicit cwd.
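The sanitisation in the second bullet can be sketched as follows. This is illustrative only; the real sanitizeMonitorLine defangs the specific structural envelope tag names rather than all tag-like openings:

```typescript
// Illustrative sanitiser (not the real sanitizeMonitorLine): strip C0/C1
// control characters except tab, then defang tag-like openings by inserting
// a zero-width space (U+200B) after '<' so untrusted child-process output
// cannot feed the model a well-formed structural envelope tag.
function sanitizeLine(line: string): string {
  return line
    .replace(/[\u0000-\u0008\u000B-\u001F\u007F-\u009F]/g, '')
    .replace(/<(\/?)([A-Za-z])/g, '<$1\u200B$2');
}
```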

* fix(core): abort running monitors in MonitorRegistry.reset()

reset() previously only cleared idle timers and emptied the map without
aborting running monitors' AbortControllers. This could orphan child
processes when reset() was called without a prior abortAll(), e.g. via
useResumeCommand → resetBackgroundStateForSessionSwitch.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(core): harden monitor notification XML and displayText

- Extend escapeXml to escape " and ' as defense-in-depth: safe to reuse
  the helper in any future XML attribute context without re-auditing.
- Strip C0 (except tab) and C1 control characters from the displayText
  surface before interpolation, so untrusted child-process output cannot
  leak ANSI escapes / NUL bytes into the operator's terminal even if a
  direct caller of MonitorRegistry.emitEvent skips sanitization.

Adds unit tests for both hardening paths.
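The extended helper behaves roughly like this (a sketch of the described behaviour, not the exact source):

```typescript
// Escape all five XML-significant characters. Escaping " and ' as well makes
// the helper safe in attribute contexts, not only element text.
function escapeXml(s: string): string {
  return s
    .replace(/&/g, '&amp;')   // must run first to avoid double-escaping
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}
```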

* test(core): cover token-bucket throttling and commented-sleep bypass

- Add 4 unit tests for the monitor token-bucket throttle (burst=5,
  1 token/sec refill): burst cap, refill release, long-idle bucket cap,
  and whitespace lines not consuming budget. Uses vi.setSystemTime to
  exercise Date.now() without advancing pending setTimeouts.
- Add an E2E case that feeds 'sleep 5 # wait for db' through the shell
  tool to lock in trimTrailingShellComment behavior end-to-end; the
  unit-level coverage in shell.test.ts remains authoritative but the
  E2E anchor prevents a regression from silently passing unit tests.

* fix(core): address 3 remaining copilot review comments

1. shell.ts sleep interception: strip shell wrapper before detecting the
   blocked sleep pattern so `bash -c 'sleep 5'` / `sh -c ...` cannot
   route around the block. Mirrors every other sensitive check in
   shell.ts, which already normalizes through stripShellWrapper.

2. monitorRegistry.ts emitEvent auto-stop: settle the entry BEFORE
   aborting its controller so that any synchronous abort listener that
   flushes buffered output back through registry.emitEvent() (e.g. the
   Monitor tool's flushPartialLineBuffers) finds status !== 'running'
   and short-circuits instead of overshooting maxEvents and emitting a
   duplicate 'Max events reached' terminal notification.

3. monitorRegistry.ts truncateDescription: cap output at exactly
   MAX_DESCRIPTION_LENGTH by counting the ellipsis against the budget,
   instead of returning MAX_DESCRIPTION_LENGTH + 3 characters.

Each fix is covered by a new unit test.
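Fix 3 amounts to counting the ellipsis against the budget. A sketch, assuming MAX_DESCRIPTION_LENGTH is 80 (the constant name is from the commit; the value and body are illustrative):

```typescript
// Sketch: cap output at exactly MAX_DESCRIPTION_LENGTH by counting the
// ellipsis against the budget rather than appending it afterwards.
const MAX_DESCRIPTION_LENGTH = 80; // assumed value for illustration

function truncateDescription(s: string): string {
  if (s.length <= MAX_DESCRIPTION_LENGTH) return s;
  return s.slice(0, MAX_DESCRIPTION_LENGTH - 3) + '...';
}
```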

* fix(core): address review comments — sanitize, notify, kill logging, throttle observability

- Remove double normalize in buildPermissionCheckContext (PM is single source)
- Add {notify:false} to Config.shutdown() and abortTaskRegistries() abortAll
- Swap settle-before-abort in cancel() and resetIdleTimer() to prevent races
- Add stripDisplayControlChars to emitTerminalNotification
- Sanitize monitor description at entry creation via sanitizeMonitorLine
- Surface throttle-dropped line count in terminal notification
- Add .unref() to idle timer to allow clean process exit
- Add error handler + stdio:ignore to Windows taskkill spawn
- Log SIGTERM/SIGKILL kill failures via debugLogger.warn
- Attach early child error handler to cover spawn-to-register window
- Destroy child stdio on register failure to prevent handle leaks
- Improve stripShellWrapper to handle absolute paths, combined flags, env prefix
- Improve SHELL_TOOL_NAMES documentation and toolMatchesRuleToolName clarity

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): resolve monitor tool typecheck errors

- Cast child.stdout/stderr to a minimal { destroy?: () => void } shape so
  the optional destroy() call compiles and still works with test mocks.
- Initialize droppedLines: 0 in MonitorEntry test fixtures that predate
  the field becoming required.

* fix(monitor): add missing stdio option in taskkill test assertions (#3784)

* fix(core): address monitor review feedback

* fix(core): harden monitor command lifecycle

---------

Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2026-05-02 20:57:26 +08:00



An open-source AI agent that lives in your terminal.

中文 | Deutsch | français | 日本語 | Русский | Português (Brasil)

🎉 News

  • 2026-04-15: Qwen OAuth free tier has been discontinued. To continue using Qwen Code, switch to Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key. Run qwen auth to configure.

  • 2026-04-13: Qwen OAuth free tier policy update: daily quota adjusted to 100 requests/day (from 1,000).

  • 2026-04-02: Qwen3.6-Plus is now live! Get an API key from Alibaba Cloud ModelStudio to access it through the OpenAI-compatible API.

  • 2026-02-16: Qwen3.5-Plus is now live!

Why Qwen Code?

Qwen Code is an open-source AI agent for the terminal, optimized for Qwen series models. It helps you understand large codebases, automate tedious work, and ship faster.

  • Multi-protocol, flexible providers: use OpenAI / Anthropic / Gemini-compatible APIs, Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key.
  • Open-source, co-evolving: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
  • Agentic workflow, feature-rich: rich built-in tools (Skills, SubAgents) for a full agentic workflow and a Claude Code-like experience.
  • Terminal-first, IDE-friendly: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.

Installation

Linux / macOS

bash -c "$(curl -fsSL https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.sh)"

Windows (Run as Administrator)

Works in both Command Prompt and PowerShell:

powershell -Command "Invoke-WebRequest 'https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.bat' -OutFile (Join-Path $env:TEMP 'install-qwen.bat'); & (Join-Path $env:TEMP 'install-qwen.bat')"

Note: It's recommended to restart your terminal after installation to ensure environment variables take effect.

Manual Installation

Prerequisites

Make sure you have Node.js 20 or later installed. Download it from nodejs.org.

NPM

npm install -g @qwen-code/qwen-code@latest

Homebrew (macOS, Linux)

brew install qwen-code

Quick Start

# Start Qwen Code (interactive)
qwen

# Then, in the session:
/help
/auth

On first use, you'll be prompted to sign in. You can run /auth anytime to switch authentication methods.

Example prompts:

What does this project do?
Explain the codebase structure.
Help me refactor this function.
Generate unit tests for this module.

🦞 Use Qwen Code for Coding Tasks in Claw

Copy the prompt below and paste it into your agent:

Use Qwen Code as a coding agent. First, fetch and read this skill reference:
https://raw.githubusercontent.com/QwenLM/qwen-code/refs/heads/main/.qwen/skills/qwen-code-claw/SKILL.md
Then follow the instructions inside to install, authenticate, and use Qwen Code via acpx for all coding tasks.

Authentication

Qwen Code supports the following authentication methods:

  • API Key (recommended): use an API key from Alibaba Cloud Model Studio (Beijing / intl) or any supported provider (OpenAI, Anthropic, Google GenAI, and other compatible endpoints).
  • Coding Plan: subscribe to the Alibaba Cloud Coding Plan (Beijing / intl) for a fixed monthly fee with higher quotas.

⚠️ Qwen OAuth was discontinued on April 15, 2026. If you were previously using Qwen OAuth, please switch to one of the methods above. Run qwen and then /auth to reconfigure.

Use an API key to connect to Alibaba Cloud Model Studio or any supported provider. Supports multiple protocols:

  • OpenAI-compatible: Alibaba Cloud ModelStudio, ModelScope, OpenAI, OpenRouter, and other OpenAI-compatible providers
  • Anthropic: Claude models
  • Google GenAI: Gemini models

The recommended way to configure models and providers is by editing ~/.qwen/settings.json (create it if it doesn't exist). This file lets you define all available models, API keys, and default settings in one place.

Quick Setup in 3 Steps

Step 1: Create or edit ~/.qwen/settings.json

Here is a complete example:

{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "description": "Qwen3-Coder via Dashscope",
        "envKey": "DASHSCOPE_API_KEY"
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Step 2: Understand each field

modelProviders: Declares which models are available and how to connect to them. Keys like openai, anthropic, gemini represent the API protocol.
modelProviders[].id: The model ID sent to the API (e.g. qwen3.6-plus, gpt-4o).
modelProviders[].envKey: The name of the environment variable that holds your API key.
modelProviders[].baseUrl: The API endpoint URL (required for non-default endpoints).
env: A fallback place to store API keys (lowest priority; prefer .env files or export for sensitive keys).
security.auth.selectedType: The protocol to use on startup (openai, anthropic, gemini, vertex-ai).
model.name: The default model to use when Qwen Code starts.

Step 3: Start Qwen Code — your configuration takes effect automatically:

qwen

Use the /model command at any time to switch between all configured models.

More Examples
Coding Plan (Alibaba Cloud ModelStudio) — fixed monthly fee, higher quotas
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.6-plus from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY"
      },
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "glm-4.7",
        "name": "glm-4.7 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "glm-4.7 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "kimi-k2.5",
        "name": "kimi-k2.5 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "kimi-k2.5 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "BAILIAN_CODING_PLAN_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Subscribe to the Coding Plan and get your API key at Alibaba Cloud ModelStudio(Beijing) or Alibaba Cloud ModelStudio(intl).

Multiple providers (OpenAI + Anthropic + Gemini)
{
  "modelProviders": {
    "openai": [
      {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1"
      }
    ],
    "anthropic": [
      {
        "id": "claude-sonnet-4-20250514",
        "name": "Claude Sonnet 4",
        "envKey": "ANTHROPIC_API_KEY"
      }
    ],
    "gemini": [
      {
        "id": "gemini-2.5-pro",
        "name": "Gemini 2.5 Pro",
        "envKey": "GEMINI_API_KEY"
      }
    ]
  },
  "env": {
    "OPENAI_API_KEY": "sk-xxxxxxxxxxxxx",
    "ANTHROPIC_API_KEY": "sk-ant-xxxxxxxxxxxxx",
    "GEMINI_API_KEY": "AIzaxxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "gpt-4o"
  }
}
Enable thinking mode (for supported models like qwen3.5-plus)
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (thinking)",
        "envKey": "DASHSCOPE_API_KEY",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.5-plus"
  }
}

Tip: You can also set API keys via export in your shell or in .env files; these take higher priority than the env block in settings.json. See the authentication guide for full details.

Security note: Never commit API keys to version control. The ~/.qwen/settings.json file is in your home directory and should stay private.

Local Model Setup (Ollama / vLLM)

You can also run models locally — no API key or cloud account needed. This is not an authentication method; instead, configure your local model endpoint in ~/.qwen/settings.json using the modelProviders field.

Ollama setup
  1. Install Ollama from ollama.com
  2. Pull a model: ollama pull qwen3:32b
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3:32b",
        "name": "Qwen3 32B (Ollama)",
        "baseUrl": "http://localhost:11434/v1",
        "description": "Qwen3 32B running locally via Ollama"
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3:32b"
  }
}
vLLM setup
  1. Install vLLM: pip install vllm
  2. Start the server: vllm serve Qwen/Qwen3-32B
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "Qwen/Qwen3-32B",
        "name": "Qwen3 32B (vLLM)",
        "baseUrl": "http://localhost:8000/v1",
        "description": "Qwen3 32B running locally via vLLM"
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "Qwen/Qwen3-32B"
  }
}

Usage

Qwen Code is an open-source terminal agent that you can use in four primary ways:

  1. Interactive mode (terminal UI)
  2. Headless mode (scripts, CI)
  3. IDE integration (VS Code, Zed)
  4. SDKs (TypeScript, Python, Java)

Interactive mode

cd your-project/
qwen

Run qwen in your project folder to launch the interactive terminal UI. Use @ to reference local files (for example @src/main.ts).

Headless mode

cd your-project/
qwen -p "your question"

Use -p to run Qwen Code without the interactive UI—ideal for scripts, automation, and CI/CD. Learn more: Headless mode.
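Headless runs can also be driven from a script in any language. Below is a minimal Python sketch; it assumes the qwen binary is on PATH and is purely illustrative:

```python
# Sketch: invoking Qwen Code headlessly from a Python script.
# Assumes the `qwen` binary is installed and on PATH; illustrative only.
import shutil
import subprocess

def build_headless_command(prompt):
    """Assemble the argv for a non-interactive `qwen -p` run."""
    return ["qwen", "-p", prompt]

cmd = build_headless_command("Summarize recent changes in src/")
print(cmd)

# Only execute if qwen is actually installed.
if shutil.which("qwen"):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
```

Passing the prompt as a single argv element (rather than through a shell string) avoids quoting issues in CI pipelines.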

IDE integration

Use Qwen Code inside your editor: integrations are available for VS Code, Zed, and JetBrains IDEs.

SDKs

Build on top of Qwen Code with the available SDKs:

Python SDK example:

import asyncio

from qwen_code_sdk import is_sdk_result_message, query


async def main() -> None:
    result = query(
        "Summarize the repository layout.",
        {
            "cwd": "/path/to/project",
            "path_to_qwen_executable": "qwen",
        },
    )

    async for message in result:
        if is_sdk_result_message(message):
            print(message["result"])


asyncio.run(main())

Commands & Shortcuts

Session Commands

  • /help - Display available commands
  • /clear - Clear conversation history
  • /compress - Compress history to save tokens
  • /stats - Show current session information
  • /bug - Submit a bug report
  • /exit or /quit - Exit Qwen Code

Keyboard Shortcuts

  • Ctrl+C - Cancel current operation
  • Ctrl+D - Exit (on empty line)
  • Up/Down - Navigate command history

Learn more about Commands

Tip: In YOLO mode (--yolo), vision switching happens automatically without prompts when images are detected. Learn more about Approval Mode

Configuration

Qwen Code can be configured via settings.json, environment variables, and CLI flags.

  • ~/.qwen/settings.json (user, global): applies to all your Qwen Code sessions. Recommended for modelProviders and env.
  • .qwen/settings.json (project): applies only when running Qwen Code in this project; overrides user settings.

The most commonly used top-level fields in settings.json:

  • modelProviders: define available models per protocol (openai, anthropic, gemini, vertex-ai).
  • env: fallback environment variables (e.g. API keys); lower priority than shell export and .env files.
  • security.auth.selectedType: the protocol to use on startup (e.g. openai).
  • model.name: the default model to use when Qwen Code starts.

See the Authentication section above for complete settings.json examples, and the settings reference for all available options.
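The user/project override behavior can be pictured as a merge where project values win. This is a simplified sketch (a shallow merge; the actual loader may merge nested fields more granularly):

```python
# Sketch of project settings overriding user settings.
# Shallow merge for illustration; not Qwen Code's actual loader,
# which may deep-merge nested fields.

def merge_settings(user, project):
    """Merge two settings dicts; project-level values win."""
    merged = dict(user)
    merged.update(project)
    return merged

user_settings = {
    "model": {"name": "gpt-4o"},
    "security": {"auth": {"selectedType": "openai"}},
}
project_settings = {"model": {"name": "qwen3:32b"}}

merged = merge_settings(user_settings, project_settings)
print(merged["model"]["name"])  # → qwen3:32b
```

Here the project pins a local Ollama model while the user-level default and auth settings still apply elsewhere.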

Benchmark Results

Terminal-Bench Performance

Accuracy of the Qwen Code agent by model:

  • Qwen3-Coder-480A35: 37.5%
  • Qwen3-Coder-30BA3B: 31.3%

Ecosystem

Looking for a graphical interface?

  • AionUi: a modern GUI for command-line AI tools, including Qwen Code
  • Gemini CLI Desktop: a cross-platform desktop/web/mobile UI for Qwen Code

Troubleshooting

If you encounter issues, check the troubleshooting guide.

Common issues:

  • Qwen OAuth free tier was discontinued on 2026-04-15: Qwen OAuth is no longer available. Run /auth and switch to API Key or Coding Plan. See the Authentication section above for setup instructions.

To report a bug from within the CLI, run /bug and include a short title and repro steps.

Acknowledgments

This project is based on Google Gemini CLI. We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.