Shaojin Wen aac2e96ec3
feat(core): managed background shell pool with /tasks command (#3642)
* feat(core): managed background shell pool with /bashes command

Replace shell.ts's `&` fork-and-detach background path with a managed
process registry. Background shells now have observable lifecycle, captured
output, and explicit cancellation — matching the pattern used by background
subagents (#3076).

Phase B from #3634 (background task management roadmap).

What changes
- New `BackgroundShellRegistry` (services/backgroundShellRegistry.ts):
  per-process entry with status (running / completed / failed / cancelled),
  AbortController, output file path. State transitions are one-shot
  (terminal status sticks; late callbacks no-op). Mirrors the lifecycle
  shape of #3471's BackgroundTaskRegistry so the two can be unified later.
- `shell.ts` is_background path rewritten as `executeBackground`:
  - Spawns the unwrapped command (no '&', no pgrep envelope)
  - Streams stdout to `<projectDir>/tasks/<sessionId>/shell-<id>.output`
    (path layout aligns with the direction sketched in #3471 review)
  - Bridges the external abort signal into the entry's AbortController so
    a single source of truth governs cancellation
  - Returns immediately with id + output path; agent's turn isn't blocked
  - Settles the registry entry asynchronously when ShellExecutionService
    resolves: complete (clean exit) / fail (error) / cancel (aborted)
- Removes ~120 lines of dead bg-specific code from shell.ts:
  pgrep wrapping, '&' appending, Windows ampersand cleanup, Windows
  early-return path, bg PID parsing, tempFile cleanup
- New `/bashes` slash command: lists registered shells with id, status,
  runtime, command, output path. Empty state prints a friendly message.
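
A minimal sketch of the one-shot lifecycle shape described above, assuming an entry/registry split like the one outlined; the class and field names here are illustrative, not the actual `backgroundShellRegistry.ts` API:

```typescript
type ShellStatus = 'running' | 'completed' | 'failed' | 'cancelled';

interface BackgroundShellEntry {
  id: number;
  command: string;
  status: ShellStatus;
  outputFilePath: string;
  abortController: AbortController;
}

class BackgroundShellRegistrySketch {
  private entries = new Map<number, BackgroundShellEntry>();
  private nextId = 1;

  register(command: string, outputFilePath: string): BackgroundShellEntry {
    const entry: BackgroundShellEntry = {
      id: this.nextId++,
      command,
      status: 'running',
      outputFilePath,
      abortController: new AbortController(),
    };
    this.entries.set(entry.id, entry);
    return entry;
  }

  // One-shot transition: terminal status sticks, late callbacks no-op.
  settle(id: number, status: Exclude<ShellStatus, 'running'>): boolean {
    const entry = this.entries.get(id);
    if (!entry || entry.status !== 'running') return false;
    entry.status = status;
    return true;
  }

  list(): BackgroundShellEntry[] {
    return [...this.entries.values()];
  }
}
```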

What this PR doesn't do
- Footer pill / dialog integration — gated on #3488 landing
- task_stop / send_message integration — gated on #3471 landing
- Auto-backgrounding heuristics for long foreground bash — Phase D

Test plan
- 11 registry unit tests (state machine + idempotent terminal transitions)
- 4 background-path tests in shell.test.ts (spawn no-wrap + complete /
  fail / cancel settle paths)
- 2 /bashes command tests (empty + populated)
- Full core suite: 247 files / 6075 passed (existing tests unaffected)

* fix(core): address PR #3642 review feedback

Three [Critical] fixes from the auto review, plus naming alignment with Claude Code:

- shell.ts settle: non-zero exit code or termination signal now bucket into
  `failed` instead of `completed`. The previous `if (result.error) fail else
  complete()` would misreport `false` / failed `npm test` as success because
  ShellExecutionService surfaces ordinary command failures as a non-zero
  exitCode with `error: null`. Failure reason carries the exit code or signal
  so `/tasks` shows the real cause.

- ShellExecutionService.childProcessFallback: add `streamStdout` mode that
  emits each decoded chunk through the existing onOutputEvent path. The
  default (foreground) path continues to buffer + emit the cleaned final
  blob, so existing in-line shell calls are unaffected. executeBackground
  opts in via `{ streamStdout: true }`, which is what makes the captured
  output file actually useful for long-running processes (dev servers,
  watchers) — without it the file stayed empty until the process exited.

- shell.ts test fixture: cancel-settle test was using `signal: 'SIGTERM'`
  but `ShellExecutionResult.signal` is `number | null`. TS2322 broke the
  build; switched to `signal: null`. Added a test that explicitly covers
  the new "non-zero exit → failed" path so the bucketing change has
  regression coverage.

- shell.ts comment: explicitly document why background shells force
  `shouldUseNodePty=false` (no terminal, no human; node-pty would be dead
  weight for fire-and-forget commands).

- /bashes → /tasks (alias bashes), description "List and manage background
  tasks" — matches Claude Code's command name. Currently lists shells only;
  will surface other task kinds (subagents, monitor) as those registries
  land via #3471 / #3488.
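
The exit-bucketing rule from the first item above can be sketched as follows; the result shape here is a simplified stand-in for `ShellExecutionResult`, not the actual type:

```typescript
interface ShellResultSketch {
  error: Error | null;     // spawn-level failure only
  exitCode: number | null; // non-zero for ordinary command failure
  signal: number | null;   // set when terminated by a signal
}

// Non-zero exit or signal termination buckets into 'failed'; checking only
// `error` would misreport a failing `npm test` (exitCode 1, error null) as
// success. The reason string carries the real cause for /tasks display.
function bucketResult(result: ShellResultSketch): {
  status: 'completed' | 'failed';
  reason?: string;
} {
  if (result.error) {
    return { status: 'failed', reason: result.error.message };
  }
  if (result.signal !== null) {
    return { status: 'failed', reason: `terminated by signal ${result.signal}` };
  }
  if (result.exitCode !== null && result.exitCode !== 0) {
    return { status: 'failed', reason: `exit code ${result.exitCode}` };
  }
  return { status: 'completed' };
}
```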

* fix(core): address PR #3642 second-round review feedback

- shellExecutionService streaming: drop stdout/stderr buffer + outputChunks
  accumulation in streaming mode. Each decoded chunk goes straight to
  onOutputEvent and is GC-eligible immediately. Long-running background
  commands (dev servers, watchers) no longer accumulate unbounded memory
  proportional to total output. Buffered (foreground) mode is unchanged.

- shell.ts executeBackground: stripAnsi each chunk before writing to the
  output file. Dev servers / build tools spam color codes and cursor-move
  sequences that would render as garbage in the file the agent reads.

- bashesCommand: command description "List and manage" → "List background
  tasks" — current implementation only supports listing, cancellation
  follows when the unified task_stop tool from #3471 is wired in. Replace
  the hand-rolled formatRuntime helper with the shared formatDuration
  utility (uses hideTrailingZeros for parity with the previous output).

- backgroundShellRegistry: add a comment documenting the lack of an
  eviction policy as a known limitation. LRU / age-based / capped-size
  eviction (and on-disk output rotation) is left as a follow-up alongside
  the broader output-file lifecycle story.

* fix(core): address PR #3642 third-round review feedback

- shell.ts executeBackground: add 'error' listener on the output write
  stream. fs.createWriteStream surfaces write failures (disk full,
  permission, fs going away) as 'error' events; without a listener Node
  treats it as an uncaught exception and kills the entire CLI session.
  Log + drop is the sane default — the registry still settles via
  resultPromise so /tasks shows the right terminal status.

- shell.ts executeBackground: store the abort handler reference and
  removeEventListener in the settle callback. Background shells outlive
  the turn signal; the dangling listener was keeping `entryAc` (and
  transitively `outputStream`) reachable until the turn signal itself was
  GC'd, which for long sessions would never happen.

- shell.test.ts: extend the createWriteStream mock with an `on` stub so
  the new error-listener wiring doesn't crash the test suite.
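
The write-stream guard from the first item above, as a standalone sketch (the path and log message are illustrative, not the actual shell.ts wiring):

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

// Without an 'error' listener, a write failure (disk full, permissions,
// fs going away) surfaces as an uncaught exception and kills the process.
const outputPath = path.join(os.tmpdir(), 'shell-sketch.output');
const outputStream = fs.createWriteStream(outputPath);
outputStream.on('error', (err) => {
  // Log and drop: the registry still settles via the result promise,
  // so the terminal status stays accurate even if output capture fails.
  console.error(`background shell output write failed: ${err.message}`);
});
```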

* refactor(cli): drop /bashes alias and rename file to tasksCommand

Per follow-up review: the slash command should be exclusively /tasks.
Removes the `bashes` altName, renames `bashesCommand{,.test}.ts` →
`tasksCommand{,.test}.ts`, renames the exported binding `bashesCommand`
→ `tasksCommand`, and cleans up the remaining `/bashes` references in
backgroundShellRegistry.ts comments. No behavior change beyond the
alias removal.

* refactor(cli): finish tasksCommand rename — apply content changes

The previous commit (03c8503c8) only captured the file rename via
`git mv`; the export name change (`bashesCommand` → `tasksCommand`),
the removal of `altNames: ['bashes']`, the import update in
BuiltinCommandLoader, and the `/bashes` → `/tasks` comments in
backgroundShellRegistry.ts were unstaged when that commit landed.
Squash candidate before merge.

* fix(core): address PR #3642 fourth-round review feedback

Four reviewer concerns from @wenshao + @doudouOUC:

- [Critical] Config.shutdown() now also calls
  `backgroundShellRegistry.abortAll()`. Previously only the subagent
  registry was aborted, so a managed background shell could outlive the
  CLI process and orphan its child. Symmetric with how
  `BackgroundTaskRegistry.abortAll()` is wired in.

- [P1] shell.ts executeBackground strips a trailing `&` from the command
  before spawn. The managed path is itself the backgrounding mechanism;
  forwarding `node server.js &` verbatim made bash exit immediately while
  the real child outlived the wrapper, causing the registry to settle as
  `completed` while the shell was still running and chunked output to
  land on a closed stream. Strip + warn.

- [P2] Output file moves under `storage.getProjectTempDir()` (specifically
  `<projectTempDir>/background-shells/<sessionId>/shell-<id>.output`).
  `ReadFileTool` already auto-allows the project temp dir, so the LLM
  can `Read` the captured output without bouncing off a permission
  prompt — important because background-agent contexts can't surface
  interactive prompts.

- [P2] Background shells are no longer killed when the current turn's
  AbortSignal fires. Forwarding the turn signal into the entry's
  AbortController meant a Ctrl+C on the turn would also terminate
  intentionally backgrounded dev servers / watchers, contradicting the
  independent-lifecycle promise. Cancellation now flows only through
  `entryAc` (driven by future `task_stop` integration via #3471).

Tests:
- New `abortAll` registry tests cover running / mixed / empty cases.
- `runs background commands as managed pool entries` test stops asserting
  the wrapper-vs-entry signal identity since they're now structurally
  separate (no turn-to-entry forwarding).
- New `does not forward the turn signal into the background shell` test
  pins the new behavior.
- New `strips trailing & from the spawned command` test pins the strip.
- Removed the cancel-via-outer-signal settle test — that path no longer
  exists; cancellation is exercised end-to-end via the registry's own
  `cancel` and `abortAll` tests in `backgroundShellRegistry.test.ts`.
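
The signal decoupling pinned by those tests can be sketched as below; the function name is illustrative:

```typescript
// The background entry owns its own AbortController; the turn's signal is
// NOT forwarded, so a Ctrl+C on the turn leaves intentionally backgrounded
// dev servers / watchers running. Cancellation flows only through entryAc.
function createBackgroundEntryAc(_turnSignal: AbortSignal): AbortController {
  const entryAc = new AbortController();
  // Intentionally absent:
  // turnSignal.addEventListener('abort', () => entryAc.abort());
  return entryAc;
}
```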

* fix(core): tighten trailing & strip — narrow regex + ReDoS-safe

Two reviewer concerns on the same line of #3642 round 4:

- [Critical CodeQL] `\s*&+\s*$` is a polynomial-time regex on
  uncontrolled input (long all-`&` strings backtrack quadratically).
- [P2 doudouOUC] `&+` is too greedy: it also rewrites `npm run dev &&`
  into `npm run dev` (breaks logical AND syntax) and `echo foo \&` into
  `echo foo \` (eats the escaped literal). Only the bare bash background
  operator should be stripped.

Replace the regex with a small linear-time helper
`stripTrailingBackgroundAmp` that explicitly checks for the three
"don't touch" cases (`&&`, `\&`, no trailing `&`). Plain `endsWith` /
`slice` — no regex backtracking, and the intent reads off the page.
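
A sketch of the helper as described — linear time, plain `endsWith`/`slice`, with the three "don't touch" cases handled explicitly (the exact implementation in shell.ts may differ in detail):

```typescript
function stripTrailingBackgroundAmp(command: string): string {
  const trimmed = command.trimEnd();
  if (!trimmed.endsWith('&')) return command;   // no trailing & — untouched
  if (trimmed.endsWith('&&')) return command;   // logical AND — untouched
  if (trimmed.endsWith('\\&')) return command;  // escaped literal & — untouched
  // Bare bash background operator: drop it (the managed path backgrounds).
  return trimmed.slice(0, -1).trimEnd();
}
```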

Tests:
- Existing strip-trailing-`&` test still passes.
- New `does not strip a trailing &&` test pins the logical-AND case.
- New `does not strip an escaped trailing \\&` test pins the escape case.

* fix(core): keep binary-detection sniff in streaming mode

@doudouOUC noted that `streamStdout` shortcut returned before the
binary-sniff path, so a background command emitting binary bytes
(`cat /bin/ls`, image dump, etc.) would be text-decoded and appended
to the task output file unbounded.

Restructure handleOutput so the sniff-and-cutover logic runs in both
modes:

- Both modes accumulate up to MAX_SNIFF_SIZE for the binary check.
  The accumulator is bounded; once the threshold is reached, it stops
  growing in streaming mode (dropped on binary detection / left
  inert on text confirmation) and continues to accumulate in buffered
  mode (existing foreground behavior).
- Streaming mode emits 'binary_detected' as soon as `isBinary` trips
  so the consumer can stop writing the output file. Up to ~4KB of
  bytes may have been emitted as text chunks before detection — this
  is bounded and acceptable; the unbounded write is the pathology
  reviewers flagged.
- Streaming text mode still emits each decoded chunk immediately and
  does not accumulate stdout/stderr strings, so long-running text
  streams remain GC-friendly.
- Buffered (foreground) behavior is unchanged — the sniff accumulator
  is the same path the existing tests cover.
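
A simplified sketch of the streaming-side sniff-and-cutover described above (class and constant are illustrative stand-ins for the ShellExecutionService internals; chunks emitted during the sniff window mirror the bounded ~4KB-before-detection behavior):

```typescript
const MAX_SNIFF_SIZE = 4096; // assumed threshold

class SniffingStream {
  private sniffChunks: Buffer[] = [];
  private sniffedBytes = 0; // running sum — a genuine bound, unlike slice(0, 20)
  private decided = false;
  binary = false;

  // Returns decoded text to emit, or null once binary output is suppressed.
  handleChunk(data: Buffer, looksBinary: (b: Buffer) => boolean): string | null {
    if (!this.decided) {
      this.sniffChunks.push(data);
      this.sniffedBytes += data.length;
      if (this.sniffedBytes >= MAX_SNIFF_SIZE) {
        this.decided = true;
        this.binary = looksBinary(Buffer.concat(this.sniffChunks));
        // Drop the accumulator immediately: later chunks fall through the
        // streaming emit path without ever touching it (GC-friendly).
        this.sniffChunks = [];
      }
    }
    return this.binary ? null : data.toString('utf8');
  }
}
```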

Tests: 50 shellExecutionService + 11 backgroundShellRegistry + 57
shell.test.ts all pass; no regressions.

* fix(core): tighten streaming sniff bound + Windows rmSync flake

Two unrelated reds on the latest CI run:

1. [P1 doudouOUC] Streaming sniff buffer leaks on small chunks.
   The previous fix recomputed `sniffedBytes` from
   `Buffer.concat(outputChunks.slice(0, 20)).length` on every chunk —
   pinned to the first 20 chunks. If those total under MAX_SNIFF_SIZE
   (line-sized stdout, e.g. dev-server logs) the byte count never grew,
   the sniff branch stayed open forever, and `outputChunks` accumulated
   every later chunk — exactly the leak `streamStdout` was meant to
   prevent.

   Track sniffed bytes by running sum (`sniffedBytes += data.length`)
   so the bound is genuine. When sniff confirms text in streaming mode,
   drop the accumulator immediately so subsequent chunks fall through
   the streaming emit path without ever touching it.

2. file-exporters.test.ts afterEach `fs.rmSync` flaked on Windows
   (ENOTEMPTY: directory not empty). The exporter's underlying write
   stream hasn't always released its handle by the time `rmSync` runs.
   Pass `maxRetries: 5, retryDelay: 50` so the cleanup retries through
   the brief Windows handle-release window instead of failing the test
   on a CI quirk.

---------

Co-authored-by: wenshao <wenshao@U-K7F6PQY3-2157.local>
2026-04-28 11:06:50 +08:00


An open-source AI agent that lives in your terminal.

中文 | Deutsch | français | 日本語 | Русский | Português (Brasil)

🎉 News

  • 2026-04-15: Qwen OAuth free tier has been discontinued. To continue using Qwen Code, switch to Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key. Run qwen auth to configure.

  • 2026-04-13: Qwen OAuth free tier policy update: daily quota adjusted to 100 requests/day (from 1,000).

  • 2026-04-02: Qwen3.6-Plus is now live! Get an API key from Alibaba Cloud ModelStudio to access it through the OpenAI-compatible API.

  • 2026-02-16: Qwen3.5-Plus is now live!

Why Qwen Code?

Qwen Code is an open-source AI agent for the terminal, optimized for Qwen series models. It helps you understand large codebases, automate tedious work, and ship faster.

  • Multi-protocol, flexible providers: use OpenAI / Anthropic / Gemini-compatible APIs, Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key.
  • Open-source, co-evolving: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
  • Agentic workflow, feature-rich: rich built-in tools (Skills, SubAgents) for a full agentic workflow and a Claude Code-like experience.
  • Terminal-first, IDE-friendly: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.

Installation

Linux / macOS

bash -c "$(curl -fsSL https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.sh)"

Windows (Run as Administrator)

Works in both Command Prompt and PowerShell:

powershell -Command "Invoke-WebRequest 'https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.bat' -OutFile (Join-Path $env:TEMP 'install-qwen.bat'); & (Join-Path $env:TEMP 'install-qwen.bat')"

Note: It's recommended to restart your terminal after installation to ensure environment variables take effect.

Manual Installation

Prerequisites

Make sure you have Node.js 20 or later installed. Download it from nodejs.org.

NPM

npm install -g @qwen-code/qwen-code@latest

Homebrew (macOS, Linux)

brew install qwen-code

Quick Start

# Start Qwen Code (interactive)
qwen

# Then, in the session:
/help
/auth

On first use, you'll be prompted to sign in. You can run /auth anytime to switch authentication methods.

Example prompts:

What does this project do?
Explain the codebase structure.
Help me refactor this function.
Generate unit tests for this module.

🦞 Use Qwen Code for Coding Tasks in Claw

Copy the prompt below and paste it into your agent:

Use Qwen Code as a coding agent. First, fetch and read this skill reference:
https://raw.githubusercontent.com/QwenLM/qwen-code/refs/heads/main/.qwen/skills/qwen-code-claw/SKILL.md
Then follow the instructions inside to install, authenticate, and use Qwen Code via acpx for all coding tasks.

Authentication

Qwen Code supports the following authentication methods:

  • API Key (recommended): use an API key from Alibaba Cloud Model Studio (Beijing / intl) or any supported provider (OpenAI, Anthropic, Google GenAI, and other compatible endpoints).
  • Coding Plan: subscribe to the Alibaba Cloud Coding Plan (Beijing / intl) for a fixed monthly fee with higher quotas.

⚠️ Qwen OAuth was discontinued on April 15, 2026. If you were previously using Qwen OAuth, please switch to one of the methods above. Run qwen and then /auth to reconfigure.

Use an API key to connect to Alibaba Cloud Model Studio or any supported provider. Supports multiple protocols:

  • OpenAI-compatible: Alibaba Cloud ModelStudio, ModelScope, OpenAI, OpenRouter, and other OpenAI-compatible providers
  • Anthropic: Claude models
  • Google GenAI: Gemini models

The recommended way to configure models and providers is by editing ~/.qwen/settings.json (create it if it doesn't exist). This file lets you define all available models, API keys, and default settings in one place.

Quick Setup in 3 Steps

Step 1: Create or edit ~/.qwen/settings.json

Here is a complete example:

{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "description": "Qwen3-Coder via Dashscope",
        "envKey": "DASHSCOPE_API_KEY"
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Step 2: Understand each field

| Field | What it does |
| --- | --- |
| `modelProviders` | Declares which models are available and how to connect to them. Keys like `openai`, `anthropic`, `gemini` represent the API protocol. |
| `modelProviders[].id` | The model ID sent to the API (e.g. `qwen3.6-plus`, `gpt-4o`). |
| `modelProviders[].envKey` | The name of the environment variable that holds your API key. |
| `modelProviders[].baseUrl` | The API endpoint URL (required for non-default endpoints). |
| `env` | A fallback place to store API keys (lowest priority; prefer `.env` files or `export` for sensitive keys). |
| `security.auth.selectedType` | The protocol to use on startup (`openai`, `anthropic`, `gemini`, `vertex-ai`). |
| `model.name` | The default model to use when Qwen Code starts. |

Step 3: Start Qwen Code — your configuration takes effect automatically:

qwen

Use the /model command at any time to switch between all configured models.

More Examples
Coding Plan (Alibaba Cloud ModelStudio) — fixed monthly fee, higher quotas
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.6-plus from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY"
      },
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "glm-4.7",
        "name": "glm-4.7 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "glm-4.7 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "kimi-k2.5",
        "name": "kimi-k2.5 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "kimi-k2.5 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "BAILIAN_CODING_PLAN_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Subscribe to the Coding Plan and get your API key at Alibaba Cloud ModelStudio(Beijing) or Alibaba Cloud ModelStudio(intl).

Multiple providers (OpenAI + Anthropic + Gemini)
{
  "modelProviders": {
    "openai": [
      {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1"
      }
    ],
    "anthropic": [
      {
        "id": "claude-sonnet-4-20250514",
        "name": "Claude Sonnet 4",
        "envKey": "ANTHROPIC_API_KEY"
      }
    ],
    "gemini": [
      {
        "id": "gemini-2.5-pro",
        "name": "Gemini 2.5 Pro",
        "envKey": "GEMINI_API_KEY"
      }
    ]
  },
  "env": {
    "OPENAI_API_KEY": "sk-xxxxxxxxxxxxx",
    "ANTHROPIC_API_KEY": "sk-ant-xxxxxxxxxxxxx",
    "GEMINI_API_KEY": "AIzaxxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "gpt-4o"
  }
}
Enable thinking mode (for supported models like qwen3.5-plus)
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (thinking)",
        "envKey": "DASHSCOPE_API_KEY",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.5-plus"
  }
}

Tip: You can also set API keys via export in your shell or in .env files; these take higher priority than the env block in settings.json. See the authentication guide for full details.

Security note: Never commit API keys to version control. The ~/.qwen/settings.json file is in your home directory and should stay private.

Local Model Setup (Ollama / vLLM)

You can also run models locally — no API key or cloud account needed. This is not an authentication method; instead, configure your local model endpoint in ~/.qwen/settings.json using the modelProviders field.

Ollama setup
  1. Install Ollama from ollama.com
  2. Pull a model: ollama pull qwen3:32b
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3:32b",
        "name": "Qwen3 32B (Ollama)",
        "baseUrl": "http://localhost:11434/v1",
        "description": "Qwen3 32B running locally via Ollama"
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3:32b"
  }
}
vLLM setup
  1. Install vLLM: pip install vllm
  2. Start the server: vllm serve Qwen/Qwen3-32B
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "Qwen/Qwen3-32B",
        "name": "Qwen3 32B (vLLM)",
        "baseUrl": "http://localhost:8000/v1",
        "description": "Qwen3 32B running locally via vLLM"
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "Qwen/Qwen3-32B"
  }
}

Usage

As an open-source terminal agent, you can use Qwen Code in four primary ways:

  1. Interactive mode (terminal UI)
  2. Headless mode (scripts, CI)
  3. IDE integration (VS Code, Zed)
  4. SDKs (TypeScript, Python, Java)

Interactive mode

cd your-project/
qwen

Run qwen in your project folder to launch the interactive terminal UI. Use @ to reference local files (for example @src/main.ts).

Headless mode

cd your-project/
qwen -p "your question"

Use -p to run Qwen Code without the interactive UI—ideal for scripts, automation, and CI/CD. Learn more: Headless mode.

IDE integration

Use Qwen Code inside your editor (VS Code, Zed, and JetBrains IDEs):

SDKs

Build on top of Qwen Code with the available SDKs:

Python SDK example:

import asyncio

from qwen_code_sdk import is_sdk_result_message, query


async def main() -> None:
    result = query(
        "Summarize the repository layout.",
        {
            "cwd": "/path/to/project",
            "path_to_qwen_executable": "qwen",
        },
    )

    async for message in result:
        if is_sdk_result_message(message):
            print(message["result"])


asyncio.run(main())

Commands & Shortcuts

Session Commands

  • /help - Display available commands
  • /clear - Clear conversation history
  • /compress - Compress history to save tokens
  • /stats - Show current session information
  • /bug - Submit a bug report
  • /exit or /quit - Exit Qwen Code

Keyboard Shortcuts

  • Ctrl+C - Cancel current operation
  • Ctrl+D - Exit (on empty line)
  • Up/Down - Navigate command history

Learn more about Commands

Tip: In YOLO mode (--yolo), vision switching happens automatically without prompts when images are detected. Learn more about Approval Mode

Configuration

Qwen Code can be configured via settings.json, environment variables, and CLI flags.

| File | Scope | Description |
| --- | --- | --- |
| `~/.qwen/settings.json` | User (global) | Applies to all your Qwen Code sessions. Recommended for `modelProviders` and `env`. |
| `.qwen/settings.json` | Project | Applies only when running Qwen Code in this project. Overrides user settings. |

The most commonly used top-level fields in settings.json:

| Field | Description |
| --- | --- |
| `modelProviders` | Define available models per protocol (`openai`, `anthropic`, `gemini`, `vertex-ai`). |
| `env` | Fallback environment variables (e.g. API keys). Lower priority than shell `export` and `.env` files. |
| `security.auth.selectedType` | The protocol to use on startup (e.g. `openai`). |
| `model.name` | The default model to use when Qwen Code starts. |

See the Authentication section above for complete settings.json examples, and the settings reference for all available options.

Benchmark Results

Terminal-Bench Performance

| Agent | Model | Accuracy |
| --- | --- | --- |
| Qwen Code | Qwen3-Coder-480A35 | 37.5% |
| Qwen Code | Qwen3-Coder-30BA3B | 31.3% |

Ecosystem

Looking for a graphical interface?

  • AionUi A modern GUI for command-line AI tools including Qwen Code
  • Gemini CLI Desktop A cross-platform desktop/web/mobile UI for Qwen Code

Troubleshooting

If you encounter issues, check the troubleshooting guide.

Common issues:

  • Qwen OAuth free tier was discontinued on 2026-04-15: Qwen OAuth is no longer available. Run qwen and then /auth to switch to API Key or Coding Plan. See the Authentication section above for setup instructions.

To report a bug from within the CLI, run /bug and include a short title and repro steps.

Acknowledgments

This project is based on Google Gemini CLI. We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.