jinye ff63da2652
refactor(serve): extract createInMemoryChannel helper (#4156 A1) (#4160)
* refactor(serve): extract createInMemoryChannel helper from httpAcpBridge.test.ts (#4156 A1)

Sub-PR A1 of issue #4156 (Stage 1.5b Mode A daemon). Pure refactor with
zero behavior change.

Extracts the inline paired NDJSON channel construction (`new TransformStream`
× 2 + `ndJsonStream` × 2) that was duplicated throughout `httpAcpBridge.test.ts`
into a production helper `createInMemoryChannel()` at
`packages/cli/src/serve/inMemoryChannel.ts`. The helper is added to
`packages/cli/src/serve/index.ts`'s barrel export alongside the rest of
the serve module's public API.
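
For orientation, the helper's core is just a cross-wired pair of byte streams. A minimal sketch follows (illustrative only: the real implementation in `inMemoryChannel.ts` additionally wraps each endpoint with the SDK's `ndJsonStream`, which is elided here, and the `a`/`b` property names are not implied by the actual API):

// Illustrative sketch only; the actual helper wraps each endpoint with the
// SDK's ndJsonStream before returning it.
export function createInMemoryChannel() {
  // One TransformStream per direction: a -> b and b -> a.
  const ab = new TransformStream<Uint8Array, Uint8Array>();
  const ba = new TransformStream<Uint8Array, Uint8Array>();
  return {
    // Endpoint A writes into ab and reads what B wrote into ba.
    a: { writable: ab.writable, readable: ba.readable },
    // Endpoint B writes into ba and reads what A wrote into ab.
    b: { writable: ba.writable, readable: ab.readable },
  };
}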

The helper is intentionally bare — it returns only the stream pair, no
lifecycle / teardown surface. Two reasons:
  1. Consumer behavior diverges widely (stuck channel, crashable child
     simulation, no-op, real in-process termination); a one-size-fits-all
     `close()` would either pull test-fixture concerns into a production
     module or force a single shape on consumers that don't want it.
  2. The SDK's `ndJsonStream` outer wrapper does not reliably propagate
     close on `Stream.writable` to the opposite `Stream.readable`;
     consumers needing to simulate a child exit hold their own underlying
     `TransformStream` references and close those directly.

10 of 11 inline call sites in `httpAcpBridge.test.ts` migrate cleanly to
the new helper. The 11th (`makeChannel` at line 151) keeps the inline
4-line construction because its `kill()` closure needs the underlying
`ab` / `ba` writables to simulate child-process termination — a comment
above the function explains the asymmetry.

The helper is also a primitive for the future A2 PR's
`inProcessAcpBridge.ts`, which will use it to wrap an in-process
`QwenAgent` without spawning a `qwen --acp` child (see issue #4156 §3
decision 1 and §8).

Test plan:
- New `inMemoryChannel.test.ts`: 5 tests covering bidirectional round-trip,
  ordering preservation, and bidirectional direction isolation
- Existing `httpAcpBridge.test.ts`: 70 tests, identical count and behavior
  before vs after migration
- `vitest run packages/cli/src/serve/inMemoryChannel.test.ts
  packages/cli/src/serve/httpAcpBridge.test.ts` — 75/75 pass
- `tsc --noEmit -p packages/cli/tsconfig.json` — clean for changed files

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* refactor(serve): address Copilot review feedback on createInMemoryChannel

Two small follow-ups from #4160 review:

1. inMemoryChannel.test.ts:113,137 — handle the pending `reader.read()`
   that the isolation tests intentionally leave hanging when the timeout
   wins the race. `reader.releaseLock()` in `finally` rejects that
   pending read per Web Streams spec; without a rejection handler this
   could surface as an unhandled rejection / flaky test signal. Added a
   no-op rejection handler via the two-arg `.then(onResolve, onReject)`
   form so the cleanup-path rejection settles cleanly.

2. inMemoryChannel.ts:11 — the JSDoc said "two `TransformStream<...>`
   pairs" which reads ambiguously as "two pairs of TransformStream"
   (i.e., 4 streams). The implementation creates exactly two
   TransformStreams (one per direction). Reworded to "two
   `TransformStream<...>` instances (one per direction)" to disambiguate.

Tests still 5/5 pass.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* refactor(serve): expose abort() teardown primitive on createInMemoryChannel + route test through barrel

Two follow-ups from #4160 review:

1. Expose `abort(reason?)` on the helper return value (per @wenshao
   critical comment). Reasoning: the helper previously returned only the
   `Stream` pair, leaving consumers no way to tear the channel down.
   `ndJsonStream`'s outer wrapper does not reliably propagate `close()`,
   but `abort()` on the underlying byte-level `TransformStream` is
   forceful-by-spec — pending reads on both sides settle immediately so
   GC can reclaim. This unblocks the future Stage 1.5b in-process bridge
   (#4156, sub-PR A2) which needs teardown on daemon shutdown.

   The settlement shape is documented honestly in JSDoc: at the inner
   byte-level layer pending reads reject with the supplied reason; at
   the outer SDK-wrapped `Stream` the wrapper translates that into a
   clean `{done: true}` signal. Either way, pending operations no
   longer hang — that's the teardown invariant we care about (a minimal
   byte-level sketch follows the list below).

2. Route the test's import through the `serve/index.js` barrel rather
   than the source file (per @wenshao suggestion). Without a test that
   exercises the public API path, a typo or missing re-export in the
   barrel would go undetected in CI.
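
For reference, the byte-level settlement described in item 1 above can be reproduced with plain Web Streams. This snippet is illustrative only, not the helper's actual internals:

// Illustrative only: aborting the writable side of a TransformStream errors the
// whole stream, so a pending read() on its readable side rejects with the
// supplied reason instead of hanging.
const stream = new TransformStream<Uint8Array, Uint8Array>();
const pending = stream.readable.getReader().read(); // would otherwise never settle
await stream.writable.abort(new Error('channel torn down'));
await pending.catch((reason) => {
  console.log('pending read settled:', reason); // settles immediately, no hang
});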

Tests: 8/8 helper tests pass (5 existing + 3 new abort tests covering
teardown invariant + idempotency + no-reason variant). 70/70 existing
httpAcpBridge tests still pass.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)
2026-05-15 14:43:06 +08:00

An open-source AI agent that lives in your terminal.

中文 | Deutsch | français | 日本語 | Русский | Português (Brasil)

🎉 News

  • 2026-04-15: Qwen OAuth free tier has been discontinued. To continue using Qwen Code, switch to Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key. Run qwen auth to configure.

  • 2026-04-13: Qwen OAuth free tier policy update: daily quota adjusted to 100 requests/day (from 1,000).

  • 2026-04-02: Qwen3.6-Plus is now live! Get an API key from Alibaba Cloud ModelStudio to access it through the OpenAI-compatible API.

  • 2026-02-16: Qwen3.5-Plus is now live!

Why Qwen Code?

Qwen Code is an open-source AI agent for the terminal, optimized for Qwen series models. It helps you understand large codebases, automate tedious work, and ship faster.

  • Multi-protocol, flexible providers: use OpenAI / Anthropic / Gemini-compatible APIs, Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key.
  • Open-source, co-evolving: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
  • Agentic workflow, feature-rich: rich built-in tools (Skills, SubAgents) for a full agentic workflow and a Claude Code-like experience.
  • Terminal-first, IDE-friendly: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.

Installation

Linux / macOS

bash -c "$(curl -fsSL https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.sh)"

Windows (Run as Administrator)

Works in both Command Prompt and PowerShell:

powershell -Command "Invoke-WebRequest 'https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.bat' -OutFile (Join-Path $env:TEMP 'install-qwen.bat'); & (Join-Path $env:TEMP 'install-qwen.bat')"

Note: It's recommended to restart your terminal after installation to ensure environment variables take effect.

Manual Installation

Prerequisites

Make sure you have Node.js 22 or later installed. Download it from nodejs.org.

NPM

npm install -g @qwen-code/qwen-code@latest

Homebrew (macOS, Linux)

brew install qwen-code

Quick Start

# Start Qwen Code (interactive)
qwen

# Then, in the session:
/help
/auth

On first use, you'll be prompted to sign in. You can run /auth anytime to switch authentication methods.

Example prompts:

What does this project do?
Explain the codebase structure.
Help me refactor this function.
Generate unit tests for this module.

🦞 Use Qwen Code for Coding Tasks in Claw

Copy the prompt below and paste it into your agent:

Use Qwen Code as a coding agent. First, fetch and read this skill reference:
https://raw.githubusercontent.com/QwenLM/qwen-code/refs/heads/main/.qwen/skills/qwen-code-claw/SKILL.md
Then follow the instructions inside to install, authenticate, and use Qwen Code via acpx for all coding tasks.

Authentication

Qwen Code supports the following authentication methods:

  • API Key (recommended): use an API key from Alibaba Cloud Model Studio (Beijing / intl) or any supported provider (OpenAI, Anthropic, Google GenAI, and other compatible endpoints).
  • Coding Plan: subscribe to the Alibaba Cloud Coding Plan (Beijing / intl) for a fixed monthly fee with higher quotas.

⚠️ Qwen OAuth was discontinued on April 15, 2026. If you were previously using Qwen OAuth, please switch to one of the methods above. Run qwen and then /auth to reconfigure.

Use an API key to connect to Alibaba Cloud Model Studio or any supported provider. Multiple protocols are supported:

  • OpenAI-compatible: Alibaba Cloud ModelStudio, ModelScope, OpenAI, OpenRouter, and other OpenAI-compatible providers
  • Anthropic: Claude models
  • Google GenAI: Gemini models

The recommended way to configure models and providers is by editing ~/.qwen/settings.json (create it if it doesn't exist). This file lets you define all available models, API keys, and default settings in one place.

Quick Setup in 3 Steps

Step 1: Create or edit ~/.qwen/settings.json

Here is a complete example:

{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "description": "Qwen3-Coder via Dashscope",
        "envKey": "DASHSCOPE_API_KEY"
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Step 2: Understand each field

  • modelProviders: Declares which models are available and how to connect to them. Keys like openai, anthropic, gemini represent the API protocol.
  • modelProviders[].id: The model ID sent to the API (e.g. qwen3.6-plus, gpt-4o).
  • modelProviders[].envKey: The name of the environment variable that holds your API key.
  • modelProviders[].baseUrl: The API endpoint URL (required for non-default endpoints).
  • env: A fallback place to store API keys (lowest priority; prefer .env files or export for sensitive keys).
  • security.auth.selectedType: The protocol to use on startup (openai, anthropic, gemini, vertex-ai).
  • model.name: The default model to use when Qwen Code starts.
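
For readers who prefer types, here is a rough TypeScript-style sketch of the shape these fields describe. It is illustrative only, limited to the fields documented above; the optional markers are assumptions rather than the actual schema:

// Illustrative shape of ~/.qwen/settings.json, limited to the fields above.
interface ModelProviderEntry {
  id: string;            // model ID sent to the API, e.g. "qwen3.6-plus"
  name?: string;         // display name, as used in the examples
  baseUrl?: string;      // API endpoint URL (required for non-default endpoints)
  envKey?: string;       // name of the environment variable holding the API key
  description?: string;
}

interface QwenSettings {
  modelProviders?: Partial<
    Record<'openai' | 'anthropic' | 'gemini' | 'vertex-ai', ModelProviderEntry[]>
  >;
  env?: Record<string, string>;                    // fallback API keys (lowest priority)
  security?: { auth?: { selectedType?: string } }; // protocol to use on startup
  model?: { name?: string };                       // default model on startup
}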

Step 3: Start Qwen Code — your configuration takes effect automatically:

qwen

Use the /model command at any time to switch between all configured models.

More Examples
Coding Plan (Alibaba Cloud ModelStudio) — fixed monthly fee, higher quotas
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.6-plus",
        "name": "qwen3.6-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.6-plus from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY"
      },
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "glm-4.7",
        "name": "glm-4.7 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "glm-4.7 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      },
      {
        "id": "kimi-k2.5",
        "name": "kimi-k2.5 (Coding Plan)",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
        "description": "kimi-k2.5 with thinking enabled from ModelStudio Coding Plan",
        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "BAILIAN_CODING_PLAN_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.6-plus"
  }
}

Subscribe to the Coding Plan and get your API key at Alibaba Cloud ModelStudio (Beijing) or Alibaba Cloud ModelStudio (intl).

Multiple providers (OpenAI + Anthropic + Gemini)
{
  "modelProviders": {
    "openai": [
      {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1"
      }
    ],
    "anthropic": [
      {
        "id": "claude-sonnet-4-20250514",
        "name": "Claude Sonnet 4",
        "envKey": "ANTHROPIC_API_KEY"
      }
    ],
    "gemini": [
      {
        "id": "gemini-2.5-pro",
        "name": "Gemini 2.5 Pro",
        "envKey": "GEMINI_API_KEY"
      }
    ]
  },
  "env": {
    "OPENAI_API_KEY": "sk-xxxxxxxxxxxxx",
    "ANTHROPIC_API_KEY": "sk-ant-xxxxxxxxxxxxx",
    "GEMINI_API_KEY": "AIzaxxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "gpt-4o"
  }
}
Enable thinking mode (for supported models like qwen3.5-plus)
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3.5-plus",
        "name": "qwen3.5-plus (thinking)",
        "envKey": "DASHSCOPE_API_KEY",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "generationConfig": {
          "extra_body": {
            "enable_thinking": true
          }
        }
      }
    ]
  },
  "env": {
    "DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3.5-plus"
  }
}

Tip: You can also set API keys via export in your shell or .env files, which take higher priority than the env block in settings.json. See the authentication guide for full details.

Security note: Never commit API keys to version control. The ~/.qwen/settings.json file is in your home directory and should stay private.

Local Model Setup (Ollama / vLLM)

You can also run models locally — no API key or cloud account needed. This is not an authentication method; instead, configure your local model endpoint in ~/.qwen/settings.json using the modelProviders field.

Set generationConfig.contextWindowSize inside the matching provider entry and adjust it to the context length configured on your local server.

Ollama setup
  1. Install Ollama from ollama.com
  2. Pull a model: ollama pull qwen3:32b
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3:32b",
        "name": "Qwen3 32B (Ollama)",
        "baseUrl": "http://localhost:11434/v1",
        "description": "Qwen3 32B running locally via Ollama",
        "generationConfig": {
          "contextWindowSize": 131072
        }
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "qwen3:32b"
  }
}
vLLM setup
  1. Install vLLM: pip install vllm
  2. Start the server: vllm serve Qwen/Qwen3-32B
  3. Configure ~/.qwen/settings.json:
{
  "modelProviders": {
    "openai": [
      {
        "id": "Qwen/Qwen3-32B",
        "name": "Qwen3 32B (vLLM)",
        "baseUrl": "http://localhost:8000/v1",
        "description": "Qwen3 32B running locally via vLLM",
        "generationConfig": {
          "contextWindowSize": 131072
        }
      }
    ]
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "Qwen/Qwen3-32B"
  }
}

Usage

Qwen Code is an open-source terminal agent that you can use in five primary ways:

  1. Interactive mode (terminal UI)
  2. Headless mode (scripts, CI)
  3. IDE integration (VS Code, Zed)
  4. SDKs (TypeScript, Python, Java)
  5. Daemon mode — qwen serve exposes ACP over HTTP+SSE so multiple clients share one agent (experimental)

Interactive mode

cd your-project/
qwen

Run qwen in your project folder to launch the interactive terminal UI. Use @ to reference local files (for example @src/main.ts).

Headless mode

cd your-project/
qwen -p "your question"

Use -p to run Qwen Code without the interactive UI—ideal for scripts, automation, and CI/CD. Learn more: Headless mode.
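
Headless mode also composes well with scripts in other languages. Here is a minimal TypeScript (Node) sketch that assumes only what is documented above, namely that qwen -p prints its answer to stdout:

// Minimal Node/TypeScript sketch (ES module): run one headless prompt and
// capture its output. Relies only on the documented `qwen -p "<prompt>"` form.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

const { stdout } = await run('qwen', ['-p', 'Summarize the repository layout.'], {
  cwd: '/path/to/project', // run inside the project you want Qwen Code to inspect
});
console.log(stdout.trim());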

IDE integration

Use Qwen Code inside your editor (VS Code, Zed, and JetBrains IDEs).

Daemon mode (qwen serve, experimental)

cd your-project/
qwen serve
# → qwen serve listening on http://127.0.0.1:4170 (mode=http-bridge)

Run Qwen Code as a local HTTP daemon so IDE plugins, web UIs, CI scripts, and custom CLIs all share one agent session over HTTP+SSE, instead of each spawning its own subprocess. Loopback binds have no auth by default (set QWEN_SERVER_TOKEN to enable bearer auth even on loopback); remote binds (--hostname 0.0.0.0) require a token, and the daemon refuses to boot without one.
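
As a rough illustration of the bearer-auth behaviour described above, a client might look like the sketch below. The '/acp' path is a placeholder (the daemon's real routes are not documented here), so treat this as a sketch rather than the daemon's API:

// Hypothetical client sketch for a local `qwen serve` daemon with bearer auth.
// The '/acp' path is a placeholder, not a documented route.
const token = process.env['QWEN_SERVER_TOKEN'];

const res = await fetch('http://127.0.0.1:4170/acp', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // When QWEN_SERVER_TOKEN is set, the daemon expects bearer auth.
    ...(token ? { Authorization: `Bearer ${token}` } : {}),
  },
  body: JSON.stringify({ /* ACP request payload would go here */ }),
});
console.log(res.status);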

SDKs

Build on top of Qwen Code with the available SDKs:

Python SDK example:

import asyncio

from qwen_code_sdk import is_sdk_result_message, query


async def main() -> None:
    result = query(
        "Summarize the repository layout.",
        {
            "cwd": "/path/to/project",
            "path_to_qwen_executable": "qwen",
        },
    )

    async for message in result:
        if is_sdk_result_message(message):
            print(message["result"])


asyncio.run(main())
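
The TypeScript SDK is expected to feel similar; the package name, function signature, and message fields below are assumptions that simply mirror the Python example, not the SDK's documented API:

// Hypothetical TypeScript counterpart of the Python example above.
// Package name, option names, and message fields are assumptions.
import { query } from '@qwen-code/sdk';

for await (const message of query('Summarize the repository layout.', {
  cwd: '/path/to/project',
  pathToQwenExecutable: 'qwen',
})) {
  // Assumed result-message shape, mirroring is_sdk_result_message in Python.
  if (message.type === 'result') {
    console.log(message.result);
  }
}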

Commands & Shortcuts

Session Commands

  • /help - Display available commands
  • /clear - Clear conversation history
  • /compress - Compress history to save tokens
  • /stats - Show current session information
  • /bug - Submit a bug report
  • /exit or /quit - Exit Qwen Code

Keyboard Shortcuts

  • Ctrl+C - Cancel current operation
  • Ctrl+D - Exit (on empty line)
  • Up/Down - Navigate command history

Learn more about Commands

Tip: In YOLO mode (--yolo), vision switching happens automatically without prompts when images are detected. Learn more about Approval Mode

Configuration

Qwen Code can be configured via settings.json, environment variables, and CLI flags.

  • ~/.qwen/settings.json (user, global): Applies to all your Qwen Code sessions. Recommended for modelProviders and env.
  • .qwen/settings.json (project): Applies only when running Qwen Code in this project. Overrides user settings.

The most commonly used top-level fields in settings.json:

  • modelProviders: Define available models per protocol (openai, anthropic, gemini, vertex-ai).
  • env: Fallback environment variables (e.g. API keys). Lower priority than shell export and .env files.
  • security.auth.selectedType: The protocol to use on startup (e.g. openai).
  • model.name: The default model to use when Qwen Code starts.

See the Authentication section above for complete settings.json examples, and the settings reference for all available options.

Benchmark Results

Terminal-Bench Performance

  • Qwen Code + Qwen3-Coder-480A35: 37.5% accuracy
  • Qwen Code + Qwen3-Coder-30BA3B: 31.3% accuracy

Ecosystem

Looking for a graphical interface?

  • AionUi: A modern GUI for command-line AI tools including Qwen Code
  • Gemini CLI Desktop: A cross-platform desktop/web/mobile UI for Qwen Code

Troubleshooting

If you encounter issues, check the troubleshooting guide.

Common issues:

  • Qwen OAuth free tier was discontinued on 2026-04-15: Qwen OAuth is no longer available. Run qwen and then /auth to switch to API Key or Coding Plan. See the Authentication section above for setup instructions.

To report a bug from within the CLI, run /bug and include a short title and repro steps.

Connect with Us

Acknowledgments

This project is based on Google Gemini CLI. We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.