An open-source AI agent that lives in your terminal.
中文 | Deutsch | français | 日本語 | Русский | Português (Brasil)
🎉 News
- 2026-04-15: Qwen OAuth free tier has been discontinued. To continue using Qwen Code, switch to Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key. Run `qwen auth` to configure.
- 2026-04-13: Qwen OAuth free tier policy update: daily quota adjusted to 100 requests/day (from 1,000).
- 2026-04-02: Qwen3.6-Plus is now live! Get an API key from Alibaba Cloud ModelStudio to access it through the OpenAI-compatible API.
- 2026-02-16: Qwen3.5-Plus is now live!
Why Qwen Code?
Qwen Code is an open-source AI agent for the terminal, optimized for Qwen series models. It helps you understand large codebases, automate tedious work, and ship faster.
- Multi-protocol, flexible providers: use OpenAI / Anthropic / Gemini-compatible APIs, Alibaba Cloud Coding Plan, OpenRouter, Fireworks AI, or bring your own API key.
- Open-source, co-evolving: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
- Agentic workflow, feature-rich: rich built-in tools (Skills, SubAgents) for a full agentic workflow and a Claude Code-like experience.
- Terminal-first, IDE-friendly: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.
Installation
Quick Install (Recommended)
Linux / macOS
bash -c "$(curl -fsSL https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.sh)"
Windows (Run as Administrator)
Works in both Command Prompt and PowerShell:
powershell -Command "Invoke-WebRequest 'https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.bat' -OutFile (Join-Path $env:TEMP 'install-qwen.bat'); & (Join-Path $env:TEMP 'install-qwen.bat')"
Note: It's recommended to restart your terminal after installation to ensure environment variables take effect.
Manual Installation
Prerequisites
Make sure you have Node.js 20 or later installed. Download it from nodejs.org.
NPM
npm install -g @qwen-code/qwen-code@latest
Homebrew (macOS, Linux)
brew install qwen-code
Quick Start
# Start Qwen Code (interactive)
qwen
# Then, in the session:
/help
/auth
On first use, you'll be prompted to sign in. You can run /auth anytime to switch authentication methods.
Example prompts:
What does this project do?
Explain the codebase structure.
Help me refactor this function.
Generate unit tests for this module.
🦞 Use Qwen Code for Coding Tasks in Claw
Copy the prompt below and paste it into your agent:
Use Qwen Code as a coding agent. First, fetch and read this skill reference:
https://raw.githubusercontent.com/QwenLM/qwen-code/refs/heads/main/.qwen/skills/qwen-code-claw/SKILL.md
Then follow the instructions inside to install, authenticate, and use Qwen Code via acpx for all coding tasks.
Authentication
Qwen Code supports the following authentication methods:
- API Key (recommended): use an API key from Alibaba Cloud Model Studio (Beijing / intl) or any supported provider (OpenAI, Anthropic, Google GenAI, and other compatible endpoints).
- Coding Plan: subscribe to the Alibaba Cloud Coding Plan (Beijing / intl) for a fixed monthly fee with higher quotas.
⚠️ Qwen OAuth was discontinued on April 15, 2026. If you were previously using Qwen OAuth, please switch to one of the methods above. Run `qwen` and then `/auth` to reconfigure.
API Key (recommended)
Use an API key to connect to Alibaba Cloud Model Studio or any supported provider. Supports multiple protocols:
- OpenAI-compatible: Alibaba Cloud ModelStudio, ModelScope, OpenAI, OpenRouter, and other OpenAI-compatible providers
- Anthropic: Claude models
- Google GenAI: Gemini models
The recommended way to configure models and providers is by editing ~/.qwen/settings.json (create it if it doesn't exist). This file lets you define all available models, API keys, and default settings in one place.
Quick Setup in 3 Steps
Step 1: Create or edit ~/.qwen/settings.json
Here is a complete example:
{
"modelProviders": {
"openai": [
{
"id": "qwen3.6-plus",
"name": "qwen3.6-plus",
"baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"description": "Qwen3-Coder via Dashscope",
"envKey": "DASHSCOPE_API_KEY"
}
]
},
"env": {
"DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "qwen3.6-plus"
}
}
Step 2: Understand each field
| Field | What it does |
|---|---|
| `modelProviders` | Declares which models are available and how to connect to them. Keys like `openai`, `anthropic`, `gemini` represent the API protocol. |
| `modelProviders[].id` | The model ID sent to the API (e.g. `qwen3.6-plus`, `gpt-4o`). |
| `modelProviders[].envKey` | The name of the environment variable that holds your API key. |
| `modelProviders[].baseUrl` | The API endpoint URL (required for non-default endpoints). |
| `env` | A fallback place to store API keys (lowest priority; prefer `.env` files or shell `export` for sensitive keys). |
| `security.auth.selectedType` | The protocol to use on startup (`openai`, `anthropic`, `gemini`, `vertex-ai`). |
| `model.name` | The default model to use when Qwen Code starts. |
Step 3: Start Qwen Code — your configuration takes effect automatically:
qwen
Use the /model command at any time to switch between all configured models.
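For the curious, the precedence among key sources can be made concrete. Below is a minimal TypeScript sketch; the `ProviderEntry` shape and the simplified `.env` parser are illustrative assumptions, not Qwen Code's actual loader, but the order it encodes (shell environment first, then `.env`, then the `env` block in `settings.json`) matches the table above.

```typescript
import { readFileSync, existsSync } from "node:fs";

// Illustrative shape of one entry in modelProviders (not the real internal type).
interface ProviderEntry {
  id: string;
  name: string;
  baseUrl?: string;
  envKey?: string;
}

// Parse a .env file into key/value pairs (very simplified: KEY=value lines only).
function loadDotEnv(path = ".env"): Record<string, string> {
  if (!existsSync(path)) return {};
  return Object.fromEntries(
    readFileSync(path, "utf8")
      .split("\n")
      .filter((line) => line.includes("=") && !line.startsWith("#"))
      .map((line) => {
        const i = line.indexOf("=");
        return [line.slice(0, i).trim(), line.slice(i + 1).trim()];
      }),
  );
}

// Resolve the API key named by envKey, honoring the documented precedence:
// shell environment > .env file > the `env` block in settings.json.
function resolveApiKey(
  entry: ProviderEntry,
  settingsEnv: Record<string, string>,
): string | undefined {
  if (!entry.envKey) return undefined; // e.g. a local Ollama entry needs no key
  return (
    process.env[entry.envKey] ??
    loadDotEnv()[entry.envKey] ??
    settingsEnv[entry.envKey]
  );
}
```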
More Examples
Coding Plan (Alibaba Cloud ModelStudio) — fixed monthly fee, higher quotas
{
"modelProviders": {
"openai": [
{
"id": "qwen3.6-plus",
"name": "qwen3.6-plus (Coding Plan)",
"baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
"description": "qwen3.6-plus from ModelStudio Coding Plan",
"envKey": "BAILIAN_CODING_PLAN_API_KEY"
},
{
"id": "qwen3.5-plus",
"name": "qwen3.5-plus (Coding Plan)",
"baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
"description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
"envKey": "BAILIAN_CODING_PLAN_API_KEY",
"generationConfig": {
"extra_body": {
"enable_thinking": true
}
}
},
{
"id": "glm-4.7",
"name": "glm-4.7 (Coding Plan)",
"baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
"description": "glm-4.7 with thinking enabled from ModelStudio Coding Plan",
"envKey": "BAILIAN_CODING_PLAN_API_KEY",
"generationConfig": {
"extra_body": {
"enable_thinking": true
}
}
},
{
"id": "kimi-k2.5",
"name": "kimi-k2.5 (Coding Plan)",
"baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
"description": "kimi-k2.5 with thinking enabled from ModelStudio Coding Plan",
"envKey": "BAILIAN_CODING_PLAN_API_KEY",
"generationConfig": {
"extra_body": {
"enable_thinking": true
}
}
}
]
},
"env": {
"BAILIAN_CODING_PLAN_API_KEY": "sk-xxxxxxxxxxxxx"
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "qwen3.6-plus"
}
}
Subscribe to the Coding Plan and get your API key at Alibaba Cloud ModelStudio (Beijing) or Alibaba Cloud ModelStudio (intl).
Multiple providers (OpenAI + Anthropic + Gemini)
{
"modelProviders": {
"openai": [
{
"id": "gpt-4o",
"name": "GPT-4o",
"envKey": "OPENAI_API_KEY",
"baseUrl": "https://api.openai.com/v1"
}
],
"anthropic": [
{
"id": "claude-sonnet-4-20250514",
"name": "Claude Sonnet 4",
"envKey": "ANTHROPIC_API_KEY"
}
],
"gemini": [
{
"id": "gemini-2.5-pro",
"name": "Gemini 2.5 Pro",
"envKey": "GEMINI_API_KEY"
}
]
},
"env": {
"OPENAI_API_KEY": "sk-xxxxxxxxxxxxx",
"ANTHROPIC_API_KEY": "sk-ant-xxxxxxxxxxxxx",
"GEMINI_API_KEY": "AIzaxxxxxxxxxxxxx"
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "gpt-4o"
}
}
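At startup, `security.auth.selectedType` selects the protocol bucket and `model.name` selects the model within it. The toy TypeScript lookup below illustrates that relationship; the `findStartupModel` helper and its matching rule (display name or raw id) are illustrative assumptions, not the actual resolution code.

```typescript
interface ModelEntry {
  id: string;
  name: string;
  envKey?: string;
  baseUrl?: string;
}
type ModelProviders = Record<string, ModelEntry[]>;

// Find the startup model: look in the bucket named by selectedType,
// matching either the display name or the raw model id.
function findStartupModel(
  providers: ModelProviders,
  selectedType: string,
  modelName: string,
): ModelEntry | undefined {
  return providers[selectedType]?.find(
    (m) => m.name === modelName || m.id === modelName,
  );
}

const entry = findStartupModel(
  {
    openai: [{ id: "gpt-4o", name: "GPT-4o", envKey: "OPENAI_API_KEY" }],
    anthropic: [
      { id: "claude-sonnet-4-20250514", name: "Claude Sonnet 4", envKey: "ANTHROPIC_API_KEY" },
    ],
  },
  "openai",
  "gpt-4o",
);
console.log(entry?.id); // "gpt-4o"
```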
Enable thinking mode (for supported models like qwen3.5-plus)
{
"modelProviders": {
"openai": [
{
"id": "qwen3.5-plus",
"name": "qwen3.5-plus (thinking)",
"envKey": "DASHSCOPE_API_KEY",
"baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"generationConfig": {
"extra_body": {
"enable_thinking": true
}
}
}
]
},
"env": {
"DASHSCOPE_API_KEY": "sk-xxxxxxxxxxxxx"
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "qwen3.5-plus"
}
}
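The `generationConfig.extra_body` object is, in effect, extra JSON merged into the request body sent to the OpenAI-compatible endpoint. Here is a rough TypeScript sketch of what the resulting HTTP call looks like; the payload shape follows the standard chat-completions protocol, and the exact merge behavior inside Qwen Code is assumed rather than shown.

```typescript
// Illustrative sketch of an OpenAI-compatible request carrying an extra_body
// field (assumes DASHSCOPE_API_KEY is set in the environment).
const baseUrl = "https://dashscope.aliyuncs.com/compatible-mode/v1";

async function main(): Promise<void> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DASHSCOPE_API_KEY}`,
    },
    body: JSON.stringify({
      model: "qwen3.5-plus",
      messages: [{ role: "user", content: "Explain this repo's build setup." }],
      enable_thinking: true, // merged in verbatim from generationConfig.extra_body
    }),
  });
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content);
}

main().catch(console.error);
```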
Tip: You can also set API keys via `export` in your shell or `.env` files, which take higher priority than `settings.json` → `env`. See the authentication guide for full details.
Security note: Never commit API keys to version control. The `~/.qwen/settings.json` file is in your home directory and should stay private.
Local Model Setup (Ollama / vLLM)
You can also run models locally — no API key or cloud account needed. This is not an authentication method; instead, configure your local model endpoint in ~/.qwen/settings.json using the modelProviders field.
Ollama setup
- Install Ollama from ollama.com
- Pull a model: `ollama pull qwen3:32b`
- Configure `~/.qwen/settings.json`:
{
"modelProviders": {
"openai": [
{
"id": "qwen3:32b",
"name": "Qwen3 32B (Ollama)",
"baseUrl": "http://localhost:11434/v1",
"description": "Qwen3 32B running locally via Ollama"
}
]
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "qwen3:32b"
}
}
vLLM setup
- Install vLLM: `pip install vllm`
- Start the server: `vllm serve Qwen/Qwen3-32B`
- Configure `~/.qwen/settings.json`:
{
"modelProviders": {
"openai": [
{
"id": "Qwen/Qwen3-32B",
"name": "Qwen3 32B (vLLM)",
"baseUrl": "http://localhost:8000/v1",
"description": "Qwen3 32B running locally via vLLM"
}
]
},
"security": {
"auth": {
"selectedType": "openai"
}
},
"model": {
"name": "Qwen/Qwen3-32B"
}
}
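Before pointing Qwen Code at a local endpoint, it can save time to verify the server is reachable. Both Ollama and vLLM expose the OpenAI-compatible `GET /v1/models` listing, so a quick probe like this sketch (using the default ports from the configs above) confirms the `baseUrl` works:

```typescript
// Reachability check for a local OpenAI-compatible server (Ollama or vLLM).
async function checkEndpoint(baseUrl: string): Promise<void> {
  try {
    const res = await fetch(`${baseUrl}/models`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = await res.json();
    const ids = body.data.map((m: { id: string }) => m.id);
    console.log(`${baseUrl} is up; available models:`, ids);
  } catch (err) {
    console.error(`${baseUrl} unreachable - is the server running?`, err);
  }
}

async function main(): Promise<void> {
  await checkEndpoint("http://localhost:11434/v1"); // Ollama default port
  await checkEndpoint("http://localhost:8000/v1"); // vLLM default port
}

main();
```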
Usage
You can use Qwen Code in four primary ways:
- Interactive mode (terminal UI)
- Headless mode (scripts, CI)
- IDE integration (VS Code, Zed)
- TypeScript SDK
Interactive mode
cd your-project/
qwen
Run qwen in your project folder to launch the interactive terminal UI. Use @ to reference local files (for example @src/main.ts).
Headless mode
cd your-project/
qwen -p "your question"
Use -p to run Qwen Code without the interactive UI—ideal for scripts, automation, and CI/CD. Learn more: Headless mode.
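Since headless mode is just a CLI invocation, it composes cleanly with scripts. Here is a minimal Node/TypeScript sketch; the prompt and project path are placeholders:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// One-shot, non-interactive Qwen Code query, e.g. inside a CI step.
async function askQwen(prompt: string, cwd: string): Promise<string> {
  const { stdout } = await run("qwen", ["-p", prompt], { cwd });
  return stdout.trim();
}

askQwen("Summarize the uncommitted changes in this repo.", "./your-project")
  .then((answer) => console.log(answer))
  .catch(console.error);
```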
IDE integration
Use Qwen Code inside your editor: VS Code, Zed, and JetBrains IDEs are supported.
TypeScript SDK
Build on top of Qwen Code with the TypeScript SDK.
Commands & Shortcuts
Session Commands
- `/help` - Display available commands
- `/clear` - Clear conversation history
- `/compress` - Compress history to save tokens
- `/stats` - Show current session information
- `/bug` - Submit a bug report
- `/exit` or `/quit` - Exit Qwen Code
Keyboard Shortcuts
- `Ctrl+C` - Cancel current operation
- `Ctrl+D` - Exit (on empty line)
- `Up`/`Down` - Navigate command history
Learn more about Commands
Tip: In YOLO mode (`--yolo`), vision switching happens automatically without prompts when images are detected. Learn more about Approval Mode
Configuration
Qwen Code can be configured via settings.json, environment variables, and CLI flags.
| File | Scope | Description |
|---|---|---|
| `~/.qwen/settings.json` | User (global) | Applies to all your Qwen Code sessions. Recommended for `modelProviders` and `env`. |
| `.qwen/settings.json` | Project | Applies only when running Qwen Code in this project. Overrides user settings. |
The most commonly used top-level fields in settings.json:
| Field | Description |
|---|---|
| `modelProviders` | Define available models per protocol (`openai`, `anthropic`, `gemini`, `vertex-ai`). |
| `env` | Fallback environment variables (e.g. API keys). Lower priority than shell `export` and `.env` files. |
| `security.auth.selectedType` | The protocol to use on startup (e.g. `openai`). |
| `model.name` | The default model to use when Qwen Code starts. |
See the Authentication section above for complete `settings.json` examples, and the settings reference for all available options.
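The project-over-user override behaves like a recursive merge in which project values win. The sketch below is one plausible way to model it, not Qwen Code's actual settings loader:

```typescript
type Settings = { [key: string]: unknown };

// Recursively merge project settings over user settings: project values win,
// and nested objects are merged key by key rather than replaced wholesale.
function mergeSettings(user: Settings, project: Settings): Settings {
  const out: Settings = { ...user };
  for (const [key, value] of Object.entries(project)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      out[key] = mergeSettings(existing as Settings, value as Settings);
    } else {
      out[key] = value;
    }
  }
  return out;
}

// Example: the project pins a different default model than the user config.
const merged = mergeSettings(
  {
    model: { name: "qwen3.6-plus" },
    security: { auth: { selectedType: "openai" } },
  },
  { model: { name: "qwen3.5-plus" } },
);
console.log(merged);
// => model.name from the project config wins; security.* from the user config survives.
```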
Benchmark Results
Terminal-Bench Performance
| Agent | Model | Accuracy |
|---|---|---|
| Qwen Code | Qwen3-Coder-480A35 | 37.5% |
| Qwen Code | Qwen3-Coder-30BA3B | 31.3% |
Ecosystem
Looking for a graphical interface?
- AionUi: A modern GUI for command-line AI tools, including Qwen Code
- Gemini CLI Desktop: A cross-platform desktop/web/mobile UI for Qwen Code
Troubleshooting
If you encounter issues, check the troubleshooting guide.
Common issues:
- Qwen OAuth free tier was discontinued on 2026-04-15: Qwen OAuth is no longer available. Run `qwen` → `/auth` and switch to API Key or Coding Plan. See the Authentication section above for setup instructions.
To report a bug from within the CLI, run /bug and include a short title and repro steps.
Connect with Us
- Discord: https://discord.gg/RN7tqZCeDK
- Dingtalk: https://qr.dingtalk.com/action/joingroup?code=v1,k1,+FX6Gf/ZDlTahTIRi8AEQhIaBlqykA0j+eBKKdhLeAE=&_dt_no_comment=1&origin=1
Acknowledgments
This project is based on Google Gemini CLI. We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.
