feat: add bugfix workflow, test-engineer agent, and debugging skills

- Add test-engineer agent for bug reproduction and verification
- Add /qc:bugfix command for structured bugfix workflow
- Add e2e-testing skill covering headless/interactive modes, MCP testing
- Add structured-debugging skill for hypothesis-driven debugging
- Simplify AGENTS.md to focus on essential commands and conventions
- Add terminal-capture scenario for bugfix workflow testing
- Add .qwen folder to ESLint ignore list

Known limitations: The /qc:bugfix workflow and e2e-testing skill
are experimental and may be unstable or consume significant tokens.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
tanzhenxin 2026-04-04 18:30:09 +08:00
parent 3bce84d5da
commit dc833d9d94
11 changed files with 826 additions and 265 deletions


@@ -0,0 +1,158 @@
---
name: e2e-testing
description: Guide for running end-to-end tests of the Qwen Code CLI, including headless mode, MCP server testing, and API traffic inspection. Use this skill whenever you need to verify CLI behavior with real model calls, reproduce user-reported bugs end-to-end, test MCP tool integrations, or inspect raw API request/response payloads. Trigger on mentions of E2E testing, headless testing, MCP tool testing, or reproducing issues.
---
# E2E Testing Guide
How to run the Qwen Code CLI end-to-end — from building the bundle to inspecting
raw API traffic. Use when unit tests aren't enough and you need to verify behavior
through the full pipeline (model API → tool validation → tool execution).
## Which binary to use
- **Reproducing bugs**: use the globally installed `qwen` command — this matches
what the user ran when they filed the issue.
- **Verifying fixes**: build first (`npm run build && npm run bundle`), then run
`node dist/cli.js` — this tests your local changes.
## Headless Mode
Run the CLI non-interactively with JSON output (`<qwen>` = `qwen` or
`node dist/cli.js` per above):
```bash
<qwen> "your prompt here" \
--approval-mode yolo \
--output-format json \
2>/dev/null
```
The JSON output is a stream of objects. Key types:
- `type: "system"` — init: `tools`, `mcp_servers`, `model`, `permission_mode`
- `type: "assistant"` — model output: `content[].type` is `text`, `tool_use`, or `thinking`
- `type: "user"` — tool results: `content[].type` is `tool_result` with `is_error`
- `type: "result"` — final output with `result` text and `usage` stats
Pipe through `jq` to filter the verbose stream, e.g. extract tool-result errors:
`... 2>/dev/null | jq 'select(.type=="user") | .message.content[] | select(.is_error)'`
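When `jq` isn't convenient, the same filtering can be scripted in Node. A minimal sketch — the sample lines stand in for saved CLI output, with field shapes following the event types listed above:

```javascript
// Post-process a headless run without jq (sketch).
// The sample lines below stand in for real CLI output.
const lines = [
  '{"type":"system","model":"coder-model","tools":["read_file"]}',
  '{"type":"assistant","message":{"content":[{"type":"tool_use","name":"read_file"}]}}',
  '{"type":"user","message":{"content":[{"type":"tool_result","is_error":true,"content":"ENOENT"}]}}',
  '{"type":"result","result":"done","usage":{"input_tokens":10}}',
];
const events = lines.map((l) => JSON.parse(l));

// Tool results that failed, and the final answer:
const errors = events
  .filter((e) => e.type === 'user')
  .flatMap((e) => e.message.content)
  .filter((c) => c.is_error);
const final = events.find((e) => e.type === 'result');
console.log(errors.length, final.result); // → 1 done
```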
## Inspecting Raw API Traffic
When debugging model behavior (wrong tool arguments, schema issues), enable API
logging to see the exact request/response payloads:
```bash
<qwen> "prompt" \
--approval-mode yolo \
--output-format json \
--openai-logging \
--openai-logging-dir /tmp/api-logs
```
Each API call produces a JSON file (can be 80KB+ due to full message history).
The bulk is in `request.messages` (conversation history). Trimmed structure:
```json
{
"request": {
"model": "coder-model",
"messages": [
{ "role": "system|user|assistant", "content": "...", "tool_calls?": [...] }
],
"tools": [
{
"type": "function",
"function": {
"name": "tool_name",
"description": "...",
"parameters": { ... } // schema sent to the model
}
}
]
},
"response": {
"choices": [
{
"message": {
"role": "assistant",
"content": "...", // text response (may be null)
"tool_calls": [
{
"id": "call_...",
"function": {
"name": "tool_name",
"arguments": "..." // raw JSON string from the model
}
}
]
}
}
]
}
}
```
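A short Node sketch for digging through one of these log files. The `log` object below stands in for `JSON.parse(fs.readFileSync(logFile, 'utf8'))` and mirrors the trimmed structure above:

```javascript
// Inspect a captured API log (sketch). `log` is a hand-written sample
// mirroring the trimmed structure documented above.
const log = {
  request: {
    model: 'coder-model',
    messages: [{ role: 'user', content: 'prompt' }],
    tools: [
      { type: 'function', function: { name: 'read_file', description: '...', parameters: {} } },
    ],
  },
  response: {
    choices: [
      {
        message: {
          role: 'assistant',
          content: null,
          tool_calls: [
            { id: 'call_1', function: { name: 'read_file', arguments: '{"path":"/tmp/x"}' } },
          ],
        },
      },
    ],
  },
};

// Which tool schemas were sent, and what the model actually called:
const sent = log.request.tools.map((t) => t.function.name);
const calls = (log.response.choices[0].message.tool_calls ?? []).map((c) => ({
  name: c.function.name,
  args: JSON.parse(c.function.arguments), // arguments arrive as a raw JSON string
}));
console.log(sent, calls);
```

Comparing `parameters` (what the model was told) against the parsed `args` (what it sent back) is usually the fastest way to spot schema mismatches.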
## Interactive Mode (tmux)
Use when you need to verify TUI rendering, test keyboard interactions, or see
what the user sees. Headless mode is simpler when you only need structured output.
### Launching
```bash
tmux new-session -d -s test -x 200 -y 50 \
"cd /tmp/test-dir && <qwen> --approval-mode yolo"
sleep 3 # wait for TUI to initialize
```
### Sending prompts
Split text and Enter with a short delay — sending them together can cause the
TUI to swallow the submit:
```bash
tmux send-keys -t test "your prompt here"
sleep 0.5
tmux send-keys -t test Enter
```
### Waiting for completion
Poll for the input prompt to reappear instead of sleeping for a fixed time:
```bash
for i in $(seq 1 60); do
sleep 2
tmux capture-pane -t test -p | grep -q "Type your message" && break
done
```
### Capturing output
```bash
tmux capture-pane -t test -p -S -100 # -S -100 = 100 lines of scrollback
```
### Limitations
- **Key combos**: `tmux send-keys` cannot reliably send all key combinations.
`C-?`, `C-Shift-*`, and function keys with modifiers are unsupported or
unreliable. For these, use the `InteractiveSession` harness in
`integration-tests/interactive/` or test manually.
- **Visual artifacts**: `capture-pane` captures the final rendered frame, not
intermediate states. Flicker, tearing, or brief blank frames cannot be
detected this way.
### Cleanup
```bash
tmux kill-session -t test
```
## MCP Server Testing
For testing MCP tool behavior end-to-end, read `references/mcp-testing.md`. It
covers the setup gotchas (config location, git repo requirement) and includes
a reusable zero-dependency test server template in `scripts/mcp-test-server.js`.


@@ -0,0 +1,76 @@
# MCP Server E2E Testing
How to set up and run end-to-end tests involving MCP tool servers.
## Where MCP Config Goes
MCP servers are configured in `.qwen/settings.json` under `mcpServers`. This is
the **only** location that works for E2E testing.
Common mistakes that waste time:
- `.mcp.json` — Claude Code convention, not Qwen Code
- `settings.local.json` — the JSON schema validation rejects `mcpServers` here
- `--mcp-config` CLI flag — does not exist
## Setup
The CLI needs a git repo to load project settings. Create a temp directory:
```bash
mkdir -p /tmp/test-dir && cd /tmp/test-dir && git init -q
mkdir -p .qwen
cat > .qwen/settings.json << 'EOF'
{
"mcpServers": {
"my-server": {
"command": "node",
"args": ["/tmp/my-mcp-server.js"],
"trust": true
}
}
}
EOF
```
Run from that directory:
```bash
cd /tmp/test-dir && <qwen> "prompt" \
--approval-mode yolo --output-format json
```
## Writing Test Servers
Use `scripts/mcp-test-server.js` as a template. It's a zero-dependency
JSON-RPC server over stdin/stdout — no npm install needed.
To create a server with custom tools, copy the template and edit the
`TOOL_DEFINITIONS` array and the `handleToolCall` function. Each tool definition
follows the MCP `inputSchema` format (standard JSON Schema).
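For example, adding a hypothetical `add` tool means one entry in `TOOL_DEFINITIONS` plus one `case` in `handleToolCall` — a sketch trimmed to just those two edits:

```javascript
// Sketch: the two edits needed to add a hypothetical `add` tool
// to the template's TOOL_DEFINITIONS and handleToolCall.
const TOOL_DEFINITIONS = [
  {
    name: 'add',
    description: 'Adds two numbers.',
    inputSchema: {
      type: 'object',
      properties: {
        a: { type: 'number', description: 'First addend' },
        b: { type: 'number', description: 'Second addend' },
      },
      required: ['a', 'b'],
    },
  },
];

function handleToolCall(name, args) {
  switch (name) {
    case 'add':
      return String(args.a + args.b);
    default:
      return null; // unknown tool
  }
}

console.log(handleToolCall('add', { a: 2, b: 3 })); // prints "5"
```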
### Sanity-checking the server
Test the server without the CLI by piping JSON-RPC directly:
```bash
node /tmp/my-mcp-server.js << 'EOF'
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}
EOF
```
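A healthy server prints one response line per request (the `initialized` notification gets none). With the unmodified template, the output looks roughly like this (`inputSchema` elided):

```json
{"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"tools":{}},"serverInfo":{"name":"test-server","version":"1.0.0"}}}
{"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"echo","description":"Echoes back the provided arguments as JSON.","inputSchema":{...}}]}}
```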
## Verifying the Server Loaded
Check the `type: "system"` init message in JSON output:
```json
"mcp_servers": [{"name": "my-server", "status": "connected"}]
```
If `mcp_servers` is empty:
- You're not running from the directory containing `.qwen/settings.json`
- The directory is not a git repo (`git init` missing)
- The server command/path is wrong (check stderr with `2>&1`)
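To script this check over a saved run, the same filter in Node (sketch; the sample lines stand in for real CLI output):

```javascript
// Pull the MCP server status out of a saved headless run (sketch).
// Sample lines stand in for real output; shapes per the e2e-testing guide.
const lines = [
  '{"type":"system","mcp_servers":[{"name":"my-server","status":"connected"}],"model":"coder-model"}',
  '{"type":"result","result":"done"}',
];
const init = lines.map((l) => JSON.parse(l)).find((e) => e.type === 'system');
for (const s of init.mcp_servers) console.log(s.name, s.status); // → my-server connected
```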


@@ -0,0 +1,114 @@
#!/usr/bin/env node
/**
* Zero-dependency MCP test server template.
* Speaks JSON-RPC over stdin/stdout; no npm install needed.
*
* Usage:
* 1. Edit TOOL_DEFINITIONS to define your tools
* 2. Edit handleToolCall() to implement tool behavior
* 3. Configure in .qwen/settings.json and run via the CLI
*
* Sanity check without the CLI:
* printf '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}\n' | node mcp-test-server.js
*/
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin, terminal: false });
// ---------------------------------------------------------------------------
// Configure your tools here
// ---------------------------------------------------------------------------
const SERVER_NAME = 'test-server';
const SERVER_VERSION = '1.0.0';
const TOOL_DEFINITIONS = [
{
name: 'echo',
description: 'Echoes back the provided arguments as JSON.',
inputSchema: {
type: 'object',
properties: {
message: { type: 'string', description: 'Message to echo' },
},
required: ['message'],
},
},
// Add more tools here
];
function handleToolCall(name, args) {
switch (name) {
case 'echo':
return `Echo: ${JSON.stringify(args)}`;
// Add more cases here
default:
return null; // returning null signals unknown tool
}
}
// ---------------------------------------------------------------------------
// MCP protocol handling — no need to edit below this line
// ---------------------------------------------------------------------------
function send(msg) {
process.stdout.write(JSON.stringify(msg) + '\n');
}
rl.on('line', (line) => {
let req;
try {
req = JSON.parse(line.trim());
} catch {
return;
}
if (req.method === 'initialize') {
send({
jsonrpc: '2.0',
id: req.id,
result: {
protocolVersion: '2024-11-05',
capabilities: { tools: {} },
serverInfo: { name: SERVER_NAME, version: SERVER_VERSION },
},
});
} else if (req.method === 'notifications/initialized') {
// no response needed
} else if (req.method === 'tools/list') {
send({
jsonrpc: '2.0',
id: req.id,
result: { tools: TOOL_DEFINITIONS },
});
} else if (req.method === 'tools/call') {
const toolName = req.params?.name;
const args = req.params?.arguments || {};
const result = handleToolCall(toolName, args);
if (result === null) {
send({
jsonrpc: '2.0',
id: req.id,
result: {
content: [{ type: 'text', text: `Unknown tool: ${toolName}` }],
isError: true,
},
});
} else {
send({
jsonrpc: '2.0',
id: req.id,
result: {
content: [{ type: 'text', text: String(result) }],
},
});
}
} else if (req.id) {
send({
jsonrpc: '2.0',
id: req.id,
error: { code: -32601, message: 'Method not found' },
});
}
});