* feat(core): implement fork subagent for context sharing
- Make subagent_type optional in AgentTool
- Add forkSubagent.ts to build identical tool result prefixes
- Run fork processes in the background to preserve UX
* fix(core): fix test failures related to root execution and optional subagent_type
- Skip pathReader and edit tool permission tests when running as root
- Fix agent.test.ts to correctly mock execute call with extraHistory
- Remove unused imports in forkSubagent.ts
* fix(core): fix fork subagent bugs and add CacheSafeParams integration
Bug fixes:
- Fix AgentParams.subagent_type type: string -> string? (match schema)
- Fix undefined agentType passed to hook system (fallback to subagentConfig.name)
- Fix hook continuation missing extraHistory parameter
- Fix functionResponse missing id field (match coreToolScheduler pattern)
- Fix consecutive user messages in Gemini API (ensure history ends with model)
- Fix duplicate task_prompt when directive already in extraHistory
- Fix FORK_AGENT.systemPrompt empty string causing createChat to throw
- Fix redundant dynamic import of forkSubagent.js (merge into single import)
- Fix non-fork agent returning empty string on execution failure
- Fix misleading fork child rule referencing non-existent system prompt config
- Fix functionResponse.response key from {result:} to {output:} for consistency
CacheSafeParams integration:
- Retrieve parent's generationConfig via getCacheSafeParams() for cache sharing
- Add generationConfigOverride to CreateChatOptions and AgentHeadless.execute()
- Add toolsOverride to AgentHeadless.execute() for parent tool declarations
- Fork API requests now share byte-identical prefix with parent (DashScope cache hits)
- Graceful degradation when CacheSafeParams unavailable (first turn)
Docs:
- Add Fork Subagent section to sub-agents.md user manual
- Add fork-subagent-design.md design document
* fix(core): apply subagent tool exclusion to forked agents
Fork children were inheriting parent's cached tool declarations directly,
bypassing prepareTools() filtering and gaining access to AgentTool and
cron tools. Extract EXCLUDED_TOOLS_FOR_SUBAGENTS as a shared constant
and apply it to forkToolsOverride.
* fix(core): skip env history whenever extraHistory is provided
Previously gated on generationConfigOverride, which meant the no-cache
fallback path (CacheSafeParams unavailable) still ran getInitialChatHistory
and duplicated env bootstrap messages already present in the parent's
history. Gate on extraHistory instead so both fork paths skip env init.
* fix(core): use explicit skipEnvHistory flag for fork env handling
The previous fix gated env-init skipping on the presence of extraHistory,
but agent-interactive (arena) also passes extraHistory — its chatHistory is
env-stripped by stripStartupContext() and DOES need fresh env init for the
child's working directory. Skipping env there broke the interactive path.
Replace the implicit gate with an explicit skipEnvHistory option that only
fork sets (when extraHistory is present, since fork's history comes from
getHistory(true) and already contains env).
* fix(core): defend skipEnvHistory gate against empty extraHistory
Edge case: when the parent's rawHistory ends with a user message and has
length 1, extraHistory becomes []. The previous gate (extraHistory !==
undefined) would set skipEnvHistory: true, leaving the fork with neither
env bootstrap nor parent history. Check length > 0 so empty arrays fall
through to the normal env-init path.
* fix(core): apply skipEnvHistory to stop-hook retry execute
The second subagent.execute() call in the SubagentStop retry loop was
missing skipEnvHistory, so on retry the fork's env context would be
duplicated — same bug as the initial tanzhenxin report, just on a less
common code path.
Subagents
Subagents are specialized AI assistants that handle specific types of tasks within Qwen Code. They allow you to delegate focused work to AI agents that are configured with task-specific prompts, tools, and behaviors.
What are Subagents?
Subagents are independent AI assistants that:
- Specialize in specific tasks - Each Subagent is configured with a focused system prompt for particular types of work
- Have separate context - They maintain their own conversation history, separate from your main chat
- Use controlled tools - You can configure which tools each Subagent has access to
- Work autonomously - Once given a task, they work independently until completion or failure
- Provide detailed feedback - You can see their progress, tool usage, and execution statistics in real-time
Fork Subagent (Implicit Fork)
In addition to named subagents, Qwen Code supports implicit forking — when the AI omits the subagent_type parameter, it triggers a fork that inherits the parent's full conversation context.
How Fork Differs from Named Subagents
| | Named Subagent | Fork Subagent |
|---|---|---|
| Context | Starts fresh, no parent history | Inherits parent's full conversation history |
| System prompt | Uses its own configured prompt | Uses parent's exact system prompt (for cache sharing) |
| Execution | Blocks the parent until done | Runs in background, parent continues immediately |
| Use case | Specialized tasks (testing, docs) | Parallel tasks that need the current context |
When Fork is Used
The AI automatically uses fork when it needs to:
- Run multiple research tasks in parallel (e.g., "investigate module A, B, and C")
- Perform background work while continuing the main conversation
- Delegate tasks that require understanding of the current conversation context
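As a rough sketch of the trigger mechanism (types and field names here are illustrative, based on the changelog above, not the actual implementation), the fork is simply the absence of `subagent_type` in the agent tool call:

```typescript
// Illustrative shape of the agent tool parameters.
interface AgentParams {
  subagent_type?: string; // optional — omitted to trigger an implicit fork
  task_prompt: string;
}

// A named subagent call targets a specific agent and blocks until done.
const namedCall: AgentParams = {
  subagent_type: "testing-expert",
  task_prompt: "Write unit tests for the auth module",
};

// A fork call omits subagent_type: it inherits the parent's context
// and runs in the background.
const forkCall: AgentParams = {
  task_prompt: "Investigate module A and summarize its public API",
};

function isFork(params: AgentParams): boolean {
  return params.subagent_type === undefined;
}
```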
Prompt Cache Sharing
All forks share the parent's exact API request prefix (system prompt, tools, conversation history), enabling DashScope prompt cache hits. When 3 forks run in parallel, the shared prefix is cached once and reused — saving 80%+ token costs compared to independent subagents.
Recursive Fork Prevention
Fork children cannot create further forks. This is enforced at runtime — if a fork attempts to spawn another fork, it receives an error instructing it to execute tasks directly.
Current Limitations
- No result feedback: Fork results are reflected in the UI progress display but are not automatically fed back into the main conversation. The parent AI sees a placeholder message and cannot act on the fork's output.
- No worktree isolation: Forks share the parent's working directory. Concurrent file modifications from multiple forks may conflict.
Key Benefits
- Task Specialization: Create agents optimized for specific workflows (testing, documentation, refactoring, etc.)
- Context Isolation: Keep specialized work separate from your main conversation
- Context Inheritance: Fork subagents inherit the full conversation for context-heavy parallel tasks
- Prompt Cache Sharing: Fork subagents share the parent's cache prefix, reducing token costs
- Reusability: Save and reuse agent configurations across projects and sessions
- Controlled Access: Limit which tools each agent can use for security and focus
- Progress Visibility: Monitor agent execution with real-time progress updates
How Subagents Work
- Configuration: You create Subagent configurations that define their behavior, tools, and system prompts
- Delegation: The main AI can automatically delegate tasks to appropriate Subagents — or implicitly fork when no specific subagent type is needed
- Execution: Subagents work independently, using their configured tools to complete tasks
- Results: They return results and execution summaries back to the main conversation
Getting Started
Quick Start
1. Create your first Subagent: run `/agents create` and follow the guided wizard to create a specialized agent.
2. Manage existing agents: run `/agents manage` to view and manage your configured Subagents.
3. Use Subagents automatically: Simply ask the main AI to perform tasks that match your Subagents' specializations. The AI will automatically delegate appropriate work.
Example Usage
```text
User: "Please write comprehensive tests for the authentication module"

AI: I'll delegate this to your testing specialist Subagent.
[Delegates to "testing-expert" Subagent]
[Shows real-time progress of test creation]
[Returns with completed test files and execution summary]
```
Management
CLI Commands
Subagents are managed through the `/agents` slash command and its subcommands:
- `/agents create`: Creates a new Subagent through a guided step wizard.
- `/agents manage`: Opens an interactive management dialog for viewing and managing existing Subagents.
Storage Locations
Subagents are stored as Markdown files in multiple locations:
- Project-level: `.qwen/agents/` (highest precedence)
- User-level: `~/.qwen/agents/` (fallback)
- Extension-level: Provided by installed extensions
This allows you to have project-specific agents, personal agents that work across all projects, and extension-provided agents that add specialized capabilities.
Extension Subagents
Extensions can provide custom subagents that become available when the extension is enabled. These agents are stored in the extension's agents/ directory and follow the same format as personal and project agents.
Extension subagents:
- Are automatically discovered when the extension is enabled
- Appear in the `/agents manage` dialog under the "Extension Agents" section
- Cannot be edited directly (edit the extension source instead)
- Follow the same configuration format as user-defined agents
To see which extensions provide subagents, check the extension's `qwen-extension.json` file for an `agents` field.
File Format
Subagents are configured using Markdown files with YAML frontmatter. This format is human-readable and easy to edit with any text editor.
Basic Structure
```markdown
---
name: agent-name
description: Brief description of when and how to use this agent
model: inherit # Optional: inherit or model-id
approvalMode: auto-edit # Optional: default, plan, auto-edit, yolo
tools: # Optional: allowlist of tools
  - tool1
  - tool2
disallowedTools: # Optional: blocklist of tools
  - tool3
---

System prompt content goes here.
Multiple paragraphs are supported.
```
Model Selection
Use the optional `model` frontmatter field to control which model a subagent uses:
- `inherit`: Use the same model as the main conversation
- Omit the field: Same as `inherit`
- `glm-5`: Use that model ID with the main conversation's auth type
- `openai:gpt-4o`: Use a different provider (resolves credentials from env vars)
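For example, a subagent pinned to a different provider might look like this (a hypothetical configuration; the agent name and model ID are placeholders):

```yaml
---
name: fast-summarizer            # hypothetical example agent
description: Summarizes long documents and threads
model: openai:gpt-4o             # provider:model-id; credentials resolved from env vars
---
```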
Permission Mode
Use the optional `approvalMode` frontmatter field to control how a subagent's tool calls are approved. Valid values:
- `default`: Tools require interactive approval (same as the main session default)
- `plan`: Analyze-only mode; the agent plans but does not execute changes
- `auto-edit`: Tools are auto-approved without prompting (recommended for most agents)
- `yolo`: All tools auto-approved, including potentially destructive ones
If you omit this field, the subagent's permission mode is determined automatically:
- If the parent session is in yolo or auto-edit mode, the subagent inherits that mode. A permissive parent stays permissive.
- If the parent session is in plan mode, the subagent stays in plan mode. An analyze-only session cannot mutate files through a delegated agent.
- If the parent session is in default mode (in a trusted folder), the subagent gets auto-edit so it can work autonomously.
When you do set `approvalMode`, the parent's permissive modes still take priority. For example, if the parent is in yolo mode, a subagent with `approvalMode: plan` will still run in yolo mode.
```markdown
---
name: cautious-reviewer
description: Reviews code without making changes
approvalMode: plan
tools:
  - read_file
  - grep_search
  - glob
---

You are a code reviewer. Analyze the code and report findings.
Do not modify any files.
```
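The inheritance rules above can be sketched as a small resolution function (illustrative TypeScript; names and structure are not taken from the actual implementation):

```typescript
type ApprovalMode = "default" | "plan" | "auto-edit" | "yolo";

// Sketch of the approvalMode resolution rules described above.
function resolveApprovalMode(
  parent: ApprovalMode,
  configured?: ApprovalMode,
): ApprovalMode {
  // A permissive parent takes priority, even over an explicit setting.
  if (parent === "yolo" || parent === "auto-edit") return parent;
  // An analyze-only session cannot escalate through a delegated agent.
  if (parent === "plan") return "plan";
  // Parent in default mode: honor the configured value, otherwise
  // fall back to auto-edit so the subagent can work autonomously.
  return configured ?? "auto-edit";
}
```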
Tool Configuration
Use `tools` and `disallowedTools` to control which tools a subagent can access.
`tools` (allowlist): When specified, the subagent can only use the listed tools. When omitted, the subagent inherits all available tools from the parent session.
```markdown
---
name: reader
description: Read-only agent for code exploration
tools:
  - read_file
  - grep_search
  - glob
  - list_directory
---
```
`disallowedTools` (blocklist): When specified, the listed tools are removed from the subagent's tool pool. This is useful when you want "everything except X" without listing every permitted tool.
```markdown
---
name: safe-worker
description: Agent that cannot modify files
disallowedTools:
  - write_file
  - edit
  - run_shell_command
---
```
If both `tools` and `disallowedTools` are set, the allowlist is applied first, then the blocklist removes tools from that set.
MCP tools follow the same rules. If a subagent has no `tools` list, it inherits all MCP tools from the parent session. If a subagent has an explicit `tools` list, it only gets MCP tools that are explicitly named in that list.
The `disallowedTools` field supports MCP server-level patterns:
- `mcp__server__tool_name`: blocks a specific MCP tool
- `mcp__server`: blocks all tools from that MCP server
```markdown
---
name: no-slack
description: Agent without Slack access
disallowedTools:
  - mcp__slack
---
```
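Putting the allowlist, blocklist, and MCP server-level patterns together, the filtering order can be sketched as follows (illustrative TypeScript; function names are not from the actual codebase):

```typescript
// "mcp__server" blocks every tool on that server;
// an exact name match blocks a single tool.
function isBlocked(tool: string, pattern: string): boolean {
  return tool === pattern || tool.startsWith(pattern + "__");
}

function filterTools(
  available: string[],
  tools?: string[],
  disallowedTools?: string[],
): string[] {
  // Allowlist first: an explicit `tools` list restricts the pool.
  let pool = tools
    ? available.filter((t) => tools.includes(t))
    : [...available]; // no allowlist: inherit everything
  // Then the blocklist removes tools (and whole MCP servers) from that set.
  if (disallowedTools) {
    pool = pool.filter((t) => !disallowedTools.some((p) => isBlocked(t, p)));
  }
  return pool;
}
```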
Example Usage
```markdown
---
name: project-documenter
description: Creates project documentation and README files
---

You are a documentation specialist.
Focus on creating clear, comprehensive documentation that helps both
new contributors and end users understand the project.
```
Using Subagents Effectively
Automatic Delegation
Qwen Code proactively delegates tasks based on:
- The task description in your request
- The `description` field in Subagent configurations
- Current context and available tools
To encourage more proactive Subagent use, include phrases like "use PROACTIVELY" or "MUST BE USED" in your `description` field.
Explicit Invocation
Request a specific Subagent by mentioning it in your command:
Let the testing-expert Subagent create unit tests for the payment module
Have the documentation-writer Subagent update the API reference
Get the react-specialist Subagent to optimize this component's performance
Examples
Development Workflow Agents
Testing Specialist
Perfect for comprehensive test creation and test-driven development.
```markdown
---
name: testing-expert
description: Writes comprehensive unit tests, integration tests, and handles test automation with best practices
tools:
  - read_file
  - write_file
  - read_many_files
  - run_shell_command
---

You are a testing specialist focused on creating high-quality, maintainable tests.

Your expertise includes:
- Unit testing with appropriate mocking and isolation
- Integration testing for component interactions
- Test-driven development practices
- Edge case identification and comprehensive coverage
- Performance and load testing when appropriate

For each testing task:
1. Analyze the code structure and dependencies
2. Identify key functionality, edge cases, and error conditions
3. Create comprehensive test suites with descriptive names
4. Include proper setup/teardown and meaningful assertions
5. Add comments explaining complex test scenarios
6. Ensure tests are maintainable and follow DRY principles

Always follow testing best practices for the detected language and framework.
Focus on both positive and negative test cases.
```
Use Cases:
- “Write unit tests for the authentication service”
- “Create integration tests for the payment processing workflow”
- “Add test coverage for edge cases in the data validation module”
Documentation Writer
Specialized in creating clear, comprehensive documentation.
```markdown
---
name: documentation-writer
description: Creates comprehensive documentation, README files, API docs, and user guides
tools:
  - read_file
  - write_file
  - read_many_files
  - web_search
---

You are a technical documentation specialist.

Your role is to create clear, comprehensive documentation that serves both
developers and end users. Focus on:

**For API Documentation:**
- Clear endpoint descriptions with examples
- Parameter details with types and constraints
- Response format documentation
- Error code explanations
- Authentication requirements

**For User Documentation:**
- Step-by-step instructions with screenshots when helpful
- Installation and setup guides
- Configuration options and examples
- Troubleshooting sections for common issues
- FAQ sections based on common user questions

**For Developer Documentation:**
- Architecture overviews and design decisions
- Code examples that actually work
- Contributing guidelines
- Development environment setup

Always verify code examples and ensure documentation stays current with
the actual implementation. Use clear headings, bullet points, and examples.
```
Use Cases:
- “Create API documentation for the user management endpoints”
- “Write a comprehensive README for this project”
- “Document the deployment process with troubleshooting steps”
Code Reviewer
Focused on code quality, security, and best practices.
```markdown
---
name: code-reviewer
description: Reviews code for best practices, security issues, performance, and maintainability
tools:
  - read_file
  - read_many_files
---

You are an experienced code reviewer focused on quality, security, and maintainability.

Review criteria:
- **Code Structure**: Organization, modularity, and separation of concerns
- **Performance**: Algorithmic efficiency and resource usage
- **Security**: Vulnerability assessment and secure coding practices
- **Best Practices**: Language/framework-specific conventions
- **Error Handling**: Proper exception handling and edge case coverage
- **Readability**: Clear naming, comments, and code organization
- **Testing**: Test coverage and testability considerations

Provide constructive feedback with:
1. **Critical Issues**: Security vulnerabilities, major bugs
2. **Important Improvements**: Performance issues, design problems
3. **Minor Suggestions**: Style improvements, refactoring opportunities
4. **Positive Feedback**: Well-implemented patterns and good practices

Focus on actionable feedback with specific examples and suggested solutions.
Prioritize issues by impact and provide rationale for recommendations.
```
Use Cases:
- “Review this authentication implementation for security issues”
- “Check the performance implications of this database query logic”
- “Evaluate the code structure and suggest improvements”
Technology-Specific Agents
React Specialist
Optimized for React development, hooks, and component patterns.
```markdown
---
name: react-specialist
description: Expert in React development, hooks, component patterns, and modern React best practices
tools:
  - read_file
  - write_file
  - read_many_files
  - run_shell_command
---

You are a React specialist with deep expertise in modern React development.

Your expertise covers:
- **Component Design**: Functional components, custom hooks, composition patterns
- **State Management**: useState, useReducer, Context API, and external libraries
- **Performance**: React.memo, useMemo, useCallback, code splitting
- **Testing**: React Testing Library, Jest, component testing strategies
- **TypeScript Integration**: Proper typing for props, hooks, and components
- **Modern Patterns**: Suspense, Error Boundaries, Concurrent Features

For React tasks:
1. Use functional components and hooks by default
2. Implement proper TypeScript typing
3. Follow React best practices and conventions
4. Consider performance implications
5. Include appropriate error handling
6. Write testable, maintainable code

Always stay current with React best practices and avoid deprecated patterns.
Focus on accessibility and user experience considerations.
```
Use Cases:
- “Create a reusable data table component with sorting and filtering”
- “Implement a custom hook for API data fetching with caching”
- “Refactor this class component to use modern React patterns”
Python Expert
Specialized in Python development, frameworks, and best practices.
```markdown
---
name: python-expert
description: Expert in Python development, frameworks, testing, and Python-specific best practices
tools:
  - read_file
  - write_file
  - read_many_files
  - run_shell_command
---

You are a Python expert with deep knowledge of the Python ecosystem.

Your expertise includes:
- **Core Python**: Pythonic patterns, data structures, algorithms
- **Frameworks**: Django, Flask, FastAPI, SQLAlchemy
- **Testing**: pytest, unittest, mocking, test-driven development
- **Data Science**: pandas, numpy, matplotlib, jupyter notebooks
- **Async Programming**: asyncio, async/await patterns
- **Package Management**: pip, poetry, virtual environments
- **Code Quality**: PEP 8, type hints, linting with pylint/flake8

For Python tasks:
1. Follow PEP 8 style guidelines
2. Use type hints for better code documentation
3. Implement proper error handling with specific exceptions
4. Write comprehensive docstrings
5. Consider performance and memory usage
6. Include appropriate logging
7. Write testable, modular code

Focus on writing clean, maintainable Python code that follows community standards.
```
Use Cases:
- “Create a FastAPI service for user authentication with JWT tokens”
- “Implement a data processing pipeline with pandas and error handling”
- “Write a CLI tool using argparse with comprehensive help documentation”
Best Practices
Design Principles
Single Responsibility Principle
Each Subagent should have a clear, focused purpose.
✅ Good:
```markdown
---
name: testing-expert
description: Writes comprehensive unit tests and integration tests
---
```
❌ Avoid:
```markdown
---
name: general-helper
description: Helps with testing, documentation, code review, and deployment
---
```
Why: Focused agents produce better results and are easier to maintain.
Clear Specialization
Define specific expertise areas rather than broad capabilities.
✅ Good:
```markdown
---
name: react-performance-optimizer
description: Optimizes React applications for performance using profiling and best practices
---
```
❌ Avoid:
```markdown
---
name: frontend-developer
description: Works on frontend development tasks
---
```
Why: Specific expertise leads to more targeted and effective assistance.
Actionable Descriptions
Write descriptions that clearly indicate when to use the agent.
✅ Good:
description: Reviews code for security vulnerabilities, performance issues, and maintainability concerns
❌ Avoid:
description: A helpful code reviewer
Why: Clear descriptions help the main AI choose the right agent for each task.
Configuration Best Practices
System Prompt Guidelines
Be Specific About Expertise:
You are a Python testing specialist with expertise in:
- pytest framework and fixtures
- Mock objects and dependency injection
- Test-driven development practices
- Performance testing with pytest-benchmark
Include Step-by-Step Approaches:
For each testing task:
1. Analyze the code structure and dependencies
2. Identify key functionality and edge cases
3. Create comprehensive test suites with clear naming
4. Include setup/teardown and proper assertions
5. Add comments explaining complex test scenarios
Specify Output Standards:
Always follow these standards:
- Use descriptive test names that explain the scenario
- Include both positive and negative test cases
- Add docstrings for complex test functions
- Ensure tests are independent and can run in any order
Security Considerations
- Tool Restrictions: Use `tools` to limit which tools a subagent can access, or `disallowedTools` to block specific tools while inheriting everything else
- Permission Mode: Subagents inherit their parent's permission mode by default. Plan-mode sessions cannot escalate to auto-edit through delegated agents. Privileged modes (auto-edit, yolo) are blocked in untrusted folders.
- Sandboxing: All tool execution follows the same security model as direct tool use
- Audit Trail: All Subagent actions are logged and visible in real-time
- Access Control: Project and user-level separation provides appropriate boundaries
- Sensitive Information: Avoid including secrets or credentials in agent configurations
- Production Environments: Consider separate agents for production vs development environments
Limits
The following soft warnings apply to Subagent configurations (no hard limits are enforced):
- Description Field: A warning is shown for descriptions exceeding 1,000 characters
- System Prompt: A warning is shown for system prompts exceeding 10,000 characters