Mirror of https://github.com/QwenLM/qwen-code.git (synced 2026-05-05 23:42:03 +00:00)
The current 7 LLM calls send ~350K input tokens (50K system prompt × 7 independent subagents). A Fork Subagent would share the prompt cache across all forks, reducing this to ~120K effective tokens (~65% savings). Documented as a future optimization pending implementation of the Fork Subagent platform feature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
| Name |
|---|
| scripts/ |
| src/ |
| vendor/ |
| index.ts |
| package.json |
| test-setup.ts |
| tsconfig.json |
| vitest.config.ts |