* feat: add contextual tips system with post-response context awareness
Add a context-aware tips system that proactively shows helpful tips based
on session state. Post-response tips warn when context usage exceeds 80%
or 95%, suggesting /compress. Startup tips rotate across sessions via LRU
scheduling with cross-session persistence (~/.qwen/tip_history.json).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
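As a rough illustration of the 80%/95% thresholds, a post-response relevance check might look like this. The names (`TipContext`, `contextUsageTip`) and message strings are hypothetical; the real tip registry API differs.

```typescript
// Illustrative sketch of the post-response threshold check; the real
// tip registry API differs. TipContext here is a hypothetical shape.
interface TipContext {
  promptTokenCount: number;
  tokenLimit: number;
}

function contextUsageTip(ctx: TipContext): string | null {
  const usage = ctx.promptTokenCount / ctx.tokenLimit;
  if (usage >= 0.95) {
    return 'Context usage is above 95%. Run /compress now to avoid truncation.';
  }
  if (usage >= 0.8) {
    return 'Context usage is above 80%. Consider running /compress.';
  }
  return null;
}
```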
* fix: use value import for runtime values in useContextualTips
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review feedback
- Use lastSessionTimestamp instead of totalShown for cross-session LRU
- Move getTipHistory singleton from Tips.tsx to services/tips/index.ts
- Defer TipHistory.load() when hideTips is true (no side effects)
- Use os.tmpdir() in tests for cross-platform portability
- Add proper translations for de/ja/pt/ru locale files
- Accept TipHistory | null in useContextualTips
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address Copilot review feedback
- Validate tips field type in TipHistory.load() to handle corrupted JSON
- Split approval-mode tip into platform-specific variants using ctx.platform
- Add afterEach cleanup for temp files in all test suites
- Guard useContextualTips against null tipHistory
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: import shared DEFAULT_TOKEN_LIMIT, harden tipHistory, set file permissions
- Import DEFAULT_TOKEN_LIMIT from @qwen-code/qwen-code-core instead of
hardcoding 1_048_576 in tipRegistry.ts and useContextualTips.ts
- Add normalizeEntry() to defensively handle corrupted tip history entries
- Write tip_history.json with mode 0o600 for privacy on multi-user systems
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
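A minimal sketch of the hardening above, assuming a simplified one-field entry shape (the real normalizeEntry() may validate more fields):

```typescript
import * as fs from 'node:fs';

// Hedged sketch of the defensive handling described above; the real
// normalizeEntry() and history entry shape may differ.
interface TipEntry {
  lastSessionTimestamp: number;
}

function normalizeEntry(raw: unknown): TipEntry {
  const ts =
    typeof raw === 'object' && raw !== null
      ? (raw as Record<string, unknown>)['lastSessionTimestamp']
      : undefined;
  // Corrupted values (missing, NaN, negative, wrong type) fall back to 0.
  return {
    lastSessionTimestamp:
      typeof ts === 'number' && Number.isFinite(ts) && ts >= 0 ? ts : 0,
  };
}

// mode 0o600 applies when the file is created, restricting the history
// to the owning user on multi-user systems.
function saveHistory(filePath: string, data: object): void {
  fs.writeFileSync(filePath, JSON.stringify(data), { mode: 0o600 });
}
```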
* fix: remove unused compressionThreshold from TipContext
compressionThreshold was defined in TipContext but never used by any tip's
isRelevant check. Remove it to avoid misleading consumers into thinking
tips respect the user's compression settings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: sanitize sessionCount and getLastShown against corrupted tip history
- Validate sessionCount is finite and non-negative in TipHistory.load()
- Use normalizeEntry() in getLastShown() for corrupted lastSessionTimestamp
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add contextual tips user documentation
Add docs/users/features/tips.md covering startup tips, post-response
context warnings, tip history persistence, and the hideTips setting.
Update settings.md description and register the new page in _meta.ts.
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace vague "background tasks" with specific "prompt suggestions and speculative
execution" in the --fast flag description across all i18n locales, docs, and VS Code
schema. Update example model name from qwen3.5-flash to qwen3-coder-flash. Also fix
completion logic to require a non-empty partial arg before suggesting --fast, preventing
Tab+Enter from accidentally entering fast model mode.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The "Compact Mode" label is more intuitive than "Verbose Mode" for users,
as it directly describes the default compact view experience. This change
inverts the boolean semantics (compactMode=false means show full output)
and exposes the setting in the /settings dialog (showInDialog: true).
- Rename ui.verboseMode → ui.compactMode with inverted default (false)
- Rename VerboseModeContext → CompactModeContext (file and exports)
- Rename TOGGLE_VERBOSE_MODE → TOGGLE_COMPACT_MODE in key bindings
- Update all consumer components with inverted logic
- Update i18n keys across 6 locales (verbose → compact)
- Update VS Code settings schema
- Add ui.compactMode documentation to settings.md
- Fix Ctrl+O description in keyboard-shortcuts.md
The /context command was missing the subcommand autocomplete feature
that other commands like /stats have. Now users can type '/context '
and see 'detail' as a suggestion in the dropdown.
- Added 'detail' subCommand to contextCommand with its own description
- Subcommand delegates to main action with 'detail' arg
- Added missing translation key for full description in zh.js
- Updated commands.md documentation
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
* fix(core): prevent followup suggestion input/output from appearing in tool call UI
The follow-up suggestion generation was leaking into the conversation UI
through three channels:
1. The forked query included tools in its generation config, allowing the
model to produce function calls during suggestion generation. Fixed by
setting `tools: []` in runForkedQuery's per-request config (kept in
createForkedChat for speculation which needs tools).
2. logApiResponse and logApiError recorded suggestion API events to the
chatRecordingService, causing them to appear in session JSONL files
and the WebUI. Fixed by adding isInternalPromptId() guard that skips
chatRecordingService for 'prompt_suggestion' and 'forked_query' IDs.
uiTelemetryService.addEvent() is preserved so /stats still tracks
suggestion token usage.
3. LoggingContentGenerator logged suggestion requests/responses to the
OpenAI logger and telemetry pipeline. Fixed by skipping logApiRequest,
buildOpenAIRequestForLogging, and logOpenAIInteraction for internal
prompt IDs. _logApiResponse is preserved (for /stats) but its
chatRecordingService path is filtered by fix #2.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: deduplicate isInternalPromptId into shared export from loggers.ts
Address review feedback: extract isInternalPromptId() to a single
exported function in telemetry/loggers.ts and import it in
LoggingContentGenerator, eliminating the duplicate private method.
Also update loggingContentGenerator.test.ts mock to use importOriginal
so the real isInternalPromptId is available during tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: extract isInternalPromptId to shared utils, add tests
Address maintainer review feedback:
1. Move isInternalPromptId() to packages/core/src/utils/internalPromptIds.ts
using a ReadonlySet for the ID registry. Adding new internal prompt IDs
only requires changing one file. loggers.ts re-exports for compatibility,
loggingContentGenerator.ts imports directly from utils.
2. Extract `tools: []` magic value to a frozen NO_TOOLS constant in
forkedQuery.ts.
3. Add unit tests for isInternalPromptId: prompt_suggestion → true,
forked_query → true, user_query → false, empty string → false.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
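The shared registry can be sketched as follows, mirroring the test cases listed above (a later commit adds 'speculation' to the set):

```typescript
// Minimal sketch of the shared ReadonlySet registry; the real module
// lives at packages/core/src/utils/internalPromptIds.ts.
const INTERNAL_PROMPT_IDS: ReadonlySet<string> = new Set([
  'prompt_suggestion',
  'forked_query',
]);

function isInternalPromptId(promptId: string): boolean {
  return INTERNAL_PROMPT_IDS.has(promptId);
}
```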
* fix: address Copilot review — docs, stream optimization, tests
1. Update forkedQuery.ts module docs to reflect that runForkedQuery
overrides tools: [] at the per-request level while createForkedChat
retains the full generationConfig for speculation callers.
2. Propagate isInternal into loggingStreamWrapper to skip response
collection and consolidation for internal prompts, avoiding
unnecessary CPU/memory overhead.
3. Add logApiResponse chatRecordingService filter tests: verify
prompt_suggestion/forked_query skip recording while normal IDs
still record.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: deep-freeze NO_TOOLS, add internal prompt guard tests
Address Copilot review round 3:
1. Deep-freeze NO_TOOLS.tools array to prevent shared mutable state
across forked query calls.
2. Add LoggingContentGenerator tests verifying that internal prompt IDs
(prompt_suggestion, forked_query) skip logApiRequest and OpenAI
interaction logging while preserving logApiResponse.
3. Add logApiError chatRecordingService filter tests matching the
existing logApiResponse coverage.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: reconcile createForkedChat JSDoc with module header
Clarify that createForkedChat retains the full generationConfig
(including tools) for speculation callers, while runForkedQuery
strips tools at the per-request level via NO_TOOLS.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: build errors and Copilot round 4 feedback
1. Fix NO_TOOLS type: Object.freeze produces readonly array incompatible
with ToolUnion[]. Use Readonly<Pick<>> instead; spread in requestConfig
already creates a fresh mutable copy per call.
2. Fix test missing required 'model' field in ContentGeneratorConfig.
3. Track firstResponseId/firstModelVersion in loggingStreamWrapper so
_logApiResponse/_logApiError have accurate values even when full
response collection is skipped for internal prompts.
4. Strengthen OpenAI logger test assertion: assert OpenAILogger was
constructed (not guarded by if), then assert logInteraction was
not called.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove dead Object.keys check, add streaming internal prompt test
1. Simplify runForkedQuery: requestConfig always has tools:[] from
NO_TOOLS spread, so the Object.keys().length > 0 ternary is dead
code. Pass requestConfig directly.
2. Add generateContentStream test for internal prompt IDs to match
the existing generateContent coverage, ensuring the streaming
wrapper also skips logApiRequest and OpenAI interaction logging.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: prevent Enter accept from re-inserting suggestion into buffer
When accepting a followup suggestion via Enter, accept() queued
buffer.insert(suggestion) in a microtask that executed after
handleSubmitAndClear had already cleared the buffer, leaving the
suggestion text stuck in the input.
Add skipOnAccept option to accept() so the Enter path bypasses the
onAccept callback. Also add runForkedQuery unit tests verifying
tools: [] is passed in per-request config.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
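The Enter/Tab split could be sketched like this. Signatures are hypothetical; the real accept() defers insertion in a microtask, which is exactly what raced with handleSubmitAndClear, while this sketch fires the callback synchronously for brevity.

```typescript
// Hypothetical shape of the skipOnAccept fix; real hook signatures differ.
function accept(
  suggestion: string,
  insertIntoBuffer: (text: string) => void,
  opts: { skipOnAccept?: boolean } = {},
): void {
  if (opts.skipOnAccept) {
    // Enter path: the caller submits the suggestion directly, so
    // re-inserting it would leave stale text after the buffer clears.
    return;
  }
  // Tab path: put the suggestion into the input buffer for editing.
  insertIntoBuffer(suggestion);
}
```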
* fix(core): add speculation to internal IDs, fix logToolCall filtering, improve suggestion prompt
- Add 'speculation' to INTERNAL_PROMPT_IDS so speculation API traffic
and tool calls are hidden from chat recordings and tool call UI
- Add isInternalPromptId check to logToolCall() for consistency with
logApiError/logApiResponse
- Improve SUGGESTION_PROMPT: prioritize assistant's last few lines and
extract actionable text from explicit tips (e.g. "Tip: type X")
- Fix garbled unicode in prompt text
- Update design docs and user docs to reflect changes
- Add test coverage for all new behavior
* fix(core): deep-freeze NO_TOOLS, add speculation to loggingContentGenerator tests
- Object.freeze NO_TOOLS and its tools array to prevent runtime mutation
- Add 'speculation' to loggingContentGenerator internal prompt ID tests
for consistency with loggers.test.ts and internalPromptIds.ts
* fix(core): fix NO_TOOLS Object.freeze type error
Use `as const` with type assertion to satisfy TypeScript while keeping
runtime immutability via Object.freeze.
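One plausible reconstruction of the final NO_TOOLS shape, with a stand-in ToolUnion type (the real core types differ):

```typescript
// Illustrative reconstruction of NO_TOOLS; ToolUnion is a stand-in.
type ToolUnion = { name: string };
interface RequestConfig {
  tools: ToolUnion[];
}

// A type assertion satisfies the compiler; Object.freeze enforces
// runtime immutability of both the object and its tools array.
const NO_TOOLS = Object.freeze({
  tools: Object.freeze([]) as unknown as ToolUnion[],
});

const baseConfig: RequestConfig = { tools: [{ name: 'read_file' }] };
// Spreading NO_TOOLS last overrides tools for the forked request.
const requestConfig: RequestConfig = { ...baseConfig, ...NO_TOOLS };
```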
* refactor(core): remove unused isInternalPromptId re-export from loggers.ts
All consumers import directly from utils/internalPromptIds.js.
The re-export was dead code with no importers.
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Update architecture diagram to show status line in footer left
section instead of separate row below
- Document 1-row (default mode) and 2-row (non-default mode) layouts
- Note suppressHint behavior and truncation
- Update settings reference description
The status line is now inlined in the footer's left section,
so horizontal padding is no longer applicable. Remove padding
from StatusLineConfig, settings schema, JSON schema, and docs.
Restructure the status line stdin JSON for clarity and accuracy:
- Rename model.id → model.display_name, cwd → workspace.current_dir
- Replace raw context_window size/count with used_percentage,
remaining_percentage, current_usage, context_window_size, and
total_input_tokens/total_output_tokens
- Add version field from cfg.getCliVersion()
- Add git.branch, metrics.models, metrics.files
- Remove upstream-only fields: tokens.tool (never populated),
session (start_time/elapsed_time not live-updating),
streaming_state, approval_mode, terminal, metrics.tools
- Rename tokens.candidates → tokens.completion (Qwen API convention)
- Fix template string escaping in builtin-agents to avoid
templateString() placeholder collision
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
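An illustrative payload matching the renamed fields might look like the following; the nesting and every value are guesses inferred from the field names above, not the real schema.

```typescript
// Hypothetical example of the restructured stdin JSON; field nesting
// and values are illustrative only.
const statusLineStdin = {
  version: '0.1.0',
  model: { display_name: 'qwen3-coder-flash' },
  workspace: { current_dir: '/home/user/project' },
  context_window: {
    used_percentage: 42,
    remaining_percentage: 58,
    current_usage: 440_000,
    context_window_size: 1_048_576,
    total_input_tokens: 420_000,
    total_output_tokens: 20_000,
  },
  tokens: { completion: 20_000 },
  git: { branch: 'main' },
  metrics: { models: {}, files: {} },
};
```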
* feat(core): adaptive output token escalation (8K default + 64K retry)
99% of model responses are under 5K tokens, but we previously reserved
32K for every request. This wastes GPU slot capacity by ~4x.
Now the default output limit is 8K. When a response hits this cap
(stop_reason=max_tokens), it automatically retries once at 64K — only
the ~1% of requests that actually need more tokens pay the cost.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
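The escalation described above might be sketched as follows; it is synchronous and uses illustrative names (callModel, stopReason) for brevity, while the real path is async.

```typescript
// Hedged sketch of the adaptive output-token escalation.
const DEFAULT_MAX_OUTPUT_TOKENS = 8_192;
const ESCALATED_MAX_OUTPUT_TOKENS = 65_536;

interface ModelResponse {
  text: string;
  stopReason: 'stop' | 'max_tokens';
}

function generateWithEscalation(
  callModel: (maxOutputTokens: number) => ModelResponse,
): ModelResponse {
  const first = callModel(DEFAULT_MAX_OUTPUT_TOKENS);
  if (first.stopReason !== 'max_tokens') {
    return first; // the vast majority of requests finish within 8K
  }
  // Only responses that hit the cap pay for the single 64K retry.
  return callModel(ESCALATED_MAX_OUTPUT_TOKENS);
}
```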
* docs: add design doc and user doc for adaptive output token escalation
- Add design doc covering problem, architecture, token limit
determination, escalation mechanism, and design decisions
- Document QWEN_CODE_MAX_OUTPUT_TOKENS env var in settings.md
- Add max_tokens adaptive behavior explanation in model config section
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Align with observed provider prompt-cache TTL (~5 min). Add
`context.gapThresholdMinutes` setting so users can tune the threshold
for providers with different cache TTLs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
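A minimal sketch of the gap check, assuming the setting is read in minutes and the default mirrors the observed ~5 minute cache TTL:

```typescript
// Illustrative check for context.gapThresholdMinutes; the real
// integration with session timing differs.
const DEFAULT_GAP_THRESHOLD_MINUTES = 5;

function exceedsGapThreshold(
  lastActivityMs: number,
  nowMs: number,
  gapThresholdMinutes: number = DEFAULT_GAP_THRESHOLD_MINUTES,
): boolean {
  return nowMs - lastActivityMs > gapThresholdMinutes * 60_000;
}
```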
Rename the subcommand to accurately reflect its behavior (exits plan
mode and restores previous approval mode, does not trigger execution).
Update source, tests, i18n keys (6 locales), and docs.
User doc and PR description now include the "PR review, zero findings
→ post comments → approve PR" row in the follow-up actions table.
Also fix the PR description: "Step 4" → "Step 9" for post comments.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Cross-repo lightweight mode has no local codebase — Agent 5 (build/test)
is pointless. The skill now launches 4 agents instead of 5 in cross-repo mode.
Updated token count tables in SKILL.md, user doc, and DESIGN.md:
same-repo = 7 LLM calls, cross-repo = 6.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- User doc: added "Other" row to language table + explanation that
CI config is read for unrecognized projects
- DESIGN.md: added "Why auto-discover from CI config" decision
section + added .qwen/review-tools.md to rejected alternatives
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- git branch -D: add 2>/dev/null || true to both cleanup sites
(Step 1 stale cleanup + Step 11) to prevent abort if ref missing
- Cross-repo doc: clarify Agents 1-4 only (Agent 5 build/test
requires local codebase, not available in cross-repo mode)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
SKILL.md:
- Step 9 must use owner/repo from URL (not gh repo view) for cross-repo
- Step 2 (project rules) skipped in cross-repo mode (no local files)
User doc: add Cross-repo PR Review section with same-repo vs cross-repo
capability comparison table.
DESIGN.md: add "Why cross-repo uses lightweight mode" section explaining
CLI tools are inherently repo-local and our approach is best available.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reverse audit agent already has full context (all confirmed findings +
entire diff), so its findings don't need a second opinion. This brings
the actual LLM call count to 7 (5 review + 1 verify + 1 reverse),
matching the documented claim.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
If a previous review was interrupted (Ctrl+C, crash), stale worktree
and local ref would block the next review. Now Step 1 checks for and
cleans up stale .qwen/tmp/review-pr-<N> worktree and qwen-review/pr-<N>
ref before creating new ones.
Step 5 also cleans up the local ref alongside the worktree.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mermaid only renders on GitHub; it shows as raw code on Nextra,
Docusaurus, VS Code preview, and in offline viewing. A plain-text
ASCII diagram is universally compatible and includes LLM call
cost annotations on each stage.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Step 4.5: use absolute paths for reports/cache in worktree mode
(relative paths would land in worktree and be deleted)
- Step 1: fetch into qwen-review/pr-<N> ref to avoid clobbering
existing local branches
- Step 2.6: reverse audit findings use batch verification (not
one-per-finding), consistent with Step 2.5
- Doc: clarify reverse audit findings are also batch-verified
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add Token Efficiency section showing fixed 7 LLM calls breakdown
- Fix follow-up table: "fix these issues" is local-only (worktree
cleaned up after PR review)
- Update PR description with worktree, batch verification, cross-model
review, PR comment dedup, and expanded test plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously, each finding got its own independent verification agent
(N findings = N LLM calls). Now a single verification agent receives
all findings at once and verifies them in one pass.
Token cost: 6+N variable calls → 7 fixed calls (5 review + 1 verify + 1 reverse audit)
Quality: minimal impact — batch verification has fuller cross-finding context
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
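The batching could be sketched as a single prompt builder; the wording is illustrative and not the real SKILL.md prompt.

```typescript
// Sketch of collapsing N per-finding verification calls into one
// batch prompt with cross-finding context.
interface Finding {
  file: string;
  summary: string;
}

function buildBatchVerificationPrompt(findings: Finding[]): string {
  const numbered = findings
    .map((f, i) => `${i + 1}. [${f.file}] ${f.summary}`)
    .join('\n');
  return (
    'Verify each finding below against the full diff and mark it ' +
    'confirmed or rejected:\n' + numbered
  );
}
```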
- Add model attribution to no-findings LGTM path
- Handle empty string from getModel() with .trim() || 'unknown'
- Add tests for {{model}} with args and empty model ID
- Fix doc contradiction: PR autofix pushes automatically from worktree
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. Remove gh pr checkout --detach (modifies working tree, defeats
worktree purpose). Use git fetch only.
2. Add dependency installation step (npm ci etc.) in worktree —
without it, all TS/JS linting/building fails.
3. Cache and reports written to main project dir, not worktree
(would be deleted in Step 5).
4. "fix these issues" tip only for local reviews — worktree is
cleaned up after PR review, so interactive fixing not possible.
5. Autofix push uses explicit remote branch name from Step 1.
6. Move incremental check before dependency install to avoid
wasting time when there are no new changes.
7. Fix Step 3 reference: "from Steps 2.5 and 2.6" (includes
reverse audit findings).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the stash + checkout + restore flow with an isolated git
worktree for PR reviews. This eliminates:
- Stash orphan risks (multiple early exit paths)
- Wrong-branch risks (Step 5 restore failures)
- Build cache pollution (worktree has its own state)
- All stash-related error handling complexity
New flow:
- Step 1: git worktree add .qwen/tmp/review-pr-<number>
- All agents operate in the worktree directory
- Autofix commits and pushes from the worktree
- Step 5: git worktree remove (--force for dirty worktrees)
User's working tree is never modified during PR reviews.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
For PR reviews, fetch existing inline and general comments via gh api
before launching agents. A summary of already-discussed issues is
passed to agents so they don't re-report problems that humans or other
tools have already flagged.
Added to Exclusion Criteria: "Issues already discussed in existing
PR comments."
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- commands.md: renumber 1.6→1.7→1.8→1.9 after inserting 1.5 Built-in Skills
- SKILL.md: promote Reverse audit from ### to ## Step 2.6 for consistent
step hierarchy
- _meta.ts: add code-review to Features navigation sidebar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The incremental review cache now stores modelId alongside commitSha.
When the same PR is re-reviewed with a different model:
- Cache detects model change → runs full review (not skipped)
- Informs user: "Previous review used X. Running full review with Y
for a second opinion."
Same SHA + same model still skips as before.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
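The cache decision might be sketched as follows, using field names from the description above (the real cache file format is not shown):

```typescript
// Hedged sketch of the incremental-review cache decision.
interface ReviewCacheEntry {
  commitSha: string;
  modelId: string;
}

type CacheDecision = 'skip' | 'full-review-new-sha' | 'full-review-new-model';

function decideReview(
  cached: ReviewCacheEntry | null,
  head: ReviewCacheEntry,
): CacheDecision {
  if (!cached || cached.commitSha !== head.commitSha) {
    return 'full-review-new-sha';
  }
  if (cached.modelId !== head.modelId) {
    // Same SHA, different model: run a full review for a second opinion.
    return 'full-review-new-model';
  }
  return 'skip'; // same SHA + same model skips as before
}
```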
Add Step 2.6: after all findings are verified and aggregated, a single
reverse audit agent reviews the diff with full knowledge of what was
already found, specifically looking for important issues that all
previous agents missed.
- Only reports Critical/Suggestion level gaps (not Nice to have)
- Findings go through the same verification as other agents
- Single agent call — minimal cost overhead
- If nothing is found, initial review had strong coverage
This formalizes the "multi-round undirected audit" pattern that proved
effective during the development of this PR (14 rounds, 40+ issues).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add comprehensive user documentation for the /review command covering:
- Quick start examples for all modes (local, PR, file, --comment)
- Pipeline overview with all steps explained
- Review agents table (5 agents + their focus areas)
- Deterministic analysis (supported languages and tools)
- Severity levels and PR comment filtering rules
- Autofix workflow
- PR inline comments (what gets posted vs terminal-only)
- Follow-up actions (fix/post comments/commit)
- Project review rules (.qwen/review-rules.md etc.)
- Incremental review and caching
- Review report persistence
- Cross-file impact analysis
- Design philosophy
Also add /review and /simplify to the commands reference page
under a new "Built-in Skills" section with link to full docs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Inline shell snippets need sh -c to execute via a pipe, matching how
child_process.exec() runs the configured command.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
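A sketch of the matching execution path, assuming a POSIX shell is available (the real status line plumbing differs):

```typescript
import { execSync } from 'node:child_process';

// Run a configured inline snippet through a shell so it can read the
// status JSON from stdin; mirrors how child_process.exec() invokes a
// shell for the configured command string.
function runStatusLineCommand(command: string, inputJson: string): string {
  return execSync(command, {
    input: inputJson,
    shell: '/bin/sh', // equivalent to: echo "$json" | sh -c "<command>"
  })
    .toString()
    .trim();
}
```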
- Footer comment now accurately states only the "? for shortcuts"
hint is suppressed, not all left-section items
- Docs now note that Windows uses cmd.exe by default and suggest
wrapping commands with bash -c or using a bash script
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use echo "$input" instead of echo $input for proper shell variable
quoting, consistent with the script file example.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix "three distinct permission modes" → "four" (Plan was always listed)
- Update refactor example to use /plan command instead of /approval-mode
- Fix grammar in example description
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>