Commit graph

722 commits

Author SHA1 Message Date
tanzhenxin
8d74a0cf0a
feat(subagents): add disallowedTools field to agent definitions (#3064)
* feat(subagents): add disallowedTools field to agent definitions

Add a `disallowedTools` blocklist to agent frontmatter, letting agents
specify tools they should not have access to. Supports exact tool names,
MCP server-level patterns (e.g., `mcp__slack`), and display name aliases.

Applied as a post-filter in AgentCore.prepareTools() after the existing
`tools` allowlist. Persisted through serialize/parse roundtrips.

* docs: document disallowedTools and MCP tool behavior for subagents

Add Tool Configuration section to sub-agents docs explaining:
- tools allowlist and disallowedTools blocklist
- How MCP tools follow the same allowlist/blocklist rules
- MCP server-level patterns in disallowedTools

* fix(subagents): validate disallowedTools in SubagentValidator

Reuse the existing validateTools() method to validate disallowedTools
entries at config validation time, catching non-string and empty entries
before they reach runtime.

* test: remove flaky BaseSelectionList scroll test on Windows
2026-04-13 18:24:02 +08:00
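The allowlist-then-blocklist filtering described in #3064 could be sketched as below. The function and type names are illustrative stand-ins, not the actual AgentCore API; only the ordering (allowlist first, `disallowedTools` as a post-filter) and the `mcp__<server>` pattern convention come from the commit message.

```typescript
// Hypothetical sketch of the disallowedTools post-filter. Tool names and
// the mcp__<server> pattern convention follow the commit text; everything
// else is assumed.

interface ToolDef {
  name: string;
}

function matchesPattern(toolName: string, pattern: string): boolean {
  // Exact match, or MCP server-level pattern: `mcp__slack` matches every
  // tool of that server, e.g. `mcp__slack__post_message`.
  return toolName === pattern || toolName.startsWith(pattern + '__');
}

function prepareTools(
  all: ToolDef[],
  tools?: string[],          // optional allowlist from frontmatter
  disallowedTools?: string[] // blocklist, applied after the allowlist
): ToolDef[] {
  let result = tools
    ? all.filter((t) => tools.some((p) => matchesPattern(t.name, p)))
    : all;
  if (disallowedTools) {
    result = result.filter(
      (t) => !disallowedTools.some((p) => matchesPattern(t.name, p))
    );
  }
  return result;
}
```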
tanzhenxin
9a889dc614
feat(skills): add model override support via skill frontmatter (#2949)
* feat(skills): add model override support via skill frontmatter

Allow skills to specify a `model` field in YAML frontmatter to override
which model is used for subsequent turns within the same agentic loop.
The override flows through ToolResult → ToolCallResponseInfo →
SendMessageOptions and naturally expires when the loop ends.

Resolves #2052

* fix(core): only include modelOverride in response when defined

Fixes strict equality test failures in nonInteractiveToolExecutor.test.ts
where the extra undefined modelOverride field caused object mismatch.

* fix(skills): fix model override pipeline issues

- Wire up modelOverride in interactive CLI path (useGeminiStream)
- Fix inherit/no-model being unable to clear a prior override by using
  the 'in' operator instead of truthiness checks in scheduler and CLI
- Reject empty/whitespace model strings in parseModelField()
- Extract shared parseModelField() to deduplicate skill-load and
  skill-manager parsing logic
- Propagate modelOverride through stop-hook continuation in client

* fix(skills): persist model override across turns in interactive and cron paths

The interactive path stored the skill model override in a local variable,
causing it to be lost when subsequent non-skill tool turns ran. Use a ref
to persist the override for the duration of the agentic loop, resetting on
new user messages. Also propagate modelOverride in the cron execution loop
for consistency with the main non-interactive path.

* fix(skills): preserve model override on retry and add unit tests

Retry in interactive mode was clearing modelOverrideRef, causing the
skill-selected model to silently fall back to session default. Guard
the reset so retries preserve the active override.

Add unit tests for parseModelField (edge cases, type validation) and
modelOverride propagation through the skill tool result path.
2026-04-13 17:57:41 +08:00
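Two pieces of #2949 can be sketched together: the shared `parseModelField()` validation and the `'in'`-operator check that lets an explicit inherit/no-model clear a prior override. The names follow the commit messages, but the exact signatures are assumptions.

```typescript
// Hypothetical sketch of parseModelField(): reject non-strings and
// empty/whitespace model strings, per the commit text.
function parseModelField(raw: unknown): string | undefined {
  if (typeof raw !== 'string') return undefined;
  const model = raw.trim();
  if (model.length === 0) return undefined;
  return model;
}

// The 'in' operator distinguishes "field absent" (keep the active
// override) from "field explicitly set to undefined" (clear it), which
// a truthiness check cannot do.
function applyOverride(
  current: string | undefined,
  result: { modelOverride?: string }
): string | undefined {
  if ('modelOverride' in result) return result.modelOverride;
  return current;
}
```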
Shaojin Wen
b3bc42931e
feat: add contextual tips system with post-response context awareness (#2904)
* feat: add contextual tips system with post-response context awareness

Add a context-aware tips system that proactively shows helpful tips based
on session state. Post-response tips warn when context usage exceeds 80%
or 95%, suggesting /compress. Startup tips rotate across sessions via LRU
scheduling with cross-session persistence (~/.qwen/tip_history.json).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: use value import for runtime values in useContextualTips

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address PR review feedback

- Use lastSessionTimestamp instead of totalShown for cross-session LRU
- Move getTipHistory singleton from Tips.tsx to services/tips/index.ts
- Defer TipHistory.load() when hideTips is true (no side effects)
- Use os.tmpdir() in tests for cross-platform portability
- Add proper translations for de/ja/pt/ru locale files
- Accept TipHistory | null in useContextualTips

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review feedback

- Validate tips field type in TipHistory.load() to handle corrupted JSON
- Split approval-mode tip into platform-specific variants using ctx.platform
- Add afterEach cleanup for temp files in all test suites
- Guard useContextualTips against null tipHistory

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: import shared DEFAULT_TOKEN_LIMIT, harden tipHistory, set file permissions

- Import DEFAULT_TOKEN_LIMIT from @qwen-code/qwen-code-core instead of
  hardcoding 1_048_576 in tipRegistry.ts and useContextualTips.ts
- Add normalizeEntry() to defensively handle corrupted tip history entries
- Write tip_history.json with mode 0o600 for privacy on multi-user systems

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove unused compressionThreshold from TipContext

compressionThreshold was defined in TipContext but never used by any tip's
isRelevant check. Remove it to avoid misleading consumers into thinking
tips respect the user's compression settings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: sanitize sessionCount and getLastShown against corrupted tip history

- Validate sessionCount is finite and non-negative in TipHistory.load()
- Use normalizeEntry() in getLastShown() for corrupted lastSessionTimestamp

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add contextual tips user documentation

Add docs/users/features/tips.md covering startup tips, post-response
context warnings, tip history persistence, and the hideTips setting.
Update settings.md description and register the new page in _meta.ts.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 17:40:27 +08:00
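The cross-session LRU scheduling and hardened history persistence from #2904 might look roughly like this. The entry shape, `normalizeEntry()` behavior, and the 0o600 write mode come from the commit messages; the selection logic and field handling are a simplified sketch, not the real TipHistory implementation.

```typescript
import * as fs from 'node:fs';

// Assumed entry shape; only lastSessionTimestamp is modeled here.
interface TipEntry {
  lastSessionTimestamp: number;
}

// Defensively coerce corrupted history entries to a safe default.
function normalizeEntry(raw: unknown): TipEntry {
  const ts = (raw as TipEntry | null)?.lastSessionTimestamp;
  return {
    lastSessionTimestamp:
      typeof ts === 'number' && Number.isFinite(ts) && ts >= 0 ? ts : 0,
  };
}

// LRU: show the tip least recently seen across sessions.
function pickNextTip(
  history: Record<string, unknown>,
  tipIds: string[]
): string {
  return [...tipIds].sort(
    (a, b) =>
      normalizeEntry(history[a]).lastSessionTimestamp -
      normalizeEntry(history[b]).lastSessionTimestamp
  )[0];
}

function saveHistory(file: string, history: Record<string, TipEntry>): void {
  // mode 0o600: owner read/write only, for privacy on multi-user systems.
  fs.writeFileSync(file, JSON.stringify(history), { mode: 0o600 });
}
```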
pomelo
338c0b1e9e
refactor: merge test-utils package into core (#3200)
* refactor: merge test-utils package into core

Consolidate the standalone @qwen-code/qwen-code-test-utils package
into packages/core/src/test-utils/, eliminating the need for a
separate package that only provided createTmpDir, cleanupTmpDir,
and FileSystemStructure type.

Changes:
- Move file-system-test-helpers.ts into core/src/test-utils/
- Re-export from core's test-utils index
- Update 3 core test files to use relative imports
- Update cli useAtCompletion test to import from @qwen-code/qwen-code-core
- Remove test-utils devDependency from core and cli package.json
- Delete packages/test-utils/ directory

All affected tests pass (fileSearch, crawler, ignore, useAtCompletion).

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix: remove deleted test-utils from build order

The test-utils package was merged into core but the build script still
tried to build it separately, causing CI failures.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-13 17:11:03 +08:00
DennisYu07
dddb56d885
feat: add stopFailure and postCompact (#2825) 2026-04-13 12:54:44 +08:00
Shaojin Wen
61ad9db9c1
feat(cli): queue input editing — pop queued messages for editing via ↑/ESC (#2871)
* feat(cli): add queue input editing via Up arrow key

Allow users to edit queued messages by pressing the Up arrow key when
the cursor is at the top of the input. All queued messages are popped
into the input field for revision before resubmission, reducing wasted
turns from incorrect queued instructions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add missing mocks for InputPrompt tests and attachment mode guard

- Add popAllQueuedMessages mock and messageQueue to UIState/UIActions
  mocks in InputPrompt.test.tsx to fix 25 test failures
- Add !isAttachmentMode guard to prevent queue pop from conflicting
  with attachment navigation
- Add single-message popAllMessages test case

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review - restrict to Up arrow, add tests, update docs

- Only trigger queue pop on NAVIGATION_UP (arrow key), not HISTORY_UP
  (Ctrl+P), preserving existing Ctrl+P history navigation behavior
- Update AsyncMessageQueue class docs to describe popLast() LIFO semantics
- Add InputPrompt tests: Up arrow pops queue, Up arrow falls back to
  history when queue empty, Ctrl+P not intercepted by queue pop

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update fileoverview docs and make popAllMessages atomic via ref

- Update @fileoverview to describe FIFO+LIFO capability instead of
  "Simple FIFO queue"
- Use queueRef to make popAllMessages atomic, preventing duplicate
  pops from key auto-repeat before React re-renders

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: sync queueRef in addMessage/clearQueue and fall through on null pop

- Update queueRef inside addMessage setter and clearQueue to keep ref
  in sync between renders, preventing stale reads after clearQueue
- When popAllQueuedMessages returns null (queue already cleared), fall
  through to normal history navigation instead of consuming the key

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove dead popLast() and align popAllMessages separator to \n\n

- Remove unused AsyncMessageQueue.popLast() (no production callers)
- Change popAllMessages join separator from \n to \n\n for consistency
  with getQueuedMessagesText and auto-submit behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: use hook's drainQueue for mid-turn drain to prevent double-consumption race

The midTurnDrainRef previously used a separate messageQueueRef (synced
from React state), while popAllMessages uses the hook's internal
queueRef. If a tool completed between popAllMessages clearing queueRef
and React re-rendering, midTurnDrainRef would read stale data and
consume the same messages a second time.

Switching to the hook's drainQueue makes both paths read from the same
synchronous ref, eliminating the window for double consumption.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add missing popAllMessages mock and prepend branch test

Add popAllMessages to useMessageQueue mock in AppContainer tests.
Add test for prepending queued messages before existing input text.

* feat: add ESC trigger, cursor preservation, and progressive hint

- ESC pops queued messages before double-ESC clear logic
- Cursor stays at user's editing position after pop via moveToOffset
- Extract popQueueIntoInput helper to share logic between Up and ESC
- QueuedMessageDisplay hint hides after 3 empty→non-empty transitions

* test: add null-pop fallthrough test for queue race condition

Verify that when React state shows non-empty queue but the ref is
already drained (popAllQueuedMessages returns null), Up arrow falls
through to normal history navigation instead of getting stuck.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 14:38:32 +08:00
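The atomic-pop idea from #2871 (a synchronous ref shared by the pop path and the mid-turn drain, so key auto-repeat cannot pop twice before React re-renders) reduces to a small sketch. The class is a stand-in for the hook's internal ref, not the actual useMessageQueue API; the `\n\n` separator and null-on-empty fallthrough follow the commit text.

```typescript
// Minimal sketch of the ref-backed queue: every consumer reads the same
// synchronous state, so a second pop before a re-render sees an empty
// queue instead of stale data.
class SyncQueueRef {
  private queue: string[] = [];

  addMessage(msg: string): void {
    this.queue = [...this.queue, msg];
  }

  // Atomically take everything. Returns null when already drained so the
  // caller can fall through to normal history navigation.
  popAllMessages(): string | null {
    if (this.queue.length === 0) return null;
    const text = this.queue.join('\n\n'); // matches auto-submit separator
    this.queue = [];
    return text;
  }
}
```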
易良
81ccbb976c
fix(cli): prioritize slash command completions (#3104) 2026-04-11 11:04:58 +08:00
Edenman
4d2d4432d5
Merge pull request #2923 from QwenLM/feature/status-line-customization
feat(ui): add customizable status line with /statusline command
2026-04-09 19:23:08 +08:00
wenshao
f25fc047f1 test: add comprehensive tests for useStatusLine hook and statuslineCommand
Cover config validation, command execution with exec options, stdin JSON
payload, stale generation rejection, debouncing, config removal, cleanup
on unmount, EPIPE handling, command hot-reload, all state change triggers
(token count, model, branch, vim toggle, file lines), and process management.
2026-04-09 15:50:41 +08:00
Shaojin Wen
f208801b0e
fix(followup): prevent tool call UI leak and Enter accept buffer race (#2872)
* fix(core): prevent followup suggestion input/output from appearing in tool call UI

The follow-up suggestion generation was leaking into the conversation UI
through three channels:

1. The forked query included tools in its generation config, allowing the
   model to produce function calls during suggestion generation. Fixed by
   setting `tools: []` in runForkedQuery's per-request config (kept in
   createForkedChat for speculation which needs tools).

2. logApiResponse and logApiError recorded suggestion API events to the
   chatRecordingService, causing them to appear in session JSONL files
   and the WebUI. Fixed by adding isInternalPromptId() guard that skips
   chatRecordingService for 'prompt_suggestion' and 'forked_query' IDs.
   uiTelemetryService.addEvent() is preserved so /stats still tracks
   suggestion token usage.

3. LoggingContentGenerator logged suggestion requests/responses to the
   OpenAI logger and telemetry pipeline. Fixed by skipping logApiRequest,
   buildOpenAIRequestForLogging, and logOpenAIInteraction for internal
   prompt IDs. _logApiResponse is preserved (for /stats) but its
   chatRecordingService path is filtered by fix #2.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: deduplicate isInternalPromptId into shared export from loggers.ts

Address review feedback: extract isInternalPromptId() to a single
exported function in telemetry/loggers.ts and import it in
LoggingContentGenerator, eliminating the duplicate private method.

Also update loggingContentGenerator.test.ts mock to use importOriginal
so the real isInternalPromptId is available during tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: extract isInternalPromptId to shared utils, add tests

Address maintainer review feedback:

1. Move isInternalPromptId() to packages/core/src/utils/internalPromptIds.ts
   using a ReadonlySet for the ID registry. Adding new internal prompt IDs
   only requires changing one file. loggers.ts re-exports for compatibility,
   loggingContentGenerator.ts imports directly from utils.

2. Extract `tools: []` magic value to a frozen NO_TOOLS constant in
   forkedQuery.ts.

3. Add unit tests for isInternalPromptId: prompt_suggestion → true,
   forked_query → true, user_query → false, empty string → false.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review — docs, stream optimization, tests

1. Update forkedQuery.ts module docs to reflect that runForkedQuery
   overrides tools: [] at the per-request level while createForkedChat
   retains the full generationConfig for speculation callers.

2. Propagate isInternal into loggingStreamWrapper to skip response
   collection and consolidation for internal prompts, avoiding
   unnecessary CPU/memory overhead.

3. Add logApiResponse chatRecordingService filter tests: verify
   prompt_suggestion/forked_query skip recording while normal IDs
   still record.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: deep-freeze NO_TOOLS, add internal prompt guard tests

Address Copilot review round 3:

1. Deep-freeze NO_TOOLS.tools array to prevent shared mutable state
   across forked query calls.

2. Add LoggingContentGenerator tests verifying that internal prompt IDs
   (prompt_suggestion, forked_query) skip logApiRequest and OpenAI
   interaction logging while preserving logApiResponse.

3. Add logApiError chatRecordingService filter tests matching the
   existing logApiResponse coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: reconcile createForkedChat JSDoc with module header

Clarify that createForkedChat retains the full generationConfig
(including tools) for speculation callers, while runForkedQuery
strips tools at the per-request level via NO_TOOLS.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: build errors and Copilot round 4 feedback

1. Fix NO_TOOLS type: Object.freeze produces readonly array incompatible
   with ToolUnion[]. Use Readonly<Pick<>> instead; spread in requestConfig
   already creates a fresh mutable copy per call.

2. Fix test missing required 'model' field in ContentGeneratorConfig.

3. Track firstResponseId/firstModelVersion in loggingStreamWrapper so
   _logApiResponse/_logApiError have accurate values even when full
   response collection is skipped for internal prompts.

4. Strengthen OpenAI logger test assertion: assert OpenAILogger was
   constructed (not guarded by if), then assert logInteraction was
   not called.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove dead Object.keys check, add streaming internal prompt test

1. Simplify runForkedQuery: requestConfig always has tools:[] from
   NO_TOOLS spread, so the Object.keys().length > 0 ternary is dead
   code. Pass requestConfig directly.

2. Add generateContentStream test for internal prompt IDs to match
   the existing generateContent coverage, ensuring the streaming
   wrapper also skips logApiRequest and OpenAI interaction logging.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: prevent Enter accept from re-inserting suggestion into buffer

When accepting a followup suggestion via Enter, accept() queued
buffer.insert(suggestion) in a microtask that executed after
handleSubmitAndClear had already cleared the buffer, leaving the
suggestion text stuck in the input.

Add skipOnAccept option to accept() so the Enter path bypasses the
onAccept callback. Also add runForkedQuery unit tests verifying
tools: [] is passed in per-request config.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(core): add speculation to internal IDs, fix logToolCall filtering, improve suggestion prompt

- Add 'speculation' to INTERNAL_PROMPT_IDS so speculation API traffic
  and tool calls are hidden from chat recordings and tool call UI
- Add isInternalPromptId check to logToolCall() for consistency with
  logApiError/logApiResponse
- Improve SUGGESTION_PROMPT: prioritize assistant's last few lines and
  extract actionable text from explicit tips (e.g. "Tip: type X")
- Fix garbled unicode in prompt text
- Update design docs and user docs to reflect changes
- Add test coverage for all new behavior

* fix(core): deep-freeze NO_TOOLS, add speculation to loggingContentGenerator tests

- Object.freeze NO_TOOLS and its tools array to prevent runtime mutation
- Add 'speculation' to loggingContentGenerator internal prompt ID tests
  for consistency with loggers.test.ts and internalPromptIds.ts

* fix(core): fix NO_TOOLS Object.freeze type error

Use `as const` with type assertion to satisfy TypeScript while keeping
runtime immutability via Object.freeze.

* refactor(core): remove unused isInternalPromptId re-export from loggers.ts

All consumers import directly from utils/internalPromptIds.js.
The re-export was dead code with no importers.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 00:07:03 +08:00
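Two recurring pieces of #2872 can be sketched concretely: the `ReadonlySet`-backed internal-prompt-ID registry, and the frozen `NO_TOOLS` constant whose per-request spread yields a fresh mutable copy. The ID values and `Object.freeze` usage come from the commit messages; the `requestConfig` helper is an assumed stand-in for the real per-request merge.

```typescript
// Single registry: adding a new internal prompt ID only touches one file.
const INTERNAL_PROMPT_IDS: ReadonlySet<string> = new Set([
  'prompt_suggestion',
  'forked_query',
  'speculation',
]);

function isInternalPromptId(promptId: string): boolean {
  return INTERNAL_PROMPT_IDS.has(promptId);
}

// Deep-frozen so forked-query calls cannot share mutable tool state.
const NO_TOOLS = Object.freeze({
  tools: Object.freeze([]) as ReadonlyArray<unknown>,
});

// The spread creates a fresh mutable tools array per call while the
// frozen constant stays untouched.
function requestConfig(base: Record<string, unknown>) {
  return { ...base, ...NO_TOOLS, tools: [...NO_TOOLS.tools] };
}
```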
chinesepowered
5a5a175f00
fix(ui): Remove dead dirs state and unused hook parameter from InputPrompt (#2891)
* fix(ui): prevent useEffect from running every render in InputPrompt

getDirectories() returns a new array reference each call, causing the
useEffect dependency check to fail on every render. Move the call
inside the effect body and use stable dependencies [config, dirs] so
the effect only re-runs when they actually change.

* fix(ui): use serialized dep key for directory change detection

Move from [config, dirs] deps (both stable refs that miss external
changes) to a dirKey string (join of current directories). This
preserves the perf fix (no new array ref in deps) while still
detecting directory additions/removals from /add-dir etc.

* refactor(cli): remove unused dirs state from InputPrompt

The dirs parameter passed to useCommandCompletion() was never read
inside that hook, making the dirs state and sync effect in InputPrompt
dead code. Remove the parameter, the state, the effect, and all test
call-site args.
2026-04-08 22:18:22 +08:00
wenshao
a1c33cdb5e refactor(status-line): remove padding config
The status line is now inlined in the footer's left section,
so horizontal padding is no longer applicable. Remove padding
from StatusLineConfig, settings schema, JSON schema, and docs.
2026-04-08 20:24:33 +08:00
wenshao
841eb3c70c fix: address reviewer feedback — stdin error logging, JSON schema, i18n
- Log non-EPIPE stdin errors at debug level instead of silently
  swallowing them
- Add proper JSON schema properties for statusLine (type, command,
  padding) with enum, required, and additionalProperties constraints
- Add missing i18n entry for /statusline command description
2026-04-08 20:08:36 +08:00
wenshao
55b1ab174d fix(status-line): derive remaining_percentage from used and reject empty commands
- Compute remaining_percentage as round(100 - used) to guarantee
  used + remaining always sums to exactly 100.0
- Reject empty or whitespace-only command strings in config validation
2026-04-08 18:58:06 +08:00
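The reasoning behind deriving `remaining_percentage` from `used` is worth a worked sketch: rounding the two percentages independently can make them sum to 99.9 or 100.1, while computing the remainder as `round(100 - used)` guarantees an exact 100.0. The one-decimal rounding below is an assumption about precision; the derivation itself follows the commit message.

```typescript
// Sketch of the derived-complement approach; round1 precision is assumed.
function contextPercentages(usedTokens: number, windowSize: number) {
  const round1 = (v: number) => Math.round(v * 10) / 10;
  const used = Math.min(
    100,
    Math.max(0, round1((usedTokens / windowSize) * 100))
  );
  // Derived, not independently rounded: used + remaining is exactly 100.
  const remaining = round1(100 - used);
  return { used, remaining };
}
```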
pomelo
1e87388ffd
feat: add qwen3.6-plus model to ModelStudio Coding Plan (#3015)
- Add qwen3.6-plus to both China and Global/Intl regions as the first
  model in the Coding Plan template (1M context, enable_thinking)
- Set qwen3.6-plus as the new default MAINLINE_CODER_MODEL
- Add image+video input modality support for qwen3.6-plus

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-08 18:57:07 +08:00
wenshao
520ed4e040 fix: address audit findings across status-line and verbose-mode features
- useStatusLine: clamp used/remaining percentage to [0,100], track
  totalLinesRemoved as trigger, clean up debounceRef on unmount
- AppContainer: use drainQueue from useMessageQueue instead of manual
  messageQueueRef to avoid stale-ref reads between renders
- builtin-agents: add WRITE_FILE tool to statusline-setup agent, improve
  PS1 parsing instructions (unquoted assignments, \[/\]/\e escapes),
  strip ANSI colors, remove unreachable symlink instruction
- CompactToolGroupDisplay: fix misleading hint "show full tool output"
  to "toggle verbose mode" across all 6 locales
- AppContainer.test: add missing drainQueue mock
2026-04-08 18:45:44 +08:00
克竟
24a28d5fb0 refactor(status-line): redesign JSON input schema and add context fields
Restructure the status line stdin JSON for clarity and accuracy:
- Rename model.id → model.display_name, cwd → workspace.current_dir
- Replace raw context_window size/count with used_percentage,
  remaining_percentage, current_usage, context_window_size, and
  total_input_tokens/total_output_tokens
- Add version field from cfg.getCliVersion()
- Add git.branch, metrics.models, metrics.files
- Remove upstream-only fields: tokens.tool (never populated),
  session (start_time/elapsed_time not live-updating),
  streaming_state, approval_mode, terminal, metrics.tools
- Rename tokens.candidates → tokens.completion (Qwen API convention)
- Fix template string escaping in builtin-agents to avoid
  templateString() placeholder collision

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 17:52:07 +08:00
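Reconstructing the redesigned stdin payload from the field names in the commit message gives roughly the shape below. The nesting and types are assumptions pieced together from the renames (model.display_name, workspace.current_dir, tokens.completion) and added fields; it is not the authoritative schema.

```typescript
// Hypothetical shape of the status line stdin JSON after the redesign.
interface StatusLinePayload {
  version: string;
  model: { display_name: string };
  workspace: { current_dir: string };
  git?: { branch?: string };
  context_window: {
    used_percentage: number;
    remaining_percentage: number;
    current_usage: number;
    context_window_size: number;
    total_input_tokens: number;
    total_output_tokens: number;
  };
  tokens: { prompt: number; completion: number }; // candidates → completion
}

// A status line command would receive one such object on stdin.
const example: StatusLinePayload = {
  version: '0.1.0',
  model: { display_name: 'qwen3.6-plus' },
  workspace: { current_dir: '/repo' },
  git: { branch: 'main' },
  context_window: {
    used_percentage: 47.7,
    remaining_percentage: 52.3,
    current_usage: 500000,
    context_window_size: 1048576,
    total_input_tokens: 450000,
    total_output_tokens: 50000,
  },
  tokens: { prompt: 450000, completion: 50000 },
};
```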
wenshao
51964fa4b9 Merge remote-tracking branch 'origin/main' into feature/status-line-customization
# Conflicts:
#	packages/cli/src/ui/components/Footer.tsx
2026-04-08 05:05:04 +08:00
tanzhenxin
b632541629
Merge pull request #2770 from chiga0/feat/add-verbose-mode-switcher
feat: support verbose and compact mode switcher with Ctrl+O (refs #2767)
2026-04-07 15:48:41 +08:00
Shaojin Wen
b6373ac71e
feat(core): implement mid-turn queue drain for agent execution (#2854)
* feat(core): implement mid-turn queue drain for agent execution

Inject queued user messages between tool execution steps within a single
turn, so the model sees them immediately instead of waiting for the
entire round to complete.

- Add `dequeueAll()` to AsyncMessageQueue
- Add `midTurnDrain` callback to ReasoningLoopOptions
- Drain queue after processFunctionCalls, inject as text parts
- AgentComposer always enqueues directly (no local buffering)
- Add QUEUE_MESSAGES_CONSUMED event for UI sync

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add mid-turn queue drain to main session

Extend mid-turn queue drain to the main session's tool execution path
(useGeminiStream). Previously only agent tabs had this feature.

- Add midTurnDrainRef parameter to useGeminiStream
- Inject queued messages in handleCompletedTools before submitQuery
- Bridge useMessageQueue to drain ref in AppContainer via ref pattern

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review feedback on mid-turn drain

- Guard midTurnDrain with abort check to prevent message loss on cancel
- Synchronously clear messageQueueRef to prevent duplicate drains
- Only clear pending display on IDLE status, not all status changes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: scope mid-turn drain to main session only

Revert subagent-path changes (AgentCore, AgentInteractive,
AgentComposer, AsyncMessageQueue, agent-events) to keep the PR
focused on the main session, which is easier to test and validate.

Subagent mid-turn drain can be added in a follow-up PR.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review on main session mid-turn drain

- Move synchronous queue ref into useMessageQueue itself, expose
  drainQueue() for atomic drain (fixes race between addMessage and drain)
- Record drained messages as USER history items so the transcript
  stays complete
- Simplify AppContainer bridge to just midTurnDrainRef.current = drainQueue

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: guard mid-turn drain against cancelled turns

- Skip drain when turnCancelledRef or abortController signal is set,
  so queued messages stay for the next turn instead of being lost
- Restore ref-based queue bridge (drainQueue removed from useMessageQueue)
- Keep synchronous ref clear to prevent duplicate drains

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 09:14:44 +08:00
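The mid-turn drain loop from #2854 reduces to a compact sketch: after each tool-execution step, queued user messages are drained and injected as text parts, unless the turn was cancelled, in which case they stay queued for the next turn. `dequeueAll()` is named in the commit; the rest is a simplified stand-in for the real ReasoningLoop wiring.

```typescript
// Simplified queue with the dequeueAll() primitive from the commit text.
class AsyncMessageQueue {
  private queue: string[] = [];

  enqueue(msg: string): void {
    this.queue.push(msg);
  }

  // Atomically remove and return all queued messages.
  dequeueAll(): string[] {
    return this.queue.splice(0, this.queue.length);
  }
}

// Called between tool steps: drain unless the turn was cancelled, so
// queued messages survive an abort and reach the next turn instead.
function afterToolStep(queue: AsyncMessageQueue, aborted: boolean): string[] {
  if (aborted) return [];
  return queue.dequeueAll();
}
```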
wenshao
9bba05bad3 fix: add ASK_USER_QUESTION to statusline-setup agent, clear debounce on command change
- Agent can now ask for clarification when PS1 is not found
- Clear pending debounce timer before immediate doUpdate on command
  change to prevent redundant second execution

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:46:02 +08:00
wenshao
4c4e63888a fix: kill child process when statusLine config is removed
When statusLineCommand becomes undefined (user removes the setting),
kill any in-flight child process, bump generation counter, and clear
the debounce timer so stale callbacks cannot resurrect the output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:21:28 +08:00
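The generation-counter pattern that recurs in these status line commits (ignore stale exec callbacks; bump on command change or config removal so in-flight results cannot resurrect the output) can be sketched minimally. Names are illustrative, not the actual useStatusLine internals.

```typescript
// Sketch of the stale-callback guard: each run captures the generation
// at start; any bump afterwards makes its completion callback a no-op.
class StatusLineRunner {
  private generation = 0;
  output: string | null = null;

  // Returns the callback an exec() completion handler would invoke.
  start(): (result: string) => void {
    const gen = ++this.generation;
    return (result: string) => {
      if (gen !== this.generation) return; // stale: superseded or removed
      this.output = result;
    };
  }

  // User removed the statusLine setting: invalidate in-flight callbacks
  // and clear any displayed output.
  removeConfig(): void {
    this.generation++;
    this.output = null;
  }
}
```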
wenshao
1a985bb02e fix: exec cwd, output trimming, and status line alignment
- Set cwd to config.getTargetDir() so commands like pwd/git run in the
  correct workspace directory
- Strip only trailing newline instead of trim() to preserve intentional
  leading/trailing whitespace in command output
- Match footer's marginLeft/marginRight on the status line row so it
  aligns with the rest of the footer content

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 11:45:49 +08:00
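The output-trimming fix above hinges on a small distinction: shells append a final `\n` to command output, but leading spaces (or trailing spaces before that newline) may be intentional status line formatting that `trim()` would destroy. A sketch of the narrower strip:

```typescript
// Strip only the single trailing newline a shell appends; preserve all
// other whitespace the command emitted on purpose.
function stripTrailingNewline(out: string): string {
  return out.endsWith('\n') ? out.slice(0, -1) : out;
}
```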
wenshao
24251db4ef fix: track vimEnabled changes in status line triggers
When vim mode is toggled off, vimMode stays the same but the status
line should stop including vim data. Use effectiveVim (undefined when
disabled) as the tracked value instead of raw vimMode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 11:39:25 +08:00
wenshao
c219f7c4ac fix: address review feedback from Copilot
- Validate padding is finite number >= 0 instead of blind cast
- Initialize prevStateRef with current values to prevent double exec on mount
- Kill previous child process before starting new one; kill on unmount
- Fix agent prompt: settings path is ui.statusLine, not root-level
- Fix agent prompt: remove multi-cat examples, stdin can only be read once
- Update Footer left-section comment to reflect new behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 11:34:47 +08:00
wenshao
8d85492913 feat(ui): rewrite customizable status line
Rewrite the status line feature (originally by Gemini 3.1 Pro) to align
with the upstream design:

- Settings: change from plain string to object `{ type, command, padding? }`
- Hook: event-driven with 300ms debounce instead of 5s polling; pass
  structured JSON context (session, model, tokens, vim) via stdin;
  generation counter to ignore stale exec callbacks; EPIPE guard on stdin
- Footer: render status line as dedicated row with dimColor + truncate;
  suppress "? for shortcuts" hint when status line is active
- Add `/statusline` slash command that delegates to a statusline-setup agent
- Add `statusline-setup` built-in agent with PS1 conversion instructions
- Remove unrelated changes (whitespace, formatting, package-lock, test file)
- Fix copyright headers (Google LLC → Qwen)
- Fix config path references (~/.qwen-code → ~/.qwen)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 08:04:20 +08:00
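The generation counter mentioned in the status line rewrite above is a standard guard against out-of-order async callbacks: each refresh claims a new generation, and exec results from an older generation are dropped. A minimal sketch of the pattern, with illustrative names (`StatusLineRunner`, `ExecFn`) rather than the actual qwen-code identifiers:

```typescript
// Each refresh bumps `generation`; a callback compares the generation it
// captured against the current one and discards itself if stale, so a slow
// command can never overwrite the output of a newer one.
type ExecFn = (command: string, cb: (output: string) => void) => void;

class StatusLineRunner {
  private generation = 0;
  latest = '';

  constructor(private exec: ExecFn) {}

  refresh(command: string): void {
    const gen = ++this.generation; // claim a new generation
    this.exec(command, (output) => {
      if (gen !== this.generation) return; // stale callback: ignore
      this.latest = output;
    });
  }
}
```

The same shape works for any "latest request wins" situation (debounced shell exec, autocomplete queries) without needing to cancel the underlying process.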
wenshao
6784f0c02c feat(ui): add customizable status line
Allow users to configure a custom shell command to display in the UI footer status line.
2026-04-06 07:10:50 +08:00
tanzhenxin
0776627e0f
Merge pull request #2420 from huww98/fix/ctrl-y-retry-rate-limit
feat: allow Ctrl+Y to skip rate-limit retry delay immediately
2026-04-05 14:47:59 +08:00
tanzhenxin
6d9ee19dc6
Merge pull request #2834 from kulikrch/fix/theme-esc-cancel-2833-main
fix(cli): restore previous theme on /theme cancel (refs #2833)
2026-04-05 14:43:34 +08:00
tanzhenxin
fe4f2567c6
Merge pull request #2822 from qqqys/fix/cli_command
fix(cli): prevent ideCommand failure from breaking all slash commands…
2026-04-05 14:27:01 +08:00
YingchaoX
0be974b1d7 fix(cli): restore ? shortcuts in vim normal mode 2026-04-04 22:24:56 +08:00
chiga0
6fd29b698b fix: address PR review feedback for verbose/compact mode toggle
- Change default verboseMode to true (preserving current UX behavior)
- Fix compact mode hiding active shell output (add forceShowResult + isUserInitiated)
- Fix asymmetric frozen snapshot (freeze on ANY toggle during streaming)
- Fix copyright header in VerboseModeContext.tsx (Google LLC → Qwen)
- Add proper translations for all 6 locales (de/ja/pt/ru/zh/en)
- Rewrite CompactToolGroupDisplay with bordered box, i18n hint, shell detection
- Fix Pending status color (theme.text.secondary instead of theme.status.success)
- Fix description casing: ctrl+o → Ctrl+O
- Add explanatory comment for useCallback settings dependency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 20:43:06 +08:00
Shaojin Wen
3bce84d5da
feat(cli, webui): add follow-up suggestions feature (#2525)
* feat(cli, webui): add follow-up suggestions feature

Implement context-aware follow-up suggestions that appear after task
completion, suggesting relevant next actions like "commit this", "run
tests", etc.

- Add `followup/` module with types, generator, and rule-based provider
- Export follow-up types and functions from core index
- 8 default suggestion rules covering common workflows

- Add `useFollowupSuggestionsCLI` hook for Ink/React
- Integrate suggestion generation in AppContainer when streaming completes
- Add Tab key to accept, arrow keys to cycle through suggestions
- Display suggestions as ghost text in input prompt

- Add `useFollowupSuggestions` hook for React
- Update InputForm to display suggestions as placeholder
- Add CSS styling for suggestion appearance with counter
- Add keyboard handlers (Tab, arrow keys)

- After streaming completes with tool calls, suggestions appear
- Tab accepts the current suggestion
- Left/Right arrows cycle through multiple suggestions
- Typing or pasting dismisses the suggestion

- Shell command rules (tests, git, npm install) don't work yet due to
  history not storing tool arguments
- VSCode extension integration pending
- Web UI needs parent app integration for suggestion generation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve merge conflicts and build errors

- Rebased on upstream main (5d02260c8)
- Fixed JSX structure in InputPrompt.tsx
- Changed `return;` to `return true;` in follow-up handlers
- Added @agentclientprotocol/sdk to core package dependencies
- Restored correct BaseTextInput usage (self-closing, no children)
- Follow-up suggestions now shown via placeholder prop only

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove @agentclientprotocol/sdk from core package.json

The types are imported in fileSystemService.ts but the package
should not be a runtime dependency of core. It's provided by
the CLI package which depends on core. This was causing
package-lock.json sync issues on Node.js 24.x CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore alphabetical order of dependencies in core/package.json

* fix: restore package-lock.json from upstream to fix Node 24.x CI

* fix: resolve acpConnection test failure and ESLint warning

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style: apply prettier formatting after merge

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(followup): address review issues in follow-up suggestions

- Export followupState.ts from core index (was dead code)
- Refactor CLI and WebUI hooks to use shared followupReducers (eliminate duplication)
- Move side effects out of setState updaters via queueMicrotask
- Fix AppContainer useEffect dependency on unstable historyManager.history reference
- Reorder matchesRule to check pattern before condition (cheaper first)
- Make RuleBasedProvider collect from all matching rules with dedup and limit
- Add missing resetGenerator export for testing
- Add explicit implements SuggestionProvider to RuleBasedProvider
- Fix unstable followup object in useEffect dependency arrays
- Merge duplicate imports to fix eslint import/no-duplicates warnings
- Standardize copyright year to 2025
- Add test files for followupState, ruleBasedProvider, suggestionGenerator

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address review feedback from PR #2525

- Fix acceptingRef race: set lock synchronously before queueMicrotask
- Derive hasError/wasCancelled from actual tool call statuses
- Incorporate rule priority into suggestion priority calculation
- Clear suggestions immediately when setSuggestions([]) is called
- Add !completion.showSuggestions guard to Tab handler
- Fix onAcceptFollowup type from (string) => void to () => void
- Fix ToolCallInfo.name doc examples to match display names
- Scope CSS counter ::after to data-has-suggestion + empty conditions
- Reset regex lastIndex before test() for g/y flag safety
- Stabilize hook return with useMemo + onAcceptRef pattern
- Add @qwen-code/qwen-code-core as webui external + peerDependency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
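The "reset regex lastIndex before test() for g/y flag safety" fix above addresses a well-known JavaScript pitfall: a `RegExp` with the `g` or `y` flag is stateful, so a reused instance continues matching from `lastIndex` and can return `false` on text that actually matches. A small sketch (illustrative helper name, not the actual qwen-code code):

```typescript
// Resetting lastIndex before each test restores stateless behavior.
// The assignment is a no-op for regexes without the g/y flags.
function matchesPattern(pattern: RegExp, text: string): boolean {
  pattern.lastIndex = 0;
  return pattern.test(text);
}

const rule = /run tests/g;
// Without the reset, a second .test() on the same regex instance would
// resume from lastIndex past the first match and report false:
rule.test('please run tests now'); // advances lastIndex beyond the match
```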

* fix(followup): address second round of review feedback

- Scope CSS max-width to match counter condition (not count=1)
- Only dismiss followup on printable character input, not navigation keys
- Restrict tool_group scan to most recent contiguous block (current turn)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear suggestions on new turn, add search guards

- Clear followupSuggestions when streaming starts (Idle → Responding)
  to prevent stale suggestions from previous turns
- Add !reverseSearchActive && !commandSearchActive guards to Tab handler
  to avoid keybinding conflicts with search modes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address third round of review feedback

- Fix string pattern asymmetry: only match tool names when matchMessage=false
- Collect tool_groups from last user message boundary, not contiguous tail
- Flatten to individual tool calls before slicing to cap at 10 actual calls

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix arrow cycling guard and align rule conditions with patterns

- Remove unreliable textContent check for arrow cycling in WebUI InputForm;
  rely on inputText state which already accounts for zero-width spaces
- Add 'error' to fix/bug rule condition to match its regex pattern
- Add 'clean up' to refactor rule condition to match its regex pattern

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): reset acceptingRef in clear() to prevent deadlock

If clear() is called during accept debounce window, acceptingRef
could remain stuck true permanently. Now reset in clear().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): cancel pending timeout in dismiss() and accept()

Prevents stale suggestion timeout from re-showing suggestions
after user dismisses or accepts during the 300ms delay window.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
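The stale-timeout bug fixed above is the flip side of delayed display: if a suggestion is scheduled to appear after a delay (300ms in these commits) and the user dismisses it during that window, the pending timer must be cancelled or it re-shows the suggestion. A minimal sketch with an injected scheduler so the cancellation logic is testable without real timers; names are illustrative, not the actual qwen-code API:

```typescript
type Scheduler = {
  set(fn: () => void, ms: number): number;
  clear(id: number): void;
};

class SuggestionController {
  visible: string | null = null;
  private timer: number | null = null;

  constructor(private scheduler: Scheduler) {}

  show(text: string): void {
    this.cancelPending(); // a new show replaces any pending timer
    this.timer = this.scheduler.set(() => {
      this.timer = null;
      this.visible = text;
    }, 300);
  }

  dismiss(): void {
    this.cancelPending(); // a stale timer must not re-show the suggestion
    this.visible = null;
  }

  private cancelPending(): void {
    if (this.timer !== null) {
      this.scheduler.clear(this.timer);
      this.timer = null;
    }
  }
}
```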

* fix(followup): reset lastIndex in removeRules() for g/y flag safety

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(vscode-ide-companion): mark @qwen-code/qwen-code-core as external in webview esbuild

The webui package now declares @qwen-code/qwen-code-core as external in its
vite build config. Without this change, the vscode-ide-companion webview
esbuild (platform: 'browser') would try to bundle core's Node.js-only
dependencies (undici, @grpc/grpc-js, fs, stream, etc.), causing 562 build
errors during `npm ci`.

* fix: restore node_modules/@google/gemini-cli-test-utils workspace link in lockfile

The top-level workspace symlink entry was accidentally removed by a local
npm install in commit 004baaeb, which replaced it with a nested
packages/cli/node_modules/ entry. npm ci requires the top-level link entry
to be present in the lockfile, otherwise it fails with:
  "Missing: @google/gemini-cli-test-utils@0.13.0 from lock file"

Also syncs @qwen-code/qwen-code-core peerDependency into the lockfile
to match the updated packages/webui/package.json.

* refactor(followup): extract controller and improve rule matching

- Extract createFollowupController for unified state management across CLI and WebUI
- Refactor rule-based provider to match via assistant message keywords instead of tool arguments
- Add enableFollowupSuggestions user setting in UI category
- Decouple WebUI from @qwen-code/qwen-code-core by copying browser-safe state logic
- Add followupHistory.ts for extracting suggestion context from CLI history
- Add comprehensive tests for controller and rule matching scenarios
- Use --app-primary CSS variable for consistency

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(webui): import followup state from core package

- Remove followupState.ts from webui (moved to core)
- Import FollowupSuggestion, FollowupState types from core
- Add @qwen-code/qwen-code-core as peerDependency
- Add core to vite external list
- Update test to include id field in HistoryItem

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(followup): simplify generator, revert unrelated changes

- Collapse FollowupSuggestionsGenerator class into a single
  generateFollowupSuggestions() function (152 → 26 lines)
- Inline extractSuggestionContext into followupHistory.ts
- Remove unused RuleBasedProvider.addRule/removeRules methods
- Revert unrelated acpConnection.test.ts refactor
- Fix followupHistory.test.ts HistoryItem missing id field
- Reduce test verbosity (162 → 36 lines for generator tests)

* fix(followup): fix accept() deadlock and restore UMD globals mapping

- Wrap queueMicrotask callback in try/catch/finally to prevent accepting
  lock from being permanently held when onAccept throws
- Restore '@qwen-code/qwen-code-core': 'QwenCodeCore' in webui
  vite.config.ts globals (regression from d0f38a5f)
- Add test case verifying accept() recovers after callback exception
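The accept() deadlock fix above combines two details: the `accepting` debounce lock is set synchronously (before the microtask runs, so re-entrant accepts are rejected immediately), and the user callback runs inside try/catch/finally so a throwing `onAccept` is logged rather than swallowed and can never leave the lock held forever. A simplified sketch under those assumptions, with illustrative names:

```typescript
function makeAccept(onAccept: () => void) {
  let accepting = false;
  return function accept(): boolean {
    if (accepting) return false; // debounce: ignore re-entrant accepts
    accepting = true; // taken synchronously, before the microtask runs
    queueMicrotask(() => {
      try {
        onAccept();
      } catch (err) {
        console.error('accept callback failed:', err); // log, do not swallow
      } finally {
        accepting = false; // always release, even if onAccept throws
      }
    });
    return true;
  };
}
```

Without the `finally`, one throwing callback would permanently block every later accept — the deadlock the commits describe.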

* fix(followup): log accept callback errors instead of swallowing them

Replace empty catch {} with console.error to ensure onAccept errors
remain visible for debugging while still preventing deadlock via finally.
Update test to verify error is logged.

* refactor(webui): move followup hook to separate subpath entry

Move useFollowupSuggestions from the root entry to a dedicated
'@qwen-code/webui/followup' subpath so that consumers who only need
UI components are not forced to install @qwen-code/qwen-code-core.

- Add src/followup.ts as separate Vite lib entry
- Remove followup exports from src/index.ts
- Add ./followup exports map in package.json
- Mark @qwen-code/qwen-code-core as optional peerDependency
- Switch build from single-entry UMD to multi-entry ESM/CJS

* fix(webui): restore UMD build and isolate core from root type boundary

- Restore UMD output for root entry (used by CDN demos, export-html, etc.)
- Build followup subpath via separate vite.config.followup.ts to avoid
  Vite's multi-entry + UMD limitation
- Replace FollowupState import in InputForm.tsx with a local structural
  type (InputFormFollowupState) so root .d.ts no longer references
  @qwen-code/qwen-code-core
- Root entry (JS + UMD + .d.ts) is now fully free of core dependency;
  core is only required by '@qwen-code/webui/followup' subpath

* refactor(followup): replace rule-based suggestions with LLM-based prompt suggestion

Replace the hardcoded rule-based follow-up suggestion engine with an LLM-based
prompt suggestion system, aligned with Claude Code's NES (Next-step Suggestion)
architecture.

Core changes:
- Replace ruleBasedProvider with generatePromptSuggestion using BaseLlmClient.generateJson()
- Port Claude Code's SUGGESTION_PROMPT and 14 filter rules (shouldFilterSuggestion)
- Simplify state from multi-suggestion array to single string (FollowupState)
- Add framework-agnostic controller with Object.freeze'd initial state

Guard conditions (9 checks):
- Settings toggle, non-interactive/SDK mode, plan mode
- Permission/confirmation/loop-detection dialogs, elicitation requests
- API error response detection, conversation history limit (slice -40)

UI interaction (CLI + WebUI):
- Tab: fill suggestion into input
- Enter: accept and submit
- Right Arrow: fill without submitting
- Typing/paste: dismiss suggestion
- Autocomplete conflict prevention

Telemetry (PromptSuggestionEvent):
- outcome (accepted/ignored/suppressed), accept_method (tab/enter/right)
- time_to_accept_ms, time_to_ignore_ms, time_to_first_keystroke_ms
- suggestion_length, similarity, was_focused_when_shown, prompt_id
- Per-rule suppression logging with reason strings

Deleted files:
- ruleBasedProvider.ts/test, followupHistory.ts/test, types.ts (dead FollowupSuggestion type)

13 rounds of adversarial audit, 17 issues found and fixed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address qwen3.6-plus-preview review findings

P0: Fix API error detection — check pendingGeminiHistoryItems for error
items (API errors go to pending items, not historyManager.history).

P1: Don't log abort as 'error' in telemetry — aborts are normal user
behavior (user started typing), not errors.

P3: Early return in dismiss() when state already cleared, avoiding
redundant applyState call after accept().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(settings): update suggestion feature description to match current behavior

Remove outdated "arrow keys to cycle" text — the feature now uses
Tab/Right Arrow to accept and Enter to accept+submit (no cycling).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): fix WebUI Enter submitting empty text + defend onOutcome

P0/P1: WebUI Enter handler now passes suggestion text explicitly via
onSubmit(e, followupSuggestion) instead of relying on React setState
(which is async and would leave inputText as "" in the closure).

P3: Wrap onOutcome callbacks in try/catch in both accept() and dismiss()
so telemetry errors cannot block state transitions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): allow setSuggestion(null) when disabled + fix dts clobber

- setSuggestion(null) now always clears state/timers even when disabled,
  preventing stale suggestions from lingering after feature toggle.
- Set insertTypesEntry: false in followup vite config to prevent
  overwriting the main build's index.d.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(webui): thread explicitText through submit chain for Enter accept

handleSubmit and handleSubmitWithScroll now accept an optional
explicitText parameter. When provided (e.g., from prompt suggestion
Enter accept), it is used instead of the closure-captured inputText,
fixing the React setState race where onSubmit reads stale empty text.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — 4 fixes

- Enter accept: use buffer.text.length === 0 instead of !trim() to
  prevent whitespace-only input from triggering suggestion accept
- Move ref tracking from render body to useEffect to avoid
  render-time side effects in StrictMode/concurrent rendering
- Align PromptSuggestionEvent event.name to 'qwen-code.prompt_suggestion'
  matching the EVENT_PROMPT_SUGGESTION constant used by the logger
- Fix onOutcome JSDoc: remove mention of 'suppressed' (handled separately)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): address Copilot review — curated history, type compat, peer version

- Use curated history (getChat().getHistory(true)) to avoid invalid
  entries causing API 400 errors in suggestion generation
- Use method signature for onSubmit in InputFormProps to maintain
  bivariant compatibility with existing consumers under strictFunctionTypes
- Tighten @qwen-code/qwen-code-core peer dependency to >=0.13.1

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): add prompt cache sharing + speculation engine

Phase 1 — Forked Query (cache sharing):
- CacheSafeParams: snapshot of generationConfig (systemInstruction + tools)
  + curated history + model + version, saved after each successful main turn
- createForkedChat: isolated GeminiChat sharing the same cache prefix for
  DashScope cache_control hit
- runForkedQuery: single-turn request via forked chat with JSON schema support
- suggestionGenerator: uses forked query when CacheSafeParams available,
  falls back to BaseLlmClient.generateJson otherwise
- GeminiChat.getGenerationConfig(): new getter for cache param snapshots
- Feature flag: enableCacheSharing (default: false)

Phase 2 — Speculation (predictive execution):
- OverlayFs: copy-on-write filesystem for speculation file isolation
  (/tmp/qwen-speculation/{pid}/{id}/), handles new files + existing files
- speculationToolGate: tool boundary enforcement using AST-based shell
  checker (not deprecated regex), write tools gated by ApprovalMode
  (only auto-edit/yolo allow overlay writes)
- speculation.ts: startSpeculation (on suggestion display), acceptSpeculation
  (on Tab/Enter — copies overlay to real FS, injects history via addHistory),
  abortSpeculation (on user input/new turn — cleanup overlay)
- Custom execution loop: toolRegistry.getTool → tool.build → invocation.execute
  (bypasses CoreToolScheduler — permission handled by toolGate)
- ensureToolResultPairing: strips unpaired functionCalls at boundary
- Boundary-aware tool result preservation: keeps executed tool results
  even when boundary truncates remaining calls
- Feature flag: enableSpeculation (default: false)

Telemetry:
- SpeculationEvent: outcome, turns_used, files_written, tool_use_count,
  duration_ms, boundary_type, had_pipelined_suggestion
- logSpeculation logger function

Security:
- Write tools only allowed in auto-edit/yolo mode during speculation
- Shell commands gated by isShellCommandReadOnlyAST (AST parser)
- Unknown/MCP tools always hit boundary (safe default)
- All structuredClone for cache param isolation

4 rounds of adversarial audit, 20+ issues found and fixed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
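The copy-on-write overlay described in the speculation commit above can be reduced to three operations: writes land only in the overlay, reads prefer the overlay and fall back to the real filesystem, and accepting the speculation applies overlay entries to the real store (aborting just discards them). The actual OverlayFs works on disk under /tmp/qwen-speculation/; this in-memory, map-based version is only a sketch of the resolution logic:

```typescript
class OverlayStore {
  private overlay = new Map<string, string>();

  constructor(private real: Map<string, string>) {}

  write(path: string, content: string): void {
    this.overlay.set(path, content); // never touches the real store
  }

  read(path: string): string | undefined {
    // Overlay wins so speculative reads see speculative writes.
    return this.overlay.has(path) ? this.overlay.get(path) : this.real.get(path);
  }

  applyToReal(): void {
    // Accept: copy overlay entries into the real store.
    for (const [path, content] of this.overlay) this.real.set(path, content);
    this.overlay.clear();
  }

  discard(): void {
    this.overlay.clear(); // abort: the real store is untouched
  }
}
```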

* fix(followup): address Copilot review — curated history, type compat, peer version

- Move web_fetch/web_search from SAFE_READ_ONLY to BOUNDARY tools
  (they require user confirmation for network requests)
- Add overlay read path resolution for read tools (resolveReadPaths)
  so speculative reads see overlay-written files
- Wire enableCacheSharing setting into generatePromptSuggestion
- Fix esbuild comment to not hardcode webui version

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): use index-based tracking for boundary tool pairing

Track executed function calls by order (first N matching
functionResponses.length) instead of by name. Fixes incorrect
pairing when model emits multiple calls with the same tool name.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
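The index-based pairing fix above rests on ordering: with N functionResponses present, the first N functionCalls in emission order are the executed ones, which stays correct even when the model emits several calls with the same tool name (where name-based matching mis-pairs). A sketch with simplified stand-in types:

```typescript
interface Call {
  name: string;
  id: number;
}

// The first `responseCount` calls are paired with responses; the rest are
// unpaired and get stripped at the speculation boundary.
function splitExecuted(calls: Call[], responseCount: number) {
  return {
    executed: calls.slice(0, responseCount),
    unpaired: calls.slice(responseCount),
  };
}
```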

* fix(speculation): handle undefined functionCall.name + wrap rewritePathArgs

- Skip functionCall parts with missing name instead of non-null assertion
- Wrap rewritePathArgs in try/catch — treat path rewrite failure as boundary

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): pipelined suggestion, UI rendering, dismiss abort

- Pipelined suggestion: after speculation completes, generate next
  suggestion using augmented context. Promoted on accept.
- UI rendering: completed speculation results rendered via historyManager.
- Dismiss abort: typing/pasting calls dismissPromptSuggestion → clears
  promptSuggestion → useEffect aborts running speculation immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): clear cache on reset, truncate history, fix test + comment

- Clear CacheSafeParams on startChat/resetChat to prevent cross-session leakage
- Truncate history to 40 entries before deep clone in saveCacheSafeParams
  to reduce CPU/memory overhead on long sessions
- Update stale comment about speculation dismiss lifecycle
- Add onAccept assertion to accept test with proper microtask flush

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): add prompt suggestion design documentation

- prompt-suggestion-design.md: architecture, generation, filtering, state
  management, keyboard interaction, telemetry, feature flags
- speculation-design.md: copy-on-write overlay, tool gate security, boundary
  handling, pipelined suggestion, forked query cache sharing
- prompt-suggestion-implementation.md: implementation status, test coverage,
  audit history, Claude Code alignment tracking

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): align catch comment with silent behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): wire augmented context into pipelined suggestion + guard Tab/Right

- Pipelined suggestion now includes the accepted suggestion text and
  speculated model response as context for the next prediction
- Tab/ArrowRight handlers only preventDefault when onAcceptFollowup
  is provided, preventing key interception without a wired callback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(speculation): filter thought parts + add filePath to path keys

- Skip thought/reasoning parts from model responses to prevent leaking
  internal reasoning into speculated history
- Add 'filePath' to path rewrite key list for LSP and other tools that
  use camelCase argument names

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(overlay): resolve relative paths against realCwd not process.cwd

Relative tool paths are now resolved against the overlay's realCwd
before computing the relative path, preventing incorrect outside-cwd
detection when process.cwd() differs from config.getCwd().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix 4 doc-code inconsistencies

- Guard conditions: clarify 13 code checks vs 11 table categories,
  separate feature flags from guard block, add streaming transition
- Filter rules: 14 → 12 (actual count in code and table)
- BOUNDARY_TOOLS: add todo_write + exit_plan_mode to doc table
- SpeculationEvent: 8 → 7 fields (matching code)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): turns_used metric + reuse SUGGESTION_PROMPT + reduce clones

- turns_used: count only model messages (not all Content entries)
  to accurately reflect LLM round-trips instead of inflated 3x count
- Pipelined suggestion: reuse exported SUGGESTION_PROMPT from
  suggestionGenerator instead of a degraded local copy, ensuring
  consistent quality (EXAMPLES, NEVER SUGGEST rules included)
- createForkedChat: replace redundant structuredClone with shallow
  copies since params are already deep-cloned snapshots

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): speculation UI tool rendering + speculationModel setting

- Speculation UI: render tool calls as tool_group HistoryItems with
  structured name/description/result instead of plain text only
- speculationModel setting: allows using a cheaper/faster model for
  speculation and pipelined suggestion. Leave empty to use main model.
  Passed through startSpeculation → runSpeculativeLoop → pipelined.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): sync docs with latest code changes

- Add speculationModel setting to feature flags table
- Document tool_group UI rendering in speculation accept flow
- Fix createForkedChat: deep clone → shallow copy (already cloned snapshots)
- Document pipelined suggestion SUGGESTION_PROMPT reuse
- Add Model Override and UI Rendering sections to speculation-design
- Update line counts to match actual file sizes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): add unit tests for overlayFs, toolGate, forkedQuery

overlayFs (15 tests): COW write, read resolution, apply, cleanup, path traversal
speculationToolGate (24 tests): tool categories, approval mode gating, shell AST, path rewrite
forkedQuery (6 tests): cache params save/get/clear, deep clone, version detection

Total: 27 → 173 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): P0-P2 test coverage for speculation + controller + toolGate

speculation.test.ts (7 tests):
- ensureToolResultPairing: empty, no calls, paired, unpaired text+call,
  unpaired call-only, user-ending, empty parts

followupState.test.ts (+8 tests = 15 total):
- onOutcome: accepted/tab, ignored/dismiss, error caught, no-op when cleared
- clear(): resets accepting lock allowing re-accept
- double accept blocked by debounce
- setSuggestion replaces pending timer

speculationToolGate.test.ts (+3 tests = 27 total):
- resolveReadPaths: overlay path after write, unchanged when not written
- rewritePathArgs: path key coverage

Total: 173 → 190 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(followup): smoke tests + P0-P2 coverage gaps

smoke.test.ts (21 tests): E2E verification across modules
- Filter against realistic LLM outputs (9 good + 7 bad + reason check)
- OverlayFs full round-trip (write → read → apply → verify)
- ToolGate → OverlayFs integration (write redirect → read resolve)
- CacheSafeParams lifecycle (save → mutate → isolation → clear)
- ensureToolResultPairing orphaned functionCalls

followupState.test.ts (+8 tests):
- onOutcome: accepted/tab, ignored/dismiss, error caught, no-op cleared
- clear(): resets accepting lock
- double accept debounce
- setSuggestion replaces pending timer

speculationToolGate.test.ts (+3 tests):
- resolveReadPaths through overlay after write
- path key coverage for rewritePathArgs

Export ensureToolResultPairing for testing.

Total: 190 → 211 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): dismiss aborts suggestion, boundary skip inject, parentSignal check

- dismissPromptSuggestion now also aborts suggestionAbortRef to prevent
  race between dismiss and in-flight startSpeculation
- Boundary speculation: skip acceptSpeculation (which injects history),
  fall through to normal addMessage to avoid duplicate user turns
- startSpeculation: check parentSignal.aborted upfront before starting
- Speculation rendering: use index-based loop instead of indexOf O(n²)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): fix speculation accept diagram — boundary skips inject

The architecture diagram now shows the branching logic: completed
speculations go through acceptSpeculation (inject + render), while
boundary speculations are discarded and the query is submitted fresh
via addMessage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): enable cache sharing by default

enableCacheSharing now defaults to true. This is a pure cost
optimization with no behavioral change — suggestion generation
uses the forked query path (sharing the main conversation's
prompt cache prefix) when CacheSafeParams are available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): aborted parent skips loop, acceptSpeculation try/finally, doc sync

- startSpeculation: return aborted state immediately when parentSignal
  is already aborted, without creating overlay or starting loop
- acceptSpeculation: wrap in try/finally to guarantee overlay cleanup
  even if applyToReal or addHistory throws
- Doc: enableCacheSharing default false → true (matches code)
- Doc: update test count table (7 → 15 followupState, add 6 new files)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): remove debug logs, add function calling fallback for non-FC models

- Remove all followup-debug process.stderr.write logs
- Add direct text fallback in generateViaBaseLlm when generateJson
  returns {} (model doesn't support function calling, e.g., glm-5.1)
- Add CJK text support in filter: skip whitespace-based word count
  for Chinese/Japanese/Korean text, use character count instead

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
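The CJK filter change above fixes a length heuristic: whitespace-based word counts treat a Chinese/Japanese/Korean sentence as one "word", so suggestions in those languages were mis-measured. A sketch of the fallback, where the Unicode ranges and helper name are illustrative assumptions rather than the actual qwen-code filter code:

```typescript
// Rough CJK detection: Hiragana/Katakana, CJK Unified Ideographs, Hangul.
const CJK = /[\u3040-\u30ff\u3400-\u9fff\uac00-\ud7af]/;

function suggestionLength(text: string): number {
  if (CJK.test(text)) {
    return [...text].length; // character count for CJK text
  }
  return text.trim().split(/\s+/).length; // whitespace word count otherwise
}
```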

* feat(followup): add suggestionModel setting for faster suggestion generation

New setting `suggestionModel` allows using a smaller/faster model
(e.g., qwen-turbo) for prompt suggestion generation instead of the
main conversation model. Reduces suggestion latency significantly.

Passed through: settings → AppContainer → generatePromptSuggestion
→ generateViaForkedQuery / generateViaBaseLlm (both paths).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(followup): suggestionModel setting, /stats tracking, /about display

- suggestionModel: new setting to use a faster model for suggestion
  generation (e.g., qwen3.5-flash instead of main model glm-5.1)
- /stats: suggestion API calls now report usage to UiTelemetryService
  so token consumption appears in /stats model breakdown
- /about: shows Suggestion Model field (configured or main model)

Also:
- Function calling fallback for non-FC models (direct text generation)
- CJK text support in word count filter (character-based for Chinese)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: add Suggestion Model translations for /about display

en: Suggestion Model | zh: 建议模型 | ja: 提案モデル
de: Vorschlagsmodell | pt: Modelo de Sugestão | ru: Модель предложений

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): always use generateContent for suggestion (not generateJson)

generateJson doesn't expose usageMetadata, so /stats can't track
suggestion model tokens. Switch to direct generateContent which
always returns usage data. Also simplifies the code by removing
the function-calling + fallback dual path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
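
As a rough illustration of the switch, a parser over a simplified stand-in response shape (the types below are assumptions, not the real generateContent result) could look like:

```typescript
// Simplified stand-in for a generateContent result, which (unlike generateJson
// here) carries usageMetadata. Field names are assumptions for illustration.
interface SuggestionResponse {
  text: string;
  usageMetadata?: { promptTokenCount: number; candidatesTokenCount: number };
}

/** Strip an optional ```json fence, parse the suggestion payload, and surface token usage. */
function parseSuggestion(resp: SuggestionResponse): { suggestion: string; totalTokens: number } {
  const body = resp.text.replace(/^```(?:json)?\s*/i, '').replace(/\s*```$/, '').trim();
  let suggestion = body;
  try {
    const parsed = JSON.parse(body);
    if (typeof parsed.suggestion === 'string') suggestion = parsed.suggestion;
  } catch {
    // Plain-text fallback for models that ignore the JSON instruction.
  }
  const usage = resp.usageMetadata;
  const totalTokens = usage ? usage.promptTokenCount + usage.candidatesTokenCount : 0;
  return { suggestion, totalTokens };
}
```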

* fix(followup): fix /stats tracking — use ApiResponseEvent constructor

Use ApiResponseEvent class constructor with proper response_id and
override event.name to match UiEvent type for UiTelemetryService
switch statement. This ensures suggestion model token usage appears
in /stats model output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* i18n: fix Chinese translation for Suggestion Model

"建议模型" ("Suggestion Model") → "提示建议模型" ("Prompt Suggestion Model") to avoid ambiguity.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(followup): merge suggestionModel + speculationModel into fastModel

Single unified setting for all background tasks: suggestion generation,
speculation, pipelined suggestions, and future background tasks.

Users only need to understand one concept: main model for conversation,
fast model for background tasks.

- Remove: suggestionModel, speculationModel
- Add: fastModel (ui.fastModel in settings.json)
- Update /about display: "Fast Model" with i18n translations
- Update all 6 locale files (en/zh/ja/de/pt/ru)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(settings): move fastModel to top-level (parallel to model)

fastModel is an independent model concept, not a property of the
main model. Move from model.fastModel to top-level settings.fastModel.

Config: { "fastModel": "qwen3.5-flash", "model": { "name": "glm-5.1" } }

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
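
The resolution order this implies can be sketched as below, with simplified stand-in settings types (not the real Settings interface):

```typescript
// Stand-in for the settings shape shown in the config example above.
interface Settings {
  fastModel?: string;        // top-level, parallel to model
  model?: { name?: string }; // main conversation model
}

/** Background tasks use fastModel when set, otherwise fall back to the main model. */
function resolveBackgroundModel(settings: Settings): string | undefined {
  return settings.fastModel ?? settings.model?.name;
}
```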

* fix(followup): report usage in both forkedQuery and baseLlm paths

The forkedQuery path (used when enableCacheSharing=true) was not
reporting token usage to UiTelemetryService, so /stats model didn't
show the fast model. Now both paths report usage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add /model --fast command to set fast model


Usage:
  /model --fast qwen3.5-flash  — set fast model
  /model --fast                — show current fast model
  /model                       — open model selection dialog (unchanged)

Saves to user settings (SettingScope.User).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(design): update to fastModel (replace suggestionModel/speculationModel)

- prompt-suggestion-design.md: speculationModel → fastModel (top-level)
- speculation-design.md: Model Override → Fast Model, update description
- prompt-suggestion-implementation.md: update settings description

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): /model --fast opens model selection dialog for fast model

When called without a model name, /model --fast now opens the same
model selection dialog used by /model, but selecting a model saves
it as fastModel instead of switching the main model.

- useModelCommand: add isFastModelMode state
- ModelDialog: intercept selection in fast model mode, save to fastModel
- DialogManager: pass isFastModelMode prop to ModelDialog
- types.ts: add 'fast-model' dialog type

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): pass resolved model (not undefined) to runForkedQuery

model: modelOverride → model: model (which has the fallback applied)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): /model --fast defaults to current fast model in dialog

When opening the model selection dialog via /model --fast, the
currently configured fastModel is pre-selected instead of the
main model.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(cli): add --fast tab completion for /model command

/model <Tab> now shows --fast as a completion option with description.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(schema): regenerate settings.schema.json with new followup settings

Adds enableCacheSharing, enableSpeculation, and fastModel to the
generated JSON schema so CI validation passes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(test): update tests for new Fast Model field in system info

Add "Fast Model" to expected labels in systemInfoFields and bugCommand
tests to match the new field added to /about and bug report output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: trigger PR synchronize event

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 4)

- modelCommand: use getPersistScopeForModelSelection for fastModel,
  return meaningful info message instead of empty content
- ModelDialog: handle $runtime|authType|modelId format in fast-model mode
- forkedQuery: return structuredClone from getCacheSafeParams
- client: fix stale comment about history truncation order
- speculation: detect abort in .then() handler, set 'aborted' status
  and cleanup overlay to prevent leaks
- docs: update test count table

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): add followup suggestions user manual

- New feature page: followup-suggestions.md covering usage, keybindings,
  fast model configuration, settings, and quality filters
- commands.md: add /model --fast command reference
- settings.md: add enableFollowupSuggestions, enableCacheSharing,
  enableSpeculation, and fastModel settings documentation
- _meta.ts: register new page in navigation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(users): audit fixes for followup suggestions documentation

- followup-suggestions.md: add 300ms delay, WebUI support, plan mode
  guard, non-interactive guard, slash commands as single-word, meta/error
  filters, character limit
- settings.md: move fastModel next to model section, add /model --fast
  cross-reference and link to feature page
- overview.md: add followup suggestions to feature list
- i18n: add missing translations for 'Set fast model for background
  tasks' and 'Fast model updated.' in all 6 locales

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Copilot review comments (batch 5)

- modelCommand: remove duplicate info message (keep addItem only)
- followup-suggestions.md: clarify WebUI requires host app wiring
- speculation-design.md: fix abort telemetry description
- i18n: add missing translations for fast model strings

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): remove duplicate message in /model --fast command

Use return message instead of addItem + empty return to avoid
blank INFO line in history. Also handle missing settings service.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(i18n): remove unused 'Fast model updated.' translations

The /model --fast command now returns the model name directly
instead of using this string. Remove dead translations.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(followup): disable thinking mode for suggestion and speculation

Forked queries inherit the main conversation's generationConfig which
may have thinkingConfig enabled. This wastes tokens and adds latency
for background tasks that don't need reasoning. Explicitly set
thinkingConfig.includeThoughts=false in both paths:
- createForkedChat (covers forked query + speculation)
- generateViaBaseLlm (non-cache-sharing fallback)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
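
A minimal sketch of that override, over a simplified generationConfig shape (the types here are assumptions, not the real SDK config):

```typescript
// Simplified stand-in for the inherited generation config.
interface GenerationConfig {
  temperature?: number;
  thinkingConfig?: { includeThoughts?: boolean; thinkingBudget?: number };
}

/** Clone the inherited config and force thoughts off for background (suggestion/speculation) calls. */
function withThinkingDisabled(inherited: GenerationConfig): GenerationConfig {
  return {
    ...inherited,
    thinkingConfig: { ...inherited.thinkingConfig, includeThoughts: false },
  };
}
```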

* docs: document thinking mode auto-disable for background tasks

- User docs: note that thinking is auto-disabled for suggestions/speculation
- Design docs: detail thinkingConfig override in both forked query and
  BaseLlm paths, explain why cache hits are unaffected

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: jinjing.zzj <jinjing.zzj@alibaba-inc.com>
Co-authored-by: yiliang114 <1204183885@qq.com>
2026-04-03 20:07:23 +08:00
思晗
7c1fe2d938 fix(cli): ensure correct ordering of hook system messages and AI responses
Commit pending AI response before adding HookSystemMessage to history
to prevent "Stop says:" block from appearing above the AI's reply.
2026-04-03 15:45:47 +08:00
KULIKRCH_HUAWEI\rocks
0104569fdd fix(cli): restore previous theme on /theme cancel (refs #2833) 2026-04-02 18:25:00 +03:00
qqqys
594fadbe94 fix(cli): prevent ideCommand failure from breaking all slash commands (#2785) 2026-04-02 14:08:05 +08:00
tanzhenxin
76d64c9464
Merge pull request #2731 from QwenLM/feat/in-session-cron-loops
feat(cron): add in-session loop scheduling with cron tools
2026-04-01 16:18:46 +08:00
DennisYu07
06a0f4797d
Merge pull request #2696 from QwenLM/feat/hooks-refactor-ui-event
refactor(ui): improve hook event handling with dedicated history items
2026-04-01 15:56:17 +08:00
DennisYu07
5221002831 remove hooks experimental and refactor hook Config 2026-04-01 11:50:23 +08:00
DennisYu07
1a7510d85e fix loss of stopHookCount 2026-03-31 20:22:51 +08:00
tanzhenxin
aa454a5a72 feat(cron): add distinct Cron message type and exit summary
- Introduce SendMessageType.Cron to differentiate cron-triggered prompts
  from user queries
- Skip UserPromptSubmit hook for cron messages
- Add getExitSummary() to display active loops when session ends
- Add tests for exit summary functionality

This improves cron loop handling by treating scheduled prompts
differently from user-initiated queries and provides better UX
when sessions end with active loops running.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-30 17:00:19 +08:00
tanzhenxin
a3623fd819 feat(cron): add interactive E2E tests and fix cron trigger reactivity
- Add getScreenText() to TerminalCapture for reading rendered xterm.js screen
- Add E2E tests for in-session cron: inline firing, user priority, error resilience
- Fix cron prompts not processing by adding cronTrigger state dependency

This ensures cron-injected prompts are processed immediately when fired,
not just when streaming state changes, and provides comprehensive test
coverage for the in-session cron feature.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-29 04:22:28 +00:00
tanzhenxin
439a1a46e2 feat(cron): make cron tools opt-in via experimental settings
Change cron/loop tools from opt-out to opt-in. Cron tools are now
disabled by default and can be enabled via:
- settings.json: { "experimental": { "cron": true } }
- Environment variable: QWEN_CODE_ENABLE_CRON=1

This ensures experimental features are explicitly enabled by users
who want to try them.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-29 02:25:28 +00:00
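
The opt-in check could be sketched like this; the helper name and settings shape are illustrative, not the actual config interface:

```typescript
// Stand-in for the experimental settings block described in the commit above.
interface ExperimentalSettings {
  experimental?: { cron?: boolean };
}

/** Cron tools are off by default; enabled only by explicit setting or QWEN_CODE_ENABLE_CRON=1. */
function isCronEnabled(
  settings: ExperimentalSettings,
  env: Record<string, string | undefined>,
): boolean {
  return settings.experimental?.cron === true || env['QWEN_CODE_ENABLE_CRON'] === '1';
}
```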
tanzhenxin
c4ae7bf0cd test(cli): add cron config mocks to test fixtures
- Add isCronDisabled mock returning true
- Add getCronScheduler mock returning null

This aligns test mocks with the new cron scheduler config interface.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-29 00:58:00 +00:00
tanzhenxin
aa4939111c feat(cron): add in-session loop scheduling with cron tools
Add session-scoped recurring jobs that fire while you work. Jobs live
inside the current Qwen Code process and are gone when you exit.

New tools:
- cron_create: schedule a prompt to run on a cron expression
- cron_list: list active cron jobs
- cron_delete: cancel a scheduled job

Components:
- CronScheduler service for in-process job management
- cronParser utility for 5-field cron expressions
- /loop skill for natural language scheduling
- Non-interactive mode integration to keep process alive

Constraints:
- Max 50 jobs per session
- 3-day expiry for recurring jobs
- Jitter to prevent thundering herd
- No catch-up for missed fire times

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-03-28 14:37:29 +00:00
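
A toy matcher for the 5-field expressions handled by cronParser might look like the sketch below; it supports only `*`, plain numbers, and comma lists, far less than the real utility:

```typescript
// Toy sketch only: the real cronParser also handles ranges, steps, jitter, etc.
function fieldMatches(field: string, value: number): boolean {
  if (field === '*') return true;
  return field.split(',').some((part) => Number(part) === value);
}

/** True when the date matches "minute hour day-of-month month day-of-week" under the toy grammar. */
function cronMatches(expr: string, date: Date): boolean {
  const [min, hour, dom, month, dow] = expr.trim().split(/\s+/);
  return (
    fieldMatches(min, date.getMinutes()) &&
    fieldMatches(hour, date.getHours()) &&
    fieldMatches(dom, date.getDate()) &&
    fieldMatches(month, date.getMonth() + 1) && // cron months are 1-based
    fieldMatches(dow, date.getDay())
  );
}
```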
DennisYu07
8dfd981af0 Merge branch 'main' into feat/hooks-refactor-ui-event 2026-03-27 13:59:33 +08:00
DennisYu07
cf0b67ef8e refactor ui for stop hook reason and systemMessage 2026-03-27 10:54:16 +08:00
LaZzyMan
3b2d50fad6 fix: @ file search stops working after selecting a slash command (#2518) 2026-03-27 10:47:55 +08:00
Mingholy
00447356ad
Merge pull request #2602 from QwenLM/feat/hooks-refactor-hooks-ui
feat(hooks ui): refactor ui for Qwen Code hooks
2026-03-26 20:11:50 +08:00
DennisYu07
a5c6084222 refactor ui for stop hook and userPromptSubmit 2026-03-25 20:44:55 +08:00