Commit graph

2334 commits

Author SHA1 Message Date
Shaojin Wen
ef29700bce
fix(ui): trim background task results and show newest first (#4094) (#4125)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* fix(ui): trim background task results and show newest first (#4094)

Two related improvements to the background task pill and dialog:

1. Trim outdated terminal task results.
   `BackgroundTaskRegistry` and `BackgroundShellRegistry` now cap
   retained terminal entries at 32 each (mirroring `MonitorRegistry`'s
   existing `MAX_RETAINED_TERMINAL_MONITORS` pattern). Running, paused,
   and cancelled-but-not-yet-notified entries are never evicted —
   pruning a not-yet-notified entry would break the SDK contract that
   every `register` pairs with exactly one terminal `task-notification`.

2. Show newest tasks at the top of the dialog.
   `useBackgroundTaskView` now sorts entries by `startTime` descending
   so the dialog opens with the cursor on the most recently launched
   task. `LiveAgentPanel` reverses internally back to ASC for its own
   visual layout (newest row sits closest to the composer).

* perf(shell-registry): batch abortAll prune + statusChange into one pass

abortAll() previously delegated to cancel() per entry, so each running
shell triggered its own pruneTerminalEntries() and statusChange wakeup.
On shutdown / `/clear` with N running shells the only subscriber
(useBackgroundTaskView) re-pulled getAll() N times for what is logically
a single batch transition.

Settle each entry inline via the new private settleAsCancelled() helper,
then fire prune + statusChange exactly once after the loop. The split
keeps the running-status guard at the public-API boundary so callers
can't accidentally re-settle a terminal entry.

* fix(ui): two-bucket sort so running tasks outrank fresh terminals

The earlier startTime DESC sort surfaced the newest LAUNCH but let an
older long-running / paused entry get pushed below a batch of newer
terminal entries — the user opening the dialog to check on the running
work would find it buried under stale completed rows.

Split the merge into two buckets:
  - active (running + paused): sorted by startTime DESC so the most
    recent launch sits at the very top of the dialog.
  - terminal (completed / failed / cancelled): sorted by endTime DESC
    so the most recently FINISHED entry leads the terminal section
    (matches "what changed while I wasn't looking" intuition; a long
    task that just settled outranks an old quick task that finished
    hours ago).

Pin the new behavior with two tests covering active-above-terminal
and the endTime-vs-startTime distinction inside the terminal bucket.

* fix: add missing outputFile and isBackgrounded to retention cap tests

The merge brought in required fields on AgentTaskRegistration that the
retention-cap test helpers were not supplying.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-17 09:13:24 +08:00
qqqys
07165a095c
Add stop hook blocking cap (#4208)
* feat(core): add stop hook blocking cap

* fix(core): tighten stop hook cap behavior

* fix(cli): show goal judge details

* fix(core): bound stop hook blocking cap

* fix(core): surface subagent stop cap warnings

* fix(core): clean up stop hook cap loop

* test(core): cover stop hook cap integrations

* test(core): strengthen stop hook cap coverage
2026-05-17 06:52:56 +08:00
jinye
0eed884c0b
fix(rewind): restore upstream TOCTOU ordering + heal sticky failed marker (#4216)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* fix(rewind): restore upstream TOCTOU ordering + heal sticky failed marker

Two related bugs that landed in #4064:

1. **trackEdit widened the pre-write TOCTOU window**. PR #4064 inserted
   `await trackEdit(filePath)` between `checkPriorRead` and
   `writeTextFile` in both `edit.ts` and `write-file.ts`. trackEdit does
   `stat` + `copyFile` and on large files can take hundreds of
   milliseconds. The pre-existing comment above `checkPriorRead`
   acknowledged a stat-then-write race window of "two adjacent
   syscalls"; the new ordering widened that to "freshness check →
   potentially-multi-second backup → write", so an external mutation
   landing during the backup would no longer be detected before the
   write clobbered it.

   The fix is the ordering upstream `claude-code/src/tools/FileEditTool`
   uses, with its own explicit comment on the equivalent block:

   > These awaits must stay OUTSIDE the critical section below — a
   > yield between the staleness check and writeTextContent lets
   > concurrent edits interleave.

   Moved `trackEdit` to BEFORE `checkPriorRead` in both tools. Backups
   are idempotent (deterministic `{hash}@v{version}` filename), so
   running the backup on a path that ends up rejected just leaves an
   unused-but-correct backup of the pre-edit state, not corrupt state.

2. **The `failed` marker added in d59838338 stayed sticky in `trackEdit`**.
   When `makeSnapshot` recorded `failed: true` for a file (transient
   I/O error during per-file backup), the next `trackEdit` for that
   file skipped because the early guard only checked existence, not
   the failed bit. The pre-edit state never got re-captured even
   after the underlying I/O recovered, leaving the snapshot marked
   filesFailed forever for that path.

   The guard now skips only on confirmed (non-failed) entries:
   `if (existing && !existing.failed) return;`. The re-check guard
   at the end is updated symmetrically so the fresh backup actually
   replaces the failed entry (instead of being silently dropped by
   the "already present" branch).

Tests:

- `edit.test.ts`: pin the TOCTOU ordering via spy on `fileReadCache.check`
  and `mockFileHistoryService.trackEdit`. Verifies `trackEdit` is called
  before the last `cache.check` (the pre-write freshness guard). Same
  pattern in `write-file.test.ts`. Both tests fail on the old ordering
  (verified by stashing the production change).
- `fileHistoryService.test.ts`: `heals a failed entry on the next
  trackEdit attempt` — sabotage storage so makeSnapshot records
  failed, restore storage, run trackEdit, assert the entry is now
  non-failed with a real backupFileName.

* test(rewind): switch TOCTOU tests to behavioral assertions + harden heal test

Address review feedback from code-reviewer agent:

- The TOCTOU ordering tests previously asserted ordering via
  `callOrder.lastIndexOf('cache.check')`, which would silently degrade
  to a tautology if a future refactor added any `cache.check` call
  AFTER the pre-write check (e.g. the deferred atomic-write + content-
  hash post-check explicitly mentioned in the comment above the
  freshness guard).

  Switch to a behavioral assertion that directly tests the invariant:
  install a `trackEdit` mock that mutates the file on disk (bumps
  mtime) before returning. The pre-write `checkPriorRead` must catch
  the in-trackEdit mutation — that only happens if `trackEdit` runs
  BEFORE the pre-write check. The broken ordering would run pre-write
  check first (passing on pre-mutation stats), then trackEdit (which
  mutates), then write (which clobbers the external mutation).

  Asserting on `result.error?.type === FILE_CHANGED_SINCE_READ`
  exercises the actual race protection the fix exists to preserve,
  and survives any future refactor that keeps the invariant intact
  regardless of how many `cache.check` calls happen on the way.

  Verified by reverting just the production change to HEAD~1: both
  behavioral tests fail under the broken ordering (file is silently
  overwritten, no FILE_CHANGED_SINCE_READ error), and pass under the
  fix.

- The heal test (`heals a failed entry on the next trackEdit attempt`)
  previously asserted `backupFileName !== null` but not that the file
  on disk at that name actually contained the current content. Add a
  `readFile + expect(...).toBe('p2-content')` assertion so a regression
  that wrongly reuses `previous.backupFileName` (pointing to
  `p1-content`) fails loudly instead of silently passing.
2026-05-17 02:01:58 +08:00
易良
ba77ddd81b
fix(lsp): expose status and startup diagnostics (#3649)
* feat(lsp): add /lsp slash command to show server status

Implements the /lsp command that displays the status of all configured
LSP servers. Previously this was documented in the FAQ but never
implemented, leaving users with no way to check if their language
servers started successfully.

Changes:
- Add LspServerStatusInfo interface to lsp/types.ts
- Add getServerStatus() to LspClient and NativeLspClient
- Expose getServerHandles() from NativeLspService
- Create lspCommand.ts with status table output
- Register /lsp in BuiltinCommandLoader (only when LSP is enabled)

The command shows: server name, command, languages, and status
(NOT_STARTED / IN_PROGRESS / READY / FAILED + error message).

* fix(lsp): expose status and startup diagnostics

* fix(lsp): harden status command diagnostics

* fix(lsp): add stderr error listener and harden initialization error handling

- Add stderr 'error' event listener in LspConnectionFactory to prevent
  unhandled stream errors from crashing the process
- Wrap setLspInitializationError calls in try-catch in config.ts to guard
  against post-initialization state changes that would throw
2026-05-17 01:42:28 +08:00
jinye
54fd5c50f0
feat(telemetry): add detailed sensitive span attributes (#4097)
Layer detailed content attributes onto the existing hierarchical spans
(qwen-code.interaction / qwen-code.llm_request / qwen-code.tool) gated
by includeSensitiveSpanAttributes:

- Interaction span: user prompt (new_context)
- LLM request span: system prompt + hash + preview + length (full text
  deduped per session via SHA-256), tool schemas (per-tool tool_schema
  events, also hash-deduped), model output
- Tool span: tool input, tool result on every exit path (success +
  pre-hook block + post-hook stop + tool error + try-block cancel +
  catch-block cancel + execution exception)

All large content truncated at 60KB with *_truncated and
*_original_length metadata. Heavy serialization (safeJsonStringify on
tool I/O, partToString on user prompt) is guarded by the sensitive
flag at the call site so it doesn't run when telemetry is off.

Also adds:
- getActiveInteractionSpan() helper for client.ts to attach prompt
  attributes to the interaction span.
- Updated config schema description and docs (telemetry.md +
  settings.md) to reflect expanded scope and add security/cost notes.
- 28 unit tests for detailed-span-attributes, 4 tests for
  getActiveInteractionSpan, integration mocks updated.
2026-05-17 00:36:48 +08:00
易良
2c200a3d0a
fix(core): add heap-pressure auto-compaction safety net (#4186)
* fix(core): add heap pressure compaction safety net

Link: #4185

* fix(core): keep heap pressure compaction active

Let heap-pressure bypass also bypass the failed-attempt latch and cover the interaction with a regression test.

Link: #4185

* test(core): cover raw next-speaker history lookup

Verify next-speaker checks use the raw last history entry for function responses while the LLM prompt still uses curated history.

Link: #4185

* fix(core): avoid latching heap-pressure compression failures

* fix(core): back off failed heap-pressure compaction

* fix(core): back off heap-pressure no-op compaction
2026-05-16 23:37:05 +08:00
tanzhenxin
8d765fec78
refactor(core): TaskBase envelope + foreground subagent persistence (#3970)
* refactor(core): TaskBase envelope + foreground subagent persistence

Establishes a shared `TaskBase` envelope across the agent / shell /
monitor task registries with a mandatory `outputFile` field. Brings the
foreground subagent path into compliance with the new contract, so it
now leaves the same JSONL transcript + meta sidecar on disk that
backgrounded subagents have always produced — closing the only gap
where a registered task wrote nothing. Renames the agent-task
discriminator from `flavor: 'foreground' | 'background'` to claw-code's
`isBackgrounded: boolean`; the deprecated names are kept as
one-release type aliases.

PR 1 of the task-registry-unification design. PR 2 will collapse the
three per-kind registries into one thin TaskRegistry plus per-kind
modules.

* refactor(core): drop unused BackgroundTaskFlavor type alias

The alias only preserved the type name; no in-tree caller used it,
and after the field rename no realistic external consumer use survives
(reading entry.flavor / writing { flavor: ... } both fail at the use
site regardless of whether the alias resolves). Drop it instead of
carrying a hollow shim.

* fix(core): tighten foreground subagent launch path

- Register before writing the meta sidecar so a register() failure can't
  leave an orphaned 'running' meta file behind. writeAgentMeta is
  best-effort and never throws, so the inverse failure mode (registry
  entry without sidecar) is a benign degradation.
- Cache getGitBranch by cwd at the agent module level so foreground
  launches don't pay a fresh git rev-parse exec each time. Branches
  don't change within a process under normal use; the transcript
  annotation is best-effort audit metadata.
- Document on cancel() that foreground entries take a partial path
  through the method — Map deletion is the caller's responsibility
  via unregisterForeground() in the tool-call's finally path.

* fix(agent): correct foreground meta status mapping and register order

The foreground finally block in agent.ts mapped any non-ERROR, non-CANCELLED
terminate mode (including MAX_TURNS, TIMEOUT, SHUTDOWN) to 'completed' in
the sidecar, so post-mortem readers and resume logic saw a successful
status for runs that actually hit a guardrail. Flip the ternary to mirror
the background path: GOAL -> completed, CANCELLED -> cancelled, else ->
failed.

Also reorder the background launch so registry.register() runs before
writeAgentMeta(), matching the foreground path. Both paths now share the
same orphaned-meta guarantee.

* test(agent): rename stale foreground-flavor test

The "default flavor (absent) behaves as background" test name and its
backwards-compat comment referenced the old optional flavor field, but
the registration shape has required isBackgrounded for a while now —
there is no "absent" path to exercise. Rename it to describe what the
assertion actually covers: that background entries fire a task-
notification on complete.

* refactor(core): alias BackgroundTaskStatus to TaskStatus

The local `BackgroundTaskStatus` union was byte-identical to the new
shared `TaskStatus` defined in `tasks/types.ts`. Replace it with a
`@deprecated` type alias so external consumers (notably
`nonInteractiveCli.ts`) keep compiling unchanged while the canonical
name lives in one place.

* refactor(core): tidy monitorRegistry signatures and document cancel ordering

Two small consistency wins flagged in review:

1. `dispatchOwnerLifecycleWake` and `dispatchNotification` were the only
   methods on the registry still typed with the deprecated `MonitorEntry`
   alias. Rename their parameters to `MonitorTask` to match every other
   signature in the file.

2. `cancel()` orders `settle()` and `abort()` differently between its two
   branches, which is intentional (silent cancel locks the terminal status
   before abort listeners run; default cancel lets a naturally-completing
   operation settle through its own terminal path). Document that
   asymmetry in a JSDoc on the method so the next reader doesn't have to
   reverse-engineer it.

* refactor(core): migrate internal BackgroundTaskStatus refs to TaskStatus

The `BackgroundTaskStatus` alias was introduced in 91b59a8fb as a
`@deprecated` synonym for external SDK consumers (notably
`nonInteractiveCli.ts`). New internal references in this PR's own
file kept the old name; migrate them so the only remaining usage of
the deprecated alias is the alias declaration itself.

No behavior change — the alias is `= TaskStatus` so the union is
identical.

* test(agent): cover foreground failed-mode terminal status mapping

The foreground finally block maps GOAL→completed, CANCELLED→cancelled,
and everything else (ERROR, MAX_TURNS, TIMEOUT, SHUTDOWN) → failed.
Only the GOAL branch was asserted; the failed-mode fallback had no
coverage even though the same mapping recently regressed (d67da6d50)
and had to be fixed by review.

Adds a table-driven case mocking getTerminateMode to ERROR /
MAX_TURNS / TIMEOUT and asserting patchAgentMeta receives
status: 'failed'. CANCELLED is already covered by the
"foreground CANCELLED prefixes the partial result" test below.

* test(agent): cover foreground CANCELLED → cancelled meta mapping

Extends the foreground terminate-mode it.each to assert that
CANCELLED is recorded as `cancelled` in the on-disk sidecar — the
existing cancel-prefix test only verified the LLM-visible payload,
leaving the patchAgentMeta mapping uncovered. A regression flipping
CANCELLED → 'failed' would now fail this case.

* test(agent): make registry path assertions platform-agnostic

The outputFile/metaPath regexes hardcoded forward slashes, so the
foreground JSONL+meta reservation test failed on Windows where paths
use backslashes. Accept either separator.

* fix(core): guard executeBackground register-throw window; correct outputFile contract

A throwing register() subscriber in executeBackground() would leak the
already-spawned child + open output stream, unreachable by /tasks /
task_stop. Mirror the promote path's defensive try/catch: abort the
entry's controller, destroy the stream, and rethrow so the launch fails
visibly.

Also correct the TaskBase.outputFile contract: agent JSONL is
materialized on the writer's first append, which is the launch prompt
at attach time — not the first runtime event. A subagent cancelled
before any event still leaves a prompt-only JSONL plus meta, not meta
alone.
2026-05-16 22:53:08 +08:00
jinye
8dfbdaa5d4
feat(telemetry): unify span creation paths for hierarchical trace tree (#4126)
* feat(telemetry): unify span creation paths for hierarchical trace tree (#3731 P3 Phase 1)

Replace disconnected withSpan/startSpanWithContext calls in runtime with
session-tracing typed helpers so LLM and tool spans become children of
the interaction span instead of siblings under the session root.

- Add toolContext ALS with runInToolSpanContext() for concurrent-safe
  tool span scoping (uses AsyncLocalStorage.run, not enterWith)
- Wire startLLMRequestSpan/endLLMRequestSpan in loggingContentGenerator
  for both streaming and non-streaming paths
- Wire startToolSpan/endToolSpan + startToolExecutionSpan/endToolExecutionSpan
  in coreToolScheduler with proper try/finally lifecycle
- Remove redundant withSpan('client.generateContent') wrapper from client.ts
- Fix endToolSpan to not override pre-set status when metadata is omitted
- Change startToolExecutionSpan to read parent from toolContext ALS
- Update tests for new span creation APIs and remove dead test infrastructure

* fix(telemetry): address CI build errors in session-tracing tests

- Remove unused _toolSpan variable (TS6133)
- Use bracket notation for index signature property access (TS4111)

* fix(telemetry): update coreToolScheduler and loggingContentGenerator test mocks

- coreToolScheduler.test.ts: mock startToolSpan/endToolSpan/runInToolSpanContext
  instead of withSpan; update cancellation tests for restored safeSetStatus call
- loggingContentGenerator.test.ts: fix attribute keys in mock, add try/catch in
  endLLMRequestSpan mock to match production best-effort behavior

* fix(telemetry): address review feedback from wenshao

- Add debugLogger.warn in catch blocks of endLLMRequestSpan/endToolSpan/
  endToolExecutionSpan instead of silent swallowing
- Add JSDoc on endToolSpan documenting intentional no-metadata-no-status
  contract with setToolSpanFailure/setToolSpanCancelled
- Add warning in startToolExecutionSpan when called outside
  runInToolSpanContext (no active toolContext)
- Sanitize error message in endToolExecutionSpan: use constant
  TOOL_SPAN_STATUS_TOOL_EXCEPTION instead of raw error message

* fix(telemetry): use partial mock for telemetry/index.js in coreToolScheduler tests

The full mock shadowed all re-exports (logToolCall, etc.) causing 49 test
failures. Use importActual to preserve other exports, only override span
functions.

* fix(telemetry): getLastToolSpan must skip tool.execution sub-spans

startToolExecutionSpan mock also pushes to toolSpanRecords, so at(-1)
returns the execution sub-span instead of the tool span. Use findLast
to filter by name.

* fix(telemetry): address second round review feedback

- Remove redundant safeSetStatus(span, OK) on success path — endToolSpan
  in finally already sets OK via metadata
- Add llm_request.stream attribute (true/false) to distinguish streaming
  vs non-streaming LLM requests in trace backends

* fix(telemetry): endToolSpan mock writes to record directly

Bypass span.setStatus() in mock to avoid potential interference from
vitest module resolution. Write to statusCalls/ended directly on the
ToolSpanRecord.

* fix(telemetry): mock session-tracing.js directly instead of telemetry/index.js

Mocking the barrel re-export (telemetry/index.js) with importActual was
unreliable — vitest's module resolution could bind production code to the
real endToolSpan before the mock override took effect. Mock the source
module (session-tracing.js) directly to guarantee interception.

* fix(telemetry): fix endToolSpan status on success — toolCalls is empty in finally

Root cause: checkAndNotifyCompletion clears this.toolCalls before the
finally block in executeSingleToolCall runs, so the tc lookup always
returns undefined.

Fix: set OK status explicitly in _executeToolCallBody's success path
via safeSetStatus(span, OK), and call endToolSpan() without metadata
in finally (just ends the span, preserves pre-set status from any path).

* fix(telemetry): address Codex review — activate OTel context, end span on failure

- Wrap non-stream generateContent API call + logging in context.with(spanContext)
  so nested OTel spans (HTTP instrumentation, log-bridge spans) parent to
  qwen-code.llm_request instead of session root (matches streaming path).
- runInToolSpanContext now also activates OTel context via otelContext.with,
  not just the custom toolContext ALS. Hooks/HTTP/IO during tool execution
  now correctly parent to qwen-code.tool span.
- Split end*Span helpers: span.end() runs in its own try/catch so a throwing
  setAttributes/setStatus can't leak unended spans.

* fix(telemetry): address Codex review v2 — session-root fallback + execution span timing

- start{LLMRequest,Tool,ToolExecution}Span now fall back to getSessionContext()
  when no parent context, instead of otelContext.active(). Side-query LLM calls
  (auto-title, recap) now stay in the session trace instead of starting a new
  detached trace.
- Move startToolExecutionSpan() to BEFORE invocation.execute(), matching
  claude-code. Previously the synchronous setup inside execute (shell command
  preprocessing, child_process.spawn) ran outside the execution span.

* fix(telemetry): address Codex review v3 — sync throw, idle timeout race, test coverage

- coreToolScheduler.executeSingleToolCall: move try-block to wrap
  invocation.execute() so synchronous throws (e.g. shell setup failure)
  flow into the same catch path as async rejections. Previously a sync
  throw would leak the execution span and skip failure hooks.

- loggingStreamWrapper: track spanEndedByTimeout flag so a stream that
  resumes after the 5-min idle timeout does not run the final
  endLLMRequestSpan (which would no-op anyway, but the flag also stops
  resetSpanTimeout from queuing further timer callbacks).

- coreToolScheduler.test: add execution sub-span assertions for success,
  ToolResult.error, thrown invocation exceptions, and pre-hook denial.

- loggingContentGenerator.test: capture setAttribute calls into the mock
  span attributes record; assert llm_request.stream is false for non-stream
  and true for stream paths.

* fix(telemetry): address Codex review v4 — consistency + test coverage gaps

- endLLMRequestSpan now uses spanCtx.span for mutations (matches
  endToolSpan/endToolExecutionSpan pattern). Same object, but consistent
  lookup pattern prevents future drift.

- Mocks capture endLLMRequestSpan and endToolSpan/endToolExecutionSpan
  metadata so tests can assert token counts, durationMs, success, error
  are forwarded correctly. Add assertions on:
    * Non-stream LLM: inputTokens, outputTokens, success on response path
    * Non-stream LLM: success: false + sanitized error on rejection
    * Stream LLM: final lastUsageMetadata reaches endLLMRequestSpan
    * Tool execution sub-span: success: true on happy path
    * Tool execution sub-span: success: false on ToolResult.error
    * Tool execution sub-span: success: false + sanitized error on throw

- Add OTel error resilience tests: when setAttributes or setStatus throws,
  span.end() must still run and the span must be removed from activeSpans.
  Covered for endLLMRequestSpan, endToolSpan, endToolExecutionSpan.

* fix(telemetry): address Codex review v5 — abort distinction + API symmetry

- session-tracing.ts SpanContext.type: comment 'tool.blocked_on_user' |
  'hook' as Phase 2 forward-declarations (no helpers wired yet).

- endToolExecutionSpan: align no-metadata-no-status behavior with
  endToolSpan. Currently no caller omits metadata, but the asymmetric
  default (OK vs preserve-pre-set) was a maintenance trap.

- loggingContentGenerator generateContent (non-stream) catch block:
  call endLLMRequestSpan BEFORE the logging block, mirroring the
  streaming path. Defense-in-depth against logging-side rejections.

- loggingContentGenerator: restore abort-specific span status message.
  All three LLM error paths (non-stream catch, stream eager-error catch,
  stream loggingStreamWrapper finally) now use
  API_CALL_ABORTED_SPAN_STATUS_MESSAGE when req.config.abortSignal.aborted,
  matching the original withSpan('client.generateContent') behavior.
  Trace backends can now distinguish cancellations from real failures.

- coreToolScheduler _executeToolCallBody catch: distinguish abort vs
  exception in execSpan error message. New constant
  TOOL_SPAN_STATUS_TOOL_CANCELLED prevents operators filtering exec
  spans for errors from seeing cancellation false positives.

- New test asserting exec span uses cancelled-by-user message when the
  invocation throws after abort.

* fix(telemetry): always write 'success' attribute on tool spans

E2E review found qwen-code.tool spans never carry the `success` boolean
attribute (the helper only writes it when metadata is passed, and the
finally block calls endToolSpan(toolSpan) without metadata). This breaks
the most common observability query — filtering tool failures with
`success = false` — because tool spans don't have that field at all.

Fix: setToolSpanFailure / setToolSpanCancelled now also call
span.setAttribute('success', false); the success path in
_executeToolCallBody adds span.setAttribute('success', true) after
safeSetStatus(span, OK). Mirrors the unconditional `success` attribute
on llm_request spans, so backends can use one query for both span types.

Add 4 scheduler-level tests asserting the success attribute on:
- success path
- ToolResult.error path
- thrown invocation path
- cancellation path
2026-05-16 22:29:55 +08:00
jinye
784182dfe3
feat(skills): add /stuck diagnostic skill for frozen sessions (#4133)
* feat(skills): add /stuck diagnostic skill for frozen sessions

Port the /stuck diagnostic capability to qwen-code as a bundled skill.
Scans for stuck processes, high CPU/memory, hung subprocesses, and
debug logs, then presents a structured diagnostic report.

Adapted from claude-code's internal /stuck skill with:
- Process identification via command path (node-based CLI, not compiled binary)
- Debug log path updated to ~/.qwen/debug/
- Cross-platform stack dump support (macOS sample + Linux /proc/stack)
- Direct user-facing output (no Slack dependency)

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): respect QWEN_RUNTIME_DIR/QWEN_HOME for debug log path in /stuck

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): add allowedTools and clarify diagnostic-only boundary in /stuck

- Add allowedTools (run_shell_command, read_file) for convention consistency
- Rephrase recommended actions as user-facing options, not model-executable commands

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): address review feedback for /stuck — security, accuracy, sidecar

- Add explicit PID argument validation (reject shell metacharacters) to
  prevent the model from substituting injection payloads into shell commands
- Mention macOS/BSD `U` state alongside Linux `D` for uninterruptible sleep,
  so I/O-blocked macOS sessions are not silently missed
- Add `-ww` to `ps` to disable column truncation, so long qwen paths don't
  fall outside the grep window and cause sessions to be missed
- Use `~/.qwen/projects/*/chats/*.runtime.json` sidecars as the primary
  source of (pid, sessionId, workDir) mappings; `ps` is now a supplement
  for CPU/RSS/state enrichment

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): apply minimal review fixes for /stuck

- Filter ps to current UID via -u "$(id -u)" — avoid leaking other users'
  Qwen processes on shared hosts
- Note that ps `rss=` is in KB; divide by 1024 before MB comparison
- Replace `pgrep -lP` with `pgrep -P` + `ps -p` so child state shows up
- Mention `advanced.runtimeOutputDir` setting alongside QWEN_RUNTIME_DIR /
  QWEN_HOME in the runtime-base description
- Add half-line about PID reuse handling and not quoting secrets from
  debug logs (without inflating the prompt into a full workflow)

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): round-3 review fixes for /stuck

- Resolve RUNTIME_DIR from QWEN_RUNTIME_DIR/QWEN_HOME and use it in the
  sidecar `ls`, debug log path, and `latest` symlink — the previous
  round only updated the prose and left the actual commands hardcoded
- Add explicit fallthrough: when sidecar enumeration finds nothing, fall
  through to step 2 instead of getting stuck trying to make sidecar work
- Replace metacharacter blacklist with digit-only PID whitelist — safer
  and shorter; "etc." in a blacklist outsourced completeness to the LLM
- Drop `strace -p <pid> -c -f` from the Linux stack-dump branch: `-c`
  blocks until the target exits, hanging the diagnostic on the very
  conditions it should diagnose; `ptrace_scope=1` would also misreport
  permission errors as process symptoms. Keep `cat /proc/<pid>/stack`
- Warn that `ps -ww` command lines may include CLI-arg credentials and
  that `sample` stack frames may include in-memory secrets — redact
  before quoting in the report
- Cover the "no sessions found at all" case so a fresh machine doesn't
  get reported as "all healthy" when zero data was collected

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck overview-vs-step3 consistency and self-explanatory state triage

- Update "What to look for" overview from `pgrep -lP <pid>` to
  `pgrep -P <pid>` to match step 3 (overview was left behind in the
  previous round when step 3 was upgraded to capture child state)
- Add a triage sentence to step 3: when the state alone explains the
  problem (`T` = stopped, `Z` = zombie), skip child/log/stack inspection
  and go straight to the report

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): correct /stuck runtime base priority order and resolution

The actual priority in `Storage.getRuntimeBaseDir()` is
`QWEN_RUNTIME_DIR` > `advanced.runtimeOutputDir` setting > `QWEN_HOME` >
`~/.qwen`. The previous round merged the `advanced.runtimeOutputDir`
mention but listed it after `QWEN_HOME`, and the shell snippet skipped
the settings layer entirely — so on a machine where only the setting
was configured, the skill would silently look in `~/.qwen` and miss all
sessions.

- Reorder the prose priority list to match the source
- Add a `jq`-based read of `~/.qwen/settings.json` between the env-var
  and `QWEN_HOME`/default fallbacks. Gracefully degrades if `jq` is
  absent or the setting is unset.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* feat(skills): improve /stuck diagnostic flow

Functional upgrades found in self-review (no reviewer raised these):

- Add network-hang detection bullet to step 3. Hung HTTPS requests to
  the model API are the most common qwen-code "stuck" mode and showed
  as healthy under all previous heuristics (low CPU + S state). macOS
  uses `lsof -i -p`, Linux uses `ss -tnp`.
- Add a fast path at the top of "Investigation steps": when the user
  passes a digit-only PID, skip enumeration and go straight to per-PID
  ps + step 3. Avoids a full sidecar+ps scan in the targeted case.
- Replace per-file sidecar liveness check with a single bash loop that
  emits only live (pid, sidecar-path) pairs. On machines with many
  stale sidecars this drops 50+ separate reads.
- Promote `~/.qwen/debug/latest` to the primary debug-log entry point
  (it usually points to the suspicious session). Sidecar-derived path
  becomes the fallback.
- Bound the debug-log read with `tail -n 200` so the model doesn't
  attempt to load multi-GB log files.
- Replace the placeholder `<child_pids>` for `ps -p` with a runnable
  `pgrep -P <pid> | xargs -I{} ps -p {} -o ...` composition.
- Drop the redundant "substitute <pid> only after validation" note in
  step 3 — the digit-only whitelist in Argument validation already
  enforces this; PIDs from ps/sidecar are inherently digit-only.

End-to-end tmux smoke test confirms the flow runs to completion with a
correct structured report.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck — RUNTIME_DIR preamble + jq-free sidecar liveness

Two issues caught by Codex review:

1. **PID fast path left $RUNTIME_DIR unset.** Step 3 references
   `"$RUNTIME_DIR"/debug/<session-id>.txt` but the fast path skipped
   step 1 where it was resolved, so debug-log lookup degraded to
   `/debug/latest` (broken absolute path). Fix: extract RUNTIME_DIR
   resolution into a preamble that runs before both paths. Also add a
   `grep -l "pid": <PID>` lookup in the fast path so it can match the
   given PID to its sidecar and recover the session ID for log lookup.

2. **Sidecar liveness loop required `jq`.** Default macOS / minimal
   Linux images don't ship `jq`, so the loop emitted nothing for every
   sidecar — the "preferred reliable" path silently failed and the
   skill fell back to the less accurate `ps | grep`. Replace with a
   single-spawn `node -e` script: node is guaranteed present (qwen-code
   itself runs on it). The settings.json `jq` lookup stays — that one
   gracefully degrades to QWEN_HOME/default if `jq` is missing.

Both verified by hand: liveness loop correctly emits live PID/sidecar
pairs (56219, 33534), `grep -l` lookup correctly finds the sidecar for
a given PID and emits empty for non-matches.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck — validate fast-path PID is a Qwen Code process

Codex review caught that the targeted PID fast path accepted any
digit-only PID and dumped its full command line, bypassing the Qwen-
process filter that the general scan applies via
`grep -E '(qwen|node.*qwen|bun.*qwen)'`. Cross-user PIDs are already
filtered (`kill -0` returns EPERM), but **same-user non-Qwen processes**
would have their argv (potentially including secret CLI flags) printed
into the chat.

Fix: add a single-line validation pipeline before the stats dump:
`kill -0 <pid> && ps -p <pid> -o command= -ww | grep -qE '(qwen|node.*qwen|bun.*qwen)'`.
If it returns non-zero, refuse with "PID is not a current-user Qwen
Code session" and stop the diagnostic. Otherwise proceed.

Verified by manual test against a real Qwen Code session PID (matches)
and PID 1 / launchd (correctly rejected).

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck — settings path, sidecar grep, ps regex, lsof safety

Four issues from PR review:

1. **Settings path honors QWEN_HOME.** The `jq` lookup in the preamble
   hardcoded `~/.qwen/settings.json`, but `getGlobalSettingsPath()`
   resolves to `$QWEN_HOME/settings.json` when set. Now uses
   `"${QWEN_HOME:-$HOME/.qwen}/settings.json"`.

2. **Sidecar grep uses `-El`.** Without `-E`, BSD `grep` on macOS may
   not treat `\b` as a word boundary in BRE. Also added a note: when
   PID reuse makes multiple sidecars match, prefer the most recently
   modified file via `ls -t | head -n 1`.

3. **Process regex tightened to avoid false positives.** The old
   `(qwen|node.*qwen|bun.*qwen)` matched any path containing "qwen"
   anywhere — so `qwen-playground/server.js`, `qwen-polyfill.js`,
   and even unrelated processes that pass a qwen-code path as `--cwd`
   (e.g., Codex plugin brokers) all matched. Replaced with
   `(qwen-code/[^ ]*\.(js|ts|mjs|cjs)( |$)|/qwen( |$))` — requires the
   `qwen-code/` substring to be followed by a script-file path, OR
   the bin invocation to end in `/qwen`. Verified on the local machine
   that broker processes are no longer matched while real Qwen
   sessions (worktree dev, dist/cli.js, qwen serve daemons) all are.

4. **lsof safety.** Added `-nP` to skip reverse-DNS and port lookups
   which can themselves hang. Mentioned `timeout 10` / `gtimeout 10`
   as an optional prefix when available — qwen-code's shell tool
   already has a backstop timeout, so this is belt-and-suspenders.

Note: tested `\b` in BSD ERE on macOS — it does work correctly with
`-E`, so the `-El` switch alone fully addresses concern #2's
portability claim (BRE-without-E remains broken but is no longer used).

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck — expand ~ and resolve relative paths in RUNTIME_DIR

`Storage.resolvePath()` in qwen-code expands `~` and resolves relative
paths before using `advanced.runtimeOutputDir`. The shell preamble was
reading the raw JSON value via `jq`, so a user with
`"runtimeOutputDir": "~/.qwen-runtime"` would pass the literal string
to the glob — bash does not expand `~` inside double quotes — and the
sidecar scan would silently find nothing and fall back to ps-only mode.

Add two bash lines after the jq lookup:
- `${RUNTIME_DIR/#\~/$HOME}` to substitute leading tilde
- `case ... cd && pwd` to resolve relative paths to absolute (clears
  RUNTIME_DIR if cd fails so the chain falls through to QWEN_HOME)

Smoke tested: tilde paths expand, absolute paths pass through, relative
paths resolve, nonexistent dirs clear cleanly, empty stays empty.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(skills): /stuck — round-N review feedback

Adopted 9 of the 16 review suggestions; declined 5; 1 already done.

- Anchor process regex to `(^|/)qwen-code[^ /]*/`. Now matches renamed
  clones (`qwen-code-dev`, `qwen-code-x1`, worktrees) AND rejects
  prefix false positives (`analyze-qwen-code/`, `my-qwen-code-tool/`).
  Verified against 10 cases.

- Clarify RSS unit conversion: KB ÷ 1024 = MB, KB ÷ 1048576 = GB. The
  4GB threshold is `4194304` KB raw, or 4 in GB. Prevents the model
  from dividing once and comparing to 4, which would over-alert by
  1024×.

- Add `State S with low CPU` to the Signs list so initial triage flags
  the most common hang signature (hung HTTPS to model API) instead of
  only catching it inside step 3.

- Split fast path validation into two guards with distinct messages:
  dead/wrong-user vs. yours-but-not-Qwen. Plus add the same
  credential-redaction note that step 2 already has.

- Replace `pgrep | xargs -I{} ps` with a single `ps -p $CHILDREN`
  call (avoids forking N times) and add `-ww` so long child cmdlines
  don't truncate.

- Wrap macOS `sample <pid> 3` with optional `timeout 15` (or
  `gtimeout 15`). Same belt-and-suspenders pattern used for `lsof`.

- Note that `ss -tnp -p` requires root/CAP_NET_ADMIN; non-root sees
  `-` in the PID column. Tell the model to fall back to `lsof` instead
  of concluding "no connections".

Declined: self-PID via `$$` (wrong PID — `$$` is the spawned shell,
not qwen), pgrep fallback for distroless (over-engineering), `\b`
matches negative numbers (false alarm — `:[[:space:]]*` won't match
through `-`), regex DRY abstraction (no value in markdown prompts),
project-level settings.json read (already declined; same trade-off).

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* test(skills): add integration test that parses every bundled SKILL.md

The bundled skill loader (`SkillManager.parseSkillFileInternal`) silently
catches and debug-logs frontmatter parse errors, so a typo in any
SKILL.md (missing `description`, broken `---` delimiter, `allowedTools`
written as a scalar) merges with green CI and only surfaces when a user
invokes the skill — at which point the skill is missing from
autocomplete with no indication why.

Add a tiny integration test that walks `packages/core/src/skills/bundled/`,
runs every `SKILL.md` through the real `parseSkillContent` (no mocks),
and asserts: name matches the directory, description is non-empty, body
is non-empty, and `allowedTools` (if present) is an array.

Lives in its own file because `skill-load.test.ts` mocks `fs/promises`
and the YAML parser, which would defeat the purpose of an integration
test. New file uses real fs and the real loader.

Negative-case verified: deliberately corrupting `stuck/SKILL.md`'s
frontmatter delimiter makes only that file's test fail; restoring it
returns the suite to all-green.

Addresses wenshao's standing [Critical] review (2026-05-15 12:29Z) about
the bundled skill system lacking automated tests for SKILL.md parsing.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)
2026-05-16 22:23:02 +08:00
jinye
379d14ad00
feat(rewind): add file restoration support to /rewind command (#4064)
* feat(rewind): add file restoration support to /rewind command (#3697)

Previously /rewind only truncated conversation history — files modified
by the assistant remained on disk. This adds a file-copy-based backup
system (ported from claude-code's fileHistory) so users can optionally
roll back file changes when rewinding.

Core changes:
- New FileHistoryService with snapshot/backup/restore lifecycle
- trackEdit() called before each file write in edit and write-file tools
- makeSnapshot() at each user turn boundary in client.ts
- Three-phase RewindSelector UI: pick turn → choose restore option → execute
- RestoreOption type: 'both' | 'conversation' | 'code' | 'cancel'

Closes #3697

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(rewind): replace findLast with reverse loop for ES2022 compat

vscode-ide-companion targets ES2022 which lacks Array.findLast.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(rewind): add missing i18n translations and fix test expectation

- Add file restore i18n keys to all 8 locale files (zh-TW, ca, de, fr,
  ja, pt, ru were missing)
- Update useGeminiStream test to expect promptId in user history item

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(rewind): add getFileHistoryService mock to tool tests

edit.test.ts and write-file.test.ts mock configs lacked the new
getFileHistoryService method, causing trackEdit calls to throw.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(rewind): allow Esc during diff loading and add missing i18n footer strings

Allow users to press Esc/Ctrl+C to cancel during diff stats loading
phase. Add three missing footer navigation strings to all 9 locale files.

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(rewind): address review feedback — restoreBackup correctness, missing promptId warning, dead code removal

- restoreBackup now returns boolean; applySnapshot only counts a file
  as restored when the backup was actually applied (fixes misleading
  "Restored N file(s)" when backup is missing on disk)
- Show warning when user selects file restore on a turn created before
  file checkpointing was enabled (promptId undefined)
- Remove unused snapshotSequence field, canRestore(), and hasAnyChanges()
  methods that had no callers

* fix(rewind): correct diff direction, truncate snapshots on rewind, add zero-files feedback

- Swap diffLines args to diffLines(backup, current) so +/- stats
  match git convention (insertions = lines added since checkpoint)
- Truncate snapshots after rewind to discard stale timeline state,
  preventing makeSnapshot from using wrong baseline
- Show "No files needed restoration." when rewind finds files already
  at target state (all 9 locales)

* test(tools): assert trackEdit is called before file writes

* fix(i18n): add missing rewind UI locale keys across all 9 locales

* fix(core): reset fileHistoryService on session change, clean up dead code

- Reset fileHistoryService in startNewSession() so /clear gets a fresh
  instance with the new sessionId
- Rebuild trackedFiles after rewind() to avoid stale stat() calls
- Remove unused setCurrentPromptId/getCurrentPromptId dead API

* fix(rewind): validate conversation before file restore, preserve snapshots for code-only

- For 'both': validate conversation can be truncated before restoring
  files to prevent inconsistent state (files rolled back but conversation
  stays at newer state)
- For 'code'-only: pass truncateHistory=false so snapshot timeline is
  preserved — conversation turns remain visible and their snapshots stay
  available for future rewinds

* fix: correct trackEdit race comment — overwrite not orphan

* fix(types): use HistoryItemWithoutId for addItem to preserve union member properties

* fix(types): revert addItem type change, use cast at call site for promptId

* fix(rewind): guard onRewind calls with .catch() to prevent unhandled rejection

* fix(rewind): only truncate snapshot timeline when conversation truncation will execute

* fix(rewind): address tanzhenxin review - gate, partial failure, tests

1. Disable file checkpointing for non-interactive (-p) mode by gating
   on `params.interactive !== false` in addition to `!params.sdkMode`.

2. Surface partial restore failures: `rewind()` now returns
   `RewindResult { filesChanged, filesFailed }`. In "both" mode,
   conversation truncation is skipped when any file fails to restore,
   preventing inconsistent state.

3. Add comprehensive unit tests for FileHistoryService (17 tests
   covering trackEdit, makeSnapshot, rewind, eviction, diffStats).

* fix(rewind): defensive trackEdit + fix version collision on re-track

1. Wrap trackEdit calls in edit.ts and write-file.ts with try/catch
   so file history failures never break core tool operations.

2. Replace hardcoded version:1 in trackEdit with max-version lookup
   across all snapshots. Prevents backup file overwrite when the same
   file is re-tracked after a code-only rewind (truncateHistory=false).

* fix(rewind): add missing i18n keys + fix makeSnapshot version collision

1. Add 'Failed to restore {{count}} file(s): {{files}}' to all 7
   missing locales (ca, de, fr, ja, pt, ru, zh-TW).

2. Use global max-version scan in makeSnapshot (same as trackEdit)
   to prevent backup filename collisions after snapshot eviction.

* fix(rewind): set hasRestoreFailure when promptId is missing

In "both" mode, if the target turn has no promptId, conversation
truncation was still proceeding because hasRestoreFailure was not set.
Now correctly blocks truncation to prevent inconsistent state.

* fix(rewind): show loading state during async restore, close selector in finally

Defer setIsRewindSelectorOpen(false) to a try/finally block so the
selector stays visible during async file restore. RewindSelector now
manages its own isRestoring state: shows "Restoring..." text and
disables all keypress handlers while the restore is in progress.

This prevents the user from seeing a bare prompt with no progress
indicator during slow restores, and eliminates the race where typing
during restore could clobber the pre-filled prompt.

* fix(rewind): skip timeline truncation on partial failure + fix wording

1. rewind() now only truncates the snapshot timeline when
   filesFailed is empty, preventing loss of future checkpoints
   when the caller skips conversation truncation due to failures.

2. Change "No files needed restoration." to the more idiomatic
   "No files needed to be restored." across all 9 locales.

* fix(rewind): address review — TOCTOU in createBackup + outer catch in handleRewindConfirm

- Extract safeCopyFile(src, dst) helper that distinguishes source-missing
  (TOCTOU: file deleted between stat and copyFile) from target-dir-missing,
  so trackEdit no longer silently fails when a file disappears mid-backup.
  Same helper now covers restoreBackup.
- Wrap handleRewindConfirm with an outer catch that surfaces unexpected
  failures via historyManager error item; previously a sync throw from the
  post-rewind block would silently close the selector and leave 'both'
  mode in a half-applied state.
- Add 'Rewind failed: {{error}}' i18n key in all 9 locales.

* test(rewind): cover restoreFromSnapshots, trackEdit no-snapshot path, partial-failure timeline guard

- restoreFromSnapshots: assert relative-path shortening + external-path preservation
- trackEdit before any makeSnapshot: assert no-op early return
- rewind truncation guard: assert snapshot timeline is preserved when filesFailed > 0

* fix(rewind): clean up orphaned backups, surface no-client states, polish

- Per-eviction backup cleanup: when MAX_SNAPSHOTS overflow or rewind
  truncation drops snapshots, remove backup files no longer referenced
  by any surviving snapshot (best-effort, ENOENT-tolerant). Backup files
  are content-deduplicated across snapshots, so the live-set is computed
  from survivors before deletion.
- Surface no-client failure modes in handleRewindConfirm: 'conversation'
  mode now shows an error instead of silently returning; 'both' mode
  shows an info message after restore so the user knows the conversation
  half was skipped.
- i18n the previously hardcoded 'Conversation rewound...' message and
  add 3 new keys to all 9 locales.
- Tighten createBackup signature (drop unreachable null branch).
- Extract getMaxVersion helper to deduplicate identical loops in
  trackEdit and makeSnapshot.

Tests added: orphan-cleanup on overflow, dedupe preservation, rewind
truncation cleanup. All existing tests continue to pass (23 core, 71
AppContainer, 27 i18n).

* fix(rewind): use path separator constant in maybeShortenFilePath

The hardcoded '/' check meant Windows absolute paths (with '\') never
matched the cwd prefix, so the shortening was a no-op on Windows. The
new cleanup tests revealed this by asserting on the relative-path key:
on Windows the key was the full absolute path, so trackedFileBackups
lookups returned undefined.

Switching to the platform sep also makes Windows snapshots use the
relative key like POSIX, improving portability if cwd moves later.
restoreFromSnapshots re-runs maybeShortenFilePath on every key, so
existing on-disk sessions migrate transparently on resume.

* test(rewind): cover trackEdit best-effort guarantees and unchanged-file rewind

- edit.test.ts: assert tool still completes (file written, llmContent
  reflects the edit) when FileHistoryService.trackEdit rejects.
- write-file.test.ts: same for the write_file tool.
- fileHistoryService.test.ts: assert trackEdit swallows createBackup
  failures (forced via storageDir-replaced-with-file → ENOTDIR in
  recursive mkdir) without recording any backup.
- fileHistoryService.test.ts: assert applySnapshot leaves a file
  untouched (mtime unchanged, filesChanged empty) when its content
  already matches the target backup — covers the
  checkOriginFileChanged short-circuit.

* fix(rewind): align fileCheckpointing default + surface backup-missing on rewind

Two issues from a Codex review pass:

- Config: `fileCheckpointingEnabled` defaulted via `params.interactive !== false`,
  which resolves truthy when the caller omits `interactive` — but `this.interactive`
  itself defaults to `false`. Headless/programmatic callers that did not set
  `interactive` would silently start writing file-history backups under
  `~/.qwen/file-history/`. Use the same `?? false` default so the gate matches
  the resolved interactive value.

- checkOriginFileChanged: when the on-disk backup AND the working file have both
  been removed externally, the function returned `false` ("unchanged"), so
  `applySnapshot` skipped `restoreBackup` and rewind reported success even though
  the target snapshot expected the file to exist. Treat any failure to stat the
  backup as "changed" so callers attempt the restore: applySnapshot surfaces the
  missing backup via restoreBackup → filesFailed, makeSnapshot creates a fresh
  backup. Added a regression test for the both-missing path.

* fix(rewind): mark per-file backup failures so rewind surfaces them

Two related issues from a /review pass:

1. Silent data loss in makeSnapshot inheritance: when the per-file
   backup attempt threw inside makeSnapshot, the catch block left the
   path missing from `trackedFileBackups`, and the inheritance loop
   then copied the previous snapshot's backup into the new snapshot.
   A later rewind to that snapshot would restore older content while
   reporting success.

   Now the catch records `{ failed: true, ... }` for the path. The
   inheritance loop skips paths already present in trackedFileBackups,
   so failed paths are no longer paved over by stale carryover. Both
   applySnapshot and getDiffStats honor `failed` — rewind pushes the
   path to filesFailed and the diff preview omits it.

2. Marketing/scope mismatch: the rewind UI offers "Restore code" but
   the feature only tracks edits made via the `edit` and `write_file`
   tools — shell-mediated changes (`sed -i`, `cp`, `rm`, `mv`,
   `npm`, etc.) and out-of-tool manual edits are not captured.
   Added a class-level JSDoc on FileHistoryService spelling out the
   scope, and an inline footer in the restore-options panel:
   "Rewinding does not affect files edited manually or via shell
   commands." (matching the upstream claude-code MessageSelector
   wording). New i18n key in all 9 locales.

Test added: trackEdit/makeSnapshot per-file failure path. Asserts
the new snapshot has `failed: true`, and that rewind to that snapshot
reports the file as filesFailed instead of silently restoring the
inherited stale backup.

* fix(rewind): polish — i18n, type tightening, resumed-session UX hint

Several small wins from the latest /review pass plus a UX mitigation for
turns whose file-history snapshot is not present in memory (most often
because the conversation came from a resumed session, but also when a
turn has no captured edits):

- AppContainer: wrap the "Cannot rewind to a turn that was compressed"
  error in t(); add the new key to all 9 locales.
- RewindSelector: replace the inline `(+N -M in K file/files)` template
  literal with t() using two plural-aware keys; add to all 9 locales.
- DiffStats.filesChanged: tighten from optional to required to match
  reality (every code path that returns a DiffStats sets it). Drops the
  `!.filesChanged!` non-null cascade in RewindSelector.
- RewindSelector phase 2: when the option list does not contain
  code/both (i.e. no file-restore is actionable for this turn), show
  an explicit hint instead of leaving the user to guess why those
  options are missing. Same i18n key in all 9 locales.

The mitigation hint covers the resumed-session case Tan raised
(snapshots are not rehydrated by `/resume` today) without changing
behavior — `getRestoreOptions` already gracefully degrades to
conversation-only when `getDiffStats` returns undefined for a snapshot
that is not in memory; we just surface the "why" to the user.

* fix(rewind): unstick failed marker on the unchanged-file fast path

The `failed: true` marker added in d59838338 was sticky: once set, the
no-change optimization in `makeSnapshot` would copy the failed entry
forward into every subsequent snapshot for as long as the file stayed
unchanged. A single transient I/O error therefore poisoned `/rewind`
for that file until the user happened to modify the content again.

Add `!latestBackup.failed` to the no-change reuse guard so a failed
entry is never copied forward — the next snapshot retries the backup,
which either heals (when the underlying I/O has recovered) or honestly
records another failed entry.

New regression test (`does not carry a failed marker forward when the
file is unchanged`):

- Snapshot p1 with file content X
- Sabotage the storage dir → p2's per-file backup throws → p2 records
  failed: true
- Restore the storage dir; file still equals X
- p3 must NOT copy p2's failed entry; it must retry createBackup and
  produce a fresh non-failed entry that allows rewind to p3 to succeed
2026-05-16 22:16:01 +08:00
qqqys
0dde1ad704
feat(cli): add session-scoped /goal command with judge-driven turn continuation (#4123)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* feat(cli): add session-scoped /goal command with judge-driven turn continuation

`/goal <condition>` pins a free-form objective for the rest of the session.
While a goal is active, an LLM judge runs at every Stop boundary and either
lets the turn end (condition met) or feeds the judge's reason back as the
next user prompt to keep the model working. Auto-clears on success;
`/goal clear` cancels early. Same primitive as Anthropic's Claude Code
2.1.140 `/goal`, built on qwen-code's existing Stop-hook + function-hook
plumbing — no new subsystem.

Core (packages/core/src/goals/):
  - activeGoalStore: per-session active goal + last-terminal cache, with a
    terminal-observer channel the CLI subscribes to so achieved/aborted
    cards land in history.
  - goalJudge: side-query against a fast model, transcript-grounded
    system prompt + json_schema response + disabled thinking. Tolerant
    JSON extraction with fallback so a flaky judge can't kill the loop;
    30s default timeout (vs. the 5s function-hook default that was
    silently killing real-world judge calls).
  - goalHook: function hook on Stop. Returns {decision:'block', reason}
    when not met (reusing client.ts's existing recursive continuation),
    {continue:true} when met. Self-clears active goal + notifies the
    terminal observer on met/aborted. MAX_GOAL_ITERATIONS=50 backstop.

CLI:
  - goalCommand: /goal | /goal <cond> | /goal clear|stop|off|reset|none|
    cancel. 4000-char cap, trust + disableAllHooks gates. Empty /goal
    shows running status, falls back to the last completed summary.
  - GoalPill: footer chip "◎ /goal active (12s)" — terse, claude-aligned.
  - GoalStatusMessage: set / checking / achieved / cleared / aborted
    history cards. "checking" replaces the generic stop_hook_loop chip
    for goal-driven iterations.
  - restoreGoal: on session resume, rehydrate the active goal hook +
    last-terminal cache from transcript so /goal survives /resume.

Cross-cutting fixes:
  - HookSystem.hasHooksForEvent(eventName, sessionId?): also consults
    SessionHooksManager. Previously SDK / programmatic Stop function
    hooks were silently gated out by client.ts's fast-path check, so
    they never fired.
  - client.ts: yield StopHookLoop on every continuation iteration (was
    iter > 1) — first not-met turn is now visible in the UI.
  - useGeminiStream: commit pending item + clear thoughtBuffer /
    geminiMessageBuffer on every Finished event. Fixes a UI bug where
    a Stop-hook continuation's text bled into the prior turn's pending
    history item (cumulative "te" / "tes" rendering), even though the
    persisted transcript was clean.

Co-authored-by: Qwen-Coder <noreply@qwen.ai>

* test(cli): fix footer goal pill mock

* fix(goal): persist terminal status on restore

* fix(goal): harden judge hook

* fix(goal): sanitize condition in instruction prompt and update matcher test

- goalCommand.ts: collapse newlines and downgrade embedded double-quotes in
  the condition before splicing into the instruction prompt so the wrapping
  quote structure stays intact.
- goalLoop.integration.test.ts: matcher assertion updated to '*' to match the
  current registerGoalHook contract (previously '').

Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>

* feat(goal): surface judge reason on terminal cards

Renders `Last check: <reason>` on the achieved / aborted history card
and on the empty-`/goal` summary so the final view records *why* the
judge ruled the goal complete. Uses a single inline-label Text instead
of the flex-row split used for `Goal:` — the reason is capped at 240
chars and almost always wraps; the flex-row variant hangs the
continuation at the value column's left edge (~12 cols of blank space,
easily mistaken for a stray empty line). Single Text + natural wrap
keeps the continuation flush.

Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>

* fix(goal): re-arm /goal on runtime /resume and /branch

Cold boot path in AppContainer already calls restoreGoalFromHistory after
loading session data, but the runtime /resume and /branch paths skipped
it entirely. After /new + /resume back to a session that had an active
/goal, the in-memory activeGoalStore entry still held the pre-/new
setAt and a hookId pointing to a hook that config.startNewSession() had
torn down — leaving the footer pill ticking from the original setAt
(observable as "几十秒" elapsed immediately after resume) while the
Stop hook was silently dead.

Wire restoreGoalFromHistory into both handlers right after the session
data lands so unregisterGoalHook clears the stale entry and
registerGoalHook re-arms with a fresh setAt / hookId and re-installs
the terminal observer.

Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>

* refactor(goal): reuse shared formatDuration utility

Drop the duplicated local formatDuration from goalCommand.ts and
GoalStatusMessage.tsx in favor of the shared formatters.ts version,
called with { hideTrailingZeros: true }. The shared util already has
its own test suite and matches Claude Code's ShellTimeDisplay style
(round values drop zero-unit tails: `5m 0s` → `5m`).

Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>

* fix(goal): abort judge API call on judge timeout

The judge-timeout path in judgeGoalWithTimeout only resolved a fallback
verdict; the underlying judgeGoal generateContent call kept running
because the hook context signal is never aborted by the timeout. Each
timeout leaked one in-flight request that accumulated across goal-loop
iterations. Link an AbortController into the judge signal and abort it
when the timeout fires.

Co-Authored-By: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(goal): harden judge continuation feedback

* test(goal): align loop integration with safe continuation

* fix(cli): harden goal resume lifecycle

* fix(cli): address goal review blockers

* fix(goal): guard stale same-condition callbacks

---------

Co-authored-by: Qwen-Coder <noreply@qwen.ai>
Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-16 18:14:13 +08:00
qqqys
372acf1444
feat(cli): argument hint + --auto completion for /rename (#4048)
* feat(cli): argument hint + --auto completion for /rename

Closes #4047.

The /rename command supports a structured --auto flag (let the fast
model generate a sentence-case title from the conversation), but
unlike /model — which advertises --fast via argumentHint and a
completion entry — /rename's flag was undocumented inline. Users had
to either run the command incorrectly or check the docs to learn
about --auto.

- argumentHint: '[--auto] [<name>]' so the completion menu shows the
  shape when the user types `/rename` and tabs.
- completion: returns null on empty / free-text input (don't shadow
  the user typing a title) and surfaces --auto when the partial arg
  is a prefix of it ('-', '--', '--a', '--au', '--auto'). Same shape
  as /model's --fast handling.

Free-text titles intentionally don't auto-complete — there's nothing
meaningful to suggest, and offering --auto on every keystroke would
feel like noise on `/rename my-feature`.

Tests:
- pins argumentHint shape
- empty partial → null
- '-' / '--' / '--a' / '--au' / '--auto' all return the --auto suggestion
- 'my-feature' / 'fix bug' / '-x' return null (free-text path)

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* fix(core): fall back to text JSON when generateJson gets no tool call

generateJson registers schemas as a respond_in_schema function
declaration and walks parts[].functionCall for the result. When no
tool_choice is set (the OpenAI-compatible converter never sets one) and
the system prompt explicitly asks for text JSON — e.g. session-title
generation's "Return ONLY a JSON object..." — some models honor the
prompt and emit the answer as a plain text part instead of calling the
tool. The answer is semantically correct; we just weren't reading it.

This bottoms out in /rename --auto as "The fast model returned no
usable title" on qwen3.6-max-preview, and likely affects every other
generateJson caller (next-speaker checker, edit corrector, etc.) on
the same class of model.

Add a tolerant fallback: when no function call comes back, parse
getResponseText(result) — which already skips thought parts — with a
JSON-object extractor that strips optional ```json fences and reads
the outermost {...} block. Strictly additive; the function-call path
stays primary.

Closes #4057.

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* refactor(cli): unify /rename and /rename --auto pipelines

Bare /rename (no args) used to call a private generateKebabTitle path
that asked the fast model (or main-model fallback) for a 2-4 word
kebab-case name via a plain text call. /rename --auto used the
schema-enforced tryGenerateSessionTitle path for a 3-7 word sentence-
case title. Two code paths, two prompts, two failure-message formats,
two sanitizers — with the kebab path consistently lagging on history
filtering, surrogate handling, and error specificity.

Collapse to a single fast-model schema-enforced pipeline. Both bare
/rename and /rename --auto now call tryGenerateSessionTitle and both
record titleSource: 'auto' on success. The --auto flag stays as an
explicit user-intent marker (preserves the existing argumentHint /
completion / parseArgs surface) but no longer diverges semantically.

Bare /rename now also hard-requires fastModel; users who relied on
the main-model fallback need to either /model --fast <name> or pass
a name explicitly (/rename <name>). The new failure message points
at both options.

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* fix(cli): clarify rename title failure

* test(core): cover loose json fallback

---------

Co-authored-by: Qwen-Coder <noreply@qwen.ai>
2026-05-16 16:47:15 +08:00
jinye
dcf7681d65
feat(core,cli): add generic atomicWriteFile, wire into Write/Edit tools, upgrade @types/node (#4096)
* feat(core): add generic atomicWriteFile and wire into Write/Edit tools

The Write and Edit tools used bare fs.writeFile, risking half-written
corrupt files on crash or power loss. Both tools' source code contained
explicit TODOs noting atomic write as the fix.

- Add atomicWriteFile() supporting string/Buffer with flush (fsync),
  permission preservation, symlink resolution, and EXDEV fallback
- Wire StandardFileSystemService.writeTextFile() through atomicWriteFile
- Refactor atomicWriteJSON to delegate to atomicWriteFile (adds fsync)
- Deduplicate renameWithRetry from runtimeStatus.ts
- Add flush:true to writeWithBackupSync for settings writes
- Upgrade @types/node to ^22.0.0 (flush option type support)

Closes the TODO in write-file.ts:371-385 and edit.ts:487-497.
Ref: #4095 (Phase 1)

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(core): address review comments on atomicWriteFile

- Fix permission window: separate existingMode from desiredMode so
  mode is set during writeFile (not just chmod after), eliminating
  the brief window where tmp file has overly permissive defaults
- Fix broken symlink handling: use lstat+readlink instead of realpath
  to correctly resolve symlinks whose targets don't exist yet,
  preventing the symlink from being replaced by rename
- Add test for writing through a broken symlink

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(core): address wenshao review on atomicWriteFile

- Fix Windows bug: use path.isAbsolute() instead of startsWith('/')
- Hoist path import to top-level static import
- Resolve full symlink chains via loop (handles A→B→C), with
  ELOOP guard at 40 hops matching POSIX SYMLOOP_MAX
- Mask stat.mode with 0o7777 to strip file-type bits
- Document EXDEV fallback atomicity loss in JSDoc
- Add tests for relative symlinks and multi-level symlink chains

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(test): fix CI failures from atomic write changes

- edit.test.ts: mock writeTextFile instead of chmod 444 for write error
  test — atomic write creates tmp file in same dir, so readonly target
  no longer triggers a write error
- atomicFileWrite.test.ts: skip permission tests on Windows — chmod is
  a no-op and stat.mode always returns 0o666

* fix(core): address deepseek review on atomicWriteFile

- Add try/catch around chmod calls to handle FAT/exFAT filesystems
  where POSIX permissions are not supported
- Add explicit type annotation to lstats variable

* fix: restore version numbers to 0.15.11 after rebase

* fix(core): resolve relative symlinks through directory symlinks

resolveSymlinkChain used path.dirname() to resolve relative symlink
targets, which is purely string-based. When intermediate path
components are themselves directory symlinks, the result would be
wrong (e.g. /a/link/file → ../target resolves to /a/target instead
of the kernel-resolved /b/target).

Use fs.realpath() on the parent directory to get the kernel-resolved
base for relative-target resolution.

* fix(test): normalize path separators in directory symlink test

Windows readlink returns native separators (backslashes), causing
the directory-symlink test to fail on Windows CI. Wrap both sides
of the symlink-target comparison with path.normalize.

* refactor(core): dedupe write/chmod logic in atomicWriteFile

- Extract writeOptions construction and tryChmod helper, removing
  duplication between the main write path and the EXDEV fallback
- Document atomicWriteJSON's symlink-preservation behavior

Addresses deepseek review on PR #4096.
2026-05-15 17:52:50 +08:00
ChiGao
41bcdae7d8
fix(core): refresh systemInstruction in setTools() so progressive MCP tools reach the model (#4166)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* fix(core): refresh systemInstruction in setTools() so progressive MCP tools reach the model

Under PR #3994's progressive MCP path, Config.initialize() runs
startChat() BEFORE MCP discovery starts, then kicks discovery off in the
background and re-runs setTools() once it settles. But setTools() only
updated chat.generationConfig.tools — not systemInstruction — and MCP
tools are shouldDefer=true, so they were filtered out of declarations
anyway. The prompt's "Deferred Tools" listing was frozen at the
built-in-only snapshot from the initial startChat(), and the model had
no signal that any MCP tool existed. Headless --prompt runs silently
regressed to built-ins (issue #4163); interactive mode had the same gap
but was masked by retries.

setTools() now rebuilds the system instruction with the up-to-date
deferred summary and re-binds it to the live chat. The eager-reveal
guard for "ToolSearch unavailable + deferred tools present" moves with
it so a freshly-arrived MCP tool in `--exclude-tools tool_search`
sessions still lands in declarations instead of disappearing silently.
Shared with startChat() / refreshSystemInstruction() via a new private
resolveDeferredToolsForSystemPrompt() helper so the three paths cannot
drift apart again.

The legacy synchronous path (QWEN_CODE_LEGACY_MCP_BLOCKING=1) was
incidentally correct because discovery happened before startChat(); it
remains correct.

Test plan:
- packages/core/src/core/client.test.ts — three new cases covering
  newly-arrived MCP tools, already-revealed filtering, and the
  no-ToolSearch eager-reveal path.
- Full client.test.ts (107 tests) green.
- tool-search / skill-manager / agent / mcp-client-manager / AppContainer
  test suites green (callers of setTools()).
- CI integration: integration-tests/cli/simple-mcp-server.test.ts is
  expected to pass on first try without QWEN_CODE_LEGACY_MCP_BLOCKING.

Fixes #4163

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(core): lock in SessionStart preservation across setTools refresh

Adds the regression test chiga0 asked for in the PR #4166 review:
proves that setTools()'s setSystemInstruction-then-reapply pattern keeps
the SessionStart hook's additionalContext intact, so progressive-MCP
refreshes (AppContainer batch flush + the trailing setTools after
waitForMcpReady) don't silently strip hook context from the system
instruction.

Generated by claude-opus-4-7

Co-authored-by: Claude <claude-opus-4-7@anthropic.com>

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: Claude <claude-opus-4-7@anthropic.com>
2026-05-15 17:26:22 +08:00
ChiGao
9d20536343
perf(cli): code-split lowlight to cut startup V8 parse cost (#4070)
* perf(cli): code-split lowlight to cut startup V8 parse cost

Move the syntax-highlight engine out of the synchronously-parsed cli.js
entry into a separately-emitted chunk and load it via dynamic import on
the first code-block render. Until the chunk arrives, code blocks render
as plain text; the next React commit of the surrounding subtree picks up
the highlighted version, so users never see incorrect highlighting –
just an imperceptibly later transition for the very first code block.

Mechanics:
- esbuild config: switch entry to outdir + splitting:true so that
  `await import('lowlight')` produces an actual on-disk chunk that's
  only parsed by V8 when first needed.
- esbuild-shims: rename injected __dirname/__filename to qwen-prefixed
  symbols + use `define` to redirect free references. Previous inject
  collided with vendored libraries (yargs) that ship their own
  `var __dirname` ESM-compat polyfill once splitting flattens chunks.
- prepare-package: include the new chunks/ directory in the published
  package's files list.
- CodeColorizer: keep the public colorize{Code,Line} signatures and HAST
  rendering identical; on first call when the chunk hasn't loaded it
  returns the plain line and fires the dynamic import via a tiny
  standalone loader module.
- lowlightLoader (new): isolates the lazy-load surface to a module with
  zero transitive imports (no themeManager, settings, or core). This
  lets test-setup prime the cache without dragging the whole UI module
  graph into every test file, which was observed to perturb theme and
  settings test outcomes when CodeColorizer was imported directly.
- test-setup: await loadLowlight() once via the standalone loader so
  synchronous snapshot tests see the highlighted output deterministically.

Measurements (real $HOME, n=15 interleaved A/B vs main HEAD, macOS):

| Metric             | Before (mean±sd ms) | After (mean±sd ms) | Δ        | t      | p        |
| ------------------ | ------------------- | ------------------ | -------- | ------ | -------- |
| firstByte (wall)   | 1633.5 ± 88.7       | 1475.8 ± 73.3      | -157.7   | 5.31   | 1.33e-5  |
| idle (wall)        | 2048.7 ± 93.6       | 1902.3 ± 80.2      | -146.3   | 4.60   | 8.71e-5  |
| cli.js size        | 25 MB               | 6.9 MB             | -18.1 MB | —      | —        |

Both metrics clear the +50ms-or-10% Welch's t-test bar by an order of
magnitude. cli.js drops 72%; total payload (cli.js + chunks/) is
similar but only cli.js is parsed at module-eval time, which is the
phase that dominates the user-visible startup gap.

How to validate:
  npm run bundle
  ls dist/                         # cli.js + chunks/lowlight-*.js
  node dist/cli.js -y              # interactive UI still renders

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): resolve chunk-relative sibling paths under esbuild splitting

With `splitting: true`, esbuild hoists modules with shared dependencies
into `dist/chunks/`. Three modules derived runtime paths from
`import.meta.url` assuming they were co-located with `cli.js`; once
hoisted, `path.dirname(fileURLToPath(import.meta.url))` resolved to
`dist/chunks/` and sibling-asset lookups silently missed:

- `skill-manager.ts`: bundledSkillsDir → `dist/chunks/bundled` (actual
  `dist/bundled/`). The `existsSync` guard swallowed the miss, dropping
  all four bundled skills (`/review`, `/qc-helper`, `/batch`, `/loop`)
  with no user-visible signal.
- `ripgrepUtils.ts`: `getBuiltinRipgrep()` → `dist/chunks/vendor/...`.
  Falls back to system rg if installed, otherwise null on minimal
  hosts — degrading grep to the slow internal scanner.
- `i18n/index.ts`: `getBuiltinLocalesDir()` → `dist/chunks/locales`.
  User-visible behavior survives via the static glob import in
  `tryImportBundledTranslations`, but the loose-on-disk override path
  is dead.

Each module now strips a trailing `chunks` segment when present, so
the lookup resolves under `dist/`. In source / transpiled modes the
basename is never `chunks`, so the fallback is a no-op.

Also:
- Add `chunks` to `DIST_REQUIRED_PATHS` in `create-standalone-package.js`
  so a regressed bundle that produces only `cli.js` fails the
  pre-packaging check instead of shipping a broken archive.
- Expand `esbuild-shims.js` header so future contributors understand
  that `__qwen_filename` / `__qwen_dirname` always resolve to the
  shim's chunk file (dist/chunks/) and that sibling-asset lookups
  must strip the `chunks` segment.

Reported by claude-opus-4-7 via Qwen Code /qreview on #4070.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* perf(cli): prefetch lowlight from AppContainer + harden loader

Three follow-ups to the lowlight code-split:

- AppContainer fires `loadLowlight()` from a mount effect so the dynamic
  import is already in flight before any code block needs colorizing.
  Without this, code blocks committed to ink's append-only `<Static>`
  region before the import resolves stay plain text for the rest of
  the session — Static can only be re-rendered via `refreshStatic`,
  which is not wired to lowlight load completion. Common reachable
  paths: short `--prompt -p` runs that finalize quickly, Ctrl+C-
  cancelled first turns, and the first-paint history replay on
  `--resume`. The startup parse-cost win is preserved (V8 still
  parses off the critical path).

- `lowlightLoader.ts` latches the first import failure so subsequent
  calls short-circuit to a rejected promise instead of re-attempting
  `import('lowlight')` on every keystroke. The colorizer already falls
  back to plain text on miss; recovery requires a fresh process anyway.

- `test-setup.ts` wraps the top-level `await loadLowlight()` in
  try/catch. A transient import failure no longer crashes the entire
  vitest run — tests that hit a code block render the plain-text
  fallback and surface a warning.

- `CodeColorizer.tsx` header comment updated to point at the
  AppContainer prefetch instead of claiming first-paint always sees
  a loaded instance.

Reported by DeepSeek/deepseek-v4-pro and claude-opus-4-7 via Qwen Code
/review and /qreview on #4070.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(bundle): extract resolveBundleDir helper, apply to extensions/new

Centralises the `chunks/` strip pattern that three sites
(`i18n/index.ts`, `skills/skill-manager.ts`, `utils/ripgrepUtils.ts`)
each duplicated after the round-3 fix in d581da04d. The implicit
coupling to `esbuild.config.js`'s `chunkNames: 'chunks/[name]-[hash]'`
now lives in a single helper (`packages/core/src/utils/bundlePaths.ts`),
so a future rename only needs updating in one place.

Also applies the same anchor to `commands/extensions/new.ts:EXAMPLES_PATH`.
That module is currently bundled into `cli.js` (so the strip is a no-op
today), but `qwen extensions new --help` always reads the examples
directory in its yargs `builder` — confirmed against the built bundle
that the lookup hits `dist/examples/` (sibling of `cli.js`). Using the
helper future-proofs against esbuild later hoisting the module into a
shared chunk, where the bare `__dirname`/`import.meta.url` lookup would
silently break the command for every end user.

While here, surface lowlight-load failures from `AppContainer`'s
prefetch effect to the debug channel (`debugLogger.warn`) instead of
swallowing them silently. The loader already latches failures
permanently, so this fires at most once per session; `CodeColorizer`
continues to fall back to plain text on miss, so user-visible behaviour
is unchanged.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(bundle): restore __filename shadow in ripgrepUtils; harden lowlight loader

Round-4 review (wenshao 2026-05-13 13:12) flagged five issues in the
recent code-split work. This commit addresses all of them.

CRITICAL — `packages/core/src/utils/ripgrepUtils.ts`: the round-3
`resolveBundleDir` refactor removed the local `__filename` declaration
but `getBuiltinRipgrep` still references bare `__filename` to decide
how many `..` segments to walk. In `npm run dev` (tsx, ESM) `__filename`
is undefined so the function throws `ReferenceError`. In the bundle
esbuild's `define` rewrites it to `__qwen_filename` (the shim chunk
path), which is the wrong string but happens to short-circuit to
`levelsUp = 0` — accidentally correct only because the chunk-path
string never contains `path.join('src', 'utils')`. Reproduced via tsx:
`__filename is not defined`; fixed by re-introducing the explicit
local shadow plus a comment explaining why centralising both helpers
into `resolveBundleDir` cannot replace the per-file shadow.

`packages/cli/src/ui/utils/lowlightLoader.ts`: the previous permanent
`lowlightFailed` latch left syntax highlighting dead for the entire
process lifetime on transient errors (EMFILE, antivirus locks,
slow-disk-after-wake). Replaced with a 30-second cooldown — within the
window subsequent calls return the cached rejection synchronously
(keeps the per-render short-circuit that protects against
permanently-broken installs); after the cooldown the next call retries
the dynamic import. Exposes `isLowlightCoolingDown()` so render-hot
callers can also skip duplicate failure logging.

`packages/cli/src/ui/utils/CodeColorizer.tsx`: hoisted
`loadLowlight()` + log out of the per-line render loop into a single
`ensureLowlightLoading()` call at the top of `colorizeCode`. In the
failure case this collapses hundreds of duplicate debug entries (one
per line) to one per block. The instance is now passed down to
`highlightAndRenderLine` as a parameter.

`packages/core/src/utils/bundlePaths.ts` + `esbuild.config.js`:
exposed `BUNDLE_CHUNK_DIR = 'chunks'` as a named constant and updated
`esbuild.config.js` to interpolate the same name into `chunkNames`
(plus an explicit "MUST stay in sync" comment). Renaming on one side
without the other now stands out at review time. Also expanded the
`define` comment with a contributor-facing warning describing exactly
why bare `__dirname` / `__filename` in source files becomes the shim
chunk path, and pointing future contributors at the
`fileURLToPath(import.meta.url)` shadow pattern (and
`resolveBundleDir` for sibling-asset lookups).

Verified:
- typecheck (all 4 workspaces): clean
- packages/core tests: 7747 passing (no regressions)
- packages/cli tests: only the pre-existing `useAtCompletion.test.ts`
  filesystem-order failures remain (confirmed against `git stash`)
- `npm run bundle` succeeds; `node dist/cli.js --version` returns
  `0.15.10`; `node dist/cli.js --help` renders normally
- `npx tsx <call getBuiltinRipgrep>` now returns the vendored path
  instead of throwing `ReferenceError`

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(bundle): validate lowlight API shape; sync doc-comment drift; add tests

- lowlightLoader: validate runtime shape of createLowlight() before the
  `as Lowlight` cast so an upstream API rename routes through the cooldown
  latch instead of silently degrading every code block to plain text.
- bundlePaths: correct doc comment — esbuild.config.js maintains its own
  `BUNDLE_CHUNK_DIR` constant rather than importing this one (it runs
  before any TS compile step).
- AppContainer: update prefetch-failure comment to reference the cooldown
  symbols (`LOWLIGHT_RETRY_COOLDOWN_MS` / `lowlightLastFailureAt`) that
  replaced the removed `lowlightFailed` latch.
- New unit tests covering the lowlightLoader state machine (success,
  in-flight dedup, shape mismatch, cooldown skip, post-cooldown retry)
  and `resolveBundleDir`'s strip-only-on-exact-match contract.

* test(bundlePaths): use path.resolve for Windows-compatible absolute paths

CI failure on Windows: the new `resolveBundleDir` tests built expected
values with `path.join(path.sep, ...)` (e.g. `\tmp\dist`), but
`pathToFileURL` resolves drive-less paths against the current drive
on Windows. The URL -> `fileURLToPath` round-trip returned `D:\tmp\dist`,
while the expectation stayed `\tmp\dist`, tripping all three new
assertions.

Switched both the URL source and the expected value to a single
`path.resolve(path.sep, ...)` anchor per test so both sides absorb
whatever the platform considers absolute. POSIX behaviour is unchanged
(`/tmp/dist` -> `/tmp/dist`).

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-15 17:26:18 +08:00
DennisYu07
57282ebb7d
feat(hooks): add prompt hook type with LLM evaluation support (#3388)
* implement prompt hook

* resolve comment

* resolve comment

* resolve comment

* resolve comment

* fix unit test
2026-05-15 17:13:05 +08:00
DennisYu07
1c529e4f0a
feat(hooks): Add TodoCreated and TodoCompleted hooks for todo lifecycle events (#3378)
* add TaskCreated and TaskCompleted

* resolve comment

* resolve lint

* change merge logic from simple to or

* resolve lint error

* reslove commnent

* fix i18n key mismatch and malformed imports

* resolve comment
2026-05-15 15:51:01 +08:00
DennisYu07
7c2b51d28e
fix(hooks): inject SessionStart additionalContext into chat context (#4115)
* inject addContext for SessionStart

* resolve comment

* resolve comment

* resolve comment

* fix comment

* unfiy function and resolve comment

* resolve comment
2026-05-15 15:21:25 +08:00
Tyler
4c18f13051
feat(core): add image+video support for Qwen3.6-35B-A3B quant variants (#4106)
Add modality pattern for qwen3.6-35b model names, enabling image and
video input for locally-hosted Qwen3.6-35B-A3B models (e.g. SGLang's
default model name: Qwen3.6-35B-A3B-NVFP4). Previously these fell
through to the text-only catch-all, blocking all image content.

Co-authored-by: Tyler <tyler@dinsmoor.us>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-15 15:11:12 +08:00
tanzhenxin
cc800d0132
fix(core): support cross-auth fast side queries (#4117)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* fix(core): support cross-auth fast side queries

* refactor(core): hoist resolveForModel selector and refresh side-query docs

Compute the model selector once at the top of `resolveForModel` and pass
it through to `createContentGeneratorForModel` and
`resolveModelAcrossAuthTypes`. This eliminates the redundant selector
resolution that happened up to five times per cross-auth side query
(once per call, plus once inside each downstream helper).

Also update the JSDoc for `SideQueryJsonOptions.model` and
`SideQueryTextOptions.model` to reflect the actual fallback chain
(`getFastModelForSideQuery` → `getFastModel` → `getModel` →
`DEFAULT_QWEN_MODEL`) introduced in this PR.
2026-05-14 19:22:12 +08:00
tanzhenxin
85c10c1619
fix(core): correct context-usage Footer for prompt size and Anthropic caches (#4109)
* fix(core): include cache_creation_input_tokens in Anthropic prompt accounting

Anthropic reports the prompt across three mutually-exclusive fields —
input_tokens, cache_read_input_tokens, cache_creation_input_tokens —
but the adapter only summed input + cache_read, dropping the
cache_creation bucket. On a fresh session that wrote the system prompt
to cache, the reported promptTokenCount was off by the cache-creation
amount.

Extract the normalization into a shared helper used by both streaming
and non-streaming paths, and add a guard for non-conforming providers
that expose the Anthropic protocol but follow OpenAI-style accounting
(input_tokens already covers the cache fields). When input_tokens is at
least as large as both cache fields and at least one cache field is
non-zero, trust input_tokens alone so we don't double-count.

* fix(core): prefer promptTokenCount over totalTokenCount for context display

The Footer's "context used" indicator is meant to track prompt size —
how much of the context window the next request will carry. The current
code preferred totalTokenCount (= prompt + output), so output tokens
generated in the in-flight round were double-counted. Across turns this
caused the % bar to oscillate non-monotonically: it could *decrease*
between turns whenever the prior round's output was large.

Flip the preference at every consumer site that drives the live counter:
the per-stream-chunk update in the main chat, the per-round update in
the subagent runtime (which drives auto-compaction), the session-resume
walk, and the in-process agent panel's listener. Producer sites that
expose total for billing/export are left unchanged.

* fix(core): use cache_creation as the discriminator in Anthropic usage normalization

The previous guard fell back to "input alone" whenever input_tokens was
at least as large as both cache fields. In a real Anthropic conversation
input_tokens grows past cache_creation_input_tokens as history
accumulates, so the guard inevitably mis-classified every later turn as
OpenAI-style and silently dropped the cache_creation portion from the
displayed prompt size. The Footer would show a one-shot drop at the
crossover point and then keep under-reporting by ~32k tokens.

cache_creation_input_tokens is unique to Anthropic's protocol (OpenAI
has no equivalent), so its presence is a strong signal the response
follows real Anthropic semantics. Use that as the primary discriminator
and only fall back to "input alone" when cache_creation is zero, cache
reads are reported, and input already covers them — the actual OpenAI-
on-Anthropic case the guard was meant to catch.

Adds a regression test that locks in the crossover scenario.

* chore(core): address PR review — restore isFinite guard and cover cache-field plumbing

- Restore the `isFinite` guard on `lastPromptTokenCount`: the previous
  `if (contextTok)` relaxation accepted `Infinity` (truthy), which a
  malformed provider response could otherwise latch and poison the
  downstream compaction math.
- Add unit coverage for the cache-field plumbing the PR introduced:
  - usage.ts: real-Anthropic warm-turn case where `cache_read > 0` and
    `cache_creation > 0` simultaneously (mid-conversation breakpoint
    advance over an already-cached prefix).
  - converter.ts: `convertAnthropicResponseToGemini` now exercised with
    all three prompt buckets present to confirm both cache fields are
    forwarded to `usageMetadata`.
  - anthropicContentGenerator.ts: streaming pipeline test that includes
    `cache_creation_input_tokens` in `message_start` and asserts the
    accumulated `usageMetadata` carries it through to the final chunk.
2026-05-14 19:21:39 +08:00
顾盼
609e05baee
feat(tools): add generic worktree support — EnterWorktree/ExitWorktree + Agent isolation (#4073)
* feat(tools): add generic worktree support (Phase A + B of #4056)

Adds first-class git worktree as a general-purpose capability:

Phase A — User-facing tools
- enter_worktree: creates `<projectRoot>/.qwen/worktrees/<slug>` on a
  `worktree-<slug>` branch and returns the absolute path. Slug auto-generated
  when omitted; validated against path traversal and disallowed characters.
- exit_worktree: keeps or removes the worktree (and its branch). Refuses to
  remove a worktree with uncommitted tracked changes or untracked files
  unless `discard_changes: true` is set.

Phase B — Agent isolation
- Agent tool gains an `isolation: 'worktree'` parameter that provisions a
  temporary `agent-<7hex>` worktree, prepends a worktree notice to the task
  prompt, and on completion either removes the worktree (no changes) or
  preserves it and reports its path/branch in the result. Background and
  foreground execution paths both wired up; rejected for fork agents.
- worktreeCleanup.cleanupStaleAgentWorktrees: fail-closed sweep for
  ephemeral `agent-<7hex>` worktrees older than 30 days with no tracked
  changes and no unpushed commits. User-named worktrees are never swept.
- buildWorktreeNotice helper for fork subagents (parity with claude-code).

Arena compatibility
- The existing Arena worktree implementation (GitWorktreeService.setupWorktrees,
  ArenaManager, agents.arena.worktreeBaseDir) is untouched. Arena uses its
  own batch APIs and `~/.qwen/arena` base dir; the new general-purpose APIs
  live alongside under `<projectRoot>/.qwen/worktrees/`.

Subagent safety
- enter_worktree / exit_worktree are added to EXCLUDED_TOOLS_FOR_SUBAGENTS
  so a subagent cannot mutate the parent session's worktree state.

Refs #4056

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* test(worktree): use path.join in expected paths so the test passes on Windows

The Windows CI run reported `enter-worktree.test.ts` failing because the
expected string was hardcoded with `/` while `getUserWorktreesDir()` uses
`path.join`, which returns `\\` on Windows. Build the expected path via
`path.join` so the platform-correct separator is compared.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(enter-worktree): treat empty name as auto-generate

Some models pass `{ "name": "" }` when calling EnterWorktree, because the
schema marks `name` as optional and they emit an empty placeholder. The
previous validation rejected the empty string with "Worktree name must be
a non-empty string", which surprised users running the auto-slug path.

Now both `validateToolParams` and `execute` treat `name: ""` as equivalent
to `name: undefined` and fall back to the auto-generated `{adj}-{noun}-{4hex}`
slug. Explicit invalid slugs (`'../etc'`, `'a/b'`, etc.) are still rejected
as before.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review findings 1-6 from PR #4073

Six issues raised on the initial review; each addressed with a verifiable
guarantee.

1. Real isolation for `agent isolation: 'worktree'`
   Before: subagent's Config still resolved `getTargetDir()` to the parent
   project root, so Edit/Write/Read workspace checks and Shell's default cwd
   silently operated on the parent tree. The cleanup helper then saw a
   "clean" worktree and removed it — destroying the evidence.
   After: the worktree is provisioned BEFORE `createApprovalModeOverride`,
   and the resulting agent Config has `getTargetDir`/`getCwd`/`getWorkingDir`
   rebound to the worktree path. Relative paths, unqualified shell
   commands, and glob/grep roots all confine to the worktree.

2. `exit_worktree action='remove'` now prompts in default/auto-edit modes
   Added `getDefaultPermission()` on the invocation: `'ask'` when action is
   `remove`, `'allow'` when `keep`. Brings it in line with edit, write_file,
   and run_shell_command.

3. Force-delete no longer silently destroys unpushed commits
   `removeUserWorktree` now uses `git branch -d` (refuses unmerged) by
   default and surfaces `branchPreserved: true` when git refuses. Added
   `hasUnmergedWorktreeCommits` (checks if branch tip is reachable from any
   other local branch or remote ref). Both the agent isolation cleanup and
   `exit_worktree action='remove'` use this check: if the branch has work
   not covered elsewhere, the worktree+branch are preserved even when
   `discard_changes: true` is set (there is no `discard_commits` flag —
   committed work is rarely what `remove` means to discard).

4. Both new tools are now deferred behind ToolSearch
   `shouldDefer: true` + `searchHint` on both. Verified via openai-logging:
   `enter_worktree` and `exit_worktree` no longer appear in the function-
   declaration list sent on every API request.

5. Stale-worktree cleanup is wired in
   `Config.initialize()` fires `cleanupStaleAgentWorktrees(targetDir)` as a
   non-awaited startup sweep (skipped in bare mode). Picks up orphaned
   `agent-<7hex>` worktrees left by crashed runs.

6. Foreground isolation no longer leaks on uncaught throw
   The foreground try block tracks whether the cleanup helper ran on the
   success path; the finally block invokes it as a fallback when the try
   bailed early. Mirrors the background path's pattern.

Verification:
- Unit tests: 83 passed (16 worktree + 64 existing agent + 3 cleanup) — no
  regressions.
- E2E #1: agent told to write `hello.txt` via RELATIVE path — file landed
  at `.qwen/worktrees/agent-XXXXXXX/hello.txt`, NOT at the parent root.
- E2E #3: created worktree, committed work inside it, called exit_worktree
  with `discard_changes=true` — refused with clear message; worktree and
  branch both preserved.
- E2E #4: openai-logging confirms worktree tools absent from API tool list
  (7 tools sent instead of 9).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review round 2 findings (1 from tanzhenxin, 7+8 from wenshao)

The first round closed the data-loss-class issues. This round addresses
follow-ups from a deeper audit:

1. Stale-worktree sweep was inert on common-case repos
   `cleanupStaleAgentWorktrees` previously ran `git log --branches --not
   --remotes --oneline` from each worktree's directory — that lists
   unpushed commits across EVERY local branch, not just the worktree's
   own branch. On any repo with no remote configured (or with stray
   unpushed branches), the sweep refused to remove every candidate.
   Replaced with `service.hasUnmergedWorktreeCommits(slug)` which scopes
   the check to the worktree branch via `for-each-ref --contains <tip>`.
   Also added the `branchPreserved` warn log requested in M7 and an
   `fs.access` shortcut for the empty-worktrees-dir case (M8).

2. `cleanupWorktreeIsolation` and `worktreeIsolation` were inside the
   inner try (~660 lines from the outer catch). Hoisted both to the top
   of `execute()` so the outer catch can reap or preserve the worktree
   when anything between provisioning and the inner try throws (e.g.
   `createApprovalModeOverride`, agent creation). Closure carries the
   resolved `repoRoot` so cleanup never has to re-resolve.

3. Background error path discarded the cleanup result. Now captures
   `formatWorktreeSuffix(...)` and appends it to the registry's failure
   /cancel message, so users see the preserved path/branch even when
   the agent crashed before reporting.

4. `cleanupWorktreeIsolation` now treats `result.success === false` as
   "worktree still on disk" and surfaces it as preserved instead of
   silently dropping it from the result.

5. Override was incomplete. Several Config methods read `this.targetDir`
   directly (`getProjectRoot`, `getFileService`, etc.) — own-property
   getter overrides did not redirect them. Now also shadows `targetDir`
   and `cwd` as own properties on the agent's Config override, swaps in
   a `FileDiscoveryService` rooted at the worktree, and rebuilds
   `WorkspaceContext` to point at the worktree only. Verified
   end-to-end: shell `pwd > pwd-record.txt` (no directory arg) lands at
   `.qwen/worktrees/agent-<7hex>/pwd-record.txt`, not the parent root.

6. monorepo subdir issue. Both `enter_worktree` and the agent isolation
   path now resolve `git rev-parse --show-toplevel` first and anchor
   `.qwen/worktrees/<slug>` at the repo root. Worktrees created from
   any subdirectory now end up where the startup sweep can find them.

7. Replaced `git worktree add -B` (silent force-reset of pre-existing
   branches) with `git worktree add -b` plus an explicit existence
   check via `git for-each-ref` (NOT `show-ref --quiet`, which
   simple-git swallows). Pre-existing `worktree-<slug>` branches now
   trigger a clear error instead of clobbering committed work.

8. First worktree creation in a repo writes `<projectRoot>/.qwen/.gitignore`
   with `worktrees/` so worktree contents stay out of the parent's
   `git status`, glob/grep results, and bundle tools. Idempotent: never
   overwrites an existing file.

9. Logging across the failure paths (`enter_worktree` errors,
   `agent.ts:failWorktreeProvisioning`, `cleanupWorktreeIsolation`,
   `hasUnmergedWorktreeCommits` swallowed errors,
   `cleanupStaleAgentWorktrees`'s `branchPreserved` race).

10. `exit_worktree` no longer suggests `discard_changes: true` when the
    git status check itself fails — that would be advising the user to
    bypass a safety check whose precondition is unknown. Now points at
    the underlying repo problem.

11. `generateAutoSlug` switched from `Math.random()` (4 hex, weak RNG,
    one-in-65k collision) to `randomBytes` (6 hex, ~16M combinations).
    Two RNG sources in this file collapsed to one.

Pushed back: the TOCTOU swap in `removeUserWorktree` (S6 round 1) is
left as-is — `git branch -d` is the real safety, and reordering does
not eliminate the window. Windows reserved-name validation (M5 round 2)
deferred to a follow-up; the current allowlist already rejects path
separators, `..`, leading dot/dash, and the >64-char case.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): use randomInt to silence CodeQL biased-modulo finding

CodeQL's `js/biased-cryptographic-random` flagged
`randomBytes(4)[i] % ARRAY.length` in `generateAutoSlug`. The math is
actually exact for the current word-list lengths (256 % 8 == 0), but
the lint rule does not know that — and a future contributor changing
the list to a non-power-of-two length would silently introduce bias.

Switched the index lookups to `crypto.randomInt(0, length)`, which uses
rejection sampling and is uniform by construction. Suffix still uses
`randomBytes(3).toString('hex')` since hex encoding is unbiased.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review round 3 findings 1-6 from PR #4073

The previous round added `getRepoTopLevel` for `enter_worktree`'s
provisioning, but missed three sibling call sites that still used the
raw cwd. The double-cleanup race in the foreground path also leaked
stale `[worktree preserved]` suffixes on rejected promises. All six
findings from the deeper audit are addressed:

1. exit_worktree now resolves through `getRepoTopLevel()` before
   building its `GitWorktreeService`, mirroring `enter_worktree`. Without
   this, launching `qwen` from a monorepo subdirectory created the
   worktree under the repo root but exit_worktree looked under the
   subdir's `.qwen/worktrees/` and always returned "Worktree not found".
   Verified end-to-end: enter + exit from `packages/core/` works.

2. agent.ts cleanup helper now nulls `worktreeIsolation` immediately
   after capturing the closure value. The previous structure could
   reach the helper twice — once in the foreground try's success path
   and once in the foreground finally fallback (or once in the inner
   try and once in the outer catch on a thrown rejection). The second
   call would `hasWorktreeChanges()` against a directory the first
   call already removed, fail-closed, and emit a bogus
   `[worktree preserved: <missing path>]` suffix.

3. Config.initialize's startup sweep now resolves `getRepoTopLevel()`
   before invoking `cleanupStaleAgentWorktrees`. Without this, every
   subdir launch scanned a non-existent `<subdir>/.qwen/worktrees/`
   and the 30-day expiry sweep was permanently a no-op.

4. agent.ts's `buildWorktreeNotice` now passes
   `worktreeIsolation.repoRoot` as `parentCwd` instead of
   `this.config.getTargetDir()`. The notice's path-translation
   guidance (≈ "translate paths from <parent> to <worktree>") would
   otherwise misdirect the subagent in a monorepo subdir launch.

5. Removed dead method `GitWorktreeService.listUserWorktrees`. It had
   no callers anywhere in the codebase and used `execSync` in a loop
   (would have blocked the event loop if anyone wired it up).

6. `localBranchExists` no longer swallows git failures silently. The
   defensive `false` default is preserved (so `git worktree add -b`
   itself surfaces the conflict if the check missed an existing
   branch), but the catch now logs via `debugLogger.warn` so disk-full
   / permission / ref-store-corruption cases are visible in debug
   output instead of being invisible.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review round 4 findings (data-loss + visibility)

Seven actionable findings from a deeper audit, all closed:

1. User worktree slugs could collide with ephemeral-agent shape
   `validateUserWorktreeSlug` did not reject names starting with
   `agent-`, so a user-named `agent-1234567` matched the cleanup regex
   `/^agent-[0-9a-f]{7}$/` and would be silently swept after 30 days
   along with whatever work was in it. Now reserved — clear error
   message points users at the cause.

2. Slug producer and consumer were string-coupled across files
   `agent.ts` hardcoded `agent-${hex(7)}` and `worktreeCleanup.ts`
   independently hardcoded `/^agent-[0-9a-f]{7}$/`. Future change to
   hex length on one side would silently break the other. Lifted
   `AGENT_WORKTREE_PREFIX`, `AGENT_WORKTREE_HEX_LENGTH`,
   `AGENT_WORKTREE_SLUG_PATTERN`, and `generateAgentWorktreeSlug()` to
   `gitWorktreeService.ts`; both call sites import them.

3. Startup sweep was invisible at default log level
   Fire-and-forget sweep used `debug` for errors and discarded the
   success count. A leak-chasing operator had no log breadcrumb.
   Errors promoted to `warn`; successful removals (count > 0) logged
   at `info`.

4. `getRepoTopLevel()` silent catch
   Returned `null` on any git failure with no log. Combined with
   `?? cwd` fallback in callers, a flaky git would have made worktree
   creators and the startup sweep disagree silently about which dir to
   use. Now logs the underlying error.

5. `hasTrackedChanges()` silent catch
   Cleanup's fail-closed `return true` had no log. Couldn't tell
   "has real changes — leave alone" from "git index unreadable — repo
   may be corrupt". Now logs.

6. `cleanupWorktreeIsolation` claimed `preservedPath` for a removed dir
   When `removeUserWorktree` returns `{ success: true, branchPreserved:
   true }` it has already deleted the directory and failed only on
   `git branch -d`. The helper still reported the (now non-existent)
   path as preserved. Now returns only `preservedBranch` for that
   case; `formatWorktreeSuffix` emits a distinct message instructing
   recovery via `git worktree add <new-path> <branch>`.

7. `removeUserWorktree` swallowed branch-delete failures
   Both `-d` and `-D` catch blocks were empty. Locked refs, perms,
   disk full all looked identical to "unmerged commits". Both now
   `debugLogger.warn` with the underlying error.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(worktree): self-review pass — reuse, parallelism, dead code

Self-review caught a handful of issues across three categories:

Reuse:
- `pathExists` in the new code now uses the existing `fileExists` from
  `utils/fileUtils.ts` instead of duplicating an `fs.access` wrapper.
- `worktree-` branch prefix was string-literalled in five places. Added
  `WORKTREE_BRANCH_PREFIX` and `worktreeBranchForSlug(slug)` exports in
  `gitWorktreeService.ts`; updated `gitWorktreeService.ts`,
  `worktreeCleanup.ts`, and `exit-worktree.ts` to use them. Future
  prefix changes are a single edit.

Efficiency:
- `Config.initialize` used two `await import(...)` calls inside the
  startup-sweep IIFE, paying that cost on every CLI start. Switched to
  static imports at the top of `config.ts` — the modules are tiny and
  the dynamic indirection bought nothing.
- `cleanupWorktreeIsolation` in `agent.ts` ran `hasWorktreeChanges` and
  `hasUnmergedWorktreeCommits` sequentially. They have no data
  dependency on each other and each spawns its own `git` invocation;
  `Promise.all` halves the cleanup wall-clock on the common path.
  Same fix in `worktreeCleanup.ts`'s per-entry loop.
- `ensureWorktreesGitignored` used `fs.access` then `fs.writeFile`, a
  TOCTOU race when two agent invocations created worktrees concurrently
  (both could pass the `access` check and the second would clobber the
  first's `.gitignore`). Now writes with `flag: 'wx'` and treats
  `EEXIST` as the no-op case — atomic in one syscall.

Quality:
- Dropped the `worktreeCleanupRan` boolean in the foreground execution
  path. `cleanupWorktreeIsolation` already nulls its closure variable
  at the top of every call (see the comment at its definition), so
  re-entries are no-ops. The boolean and its tracking were dead weight
  that obscured the real guard.
- Trimmed the Phase-2 override comment block to drop the WHAT-stating
  enumerations (items 3 and 4 just narrated the lines below) and
  removed a navigation comment about hoisted helpers — the helpers are
  visible at the top of the same method.

84 unit tests pass; typecheck clean.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review round 5 — design-doc commitments + correctness

Five critical findings + four suggestions, all closed.

Critical:
1. Wrong base branch for agent isolation. `createUserWorktree(slug)` with
   no `baseBranch` arg fell back to `getCurrentBranch()` on the **main**
   working tree, returning `main` regardless of which branch the user
   was actually on. A subagent invoked from `feature-x` would silently
   start from `main` and produce diffs against the wrong baseline.
   `enter_worktree` had the same bug. Both now resolve the parent's
   current branch first and pass it explicitly. Verified end-to-end:
   `git checkout feature-x` → `enter_worktree` → worktree HEAD includes
   the feature-x commit.

2. `countWorktreeChanges` (used by `exit_worktree`'s dirty-state guard)
   missed `status.conflicted[]`. In simple-git that array is mutually
   exclusive with the staged/modified/etc. arrays, so a worktree
   mid-merge with only conflicts looked `{tracked: 0, untracked: 0}`
   to the guard and `action='remove'` would proceed without
   `discard_changes: true`. Added `+ status.conflicted.length`.

3. `exit_worktree` had no session-ownership check, contradicting the
   design doc's "only operates on worktrees created by THIS session".
   In yolo mode a prompt injection could enumerate `.qwen/worktrees/`
   and pass any name to drop another session's work. Now:
   `enter_worktree` and agent isolation write a `.qwen-session`
   marker into the worktree at provisioning time; `exit_worktree
   action='remove'` reads it and refuses if it does not match the
   current `Config.getSessionId()`. Worktrees from before this guard
   (no marker file) are treated as "owner unknown" — allowed with a
   warn log so the change is observable.

4. `enter_worktree` did not refuse nested invocations from inside an
   existing worktree, contradicting the design doc. Now rejects any
   cwd containing `.qwen/worktrees/` as a path component, with a
   clear "Already inside a git worktree…" message. Verified: enter
   from inside a worktree returns is_error with that text.

6. `hasTrackedChanges` (cleanup sweep) had the same `conflicted[]`
   gap. Rewrote to use raw `git status --porcelain --untracked-files=no`
   which lists every tracked change including `UU` conflict markers
   in a single git call and explicitly skips the untracked walk
   (the prior comment claimed to skip it, but `status()` always
   does the scan).

Suggestion:
7. `buildWorktreeNotice` now receives the parent agent's actual
   `getTargetDir()` again (was switched to `repoRoot` in round 3 on
   a different reviewer's suggestion; round-5 caught that the model's
   inherited paths reference the parent's cwd, not necessarily the
   repo root, so the prior behaviour was correct).

8. Startup sweep now does `fs.access(<targetDir>/.qwen/worktrees)`
   *before* importing GitWorktreeService and spawning `git
   rev-parse --show-toplevel`. The git probe is reserved for users
   who actually have a worktrees directory locally — 99% of users
   pay only one syscall on startup.

9. Tests:
   - New `exit-worktree.test.ts` covers metadata, validation,
     `getDefaultPermission` (ask vs allow), and getDescription.
   - `agent.test.ts` adds three `validateToolParams` cases for the
     `isolation` parameter (accepted with subagent_type, rejected
     without, rejected for non-"worktree" values).
   - `enter-worktree.test.ts` adds round-trip tests for
     `writeWorktreeSessionMarker` / `readWorktreeSessionMarker` plus
     a `worktreeBranchForSlug` sanity check.
   - Total: 101 tests pass (was 86 → +15).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(test): drop unused @ts-expect-error in exit-worktree.test.ts

Empty string `''` is a valid `string` type, so the @ts-expect-error
directive on `validateToolParams({ name: '', action: 'keep' })` did
nothing — TypeScript correctly accepted the line, and `tsc --build`
in CI reported TS2578 ("Unused '@ts-expect-error' directive"). The
runtime assertion already covers the case; the directive was leftover
from an earlier draft.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(test): use importActual in ArenaManager mock to preserve new exports

The Arena test mocks `gitWorktreeService.js` with a factory that
returns only `{ GitWorktreeService }`. PR #4073 added several other
exports to that module (`AGENT_WORKTREE_SLUG_PATTERN`,
`WORKTREE_BRANCH_PREFIX`, `worktreeBranchForSlug`,
`generateAgentWorktreeSlug`, `writeWorktreeSessionMarker`,
`readWorktreeSessionMarker`, `WORKTREE_SESSION_FILE`).

Other modules in the dep graph reach the mocked surface — most
notably `worktreeCleanup.ts` imports `AGENT_WORKTREE_SLUG_PATTERN`
and `worktreeBranchForSlug`, and now reaches the mock via the static
`config.ts` → `worktreeCleanup.ts` import chain added in the
self-review pass. The Arena test failed at module-load with:

  Caused by: Error: [vitest] No "AGENT_WORKTREE_SLUG_PATTERN" export
  is defined on the "../../services/gitWorktreeService.js" mock. Did
  you forget to return it from "vi.mock"?

Use `importOriginal` to capture every real export, spread it into
the return object, and only replace `GitWorktreeService` (the class
the test actually needs to mock). The class-level mock keeps its
existing static-method shims.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): address review round 6 (5 critical + 6 suggestions)

The biggest item — #1 — is a self-inflicted regression from round 5:
the new agent- prefix reservation in `validateUserWorktreeSlug`
rejected EVERY slug that `generateAgentWorktreeSlug` produces, since
that helper emits exactly `agent-<7hex>`. Net effect: every
`AgentTool isolation: 'worktree'` invocation failed at validation.
The reservation now allows the canonical pattern through (everything
the helper can produce) and only rejects user-chosen `agent-*` names
that don't match it. Added a round-trip regression guard: 50
`generateAgentWorktreeSlug()` outputs are fed back through
`validateUserWorktreeSlug` and must all pass.

Other critical fixes:

2. `hasWorktreeChanges` (used by agent isolation cleanup) was the
   one remaining caller relying solely on `status.isClean()`.
   Defensive `|| status.conflicted.length > 0` so a future simple-git
   bookkeeping change can't let a mid-merge worktree appear clean and
   get auto-deleted.

3. `readWorktreeSessionMarker` swallowed every I/O error as "marker
   missing", which let a disk error / EACCES silently bypass the
   session-ownership guard. ENOENT is still treated as missing
   (legitimate); every other code now logs.

4. `exit_worktree` `fs.stat` catch was the same shape — every error
   collapsed to "Worktree not found". ENOENT → not found; everything
   else logs and returns a distinct "cannot access" error.

5. `cleanupStaleAgentWorktrees` `fs.stat` catch was again the same.
   ENOENT → silently skip (entry vanished between readdir and stat);
   everything else logs.

Suggestions:

6. Startup sweep fast-bail was running BEFORE resolving the repo
   top-level. For monorepo subdir launches, `targetDir/.qwen/worktrees`
   never exists and the sweep early-returned — permanently a no-op.
   Now resolves the root first, then fast-bails against the resolved
   `<root>/.qwen/worktrees`. Also logs the skip case so operators can
   tell "skipped" from "ran, found nothing".

7. `.qwen-session` marker was visible to `git add -A` inside the
   worktree. Now writes a `.git/info/exclude` rule (resolved via
   `git rev-parse --git-dir`, since worktree `.git` is a file
   pointing at the parent repo's `.git/worktrees/<name>/`).
   Best-effort: failure to write the rule does not abort
   provisioning.

8. Agent isolation now refuses to provision when the parent's cwd is
   already inside a worktree — same regex guard as `enter_worktree`.

9. `exit_worktree`'s wrapper around `hasUnmergedWorktreeCommits` now
   logs at the call site so the chain (caller → reason it asked →
   underlying git error) is complete in operator logs.

10. Sweep now logs unconditionally at `info`. Three distinct messages:
    "skipped (no worktrees dir)", "ran, nothing to remove", "removed N".

Tests:

11. New `execute()` coverage:
    • exit-worktree: session-ownership refusal, keep happy path,
      legacy/no-marker fallthrough with warn log, missing-worktree
      error, unmerged-commits guard with `discard_changes: true`,
      `writeWorktreeSessionMarker` round-trip.
    • enter-worktree: nested-guard rejection, non-git-repo error.
    These spin up real temp git repos (no filesystem mocking) and
    drive the actual tool invocation pipeline.

   Total: 135 tests pass (was 101 → +34).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(worktree): demote noise startup-sweep logs to debug

Self-review pass applying the round-6 review-triage framework
(filter #5: "If a log only fires on the happy path, it's noise.")
to my own round-6 changes:

- "Stale worktree sweep skipped: <dir> does not exist" — fires on
  every CLI start for ~99% of users who never use worktrees.
- "Stale worktree sweep ran under <root>: nothing to remove" —
  fires on every CLI start for users who have any worktrees but
  no stale ones at the moment.

Both are happy-path noise at `info`. Demoted to `debug` so an
operator can opt in via `--debug` when they want to confirm the
sweep is wired up, but normal output stays clean.

Only the actually-actionable case ("removed N worktrees") stays at
`info` — that's the signal someone chasing a worktree leak would
grep for.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(worktree): close AUTO_EDIT bypass + parent-dirty stale-code hazard

Round-7 review caught two correctness gaps:

1. exit_worktree action='remove' was still auto-approved in AUTO_EDIT
   `getDefaultPermission` returning 'ask' is necessary but not
   sufficient. `permissionFlow.isAutoEditApproved` auto-approves any
   tool whose `confirmationDetails.type` is 'edit' OR 'info', and
   `BaseToolInvocation` returns 'info' by default. So a session in
   AUTO_EDIT could silently destroy a worktree (with branch deletion)
   without a confirmation prompt — the data-loss path the round-1
   `'ask'` switch was meant to close. Now overrides
   `getConfirmationDetails` to return `type: 'exec'` for action=remove,
   which keeps the prompt in AUTO_EDIT. The `keep` action still falls
   through to the base info-type since it is non-destructive.

   Regression-guard test asserts the type is 'exec' (not 'info') for
   remove and that the command field describes both the worktree-remove
   and branch-delete operations.

2. Agent isolation worktrees ran against parent's HEAD, not its
   working tree
   `git worktree add -b <branch> <path> <base>` only checks out the
   base ref's tip — uncommitted edits in the parent's working tree do
   NOT propagate. The "edit code → ask review/test agent before
   committing" workflow silently ran the subagent against the
   pre-edit HEAD and returned results that looked authoritative but
   reflected stale code.

   Reviewer offered two options: overlay parent's dirty state à la
   Arena (~50 LOC, edge cases), or refuse isolation when parent is
   dirty (~10 LOC, clear UX). Chose the latter for Phase B scope —
   simpler, decisive, and matches the design-doc's explicit
   commitment that dirty-state overlay is Arena-specific. Users can
   commit/stash before re-invoking agent isolation; overlay can be a
   follow-up if users complain about the friction.

   Fail-closed on the dirty-check itself (assume dirty rather than
   silently launch on a possibly-stale tree).

   Test exercises both "dirty parent → guard fires" and
   "clean parent → guard passes" against real temp git repos.

139 unit tests pass (was 135, +4 regression guards).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-14 18:00:30 +08:00
ChiGao
a86404e9ea
fix(cli): apply /language output to running session without restart (#4143)
* fix(cli): apply /language output to running session without restart

`/language output <lang>` wrote ~/.qwen/output-language.md and persisted
the setting, but the rule reaches the model through the system instruction
which is bound once when GeminiChat is created at startup. The command
therefore had to tell the user "Please restart the application for the
changes to take effect."

Refresh hierarchical memory after writing the rule file so userMemory
re-reads output-language.md, then rebuild and re-bind the main-session
system instruction on the live chat via a new
GeminiClient.refreshSystemInstruction() helper. The change takes effect
on the next turn without restarting the session and without losing
conversation history. Drop the restart notice from the success message.

Fixes #4142

* test(cli): assert refresh order in /language output and cover failure path

- The fix for /language output relies on refreshHierarchicalMemory
  running *before* refreshSystemInstruction; otherwise the new
  systemInstruction is rebuilt from stale userMemory and the language
  switch silently fails to take effect. Assert ordering with
  invocationCallOrder so a regression cannot pass review.
- Add a test that the command still reports success when the in-session
  refresh throws — the setting is already persisted on disk, so the
  user-visible message should not surface the refresh failure.
- Drop two stale "rule file is updated on restart" comments left over
  from the pre-fix behavior.

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
2026-05-14 16:24:41 +08:00
易良
796de4dfef
fix(core): merge IDE context into user prompt (#3980)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* fix(core): merge IDE context into user prompt

IDE mode now wraps editor context in a <system-reminder> block and
prepends it to the current user request instead of inserting a separate
user history entry via addHistory(). This preserves the API history
turn shape and avoids extra user turns in IDE mode.

Key changes:
- IDE context merged into user request via prependToFirstTextPart()
- State update deferred until after arena cancellation check
- escapeClosingSystemReminderTags() hardens against tag injection
  including zero-width/control character variants
- forceFullIdeContext reset on stream errors for correct resend
- Context prompt updated to encourage active use of editor context

Refs #3712

* fix(core): restore BaseLlmClient per-model cache clear on session reset

resetChat only cleared GeminiClient's new perModelGeneratorCache but
dropped the BaseLlmClient.clearPerModelGeneratorCache() call that was
present before the refactoring. sideQuery.ts and sessionTitle.ts still
route through BaseLlmClient with the fast model, so stale generators
survived session reset.

* refactor(core): remove duplicated per-model generator code from GeminiClient

The per-model ContentGenerator resolution logic (resolveModelAcrossAuthTypes,
createRetryAuthTypeForModel, createContentGeneratorForModel, perModelGeneratorCache)
was inadvertently duplicated from BaseLlmClient into GeminiClient. This restores
the original one-line delegation to BaseLlmClient.resolveForModel() and removes
~130 lines of redundant code to keep the PR focused on IDE context merging only.

* fix(core): harden IDE context reminder escaping

* fix(core): defer IDE context baseline update

* fix(core): use shared escapeSystemReminderTags in tool scheduler

Aligns the rule-activation envelope scrub with the IDE-context path —
both now route through `escapeSystemReminderTags`, which neutralizes
whitespace, zero-width, and control-character variants of
`<system-reminder>` tag boundaries. The previous narrow regex only
matched the literal `</system-reminder>` sequence, so a rule body
containing `</system-reminder >`, `</ system-reminder>`, or
`<​/system-reminder>` could still close the envelope mid-content.

* docs(core): clarify request assembly order in IDE merge path

Two adjacent comments described the pre-merge model: one called
the system-reminder block "append" while the code prepends, and the
tryCompressChat note still talked about "the previous context turn"
which no longer exists once IDE context is merged into the user
prompt. Rewrite both to match what the code actually does so future
readers do not get a misleading mental model of prompt assembly or
post-compression resend behavior.

* docs(core): align scheduler scrub comment with shared helper

The block-level comment still labeled the sanitization step as
"closing-tag scrub", which described the old narrow regex. The
shared escapeSystemReminderTags helper now neutralizes opening /
self-closing / obfuscated variants too, so name the helper directly
to keep the rationale and the call site in agreement.

* test(core): cover escapeSystemReminderTags variants in scheduler

The end-to-end scheduler scrub test only exercised a literal
</system-reminder> body. Now that the rule-activation envelope routes
through escapeSystemReminderTags, extend the integration coverage to
the obfuscated closing-tag variants the helper was introduced to
catch (whitespace before/after the slash, ZWSP / WJ / VS-16 inside
the name) and to opening-tag injection. Each case asserts that the
envelope still has exactly one </system-reminder> closer and that the
raw obfuscated form (or unescaped opening tag) does not survive into
the model-facing payload.

Refactor the existing test's mock setup into a shared
runSchedulerWithRule helper so the new it.each variants stay
focused on the assertion shape.

* fix(core): address remaining review feedback

- Add debug log when IDE context parts are empty for diagnosability
- Add safety comment in xml.ts explaining why no fast-path pre-check
  is used in getSystemReminderTagKind (zero-width obfuscation bypass)
2026-05-14 11:33:00 +08:00
顾盼
c512427f93
feat(core): strip inline media before chat compaction summary (#4101)
* feat(core): strip inline media before chat compaction summary

Compaction's side-query previously shipped historyToCompress verbatim.
Two related issues degraded summary quality and accuracy:

- Inline image / document bytes (from MCP tool results) leaked into the
  summary model's prompt where they could not be interpreted and merely
  inflated payload.
- findCompressSplitPoint apportioned chars via JSON.stringify(content),
  so a single 1 MB base64 image looked like ~350K tokens and biased
  the split point. Real Qwen-VL token cost is at most a few thousand.

This change adds a new compactionInputSlimming module that replaces
inlineData / fileData parts with short [image: <mime>] / [document:
<mime>] placeholders before the side-query, leaving live history
unchanged. The same constant feeds estimateContentChars so the
split-point algorithm sees the budget the summary model actually
consumes downstream. Microcompact is also extended to clear stale
inline images alongside old tool results.

A previous draft of the design also externalized large pastes to a
content-addressable on-disk cache, but it was withdrawn after surveying
claude-code's 2026-03 to 2026-05 releases - upstream consensus is to
keep user input visible to the model and amortize cost via prompt
caching rather than externalize. See the Out-of-scope section of the
design doc for the full rationale.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(core): recurse into functionResponse.parts when stripping media

E2E exposed that `read_file` (and any tool that surfaces an image)
wraps the result in `functionResponse.parts` via
`coreToolScheduler.createFunctionResponsePart`. The slimming module
only walked top-level `part.inlineData` / `part.fileData`, so the
nested base64 bytes leaked into the compaction side-query payload.
The previous design doc incorrectly claimed that no recursive walk
was needed.

Three changes:

- `slimCompactionInput.transformPart` recurses into the nested
  `functionResponse.parts` array and replaces each entry via the
  same image/document placeholder logic.
- `estimatePartChars` walks the nested array too, so the split-point
  algorithm doesn't fall back to `JSON.stringify` and over-count the
  base64 bytes.
- `microcompactHistory` drops `functionResponse.parts` when clearing
  an old tool result; the previous spread of `...part.functionResponse`
  silently carried the original media through.

New unit tests cover (a) nested image / document stripping, (b) the
estimator no longer being skewed by nested base64. The previously
failing E2E now PASSES: side-query payload contains zero `data:image/`
occurrences, zero long base64 runs, and exactly one
`[image: image/png]` placeholder.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(core): address review findings on compaction image stripping

Addresses 8 valid findings from PR review:

- [Critical] estimatePartTokens now handles `fileData` parts (both
  top-level and nested under functionResponse.parts). Without this,
  microcompact's `tokensSaved === 0` short-circuit silently discarded
  every fileData clear.

- estimatePartTokens for binary parts now uses a fixed
  MEDIA_PART_TOKEN_ESTIMATE constant (1,600) instead of base64-length
  divided by 4. The old formula billed a 1 MB image as ~250K tokens
  rather than its actual ~1,280 visual tokens on Qwen-VL, inflating
  the saved-token metric by orders of magnitude.

- mimeType values from MCP tool servers are now run through
  sanitizeMimeForPlaceholder before being embedded in `[image: …]` /
  `[document: …]` placeholders. An adversarial server could otherwise
  craft `image/png]\n\n[SYSTEM: …` and inject instructions into the
  summary side-query.

- collectCompactablePartRefs now recognizes a third 'nested-media'
  kind: functionResponse parts from non-compactable tools (e.g. MCP
  screenshots whose names aren't in COMPACTABLE_TOOLS) that carry
  images on functionResponse.parts. The nested media is dropped while
  the tool's text output is preserved. Previously such media
  accumulated forever in live history.

- keepRecent budgets are now per-kind (tool / media / nested-media).
  Setting `toolResultsNumToKeep: 1` keeps 1 of each kind rather than 1
  entry total across the merged list — matches the natural reading of
  the setting name.

- findCompressSplitPoint's `precomputedCharCounts` fallback path is
  now documented as test-only; production callers MUST pass the
  precomputed array.

- The text-based branch of isAlreadyCleared is gone: with the new
  nested-media handling (drops `parts`) and existing media handling
  (replaces with `{ text: … }` that is no longer collected) it was
  unreachable.

- OpenAI converter (createToolMessage) now passes text parts inside
  functionResponse.parts through as text content. The slimmer writes
  `{ text: '[image: image/png]' }` placeholders into the nested array;
  without this fix the converter dropped them when serializing to the
  OpenAI wire format, leaving the summary model with empty tool
  responses instead of the placeholder.

Two findings deferred with rationale (see design doc Open Questions):
MIN_COMPRESSION_FRACTION still uses pre-slim counts (acceptable —
"user shared an image" is itself worth summarizing); SlimResult is not
re-exported (round-3 simplify decided to keep core's public surface
minimal).

E2E re-verified end-to-end: side-query payload contains 0 data:image/
occurrences, 0 long base64 runs, and 1 `[image: image/png]` placeholder
in the expected position. 185/185 collocated unit tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(core): tidy compaction slimming after self-review

Three small polishes from a follow-up code review pass:

- `stripNestedMedia` no longer re-casts its return value: after
  destructuring `parts` out of the widened input type, TypeScript
  infers the original `FunctionResponse` shape without help.
- `isAlreadyCleared` shed a 10-line comment block — the body is now
  one line, so one descriptive line above it is enough.
- OpenAI converter's nested-part text check switched from
  `(part as { text?: unknown }).text` to
  `'text' in part && typeof part.text === 'string'`, dropping the
  cast and letting `in` narrow the type.

No behavior change. 185/185 unit tests still pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(core): wire slim stats to debug log; split MicrocompactMeta tools vs media

Addresses two follow-up review suggestions:

- `slimCompactionInput` returned `stats.imagesStripped` and
  `stats.documentsStripped` but the orchestrator never consumed them.
  Now logged at debug level whenever non-zero so operators can confirm
  the slimming pipeline actually fires on image-heavy compactions.

- `MicrocompactMeta.toolsCleared` lost meaning after the recent
  refactor: it had grown to count both tool-result clears AND
  inline-media / nested-media clears. Renamed:
  - `toolsCleared` → only `tool`-kind clears (compactable tool output)
  - `mediaCleared` → `media` + `nested-media` clears (new)
  - `toolsKept` / `mediaKept` mirror the split, replacing the prior
    `toolsKept` that was actually a combined count.

  The single non-test consumer (`client.ts` debug log) updated to use
  both fields.

185/185 unit tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 10:20:11 +08:00
tanzhenxin
f72a156e13
fix(anthropic): allow cache_control on tool_result blocks so the per-turn breakpoint advances (#4121)
The Anthropic adapter places three cache_control breakpoints (system, last tool,
last user message), but `addCacheControlToMessages` only attached the third when
the last block of the last user message was a non-empty text block. After turn 1
of any agentic conversation, the last user message is a tool_result, so the
breakpoint was silently dropped and the cacheable region collapsed back to
system+tools. Per-turn history was never cached.

Anthropic's docs explicitly list tool_result as a cacheable block type
(https://docs.claude.com/en/docs/build-with-claude/prompt-caching). Accepting
both text and tool_result keeps the breakpoint moving forward as the
conversation grows.
2026-05-14 10:07:11 +08:00
qwen-code-ci-bot
d419a92672
chore(release): v0.15.11 [skip ci]
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-05-14 09:51:46 +08:00
ChiGao
d343e2c15e
feat(perf): progressive MCP availability — MCP no longer blocks first input (#3994)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* feat(perf): progressive MCP availability — MCP no longer blocks first input

Today `Config.initialize()` runs MCP discovery synchronously and the cli
can't accept input until every configured MCP server finishes its
discover handshake. One slow or hung server bottlenecks every user with
MCP configured. Validated by the profiler instrumentation added in this
PR (set `QWEN_CODE_PROFILE_STARTUP=1` to reproduce):

| User scenario             | Time to first prompt input |
| ------------------------- | -------------------------- |
| No MCP                    | ~480 ms                    |
| 1 fast MCP                | ~875 ms                    |
| 2 fast + 1 slow MCP       | **~7.1 s**                 |
| 1 hung MCP server         | **~10.5 s**                |

(Measured on macOS arm64 / Node 24.15, n=30/fixture, p50.)

`Config.initialize()` now passes `{ skipDiscovery: true }` to
`createToolRegistry` by default and kicks off MCP discovery in a
fire-and-forget background path. As each server completes discover,
the cli's `AppContainer` debounces `setTools()` calls into one-frame
(16 ms) batches so the model sees the consolidated tool list shortly
after each server settles. Rollback: `QWEN_CODE_LEGACY_MCP_BLOCKING=1`.

- `packages/core/src/config/config.ts` — `Config.initialize` switches
  to `skipDiscovery: true` + new `startMcpDiscoveryInBackground()`
  (defensive against partially-stubbed `ToolRegistry` in tests). Adds
  `MCPServerConfig.discoveryTimeoutMs` (last positional ctor param —
  doesn't shift existing call sites). Tool-call timeout is untouched.
- `packages/core/src/tools/tool-registry.ts` — new
  `getMcpClientManager()` getter so the background path can call the
  incremental discover directly without going through `discoverMcpTools`
  (which would wipe already-registered tools).
- `packages/core/src/tools/mcp-client-manager.ts` —
  `discoverAllMcpToolsIncremental` now: emits `mcp-client-update`
  after IN_PROGRESS transition, wraps each per-server discover in a
  discovery-only timeout (stdio 30s, remote 5s), emits trailing
  `mcp-client-update` after COMPLETED so UI subscribers see the
  terminal state.
- `packages/cli/src/ui/AppContainer.tsx` — new `useEffect` (gated on
  `isConfigInitialized`) subscribes to `mcp-client-update` and
  16ms-batches `setTools()` calls. Same effect also defers
  `finalizeStartupProfile` until MCP settles (or 35s hard cap), so
  startup-perf profiles capture the full MCP timeline.

Activated only by `QWEN_CODE_PROFILE_STARTUP=1`; when unset every
profiler entry point short-circuits in a single null/flag check and
returns. Heisenberg overhead measured at -1.12% Δp50 between
profile-on vs profile-off (Welch p=0.092, n=30/config × 3 configs) —
within statistical noise.

- `packages/cli/src/utils/startupProfiler.ts` — extended with
  `events` array (multi-fire), `recordStartupEvent`,
  `setInteractiveMode`, `derivedPhases`, per-checkpoint heap snapshots,
  `MAX_EVENTS` cap, and `QWEN_CODE_PROFILE_STARTUP_OUTER` / NO_HEAP
  env opt-ins. + 7 new tests.
- `packages/core/src/utils/startupEventSink.ts` (new) — minimal
  cross-package sink so `core` can emit profiler events without
  reverse-depending on `cli`. No-op when no sink registered. + 4 tests.
- `packages/core/src/index.ts` — export `setStartupEventSink` /
  `recordStartupEvent` / type aliases.
- `packages/cli/src/gemini.tsx` — registers the sink at `main()`
  entry, adds `first_paint` checkpoint after Ink render, calls
  `setInteractiveMode(true)` in the interactive branch.
- `packages/core/src/config/config.ts` — emits
  `tool_registry_created`.
- `packages/core/src/core/client.ts` — emits `gemini_tools_updated`
  at the end of `setTools()`.
- `packages/core/src/tools/mcp-client-manager.ts` — emits
  `mcp_discovery_start`, `mcp_server_ready:<name>`,
  `mcp_first_tool_registered`, `mcp_all_servers_settled`.
- `packages/cli/src/ui/AppContainer.tsx` — emits
  `config_initialize_start`, `config_initialize_end`, `input_enabled`.

`Config.initialize()` now returns BEFORE MCP discovery completes.
Things to check:
- Any code path that assumed "after `config.initialize()`, all MCP
  tools exist in the registry" — these will see only built-in tools
  initially; new tools appear via `mcp-client-update` events.
- `MCPDiscoveryState.COMPLETED` is now set asynchronously instead of
  synchronously after `initialize()` resolves.
- Model requests issued before MCP settles see only built-in tools;
  subsequent requests see the full set as servers come online.
- Tests that assert MCP tool count immediately after
  `config.initialize()` should wait for the `mcp-client-update` with
  COMPLETED discoveryState instead.

- 313 impacted-area tests green (config / mcp-client-manager / client
  / startupProfiler 18 / startupEventSink 4).
- `tsc --noEmit` clean for `packages/core` and `packages/cli`.
- `eslint` clean on touched files.
- Manual: `QWEN_CODE_PROFILE_STARTUP=1 SANDBOX=1` interactive run
  produces a JSON profile in `~/.qwen/startup-perf/` containing
  `first_paint`, `config_initialize_start/end`, `input_enabled`,
  MCP per-server events, and `gemini_tools_updated`. See PR
  description's "How to validate" section.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): harden progressive MCP discovery against silent regressions

Addresses review feedback on PR #3994:

- Skip user-disabled servers in discoverAllMcpToolsIncremental. The new
  incremental path used to iterate Object.entries(servers) without
  consulting isMcpServerDisabled, so a server the user had explicitly
  turned off would still get connected and its tools registered.
  Mirrors the existing protection in discoverAllMcpTools.

- Disconnect the underlying client when runWithDiscoveryTimeout fires.
  Without this, the inner discoverMcpToolsForServer kept running after
  the timeout rejected the outer promise — if discover() eventually
  succeeded it would register the late server's tools into the live
  toolRegistry (a silent registration vector, especially exploitable
  with a 0/negative discoveryTimeoutMs override).

- Clamp discoveryTimeoutMs to [100ms, 300_000ms]. 0/negative/Infinity
  values previously passed through to setTimeout unvalidated and made
  the silent-registration bug above trivially reachable.

- Classify the `tcp` (WebSocket) transport field as remote so hung WS
  handshakes use the 5s default instead of the 30s stdio default.

- Defensive delete of serverDiscoveryPromises[name] in the per-server
  catch so a doomed/orphan entry can't briefly short-circuit a
  subsequent discoverMcpToolsForServer call.

Adds focused tests for each fix.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): restore runtime.json sidecar and harden non-interactive MCP visibility

Addresses review feedback on PR #3994:

- Restore writeRuntimeStatus + markRuntimeStatusEnabled in
  startInteractiveUI. The progressive-MCP diff inadvertently dropped
  the runtime.json sidecar write from the interactive entry point,
  leaving Config.refreshSessionId()'s session-swap refresh as dead
  code and silently breaking external integrations (terminal
  multiplexers, IDE integrations, status daemons) that map PID →
  sessionId via runtime.json.

- Add Config.getFailedMcpServerNames() and surface a stderr warning
  in --prompt / stream-json / ACP entry points when one or more MCP
  servers failed during background discovery. Per-server errors are
  caught inside discoverAllMcpToolsIncremental and never reached a
  TTY otherwise, so a script using non-interactive mode with broken
  MCP config would silently run with only built-in tools — a
  regression vs the legacy synchronous path.

- Pass the parsed `settings` object through to
  runNonInteractiveStreamJson. The new call site dropped the
  argument, falling back to createMinimalSettings() and losing any
  user-configured permission / approval / hook setup for stream-json
  sessions. Added regression assertion to gemini.test.tsx.

- Move finalizeStartupProfile out of gemini.tsx's stream-json branch
  and into Session.ensureConfigInitialized so it runs AFTER
  config.initialize() / waitForMcpReady() in stream-json. Previously
  the profile was finalized before any MCP / config_initialize_*
  events were emitted, producing empty stream-json profiles.

- Gate setStartupEventSink registration on isStartupProfilerEnabled()
  so core-side recordStartupEvent calls short-circuit at the first
  null-check when profiling is disabled, instead of going through an
  arrow wrapper and the profiler's own enabled gate.

- Tighten the type-unsafe ToolRegistry cast in
  startMcpDiscoveryInBackground to preserve the typed return signature
  so a rename of getMcpClientManager would be flagged at this call
  site (kept the optional-chain guard for tests that stub
  ToolRegistry as a plain object).

- Re-document first_paint as "render call returned" so consumers don't
  confuse Ink's synchronous render() return with literal pixel paint.
  Kept the checkpoint name for backward compatibility with collected
  profiles.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): restore resize repaint and pin gemini_tools_lag capture in AppContainer

Addresses review feedback on PR #3994:

- Restore the terminal-resize useEffect that calls
  repaintStaticViewport() when terminalWidth changes. The progressive-
  MCP diff removed previousTerminalWidthRef + the repaint useCallback
  + the resize useEffect, so tmux pane resizes and fullscreen toggles
  leave the static region rendered at the old width — header content
  visibly tears until something else triggers refreshStatic.

- Pin the gemini_tools_lag startup metric. The previous onMcpUpdate
  handler called finalizeOnce() synchronously when discovery reached
  COMPLETED, but the pending setTools() batch was still 16ms away.
  setTools() emits `gemini_tools_updated` — when finalize ran first
  the profile's `finalized` guard suppressed that event, so
  gemini_tools_lag came out undefined in interactive mode. New
  onMcpUpdate flushes setTools() NOW on COMPLETED and only finalizes
  after the flush resolves, guaranteeing the event lands.

- Log setTools() batch-flush errors via debugLogger instead of
  silently swallowing them. GeminiClient.setTools() has no try/catch
  around warmAll() / getFunctionDeclarations() / getChat().setTools();
  the previous `.catch(() => {})` would have hidden production
  tool-registration regressions completely.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): correct MCP failure visibility and incremental cleanup

Addresses three review findings on PR #3994:

- McpClient.discover() now flips the client status to DISCONNECTED before
  re-throwing. Previously, a server that connected successfully but whose
  discoverPrompts / discoverTools then rejected (or that returned no
  prompts and no tools) would remain CONNECTED in the global status
  registry. Config.getFailedMcpServerNames() filters by
  `status !== CONNECTED`, so such servers were silently omitted from the
  non-interactive failure banner and the Footer's MCP health pill kept
  counting them as healthy.

- discoverAllMcpToolsIncremental no longer records `outcome: 'ready'`
  for servers whose connect/discover threw. The inner
  discoverMcpToolsForServerInternal catches errors without re-throwing
  (best-effort discovery semantics), so the try block resolved even for
  failures — only the runWithDiscoveryTimeout path reached the catch.
  Auth errors, server crashes, and missing-tools responses were therefore
  recorded as success in the startup profile. We now consult the actual
  server status (now correctly DISCONNECTED after the first fix) before
  emitting `ready`, and emit `outcome: 'failed'` otherwise.
  `mcp_first_tool_registered` is gated on the same check so a failed
  server can't pollute that user-facing metric.

- discoverAllMcpToolsIncremental tears down enabled→disabled mid-session
  transitions. When a previously-connected server is disabled (e.g. via
  `/mcp disable foo` or by editing settings), the incremental path used
  to just `continue` past it, leaving its client, tools, health check,
  and global status entry in place. Now calls removeServer() for any
  already-known client we encounter in the disabled branch.

Adds focused tests for each fix.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(core): clarify ToolRegistry cast comment in startMcpDiscoveryInBackground

Addresses review feedback on PR #3994. The previous comment claimed the
call site uses "no defensive cast" but the code still casts via
`as ToolRegistry & { getMcpClientManager?: ... }`. Reword to explain
the cast's actual purpose: it exists only because some tests stub
ToolRegistry as a plain object, so we use optional chaining to avoid
crashing the init path when those tests run. Also note that the inner
shape now uses `ReturnType<ToolRegistry['getMcpClientManager']>` — a
future rename of the production method still surfaces as a type error
at this call site rather than silently falling through to the
`if (!manager)` branch.

Comment-only change; no behavior diff.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(core): close MCP timeout TOCTOU race and propagate disconnect status

Addresses two critical findings on PR #3994 round 6:

- runWithDiscoveryTimeout no longer uses fire-and-forget disconnect. The
  prior `void client.disconnect()` returned before `transport.close()`
  landed, leaving a window where an in-flight `discover()` could pump
  `tools/list` through the transport and synchronously register tools
  into the live registry BEFORE the close took effect. The earlier fix
  comment described this as a "remote-exploitable silent-tool-registration
  vector"; the await closes the timing window but doesn't help if tools
  already landed, so we also drop them with `removeMcpToolsByServer()`
  after the disconnect resolves. No-op when discover hadn't reached
  registration yet.

- McpClient.disconnect() now writes DISCONNECTED to the global registry
  directly. Previously, `isDisconnecting = true` was set BEFORE the
  internal `updateStatus(DISCONNECTED)` call, and `updateStatus`'s guard
  (designed to suppress LATE writes from a stale `connect()` catch)
  silently swallowed the write. The global stayed CONNECTED forever for
  timeout-disconnected servers, so `Config.getFailedMcpServerNames()`
  (which filters `status !== CONNECTED`) omitted them from the
  non-interactive failure banner and the Footer's MCP health pill kept
  counting them as healthy. This invalidated the round-5
  `getMCPServerStatus === CONNECTED` gate, which would always pass the
  "ready" check for timed-out servers. The guard stays in place for its
  original purpose; the legitimate disconnect→DISCONNECTED notification
  now bypasses it by writing the registry directly.

Also adds the `config_initialize_start` / `_end` profiler checkpoints
to `Session.ensureConfigInitialized()` so stream-json startup profiles
include the same derived `config_initialize_dur` phase as the
non-stream-json branch in gemini.tsx (round 6 [Suggestion]).

Tests cover (a) the disconnect-and-cleanup path on timeout and (b) the
intentional-disconnect global registry propagation regression.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(mcp): surface failures + prevent health-check resurrection of timed-out servers

Round-7 review follow-ups:

- AppContainer (interactive): MCP startup failures now route through
  debugLogger.warn on COMPLETED. Was silent — only debug logs / profile
  events surfaced failures, so regular interactive users got no
  indication their MCP servers failed. Mirrors the non-interactive
  stderr warning, adjusted to debugLogger so it doesn't collide with
  Ink's rendered output.

- acpAgent per-session: `QwenAgent.initializeConfig()` now emits the
  same `Warning: MCP server(s) failed to start` stderr line as the
  top-level `runAcpAgent` path. Previously per-session ACP configs
  with failed MCP servers silently fell back to built-in tools.

- mcp-client-manager timeout handler: after disconnecting an
  intentionally timed-out server, also drop it from `this.clients` and
  stop any pending health-check timer. Without this the discovery
  `finally` block would arm a health-check that detected DISCONNECTED
  status and called `reconnectServer()` → `discoverMcpToolsForServer()`
  directly — bypassing `runWithDiscoveryTimeout` entirely and silently
  resurrecting the slow server. `startHealthCheck` also early-returns
  for unknown servers so the trailing finally-block call is a no-op.

- startupEventSink: silent `catch {}` now logs via `debugLogger.error`
  so a corrupted sink doesn't silently drop every subsequent event.
  Quiet by default; visible under `QWEN_CODE_DEBUG=1`.

Tests:
- mcp-client-manager.test.ts: regression for the timeout → no-reconnect
  invariant (clients map purged + health-check timer absent).
- acpAgent.test.ts: per-session newSession surfaces failures to stderr,
  and stays safe when Config lacks `getFailedMcpServerNames`.

Declines (with reasoning in PR reply):
- [Critical] AppContainer batch-flush useEffect untested → re-flag of
  the round-5 deferral that wenshao acknowledged at the time. Lower-
  layer invariants (this PR's mcp-client-manager + mcp-client tests)
  pin the dependent contracts. The component-test harness for timers +
  event emitters in this file is non-trivial and out of scope; tracked
  for a follow-up.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-13 22:17:16 +08:00
jinye
4bb8dc894a
fix(telemetry): address PR #3847 review follow-ups for trace correlation (#4058)
* fix(telemetry): address PR #3847 review follow-ups for trace correlation

Addresses unresolved review feedback from PR #3847:

- Respect OTEL_TRACES_SAMPLER env var when setting TraceFlags on the
  synthetic session root, so custom samplers (e.g. traceidratio) are
  not bypassed by forced SAMPLED flag
- Store current session ID in session-context.ts and use it as a
  fallback in LogToSpanProcessor when the OTel Resource session.id
  attribute is stale after /clear or /resume
- Wrap for-await loop body in runInSpan() so debug logs emitted
  during stream iteration see the stream span as active
- Add autoOkOnSuccess option to withSpan, eliminating the need for
  the load-bearing setStatus(UNSET) hack in cancellation paths
- Add defensive 5-minute timeout for stream spans to prevent leaks
  from abandoned generators

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* fix(telemetry): address review issues in PR #4058

- Fix sdk.test.ts assertion to match new two-arg setSessionContext call
- Add spanEnded guard to prevent double span.end() when timeout fires
- Add .trim() to OTEL_TRACES_SAMPLER env var for robustness
- Pass sessionId in debugLogger.test.ts for signature completeness
- Clarify log-to-span-processor comment: fallback covers "missing" not "stale"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(telemetry): adopt review feedback for sampler and idle timeout

- Fix shouldForceSampled to force SAMPLED for all parentbased_* samplers,
  not just parentbased_always_on — parentbased samplers delegate to
  localParentNotSampled (default AlwaysOff) when parent has NONE, which
  silently drops all traces
- Convert stream span timeout from fixed wall-clock to idle timeout:
  reset timer on each chunk so legitimately long streams are never
  affected; timeout only fires when no chunks arrive for 5 minutes
- Add test for parentbased_traceidratio → TraceFlags.SAMPLED

* fix(telemetry): guard resetSpanTimeout against already-ended span

Prevent zombie timer accumulation when chunks arrive after idle timeout
has already ended the span.

* fix(telemetry): handle parentbased_always_off sampler and fill test gaps

- shouldForceSampled() now returns false for parentbased_always_off,
  preventing silent over-sampling that contradicts user intent.
- Updated JSDoc to document the always_on exception for non-parentbased
  samplers.
- Added test for parentbased_always_off → TraceFlags.NONE.
- Added test for success path resilience when safeSetStatus throws in
  coreToolScheduler.

* fix(telemetry): harden span timeout ordering and session ID fallback

- Swap spanEnded/span.end() order so finally block can retry if end()
  throws.
- Clear idle timeout immediately after for-await loop exits, before
  post-loop processing.
- Use || instead of ?? for session.id fallback to handle empty strings.

* fix(telemetry): address review round 4 — docs, cleanup, and test gaps

- Expand shouldForceSampled JSDoc: document env-var assumption,
  parentbased_traceidratio 100% sampling semantics.
- Remove unnecessary runInSpan wrapper around chunk field assignments.
- Add stream.timed_out span attribute when idle timeout fires.
- Deduplicate config.getSessionId() call in initializeTelemetry.
- Add tests for always_on and always_off sampler values.

* fix(telemetry): harden timeout callback ordering and add || comment

- Move safeSetStatus before setAttribute in timeout callback so ERROR
  status is set even if setAttribute throws.
- Set spanEnded=true before span.end() so finally block never overwrites
  ERROR with OK if end() throws.
- Add comment explaining || vs ?? choice for session.id fallback.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-13 22:02:20 +08:00
ZevGit
97ac766405
fix(core): improve runtime fetch options error handling and documentation (#3997)
* fix(core): improve runtime fetch options error handling and documentation

- Add debug logging for dispatcher creation failures to prevent silent proxy bypass
- Skip preconnect dispatcher creation when no proxy is configured (optimization)
- Update outdated comments about timeout behavior (was "always disabled", now conditional)
- Add tests for catch block error handling fallback behavior
- Remove peer flag from @vscode/windows-process-cycles in package-lock.json

The original change skipped custom dispatcher when no proxy to avoid Node/undici
version mismatch (Node v26 ships undici v8 while project bundles v6). This follow-up
addresses code review findings:
1. Critical: catch block silently bypassed proxy on dispatcher failure
2. Suggestion: preconnect warmed unused dispatcher pool when no proxy
3. Suggestion: outdated comments about "always disabled" timeouts

* fix(core): address PR #3997 review comments

- Fix 14 failing apiPreconnect tests by passing proxy parameter
- Add console.error log for proxy dispatcher creation failure visibility
- Redact credentials from proxy URL in error messages
- Update outdated dashscope.ts timeout comment
- Add mockWarn assertions to dispatcher failure tests

* test(core): add console.error and credential redaction tests for dispatcher failure

- Add console.error call verification in dispatcher failure tests
- Add dedicated test for credential redaction from proxy URLs
- Mock console.error to prevent noise during test runs

* chore: improve code quality and documentation clarity

- Remove hardcoded version numbers from undici compatibility comment
- Add dual-logging rationale explaining debugLogger + console.error usage
- Document implicit coupling between apiPreconnect and buildFetchOptionsWithDispatcher
- Explain preconnectFired = true rationale for no-proxy case
- Add comment for preconnectFired before async fire (fire-and-forget semantics)
- Fix global.fetch mock leak by using vi.stubGlobal/unstubAllGlobals
- Remove duplicate test and rename misleading test name

* fix(core): align dashscope.ts comment and remove duplicate test

- Update dashscope.ts comment to match default.ts and anthropicContentGenerator.ts
- Remove misplaced duplicate test in getOrCreateSharedDispatcher describe block
  (functionally covered by existing test at line 72)

* fix(cli): restore URL validation and error path test coverage

- Add proxy parameter to 3 URL validation tests that were hitting no-proxy
  early return instead of exercising isDefaultBaseUrl logic
- Add proxy parameter to 2 error handling tests to ensure they exercise
  fetch rejection and dispatcher error paths
- Add mockFetch.mockResolvedValue(undefined) to beforeEach to prevent
  mock state leakage between tests
- Add mockDebugLogger.debug.mockClear() to beforeEach to prevent debug
  log assertions from being polluted by previous tests

* fix(core): harden proxy error handling and add timeout evaluation note

- Use greedy regex match (.+@) instead of non-greedy ([^@]*) for credential
  redaction to handle edge cases like user@domain:pass@proxy.local
- Add rejectedProxyCache Set to prevent duplicate error logging for the same
  broken proxy config across multiple API requests in long conversations
- Add resetRejectedProxyCache() export for test isolation
- Add timeout evaluation note documenting that Node.js built-in fetch uses
  default 300s bodyTimeout which is sufficient for all current model streaming
- Update test beforeEach to call resetRejectedProxyCache()

* fix(core): address final review feedback on credential redaction and JSDoc

- Restore safe regex /\/\/[^/\s]*@/g for credential redaction to avoid
  over-consuming hostname when error messages contain unrelated '@' chars
  and to redact all occurrences in multi-line error chains via /g flag
- Fix misplaced JSDoc: resetRejectedProxyCache and resetDispatcherCache
  now each have their own correct docstrings
- Add 2 tests to verify rejectedProxyCache dedup behavior:
  same proxy URL logs once, different proxy URLs each log separately

* fix(core): extract redactProxyCredentials helper with dedicated tests

- Extract redactProxyCredentials() as an exported pure function
  for testability and reusability
- Add 4 dedicated tests: single URL redaction, multi-URL /g coverage,
  non-over-redaction of unrelated @ chars, no-op for clean messages
- Add mockWarn/mockConsoleError cleanup to getOrCreateSharedDispatcher
  beforeEach to prevent test state leakage

* fix(core): harden credential redaction and cache key security

- Use redacted proxyUrl as cache key to avoid storing plaintext
  credentials in process memory (heap dump protection)
- Add fallback regex for scheme-less error messages (e.g., Node.js
  native 'connect ECONNREFUSED user:pass@host:8080')
- Add 4 tests: no-scheme redaction, double-redact prevention,
  cache key dedup with different credentials, mock expanded

* fix(core): address remaining review findings — mock alignment, code quality, test lifecycle

- Revert MockProxyAgent to fail only for syntactically invalid URIs
  (real ProxyAgent accepts credential URLs — includes('@') was wrong)
- Update credential redaction test to use http://invalid-proxy whose
  mock error message contains credentials to redact
- Extract NO_DISPATCHER_FALLBACK constant to replace duplicated ternary
  expression (sdkType === 'openai' ? undefined : {})
- Move vi.spyOn(console, 'error') into beforeEach with afterEach
  restoreAllMocks (consistent with apiPreconnect.test.ts pattern)
- Add preconnectFired verification test: no-proxy call permanently
  skips subsequent preconnect attempts

* fix(proxy): address PR #3997 review feedback

- Add credential redaction in apiPreconnect.ts catch blocks to prevent
  proxy credentials from being written to ~/.qwen/debug/ log files
- Export redactProxyCredentials from core package barrel for CLI reuse
- Fix environment variable tests (NODE_EXTRA_CA_CERTS,
  QWEN_CODE_DISABLE_PRECONNECT, SANDBOX) that were masked by no-proxy
  early return by adding proxy parameter
- Remove rejectedProxyCache deduplication to allow administrators to
  see each credential change failure; log only hostname to avoid
  credential leakage
- Add extractHostnameFromProxyUrl helper for safe hostname extraction

Verified: lint + build + typecheck + tests all pass

* fix(core): harden proxy credential redaction

Cover token-only and colon-containing proxy credentials in scheme-less error messages, keep hostname fallback logging credential-safe, and add failure counts for repeated proxy dispatcher failures.

Also align preconnect tests with the shared redaction helper and restore the unrelated fsevents lockfile peer marker.

* test(core): prevent proxy redaction overreach

Tighten scheme-less proxy credential redaction so ordinary email-like text is preserved while token and userinfo proxy credentials remain covered.

* fix(core): address proxy review edge cases

* docs(core): clarify proxy dispatcher behavior

* fix(core): redact proxy credentials from SDK errors
2026-05-13 21:52:08 +08:00
qqqys
15e6222546
fix(dashscope): use URL hostname check instead of regex to avoid ReDoS (#4112)
CodeQL flagged a polynomial regular-expression alert on the unanchored
/([\w-]+\.)?dashscope(-intl)?\.aliyuncs\.com/i pattern introduced in
#3991 — repeated '-' in the baseUrl could trigger catastrophic
backtracking. The unanchored regex was also too permissive: it would
incidentally match the dashscope domain appearing anywhere in a URL
path, not just in the hostname.

Parse the baseUrl with the URL constructor and compare hostname
suffixes instead. This eliminates the ReDoS surface entirely and is
strictly tighter — only real dashscope[-intl].aliyuncs.com hostnames
(and their subdomains) match.

Co-authored-by: Qwen-Coder <noreply@alibaba-inc.com>
2026-05-13 18:19:39 +08:00
tanzhenxin
5a1c427d6a
feat(subagents): use fastModel for Explore subagent (#4086)
Adds a "fast" keyword to the subagent model selector. When set, the
runtime resolves it via Config.getFastModel() under the parent authType,
falling back to inheriting the parent model when fastModel is unset or
invalid for the current authType. The built-in Explore subagent opts in,
so a configured fastModel automatically powers codebase searches without
affecting other subagents or the main session.
2026-05-13 16:27:25 +08:00
tanzhenxin
613fc42c89
fix(core): tag subagent OpenAI JSON logs (#4099) 2026-05-13 16:26:38 +08:00
qqqys
fd53527aad
feat(cli): support batch deletion of sessions in /delete (#3733)
* feat(cli): support batch deletion of sessions in /delete

Closes #3706

Add multi-select mode to the session picker so /delete can remove
multiple sessions at once. Space toggles a checkbox on the cursor
item; Enter commits the checked set, falling back to single-select
when nothing is checked. The current active session is rendered
disabled with a "current — cannot delete" hint and is also stripped
defensively before the service call.

Core gains a `removeSessions(ids)` batch API that returns
`{ removed, notFound, errors }` so the CLI can surface partial
failures with a single toast.

Co-Authored-By: Qwen-Coder <noreply@alibabacloud.com>

* fix(cli): address /delete batch-delete review comments

- useSessionPicker: when checked items are all hidden by the branch
  filter, do not silently fall through to single-deleting the cursor row
  (data-loss path); stay in multi-select mode instead.
- SessionPicker footer: count only the checked-and-visible-and-committable
  rows so "N selected" matches what Enter would actually delete.
- useDeleteCommand: partial-failure toast switches to type=error and
  surfaces failing session ids (truncated, capped at 3 with overflow)
  plus the first underlying error message, instead of just an aggregate
  count masquerading as info.
- Docs: fix Tab→Space JSDoc / inline-comment drift across the picker
  surface (Space is the actual binding).

Co-Authored-By: Qwen-Coder <noreply@qwenlm.com>

* docs(cli): clarify disabledIds is multi-select-only on session picker

Reviewer flagged that disabledIds is silently inert in single-select
mode because both its visual dim and Space no-op gate on
enableMultiSelect. Spell that out on both prop JSDocs and point future
callers at filtering initialSessions for single-select use cases.

Co-Authored-By: Qwen-Coder <noreply@alibabacloud.com>

* fix(cli): address /delete batch-delete review feedback (round 2)

- useSessionPicker: throw when enableMultiSelect is on without
  onConfirmMulti (footer would otherwise read "N selected" while
  Enter silently fell through to single-select on the cursor row)
- useSessionPicker: single-select Enter fallback now respects
  disabledIdSet so a stray Enter on the dimmed active-session row
  no longer closes the dialog and bounces back with an error
- useDeleteCommand: stop swallowing the outer catch — log + surface
  the underlying error message so on-call has something to grep
- useDeleteCommand: full-failure branch now mirrors the partial-
  failure branch (failing ids + first error reason) instead of a
  generic "Failed to delete sessions."
- useDeleteCommand: emit a "Deleting N session(s)..." progress
  toast before awaiting removeSessions so slow filesystems don't
  leave the user staring at a closed dialog
- sessionService: JSDoc {@link removed}=false → {@link notFound}

Tests: regression for the disabled-row single-select fallback,
two invariant-throw tests for the picker, a pre-await progress
toast test, and align the StandaloneSessionPicker branch-filter
tests with main's Ctrl+B-only binding (plain 'B' silently entered
search mode after the merge, masking the assertion).

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* test(cli): drop wait-based multi-select picker tests, cover at hook level

The Multi-select describe block in StandaloneSessionPicker.test.tsx
was 7 ink-rendering tests that all relied on `await wait(N)` to
sync with stdin events — the same flaky shape #3978 already
purged from the search suite. CI flaked on them once and they're
gone for good.

Critical invariants are re-asserted at hook level in
useSessionPicker.test.tsx (toggleChecked add/remove + no-op on
disabled ids), which exercise pure state without keypress sim.
Footer-rendering and keypress-driven flows are intentionally
left to manual verification rather than carried as wait-based
integration tests.

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* test(cli): cover batch delete keyboard and failure paths

* fix(cli): address /delete batch-delete review feedback (round 3)

- useSessionPicker: Enter now commits *every* checked id (minus
  disabled), not just the filtered intersection. Filter is a
  navigation aid; gating the commit on it would silently drop
  checks the user explicitly made (check A-E, type a query
  matching only C-E, lose A and B). Order by sessionState.sessions
  so the receiver sees display order even for filter-hidden items.
- SessionPicker: footer count switches from visibleCheckedCount
  to committableCheckedCount (all checkedIds minus disabled), so
  it can no longer say "0 selected" while Enter is about to delete
  three hidden checks.
- useDeleteCommand: hoist sampleIds/overflow/firstError/reason
  above the three-way branch so partial- and full-failure paths
  can't drift out of sync on a future tweak.
- useDeleteCommand: in-flight ref guard wrapped in try/finally
  drops re-entrant /delete invocations (closeDeleteDialog runs
  synchronously, so without this the user can re-open /delete
  and queue an overlapping batch). Guard releases on early
  return too, otherwise a "only current selected" rejection
  would lock out the rest of the session.
- useDeleteCommand: surface "Current active session skipped."
  info toast when the picker forwarded the active session,
  otherwise the progress toast lies about the count.

Tests:
- useDeleteCommand: full-failure branch (removed=0), re-entrant
  drop, guard-released-on-early-return, stripped-current info toast
- useSessionPicker: hook-level keypress suite covering Space toggle,
  Space-disabled no-op, Enter→onConfirmMulti, Enter→onSelect
  fallback, Enter-disabled no-op, Enter commits hidden checks,
  Enter refuses when every check is disabled. MockStdin pattern
  cloned from useKeypress.test.ts, no ink rendering and no wait().

Co-Authored-By: Qwen-Coder <noreply@qwen.ai>

* chore(cli): trim delete-many comments

* fix(cli): guard delete actions during batch delete

* test(cli): cover batch delete guard release

* fix(cli): address session delete review feedback

* fix(cli): address batch delete review follow-ups

---------

Co-authored-by: Qwen-Coder <noreply@alibabacloud.com>
Co-authored-by: Qwen-Coder <noreply@qwenlm.com>
Co-authored-by: Qwen-Coder <noreply@qwen.ai>
Co-authored-by: qqqys <266654365+qqqys@users.noreply.github.com>
2026-05-13 14:34:39 +08:00
Shaojin Wen
d07daa3e69
fix(cli): auto-restore prompt and preserve queue on cancel (#4023)
* fix(cli): auto-restore prompt and preserve queue on cancel; align with Claude Code

When a user pressed ESC immediately after submitting a prompt (before the
model produced any meaningful output), qwen-code left the cancelled prompt
stranded in the transcript and in cross-session ↑-history. Cancelling
during tool execution also silently dropped any queued follow-up input.

Mirror Claude Code's auto-restore-on-interrupt:

  - Drain the queue back into the input buffer on EVERY cancel path,
    including tool-execution cancels (replaces the unconditional
    clearQueue() that motivated #3204 with a non-destructive pop).
  - When the user cancels with no draft text, no queued input, and no
    meaningful pending/committed assistant content, truncate the user
    item and trailing INFO from history and pull the prompt text back
    into the input box for editing.
  - Add Logger.removeLastUserMessage so the disk-backed cross-session
    ↑-history (getPreviousUserMessages) is also cleaned on cancel.

The "meaningful content" check matches Claude Code's
messagesAfterAreOnlySynthetic: gemini text and tool runs are meaningful;
info/error/warning/retry/notification/tool_use_summary/thoughts are
synthetic. truncateToItem uses functional setState so it batches with
the INFO addItem from cancelOngoingRequest in the same render pass —
no flicker.

Tests cover all five guard branches and the logger undo across normal,
no-op, one-shot, MODEL_SWITCH-interleaved, disk-rotation, and
uninitialized cases.

* fix(core): clear lastLoggedUserEntry on logMessage write failure

Without this, a transient writeFile error during a USER logMessage left
the undo tracker pointing at the previous successful entry. A subsequent
removeLastUserMessage (e.g., from auto-restore on cancel) would then
silently delete an unrelated earlier row from disk-backed history.

Add a regression test that mocks a writeFile rejection and asserts the
tracker is null and the prior entry survives.

Reported in PR review.

* fix(cli, core): share Logger across AppContainer/useGeminiStream and serialize writes

PR-review follow-up addressing two issues in the cancel-undo path.

1. Logger instance mismatch (Critical):
   `useGeminiStream` and `AppContainer` each called `useLogger()`, which
   instantiates a fresh `Logger` per call. `lastLoggedUserEntry` lives on
   the instance, so the undo invoked from `AppContainer` was always a
   no-op — the cancelled prompt still surfaced via cross-session
   `getPreviousUserMessages`. Move the `useLogger` ownership to
   `AppContainer` and pass the same instance into `useGeminiStream` via a
   new optional `logger` parameter.

2. Logger write ordering:
   Both `logMessage` and `removeLastUserMessage` do read → splice/append
   → writeFile without a lock. A fast cancel-then-resubmit could let
   `removeLast` clobber a just-appended new entry. Add a per-instance
   `serialize()` helper (a Promise-chained write queue) and route both
   mutating ops through it. Reset the queue on `close()`. New regression
   test fires removeLast and a fresh logMessage in parallel and asserts
   the resubmitted entry survives.

3. Stale React-state race in cancel guard (Suggestion):
   The auto-restore guard read `pendingGeminiHistoryItems` from React
   state, which can lag a stream chunk that just set
   `pendingHistoryItemRef.current`. Snapshot the pending item at the
   start of `cancelOngoingRequest` and pass it through the new
   `onCancelSubmit({ pendingItem })` info parameter. The guard combines
   it with the React-state items so any meaningful in-flight content
   blocks auto-restore even before re-render. New test covers the case
   where pendingHistoryItems is empty but info.pendingItem carries
   `gemini_content`.

All touched-area suites pass: 64 cli AppContainer, 9 historyUtils,
85 useGeminiStream, 46 core logger.

* fix(cli): unbreak build after import-merge regression and tighten cancel-handler test types

The pre-commit eslint --fix on the previous commit collapsed the two
consecutive `import { ... } from '@qwen-code/qwen-code-core'` blocks in
useGeminiStream.ts into a single statement, but kept the `import type`
modifier from the first block — silently turning every runtime symbol
(SendMessageType, MessageSenderType, GitService, ApprovalMode, …) into
type-only imports. tsc rejected with TS2206 + a wave of TS1361 errors
that only surfaced on CI.

Restore the two separate imports: pure-type symbols (Logger included)
in `import type { ... }`, runtime symbols in plain `import { ... }`.

Also: the AppContainer cancel-handler tests captured `onCancelSubmit`
as `() => void`, but the hook signature now takes an optional info
arg. Widen the captured-callback type so passing `{ pendingItem }`
typechecks (TS2554 on line 1053).

* fix(cli, core): tighten cancel-undo robustness from PR review batch 3

Four follow-ups from a /review pass on the auto-restore-on-cancel path.

* logger.ts — only invalidate `lastLoggedUserEntry` when the failed
  write was itself a USER attempt. A failed non-USER write (MODEL_SWITCH
  on a transient disk error, etc.) doesn't change which row was the
  most recent user prompt, so the prior undo target is still valid.
  Without this, MODEL_SWITCH disk hiccups silently disabled cancel-undo.

* useGeminiStream.ts — wrap `onCancelSubmit` in try/finally so a throw
  in AppContainer's cancel handler can't strand the stream in
  Responding (the UI would lock — Esc would no-op until process
  restart). `setIsResponding(false)` and `setShellInputFocused(false)`
  always run.

* useGeminiStream.ts — also document the three-way coupling between
  the INFO `addItem` here and AppContainer's auto-restore guard:
  the guard reads `historyRef.current` which doesn't yet contain
  this INFO (React batches), and the guard's correctness depends on
  the items added here staying synthetic.

* historyUtils.ts — make `isSyntheticHistoryItem` exhaustive over the
  35-member `HistoryItemWithoutId` union. Every case is explicit; the
  default branch carries a `_exhaustive: never` so adding a new
  HistoryItem variant without classifying it triggers a compile-time
  error rather than silently disabling auto-restore. Runtime fallback
  is "meaningful" (safe — bail rather than wipe content).

Tests: +1 logger case (non-USER failure preserves the USER tracker),
+1 useGeminiStream case (throwing handler still flushes Responding).

All touched suites pass: 47 logger, 9 historyUtils, 86 useGeminiStream,
64 AppContainer.

* docs(core): clarify Logger writeQueue scope (log-history only, not checkpoints)

Reword the comment above `writeQueue` and the `serialize()` JSDoc to
state explicitly that the queue only serializes log-history mutations
(`logMessage` / `removeLastUserMessage`). Checkpoint ops
(saveCheckpoint / deleteCheckpoint / loadCheckpoint) touch separate
files and intentionally don't share this queue, so the previous
"every disk-mutating op chains here" wording overstated the
guarantee.

* fix(cli): flush buffered stream events before snapshotting pendingItem on cancel

Stream content/thought events are throttled into a per-turn `bufferedEvents`
array; only when `flushBufferedStreamEvents` runs do they reach
`pendingHistoryItemRef.current`. Snapshotting BEFORE the flush meant cancels
that fired inside the throttle window (60ms) saw a null `pendingItem` even
when meaningful text was sitting in the buffer. AppContainer's auto-restore
guard then read null, decided "model produced nothing", and called
`truncateToItem` — which silently wiped the very content that the
subsequent `addItem(pendingHistoryItemRef.current)` had just committed.

Move the snapshot to AFTER the flush so it sees the same value as the
addItem call directly below it.

Regression test: yields a content event and cancels without advancing
fake timers, asserts `info.pendingItem` carries the buffered "partial
response" text rather than null.

* fix(core): apply Logger.removeLastUserMessage in-memory removal synchronously

AppContainer's `userMessages` effect calls `getPreviousUserMessages()`
on the same render that history truncation fires (it depends on
`historyManager.history`). The previous implementation only updated
`this.logs` after `await fs.writeFile(...)` settled, so the effect
read stale logs and ↑-history surfaced the cancelled prompt until
some unrelated future history change re-ran the effect.

Move the cache filter ahead of the serialize queue so consumers see
the removal immediately. The async serialize op continues to read,
splice, and write disk, then re-syncs `this.logs` from disk on
success or rotation.

Regression test fires removeLast without awaiting, then asserts the
very next `getPreviousUserMessages()` returns [] (no cancelled
prompt), and that the background promise still resolves to true.

* docs(core, cli): clarify removeLastUserMessage contract; observability for cancel-undo

* logger.ts — extend the JSDoc on `removeLastUserMessage` to spell
  out the two-phase semantics (sync optimistic in-memory removal +
  async serialized disk reconciliation), and explicitly document that
  the boolean return value reflects the *disk* outcome while the
  in-memory cache is updated unconditionally. Also explain why disk
  failures are NOT rolled back: rolling back would resurrect the
  cancelled prompt in ↑-history, which is worse UX than a temporary
  cache/disk divergence (which converges on next op or on
  `initialize()` of the next session).

* AppContainer.tsx — wrap the fire-and-forget
  `logger.removeLastUserMessage()` in `.catch(debugLogger.debug)`.
  The Logger's internal try/catches mean the Promise should never
  reject today, but a future code-path change shouldn't surface as
  an UnhandledPromiseRejection — and a debug-level log is the right
  observability hook for "cancel succeeded in UI but disk-undo
  failed silently".

* fix(core,cli): #4023 review wave — logger atomicity + observable undo failure

3 #4023 review threads addressed:

- core/logger.ts: `removeLastUserMessage` now ROLLS BACK the
  optimistic in-memory removal when the disk read or write fails.
  Previously the JSDoc/return contract was violated: the method
  returned `false` on failure but `this.logs` already showed the
  entry removed — callers (AppContainer's `userMessages` effect)
  saw the inconsistency and the cancelled prompt vanished from
  ↑-history despite the disk still carrying it. The rollback
  re-inserts the target at its original index when no concurrent
  mutation took its place, and restores `lastLoggedUserEntry` so
  a follow-up retry has a target. Regression test pinned: spy on
  fs.writeFile to throw, assert `removed === false` AND
  getPreviousUserMessages() still surfaces the entry.

- cli/AppContainer.tsx: `void logger?.removeLastUserMessage()` no
  longer silently swallows failures. Added `.catch` that routes
  through `debugLogger.debug` so a disk-write failure leaves a
  diagnostic trail; without it the cancelled prompt would
  resurrect next session via ↑-history with no observability into
  why.

- cli/historyUtils.ts: `gemini_thought` / `gemini_thought_content`
  classification reaffirmed as SYNTHETIC with explicit JSDoc on
  WHY (Claude Code parity + auto-restore is most valuable in the
  cancel-during-thinking case which is exactly the case where
  thoughts have appeared but no committed `gemini_content`).
  Future readers won't re-litigate the classification by accident.

Tests: 49/49 logger.test.ts pass; tsc + ESLint clean.

* docs(core): align removeLastUserMessage JSDoc with rollback-on-failure behaviour

The previous commit added a rollback path to `removeLastUserMessage`
(re-insert the optimistically-removed entry and restore
`lastLoggedUserEntry` when the disk read or write throws), but the
JSDoc still said the in-memory removal is "intentionally NOT rolled
back" — a copy-paste leftover from the earlier design that picked
optimistic-and-diverge. Rewrite the failure-handling paragraph and
`@returns` line to describe the rollback contract instead.

No code change.

* fix(cli, core): scope auto-restore to the cancelled turn + tighten typings/tests

Three follow-ups from PR #4023 review batch 5.

* cli — `CancelSubmitInfo` gains `lastTurnUserItem` carrying the user
  prompt text that THIS turn's `prepareQueryForGemini` added (or
  `null` for paths that don't push a user history item: Cron /
  Notification / slash `submit_prompt`). `cancelOngoingRequest`
  snapshots `lastTurnUserItemRef.current` and ships it through. The
  AppContainer auto-restore guard now requires
  `info.lastTurnUserItem` to be present AND match the candidate
  user item's text before truncating/rewinding — closing the case
  where an older user item happens to be followed by only-synthetic
  trailing content and the current cancelled turn never owned a
  user item to begin with.

  Two new regression tests pin both halves: cancel of a non-USER
  turn bails despite trailing-synthetic, and a deliberate text
  mismatch also bails.

* cli — `.catch((err)` widened to `(err: unknown)` on the
  fire-and-forget `logger.removeLastUserMessage()` call. Belt-and-
  braces: `Promise.catch`'s lib typing is `(reason: any) =>` so
  this is not currently TS7006, but tightening keeps the codebase
  ready for `@typescript-eslint/no-implicit-any-catch`-style rules
  and matches the rest of the codebase's strict-error patterns.

* core — Added a `removeLastUserMessage` regression test pinning
  the `_readLogFile` failure branch (mocks `fs.readFile` to throw
  Permission denied). The symmetric `writeFile` failure case was
  already covered; this closes the gap on the read leg.

Tests: AppContainer 67/67 (+2), useGeminiStream 87/87, historyUtils 11/11,
logger 50/50 (+1). Type-check and lint clean.

* chore(cli): add debug observability for each auto-restore-on-cancel bail-out

The cancel handler in AppContainer has seven independent guards that
silently `return` when auto-restore is unsafe (buffer non-empty, queue
non-empty, pending meaningful content, no last-turn user item, no user
in history, trailing items not all synthetic, candidate-text mismatch).
Until now, users reporting "I pressed ESC but my prompt didn't come
back" had no way to know which guard tripped without a debugger.

Log a specific `debugLogger.debug(...)` line at each bail-out and one
on the success path. Debug level keeps production output silent;
re-enableable by running with `DEBUG=1` (per existing convention in
this file). No control-flow change.

* docs(core): scope removeLastUserMessage's "false ⇒ observable in-memory" guarantee

The previous JSDoc implied the guarantee held for every `false` return,
but it only really holds on the disk read/write THROW path (where we
roll back the optimistic in-memory removal). Two other `false`-paths
behave differently:

  - Initial guards (logger uninitialized / no tracked entry): nothing
    was ever removed, nothing to restore — entry stays in whatever
    state it was already in.
  - Disk read succeeds but the tracked row is missing on disk (e.g. a
    concurrent rotation/clear): we adopt disk state into `this.logs`,
    so both sides agree the entry is gone — `false` is returned but
    the entry is NOT observable in-memory either.

Rewrite the failure-handling paragraph and `@returns` line to spell
out both branches explicitly. No code change.

* fix(core): shift lastLoggedUserEntry on USER logMessage duplicate-skip

When `_updateLogFile` detects another instance already wrote an
identical (sessionId, messageId, timestamp, message) row and returns
null, the previous logMessage code path left `lastLoggedUserEntry`
pointing at the prior USER entry. A subsequent cancel/auto-restore
would then call `removeLastUserMessage()` and silently delete the
wrong row — typically an older prompt that the user did not intend
to undo.

The fix: when the duplicate skip happens on a USER attempt, advance
`lastLoggedUserEntry` to the entry object we just tried to write.
`_updateLogFile` mutates that object's `messageId` in-place to align
with the disk row before the duplicate check, so the 5-tuple matches
the row that's actually on disk and an undo correctly targets it.

The natural race (`max+1` colliding with an existing `messageId`)
is not reachable by sequential awaits — the snapshot used for the
duplicate check is always max+1-strict. The regression test drives
the contract directly by mocking `_updateLogFile` to resolve to
null and asserting `lastLoggedUserEntry` shifts to the new entry.

* fix(cli, core): strip orphan user entry from chat history on auto-restore

The auto-restore branch was cleaning up two of the three places a
cancelled prompt lives — the UI transcript via `truncateToItem` and
the disk-backed ↑-history via `Logger.removeLastUserMessage`. The
third — the in-memory chat history on `GeminiChat` — was left
untouched. `sendMessageStream` appends the user content to
`chat.history` BEFORE the stream generator runs and the abort path
doesn't pop it. After a successful auto-restore the next request's
wire payload still carried the cancelled prompt as a leading user
turn alongside the new prompt, so the model saw context the user
believed had been undone (and in some shapes the API would reject
two consecutive user turns).

Mirror the existing strip the Retry submit path uses
(`GeminiClient.sendMessageStream` at the `Retry` branch): make
`GeminiClient.stripOrphanedUserEntriesFromHistory` public and call
it from the auto-restore success path, sitting next to the UI
truncate and the disk-log undo. The method already pops trailing
user entries and clears the `FileReadCache` (which can otherwise
hold dangling `read_file` results from the stripped turn).

End-to-end reproduction from the PR review:
1. Submit `what time is it?` → ESC during pre-token delay →
   auto-restore (UI rewound, buffer pre-filled).
2. Edit buffer to `what year is it?` → submit.
3. Pre-fix: outbound `messages` carried both prompts as consecutive
   user turns. Post-fix: only the new prompt.

Test: extend the auto-restore-success AppContainer test with a
mock `stripOrphanedUserEntriesFromHistory` spy and assert it fires.
The non-restore branches don't install the spy (it's optionally
chained at the call site).

* fix(core, cli): tighten Logger.serialize signature + pin lastTurnUserItem and dup-skip identity contracts

Three follow-ups from PR review batch:

* core/logger.ts — `serialize()` was `this.writeQueue.then(op, op)`.
  The second callback was dead code: `writeQueue` is seeded with
  `Promise.resolve()` and reassigned through `.catch(() => undefined)`,
  so the queue tail can never reject. Worse, `then(op, op)` reads as
  "retry op on rejection" — wrong intent. Switch to `.then(() => op())`
  with a comment spelling out the no-reject invariant.

* cli/useGeminiStream.test.tsx — add ownership-contract tests at the
  PRODUCER side of `info.lastTurnUserItem`. Until now only the
  AppContainer tests pinned the contract, and they fabricate the
  value, so a regression that drops `lastTurnUserItemRef.current = {
  text: trimmedQuery }` in `prepareQueryForGemini` would slip
  through. New tests:
    - normal `UserQuery` submit → cancel → assert
      `info.lastTurnUserItem === { text: 'what time is it?' }`.
    - `SendMessageType.Notification` submit → cancel → assert
      `info.lastTurnUserItem === null` (path doesn't push a user
      history item, the ref reset at the top of
      prepareQueryForGemini must keep it null).

* core/logger.test.ts — strengthen the duplicate-skip regression.
  The previous test only checked the tracker advanced text; the
  important identity contract is that the recalculated 5-tuple
  matches the disk row, so a subsequent `removeLastUserMessage()`
  removes the duplicate-skipped row rather than the older USER.
  New test seeds disk with [first, second], stubs `_updateLogFile`
  for the second call to mimic the duplicate-skip branch (mutate
  newEntryObject's messageId+timestamp to align with the disk row,
  return null), then asserts removeLastUserMessage() leaves
  ['first'] on disk and removes 'second'.

* fix(cli, core): close four cancel-auto-restore correctness gaps from PR review

Four critical findings from gpt-5.5 /review pass:

1. **Retry skipped the lastTurnUserItem reset** (useGeminiStream.ts)
   `Retry` bypasses `prepareQueryForGemini`, which is where the
   `lastTurnUserItemRef.current = null` reset lived. A retry that
   followed a normal `UserQuery` carried the stale ownership snapshot
   into `onCancelSubmit`, and cancelling the retry before any
   meaningful output let `AppContainer` auto-restore truncate the
   original failed prompt. Move the reset (and the new content-seen
   reset, see #4) to the top of `submitQuery`, gated only on
   "this is a top-level submit" — covers Retry, Cron, Notification,
   and ordinary UserQuery alike.

2. **Text-only ownership matched dedup'd duplicates** (AppContainer.tsx,
   useGeminiStream.ts) `useHistoryManager.addItem` skips inserting a
   consecutive-duplicate user message while still returning a freshly
   generated id. The text-only ownership check would match the OLDER
   identical-text USER row, so a re-submitted same prompt + cancel
   would wrongly truncate the prior turn. Carry id+text in
   `CancelSubmitInfo.lastTurnUserItem` (using `addItem`'s return
   value) and require both id AND text to match before truncating.

3. **stripOrphan left IDE context state advanced** (client.ts) Other
   history-mutating paths (`setHistory`, `truncateHistory`) set
   `forceFullIdeContext = true` after mutating; the orphan-strip
   didn't, so a subsequent request could send a diff against a
   removed baseline. Gate cache-clear + IDE-context invalidation on
   an actual before/after length drop, so no-op strips don't churn
   state.

4. **Flush-then-thought race let auto-restore wipe committed content**
   (useGeminiStream.ts, AppContainer.tsx) `cancelOngoingRequest`'s
   pre-cancel flush can `addItem` a meaningful `gemini_content` (via
   handleContentEvent's split path) and then a later thought event
   overwrites `pendingHistoryItem` with a synthetic value. The
   AppContainer guard's React history snapshot is stale, so the
   trailing-only-synthetic check passes and the just-committed text
   gets truncated. Track a synchronous `turnSawContentEventRef` set
   in handleContentEvent, ship it through `CancelSubmitInfo`, and
   make the guard bail when set.

Tests:
- core/client.test.ts: stripOrphan only forces full IDE context on
  actual removal; existing retry tests updated to mock
  `getHistoryLength`.
- cli/useGeminiStream.test.tsx: ownership uses { id, text }, Retry
  reset works after a prior UserQuery cancel,
  turnProducedMeaningfulContent flips true when content lands.
- cli/AppContainer.test.tsx: guard bails on `turnProducedMeaningfulContent: true`,
  guard bails on id mismatch (catches addItem dedup case).

cli 162/162 + core client 99/99 + core logger 51/51.

* fix(cli): repaint static transcript after auto-restore truncate

Reported by @tanzhenxin: auto-restore truncated React `history` state
but the cancelled `> prompt` and `Request cancelled.` lines stayed
printed in the terminal — Ink's `<Static>` region is append-only, so
shrinking the underlying array doesn't unprint already-flushed lines.
On the PR's golden path (type prompt → Enter → ESC) the user sees the
prompt twice: once in scrollback, once pre-filled in the input buffer.
Confirmed at multiple Enter-to-ESC delays, so it's not a timing fluke.

Call `refreshStatic()` immediately after `truncateToItem(...)` in the
auto-restore success path. `refreshStatic` writes the ANSI
clear-terminal escape AND bumps the static remount key — the exact
recipe `/clear` (`handleClearScreen`) already uses for the same
reason. The targeted-repaint helper used for terminal resizes is
intentionally NOT used here: it preserves scrollback, which would
leave the cancelled prompt visible above the new viewport.

Test: extend the existing auto-restore happy-path AppContainer test
to assert `mockStdout.write` was called with `ansiEscapes.clearTerminal`.
The other auto-restore-bail tests don't install the assertion so they
naturally verify the negative case (no clear when guard rejects).
2026-05-13 12:08:16 +08:00
ChiGao
720ccaccb7
fix(core): normalize cumulative OpenAI stream deltas to suffixes (#3896)
Some OpenAI-compatible upstreams (notably specific DashScope / 阿里云百炼 Coding Plan paths) send `delta.content` as accumulated full text instead of incremental suffixes. The Gemini stream pipeline appended every chunk verbatim, so the same content was concatenated repeatedly during streaming.

Adds a per-stream `normalizeStreamingTextDelta` that detects cumulative-mode chunks (current chunk starts with previously emitted text) and emits only the suffix. Once cumulative mode is locked in, exact-repeat (≥64 chars) and prefix-overlap chunks are silenced. Normal incremental streams flow through unchanged.

The fix applies to both `delta.content` and `delta.reasoning_content` / `delta.reasoning`, with separate state slots so a switch between text and reasoning channels does not cause false-positive suppression. A separate `emittedLength` counter tracks user-visible bytes so an incremental-then-cumulative hybrid stream slices the correct suffix even after the 1024-byte detection-window cap kicks in.

Fixes #3279, #1184, #3838, #3806.
2026-05-13 12:05:42 +08:00
tanzhenxin
7099165dae
fix(core): log internal OpenAI JSON requests (#4081)
* fix(core): log internal OpenAI JSON requests

* fix(core): avoid duplicate OpenAI log metadata
2026-05-13 09:19:44 +08:00
jinye
14512080ed
feat(telemetry): add hierarchical session tracing spans (#4071)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* feat(telemetry): add hierarchical session tracing with interaction spans

Add OpenTelemetry-based session tracing that creates hierarchical spans
for user interactions. Interaction spans wrap the full sendMessageStream
lifecycle and are properly ended at all exit points (ok, error, cancelled).

- New session-tracing.ts module with concurrent-safe span management
- Interaction spans started for UserQuery/Cron/Notification message types
- Shutdown safety net ends active interaction span before SDK shutdown
- Memory-safe span storage using WeakRef + strongRefs + 30-min TTL cleanup
- NOOP_SPAN sentinel avoids leaked spans when SDK is not initialized

* fix(telemetry): address review comments on session-tracing

- TTL cleanup now sets ctx.ended=true before span.end() to prevent double-end
- Interaction spans stored in strongSpans for GC-safe TTL cleanup
- cancelled status uses SpanStatusCode.OK per codebase convention
- endLLMRequestSpan treats missing metadata as OK, not ERROR
- Module-level lastInteractionCtx fallback for shutdown outside ALS context
- try/finally safety net in sendMessageStream for uncaught exceptions

* fix: apply auto-fixes from /review

- Fix next-speaker continuation span leak by overriding options.type to Hook
- Add setStatus call to endToolExecutionSpan for consistent span status
- Sanitize API error messages in interaction span status

---------

Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
2026-05-13 00:03:31 +08:00
Zach He
8e18e64594
feat(dashscope): support DASHSCOPE_PROXY_BASE_URL for prompt cache via API gateway (#3991)
Adds `DASHSCOPE_PROXY_BASE_URL` env var so DashScope prompt-cache headers fire when requests go through an internal API gateway proxy URL.

- New constant `DASHSCOPE_PROXY_BASE_URL` in `openaiContentGenerator/constants.ts`, read once at module load from `process.env`.
- `isDashScopeProvider()` now also returns true when `baseUrl` matches the configured proxy URL (case-insensitive, trailing slash on either side normalized).
- Debug log fires on the configured-but-mismatched path with a static message (no URL interpolation, so embedded credentials in the proxy URL cannot leak to `QWEN_DEBUG_LOG_FILE`).
- 9 new test cases covering match, mismatch, debug-log assertion, and trailing-slash normalization. Uses `vi.stubEnv` + `vi.unstubAllEnvs` in `beforeEach`.

Closes #3991
2026-05-12 14:48:10 +08:00
Matyokubov Umar
664208c420
feat(core): replace fdir crawler with git ls-files + ripgrep fallback (#3214)
Replaces the fdir filesystem crawler with a three-tier strategy for `@` file mention autocomplete.

Strategy:
1. `git ls-files --cached` + `git ls-files --others --exclude-standard` for git repos — reads from the git index, gitignore correctness delegated to git.
2. `ripgrep --files` for non-git directories when `rg` is available.
3. `fdir` as the original last-resort fallback.

Additional improvements:
- mtime-based change detection on `.git/index` skips re-crawl when nothing changed.
- 5s refresh throttling avoids rebuilding the index on every keystroke.
- Async chunked indexing yields to the event loop every 1000 entries to stay responsive on 200k+ file trees.
- Background untracked-file merge with a 10s timeout.
- `resolveGitDir()` correctly handles `.git` as a file (worktrees) including relative `gitdir:` pointers and walks up to parent repos.
- POSIX-style path normalization on Windows.
- Streaming `spawn()` with line-by-line `onLine` processing and an enforced `maxFiles` budget that terminates the child early, preventing the previous 20MB stdout buffer hazard on large repos.
- `--exclude-standard` on the untracked listing so gitignored directories (e.g. `node_modules/`, `dist/`) are not enumerated.

Closes #3137
2026-05-12 13:58:16 +08:00
Shaojin Wen
76d8c0ce83
chore(core): runtime.json sidecar follow-ups from PR #3714 review (#4030)
- Drop dead `typeof !== 'boolean'` check in the integer-type guard
  (Number.isInteger(true) is already false; rename to isFiniteInteger).
- Drop dead utf-8-message branch in readRuntimeStatus's catch:
  fs.readFile(p, 'utf-8') returns replacement chars rather than throwing,
  so that branch never fires; the JSON.parse fallback already handles
  the truncated-UTF-8 case the test exercises.
- Move runtimeStatus.js export from between fileUtils and filesearch
  into alphabetical position next to runtimeFetchOptions.
- Add Config.startNewSession integration tests pinning the
  runtimeStatusEnabled gate: flag-off leaves sibling sidecars alone,
  flag-on swaps old->new on the same PID, no-op when sessionId is
  unchanged.

No behavior change for the runtime-status sidecar itself; this just
trims dead defensive code and adds the integration coverage that was
missing in #3714.
2026-05-12 06:55:52 +08:00
jinye
32a49b4ddb
refactor(telemetry): remove dead useCollector setting and unreachable TelemetryTarget.QWEN (#4061)
Some checks are pending
Qwen Code CI / Classify PR (push) Waiting to run
Qwen Code CI / Lint (push) Blocked by required conditions
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Blocked by required conditions
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Blocked by required conditions
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
useCollector was plumbed through config (interface, constructor, getter,
env var resolution) but never consumed by the telemetry SDK — the setting
had no runtime effect. TelemetryTarget.QWEN existed in the enum but
parseTelemetryTargetValue() only accepted 'local' and 'gcp', making
'qwen' unreachable (it would throw FatalConfigError).

Remove both dead code paths along with their tests and documentation.

Part of #3731
2026-05-11 23:22:53 +08:00
Shaojin Wen
bdd5b602de
feat(core): improve Anthropic proxy compatibility and enable global prompt cache scope (#4020)
* feat(core): improve Anthropic proxy compatibility and enable global prompt cache scope

- Use authToken instead of apiKey to send Authorization: Bearer header,
  avoiding dual-header conflicts with IdeaLab-style proxies
- Set User-Agent to claude-cli format and add x-app header for proxy
  Team rule compatibility
- Add adaptive thinking support for Claude 4.6+ models
- Enable prompt-caching-scope-2026-01-05 beta header and scope: global
  on system prompt cache_control to improve cross-session cache hit rates

* test: update tests for User-Agent, beta header, and cache_control changes

* fix(core): scope anthropic proxy workarounds to non-native baseURLs only

Address review feedback on PR #4020 by narrowing each workaround to where
it actually applies, instead of shipping it globally.

- Gate `Authorization: Bearer` (`authToken`), `claude-cli` User-Agent, and
  `x-app: cli` to non-Anthropic-native baseURLs. Direct `api.anthropic.com`
  users keep the SDK-default `x-api-key` (`apiKey`) auth and a truthful
  `QwenCode` User-Agent so usage isn't misattributed in Anthropic's
  logs/quotas, and so a stricter Anthropic backend doesn't 401 on a
  `Bearer`-shaped header.
- Gate the `prompt-caching-scope-2026-01-05` beta on `enableCacheControl`.
  When the converter isn't attaching `cache_control` to the body the beta
  is dead weight and risks 4xx responses from anthropic-compatible
  backends that don't recognize it. Restores the `betas.length === 0`
  early-return for the all-disabled case.
- Detect adaptive-thinking models with numeric major/minor compare instead
  of `[6-9]`. The character class missed `claude-haiku-4-6` entirely and
  would silently fall through to `budget_tokens` on `claude-opus-4-10` /
  `claude-opus-5-1` once those ship — tripping HTTP 400 with a shape the
  server no longer accepts.
- Honor explicit `reasoning.budget_tokens` before the adaptive branch.
  Adaptive omits `budget_tokens` from the wire shape, so checking it
  second silently dropped a user-supplied escape-hatch budget on Claude
  4.6+ models.
- Add `scope: 'global'` on the tool `cache_control` entry so the largest,
  slowest-changing prefix actually participates in cross-session caching
  under the new beta — the system-only attachment was capturing maybe
  half the available hit-rate improvement.
- Replace the misleading `as { type: 'ephemeral' }` cast on the system
  block (which erased `scope` from the type while leaving it on the
  wire) with a `AnthropicTextBlockParam` type that mirrors the existing
  `AnthropicToolParam` widening, so types match the runtime shape.

* fix(core): keep enableCacheControl live in the converter

Follow-up on PR #4020 review: `Config.setModel()` mutates
`enableCacheControl` in place (it's in `MODEL_GENERATION_CONFIG_FIELDS`),
but the converter captured it once at construction. On a hot flip the
generator's per-request `prompt-caching-scope-2026-01-05` beta gate would
sample the new value while the converter still emitted the old body-side
`cache_control` — beta-header and body could disagree.

- Thread the live `contentGeneratorConfig.enableCacheControl` into the
  converter via a per-call options override on both
  `convertGeminiRequestToAnthropic` and `convertGeminiToolsToAnthropic`,
  falling back to the constructor-time default when the caller doesn't
  pass one. The generator samples the value once per `buildRequest` and
  forwards it to both convert calls so the body and beta header always
  agree within a request, even across `setModel` flips.
- Regression test: hot-flip `enableCacheControl` from `true` to `false`
  on a live generator, verify the 2nd request drops both the beta header
  AND the body-side `cache_control` in lockstep.
- Tighten two existing beta-header tests that used `toContain` only on
  `interleaved-thinking` / `effort` — they now also assert
  `prompt-caching-scope-2026-01-05` is present (per-request keep-default
  and streaming paths), so accidental removal trips the test.
- Add coverage for the two previously-uncovered branches of
  `isAnthropicNativeBaseUrl`: `*.anthropic.com` subdomains
  (Anthropic-native) and a malformed baseURL (URL parse failure → proxy
  fallthrough). Also add an `anthropic.com.evil.com` hostname-spoof case
  mirroring the existing DeepSeek spoof test.

* refactor(core): consolidate anthropic generator shapes & document edge cases

Follow-up on PR #4020 review:

- Extract `AnthropicThinkingParam` type alias. The thinking union
  `{ type: 'enabled'; budget_tokens } | { type: 'adaptive' }` was repeated
  verbatim in three places: the `MessageCreateParamsWithThinking` field,
  the streaming-request intersection, and `buildThinkingConfig`'s return
  type. Once a third shape ships, forgetting one site would silently
  narrow a runtime value — single alias keeps them locked.
- Compute `useProxyIdentity` once in the constructor and pass it into
  `buildHeaders`. Previously `useBearerAuth` and `useProxyIdentity` named
  the same predicate at two call sites; collapsing them clarifies that
  Bearer auth + `claude-cli` UA + `x-app: cli` are one bundle that
  should never be split.
- Document that `modelSupportsAdaptiveThinking`'s regex is intentionally
  unanchored so reseller-prefixed names (`bedrock/claude-opus-4-7`,
  `vertex_ai/claude-sonnet-4-6@…`, `idealab:claude-opus-4-6`, …) keep
  matching. Tightening to `^claude-` would silently regress those.
- Soften the `prompt-caching-scope` beta comment so it describes what
  the code enforces (gate on the `enableCacheControl` flag) rather than
  promising a stronger "only ship when cache_control is on the body"
  invariant — the converter still skips `cache_control` on niche shapes
  (e.g. no system text, no tools, last user block isn't text). The
  looser gate is intentional; Anthropic-native ignores unused betas.
- Pin the wire shape for the `reasoning: undefined` + 4.6+ model corner.
  `resolveEffectiveEffort` returns undefined on `reasoning === undefined`,
  so `buildThinkingConfig` ships `{ type: 'adaptive' }` with no
  `output_config` and no `effort-2025-11-24` beta. If Anthropic ever
  starts requiring `output_config.effort` alongside adaptive, this test
  will fail at CI rather than at runtime as a server 400.

* fix(core): gate cache-scope on Anthropic-native baseURL, mirror auth gate

Follow-up on PR #4020 review: the `prompt-caching-scope-2026-01-05` beta
header and the body-side `scope: 'global'` field together comprise an
Anthropic-only wire-shape extension. Shipping them to non-Anthropic
backends (DeepSeek, IdeaLab) leaned on "unknown betas are ignored" —
true on Anthropic-native, but unverified for proxies and silently
inconsistent with the auth/identity gate, which already uses
`isAnthropicNativeBaseUrl` to bind Bearer / claude-cli / x-app to the
proxy path only.

- Add `useGlobalCacheScope` predicate on the generator. True iff
  `enableCacheControl !== false` AND the resolved baseURL is
  Anthropic-native. Plumbed per-request into both
  `convertGeminiRequestToAnthropic` and `convertGeminiToolsToAnthropic`;
  the same predicate also gates the `prompt-caching-scope-2026-01-05`
  beta in `buildPerRequestHeaders` so beta + scope field always travel
  together.
- Converter emits `cache_control: { type: 'ephemeral' }` (per-session)
  when scope is off and `{ type: 'ephemeral', scope: 'global' }` when on.
  Non-Anthropic baseURLs go back to their pre-PR per-session caching
  shape; existing prompt caching keeps working with no new beta.
- Document the intentional `scope: 'global'` omission on
  `addCacheControlToMessages`. The last user message changes every
  turn (live prompt + immediate tool_results), so cross-session reuse
  has effectively zero hit rate; cross-session caching is concentrated
  on system + tool prefixes only.

Tests:
- DeepSeek baseURL pins the proxy auth/identity path
  (`authToken` / claude-cli UA / `x-app: cli`). Documents the
  contract assumption that DeepSeek's anthropic-compatible endpoint
  accepts `Authorization: Bearer` — any future deviation surfaces
  here rather than at runtime for users.
- Non-Anthropic baseURL strips the cache-scope beta AND
  `scope: 'global'` from the wire shape, while keeping per-session
  `cache_control: { type: 'ephemeral' }` on system / tools.
- Hot-flip test extended to assert tool `cache_control` flips alongside
  system / user / beta header.
- Converter-level tests for per-call `enableCacheControl` and
  `useGlobalCacheScope` overrides — both directions of the constructor
  default (true→false, false→true) and the scope-independent-of-source
  case (cache on, scope off → per-session shape).
- baseConfig in the per-request anthropic-beta block now targets
  `api.anthropic.com` so cache-scope assertions remain meaningful; the
  proxy-baseURL behavior is covered separately.

* docs(core): tighten useGlobalCacheScope JSDoc — baseUrl is NOT hot-mutated

#4020 review: the JSDoc claimed `Config.setModel()` mutates both
`enableCacheControl` AND `baseUrl` in place. Per the current Config
implementation, only the qwen-oauth hot-update path mutates
`enableCacheControl` in place; non-qwen-oauth providers go through
the refresh path which recreates the ContentGenerator (so `baseUrl`
is captured fresh at construct time, not mutated).

Tightened the wording to reflect actual behavior + kept the
read-both-each-request defense (cheap and avoids stale-cache
surprises if the hot-update list ever expands).

* fix(core)!: suppress env back-fill so proxy auth doesn't leak real Anthropic key

#4020 review (tanzhenxin, severity high): the IdeaLab-proxy branch
spread `{ authToken: <key> }` and omitted `apiKey` entirely. The
Anthropic SDK constructor destructures with defaults
(`apiKey = readEnv('ANTHROPIC_API_KEY') ?? null`), and destructuring
defaults only fire for `undefined` — so an omitted `apiKey` lets
`ANTHROPIC_API_KEY` back-fill it. The SDK's auth resolver then prefers
`apiKey` over `authToken`, shipping `X-Api-Key` (not
`Authorization: Bearer`) on the wire. Concrete impact: a user with
`ANTHROPIC_API_KEY=sk-ant-…` exported (normal for anyone also running
Claude Code in the same shell) configuring qwen-code with an IdeaLab
proxy plus an IdeaLab token would leak their real Anthropic key as
`X-Api-Key` to the third-party proxy endpoint.

- Pass `apiKey: null` explicitly on the proxy branch and `authToken: null`
  on the Anthropic-native branch. Explicit `null` suppresses the
  destructuring default; the env back-fill no longer fires.
- New helper `resolveEffectiveBaseUrl` mirrors the SDK's own
  destructuring order (config → `ANTHROPIC_BASE_URL` env → SDK default).
  `isAnthropicNativeBaseUrl` now consults the env too, so a user
  configuring the proxy purely through `ANTHROPIC_BASE_URL` (qwen-code
  `baseUrl` unset) gets the proxy identity bundle instead of silently
  shipping native auth + UA + cache-scope beta to the proxy.

Tests:
- ANTHROPIC_API_KEY env + proxy baseURL → ctor receives `apiKey: null`
  and `authToken: our-key`. Locks in the credential-leak fix.
- ANTHROPIC_AUTH_TOKEN env + Anthropic-native baseURL → ctor receives
  `authToken: null` and `apiKey: our-key`. Symmetric guard for the
  inverse direction.
- ANTHROPIC_BASE_URL env points to proxy, config.baseUrl unset → proxy
  identity bundle (claude-cli UA, x-app, Bearer auth) applies.
- ANTHROPIC_BASE_URL unset → SDK default api.anthropic.com path keeps
  native identity (predicate doesn't misclassify the SDK default as a
  proxy).
- config.baseUrl wins over ANTHROPIC_BASE_URL — mirrors the SDK's own
  resolution order.
- Existing 7 identity tests updated from `toBeUndefined()` to
  `toBeNull()` to match the new explicit-suppression contract.

* refactor(core): gate cache-scope beta on body presence, not just predicate

#4020 review (Copilot): the comment promised "beta and body-side
scope: 'global' field always ship together" but the gate was just
`useGlobalCacheScope()`. In the degenerate case where the predicate is
true but the request body has no system text AND no tools, the beta
would still ship without any matching `cache_control.scope: 'global'`
on the wire — overstating the contract and shipping dead weight.

- New `hasGlobalCacheScopeOnWire(req)` scans the assembled request body
  (system block when shaped as `TextBlockParam[]`, tools array) for any
  `cache_control: { …, scope: 'global' }` entry. `buildPerRequestHeaders`
  gates the `prompt-caching-scope-2026-01-05` beta on this scan, so the
  beta and the body field share a single source of truth. No window
  between sampling the predicate and emitting the body where the two
  could diverge.
- `useGlobalCacheScope()` is still sampled once per `buildRequest` and
  threaded into the converter to decide whether to ATTACH
  `scope: 'global'` to the body. The body-scan downstream then derives
  the beta from what actually landed.

Tests:
- New: empty systemInstruction + no tools + Anthropic-native + cache on
  → beta NOT shipped (degenerate body-scan case).
- New: empty systemInstruction + non-empty tools → beta shipped
  (tool scope:'global' triggers the scan).
- Existing per-request beta tests now include a `systemInstruction` so
  the body has the scope field; degenerate case is covered by the new
  dedicated test.

Also tightened two stale comments (#3217834451, #3217834505) that
claimed `Config.setModel()` mutates both `enableCacheControl` and
`baseUrl` in place — only `enableCacheControl` is hot-mutated (qwen-oauth
path); non-qwen-oauth providers recreate the generator on refresh, so
`baseUrl` is captured fresh at construct time. Comments now describe the
real in-place mutation and note the qwen-oauth boundary.

* fix(core): trim config.baseUrl and document x-app in buildHeaders

#4020 review (Copilot): two low-stakes follow-ups on 491a441f9.

- `resolveEffectiveBaseUrl` trimmed `ANTHROPIC_BASE_URL` env but
  returned `contentGeneratorConfig.baseUrl` as-is. A copy-pasted baseURL
  with leading/trailing whitespace would trip `new URL(...)` in
  `isAnthropicNativeBaseUrl` and fall through the catch branch to the
  proxy identity bundle — meaning real api.anthropic.com would receive
  Bearer auth + claude-cli UA and 401. Apply the same trim() +
  empty-as-missing normalization on the config side. New regression
  test pins the contract with `'  https://api.anthropic.com  '`.
- `buildHeaders` docstring said constructor headers carry only
  User-Agent + customHeaders (excluding anthropic-beta). The PR also
  added `x-app: cli` on the proxy path; updated the comment so a future
  maintainer reading the "no duplicate headers" rationale doesn't
  miss the x-app addition.
2026-05-11 19:15:36 +08:00
tanzhenxin
8beb97eecf
feat(tools): keep ask_user_question always-visible to surface clarification UX (#4041)
Previously ask_user_question was deferred behind ToolSearch (shouldDefer: true),
so the model only saw it as a name in the deferred-tools section and had to
discover it via tool_search before invoking. In practice the model rarely
went through that discovery step and instead asked clarifying questions in
plain prose, losing the structured multiple-choice UX the tool exists to
provide.

Flip shouldDefer to false so the schema lives in the initial tool list. Add a
unit-test guard so a future refactor doesn't silently put it back behind
ToolSearch.
2026-05-11 19:06:03 +08:00
tanzhenxin
d7a25682e6
refactor(core): route side-query LLM calls through runSideQuery chokepoint (#3775)
* refactor(core): route side-query LLM calls through runSideQuery chokepoint

Folds every one-shot side-query call site through a single `runSideQuery`
entry point with `thinkingConfig.includeThoughts: false` and `fastModel`
(falling back to main) as the default policy. Adds a text-mode sibling
to the existing JSON-mode helper, plus a `BaseLlmClient.generateText`
primitive that calls `ContentGenerator.generateContent` directly so
side queries get neither user-memory wrapping nor the main-prompt
fallback that `geminiClient.generateContent` applies.

Migrated call sites: session title, recap, tool-use summary, /rename,
follow-up suggestion (direct path), ACP rewrite, project /summary,
arena approach summary, chat compression, web-fetch, insight analysis,
subagent spec generation. Six call sites override the helper defaults
explicitly (subagent gen, suggestion, ACP rewrite, /summary, compression,
insight) where main-model quality or caller-supplied model matters.

The /summary path additionally fixes a latent bug: text extraction
previously did not strip thought parts, so on thinking models the
saved `.qwen/PROJECT_SUMMARY.md` could leak `reasoning_content` into
the file. The chokepoint now strips thought parts and the request
itself goes out with thinking off.

Best-effort cosmetic callers (recap, tool-use summary, kebab rename,
suggestion) opt into `maxAttempts: 1` so transient outages don't burn
seven retries on output the user will likely never see. `isInternalPromptId`
recognises the `side-query:` prefix automatically so new call sites are
filtered without per-site allowlist updates.

Removes the `getAgentContentGenerator` workaround in `InProcessBackend`
and the `getAgentSummaryGenerator` indirection in `ArenaManager` —
arena approach summaries now run through the chokepoint against
`fastModel`, giving every agent a neutral arbiter rather than a
self-summary on its own model.

* fix(core): guard isInternalPromptId against undefined prompt_id

logToolCall calls isInternalPromptId(event.prompt_id), and tool-call
events from useToolScheduler can carry an undefined prompt_id. The
side-query refactor added promptId.startsWith(SIDE_QUERY_PROMPT_PREFIX)
without a falsy guard, so the missing id crashed the logger and broke
six useToolScheduler tests across all OS / Node matrix entries on CI.

* fix(cli,core): polish runSideQuery callers from review feedback

- Cap web-fetch, chat-compression, and ACP rewrite at maxAttempts: 1.
  These paths degrade gracefully on failure (tool error, NOOP fallback,
  null return), so 7 retries only delays the user-visible outcome.
- /summary now carries the main session's system instruction so the
  summarizer keeps the coding-assistant role, project context, and
  user memory instead of summarizing the chat in isolation.
- Add isInternalPromptId tests for the side-query: prefix so future
  callers minted via runSideQuery stay filtered out of recordings.

* refactor(core): document runSideQuery defaults and surface promptId in errors

- Add JSDoc on the model and config fields of SideQueryJsonOptions and
  SideQueryTextOptions so the fastModel-first defaulting and the
  thinkingConfig.includeThoughts: false default are visible at the API
  surface, not buried in resolveDefaultModel / applyThinkingDefault.
- BaseLlmClient.generate{Json,Text} error wraps now include promptId
  in the message and pass { cause: error }, so a side-query failure
  identifies which call site failed and preserves the original stack.
- Add tests covering maxAttempts forwarding (present + omitted) and
  rejection propagation for both JSON and text modes — the conditional
  spread is non-trivial and was previously unverified.

* fix(core): preserve per-model provider routing in side queries

BaseLlmClient was bound to the main session's ContentGenerator and only
swapped the request `model` field, so side queries targeting a fast or
alternate model inherited the main provider's baseUrl, credentials, and
sampling settings — breaking cross-provider configurations.

Move per-model generator/authType resolution out of GeminiClient and into
BaseLlmClient as `resolveForModel`. Both generateJson and generateText
now build a per-model ContentGenerator (with cache) when the request
targets a non-main model and pass the resolved retry authType through
to retryWithBackoff. GeminiClient.generateContent delegates to the same
resolver so there is a single source of truth.

Also pin the /forget destructive selector to the main model — the
runSideQuery default moved to fast model in this branch, but /forget
acts on the selection without confirmation, so a weaker fast model
could silently delete the wrong managed-memory entries.

* test(core): assert thinkingConfig/maxAttempts/model forwarding in compression

The compression caller of runSideQuery sets thinkingConfig.includeThoughts=true
and maxAttempts=1. A future refactor that silently drops either would degrade
compression quality without test failure; this assertion locks the contract.

* fix(cli): route dynamic localization through side query

* refactor(core): remove unused memory governance review
2026-05-11 19:03:14 +08:00
ChiGao
cadda23782
chore(deps): upgrade ink 6.2.3 → 7.0.2 + bump Node engine to 22 (#3860)
* chore(deps): upgrade ink 6.2.3 -> 7.0.2 + bump Node engine to 22

ink 7 requires Node >=22 and react-reconciler 0.33 with React >=19.2,
so this PR also bumps:

- Node engines (root + cli + core) 20 -> 22
- React/react-dom 19.1 -> 19.2.4 (pinned exact via overrides to keep
  the transitive React graph deduped to a single instance)
- @types/node pinned to 20.19.1 via overrides to avoid an unrelated
  Dirent NonSharedBuffer regression in sessionService tests
- @vitest/eslint-plugin pinned to 1.3.4 to avoid an unrelated lint
  regression introduced by the 1.6.x rule additions
- react-devtools-core 4.28 -> 6.1 (ink 7 peerOptional requires >=6.1.2)
- ink hoisted to root devDeps so workspace-private peer-dep contention
  doesn't push ink-link/spinner/gradient into nested workspace
  installs (which would skip transitive resolution for terminal-link)

Workflow + image + installer alignment:

- .nvmrc 20 -> 22
- Dockerfile node:20-slim -> node:22-slim
- CI test matrix drops 20.x (keeps 22.x + 24.x)
- terminal-bench workflow Node 20 -> 22
- Linux/Windows install scripts upgrade their Node version targets

Documentation alignment:

- README.md badge + prerequisites
- AGENTS.md, CONTRIBUTING.md, docs/users/quickstart.md,
  docs/users/configuration/settings.md, docs/developers/contributing.md,
  docs/developers/sdk-typescript.md, docs/users/extension/extension-releasing.md,
  packages/sdk-typescript/README.md, packages/zed-extension/README.md,
  scripts/installation/INSTALLATION_GUIDE.md

Test gating:

- Two AuthDialog/AskUserQuestionDialog tests that drive <SelectInput>
  through ink-testing-library now race ink 7's frame-throttled input
  delivery and land on the wrong option. The maintainers had already
  marked one of them unreliable (skip on Win32 + CI+Node20). Extend
  that gate to cover all environments until upstream
  ink-testing-library ships an ink-7-compatible release that flushes
  input deterministically. The other test now uses it.skip with the
  same comment. No business code changes.

Verified locally:

- npm run typecheck across all workspaces: clean
- npm run lint (root): clean
- npm run test --workspaces:
    cli  312/312 files, 4918 passed, 9 skipped
    core 266/266 files, 6836 passed, 3 skipped
    webui 6/6, 201 passed
    sdk  40/40, 283 passed, 1 skipped
- npm ls ink: single ink@7.0.2 instance across all peer deps
- single react@19.2.4 instance

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* chore: align Node 22 floor across all shipping artifacts

Reviewer (tanzhenxin) flagged five surfaces where the >=22 engine bump
leaked: SDK package metadata, web-templates engines, /doctor runtime
check, main bundler target, and SDK bundler target. Each was a separate
escape hatch letting Node 18/20 consumers install or run the artifact
on an unsupported runtime.

- packages/sdk-typescript/package.json: engines.node >=18.0.0 -> >=22.0.0
- packages/web-templates/package.json: engines.node >=20 -> >=22
- packages/cli/src/utils/doctorChecks.ts: MIN_NODE_MAJOR 20 -> 22
- esbuild.config.js: target node20 -> node22 (main CLI bundle)
- packages/sdk-typescript/scripts/build.js: target node18 -> node22 (esm + cjs)
- packages/cli/src/utils/doctorChecks.test.ts: rename test label to v22+

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* ci(e2e): bump E2E workflow Node matrix to 22.x

Reviewer (tanzhenxin) flagged that e2e.yml still pinned node-version
20.x while root engines is now >=22, so every E2E run on push would
either fail at npm ci with engine error or silently exercise the bundle
on a runtime that's no longer in ci.yml's test matrix.

The macOS job in the same workflow already reads .nvmrc (which is 22)
so this only updates the Linux matrix.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(deps): drop root wrap-ansi override so ink 7 gets its declared dep

Reviewer (tanzhenxin) flagged that the root overrides.wrap-ansi: 9.0.2
predates this upgrade and forces every consumer (including ink) to v9,
while ink 7 declares wrap-ansi: ^10.0.0. The lockfile had no nested
install under node_modules/ink/, so ink 7 was running with a transitive
dep one major below its declared minimum.

Dropping the global override lets ink resolve its own wrap-ansi 10
nested install (now visible in the lockfile under
node_modules/ink/node_modules/wrap-ansi), while the cli package's own
direct `wrap-ansi: 9.0.2` dependency keeps the cli code path
(TableRenderer.tsx) on the version it has been tested against. The
nested cliui override is preserved for yargs which still needs v7.

Verified via `npm ls wrap-ansi`:
- ink@7.0.2 -> wrap-ansi@10.0.0 (newly nested)
- @qwen-code/qwen-code -> wrap-ansi@9.0.2 (unchanged)
- yargs/cliui -> wrap-ansi@7.0.0 (unchanged)

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(InputPrompt): un-skip placeholder ID reuse after deletion

Reviewer (tanzhenxin) flagged that the new it.skip on the
'should reuse placeholder ID after deletion' test was undisclosed in
the PR description and removed coverage of real product behavior
(freePlaceholderId / bracketed-paste backspace path) without a
TODO(#NNNN) link.

Their argument was sound: the skip rationale pointed at ink 7's input
throttle, but this same file just bumped the wait helper from 50ms to
150ms specifically to give ink 7 frame time. Re-running the test under
the bumped wait shows it passes reliably (5/5 runs in the full-file
context, 9/10 alone), so the skip was masking the throttle-flake that
the wait bump already addresses, not a real product bug.

Drop the it.skip and the now-stale comment so coverage of the
freePlaceholderId reuse logic is restored.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(InputPrompt): bump first prompt-suggestion test wait to 350ms

The "accepts and submits the prompt suggestion on Enter when the buffer
is empty" test is the first in its describe block, so it pays the
renderer cold-start cost. On macOS-22.x CI runners that pushes the
Enter → onSubmit microtask past the default 150ms post-Enter wait. Match
the 350ms initial render wait used immediately above to absorb the cold
start.

* Revert "test(InputPrompt): bump first prompt-suggestion test wait to 350ms"

This reverts commit 6add83b62e.

* test(InputPrompt): wait for followup suggestion debounce before pressing Enter

Root cause of the failing prompt-suggestion tests on macOS and Windows
CI is not flaky timing of the test post-Enter wait — it's the 300ms
debounce inside createFollowupController.setSuggestion (shared core).
The Enter handler reads followup.state.isVisible synchronously, so if
the debounce timer has not fired before stdin.write('\\r'), the
suggestion path is skipped and onSubmit never runs. No amount of
post-Enter wait can recover from that — the keypress was already
processed against stale state.

The original wait(350) only left ~50ms margin over the 300ms debounce,
which ink 7 / React 19.2 mount overhead consumed on slow Windows
runners. Bump the initial wait to 700ms (named SUGGESTION_VISIBLE_WAIT_MS)
to give the debounce timer + cold-start render a generous buffer.

Apply to the two sibling tests too — without the wait their "does not
accept" assertions pass trivially when suggestion is never visible,
which is a false green that hides regressions in the actual reject path.

* fix(deps): align cli wrap-ansi with ink 7 (9.0.2 -> ^10.0.0)

Ink 7 ships its own wrap-ansi@10. CLI's direct dep was pinned to 9.0.2,
causing two copies of wrap-ansi in node_modules and a potential drift in
CJK width / ANSI handling between ink's internal text wrapping and our
TableRenderer.

Upgrading the CLI's direct dep to ^10.0.0 lets npm dedupe to a single
wrap-ansi@10 used by both ink and TableRenderer. API surface is
identical; the only documented behaviour change is that tabs are
expanded to 8-column tab stops before wrapping, which TableRenderer
doesn't feed in.

TableRenderer test suite (43 tests) passes against wrap-ansi@10.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* chore(deps): document @types/node 20.x pin in overrides

The override pinning @types/node to 20.19.1 (while engines require
Node >=22) is intentional: bumping to @types/node@22.x re-introduces
a Dirent<NonSharedBuffer> type regression that breaks
@qwen-code/qwen-code-core/sessionService tests.

Add a sibling "//@types/node" note inside `overrides` so future
maintainers see the rationale and know when to revisit the pin
without having to dig through PR #3860 history.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(AskUserQuestionDialog): link skipped Submit-tab test to tracking issue

The 'shows unanswered questions as (not answered) in Submit tab' test
was switched to `it.skip` in the ink 7 upgrade because
`ink-testing-library@4.0.0` doesn't flush input deterministically
through ink 7's 30fps throttle.

Add a `// TODO(#4036):` marker so the skip is greppable and can be
re-enabled once upstream ships an ink-7-compatible release.

Refs #4036

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(deps): move @types/node pin comment out of overrides block

npm's `overrides` field requires every key to be a real package name —
the `"//@types/node"` comment-key added in 205855875 trips Arborist with
"Override without name" and breaks `npm ci` across all CI jobs.

Move the explanation to a sibling top-level `"//overrides"` key, which
npm ignores at the document root. Same documentation value, no
override-parser collateral damage.

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-11 17:29:50 +08:00
pomelo
7e11428545
refactor(cli): remove legacy qwen auth CLI subcommand, redirect to /auth TUI dialog (#3959)
The `qwen auth` CLI subcommand (with subcommands like qwen-oauth,
coding-plan, api-key, openrouter, status) has been superseded by the
richer /auth TUI dialog introduced in the provider-first auth registry
(#3864). Running `qwen auth` now prints a deprecation notice pointing
users to the /auth TUI dialog (interactive), env vars (CI/headless),
or /doctor (status check).

Changes:
- Replace auth.ts with a stub that prints a removal notice and exits
- Delete handler.ts (734 lines), interactiveSelector.ts, and their
  tests (interactiveSelector.test.ts, openrouter.test.ts, status.test.ts)
- Update /auth slash command to handle non-interactive/ACP modes gracefully
- Enrich /doctor auth check with provider-aware diagnostics using
  findProviderByCredentials
- Mark `auth` as a subcommand that handles its own exit in config.ts

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-11 16:44:09 +08:00
Shaojin Wen
84ecb5b8a3
feat(cli): add --json-schema for structured output in headless mode (#3598)
* feat(cli): add --json-schema for structured output in headless mode

Registers a synthetic `structured_output` tool whose parameter schema IS the
user-supplied JSON Schema. In headless mode (`qwen -p`), the first successful
call terminates the session and exposes the validated payload via the result
message's `structured_result` field. Invalid schemas are rejected at CLI parse
time via a new strict Ajv compile helper so they can't silently no-op at
runtime.

* fix(cli): honour "first structured_output call ends session" + reject non-object root schemas

Two review fixes for the `--json-schema` feature:

1. `runNonInteractive` now breaks out of the tool-call loop as soon as the
   first successful `structured_output` invocation is captured, rather than
   continuing to execute any trailing tool calls the model emitted in the
   same turn. This restores the documented single-shot contract and prevents
   side-effecting tools from running after the final answer has already
   been accepted.

2. `resolveJsonSchemaArg` rejects schemas whose root `type` is anything
   other than "object" (or a type array including "object"). Function-
   calling APIs require tool arguments to be JSON objects, so a schema
   like `{"type": "array"}` would have registered an unusable synthetic
   tool the model could never satisfy. Absent `type` and `type: "object"`
   remain accepted.

Adds tests for both paths and updates the existing Ajv-compile test to
exercise that path without tripping the new root-type guard first.

* fix(cli): also reject root anyOf/oneOf schemas whose branches can't accept objects

Addresses a review follow-up: the previous root-object check only inspected
the top-level `type` keyword, so a schema like
`{"anyOf":[{"type":"array"},{"type":"string"}]}` slipped through even though
none of its branches can ever validate the object-shaped arguments that
function-calling APIs send.

Replace the single `type` check with `schemaRootAcceptsObject`, which
recursively walks root-level anyOf/oneOf branches and requires at least one
to accept objects. Absent `type`, `type: "object"`, `type: ["object", ...]`,
and mixed anyOf branches where one accepts object all still pass. `allOf`
is left to Ajv's runtime behaviour — guessing intent across contradictory
allOf branches at parse time is fragile.

* fix(cli): propagate exitCode from --json-schema failure path + tests

Address two PR-3598 review findings:

1. gemini.tsx unconditionally called process.exit(0) after
   runNonInteractive/runNonInteractiveStreamJson, clobbering the
   process.exitCode = 1 set by nonInteractiveCli.ts when the model
   emits plain text instead of the structured_output tool. Switch
   both call sites to process.exit(process.exitCode ?? 0) so CI can
   detect the failure via the exit code.

2. nonInteractiveCli.test.ts: strengthen the structured-output
   success path to assert registry.abortAll() is called and that the
   stdout result envelope carries the JSON-stringified args under
   `result` plus the raw object under `structured_result`. Add a
   retry-path test that mocks executeToolCall to return an error on
   the first structured_output call, then verifies sendMessageStream
   is called a second time so the model can retry rather than the
   session terminating early.

* fix(cli): suppress non-structured tool calls when structured_output is in the same turn

When --json-schema is active and the model emits a batch like
[write_file(...), structured_output(...)], the previous implementation
ran the leading side-effecting tool before accepting the structured
result, violating the "structured_output is the terminal contract"
guarantee. The trailing-only break also let an invalid first
structured_output fall through to subsequent tools before the retry
turn.

Pre-scan the batch: if a structured_output request is present, execute
ONLY the first one and skip everything else (leading and trailing).
This is consistent with the existing terminal-path semantics — the
suppressed tool_use blocks lack a matching tool_result, the same way
max-turns / cancellation leave the stream.

Adds a test covering the reverse-order [side_effect, structured_output]
case alongside the existing trailing-suppression and retry tests.

* fix(cli): tighten --json-schema root validation per review feedback

Three small holes flagged in the latest pass:

1. `schemaRootAcceptsObject` returned early when a root `type` keyword
   was present, ignoring sibling `anyOf`/`oneOf`. JSON Schema applies
   keywords at the same level conjunctively, so e.g.
   `{type:"object", anyOf:[{type:"string"}]}` is unsatisfiable for any
   value but used to pass. Now both `type` AND any sibling
   `anyOf`/`oneOf` must independently admit object.

2. The FatalConfigError text said "Every branch of a root anyOf/oneOf
   must be satisfiable by an object", but the actual logic only
   requires *at least one* branch (and tests still accept
   `anyOf:[object, string]`). Reworded to "at least one branch" so the
   message matches the behaviour.

3. `compileStrict` used `typeof schema !== 'object'` to gate input,
   which lets arrays through (`typeof [] === 'object'`). The contract
   says "schema must be a JSON object", so add an `Array.isArray`
   check so array input gets the intended error rather than a less
   helpful Ajv compile message.

Tests cover the new rejection paths and the array case.

* fix(cli): handle root $ref and allOf in --json-schema accept-object check

`schemaRootAcceptsObject` previously only inspected `type`, `anyOf`,
and `oneOf` at the root, so a couple of unsatisfiable shapes still
slipped through:

1. `{"$ref":"#/$defs/Foo","$defs":{"Foo":{"type":"array"}}}` would be
   accepted because we don't follow $refs, but registers a synthetic
   tool whose params resolve to "array" — the model can never produce
   a valid object. Now reject any root $ref unless the user adds a
   sibling `type:"object"` as an explicit anchor.

2. `allOf` was deferred to Ajv runtime, but allOf is conjunctive at
   the same level as `type` / `anyOf` / `oneOf`, so an entry like
   `{"allOf":[{"type":"object"},{"type":"string"}]}` is unsatisfiable
   for any value. Walk it like the others, requiring every branch to
   admit object.

Tests cover the new $ref-rejected / $ref+anchor-accepted paths and the
allOf reject/accept paths.

* fix(cli): explicit exit code from runNonInteractive + pair suppressed tool calls

Three review threads on the structured-output flow:

1. The break that ends the for-loop on a successful structured_output
   call sat *before* the responseParts.push and modelOverride capture.
   SyntheticOutputTool currently returns neither, so it was safe today
   — but anyone wiring extra signals into the synthetic tool later
   would see them silently dropped. Move the break after both captures
   so the contract is explicit, not implicit.

2. The failure path used to set process.exitCode = 1 and return void,
   relying on global mutable state across an async boundary. Any
   cleanup task between runNonInteractive and process.exit could
   silently turn the structured-output failure into exit 0. Switch
   runNonInteractive to Promise<number>, return 0 / 1 directly from
   each function-level exit, and have gemini.tsx use the captured
   return value.

3. The pre-scan from the prior commit suppresses sibling tool calls
   when structured_output is in the same turn. On the retry path —
   when structured_output fails validation — the next-turn payload
   has tool_result for structured_output but no entry for the
   suppressed siblings, leaving the prior assistant turn's tool_use
   blocks unpaired. Anthropic and OpenAI both reject that batch
   shape, so the retry would surface as an opaque provider error.
   Synthesize a "skipped" functionResponse for every suppressed call
   so every tool_use in the prior assistant message has a matching
   tool_result.

Tests cover the new retry-pairing contract and update the existing
plain-text-failure test to assert on the return value rather than
process.exitCode.

* fix: address Copilot follow-up review on --json-schema scaffolding

Five small but real findings flagged on the latest pass:

1. core/src/index.ts re-exported `SyntheticOutputTool` via `export type`,
   but it's a runtime class — that erased it from the emitted JS and
   would break value imports. Split into a value `export { ... }` and a
   `export type { StructuredOutputParams }`.

2. The structured-output success path returned without flushing
   `localQueue` notifications or finalising one-shot monitors. If a
   background task had already emitted `task_started`, exiting here
   could drop its paired `task_notification` and leave SDK consumers
   with unpaired lifecycle events. Mirror the regular terminal path's
   `flushQueuedNotificationsToSdk` + `finalizeOneShotMonitors` calls
   before `emitResult`.

3. `schemaRootAcceptsObject` ignored the `not` keyword, so
   `{not:{type:"object"}}` (which forbids every object value) slipped
   through. Add a best-effort `not` check that rejects when
   `not.type` directly excludes object. Deeper negated patterns still
   fall through to Ajv at runtime.

4. `compileStrict`'s JSDoc claimed it errored on "Ajv versions we can't
   support", but the function doesn't actually check Ajv versions. Reword
   to "malformed or uses unsupported draft/features for our Ajv
   configuration" so the contract matches the implementation.

5. The pre-scan suppressed sibling tool calls but only synthesised
   tool_result events for them on the retry path — the success path
   left those tool_use blocks unpaired in the emitted JSONL/stream-json
   event log. Move the synthesis after the for loop so it runs for both
   the success break and the validation-failure fall-through; the event
   log is now consistent regardless of which path the run takes.

Tests cover the new \`not\`-rejection paths, the success-path tool_result
synthesis, and the existing retry-pairing test still passes against the
restructured emit ordering.

* fix(cli): tighten --json-schema parse-time gate per Copilot review

Two more shapes that used to slip through:

1. `schemaRootAcceptsObject` defaulted to true when no narrowing
   keyword was present, so root-value constraints like `{const: 1}`
   or `{enum: [1, 2]}` registered an unsatisfiable structured_output
   contract — the model could never produce a value matching the
   tool's parameter schema, and the run would loop on validation
   failures until max-turns. Reject `const` whose literal isn't an
   object, and `enum` whose members include no object.

2. The yargs check rejected `--json-schema` with `-i` and with no
   prompt, but not with `--input-format stream-json`. Stream-json
   keeps the process open waiting for protocol messages, so
   "terminate on the first valid structured_output call" silently
   drops everything queued after that point. Refuse the combination
   at parse time so the contradiction surfaces immediately.

Tests cover the new const/enum reject and accept paths.

* fix(cli): handle empty/boolean subschemas + allow stdin-only prompt

Three more shapes flagged on the latest review pass:

1. `schemaRootAcceptsObject` treated an empty root `anyOf`/`oneOf`
   as "no constraint" (skipped when length === 0), but per JSON
   Schema an empty union is unsatisfiable — no value can match a
   member of the empty set. Reject those at parse time so users
   get a clear parse error instead of an opaque runtime
   never-validates loop.

2. JSON Schema (draft-06+) allows boolean subschemas anywhere a
   schema is accepted: `true` matches every value, `false` matches
   nothing. The `anyOf`/`oneOf`/`allOf` walks were rejecting
   booleans via the typeof-object guard, which incorrectly
   rejected `{anyOf:[true]}` and `{allOf:[true,{type:"object"}]}`
   while letting `{anyOf:[false]}` slip through. Replace the
   per-branch object guard with a `variantAcceptsObject` helper
   that treats `true` as accepting and `false` as rejecting, then
   recurses on object subschemas.

3. The yargs `.check` rejected `--json-schema` when no `-p` /
   positional prompt was given, but the headless CLI also reads
   the prompt from stdin (`cat prompt.txt | qwen --json-schema
   '...'`) — a legit usage pattern that was being blocked. Drop
   the parse-time no-prompt rejection; the existing runtime "No
   input provided via stdin..." error in gemini.tsx still catches
   genuinely empty input.

Tests cover the empty-union, all-`false`, mixed-boolean accept,
and `false`-in-allOf reject paths. Live-verified against the
bundled CLI: `echo "..." | qwen --json-schema '...'` now reaches
the model call, and the four schema edge cases all surface the
expected error text or proceed past parse time.

* docs(core): note SyntheticOutputTool as the value-export exception

The block comment above the lazy-load type re-exports said tool classes
"are now lazy-loaded and are not exported as values from the package
root", but `SyntheticOutputTool` was just promoted to a runtime export
in 62038527c so the CLI's `--json-schema` flow can construct it from
the package root. Document that exception inline so downstream
consumers reading the comment don't get told the wrong story.

* fix(cli): try every structured_output in a same-turn batch in order

The pre-scan used to pick only the FIRST structured_output call from a
turn and suppress everything else, even other structured_output calls.
That created two avoidable failure modes:

1. `[structured_output(bad), structured_output(good)]` would attempt
   only the bad one, fail validation, and force a full retry turn.
   The model already produced a valid structured payload — we should
   try it before asking again.

2. The trailing structured_output's tool_result was synthesised with
   the "Skipped: structured_output was also requested in this turn..."
   message, which is misleading because that call WAS the structured
   output we should have tried.

Filter `requestsToExecute` to ALL structured_output calls (in original
order) when --json-schema is active, and let the existing loop break
on the first success. Track an `executedCallIds` set, then synthesise
tool_result + retry parts after the loop for every tool_use the model
emitted that we never actually executed — covering both non-structured
siblings (always suppressed) and any structured_output left over after
the success break (only one terminal contract per turn).

Reworded the synthesised "skipped" output to "this turn's
structured_output contract took precedence" so it reads correctly
regardless of whether the suppressed call was structured or not.

Tests cover the multi-structured retry-free success path; the
existing single-structured retry and trailing/leading suppression
tests still pass against the updated emit ordering.

* fix: address gpt-5.5 review on --json-schema (privacy + $ref + core-tools)

Three findings, three changes:

1. Reject every root `$ref` in --json-schema, even with a sibling
   `type: "object"` anchor. Ajv applies `$ref` conjunctively with
   sibling keywords, so the previous "accept when type:object is
   present" carve-out was unsound: `{type:"object",$ref:"#/$defs/Foo",
   $defs:{Foo:{type:"array"}}}` parsed fine but no object value can
   satisfy both at runtime, leaving the model to loop until maxTurns.
   Updated docstring + test cases (replaced the accept-with-anchor
   case with a reject case for both anchored and well-formed $ref
   shapes — users wanting composition should inline at the root).

2. Redact `function_args` for structured_output in ToolCallEvent.
   The args ARE the user's structured payload (already emitted via
   stdout `result` / `structured_result`); recording them again as
   ordinary tool-call function_args duplicates that data into OTLP
   exports, QwenLogger, ui-telemetry, and the chat-recording UI
   event mirror — surfaces that can leak off-device. Replace with a
   stable `__redacted` placeholder so consumers still see the call
   happened (duration, success, decision metrics preserved) but the
   payload itself doesn't ride along. Two new uiTelemetry tests
   cover the redacted vs non-redacted paths.

3. Document and test that structured_output bypasses the --core-tools
   allowlist (same as agent / skill / exit_plan_mode / ask_user_question
   etc.). The synthetic tool only exists when --json-schema is set,
   so adding it to CORE_TOOLS would let `--core-tools read_file
   --json-schema X` silently drop the terminal contract and loop the
   model until maxTurns — bypass is intentional. Expanded the
   CORE_TOOLS docstring to enumerate the synthetic-tool exclusions
   and added a permission-manager test mirroring the pattern used
   for agent / skill / exit_plan_mode.

* fix(cli): apply structured_output terminal handling to drain turns

The synthetic structured_output tool is registered for the entire
headless session, so it can be invoked from EITHER the main
assistant-turn loop OR from a drain turn (queued cron-job /
notification reply). The drain path (drainOneItem) was treating it
like any other tool: execute, append the response back into
itemMessages, keep going. The submitted args were never captured and
no structured_result envelope was emitted, so a run that legitimately
satisfied --json-schema mid-drain ended up failing the contract with
"Model produced plain text..." anyway.

Apply the same terminal handling to drain turns:

- Hoist `structuredSubmission` to session scope so both paths write
  to one variable.
- In `drainOneItem`, run the same pre-scan: when --json-schema is
  active and structured_output is in the batch, execute every
  structured_output in original order until one succeeds; suppress
  every non-structured sibling. Synthesise tool_results for any
  unexecuted tool_use the model emitted, mirroring the main path.
- On capture, return early from drainOneItem so the drained item's
  inner while loop stops.
- `drainLocalQueue` short-circuits when a captured submission is in
  flight, so subsequent queued items don't run.
- The cron `checkCronDone` watches the same flag and stops the
  scheduler immediately on capture, releasing the surrounding
  `await new Promise(...)`.
- The final holdback loop bails out on capture so monitor lifecycle
  doesn't extend past the structured submission.
- After the holdback, before the existing failure / regular-success
  emit, emit the structured success envelope and return 0.

Adds a focused unit test that drives the drain path end-to-end via a
synchronously-fired monitor notification: main turn produces plain
text, the drain reply calls structured_output, and the test asserts
exit 0 + structured_result populated + no "Model produced plain
text..." error.

* fix(cli): address gpt-5.5 review follow-ups on --json-schema scaffolding

Six review findings, six small fixes:

1. **Nested $ref incorrectly rejected.** `schemaRootAcceptsObject`
   recurses into anyOf/oneOf/allOf branches and used to apply the
   root-only $ref rejection at every level, blocking common
   composition shapes like
   `{anyOf:[{$ref:"#/$defs/Foo"},{type:"string"}]}`. Add an
   `isRoot=true` parameter; non-root recursion treats `$ref` as
   opaque and defers to Ajv at runtime. Tests cover nested refs in
   anyOf / oneOf / allOf.

2. **Inaccurate package-root export comment.** `core/src/index.ts`
   claimed `SyntheticOutputTool` was exported as a runtime value
   for the CLI's --json-schema flow, but the only construction is
   inside `Config.registerLazy` via a relative dynamic import — no
   value consumer reaches into `@qwen-code/qwen-code-core`. Revert
   to a type-only re-export so `SyntheticOutputTool` lines up with
   every other lazy-loaded tool class.

3. **Unused constructor parameter.** `SyntheticOutputTool` took
   `(_config: Config, schema)` but never read `_config`. Drop the
   parameter (and the corresponding pass-through at the registration
   call site) so readers don't wonder why a Config is being threaded
   through.

4. **Tool description claimed "exactly once".** The retry path
   explicitly tolerates multiple calls until one validates, so
   "Call this tool exactly once" is misleading to a model that
   tried twice. Reword to "Call this tool to deliver the final
   result; the first call with valid arguments ends the session" so
   the description matches the actual contract.

5. **Asymmetric shutdown on the structured-output success path.**
   The regular terminal path waits in a holdback loop until
   `hasUnfinalizedTasks()` is false; the structured-output path
   used to call `abortAll()` and flush immediately, dropping the
   matching `task_notification` for any agent whose natural handler
   hadn't yet enqueued it. Add a bounded holdback (capped at 500ms
   via STRUCTURED_SHUTDOWN_HOLDBACK_MS) — long enough for typical
   abort callbacks to enqueue, short enough that a hung agent can't
   block exit.

6. **gemini.tsx exit-code asymmetry.** `runNonInteractive` returns
   an explicit exit code, but `runNonInteractiveStreamJson` still
   reads `process.exitCode` after `runExitCleanup`. Currently safe
   because the yargs `.check` rejects --json-schema with stream-json
   input, but a future stream-json equivalent of structured output
   would need to plumb the exit code through the return value too.
   Document this in a comment so the constraint is visible at the
   call site.

Plus: strengthen `synthesises tool_result for suppressed sibling
calls when structured_output fails validation` to assert the failed
structured_output's `functionResponse.response` carries the actual
validation error string ("args invalid"), not the synthesised
"Skipped:" prose — a regression that overwrote it would otherwise
slip past the existing pairing assertion.

* fix(cli): close --json-schema gaps surfaced in self-audit + review

Five fixes layered onto the same robustness pass over the
`--json-schema` flow:

1. **bare-mode registration** (`packages/core/src/config/config.ts`):
   `qwen --bare --json-schema X -p "..."` previously skipped the
   synthetic `structured_output` registration entirely (the
   registration block lives below the bare-mode early-return), so the
   model had no way to terminate and the run looped to
   `maxSessionTurns`. Register the synthetic tool inside the bare
   branch too.

2. **TTY interactive rejection** (`packages/cli/src/gemini.tsx`):
   `qwen --json-schema X` on a TTY with no `-p` and no piped stdin
   routes to `isInteractive=true` (priority-3 fallback) and would
   launch the TUI, where `structured_output` is just an inert tool
   that prints "accepted" and lets the chat continue. Parse-time
   gating can't catch this (stdin isn't probed yet at parse time), so
   reject at runtime before the UI launches; runs `runExitCleanup`
   first so MCP subprocesses get torn down.

3. **drain-turn structured-success flush**
   (`packages/cli/src/nonInteractiveCli.ts`): when a drain turn
   captures `structured_output`, `drainLocalQueue` returns early,
   leaving any items the drain didn't process in `localQueue`. The
   prior emit path then ran `registry.abortAll()` + `emitResult`
   without flushing — stream-json consumers saw `task_started` events
   without paired `task_notification`. Add the same 500ms holdback +
   `flushQueuedNotificationsToSdk` the main-turn structured-success
   path uses, so the two paths agree.

4. **ACP mutual-exclusion** (`packages/cli/src/config/config.ts`):
   `--acp` runs an independent `runAcpAgent` turn loop that doesn't
   honour the synthetic-tool terminal contract, so `--acp
   --json-schema X` would register the tool but never terminate. Add
   a yargs `.check` rejection covering both `--acp` and the
   deprecated `--experimental-acp` alias.

5. **max-turns + Skipped wording** (review comments
   #3198579251/#3198579389/#3198579567 from yiliang114):
   - `handleMaxTurnsExceededError` now appends a `--json-schema`-
     specific hint pointing at the common stuck-run causes
     (structured_output denied by `permissions.deny` /
     `--exclude-tools`, unsatisfiable schema, prompt didn't
     instruct the model). Without this, three different failures
     all surfaced as the same generic "increase maxSessionTurns"
     line.
   - The synthesised "Skipped:" tool_result for suppressed sibling
     calls drops the trailing "Re-issue this call in a separate
     turn if needed." sentence on the success path, where the
     session terminates immediately and no consumer (model or SDK)
     can act on the advice. Retry path keeps the sentence — the
     model is about to receive these parts and may legitimately
     re-issue.

Tests cover each fix: bare-mode registration order, ACP / experimental-
acp rejection (×2), `--json-schema` hint in both text and JSON max-turns
output, and explicit Skipped-text assertions on the success and retry
paths.

* fix: address 9 self-qreview comments on --json-schema PR

Folds the 9 Suggestion-level comments from the previous /qreview pass
into code/test fixes. Each one is a real issue, but mostly defensive —
none changes the user-visible happy path.

**Refactors (F4/F5/F6 — code-quality)**

- F4 `nonInteractiveCli.ts`: extract `SUPPRESSED_OUTPUT_SUCCESS` /
  `SUPPRESSED_OUTPUT_RETRY` module-level constants and a
  `suppressedOutputBody(structuredCaptured)` helper. Both the main-turn
  and drain-turn synthesis sites previously had a 4-way duplicated
  ternary; future wording changes can no longer drift between them.
- F5 `nonInteractiveCli.ts`: extract `emitStructuredSuccess()` closure
  inside `runNonInteractive`. The "abortAll → bounded holdback → flush
  → finalize one-shot monitors → emitResult → return 0" terminal block
  is now defined once and called from both the main-turn and drain-turn
  success paths. `finalizeOneShotMonitors` is idempotent
  (`oneShotMonitorsFinalized` guard) so the unconditional invocation is
  safe even when the drain-turn already finalized monitors before
  reaching the helper.
- F6 `core/config/config.ts`: extract
  `registerStructuredOutputIfRequested()` helper. The synthetic-tool
  registration block is no longer duplicated between the bare-mode
  early-return branch and the regular registration branch.

**Tests (F7/F8/F9 — pin existing behaviour)**

- F7 `nonInteractiveCli.test.ts`: new test "holds back for in-flight
  background tasks before emitting structured success" — flips
  `hasUnfinalizedTasks: true → false` mid-poll so the holdback `while`
  body actually executes; spies on `abortAll` and asserts ordering of
  `task_notification` (must precede the result envelope) and the
  bounded elapsed-time cap. None of the existing structured-output
  success tests entered this branch (they all pinned
  `hasUnfinalizedTasks: () => false`).
- F8 `gemini.test.tsx`: new test "rejects --json-schema when running
  in interactive (TUI) mode" — pins the TUI guard at gemini.tsx:694,
  asserting the headless-only stderr message AND the exact ordering
  `writeStderrLine → runExitCleanup → process.exit(1)` so a future
  refactor can't swap any of those steps.
- F9 `cli/config.test.ts`: pin the two previously-untested
  `--json-schema` mutual-exclusion branches: `-i`/`--prompt-interactive`
  and `--input-format stream-json`. The stream-json check is
  load-bearing — `gemini.tsx:768` explicitly relies on this rejection
  holding (the parse-time `process.exitCode ?? 0` plumbing in the
  stream-json branch is only safe because `--json-schema` can't reach
  it).

**Behaviour fixes (F1/F2/F15 — privacy / security / correctness)**

- F1 `core/core/geminiChat.ts`: redact `functionCall.args` for
  `structured_output` tool calls before passing them to
  `chatRecordingService.recordAssistantTurn`. Without this, the
  user's structured payload (already emitted on stdout via
  `result` / `structured_result`) was persisted verbatim to
  `<projectDir>/chats/<sessionId>.jsonl` and re-fed into model
  context on `--continue` / `--resume`, contradicting the privacy
  contract documented next to the existing `ToolCallEvent` redaction.
  Each validation-failure retry was also recorded. Now mirrors the
  same `__redacted` placeholder. Helper extracted as
  `redactStructuredOutputArgsForRecording` so it's unit-testable.
- F2 `cli/config/config.ts`: `resolveJsonSchemaArg`'s `@path` reader
  now (a) `fs.statSync`s first to refuse non-regular files (FIFOs,
  character devices like `/dev/zero`, directories), (b) caps the
  schema file at 1 MiB so an attacker who can influence the path
  through a wrapping process can't OOM the run, and (c) on JSON
  parse failure for `@path` source emits a generic "content of
  <path> is not valid JSON" instead of echoing the SyntaxError —
  Node ≥18's SyntaxError embeds a ~10-char file-content prefix in
  its message, which would otherwise ride out on stderr through any
  wrapper that surfaces the error. Inline (non-`@path`) JSON keeps
  the SyntaxError detail because the user is the source.
- F15 `core/tools/tool-registry.ts`: `registerTool` now also checks
  the lazy `factories` map for name collisions, not just the eager
  `tools` map. An MCP server registering a tool whose name shadows a
  built-in lazy factory (e.g. `structured_output`) now gets
  auto-qualified to `mcp__<server>__<name>`, instead of silently
  winning the resolution. The synthetic structured-output tool no
  longer needs renaming for the corner case to be safe.

Targeted suite (13 changed-area test files): 883/886 pass — 3
pre-existing skips. Typecheck clean on both packages.

* fix: address 3 deepseek-v4-pro qreview comments on --json-schema PR

Three Suggestion-level comments from the latest /qreview pass.

**N1 — `schemaRootAcceptsObject` skips `if/then/else`** (cli/config/config.ts):
A schema like `{"if": true, "then": {"type": "string"}}` passed parse-time
gating but is unsatisfiable for object-typed tool args at runtime — the
model would loop until maxSessionTurns. Add a best-effort check for the
two decidable shapes:
- `if: true` → object MUST match `then`; if `then` excludes objects
  (boolean `false`, non-object `type`, etc.), reject at parse time.
- `if: false` → object MUST match `else` (`true` if absent); same check.
Object-schema `if` cases stay runtime-decidable and fall through to Ajv,
matching the existing best-effort scope on `not`. 4 new test cases pin
both reject and accept paths.

**N2 — subagent registries register `structured_output` too** (core/config/config.ts,
core/tools/agent/agent.ts, core/agents/backends/InProcessBackend.ts):
`createApprovalModeOverride` and `buildSubagentContextOverride` rebuild
the tool registry on a `Object.create(base)` config. `this.jsonSchema`
propagates through the prototype chain, so
`registerStructuredOutputIfRequested` was firing for every subagent
registry rebuild — but only `runNonInteractive`'s main / drain loops
detect a successful `structured_output` call as terminal. A subagent
that called the tool would receive "Session will end now" and then keep
running because its own loop has no terminator: wasted tokens, no
structured payload on stdout.

Add a `forSubAgent: true` option to `createToolRegistry` (alongside the
existing `skipDiscovery`), and propagate it from both subagent rebuild
sites. The structured-output registration helper short-circuits when
the flag is set. Bare-mode init does NOT set the flag, preserving the
F6 fix where `qwen --bare --json-schema X -p "..."` still gets the
synthetic tool. New test asserts the registry rebuilt with
`forSubAgent: true` registers READ_FILE / EDIT / SHELL but NOT
STRUCTURED_OUTPUT.

**N3 — TEXT-mode `structuredResult` not integration-tested** (nonInteractiveCli.test.ts):
All 8 existing `--json-schema` tests pin `OutputFormat.JSON` or
`STREAM_JSON`. TEXT (the default for `qwen -p ...`) has no integration
coverage, so a regression in
`BaseJsonOutputAdapter.buildResultMessage`'s
`hasStructured ? JSON.stringify(structuredResult) : resultText`
contract or in `JsonOutputAdapter.emitResult`'s text-mode
`process.stdout.write(`${result}\n`)` path would only surface to plain
`qwen -p` users. New test pins TEXT-mode behaviour: stdout is exactly
`${JSON.stringify(structuredArgs)}\n` — no JSON envelope, no event
log.

Targeted suite (13 spec files): 945/948 pass — 3 pre-existing skips.
Typecheck clean on both packages.

* fix(cli): narrow `not` rejection in schemaRootAcceptsObject

Address Critical review comment #3216123734.

`schemaRootAcceptsObject`'s `not` handler previously rejected any schema
whose `not.type` included `"object"`, regardless of what other
constraints `not` had. That's a false positive for schemas where the
extra constraints NARROW what `not` excludes:

    { "not": { "type": "object", "required": ["error"] } }

excludes only objects with an `error` key — the value `{}` satisfies
this schema fine, but the old check rejected it at parse time with
"--json-schema root must accept object-typed values".

Fix: only reject when `not` is exactly `{type: ...}` with no narrowing
siblings (the unambiguous "every object is excluded" case). When other
keywords are present (`required`, `properties`, `minProperties`,
`enum`, etc.), defer to Ajv at runtime — same best-effort scope as the
sibling `anyOf`/`oneOf`/`allOf` deep-content checks.

3 new test cases pin the fixed accept paths
(`{not:{type:"object",required:[...]}}`,
`{not:{type:"object",properties:...,required:[...]}}`,
`{not:{type:"object",minProperties:1}}`). The existing reject test for
bare `{not:{type:"object"}}` still passes.

* refactor: dedupe structured_output handling per qreview C1/C2/C3

Three Suggestion-level review comments from the latest /qreview pass.

**C1 — main-turn / drain-turn `structured_output` dispatch was
duplicated ~120 lines** (`nonInteractiveCli.ts`)

The two batch-handling sites had near-identical bodies (filter
`structured_output` from the batch when `--json-schema` is active →
iterate with `executeToolCall` → write to `structuredSubmission` on
first valid call → synthesise tool_result events for suppressed
siblings). The only meaningful difference was which `modelOverride`
binding the loop wrote to (session-scoped `modelOverride` for the
main turn vs per-drain-item `itemModelOverride`). Extracted
`processToolCallBatch(batchRequests, setModelOverride)` defined
inside `runNonInteractive`:

- Closes over session-scoped state (`adapter`, `config`,
  `abortController`, `options`, `structuredSubmission`,
  `executeToolCall`, `handleToolError`, `suppressedOutputBody`,
  the progress-handler helpers).
- Takes the `modelOverride` setter as the one call-site-specific
  parameter so the main turn binds to the session var and the drain
  binds to the per-item var.

Main-turn body went from ~120 lines to a single call; drain-turn body
likewise. Net file shrink ~80 lines, no behaviour change. All 42
existing structured-output tests still pass (including
`stops executing remaining tool calls...`,
`tries multiple structured_output calls in the same turn...`,
`synthesises tool_result for suppressed sibling calls...`,
`captures structured_output emitted from a drain-turn (queued notification)`).

**C2 + C3 — `{__redacted: '…'}` placeholder duplicated in two files**
(`telemetry/types.ts` + `core/geminiChat.ts`)

The `ToolCallEvent` constructor (for telemetry surfaces — OTLP /
QwenLogger / ui-telemetry / chat-recording UI event mirror) and
`redactStructuredOutputArgsForRecording` (for the on-disk
chat-recording JSONL) each had a verbatim copy of:

    { __redacted: 'structured_output payload (see stdout result)' }

If the redaction wording (or the `__redacted` key, or the placeholder
text) ever drifted between the two surfaces, the privacy contract
would be subtly broken on one and not the other.

Hoisted to `STRUCTURED_OUTPUT_REDACTED_ARGS` exported from
`packages/core/src/tools/syntheticOutput.ts`, imported in both sites.
The constant carries its rationale in a JSDoc block so future readers
see both call sites at once.

Targeted suite (13 spec files): 961/964 pass — 3 pre-existing skips.
Typecheck clean on both packages.

---------

Co-authored-by: wenshao <wenshao@U-K7F6PQY3-2157.local>
2026-05-11 14:21:55 +08:00
Shaojin Wen
b949ae070c
feat(tools): defer low-frequency built-in tools to reduce initial prompt size (#4022)
Some checks are pending
Qwen Code CI / Lint (push) Waiting to run
Qwen Code CI / Test (macos-latest, Node 22.x) (push) Waiting to run
Qwen Code CI / Test (ubuntu-latest, Node 22.x) (push) Waiting to run
Qwen Code CI / Test (windows-latest, Node 22.x) (push) Waiting to run
Qwen Code CI / Post Coverage Comment (push) Blocked by required conditions
Qwen Code CI / CodeQL (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:docker (push) Waiting to run
E2E Tests / E2E Test (Linux) - sandbox:none (push) Waiting to run
E2E Tests / E2E Test - macOS (push) Waiting to run
* feat(tools): defer low-frequency built-in tools to reduce initial prompt size

Mark Monitor, SendMessage, Skill, TaskStop, TodoWrite, and WebFetch as
shouldDefer=true so their full schemas are excluded from the initial
function-declaration list. The model discovers them on demand via
ToolSearch, aligning with Claude Code's deferral strategy for
infrequently used tools.

* fix(tools): don't defer TodoWrite/Skill — they're required by the system prompt

TodoWrite is mandated as high-frequency by the core system prompt
("Use VERY frequently", "IMPORTANT: Always use the TODO_WRITE tool to
plan and track tasks throughout the conversation"). Deferring it forces
a ToolSearch round-trip before every required call, adding latency and
hurting prompt compliance.

SkillTool's full description carries the dynamically generated
<available_skills> listing and the BLOCKING invocation rules
("invoke the relevant Skill tool BEFORE generating any other response").
The deferred-tool summary truncates this, so the model loses both the
list of skills and the activation contract from the initial prompt.

Keep Monitor / SendMessage / TaskStop / WebFetch deferred — those are
genuinely infrequent and have no system-prompt requirement.
2026-05-11 11:40:13 +08:00
jinye
0a05ea8004
feat(telemetry): inject traceId/spanId into debug log files for OTel correlation (#3847) 2026-05-11 07:42:56 +08:00