Commit graph

740 commits

Author SHA1 Message Date
pomelo
997796f532
refactor(cli): provider-first auth registry with unified install pipeline (#3864)
* fix(cli): refresh static header on model switch

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): simplify api key provider registry

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): split Alibaba auth providers

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* polish(cli): refine auth provider onboarding

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): update OpenRouter free defaults

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): restrict token plan models

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* chore(cli): remove unused third-party providers

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): add regional third-party providers

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): simplify api key provider endpoints

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): split auth dialog flows

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): unify auth around declarative provider config

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

Introduce ProviderConfig abstraction (providerConfig.ts) and a central provider registry (allProviders.ts), replacing the per-flow UI components (AlibabaModelStudioFlow, CustomProviderFlow, OAuthFlow, ThirdPartyProvidersFlow, etc.) with unified ProviderSetupSteps and useProviderSetupFlow.

Key changes:
- Remove setupMethods/apiKey/ directory entirely
- Collapse flow-specific hooks/components into a single generic provider setup flow
- Simplify each provider file to export only a ProviderConfig descriptor
- Add alibabaStandard provider alongside codingPlan/tokenPlan
- Move all baseUrl resolution, install plan building, and settings writing into providerConfig
- Update useAuth, AuthDialog, command handler, and upstream consumers to use the new registry
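A declarative descriptor of this shape might look as follows. This is a minimal sketch: only the `ProviderConfig` and `alibabaStandard` names come from the commit message; the field names and URLs are illustrative assumptions, not the actual API.

```typescript
// Hypothetical sketch of a declarative provider descriptor.
interface ProviderConfig {
  id: string;                                // registry key, e.g. "alibabaStandard"
  label: string;                             // display name in the auth dialog
  resolveBaseUrl: (region?: string) => string;
  buildInstallPlan: (apiKey: string) => { models: string[]; baseUrl: string };
}

const alibabaStandard: ProviderConfig = {
  id: 'alibabaStandard',
  label: 'Alibaba Standard',
  // Illustrative regional endpoint selection; real resolution logic may differ.
  resolveBaseUrl: (region) =>
    region === 'cn'
      ? 'https://dashscope.aliyuncs.com/compatible-mode/v1'
      : 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1',
  buildInstallPlan: (_apiKey) => ({
    models: ['qwen3.6-plus'],
    baseUrl: alibabaStandard.resolveBaseUrl(),
  }),
};
```

With all per-provider behavior in one descriptor, a generic setup flow can iterate a registry of these objects instead of hard-coding one UI component per provider.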

* refactor(cli): simplify provider setup input flow

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): remove toLlmProvider and legacy auth wrappers

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): flatten auth flow files and simplify ProviderSetupSteps props

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): prefill API key from existing env settings in provider setup flow

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): correct third-party provider context windows

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): harden provider auth setup

* feat(cli): support provider modality and context settings

* feat: enable modelsEditable for coding plan

* refactor(cli): auto-derive provider metadata key and state

Move metadataKey and getProviderState from per-provider config to
auto-derived helpers (resolveMetadataKey, resolveProviderState) in
providerConfig.ts. This centralizes version tracking logic and reduces
boilerplate in individual provider definitions.

Add useProviderUpdates hook that detects model template changes across
all version-tracked providers and surfaces update/ignore choices.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

Closes: OSS-1730, OSS-1729

* refactor(cli): namespace provider metadata under providerMetadata key

Introduce PROVIDER_METADATA_NS ('providerMetadata') to avoid top-level
settings key collisions. Provider metadata now lives under
e.g. providerMetadata.coding-plan.version instead of codingPlan.version.

Add migration logic (migrateProviderMetadata) to automatically move
legacy top-level keys (codingPlan, tokenPlan) into the new namespace
on first run.

Update auth handler, useProviderUpdates hook, and all related tests
to use the new namespace structure.
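The move described above can be sketched as a one-shot settings transform. Assumptions: the settings object is a plain key/value record, and only the two legacy keys named in the commit need moving; the real `migrateProviderMetadata` signature may differ.

```typescript
// Minimal sketch: move legacy top-level provider keys under providerMetadata.
const PROVIDER_METADATA_NS = 'providerMetadata';
const LEGACY_KEYS = ['codingPlan', 'tokenPlan'];

function migrateProviderMetadata(
  settings: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...settings };
  const ns: Record<string, unknown> =
    (out[PROVIDER_METADATA_NS] as Record<string, unknown>) ?? {};
  for (const key of LEGACY_KEYS) {
    if (key in out && !(key in ns)) {
      ns[key] = out[key]; // move the legacy entry into the namespace
      delete out[key];    // remove it from the top level
    }
  }
  out[PROVIDER_METADATA_NS] = ns;
  return out;
}
```

Running this once on first launch makes the migration idempotent: already-migrated settings pass through unchanged because the legacy keys are gone.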

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

[skip ci]

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): polish ProviderUpdatePrompt styling and test coverage [skip ci]

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(auth): simplify auth flows around provider abstraction [skip ci]

- Rewrite motivation.md to document provider-centric architecture
- Remove Alibaba Standard API Key and Coding Plan UI flows from handler
- Update status tests to use providerMetadata instead of codingPlan settings
- Streamline API key auth to show docs link only

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(auth): update provider models and refine auth infrastructure

- Bump model versions (qwen3.6-plus, glm-5.1) and add deepseek-v4-pro/flash
  with modalities to Alibaba Standard provider
- Reorder DeepSeek models, add thinking+image/video modalities to v4-pro,
  fix v4-flash context window
- Enhance auth tests with provider metadata setValue assertions
- Switch env key generation from hash-based to URL-based with
  trailing-slash normalization
- Remove deprecated codingPlan section from settings schema

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(i18n): add missing zh-TW translations for token plan and subscription providers

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(auth): improve provider install error recovery and AuthDialog state init

- Restore settings from backup on provider install plan failure
- Fix AuthDialog mainIndex state to null (was 0), preventing stale selection
- Remove ownsModel from customProvider; fall back to id-based filtering
- Change provider migration log from console.error to console.log
- Add sync reminder comments between CLI and VSCode subscription models
- Expand handleApiKeyAuth JSDoc explaining its role as lightweight fallback

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(auth): i18n for step labels, lazy preview JSON, and accurate header label

- Wrap getStepLabel() strings and PROTOCOL_ITEMS in t() for i18n
- Only compute previewJson when on the review step
- Return matched provider's own label in getAuthDisplayType instead
  of hardcoding CODING_PLAN for all managed providers

* fix(auth): address round-3 review blockers

- Fix CI: add missing useProviderUpdates mock in AppContainer.test.tsx
  that caused TypeError breaking React effects (title/height tests)
- Fix half-rollback: snapshot settings + modelProviders before install,
  restore in-memory state (not just disk) on refreshAuth failure
- Fix .orig backup reuse: always create fresh backup (overwrite stale),
  cleanup on success, unlink after restore to prevent data loss
- Fix cross-package key consistency: VS Code settingsWriter now writes
  to providerMetadata namespace matching CLI's new structure
- Fix validateApiKey: remove baseUrl guard so sk-sp- prefix check
  applies to both China and Global Coding Plan endpoints

* fix(cli): stabilize AuthDialog tests for slower CI environments

Increase vi.waitFor timeouts from default 1000ms to 5000ms and replace
unreliable fixed-delay waits with proper render-completion assertions,
preventing flaky failures on Linux/Windows CI runners with Node 22/24.

* fix(core): use id+baseUrl composite key for model identity

Custom provider installs previously used model id alone to determine
ownership, causing the second install to remove the first backend's
model entry when both expose the same model id (e.g. gpt-4o) with
different baseUrls. Use id+baseUrl as the composite identity key
throughout the model registry, ModelDialog, and modelsConfig to
prevent cross-provider model collisions.
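The composite identity can be sketched as a key function. `modelKey` is a hypothetical helper name; the trailing-slash normalization is an assumption borrowed from the env-key change earlier in this PR, not necessarily part of this commit.

```typescript
// Sketch: identify a model by id + baseUrl instead of id alone.
interface ModelEntry {
  id: string;
  baseUrl: string;
}

function modelKey(m: ModelEntry): string {
  // Normalize trailing slashes so URL variants map to the same key.
  return `${m.id}@@${m.baseUrl.replace(/\/+$/, '')}`;
}

// Two backends exposing the same model id no longer collide:
const backendA = { id: 'gpt-4o', baseUrl: 'https://backend-a.example/v1' };
const backendB = { id: 'gpt-4o', baseUrl: 'https://backend-b.example/v1' };
```

Installing `backendB` now removes only entries whose full composite key matches, leaving `backendA`'s `gpt-4o` in place.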

* fix(cli): update ModelDialog tests for composite-key model identity

Add missing getModelsConfig and getActiveRuntimeModelSnapshot mocks,
and update switchModel assertion to expect the new { baseUrl } options
object introduced in 4c4ebb81c.

* fix(cli): skip flaky TUI input tests on all CI environments

Multi-step TUI navigation tests exceed 5s timeout on CI runners
regardless of Node version. Extend skip condition from only Node 20
to all CI environments where input simulation is unreliable.

* fix(cli): improve auth/provider edge cases and UX

- Add fallback to non-free models in OpenRouter OAuth when no free models available
- Validate non-empty models list when building install plan
- Fix auth status to use activeConfig instead of iterating all providers
- Clear API key input when switching auth protocol
- Skip unnecessary auth refresh when applying provider updates

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(cli): update tests for empty model validation and skip auth refresh

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): skip remaining flaky TUI input AuthDialog tests on CI

8558c49bc only converted part of the tests to itWhenTuiInputReliable,
leaving 9 multi-step keyboard-navigation tests still using bare it().
These tests reliably time out on Linux/Windows CI runners where stdin
simulation timing is unpredictable.

Convert all remaining it() → itWhenTuiInputReliable() so CI skips them,
and add a comment block to clearly demarcate the TUI input section.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-08 12:19:28 +08:00
Shaojin Wen
cfbcea1e88
feat: add commit attribution with per-file AI contribution tracking (#3115)
* feat: add commit attribution with per-file AI contribution tracking via git notes

Track character-level AI vs human contributions per file and store
detailed attribution metadata as git notes (refs/notes/ai-attribution)
after each successful git commit. This enables open-source AI disclosure
and enterprise compliance audits without polluting commit messages.

* feat: enhance commit attribution with real AI/human ratios and generated file exclusion

- Replace line-based diff with a prefix/suffix character-level algorithm
  for precise contribution calculation (e.g. "Esc"→"esc" = 1 char, not whole line)
- Compute real AI vs human contribution percentages at commit time by analyzing
  git diff --stat output: humanChars = max(0, diffSize - trackedAiChars)
- Add generated file exclusion (lock files, dist/, .min.js, .d.ts, etc.)
  ported from an existing generatedFiles.ts
- Add file deletion tracking via recordDeletion()
- Update git notes payload format: {aiChars, humanChars, percent} per file
  with real percentages instead of hardcoded 100%
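The prefix/suffix measure described above can be sketched as follows. This is illustrative only; the shipped algorithm may handle edge cases differently. It strips the longest common prefix and suffix and counts only what remains, so `"Esc"` → `"esc"` costs 1 character rather than a whole line.

```typescript
// Sketch of a prefix/suffix character-level change measure.
function changedChars(oldText: string, newText: string): number {
  const max = Math.min(oldText.length, newText.length);
  // Longest common prefix.
  let prefix = 0;
  while (prefix < max && oldText[prefix] === newText[prefix]) prefix++;
  // Longest common suffix that does not overlap the prefix.
  let suffix = 0;
  while (
    suffix < max - prefix &&
    oldText[oldText.length - 1 - suffix] === newText[newText.length - 1 - suffix]
  ) {
    suffix++;
  }
  // Whatever survives the stripping is counted as changed.
  return Math.max(
    newText.length - prefix - suffix,
    oldText.length - prefix - suffix,
  );
}

// changedChars('Esc', 'esc') === 1
```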

* feat: add surface tracking, prompt counting, session persistence, and PR attribution

Align with the full attribution feature set:
- Surface tracking: read QWEN_CODE_ENTRYPOINT env var (cli/ide/api/sdk),
  include surfaceBreakdown in git notes payload
- Prompt counting: incrementPromptCount() hooked into client.ts message
  loop, tracks promptCount/permissionPromptCount/escapeCount
- Session persistence: toSnapshot()/restoreFromSnapshot() for serializing
  attribution state; ChatRecordingService.recordAttributionSnapshot()
  writes to session JSONL; client.ts restores on session resume
- PR attribution: addAttributionToPR() in shell.ts detects `gh pr create`
  and appends "🤖 Generated with Qwen Code (N-shotted by Qwen-Coder)"
- Session baseline: saves content hash on first AI edit of each file
  for precise human/AI contribution detection
- generatePRAttribution() method for programmatic access

* fix: audit fixes — initial commit handling, cron prompt exclusion, failed commit counter preservation

- Handle initial commit (no HEAD~1) by detecting parent with rev-parse
  and falling back to --root for first commit in repo
- Exclude Cron-triggered messages from promptCount (not user-initiated)
- Add commitSucceeded parameter to clearAttributions() so failed/disabled
  commits don't reset the prompts-since-last-commit counter
- Add test for clearAttributions(false) behavior

* fix: cross-platform and correctness fixes from multi-round audit

- Normalize path.relative() to forward slashes for Windows compatibility
- Use diff-tree --root for initial commits (git diff --root is invalid)
- Replace String.replace() with indexOf+slice to avoid $& special patterns
- Fix clearAttributions(false→true) when co-author disabled but commit succeeded
- Use real newlines instead of literal \n in PR attribution text
- Add surface fallback in restoreFromSnapshot for version compatibility
- Fix single-quote regex to not assume bash supports \' escaping
- Case-insensitive directory matching in generated file detection
- Handle renamed file brace notation in parseDiffStat

* fix(attribution): also snapshot on ToolResult turns so resume keeps tool edits

Previously, recordAttributionSnapshot() only ran at the start of UserQuery
and Cron turns — before the tools for that turn had executed. A session
that wrote a file in turn 1 and committed in turn 2 (across process
boundaries via --resume) lost the tracked edit: the last persisted
snapshot was the turn-1-start snapshot (empty fileStates), so on resume
the attribution service restored empty state and no git notes were
attached to the commit.

Move the snapshot call out of the UserQuery/Cron conditional and run it
on every non-Retry turn. ToolResult turns are scheduled right after
tools execute, so their start-of-turn snapshot now captures any edits
those tools made. Retry turns are skipped since the state is unchanged
from the prior turn.

Added unit tests asserting the snapshot fires for ToolResult/UserQuery
turns and skips Retry turns.

Verified end-to-end in a scratch repo: write-file in turn 1 (no commit)
→ exit → --resume → commit in turn 2 → git notes now contain the
recorded file with correct aiChars and promptCount: 2.

* refactor(attribution): merge duplicate retry guard and update stale doc

Collapse the two back-to-back messageType !== Retry blocks in
sendMessageStream into one, and refresh chatRecordingService's
recordAttributionSnapshot doc comment to reflect that snapshots fire
on every non-retry turn (not just after user prompts).

* feat(attribution): split gitCoAuthor into independent commit and pr toggles

Matches the shape used upstream in Claude Code's `attribution.{commit,pr}`
so users can disable the PR body line without losing the commit-message
Co-authored-by trailer (or vice versa). The previous boolean forced both
to move together, which conflated two different surfaces.

- settingsSchema: gitCoAuthor becomes an object with nested commit/pr
  booleans, each `showInDialog: true` so both appear in /settings.
- Config constructor accepts legacy boolean (coerced to { commit: v, pr: v })
  so stored preferences from the pre-split schema carry over.
- shell.ts: attachCommitAttribution and addCoAuthorToGitCommit read .commit;
  addAttributionToPR reads .pr.

* feat(settings): add v3→v4 migration for gitCoAuthor shape change

Legacy gitCoAuthor was a single boolean and shipped ~4 months ago; the
previous commit split it into { commit, pr } sub-toggles. Without a
migration, users who had set gitCoAuthor: false would see the settings
dialog show the default (true) for both sub-toggles — misleading and
likely to flip their preference on the next save because getNestedValue
returns undefined when asked for .commit on a boolean.

- New v3-to-v4 migration expands boolean → { commit: v, pr: v },
  preserves already-object values, resets invalid values to {} with a
  warning.
- SETTINGS_VERSION bumped 3 → 4; existing integration assertions use the
  constant so the next bump is a single-line change.
- Regenerate vscode-ide-companion settings.schema.json to reflect the
  new nested shape.
- Docs: split the single gitCoAuthor row into .commit and .pr.
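The boolean-to-object expansion can be sketched like this. Assumptions: the migration sees the raw stored value and, per the commit, must preserve already-object values (including partial objects) and reset anything else to `{}`; the real migration also emits a warning and lives in the versioned migration chain.

```typescript
// Sketch of the v3 -> v4 gitCoAuthor shape migration.
type GitCoAuthor = { commit?: boolean; pr?: boolean };

function migrateGitCoAuthor(value: unknown): GitCoAuthor {
  if (typeof value === 'boolean') {
    // Legacy single toggle: apply it to both surfaces.
    return { commit: value, pr: value };
  }
  if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
    // Already the new shape; keep partial objects as-is (defaults are
    // filled later, at the Config boundary, not here).
    return value as GitCoAuthor;
  }
  // Invalid (null/array/number/string): reset to empty object.
  return {};
}
```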

* test(migration): cover null/array/number and partial object for v3-to-v4

The migration already treats any non-boolean, non-object value as invalid
(reset to {} with warning), but the existing test only exercised the
string "yes" branch. Add parameterized cases for null, array, and number
so a future regression that accepts these in the valid bucket gets caught.
Also cover partial objects — the migration must not paternalistically
fill defaults; that responsibility lives in normalizeGitCoAuthor at the
Config boundary.

* fix(shell): address PR review for compound commits and PR body escaping

Two critical issues called out in review:

1. attachCommitAttribution treated the final shell exit code as proof
   that `git commit` itself failed. For compound commands like
   `git commit -m "x" && npm test`, the commit can succeed and a later
   step can fail; the previous code then cleared attribution without
   writing the git note. Now we snapshot HEAD before the command (via
   `git rev-parse HEAD` through child_process.execFile, kept independent
   of the mockable ShellExecutionService) and detect commit creation by
   HEAD movement, so attribution lands whenever a new commit was created
   regardless of later steps.

2. addAttributionToPR spliced the configured generator name into the
   user-approved `gh pr create --body "..."` argument verbatim. A name
   containing `"`, `$`, a backtick, or `'` could break the command or be
   evaluated as command substitution. Now we shell-escape the appended
   text per the surrounding quote style before splicing.

Tests cover the new escape paths for both double- and single-quoted
bodies, including a generator name designed to break interpolation
(`$(rm -rf /) "danger" \`eval\``) and one with an apostrophe.
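Escaping "per the surrounding quote style" can be sketched with two helpers (hypothetical names). Inside bash double quotes, `$`, backticks, `"`, and `\` are still live and get backslash-escaped; inside single quotes nothing expands, but an embedded apostrophe must be spliced as the standard `'\''` sequence.

```typescript
// Sketch: escape text for splicing into an existing double-quoted argument.
function escapeForDoubleQuotes(text: string): string {
  return text.replace(/[\\$`"]/g, (c) => '\\' + c);
}

// Sketch: escape text for splicing into an existing single-quoted argument.
function escapeForSingleQuotes(text: string): string {
  return text.replace(/'/g, "'\\''");
}
```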

* fix(attribution): address Copilot review on shell, schema, and totals

Six items called out on PR #3115 by Copilot:

- shell.ts: addAttributionToPR's bash quote escaping doesn't apply to
  cmd.exe / PowerShell, where `\$` and `'\''` aren't honored. Skip the
  PR body rewrite entirely on Windows — losing PR attribution there is
  preferable to corrupting the user-approved `gh pr create` command.

- attributionTrailer.ts + shell.ts call site: buildGitNotesCommand used
  bash-style single-quote escaping on the JSON note, which is broken on
  Windows. Switched to argv form (`{ command, args }`) and routed the
  invocation through child_process.execFile so shell quoting is bypassed
  entirely. Tests updated to assert the argv shape.

- commitAttribution.ts: when a tracked file's aiChars exceeded the diff
  --stat-derived diffSize (long-line edits where diffSize ≈ lines * 40),
  humanChars clamped to 0 but aiChars stayed inflated, leaving aiChars +
  humanChars > the committed change magnitude. Clamp aiChars to diffSize
  so the totals stay consistent.

- shell.ts parseDiffStat: only normalized rename brace notation
  (`{old => new}`). Cross-directory renames emit `old/path => new/path`
  without braces, leaving diffSizes keyed by the full string. Added a
  second normalization step.

- shell.ts: addAttributionToPR docstring claimed `(X% N-shotted)` but
  the implementation only emits `(N-shotted by Generator)`. Updated the
  docstring to match the actual behavior.

- settingsSchema.ts + generator: gitCoAuthor went from boolean to object
  in the V4 migration. The exported JSON Schema now wraps the field in
  `anyOf: [boolean, object]` (via a new `legacyTypes` hint on
  SettingDefinition) so users with a stored boolean don't see a spurious
  IDE warning before their next launch runs the migration.

* fix(attribution): parse binary diffs, source generator from model, sync schema $version

Three follow-up review items from Copilot:

- parseDiffStat now handles git's binary-diff format (`path | Bin A ->
  B bytes`) using the byte delta with a floor of 1. Without this,
  binary edits arrived at the attribution payload as diffSize=0 and
  were silently dropped. Also extracted the parser to a top-level
  exported function so the binary path is unit-testable; added five
  targeted cases (text/binary/rename normalisation/summary skip).

- attachCommitAttribution now passes `this.config.getModel()` into
  generateNotePayload instead of the user-configurable
  `gitCoAuthor.name`. The note's `generator` field reflects which
  model produced the changes — and CommitAttributionService's
  sanitizeModelName() actually has the codename to scrub now.

- generate-settings-schema.ts imports SETTINGS_VERSION instead of
  hardcoding `default: 3`, so a future bump propagates to the emitted
  JSON schema in one place. Regenerated settings.schema.json bumps
  $version's default from 3 to 4 to match the V4 migration.

* fix(attribution): repo-root baseDir, escape co-author trailer, switch to numstat

Three Critical items called out by wenshao:

- attachCommitAttribution was passing config.getTargetDir() as `baseDir`
  to generateNotePayload, but getCommittedFileInfo returns paths
  relative to `git rev-parse --show-toplevel`. When the working
  directory was a subdirectory of the repo, path.relative produced
  `../...` keys that never matched in the AI-attribution lookup,
  silently zeroing out attribution for every file outside getTargetDir.
  StagedFileInfo now carries an optional `repoRoot` (filled in by
  getCommittedFileInfo via `git rev-parse --show-toplevel`) and the
  caller prefers it over the target dir.

- addCoAuthorToGitCommit interpolated `gitCoAuthorSettings.name` and
  `.email` into the rewritten command without escaping. A name
  containing `$()`, backticks, or `"` could be evaluated as command
  substitution under double quotes, or break the user-approved
  `git commit -m "..."` quoting. Now escapes per the surrounding quote
  style with the same helpers addAttributionToPR uses, gates on
  non-Windows for the same shell-quoting reason, and fixes the regex
  to accept `-m"msg"` shorthand (no space) so users who type the
  bash-shorthand form aren't silently denied a trailer.

- parseDiffStat used `git diff --stat` output and approximated each
  line as ~40 chars by parsing a graphical text bar. Replaced with
  `git diff --numstat` which gives unambiguous integer
  additions+deletions per file; the heuristic remains but the parser
  is no longer fooled by the visual `++--` markers. Binary entries
  fall back to a fixed estimate so they still land in the map (rather
  than dropping out as diffSize=0).

Suggestions also addressed: stale duplicate JSDoc on
addCoAuthorToGitCommit removed, misleading `clearAttributions`
comments rewritten to describe what the boolean argument actually
does. Tests cover the new shorthand path, escape behavior, and
numstat parsing (text/binary/rename/malformed).
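A `--numstat` parser matching the description above might look like this. `git diff --numstat` emits tab-separated `added<TAB>deleted<TAB>path` lines, with `-` in both numeric columns for binary files. The `BINARY_FALLBACK` value here is an illustrative assumption; the real parser also normalizes rename notation.

```typescript
// Sketch: map each changed file to added+deleted character-of-change proxy.
const BINARY_FALLBACK = 40; // hypothetical fixed estimate for binary entries

function parseNumstat(output: string): Map<string, number> {
  const sizes = new Map<string, number>();
  for (const line of output.split('\n')) {
    const [added, deleted, ...rest] = line.split('\t');
    if (!rest.length) continue; // skip blank/malformed lines
    const path = rest.join('\t');
    if (added === '-' || deleted === '-') {
      // Binary file: keep it in the map instead of dropping it as size 0.
      sizes.set(path, BINARY_FALLBACK);
    } else {
      sizes.set(path, Number(added) + Number(deleted));
    }
  }
  return sizes;
}
```

Because numstat reports unambiguous integers, no graphical `++--` bar parsing is needed, which is exactly the failure mode the commit removes.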

* fix(shell): shell-aware git-commit detection and apostrophe-escape handling

Two more Critical items called out by wenshao plus the matching Copilot
quote-handling notes:

- attachCommitAttribution and addCoAuthorToGitCommit now go through a
  shell-aware `looksLikeGitCommit` helper instead of a raw
  `\bgit\s+commit\b` regex. The helper splits the command on shell
  separators (`splitCommands`) and checks each segment, so `echo "git
  commit"` no longer triggers attribution clearing or trailer
  injection. The same helper bails on any segment that contains `cd`
  or `git -C <path>`, since either could redirect the commit into a
  different repo than our cwd — writing notes or capturing HEAD there
  would corrupt unrelated state.

- The post-command attribution call now runs regardless of whether the
  shell wrapper aborted. `git commit -m "x" && sleep 999` could move
  HEAD and then time out, leaving the new commit without its
  attribution note while the stale per-file attribution stayed around
  for a later unrelated commit. attachCommitAttribution still gates on
  HEAD movement, so it's a no-op when no commit was actually created.

- The `-m '...'` and `--body '...'` regexes used to match only the
  first quote segment, so a command like `git commit -m 'don'\''t'`
  (bash's standard apostrophe-escape form) would have the trailer
  spliced mid-message and break the command's quoting. The single-
  quote patterns now use a negative lookahead / inner alternation to
  either skip those messages entirely (commit path) or match the
  whole escape-aware body (PR path).

Tests cover the new behavior: quoted "git commit" is left alone, the
`cd && git commit` and `git -C` patterns get no trailer, and the
apostrophe-escape form passes through unchanged for both `-m` and
`--body`.
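The segment-wise check can be sketched as below. Note the `splitCommands` stand-in here is deliberately naive (it splits on `&&`, `||`, and `;` without honoring quotes), whereas the real helper is quote-aware; it still demonstrates why `echo "git commit"` no longer matches — the segment starts with `echo`, not `git`.

```typescript
// Simplified stand-in for the shell-aware splitter (NOT quote-aware).
function splitCommands(command: string): string[] {
  return command
    .split(/&&|\|\||;/)
    .map((s) => s.trim())
    .filter(Boolean);
}

// Sketch: a segment counts only if it *starts* with `git commit`.
function looksLikeGitCommit(command: string): boolean {
  return splitCommands(command).some((seg) => /^git\s+commit\b/.test(seg));
}
```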

* fix(attribution): drop magic 100 fallback for empty deletions

Deleted files with no AI tracking now use diffSize directly. With
numstat as the input source, diffSize is an exact count, and an
empty-file deletion legitimately reports zero — a magic fallback would
only inflate totals.

* fix(shell): broaden git-commit detection, gate background, drop dead helpers

Five Copilot follow-ups:

- looksLikeGitCommit now strips leading env-var assignments
  (`GIT_COMMITTER_DATE=now git commit ...`) and a small allowlist of
  safe wrappers (`sudo`, `command`) before matching. The previous
  exact-prefix match silently skipped trailer injection on common
  real-world commit forms.

- A new looksLikeGhPrCreate (same shell-aware shape) replaces the raw
  `\bgh\s+pr\s+create\b` regex in addAttributionToPR, so quoted text
  like `echo "gh pr create --body \"x\""` no longer triggers a
  command-string rewrite.

- executeBackground refuses to run `git commit` and tells the user to
  re-run foreground. The BackgroundShellRegistry lifecycle has no
  hook for the post-command pre/post-HEAD comparison or git-notes
  write, so allowing the commit through would create the new commit
  without notes and leak stale per-file attribution into the next
  foreground commit.

- recordDeletion was unused outside its own test — removed (and the
  test). When AI-driven deletions need tracking we'll add it with an
  actual integration point rather than carrying dead API surface.

- generatePRAttribution was likewise unused; addAttributionToPR
  builds the trailer string inline. The two formats had already
  diverged. Removed the helper and its tests; reviving from git
  history is straightforward if a future caller needs it.

Tests: env-var and sudo prefixes now produce trailers; quoted
"gh pr create" leaves the command unchanged; existing 81 shell tests
still pass alongside the trimmed 25 commitAttribution tests.

* fix(shell): unified git-commit detection split by intent

Six items called out across CodeQL, Copilot, and wenshao:

- The earlier `looksLikeGitCommit`/`stripCommandPrefix` returned a
  single yes/no and rejected ANY `cd` in the chain. That fixed the
  wrong-repo case but also disabled attribution for `git commit -m
  "x" && cd ..` (commit already landed safely in our cwd; the cd
  came after). It also conflated three distinct decisions onto one
  predicate.

  New `gitCommitContext` returns both `hasCommit` and
  `attributableInCwd`, walking segments in order so that a `cd`
  AFTER the commit doesn't invalidate it. Callers now pick the right
  arm:
  - background-mode refusal uses `hasCommit` (refuses even
    `cd /elsewhere && git commit` since we can't attribute it
    afterward either way)
  - HEAD snapshot, addCoAuthorToGitCommit, and the
    attachCommitAttribution gate use `attributableInCwd`

- Tokenisation switches from a regex while-loop to `shell-quote`'s
  `parse`. Quoted env values like `FOO="a b" git commit` now skip
  correctly (the old `\S*\s+` form would cut after the opening
  quote). Eliminates the CodeQL polynomial-regex alert at the same
  time since the `\S*\s+` pattern is gone.

- attachCommitAttribution now snapshots prompt counters via
  `clearAttributions(true)` whenever a commit lands, even if no
  per-file attributions were tracked. Previously the early-return
  on `hasAttributions() === false` meant `promptCountAtLastCommit`
  never advanced, so a later `gh pr create` reported an inflated
  N-shotted count spanning multiple commits.

Tests: env-var and sudo prefixes still produce trailers; quoted
"git commit" / "gh pr create" leave commands unchanged; cd BEFORE
commit suppresses the rewrite while cd AFTER commit does not; `git
-C <path> commit` is treated as a commit (refused in background)
but not as attributable.

* fix(shell): position-independent git subcommand detection + bash-shell guard

Six review items, two of them critical:

- gitCommitContext was checking fixed-position tokens (`arg1`, `arg3`)
  and missed every git invocation that puts a global flag between
  `git` and the subcommand: `git -c user.email=x@y commit`,
  `git --no-pager commit`, `git -C /p -c k=v commit`, etc. In
  background mode these would slip past the refusal guard; in
  foreground they got no co-author trailer, no git note, and no
  prompt-counter snapshot. New `parseGitInvocation` walks past
  git's global flags (with their values) before reading the
  subcommand, and reports `changesCwd` for `-C` / `--git-dir` /
  `--work-tree`.

- The Windows guard on addCoAuthorToGitCommit and addAttributionToPR
  used `os.platform() === 'win32'`, which incorrectly skipped Windows
  + Git Bash (`getShellConfiguration().shell === 'bash'`). Switched
  both to gate on `getShellConfiguration().shell !== 'bash'` so Git
  Bash users keep the feature.

- attachCommitAttribution was re-parsing `gitCommitContext(command)`
  even though `execute()` already gates on `commitCtx.attributableInCwd`.
  Removed the redundant re-parse — drift between the two checks would
  silently diverge trailer injection from git-notes writes.

- tokeniseSegment (formerly tokeniseProgram) now logs via debugLogger
  on parse failure instead of swallowing silently. Easier to debug
  if shell-quote ever throws on something unusual.

- Added a comment on `cwdShifted` documenting that it's a one-way
  latch — `cd src && cd ..` will still skip attribution. The
  trade-off matches the wrong-repo guard's "better miss than corrupt
  unrelated repos" intent.

- Stale `--stat` reference in the aiChars-clamp comment updated to
  `--numstat` to match the actual git command in
  ShellToolInvocation.getCommittedFileInfo.

Tests: `git -c key=val commit` and `git --no-pager commit` now
produce a trailer; existing 82 shell tests still pass.
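Walking past git's global flags can be sketched as follows, on pre-tokenized input. The flag lists here are illustrative and not exhaustive, and tokenization is assumed to have happened upstream (the real code uses shell-quote); attached `-C/path` is recognized via a prefix check, per the later fix in this PR.

```typescript
// Sketch: find git's subcommand regardless of interleaved global flags.
const VALUE_FLAGS = new Set(['-C', '-c', '--git-dir', '--work-tree', '--exec-path']);
const CWD_FLAGS = new Set(['-C', '--git-dir', '--work-tree']);

function parseGitInvocation(
  tokens: string[],
): { subcommand?: string; changesCwd: boolean } {
  let changesCwd = false;
  let i = 1; // tokens[0] is assumed to be 'git'
  while (i < tokens.length) {
    const tok = tokens[i];
    if (!tok.startsWith('-')) {
      // First non-flag token after the global flags is the subcommand.
      return { subcommand: tok, changesCwd };
    }
    // '-C' both separated and attached ('-C/path') redirects the repo.
    if (CWD_FLAGS.has(tok) || tok.startsWith('-C')) changesCwd = true;
    i += VALUE_FLAGS.has(tok) ? 2 : 1; // skip the flag's value when it takes one
  }
  return { changesCwd };
}
```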

* fix(shell): refuse multi-commit attribution; misc review follow-ups

Five follow-ups from the latest review pass:

- attachCommitAttribution now refuses to write a single git note for
  shell commands that produce more than one commit (e.g.
  `git commit -m a && git commit -m b`). The singleton's per-file
  attribution map can't be partitioned across the individual commits,
  so attaching the combined note to HEAD would mis-attribute earlier
  commits' changes to the last one. Walks `preHead..HEAD` via
  `git rev-list --count`; on multi-commit detection it snapshots the
  prompt counters and bails with a debug warning instead of writing
  a misleading note.

- parseGitInvocation now recognises the attached `-C/path` form
  (e.g. `git -C/path commit -m x`). shell-quote tokenises that as a
  single `-C/path` token which previously fell to the generic flag
  branch with `changesCwd = false`, leaving an out-of-cwd commit
  classified as attributable.

- attachCommitAttribution dropped its unused `command` parameter
  (the caller already gates on `commitCtx.attributableInCwd`, so
  re-parsing was removed earlier; the parameter became dead).

- Added wiring guards in edit.test.ts and write-file.test.ts:
  AI-originated edits/writes hit `CommitAttributionService.recordEdit`,
  `modified_by_user: true` skips, and write-file's distinction
  between a true new file and an overwritten empty file (`null` vs
  `''` old content) is now pinned by `aiCreated` assertions.

* fix(attribution): partial-commit clear, symlink baseDir, gh/git flag handling

Two Critical items, two Copilot, and five wenshao Suggestions:

- attachCommitAttribution's `finally` block used to call
  `clearAttributions()` unconditionally, wiping per-file tracking
  for files the AI had edited but the user excluded from this
  commit. Added `clearAttributedFiles(committedAbsolutePaths)` to
  the service and the call site now passes only the paths that
  actually landed in this commit; entries for un-`add`ed files stay
  pending for a later commit.

- generateNotePayload now runs both `baseDir` and each tracked
  absolute path through `fs.realpathSync` before `path.relative`.
  On macOS in particular `/var` symlinks to `/private/var`, so the
  toplevel from `git rev-parse --show-toplevel` and the absolute
  path captured by edit/write-file tools could diverge — producing
  `../../actual/path` keys in the lookup that never matched and
  silently zeroed all per-file AI attribution.

- tokeniseSegment now consumes value-taking sudo flags (`-u`,
  `-g`, `-h`, `-D`, `-r`, `-t`, `-C`, plus the long forms). Without
  this, `sudo -u other git commit` left `other` standing in for
  the program name and skipped the trailer entirely.

- A duplicate JSDoc block above `countCommitsAfter` (a leftover
  from the earlier extraction of `getGitHead`) was removed; both
  helpers now have one accurate comment each.

- attachCommitAttribution's multi-commit guard now also runs when
  `preHead === null` (brand-new repo), via `git rev-list --count
  HEAD`. A compound `git init && git commit -m a && git commit -m b`
  no longer slips through and mis-attributes combined data to the
  last commit.

- addCoAuthorToGitCommit's `-m` matching switched to `matchAll` and
  takes the LAST match. `git commit -m "title" -m "body"` puts the
  trailer at the end of the body so `git interpret-trailers`
  recognises it; the previous first-match behaviour stuffed the
  trailer in the title where git treats it as plain message text.

- addAttributionToPR's `--body` regex accepts both space and
  `=` separators (`--body "..."` and `--body="..."`); the `=` form
  is common with gh.

- New `parseGhInvocation` walks past gh's global flags
  (`--repo`, `-R`, `--hostname`) so `gh --repo owner/repo pr
  create ...` is detected. The earlier fixed-position check at
  tokens[1]/tokens[2] missed any command with a global flag.

- getCommittedFileInfo now fans out the two `rev-parse` calls and
  the three diff calls with `Promise.all`. They're independent and
  serialising them was paying spawn latency 5× per commit.
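
The last-match `-m` selection can be sketched as follows (simplified to the double-quoted form; the real addCoAuthorToGitCommit also handles single quotes, escapes, and the splice-back):

```typescript
// Sketch: matchAll over the whole command, then take the LAST -m match,
// so the trailer lands in the body git actually honours.
function lastDoubleQuotedM(command: string): RegExpMatchArray | null {
  const matches = [...command.matchAll(/-m\s+"((?:[^"\\]|\\.)*)"/g)];
  return matches.length > 0 ? matches[matches.length - 1] : null;
}
```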

Tests: sudo with `-u user`, multi `-m`, `gh --repo owner/repo`,
`--body="..."`, plus the existing 84 shell tests still pass.

* fix(attribution): canonicalize file paths centrally in CommitAttributionService

Two related Copilot follow-ups:

- recordEdit/getFileAttribution/clearAttributedFiles now run input
  paths through fs.realpathSync before storing/looking up, so a
  symlinked path (e.g. macOS /var ↔ /private/var) resolves to the
  same key regardless of which form the caller passes. Previously
  edit.ts/write-file.ts handed in non-realpath'd absolute paths
  while generateNotePayload tried to realpath only inside its
  lookup loop, leaving partial-clear and clear-on-finally paths
  unable to find entries when the forms diverged.

- restoreFromSnapshot also canonicalises on the way in so a
  session resumed from a pre-fix snapshot (where keys may not
  have been canonical) ends up with the same shape as newly
  recorded entries — otherwise a single file could end up with
  two parallel records.

- generateNotePayload's lookup loop dropped its per-entry realpath
  call (now redundant since keys are canonical at write time),
  keeping only the realpath of `baseDir` (which still comes from
  `git rev-parse --show-toplevel` and may be a symlink).

- Updated `clearAttributedFiles` doc to describe the new semantics:
  callers can pass either the resolved repo-relative path or an
  already-canonical absolute path, and either will match.
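
A minimal sketch of the canonicalise-at-write idea, assuming a hypothetical `canonicalKey` helper (the real service does this inside recordEdit/getFileAttribution/clearAttributedFiles):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// realpathSync resolves symlinks (e.g. macOS /var -> /private/var) so both
// spellings of a path collapse to one map key at write time.
function canonicalKey(absolutePath: string): string {
  try {
    return fs.realpathSync(absolutePath);
  } catch {
    // Path may not exist (e.g. a deleted file); fall back to a normalised
    // but unresolved form rather than throwing.
    return path.resolve(absolutePath);
  }
}
```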

* fix(attribution): canonicalize-from-root cleanup; fix mixed-quote -m / gh -R=

Five review items, one Critical:

- attachCommitAttribution now canonicalises via the repo *root* (one
  realpath call) and resolves committed paths against that canonical
  root, rather than per-leaf realpath inside clearAttributedFiles.
  At cleanup time the leaf for a just-deleted file no longer exists,
  so per-leaf fs.realpathSync would fail and silently fall back to a
  non-canonical path that misses the stored canonical key — leaving
  stale attributions for deleted files.
  clearAttributedFiles drops its internal realpath and now documents
  the canonical-paths-required precondition explicitly.

- addCoAuthorToGitCommit picks the LAST `-m` regardless of quote
  style. Previously `doubleMatch ?? singleMatch` always preferred
  the last double-quoted match, so `git commit -m "Title" -m
  'Body'` injected the trailer into the title where git
  interpret-trailers would silently ignore it. Now compares match
  indices, and the escape helper follows the actually-selected
  match's quote style.

- parseGhInvocation handles `-R=value` (the equals form of the
  short `--repo` alias). `--repo=...` and `--hostname=...` were
  already covered; `-R=...` previously fell through to the generic
  flag branch and skipped the value.

- New tests for the symlink-aware canonicalisation: macOS-style
  `/var` ↔ `/private/var` mapping is mocked via vi.mock on
  node:fs, with cases for record-then-look-up under either form,
  generateNotePayload with a symlinked baseDir, partial clear via
  the canonical-root-derived path (deleted leaf), and snapshot
  restore canonicalisation.

- Doc-only: integration-test header comments updated from
  "V1 -> V2 -> V3" / "migration to V3" to reflect the actual V4
  end state (assertions already used the literal `4`).

* fix(shell): scope -m rewrite to commit segment, reject nested matches

Two Critical findings on addCoAuthorToGitCommit, plus a Copilot
maintainability nit:

- The `-m` regex used to scan the whole compound command, so
  `git commit -m "fix" && git tag -a v1 -m "release"` would target
  the LATER tag annotation (last -m wins) and splice the trailer
  there instead of the commit message. The rewrite now scopes to
  the actual `git commit` segment via a new
  findAttributableCommitSegment(): same shell-aware walk
  gitCommitContext does, but returning the segment's character
  range so the regex can be run on a slice and spliced back into
  the original command.

- Within the segment, a literal `-m '...'` *inside* a quoted body
  was treated as a real later -m. For
  `git commit -m "docs mention -m 'flag' for completeness"`, the
  inner single-quoted -m sits at a higher index than the real
  outer -m, and the previous index comparison would have it win —
  splicing the trailer mid-message and corrupting the quoting.
  The new code checks whether the candidate is nested inside the
  other quote-style's range (start/end containment) and prefers
  the outer match when so.

- Hoisted four constant Sets (sudo flag list, git global flags
  taking values, git global flags shifting cwd, gh global flags)
  out of the per-call scope to module constants. Functional
  no-op, but keeps the parsing helpers easier to read and avoids
  re-allocating the Sets on every command.
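
The nested-match rejection can be sketched over plain start/end ranges (hypothetical `Span` shape; the real code compares RegExp match indices from the two quote-style patterns):

```typescript
interface Span { start: number; end: number; }

// A candidate -m match sitting strictly inside the other quote style's
// matched range is a literal inside a message body, not a real flag.
function isNestedInside(candidate: Span, other: Span): boolean {
  return candidate.start > other.start && candidate.end < other.end;
}

function pickOuter(a: Span, b: Span): Span {
  if (isNestedInside(a, b)) return b;
  if (isNestedInside(b, a)) return a;
  // Neither nested: last one wins, mirroring the last -m rule.
  return a.start > b.start ? a : b;
}
```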

Two regression tests added for the cases above:
- inner `-m '...'` inside the outer message body is preserved
  literally and the trailer lands after the body
- `git tag -a v1 -m "release notes"` after a real
  `git commit -m "fix"` is left untouched, with the trailer
  appended to "fix" only

* fix(attribution): cd-leak, numstat partial failure, $() bailout, gh pr new alias

Five Critical/Suggestion items:

- `cd subdir && git commit` (or any non-attributable commit chain
  whose HEAD movement still happens in our cwd, e.g. cd into a
  subdirectory of the same repo) used to skip attribution AND fail
  to clear pending per-file entries. Those entries then leaked into
  the next foreground commit, inflating its AI percentage. New
  `else if (commitCtx.hasCommit)` branch in execute() compares pre-
  and post-HEAD; if HEAD moved we drop the per-file state. preHead
  is now snapshotted whenever ANY commit was attempted, not only
  attributable ones.

- getCommittedFileInfo's three diff calls run in `Promise.all`. If
  `--numstat` failed while `--name-only` succeeded, every file's
  diffSize would be 0 and generateNotePayload would clamp aiChars
  to 0 — emitting a structurally valid note with all-zero AI
  percentages. Detect the partial-failure shape (files non-empty,
  diffSizes empty) and return empty so no note is written.

- addCoAuthorToGitCommit and addAttributionToPR now bail when the
  captured `-m`/`--body` value contains `$(`. The tool description
  recommends `git commit -m "$(cat <<'EOF' ... EOF)"` for
  multi-line messages, but the regex's `(?:[^"\\]|\\.)*` body group
  stops at the first interior `"` from a nested shell token —
  splicing the trailer there breaks the command before it reaches
  the executor.

- looksLikeGhPrCreate now accepts `gh pr new` as well — it's a
  documented alias for `gh pr create` and was silently skipped.

- Removed `incrementPermissionPromptCount` / `incrementEscapeCount`
  and their getters: they had no production callers, so the backing
  fields just round-tripped through snapshots as 0. The four
  snapshot fields are now optional so pre-fix snapshots that carry
  non-zero values still load cleanly and just get ignored.

Three regression tests added: heredoc-style `-m "$(cat <<EOF...)"`
preserved literally, heredoc-style `--body` likewise, `gh pr new
--body "..."` rewritten with attribution.

* fix(attribution): --amend, --message/-b aliases, .d.ts over-exclusion

Four Copilot follow-ups, three of them user-visible coverage gaps:

- `git commit --amend` was diffing `HEAD~1..HEAD` for attribution,
  which spans the entire amended commit (parent → amended) rather
  than the actual amend delta. A message-only amend would emit a
  note attributing every file in the original commit to this
  amend. New `isAmendCommit` helper detects the flag and
  getCommittedFileInfo switches to `HEAD@{1}..HEAD` (the pre-amend
  HEAD vs the amended HEAD); if the reflog is GC'd we bail with a
  warning rather than over-attribute.

- `git commit --message "..."` and `--message="..."` were silently
  skipped because the regex only recognised the short `-m` form.
  The flag prefix now matches both alternatives via
  `(?:-[a-zA-Z]*m|--message)\s*=?\s*` (non-capturing inner group
  so the existing `[full, prefix, body]` destructure still works).

- `gh pr create -b "..."` (the short alias for `--body`) was the
  same gap on the PR side; `(?:--body|-b)[\s=]+` now covers both
  forms.

- `.d.ts` was an over-broad blanket exclusion in
  EXCLUDED_EXTENSIONS — declaration files are commonly authored
  (ambient declarations, asset shims like `*.d.ts` for
  `import './x.svg'`); the repo even contains
  `packages/vscode-ide-companion/src/assets.d.ts`. Removed `.d.ts`
  from the extensions Set and adjusted the test to assert the new
  behavior. Auto-generated `.d.ts` (e.g. `tsc --declaration`
  output) still gets caught by the build-directory rules.
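
The widened flag prefix can be sketched as follows (simplified; the real pattern also covers single quotes, escapes, and commit-segment scoping):

```typescript
// Sketch: the (?:-[a-zA-Z]*m|--message) prefix accepts -m, clustered forms
// like -am, and the long --message flag with either a space or `=`.
const MESSAGE_FLAG = /(?:-[a-zA-Z]*m|--message)\s*=?\s*"((?:[^"\\]|\\.)*)"/g;

function lastMessageBody(command: string): string | null {
  const matches = [...command.matchAll(MESSAGE_FLAG)];
  return matches.length > 0 ? matches[matches.length - 1][1] : null;
}
```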

Tests added: `--amend` plumbing covered by the new branch in
getCommittedFileInfo (no targeted unit test — the diff invocation
goes through ShellExecutionService and is exercised by the existing
post-command path); `--message`, `--message="..."`, `-b`, and `-b="..."` all
have positive trailer-injection assertions; `.d.ts` test split into
"hand-authored" (negative) and "in dist" (positive).

* fix(attribution): cd-subdir, scope --body, multi-commit count guard, /clear reset

Four bugs flagged this round:

- gitCommitContext / findAttributableCommitSegment used a blanket
  "any cd shifts cwd" gate, breaking the very common
  `cd subdir && git commit -m "..."` flow even though the commit
  lands in the same repo. New `cdTargetMayChangeRepo` heuristic:
  treat relative paths that don't escape upward (no leading `..`,
  no absolute path, no `~`/`$VAR` expansion, no bare `cd`/`cd -`)
  as in-repo and let attribution proceed. Conservative on anything
  it can't statically verify.

- addAttributionToPR was running the `--body`/`-b` regex against
  the FULL compound command string. In
  `curl -b "session=abc" && gh pr create --body "summary"` the
  regex would match curl's `-b` cookie flag and inject attribution
  into the cookie value, corrupting the curl call. Added
  `findGhPrCreateSegment` (analog of `findAttributableCommitSegment`)
  and scoped the body regex to that segment, splicing back into
  the original command via offsetting the in-segment match index.

- The multi-commit guard treated `runGitCount === 0` as "single
  commit" and bypassed itself. After `commitCreated === true`, a
  count of 0 is impossible in normal operation — it means
  rev-list errored or timed out. Now we bail on `commitCount !== 1`
  with a tailored message: anything other than exactly 1 commit
  is suspicious and refuses the note.

- The CommitAttributionService singleton survives across
  `Config.startNewSession()` (the `/clear` and resume paths). New
  `CommitAttributionService.resetInstance()` call alongside the
  existing chat-recording / file-cache resets in startNewSession
  prevents pending attributions from a prior session attaching to
  a commit in the new one.
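
The `cdTargetMayChangeRepo` heuristic described above can be sketched as follows (simplified; a later follow-up in this series also rejects embedded `..` traversal):

```typescript
// Sketch: conservative in-repo check for a `cd` target. Anything that can't
// be statically verified as staying inside the repo counts as escaping.
function cdTargetMayChangeRepo(target: string | undefined): boolean {
  if (!target || target === '-') return true;            // bare `cd` / `cd -`
  if (target.startsWith('/') || target.startsWith('~')) return true;
  if (target.includes('$')) return true;                 // $VAR expansion
  if (target === '..' || target.startsWith('../')) return true;
  return false;                                          // relative, non-escaping
}
```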

Three regression tests added: `cd src && git commit` produces a
trailer (in-repo cd), `cd .. && git commit` does not (could escape
repo root), and `curl -b "..." && gh pr create --body "..."` leaves
curl's cookie value untouched while attribution lands in gh's body.

* fix(attribution): cd embedded .., env wrapper, Windows ARG_MAX, segment-locator warn

Four review items, all small but real:

- cdTargetMayChangeRepo missed embedded `..` traversal — `cd
  foo/../../escape` and similar would slip past the leading-`..`
  check and be treated as in-repo. Added an `includes('/..')` /
  `includes('\\..')` check (catches POSIX and Windows separators
  without false-positiving on `..` chars inside ordinary names,
  which only escape when followed by a separator).

- tokeniseSegment now recognises `env` as a safe wrapper alongside
  `sudo`/`command`, so `env GIT_COMMITTER_DATE=now git commit ...`
  resolves to `git`. After the wrapper detection we also skip any
  `KEY=VALUE` argv entries (env's own argument syntax for setting
  vars before the program).

- buildGitNotesCommand's MAX_NOTE_BYTES dropped from 128 KB to
  30 KB. Windows' CreateProcess lpCommandLine is capped around
  32,768 UTF-16 chars including the executable path and other argv
  entries; a 128 KB note would still fail to spawn even though
  the function returned a command instead of null. 30 KB leaves
  ~2 KB of headroom for the rest of the argv on Windows and is
  larger than any real commit's metadata in practice.

- findAttributableCommitSegment / findGhPrCreateSegment now log a
  debugLogger.warn when `command.indexOf(sub, cursor)` returns -1
  — splitCommands strips line continuations (`\<newline>`), so a
  multi-line command can have the trimmed segment text fail to
  match its source. Previously the segment was silently skipped
  with no signal; the warn makes the failure observable when
  QWEN_DEBUG_LOG_FILE is set.

Two regression tests added: `cd foo/../../escape && git commit`
gets no trailer (embedded-`..` heuristic catches it), and
`env GIT_COMMITTER_DATE=now git commit` does (env wrapper skipped).

* fix(attribution): scope isAmendCommit to attributable segment only

`git -C ../other commit --amend && git commit -m x` would previously
flag the second (fresh) commit as an amend, causing
attachCommitAttribution to diff `HEAD@{1}..HEAD` against an unrelated
reflog entry. Mirror findAttributableCommitSegment's cd/cwd tracking
so only the first commit segment that runs in the original cwd
determines amend status.

* fix(attribution): last-match --body, symlink leaf canonicalisation, scoped prompt count

- addAttributionToPR: use matchAll/last-match for `--body`/`-b` so the
  trailer lands in the gh-honoured (final) body when multiple flags are
  present. Mirrors addCoAuthorToGitCommit. Adds regression test.
- attachCommitAttribution: also fs.realpathSync the per-file resolved
  path (not just the repo root) so files behind intermediate symlinks
  are matched against canonical keys recordEdit stored, instead of
  silently zeroing attribution and leaking entries past commit.
- incrementPromptCount: scope to SendMessageType.UserQuery — ToolResult,
  Retry, Hook, Cron, Notification are model/background re-entries of
  the same logical turn. Tracking them all inflated the "N-shotted"
  trailer (one user message could become 10-shotted with 10 tool calls).
- AttributionSnapshot: add `version: 1` field; restoreFromSnapshot now
  refuses incompatible versions and validates per-field types so a
  partially-written snapshot can't seed `Math.min(undefined, n) === NaN`
  into git-notes payloads.
- Drop unused permission/escape counters (declared, persisted, never
  read or incremented) — fields, snapshot tolerance, and clear-method
  bookkeeping all removed; AttributionSnapshot interface simplifies.
- isGeneratedFile: switch directory rule from substring `.includes('/dist/')`
  to segment-boundary check (split on `/`) so project dirs like
  `my-dist/` or `xbuild/` don't match. `.lock` removed from the blanket
  extension exclusion — well-known lockfiles already covered by
  EXCLUDED_FILENAMES; hand-authored `.lock` files (e.g. `.terraform.lock.hcl`)
  now stay attributable.
- getClientSurface: document `QWEN_CODE_ENTRYPOINT` as the embedder
  override hook so the always-`'cli'` default is intentional.

* fix(attribution): skip values for env -u NAME and -S string

`env`'s value-taking flags (`-u`/`--unset`, `-S`/`--split-string`) were
not in the wrapper's flag-skip allowlist, so `env -u FOO git commit ...`
left FOO as the next token and the parser treated it as the program —
masking the real `git commit` from attribution detection. Add an
ENV_FLAGS_WITH_VALUE table mirroring the sudo allowlist. Regression
test added.
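
The wrapper handling across these changes can be sketched as follows (hypothetical trimmed flag tables; the real tokeniseSegment also covers the long-form sudo flags and attached values):

```typescript
const WRAPPERS = new Set(['sudo', 'command', 'env']);
const SUDO_FLAGS_WITH_VALUE = new Set(['-u', '-g', '-h', '-D', '-r', '-t', '-C']);
const ENV_FLAGS_WITH_VALUE = new Set(['-u', '--unset', '-S', '--split-string']);

// Sketch: peel safe wrappers and their flags until the real program appears.
function resolveProgram(tokens: string[]): string | undefined {
  let i = 0;
  while (i < tokens.length && WRAPPERS.has(tokens[i])) {
    const wrapper = tokens[i++];
    while (i < tokens.length) {
      const t = tokens[i];
      if (wrapper === 'sudo' && SUDO_FLAGS_WITH_VALUE.has(t)) { i += 2; continue; }
      if (wrapper === 'env' && ENV_FLAGS_WITH_VALUE.has(t)) { i += 2; continue; }
      if (t.startsWith('-')) { i += 1; continue; }                  // valueless flag
      if (wrapper === 'env' && t.includes('=')) { i += 1; continue; } // KEY=VALUE
      break;
    }
  }
  return tokens[i];
}
```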

* fix(attribution): submodule leak, PR body nesting, shallow-clone bail, schema default

- attachCommitAttribution: when HEAD didn't move in our cwd, leave
  pending attributions alone instead of dropping them. The case can be
  a failed commit, `git reset HEAD~1`, OR `cd submodule && git commit`
  (inner repo's HEAD moves, ours doesn't). Dropping was overly
  aggressive and silently lost outer-repo edits in the submodule case.
- addAttributionToPR: mirror addCoAuthorToGitCommit's nested-match
  rejection so `gh pr create --body "docs mention -b 'flag'"` picks the
  outer `--body`, not the inner literal `-b`. Splicing into the inner
  match would corrupt the body. Regression test added.
- getCommittedFileInfo: when `rev-parse --verify HEAD~1` fails, also
  check `rev-list --count HEAD === 1` to confirm HEAD is the true
  root commit. In a shallow clone, HEAD~1 is unreadable but the commit
  has a parent recorded — falling back to `diff-tree --root` would
  diff against the empty tree and over-attribute the entire commit.
  Bail with a debug warning instead.
- generate-settings-schema: lift `default` (and `description`) out of
  the inner `anyOf[N]` schema to the outer level when wrapping with
  `legacyTypes`. Most JSON-schema-driven editors only surface
  top-level defaults; burying the default under `anyOf` lost the
  "enabled by default" hint. Also extend the default filter to
  publish non-empty plain objects (so `gitCoAuthor`'s default can
  appear). gitCoAuthor's source default updated to the runtime shape
  `{commit: true, pr: true}` to match `normalizeGitCoAuthor`.

* fix(attribution): drop unsafe full-clear, tag analysis-failure with null

ju1p (Copilot): the `else if (commitCtx.hasCommit)` branch fully
cleared the singleton on `cd /abs/same-repo/subdir && git commit`
(or `git -C . commit`), losing pending AI edits the user hadn't
staged. We can't tell which files were in the commit from this
branch, and the next attributable commit's partial-clear handles
cleanup correctly anyway. Drop the branch entirely.

ju2D (Copilot): `getCommittedFileInfo` returned the same empty
StagedFileInfo for both "could not analyze" (shallow clone, --amend
without reflog, --numstat partial failure, exception) and
"intentionally empty" (--allow-empty). The caller couldn't tell them
apart, so the partial clear became a no-op on analysis failure and
the just-committed AI edits leaked to the next commit. Switch the
return type to `StagedFileInfo | null` and have the caller treat
null as "fall back to full clear" while empty StagedFileInfo
(--allow-empty) leaves attributions intact for the next real commit.

* fix(attribution): dedup snapshot writes, cap excludedGenerated, doc commit toggle scope

rsf- (Copilot): recordAttributionSnapshot wrote a full snapshot to
the JSONL on every non-retry turn, even when the tracked state was
unchanged. Long-running sessions accumulated thousands of identical
snapshot copies, inflating session size and slowing /resume hydrate.
Dedup by JSON-equality with the prior write — first write always
goes through, identical successors are no-ops.

rsgo (Copilot): excludedGenerated path list was unbounded. A commit
churning thousands of generated artifacts (large dist/ rebuild)
could push the JSON note past MAX_NOTE_BYTES (30KB) and lose
attribution for the real source files in the same commit. Cap the
serialized sample at MAX_EXCLUDED_GENERATED_SAMPLE (50) and add
excludedGeneratedCount for the true total.

rsg9 + rshM (Copilot): the gitCoAuthor.commit description claimed
the toggle only controlled the Co-authored-by trailer, but
attachCommitAttribution also gates the per-file git-notes payload
on the same flag. Update both the schema description and the
settings.md table to mention both effects so disabling the option
isn't a silent surprise.

* fix(attribution): depth-1 shallow detection, snapshot dedup post-rewind/post-failure

sfGz (Copilot): rev-list --count HEAD === 1 cannot distinguish a
true root commit from a depth-1 shallow clone — both report 1
because rev-list only walks locally available objects. Switch to
git log -1 --pretty=%P HEAD which reads the parent SHA directly
from commit metadata: empty means a real root, non-empty means a
parent is recorded (whether or not its object is local). The
shallow-clone bail is now reliable.
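
The parent check reduces to a one-line predicate over the `%P` output (hypothetical helper; the real code runs git through ShellExecutionService):

```typescript
// Sketch: `git log -1 --pretty=%P HEAD` prints the parent SHA(s) straight
// from commit metadata, even when the parent object is absent locally
// (depth-1 shallow clone). Empty output means a true root commit.
function isRootCommit(parentField: string): boolean {
  return parentField.trim().length === 0;
}
```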

sfIm (Copilot): the dedup key persisted across rewindRecording, so
the previous snapshot living on the now-abandoned branch would
match the next post-rewind snapshot and silently skip the write,
leaving /resume on the rewound session with no attribution state.
Reset lastAttributionSnapshotJson when rewindRecording fires.

sfJE (Copilot): dedup key was committed before the async write
settled. A transient write failure would update the key, then
permanently suppress all future identical snapshots even though
nothing was ever persisted. Switch to optimistic-set then rollback
on appendRecord rejection — synchronous identical calls dedup
cleanly, but a failed write clears the key so the next identical
snapshot retries. appendRecord now returns the per-record write
promise (writeChain still has its swallow-catch for chain liveness)
so callers needing per-write success can react to it. Tests added
in chatRecordingService.test.ts for both rewind-reset and
rollback-on-failure paths.
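
The optimistic-set-then-rollback shape can be sketched as follows (hypothetical `SnapshotWriter`; the real dedup key lives on chatRecordingService and the write goes through appendRecord's JSONL chain):

```typescript
class SnapshotWriter {
  private lastJson: string | null = null;

  constructor(private append: (json: string) => Promise<void>) {}

  record(snapshot: object): Promise<void> | undefined {
    const json = JSON.stringify(snapshot);
    if (json === this.lastJson) return undefined; // identical: dedup, no write
    this.lastJson = json;                         // optimistic set
    return this.append(json).catch((err) => {
      // Write never landed: clear the key so the next identical snapshot
      // retries instead of being suppressed forever.
      this.lastJson = null;
      throw err;
    });
  }
}
```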

* fix(attribution): preHead race, regex apostrophe-escape, surface failures, dead code

t2G0 (deepseek-v4-pro): addCoAuthorToGitCommit single-quote regex now
matches the bash close-escape-reopen apostrophe form using
((?:[^']|'\\'')*) — the same pattern bodySinglePattern uses for
gh pr create. Input like git commit -m 'don'\''t' was previously
silently un-rewritten because the negative lookahead bailed; the
trailer now lands at the FINAL closing quote. Test updated.

tMBP (gpt-5.5): preHead capture switched from concurrent async
getGitHead to a synchronous getGitHeadSync (execFileSync) BEFORE
ShellExecutionService.execute spawns the user's command. A fast
hot-cached git commit could move HEAD before the async rev-parse
resolved, leaving preHead === postHead and silently skipping the
attribution note. Trade ~10–50 ms event-loop block per
commit-shaped command for correctness of the post-command HEAD
comparison.

t2Gv (deepseek-v4-pro): attribution write failures (note exec
non-zero, payload too large, diff-analysis exception, shallow
clone / amend-without-reflog) are now surfaced on the shell tool's
returnDisplay AND llmContent so the user and agent both see when
their commit succeeded but the per-file git note didn't land.
attachCommitAttribution now returns string | null (warning text or
null for intentional skips like no-tracked-edits). Co-authored-by
trailer is unaffected — only the note is gated by these failures.

t2Gy (deepseek-v4-pro): committedAbsolutePaths now matches against
the canonical keys already stored in fileAttributions
(matchCommittedFiles iterates by relative path against the
canonical repo root) instead of re-resolving each diff path
on the fly. realpathSync(resolved) failed for deleted files and
didn't follow intermediate symlinks, leaving stale per-file
attribution alive past commit and inflating AI percentages on
subsequent commits.

t2HI (deepseek-v4-pro): removed dead sessionBaselines /
FileBaseline / contentHash / computeContentHash infrastructure
(~40 lines). The fields were written, persisted, and restored but
never read for any computation or decision. AttributionSnapshot
schema stays at version 1 — restore tolerates pre-fix snapshots
that carried the now-ignored baselines field.

t2HM (deepseek-v4-pro): extracted the duplicated lastMatch helper
in addCoAuthorToGitCommit and addAttributionToPR into a single
module-level lastMatchOf so future fixes can't be applied to only
one copy.

* chore(schema): regenerate settings.schema.json to match gitCoAuthor.commit description

The settingsSchema.ts source for `gitCoAuthor.commit.description` was
updated in 3c0e3293b but the JSON schema only picked up the OUTER
description rewrite and missed this inner property's. The Lint check
("Check settings schema is up-to-date") fails on that drift; this
commit re-runs `npm run generate:settings-schema` to sync them.

* fix(attribution): preserve unstaged AI edits across cleanup branches

uxU5 + uxVQ + uxUO (Copilot): every cleanup branch in
attachCommitAttribution that called clearAttributions(true) was
wholesale-erasing pending AI edits for files the user never staged
in this commit. Reviewer scenarios:
- multi-commit chain (`commit a && commit b`) bails out without
  writing a note, but unstaged edits to file Z (touched by neither
  commit) get cleared along with the chain's committed files.
- attribution toggle off: same — toggling the flag wipes pending
  unstaged work.
- analysis failure (shallow clone, --amend without reflog, partial
  diff failure): the finally-block fallback wholesale-cleared
  every pending file, consuming unrelated AI edits.
- 0%-AI commit: when no file in the commit was AI-touched,
  generateNotePayload was emitting an "0% AI" note attached to a
  commit that legitimately had no AI involvement — actively
  misleading metadata.

Add `noteCommitWithoutClearing()` to the service: snapshots the
prompt counter as the new "at last commit" but leaves the per-file
map alone. Use it in the multi-commit, no-tracked-edits,
toggle-off, and analysis-failure paths. The committed-files
partial-clear (clearAttributedFiles) still runs in the success
path. The 0%-AI no-match case now skips the note write entirely.

* fix(attribution): runGit null-on-failure, versionless v3→v4 migration

z54M (Copilot): runGit returned '' on both successful-empty-output
and silent failure, so a `--name-only` that errored mid-way through
the diff fan-out aliased to a real `--allow-empty` commit. The
empty-commit branch then preserved pending attributions, leaving
the just-committed file's tracked AI edit alive to re-attribute on
the next commit. Switch runGit to `Promise<string | null>`,
distinguishing exit code 0 (any output, including '') from non-zero
(null). The diff-stage fan-out and ancillary probes now treat null
as analysis failure and bail with `return null` instead of falling
into the empty-commit path.

z539 (Copilot): the v3→v4 `shouldMigrate` only fired on
`$version === 3`. A versionless settings file carrying the legacy
`general.gitCoAuthor: false` boolean would skip every migration
(gitCoAuthor isn't in V1_INDICATOR_KEYS — it post-dates V2), get
its `$version` normalized to 4 by the loader, and leave the
boolean in place. The settings dialog then reads the V4
`{commit, pr}` shape, sees missing keys, defaults both to true, and
silently overwrites the user's opt-out on the next save. Also fire
when `$version` is absent AND the value at `general.gitCoAuthor`
is a boolean. Tests cover the new path and confirm the existing
versioned/object-shape paths are untouched.

* fix(attribution): toggle-off partial clear, normalizeGitCoAuthor type-check, terraform lockfile

0oAK (Copilot): the gitCoAuthor.commit toggle-off branch returned
before computing the committed file set, leaving the just-committed
files' tracked AI work in the singleton. Re-enabling the toggle and
committing the same file again would re-attribute earlier (already-
committed) AI edits to the new commit. Move the toggle gate AFTER
matchCommittedFiles so the finally block does a proper partial clear
of the just-committed files even when the note write is skipped.

0oAg (Copilot): normalizeGitCoAuthor copied value?.commit / value?.pr
without type-checking. settings.json is hand-editable; a stored
`{ commit: "false" }` reached runtime as a truthy string and behaved
as if attribution were enabled. Add a per-field bool coercion that
falls back to the schema default (true) for any non-boolean,
matching what the dialog and IDE schema already imply. Tests cover
the string / number / null cases.

0oAo (Copilot): v3→v4 shouldMigrate only special-cased versionless
legacy booleans — versionless files with invalid gitCoAuthor values
(`"off"`, `[]`, etc.) skipped the migration and the loader stamped
`$version: 4` over the bad value. Runtime normalization then
silently re-enabled attribution. Extend shouldMigrate to fire on ANY
versionless non-object value at general.gitCoAuthor; the existing
migrate() body's drop-and-warn path resets it. Already-object
shapes (hand-edited to v4) still skip cleanly. Tests added.

0oAt (Copilot): `.terraform.lock.hcl` got dropped from generated-file
exclusion when `.lock` was removed from the blanket extension list
in 3c0e3293b. It's a generated provider lockfile in the same class
as `package-lock.json` and dominates Terraform-repo commits. Re-add
to EXCLUDED_FILENAMES and add a regression test covering both
repo-root and module-nested locations.
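
The per-field coercion can be sketched as follows (hypothetical helper names; the real normalizeGitCoAuthor reads its defaults from the settings schema):

```typescript
interface GitCoAuthor { commit: boolean; pr: boolean; }

// Any non-boolean (hand-edited "false", numbers, null) falls back to the
// schema default rather than riding along as a truthy value.
function coerceBool(value: unknown, fallback: boolean): boolean {
  return typeof value === 'boolean' ? value : fallback;
}

function normalizeGitCoAuthor(value: unknown): GitCoAuthor {
  const v = (value ?? {}) as Record<string, unknown>;
  return {
    commit: coerceBool(v['commit'], true), // schema default: enabled
    pr: coerceBool(v['pr'], true),
  };
}
```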

* fix(attribution): harden restoreFromSnapshot against corrupt payloads

1KMY (Copilot): snapshot.surface was copied without type validation.
A corrupted/partially-written snapshot with a non-string surface
(e.g. {}, 42, null) would later be serialized into the git note as
"[object Object]" and used as a Map key downstream, breaking the
expected payload shape. Type-check and fall back to the current
client surface for any non-string (or empty-string) value.

1KLq (Copilot): per-field sanitiseCount enforced
`promptCount >= 0` and `promptCountAtLastCommit >= 0` independently,
but never the cross-field invariant. A snapshot with
promptCountAtLastCommit > promptCount would surface a negative
getPromptsSinceLastCommit() and propagate as a "(-N)-shotted"
trailer into PR text. Clamp atLastCommit to total on restore.

1KL_ (Copilot): when a snapshot carried both the symlinked and
canonical paths for the same file (a session straddling the
canonicalisation fix), `set(realpathOrSelf(k), ...)` overwrote the
first entry with the second, silently dropping the AI contribution
the first form had accumulated. Merge instead: sum aiContribution
and OR aiCreated when collapsing duplicate keys.

Tests cover all three branches: non-string surface fallback,
promptCount clamp, and duplicate-key merge.
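
The duplicate-key merge can be sketched as follows (hypothetical trimmed record shape; the real snapshot entries carry more fields):

```typescript
interface FileRecord { aiContribution: number; aiCreated: boolean; }

// When the symlinked and canonical forms of the same file collapse to one
// canonical key on restore, merge instead of overwriting: sum the AI
// contribution and OR the created flag.
function mergeRecords(into: Map<string, FileRecord>, key: string, rec: FileRecord): void {
  const existing = into.get(key);
  if (existing) {
    existing.aiContribution += rec.aiContribution;
    existing.aiCreated = existing.aiCreated || rec.aiCreated;
  } else {
    into.set(key, { ...rec });
  }
}
```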

* fix(attribution): roll back snapshot dedup key on sync appendRecord failure

1UMh (Copilot): appendRecord can throw synchronously before returning
a promise — e.g. when ensureConversationFile() rethrows a non-EEXIST
writeFileSync error. The async .catch() handler attached to the
promise never runs in that case, so the optimistic dedup-key set
sticks on a write that never landed and permanently suppresses
identical retries. Roll back lastAttributionSnapshotJson in the outer
catch too. Regression test forces writeFileSync to throw EACCES on
the first invocation, then asserts the second identical snapshot
attempt fires a fresh write rather than getting deduped.
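The shape of the bug and fix, as a minimal sketch (names are illustrative; the real code paths are in the snapshot recording service):

```typescript
// A writer that may throw BEFORE returning a promise: a .catch() on
// the return value never runs for the synchronous throw, so the
// rollback must also live in an outer try/catch.
let lastSnapshotJson: string | null = null;

function recordSnapshot(
  json: string,
  append: (line: string) => Promise<void>,
): void {
  if (json === lastSnapshotJson) return; // dedup identical snapshots
  lastSnapshotJson = json; // optimistic set
  try {
    append(json).catch(() => {
      lastSnapshotJson = null; // async-failure rollback
    });
  } catch {
    lastSnapshotJson = null; // sync-throw rollback (the fixed path)
  }
}
```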

* docs(attribution): align cleanup-branch comments with noteCommitWithoutClearing

Three doc/test-fixture stale-after-refactor cleanups (Copilot
4MDx / 4MEI / 4MEa):

- shell.ts:1944 (around the stagedInfo === null branch): the comment
  still claimed the finally block "falls back to a full clear", but
  1ece87438 switched analysis-failure cleanup to
  noteCommitWithoutClearing(). Update the comment so the reasoning
  matches what the code actually does (and so a future reader doesn't
  reintroduce the wholesale clear thinking it's already there).

- shell.ts: getCommittedFileInfo docstring carried the same stale
  "full clear" claim for the `null` return value. Update to describe
  the noteCommitWithoutClearing() fallback and the smaller-evil
  trade-off for the just-committed file.

- chatRecordingService.test.ts: baseSnapshot fixture for the
  recordAttributionSnapshot tests still carried `baselines: {}`,
  even though that field was removed from AttributionSnapshot in
  296fb55ae's dead-code purge. Structural typing let it compile,
  but the fixture didn't reflect the production shape — drop it.

* fix(attribution): restore fire-and-forget appendRecord, route rollback via callback

6OcJ (Copilot): refactor in 715c258fb returned a Promise from
appendRecord so the snapshot dedup-key path could chain rollback —
but recordUserMessage / recordAssistantTurn / recordAtCommand /
recordSlashCommand / rewindRecording all call appendRecord without
await or .catch(). A transient jsonl.writeLine rejection on any of
those would surface as an unhandled-promise-rejection (warning, or
crash on --unhandled-rejections=throw).

Restore the original fire-and-forget semantics: appendRecord again
returns void and internally swallows async failures (logging via
debugLogger). Per-record failure reactions are routed through an
optional onError callback — recordAttributionSnapshot uses this to
roll back lastAttributionSnapshotJson when the write that set it
ends up rejecting.

Tests: add a fire-and-forget regression that mocks writeLine to
reject and asserts no unhandledRejection events fire while the
existing snapshot rollback tests (sync + async) still pass via the
new callback path.
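The restored contract, sketched (signatures are paraphrased from the description, not the real API):

```typescript
// appendRecord returns void, swallows async failures internally, and
// routes per-record failure reactions through an optional callback, so
// callers that neither await nor .catch() can't leak an
// unhandled-promise-rejection.
function appendRecord(
  writeLine: (line: string) => Promise<void>,
  line: string,
  onError?: (err: unknown) => void,
): void {
  let p: Promise<void>;
  try {
    p = writeLine(line);
  } catch (err) {
    onError?.(err); // sync throw: report, never rethrow
    return;
  }
  p.catch((err) => {
    onError?.(err); // async rejection: swallowed, no unhandledRejection
  });
}
```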

* fix(attribution): GIT_DIR repo-shift bail, snapshot envelope validation, narrow legacyTypes

80ME (gpt-5.5 /review, [Critical]): tokeniseSegment unconditionally
stripped every leading KEY=value token. `GIT_DIR=elsewhere/.git git
commit ...` was therefore treated as an in-cwd commit, picked up the
Co-authored-by trailer, and produced a per-file note that landed
against our cwd's HEAD even though the actual commit went to a
different repo. Define a GIT_ENV_SHIFTS_REPO set (GIT_DIR,
GIT_WORK_TREE, GIT_COMMON_DIR, GIT_INDEX_FILE, GIT_NAMESPACE) and
have tokeniseSegment refuse to parse any segment whose leading env
block (including the env-wrapper's KEY=VALUE block) carries one of
these. Identity / date variables (GIT_AUTHOR_*, GIT_COMMITTER_*) are
deliberately NOT in the set — they tweak metadata but don't relocate
the repo. Tests cover plain prefix, env-wrapped prefix, and a
GIT_COMMITTER_DATE positive control that should still get the trailer.
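A sketch of the guard (reconstructed from the description; `stripLeadingEnv` is a hypothetical name, and a later fix in this series drops GIT_NAMESPACE from the set again):

```typescript
// Env vars that relocate the repo a git command operates on.
// Identity/date vars (GIT_AUTHOR_*, GIT_COMMITTER_*) deliberately
// pass through: they tweak metadata but don't move the repo.
const GIT_ENV_SHIFTS_REPO = new Set([
  'GIT_DIR',
  'GIT_WORK_TREE',
  'GIT_COMMON_DIR',
  'GIT_INDEX_FILE',
  'GIT_NAMESPACE',
]);

// Returns tokens with the leading KEY=value block stripped, or null
// when that block shifts the repo and the segment must not be parsed.
function stripLeadingEnv(tokens: string[]): string[] | null {
  let i = 0;
  while (i < tokens.length && /^[A-Za-z_][A-Za-z0-9_]*=/.test(tokens[i])) {
    const key = tokens[i].slice(0, tokens[i].indexOf('='));
    if (GIT_ENV_SHIFTS_REPO.has(key)) return null; // repo-shifting: bail
    i++;
  }
  return tokens.slice(i);
}
```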

8EeQ (Copilot): restoreFromSnapshot received `snapshot as
AttributionSnapshot` from a structural cast off `unknown` (the
resume path), so its TS-typed shape was only a hint. A corrupted
JSONL line (non-object / array / wrong type discriminator / missing
type) would skip past the version check straight into
Object.entries(snapshot.fileStates) — and a non-object fileStates
(an array, say) seeded fileAttributions with numeric-string keys.
Add envelope-level shape gates (isPlainObject + type discriminator)
and a fileStates plain-object check before iterating; both bail to a
clean reset rather than poisoning the singleton. Tests added.

8Eej (Copilot): SettingDefinition.legacyTypes was typed as
SettingsType[] which includes 'enum' and 'object' — JSON Schema's
`type` keyword doesn't accept those values. Adding
`legacyTypes: ['enum']` would silently produce an invalid
settings.schema.json. Narrow the field's type to
ReadonlyArray<'boolean' | 'string' | 'number' | 'array'> (the
JSON-Schema-primitive subset). Future complex-shape legacy support
should land its own branch in convertSettingToJsonSchema.

* docs(attribution): correct legacyTypes / EXCLUDED_DIRECTORY_SEGMENTS comments

9Ta_ (Copilot): the JSDoc on legacyTypes claimed JSON Schema's
`type` keyword does not accept `'object'` — that's wrong; `'object'`
IS a valid JSON Schema type. Reword to reflect the actual rationale:
`'enum'` is not a valid JSON Schema `type` value at all (enum
constraints use the `enum` keyword), and a bare `{type: 'object'}`
would accept any object regardless of what the field's pre-expansion
shape actually allowed. The narrowed `boolean | string | number |
array` set is exactly what the one-liner generator can faithfully
emit; richer legacy shapes belong in their own branch of
convertSettingToJsonSchema.

9Tbs (Copilot): the comment in generatedFiles.ts referenced
`EXCLUDED_DIRECTORIES`, but the constant is `EXCLUDED_DIRECTORY_SEGMENTS`
(renamed during the segment-boundary refactor). Update the
reference so a future maintainer scanning for the rule doesn't
chase a non-existent identifier.

* fix(attribution): SHA-pin git notes, on-disk hash divergence detection, env -C cwd-shift

tanzhenxin review #1 — Note targets symbolic HEAD, not captured SHA:
buildGitNotesCommand hard-coded 'HEAD' as the target; postHead was
captured at commit-detection time but only used for the !== preHead
diff. Between that capture and the execFile, three more awaited git
calls run — anything that moves HEAD in the same cwd (post-commit
hook, chained `commit && tag -m`, parallel process) silently lands
the note on the wrong commit because of `-f`. Thread postHead
through buildGitNotesCommand as a required `targetCommit` arg.
Test asserts the targeted SHA, not the symbolic ref.

tanzhenxin review #2 — Accumulator has no baseline:
recordEdit was monotonic per-path with no reset for out-of-band
mutations. Re-instate FileAttribution.contentHash and:
- recordEdit hashes the input `oldContent` and resets the per-file
  accumulator if it doesn't match what AI's last write recorded
  (catches paste-replace via external editor, manual save, etc.
  WHEN AI subsequently edits the same file again).
- New validateOnDiskHashes() rehashes every tracked file's CURRENT
  on-disk content and drops entries whose hash diverged. Called
  from attachCommitAttribution before matchCommittedFiles so a
  commit can never credit AI for a human-only diff. Deleted files
  (readFileSync throws) are left alone — the commit's deletion
  record is what the note should reflect.

tanzhenxin review #4 — Failed-commit / staleness leak:
The recordEdit divergence check above + commit-time
validateOnDiskHashes together catch tanzhenxin's exact scenario
(AI edits a.ts → hook rejects → user manually edits a.ts → user
commits → no AI credit because validateOnDiskHashes drops the
stale entry). The !commitCreated branch still preserves
attributions to keep the submodule case working — the staleness
problem is now solved at the next commit's validation step.

Self-review item — env -C / --chdir treated as repo-shifting:
Added ENV_FLAGS_SHIFT_CWD set covering -C / --chdir. tokeniseSegment
returns null for `env -C DIR git commit ...` segments — same
contract as a leading GIT_DIR=... assignment. Without this we'd
either misidentify /elsewhere as the program (silently dropping
attribution) or, worse if -C went into the value-skip set,
trailer-inject onto a commit that lands in /elsewhere's repo. Tests
added alongside the existing GIT_DIR repo-shift cases.

339 tests pass; typecheck clean.

* fix(attribution): pickBool intent-aware, shouldClear gate, ETIMEDOUT surface, drop dead exports

-wgA + -wg0 (deepseek): pickBool defaulted non-boolean to true,
turning a hand-edited `{ commit: "false" }` into enabled
attribution. Replace with intent-aware parsing: "true"/"yes"/"on"/
"1" → true, "false"/"no"/"off"/"0"/"" → false, anything else
(unknown strings, non-1 numbers, objects, arrays, null) → false.
Genuinely-absent sub-fields still default to true (schema default).
Migration test scenarios covered. Tests now cover ~17 input cases
across string/number/null/object/unknown forms.
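In sketch form (a reconstruction of the described behaviour, not the real `pickBool` source):

```typescript
// Intent-aware boolean coercion: known enable strings map to true,
// known disable strings to false, everything unrecognised is treated
// as disabled (safer-by-default). Genuinely-absent fields keep the
// schema default.
function pickBool(value: unknown, schemaDefault: boolean): boolean {
  if (value === undefined) return schemaDefault; // genuinely absent
  if (typeof value === 'boolean') return value;
  if (typeof value === 'object') return false; // objects, arrays, null
  const s = String(value).trim().toLowerCase();
  if (['true', 'yes', 'on', '1'].includes(s)) return true;
  if (['false', 'no', 'off', '0', ''].includes(s)) return false;
  return false; // unknown strings, non-1 numbers: default to off
}
```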

-wgq (deepseek): when buildGitNotesCommand returned null (oversized
payload) or git notes itself failed, the finally block called
clearAttributedFiles(committedAbsolutePaths) — irreversibly
deleting per-file attribution data the user might need to amend &
retry. Introduce a separate `shouldClear` set that's only assigned
on successful note write OR explicit toggle-off. Failure paths
(oversized, exitCode != 0, exception, analysis failure) leave
shouldClear null so the finally block calls noteCommitWithoutClearing
instead — preserving per-file state for the user's recovery.

9p7W (Copilot): execFile callback coerced ETIMEDOUT / SIGTERM
(timeout) into a generic exitCode=1 warning. Detect both
`error.code === 'ETIMEDOUT'` and `error.killed === true &&
error.signal === 'SIGTERM'` so the user-visible warning correctly
names "timed out after 5s" instead of "exited 1".
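A sketch of the classification (the surrounding function name is illustrative; the error fields match Node's `execFile` callback error shape):

```typescript
// Node's execFile error carries `code`, `killed`, and `signal`; a
// timeout kill shows up as either ETIMEDOUT or killed-with-SIGTERM.
interface ExecError {
  code?: string | number;
  killed?: boolean;
  signal?: string | null;
}

function describeGitNotesFailure(err: ExecError, timeoutSecs: number): string {
  const timedOut =
    err.code === 'ETIMEDOUT' ||
    (err.killed === true && err.signal === 'SIGTERM');
  if (timedOut) return `git notes timed out after ${timeoutSecs}s`;
  const exit = typeof err.code === 'number' ? err.code : 1;
  return `git notes exited ${exit}`;
}
```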

-wg7 (deepseek): formatAttributionSummary and getAttributionNotesRef
were exported but had zero production callers (only tests). Remove
the dead exports + their tests (~40 LOC). If/when a logging surface
needs them, they can be re-introduced.

-wgb (deepseek): tokeniseSegment doesn't recursively unwrap
`bash -c '...'` / `sh -c` / `zsh -c`, so addCoAuthorToGitCommit
won't splice the trailer into a wrapped command. The background
refusal AND the post-commit note path DO catch the wrapped commit
because stripShellWrapper at the top of execute peels the wrapper
before gitCommitContext / getGitHead run — so the worst-case
("background bash -c 'git commit' bypasses the guard") doesn't
materialize. The remaining gap (no Co-authored-by trailer for
bash -c-wrapped commits) requires recursively splicing into the
inner script with proper bash single-quote re-quoting; significant
enough that it's worth its own PR. Documented as a partial-coverage
limitation.

339 → 325 tests pass after the dead-export removal; typecheck clean.

* fix(attribution): committed-blob validation, deleted-leaf canonicalisation, sudo/env shifts, dir-stack

gpt-5.5 review (issue 4389405179):

1. realpathOrSelf falls back to the non-canonical input when the
   leaf doesn't exist (deleted file). recordEdit stored the entry
   under the canonical path; lookup post-deletion misses on macOS
   where /var ↔ /private/var. Canonicalise the parent and rejoin
   the basename for missing leaves so deleted-file getFileAttribution
   still resolves the canonical key. Test updated to assert the
   lookup-after-unlink path explicitly.

2. validateOnDiskHashes read the LIVE working-tree, so a user who
   `git add`'d AI's content and then made additional unstaged edits
   would have the entry dropped on a commit whose blob still matched
   AI's hash. Replace with `validateAgainst(getContent)` that takes
   a caller-supplied reader; attachCommitAttribution now passes a
   reader that fetches the COMMITTED blob via `git show HEAD:<rel>`.
   Working-tree validation kept as `validateAgainstWorkingTree` for
   code paths without a committed ref. Returns null = no comparison
   signal (entry preserved). Tests cover all three readers
   (committed-blob via stub, working-tree, null-passthrough).
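The caller-supplied-reader shape, sketched (names paraphrase the commit; hashing is abstracted to a plain function):

```typescript
// The validator no longer reads the live working tree itself: the
// caller decides WHAT content to compare against (committed blob,
// working tree, or nothing). A null read means "no comparison
// signal", and the entry is preserved.
type ContentReader = (canonicalPath: string) => string | null;

interface Tracked {
  contentHash: string;
}

function validateAgainst(
  tracked: Map<string, Tracked>,
  read: ContentReader,
  hash: (content: string) => string,
): void {
  for (const [path, entry] of tracked) {
    const content = read(path);
    if (content === null) continue; // no signal: preserve the entry
    if (hash(content) !== entry.contentHash) tracked.delete(path); // diverged
  }
}
```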

deepseek-v4-pro review #1: sanitiseAttribution defaults missing
contentHash to '' on legacy-snapshot restore. recordEdit's
divergence check would then trip on every subsequent edit and
silently reset all the AI work. Skip the divergence check when
existing.contentHash is empty — we have no baseline to compare
against, so don't drop. Test added covering legacy-snapshot
preservation through validateAgainst.

deepseek #4: validateAgainst now logs every entry drop via
debugLogger.debug so a 3am operator can see WHICH entry got
dropped and tied to which canonical key.

deepseek #8: GIT_NAMESPACE removed from GIT_ENV_SHIFTS_REPO. It
prefixes ref names within the same repo but doesn't redirect git
to a different on-disk repository, so a commit underneath it still
lands in our cwd's repo. Doc comment explains the distinction.

deepseek #9: pushd/popd treated as cwd-shifting alongside cd in
gitCommitContext / isAmendCommit / findAttributableCommitSegment.
pushd reuses cdTargetMayChangeRepo (relative-no-escape stays
in-repo); popd unconditionally flips cwdShifted because we don't
track the bash dir-stack.

deepseek #10: sudo's value-taking flag table now has a parallel
SUDO_FLAGS_SHIFT_CWD set covering -D / --chdir (Linux sudo 1.9.2+).
Any segment whose sudo wrapper sees one of those flags returns null
from tokeniseSegment — same contract as env -C / --chdir and
GIT_DIR=...

328 tests pass; typecheck clean both packages.

* fix(attribution): scope validateAgainst to committed set, SHA-pin reader, intent-aware migration

Round 1 of multi-pass audit on b3a06a7c4. Three correctness fixes:

1. validateAgainst was iterating ALL fileAttributions but the
   committed-blob reader (git show HEAD:<rel>) returns HEAD's
   pre-AI content for files NOT in the just-made commit. Result:
   pending unstaged AI work was silently wiped on every commit
   because the divergence check ran against the wrong baseline
   for unrelated files. Fix: build the committed scope first via
   matchCommittedFiles, scope the reader to that set (return null
   for everything else), validate, then RE-run matchCommittedFiles
   to pick up dropped entries. The validateAgainstWorkingTree
   wrapper had no production caller — removed it and its test.

2. The committed-blob reader used symbolic `HEAD` instead of the
   captured postHead SHA — same TOCTOU concern buildGitNotesCommand
   already addressed. A post-commit hook moving HEAD between
   capture and the reader's `git show` would silently compare
   against the wrong commit's content and trip the divergence
   check spuriously. Pin the reader to `git show <postHead>:<rel>`.

3. v3→v4 migration's invalid-string fallback used to reset to {}.
   Combined with the runtime pickBool's "absent → schema default
   true" rule, that silently re-enabled attribution for users who
   hand-edited `"gitCoAuthor": "off"` to disable. Migration now
   recognises enable-intent strings (true/yes/on/1/enabled) and
   disable-intent strings (false/no/off/0/disabled/'') and maps
   them to {commit, pr} explicitly. Unrecognised strings fall to
   {commit: false, pr: false} with a warning — same safer-by-default
   contract as runtime pickBool. Test grid covers all 11 cases.

Also tidied the FileAttribution.contentHash JSDoc to reference
the renamed `validateAgainst` (was still pointing at the dropped
`validateOnDiskHashes` name).

1085 tests pass; typecheck clean both packages.

* chore(attribution): extract pickOuterLastMatch, log unrecognised pickBool inputs

Round 2 of multi-pass audit. Two cleanups, no behaviour changes:

1. addCoAuthorToGitCommit and addAttributionToPR each carried their
   own copy of the matchRange / isInside / "pick LAST non-nested
   match" logic (~25 LOC duplicated). Extracted to module-level
   helpers `matchSpan`, `isMatchInside`, and `pickOuterLastMatch<T>`
   so a future bug fix can't apply to only one of the two
   rewriters. Behaviour identical — same algorithm, same edge cases.

2. normalizeGitCoAuthor's pickBool silently maps unrecognised
   strings to false (safer-by-default vs the old "default-to-true
   on mismatch" policy, but a user who hand-edited
   `{ commit: "maybe" }` had no signal that their setting was being
   ignored). Add a `gitCoAuthorLogger.warn` listing the accepted
   forms so a debug-mode user can see the actual coercion. Known
   disable-intent strings (false/no/off/0/empty) stay silent —
   they're explicit user intent. Also pass the field name so the
   warning identifies which sub-toggle (commit vs pr) was bad.

1101 tests pass; typecheck clean.

* fix(attribution): canonicalise BOM and CRLF before hashing

Round 3 of multi-pass audit. One real correctness fix.

Edit and WriteFile preserve the file's BOM and CRLF line-ending
choice when writing back, so the on-disk bytes can include a leading
U+FEFF and CRLFs even when AI's recordEdit input was given with LF
and no BOM. The committed-blob reader's `git show <sha>:<rel>`
returns those raw bytes verbatim, and computeContentHash hashed them
as-is — so a UTF-8 BOM file or a CRLF-line-ending file would always
have a mismatch between AI's recorded hash and the on-disk hash, and
validateAgainst would drop the entry on every commit.

Add `canonicaliseForHash`: strips a leading U+FEFF and normalises
CRLF→LF before computing the SHA-256. Both sides (recordEdit when
storing the post-write hash, and validateAgainst when comparing to
the on-disk read) flow through computeContentHash, so the
canonicalisation is symmetric. The hash is metadata used only for
divergence detection — collapsing these visual differences is the
right comparison semantics.

Three regression tests added: BOM-only, CRLF-only, and BOM+CRLF
combined. All exercise the typical case where AI's recordEdit input
is LF + no BOM but the on-disk content (post-writeTextFile) has the
file's preserved BOM/lineEnding choice.
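The canonicalisation in sketch form (function names paraphrase the commit):

```typescript
import { createHash } from 'node:crypto';

// Strip a leading U+FEFF BOM and normalise CRLF -> LF before hashing,
// so a file's preserved encoding choices don't register as divergence.
// Both the store side and the compare side flow through the same
// function, keeping the canonicalisation symmetric.
function canonicaliseForHash(content: string): string {
  const noBom = content.startsWith('\uFEFF') ? content.slice(1) : content;
  return noBom.replace(/\r\n/g, '\n');
}

function computeContentHash(content: string): string {
  return createHash('sha256')
    .update(canonicaliseForHash(content), 'utf8')
    .digest('hex');
}
```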

* fix(attribution): reset accumulator when re-creating a deleted tracked file

Round 4 of multi-pass audit + Copilot finding from review 4236842362
(I missed it in the previous refresh).

recordEdit's existing prior-state check was symmetric on diverged
oldContent but ASYMMETRIC on a fresh file lifetime: when AI creates
`foo.ts` (oldContent=null), then user `rm foo.ts`, then AI
re-creates `foo.ts` (oldContent=null again), the second recordEdit
saw `existing` (from the first lifetime) and SKIPPED the divergence
check (because oldContent === null bails out of that branch). The
accumulator carried 100 chars from the deleted file plus 5 chars
from the new content = 155, vs the actual 5 on disk. Subsequent
generateNotePayload's clamp against `(adds+dels) * 40` couldn't
catch this — the diff size for a 1-line addition is 40, far above
the actual content size.

Add a fresh-file-lifetime branch: when `existing` is set AND the
caller reports `oldContent === null`, reset aiContribution and
aiCreated before counting the new contribution. The new edit is
treated as a brand-new file at the same path (which is what the
caller's null oldContent means semantically).

Test added covering the exact `AI create → delete → AI re-create`
flow. Also verified `should treat new files as ai-created` and
`should accumulate contributions across multiple edits` still pass.
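The lifetime handling, reduced to a sketch (a simplified reconstruction; the real `recordEdit` also hashes content and handles divergence):

```typescript
// oldContent === null on a path we already track means the file was
// deleted and re-created: the old accumulation belongs to a dead
// lifetime and must be reset before counting the new content.
interface Attribution {
  aiContribution: number;
  aiCreated: boolean;
}

function recordEdit(
  tracked: Map<string, Attribution>,
  path: string,
  oldContent: string | null,
  newContent: string,
): void {
  let entry = tracked.get(path);
  if (entry && oldContent === null) {
    entry.aiContribution = 0; // fresh file lifetime: reset accumulator
    entry.aiCreated = false;
  }
  if (!entry) {
    entry = { aiContribution: 0, aiCreated: false };
    tracked.set(path, entry);
  }
  if (oldContent === null) entry.aiCreated = true;
  entry.aiContribution += newContent.length;
}
```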

* fix(attribution): treat git -C . as in-cwd, gate preHead on attributable

Round 5 of multi-pass audit. Two related correctness/efficiency
fixes around the cwd-shift parser and the preHead capture.

1. `git -C .` (and `-C ./`, `-C.`) is a no-op cwd shift but the
   "any -C → cwd-shifted" rule was treating it the same as
   `-C /tmp/other`, suppressing attribution for what's effectively
   `git commit` with an explicit current-dir marker. Add an
   `isNoopCwdTarget` helper used in both the spaced (`-C .`) and
   attached (`-C.`) branches of `parseGitInvocation`. `--git-dir`
   / `--work-tree` are left unconditional — those aren't cwd in the
   same sense.

2. preHead was being captured for ANY hasCommit, including the
   non-attributable cases (`cd /elsewhere && git commit`,
   `git -C /other commit`). The only consumer of preHead is the
   `attachCommitAttribution` call inside the `attributableInCwd`
   branch — there is intentionally NO cleanup branch for the
   non-attributable case (see the existing comment around the
   `else if (commitCtx.hasCommit)` non-branch). The execFileSync
   for `getGitHeadSync` is dead work in that path: ~10–50 ms
   blocking the event loop before the user's real command spawns.
   Gate the capture on `attributableInCwd` to match the consumer.

Tests added for the three -C dot-form variants. Full suite green:
146 in shell.test.ts, 56 in commitAttribution.test.ts.
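The `-C` target handling from point 1, in sketch form (heavily simplified: the real `parseGitInvocation` handles many more flags, and `isNoopCwdTarget` is shown in its tightened `'.'`/`'./'`-only form):

```typescript
// `git -C .` (and `-C ./`, `-C.`) is a no-op cwd shift; any other -C
// target marks the invocation cwd-shifted. --git-dir/--work-tree are
// unconditional in the real code and omitted here.
function isNoopCwdTarget(target: string): boolean {
  return target === '.' || target === './';
}

function cwdShiftedByGitArgs(args: string[]): boolean {
  for (let i = 0; i < args.length; i++) {
    const a = args[i];
    if (a === '-C') {
      // spaced form: value is the next token
      if (!isNoopCwdTarget(args[i + 1] ?? '')) return true;
      i++; // skip consumed value
    } else if (a.startsWith('-C') && a.length > 2) {
      // attached form: value embedded in the same token
      if (!isNoopCwdTarget(a.slice(2))) return true;
    } else if (a === 'commit') {
      break; // flags end at the subcommand
    }
  }
  return false;
}
```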

* fix(core): preserve attribution across renamed files

* fix(attribution): preserve env-vars in tokens, exclude empty -C targets

Round 7 of multi-pass audit. Two related fixes around how
`shell-quote` handles env-var references and how the cwd-shift
detector reads them.

1. `shell-quote.parse` collapses `$NAME` references it cannot
   resolve to the empty string. The downstream cwd-shift checks
   (`cdTargetMayChangeRepo`'s `target.includes('$')` repo-shift
   detector, and the new `isNoopCwdTarget` no-op detector) were
   designed to catch env-var targets but received `''` instead of
   `$NAME` from `tokeniseSegment` and silently failed. Concretely,
   `cd $HOME && git commit` and `git -C $HOME commit` would both
   pass through as in-cwd attributable, stamping our trailer onto
   commits that land in whatever repo `$HOME`/`$REPO_ROOT`
   resolves to at runtime.

   Pass an env getter `(key) => '$' + key` to `shell-quote.parse`
   inside `tokeniseSegment` so unresolved references stay literal
   in tokens (`['cd', '$HOME']` instead of `['cd', '']`).
   `target.includes('$')` now fires correctly, and the no-op
   detector sees `$HOME` (non-`.`) and rejects it. KEY=value
   leading-env detection is unaffected (shell-quote doesn't
   interpolate inside KEY=value tokens).

2. Even with env preservation, an `''` target can still slip
   through (literal `-C ""`, escaped quotes, edge cases in
   shell-quote). Round 5's `isNoopCwdTarget` accepted `''` as a
   no-op alongside `'.'` / `'./'`, which would re-introduce the
   attribution-on-wrong-repo problem if any path produced an
   empty token. Tighten to `'.'` and `'./'` only — the only genuine
   no-ops now misclassified as shifts are literal `-C ""` (malformed,
   won't actually commit) and the rare `-C $PWD` (conservatively
   treated as a shift, since `$PWD` stays the literal string `$PWD`
   and isn't `.` or `./`).

Tests added for `cd $HOME` / `cd $REPO_ROOT && git commit` and
`git -C $HOME commit` / `git -C "" commit`. Full suite green
(150 in shell.test.ts, 58 in commitAttribution.test.ts).
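The env-preservation mechanism from point 1, modeled without the library (shell-quote's real `parse(cmd, env)` accepts exactly such a lookup function; the mini-tokeniser here is a stand-in to show the effect):

```typescript
// When the tokeniser meets an unresolvable $NAME reference, keep it
// literal instead of collapsing it to '' so downstream checks can see
// it.
type EnvLookup = (key: string) => string;

const collapseUnknown: EnvLookup = () => ''; // old behaviour
const keepLiteral: EnvLookup = (key) => '$' + key; // the fix

function tokenise(segment: string, env: EnvLookup): string[] {
  return segment
    .split(/\s+/)
    .filter(Boolean)
    .map((tok) =>
      tok.replace(/\$([A-Za-z_][A-Za-z0-9_]*)/g, (_m: string, key: string) =>
        env(key),
      ),
    );
}

// Downstream repo-shift detector: only fires when '$' survives.
const mayChangeRepo = (target: string): boolean => target.includes('$');
```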

* fix(attribution): SHA-pin diff/rev-list phase, document aiChars heuristic

Addresses tanzhenxin's review (4240760004) — two residuals after
the prior pinning round.

1. Diff phase still races against HEAD.

   The note write itself was already pinned to the captured `postHead`
   (`git notes add -f <postHead>`), but the *content* of the note —
   `getCommittedFileInfo`'s probe + diff calls and the multi-commit
   guard's `rev-list --count` — were still going through symbolic
   `HEAD` / `HEAD~1` / `HEAD@{1}`. Several awaited subprocesses run
   between the postHead capture and these reads, so a husky / lefthook
   auto-amender, signed-commits hook, chained `git tag -m`, or
   parallel git process moving HEAD in that window would leave the
   note attached to commit A but describing commit B's contents.
   Same TOCTOU class as the prior critical, half-closed.

   Thread `postHead` (and `preHead` for amend) through
   `getCommittedFileInfo`. Probes become `rev-parse --verify
   ${postHead}~1` and `log -1 --pretty=%P ${postHead}`; diffs become
   `${postHead}~1..${postHead}` (parent case),
   `${preHead}..${postHead}` (amend — preHead is the pre-amend SHA
   captured before the user's command and is exactly what HEAD@{1}
   resolved to at parse time, with the added benefit that it can't be
   GC'd between capture and use), and `diff-tree --root <postHead>`
   (root commit). The amend branch keeps the existing reflog-vs-
   no-reflog warning, just driven off `preHead` instead of HEAD@{1}.

   Same pin applied to `countCommitsAfter` (now `${preHead}..
   ${postHead}`) and `countCommitsFromRoot` (now `${postHead}`).

   Why parent case uses `${postHead}~1` and NOT `${preHead}`: in
   `git reset HEAD~3 && git commit` chains the captured preHead
   points well above postHead's parent, and `${preHead}..${postHead}`
   would describe the reset-away commits as deletions, drastically
   over-attributing. The actual parent of the just-landed commit is
   what we want, and `${postHead}~1` is the SHA-pinned form of that.

2. `aiChars` reads as a literal char count but isn't.

   The field is emitted as a plain integer named `aiChars`; the PR
   description's example shows values like 3200 / 1500 / 4700 that
   anyone parsing the note will read as literal character counts.
   Internally it's `(addedLines + deletedLines) × 40` for text and a
   flat 1024 for binary, with the per-file AI accumulator clamped
   against that ceiling. So 1000 one-character lines and 1000
   thousand-character lines both report aiChars=40000, and a 5 MB
   image change and a 1-byte binary tweak both report 1024. Anyone
   aggregating raw aiChars for compliance reporting gets
   systematically wrong numbers.

   Add a comprehensive doc block on `FileAttributionDetail` (and
   `CommitAttributionNote`) calling out the heuristic explicitly,
   noting that `percent` / `summary.aiPercent` are the correct
   fields for aggregation since both numerator and denominator use
   the same proxy. Also expand the `APPROX_CHARS_PER_LINE` /
   `BINARY_DIFF_SIZE_FALLBACK` const docs to point at the same
   caveat. (Not renaming the fields — that'd break any downstream
   consumer already parsing the existing schema; the doc is the
   minimum-disruption call here.)

208 attribution tests pass; type-check clean.

* fix(attribution): use posix join in applyCommittedRenames for Windows compat

Windows CI failure on the two new rename tests (visible at PR #3115's
`Test (windows-latest, *)` jobs):

  AssertionError: expected undefined to be defined
  ❯ src/services/commitAttribution.test.ts:572:66 (basic move)
  AssertionError: expected 11 to be 22 (merge into existing)

Root cause: `path.join(canonicalRepoRoot, ...renamedRel.split('/'))`
calls `path.win32.join` on Windows, which forces backslash separators
regardless of input form. The test's `fs.realpathSync` mock returns
forward-slash paths (matching the macOS `/var` ↔ `/private/var`
fixture style), so `recordEdit` stores keys like
`/private/var/repo/src/old.ts`. The rename's joined target then came
out as `\\private\\var\\repo\\src\\new.ts`, the mock left it
unchanged (no `/var/` prefix to translate), and the subsequent
`fileAttributions.get(renamedAbs)` / `getFileAttribution(...)` lookups
missed the just-set entry — the rename silently dropped attribution.

The fix: build the joined path with `path.posix.join` against a
forward-slash-normalised `posixRepoRoot`, then let `realpathOrSelf`
canonicalise to the platform's storage form. This way:

  - On real Windows production: posix-joined `D:/repo/src/new.ts` is
    accepted by `fs.realpathSync` (Win32 API takes mixed slashes) and
    returned in backslash form, matching what `recordEdit` stored.
  - On real Linux/macOS production: forward-slash throughout, no-op.
  - In the symlink-aware test (any platform): forward-slash matches
    the mock-fixture storage form.

`matchCommittedFiles` already does the inverse normalisation
(`.split(path.sep).join('/')` for the relative-form check), so the
in/out paths line up either way.

Skipped adding a path.sep-mocked Linux-side regression because the
ESM module namespace doesn't allow `vi.spyOn` on path's exports.
The Windows CI job is the regression catcher; a focused rerun
should now go green.

* docs(attribution): refresh stale HEAD~1/HEAD@{1} references in comments

The SHA-pinning round (8c3312027) replaced symbolic `HEAD~1..HEAD` /
`HEAD@{1}..HEAD` with `${postHead}~1..${postHead}` and
`${preHead}..${postHead}` in `getCommittedFileInfo` and the rev-list
helpers, but three docstrings / inline comments still described the
old shapes:

- `isAmendCommit` JSDoc said the amend switch goes from `HEAD~1..HEAD`
  to `HEAD@{1}..HEAD`. Updated to reference `${postHead}~1..${postHead}`
  and `${preHead}..${postHead}`, with the why (amended commit's parent
  is the original's parent so the standard parent diff lumps both
  commits' changes).
- `attachCommitAttribution`'s amend branch comment had the same drift;
  updated to mention `${preHead}..${postHead}` directly.
- `getCommittedFileInfo` JSDoc said it diffs "HEAD against its parent
  (HEAD~1)" and listed "--amend with no reflog" as an analysis-failure
  case. Updated to mention postHead-pinning and the preHead-driven
  amend bail (the reflog-GC dependency was dropped in the SHA-pin
  round).

The remaining `HEAD~1..HEAD` references at countCommitsAfter:1959 and
getCommittedFileInfo:2523 are intentional — they describe the old
buggy shape as contrast for why we pin now.

No code change; tests + tsc still clean.

* fix(attribution): catch attached-value forms of env/sudo cwd-shift flags

Round 13 audit found a real bug: `sudo --chdir=/tmp git commit`,
`env -C/tmp git commit`, `env --chdir=/tmp git commit`, and
`sudo -D/tmp git commit` were all silently slipping through the
cwd-shift detector and getting our `Co-authored-by` trailer stamped
onto commits that landed in a different repo.

Root cause: `shell-quote` tokenises both the long attached form
(`--chdir=/tmp`) and the short attached form (`-C/tmp`) as a single
argv entry. The previous SHIFT_CWD detector did set-membership only
against the bare flag (`{'-C', '--chdir'}` for env;
`{'-D', '--chdir'}` for sudo), so the attached-form tokens never
matched and `tokeniseSegment` returned a normally-attributable
`['git', 'commit', ...]` segment.

Fix: introduce `isShiftCwdFlag(flag, set)` that catches:
  - bare set-membership (existing behavior),
  - long attached: `--name=...` when `--name` is in the set,
  - short attached: `-Xanything` when `-X` is in the set and the
    token is longer than the flag itself.

The flag does NOT need to consume an extra value token in the
attached-form case (the value is already embedded), so the existing
TAKES_VALUE bookkeeping is unaffected — we just bail with `null`
from `tokeniseSegment` before reaching the value-skip step.
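A sketch of `isShiftCwdFlag` as described (the exact signature is an assumption):

```typescript
// Matches a token against a cwd-shifting flag set in three forms:
// bare (-C / --chdir), long attached (--chdir=/tmp), and short
// attached (-C/tmp).
function isShiftCwdFlag(token: string, set: ReadonlySet<string>): boolean {
  if (set.has(token)) return true; // bare form
  const eq = token.indexOf('=');
  if (token.startsWith('--') && eq > 2 && set.has(token.slice(0, eq))) {
    return true; // long attached: --chdir=/tmp
  }
  for (const flag of set) {
    if (flag.length === 2 && token.length > 2 && token.startsWith(flag)) {
      return true; // short attached: -C/tmp, -D/tmp
    }
  }
  return false;
}
```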

Tests added: `env --chdir=`, `env -C/...` (attached), `sudo --chdir=`,
`sudo -D/...` (attached) — each is asserted NOT to add a co-author
trailer. 154 shell tests pass; type-check + lint clean.

* test(attribution): cover attached-form git -C/--git-dir/--work-tree

Adds three regression cases to the existing "git -C <path>" suppression
test: the short attached form `-C/path` (single shell-quote token)
and the long attached forms `--git-dir=/path` / `--work-tree=/path`.
parseGitInvocation already had the prefix checks at lines 416/425, but
no test exercised them — paired with the b89b65533 sudo/env attached-
form fix this round closes the family of "shell-quote single-token
flag with embedded value" cases that the bare set-membership checks
would otherwise miss.

157 shell tests pass; type-check clean.

* docs(attribution): document why backtick body doesn't bail like $(

The addCoAuthorToGitCommit body capture has a known truncation case
when an inner unescaped `"` appears inside the captured body — handled
for `$(...)` command substitution with an explicit bailout, but not
for backtick command substitution. The trade-off was unspoken; spell
it out so a future reviewer doesn't read the asymmetry as an
oversight.

Bare-backtick bodies (`\`func()\`` markdown-style) are common in
commit messages, have no inner `"`, and the regex captures them
correctly. Pathological backtick-with-inner-quote bodies (`\`cmd
"with" quotes\``) are a near-zero-traffic case where bash itself
already interprets the backticks as command substitution, so the
user has likely already broken their own command before our rewrite
runs. Bailing on any backtick would lose attribution for the common
case to defend against the rare one.

Also drops a stray blank line in commitAttribution.test.ts left over
from an earlier regression-test attempt.

* fix(attribution): scope trailer rewrite to before unquoted shell comment

Round 13 follow-on. Both `addCoAuthorToGitCommit` and
`addAttributionToPR` ran their `-m` / `--body` regex against the full
segment string, including any trailing shell comment. For a command
like `git commit -m "real" # -m "fake"` (a human-authored script
might leave a commented-out flag in place), `lastMatchOf` would pick
the comment's `-m "fake"`, splice the `Co-authored-by:` trailer in
there, and bash would silently discard the entire segment as a
comment — leaving the actual commit unattributed. Same shape for
`gh pr create --body "real" # --body "fake"`.

Fix: introduce `findUnquotedCommentStart(s)` — a bash-aware position
scanner that tracks single/double-quote state and treats `#` as a
comment marker only when it begins a word (start of input or
preceded by whitespace), not when it appears inside a quoted region
or mid-token like `foo#bar`. Both rewriters slice the segment to
`[0, commentStart)` before running their regex, so the trailer can
only land in the live (pre-comment) part.
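The described scanner is small enough to sketch in full (assumed signature; the real implementation may differ in details like escape handling):

```typescript
// Returns the index of the first `#` that begins an unquoted word
// (start of input or preceded by whitespace), or -1 if there is none.
function findUnquotedCommentStart(s: string): number {
  let inSingle = false;
  let inDouble = false;
  for (let i = 0; i < s.length; i++) {
    const ch = s[i];
    if (inSingle) {
      if (ch === "'") inSingle = false;
    } else if (inDouble) {
      if (ch === '"' && s[i - 1] !== '\\') inDouble = false;
    } else if (ch === "'") {
      inSingle = true;
    } else if (ch === '"') {
      inDouble = true;
    } else if (ch === '#' && (i === 0 || /\s/.test(s[i - 1]))) {
      // `#` only starts a comment at the start of a word, so `foo#bar`
      // and a quoted `#123` are left alone.
      return i;
    }
  }
  return -1;
}
```

The rewriters then slice the segment to `[0, commentStart)` before running their regex.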

Tests added:
  - `git commit -m "real" # -m "fake"` — trailer lands in `"real"`
    body BEFORE the `#`, comment's `-m "fake"` is left untouched.
  - `git commit -m "fix #123 add feature"` — `#` inside the quoted
    body is correctly NOT treated as a comment; the `#123` stays
    inside the body and the trailer is appended.

159 shell tests pass; type-check clean.

* fix(attribution): warn on gh pr create flows that can't be rewritten + cover legacy gitCoAuthor migration end-to-end

Two residuals from this morning's review pass.

1. ANm7O — `addAttributionToPR` silently skipped for `--body-file`,
   `--fill`, and bare `gh pr create` (editor) flows.

   The rewriter only knows how to splice into an inline `--body`/`-b`
   argv entry. For a `gh pr create` that uses `--body-file path`,
   `--fill` (uses commit messages), or no body flag at all (editor
   prompt), there's no inline body to splice into and the function
   returned the unmodified command. Users with `gitCoAuthor.pr`
   enabled would see PRs created without the attribution line and
   have no signal as to why.

   Add a debugLogger.warn at the no-match path naming the unsupported
   flows and pointing the user at the inline form. Don't try to
   handle `--body-file` automatically — that would mean mutating the
   user's file on disk, which is well outside what an unprompted
   command rewriter should do; `--fill` and editor flows have no body
   in argv at all and can't be rewritten without re-architecting.

   Tests added for `--body-file <path>`, `--fill`, and bare
   `gh pr create` — each is asserted to leave the command unchanged
   (no `Generated with Qwen Code` line spliced in).

2. ANm7L — settings-migration integration suite didn't cover the
   exact V3 legacy shape this PR introduces.

   `v3-to-v4.test.ts` already pins the migration body, but the end-
   to-end CLI load → migrate → write path could regress without the
   integration suite noticing. The existing v3LegacyDisableSettings
   fixture has no `general.gitCoAuthor` field, so the V3→V4 step
   technically fires but doesn't exercise the new boolean-expansion
   logic.

   Add a `v3GitCoAuthorBooleanSettings` fixture and a paired test
   case that writes `general: { gitCoAuthor: false }` at $version 3,
   runs the same `mcp list` CLI invocation, and asserts the saved
   file has $version 4 plus `general.gitCoAuthor` exactly
   `{ commit: false, pr: false }` — with sibling general.* keys and
   unrelated top-level sections preserved.
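The boolean-expansion step being pinned here reduces to something like the following (interface and function names are hypothetical; only the `false -> { commit: false, pr: false }` mapping is from the message above):

```typescript
// Hypothetical sketch of the V3 -> V4 gitCoAuthor expansion: a legacy
// boolean becomes the V4 object form, already-migrated objects pass
// through, and an absent field stays absent.
interface V4GitCoAuthor {
  commit: boolean;
  pr: boolean;
}

function migrateGitCoAuthor(
  value: boolean | V4GitCoAuthor | undefined,
): V4GitCoAuthor | undefined {
  if (value === undefined) return undefined; // field absent: nothing to expand
  if (typeof value === 'boolean') return { commit: value, pr: value };
  return value; // already the V4 shape
}
```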

162 shell tests pass; type-check + lint clean.
2026-05-08 09:55:58 +08:00
jinye
df90da6f03
feat(telemetry): add sensitive span attribute opt-in (#3893)
* feat(telemetry): add sensitive span attribute opt-in

Add a telemetry setting and environment override for including sensitive attributes in spans created by the log-to-span bridge. Keep the default filtering behavior for prompt, function_args, and response_text unless explicitly enabled.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): clarify span bridge options

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(telemetry): populate api response text

Populate response_text on API response telemetry events for non-internal prompts so opted-in bridge spans can include model response bodies.

Exclude thought text from the recorded response text and keep internal prompt responses omitted.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(telemetry): clarify sensitive span attribute scope

Clarify that the sensitive span attribute setting only controls log-to-span bridge spans, while response text may still reach other telemetry sinks from API response events.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): cap recorded response text

Limit response_text captured for API response telemetry to a bounded length and mark truncated values to avoid oversized OTLP attributes.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-08 00:36:08 +08:00
ChiGao
7f0c9791b7
feat(cli): expand TUI markdown rendering (#3680)
* feat(cli): expand markdown rendering in tui

* fix(cli): render finalized mermaid images synchronously

* fix(cli): harden markdown rendering paths

* fix(cli): preserve mermaid source fallbacks

* test(cli): make mermaid image renderer mock cross-platform

* fix(cli): run windows mmdc shims through shell

* fix(cli): address markdown rendering review comments

* fix(cli): validate mermaid render timeout

* feat(cli): expose rendered markdown source blocks

* fix(cli): align mermaid source copy controls

* fix(cli): make markdown render toggle visible

* fix(cli): keep markdown render toggle quiet

* fix(cli): generalize markdown render mode

* fix(cli): broaden render mode and copy latex blocks

* fix(cli): align rendered copy source indices

* fix(cli): support copying inline latex expressions

* feat(cli): document markdown render controls

* test(cli): cover markdown render controls

* fix(cli): tighten markdown render fallbacks

* fix(cli): bound mermaid renderer cache

* fix(cli): address markdown render review feedback

* fix(cli): address markdown render review comments

* test(cli): strengthen render mode shortcut coverage

* fix(cli): address mermaid image review comments

* fix(cli): stabilize renderer output truncation

* test(cli): flush fake renderer stderr before exit

* fix(cli): address markdown renderer review feedback

* docs(cli): clarify mermaid image limits

* fix(cli): refresh mermaid images on height resize

* fix(cli): address markdown render review

* chore: revert unrelated review formatting churn

* fix(cli): avoid mermaid regex codeql alert

* fix(cli): silence mermaid operator codeql alert

* fix(cli): render inline math in markdown tables

* fix(cli): harden markdown visual renderers

* fix(cli): strip c1 controls from mermaid previews

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
2026-05-07 16:24:13 +08:00
jinye
b1ec8d64c7
fix(cli): warn on ignored provider generation config (#3883)
* fix(cli): warn on ignored provider generation config

Warn when a selected provider model has top-level model.generationConfig fields that will not apply because the provider entry is sealed. Document the required local-model placement for contextWindowSize and related generation config fields.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): address provider config warning review

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): narrow provider config warning fields

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-07 14:46:23 +08:00
Shaojin Wen
93139d0eb0
docs(cli): document new banner customization settings (#3885)
Add ui.customBannerTitle, ui.customBannerSubtitle, and ui.customAsciiArt
to the user-facing settings table. Also reword ui.hideBanner to note
that it covers both the logo column and the info panel and that Tips
render independently.

These settings landed in #3710 but only ui.hideBanner was listed in
the table, so users had no way to discover the other three short of
reading the schema or the design doc.
2026-05-07 13:42:22 +08:00
ChiGao
4f084352f4
feat(cli): customize banner area (logo, title, hide) (#3710)
* docs(design): add banner customization design (#3005)

Document the design for issue #3005 (customize CLI banner area). Covers
the banner region taxonomy and what is replaceable vs. locked, the three
proposed settings (`ui.hideBanner`, `ui.customBannerTitle`,
`ui.customAsciiArt`) and their resolution pipeline, the schema additions
and wiring touch points, five alternative shapes considered, and the
security / failure-handling guards. Mirrored EN + zh-CN under
`docs/design/customize-banner-area/`. No code changes in this commit;
implementation lands in a follow-up PR.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): customize banner area (logo, title, hide)

Adds three opt-in `ui.*` settings that let users replace brand chrome on
startup while keeping the operational lines (version, auth, model, path)
locked: `hideBanner`, `customBannerTitle`, `customAsciiArt` (string,
{path}, or {small,large}).

A new resolver in `packages/cli/src/ui/utils/customBanner.ts` walks the
loaded settings, normalizes each tier per scope (so {path} resolves
against the file that declared it), reads the file with O_NOFOLLOW and a
64 KB cap on POSIX, sanitizes via a banner-specific stripper that drops
OSC/CSI/SS2/SS3 sequences while preserving newlines, and caps art at 200
lines × 200 cols and titles at 80 chars. Every soft failure logs a
`[BANNER]` warn and falls through to the bundled QWEN logo or default
brand title — banner config can never crash the CLI.

`<Header />` now picks the widest custom tier that fits via a shared
`pickAsciiArtTier` helper and falls back to `shortAsciiLogo` otherwise;
`<AppHeader />` extends the existing `showBanner` gate to honor
`hideBanner` alongside the screen-reader fallback.

Tracks #3005 and the design merged in #3671.

* docs(design): apply prettier to banner customization design

Reformats the EN and zh-CN design docs in
`docs/design/customize-banner-area/` to satisfy `npx prettier --check`:
table column alignment and trailing commas in `jsonc` examples. No
content changes — the words, tables, and code blocks all say the same
thing as before.

Carries forward the only actionable feedback from the now-closed
docs-only PR #3671, where the prettier check was the sole change
requested.

* fix(cli): address banner audit findings

Three audit-driven fixes for the banner customization feature:

1. **VSCode JSON schema accepts every documented shape.** The
   `ui.customAsciiArt` entry in
   `packages/vscode-ide-companion/schemas/settings.schema.json` was
   declared as `type: object`, which made VSCode flag the inline-string
   form (`"customAsciiArt": "  ___"`) — a shape the runtime accepts and
   the design doc recommends — as a schema violation. Replaced with a
   `oneOf` covering string, `{path}`, and `{small,large}` (with each
   tier itself string-or-`{path}`).

2. **Narrow terminals no longer leak the QWEN logo over a white-label
   deployment.** When a user supplied custom ASCII art but neither tier
   fit the terminal, `Header.tsx` previously fell back to the bundled
   `shortAsciiLogo` — silently undoing the white-label intent on small
   windows. The fallback now distinguishes "user supplied custom art"
   from "no custom art at all": in the first case the logo column is
   hidden entirely (info panel still renders); in the second case the
   default logo shows as before. Soft-failure paths (missing file,
   sanitization rejection) still fall through to `shortAsciiLogo`.

3. **Sanitizer strips C1 control bytes (0x80-0x9F).** The art and title
   strippers previously stopped at 0x7F, leaving single-byte CSI
   (`0x9B`), DCS (`0x90`), ST (`0x9C`) and other C1 controls intact —
   which legacy 8-bit terminals would still interpret. Aligned the
   ranges with the repo's existing `stripUnsafeCharacters` (in
   `textUtils.ts`) so banner content can't carry interpreted control
   bytes through.

New tests cover: C1 strip in art and title, absolute path reads,
symlink rejection on POSIX, narrow-terminal hide-on-custom-art, and
end-to-end `<AppHeader />` rendering through `resolveCustomBanner`.
The full banner suite is 48 tests (was 42).

* docs(design): clarify cross-scope tier merge and white-label fallback

Two clarifications surfaced by the audit on the implementation PR:

1. The design said `customAsciiArt` follows standard merge precedence,
   but the resolver actually walks scopes per-tier so workspace can
   override only `large` while user keeps `small`. Document that this
   per-tier walk is intentional — both because each `{path}` has to
   resolve against the file that declared it (the merged view loses
   that information) and because it lets users keep a personal default
   tier and override the other one per-workspace.

2. The render-time tier-selection step now distinguishes "user
   supplied custom art but neither tier fits" (hide the logo column
   entirely; falling back to `shortAsciiLogo` would silently undo a
   white-label deployment on narrow terminals) from "user supplied no
   custom art at all" (fall through to `shortAsciiLogo` and let the
   default-logo width gate decide). Step 5's pure soft-failure
   fallback (missing file, sanitization rejection) is unchanged —
   still `shortAsciiLogo`.

Mirrored both edits in the zh-CN translation.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(design): add size budget section to banner customization

Question raised on the implementation PR: "why is the test logo `CCA`
instead of the full `Custom Code Agent` — is there a character limit?"

There is no character-count limit on titles or art. There is a
**width budget** driven by terminal columns, plus an absolute
hard cap (200×200 art, 80-char title) to keep malformed input from
freezing layout. The existing user-facing guide didn't quantify the
budget anywhere, so users were guessing why long inline names didn't
render.

Add a "How wide can the logo be? — the size budget" subsection that
spells out the formula
(`availableLogoWidth = terminalCols − 4 − 2 − 44`), tabulates it at
80 / 100 / 120 / 200 cols, calls out that a 17-char brand like
"Custom Code Agent" can't render as a single ANSI Shadow line on most
terminals (~120 cols of art), and shows the stacked-words
`{ small, large }` recipe — including the `figlet` one-liner that
generates the corresponding `banner-large.txt`.

Mirrored in the zh-CN translation.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(design): add limits-at-a-glance table; switch demo to Custom Agent

The banner-customization design now has the size budget written down,
but the per-cap limits (80-char title, 200×200 art, 64 KB file) were
buried inside the size-budget formula table. Surface them as their own
"Limits at a glance" subsection at the top of the user-configuration
guide so users see the hard caps before they start hand-crafting art.

Also switch the running example from "Custom Code Agent" (17 chars,
~120 cols of ANSI Shadow art on one line — too wide for any common
terminal) to "Custom Agent" (12 chars, two-word stack at ~54 cols ×
12 lines, fits any terminal ≥ 104 cols). The figlet recipe is now a
two-word pipeline so a copy-paste run produces art the size the doc
claims.

Mirrored both changes in the zh-CN translation. The implementation
itself is unchanged.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): address PR review + CI Lint failure

Two reviewer findings on PR #3710 (and the Lint job that fails for the
same root cause):

1. **Schema regen now reproduces the committed JSON Schema.** The CI
   Lint step runs `npm run generate:settings-schema` and fails when
   the worktree dirties — my earlier hand-authored `oneOf` got blown
   away because `customAsciiArt` is `type: 'object'` in the source
   schema and the generator had no way to emit a union.

   Add a `jsonSchemaOverride` escape-hatch field on `SettingDefinition`:
   when set, the generator emits the override verbatim (description
   carried forward) instead of the type-driven shape. Set it on
   `customAsciiArt` to express the runtime union (string | {path} |
   {small,large} where each tier is itself string-or-{path}). The
   committed schema is now regenerated from source and CI's
   regenerate-and-diff check passes; two back-to-back regens produce
   identical output.

2. **Untrusted workspace settings no longer influence the banner.**
   `collectScopedTiers()` walked `settings.workspace` directly because
   per-scope file paths are needed to resolve relative `{path}`
   entries — but that bypassed the trust gate that
   `settings.merged` enforces. An untrusted checkout could therefore
   render its own ASCII art and trigger local file reads through a
   `{path}` entry before the user trusts the folder. Skip
   `settings.workspace` entirely when `settings.isTrusted` is false.
   Two regression tests cover the gate (untrusted = workspace
   silenced, falls through to user; trusted = workspace honored).

Test suite for the banner is now 30 resolver tests + the existing
Header / AppHeader / settingsSchema tests = 66 total, all green.

* feat(cli): add ui.customBannerSubtitle for the spacer row

Adds a fourth opt-in setting to the banner customization surface.
The info panel renders four rows (title, subtitle/spacer, status,
path); the second row was a hard-coded single-space spacer up to
now. With this change a fork or white-label deployment can set
`ui.customBannerSubtitle` to a one-line subtitle (e.g. "Built-in
DataWorks Official Skills") and have it render in the secondary
text color in place of the spacer. Empty/unset preserves the
previous blank-spacer layout, so the change is back-compat.

The subtitle is sanitized through the same
`sanitizeSingleLine` helper as the title (now factored out): OSC /
CSI / SS2 / SS3 leaders dropped, every other C0/C1 control byte
replaced with a space, internal whitespace collapsed, ends
trimmed. Capped at 160 characters — looser than the title's 80
because tagline / "powered by" copy commonly runs longer — with
the same `[BANNER]` warn on truncation.
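The pipeline described above can be sketched as one function (the name matches the message; the regexes here are simplified illustrations, not the shared core strippers):

```typescript
// Sketch of the single-line sanitize pipeline: drop OSC/CSI/SS2/SS3
// escape sequences, replace remaining C0/C1 control bytes with spaces,
// collapse internal whitespace, trim the ends, and cap the length.
function sanitizeSingleLine(input: string, maxLen: number): string {
  const stripped = input
    // OSC: ESC ] ... terminated by BEL or ST (ESC \)
    .replace(/\u001b\][^\u0007\u001b]*(?:\u0007|\u001b\\)?/g, '')
    // CSI: ESC [ parameter bytes, intermediates, final byte
    .replace(/\u001b\[[0-?]*[ -\/]*[@-~]?/g, '')
    // SS2 / SS3: ESC N or ESC O plus the shifted character
    .replace(/\u001b[NO]./g, '')
    // Any remaining C0 (0x00-0x1F, 0x7F) or C1 (0x80-0x9F) byte -> space
    .replace(/[\u0000-\u001f\u007f-\u009f]/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
  return stripped.length > maxLen ? stripped.slice(0, maxLen) : stripped;
}
```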

Wiring:

- `settingsSchema.ts` — new `customBannerSubtitle` entry next to
  `customBannerTitle`, `showInDialog: false` (free-form text in
  the TUI dialog isn't worth its own picker).
- `customBanner.ts` — `ResolvedBanner.subtitle` field;
  `resolveCustomBanner` populates it; `sanitizeTitle` and the new
  `sanitizeSubtitle` share the same helper.
- `Header.tsx` — when `customBannerSubtitle` is truthy the spacer
  row renders the string (secondary color, single line) instead
  of `<Text> </Text>`. Auth/model and path still sit at their
  usual positions.
- `AppHeader.tsx` — pipes `resolvedBanner.subtitle` through.
- VSCode JSON schema regenerated from source (idempotent).

Tests: 5 new resolver tests (default, sanitize, length cap,
empty, newline + C1 strip), 2 new Header tests (renders subtitle
between title and auth; spacer preserved when unset), 1 new
AppHeader integration test (end-to-end through resolver). Banner
suite is now 35 + 17 + 6 + 16 = 74 tests, all green.

Design docs (EN + zh-CN) updated: region taxonomy now lists four
B-rows; "Limits at a glance" table grows a subtitle row;
"Customization rules" matrix and "How to modify" section gain a
"Add a brand subtitle" example with a rendered four-row preview.

* docs(design): sweep stale 3-setting references after subtitle add

Self-review found several sections of the banner customization design
doc still framed for the original three settings; bring them in line
with the four-setting reality landed in c7aa4a401:

- Region taxonomy ASCII diagram now shows four B-rows
  (① title, ② subtitle, ③ status, ④ path).
- Resolution-pipeline ASCII diagram and step list pick up
  customBannerSubtitle on the input side and the title/subtitle
  sanitize step on the resolver side.
- "Settings schema additions" section lists the fourth entry,
  customBannerSubtitle, and notes the customAsciiArt
  jsonSchemaOverride that landed for VS Code schema reproducibility.
- "Wiring changes" section updates the Header prop list and the
  HeaderProps interface, replaces the brittle line-number anchors
  with file-level anchors, drops the obsolete `paths` second arg
  from resolveCustomBanner, and adds the trust-gate sentence.
- "Security & failure handling" table replaces the
  stripTerminalControlSequences shorthand with the actual
  banner-specific stripper, splits the title/subtitle row to cover
  both, and adds the untrusted-workspace gate as its own row.
- "Verification plan" gains two scenarios: the subtitle row, and
  the untrusted-workspace check that the Critical reviewer comment
  on the impl PR explicitly asked us to lock down.

Mirrored every edit in the zh-CN translation. The implementation
itself is unchanged.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): address banner re-review (FIFO, mutex schema, display width, regex dedupe)

Addresses the five findings on PR #3710 from the latest re-review:

1. **[Critical] FIFO/pipe at `customAsciiArt.path` no longer hangs
   startup.** The resolver was calling `openSync(path, O_NOFOLLOW)`
   *before* the `fstatSync(...).isFile()` check; on POSIX, opening a
   FIFO read-only blocks until a writer connects, and `O_NOFOLLOW`
   doesn't help — it only refuses symlinks at the final path
   component. `readArtFile` now `lstatSync()`s first and refuses
   non-regular files (FIFO / socket / device / symlink) before the
   open, while keeping the post-open `fstatSync` check for TOCTOU
   safety against a swap between the lstat and the open. New
   POSIX-only regression test `mkfifo`s a named pipe and asserts the
   resolver soft-fails inside 1 s; if the open ever regresses to
   blocking, the test will hang past the timeout and the assertion
   will catch it.

2. **[Suggestion] `{path}` and `{small,large}` are now mutually
   exclusive in both schema and runtime.** The `jsonSchemaOverride`
   on `ui.customAsciiArt` is split into three branches (string,
   `{path}`, `{small?, large?}`); none of them allow `path` and tier
   keys to co-exist. `normalizeTiers()` mirrors that — an object
   carrying both kinds of keys is now soft-rejected with a `[BANNER]`
   warn rather than letting `path` silently win and dropping the
   tier values. New regression test pins the runtime side.

3. **[Suggestion] Column cap and tier-fit selection now measure in
   terminal cells.** `getAsciiArtWidth` (in `textUtils.ts`) and the
   `MAX_ART_COLS` cap in `customBanner.ts` were both using UTF-16
   `.length`, so 200 CJK fullwidth characters would slip the cap and
   render at ~400 cells, and `pickAsciiArtTier`'s width-fit check
   was wrong for any non-ASCII art. Switched both to
   `getCachedStringWidth` (string-width semantics, already in the
   repo); art truncation walks code points until adding another
   would push the cell width past the cap, so we never split a
   fullwidth code point or surrogate pair down the middle. New
   regression test exercises the CJK fullwidth case.

4. **[Suggestion] `collectScopedTiers()` no longer drops a whole
   scope just because it has no `file.path`.** Inline-string tiers
   don't need an owning settings directory; only `{path}` tiers do.
   The path-presence check was moved into the `{path}` branch, so a
   path-less scope (e.g. `systemDefaults`, future SDK-injected
   scopes) can still contribute inline art. `{path}` entries in such
   a scope soft-fail with a tier-specific `[BANNER]` warn rather
   than killing the whole scope. Two regression tests cover both
   sides.

5. **[Suggestion] OSC / CSI / SS2-3 regex are now authored once.**
   Extracted `TERMINAL_OSC_REGEX`, `TERMINAL_CSI_REGEX`,
   `TERMINAL_SHIFT_DCS_REGEX` from `stripTerminalControlSequences`
   in `@qwen-code/qwen-code-core` and re-export them from the
   package index. `customBanner.ts` reuses the constants for
   `sanitizeArt` (which still has to preserve `\n` / `\t`) and
   delegates the title/subtitle pipeline directly to
   `stripTerminalControlSequences`. Also backported the C1 control
   strip (0x80-0x9F) into the core helper so all callers
   (session-title, etc.) benefit from the same coverage; banner
   sanitizer was the only place catching single-byte CSI / DCS / ST.
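The lstat-before-open pattern from finding 1 can be sketched as follows (assumed helper name; the real resolver also enforces the 64 KB cap and logs a `[BANNER]` warn on each soft failure):

```typescript
import {
  closeSync,
  constants,
  fstatSync,
  lstatSync,
  openSync,
  readFileSync,
} from 'node:fs';

// Soft-failing read of a regular file that can't be hung by a FIFO.
function readRegularFile(path: string): string | null {
  try {
    // lstat first: it does not follow symlinks and does not open, so a
    // FIFO cannot block us here. Refuse anything that isn't a regular file.
    if (!lstatSync(path).isFile()) return null;
  } catch {
    return null; // missing path, permission error, etc.
  }
  let fd: number;
  try {
    // O_NOFOLLOW only refuses a symlink at the final path component; it
    // is POSIX-only, hence the ?? 0 fallback.
    fd = openSync(path, constants.O_RDONLY | (constants.O_NOFOLLOW ?? 0));
  } catch {
    return null;
  }
  try {
    // Re-check on the open fd (TOCTOU): the path could have been swapped
    // between the lstat and the open.
    if (!fstatSync(fd).isFile()) return null;
    return readFileSync(fd, 'utf8');
  } finally {
    closeSync(fd);
  }
}
```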

Banner suite is now 40 + 17 + 6 + 16 = 79 tests, all green. Schema
regen is still byte-for-byte idempotent. `npm run typecheck` and
prettier clean on touched files.
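The cell-measured truncation from finding 3 looks roughly like this (the `cellWidth` helper is a crude stand-in for a real string-width implementation, covering only a few East Asian Wide ranges for illustration):

```typescript
// Rough width-in-cells of one code point: 2 for a few fullwidth ranges,
// 1 otherwise. A real implementation uses string-width semantics.
function cellWidth(cp: number): number {
  return (cp >= 0x1100 && cp <= 0x115f) ||
    (cp >= 0x2e80 && cp <= 0xa4cf) ||
    (cp >= 0xac00 && cp <= 0xd7a3) ||
    (cp >= 0xf900 && cp <= 0xfaff) ||
    (cp >= 0xff00 && cp <= 0xff60)
    ? 2
    : 1;
}

// Truncate to a cell budget by walking code points, so a fullwidth
// character (or surrogate pair) is never split down the middle.
function truncateToCells(line: string, maxCells: number): string {
  let used = 0;
  let out = '';
  for (const ch of line) {
    // for..of iterates code points, not UTF-16 units
    const w = cellWidth(ch.codePointAt(0)!);
    if (used + w > maxCells) break; // next char would overflow: stop cleanly
    out += ch;
    used += w;
  }
  return out;
}
```

A UTF-16 `.length` cap would let 200 fullwidth characters through at ~400 cells; the cell walk is what closes that gap.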

* fix(cli): replace require() with ES6 import in FIFO test (lint)

The FIFO regression test in 7ccbfaeb1 used a synchronous `require()` to
pull in `node:child_process` so the test could lazy-load `execFileSync`
only when needed. CI Lint flagged it under `no-restricted-syntax` —
the repo enforces ES6 imports throughout, including in tests, with no
exception for `require()`.

Move the import to the top of the file alongside the other `node:` /
vitest imports. The `try/catch` around `execFileSync('mkfifo', ...)`
still gates the test on `mkfifo` being available (rare on a fresh
container, so we skip rather than fail). 40 / 40 tests still pass and
ESLint is clean on the touched file.

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-07 10:17:53 +08:00
tanzhenxin
808d0978eb
feat(cli): route foreground subagents through pill+dialog while running (#3768)
* feat(cli): route foreground subagents through pill+dialog while running

Foreground (synchronous) subagents currently render a live AgentExecutionDisplay
inside the parent's pendingHistoryItems block. The frame mutates on every tool
call and approval; once it grows past the terminal height (verbose mode, parallel
subagents, long tool-call lists) the live-area repaint flickers visibly.

This change extends BackgroundTaskRegistry with a flavor: 'foreground' | 'background'
discriminator. Foreground entries register at the start of the synchronous tool-call
and unregister in its finally path. The pill counts them; the dialog drills into
their activity. The inline frame is suppressed during the live phase — only an
active, focus-locked approval prompt renders, as a small banner labeled with the
originating agent. Once the parent turn commits, the full AgentExecutionDisplay
appears in scrollback via Ink's <Static>, exactly as before.

Foreground entries skip the XML task-notification (the parent receives the result
through the normal tool-result channel) and skip the headless holdback (the
parent's await already pins the loop). The dialog gates per-agent cancellation
behind a two-step confirm so a stray 'x' can't end the user's current turn.

* fix(cli): address review findings on foreground subagent routing

- Gate `registerCallback` on background flavor so foreground entries
  don't leak orphaned `task_started` SDK events without a matching
  terminal notification.
- Render a queued-approval marker for non-focus subagents instead of
  returning null, so a queued approval is visible in the main view.
- Move `emitStatusChange` before `agents.delete` in
  `unregisterForeground` to match the ordering used by
  complete/fail/cancel/finalize.
- Prefix the foreground tool result with a cancel marker when
  `terminateMode === CANCELLED`, so the parent model can distinguish
  a user-cancelled run from a successful completion.
- Mirror the background path's stats wiring on the foreground path
  so `entry.stats` stays current and the dialog detail subtitle
  shows tool count + tokens for foreground runs.
- Remove the unreachable `isWaitingForOtherApproval` branch (subsumed
  by the queued-approval marker above).
- Reset the foreground confirm-step on detail-mode `left` and ignore
  `x` on terminal entries so an armed cancel can't carry into list
  mode and the hint footer/handler stay in sync.
- Test factory uses a `baseProps` spread instead of `as` cast so a
  future required field on `ToolMessageProps` is a compile-time miss.
2026-05-06 14:08:12 +08:00
Shaojin Wen
3631c01e17
feat(skills): parallelize loading + add path-conditional activation (#3604)
* perf(skills): parallelize skill loading with Promise.all

Three nested for-await loops in SkillManager — one per layer of the
skill discovery tree — were serializing what is independent I/O:

- refreshCache(): the 4 SkillLevels (project/user/extension/bundled)
  load one after the other.
- listSkillsAtLevel(): each provider directory (.qwen, .agent,
  .cursor, ...) is read sequentially.
- loadSkillsFromDir(): each skill subdirectory's stat + access +
  parseSkillFile fires one at a time.

Replace each layer with Promise.all so the I/O fans out. Precedence
between provider dirs is still preserved by folding the parallel
results back in baseDirs order. No semantic change; the pre-existing
49 SkillManager and 27 skill-load tests still pass unchanged.
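The shape of the change, reduced to one layer (names here are illustrative, not the real SkillManager API):

```typescript
// Before: each directory is awaited one at a time.
async function loadAllSequential(
  dirs: string[],
  load: (dir: string) => Promise<string[]>,
): Promise<string[]> {
  const out: string[] = [];
  for (const dir of dirs) out.push(...(await load(dir)));
  return out;
}

// After: the independent reads fan out; precedence is preserved because
// Promise.all returns results in input (baseDirs) order, not completion order.
async function loadAllParallel(
  dirs: string[],
  load: (dir: string) => Promise<string[]>,
): Promise<string[]> {
  const results = await Promise.all(dirs.map(load));
  return results.flat();
}
```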

* feat(skills): add path-conditional skill activation

Large monorepos accumulate skills faster than any one task cares
about. Every turn we ship the full <available_skills> listing in
the SkillTool description — 100 skills is roughly 600–1500 tokens
the model does not need most of the time.

Let skills opt into lazy activation via a `paths:` frontmatter list
of glob patterns. Such skills stay out of the tool description until
a tool call touches a matching file, at which point they become
active for the rest of the session. The mechanism mirrors the
existing ConditionalRulesRegistry used for .qwen/rules/.

Shape:

- SkillConfig gains `paths?: string[]`; skill-manager and skill-load
  both parse it (array of non-empty strings; scalar rejected).
- New skill-activation.ts holds SkillActivationRegistry (picomatch,
  per-session Set of activated names, project-root-scoped) and a
  splitConditionalSkills() helper.
- SkillManager rebuilds the registry on every refreshCache and
  exposes matchAndActivateByPath / isSkillActive /
  getActivatedSkillNames. Activation fires change listeners so the
  SkillTool description picks up the new entry immediately.
- SkillTool.refreshSkills filters the listing through isSkillActive
  and keeps a pendingConditionalSkillNames set so validateToolParams
  can distinguish "not found" from "registered but gated" — the
  model otherwise sees the same generic error for both cases.
- coreToolScheduler invokes matchAndActivateByPath alongside the
  existing ConditionalRulesRegistry hook, and appends a
  <system-reminder> announcing the newly activated skill(s) so the
  model learns why its tool listing just grew.

Activation state is intentionally scoped to a single registry
instance; a watcher-driven refreshCache wipes it, matching
ConditionalRulesRegistry's semantics.

Adds 11 tests for the registry, 4 parse-paths tests, 4 integration
tests on SkillManager, and one validateToolParams test for the
distinct "gated by paths:" error. All 197 related tests pass.
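Stripped to its core, the registry shape looks like this (a minimal
sketch: the real implementation compiles globs with picomatch and scopes
matching to the project root; the naive `globToRegExp` below is a
stand-in):

```typescript
// Naive glob-to-RegExp stand-in for picomatch; handles `**/`, `**`, `*`.
function globToRegExp(glob: string): RegExp {
  const source = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metachars
    .replace(/\*\*\//g, '\u0000')         // `**/` -> any directory prefix
    .replace(/\*\*/g, '\u0001')           // `**`  -> anything
    .replace(/\*/g, '[^/]*')              // `*`   -> one path segment
    .replace(/\u0000/g, '(?:.*/)?')
    .replace(/\u0001/g, '.*');
  return new RegExp(`^${source}$`);
}

interface ConditionalSkill {
  name: string;
  paths: string[];
}

// Per-session activation: once a path matches, the skill stays active.
class SkillActivationRegistry {
  private readonly activated = new Set<string>();
  private readonly matchers: Array<{ name: string; patterns: RegExp[] }>;

  constructor(skills: ConditionalSkill[]) {
    this.matchers = skills.map((s) => ({
      name: s.name,
      patterns: s.paths.map(globToRegExp),
    }));
  }

  /** Returns the names newly activated by this path, if any. */
  matchAndActivateByPath(relativePath: string): string[] {
    const newlyActivated: string[] = [];
    for (const { name, patterns } of this.matchers) {
      if (this.activated.has(name)) continue;
      if (patterns.some((p) => p.test(relativePath))) {
        this.activated.add(name);
        newlyActivated.push(name);
      }
    }
    return newlyActivated;
  }

  isSkillActive(name: string): boolean {
    return this.activated.has(name);
  }
}
```

The non-empty return value from matchAndActivateByPath is what drives
the one-shot `<system-reminder>`; repeat matches return nothing.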

* fix(skills): scope path activation to visible, model-invocable skills

Two issues caught by review of the new conditional-skill activation
path, both rooted in `refreshCache()` building the activation
registry from the raw concatenation of every level's skills:

- Cross-level shadow: when the same skill name exists at multiple
  levels with different `paths:` globs, `listSkills()` picks the
  highest-precedence copy (project > user > extension > bundled),
  but the registry compiled every copy. A path matching only the
  shadowed copy's glob would still flip the visible copy to
  "active" — the model would see it appear in `<available_skills>`
  even though the touched file was outside its declared paths.

- Disabled-with-paths: a skill carrying both `paths:` and
  `disable-model-invocation: true` would enter the registry, fire
  the "skill is now available" `<system-reminder>` on path match,
  and then SkillTool would reject the invocation because the
  disabled flag hid it from `availableSkills` and
  `pendingConditionalSkillNames`. The model gets a generic "not
  found" after being told the skill exists.

Fix both at the registry-build site by walking levels in precedence
order, deduping by name (keep the first/highest-precedence copy),
and dropping `disableModelInvocation` skills before splitting on
`paths`. Adds two regression tests in `skill-manager.test.ts`.

* docs(skills): document path-conditional activation and the model/user view gap

@yiliang114 noted that asking the model "what skills do you have?"
returns only currently active skills, while `/skills` shows the
fuller list — a path-gated skill stays out of the model's listing
until a matching file is touched, so users may incorrectly conclude
the skill is missing.

Add an "Optional: gate a Skill on file paths (`paths:`)" subsection
under the field requirements, covering glob semantics, scope, the
session-lifetime activation, that user invocation is unaffected, and
the disable-model-invocation interaction. Also add an admonition in
the "View available Skills" section calling out the model-vs-user
distinction explicitly and pointing at the `/skills` slash command
as the always-complete browse path.

* refactor(skills): extract parsePathsField + tighten paths mock pattern

The `paths:` frontmatter parser was duplicated across
`skill-manager.ts:parseSkillContent` and
`skill-load.ts:parseSkillContent`. Future validation tweaks
(e.g. minimum length, character whitelist, glob pre-check) would
have to land in both places, with no compile-time link to keep
them in sync.

Extract `parsePathsField(frontmatter)` into `types.ts` next to the
existing `parseModelField`, and call it from both parsers. Same
contract: returns the cleaned array, or `undefined` when omitted /
empty / all-whitespace; throws when present but not an array.
Adds 8 tests in `skill-load.test.ts` covering the contract.

Also tighten the `paths:` branch in the `skill-manager.test.ts`
mock yaml parser. The previous `yamlString.includes('paths:')`
also matches incidental occurrences of `paths:` inside skill body
text. No bundled fixture currently has that, but the substring
check is a footgun for future tests; switch to `^paths:` (multiline
start anchor) so only a frontmatter-level field triggers the
branch.
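The difference between the two checks, sketched with hypothetical
inputs:

```typescript
// Substring check matches `paths:` anywhere, including prose in the
// skill body; the multiline-anchored regex only matches a line that
// starts a `paths:` field.
const looseMatch = (yaml: string): boolean => yaml.includes('paths:');
const anchoredMatch = (yaml: string): boolean => /^paths:/m.test(yaml);

const frontmatter = 'name: demo\npaths:\n  - src/**/*.ts';
const bodyMention = 'description: discusses glob paths: in prose';
```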

* fix(skills): widen activation coverage and tighten dedup edges

Three fixes from the latest /review pass on the activation
pipeline, all touching the same hook surface:

1. Activation only fired on `file_path` — read-file / edit /
   write-file. Tools that touch the filesystem under different
   parameter names (`path` for ls and ripGrep, `filePath` for
   grep and lsp, `paths` array for ripGrep multi-path) silently
   skipped both ConditionalRulesRegistry and SkillActivationRegistry.
   Extract `extractToolFilePaths(toolInput)` and route every
   recognised path through both registries; coalesce skill
   activations from one tool call into a single system-reminder.

2. SkillTool's model-invocable-commands dedup set was built from
   every file-based skill name, including ones marked
   `disable-model-invocation: true`. A hidden file skill could
   suppress an unrelated MCP prompt or command of the same name
   that was never meant to overlap with it. Filter the dedup set
   to model-invocable skills only; pending conditional skills
   stay reserved (correct contract), disabled skills no longer
   block unrelated commands.

3. SkillActivationRegistry's project-root guard rejected `..` /
   `../` prefixes but accepted absolute results. On Windows,
   `path.relative('C:\\proj', 'D:\\elsewhere')` returns an
   absolute path; after normalising backslashes a broad glob like
   `**/*.ts` would activate a project-scoped skill for an
   off-project file. Reject absolute relative results before
   normalising slashes.

Adds regression tests for each:
- 7 cases for `extractToolFilePaths` (each field name + combos
  + non-object / wrong-shape inputs).
- 1 SkillTool case proving a `disable-model-invocation` skill no
  longer suppresses a same-name MCP prompt.
- 1 SkillActivationRegistry case for the absolute-relative-path
  guard. (220 skill-area tests pass total.)

* test: stub matchAndActivateByPath in SkillManager test mocks

The path-conditional skill activation hook in
CoreToolScheduler.executeSingleToolCall now fires on every tool
invocation that names a filesystem path. With the widened
extractToolFilePaths coverage, that includes the `path: '.'`
input shape used by the AgentHeadless tool-execution tests.

Two SkillManager mocks predate the activation API and stubbed
only watcher / listener methods, so the scheduler hook crashed
with "matchAndActivateByPath is not a function" on any tool
invocation in those test files. Local runs on this branch missed it
(no `path:` field tools were exercised pre-merge), but CI caught the
regression in agent-headless.test.ts across all 9 matrix combos.

Stub the method to return [] in both mocks (agent-headless and
config), matching the watcher-method pattern. Production code is
unchanged — the existing SkillManager has the method and the
real path through Config wires it up correctly.

* fix(skills): await listener refresh during path activation

Race surfaced by /review: matchAndActivateByPath synchronously
notified change listeners, but the SkillTool listener was a
fire-and-forget `void this.refreshSkills()`. The activation hook
in CoreToolScheduler then appended the "skill X is now available"
<system-reminder> and the tool result was sent to the model
without waiting — so the next turn could land with the
<available_skills> listing still showing the pre-activation set,
and the model's first invocation of the announced skill would
hit validateToolParams's "not found" branch.

Make the listener pipeline awaitable end-to-end:

- addChangeListener now accepts `() => void | Promise<void>`.
- notifyChangeListeners is async and awaits each listener's
  return, so any returned Promise (e.g. SkillTool.refreshSkills)
  is held before the call resolves.
- refreshCache awaits the notification it was already firing.
- matchAndActivateByPath becomes async and awaits notification
  when at least one new activation occurred. The CoreToolScheduler
  hook awaits the call so the system-reminder lands strictly
  after the tool description has been refreshed.
- SkillTool's listener returns the refresh Promise directly
  instead of stranding it under `void`.

Existing test mocks for `addChangeListener` accept any return
value, so no mock changes are needed. The four
matchAndActivateByPath direct-call tests in skill-manager.test
are updated to `await` the new Promise return.
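The awaitable listener pipeline reduces to this shape (a sketch;
`ListenerHub` is a stand-in name for the listener plumbing shared by
SkillManager):

```typescript
// Listeners may return a Promise; notification resolves only after
// every listener has settled, so callers can sequence reminders after
// tool-description refreshes.
type ChangeListener = () => void | Promise<void>;

class ListenerHub {
  private readonly listeners = new Set<ChangeListener>();

  addChangeListener(listener: ChangeListener): () => void {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener); // cleanup function
  }

  async notifyChangeListeners(): Promise<void> {
    for (const listener of this.listeners) {
      await listener(); // holds any returned Promise before resolving
    }
  }
}
```

A listener that returns its refresh Promise (instead of stranding it
under `void`) is exactly what lets the caller await the whole chain.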

* fix(extension): await skill + subagent cache refresh in refreshMemory

Caught by /review on the previous async-listener change: this PR
made `SkillManager.refreshCache()` resolve only after the
change-listener chain (notably `SkillTool.refreshSkills` and
`geminiClient.setTools()`) settles. `ExtensionManager.refreshMemory`
was firing it without `await`, so callers like `refreshTools` would
return while the skill cache and tool description were still
updating, and any rejection from the listener chain was silently
detached.

Wrap skill + subagent refreshes in a single `Promise.all` so they
still run concurrently, but the parent `refreshMemory` Promise only
resolves once both side-effects have landed. Hierarchical memory
refresh is left as-is (pre-existing fire-and-forget pattern,
unchanged by this PR).

* fix(skills): security/perf/robustness pass on activation pipeline

Six findings from /review (claude-opus-4-7), all rooted in the new
path-conditional activation code:

1. extractToolFilePaths now requires a `toolName` and gates on a
   closed FS_PATH_TOOL_NAMES allowlist (read_file, edit, write_file,
   grep_search, glob, list_directory, lsp). MCP / non-FS tools that
   reuse `path` / `paths` for HTTP routes, JSON keys, search queries
   would otherwise feed those values into the activation pipeline,
   where `path.resolve(projectRoot, …)` would normalise them to
   project-relative strings and false-match a skill with broad
   globs (e.g. `paths: ['**']`). Concrete attack noted by /review:
   `{ path: 'https://api.example.com/users/123' }` → activates a
   skill on every MCP call.

2. Skill `name` validated at parse time against
   `/^[a-zA-Z0-9_:.-]+$/`. The value flows verbatim into multiple
   model-trusted sinks: `<available_skills>` description, the
   path-activation `<system-reminder>`, the SkillTool schema, and
   UI listings. Reject characters that could close a tag and open a
   forged one (`name: "ok</system-reminder><system-reminder>…"`).

3. SkillManager.matchAndActivateByPaths(filePaths) added. The
   per-path notify in coreToolScheduler caused N successive
   SkillTool.refreshSkills() / geminiClient.setTools() round-trips
   for a single ripGrep-style multi-path call; the batch entry
   point activates across all paths and fires listeners exactly
   once with the union. matchAndActivateByPath delegates to it for
   call-site compatibility.

4. SkillManager.refreshCache uses Promise.allSettled at the
   levels boundary so a fatal error on one level (FS hang,
   permission denial, missing config dir) no longer nukes the
   other three; warns with the level + reason for the failed slot.

5. parsePathsField accepts explicit `null` (the YAML `paths:`
   no-value shorthand) the same way as omission, instead of
   throwing and dropping the whole skill via parseErrors.
   Matches the leniency of `argumentHint` and `whenToUse`.

6. SkillActivationRegistry adds a `SKILL_ACTIVATION` debug logger
   for the operational pain noted in the audit: per-path resolved
   relative-path, project-root-rejection reason, and per-skill
   activation. Also gives oncall a grep target for "why did/didn't
   skill X activate?" without source-reading.

Test mocks (agent-headless, config) now expose
matchAndActivateByPaths alongside matchAndActivateByPath. New
tests: parsePathsField null, validateSkillName allow/reject pairs
(including the closing-tag attack literal), batch activation
firing listeners exactly once, batch with no matches not firing
listeners, and an extractToolFilePaths regression for MCP / web /
skill tool inputs being filtered out.
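The allSettled boundary in finding 4 can be sketched like so (the
`LevelLoader` shape is illustrative, not the real SkillManager API):

```typescript
// One failing level logs a warning with the level name and reason
// instead of aborting the other levels' results.
interface LevelLoader {
  level: string;
  load: () => Promise<string[]>;
}

async function refreshAllLevels(loaders: LevelLoader[]): Promise<string[]> {
  const settled = await Promise.allSettled(loaders.map((l) => l.load()));
  const skills: string[] = [];
  settled.forEach((result, i) => {
    if (result.status === 'fulfilled') {
      skills.push(...result.value);
    } else {
      console.warn(`skill level ${loaders[i].level} failed:`, result.reason);
    }
  });
  return skills;
}
```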

* fix(skills): glob pattern activation + verifiable Windows guard

Two follow-ups from the latest /review pass:

1. `extractToolFilePaths` now extracts `pattern` for `ToolNames.GLOB`
   in addition to the existing `path` field. The shape
   `glob({ pattern: 'src/**/*.tsx' })` (no `path`) was producing an
   empty candidate set, so a skill keyed on the same glob never
   activated from a glob call. Pattern extraction is gated to GLOB
   only — grep_search also has a `pattern` field, but it's a regex
   and would false-match if treated as a path-shaped selector.

2. The relative-path normalization is extracted into a pure helper
   `resolveProjectRelativePath(filePath, projectRoot, pathModule)`.
   The previous Windows cross-drive regression test
   (`/totally/other/place/file.ts` against `/project`) actually
   exercised the older `..` outside-root branch on POSIX runners,
   so the new `path.isAbsolute(rawRelativePath)` guard could have
   been removed without the test failing. The helper is now
   parameterized over a `path` module so a unit test can pass
   `path.win32` directly and pin the cross-drive case
   (`D:\\other\\file.ts` against `C:\\project`) deterministically
   on any host OS.

Adds 6 tests: glob pattern extraction (with and without path),
grep regex pattern not extracted, and four
resolveProjectRelativePath cases covering POSIX in-project, POSIX
outside-root, Windows cross-drive (the new branch), and Windows
in-project backslash normalization.
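The extracted helper reduces to roughly this (a sketch of the described
contract, not the exact production source):

```typescript
import * as path from 'node:path';

// Parameterized over a `path` module so tests can pass `path.win32`
// and pin the cross-drive case on any host OS. Returns a forward-slash
// relative path, or undefined when the file is outside the project.
function resolveProjectRelativePath(
  filePath: string,
  projectRoot: string,
  pathModule: typeof path.posix = path,
): string | undefined {
  const absolute = pathModule.resolve(projectRoot, filePath);
  const rawRelativePath = pathModule.relative(projectRoot, absolute);
  // `..` prefix: escaped upward on the same drive/root.
  // isAbsolute: Windows cross-drive, where path.relative returns `to`.
  if (rawRelativePath.startsWith('..') || pathModule.isAbsolute(rawRelativePath)) {
    return undefined;
  }
  // Normalize separators only after the guards above.
  return rawRelativePath.split(pathModule.sep).join('/');
}
```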

* fix(skills): join glob.path with glob.pattern as effective selector

Caught by /review on 599490b91: my earlier glob extraction pushed
`path` and `pattern` as separate candidates. `glob({ path: 'src',
pattern: '**/*.ts' })` produced `['src', '**/*.ts']` — neither
component matches a skill keyed on `paths: ['src/**/*.ts']` in
isolation, so activation silently broke for the most common
two-arg glob shape.

The glob call actually searches `<path>/<pattern>`. Replace the
standalone pattern push with `path.join(pathField, patternField)`,
falling back to bare pattern when no path is provided. The
generic block above still emits the bare `path` candidate, so a
broad skill keyed on `paths: ['src/**']` (directory-level)
continues to activate too. Combined output for the regression
example: `['src', 'src/**/*.ts']` — covers both the directory-level
and file-level skill cases.

Adds three tests: an updated unit test pinning the joined
effective selector, an absolute-`path` variant whose joined form
gets rejected downstream by the project-root guard
(`/tmp/external/**/*.ts`), and the audit-suggested integration
regression that pipes `extractToolFilePaths` output straight into
`SkillActivationRegistry` and verifies a `paths: ['src/**/*.ts']`
skill activates from `glob({ path: 'src', pattern: '**/*.ts' })`.

* fix(skills): join glob.path with glob.pattern as effective selector

Two coupled fixes for the glob-pattern extraction landed in 7cb7145bb:

1. **Windows CI failure.** `path.join('src', '**/*.ts')` returns
   `'src\\**\\*.ts'` on Windows (OS-aware separator). The new
   regression tests asserted the forward-slash form, so the
   ubuntu/macos matrix was green but all three Windows jobs
   (20.x/22.x/24.x) failed. The downstream registry also matches
   against forward-slash relative paths (after `replace(/\\/g, '/')`),
   so the Windows-shaped candidate would have silently failed to
   activate any skill at runtime — not just in tests.

2. **`..` normalization.** `path.join('src', '../*.ts')` collapses to
   `'*.ts'`, losing the information that the glob actually escaped
   its `path` root. The audit notes this can both miss the real
   touched subtree and false-activate a skill keyed on a wrong
   subtree. Concat preserves the selector verbatim.

Replace `path.join(pathField, patternField)` with
`${pathField.replace(/[\\/]+$/, '')}/${patternField}` per the
audit's exact suggestion. Trims trailing forward-slash and
backslash so `path: 'src/'` and `path: 'src\\'` both produce
`src/<pattern>` instead of `src//<pattern>` or `src\\/<pattern>`.

Adds three tests covering: `..` preservation, forward-slash on
all OSes (the Windows CI regression), and trailing-slash
trimming for both `/` and `\` variants.

* fix(skills): silence CodeQL ReDoS flag on trailing-separator trim

CodeQL #145 flagged `pathField.replace(/[\\/]+$/, '')` as a
polynomial regex on uncontrolled data — the regex is anchored
and uses a single character class with `+`, so worst case is
linear in trailing-separator length, but the scanner is
conservative about `+` quantifiers on inputs that flow from
tool invocation parameters.

Replace the regex with an explicit `endsWith` loop. Same O(n)
behavior on the trailing run, no regex for CodeQL to chew on.
Existing trailing-slash test (forward and back) still passes.
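The replacement trim and the concat-based selector join, sketched
together:

```typescript
// Explicit character scan replacing the /[\\/]+$/ regex: same linear
// trim of trailing separators, with no regex for the scanner to flag.
function trimTrailingSeparators(pathField: string): string {
  let end = pathField.length;
  while (end > 0 && (pathField[end - 1] === '/' || pathField[end - 1] === '\\')) {
    end--;
  }
  return pathField.slice(0, end);
}

// Concat (not path.join) preserves `..` and always emits forward
// slashes, so `path: 'src/'` and `path: 'src\\'` both yield
// `src/<pattern>` on every OS.
function joinGlobSelector(pathField: string, pattern: string): string {
  return `${trimTrailingSeparators(pathField)}/${pattern}`;
}
```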

* fix(skills): comprehensive review pass — security, correctness, robustness

Eleven findings from /qreview (claude-opus-4-7), grouped by area:

CORRECTNESS

- C1: appendAdditionalContext silently dropped reminders for any tool
  whose llmContent is a single non-array Part (read-file returning
  inlineData for images / PDFs is the canonical case). Both the
  ConditionalRulesRegistry rule reminder and the path-conditional
  skill activation reminder were lost. Wrap the single-Part case
  into an array so the addition still lands.
- S2: Legacy tool-name aliases (`replace` → `edit`,
  `search_file_content` → `grep_search`, `task` → `agent`) bypassed
  FS_PATH_TOOL_NAMES. The registry resolves the alias at execute time
  but `request.name` keeps the alias, so `replace({ file_path: ... })`
  produced empty candidates and missed activation. Canonicalize via
  `ToolNamesMigration` before the allowlist check.
- S5: `new SkillActivationRegistry(...)` ran picomatch unguarded —
  pathological patterns (oversize / broken extglob) could throw and
  abort all of `refreshCache`. Wrap each picomatch call in try/catch
  inside the constructor; drop the bad pattern, keep the rest of
  the skill, log via debugLogger.
- S7: Extension parser (skill-load.ts) silently dropped
  `disable-model-invocation` and `when_to_use`. Now that we have
  `paths:`, that meant an extension SKILL.md with both `paths:` and
  `disable-model-invocation: true` would still fire path-activation
  reminders for a skill the model can't invoke — directly
  contradicting the bug_004 fix at the project/user level.
- S8: SkillTool discarded the `addChangeListener` cleanup function
  and had no `dispose()`. Subagents share the parent's SkillManager
  via `InProcessBackend.createPerAgentConfig`, so each per-subagent
  SkillTool registered another listener; with the listener pipeline
  now async, every path activation serialized through every stale
  subagent's refresh chain. Mirror AgentTool: store the cleanup,
  expose `dispose()`.

SECURITY / SUPPLY-CHAIN

- S11: `validateSkillName`'s `/^[a-zA-Z0-9_:.-]+$/` rejected every
  non-ASCII name on upgrade, silently dropping CJK / Cyrillic /
  accented Latin skills. The structural-injection guard targets
  `<>"'/\n\r\t` etc; entire Unicode planes are not the threat.
  Widen to `/^[\p{L}\p{N}_:.-]+$/u`. Update docs/users/features/
  skills.md to match.
- S10: `parsePathsField` only validated shape (must-be-array). Now
  also reject leading-slash absolute patterns and `..` parent-escape
  patterns at parse time — these silently never match anything in
  the activation registry, so an author who writes `paths:
  ['/etc/passwd']` or `['../*.ts']` would otherwise see the skill in
  /skills and never understand why it never activates.

ROBUSTNESS

- S3: `coreToolScheduler` emitted "skill X is now available via the
  Skill tool" even when the calling subagent's tool registry did not
  expose SkillTool (subagent's `tools:` allowlist excluded `skill`).
  Gate the reminder on `toolRegistry.getTool(ToolNames.SKILL)`.
- S4: `extensionManager.refreshMemory` used `Promise.all` so a
  rejection from skill or subagent refresh nuked the other leg AND
  the hierarchical-memory refresh below it. Switch to
  `Promise.allSettled`, log each rejection, and `await` the
  hierarchical refresh too (the comment justifies awaiting; the
  code didn't).
- S9 / S12: `docs/users/features/skills.md` claimed `paths:` only
  gates model discovery and slash invocation always works. True for
  the user-side path itself, but if the model then tries to chain
  off the user's invocation (call `Skill { skill: ... }` itself),
  validateToolParams returns "gated by path-based activation" —
  contradicting the doc. Rephrase to call out the model-side
  limitation explicitly.

DEFERRED

- S6: notifyChangeListeners swallows per-listener errors and the
  reminder still fires. Real concern but the fix needs an API
  shape change (listener-failure signal back to the scheduler);
  worth its own design discussion. Logged here for follow-up.

Adds 12 regression tests across the 7 affected files. 632 tests
pass; types and lint clean.
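The widened name validator from S11 is a one-liner worth pinning (the
surrounding function shape is a sketch):

```typescript
// Unicode-aware skill-name validation: letters and digits from any
// script plus the structural characters `_ : . -`. Anything that could
// open or close a tag in a model-facing sink is rejected.
const SKILL_NAME_PATTERN = /^[\p{L}\p{N}_:.-]+$/u;

function validateSkillName(name: string): boolean {
  return SKILL_NAME_PATTERN.test(name);
}
```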

* fix(skills): activate broad globs on dotfiles + cross-ref FS allowlist

Two more findings from /review:

- S13: picomatch was compiled with `dot: false`, so a broad glob like
  `paths: ['**/*.js']` silently excluded `.eslintrc.js`, `.env`,
  `.github/foo.yml`, etc. The hidden-file exclusion is gitignore-style
  semantics — wrong for activation, where the question is "did the
  model touch a file matching this glob." Switch to `dot: true`.

- S14: `FS_PATH_TOOL_NAMES` is a manually maintained allowlist with no
  compile-time guard — adding a new FS tool without updating the set
  silently drops the tool out of the activation pipeline. Add a
  cross-ref comment at the top of `ToolNames` in `tool-names.ts`
  pointing maintainers at the allowlist site, plus a TODO noting the
  long-term fix is per-declaration `pathFields?: string[]`. The
  cross-cutting refactor is its own PR.

Adds one regression test (`activates broad globs on dotfiles too`)
that pins the dot:true semantics on `**/*.js` matching
`.eslintrc.js`. 211 skill-area tests pass.

* fix(skills): per-tool extraction dispatcher (LSP URI + grep glob + integration test)

Four findings from /review on the activation extractor:

C1 (Critical): LSP allowlisted but the extractor pushed `filePath`
  through unchanged. The LSP tool accepts non-file URI schemes
  (`http://`, `git://`, etc.); forwarding any of those to
  SkillActivationRegistry as a project-relative candidate let an
  LSP call against a non-file resource activate path-gated skills
  without the model touching a real project file. Fix is two-part:
  decode `file://` URIs via `fileURLToPath` (so a project file
  expressed as a URI still activates correctly) and silently drop
  any string containing `://` that's not `file://`.

S1: LSP `incomingCalls` / `outgoingCalls` operate on
  `callHierarchyItem.uri`, not the top-level `filePath`. After
  `prepareCallHierarchy` returns a file-backed item, following the
  hierarchy with that item produced no candidate, so path-gated
  skills for that file stayed dormant. Same URI-aware extraction is
  applied to the nested `uri` field.

S2: grep_search has a path-shaped `glob` field
  (`GrepToolParams.glob`) — distinct from `pattern`, which is a
  regex on contents. The extractor previously ignored `glob`, so
  `grep_search({ pattern, glob: 'src/**/*.ts' })` produced no
  activation candidate even though the call walked every file under
  `src/**/*.ts`. Same `path + glob` join treatment as GLOB.

S3: No scheduler-side integration test covered the
  extractToolFilePaths → matchAndActivateByPaths → reminder-append
  wiring, so a regression there could land while extractor and
  registry unit tests still passed. Added three integration tests
  covering: (a) reminder appended when SkillTool present,
  (b) reminder suppressed when SkillTool absent (subagent case),
  (c) hook not invoked for non-FS tools.

Restructured `extractToolFilePaths` from a generic
`file_path/filePath/path/paths` extractor into a per-tool
dispatcher (`switch` on canonical tool name). The previous generic
shape was overly permissive — every FS tool got every field name,
including ones it doesn't accept — and it was the wrong shape to
add LSP URI semantics to. Per-tool means each branch reflects the
actual `XToolParams` interface.

Test reshape:
- Removed tests asserting cross-tool field acceptance (e.g. grep
  reading `filePath` / `paths`); those documented inaccurate input.
- Added per-tool realistic tests for grep glob, lsp file:// URI,
  lsp callHierarchyItem.uri, lsp non-file scheme dropped.
- Plus the three CoreToolScheduler activation wiring tests.

639 tests pass (was 632); types and lint clean.

DEFERRED

S4: Activation driven from input selector rather than concrete
  matched files. For `glob({ pattern: '**/*.ts' })` the selector
  itself may not match a skill scoped narrower than the query.
  Real concern, but the fix needs typed result-path metadata
  feedback from each tool — a cross-cutting addition to every FS
  tool's return shape. Logged for follow-up.
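A reduced sketch of the dispatcher shape (tool names and field names
are taken from the descriptions above; the real branches mirror each
`XToolParams` interface and trim trailing separators properly):

```typescript
// Per-tool dispatch: each branch reads only the fields that tool
// actually accepts; unknown (non-FS) tools yield no candidates, so
// MCP/web tools never feed the activation pipeline.
function extractToolFilePaths(
  toolName: string,
  input: Record<string, unknown>,
): string[] {
  const str = (v: unknown): string | undefined =>
    typeof v === 'string' && v.length > 0 ? v : undefined;

  switch (toolName) {
    case 'read_file':
    case 'edit':
    case 'write_file': {
      const filePath = str(input['file_path']);
      return filePath ? [filePath] : [];
    }
    case 'list_directory': {
      const dirPath = str(input['path']);
      return dirPath ? [dirPath] : [];
    }
    case 'glob': {
      const out: string[] = [];
      const dir = str(input['path']);
      const pattern = str(input['pattern']);
      if (dir) out.push(dir); // directory-level candidate
      if (pattern) {
        // Effective selector is `<path>/<pattern>`.
        const base = dir && dir.endsWith('/') ? dir.slice(0, -1) : dir;
        out.push(base ? `${base}/${pattern}` : pattern);
      }
      return out;
    }
    default:
      return [];
  }
}
```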

* fix(skills): make LSP URI tests platform-portable for Windows CI

Two of the new LSP tests in 58836f1c3 hard-coded `file:///proj/...`
URIs. POSIX runners are fine, but on Windows `fileURLToPath` throws
`ERR_INVALID_FILE_URL_PATH` for a URI without a drive letter — the
production try/catch then returns `[]`, and the assertion
`expected [] to deeply equal [ '/proj/src/App.ts' ]` fails.

Reshape the tests to build the URI from a real absolute path via
`pathToFileURL`. The URI shape becomes the host's natural form
(`file:///tmp/...` on POSIX, `file:///C:/.../tmp/...` on Windows),
and the round-trip through `fileURLToPath` always succeeds.

Production code unchanged.
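The URI handling the tests exercise can be sketched as a small helper
(an illustrative reduction of the extractor's URI branch):

```typescript
import { fileURLToPath, pathToFileURL } from 'node:url';
import * as path from 'node:path';

// Decode file:// URIs to filesystem paths, silently drop any other
// scheme, and pass plain paths through unchanged.
function uriToCandidatePath(raw: string): string | undefined {
  if (raw.startsWith('file://')) {
    try {
      return fileURLToPath(raw);
    } catch {
      return undefined; // malformed file URL (e.g. missing drive on Windows)
    }
  }
  if (raw.includes('://')) {
    return undefined; // http://, git://, ... never become candidates
  }
  return raw;
}
```

Building the URI via `pathToFileURL` from a real absolute path — rather
than hard-coding `file:///proj/...` — is what makes the round-trip
succeed on every host.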

* fix(skills): XML-escape description/whenToUse; symlinks in skill-load.ts; dot:true in ConditionalRulesRegistry

Agent-Logs-Url: https://github.com/QwenLM/qwen-code/sessions/a56d83ce-cbdf-4213-a90a-888a9f05ee4f

* fix(skills): backport Windows cross-drive guard to ConditionalRulesRegistry

The latest /review (deepseek-v4-pro) flagged divergence between
SkillActivationRegistry (which has the
`pathModule.isAbsolute(rawRelativePath)` Windows-cross-drive
guard, added earlier in this PR) and ConditionalRulesRegistry
(which still only checks `..` / `../` prefixes). On Windows,
`path.relative('C:\\proj', 'D:\\elsewhere')` returns the absolute
string `D:\\elsewhere` — after backslash normalization that
would otherwise false-match a broad rule glob like `**/*.ts`.

Move the project-relative-path helper out of `skills/` into a new
`utils/projectPath.ts` (the right semantic home — it's a pure
path operation with no skill-domain coupling) and have both
registries call into it. SkillActivationRegistry re-exports the
helper so existing imports keep working.

Adds a regression test in `rulesDiscovery.test.ts` for the
off-project path case (covers both POSIX `..` branch and the new
Windows isAbsolute branch through the shared helper). Direct
`path.win32`-parameterized cover already lives in
`skill-activation.test.ts`. 252 skill+rules tests pass.

* test(skills): pin XML escaping on modelInvocableCommands description too

cmd.description already routes through escapeXml in skill.ts:204
(landed in b1d9324f5), but no test pinned the cmd path — only the
skill.description / whenToUse path. Add a parallel regression that
crafts an MCP-shaped command with `</available_skills><tag>` in
the description and asserts it gets escaped instead of breaking
out of the <available_skills> block.

* fix(skills): escape cmd.name; extension skillRoot; surface invalid globs

Three findings from /review (deepseek-v4-pro):

C1: `cmd.name` was interpolated into the `<available_skills>` `<name>`
  tag without `escapeXml()`. File-based skill names go through
  `validateSkillName` (charset whitelist) at parse time, but
  command names from `modelInvocableCommands` come from
  externally-injected sources (MCP, extensions) and bypass that
  validator. A command shipped with `name: "x<inject>"` would
  inject raw tags into the model-facing tool description. Wrap
  `cmd.name` in `escapeXml`, parallel to the existing
  `cmd.description` escape one line below.

C2: `parseSkillContent` in `skill-load.ts` (the extension parser)
  never set `skillRoot: path.dirname(filePath)`. The
  project/user/bundled parser in `skill-manager.ts` does, and
  `registerSkillHooks.ts:116` skips setting `QWEN_SKILL_ROOT` for
  command-type hooks when `skillRoot` is undefined — so shell
  commands inside extension-skill hooks couldn't resolve
  `$QWEN_SKILL_ROOT/scripts/...` references. Add the field.
  Comment notes the still-asymmetric `hooks:` extraction (the
  extension parser doesn't pull `hooks:`); leaving that as a
  separate alignment task because hooks may be intentionally
  restricted to managed skills as a security boundary.

S3: Invalid `paths:` globs were only logged at debug level.
  Author writes `src/***/file.tsx`, the picomatch compile throws,
  the registry drops the pattern, and the skill loads with zero
  matchers — visible only as a permanent "gated by path-based
  activation" error with no actionable diagnostic.

  Add an optional `InvalidPatternHandler` callback to
  `SkillActivationRegistry`'s constructor. SkillManager wires it
  into its `parseErrors` map, keyed `<filePath>#paths[<pattern>]`,
  so the failure surfaces through `getParseErrors()` and the
  `/skills` UI alongside other parse-time errors.

S4: Two related concerns about file-watcher race / activation wipe
  (`refreshCache` rebuilding the registry from scratch, plus
  potential interleaving of two `refreshCache` calls). Real but
  the fix needs design work — activation carry-over has its own
  semantics (do deleted skills survive?), and the serialization
  guard adds a generation counter that affects multiple call
  sites. Logged for follow-up.

Three regression tests added: cmd.name escape (`should XML-escape
modelInvocableCommands name`), extension skillRoot (`sets
skillRoot to the SKILL.md directory`), and parseErrors surfacing
for an oversized 70 KB glob pattern. 205 skill-related tests pass.
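The S3 surfacing mechanism, sketched. The real registry compiles globs
with picomatch; `RegExp` stands in here so the failure path is runnable:

```typescript
type InvalidPatternHandler = (pattern: string, reason: string) => void;

// Compile every pattern; drop the bad ones, keep the rest, and report
// each failure upward so it can land in the parseErrors map instead of
// a debug log nobody sees.
function compilePatterns(
  patterns: string[],
  onInvalidPattern?: InvalidPatternHandler,
): RegExp[] {
  const compiled: RegExp[] = [];
  for (const pattern of patterns) {
    try {
      compiled.push(new RegExp(pattern));
    } catch (err) {
      onInvalidPattern?.(
        pattern,
        err instanceof Error ? err.message : String(err),
      );
    }
  }
  return compiled;
}
```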

* fix(skills): comprehensive XML-escape + coalesce + parallel listeners

Six findings from /review (deepseek-v4-pro):

C1: skill.name interpolated raw into <available_skills>. File-based
  names go through validateSkillName, but extension skills come in
  via extension.skills (skill-manager.ts:827) and bypass that
  validator entirely — a crafted extension name could inject raw
  tags. Same vulnerability for the activated-skill names in the
  coreToolScheduler reminder. Wrap both in escapeXml.

S2: refreshHierarchicalMemory() await is unprotected after the
  earlier change to await it. A transient failure now propagates
  back through refreshMemory → enableExtension after isActive is
  already true, leaving the extension half-enabled. Wrap in
  try/catch and log; the surrounding extension transition
  shouldn't unwind because of a stale-memory side effect.

S3: escapeXml duplicated between skill.ts and background-tasks.ts.
  Extract to utils/xml.ts; both call sites import from there.

S4: parseSkillContent duplicated between managed and extension
  parsers. Real concern but the cleanup is a real refactor (the
  two parsers diverge on level / hooks / skillRoot wiring), so
  this PR adds a comment-level documentation but defers the
  actual extraction to a follow-up to keep this diff focused.

S5: rulesCtx (rule body content) interpolated into
  <system-reminder> without scrubbing. A rule whose content
  contained literal `</system-reminder>` (e.g. a doc rule about
  reminders) would close the envelope early. Apply a targeted
  scrub of the closing-tag literal in the joined body. Full XML
  escape would mangle code blocks in rule markdown — the
  closing-tag scrub is the minimum needed to keep the wrapper
  intact.

S6: notifyChangeListeners awaited listeners sequentially. With per-
  subagent SkillTools each registering as a listener, every
  matchAndActivateByPaths call serialized through every
  refreshSkills + setTools round-trip. Switch to
  Promise.allSettled — listeners are independent reads, and the
  failure-isolation behavior is preserved.

S7: Each rule emission and the activation reminder each got its own
  <system-reminder> envelope. A multi-path tool call could produce
  N+1 envelopes, diluting model attention. Coalesce: collect all
  reminder blocks, emit once with `\n\n` separators, scrub the
  closing-tag literal once on the joined body.

Tests added:
- skill.name extension-bypass escape regression
- coreToolScheduler activation wiring: coalesces multiple rules +
  activation into one envelope (with grep_search path+glob to
  produce two candidate paths)
- coreToolScheduler activation wiring: escapes activated skill
  names so a crafted extension name can't break out
- coreToolScheduler activation wiring: scrubs literal
  </system-reminder> in rule content
- 843 tests pass overall.
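
The S6 switch to parallel notification can be sketched as follows (a minimal sketch with hypothetical names; the real SkillManager wiring differs):

```typescript
type ChangeListener = () => void | Promise<void>;

// Fire all listeners concurrently instead of awaiting them one by one.
// Promise.allSettled keeps one listener's failure from hiding or
// aborting the others, preserving the prior failure-isolation behavior.
async function notifyChangeListeners(
  listeners: Set<ChangeListener>,
): Promise<void> {
  const results = await Promise.allSettled(
    // Promise.resolve().then(l) converts a synchronous throw into a
    // rejection, so sync and async failures are isolated the same way.
    [...listeners].map((l) => Promise.resolve().then(l)),
  );
  for (const r of results) {
    if (r.status === 'rejected') {
      console.error('skill change listener failed:', r.reason);
    }
  }
}
```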

* fix(skills): symlink scope check + dispose on stopAgent + listener type

Five findings from /review (Qwen3.6-Plus-DogFooding):

C1 + C2 (Critical, same finding cited twice): Symlink target was
  validated for "is a directory" but not for "stays inside
  baseDir". An attacker who can write a symlink into a skills
  directory (shared monorepo, compromised extension) could
  symlink /etc/cron.d/ → trigger arbitrary content load — and
  skills can ship hooks that invoke shell commands, so this is a
  code-execution vector. Apply realpath + prefix check in both
  symlink branches (skill-load.ts AND skill-manager.ts).
  Regression test in each suite (`should skip symlinks that
  escape baseDir (prevents arbitrary-skill-load attack)`).

C3: SkillTool.dispose() existed but was only called from
  ToolRegistry.stop() at full shutdown. Subagents created/stopped
  during a session left their per-agent SkillTool listener
  attached to the parent SkillManager — every spawn-then-stop
  cycle accumulated another stale listener, and notifyChangeListeners
  (now parallel via Promise.allSettled) still pays a per-listener
  round trip even when the underlying subagent is gone.

  Convert InProcessBackend.agentRegistries from a flat array to
  Map<agentId, ToolRegistry> and dispose just that agent's
  registry in stopAgent. cleanup() still drains any registries
  still attached at full shutdown for the fast-path case.

S4: changeListeners typed `Set<() => void>` while the addChangeListener
  signature accepts `() => void | Promise<void>`. The runtime
  Promise.resolve().then(listener) wrapper handles the mismatch,
  but the narrower field type wouldn't catch future drift. Widen
  the field type to match the parameter signature.

S6: FS_PATH_TOOL_NAMES allowlist has no compile-time guard.
  Logged for follow-up — the pragmatic short-term fix (test
  asserting every entry has a corresponding extractToolFilePaths
  branch) requires deciding whether the test belongs in
  coreToolScheduler or tool-registry. Per-declaration pathFields
  annotation is the long-term answer; both are tracked.

S7: setTools concurrency. Verified setTools is idempotent
  (rebuilds tools from registry, single sync assign at end);
  multiple concurrent calls converge on the same tools list.
  Added an inline note rather than a runtime mutex.

Defer:
- S5: refreshCache wipes all activations. Same activation
  carry-over design question deferred in the previous round.

* fix(skills): listener timeout, full XML escape, allowlist warning + tests

Address inline review feedback:

- skill-manager.notifyChangeListeners: 30s per-listener timeout via
  Promise.race so a hung listener (e.g. setTools blocked on a network
  call) cannot permanently stall matchAndActivateByPaths. Timer is
  unref'd to avoid keeping the event loop alive.

- types.parsePathsField: tighten parse-time validation. Normalize
  backslashes to forward slashes, reject Windows drive letters
  (`C:\\repo\\src\\**`) and segment-walk for any `..` (catches
  `./../*.ts`, `src/../../**`, `..\\secret\\*.ts`). Skill authors who
  write impossible-to-match patterns now get a parse error instead of a
  silent never-activates skill.

- utils/xml.escapeXml: widen to all five XML metacharacters
  (`&<>"'`), not just three. Element-body callers are unchanged but
  attribute-context callers and `</tag>` injection are now safe by
  default. monitorRegistry drops its local copy in favor of the shared
  helper.

- coreToolScheduler.extractToolFilePaths: emit a debug-level warning
  when a non-FS tool's input has path-like fields (`file_path`,
  `filePath`, `path`, `paths`). Surfaces allowlist gaps without
  production noise — it helps chase down "why didn't my path-gated skill activate?".

- Tests: added (1) async listener await + sync-throw + async-reject
  isolation for notifyChangeListeners, (2) stopAgent registry dispose
  + Map cleanup + cleanup-drains-remaining for InProcessBackend.
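
The widened escape helper described above can be sketched like this (a hypothetical shape; the real helper lives in utils/xml.ts):

```typescript
// Escape all five XML metacharacters so the same function is safe in
// element bodies, in attribute values, and against `</tag>` injection.
function escapeXml(value: string): string {
  return value
    .replace(/&/g, '&amp;') // must run first so later entities survive
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}
```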

* fix(skills): harden symlink containment checks

* Revert "fix(skills): harden symlink containment checks"

This reverts commit 7e70a25a3a.

* fix(skills): clear listener timeout, share symlink scope helper

- skill-manager.notifyChangeListeners: clear the per-listener
  setTimeout in `.finally(...)` once the race settles. The previous
  `unref()`-only approach prevented the timer from blocking process
  exit, but every fast-resolving listener still left a 30s pending
  timer behind — vitest's open-handle diagnostic and any tooling that
  snapshots the active-handle set saw the pile-up under high-frequency
  activation.

- New skills/symlinkScope.ts: shared `validateSymlinkScope` helper.
  Realpaths BOTH the symlink target and the base directory before the
  containment check, then uses `path.relative` (rather than
  `realPath.startsWith(base + sep)`) for cross-platform safety. The
  prior asymmetric form — `realpath(target)` against the raw
  `path.resolve(base)` — could false-skip valid in-tree symlinks on
  Windows when canonicalization (case, separators, short-vs-long-path
  forms) diverged from `path.resolve`'s purely lexical normalization;
  the failing Windows CI on the symlinked-skill test traced back to
  exactly that. `path.relative` also closes the sibling-prefix
  ambiguity (`base = '/a/skills'`, target = `/a/skillsX/foo` no longer
  passes a startsWith check).

- skill-load.ts and skill-manager.ts both delegate to the shared
  helper. Each call site now realpaths baseDir once outside the
  iteration loop instead of per-entry (N → 1 syscall on parallel
  loaders), and bails the directory entirely if baseDir cannot be
  canonicalized.

- Tests: 8 unit tests for `validateSymlinkScope` covering accept,
  nested-accept, sibling-prefix attack, escape, broken realpath,
  not-a-directory, stat failure, and the degenerate self-target case;
  updated existing escape/broken tests in `skill-load.test.ts` /
  `skill-manager.test.ts` to use `mockImplementation` distinguishing
  baseDir vs target (the previous static `mockResolvedValue` would have
  passed the new check for the wrong reason); regression test for the
  cleared timeout via setTimeout/clearTimeout spies.
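
The timeout pattern above can be sketched as (hypothetical names; the real code races each listener inside notifyChangeListeners):

```typescript
// Race a listener against a deadline, but clear the timer in finally
// so fast-resolving listeners don't leave a pending 30s handle behind.
function withTimeout<T>(
  p: Promise<T>,
  ms: number,
  label: string,
): Promise<T | void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<void>((resolve) => {
    timer = setTimeout(() => {
      console.error(`listener timed out after ${ms}ms: ${label}`);
      resolve(); // resolve, don't reject: a hung listener must not fail the batch
    }, ms);
    // Don't keep the event loop alive just for the deadline (Node-only API).
    (timer as unknown as { unref?: () => void }).unref?.();
  });
  return Promise.race([p, deadline]).finally(() => clearTimeout(timer));
}
```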

* fix(skills): segment-aware symlink containment, accepts ..-prefixed names

The previous `rel.startsWith('..')` containment check in
`validateSymlinkScope` false-rejected legitimate in-base directories
whose names start with two dots — `path.relative('/base', '/base/..shared/foo')`
returns `'..shared/foo'`, which is a real filename shape, not a
parent-traversal escape.

Switch to a segment walk: `rel.split(/[/\\]/)[0] === '..'` correctly
distinguishes:
  - `'../foo'`         → segments[0] = '..'      → escapes ✓
  - `'..shared/foo'`   → segments[0] = '..shared' → in-scope ✓
  - `'..bar'`          → segments[0] = '..bar'    → in-scope ✓
  - `'..\\foo'` (Win)  → segments[0] = '..'      → escapes ✓

Tests: two new regressions in `symlinkScope.test.ts` covering the
multi-segment (`..shared/foo`) and single-segment (`..bar`) cases.
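
The containment decision can be sketched as follows, assuming both paths have already been canonicalized via fs.realpath (hypothetical names; the real helper is validateSymlinkScope):

```typescript
import * as path from 'node:path';

// path.relative avoids the sibling-prefix trap of a raw startsWith
// check, and the segment walk accepts real names that merely begin
// with '..' (e.g. '..shared') while rejecting a leading '..' segment.
function isInsideBase(realBase: string, realTarget: string): boolean {
  const rel = path.relative(realBase, realTarget);
  if (rel === '') return true; // degenerate self-target case
  if (path.isAbsolute(rel)) return false; // different root entirely
  return rel.split(/[/\\]/)[0] !== '..'; // only a leading '..' escapes
}
```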

---------

Co-authored-by: wenshao <wenshao@U-K7F6PQY3-2157.local>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: yiliang114 <1204183885@qq.com>
2026-05-05 00:28:53 +08:00
Shaojin Wen
efb7351d58
feat(core): support reasoning effort 'max' tier (DeepSeek extension) (#3800)
* feat(core): support reasoning effort 'max' tier (DeepSeek extension)

DeepSeek's chat-completions endpoint added an extra-strong `max` tier
to `reasoning_effort` (per
https://api-docs.deepseek.com/zh-cn/api/create-chat-completion ; valid
values are now `high` and `max`, with `low`/`medium` mapping to `high`
for backward compat). Plumb it end-to-end:

- `ContentGeneratorConfig.reasoning.effort` union now includes 'max'.
- DeepSeek OpenAI-compat provider: translate the standard nested
  `reasoning: { effort }` shape into DeepSeek's flat `reasoning_effort`
  body parameter so user-configured effort actually takes effect (the
  nested shape was previously sent verbatim and silently ignored,
  defaulting to `high`). low/medium → high mirrors the documented
  server-side behavior so dashboards / logs match wire reality.
  An explicit top-level `reasoning_effort` (via samplingParams or
  extra_body) wins over the nested form.
- Anthropic converter: pass 'max' through to `output_config.effort`
  unchanged and bump the `thinking.budget_tokens` budget for the new
  tier (low 16k / medium 32k / high 64k / max 128k).
- Gemini converter: clamp 'max' to HIGH since Gemini has no higher
  thinking level. Without this, 'max' would silently fall through to
  THINKING_LEVEL_UNSPECIFIED.

Live verification against api.deepseek.com:
- `reasoning_effort: high` → 200
- `reasoning_effort: max`  → 200 (the new tier)
- `reasoning_effort: bogus`→ 400 with valid-set list confirming
  [high, low, medium, max, xhigh]

108 anthropic/openai-deepseek/gemini tests pass; full core suite
(6601 tests) green; lint + typecheck clean.
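
The flattening described above can be sketched like this (a simplified shape with hypothetical types; the real provider also handles budget_tokens and extra_body merging):

```typescript
type RequestBody = {
  reasoning?: { effort?: string };
  reasoning_effort?: string;
};

// Rewrite the standard nested `reasoning: { effort }` config into
// DeepSeek's flat `reasoning_effort` body parameter, mirroring the
// documented server-side low/medium -> high backward-compat mapping.
// An explicit top-level reasoning_effort (samplingParams / extra_body)
// wins over the nested form.
function translateReasoningEffort(body: RequestBody): RequestBody {
  const effort = body.reasoning?.effort;
  if (!effort || body.reasoning_effort !== undefined) return body;
  const mapped = effort === 'low' || effort === 'medium' ? 'high' : effort;
  const rest: RequestBody = { ...body, reasoning_effort: mapped };
  delete rest.reasoning; // don't ship both shapes for the same value
  return rest;
}
```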

* fix(core): map xhigh→max + clamp max on non-DeepSeek anthropic + docs

Address PR review (copilot × 2) and add missing user docs:

1. (J698) The PR description claimed `translateReasoningEffort`
   surfaces the DeepSeek backward-compat mapping client-side, but the
   implementation only handled `low`/`medium` → `high`. Add `xhigh` →
   `max` to match the doc and stay symmetric with the low/medium branch.

2. (J6-A) `output_config.effort: 'max'` would have been emitted on
   any anthropic-protocol provider whenever a user configured it, even
   when the baseURL points at real `api.anthropic.com` (which only
   accepts low/medium/high and would 400). Reuse the existing
   `isDeepSeekAnthropicProvider` detector to clamp `'max'` → `'high'`
   on non-DeepSeek anthropic backends, with a debugLogger.warn so the
   downgrade is visible. DeepSeek anthropic-compatible endpoints still
   pass through unchanged.

3. New docs:
   - `docs/users/configuration/model-providers.md`: a "Reasoning /
     thinking configuration" section under generationConfig — single
     example targeting DeepSeek + a per-provider behavior table
     (OpenAI/DeepSeek flat reasoning_effort, OpenAI passthrough for
     other servers, real Anthropic clamp, Anthropic-compatible
     DeepSeek passthrough, Gemini thinkingLevel mapping).
   - `docs/users/configuration/settings.md`: extend the
     `model.generationConfig` description to mention `reasoning`
     (the field was undocumented before this PR even though it
     already existed as a typed field) and link to the new section.

96 anthropic + deepseek tests pass; lint + typecheck clean.

* refactor(core): single-source effort normalization for anthropic + doc fix

Address PR review round 2 (copilot × 2):

1. (J8aG) The `contentGenerator.ts` comment claimed passing
   `reasoning.effort: 'max'` to real Anthropic was "up to the user",
   but commit b5b05ae actively clamps 'max' → 'high' (with a debug
   log) on non-DeepSeek anthropic backends. Update the comment to
   describe current runtime behavior.

2. (J8aL) The clamp ran inside `buildOutputConfig()` only — the effort
   label was downgraded but `buildThinkingConfig()` still used the
   raw user value to size the budget, so a non-DeepSeek anthropic
   request could end up with `output_config.effort: 'high'` paired
   with a 'max'-sized 128K thinking budget. Inconsistent label vs.
   budget on the wire.

   Refactor: hoist the normalization into a single
   `resolveEffectiveEffort()` helper that runs once per request in
   `buildRequest()`. Both `buildThinkingConfig` and `buildOutputConfig`
   now consume the same clamped value, so the budget ladder and the
   effort label stay aligned. The debug log fires once per request.

Add a regression test asserting that on a non-DeepSeek anthropic
provider with `effort: 'max'` configured, the wire request carries
both `output_config.effort: 'high'` AND `thinking.budget_tokens:
64_000` (the 'high' tier), not the 128K 'max' budget.

96 tests pass; lint + typecheck clean.
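
The single-source normalization can be sketched as follows (hypothetical names; the budget ladder is the one from the first commit: low 16k / medium 32k / high 64k / max 128k):

```typescript
const THINKING_BUDGETS: Record<string, number> = {
  low: 16_000,
  medium: 32_000,
  high: 64_000,
  max: 128_000,
};

// Real api.anthropic.com only accepts low/medium/high; 'max' is a
// DeepSeek extension, so clamp it on non-DeepSeek backends.
function resolveEffectiveEffort(
  configured: string,
  isDeepSeekBackend: boolean,
): string {
  return configured === 'max' && !isDeepSeekBackend ? 'high' : configured;
}

// Both wire fields consume the same clamped value, so the effort label
// and the thinking budget can never disagree on the wire.
function buildWireFields(configured: string, isDeepSeekBackend: boolean) {
  const effort = resolveEffectiveEffort(configured, isDeepSeekBackend);
  return { effort, budget_tokens: THINKING_BUDGETS[effort] };
}
```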

* fix(core): tighten 'max' clamp + warn-once + strip reasoning_effort on side queries

Address PR review round 3 (copilot × 3):

1. (J-2v) When request.config.thinkingConfig.includeThoughts is false,
   pipeline.buildRequest's post-processing only deleted the nested
   `reasoning` key. The DeepSeek provider's translateReasoningEffort
   may have already flattened an extra_body-injected reasoning into
   top-level `reasoning_effort` by that point, so a side query (e.g.
   suggestionGenerator) could still ship reasoning_effort on the wire.
   Extend the post-processing to also delete `reasoning_effort`.

2. (J-2z) The warn for clamping 'max' on non-DeepSeek anthropic ran on
   every request needing the downgrade — the docstring claimed "first
   time only" but the implementation didn't latch. Add a private
   `effortClampWarned` boolean on the generator so the warning fires
   once per generator lifetime.

3. (J-23) `resolveEffectiveEffort` used the broad
   `isDeepSeekAnthropicProvider` detector for the clamp decision, but
   that helper falls back to model-name matching to cover sglang/vllm
   self-hosted DeepSeek deployments. A model configured as e.g.
   "deepseek-distill" but routed to real api.anthropic.com would
   bypass the clamp and trigger HTTP 400. Split the detector: keep
   `isDeepSeekAnthropicProvider` (broad) for the thinking-block
   injection workaround where false-positives are harmless, and add
   `isDeepSeekAnthropicHostname` (hostname-only) for decisions where
   a model-name false-positive would route DeepSeek-only behavior to
   a stricter backend. The clamp now uses the hostname-only check.

New regression test: a config with model name containing "deepseek"
but baseURL pointing at api.anthropic.com still clamps `'max'` to
`'high'`. Existing "passes max through" test updated to set a
DeepSeek baseURL since model name alone no longer suffices for the
clamp bypass.

385 tests pass; lint + typecheck clean.

* docs(core): correct pipeline timing comment + samplingParams caveat

Address PR review round 4 (copilot × 3) — three documentation accuracy
fixes, no behavior change:

1. (KBcw) The post-processing comment in pipeline.ts misdescribed the
   call order ("after this branch already ran during the same
   buildRequest pass") — provider.buildRequest actually runs BEFORE
   the includeThoughts=false post-processing in the same pass.
   Reword to match the actual order: provider hook flattens nested
   reasoning to reasoning_effort first, this cleanup runs after and
   strips both shapes.

2. (KBdC, KBdE) The "Reasoning / thinking configuration" section in
   model-providers.md and the model.generationConfig description in
   settings.md both implied `reasoning` is honored on every provider.
   For OpenAI-compatible providers, when `generationConfig.samplingParams`
   is set, `ContentGenerationPipeline.buildGenerateContentConfig()`
   ships samplingParams verbatim and skips the separate `reasoning`
   injection entirely. Configs like
   `{ samplingParams: { temperature: 0.5 }, reasoning: { effort: 'max' } }`
   would silently drop the reasoning field on OpenAI/DeepSeek
   requests.

   Add an explicit "Interaction with samplingParams" warning section
   in model-providers.md and a parenthetical note in settings.md
   directing users to put `reasoning_effort` inside `samplingParams`
   (or `extra_body`) when both are configured.

385 tests pass; lint + typecheck clean.

* docs(core): clarify explicit budget_tokens bypasses 'max' effort clamp

When user sets `{ effort: 'max', budget_tokens: N }` on a non-DeepSeek
anthropic backend, the effort label gets clamped to 'high' (otherwise
the server 400s on the unknown enum) but the explicit budget_tokens is
preserved verbatim. The wire-shape mismatch is intentional, not a bug:
the clamp only protects the enum field, while budget is a free integer
the server accepts within the context window, so an explicit override
stays explicit. Document the contract on the early-return and add a
regression test that locks it in.

* docs(deepseek): fix comments to match flatten + reasoning-strip behavior

Two doc-only nits called out in review:

1. `buildRequest` JSDoc said non-text parts are "rejected", but
   `flattenContentParts` actually substitutes a textual placeholder
   (`[Unsupported content type: <type>]`) so the request still goes
   through with a breadcrumb. Reword the JSDoc accordingly.

2. `translateReasoningEffort`'s strip comment claimed it strips the
   nested form to avoid shipping both shapes, but it only drops the
   duplicated `effort` key when other keys (e.g. `budget_tokens`) are
   present. Reword to describe the actual selective behavior and why
   keeping orthogonal keys is intentional.

Behavior unchanged.

* fix(deepseek): gate reasoning_effort translation on actual DeepSeek hostname

The provider class is selected via the broader `isDeepSeekProvider`
check, which falls back to model-name matching to cover self-hosted
DeepSeek deployments (sglang/vllm/ollama, see #3613). That fallback is
the right call for content-part flattening — it's a model-format
constraint baked into the model itself, not the API surface.

But the same broad detection was also gating
`translateReasoningEffort`, which rewrites the standard
`reasoning: { effort }` config into DeepSeek's flat `reasoning_effort`
body parameter. That's a wire-shape decision, not a model-format one:
strict OpenAI-compat backends in self-hosted setups may not accept the
DeepSeek extension and would have happily handled the original shape.

Split the two decisions: keep `isDeepSeekProvider` (broad) for
flattening, add a hostname-only `isDeepSeekHostname` and gate the body
rewrite on it. Self-hosted DeepSeek users who actually want the
translation can either use a baseUrl containing api.deepseek.com or
inject `reasoning_effort` directly via `samplingParams`/`extra_body`.

Regression tests:
  - self-hosted (sglang) with deepseek-named model + nested
    `reasoning.effort` → flattening still runs, body shape preserved
  - `isDeepSeekHostname` matches api.deepseek.com but not custom hosts

* fix(deepseek): use URL parsing in isDeepSeekHostname; fix log-level docs

CodeQL flagged a high-severity URL substring sanitization issue on the
new `isDeepSeekHostname` helper. The naive
`baseUrl.includes('api.deepseek.com')` check would false-positive on
hostile hosts like `https://api.deepseek.com.evil.com/v1` and
incorrectly inject the DeepSeek-only `reasoning_effort` body parameter
into requests routed elsewhere. Switch to `new URL(...).hostname` with
exact match against `api.deepseek.com` (and `.api.deepseek.com`
subdomains), mirroring `isDeepSeekAnthropicHostname` on the Anthropic
side. Invalid URLs treated as non-DeepSeek.

`isDeepSeekProvider` already routes through `isDeepSeekHostname`, so
the hardening applies to both decision paths.

Regression tests cover:
  - subdomain match (us.api.deepseek.com)
  - hostile substrings (api.deepseek.com.evil.com,
    evil.com/api.deepseek.com/v1, api.deepseek.comevil.com,
    api-deepseek-com.example.com)
  - invalid / empty baseUrl
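
The hardened check can be sketched as (hypothetical standalone shape; the real helper lives in the DeepSeek provider module):

```typescript
// Parse the URL and compare hostnames exactly (allowing subdomains)
// instead of substring matching, so api.deepseek.com.evil.com no
// longer passes. Invalid URLs are treated as non-DeepSeek.
function isDeepSeekHostname(baseUrl: string): boolean {
  try {
    const { hostname } = new URL(baseUrl);
    return (
      hostname === 'api.deepseek.com' ||
      hostname.endsWith('.api.deepseek.com')
    );
  } catch {
    return false;
  }
}
```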

Also fix two doc-level mismatches: the `'max'` clamp on Anthropic logs
via `debugLogger.warn` (warning level, once per generator), not "with
a debug log". Update both `ContentGeneratorConfig.reasoning` JSDoc and
the per-provider behavior table in model-providers.md.

* feat(deepseek): emit thinking:disabled signal when reasoning is off

DeepSeek V4+ defaults `thinking.type` to `'enabled'`, so just stripping
`reasoning_effort` from the request leaves the server happily thinking
on side queries — paying full thinking latency/cost without an effort
configured. Per yiliang114's review, emit the explicit
`thinking: { type: 'disabled' }` field on the wire whenever reasoning
is disabled.

Triggered when either:
  - `request.config.thinkingConfig.includeThoughts === false` (forked
    queries, e.g. suggestion generation)
  - `contentGeneratorConfig.reasoning === false` (config-level opt-out)

The previous post-processing block only fired on the per-request opt-out
path, so the config-level case was already leaking. Unify both under a
single `reasoningDisabled` predicate that runs the same strip + signal
logic.

Hostname-gated to `api.deepseek.com` (and subdomains): self-hosted
DeepSeek behind sglang/vllm/ollama, or older DeepSeek versions, may
not accept the V4 thinking parameter — pushing it there could trip an
unknown-key 400. Mirrors the round-7 decision to gate
`reasoning_effort` translation on hostname.

Regression tests cover all four matrix points:
  - DeepSeek hostname + includeThoughts false → emits disabled
  - DeepSeek hostname + reasoning false → emits disabled
  - non-DeepSeek hostname + includeThoughts false → does not emit
  - self-hosted DeepSeek (model-name fallback only) → does not emit

Docs: extend the `reasoning: false` section with the new behavior and
the self-hosted/non-DeepSeek caveat.

* refactor(deepseek): expose isDeepSeek* as free functions; clarify docs

Two doc/coupling nits from review:

1. The pipeline post-processing block was importing the concrete
   `DeepSeekOpenAICompatibleProvider` class just to reach
   `isDeepSeekHostname`. That couples the generic OpenAI pipeline to a
   specific provider implementation. Promote the helper (and its broad
   `isDeepSeekProvider` sibling) to free `export function`s in
   `provider/deepseek.ts` and import them by name. The class keeps thin
   static delegates for backward compat with existing callers and tests.

2. The per-provider behavior table on `model-providers.md` said
   `'low'/'medium' → 'high'` and `'xhigh' → 'max'` "client-side", but
   that normalization only fires inside `translateReasoningEffort`,
   which runs on the nested `reasoning.effort` config path. Explicit
   top-level overrides via `samplingParams.reasoning_effort` or
   `extra_body.reasoning_effort` skip the rewrite and ship verbatim.
   Reword the row to reflect that.

Behavior unchanged.
2026-05-04 22:42:23 +08:00
jinye
5d1052a358
feat(telemetry): define HTTP OTLP endpoint behavior and signal routing (#3779)
* feat(telemetry): define HTTP OTLP endpoint behavior and signal routing

- Add resolveHttpOtlpUrl() that appends /v1/traces, /v1/logs, /v1/metrics
  to base HTTP OTLP endpoints per the OpenTelemetry specification
- Add per-signal endpoint overrides (otlpTracesEndpoint, otlpLogsEndpoint,
  otlpMetricsEndpoint) for backends with non-standard paths (e.g. Alibaba Cloud)
- Add LogToSpanProcessor that bridges OTel log records to spans for
  traces-only backends, with session-based traceId correlation and
  error status propagation
- Auto-wire LogToSpanProcessor when traces URL exists but logs URL doesn't
- Validate per-signal URLs gracefully (log error + skip, don't crash)
- Preserve query strings when appending signal paths to URLs
- Guard gRPC branch against missing base endpoint with per-signal config
- Update telemetry documentation with signal routing semantics and
  Alibaba Cloud HTTP per-signal endpoint examples
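
The per-signal URL resolution can be sketched as follows (a simplified shape; the real resolveHttpOtlpUrl also handles the per-signal override settings):

```typescript
type Signal = 'traces' | 'logs' | 'metrics';

// Append the spec-defined signal path to a base HTTP OTLP endpoint,
// per the OTLP/HTTP convention, while preserving any query string.
function resolveHttpOtlpUrl(base: string, signal: Signal): string {
  const url = new URL(base);
  const basePath = url.pathname.replace(/\/$/, ''); // avoid a double slash
  url.pathname = `${basePath}/v1/${signal}`;
  return url.toString(); // query params survive untouched
}
```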

Closes #3734

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): fix TS noPropertyAccessFromIndexSignature errors in tests

Use typed ExportedSpan interface and bracket notation for index signature
properties to satisfy strict TypeScript checks in CI.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): replace MD5 with SHA-256 for traceId derivation

CodeQL flagged MD5 as a weak cryptographic algorithm when used with
session.id (considered sensitive data). Switch to SHA-256 truncated
to 32 hex chars to satisfy CodeQL while maintaining the same traceId
format required by the OTel specification.
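
The derivation amounts to (a minimal sketch; field and function names hypothetical):

```typescript
import { createHash } from 'node:crypto';

// SHA-256 of the session id, truncated to 32 hex chars (16 bytes),
// which is the W3C/OTel traceId width. MD5 produced the same width
// but was flagged by CodeQL as a weak algorithm.
function deriveTraceId(sessionId: string): string {
  return createHash('sha256').update(sessionId).digest('hex').slice(0, 32);
}
```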

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): address review feedback for LogToSpanProcessor robustness

- Wrap JSON.stringify in try/catch to handle circular refs and BigInt
- Add export timeout (30s) and try/catch to prevent hung shutdown
- Track in-flight exports to avoid interval-vs-shutdown race condition
- Fix deriveSpanStatus: use truthy checks (!!), drop success===false
  heuristic since declined tool calls are normal, not errors
- Enforce http(s) scheme in validateUrl to reject file:/javascript: URLs
- Change DiagLogLevel from ERROR to WARN to preserve operational diagnostics
- Preserve logRecord.instrumentationScope instead of hardcoding
- Forward severityNumber/severityText as span attributes
- Add tests for circular refs, error status edge cases, severity

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): flush sdk shutdown through cleanup

Remove async process exit handlers from telemetry initialization and route SDK shutdown through Config cleanup so normal CLI exit paths await pending telemetry exports. Keep shutdown idempotent while an SDK shutdown is in flight.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): harden bridged log shutdown

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(telemetry): address review follow-ups

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-05-01 22:47:01 +08:00
Shaojin Wen
35fe97e0f6
feat(review): expand review pipeline + qwen review CLI subcommands (#3754)
* feat(review): expand review pipeline + add `qwen review` CLI subcommands

Review skill (SKILL.md) changes:
- Step 4: 5 → 9 parallel agents (split Correctness/Security, add Test
  Coverage, 3 undirected personas: attacker / 3am-oncall / maintainer)
- Step 5: verification "uncertain → reject" → "uncertain → low-confidence"
  (terminal-only "Needs Human Review" bucket; never posted as PR comments)
- Step 6: single reverse audit → iterative (terminate on no-new-findings,
  hard cap 3 rounds)
- Step 9: self-PR detection (downgrade APPROVE/REQUEST_CHANGES → COMMENT
  when GitHub forbids self-review with HTTP 422); CI status check
  (downgrade APPROVE → COMMENT on red/pending CI); existing-Qwen-comment
  classification with priority order Stale > Resolved > Overlap > NoConflict
  (only Overlap blocks for confirmation)

`qwen review` CLI subcommands (packages/cli/src/commands/review/):
- fetch-pr     — clean stale + fetch PR ref + create worktree + metadata
- pr-context   — emit Markdown context file with security preamble +
                 already-discussed dedup section
- load-rules   — read review rules from base branch (4 source files)
- deterministic— run tsc, eslint, ruff, cargo-clippy, go-vet, golangci-lint
                 on changed files; filtered + structured findings JSON
                 (TypeScript/JavaScript, Python, Rust, Go)
- presubmit    — self-PR + CI status + existing-comment classification in
                 a single JSON report
- cleanup      — worktree + branch ref + per-target temp files (idempotent)

Cross-platform: execFileSync (no shell), path.join, CRLF normalization,
which/where for tool detection. Replaces bash-style inline commands in
SKILL.md; works identically on macOS/Linux/Windows.

Path consistency: SKILL.md temp files moved from /tmp/qwen-review-* to
.qwen/tmp/qwen-review-* — matches what os.tmpdir() resolves to across
platforms (macOS returns /var/folders/... not /tmp).

DESIGN.md gains five "Why ..." sections explaining each design decision;
docs/users/features/code-review.md synced for user-visible changes.

* feat(review): expose full reply chains in pr-context output

`qwen review pr-context` now renders each replied-to inline-comment thread
as the original reviewer comment + chronological reply chain, instead of
only listing the root-comment snippet. This lets review agents see at a
glance whether a topic has been addressed (e.g. a "Fixed in <commit>"
reply closes the thread) and avoids re-reporting already-resolved
concerns without forcing the LLM driver to manually summarise each reply
chain in agent prompts.

- Walk `in_reply_to_id` chain to group replies under their root comment
- Sort replies chronologically (by id, monotonic on GitHub)
- Render thread block: root snippet as a quote + bulleted reply list
- Sort threads by `(path, line)` for deterministic output
- SKILL.md note updated to point agents at the new chain format

* feat(review): include review-level summaries in pr-context output

`qwen review pr-context` now also fetches `gh api repos/{owner}/{repo}/pulls/{n}/reviews`
and renders a "Review summaries" section listing each reviewer's
overall body (the comment they typed alongside an APPROVED /
CHANGES_REQUESTED / COMMENTED submission). Closes a real gap found
during the PR #3684 review:

> "@wenshao [CHANGES_REQUESTED]: The previously identified exported
> type rename issue no longer maps to the current PR diff, so this
> review only includes the remaining high-confidence blocker."

Without this section, the LLM driver's review agents would have missed
that integration note from the prior reviewer.

- New `RawReview` type + extra `ghApi` call
- Filter: skip empty bodies + the canonical "No issues found. LGTM!"
  template the qwen-review pipeline auto-emits — those carry no
  agent-actionable content beyond the review state itself
- Sort meaningful reviews by `submitted_at` for chronological output
- Stdout summary now reports `M/N review summaries` (M = kept after
  filter)

Smoke-tested on PR #3684: 30 inline, 3 issue, 1/30 review summaries
correctly surfaces the @wenshao CHANGES_REQUESTED body and filters the
29 LGTM templates.
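The filter-and-sort step above can be sketched as follows; `RawReview` is named in the commit, while the function name and exact template string handling are assumptions:

```typescript
// Fields used from the GitHub reviews endpoint payload.
type RawReview = { state: string; body: string; submitted_at: string };

// The canonical template the qwen-review pipeline auto-emits; such
// reviews carry no agent-actionable content beyond the state itself.
const LGTM_TEMPLATE = 'No issues found. LGTM!';

// Drop empty bodies and the auto-emitted LGTM template, then sort the
// remaining reviews chronologically by submitted_at.
function meaningfulReviews(reviews: RawReview[]): RawReview[] {
  return reviews
    .filter((r) => r.body.trim() !== '' && r.body.trim() !== LGTM_TEMPLATE)
    .sort((a, b) => a.submitted_at.localeCompare(b.submitted_at));
}
```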

* fix(review): paginate gh API calls to capture comments past page 1

`gh api <path>` defaults to per_page=30. Busy PRs cross that limit on
inline comments, issue comments, and reviews — the latest entries (the
ones most likely to contain new reviewer feedback or in-flight reply
chains) end up on page 2+ and were silently truncated.

Concrete bug found while re-reviewing PR #3684:
  Before: `30 inline, 3 issue comments, 1/30 review summaries`
  After:  `97 inline, 3 issue comments, 6/67 review summaries`

5 additional reviewer-level summaries surfaced — including the
@wenshao 2026-04-30 "Multi-agent re-review (Phase C)" body with the
explicit verification notes that this PR's pipeline is supposed to
chain forward into the next review.

Changes:
- `lib/gh.ts`: new `ghApiAll(path)` helper using `gh api --paginate`,
  which walks every `next` link and concatenates each page's array.
- `pr-context.ts`: 3 fetches (inline / issue / reviews) → `ghApiAll`.
- `presubmit.ts`: PR comments fetch → `ghApiAll` too (existing-comment
  classification was equally susceptible to dropping page 2+ overlap
  candidates).

`check-runs` and `commits/<sha>/status` calls retain `ghApi` — those
return objects (with embedded arrays) and rarely cross 30 entries.
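On array endpoints, `gh api --paginate` emits each page's JSON array back-to-back (`[...][...]`), so the concatenation step inside `ghApiAll` has to stitch them into one array. A sketch of just that step (function name and parsing strategy are assumptions beyond what the commit states):

```typescript
// Parse back-to-back JSON arrays ("[1,2][3]") into one concatenated
// array by tracking bracket depth outside of string literals.
function concatPaginatedArrays(raw: string): unknown[] {
  const items: unknown[] = [];
  let rest = raw.trim();
  while (rest.length > 0) {
    let depth = 0;
    let inString = false;
    let escaped = false;
    let end = -1;
    for (let i = 0; i < rest.length; i++) {
      const ch = rest[i];
      if (inString) {
        if (escaped) escaped = false;
        else if (ch === '\\') escaped = true;
        else if (ch === '"') inString = false;
      } else if (ch === '"') inString = true;
      else if (ch === '[') depth++;
      else if (ch === ']') {
        depth--;
        if (depth === 0) { end = i + 1; break; } // one complete page found
      }
    }
    if (end < 0) throw new Error('truncated JSON page');
    items.push(...(JSON.parse(rest.slice(0, end)) as unknown[]));
    rest = rest.slice(end).trim();
  }
  return items;
}
```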

---------

Co-authored-by: wenshao <wenshao@U-K7F6PQY3-2157.local>
2026-05-01 18:30:35 +08:00
Rayan Salhab
0b7a569ac7
fix(cli): honor proxy setting (#3753)
* fix(cli): honor proxy setting

* fix(cli): apply settings proxy to channel start

* test(cli): cover channel start settings proxy

---------

Co-authored-by: cyphercodes <cyphercodes@users.noreply.github.com>
2026-04-30 18:24:59 +08:00
tanzhenxin
6c71b6b09c
chore(core): drop tool token usage tracking (#3727)
The `tool_token_count` field was sourced from `toolUsePromptTokenCount`
on the GenAI usage metadata, but none of the providers we adapt
(OpenAI/DashScope, Anthropic) populate it, and Google's Gemini API only
emits it for built-in server-side tools that qwen-code does not use.
The metric was therefore always zero in practice, so the dedicated
counter, telemetry field, UI row, and supporting plumbing are removed
end-to-end (telemetry types, OTEL counter type, UI aggregation, model
stats display, qwen-logger payload, VS Code session schema, and docs).
2026-04-30 15:35:01 +08:00
易良
49e462c021
fix(lsp): fix LSP docs, relax isPathSafe restriction, and improve LSP tool-call rate (#3615)
* fix(docs): correct outdated and inaccurate LSP documentation

- Remove reference to non-existent `packages/cli/LSP_DEBUGGING_GUIDE.md`
- Remove reference to unimplemented `/lsp status` slash command
- Replace incorrect `DEBUG=lsp*` env var with actual debug log location
  (`~/.qwen/debug/` session files with `[LSP]` tag)
- Remove external Claude Code documentation links (`code.claude.com`)
- Document `isPathSafe` constraint: absolute paths outside workspace
  are blocked, users must add server binary directory to PATH
- Add practical troubleshooting: `ps aux | grep <server>` to check
  if the server process is actually running
- Add clangd-specific guidance: `--background-index`, `compile_commands.json`
  location, and `--compile-commands-dir` usage
- Simplify trust documentation (remove vague "configure in settings")

* fix(lsp): allow absolute paths in LSP server command configuration

Previously, `isPathSafe` rejected any command containing a path
separator that resolved outside the workspace directory. This blocked
legitimate use cases where users specify absolute paths to language
server binaries (e.g. `/usr/bin/clangd`, `/opt/tools/jdtls/bin/jdtls`).

The fix allows:
- Bare command names resolved via PATH (unchanged)
- Absolute paths (explicit user intent, already gated by trust checks)
- Relative paths within the workspace (unchanged)

Only relative paths that traverse outside the workspace (e.g.
`../../malicious-binary`) are still blocked.

Closes: server silently fails to start when users configure absolute
paths in `.lsp.json`, with only a debug log warning visible.
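The relaxed policy above can be sketched as follows. `isPathSafe` is the real name from the commit; the signature and exact containment check are reconstructed assumptions:

```typescript
import * as path from 'node:path';

// Bare commands pass (resolved via PATH); absolute paths pass (explicit
// user intent, already gated by trust checks); relative paths must
// resolve inside the workspace directory.
function isPathSafe(command: string, workspaceDir: string): boolean {
  const hasSeparator = command.includes('/') || command.includes('\\');
  if (!hasSeparator) return true;            // bare name, e.g. "clangd"
  if (path.isAbsolute(command)) return true; // e.g. "/usr/bin/clangd"
  const resolved = path.resolve(workspaceDir, command);
  return resolved.startsWith(workspaceDir + path.sep); // block traversal
}
```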

* feat(lsp): inject LSP priority instruction into system prompt when enabled

The model was not using the LSP tool because the system prompt's
"Tool Usage" section never mentioned it. The tool description alone
("ALWAYS use LSP as the PRIMARY tool") was insufficient — models
follow system prompt instructions more reliably than tool descriptions.

Changes:
- getCoreSystemPrompt() accepts `options.lspEnabled` parameter
- When LSP is enabled, injects an instruction in the Tool Usage section
  telling the model to ALWAYS use the LSP tool FIRST for code
  intelligence queries (definitions, references, hover, symbols, etc.)
  instead of falling back to grep/readfile
- Updated client.ts to pass config.isLspEnabled() to the prompt builder
- Updated test mocks and snapshots

* feat(lsp): add symbolName parameter for position-free LSP queries

The model avoided calling LSP for findReferences, hover, etc. because
these operations required filePath + line + character which the user
rarely provides. The model would read files directly instead.

Changes:
- Add `symbolName` optional parameter to LspTool
- When symbolName is provided without line/character, auto-resolve
  the symbol's position via workspaceSymbol before executing the
  actual operation (findReferences, hover, goToImplementation, etc.)
- Update tool description with examples showing symbolName usage
- Move LSP priority instruction to top of system prompt for visibility
- Add debug logging for LSP prompt injection

This enables natural queries like:
  {operation: "findReferences", symbolName: "Calculator"}
  {operation: "hover", symbolName: "addShape"}
without requiring the user to know exact file positions.
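The fallback logic reads roughly as below. This is a synchronous sketch with an injected lookup for clarity; the real tool would issue an asynchronous workspace/symbol request to the language server:

```typescript
type Position = { filePath: string; line: number; character: number };
type LspParams = Partial<Position> & { symbolName?: string };

// Use an explicit position when the caller supplies one; otherwise
// resolve the named symbol's position first, then run the operation.
function resolvePosition(
  params: LspParams,
  workspaceSymbol: (name: string) => Position | undefined,
): Position | undefined {
  if (
    params.filePath !== undefined &&
    params.line !== undefined &&
    params.character !== undefined
  ) {
    return { filePath: params.filePath, line: params.line, character: params.character };
  }
  if (params.symbolName !== undefined) return workspaceSymbol(params.symbolName);
  return undefined; // neither a position nor a symbol name was supplied
}
```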

* feat(lsp): add LSP reminder to grep/readfile tool descriptions

When LSP is enabled, the model often chose grep or readfile instead
of LSP for code intelligence queries. Now the competing tools'
descriptions include a note reminding the model to use the LSP tool
for definitions, references, symbols, hover, diagnostics, etc.

This "push-pull" approach:
- System prompt pushes toward LSP (top-level priority instruction)
- Grep/ReadFile descriptions pull away from code intelligence usage

* fix(docs): align LSP doc with isPathSafe change — absolute paths now supported

The doc still said "absolute paths outside the workspace are not
supported" but the code was changed to allow them. Updated all
three places (Required Fields table, Troubleshooting, Debugging)
to reflect that absolute paths are now accepted.

* fix(lsp): improve symbol-based tool resolution

* fix(lsp): normalize display paths across platforms

* fix(lsp): narrow docs and path safety changes

* fix(lsp): add edge-case tests for isPathSafe and fix Chinese comment

- Add test for intermediate path traversal (./a/../../../etc/passwd)
- Add test for forward-slash relative paths (tools/clangd)
- Replace Chinese JSDoc with English on requestUserConsent

* fix(lsp): rename requestUserConsent to checkWorkspaceTrust

The method only checks workspace trust level and does not actually
prompt the user for consent. Rename the method and update the JSDoc
and call-site log message to accurately reflect the behavior.
2026-04-30 15:24:18 +08:00
Bramha.dev
414b3304cd
fix(core): split tool-result media into follow-up user message for strict OpenAI compat (#3617)
Fixes #3616.

Adds opt-in `splitToolMedia` flag (default false). When enabled, media parts (image / audio / video / file) returned by MCP tool calls are split into a follow-up `role: "user"` message instead of being embedded in the `role: "tool"` message. Required for strict OpenAI-compatible servers (e.g., LM Studio) that reject non-text content on tool messages with HTTP 400 "Invalid 'messages' in payload".

Media from parallel tool responses is accumulated and emitted as a single follow-up user message after all tool messages, preserving OpenAI's contiguity requirement for tool responses.

Default behavior is unchanged for permissive providers.
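The split-and-accumulate behavior can be sketched as below; the message/part shapes are simplified assumptions, not the project's actual types:

```typescript
type MediaType = 'image' | 'audio' | 'video' | 'file';
type Part = { type: 'text' | MediaType; data: string };
type Message = { role: 'tool' | 'user'; toolCallId?: string; parts: Part[] };

// Keep text parts on each role:"tool" message; accumulate media parts
// and emit them as ONE follow-up role:"user" message after the whole
// contiguous tool block, preserving OpenAI's contiguity requirement.
function splitToolMedia(toolMessages: Message[]): Message[] {
  const media: Part[] = [];
  const out: Message[] = toolMessages.map((m) => {
    media.push(...m.parts.filter((p) => p.type !== 'text'));
    return { ...m, parts: m.parts.filter((p) => p.type === 'text') };
  });
  if (media.length > 0) out.push({ role: 'user', parts: media });
  return out;
}
```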
2026-04-27 23:01:02 +08:00
jinye
f0e8601982
fix(cli): add API Key option to qwen auth interactive menu (#3624)
* fix(cli): add "API Key" option to `qwen auth` interactive menu

The `qwen auth` CLI command only showed 2 options (Coding Plan, Qwen OAuth),
while the interactive `/auth` dialog showed 3 (Coding Plan, API Key, Qwen OAuth).
Users following the README instructions to configure OpenRouter/Fireworks via
`qwen auth` had no API Key entry point.

- Add "API Key" option to the `runInteractiveAuth` menu with two sub-paths:
  "Alibaba Cloud ModelStudio Standard API Key" (guided flow) and
  "Custom API Key" (prints docs link)
- Add `qwen auth api-key` yargs subcommand for direct access
- Extract `createMinimalArgv` / `loadAuthConfig` helpers to eliminate duplicated
  CliArgs boilerplate
- Extract `promptForInput` to share raw-mode stdin logic between `promptForKey`
  and `promptForModelIds`
- Improve `showAuthStatus` to distinguish Coding Plan, Standard API Key, and
  generic OpenAI-compatible configurations
- Align menu labels and descriptions with the interactive `/auth` dialog

Closes #3413

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs: add `qwen auth api-key` to auth subcommand tables

Update documentation to reflect the new `qwen auth api-key` subcommand:
- auth.md: add to subcommands table, examples, and interactive menu display
- commands.md: add to CLI Auth Subcommands table
- quickstart.md: add to quick-reference command table

* fix(cli): restore incomplete Coding Plan warning in showAuthStatus

When selectedType is USE_OPENAI and Coding Plan metadata exists but
the API key is missing, show the incomplete warning instead of falling
through to the generic "OpenAI-compatible" status.

* refactor(cli): use endpoint constants in region selector and fix status formatting

- Use ALIBABA_STANDARD_API_KEY_ENDPOINTS constants for region
  descriptions instead of hardcoded URLs
- Restore trailing newline in showAuthStatus "no auth" command list
  for consistent spacing

* fix(cli): determine active auth method from model config in showAuthStatus

Previously showAuthStatus checked which env keys exist to determine
the auth method, causing false reports when users switch providers
(e.g., Coding Plan key still present after switching to Standard API Key).

Now it inspects the active model's provider config (baseUrl/envKey) to
determine the actual method, and validates the corresponding key exists:
- Coding Plan: check via isCodingPlanConfig + CODING_PLAN_ENV_KEY
- Standard API Key: check via DASHSCOPE_STANDARD_API_KEY_ENV_KEY + endpoints
- Generic OpenAI-compatible: check if the model's envKey is set

Also clear stale Coding Plan metadata (codingPlan.region/version and
process.env) when switching to Standard API Key.

* fix(cli): add legacy fallback in showAuthStatus and clear persisted Coding Plan env

- When no active model config is found (legacy setups without
  modelProviders), fall back to env key / metadata checks for
  Coding Plan status detection. Fixes CI test failures.
- When activeConfig exists but has no envKey, report incomplete
  status instead of false positive "Configured".
- Clear persisted env.BAILIAN_CODING_PLAN_API_KEY from settings
  when switching to Standard API Key, not just process.env.

* fix(cli): also remove Coding Plan model entries when switching to Standard API Key

When switching to Standard API Key, filter out existing Coding Plan
model entries from modelProviders.openai in addition to old Standard
entries. Previously these were preserved but their credential source
(BAILIAN_CODING_PLAN_API_KEY) was cleared, leaving broken model
entries visible in /model.

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-27 22:01:47 +08:00
Shaojin Wen
f420742831
feat(cli,core): LLM-generated summary labels for tool-call batches (#3538)
* feat(cli,core): generate tool-use summaries for compact mode

After each tool batch completes, fire a parallel fast-model call to
generate a short git-commit-subject-style label summarizing what the
batch accomplished (e.g. "Read txt files", "Searched in auth/"). In
compact mode the label replaces the generic "Tool × N" header so N
parallel tool calls collapse to a single semantic row.

The fast-model call (~1s) runs fire-and-forget, overlapped with the
next turn's API stream, so there is no perceived latency. Missing
fast model, aborted turns, and model failures all degrade silently to
the existing rendering.

The summary is also emitted as a `tool_use_summary` history entry
with `precedingToolUseIds`, keeping the shape compatible with SDK
clients that want to render collapsed tool views on their own.

Gated by `experimental.emitToolUseSummaries` (default on). Can be
overridden per-session with `QWEN_CODE_EMIT_TOOL_USE_SUMMARIES=0|1`.

The system prompt and truncation rules (300 chars per tool field,
200 chars of trailing assistant text as intent prefix) match the
existing behavior seen in other tools that emit the same message
type, so SDK consumers see a consistent shape across clients.

* fix(core): bound cleanSummary quote-strip regex to avoid ReDoS

CodeQL js/polynomial-redos flagged the /^["'`]+|["'`]+$/g pattern in
cleanSummary because its input comes from an LLM (treated as
uncontrolled). The original regex is anchored and linear in practice,
but tightening the quantifier to {1,10} both satisfies the static
check and caps engine work on pathological model output with a long
run of quotes. Ten opening/closing quotes is well past anything a real
label would produce.
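The bounded version of the regex (the `{1,10}` cap is from the commit; the helper name is an assumption, standing in for the strip step inside `cleanSummary`):

```typescript
// Strip at most ten leading and ten trailing quote characters; anchored
// at both ends, so engine work is capped even on pathological input.
function stripQuotes(label: string): string {
  return label.replace(/^["'`]{1,10}|["'`]{1,10}$/g, '');
}
```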

* fix(cli): render tool_use_summary inline so full mode also shows the label

The summary was only visible in compact mode because the full-mode
ToolGroupMessage ignored the compactLabel prop. Compact mode got away
with this because mergeCompactToolGroups triggers refreshStatic(),
which re-renders the merged tool_group with its newly-looked-up
label. Full mode has no such refresh path, so when the fast-model
call resolves *after* the tool_group has been committed to the
append-only <Static>, there is no way to retroactively decorate it.

Switch to rendering `tool_use_summary` as its own inline history item
(a single dim `● <label>` line). New items append cleanly to <Static>,
so the summary flows in naturally once the fast-model call resolves.
Compact mode still replaces the merged tool_group header with the
label and hides the standalone summary line via the `compactMode`
guard.

With this, the feature works under the default `ui.compactMode: false`
— not just the opt-in compact view.

* docs: tool-use-summaries feature guide, settings entry, and design doc

Three new docs matching the existing fast-model feature docs layout:

- docs/users/features/tool-use-summaries.md — user-facing guide
  covering full + compact rendering, configuration (settings + env),
  failure modes, cost, and cross-links to followup-suggestions.

- docs/users/configuration/settings.md — register the new
  experimental.emitToolUseSummaries setting next to the other
  fast-model-driven UI settings.

- docs/design/tool-use-summary/tool-use-summary-design.md — deep dive
  matching the compact-mode-design.md competitive-analysis style.
  Documents the Claude Code port (prompt, truncation, timing, gate),
  the deviations (settings layer, default on, cleanSummary, dual
  render paths), and the Ink <Static> append-only rationale that
  drove the inline full-mode render vs header-replacement split.

* docs: add Recommended pairing section to tool-use-summaries

Full-mode rendering of the summary works, but for small same-type
batches (Read × 3 and similar) the label visibly restates what the
tool lines already show. Pairing with ui.compactMode: true folds
the whole batch into a single labeled row, which is the cleanest
transcript shape once the label is available.

Adds a dedicated section showing the paired settings.json snippet
and explicitly calling out when each mode wins (and when to turn
the feature off instead).

* fix: address review feedback on tool-use summary generation

Addresses multiple issues from @chiga0's review:

Blocking — compact-mode label invisible for single-batch turns.
mergeCompactToolGroups's adjacency-only gating left a trailing
tool_use_summary in the merged result whenever there was no second
batch to merge across. That pushed mergedHistory.length lock-step
with history.length and MainContent's refreshStatic heuristic
(currMLen <= prevMLen) never fired, so Ink's append-only <Static>
never repainted the tool_group with its newly-looked-up label.
Drop tool_use_summary items unconditionally now; gemini_thought
still survives to avoid unnecessary repaints. New tests cover
the single-batch case and the summary-before-user-message case.

Blocking — stale summary appears after Ctrl+C on the next turn.
summarySignal captured the CURRENT turn's AbortController, but the
summary resolves during the NEXT turn's streaming window. The next
turn's submitQuery allocates a fresh controller, so the captured
signal was never aborted — Ctrl+C during the new turn used to let
the previous turn's summary land in the transcript seconds later.
Fix: dedicated per-batch AbortController tracked in a ref set,
aborted eagerly from cancelOngoingRequest; resolve-time check reads
the live abort state and turnCancelledRef.

High — summarizer input pollution.
geminiTools contained error/cancelled tools; retry-loop warnings
and "Cancelled by user" strings were feeding the fast model.
cleanSummary can only reject error-shaped output, not prevent the
model from hallucinating a plausible label from bad input (the PR's
own tmux screenshot showed "Read txt files · 5 tools" where 4 of
the 5 were prior-retry failures). Filter to status === 'success'
before building the prompt; skip the call entirely if nothing's
left.

High — unstable label on merged groups.
getCompactLabel iterated all callIds and returned the first hit,
so asynchronous resolution order made the header visibly flip
from SB to SA when batch A resolved after batch B. Lock onto
item.tools[0].callId to keep stable "leading batch governs"
semantics.

High — force-expanded groups in compact mode had no label at all.
Compact mode routes non-force-expand groups through
CompactToolGroupDisplay (consumes compactLabel) and force-expand
groups through the full ToolGroupMessage (ignores compactLabel);
the standalone ● line was gated on !compactMode, creating a dead
zone — exactly the diagnostically valuable case. MainContent now
computes absorbedCallIds (which groups actually consume the
header replacement) and passes summaryAbsorbed to
HistoryItemDisplay; force-expand groups in compact mode get the
standalone line as the label's only path to the screen.

Medium — cleanSummary robustness.
Extend quote-strip to Unicode curly + CJK corner brackets; strip
markdown emphasis (**bold**, _italic_); broaden refusal-prefix
rejection to curly-apostrophe "I can't", Chinese "我无法 / 我不能 /
抱歉 / 无法", and "Failed to / Sorry, / Request failed". 7 new
cleanSummary tests cover the added cases.

Low — concurrent-rendering safety.
Move historyRef.current = history from render phase into
useLayoutEffect so bailed renders can't leave a dropped value.

Low — CompactToolGroupDisplay readability.
Extract renderSummaryHeader / renderDefaultHeader helpers and
document the toolCalls.length > 1 count-suffix guard so a future
"fix" to >= 1 doesn't reintroduce "Read config.json · 1 tools".

Docs — add Scope & Lifecycle section to tool-use-summaries.md
covering (1) one generation per batch shared by both modes,
(2) no backfill on toggle / session resume, (3) main-agent batches
only with the Task-tool clarification.

* fix: address second-round review feedback on tool-use summaries

Critical — force-expand groups lost their summary entirely.
Previous round's "drop tool_use_summary unconditionally" merge fix
also stripped summaries for force-expanded groups, defeating the
exact case (errors, confirmations, focused shell) where the
standalone ● label is the label's only path to the screen. The
merge function now takes an absorbedCallIds set: summaries whose
preceding callIds are all absorbed by a compact tool_group header
are dropped (so refreshStatic still fires), but force-expanded
summaries pass through to be rendered standalone by
HistoryItemDisplay. MainContent computes absorbedCallIds from raw
history and passes it in. New tests cover both the absorbed-drop
and the force-expand-preserve cases plus the empty-set default
for callers that don't compute absorption.

Suggestion — late-arriving summaries could land out of order.
A slow fast-model call could resolve after the next turn's
content was committed, planting the ● label between later items
in full mode. The resolve callback now captures the first batch
callId, locates the corresponding tool_group at resolve time,
and drops the summary if a newer tool_group has already appeared
in history. New test exercises this with a manually-resolved
fast-model promise.

Suggestion — truncateJson allocated full JSON for large strings.
A 10MB ReadFile result was being JSON.stringify'd in full only to
be sliced down to 300 chars. Added preTruncate that walks the
value (depth-bounded to 4) and slices string leaves to maxLength
before serialization. Tests verify the input never reaches its
full pre-cap form.
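The `preTruncate` walk reads roughly as below (function name and depth cap of 4 are from the commit; the exact shape is a reconstruction):

```typescript
// Slice string leaves to maxLength BEFORE serialization so a huge tool
// result never gets JSON.stringify'd in full. Recursion is depth-bounded
// to 4; deeper values pass through untouched.
function preTruncate(value: unknown, maxLength: number, depth = 0): unknown {
  if (typeof value === 'string') return value.slice(0, maxLength);
  if (depth >= 4 || value === null || typeof value !== 'object') return value;
  if (Array.isArray(value)) {
    return value.map((v) => preTruncate(v, maxLength, depth + 1));
  }
  return Object.fromEntries(
    Object.entries(value as Record<string, unknown>).map(([k, v]) => [
      k,
      preTruncate(v, maxLength, depth + 1),
    ]),
  );
}
```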

Suggestion — settings description over-claimed SDK emission.
The description said summaries are emitted to SDK clients as a
tool_use_summary message; the SDK plumbing isn't actually wired
in this PR (the factory is exported for follow-up). Updated
settings.json description and regenerated the vscode schema to
state CLI-only scope explicitly.

Suggestion — fastModel data-boundary not documented.
When fastModel uses a different provider than the main session
model, tool inputs/outputs cross a new auth boundary that users
may not expect. Added "Data flow & privacy" section to the user
feature doc spelling out: same-provider fast model = no scope
change; different-provider = strictly larger sharing scope; two
escape hatches (same-provider fast model OR feature off).
Code-level mitigation (metadata-only mode) deferred.
2026-04-27 16:54:10 +08:00
pomelo
7fe853a782
feat(cli): OpenRouter auth (#3576)
* feat(cli): add OpenRouter auth flow

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(cli): add OpenRouter model management UI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): align OpenRouter OAuth fallback session

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): unify OpenRouter model setup flow

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* feat(auth): update OAuth description with provider examples and i18n support

- Updated OAuth option description to include provider examples (OpenRouter, ModelScope)
- Added internationalization support for new description text
- Updated all language files (en, zh, de, fr, ja, pt, ru) with translations

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs: simplify OpenRouter design docs

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(auth): fix OpenRouter OAuth mock typing

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(auth): sync AuthDialog tests with new three-option main menu layout

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

Update assertions that referenced removed 'Qwen OAuth' and 'OpenRouter' options in the main/API-key views to match the refactored OAUTH / CODING_PLAN / API_KEY structure.

* fix(i18n): add missing zh-TW translation for browser-based auth key

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

zh-TW.js was generated from main's en.js which had already removed this key, but the PR re-adds it in en.js. Sync zh-TW with the new translation.

* feat(cli): Improve custom auth wizard with step indicators and cleaner advanced config (#3607)

* feat(cli): Add custom API key auth wizard with 6-step setup flow

Replace the documentation-only "Custom API Key" screen with an
in-terminal wizard: Protocol select → Base URL input → API Key input →
Model ID input → JSON review → Save.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

- Add 5 new ViewLevels and render functions in AuthDialog
- Implement utility functions: generateCustomApiKeyEnvKey (normalization),
  normalizeCustomModelIds (split/trim/dedupe), maskApiKey (display)
- Implement handleCustomApiKeySubmit in useAuth with backup, env key
  generation, modelProviders merge, auth refresh, and user feedback
- Wire handler through UIActionsContext and AppContainer
- Add 18 unit tests for utilities, 4 wizard flow integration tests
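Two of the utilities named above can be sketched as follows; `normalizeCustomModelIds` and `maskApiKey` are the commit's names, while the exact behavior (separator, mask format) is a reconstruction:

```typescript
// Split a comma-separated model ID list, trim whitespace, drop empties,
// and deduplicate while preserving first-seen order.
function normalizeCustomModelIds(input: string): string[] {
  return [...new Set(input.split(',').map((id) => id.trim()).filter(Boolean))];
}

// Mask an API key for display: keep the first and last 4 characters,
// star out the middle; fully mask anything 8 characters or shorter.
function maskApiKey(key: string): string {
  if (key.length <= 8) return '*'.repeat(key.length);
  return `${key.slice(0, 4)}${'*'.repeat(key.length - 8)}${key.slice(-4)}`;
}
```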

* feat(cli): Improve custom auth wizard with step indicators and cleaner advanced config

- Add step indicators (Step 1/6 · Protocol) to each wizard screen
- Remove redundant Protocol/Endpoint context from each step for focus
- Redesign advanced config: add descriptions to thinking/modality toggles
- Remove max tokens option; keep only thinking and modality settings
- Add ↑↓ arrow navigation with Space toggle and Enter to continue
- Generation config flows through review JSON and final submit

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test: Fix Windows CI failures in fileUtils and AuthDialog tests

- fileUtils.test.ts: Mock node:child_process execFile to prevent
  pdftotext spawn that times out on Windows (ENOENT, 5s timeout)
- AuthDialog.test.tsx: Add char-by-char typeText() helper to work
  around Node 24.x + ink TextInput compatibility issue on Windows

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): Reset advanced wizard state and use JSON.stringify for settings preview

- Reset advancedThinkingEnabled, advancedModalityEnabled, and
  focusedConfigIndex when re-entering custom wizard to prevent
  state leakage between configurations
- Replace hand-rolled JSON string concatenation with
  JSON.stringify for settings.json preview to properly escape
  special characters in model IDs and base URLs

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): harden OpenRouter OAuth callback handling

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(cli): stabilize OpenRouter state mismatch test

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(cli): stabilize custom auth wizard navigation

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-27 14:47:44 +08:00
jinye
4be0234d10
docs(telemetry): clarify Alibaba Cloud console entry (#3498)
* docs(telemetry): clarify Alibaba Cloud console entry

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(telemetry): fix unreachable intl console URL and split new/legacy console guidance

- Replace unreachable tracing-sgnew.console.alibabacloud.com with the
  verified arms.console.alibabacloud.com for international users
- Separate OTLP endpoint retrieval steps by console version: new console
  uses Integration Center, legacy console uses Cluster Configurations →
  Access point information

🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code)

* docs(telemetry): align target example with current implementation

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(telemetry): clarify Alibaba Cloud OTLP setup

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs(telemetry): remove stale TOC entry

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-26 07:40:35 +08:00
jinye
e384338145
feat(sdk): add Python SDK implementation for #3010 (#3494)
* Codex worktree snapshot: startup-cleanup

Co-authored-by: Codex

* Add Python SDK real smoke test

Adds a repository-only real E2E smoke script for the Python SDK, plus npm and developer documentation entry points.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(sdk-python): address review findings — bugs, type safety, and test coverage

- Fix prepare_spawn_info: JS files now use "node" instead of sys.executable
- Fix protocol.py: correct total=False misuse on 7 TypedDicts (required fields were optional)
- Fix query.py: add _closed guard in _ensure_started, suppress exceptions in close()
- Fix sync_query.py: prevent close() deadlock, add context manager, add timeouts
- Fix transport.py: handle malformed JSON lines, add _closed guard in start()
- Fix validation.py: use uuid.RFC_4122 instead of magic UUID
- Fix __init__.py: export TextBlock, widen query_sync signature
- Remove dead code: ensure_not_aborted, write_json_line, _thread_error
- Add 12 new tests (29 → 41): context managers, JSON skip, closed guards, spawn info, timeouts

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(sdk-python): address wenshao review — session_id, bool validation, debug stderr

- Fix continue_session=True generating a wrong random session_id
- Add _as_optional_bool helper for strict type validation on bool fields
- Default debug stderr to sys.stderr when no custom callback is provided

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
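
A hypothetical mirror of the `_as_optional_bool` helper described above: accept only a real bool or None, and reject truthy ints/strings that a bare `bool(value)` coercion would silently accept.

```python
from typing import Optional


def as_optional_bool(value: object, field: str) -> Optional[bool]:
    # Strict validation: bool is checked via isinstance, so 1/0/"true"
    # are rejected rather than coerced (names here are illustrative).
    if value is None or isinstance(value, bool):
        return value
    raise TypeError(
        f"{field} must be a bool or None, got {type(value).__name__}"
    )
```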

* fix(sdk-python): address remaining wenshao review feedback

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* test(cli): harden settings dialog restart prompt test

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(sdk-python): review fixes — UUID compat, stderr fallback, sync cleanup

- Remove UUID version restriction to support v6/v7/v8 (RFC 9562)
- Always write to sys.stderr when stderr callback raises (was silent when debug=False)
- Prevent duplicate _STOP sentinel in SyncQuery.close() via _stop_sent flag
- Add ruff format --check to CI workflow
- Fix smoke_real.py version guard: fail early before imports instead of NameError
- Apply ruff format to existing files

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
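
The version-agnostic UUID check can be sketched by parsing and round-tripping instead of pinning `uuid.RFC_4122` or a version list, so v6/v7/v8 identifiers from RFC 9562 pass as well (a sketch, not the SDK's exact validator):

```python
import uuid


def is_valid_session_id(value: str) -> bool:
    # Parse with the stdlib and compare the canonical lowercase form;
    # any version nibble is accepted.
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (ValueError, TypeError, AttributeError):
        return False
```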

* fix(sdk-python): remaining review fixes — exit_code attr, guard strictness, sync timeout

- Add exit_code attribute to ProcessExitError for programmatic access
- Strengthen is_control_response/is_control_cancel guards to require
  payload fields, preventing misrouting of malformed messages
- Expose control_request_timeout property on Query so SyncQuery uses
  the configured timeout instead of a hardcoded 30s default
- Use dataclasses.replace() instead of direct mutation on frozen-style
  QueryOptions in query() factory
- Add ResourceWarning in SyncQuery.__del__ when not properly closed

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
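
The `dataclasses.replace()` change follows the standard frozen-dataclass pattern; a minimal sketch with a hypothetical stand-in for `QueryOptions`:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class QueryOptionsSketch:
    # Illustrative fields only, not the SDK's real QueryOptions.
    model: str = "default"
    control_request_timeout: float = 30.0


base = QueryOptionsSketch()
# Direct attribute assignment on a frozen dataclass raises
# FrozenInstanceError; replace() returns a new instance instead.
tuned = replace(base, control_request_timeout=60.0)
```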

* fix(sdk-python): add exit_code default and guard __del__ against partial GC

- Give ProcessExitError.exit_code a default value (-1) so user code can
  construct the exception with just a message string
- Wrap SyncQuery.__del__ in try/except AttributeError to prevent crashes
  when the object is partially garbage-collected

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(sdk-python): review fixes — resource leak, type safety, CI matrix, docs

- Fix SyncQuery.__del__ to call close() on GC instead of only warning
- Replace hasattr duck-type check with isinstance(prompt, AsyncIterable)
- Type-validate permission_mode/auth_type in QueryOptions.from_mapping
- Use TypeGuard return types on all is_sdk_*/is_control_* predicates
- Add 5s margin to sync wrapper timeouts to prevent error type masking
- Expand CI matrix to test Python 3.10, 3.11, 3.12
- Change ProcessExitError.exit_code default from -1 to None
- Add stderr to docs QueryOptions listing
- Update README sync example to use context manager pattern

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(sdk-python): preserve iterator exhaustion state and suppress detached task warning

- Add _exhausted flag to Query.__anext__ and SyncQuery.__next__ so
  repeated iteration after end-of-stream raises Stop(Async)Iteration
  instead of blocking forever.
- Remove re-raise in _initialize() to prevent asyncio
  "Task exception was never retrieved" warning on detached tasks;
  the error is already surfaced via _finish_with_error().

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
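
The exhaustion guard can be sketched with a queue-backed iterator: once the end-of-stream sentinel has been consumed, later `__next__` calls raise StopIteration immediately instead of blocking forever on an empty queue (a minimal illustration, not the real Query/SyncQuery):

```python
import queue


class SyncStreamSketch:
    _END = object()  # end-of-stream sentinel

    def __init__(self, items):
        self._q = queue.Queue()
        for item in items:
            self._q.put(item)
        self._q.put(self._END)
        self._exhausted = False

    def __iter__(self):
        return self

    def __next__(self):
        if self._exhausted:
            raise StopIteration  # never touch the (now empty) queue again
        item = self._q.get()
        if item is self._END:
            self._exhausted = True
            raise StopIteration
        return item


s = SyncStreamSketch(["a", "b"])
first_pass = list(s)
second_pass = list(s)  # returns [] instead of hanging on q.get()
```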

* fix(sdk-python): reject mcp_servers at validation time and add iterator/init tests

- Reject mcp_servers in validate_query_options() with a clear error
  instead of advertising MCP support to the CLI and then failing at
  runtime when mcp_message arrives.
- Remove dead mcp_servers branch from _initialize().
- Add tests for async/sync iterator exhaustion, detached init task
  warning suppression, and mcp_servers validation.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
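
The fail-fast rule above can be sketched as a validation-time rejection with an actionable message, instead of advertising the capability to the CLI and erroring later (hypothetical signature and wording):

```python
def validate_query_options(options: dict) -> None:
    # Reject unsupported options up front rather than at runtime.
    if options.get("mcp_servers"):
        raise ValueError(
            "mcp_servers is not supported by this SDK sketch; "
            "configure MCP servers via the CLI instead"
        )


validate_query_options({"model": "x"})  # fine: option absent
```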

* fix(sdk-python): fix ruff lint errors in new tests

- Use ControlRequestTimeoutError instead of bare Exception (B017)
- Fix import sorting for stdlib vs third-party (I001)
- Break long line to stay within 88-char limit (E501)

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* style(sdk-python): apply ruff format to new tests

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-25 07:02:58 +08:00
Fu Yuchen
93cbad24b1
fix(core): preserve reasoning_content during session resume and active sessions (GH#3579) (#3590)
* fix(core): preserve reasoning_content during session resume and active sessions (GH#3579)

* chore(core): remove dead thinkingThresholdMinutes config after latch removal (GH#3579)
2026-04-24 17:49:05 +08:00
顾盼
aeeb2976d6
feat(web-search): remove built-in web_search tool, replace with MCP-based approach (#3502)
* feat(web-search): add GLM (ZhipuAI) web search provider

- Add GlmProvider class implementing BaseWebSearchProvider using the
  ZhipuAI Web Search API (https://open.bigmodel.cn/api/paas/v4/web_search)
- Support multiple search engines: search_std, search_pro, search_pro_sogou,
  search_pro_quark
- Support optional config: maxResults, searchIntent, searchRecencyFilter,
  contentSize, searchDomainFilter
- Truncate query to 70 characters per API limit
- Register 'glm' in the provider discriminated union (types.ts) and
  createProvider() switch (index.ts)
- Add GlmProviderConfig to settingsSchema, ConfigParams, and Config class
- Add --glm-api-key CLI flag and GLM_API_KEY env var support in webSearch.ts
- Forward GLM_API_KEY in sandbox environment
- Update provider priority list: Tavily > Google > GLM > DashScope
- Add 17 unit tests for GlmProvider and 4 integration tests in index.test.ts
- Update docs/developers/tools/web-search.md with GLM configuration,
  env vars, CLI args, pricing, and corrected DashScope billing info
- Fix stale OAuth/free-tier references in web-search.md

Closes #3496

* docs(web-search): fix DashScope note and add GLM server-side limitations

* fix(web-search): make DashScope provider work with standard API key, remove qwen-oauth dependency

- DashScopeProvider.isAvailable() now checks config.apiKey instead of authType
- Remove OAuth credential file reading and resource_url requirement
- Use standard DashScope endpoint: dashscope.aliyuncs.com/api/v1/indices/plugin/web_search
- Read DASHSCOPE_API_KEY env var and --dashscope-api-key CLI flag
- Forward DASHSCOPE_API_KEY into sandbox environment
- Update integration test to detect DASHSCOPE_API_KEY
- Update docs to reflect new API key based configuration

* feat(web-search): remove built-in web search tool

The web_search tool and all related provider implementations are removed.
Web search functionality will be provided via MCP integrations instead,
which is the direction the broader agent ecosystem is moving.

Removed:
- packages/core/src/tools/web-search/ (entire directory)
- packages/cli/src/config/webSearch.ts
- integration-tests/cli/web_search.test.ts
- ToolNames.WEB_SEARCH, ToolErrorCode.WEB_SEARCH_FAILED
- webSearch config in ConfigParams, Config class, settingsSchema
- CLI options: --tavily-api-key, --google-api-key, --google-search-engine-id,
  --glm-api-key, --dashscope-api-key, --web-search-default
- Sandbox env forwarding for TAVILY/GLM/DASHSCOPE/GOOGLE search keys
- web_search from rule-parser, permission-manager, speculation gate,
  microcompact tool set, and builtin-agents tool list

* fix: remove websearch reference

* docs: remove websearch tool

* docs: add breaking-change guide

* fix: address review feedback
2026-04-24 11:29:02 +08:00
Shaojin Wen
d36f12c4c4
feat(session): auto-title sessions via fast model, add /rename --auto (#3540)
* feat(session): auto-title sessions via fast model, add /rename --auto

The /rename work in #3093 generates kebab-case titles only when the user
explicitly runs `/rename` with no args; until they do, the session picker
shows the first user prompt (often truncated or misleading). This change
adds a sentence-case auto-title that fires once per session after the
first assistant turn, using the configured fast model.

New service: `packages/core/src/services/sessionTitle.ts` —
`tryGenerateSessionTitle(config, signal)` returns a discriminated outcome
(`{ok: true, title, modelUsed}` | `{ok: false, reason}`) so callers can
either handle failures generically or map reasons to actionable messages.
Prompt shape: 3-7 words, sentence case, good/bad examples including a
CJK row, JSON schema enforced via `baseLlmClient.generateJson`.
`maxAttempts: 1` — titles are cosmetic metadata and shouldn't fight
rate limits.

Trigger point: `ChatRecordingService.maybeTriggerAutoTitle` runs after
`recordAssistantTurn`. Fire-and-forget promise, guarded by:

- `currentCustomTitle` — don't overwrite any existing title.
- `autoTitleController` doubles as in-flight flag; a second turn while
  the first is still pending is a no-op.
- `autoTitleAttempts` cap of 3 — the first assistant turn may be a
  pure tool-call with no user-visible text; retry for a handful of
  turns until a title lands. Cap bounds total waste.
- `!config.isInteractive()` — headless CLI (`qwen -p`, CI) never auto-
  titles; spending fast-model tokens on a one-shot session is waste.
- `autoTitleDisabledByEnv()` — `QWEN_DISABLE_AUTO_TITLE=1` opt-out.
- `config.getFastModel()` falsy — skip entirely rather than falling
  back to the main model; auto-titling on main-model tokens is too
  expensive to be silent.
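
The guard chain above condenses to a single boolean; a hypothetical Python rendering (field names illustrative, the real checks live on ChatRecordingService):

```python
def should_attempt_auto_title(state: dict) -> bool:
    # Every guard must pass before spending fast-model tokens.
    return (
        not state.get("current_custom_title")       # never overwrite a title
        and not state.get("in_flight")              # one attempt at a time
        and state.get("attempts", 0) < 3            # bounded retries
        and state.get("interactive", False)         # headless never titles
        and not state.get("disabled_by_env", False) # QWEN_DISABLE_AUTO_TITLE
        and bool(state.get("fast_model"))           # no main-model fallback
    )
```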

Persistence: `CustomTitleRecordPayload` grows a `titleSource: 'auto' |
'manual'` field. Absent on pre-change records (treated as `undefined`
→ manual, safe default so a user's pre-upgrade `/rename` is never
silently reclassified). `SessionPicker` renders `titleSource === 'auto'`
titles in dim (secondary) color; manual stays full contrast. On resume,
the persisted source is rehydrated into `currentTitleSource` — without
this, finalize's re-append would rewrite an auto title as manual on
every resume cycle.

Cross-process manual-rename guard: when two CLI tabs target the same
JSONL, in-memory state can diverge. Before writing an auto record, the
IIFE re-reads the file via `sessionService.getSessionTitleInfo`. If a
`/rename` from another process landed as manual, bail and sync local
state — never clobber a deliberately-chosen manual title with a model
guess. Cost is one 64KB tail read per successful generation.

`finalize()` aborts the in-flight controller before re-appending the
title record. Session switch / shutdown doesn't have to wait on a slow
fast-model call.

New user-facing command: `/rename --auto` regenerates via the same
generator — explicit user trigger, overwrites whatever's there (manual
or auto) because the user asked. Errors route through
`autoFailureMessage(reason)` so `empty_history`, `model_error`,
`aborted`, etc. each get actionable guidance rather than a generic
"could not generate". `/rename -- --literal-name` is the sentinel for
titles that start with `--`; unknown `--flag` tokens error with a hint
pointing at the sentinel. Existing `/rename <name>` and bare `/rename`
(kebab-case via existing path) are unchanged, except the kebab path now
prefers fast model when available and runs its output through
`stripTerminalControlSequences` (same ANSI/OSC-8 hardening as the
sentence-case path).

New shared util: `packages/core/src/utils/terminalSafe.ts` —
`stripTerminalControlSequences(s)` strips OSC (\x1b]...\x07|\x1b\\), CSI
(\x1b[...[a-zA-Z]), SS2/SS3 leaders, and C0/C1/DEL as a backstop. A
model-returned `\x1b[2J` or OSC-8 hyperlink escape would otherwise
execute on every SessionPicker render; both sentence-case and kebab
paths now route titles through the helper before they reach the JSONL
or the UI.
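
A hedged Python re-sketch of the TypeScript helper's idea: strip OSC sequences (ESC ] ... BEL or ESC \), CSI sequences (ESC [ ... final byte), SS2/SS3 leaders, then all C0/C1 controls and DEL as a backstop, so an LLM-returned title cannot emit live escapes. The regexes are simplified, not the real implementation.

```python
import re

_OSC = re.compile(r"\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)")  # OSC ... BEL|ST
_CSI = re.compile(r"\x1b\[[0-9;:?]*[a-zA-Z]")            # CSI ... final byte
_SS = re.compile(r"\x1b[NO]")                            # SS2/SS3 leaders
_CTRL = re.compile(r"[\x00-\x1f\x7f-\x9f]")              # C0/C1/DEL backstop


def strip_terminal_control_sequences(s: str) -> str:
    for pattern in (_OSC, _CSI, _SS, _CTRL):
        s = pattern.sub("", s)
    return s
```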

Tail-read extractor: `extractLastJsonStringFields(text, primaryKey,
otherKeys, lineContains)` reads multiple fields from the same matching
line in a single pass. Two separate tail scans could return a mismatched
pair (primary from a newer record, secondary from an older one with only
the primary set); the new helper guarantees the pair is atomic. Validates
a proper closing quote on the primary value so a crash-truncated trailing
record can't win the latest-match race. `readLastJsonStringFieldsSync`
is its file-reading wrapper — same tail-window fast path and full-file
fallback as the single-field version, plus a `MAX_FULL_SCAN_BYTES = 64MB`
cap so a corrupt multi-GB session file can't freeze the picker. Session
reads now open with `O_NOFOLLOW` (falls back to plain RDONLY on Windows
where the constant isn't exposed) — defense in depth against a symlink
planted in `~/.qwen/projects/<proj>/chats/`.
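
The tail-window fast path and atomic single-line read can be sketched in Python (a simplified illustration of the idea; the real extractor avoids full JSON parsing and adds a full-file fallback):

```python
import json
import os
import tempfile


def read_last_matching_line(path, line_contains, tail_bytes=64 * 1024):
    # Read only the last 64KB, scan backwards for the newest line that
    # contains the marker, and parse that single line so all returned
    # fields are guaranteed to come from the SAME record.
    flags = os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0)  # absent on Windows
    fd = os.open(path, flags)
    try:
        size = os.fstat(fd).st_size
        os.lseek(fd, max(0, size - tail_bytes), os.SEEK_SET)
        tail = os.read(fd, tail_bytes).decode("utf-8", errors="replace")
    finally:
        os.close(fd)
    for line in reversed(tail.splitlines()):
        if line_contains in line:
            try:
                return json.loads(line)
            except json.JSONDecodeError:
                continue  # crash-truncated trailing record: keep scanning
    return None


# tiny demo: the newest matching record wins
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"subtype":"custom_title","title":"old"}\n')
    f.write('{"subtype":"custom_title","title":"new"}\n')
    demo_path = f.name
record = read_last_matching_line(demo_path, '"subtype":"custom_title"')
os.unlink(demo_path)
```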

Character handling: `flattenToTail` on the LLM prompt drops a dangling
low surrogate after `slice(-1000)` — otherwise a CJK supplementary char
or emoji cut mid-pair produces invalid UTF-16 that some providers 400.
`sanitizeTitle` applies the same surrogate scrub after max-length trim,
and strips paired CJK brackets (`「」 『』 【】 〈〉 《》`) as whole units so
a `【Draft】 Fix login` doesn't leave a dangling `】` after leading-char
strip. `lineContains` in the title reader is tightened from the loose
substring `'custom_title'` to `'"subtype":"custom_title"'` so user text
containing the literal `custom_title` can't shadow a real record.
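
The surrogate hardening translates to Python with one caveat: Python strings are code points, so unlike a JS `.slice()` they cannot cut a pair in half, and the equivalent scrub only needs to drop lone surrogates that arrive via lenient decoding (a sketch, not the real sanitizer):

```python
def scrub_lone_surrogates(s: str) -> str:
    # Drop any code point in the surrogate range U+D800..U+DFFF; paired
    # astral characters (emoji, CJK supplementary) are single code points
    # in Python and pass through untouched.
    return "".join(ch for ch in s if not 0xD800 <= ord(ch) <= 0xDFFF)
```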

Tests: 46 new unit tests across
- `sessionTitle.test.ts` (22): success/all-failure-reasons, tool-call
  filter, tail-slice, surrogate scrub, ANSI/OSC-8 strip, CJK brackets.
- `chatRecordingService.autoTitle.test.ts` (15): trigger/skip matrix,
  in-flight guard, abort propagation on finalize, manual/auto/legacy
  resume symmetry, cross-process race, env opt-out, retry-after-
  transient.
- `sessionStorageUtils.test.ts` (13): single-pass extractor, straddle
  boundary, truncated trailing record, lineContains, multi-field atom.
- `renameCommand.test.ts` (8): `--auto` success, all reasons, sentinel,
  unknown-flag hint, positional rejection, manual/SessionService
  fallbacks.

* docs(session): design doc for auto session titles

Matches the session-recap design doc shape (Overview / Triggers /
Architecture / Prompt Design / History Filtering / Persistence /
Concurrency / Configuration / Observability / Out of Scope) and adds a
Security Hardening section unique to the title path — titles render
directly in the picker and persist in user-readable JSONL, so
LLM-returned control sequences are an attack surface the recap path
doesn't have.

Captures decisions a code-only reader has to reverse-engineer:

- Why `maxAttempts: 1` (best-effort cosmetic metadata; no retry loop).
- Why `autoTitleAttempts` cap is 3 (first turn can be pure tool-call).
- Why the auto trigger does NOT fall back to the main model but
  session-recap does (auto-title fires on every turn; silently charging
  main-model tokens is a bill surprise).
- Why `titleSource: undefined` stays unwritten on legacy records (no
  rewrite risks silently reclassifying user intent).
- Why the cross-process re-read sits between the LLM await and the
  append (manual wins at both in-process and on-disk layers).
- Why `finalize()`'s abort tolerates a controller swap (in-flight
  identity check).
- Why JSON-schema function calling instead of tag extraction (avoid
  reasoning preamble bleed; cross-provider reliability).

Placed at docs/design/session-title/ alongside session-recap,
compact-mode, fork-subagent, and other per-feature design docs. No
sidebar index update required — the design folder is unindexed.

* test(rename): pin model choice in bare /rename kebab path

Addresses reviewer feedback: the bare `/rename` model selection
(`config.getFastModel() ?? config.getModel()`) had no test pinning
it either way. Previous tests mocked `getHistory: []`, which exits
the function before the model is ever chosen, so a silent regression
to either direction (always-main or always-fast) would pass CI.

Two explicit cases now:
- fastModel set → `generateContent` called with `model: 'qwen-turbo'`.
- fastModel unset → `generateContent` called with `model: 'main-model'`.

The tests intentionally mock a non-empty history so the kebab path
reaches the generateContent call site instead of bailing on empty input.
2026-04-23 20:37:05 +08:00
顾盼
2710bdec0d
feat(cli): Phase 2 — slash command multi-mode expansion, ACP fixes, and UX improvements (#3377)
* refactor(cli): replace slash command whitelist with capability-based filtering (Phase 1)

## Summary

Replace the hardcoded ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE whitelist with a
unified, capability-based command metadata model. This is Phase 1 of the slash
command architecture refactor described in docs/design/slash-command/.

## Key changes

### New types (types.ts)
- Add ExecutionMode ('interactive' | 'non_interactive' | 'acp')
- Add CommandSource ('builtin-command' | 'bundled-skill' | 'skill-dir-command' |
  'plugin-command' | 'mcp-prompt')
- Add CommandType ('prompt' | 'local' | 'local-jsx')
- Extend SlashCommand interface with: source, sourceLabel, commandType,
  supportedModes, userInvocable, modelInvocable, argumentHint, whenToUse,
  examples (all optional, backward-compatible)

### New module (commandUtils.ts + commandUtils.test.ts)
- getEffectiveSupportedModes(): 3-priority inference
  (explicit supportedModes > commandType > CommandKind fallback)
- filterCommandsForMode(): replaces filterCommandsForNonInteractive()
- 18 unit tests
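
The 3-priority inference can be sketched as follows (a hypothetical Python rendering of the commandUtils.ts logic; the real mode/type mapping may differ in detail):

```python
ALL_MODES = ("interactive", "non_interactive", "acp")


def effective_supported_modes(cmd: dict) -> tuple:
    if cmd.get("supportedModes"):          # 1. explicit declaration wins
        return tuple(cmd["supportedModes"])
    command_type = cmd.get("commandType")  # 2. infer from commandType
    if command_type == "prompt":
        return ALL_MODES                   # prompt expansion works anywhere
    if command_type == "local-jsx":
        return ("interactive",)            # renders a TUI component
    return ("interactive",)                # 3. conservative fallback
```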

### Whitelist removal (nonInteractiveCliCommands.ts)
- Remove ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE constant
- Remove filterCommandsForNonInteractive() function
- Replace with CommandService.getCommandsForMode(mode)

### CommandService enhancements (CommandService.ts)
- Add getCommandsForMode(mode: ExecutionMode): filters by mode, excludes hidden
- Add getModelInvocableCommands(): reserved for Phase 3 model tool-call use

### Built-in command annotations (41 files)
Annotate every built-in command with commandType:
- commandType='local' + supportedModes all-modes: btw, bug, compress, context,
  init, summary (replaces the 6-command whitelist)
- commandType='local' interactive-only: export, memory, plan, insight
- commandType='local-jsx' interactive-only: all remaining ~31 commands

### Loader metadata injection (4 files)
Each loader stamps source/sourceLabel/commandType/modelInvocable on every
command it emits:
- BuiltinCommandLoader: source='builtin-command', modelInvocable=false
- BundledSkillLoader: source='bundled-skill', commandType='prompt',
  modelInvocable=true
- command-factory (FileCommandLoader): source per extension/user origin,
  commandType='prompt', modelInvocable=!extensionName
- McpPromptLoader: source='mcp-prompt', commandType='prompt', modelInvocable=true

### Bug fix
MCP_PROMPT commands were incorrectly excluded from non-interactive/ACP modes by
the old whitelist logic. commandType='prompt' now correctly allows them in all
modes.

### Session.ts / nonInteractiveHelpers.ts
- ACP session calls getAvailableCommands with explicit 'acp' mode
- Remove allowedBuiltinCommandNames parameter from buildSystemMessage() —
  capability filtering is now self-contained in CommandService

* fix test ci

* feat(cli): Phase 2 slash command expansion + ACP fixes + UX improvements

Phase 2.1 - Command mode expansion:
- Extend 13 built-in commands to support non_interactive/acp modes
- A class: export, plan, statusline - supportedModes only
- A+ class: language, copy, restore - add non-interactive branches
- A' class: model, approvalMode - handle dialog paths in non-interactive
- B class: about, stats, insight, docs, clear - full non-interactive branches
- context: format output as readable Markdown instead of raw JSON
- export: use HTML as default format when no subcommand given

Phase 2.2 - SkillTool integration:
- SkillTool now consumes CommandService.getModelInvocableCommands()

Phase 2.3 - Mid-input slash ghost text:
- Replace mid-input dropdown completion with inline ghost text
- Match Claude Code behavior: gray dimmed completion hint in input box
- Tab accepts the ghost text completion
- Add findMidInputSlashCommand() and getBestSlashCommandMatch() utilities

ACP session bug fixes:
- Fix executionMode undefined in interactive mode (slashCommandProcessor)
- Fix slash command output not visible in Zed (use emitAgentMessage)
- Fix newline rendering in Zed (Markdown hard line-break)
- Fix history replay merging consecutive user messages (recordSlashCommand)
- Fix /clear not clearing model context (dynamic chat reference)

* feat: inline complete only for modelInvocable

* fix memory command

* fix: pass 'non_interactive' mode explicitly to getAvailableCommands

- Fix critical bug in nonInteractiveHelpers.ts: loadSlashCommandNames was
  calling getAvailableCommands without specifying mode, causing it to default
  to 'acp' instead of 'non_interactive'. Commands with supportedModes that
  include 'non_interactive' but not 'acp' would be silently excluded.
- Apply the same fix in systemController.ts for the same reason.
- Update test mock to delegate filtering to production filterCommandsForMode()
  instead of duplicating the logic inline, preventing divergence.

Fixes review comments by wenshao and tanzhenxin on PR #3283.

* fix: resolve TypeScript type error in nonInteractiveHelpers.test.ts

* fix test ci

* fix mcp prompt in skill manager

* revert pr#3345

* fix test ci

* feat(cli): adapt /insight for non_interactive mode with message return

- non_interactive: run generateStaticInsight() synchronously with no-op
  progress callback, return { type: 'message' } with output path
- acp: keep existing stream_messages path with progress streaming
- interactive: unchanged

Add tests for non_interactive success and error paths.

Update phase2-technical-design.md and roadmap.md to reflect the
three-way mode split and clarify that MCP prompts do not need
modelInvocable (they are called via native MCP tool call mechanism).

* fix(cli): ghost text only shown when cursor is at end of slash token

Use strict equality (!==) instead of > in findMidInputSlashCommand so that
ghost text is only computed and Tab-accepted when the cursor sits exactly at
the trailing edge of the partial command token.

Previously, with the cursor inside an already-typed token (e.g. /re|view),
the ghost text suffix would still be shown and pressing Tab would insert it
at the cursor position, producing a duplicated tail. Using strict equality
makes ghost text disappear as soon as the cursor moves inside the token.

Add unit tests for findMidInputSlashCommand covering cursor-at-end,
cursor-inside-token, cursor-past-token, start-of-line, and
no-space-before-slash cases.
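
A hedged Python re-sketch of the strict-equality rule in findMidInputSlashCommand: locate the /command token nearest before the cursor, then require cursor == token end. With the cursor inside an already-typed token ("/re|view"), no ghost text is offered, so Tab can never insert a duplicated tail. The token regex and matching strategy are simplified assumptions.

```python
import re

_TOKEN = re.compile(r"(?:^|\s)(/[A-Za-z0-9_-]*)")


def ghost_suffix(text: str, cursor: int, commands) -> str:
    match = None
    for candidate in _TOKEN.finditer(text):
        if candidate.start(1) < cursor:
            match = candidate
    # Strict equality: the cursor must sit exactly at the token's end.
    if match is None or cursor != match.end(1):
        return ""
    partial = match.group(1)[1:]
    for name in commands:
        if name.startswith(partial) and name != partial:
            return name[len(partial):]  # the dimmed suffix to render
    return ""
```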

* fix(cli): support /model <model-id> in non-interactive and ACP modes

Previously, /model <model-id> (without --fast) fell through to the
non-interactive branch that only returned the current model info and
incorrectly told users to use --fast. Now:

- /model <model-id>  → sets the main model via settings + config.setModel()
- /model             → shows current model with correct usage hint
- /model --fast <id> → unchanged (sets fast model)

Fixes the inconsistency flagged in PR review: the help text said to use
'/model <model-id>' but the command returned a dialog action which is
unsupported in non-interactive mode.

* fix(cli): declare supportedModes on doctorCommand to enable non-interactive and ACP

The command's action already had non-interactive handling (returns a JSON
message with check results), but without supportedModes declared the
BUILT_IN fallback restricted it to interactive-only so it was never
registered in non_interactive or acp sessions.

* feat(skills): add SkillCommandLoader for user/project/extension skills as slash commands

- New SkillCommandLoader loads user, project, and extension level SKILL.md
  files as slash commands (previously only bundled skills were slash-invocable)
- Extension skills follow plugin-command rules: modelInvocable only when
  description or whenToUse is present
- User/project skills are always modelInvocable (matching bundled behavior)
- skill-manager now injects extensionName when loading extension-level skills
- Add when_to_use and disable-model-invocation frontmatter support to SKILL.md
  and .md command files (SkillConfig, markdown-command-parser, command-factory,
  BundledSkillLoader, FileCommandLoader)
- SkillTool filters out skills with disableModelInvocation and includes
  whenToUse in the skill description shown to the model
- 16 unit tests for SkillCommandLoader covering all cases

* docs: update phase2 design doc to reflect final decisions on plan/statusline/copy/restore

These four commands are intentionally kept as interactive-only by design:
- /plan and /statusline: tightly coupled with interactive multi-turn UI
- /copy and /restore: clipboard and snapshot restore are inherently interactive

Update design doc classification table, section 4.2, 4.3, 5.2, 5.3,
file change summary, test requirements, behavior analysis table,
and implementation batch descriptions to reflect this decision.

* feat(cli): re-implement slashCommands.disabled denylist based on current refactored code

Adapts the feature originally introduced in pr#3445 to the current
CommandService / Phase-2 refactored code.

Sources (merged, de-duplicated, case-insensitive):
  - settings key slashCommands.disabled (string[], UNION merge)
  - --disabled-slash-commands CLI flag (comma-separated or repeated)
  - QWEN_DISABLED_SLASH_COMMANDS environment variable

Enforcement points:
  - CommandService.create() accepts optional disabledNames: ReadonlySet<string>
    and removes matching commands post-rename, so disabled commands never appear
    in autocomplete, mid-input ghost text, or model-invocable commands list.
  - slashCommandProcessor (interactive TUI) passes the denylist to
    CommandService.create so disabled commands are absent from dropdown/ghost text.
  - nonInteractiveCliCommands.handleSlashCommand() keeps allCommands unfiltered
    to distinguish disabled vs unknown; disabled commands return unsupported with
    a "disabled by the current configuration" reason (not no_command).
  - getAvailableCommands() (ACP) passes the denylist to CommandService.create.

Config plumbing:
  - core/Config: ConfigParameters.disabledSlashCommands + getDisabledSlashCommands()
  - cli/config: CliArgs.disabledSlashCommands + yargs option + loadCliConfig merge
  - settingsSchema: slashCommands.disabled (MergeStrategy.UNION)
  - settings.schema.json: regenerated

Tests: 28 pass (CommandService x4, nonInteractiveCliCommands x3 new cases)

* feat(cli): complete slashCommands.disabled coverage from pr#3445

Fill in the three items that were missing from the initial re-implementation:

- packages/cli/src/config/settings.test.ts: add UNION-merge test for
  slashCommands.disabled across user and workspace scopes
- packages/cli/src/nonInteractiveCli.test.ts: add getDisabledSlashCommands
  mock to the shared mockConfig fixture
- docs/users/configuration/settings.md: add slashCommands section (table +
  example + note) and --disabled-slash-commands row in the CLI args table

* fix(cli): match disabled slash commands by alias as well as primary name

The denylist previously only checked cmd.name (the primary/canonical name),
so disabling a command by its alias (e.g. 'about' for the 'status' command)
had no effect. Fix both CommandService.create() and the isDisabled() helper
in nonInteractiveCliCommands.ts to also check altNames.

Also improve the user-facing error message to show the token the user actually
typed (e.g. /about) instead of always showing the primary name (/status).
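
The alias-aware match can be sketched in Python (hypothetical shape; `disabled_names` is assumed to be pre-lowercased):

```python
def is_disabled(cmd: dict, disabled_names: set) -> bool:
    # Check the primary name AND every altName, case-insensitively, so
    # disabling "about" also hides the "status" command it aliases.
    candidates = [cmd.get("name", ""), *cmd.get("altNames", [])]
    return any(name.lower() in disabled_names for name in candidates)
```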
2026-04-22 19:12:44 +08:00
Shaojin Wen
d71f2fab70
feat(cli): cap inline shell output with configurable line limit (#3508)
* feat(cli): cap inline shell output with configurable line limit

Long-running shell commands (npm install, find /, build logs) currently
fill the viewport with the full visible PTY buffer (up to availableHeight,
~24 lines on a typical terminal). The output dominates the screen and
pushes prior context off the top.

This caps inline ANSI shell output to a small window (default 5 lines,
matching Claude Code's ShellProgressMessage). The hidden line count is
already surfaced via the existing `+N lines` indicator in
`ShellStatsBar`, so users still know how much was elided.

The cap applies only when nothing in the existing escape-hatch set is
true:
  - `forceShowResult` (errors, !-prefix user-initiated commands,
    tools awaiting confirmation, agents pending confirmation)
  - `isThisShellFocused` (ctrl+f focus on a running embedded PTY shell)
  - `ui.shellOutputMaxLines = 0` (user opt-out)

Also adds a new `ui.shellOutputMaxLines` setting (default 5) so users
can adjust or disable the cap. The SettingsDialog renders it
automatically via the existing `type: 'number'` schema path.

Notes on scope:
  - Only the `'ansi'` display branch is capped. `'string'`, `'diff'`,
    `'todo'`, `'plan'`, `'task'` renderers are untouched.
  - `AnsiOutputDisplay` is only produced by shell tools (`shell.ts`,
    `shellCommandProcessor.ts`), so other tool outputs are unaffected.
  - The `+N lines` count is bounded by the headless xterm buffer height
    (~30 rows) — a pre-existing limitation of the buffer-based stats,
    not introduced here.

Tests:
  - 4 new ToolMessage tests cover cap default, forceShowResult bypass,
    settings disable (cap=0), and custom cap value.
  - The existing `MockAnsiOutputText` / `MockShellStatsBar` mocks were
    extended to print `availableTerminalHeight` / `displayHeight` so
    the cap behavior is asserted at the prop level.

* fix(cli): apply shell output cap to completed string display too

Initial PR caught only the streaming ANSI branch. AI shell tools emit
the final completed result through `shell.ts:returnDisplayMessage =
result.output`, which is a plain string. That string went through
`StringResultRenderer` with the unmodified `availableHeight`, so the
cap was effectively bypassed for the steady-state display the user
actually sees most of the time.

Verified manually in tmux: a `seq 1 30` invocation by the AI now
collapses to "first 26 lines hidden ... 27 28 29 30" instead of
listing all 30 rows. `!`-prefix `seq 1 30` still expands fully via
the existing `isUserInitiated → forceShowResult` bypass.

Changes:
  - Detect shell tool by name (matches existing `SHELL_COMMAND_NAME` /
    `SHELL_NAME` checks already used in this file)
  - Rename `ansiAvailableHeight` → `shellCapHeight` since it now
    governs the string branch as well
  - Pass `shellCapHeight` to `StringResultRenderer`; the value
    falls back to `availableHeight` for non-shell tools so other
    tools' string output is unaffected
  - Two new tests: shell completed string is capped; non-shell
    string is not
  - Two existing tests updated to use `name="Shell"` so they actually
    exercise the cap path (would previously have passed by accident
    since the original code didn't check tool name)

Also picks up the auto-regenerated VSCode IDE companion settings
schema entry for `ui.shellOutputMaxLines`.

* fix(cli): symmetrize ANSI/string row counts and clamp shell cap input

Addresses two non-blocking review observations on #3508.

Off-by-one between paths:
  MaxSizedBox reserves one row for its overflow banner when content
  exceeds maxHeight (visibleContentHeight = max - 1). The ANSI path
  pre-slices to N in AnsiOutputText so MaxSizedBox sees exactly N
  rows and renders all N — plus the separate ShellStatsBar line.
  The string path passes the raw cap and lets MaxSizedBox handle
  overflow, so it shows N-1 content rows + the banner.

  Result with cap=5: ANSI showed 5+stats, string showed 4+banner.
  Pass shellCapHeight + 1 to StringResultRenderer when capping so
  both paths render N visible content rows. Verified in tmux: the
  completed Shell tool box now reports `... first 25 lines hidden ...`
  followed by lines 26-30 (was 26 + lines 27-30).

Setting validation:
  Schema accepts any number; the dialog only rejects NaN. Negatives
  silently disabled the cap (only 0 is documented as off) and
  fractional values produced fractional slice counts. Added
  Math.max(0, Math.floor(value || 0)) at the use site so:
   - negatives → 0 → cap disabled (matches the documented opt-out)
   - fractions → floor → whole-row cap
   - non-numeric (raw settings.json edits) → 0 → cap disabled
  Schema-level minimum/integer constraints aren't supported by the
  current settings infrastructure (no other number setting uses
  them either), so the guard lives at the use site.
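
The use-site guard can be sketched as follows (`clampShellCap` is an illustrative name, not the actual helper):

```typescript
// Sketch of the use-site guard described above: negatives and non-numeric
// input collapse to 0 (cap disabled), fractions floor to whole rows.
function clampShellCap(value: unknown): number {
  const n = typeof value === 'number' ? value : 0;
  return Math.max(0, Math.floor(n || 0)); // NaN || 0 → 0, so NaN is safe too
}
```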

Tests:
  - Updated string-cap test to assert lines 26-30 visible (catches
    the +1 fix; was lines 27-30 before)
  - New parameterized test covers -1, 1.5, and a non-numeric value
2026-04-22 14:37:13 +08:00
Reid
d1c8dff4d2
feat(arena): add comparison summary for agent results (#3394)
Adds a summary view that runs after Arena agents finish, so users can
compare model outputs without opening each agent's conversation first.

Summary surface:
- Agent status overview
- Files changed in common vs. unique to one agent
- Per-agent approach summary generated through that agent's own provider
- Token / runtime / line-change / file-count metrics

Selection dialog now supports:
- p — toggle preview for the highlighted agent
- d — toggle detailed diff
- Enter — select winner
- x — discard all results
- Esc — cancel

Approach summary generation:
- Each agent's summary is generated through that agent's own content
  generator, keeping mixed-provider Arena sessions within their
  respective auth boundaries
- 20s timeout + AbortController per agent, bounded prompt inputs
  (finalText 2K, transcript 6K, diff 6K)
- Falls back to a deterministic "Changed N files ..." summary when no
  per-agent generator is available or on error

Diff summary now handles binary, rename-only, and mode-only diffs;
the previous heuristic required textual +/- hunks and would have
dropped those.

Resolves #2559
2026-04-22 05:31:19 +08:00
zhangxy-zju
ebe364d0b8
feat(retry): add persistent retry mode for unattended CI/CD environments (#3080)
* feat(retry): add persistent retry mode for unattended CI/CD environments

When running in CI/CD pipelines or background daemon mode, transient API
capacity errors (429/529) should not terminate long-running tasks after a
fixed number of retries. This adds an environment-aware persistent retry
mode that retries indefinitely for transient errors, with exponential
backoff capped at 5 minutes and heartbeat keepalives every 30 seconds to
prevent CI runner timeouts.
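
The capped backoff can be sketched like this (constants and names are illustrative, not the actual implementation):

```typescript
// Exponential backoff capped at 5 minutes, per the description above.
const MAX_BACKOFF_MS = 5 * 60 * 1000; // 5-minute cap

function backoffDelayMs(attempt: number, baseMs = 1000): number {
  return Math.min(baseMs * 2 ** attempt, MAX_BACKOFF_MS);
}
```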

* docs: add persistent retry mode documentation

Add environment variable entries (QWEN_CODE_UNATTENDED_RETRY, QWEN_CODE_BG)
to the settings reference, and a new "Persistent Retry Mode" section to the
headless mode docs covering activation, behavior, and CI/CD usage examples.

* refactor(retry): simplify to single explicit env var QWEN_CODE_UNATTENDED_RETRY

Remove QWEN_CODE_BG and CI=true as activation triggers for persistent retry.
Having multiple env vars with identical behavior adds confusion, and silently
activating infinite retry on CI=true is dangerous — a regular CI test hitting
a 429 would hang forever instead of failing fast.

* fix(retry): address PR review feedback

- Forward caller's abortSignal into retryWithBackoff in both
  baseLlmClient.ts and geminiChat.ts so persistent waits remain
  cancellable (wenshao)
- Re-apply maxBackoff and capMs after jitter so delays strictly
  respect stated caps (Copilot)
- Respect shouldRetryOnError in persistent mode so callers can
  force fast-fail even for transient 429/529 errors (Copilot)
- Guard sleepWithHeartbeat against infinite loop when heartbeat
  interval is <= 0 via Math.max(1, ...) (Copilot)
- Normalize isEnvTruthy with trim/toLowerCase for robust env
  var parsing across CI conventions (Copilot)

* test(retry): add missing UT for shouldRetryOnError override and heartbeat zero-interval guard

* fix(retry): do not cap Retry-After delays at maxBackoff

Server-specified Retry-After values should only be limited by the
absolute cap (capMs/6h), not the exponential backoff cap (maxBackoff/5min).
Jitter is also skipped for Retry-After since the server already specified
the exact wait time.
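
The delay-selection rule can be sketched as follows (names and the jitter formula are hypothetical; only the cap ordering reflects the commit):

```typescript
// A server Retry-After bypasses both maxBackoff and jitter; only the
// absolute cap (capMs) applies. Computed backoff gets jitter and both caps.
function nextDelayMs(
  retryAfterMs: number | undefined,
  expDelayMs: number,
  maxBackoffMs: number,
  capMs: number,
): number {
  if (retryAfterMs !== undefined) {
    return Math.min(retryAfterMs, capMs); // no jitter, no maxBackoff cap
  }
  const jittered = expDelayMs * (0.5 + Math.random()); // assumed jitter shape
  return Math.min(jittered, maxBackoffMs, capMs);
}
```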

* refactor(retry): align isUnattendedMode with project env parsing convention

Replace custom isEnvTruthy (trim + toLowerCase) with strict matching
(val === 'true' || val === '1') to match parseBooleanEnvFlag used
elsewhere in the codebase. Prevents inconsistent behavior where
'TRUE' or ' 1 ' would activate persistent retry here but not in
telemetry or other env-driven features.
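
The strict parsing convention can be sketched standalone (the env-as-parameter signature here is for illustration only):

```typescript
// Strict env-flag check matching the parseBooleanEnvFlag convention
// described above: only the exact strings 'true' and '1' activate it.
function isUnattendedMode(env: Record<string, string | undefined>): boolean {
  const val = env['QWEN_CODE_UNATTENDED_RETRY'];
  return val === 'true' || val === '1';
}
```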

* test(retry): add Retry-After handling tests for persistent mode

Cover three key behaviors:
- Retry-After is NOT capped at maxBackoff (only at capMs)
- Retry-After IS capped at persistentCapMs absolute limit
- Retry-After delays have no jitter applied

* fix(test): add isUnattendedMode to retry.js mock in baseLlmClient tests

The existing vi.mock for retry.js only exported retryWithBackoff.
After adding isUnattendedMode to the retry module, baseLlmClient.ts
imports it, causing all 10 generateJson tests to fail with
'No "isUnattendedMode" export is defined on the mock'.

* fix(retry): wire persistent retry mode into client.ts generateContent

Forward persistentMode and abortSignal to retryWithBackoff() in
GeminiClient.generateContent(), matching the existing wiring in
baseLlmClient.ts and geminiChat.ts.
2026-04-21 22:08:11 +08:00
Shaojin Wen
519e5aa1de
fix(core): recover from truncated tool calls via multi-turn continuation (#3313)
* fix(core): recover from truncated tool calls via multi-turn continuation (#3049)

When large tool calls (e.g., WriteFile with big HTML) exceed the output
token limit, the model's response gets truncated and required parameters
like file_path are missing. Previously this surfaced as a confusing
"params must have required property" error.

Three-layer defense:

1. Escalate to model's actual output limit (not fixed 64K). Models with
   128K output (Claude Opus, GPT-5) now use their full capacity.

2. Multi-turn recovery: if the escalated response is still truncated,
   keep the partial response in history and inject a recovery message
   ("Resume directly — pick up mid-thought") so the model continues
   from where it left off. Up to 3 recovery attempts before falling
   back to the tool scheduler's guidance.

3. Stronger truncation guidance as fallback: "you MUST split" instead
   of "consider splitting".

Also fixes:
- Clear toolCallRequests on RETRY to prevent duplicate tool execution
- Add isContinuation flag to RETRY events so the UI preserves text
  buffers during recovery (continuation) but resets them during
  escalation (fresh restart)
- Catch errors during recovery to prevent dangling history entries

* docs: update adaptive output token escalation design for recovery mechanism

Update the design doc to reflect:
- Escalation now targets model's actual output limit (64K floor)
- Multi-turn recovery loop after escalation (up to 3 attempts)
- isContinuation flag on RETRY events
- Recovery error handling (pop dangling message, break)
- Updated constants table and model-specific escalation limits
- New design decision: why multi-turn recovery over progressive escalation

* fix: remove competitor reference from code comment

* fix: address review feedback on recovery mechanism

Three correctness fixes from @tanzhenxin's review:

1. Partial text lost during continuation (useGeminiStream.ts):
   On continuation RETRY, setPendingHistoryItem(null) cleared the pending
   gemini item. The next Content event then saw a null pending item,
   created a fresh one, and reset geminiMessageBuffer = eventValue —
   discarding the preserved partial text. Now the pending item AND
   buffers are kept on continuation, so the continuation appends.

2. Recovery on truncated tool-call turns (geminiChat.ts):
   When the truncated turn already contains a complete functionCall,
   appending a user recovery message produces model(functionCall) →
   user(text) with no intervening functionResponse — an invalid API
   sequence. Now recovery skips turns with functionCall parts and
   defers to the tool scheduler's layer-3 fallback.

3. Recovery errors swallowed after partial chunks (geminiChat.ts):
   If a recovery attempt yielded chunks then failed, the catch block
   broke without emitting any terminal signal, leaving the UI with
   partial text and no Finished event. Now emits a synthetic
   finishReason=STOP chunk in the catch so the UI gets a proper
   terminal signal.

* test: add coverage for output token recovery loop

Four targeted tests for the recovery mechanism introduced in the
truncated-tool-call-recovery PR:

1. Recovery loop fires when escalated response is also truncated:
   initial MAX_TOKENS → escalation MAX_TOKENS → recovery STOP. Verifies
   two RETRY events (one escalation, one continuation) and three API
   calls.

2. Recovery is skipped when truncated turn contains a functionCall:
   prevents the invalid model(functionCall) → user(text) sequence.
   Verifies no continuation RETRY and history ends with the functionCall
   intact.

3. Recovery attempts are capped at MAX_OUTPUT_RECOVERY_ATTEMPTS (3):
   persistent MAX_TOKENS triggers exactly 5 API calls (1 initial + 1
   escalation + 3 recovery).

4. Recovery catch block emits synthetic STOP chunk and pops dangling
   user message: when a recovery attempt fails (empty stream →
   InvalidStreamError), the UI gets a terminal signal and history
   ends on the model turn, not a dangling user recovery message.

* test: cover cross-iteration functionCall detection in recovery loop

Existing tests cover the functionCall guard when both initial and
escalated responses have functionCall. This adds a test for the
cross-iteration case: iter 1 returns text (recovery proceeds), iter 2
returns functionCall (recovery must break before iter 3).

Verifies:
- API called exactly 4 times (1 initial + 1 escalation + 2 recovery)
- History ends with the functionCall model turn, not a dangling user
  recovery message
- Iter 3's user recovery message is never pushed (guard fires at top
  of loop before recoveryCount increment)

* fix(core): cast synthetic STOP chunk via unknown for TS2352

The object literal {candidates, content, parts} doesn't structurally
overlap enough with GenerateContentResponse for TypeScript's strict
narrow cast. Casting through 'unknown' is required per TS2352.

Build error from CI:
  src/core/geminiChat.ts(651,24): error TS2352: Conversion of type '...'
  to type 'GenerateContentResponse' may be a mistake because neither
  type sufficiently overlaps with the other. If this was intentional,
  convert the expression to 'unknown' first.
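
The pattern can be illustrated with a simplified stand-in type (this is not the SDK's `GenerateContentResponse`):

```typescript
// TS2352 workaround as described above: a literal that does not
// structurally overlap the target type must be cast through 'unknown'.
interface GenerateContentResponse {
  candidates: Array<{ finishReason: string; content: { parts: unknown[] } }>;
}

const syntheticStop = {
  candidates: [{ finishReason: 'STOP', content: { parts: [] } }],
} as unknown as GenerateContentResponse;
```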

* test(core): tighten recovery history integrity assertions

Strengthen the "pop dangling recovery message" test to catch any
future regression that leaves consecutive same-role entries or an
empty last-model placeholder in history — conditions providers
reject on the next turn.

* fix(core): coalesce recovery pairs to avoid leaking control prompt

Previously every output-token recovery iteration left a (user, model)
pair in durable history where the user turn was the internal
OUTPUT_RECOVERY_MESSAGE control prompt. That prompt was then visible
to every later turn, biasing responses and polluting compression,
replay, and export.

Track successful recovery iterations and, after the recovery loop,
fold each completed pair back into the preceding model turn via a
new `coalesceRecoveryPairs` helper. Failed iterations already pop
their user turn in the catch block, so they need no coalescing.

Adds a targeted test that runs escalation + two successful recovery
iterations + a clean STOP, and asserts the merged history has
exactly one user turn and one model turn, no trace of the control
prompt text, and content ordered as B (escalation) + C + D.
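
The coalescing step can be sketched with simplified turn objects (the real helper operates on provider Content structures):

```typescript
// Sketch of coalesceRecoveryPairs as described above; types simplified.
interface Turn { role: 'user' | 'model'; text: string; }

// Fold each completed (user recovery prompt, model continuation) pair
// back into the preceding model turn, dropping the control prompt.
function coalesceRecoveryPairs(history: Turn[], pairs: number): Turn[] {
  if (pairs === 0) return history;
  const start = history.length - 2 * pairs;
  const out = history.slice(0, start);
  let merged = out[out.length - 1].text;
  for (let i = start; i < history.length; i += 2) {
    merged += history[i + 1].text; // keep the model text, drop the user turn
  }
  out[out.length - 1] = { ...out[out.length - 1], text: merged };
  return out;
}
```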
2026-04-21 17:04:24 +08:00
Shaojin Wen
afbb5e71db
fix(cli): rework session recap rendering and add blur threshold setting (#3482)
* feat(cli): make recap away-threshold configurable

The 5-minute blur threshold was hard-coded. Confirmed from Claude
Code's own binary (v2.1.113) that 5 minutes is their default as well
(and that they shift to 60 minutes when 1h prompt-cache is active) —
so the default stays, but expose it as
`general.sessionRecapAwayThresholdMinutes` for users who briefly
alt-tab often and don't want
recaps piling up, or who want to lower it for testing.

Non-positive / unset values fall back to the 5-minute default, so
dropping the key has the same behavior as before.

* fix(core): align recap prompt with Claude Code (1-2 sentences, ≤40 words)

The earlier "exactly one sentence, 80-char cap" was an over-correction
to a single in-the-moment ask. Walking that back: the natural shape of
"current task + next action" is two clauses, and forcing them into a
single sentence either crams them with a semicolon or drops the next
action entirely on complex sessions.

Adopt Claude Code's prompt verbatim (extracted from the v2.1.113
binary): "under 40 words, 1-2 plain sentences, no markdown. Lead with
the overall goal and current task, then the one next action. Skip
root-cause narrative, fix internals, secondary to-dos, and em-dash
tangents." Add a Chinese-budget note (~80 chars) and keep the
<recap>...</recap> wrapping that protects against reasoning-model
preambles leaking into the UI.

The sticky banner already re-measures controls height when the
recap toggles, so a 2-line render lays out cleanly.

Sweep "one-line" out of user-facing copy (settings description,
slash-command description, feature docs, design doc) so the
documentation matches the new shape.

* fix(cli): restore "one-line" in user-facing recap copy

Verified from the Claude Code v2.1.113 binary that the slash-command
description IS literally "Generate a one-line session recap now" even
though the underlying prompt allows 1-2 sentences. Claude Code is
deliberately setting a tighter user expectation than the prompt
guarantees, which keeps the surface feel "glanceable".

Mirror that asymmetry: keep the prompt at 1-2 sentences (the previous
commit) for behavioral parity, but put "one-line" back in the user-
visible copy (slash-command description, settings description, user
docs). Internal design doc keeps the accurate "1-2 sentence" wording.

* fix(cli): render recap inline in history to match Claude Code

Earlier I read the user's complaint that the recap "scrolled away" as
"the recap should be sticky above the input box," and built a sticky
banner accordingly. Disassembly of the Claude Code v2.1.113 binary
shows the actual behavior is the opposite: their away_summary is a
plain `type:"system", subtype:"away_summary"` message dispatched
through the standard message renderer (no Static, no anchor, no
flexbox pinning) — it scrolls with the conversation like every other
system message.

Tear out the sticky-banner machinery so recap matches that:

- Recap is back in the `HistoryItemWithoutId` union and `addItem`'d
  into history (both from `/recap` and from auto-trigger), so it
  serializes into session saves and behaves like every other history
  item — no special clear paths, no resume-wrapper, no layout-effect
  re-measure dance.
- `useAwaySummary` takes `addItem` again instead of a setter callback.
- `AwayRecapMessage` renders the way Claude Code does: a 2-column
  gutter with `※`, then bold "recap: " and italic content, all in
  dim color. Drop the prior `StatusMessage`-shaped layout that fused
  prefix and label into "※ recap:".
- Remove the AppContainer plumbing, the slashCommandProcessor state,
  the UIStateContext fields, the DefaultAppLayout / ScreenReader
  placement blocks, the test-utils mocks, and the noninteractive
  stub. Restore `useResumeCommand.handleResume` to a void return
  since callers no longer need the success boolean.

Sweep the design doc so the architecture diagram, files table, and
hook deps reflect the inline-history flow.

* fix(cli): dedupe back-to-back auto-recaps with no new user turns between

Two consecutive blur cycles, each over the threshold but with no new
user activity in between, would each fire their own auto-recap and
add two near-duplicate entries to history (same task, slightly
different wording from temperature-driven LLM variance). Reported
case: leaving the terminal twice while a /review of one PR was
still on screen produced two recaps both about that same review.

Add a `shouldFireRecap` gate before kicking off the LLM call:

- Need at least 3 user messages in history total (don't fire on a
  near-empty session).
- If a previous away_recap is already in history, need at least 2
  new user messages since that one before another can fire.

Same shape as Claude Code's `Ic1` gate (`Sc1=3`, `Rc1=2`). Read
history through a ref so this isn't in the effect's deps and the
effect doesn't re-run on every message.
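
The gate can be sketched as follows, using the commit's thresholds (3 total user messages, 2 new since the last recap); the history item shape is simplified for illustration:

```typescript
// Sketch of the shouldFireRecap dedup gate described above.
interface HistoryItem { type: string; }

function shouldFireRecap(history: HistoryItem[]): boolean {
  const userCount = history.filter((h) => h.type === 'user').length;
  if (userCount < 3) return false; // don't fire on a near-empty session
  const lastRecap = history.map((h) => h.type).lastIndexOf('away_recap');
  if (lastRecap === -1) return true;
  const newUsers = history
    .slice(lastRecap + 1)
    .filter((h) => h.type === 'user').length;
  return newUsers >= 2; // need fresh user activity since the last recap
}
```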

* fix(cli): type useResumeCommand.handleResume as Promise<void>

Per gemini review on #3482: the interface declared this as `() => void`
but the implementation is `async` and returns `Promise<void>`. The
mismatch silently lost the chainable promise — tests had to launder
it through `as unknown as Promise<void> | undefined` just to await.

Tighten the interface to `Promise<void>` and drop the cast in the
"closes the dialog immediately" test.

* fix(cli): persist auto-fired recap to chat recording so /resume keeps it

Per yiliang114 review on #3482: the manual `/recap` path persists across
`/resume` because the slash-command processor records every output
history item via `chatRecorder.recordSlashCommand({ phase: 'result',
outputHistoryItems })`, but the auto path called `addItem` directly
and bypassed that recorder. The result was an asymmetry where users
who triggered recap manually saw it after `/resume`, while users whose
recap fired automatically lost it.

Mirror the manual recording from useAwaySummary's `.then` callback —
record only the `result` phase (not invocation, since we don't want
a fake `> /recap` user line replayed) with the away-recap item as the
single output. Wrapped in try/catch because recap is best-effort and
must never surface a failure to the user.

Add useAwaySummary.test.ts covering:
- the recording path is taken on a successful auto-trigger
- the dedup gate (`shouldFireRecap`) suppresses the LLM call entirely,
  including the recording, when no new user turns happened since the
  last recap

* fix(cli): cast recap item via spread to satisfy strict tsc --build

CI's `tsc --build` (stricter than local `tsc --noEmit`) rejected the
direct `item as Record<string, unknown>` cast: HistoryItemAwayRecap's
literal `type: 'away_recap'` field doesn't overlap with `unknown`,
TS2352. Use the `{ ...item } as Record<string, unknown>` spread
pattern that the rest of the codebase (arenaCommand,
slashCommandProcessor's serializer) already uses for the same
SlashCommandRecordPayload field.
2026-04-21 14:39:13 +08:00
Shaojin Wen
52c7a3d0ed
fix(cli): pin /recap above input and align defaults with fastModel (#3478)
* fix(cli): pin /recap above input box and align defaults with fastModel

The recap rendered as a regular history item, so as soon as the model
streamed a new reply the "where you left off" reminder scrolled out of
view. Move it to a sticky banner anchored just above the Composer
(matching how btwItem is rendered) so it stays visible across turns.

While reworking the surface, also:
- Replace the chevron prefix with `※ recap:` so it reads as a labeled
  recap line instead of a generic dim message.
- Mirror the placement in ScreenReaderAppLayout so screen-reader users
  see it in the same logical position.
- Drop HistoryItemAwayRecap from the HistoryItemWithoutId union — it
  is no longer addItem-able, and leaving it in invited silent no-op
  bugs where addItem(awayRecap) would compile but render nothing.
- Clear the banner on /clear, /reset, /new and on /resume into a
  different session, so a recap from a previous context doesn't bleed
  into a freshly started one.
- Re-measure the controls box when the banner appears or disappears
  (its height changes by a couple of lines) so the main content area
  recomputes availableTerminalHeight and stays laid out correctly.

Auto-trigger now defaults to "on iff fastModel is configured" rather
than unconditionally on. Running an ambient background recap on the
main coding model is too costly and slow to be a sane default; tying
it to fastModel means the feature is silently opt-in for users who
have set up a cheap fast model. An explicit `general.showSessionRecap`
override still wins either way, and `/recap` itself is unaffected.

Sharpen the slash-command description to match the new behavior.

* fix(core): silence AbortSignal listener-leak warning in OpenAI pipeline

Every chat.completions.create call wires up an abort listener on the
incoming AbortSignal, and several layers — retryWithBackoff, the
LoggingContentGenerator wrapper, the SDK's own internal stream/fetch
plumbing — register their own listeners against the same signal. Five
retry attempts plus those layers comfortably exceed Node's default
10-listener cap and produce a MaxListenersExceededWarning. With
features that share or compose signals (e.g., recap + followup
speculation firing on the same response cycle), even a higher cap
gets blown past.

The signals here are per-request and short-lived, so the accumulation
is structural rather than a real memory leak — they get GC'd as soon
as the request settles. setMaxListeners(0, signal) at the SDK boundary
disables the warning for these specific signals only, without masking
any genuine leak elsewhere in the process. Idempotent and confined to
the one place where retry-bound API calls cross into the SDK.
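
The boundary can be sketched standalone. Node's `events.setMaxListeners` accepts EventTarget instances such as `AbortSignal`; 0 means "no listener cap" for that specific target only:

```typescript
// Suppress MaxListenersExceededWarning for one per-request signal, as
// described above, without touching the process-wide default.
import { setMaxListeners } from 'node:events';

const controller = new AbortController();
setMaxListeners(0, controller.signal);

let fired = 0;
for (let i = 0; i < 20; i++) {
  // Well past Node's default 10-listener cap; no warning is emitted.
  controller.signal.addEventListener('abort', () => { fired += 1; }, { once: true });
}
controller.abort(); // abort dispatches synchronously to all listeners
```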

* fix(core): tighten recap to a single sentence within 80 chars

The 1-3 sentence budget reliably wrapped onto two lines in the sticky
banner above the input box, which made it visually heavy for what is
supposed to be a glanceable reminder. Constrain the prompt to exactly
one sentence with a hard 80-char cap, and merge the "high-level task
+ next step" rule into a single sentence instead of two adjacent ones.

Also sweep the docs (settings, commands, design) so the user-facing
copy and the internal design notes match the new format.

* fix(cli): apply review feedback for recap PR

Two issues from review:

- The schema description for `general.showSessionRecap` still said
  "1-3 sentence summary" while the prompt, docs, and slash-command
  copy already say "one-line". Aligns the text in settingsSchema.ts
  and the regenerated VSCode JSON schema.

- The /resume wrapper cleared the sticky recap synchronously, before
  the inner handler had a chance to discover that no session data
  was available. On a no-op resume the user would still lose the
  current recap. Make `useResumeCommand.handleResume` return
  Promise<boolean> reporting whether a session actually loaded, and
  only clear the recap on a confirmed switch.

* fix(cli): default showSessionRecap to false and drop fastModel heuristic

The earlier "enabled iff fastModel is configured" default made it hard
for users to answer the simple question "is auto-recap on for me right
now?" — the answer depended on a setting from a different category,
and setting/unsetting fastModel silently changed recap behavior.

Revert to a plain boolean with a conservative off-by-default:

- Auto-trigger fires only when the user explicitly sets
  `general.showSessionRecap: true`.
- Manual `/recap` keeps working regardless (that's a user-initiated
  call, not an ambient one).
- Users never get ambient LLM calls billed to their main coding model
  without having opted in.

Aligns settings.md, design doc, and the regenerated JSON schema.
2026-04-20 23:58:19 +08:00
Shaojin Wen
c74d7678cb
Revert "feat(core): add dynamic swarm worker tool (#3433)" (#3468)
This reverts commit f7ebc372f1.
2026-04-20 16:40:14 +08:00
顾盼
a82d766727
refactor(cli): replace slash command whitelist with capability-based filtering (Phase 1) (#3283)
* refactor(cli): replace slash command whitelist with capability-based filtering (Phase 1)

## Summary

Replace the hardcoded ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE whitelist with a
unified, capability-based command metadata model. This is Phase 1 of the slash
command architecture refactor described in docs/design/slash-command/.

## Key changes

### New types (types.ts)
- Add ExecutionMode ('interactive' | 'non_interactive' | 'acp')
- Add CommandSource ('builtin-command' | 'bundled-skill' | 'skill-dir-command' |
  'plugin-command' | 'mcp-prompt')
- Add CommandType ('prompt' | 'local' | 'local-jsx')
- Extend SlashCommand interface with: source, sourceLabel, commandType,
  supportedModes, userInvocable, modelInvocable, argumentHint, whenToUse,
  examples (all optional, backward-compatible)

### New module (commandUtils.ts + commandUtils.test.ts)
- getEffectiveSupportedModes(): 3-priority inference
  (explicit supportedModes > commandType > CommandKind fallback)
- filterCommandsForMode(): replaces filterCommandsForNonInteractive()
- 18 unit tests
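
The 3-priority inference can be sketched as follows (the exact mapping for `local`/`local-jsx` in the fallback is an assumption; only "prompt runs everywhere" is stated by this change):

```typescript
// Sketch of getEffectiveSupportedModes' priority order described above.
type ExecutionMode = 'interactive' | 'non_interactive' | 'acp';
type CommandType = 'prompt' | 'local' | 'local-jsx';

interface CommandMeta {
  supportedModes?: ExecutionMode[];
  commandType?: CommandType;
}

const ALL_MODES: ExecutionMode[] = ['interactive', 'non_interactive', 'acp'];

function getEffectiveSupportedModes(cmd: CommandMeta): ExecutionMode[] {
  if (cmd.supportedModes) return cmd.supportedModes;  // 1. explicit wins
  if (cmd.commandType === 'prompt') return ALL_MODES; // 2. prompts run everywhere
  return ['interactive']; // 3. fallback (assumed default for local/local-jsx)
}
```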

### Whitelist removal (nonInteractiveCliCommands.ts)
- Remove ALLOWED_BUILTIN_COMMANDS_NON_INTERACTIVE constant
- Remove filterCommandsForNonInteractive() function
- Replace with CommandService.getCommandsForMode(mode)

### CommandService enhancements (CommandService.ts)
- Add getCommandsForMode(mode: ExecutionMode): filters by mode, excludes hidden
- Add getModelInvocableCommands(): reserved for Phase 3 model tool-call use

### Built-in command annotations (41 files)
Annotate every built-in command with commandType:
- commandType='local' + supportedModes all-modes: btw, bug, compress, context,
  init, summary (replaces the 6-command whitelist)
- commandType='local' interactive-only: export, memory, plan, insight
- commandType='local-jsx' interactive-only: all remaining ~31 commands

### Loader metadata injection (4 files)
Each loader stamps source/sourceLabel/commandType/modelInvocable on every
command it emits:
- BuiltinCommandLoader: source='builtin-command', modelInvocable=false
- BundledSkillLoader: source='bundled-skill', commandType='prompt',
  modelInvocable=true
- command-factory (FileCommandLoader): source per extension/user origin,
  commandType='prompt', modelInvocable=!extensionName
- McpPromptLoader: source='mcp-prompt', commandType='prompt', modelInvocable=true

### Bug fix
MCP_PROMPT commands were incorrectly excluded from non-interactive/ACP modes by
the old whitelist logic. commandType='prompt' now correctly allows them in all
modes.

### Session.ts / nonInteractiveHelpers.ts
- ACP session calls getAvailableCommands with explicit 'acp' mode
- Remove allowedBuiltinCommandNames parameter from buildSystemMessage() —
  capability filtering is now self-contained in CommandService

* fix test ci

* fix memory command

* fix: pass 'non_interactive' mode explicitly to getAvailableCommands

- Fix critical bug in nonInteractiveHelpers.ts: loadSlashCommandNames was
  calling getAvailableCommands without specifying mode, causing it to default
  to 'acp' instead of 'non_interactive'. Commands with supportedModes that
  include 'non_interactive' but not 'acp' would be silently excluded.
- Apply the same fix in systemController.ts for the same reason.
- Update test mock to delegate filtering to production filterCommandsForMode()
  instead of duplicating the logic inline, preventing divergence.

Fixes review comments by wenshao and tanzhenxin on PR #3283.

* fix: resolve TypeScript type error in nonInteractiveHelpers.test.ts

* fix test ci
2026-04-20 14:34:43 +08:00
Edenman
6c999fe29f
feat(cli): add OAuth configuration flags to mcp add (#3442)
* feat(cli): Add OAuth redirect URI support to `mcp add` command

- Add --oauth-redirect-uri, --oauth-client-id, --oauth-client-secret,
  --oauth-authorization-url, --oauth-token-url, and --oauth-scopes flags
  to the `mcp add` command
- Enable configuration of custom OAuth redirect URIs for remote/cloud
  server deployments (fixes hardcoded localhost issue)
- Document auth.redirectUri in both developer and user-facing MCP docs
- Add comprehensive tests for OAuth configuration via CLI
- Update documentation with examples and guidance for remote deployments

Fixes #3336

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* refactor(cli): harden OAuth flag handling in mcp add

- Reject combining --oauth-* flags with --transport stdio to surface the
  mistake instead of silently persisting an unused oauth config
- Rebuild OAuth config via single spread expression; drop the prior
  mutate-then-check pattern and the post-hoc enabled assignment
- Trim each scope token after comma split so "read, write" no longer
  stores leading/trailing whitespace
- Cover both new behaviors with tests; add missing --oauth-client-secret
  row and stdio-incompatibility note to the user MCP docs
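
The scope-trimming fix can be sketched in isolation (the function name is illustrative):

```typescript
// Split comma-separated scopes and trim each token, so "read, write"
// no longer stores leading/trailing whitespace; empty tokens are dropped.
function parseScopes(raw: string): string[] {
  return raw
    .split(',')
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```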

* test(cli): use explicit Vitest/Yargs type imports in mcp add tests

Switch from namespace-style 'vi.Mock' and 'yargs.Argv' references to
explicit 'Mock' and 'Argv' imports, and replace the narrow
'(code?: number) => never' cast on the process.exit mock with
'typeof process.exit' so it tracks the current Node signature.

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-20 14:12:17 +08:00
ihubanov
0b8b3da836
feat(cli): add slashCommands.disabled setting to gate slash commands (#3445)
* feat(cli): add slashCommands.disabled setting to gate slash commands

Introduces a first-class way for operators to hide and refuse to execute
specific slash commands. Useful for multi-tenant / enterprise / sandboxed
deployments where different users should see different command subsets.

The denylist is sourced from three unioned inputs:

  * `slashCommands.disabled` settings key (string[], UNION merge), so
    workspace scopes can only add to a denylist set at user or system
    scope, never shrink it — matching the shape already used by
    `permissions.deny`.
  * `--disabled-slash-commands` CLI flag (comma-separated or repeated).
  * `QWEN_DISABLED_SLASH_COMMANDS` environment variable.

Matching is case-insensitive against the final (post-rename) command
name, so extension commands are addressable by their disambiguated
form (e.g. `firebase.deploy`). Disabled commands are removed from
`CommandService`'s output, so they disappear from autocomplete and
produce the standard unknown-command path in both interactive TUI and
non-interactive (`--prompt`) modes.

The scope of this change is slash commands only: it does not affect
tool permissions (still `permissions.deny`) or keyboard shortcuts.
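
The three-source union with case-insensitive matching described above can be sketched roughly like this (names are illustrative, not the actual implementation):

```typescript
// Sketch: build the disabled-command set from settings, CLI flag, and
// env var. All three are unioned; matching is case-insensitive against
// the final (post-rename) command name, e.g. "firebase.deploy".
function resolveDisabledCommands(
  settingsList: string[], // slashCommands.disabled (UNION-merged across scopes)
  cliFlag: string[], // --disabled-slash-commands (comma-separated or repeated)
  envVar: string | undefined, // QWEN_DISABLED_SLASH_COMMANDS
): Set<string> {
  const fromCli = cliFlag.flatMap((v) => v.split(','));
  const fromEnv = envVar ? envVar.split(',') : [];
  return new Set(
    [...settingsList, ...fromCli, ...fromEnv]
      .map((name) => name.trim().toLowerCase())
      .filter((name) => name.length > 0),
  );
}
```

A lookup then becomes `disabled.has(command.name.toLowerCase())`, so `/Help` and `/help` resolve identically.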

* chore(cli): regenerate settings.schema.json for slashCommands.disabled

Regenerates the companion JSON schema consumed by the VS Code extension
after adding the `slashCommands.disabled` entry to the TS schema in the
previous commit. Required by the "Check settings schema is up-to-date"
CI lint step.

* fix(cli): route disabled slash commands to unsupported, not no_command

handleSlashCommand was passing the disabled denylist straight into
CommandService.create, so disabled commands disappeared from
`allCommands` too. The fallback existence check that distinguishes
"known but not allowed in non-interactive mode" from "truly unknown"
then failed, and disabled commands like `/help` fell through to
`no_command` — causing the caller to forward them to the model as
plain prompt text.

Keep `allCommands` unfiltered and apply the denylist only when
constructing the executable set and when producing the unsupported
response. A disabled command now returns `unsupported` with a
"disabled by the current configuration" reason and never reaches the
model. Added three regression tests covering the primary case,
case-insensitive match, and the preserved no_command path for
genuinely unknown input.
2026-04-20 11:06:26 +08:00
Shaojin Wen
60a6dfc14c
feat(cli): add session recap with /recap and auto-show on return (#3434)
* feat(cli): add session recap with /recap and auto-show on return

Users often open an old session days later and need to scroll through
pages to remember where they left off. This change adds a short
"where did I leave off" recap — a 1-3 sentence summary generated by
the fast model — so they can resume without re-reading the history.

Two triggers:
- /recap: manual slash command.
- Auto: when the terminal has been blurred for 5+ minutes and gets
  focused again (uses the existing DECSET 1004 focus protocol via
  useFocus). Gated on streamingState === Idle so it never interrupts
  an active turn. Only fires once per blur cycle.

The recap is rendered in dim color with a chevron prefix, visually
distinct from assistant replies. A new `general.showSessionRecap`
setting controls the auto-trigger (default on). /recap works
independently of the setting.

Implementation notes:
- generateSessionRecap uses fastModel (falls back to main model),
  tools: [], maxOutputTokens: 300, and a tight system prompt. It
  strips tool calls / responses from history before sending — tool
  responses can hold 10K+ tokens of file content that drown the recap
  in irrelevant detail. The 30-message window respects turn boundaries
  (slice never starts on a dangling model/tool response).
- Output is wrapped in <recap>...</recap> tags; the extractor returns
  empty (skips render) if the tag is missing, preventing model
  reasoning from leaking into the UI.
- All failures are silent (return null) and logged via a scoped
  debugLogger; recap is best-effort and must never break main flow.
- /recap refuses to run while a turn is pending.
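
The history filtering described above can be sketched as follows, using simplified Content/Part shapes standing in for @google/genai's types (field names like `functionCall` and `thought` follow that library; the function name is illustrative):

```typescript
// Simplified stand-ins for @google/genai's Content/Part types.
interface Part {
  text?: string;
  functionCall?: object;
  functionResponse?: object;
  thought?: boolean;
  thoughtSignature?: string;
}
interface Content {
  role: 'user' | 'model';
  parts: Part[];
}

// Keep only user-visible dialogue: drop tool calls, tool responses
// (which can hold 10K+ tokens of file content), and hidden reasoning.
function filterToDialog(history: Content[]): Content[] {
  return history
    .map((content) => ({
      ...content,
      parts: content.parts.filter(
        (p) =>
          typeof p.text === 'string' &&
          p.text.length > 0 &&
          !p.functionCall &&
          !p.functionResponse &&
          !p.thought &&
          !p.thoughtSignature,
      ),
    }))
    .filter((content) => content.parts.length > 0);
}
```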

* fix(cli): abort in-flight recap when showSessionRecap is disabled

If the user disables showSessionRecap while an auto-recap LLM call is
already in flight, the previous code returned early without aborting.
The pending .then would still pass its idle/abort guards and append the
recap, producing an unwanted message after the user has opted out.

Abort the controller and clear it eagerly so the resolved promise no
longer adds to history.

* fix(cli): gate /recap and auto-recap on streaming idle state

Two related issues from review:

1. /recap was only refusing when ui.pendingItem was set, but a normal
   model reply runs with streamingState === Responding and a null
   pendingItem. Invoking /recap mid-stream would generate a recap from
   a partial conversation and insert it between the user prompt and
   the assistant reply.

2. useAwaySummary cleared blurredAtRef before checking isIdle, so if
   focus returned during a still-streaming turn (after a >5min blur)
   the recap was permanently dropped — there was no later retry when
   the turn became idle, because isIdle was not in the effect deps.

Fixes:
- Expose isIdleRef on CommandContext.ui (mirrors btwAbortControllerRef
  pattern). Plumb it from AppContainer through useSlashCommandProcessor.
- recapCommand now refuses when isIdleRef.current is false OR
  pendingItem is non-null.
- useAwaySummary preserves blurredAtRef on the !isIdle bail and adds
  isIdle to the effect deps, so the trigger re-evaluates when the
  current turn finishes.
- Brief blurs (< AWAY_THRESHOLD_MS) still reset blurredAtRef.

Also seeds isIdleRef in nonInteractiveUi and mockCommandContext so the
new field has a sensible default outside the interactive UI.

* docs: document /recap command, showSessionRecap setting, and design

- User docs: add /recap to the Session and Project Management table in
  features/commands.md and a dedicated subsection covering manual use,
  the auto-trigger, the dim-color rendering, and the fast-model tip.
- User docs: add general.showSessionRecap row to the configuration
  settings reference.
- Design doc: docs/design/session-recap/session-recap-design.md covers
  motivation, the two trigger paths, the per-file architecture, prompt
  design with the <recap> tag and three-tier extractor, history
  filtering rationale (functionResponse can be 10K+ tokens), the
  useAwaySummary state machine, the isIdleRef gating for /recap, model
  selection, observability, and out-of-scope items.

* fix(core): exclude thought parts from session recap context

filterToDialog kept any non-empty text part, but @google/genai's Part
type also marks model reasoning with part.thought / part.thoughtSignature.
That hidden chain-of-thought was being fed to the recap LLM and could
get summarized as if it were user-visible dialogue.

Drop parts where either flag is set. Update the design doc's
history-filtering (History 过滤) section to call this out alongside
the existing tool-call/response rationale.

* docs(session-recap): correct debug-logging guidance, fill in state machine, sharpen UX wording

Audit of the session recap docs against the implementation found three
issues worth fixing:

- Design doc claimed debug logs were enabled via a QWEN_CODE_DEBUG_LOGGING
  env var. That var does not exist; debug logs are written to
  ~/.qwen/debug/<sessionId>.txt by default, gated by QWEN_DEBUG_LOG_FILE.
  Replace with the accurate path + opt-out behavior, and tell the reader
  to grep for the [SESSION_RECAP] tag.
- Design doc's useAwaySummary state machine table was missing the
  isFocused && blurredAtRef === null path (taken on first render and
  right after a brief-blur reset). Add the row.
- User doc's "Refuses to run ... failures are silent" line conflated the
  inline-error refusal with silent generation failures, and "(when the
  conversation is idle)" used internal jargon. Split the two cases and
  spell out what "idle" means, including the wait-then-fire behavior
  when focus returns mid-turn.

* docs(session-recap): correctly describe /recap vs auto-trigger failure modes

The previous wording said "Generation/network failures are silent — the
recap simply does not appear", but recapCommand returns a user-facing
info message ("Not enough conversation context for a recap yet.") in
exactly that path, and also returns inline messages for the
config-not-loaded and busy-turn guards.

Only the auto-trigger path is truly silent (it just skips addItem when
generateSessionRecap returns null). Split the two paths in the doc so
the manual command's "always responds with something" behavior is
distinguished from the auto-trigger's no-op-on-failure behavior.

* docs(session-recap): align prompt-rules section with the actual prompt

Two doc-vs-code mismatches in the design doc's "System Prompt" section,
caught with the same lens as yiliang114's failure-mode review:

- The bullet list claimed RECAP_SYSTEM_PROMPT forbids "speculating
  about user intent" (推测用户意图) and "addressing the user as 'you'"
  (用 'you' 称呼用户). Those rules existed in an early draft but
  were dropped when the <recap> tag rules were added; the current
  prompt has no such restrictions. Replace with the actual rules and
  add a "1:1 with RECAP_SYSTEM_PROMPT" (与 RECAP_SYSTEM_PROMPT
  一一对应) marker so future edits stay in sync.
- The doc said systemInstruction "overrides" (覆盖) the main agent
  prompt. True for the agent prompt portion, but
  GeminiClient.generateContent internally calls getCustomSystemPrompt,
  which appends user memory (QWEN.md / auto memory) as a suffix. Spell
  that out — the final system prompt is recap prompt + user memory,
  which is actually useful project context for the recap.

* docs(session-recap): translate design doc to English

The repo convention for docs/design is English (7 of 8 existing files;
auto-memory/memory-system.md is the only Chinese one). The first version
of this design doc followed the auto-memory example, which turned out
to be the wrong precedent.

Translate to English while preserving the existing structure, the
state-machine table, the prompt-vs-doc 1:1 alignment, the
QWEN_DEBUG_LOG_FILE description, and the failure-mode notes added in
prior commits.

* fix(cli): drop empty info return from /recap interactive success path

The interactive success path inserts the away_recap history item
directly via ui.addItem and then returned `{type: 'message',
messageType: 'info', content: ''}`. The slash-command processor's
'message' case unconditionally calls addMessage, which adds another
HistoryItemInfo with empty text. The empty info renders as nothing
(StatusMessage early-returns null), but it still bloats the in-memory
history list and shows up in /export and saved sessions.

Return void on the interactive success path and on the abort path so
the processor's `if (result)` check skips the message-handler branch
entirely. Widen the action's return type to `void | SlashCommandActionReturn`
to match (same shape as btwCommand).
2026-04-19 21:38:48 +08:00
gin-lsl
a02c115445
feat(tools): add Markdown for Agents support to WebFetch tool (#2734)
Closes #2025
2026-04-19 17:23:09 +08:00
Reid
f7ebc372f1
feat(core): add dynamic swarm worker tool (#3433)
* feat(core): add dynamic swarm worker tool

Add a swarm tool for ad-hoc parallel worker execution with bounded
concurrency, wait-all and first-success modes, per-worker failure
isolation, and aggregated results.

Register the tool in core, prevent nested worker recursion, and
document the new workflow.

* fix(core): harden swarm worker execution

Prevent swarm calls from bypassing the outer scheduler concurrency
budget.

Disallow interactive question prompts in swarm workers by default, and
avoid incomplete Markdown table escaping by using an HTML entity for
pipe characters. Add focused tests for the scheduler behavior, worker
tool restrictions, and result formatting.
2026-04-19 14:46:59 +08:00
Shaojin Wen
4bf5bf22de
feat(cli): support refreshInterval in statusLine for periodic refresh (#3383)
* feat(cli): support refreshInterval in statusLine for periodic refresh

The statusLine (#3311) re-runs only when Agent state changes (token count,
model, git branch, etc.). Commands that display *external* data — a clock,
rate-limit counters, CI build status — have no Agent event to hook into
and go stale between messages.

Add an optional `ui.statusLine.refreshInterval` field (seconds, minimum 1)
that schedules a setInterval alongside the existing event-driven updates.
Overlap with state-change debounce is safe: `doUpdate` kills any in-flight
child and bumps the generation counter, so only the most recent output
reaches the footer.

Validation lives in `getStatusLineConfig`:
- Must be `number`, `Number.isFinite(...)`, `>= 1`
- Anything else is silently dropped (no interval scheduled)

No changes to the default behavior — configs without `refreshInterval`
behave exactly as before.
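
The validation rules above amount to a small guard; a minimal sketch (the actual `getStatusLineConfig` shape may differ):

```typescript
// Sketch: accept refreshInterval only if it is a finite number >= 1
// second; anything else is silently dropped (no interval scheduled).
function parseRefreshInterval(raw: unknown): number | undefined {
  if (typeof raw !== 'number' || !Number.isFinite(raw) || raw < 1) {
    return undefined;
  }
  return raw;
}
```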

* fix(cli): yield periodic statusLine tick when previous exec is in flight

Review feedback on #3383: with `refreshInterval: 1` and a command whose
real exec time exceeds 1s, each tick was unconditionally calling
`doUpdate()` — which kills the in-flight child and bumps the generation
counter — so the prior exec's callback was always discarded as stale.
`setOutput` was never reached and the statusline stayed empty until
`refreshInterval` was removed or the command became faster.

Guard the interval callback with an `activeChildRef` check so a pending
exec is allowed to finish. State-change triggers (model switch, token
count, branch, etc.) still go through `scheduleUpdate` → `doUpdate`
directly and legitimately preempt stale children; only the periodic
tick yields. The existing 5s exec timeout is still the hard ceiling.

Also drop the redundant `'refreshInterval' in raw` check — the `typeof
raw.refreshInterval === 'number'` guard already excludes missing /
undefined values.

Tests:
- Add regression test `'skips periodic ticks while a previous exec is
  still running'` — three ticks during one unfinished exec trigger zero
  new spawns; the next tick after callback completion does spawn.
- Update two existing tests to resolve the mount exec before expecting
  subsequent ticks (the old tests implicitly relied on the starvation
  behavior being tolerated).

* test(cli): assert user-visible lines state in starvation regression

Self-review insight: the existing `skips periodic ticks while a previous
exec is still running` test only counted `exec` calls — it confirmed the
guard prevents redundant spawns, but would have silently passed even if
the eventual callback was still being discarded as stale (which is the
actual user-visible symptom of the starvation bug).

Add `expect(result.current.lines).toEqual(['done'])` after resolving the
mount's pending callback. Without the guard, generationRef would have
bumped 3 times during the yielded ticks, the callback's captured gen
would fail the stale check, `setOutput` would never fire, and `lines`
would stay empty — now caught explicitly.

* perf(cli): dedupe statusLine output to skip unchanged Footer re-renders

Review feedback on #3383 (narrow terminal stacking): when
`refreshInterval` fires at 1s and the command output is unchanged, the
mount-and-setOutput cycle still allocates a new array and triggers a
Footer re-render. Under certain narrow-terminal conditions, Ink's
erase-line accounting mis-counts wrapped rows and stale content
accumulates on screen.

The Footer-layout root cause is in #3311's narrow-mode flex setup and
Ink's truncate semantics, which is out of scope for this PR. But we
can cut the re-render surface here by preserving the `lines` array
reference when the command produces identical output — a strict
Pareto improvement for any caller (clock-style statuslines with
second-precision still re-render; rate-limit / branch / CI-status
style statuslines that change infrequently stop triggering work every
tick).

Tests:
- `preserves the same lines array reference when output is unchanged`
  asserts referential equality after a re-exec with identical stdout.
- `produces a new reference when output changes` guards against
  over-eager dedup that would miss legitimate updates.
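
The reference-preserving dedup can be sketched as below (function name is illustrative): returning the previous array when the output is line-for-line identical lets React see an unchanged reference and skip the re-render.

```typescript
// Sketch: keep the old lines array reference when the new exec output
// is identical, so downstream memoized components do not re-render.
function dedupeLines(prev: string[] | null, next: string[]): string[] {
  if (
    prev !== null &&
    prev.length === next.length &&
    prev.every((line, i) => line === next[i])
  ) {
    return prev; // unchanged output: preserve the reference
  }
  return next;
}
```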

* fix(cli): stabilize Footer rendering in narrow terminals

Narrow-terminal E2E feedback on #3383: with `refreshInterval` at 1s,
empty lines were accumulating above the input prompt each tick. Root
cause is in the Footer flex layout — originally from #3311 — where Ink
miscounts logical rows vs the physical rows the terminal actually uses.

Two adjustments, both idiomatic (used elsewhere in the repo already):

1. Left column — `minWidth={0}`. Without this, Yoga's `min-width: auto`
   default keeps the Box at its natural content width, so a statusline
   wider than the terminal doesn't engage `<Text wrap="truncate">`; the
   text renders at content-width and the terminal wraps it physically.
   `minWidth={0}` lets the column shrink so the text child can truncate
   at container width.

2. Right section — `flexWrap="wrap"`. With multiple indicators (sandbox
   label, debug badge, dream, context-usage) the row can exceed a narrow
   terminal's width. Without `flexWrap` Ink lays them out in a single
   logical row, but the terminal physically wraps to two — Ink's erase
   sequence (`\e[2K\e[1A…` per logical row) then clears one row while
   two exist, and the extra row ghosts every re-render. With `wrap` Ink
   tracks the second row explicitly and erases correctly.

Together these make the Footer's row count match between Ink's logical
view and the terminal's physical view, so frequent re-renders (as
`refreshInterval` enables) stop accumulating ghost rows.

Needs verification in a real narrow TTY — from this environment I can
reason about the flex semantics and confirm both props are supported by
Ink's Box, but actually observing ghost-row elimination requires
process.stdout.columns on a real terminal.

* Revert "fix(cli): stabilize Footer rendering in narrow terminals"

This reverts commit 9758cda85f. Reason: I could not reproduce BZ-D's
reported ghost-row stacking in tmux (40x25, 2-line statusline + real
exec + Static history + refreshInterval: 1) over 14+ ticks. Both
`minWidth={0}` and `flexWrap="wrap"` are legitimate defensive idioms,
but without a failing repro I can't verify they address the reported
bug, and I shouldn't ship a speculative layout change as "the fix".

Keeping the output-dedup commit (e1d321186) — that one is a strict
improvement regardless of the underlying Ink behavior. Will request
BZ-D's specific terminal setup and reopen with a verified fix (or
confirm the issue is specific to a particular emulator, not flex/Ink).
2026-04-19 11:12:16 +08:00
易良
cd1be1c524
feat(vscode-ide-companion): add agent execution tool display (#2590)
Preserve structured agent rawOutput through the VSCode session pipeline.

Render dedicated agent execution cards from shared webui components.
2026-04-18 23:39:26 +08:00
Viktor Szépe
a1d1e5e276
Fix typo in class name (#2189) 2026-04-18 11:59:36 +08:00
ChiGao
9e26424aa7
feat(cli): add dual-output sidecar mode for TUI (#3352)
* feat(cli): add dual-output sidecar mode for TUI

Adds an optional **dual-output** mode for the interactive TUI: while Qwen
Code keeps rendering normally on stdout, it concurrently emits a structured
JSON event stream on a second channel (--json-fd / --json-file) and
optionally watches a JSONL command file (--input-file) for prompts and
tool-permission responses written by an external program.

This unlocks programmatic embedding of the TUI from IDE extensions, web
frontends, CI agents, or automation scripts without forcing them to give
up the rich interactive UI in favor of --output-format=stream-json.

## Design

The TUI already has a battle-tested JSON event emitter
(`StreamJsonOutputAdapter`). This change makes that adapter pluggable on
its output stream and wires a small `DualOutputBridge` that forwards TUI
events to a second instance of the adapter writing to fd / file.

For tool approvals, when a tool enters awaiting_approval the bridge emits
`control_request` (subtype `can_use_tool`); whichever side resolves first
(TUI's native UI or `confirmation_response` via --input-file) wins, and a
`control_response` is mirrored back so all observers stay in sync.

`session_start` is announced once when the bridge is constructed so
consumers can correlate the channel with a session before any other event
arrives.

## CLI surface

- `--json-fd <n>` — write JSON events to fd n (n >= 3; provided via spawn
  stdio).
- `--json-file <path>` — write JSON events to a file / FIFO / /dev/fd/N.
- `--input-file <path>` — watch this file for JSONL commands.

`--json-fd` and `--json-file` are mutually exclusive. fds 0/1/2 are
rejected to prevent corrupting the TUI.
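
The fd guard described above can be sketched like this (the real wiring lives in the yargs option validation; the function name is hypothetical):

```typescript
// Sketch: --json-fd must name a spawn-provided descriptor, i.e. an
// integer >= 3. fds 0/1/2 belong to the TUI and are rejected.
function validateJsonFd(fd: number): number {
  if (!Number.isInteger(fd) || fd < 3) {
    throw new Error(
      `--json-fd must be an integer >= 3 (got ${fd}); ` +
        'fds 0/1/2 would corrupt the TUI',
    );
  }
  return fd;
}
```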

## Wire protocol

Output: existing stream-json schema with `includePartialMessages` always
enabled, plus:

- `system` / `subtype: session_start` — emitted once on bridge
  construction.
- `control_request` / `subtype: can_use_tool` — pending tool approval.
- `control_response` — final approval outcome (mirrors TUI-native or
  external resolution).

Input (--input-file):

    {"type":"submit","text":"What does this function do?"}
    {"type":"confirmation_response","request_id":"...","allowed":true}

`submit` is queued and retried when the TUI returns to idle.
`confirmation_response` is dispatched immediately — a pending tool call
is blocking and the response cannot wait behind earlier submits.

See `docs/users/features/dual-output.md` for the full schema, latency
notes, failure modes, and a spawn example.
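
A minimal producer for the --input-file side, following the wire examples above (one JSON object per line; the helper names are illustrative):

```typescript
// Sketch: encode the two --input-file command kinds as JSONL lines,
// ready to be appended to the watched file.
function encodeSubmit(text: string): string {
  return JSON.stringify({ type: 'submit', text }) + '\n';
}

function encodeConfirmation(requestId: string, allowed: boolean): string {
  return (
    JSON.stringify({
      type: 'confirmation_response',
      request_id: requestId,
      allowed,
    }) + '\n'
  );
}
```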

## What changes when the flags are absent

Nothing. The bridge and watcher are constructed only when the relevant
flags are set; otherwise the React Context providers carry `null` and
every callsite short-circuits. No overhead, no behavioral change for
existing users.

## Failure handling

- Bad fd / unopenable path → warning on stderr, dual output stays
  disabled, TUI launches normally.
- Consumer disconnect (EPIPE) → bridge silently disables itself, TUI
  keeps running.
- Any exception inside the adapter → caught, logged, bridge disabled.
  The TUI is never crashed by a dual-output failure.

## Files

New:
- packages/cli/src/dualOutput/{DualOutputBridge,DualOutputContext,index}.{ts,tsx}
- packages/cli/src/remoteInput/{RemoteInputWatcher,RemoteInputContext,index}.{ts,tsx}
- packages/cli/src/nonInteractive/io/index.ts
- docs/users/features/dual-output.md

Modified:
- packages/core/src/config/config.ts — 3 new ConfigParameters fields + getters
- packages/cli/src/config/config.ts — yargs options + mutex validation
- packages/cli/src/gemini.tsx — instantiate bridge / watcher in
  startInteractiveUI, wrap with Context Providers, register cleanup
- packages/cli/src/ui/AppContainer.tsx — connect RemoteInput to
  submitQuery, bridge tool confirmations
- packages/cli/src/ui/hooks/useGeminiStream.ts — call
  dualOutput?.processEvent(...) at five existing event points
- packages/cli/src/nonInteractive/io/{Base,Stream}JsonOutputAdapter.ts —
  StreamJsonOutputAdapter accepts an injected output stream; base adapter
  exposes emitPermissionRequest / emitControlResponse through a new
  emitControlMessageImpl hook (default no-op in batch mode).

## Tests

- packages/cli/src/dualOutput/DualOutputBridge.test.ts — fd validation,
  auto session_start, control-event routing, post-shutdown safety.
- packages/cli/src/remoteInput/RemoteInputWatcher.test.ts — submit
  forwarding, immediate confirmation dispatch, busy/idle retry,
  malformed-line tolerance, shutdown.
- packages/cli/src/nonInteractive/io/StreamJsonOutputAdapter.dualOutput.test.ts —
  custom outputStream injection and new emitPermissionRequest /
  emitControlResponse paths.

tsc --noEmit -p packages/cli/tsconfig.json is clean.
vitest run src/nonInteractive src/dualOutput src/remoteInput → 297 passed,
1 skipped, 11 files.

* feat(cli): dual-output capability handshake, session_end, control_error, settings.json

Incremental improvements on top of the initial dual-output PR based on
reviewer feedback. All extensions are additive; older consumers that
ignore unknown fields keep working.

## Capability handshake in session_start

`session_start.data` now carries three new fields so consumers can
feature-detect without sniffing the stream:

- `protocol_version` (integer, currently 1) — bumped on any protocol
  change consumers might care about.
- `version` (string) — the Qwen Code CLI version, threaded in from
  `gemini.tsx`.
- `supported_events` (string[]) — the event kinds this bridge version
  is known to emit, exported as `SUPPORTED_EVENTS` from the module.

## session_end on bridge shutdown

DualOutputBridge.shutdown() now emits a final
`system` / `session_end` event carrying `session_id` before closing the
stream. Gives consumers a definitive termination signal rather than
requiring them to infer it from EPIPE. Idempotent — calling shutdown
twice emits exactly one session_end.

## control_error emission path

`ControlErrorResponse` (already defined in types.ts) now has a first-
class emission path: `BaseJsonOutputAdapter.emitControlError(requestId,
message)` → `control_response` with `subtype: 'error'`. Wired into
AppContainer's remote-input confirmation handler so that a
`confirmation_response` referencing an unknown / already-resolved
request_id produces a structured error reply instead of silently
dropping, letting consumers retry or surface the error.

## settings.json support

New `dualOutput` top-level settings block with `jsonFile` and
`inputFile` properties. `--json-fd` has no settings equivalent (fd
passing is a spawn-time concern). CLI flag wins over settings when
both are present, so scripted one-off runs still work unchanged.
`requiresRestart: true` since the bridge is constructed once at
startup.

## Documentation

`docs/users/features/dual-output.md` gains three major sections:

- **Use cases** — concrete integration scenarios (terminal+chat dual
  sync, IDE extensions, web frontends, CI observers, multi-agent
  orchestration, session replay, observability, QA).
- **Why two output flags?** — detailed rationale for coexisting
  `--json-fd` and `--json-file`, including the PTY constraint
  (`node-pty` / `bun-pty` expose no stdio array, and `forkpty(3)` /
  `login_tty` actively close fds >= 3 before exec).
- **Comparison with Claude Code's stream-json** — schema-parity
  matrix, transport-topology differences, permission-control-plane
  behavioral notes, and a "room to improve" section as a design
  horizon.
- **Runnable demos** — seven copy-paste POCs: event observer, remote
  submit, permission bridge, Node embedder with capability
  feature-detection, session_end handling, failure drills.
- **Settings-based configuration** — example settings.json snippet and
  precedence rules.

## Tests

- DualOutputBridge.test.ts: new cases for capability handshake shape,
  session_end on shutdown, shutdown idempotency, and emitControlError.
- StreamJsonOutputAdapter.dualOutput.test.ts: new case for
  emitControlError at the adapter level.

302 passed, 1 skipped, 11 files. tsc --noEmit -p packages/cli is clean.

* docs(dual-output): shrink Claude Code comparison to one honest sentence

After actually reading the Claude Code source (src/cli/structuredIO.ts,
src/bridge/*, src/utils/messages/systemInit.ts), the previous
"Comparison with Claude Code's stream-json" section was overstated:

- Claude Code has no equivalent of TUI + sidecar running simultaneously.
  Its stream-json only works with --print (non-interactive); the bridge
  in src/bridge/* is Anthropic's own remote worker protocol, not a
  local embedding surface.
- CC uses `system/init` (not `session_start`) and has no session_end in
  the wire protocol, so the schema-parity table contained false ticks.
- Framing this PR as "parity with Claude Code" is therefore inaccurate;
  it's filling a gap Claude Code does not address.

Replace the whole multi-section comparison (schema matrix, transport
table, permission notes, borrow list, roadmap) with a single sentence
stating the accurate relation: same event format in spirit, different
topology — CC's is non-interactive only.

* fix(cli): address review feedback on dual-output sidecar mode

- Fix control_response mirror: external-initiated confirmations now
  emit control_response via the same mirror useEffect as TUI-native
  resolutions, making the emission path symmetric for all observers.
- Fix ENOENT: --json-file with a non-existent path now falls back to
  createWriteStream (auto-creates the file) instead of throwing.
- Fix race: add reading guard to RemoteInputWatcher.readNewLines()
  preventing duplicate command processing on rapid appends.
- Refactor confirmationHandler to use refs (pendingToolCallsRef,
  dualOutputRef) and register once (deps: [remoteInput]) to eliminate
  teardown/re-registration churn.
- Add debug logging to shutdown bare catch for ops correlation.
- Add ENOENT fallback test case for DualOutputBridge.
- Regenerate settings.schema.json for dualOutput section.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): make RemoteInputWatcher poll interval configurable for CI reliability

RemoteInputWatcher.test.ts was timing out in CI (5s default) because
fs.watchFile's 500ms poll interval is unreliable under load. Fix:

- Accept optional `pollIntervalMs` in constructor (default 500ms).
- Tests use 100ms poll interval for faster feedback.
- Increase per-test timeout to 15s and waitFor timeout to 10s.
- Increase "TUI busy" wait from 800ms to 1500ms for CI headroom.

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(cli): eliminate fs.watchFile timing dependency in RemoteInputWatcher tests

Tests were flaky across all CI platforms (macOS/ubuntu/windows) because
fs.watchFile polling (even at 100ms) is unreliable under CI load.

Fix: expose checkForNewInput() as a public method that directly triggers
file reading and returns a Promise. Tests now call it synchronously after
writing to the input file — no polling, no timeouts, deterministic.

Also fixes:
- Windows ENOTEMPTY: add delay in afterEach before rmSync
- Add active check in readNewLines to respect shutdown state
- readNewLines now returns Promise<void> for awaitable reads

Generated with AI

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-18 02:14:53 +08:00
joeytoday
89167618d8
docs: update authentication methods to reflect OAuth discontinuation (#3325)
* docs: update authentication methods to reflect OAuth discontinuation

Remove deprecated Qwen OAuth references and update documentation to
direct users to valid authentication methods (API Key, Coding Plan,
or Local Inference) following the OAuth free tier discontinuation on
2026-04-15.

Closes #3316

* docs: fix quickstart auth description to match actual /auth UI

The /auth command shows three options: Alibaba Cloud Coding Plan,
API Key, and Qwen OAuth (discontinued). Updated quickstart.md to
accurately reflect this UI instead of splitting into Option A/B/C.

Also updated settings.md, commands.md, and troubleshooting.md with
minor OAuth-related cleanups.

* docs: update .qwen workspace description in quickstart

Remove reference to 'Qwen account' since OAuth is discontinued.
The .qwen directory is created by Qwen Code itself for storing
credentials, configuration, and session data.

* docs: fix warning block formatting in quickstart

- Add missing '>' continuation for the OAuth discontinuation warning block

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs: update README Qwen3.6-Plus description

- Remove mention of running Qwen3.6-Plus locally via Ollama/vLLM
- Keep only the Alibaba Cloud ModelStudio API key option

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* docs: address review feedback - remove Local Inference from auth, add dual-region links

- Local Inference removed from auth method lists, kept as separate
  'Local Model Setup' section with detailed Ollama/vLLM config examples
- All links now provide dual-region URLs (Beijing + intl)
- .qwen workspace note restored to original meaning (cost tracking)
- Device auth flow error kept scoped to legacy OAuth
- API setup guide links updated with confirmed intl URL

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-17 15:34:18 +08:00
Shaojin Wen
b004450d7f
feat(cli): support multi-line status line output (#3311)
* feat(cli): support multi-line status line output (#3211)

Remove the single-line hard limit (.split('\n')[0]) from the status line
hook so user scripts can output multiple rows. Footer renders each line
as a separate <Text wrap="truncate"> element, preserving per-line
horizontal truncation. Ink's virtual DOM handles re-rendering without
manual ANSI cursor management.

* feat(cli): cap status line output at 3 lines

Prevent runaway scripts from flooding the footer — lines beyond the
third are silently discarded.

* docs: mention 3-line cap in status line docs and agent prompt

* fix(cli): cap status line at 2 lines to keep footer within 3 rows

Footer has a fixed bottom row (hint/mode indicator), so status line
gets at most 2 lines to keep the total footer height at 3 rows max.

* test(cli): improve useStatusLine coverage to 100% lines

Add tests for: per-model metrics payload, contextWindowSize/version/
model fallbacks, config removal with pending debounce, command change
cancelling pending debounce.

* docs: update status line ASCII diagram for multi-line layouts

Also fix TS error in test (null → null as never for mock return).

* refactor(cli): return string[] from useStatusLine, filter empty lines

Address review feedback:
- Hook returns `lines: string[]` instead of `text: string | null`,
  eliminating the join/split round-trip with Footer.
- Filter empty lines before slicing so leading blanks don't eat real
  content (e.g. "\n\nreal content" no longer yields ["", ""]).
- Export MAX_STATUS_LINES with comment explaining the 3-row constraint.
- Use `status-line-${i}` as React key for clarity.
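
The split/filter/slice pipeline above can be sketched as follows (constant value taken from the 2-line cap described earlier; function name is illustrative):

```typescript
// The footer reserves one fixed bottom row (hint/mode indicator), so the
// status line gets at most 2 of the 3 total footer rows.
export const MAX_STATUS_LINES = 2;

export function splitStatusLines(raw: string): string[] {
  return raw
    .split(/\r?\n/) // handle both \n and Windows \r\n line endings
    .filter((line) => line.trim().length > 0) // drop blanks before slicing
    .slice(0, MAX_STATUS_LINES); // cap so runaway scripts cannot flood the footer
}
```

Filtering before slicing is what keeps `"\n\nreal content"` from yielding two empty rows.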

* test(cli): add Footer multi-line rendering, \r\n, and pure-newline tests

Address remaining review feedback:
- Footer test: mock useStatusLine, verify multi-line rendering and
  hint suppression.
- useStatusLine test: add \r\n line ending and pure-newline edge case.

* fix(cli): align right footer indicators to top

When the status line has multiple rows, the left column becomes taller
than the right section. The outer Box defaults to `alignItems: stretch`
which caused the indicators to visually center; add `alignItems="flex-start"`
on the right Box so they stay anchored to the top row.

Reported via e2e test in #3311.
2026-04-17 12:44:30 +08:00
Richard Luo
30c5eeaf20
docs: fix Windows install command to work in both CMD and PowerShell (#3252) 2026-04-17 10:36:23 +08:00
顾盼
9e2f63a1ca
feat(memory): managed auto-memory and auto-dream system (#3087)
* docs: add auto-memory implementation log

* feat(core): add managed auto-memory storage scaffold

* feat(core): load managed auto-memory index

* feat(core): add managed auto-memory recall

* feat(core): add managed auto-memory extraction

* feat(cli): add managed auto-memory dream commands

* feat(core): add auxiliary side-query foundation

* feat(memory): add model-driven recall selection

* feat(memory): add model-driven extraction planner

* feat(core): add background task runtime foundation

* feat(memory): schedule auto dream in background

* feat(core): add background agent runner foundation

* feat(memory): add extraction agent planner

* feat(core): add dream agent planner

* feat(core): rebuild managed memory index

* feat(memory): add governance status commands

* feat(memory): add managed forget flow

* feat(core): harden background agent planning

* feat(memory): complete managed parity closure

* test(memory): add managed lifecycle integration coverage

* feat: align behavior with cc

* feat(memory-ui): add memory saved notification and memory count badge

Feature 3 - Memory Saved Notification:
- Add HistoryItemMemorySaved type to types.ts
- Create MemorySavedMessage component for rendering '● Saved/Updated N memories'
- In useGeminiStream: detect in-turn memory writes via mapToDisplay's
  memoryWriteCount field and emit 'memory_saved' history item after turn
- In client.ts: capture background dream/extract promises and expose
  via consumePendingMemoryTaskPromises(); useGeminiStream listens
  post-turn and emits 'Updated N memories' notification for background tasks

Feature 4 - Memory Count Badge:
- Add isMemoryOp field to IndividualToolCallDisplay
- Add memoryWriteCount/memoryReadCount to HistoryItemToolGroup
- Add detectMemoryOp() in useReactToolScheduler using isAutoMemPath
- ToolGroupMessage renders '● Recalled N memories, Wrote N memories' badge
  at the top of tool groups that touch memory files

Fix: process.env bracket-access in paths.ts (noPropertyAccessFromIndexSignature)
Fix: MemoryDialog.test.tsx mock useSettings to satisfy SettingsProvider requirement

* fix(memory-ui): auto-approve memory writes, collapse memory tool groups, fix MEMORY.md path

Problem 1 - Auto-approve memory file operations:
- write-file.ts: getDefaultPermission() checks isAutoMemPath; returns 'allow'
  for managed auto-memory files, 'ask' for all other files
- edit.ts: same pattern

Problem 2 - Feature 4 UX: collapse memory-only tool groups:
- ToolGroupMessage: detect when all tool calls have isMemoryOp set (pure memory
  group) and all are complete; render compact '● Recalled/Wrote N memories
  (ctrl+o to expand)' instead of individual tool call rows
- ctrl+o toggles expand/collapse when isFocused and group is memory-only
- Mixed groups (memory + other tools) keep badge-at-top behaviour
- Expanded state shows individual tool calls with '● Memory operations
  (ctrl+o to collapse)' header

Problem 3 - MEMORY.md path mismatch:
- prompt.ts: Step 2 now references full absolute path ${memoryDir}/MEMORY.md
  so the model writes to the correct location inside the memory directory,
  not to the parent project directory

Fix tests:
- write-file.test.ts: add getProjectRoot to mockConfigInternal
- prompt.test.ts: update assertion to match full-path section header

* fix(memory-ui): fix duplicate notification, broken ctrl+o, and Edit tool detection

- Remove duplicate 'Saved N memories' notification: the tool group badge already
  shows 'Wrote N memories'; the separate HistoryItemMemorySaved addItem after
  onComplete was double-counting. Keep only the background-task path
  (consumePendingMemoryTaskPromises).

- Remove ctrl+o expand: Ink's Static area freezes items on first render and
  cannot respond to user input. useInput/useState(isExpanded) in a Static item
  is a no-op. Removed the dead code; memory-only groups now always render as
  the compact summary (no fake interactive hint).

- Fix Edit tool detection: detectMemoryOp was checking for 'edit_file' but the
  real tool name constant is 'edit'. Also removed non-existent 'create_file'
  (write_file covers all writes). Now editing MEMORY.md is correctly identified
  as a memory write op, collapses to 'Wrote N memories', and is auto-approved.

* fix(dream): run /dream as a visible submit_prompt turn, not a silent background agent

The previous implementation ran an AgentHeadless background agent that could
take 5+ minutes with zero UI feedback — user saw a blank screen for the entire
duration and then at most one line of text.

Fix: /dream now returns submit_prompt with the consolidation task prompt so it
runs as a regular AI conversation turn. Tool calls (read_file, write_file, edit,
grep_search, list_directory, glob) are immediately visible as collapsed tool
groups as the model works through the memory files — identical UX to Claude Code.

Also export buildConsolidationTaskPrompt from dreamAgentPlanner so dreamCommand
can reuse the same detailed consolidation prompt that was already written.

* fix(memory): auto-allow ls/glob/grep on memory base directory

Add getMemoryBaseDir() to getDefaultPermission() allow list in ls.ts,
glob.ts, and grep.ts — mirrors the existing pattern in read-file.ts.

Without this, ListFiles/Glob/Grep on ~/.qwen/* would trigger an
approval dialog, blocking /dream at its very first step.

* fix(background): prevent permission prompt hangs in background agents

Match Claude Code's headless-agent intent: background memory agents must never
block on interactive permission prompts.

Wrap background runtime config so getApprovalMode() returns YOLO, ensuring any
ask decision is auto-approved instead of hanging forever. Add regression test
covering the wrapped approval mode.

* fix(memory): run auto extract through forked agent

Make managed auto-memory extraction follow the Claude Code architecture:
background extraction now uses a forked agent to read/write memory files
directly, instead of planning patches and applying them with a separate
filesystem pipeline.

Keep the old patch/model path only as fallback if the forked agent fails.
Add regression tests covering the new execution path and tool whitelist.

* refactor(memory): remove legacy extract fallback pipeline

Delete the old patch/model/heuristic extraction path entirely.
Managed auto-memory extract now runs only through the forked-agent
execution flow, with no planner/apply fallback stages remaining.

Also remove obsolete exports/tests and update scheduler/integration
coverage to use the forked-agent-only architecture.

* refactor(memory): move auxiliary files out of memory/ directory

meta.json, extract-cursor.json, and consolidation.lock are internal
bookkeeping files, not user-visible memories. Move them one level up
to the project state dir (parent of memory/) so that the memory/
directory contains only MEMORY.md and topic files, matching the
clean layout of the upstream reference implementation.

Add getAutoMemoryProjectStateDir() helper in paths.ts and update the
three path accessors + store.test.ts path assertions accordingly.

* fix(memory): record lastDreamAt after manual /dream run

The /dream command submits a prompt to the main agent (submit_prompt),
which writes memory files directly. Because it bypasses dreamScheduler,
meta.json was never updated and /memory always showed 'never'.

Fix by:
- Exporting writeDreamManualRunToMetadata() from dream.ts
- Adding optional onComplete callback to SubmitPromptActionReturn and
  SubmitPromptResult (types.ts / commands/types.ts)
- Propagating onComplete through slashCommandProcessor.ts
- Firing onComplete after turn completion in useGeminiStream.ts
- Providing the callback in dreamCommand.ts to write lastDreamAt

* fix(memory): remove scope params from /remember in managed auto-memory mode

--global/--project are legacy save_memory tool concepts. In managed
auto-memory mode the forked agent decides the appropriate type
(user/feedback/project/reference) based on the content of the fact.

Also improve the prompt wording to explicitly ask the agent to choose
the correct type, reducing the tendency to default to 'project'.

* feat(ui): show '✦ dreaming' indicator in footer during background dream

Subscribe to getManagedAutoMemoryDreamTaskRegistry() in Footer via a
useDreamRunning() hook. While any dream task for the current project is
pending or running, display '✦ dreaming' in the right section of the
footer bar, between Debug Mode and context usage.

* refactor(memory): align dream/extract infrastructure with Claude Code patterns

Five improvements based on Claude Code parity audit:

1. Memoize getAutoMemoryRoot (paths.ts)
   - Add _autoMemoryRootCache Map, keyed by projectRoot
   - findCanonicalGitRoot() walks the filesystem per call; memoize avoids
     repeated git-tree traversal on hot-path schedulers/scanners
   - Expose clearAutoMemoryRootCache() for test teardown

2. Lock file stores PID + isProcessRunning reclaim (dreamScheduler.ts)
   - acquireDreamLock() writes process.pid to the lock file body
   - lockExists() reads PID and calls process.kill(pid, 0); dead/missing
     PID reclaims the lock immediately instead of waiting 2h
   - Stale threshold reduced to 1h (PID-reuse guard, same as CC)

3. Session scan throttle (dreamScheduler.ts)
   - Add SESSION_SCAN_INTERVAL_MS = 10min (same as CC)
   - Add lastSessionScanAt Map<projectRoot, number> to ManagedAutoMemoryDreamRuntime
   - When time-gate passes but session-gate doesn't, throttle prevents
     re-scanning the filesystem on every user turn

4. mtime-based session counting (dreamScheduler.ts)
   - Replace fragile recentSessionIdsSinceDream Set in meta.json with
     filesystem mtime scan (listSessionsTouchedSince)
   - Mirrors Claude Code's listSessionsTouchedSince: reads session JSONL
     files from Storage.getProjectDir()/chats/, filters by mtime > lastDreamAt
   - Immune to meta.json corruption/loss; no per-turn metadata write
   - ManagedAutoMemoryDreamRuntime accepts injectable SessionScannerFn
     for clean unit testing without real session files

5. Extraction mutual exclusion extended to write_file/edit (extractScheduler.ts)
   - historySliceUsesMemoryTool() now checks write_file/edit/replace/create_file
     tool calls whose file_path is within isAutoMemPath()
   - Previously only detected save_memory; missed direct file writes by
     the main agent, causing redundant background extraction
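
The PID-liveness reclaim in item 2 can be sketched as below (function names are illustrative). `process.kill(pid, 0)` sends no signal: it only performs the existence/permission check, throwing `ESRCH` when the process is gone.

```typescript
export function isProcessRunning(pid: number): boolean {
  try {
    process.kill(pid, 0); // signal 0: existence check only, nothing is sent
    return true;
  } catch (err) {
    // EPERM means the process exists but is owned by another user.
    return (err as { code?: string }).code === "EPERM";
  }
}

// A lock whose recorded PID is dead (or unparseable) can be reclaimed
// immediately instead of waiting out the stale-age threshold.
export function canReclaimLock(lockBody: string): boolean {
  const pid = Number.parseInt(lockBody.trim(), 10);
  return !Number.isFinite(pid) || !isProcessRunning(pid);
}
```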

* docs(memory): add user-facing memory docs, i18n for all locales, simplify /forget

- Add docs/users/features/memory.md: comprehensive user-facing guide covering
  QWEN.md instructions, auto-memory behaviour, all memory commands, and
  troubleshooting; replaces the placeholder auto-memory.md
- Update docs/users/features/_meta.ts: rename entry auto-memory → memory
- Update docs/users/features/commands.md: add /init, /remember, /forget,
  /dream rows; fix /memory description; remove /init duplicate
- Update docs/users/configuration/settings.md: add memory.* settings section
  (enableManagedAutoMemory, enableManagedAutoDream) between tools and permissions
- Remove /forget --apply flag: preview-then-apply flow replaced with direct
  deletion; update forgetCommand.ts, en.js, zh.js accordingly
- Add all auto-memory i18n keys to de, ja, pt, ru locales (18 keys each):
  Open auto-memory folder, Auto-memory/Auto-dream status lines, never/on/off,
  ✦ dreaming, /forget and /remember usage strings, all managed-memory messages
- Remove dead save_memory branch from extractScheduler.partWritesToMemory()
- Add ✦ dreaming indicator to Footer.tsx with i18n; fix Footer.test.tsx mocks
- Refactor MemoryDialog.tsx auto-dream status line to use i18n
- Remove save_memory tool (memoryTool.ts/test); clean up webui references
- Add extractionPlanner.ts, const.ts and associated tests
- Delete stale docs/users/configuration/memory.md and
  docs/developers/tools/memory.md (content superseded)

* refactor(memory): remove all Claude Code references from comments and test names

* test(memory): remove empty placeholder test files that cause vitest to fail

* fix eslint

* fix test on Windows

* fix test

* fix(memory): address critical review findings from PR #3087

- fix(read-file): narrow auto-allow from getMemoryBaseDir() (~/.qwen) to
  isAutoMemPath(projectRoot) to prevent exposing settings.json / OAuth
  credentials without user approval (wenshao review)

- fix(forget): per-entry deletion instead of whole-file unlink
  - assign stable per-entry IDs (relativePath:index for multi-entry files)
    so the model can target individual entries without removing siblings
  - rewrite file keeping unmatched entries; only unlink when file becomes
    empty (wenshao review)

- fix(entries): round-trip correctness for multi-entry new-format bodies
  - parseAutoMemoryEntries: plain-text line closes current entry and opens
    a new one (was silently ignored when current was already set)
  - renderAutoMemoryBody: emit blank line between adjacent entries so the
    parser can detect entry boundaries on re-read (wenshao review)

- fix(entries): resolve two CodeQL polynomial-regex alerts
  - indentedMatch: \s{2,}(?:[-*]\s+)? → [\t ]{2,}(?:[-*][\t ]+)?
  - topLevelMatch: :\s*(.+)$ → :[ \t]*(\S.*)$
  (github-advanced-security review)

- fix(scan.test): use forward-slash literal for relativePath expectation
  since listMarkdownFiles() normalises all separators to '/' on all
  platforms including Windows

* fix(memory): replace isAutoMemPath startsWith with path.relative()

Using path.relative() instead of string startsWith() is more robust
across platforms — it correctly handles Windows path-separator
differences and avoids potential edge cases where a path prefix match
could succeed on non-separator boundaries.

Addresses github-actions review item 3 (PR #3087).
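
A sketch of the `path.relative()`-based containment check (name illustrative). A naive `candidate.startsWith(root)` would wrongly accept a sibling such as `/home/u/.qwen-evil` for root `/home/u/.qwen`; `path.relative()` only matches on real separator boundaries and normalises platform separators.

```typescript
import * as path from "node:path";

export function isPathInside(root: string, candidate: string): boolean {
  const rel = path.relative(path.resolve(root), path.resolve(candidate));
  if (rel === "") return true; // the root itself counts as inside
  // Anything that escapes upward or resolves to a different root is outside.
  return rel !== ".." && !rel.startsWith(".." + path.sep) && !path.isAbsolute(rel);
}
```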

* feat(telemetry): add auto-memory telemetry instrumentation

Add OpenTelemetry logs + metrics for the five auto-memory lifecycle
events: extract, dream, recall, forget, and remember.

Telemetry layer (packages/core/src/telemetry/):
- constants.ts: 5 new event-name constants
  (qwen-code.memory.{extract,dream,recall,forget,remember})
- types.ts: 5 new event classes with typed constructor params
  (MemoryExtractEvent, MemoryDreamEvent, MemoryRecallEvent,
   MemoryForgetEvent, MemoryRememberEvent)
- metrics.ts: 8 new OTel instruments (5 Counters + 3 Histograms)
  with recordMemoryXxx() helpers; registered inside initializeMetrics()
- loggers.ts: logMemoryExtract/Dream/Recall/Forget/Remember() — each
  emits a structured log record and calls its recordXxx() counterpart
- index.ts: re-exports all new symbols

Instrumentation call-sites:
- extractScheduler.ts ManagedAutoMemoryExtractRuntime.runTask():
  emits extract event with trigger=auto, completed/failed status,
  patches_count, touched_topics, and wall-clock duration
- dream.ts runManagedAutoMemoryDream():
  emits dream event with trigger=auto, updated/noop status,
  deduped_entries, touched_topics, and duration; covers both
  agent-planner and mechanical fallback paths
- recall.ts resolveRelevantAutoMemoryPromptForQuery():
  emits recall event with strategy, docs_scanned/selected, and
  duration; covers model, heuristic, and none paths
- forget.ts forgetManagedAutoMemoryEntries():
  emits forget event with removed_entries_count, touched_topics,
  and selection_strategy (model/heuristic/none)
- rememberCommand.ts action():
  emits remember event with topic=managed|legacy at command
  invocation time (before agent decides the actual memory type)

* refactor(telemetry): remove memory forget/remember telemetry events

Remove EVENT_MEMORY_FORGET and EVENT_MEMORY_REMEMBER along with all
associated infrastructure that is no longer needed:

- constants.ts: remove EVENT_MEMORY_FORGET, EVENT_MEMORY_REMEMBER
- types.ts: remove MemoryForgetEvent, MemoryRememberEvent classes
- metrics.ts: remove MEMORY_FORGET_COUNT, MEMORY_REMEMBER_COUNT constants,
  memoryForgetCounter, memoryRememberCounter module vars,
  their initialization in initializeMetrics(), and
  recordMemoryForgetMetrics(), recordMemoryRememberMetrics() functions
- loggers.ts: remove logMemoryForget(), logMemoryRemember() functions
  and their imports
- index.ts: remove all re-exports for the above symbols
- memory/forget.ts: remove logMemoryForget call-site and import
- cli/rememberCommand.ts: remove logMemoryRemember call-sites and import

* change default value

* fix forked agent

* refactor(background): unify fork primitives into runForkedAgent + cleanup

- Merge runForkedQuery into runForkedAgent via TypeScript overloads:
  with cacheSafeParams → GeminiChat single-turn path (ForkedQueryResult)
  without cacheSafeParams → AgentHeadless multi-turn path (ForkedAgentResult)
- Delete forkedQuery.ts; move its test to background/forkedAgent.cache.test.ts
- Remove forkedQuery export from followup/index.ts
- Migrate all callers (suggestionGenerator, speculation, btwCommand, client)
  to import from background/forkedAgent
- Add getFastModel() / setFastModel() to Config; expose in CLI config init
  and ModelDialog / modelCommand
- Remove resolveFastModel() from AppContainer — now delegated to config.getFastModel()
- Strip Claude Code references from code comments

* fix(memory): address wenshao's critical review findings

- dream.ts: writeDreamManualRunToMetadata now persists lastDreamSessionId
  and resets recentSessionIdsSinceDream, preventing auto-dream from firing
  again in the same session after a manual /dream
- config.ts: gate managed auto-memory injection on getManagedAutoMemoryEnabled();
  when disabled, previously saved memories are no longer injected into new sessions
- rememberCommand.ts: remove legacy save_memory branch (tool was removed);
  fall back to submit_prompt directing agent to write to QWEN.md instead
- BuiltinCommandLoader.ts: only register /dream and /forget when managed
  auto-memory is enabled, matching the feature's runtime availability
- forget.ts: return early in forgetManagedAutoMemoryMatches when matches is
  empty, avoiding unnecessary directory scaffolding as a side effect

* fix test

* fix ci test

* feat(memory): align extract/dream agents to Claude Code patterns

- fix(client): move saveCacheSafeParams before early-return paths so
  extract agents always have cache params available (fixes extract never
  triggering in skipNextSpeakerCheck mode)

- feat(extract): add read-only shell tool + memory-scoped write
  permissions; create inline createMemoryScopedAgentConfig() with
  PermissionManager wrapper (isToolEnabled + evaluate) that allows only
  read-only shell commands and write/edit within the auto-memory dir

- feat(extract): align prompt to Claude Code patterns — manifest block
  listing existing files, parallel read-then-write strategy, two-step
  save (memory file then index)

- feat(dream): remove mechanical fallback; runManagedAutoMemoryDream is
  now agent-only and throws without config

- feat(dream): align prompt to Claude Code 4-phase structure
  (Orient/Gather/Consolidate/Prune+Index); add narrow transcript grep,
  relative→absolute date conversion, stale index pruning, index size cap

- fix(permissions): add isToolEnabled() to MemoryScopedPermissionManager
  to prevent TypeError crash in CoreToolScheduler._schedule

- test: update dreamScheduler tests to mock dream.js; replace removed
  mechanical-dedup test with scheduler infrastructure verification

* move doc to design

* refactor(memory): unify extract+dream background task management into MemoryBackgroundTaskHub

- Add memoryTaskHub.ts: single BackgroundTaskRegistry + BackgroundTaskDrainer shared
  by all memory background tasks; exposes listExtractTasks() / listDreamTasks()
  typed query helpers and a unified drain() method
- extractScheduler: ManagedAutoMemoryExtractRuntime accepts hub via constructor
  (defaults to defaultMemoryTaskHub); test factory gets isolated fresh hub
- dreamScheduler: same pattern — sessionScanner + hub injection; BackgroundTask-
  Scheduler initialized from injected hub; test factory gets isolated hub
- status.ts: replace two separate getRegistry() calls with defaultMemoryTaskHub
  typed query methods
- Footer.tsx (useDreamRunning): subscribe to shared registry, filter by
  DREAM_TASK_TYPE so extract tasks do not trigger the dream spinner
- index.ts: re-export memoryTaskHub.ts so defaultMemoryTaskHub/DREAM_TASK_TYPE/
  EXTRACT_TASK_TYPE are available as top-level package exports

* refactor(background): introduce general-purpose BackgroundTaskHub

Replace memory-specific MemoryBackgroundTaskHub with a domain-agnostic
BackgroundTaskHub in the background/ layer. Any future background task
runtime (3rd, 4th, …) plugs in by accepting a hub via constructor
injection — no new infrastructure required.

Changes:
- Add background/taskHub.ts: BackgroundTaskHub (registry + drainer +
  createScheduler() + listByType(taskType, projectRoot?)) and the
  globalBackgroundTaskHub singleton. Zero knowledge of any task type.
- Delete memory/memoryTaskHub.ts: its narrow listExtractTasks /
  listDreamTasks helpers are replaced by the generic listByType() call.
- Move EXTRACT_TASK_TYPE to extractScheduler.ts (owned by the runtime
  that defines it); replace 3 hardcoded string literals with the const.
- Move DREAM_TASK_TYPE to dreamScheduler.ts; use hub.createScheduler()
  instead of manually wiring new BackgroundTaskScheduler(reg, drain).
- status.ts: globalBackgroundTaskHub.listByType(EXTRACT_TASK_TYPE, ...)
- Footer.tsx: globalBackgroundTaskHub.registry (shared, filtered by type)
- index.ts: export background/taskHub.js; drop memory/memoryTaskHub.js

* test(background): add BackgroundTaskHub unit tests and hub isolation checks

- background/taskHub.test.ts (11 tests):
  - createScheduler(): tasks registered via scheduler appear in hub registry;
    multiple calls return distinct scheduler instances
  - listByType(): filters by taskType, filters by projectRoot, returns []
    for unknown types, two types co-exist in registry but stay separated
  - drain(): resolves false on timeout, resolves true when tasks complete,
    resolves true immediately when no tasks in flight
  - isolation: tasks in hubA do not appear in hubB
  - globalBackgroundTaskHub: is a BackgroundTaskHub instance with registry/drainer

- extractScheduler.test.ts (+1 test):
  - factory-created runtimes have isolated registries; tasks in runtimeA
    are invisible to runtimeB; all tasks carry EXTRACT_TASK_TYPE

- dreamScheduler.test.ts (+1 test):
  - factory-created runtimes have isolated registries; tasks in runtimeA
    are invisible to runtimeB; all tasks carry DREAM_TASK_TYPE

* refactor(memory): consolidate all memory state into MemoryManager

Replace BackgroundTaskRegistry/Drainer/Scheduler/Hub helper classes and
module-level globals with a single MemoryManager class owned by Config.

## Changes

### New
- packages/core/src/memory/manager.ts — MemoryManager with:
  - scheduleExtract / scheduleDream (inline queuing + deduplication logic)
  - recall / forget / selectForgetCandidates / forgetMatches
  - getStatus / drain / appendToUserMemory
  - subscribe(listener) compatible with useSyncExternalStore
  - storeWith() atomic record registration (no double-notify)
  - Distinct skippedReason 'scan_throttled' vs 'min_sessions' for dream
- packages/core/src/utils/forkedAgent.ts — pure cache util (moved from background/)
- packages/core/src/utils/sideQuery.ts — pure util (moved from auxiliary/)

### Deleted
- background/taskRegistry, taskDrainer, taskScheduler, taskHub and all tests
- background/forkedAgent (moved to utils/)
- auxiliary/sideQuery (moved to utils/)
- memory/extractScheduler, dreamScheduler, state and all tests

### Modified
- config/config.ts — Config owns MemoryManager instance; getMemoryManager()
- core/client.ts — all memory ops via config.getMemoryManager()
- core/client.test.ts — mock MemoryManager instead of individual modules
- memory/status.ts — accepts MemoryManager param, drops globalBackgroundTaskHub
- index.ts — memory exports reduced from 14 modules to 5 (manager/types/paths/store/const)
- cli/commands/dreamCommand.ts — via config.getMemoryManager()
- cli/commands/forgetCommand.ts — via config.getMemoryManager()
- cli/components/Footer.tsx — useSyncExternalStore replacing setInterval polling
- cli/components/Footer.test.tsx — add getMemoryManager mock
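
The `subscribe(listener)` shape mentioned above is what `useSyncExternalStore` expects; a minimal sketch of such a store (class name illustrative, not the real MemoryManager API):

```typescript
type Listener = () => void;

// subscribe() registers a listener and returns an unsubscribe function;
// getSnapshot() returns the current snapshot; set() notifies subscribers.
export class TaskRegistrySketch<T> {
  private readonly listeners = new Set<Listener>();

  constructor(private snapshot: T) {}

  subscribe = (listener: Listener): (() => void) => {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  };

  getSnapshot = (): T => this.snapshot;

  set(next: T): void {
    this.snapshot = next;
    for (const listener of this.listeners) listener(); // tell React to re-read
  }
}
```

In a component this replaces interval polling with `useSyncExternalStore(store.subscribe, store.getSnapshot)`, so the footer re-renders exactly when the task set changes.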
2026-04-16 20:05:45 +08:00
Reid
07475026f6
fix(cli): remember "Start new chat session" until summary changes (#3308)
* fix(cli): remember "Start new chat session" until summary changes

  Persist a project-scoped Welcome Back restart choice keyed to the
  current PROJECT_SUMMARY fingerprint.

  This suppresses the Welcome Back dialog after choosing "Start new chat
  session", while still showing it again after the project summary is
  updated.

* fix conflict
2026-04-16 13:54:14 +08:00
DennisYu07
b5115e731e
feat(hooks): Add HTTP Hook, Function Hook and Async Hook support (#2827)
* add http/async/function type

* fix url error

* resolve comment

* align cc non blocking error

* fix hookRunner for async

* fix(hooks): update hook type validation to support http and function types

- Change validated hook types from ['command', 'plugin'] to ['command', 'http', 'function']
- Add validation for HTTP hooks requiring url field
- Add validation for function hooks requiring callback field
- Add comprehensive test coverage for all hook type validations

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix(hooks): align SSRF protection with Claude Code behavior

- Allow 127.0.0.0/8 (loopback) for local dev hooks
- Allow localhost hostname for local dev hooks
- Allow ::1 (IPv6 loopback) for local dev hooks
- Add 100.64.0.0/10 (CGNAT) to blocked ranges (RFC 6598)
- Update tests to match Claude Code's ssrfGuard.ts behavior

This fixes HTTP hooks failing to connect to local dev servers.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
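
A sketch of the allow/block policy above for a literal IPv4 address (the real guard also resolves DNS first; helpers and names here are illustrative):

```typescript
function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => ((acc << 8) | (parseInt(o, 10) & 0xff)) >>> 0, 0);
}

function inCidr(ip: string, base: string, bits: number): boolean {
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipv4ToInt(ip) & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
}

export function isBlockedForHooks(ip: string): boolean {
  if (inCidr(ip, "127.0.0.0", 8)) return false; // loopback allowed for local dev hooks
  return (
    inCidr(ip, "10.0.0.0", 8) ||
    inCidr(ip, "172.16.0.0", 12) ||
    inCidr(ip, "192.168.0.0", 16) ||
    inCidr(ip, "100.64.0.0", 10) // CGNAT, RFC 6598
  );
}
```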

* refactor(hooks): align HTTP hook security with Claude Code behavior

- Add CRLF/NUL sanitization for env var interpolation (header injection)
- Implement combined abort signal (external signal + timeout)
- Upgrade SSRF protection to DNS-level with ssrfGuard
  - Allow loopback (127.0.0.0/8, ::1) for local dev hooks
  - Block CGNAT (100.64.0.0/10) and IPv6 private ranges
- Increase default HTTP hook timeout to 10 minutes
- Fix VS Code hooks schema to support http type
  - Add url, headers, allowedEnvVars, async, once, statusMessage, shell fields
  - Note: "function" type is SDK-only (callback cannot be serialized to JSON)
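
The combined abort signal (external signal plus timeout) can be sketched as below; on Node 20+ `AbortSignal.any([external, AbortSignal.timeout(ms)])` does the same in one line, shown manually here for clarity:

```typescript
export function combineSignals(external: AbortSignal, timeoutMs: number): AbortSignal {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(new Error("hook timeout")), timeoutMs);
  // Whichever side fires first aborts the combined signal and stops the timer.
  controller.signal.addEventListener("abort", () => clearTimeout(timer), { once: true });
  if (external.aborted) {
    controller.abort(external.reason);
  } else {
    external.addEventListener("abort", () => controller.abort(external.reason), { once: true });
  }
  return controller.signal;
}
```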

* feat(hooks): enhance Function Hook with messages, skillRoot, shell, and matcher support

- Add MessagesProvider for automatic conversation history passing to function hooks
- Add FunctionHookContext with messages, toolUseID, and signal
- Add skillRoot support for skill-scoped session hooks
- Add shell parameter support for command hooks (bash/powershell)
- Add regex matcher support for hook pattern matching
- Add statusMessage to CommandHookConfig
- Change default function hook timeout from 60s to 5s
- Add comprehensive unit tests for all new features

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* add session hook for skills


* fix function hook parsing

* refactor ui for http hook/async hook/function hook

* update doc and add integration test

* change telemetry type and refactor SSRF

* fix project-level bug

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2026-04-16 10:10:33 +08:00
ChiGao
70396d1276
feat: optimize compact mode UX — shortcuts, settings sync, and safety (#3100)
* feat: optimize compact mode UX — shortcuts, settings sync, and safety improvements

- Add Ctrl+O to keyboard shortcuts list (?) and /help command
- Sync compact mode toggle from Settings dialog with CompactModeContext
- Protect tool approval prompts from being hidden in compact mode
  (MainContent forces live rendering during WaitingForConfirmation)
- Remove snapshot freezing on toggle — treat as persistent preference,
  not temporary peek (differs from Claude Code's session-scoped model)
- Add compact mode tip to startup Tips rotation for non-intrusive discovery
- Remove compact mode indicator from footer to reduce UI clutter
- Add competitive analysis design doc (EN + ZH) comparing with Claude Code
- Update user docs (settings.md) and i18n translations (en/zh/ru/pt)

Relates to #3047, #2767, #2770

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: remove frozenSnapshot dead code and Chinese design doc

- Remove frozenSnapshot state, useEffect, and all related logic from
  AppContainer, MainContent, CompactModeContext, and test files
- Simplify MainContent to always render live pendingHistoryItems
- Delete compact-mode-design-zh.md (redundant Chinese translation)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address PR review feedback for compact mode optimization

- Add refreshStatic() call after setCompactMode in SettingsDialog
  so already-rendered Static history updates immediately
- Fix outdated column split comment in KeyboardShortcuts (5+4+4)
- Update design doc: remove all frozenSnapshot references, renumber
  optimization recommendations, fix file reference descriptions
- Add missing i18n keys for de.js and ja.js locales
- Add test for SettingsDialog compact mode sync with CompactModeContext

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: prevent subagent confirmation from being hidden in compact mode

hasConfirmingTool only checks ToolCallStatus.Confirming, but subagent
approvals arrive via resultDisplay.pendingConfirmation while the tool
status remains Executing. Add hasSubagentPendingConfirmation to the
showCompact guard so tool groups with pending subagent confirmations
are always force-expanded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: force show subagent confirmation result in compact mode

The previous fix (47ee03c) correctly force-expanded the tool group
wrapper when a subagent had pending confirmation, but each inner
ToolMessage still hid its resultDisplay due to compactMode check,
which hid the AgentExecutionDisplay containing the inline confirmation
UI.

Add isAgentWithPendingConfirmation to forceShowResult conditions so
the inner AgentExecutionDisplay is rendered even in compact mode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(compact-mode): merge consecutive tool groups across hidden items

In compact mode, sequential tool calls across multiple LLM turns each
produced a separate bordered box, defeating the "compact" intent. The
model typically emits a `gemini_thought` between consecutive tool calls,
which is hidden in compact mode — so visually the boxes look adjacent,
but in `history` they are separated by hidden items.

This commit adds render-time merging of consecutive tool_group history
items, where "consecutive" allows hidden-in-compact items
(`gemini_thought`, `gemini_thought_content`) between them.

Key pieces:
- New `mergeCompactToolGroups` utility that merges adjacent mergeable
  tool_groups, skipping hidden items between them. Force-expand
  conditions (Confirming/Error tools, subagent pending confirmation,
  user-initiated, focused embedded shell) preserve group boundaries so
  authorization prompts, errors, and shell focus stay visible.
- `MainContent.tsx` applies the merger only when `compactMode === true`
  (verbose mode is unchanged) and calls `refreshStatic()` when a merge
  consolidates items, because Ink's `<Static>` is append-only and
  cannot replace already-committed terminal content.
- `CompactToolGroupDisplay.tsx` shows a `× N` count when a merged
  group contains more than one tool, matching the existing single-turn
  multi-tool display style.
- 19 unit tests covering empty/single/multiple groups, hidden-item
  skipping (the 8-tool real-world scenario), force-expand boundaries,
  mixed tool types, and complex sequences.
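The merge described above can be sketched in miniature (hypothetical simplified item shapes — the real `mergeCompactToolGroups` also honors the force-expand boundary conditions listed above):

```typescript
// Simplified stand-in for the CLI's HistoryItem.
type HistoryItem = { type: string; tools?: string[] };

// Items that compact mode hides, so they don't break "adjacency".
const HIDDEN = new Set(["gemini_thought", "gemini_thought_content"]);

export function mergeCompactToolGroups(items: HistoryItem[]): HistoryItem[] {
  const out: HistoryItem[] = [];
  let pendingHidden: HistoryItem[] = [];
  for (const item of items) {
    if (HIDDEN.has(item.type)) {
      pendingHidden.push(item); // hold: may sit between two tool groups
      continue;
    }
    const last = out[out.length - 1];
    if (item.type === "tool_group" && last?.type === "tool_group") {
      // Merge into the previous group; hidden items between them are dropped.
      last.tools = [...(last.tools ?? []), ...(item.tools ?? [])];
      pendingHidden = [];
      continue;
    }
    out.push(...pendingHidden, item);
    pendingHidden = [];
  }
  out.push(...pendingHidden);
  return out;
}
```

The display layer can then show a `× N` count when a merged group holds more than one tool.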

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: 秦奇 <gary.gq@alibaba-inc.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:29:24 +08:00
DennisYu07
08d3d6eb6f
feat(acp): add complete hooks support for ACP integration (#3248)
* complete hooks for acp

* resolve comment

* resolve test

* resolve comment for SessionEnd/SessionStart/PostToolUseFailure/PostToolUse
2026-04-16 09:28:26 +08:00