Mirror of https://github.com/QwenLM/qwen-code.git
Synced 2026-04-28 11:41:04 +00:00 · 86 commits
aeeb2976d6
feat(web-search): remove built-in web_search tool, replace with MCP-based approach (#3502)
* feat(web-search): add GLM (ZhipuAI) web search provider
- Add a GlmProvider class implementing BaseWebSearchProvider using the
ZhipuAI Web Search API (https://open.bigmodel.cn/api/paas/v4/web_search)
- Support multiple search engines: search_std, search_pro,
search_pro_sogou, search_pro_quark
- Support optional config: maxResults, searchIntent,
searchRecencyFilter, contentSize, searchDomainFilter
- Truncate the query to 70 characters per the API limit
- Register 'glm' in the provider discriminated union (types.ts) and the
createProvider() switch (index.ts)
- Add GlmProviderConfig to settingsSchema, ConfigParams, and the Config class
- Add a --glm-api-key CLI flag and GLM_API_KEY env var support in webSearch.ts
- Forward GLM_API_KEY in the sandbox environment
- Update the provider priority list: Tavily > Google > GLM > DashScope
- Add 17 unit tests for GlmProvider and 4 integration tests in index.test.ts
- Update docs/developers/tools/web-search.md with GLM configuration,
env vars, CLI args, pricing, and corrected DashScope billing info
- Fix stale OAuth/free-tier references in web-search.md
Closes #3496
* docs(web-search): fix the DashScope note and add GLM server-side limitations
* fix(web-search): make the DashScope provider work with a standard API
key, removing the qwen-oauth dependency
- DashScopeProvider.isAvailable() now checks config.apiKey instead of authType
- Remove OAuth credential file reading and the resource_url requirement
- Use the standard DashScope endpoint:
dashscope.aliyuncs.com/api/v1/indices/plugin/web_search
- Read the DASHSCOPE_API_KEY env var and --dashscope-api-key CLI flag
- Forward DASHSCOPE_API_KEY into the sandbox environment
- Update the integration test to detect DASHSCOPE_API_KEY
- Update the docs to reflect the new API-key-based configuration
* feat(web-search): remove the built-in web search tool
The web_search tool and all related provider implementations are
removed. Web search functionality will be provided via MCP integrations
instead, which is the direction the broader agent ecosystem is moving.
Removed:
- packages/core/src/tools/web-search/ (entire directory)
- packages/cli/src/config/webSearch.ts
- integration-tests/cli/web_search.test.ts
- ToolNames.WEB_SEARCH, ToolErrorCode.WEB_SEARCH_FAILED
- The webSearch config in ConfigParams, the Config class, and settingsSchema
- CLI options: --tavily-api-key, --google-api-key,
--google-search-engine-id, --glm-api-key, --dashscope-api-key,
--web-search-default
- Sandbox env forwarding for the TAVILY/GLM/DASHSCOPE/GOOGLE search keys
- web_search from the rule-parser, permission-manager, speculation gate,
microcompact tool set, and builtin-agents tool list
* fix: remove a leftover web_search reference
* docs: remove the web_search tool documentation
* docs: add a breaking-change guide
* fix: address review feedback
d71f2fab70
feat(cli): cap inline shell output with configurable line limit (#3508)
* feat(cli): cap inline shell output with configurable line limit
Long-running shell commands (npm install, find /, build logs) currently
fill the viewport with the full visible PTY buffer (up to availableHeight,
~24 lines on a typical terminal). The output dominates the screen and
pushes prior context off the top.
This caps inline ANSI shell output to a small window (default 5 lines,
matching Claude Code's ShellProgressMessage). The hidden line count is
already surfaced via the existing `+N lines` indicator in
`ShellStatsBar`, so users still know how much was elided.
The cap applies only when nothing in the existing escape-hatch set is
true:
- `forceShowResult` (errors, !-prefix user-initiated commands,
tools awaiting confirmation, agents pending confirmation)
- `isThisShellFocused` (ctrl+f focus on a running embedded PTY shell)
- `ui.shellOutputMaxLines = 0` (user opt-out)
Also adds a new `ui.shellOutputMaxLines` setting (default 5) so users
can adjust or disable the cap. The SettingsDialog renders it
automatically via the existing `type: 'number'` schema path.
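The cap decision described above can be sketched as follows. This is an illustrative reduction, not the actual rendering code: the interface and function names are hypothetical, while `forceShowResult`, `isThisShellFocused`, and `ui.shellOutputMaxLines` come from the commit itself.

```typescript
// Hypothetical sketch of the inline shell output cap decision.
interface ShellCapInput {
  forceShowResult: boolean;     // errors, !-prefix commands, pending confirmations
  isThisShellFocused: boolean;  // ctrl+f focus on a running embedded PTY shell
  shellOutputMaxLines: number;  // ui.shellOutputMaxLines setting (default 5; 0 = opt out)
  availableHeight: number;      // full visible PTY buffer height
}

function effectiveShellHeight(input: ShellCapInput): number {
  const cap = input.shellOutputMaxLines;
  // Any escape hatch being true disables the cap entirely.
  const bypass = input.forceShowResult || input.isThisShellFocused || cap === 0;
  return bypass ? input.availableHeight : Math.min(cap, input.availableHeight);
}
```

With the default cap of 5 and a 24-row terminal, an unfocused, non-forced shell shows 5 lines; any escape hatch restores the full 24.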
Notes on scope:
- Only the `'ansi'` display branch is capped. `'string'`, `'diff'`,
`'todo'`, `'plan'`, `'task'` renderers are untouched.
- `AnsiOutputDisplay` is only produced by shell tools (`shell.ts`,
`shellCommandProcessor.ts`), so other tool outputs are unaffected.
- The `+N lines` count is bounded by the headless xterm buffer height
(~30 rows) — a pre-existing limitation of the buffer-based stats,
not introduced here.
Tests:
- 4 new ToolMessage tests cover cap default, forceShowResult bypass,
settings disable (cap=0), and custom cap value.
- The existing `MockAnsiOutputText` / `MockShellStatsBar` mocks were
extended to print `availableTerminalHeight` / `displayHeight` so
the cap behavior is asserted at the prop level.
* fix(cli): apply shell output cap to completed string display too
Initial PR caught only the streaming ANSI branch. AI shell tools emit
the final completed result through `shell.ts:returnDisplayMessage =
result.output`, which is a plain string. That string went through
`StringResultRenderer` with the unmodified `availableHeight`, so the
cap was effectively bypassed for the steady-state display the user
actually sees most of the time.
Verified manually in tmux: a `seq 1 30` invocation by the AI now
collapses to "first 26 lines hidden ... 27 28 29 30" instead of
listing all 30 rows. `!`-prefix `seq 1 30` still expands fully via
the existing `isUserInitiated → forceShowResult` bypass.
Changes:
- Detect shell tool by name (matches existing `SHELL_COMMAND_NAME` /
`SHELL_NAME` checks already used in this file)
- Rename `ansiAvailableHeight` → `shellCapHeight` since it now
governs the string branch as well
- Pass `shellCapHeight` to `StringResultRenderer`; the value
falls back to `availableHeight` for non-shell tools so other
tools' string output is unaffected
- Two new tests: shell completed string is capped; non-shell
string is not
- Two existing tests updated to use `name="Shell"` so they actually
exercise the cap path (would previously have passed by accident
since the original code didn't check tool name)
Also picks up the auto-regenerated VSCode IDE companion settings
schema entry for `ui.shellOutputMaxLines`.
* fix(cli): symmetrize ANSI/string row counts and clamp shell cap input
Addresses two non-blocking review observations on #3508.
Off-by-one between paths:
MaxSizedBox reserves one row for its overflow banner when content
exceeds maxHeight (visibleContentHeight = max - 1). The ANSI path
pre-slices to N in AnsiOutputText so MaxSizedBox sees exactly N
rows and renders all N — plus the separate ShellStatsBar line.
The string path passes the raw cap and lets MaxSizedBox handle
overflow, so it shows N-1 content rows + the banner.
Result with cap=5: ANSI showed 5+stats, string showed 4+banner.
Pass shellCapHeight + 1 to StringResultRenderer when capping so
both paths render N visible content rows. Verified in tmux: the
completed Shell tool box now reports `... first 25 lines hidden ...`
followed by lines 26-30 (was 26 + lines 27-30).
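The off-by-one above reduces to simple row arithmetic. A hypothetical sketch (the real components are AnsiOutputText, StringResultRenderer, and MaxSizedBox; these helper names are illustrative):

```typescript
// ANSI path: output is pre-sliced to `cap` rows, so MaxSizedBox never
// overflows and renders all of them (plus the separate ShellStatsBar line).
function ansiVisibleRows(cap: number): number {
  return cap;
}

// String path: MaxSizedBox reserves one row for its overflow banner when
// content exceeds maxHeight (visibleContentHeight = maxHeight - 1).
function stringVisibleRows(maxHeight: number, contentRows: number): number {
  return contentRows > maxHeight ? maxHeight - 1 : contentRows;
}
```

Passing `cap + 1` on the string path makes `stringVisibleRows(cap + 1, rows)` equal `ansiVisibleRows(cap)` whenever the content overflows, which is the symmetrization the commit describes.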
Setting validation:
Schema accepts any number; the dialog only rejects NaN. Negatives
silently disabled the cap (only 0 is documented as off) and
fractional values produced fractional slice counts. Added
Math.max(0, Math.floor(value || 0)) at the use site so:
- negatives → 0 → cap disabled (matches the documented opt-out)
- fractions → floor → whole-row cap
- non-numeric (raw settings.json edits) → 0 → cap disabled
Schema-level minimum/integer constraints aren't supported by the
current settings infrastructure (no other number setting uses
them either), so the guard lives at the use site.
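The use-site guard is small enough to show directly. The `Math.max(0, Math.floor(value || 0))` expression is from the commit; the wrapper function name is hypothetical:

```typescript
// Normalize a raw ui.shellOutputMaxLines value at the use site:
// negatives and non-numbers disable the cap (0), fractions floor to whole rows.
function normalizeShellCap(value: unknown): number {
  const n = typeof value === 'number' ? value : Number.NaN;
  return Math.max(0, Math.floor(n || 0)); // NaN || 0 evaluates to 0
}
```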
Tests:
- Updated string-cap test to assert lines 26-30 visible (catches
the +1 fix; was lines 27-30 before)
- New parameterized test covers -1, 1.5, and a non-numeric value
ebe364d0b8
feat(retry): add persistent retry mode for unattended CI/CD environments (#3080)
* feat(retry): add persistent retry mode for unattended CI/CD environments
When running in CI/CD pipelines or background daemon mode, transient
API capacity errors (429/529) should not terminate long-running tasks
after a fixed number of retries. This adds an environment-aware
persistent retry mode that retries indefinitely for transient errors,
with exponential backoff capped at 5 minutes and heartbeat keepalives
every 30 seconds to prevent CI runner timeouts.
* docs: add persistent retry mode documentation
Add environment variable entries (QWEN_CODE_UNATTENDED_RETRY,
QWEN_CODE_BG) to the settings reference, and a new "Persistent Retry
Mode" section to the headless mode docs covering activation, behavior,
and CI/CD usage examples.
* refactor(retry): simplify to a single explicit env var, QWEN_CODE_UNATTENDED_RETRY
Remove QWEN_CODE_BG and CI=true as activation triggers for persistent
retry. Having multiple env vars with identical behavior adds confusion,
and silently activating infinite retry on CI=true is dangerous: a
regular CI test hitting a 429 would hang forever instead of failing fast.
* fix(retry): address PR review feedback
- Forward the caller's abortSignal into retryWithBackoff in both
baseLlmClient.ts and geminiChat.ts so persistent waits remain
cancellable (wenshao)
- Re-apply maxBackoff and capMs after jitter so delays strictly respect
the stated caps (Copilot)
- Respect shouldRetryOnError in persistent mode so callers can force
fast-fail even for transient 429/529 errors (Copilot)
- Guard sleepWithHeartbeat against an infinite loop when the heartbeat
interval is <= 0 via Math.max(1, ...) (Copilot)
- Normalize isEnvTruthy with trim/toLowerCase for robust env var
parsing across CI conventions (Copilot)
* test(retry): add missing unit tests for the shouldRetryOnError
override and the heartbeat zero-interval guard
* fix(retry): do not cap Retry-After delays at maxBackoff
Server-specified Retry-After values should only be limited by the
absolute cap (capMs/6h), not the exponential backoff cap
(maxBackoff/5min). Jitter is also skipped for Retry-After since the
server already specified the exact wait time.
* refactor(retry): align isUnattendedMode with the project's env parsing convention
Replace the custom isEnvTruthy (trim + toLowerCase) with strict matching
(val === 'true' || val === '1') to match parseBooleanEnvFlag used
elsewhere in the codebase. This prevents inconsistent behavior where
'TRUE' or ' 1 ' would activate persistent retry here but not in
telemetry or other env-driven features.
* test(retry): add Retry-After handling tests for persistent mode
Cover three key behaviors:
- Retry-After is NOT capped at maxBackoff (only at capMs)
- Retry-After IS capped at the persistentCapMs absolute limit
- Retry-After delays have no jitter applied
* fix(test): add isUnattendedMode to the retry.js mock in the baseLlmClient tests
The existing vi.mock for retry.js only exported retryWithBackoff. After
adding isUnattendedMode to the retry module, baseLlmClient.ts imports
it, causing all 10 generateJson tests to fail with 'No
"isUnattendedMode" export is defined on the mock'.
* fix(retry): wire persistent retry mode into client.ts generateContent
Forward persistentMode and abortSignal to retryWithBackoff() in
GeminiClient.generateContent(), matching the existing wiring in
baseLlmClient.ts and geminiChat.ts.
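The delay schedule and activation check described in these commits can be sketched as below. This is a hypothetical reduction of the behavior, not the project's actual `retryWithBackoff`; the constants match the stated 5-minute backoff cap, 30-second heartbeat, and strict env var matching:

```typescript
const MAX_BACKOFF_MS = 5 * 60 * 1000; // exponential backoff cap (5 min)
const HEARTBEAT_MS = 30 * 1000;       // keepalive interval for CI runners

// Strict matching, per the parseBooleanEnvFlag convention: only 'true' or '1'.
function isUnattendedMode(env: Record<string, string | undefined>): boolean {
  const val = env['QWEN_CODE_UNATTENDED_RETRY'];
  return val === 'true' || val === '1';
}

// Exponential backoff, capped at MAX_BACKOFF_MS (jitter omitted for clarity).
function backoffDelayMs(attempt: number, baseMs = 1000): number {
  return Math.min(baseMs * 2 ** attempt, MAX_BACKOFF_MS);
}

// Sleep in heartbeat-sized slices so something is emitted every ~30s.
async function sleepWithHeartbeat(totalMs: number, onBeat: () => void): Promise<void> {
  const beat = Math.max(1, HEARTBEAT_MS); // guard against a <= 0 interval
  for (let waited = 0; waited < totalMs; waited += beat) {
    await new Promise((resolve) => setTimeout(resolve, Math.min(beat, totalMs - waited)));
    onBeat();
  }
}
```

Note the distinction the commits draw: server-sent Retry-After values bypass `backoffDelayMs` entirely and are limited only by the absolute cap.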
afbb5e71db
fix(cli): rework session recap rendering and add blur threshold setting (#3482)
* feat(cli): make the recap away-threshold configurable
The 5-minute blur threshold was hard-coded. Confirmed from Claude
Code's own binary (v2.1.113) that 5 minutes is their default as well
(and that they shift to 60 minutes when the 1h prompt cache is active),
so the default stays, but expose it as
`general.sessionRecapAwayThresholdMinutes` for users who briefly
alt-tab often and don't want recaps piling up, or who want to lower it
for testing. Non-positive / unset values fall back to the 5-minute
default, so dropping the key has the same behavior as before.
* fix(core): align recap prompt with Claude Code (1-2 sentences, ≤40 words)
The earlier "exactly one sentence, 80-char cap" was an over-correction
to a single in-the-moment ask. Stepping back: the natural shape of
"current task + next action" is two clauses, and forcing them into a
single sentence either crams them with a semicolon or drops the next
action entirely on complex sessions.
Adopt Claude Code's prompt verbatim (extracted from the v2.1.113
binary): "under 40 words, 1-2 plain sentences, no markdown. Lead with
the overall goal and current task, then the one next action. Skip
root-cause narrative, fix internals, secondary to-dos, and em-dash
tangents."
Add a Chinese-budget note (~80 chars) and keep the <recap>...</recap>
wrapping that protects against reasoning-model preambles leaking into
the UI. The sticky banner already re-measures controls height when the
recap toggles, so a 2-line render lays out cleanly.
Sweep "one-line" out of user-facing copy (settings description,
slash-command description, feature docs, design doc) so the
documentation matches the new shape.
* fix(cli): restore "one-line" in user-facing recap copy
Verified from the Claude Code v2.1.113 binary that the slash-command
description IS literally "Generate a one-line session recap now" even
though the underlying prompt allows 1-2 sentences. Claude Code is
deliberately setting a tighter user expectation than the prompt
guarantees, which keeps the surface feeling "glanceable". Mirror that
asymmetry: keep the prompt at 1-2 sentences (the previous commit) for
behavioral parity, but put "one-line" back in the user-visible copy
(slash-command description, settings description, user docs). The
internal design doc keeps the accurate "1-2 sentence" wording.
* fix(cli): render recap inline in history to match Claude Code
Earlier I read the user's complaint that the recap "scrolled away" as
"the recap should be sticky above the input box," and built a sticky
banner accordingly. Disassembly of the Claude Code v2.1.113 binary
shows the actual behavior is the opposite: their away_summary is a
plain `type:"system", subtype:"away_summary"` message dispatched
through the standard message renderer (no Static, no anchor, no flexbox
pinning); it scrolls with the conversation like every other system
message.
Tear out the sticky-banner machinery so recap matches that:
- Recap is back in the `HistoryItemWithoutId` union and `addItem`'d
into history (both from `/recap` and from the auto-trigger), so it
serializes into session saves and behaves like every other history
item: no special clear paths, no resume wrapper, no layout-effect
re-measure dance.
- `useAwaySummary` takes `addItem` again instead of a setter callback.
- `AwayRecapMessage` renders the way Claude Code does: a 2-column
gutter with `※`, then bold "recap: " and italic content, all in dim
color. Drop the prior `StatusMessage`-shaped layout that fused prefix
and label into "※ recap:".
- Remove the AppContainer plumbing, the slashCommandProcessor state,
the UIStateContext fields, the DefaultAppLayout / ScreenReader
placement blocks, the test-utils mocks, and the noninteractive stub.
Restore `useResumeCommand.handleResume` to a void return since callers
no longer need the success boolean.
Sweep the design doc so the architecture diagram, files table, and hook
deps reflect the inline-history flow.
* fix(cli): dedupe back-to-back auto-recaps with no new user turns between them
Two consecutive blur cycles, each over the threshold but with no new
user activity in between, would each fire their own auto-recap and add
two near-duplicate entries to history (same task, slightly different
wording from temperature-driven LLM variance). Reported case: leaving
the terminal twice while a /review of one PR was still on screen
produced two recaps, both about that same review.
Add a `shouldFireRecap` gate before kicking off the LLM call:
- Need at least 3 user messages in history total (don't fire on a
near-empty session).
- If a previous away_recap is already in history, need at least 2 new
user messages since that one before another can fire.
Same shape as Claude Code's `Ic1` gate (`Sc1=3`, `Rc1=2`). Read history
through a ref so this isn't in the effect's deps and the effect doesn't
re-run on every message.
* fix(cli): type useResumeCommand.handleResume as Promise<void>
Per gemini review on #3482: the interface declared this as `() => void`
but the implementation is `async` and returns `Promise<void>`. The
mismatch silently lost the chainable promise; tests had to launder it
through `as unknown as Promise<void> | undefined` just to await.
Tighten the interface to `Promise<void>` and drop the cast in the
"closes the dialog immediately" test.
* fix(cli): persist the auto-fired recap to the chat recording so /resume keeps it
Per yiliang114 review on #3482: the manual `/recap` path persists
across `/resume` because the slash-command processor records every
output history item via `chatRecorder.recordSlashCommand({ phase:
'result', outputHistoryItems })`, but the auto path called `addItem`
directly and bypassed that recorder. The result was an asymmetry where
users who triggered recap manually saw it after `/resume`, while users
whose recap fired automatically lost it.
Mirror the manual recording from useAwaySummary's `.then` callback:
record only the `result` phase (not the invocation, since we don't want
a fake `> /recap` user line replayed) with the away-recap item as the
single output. Wrapped in try/catch because recap is best-effort and
must never surface a failure to the user.
Add useAwaySummary.test.ts covering:
- the recording path is taken on a successful auto-trigger
- the dedup gate (`shouldFireRecap`) suppresses the LLM call entirely,
including the recording, when no new user turns happened since the last
recap
* fix(cli): cast the recap item via spread to satisfy strict tsc --build
CI's `tsc --build` (stricter than the local `tsc --noEmit`) rejected
the direct `item as Record<string, unknown>` cast: HistoryItemAwayRecap's
literal `type: 'away_recap'` field doesn't overlap with `unknown`,
TS2352. Use the `{ ...item } as Record<string, unknown>` spread pattern
that the rest of the codebase (arenaCommand, slashCommandProcessor's
serializer) already uses for the same SlashCommandRecordPayload field.
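The dedup gate described in this commit (at least 3 user messages total, at least 2 since the last recap) can be sketched as a pure function. The thresholds are from the commit; the item shape and function signature here are hypothetical:

```typescript
// Minimal history item shape for illustration.
interface HistoryLike {
  type: 'user' | 'away_recap' | string;
}

// Gate an auto-recap on prior user activity, per the Sc1=3 / Rc1=2 shape.
function shouldFireRecap(history: HistoryLike[]): boolean {
  const userCount = history.filter((h) => h.type === 'user').length;
  if (userCount < 3) return false; // don't fire on a near-empty session
  const lastRecap = history.map((h) => h.type).lastIndexOf('away_recap');
  if (lastRecap === -1) return true; // no prior recap: fire
  // Require at least 2 new user messages since the last recap.
  const usersSince = history
    .slice(lastRecap + 1)
    .filter((h) => h.type === 'user').length;
  return usersSince >= 2;
}
```

Reading history through a ref, as the commit notes, keeps this check out of the effect's dependency array.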
52c7a3d0ed
fix(cli): pin /recap above input and align defaults with fastModel (#3478)
* fix(cli): pin /recap above the input box and align defaults with fastModel
The recap rendered as a regular history item, so as soon as the model
streamed a new reply the "where you left off" reminder scrolled out of
view. Move it to a sticky banner anchored just above the Composer
(matching how btwItem is rendered) so it stays visible across turns.
While reworking the surface, also:
- Replace the chevron prefix with `※ recap:` so it reads as a labeled
recap line instead of a generic dim message.
- Mirror the placement in ScreenReaderAppLayout so screen-reader users
see it in the same logical position.
- Drop HistoryItemAwayRecap from the HistoryItemWithoutId union: it is
no longer addItem-able, and leaving it in invited silent no-op bugs
where addItem(awayRecap) would compile but render nothing.
- Clear the banner on /clear, /reset, /new and on /resume into a
different session, so a recap from a previous context doesn't bleed
into a freshly started one.
- Re-measure the controls box when the banner appears or disappears
(its height changes by a couple of lines) so the main content area
recomputes availableTerminalHeight and stays laid out correctly.
Auto-trigger now defaults to "on iff fastModel is configured" rather
than unconditionally on. Running an ambient background recap on the
main coding model is too costly and slow to be a sane default; tying it
to fastModel means the feature is silently opt-in for users who have
set up a cheap fast model. An explicit `general.showSessionRecap`
override still wins either way, and `/recap` itself is unaffected.
Sharpen the slash-command description to match the new behavior.
* fix(core): silence the AbortSignal listener-leak warning in the OpenAI pipeline
Every chat.completions.create call wires up an abort listener on the
incoming AbortSignal, and several layers (retryWithBackoff, the
LoggingContentGenerator wrapper, the SDK's own internal stream/fetch
plumbing) register their own listeners against the same signal. Five
retry attempts plus those layers comfortably exceed Node's default
10-listener cap and produce a MaxListenersExceededWarning. With
features that share or compose signals (e.g., recap + followup
speculation firing on the same response cycle), even a higher cap gets
blown past.
The signals here are per-request and short-lived, so the accumulation
is structural rather than a real memory leak; they get GC'd as soon as
the request settles. setMaxListeners(0, signal) at the SDK boundary
disables the warning for these specific signals only, without masking
any genuine leak elsewhere in the process. Idempotent and confined to
the one place where retry-bound API calls cross into the SDK.
* fix(core): tighten recap to a single sentence within 80 chars
The 1-3 sentence budget reliably wrapped onto two lines in the sticky
banner above the input box, which made it visually heavy for what is
supposed to be a glanceable reminder. Constrain the prompt to exactly
one sentence with a hard 80-char cap, and merge the "high-level task +
next step" rule into a single sentence instead of two adjacent ones.
Also sweep the docs (settings, commands, design) so the user-facing
copy and the internal design notes match the new format.
* fix(cli): apply review feedback for the recap PR
Two issues from review:
- The schema description for `general.showSessionRecap` still said
"1-3 sentence summary" while the prompt, docs, and slash-command copy
already say "one-line". Align the text in settingsSchema.ts and the
regenerated VSCode JSON schema.
- The /resume wrapper cleared the sticky recap synchronously, before
the inner handler had a chance to discover that no session data was
available. On a no-op resume the user would still lose the current
recap. Make `useResumeCommand.handleResume` return Promise<boolean>
reporting whether a session actually loaded, and only clear the recap
on a confirmed switch.
* fix(cli): default showSessionRecap to false and drop the fastModel heuristic
The earlier "enabled iff fastModel is configured" default made it hard
for users to answer the simple question "is auto-recap on for me right
now?": the answer depended on a setting from a different category, and
setting/unsetting fastModel silently changed recap behavior. Revert to
a plain boolean with a conservative off-by-default:
- Auto-trigger fires only when the user explicitly sets
`general.showSessionRecap: true`.
- Manual `/recap` keeps working regardless (that's a user-initiated
call, not an ambient one).
- Users never get ambient LLM calls billed to their main coding model
without having opted in.
Aligns settings.md, the design doc, and the regenerated JSON schema.
0b8b3da836
feat(cli): add slashCommands.disabled setting to gate slash commands (#3445)
* feat(cli): add slashCommands.disabled setting to gate slash commands
Introduces a first-class way for operators to hide and refuse to execute
specific slash commands. Useful for multi-tenant / enterprise / sandboxed
deployments where different users should see different command subsets.
The denylist is sourced from three unioned inputs:
* `slashCommands.disabled` settings key (string[], UNION merge), so
workspace scopes can only add to a denylist set at user or system
scope, never shrink it — matching the shape already used by
`permissions.deny`.
* `--disabled-slash-commands` CLI flag (comma-separated or repeated).
* `QWEN_DISABLED_SLASH_COMMANDS` environment variable.
Matching is case-insensitive against the final (post-rename) command
name, so extension commands are addressable by their disambiguated
form (e.g. `firebase.deploy`). Disabled commands are removed from
`CommandService`'s output, so they disappear from autocomplete and
produce the standard unknown-command path in both interactive TUI and
non-interactive (`--prompt`) modes.
The scope of this change is slash commands only: it does not affect
tool permissions (still `permissions.deny`) or keyboard shortcuts.
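The three-source union and case-insensitive matching can be sketched as follows. The setting key, flag, and env var names are from the commit; the helper functions are hypothetical:

```typescript
// Union the denylist from its three sources into a lowercase set.
function buildDisabledSet(
  fromSettings: string[],     // slashCommands.disabled (UNION-merged across scopes)
  fromFlag: string[],         // --disabled-slash-commands (comma-separated or repeated)
  envVar: string | undefined, // QWEN_DISABLED_SLASH_COMMANDS
): Set<string> {
  const fromEnv = envVar ? envVar.split(',') : [];
  return new Set(
    [...fromSettings, ...fromFlag, ...fromEnv]
      .map((name) => name.trim().toLowerCase())
      .filter((name) => name.length > 0),
  );
}

// Match against the final, post-rename command name, so extension
// commands are addressable in their disambiguated form (firebase.deploy).
function isDisabled(disabled: Set<string>, finalName: string): boolean {
  return disabled.has(finalName.toLowerCase());
}
```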
* chore(cli): regenerate settings.schema.json for slashCommands.disabled
Regenerates the companion JSON schema consumed by the VS Code extension
after adding the `slashCommands.disabled` entry to the TS schema in the
previous commit. Required by the "Check settings schema is up-to-date"
CI lint step.
* fix(cli): route disabled slash commands to unsupported, not no_command
handleSlashCommand was passing the disabled denylist straight into
CommandService.create, so disabled commands disappeared from
`allCommands` too. The fallback existence check that distinguishes
"known but not allowed in non-interactive mode" from "truly unknown"
then failed, and disabled commands like `/help` fell through to
`no_command` — causing the caller to forward them to the model as
plain prompt text.
Keep `allCommands` unfiltered and apply the denylist only when
constructing the executable set and when producing the unsupported
response. A disabled command now returns `unsupported` with a
"disabled by the current configuration" reason and never reaches the
model. Added three regression tests covering the primary case,
case-insensitive match, and the preserved no_command path for
genuinely unknown input.
60a6dfc14c
feat(cli): add session recap with /recap and auto-show on return (#3434)
* feat(cli): add session recap with /recap and auto-show on return
Users often open an old session days later and need to scroll through
pages to remember where they left off. This change adds a short
"where did I leave off" recap — a 1-3 sentence summary generated by
the fast model — so they can resume without re-reading the history.
Two triggers:
- /recap: manual slash command.
- Auto: when the terminal has been blurred for 5+ minutes and gets
focused again (uses the existing DECSET 1004 focus protocol via
useFocus). Gated on streamingState === Idle so it never interrupts
an active turn. Only fires once per blur cycle.
The recap is rendered in dim color with a chevron prefix, visually
distinct from assistant replies. A new `general.showSessionRecap`
setting controls the auto-trigger (default on). /recap works
independently of the setting.
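The auto-trigger condition above reduces to a small predicate. A hypothetical sketch (the real logic lives in the useAwaySummary hook; the 5-minute threshold and idle gating are from the commit):

```typescript
const AWAY_THRESHOLD_MS = 5 * 60 * 1000; // 5 minutes

// Decide whether regaining focus should fire an auto-recap.
function shouldAutoRecap(
  blurredAt: number | null,   // timestamp when the terminal lost focus, or null
  now: number,
  isIdle: boolean,            // streamingState === Idle
  showSessionRecap: boolean,  // general.showSessionRecap setting
): boolean {
  if (!showSessionRecap || blurredAt === null) return false;
  if (!isIdle) return false; // never interrupt an active turn
  return now - blurredAt >= AWAY_THRESHOLD_MS;
}
```

Firing once per blur cycle would then be handled by clearing `blurredAt` after a recap is kicked off.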
Implementation notes:
- generateSessionRecap uses fastModel (falls back to main model),
tools: [], maxOutputTokens: 300, and a tight system prompt. It
strips tool calls / responses from history before sending — tool
responses can hold 10K+ tokens of file content that drown the recap
in irrelevant detail. The 30-message window respects turn boundaries
(slice never starts on a dangling model/tool response).
- Output is wrapped in <recap>...</recap> tags; the extractor returns
empty (skips render) if the tag is missing, preventing model
reasoning from leaking into the UI.
- All failures are silent (return null) and logged via a scoped
debugLogger; recap is best-effort and must never break main flow.
- /recap refuses to run while a turn is pending.
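The tag-gated extractor described in the notes can be sketched in a few lines. The `<recap>...</recap>` convention is from the commit; the function name is hypothetical:

```typescript
// Return the tagged recap content, or '' (skip render) when the tag is
// missing, so model reasoning never leaks into the UI.
function extractRecap(raw: string): string {
  const match = raw.match(/<recap>([\s\S]*?)<\/recap>/);
  return match ? match[1].trim() : '';
}
```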
* fix(cli): abort in-flight recap when showSessionRecap is disabled
If the user disables showSessionRecap while an auto-recap LLM call is
already in flight, the previous code returned early without aborting.
The pending .then would still pass its idle/abort guards and append the
recap, producing an unwanted message after the user has opted out.
Abort the controller and clear it eagerly so the resolved promise no
longer adds to history.
* fix(cli): gate /recap and auto-recap on streaming idle state
Two related issues from review:
1. /recap was only refusing when ui.pendingItem was set, but a normal
model reply runs with streamingState === Responding and a null
pendingItem. Invoking /recap mid-stream would generate a recap from
a partial conversation and insert it between the user prompt and
the assistant reply.
2. useAwaySummary cleared blurredAtRef before checking isIdle, so if
focus returned during a still-streaming turn (after a >5min blur)
the recap was permanently dropped — there was no later retry when
the turn became idle, because isIdle was not in the effect deps.
Fixes:
- Expose isIdleRef on CommandContext.ui (mirrors btwAbortControllerRef
pattern). Plumb it from AppContainer through useSlashCommandProcessor.
- recapCommand now refuses when isIdleRef.current is false OR
pendingItem is non-null.
- useAwaySummary preserves blurredAtRef on the !isIdle bail and adds
isIdle to the effect deps, so the trigger re-evaluates when the
current turn finishes.
- Brief blurs (< AWAY_THRESHOLD_MS) still reset blurredAtRef.
Also seeds isIdleRef in nonInteractiveUi and mockCommandContext so the
new field has a sensible default outside the interactive UI.
* docs: document /recap command, showSessionRecap setting, and design
- User docs: add /recap to the Session and Project Management table in
features/commands.md and a dedicated subsection covering manual use,
the auto-trigger, the dim-color rendering, and the fast-model tip.
- User docs: add general.showSessionRecap row to the configuration
settings reference.
- Design doc: docs/design/session-recap/session-recap-design.md covers
motivation, the two trigger paths, the per-file architecture, prompt
design with the <recap> tag and three-tier extractor, history
filtering rationale (functionResponse can be 10K+ tokens), the
useAwaySummary state machine, the isIdleRef gating for /recap, model
selection, observability, and out-of-scope items.
* fix(core): exclude thought parts from session recap context
filterToDialog kept any non-empty text part, but @google/genai's Part
type also marks model reasoning with part.thought / part.thoughtSignature.
That hidden chain-of-thought was being fed to the recap LLM and could
get summarized as if it were user-visible dialogue.
Drop parts where either flag is set. Update the design doc's history
filtering section to call this out alongside the existing
tool-call/response rationale.
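The part filter described here can be sketched as a predicate. The `thought` / `thoughtSignature` fields follow the @google/genai Part shape named in the commit; the filter function itself is hypothetical:

```typescript
// Minimal Part shape for illustration.
interface PartLike {
  text?: string;
  thought?: boolean;
  thoughtSignature?: string;
}

// Keep only user-visible dialogue text; drop hidden model reasoning.
function keepForRecap(part: PartLike): boolean {
  if (part.thought || part.thoughtSignature) return false; // chain-of-thought
  return typeof part.text === 'string' && part.text.length > 0;
}
```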
* docs(session-recap): correct debug-logging guidance, fill in state machine, sharpen UX wording
Audit of the session recap docs against the implementation found three
issues worth fixing:
- Design doc claimed debug logs were enabled via a QWEN_CODE_DEBUG_LOGGING
env var. That var does not exist; debug logs are written to
~/.qwen/debug/<sessionId>.txt by default, gated by QWEN_DEBUG_LOG_FILE.
Replace with the accurate path + opt-out behavior, and tell the reader
to grep for the [SESSION_RECAP] tag.
- Design doc's useAwaySummary state machine table was missing the
isFocused && blurredAtRef === null path (taken on first render and
right after a brief-blur reset). Add the row.
- User doc's "Refuses to run ... failures are silent" line conflated the
inline-error refusal with silent generation failures, and "(when the
conversation is idle)" used internal jargon. Split the two cases and
spell out what "idle" means, including the wait-then-fire behavior
when focus returns mid-turn.
* docs(session-recap): correctly describe /recap vs auto-trigger failure modes
The previous wording said "Generation/network failures are silent — the
recap simply does not appear", but recapCommand returns a user-facing
info message ("Not enough conversation context for a recap yet.") in
exactly that path, and also returns inline messages for the
config-not-loaded and busy-turn guards.
Only the auto-trigger path is truly silent (it just skips addItem when
generateSessionRecap returns null). Split the two paths in the doc so
the manual command's "always responds with something" behavior is
distinguished from the auto-trigger's no-op-on-failure behavior.
* docs(session-recap): align prompt-rules section with the actual prompt
Two doc-vs-code mismatches in the design doc's "System Prompt" section,
caught with the same lens as yiliang114's failure-mode review:
- The bullet list claimed RECAP_SYSTEM_PROMPT forbids "speculating about
user intent" and "addressing the user as 'you'". Those rules existed in
an early draft but were dropped when the <recap> tag rules were added;
the current prompt has no such restrictions. Replace with the actual
rules and add a "1:1 with RECAP_SYSTEM_PROMPT" marker so future edits
stay in sync.
- The doc said systemInstruction "overrides" the main agent prompt. True
for the agent prompt portion, but GeminiClient.generateContent
internally calls getCustomSystemPrompt, which appends user memory
(QWEN.md / auto memory) as a suffix. Spell that out: the final
system prompt is recap prompt + user memory, which is actually
useful project context for the recap.
* docs(session-recap): translate design doc to English
The repo convention for docs/design is English (7 of 8 existing files;
auto-memory/memory-system.md is the only Chinese one). The first version
of this design doc followed the auto-memory example, which turned out
to be the wrong sample.
Translate to English while preserving the existing structure, the
state-machine table, the prompt-vs-doc 1:1 alignment, the
QWEN_DEBUG_LOG_FILE description, and the failure-mode notes added in
prior commits.
* fix(cli): drop empty info return from /recap interactive success path
The interactive success path inserts the away_recap history item
directly via ui.addItem and then returned `{type: 'message',
messageType: 'info', content: ''}`. The slash-command processor's
'message' case unconditionally calls addMessage, which adds another
HistoryItemInfo with empty text. The empty info renders as nothing
(StatusMessage early-returns null), but it still bloats the in-memory
history list and shows up in /export and saved sessions.
Return void on the interactive success path and on the abort path so
the processor's `if (result)` check skips the message-handler branch
entirely. Widen the action's return type to `void | SlashCommandActionReturn`
to match (same shape as btwCommand).
|
||
|
|
89167618d8
|
docs: update authentication methods to reflect OAuth discontinuation (#3325)
* docs: update authentication methods to reflect OAuth discontinuation
Remove deprecated Qwen OAuth references and update documentation to
direct users to valid authentication methods (API Key, Coding Plan, or
Local Inference) following the OAuth free tier discontinuation on
2026-04-15.
Closes #3316
* docs: fix quickstart auth description to match actual /auth UI
The /auth command shows three options: Alibaba Cloud Coding Plan, API
Key, and Qwen OAuth (discontinued). Updated quickstart.md to accurately
reflect this UI instead of splitting into Option A/B/C. Also updated
settings.md, commands.md, and troubleshooting.md with minor
OAuth-related cleanups.
* docs: update .qwen workspace description in quickstart
Remove reference to 'Qwen account' since OAuth is discontinued. The
.qwen directory is created by Qwen Code itself for storing credentials,
configuration, and session data.
* docs: fix warning block formatting in quickstart
- Add missing '>' continuation for the OAuth discontinuation warning block
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
* docs: update README Qwen3.6-Plus description
- Remove mention of running Qwen3.6-Plus locally via Ollama/vLLM
- Keep only the Alibaba Cloud ModelStudio API key option
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
* docs: address review feedback - remove Local Inference from auth, add dual-region links
- Local Inference removed from auth method lists, kept as separate
'Local Model Setup' section with detailed Ollama/vLLM config examples
- All links now provide dual-region URLs (Beijing + intl)
- .qwen workspace note restored to original meaning (cost tracking)
- Device auth flow error kept scoped to legacy OAuth
- API setup guide links updated with confirmed intl URL
---------
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
|
||
|
|
9e2f63a1ca
|
feat(memory): managed auto-memory and auto-dream system (#3087)
* docs: add auto-memory implementation log
* feat(core): add managed auto-memory storage scaffold
* feat(core): load managed auto-memory index
* feat(core): add managed auto-memory recall
* feat(core): add managed auto-memory extraction
* feat(cli): add managed auto-memory dream commands
* feat(core): add auxiliary side-query foundation
* feat(memory): add model-driven recall selection
* feat(memory): add model-driven extraction planner
* feat(core): add background task runtime foundation
* feat(memory): schedule auto dream in background
* feat(core): add background agent runner foundation
* feat(memory): add extraction agent planner
* feat(core): add dream agent planner
* feat(core): rebuild managed memory index
* feat(memory): add governance status commands
* feat(memory): add managed forget flow
* feat(core): harden background agent planning
* feat(memory): complete managed parity closure
* test(memory): add managed lifecycle integration coverage
* feat: same to cc
* feat(memory-ui): add memory saved notification and memory count badge
Feature 3 - Memory Saved Notification:
- Add HistoryItemMemorySaved type to types.ts
- Create MemorySavedMessage component for rendering '● Saved/Updated N memories'
- In useGeminiStream: detect in-turn memory writes via mapToDisplay's
memoryWriteCount field and emit 'memory_saved' history item after turn
- In client.ts: capture background dream/extract promises and expose
via consumePendingMemoryTaskPromises(); useGeminiStream listens
post-turn and emits 'Updated N memories' notification for background tasks
Feature 4 - Memory Count Badge:
- Add isMemoryOp field to IndividualToolCallDisplay
- Add memoryWriteCount/memoryReadCount to HistoryItemToolGroup
- Add detectMemoryOp() in useReactToolScheduler using isAutoMemPath
- ToolGroupMessage renders '● Recalled N memories, Wrote N memories' badge
at the top of tool groups that touch memory files
Fix: process.env bracket-access in paths.ts (noPropertyAccessFromIndexSignature)
Fix: MemoryDialog.test.tsx mock useSettings to satisfy SettingsProvider requirement
* fix(memory-ui): auto-approve memory writes, collapse memory tool groups, fix MEMORY.md path
Problem 1 - Auto-approve memory file operations:
- write-file.ts: getDefaultPermission() checks isAutoMemPath; returns 'allow'
for managed auto-memory files, 'ask' for all other files
- edit.ts: same pattern
Problem 2 - Feature 4 UX: collapse memory-only tool groups:
- ToolGroupMessage: detect when all tool calls have isMemoryOp set (pure memory
group) and all are complete; render compact '● Recalled/Wrote N memories
(ctrl+o to expand)' instead of individual tool call rows
- ctrl+o toggles expand/collapse when isFocused and group is memory-only
- Mixed groups (memory + other tools) keep badge-at-top behaviour
- Expanded state shows individual tool calls with '● Memory operations
(ctrl+o to collapse)' header
Problem 3 - MEMORY.md path mismatch:
- prompt.ts: Step 2 now references full absolute path ${memoryDir}/MEMORY.md
so the model writes to the correct location inside the memory directory,
not to the parent project directory
Fix tests:
- write-file.test.ts: add getProjectRoot to mockConfigInternal
- prompt.test.ts: update assertion to match full-path section header
* fix(memory-ui): fix duplicate notification, broken ctrl+o, and Edit tool detection
- Remove duplicate 'Saved N memories' notification: the tool group badge already
shows 'Wrote N memories'; the separate HistoryItemMemorySaved addItem after
onComplete was double-counting. Keep only the background-task path
(consumePendingMemoryTaskPromises).
- Remove ctrl+o expand: Ink's Static area freezes items on first render and
cannot respond to user input. useInput/useState(isExpanded) in a Static item
is a no-op. Removed the dead code; memory-only groups now always render as
the compact summary (no fake interactive hint).
- Fix Edit tool detection: detectMemoryOp was checking for 'edit_file' but the
real tool name constant is 'edit'. Also removed non-existent 'create_file'
(write_file covers all writes). Now editing MEMORY.md is correctly identified
as a memory write op, collapses to 'Wrote N memories', and is auto-approved.
* fix(dream): run /dream as a visible submit_prompt turn, not a silent background agent
The previous implementation ran an AgentHeadless background agent that
could take 5+ minutes with zero UI feedback — the user saw a blank
screen for the entire duration and then at most one line of text.
Fix: /dream now returns submit_prompt with the consolidation task prompt so it
runs as a regular AI conversation turn. Tool calls (read_file, write_file, edit,
grep_search, list_directory, glob) are immediately visible as collapsed tool
groups as the model works through the memory files — identical UX to Claude Code.
Also export buildConsolidationTaskPrompt from dreamAgentPlanner so dreamCommand
can reuse the same detailed consolidation prompt that was already written.
* fix(memory): auto-allow ls/glob/grep on memory base directory
Add getMemoryBaseDir() to getDefaultPermission() allow list in ls.ts,
glob.ts, and grep.ts — mirrors the existing pattern in read-file.ts.
Without this, ListFiles/Glob/Grep on ~/.qwen/* would trigger an
approval dialog, blocking /dream at its very first step.
* fix(background): prevent permission prompt hangs in background agents
Match Claude Code's headless-agent intent: background memory agents must never
block on interactive permission prompts.
Wrap background runtime config so getApprovalMode() returns YOLO, ensuring any
ask decision is auto-approved instead of hanging forever. Add regression test
covering the wrapped approval mode.
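The config wrapping described above can be sketched like this. The shapes are assumptions for illustration (the real Config surface is much larger); only the idea of forcing getApprovalMode() to YOLO comes from the commit.

```typescript
// Assumed minimal shapes, not the real qwen-code Config interface.
type ApprovalMode = 'default' | 'autoEdit' | 'yolo';

interface RuntimeConfig {
  getApprovalMode(): ApprovalMode;
}

function wrapForBackground(config: RuntimeConfig): RuntimeConfig {
  return {
    ...config,
    // Headless background agents must never block on an interactive
    // permission prompt: force YOLO so every 'ask' decision is
    // auto-approved instead of hanging forever.
    getApprovalMode: () => 'yolo',
  };
}
```

The wrapper leaves the original config untouched, so foreground turns keep their interactive approval behavior.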
* fix(memory): run auto extract through forked agent
Make managed auto-memory extraction follow the Claude Code architecture:
background extraction now uses a forked agent to read/write memory files
directly, instead of planning patches and applying them with a separate
filesystem pipeline.
Keep the old patch/model path only as fallback if the forked agent fails.
Add regression tests covering the new execution path and tool whitelist.
* refactor(memory): remove legacy extract fallback pipeline
Delete the old patch/model/heuristic extraction path entirely.
Managed auto-memory extract now runs only through the forked-agent
execution flow, with no planner/apply fallback stages remaining.
Also remove obsolete exports/tests and update scheduler/integration
coverage to use the forked-agent-only architecture.
* refactor(memory): move auxiliary files out of memory/ directory
meta.json, extract-cursor.json, and consolidation.lock are internal
bookkeeping files, not user-visible memories. Move them one level up
to the project state dir (parent of memory/) so that the memory/
directory contains only MEMORY.md and topic files, matching the
clean layout of the upstream reference implementation.
Add getAutoMemoryProjectStateDir() helper in paths.ts and update the
three path accessors + store.test.ts path assertions accordingly.
* fix(memory): record lastDreamAt after manual /dream run
The /dream command submits a prompt to the main agent (submit_prompt),
which writes memory files directly. Because it bypasses dreamScheduler,
meta.json was never updated and /memory always showed 'never'.
Fix by:
- Exporting writeDreamManualRunToMetadata() from dream.ts
- Adding optional onComplete callback to SubmitPromptActionReturn and
SubmitPromptResult (types.ts / commands/types.ts)
- Propagating onComplete through slashCommandProcessor.ts
- Firing onComplete after turn completion in useGeminiStream.ts
- Providing the callback in dreamCommand.ts to write lastDreamAt
* fix(memory): remove scope params from /remember in managed auto-memory mode
--global/--project are legacy save_memory tool concepts. In managed
auto-memory mode the forked agent decides the appropriate type
(user/feedback/project/reference) based on the content of the fact.
Also improve the prompt wording to explicitly ask the agent to choose
the correct type, reducing the tendency to default to 'project'.
* feat(ui): show '✦ dreaming' indicator in footer during background dream
Subscribe to getManagedAutoMemoryDreamTaskRegistry() in Footer via a
useDreamRunning() hook. While any dream task for the current project is
pending or running, display '✦ dreaming' in the right section of the
footer bar, between Debug Mode and context usage.
* refactor(memory): align dream/extract infrastructure with Claude Code patterns
Five improvements based on Claude Code parity audit:
1. Memoize getAutoMemoryRoot (paths.ts)
- Add _autoMemoryRootCache Map, keyed by projectRoot
- findCanonicalGitRoot() walks the filesystem per call; memoize avoids
repeated git-tree traversal on hot-path schedulers/scanners
- Expose clearAutoMemoryRootCache() for test teardown
2. Lock file stores PID + isProcessRunning reclaim (dreamScheduler.ts)
- acquireDreamLock() writes process.pid to the lock file body
- lockExists() reads PID and calls process.kill(pid, 0); dead/missing
PID reclaims the lock immediately instead of waiting 2h
- Stale threshold reduced to 1h (PID-reuse guard, same as CC)
3. Session scan throttle (dreamScheduler.ts)
- Add SESSION_SCAN_INTERVAL_MS = 10min (same as CC)
- Add lastSessionScanAt Map<projectRoot, number> to ManagedAutoMemoryDreamRuntime
- When time-gate passes but session-gate doesn't, throttle prevents
re-scanning the filesystem on every user turn
4. mtime-based session counting (dreamScheduler.ts)
- Replace fragile recentSessionIdsSinceDream Set in meta.json with
filesystem mtime scan (listSessionsTouchedSince)
- Mirrors Claude Code's listSessionsTouchedSince: reads session JSONL
files from Storage.getProjectDir()/chats/, filters by mtime > lastDreamAt
- Immune to meta.json corruption/loss; no per-turn metadata write
- ManagedAutoMemoryDreamRuntime accepts injectable SessionScannerFn
for clean unit testing without real session files
5. Extraction mutual exclusion extended to write_file/edit (extractScheduler.ts)
- historySliceUsesMemoryTool() now checks write_file/edit/replace/create_file
tool calls whose file_path is within isAutoMemPath()
- Previously only detected save_memory; missed direct file writes by
the main agent, causing redundant background extraction
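The PID-liveness reclaim in item 2 can be sketched as follows. Helper names are illustrative, and the sketch omits the 1h stale threshold the real scheduler keeps as a PID-reuse guard.

```typescript
// Signal 0 performs existence/permission checks without sending a
// signal; a throw means the PID is dead or inaccessible.
function isProcessRunning(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

// Lock body is the holder's PID, as written by acquireDreamLock().
function lockIsReclaimable(lockBody: string): boolean {
  const pid = Number.parseInt(lockBody.trim(), 10);
  if (Number.isNaN(pid)) return true; // unreadable lock: reclaim
  return !isProcessRunning(pid); // dead PID: reclaim immediately
}
```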
* docs(memory): add user-facing memory docs, i18n for all locales, simplify /forget
- Add docs/users/features/memory.md: comprehensive user-facing guide covering
QWEN.md instructions, auto-memory behaviour, all memory commands, and
troubleshooting; replaces the placeholder auto-memory.md
- Update docs/users/features/_meta.ts: rename entry auto-memory → memory
- Update docs/users/features/commands.md: add /init, /remember, /forget,
/dream rows; fix /memory description; remove /init duplicate
- Update docs/users/configuration/settings.md: add memory.* settings section
(enableManagedAutoMemory, enableManagedAutoDream) between tools and permissions
- Remove /forget --apply flag: preview-then-apply flow replaced with direct
deletion; update forgetCommand.ts, en.js, zh.js accordingly
- Add all auto-memory i18n keys to de, ja, pt, ru locales (18 keys each):
Open auto-memory folder, Auto-memory/Auto-dream status lines, never/on/off,
✦ dreaming, /forget and /remember usage strings, all managed-memory messages
- Remove dead save_memory branch from extractScheduler.partWritesToMemory()
- Add ✦ dreaming indicator to Footer.tsx with i18n; fix Footer.test.tsx mocks
- Refactor MemoryDialog.tsx auto-dream status line to use i18n
- Remove save_memory tool (memoryTool.ts/test); clean up webui references
- Add extractionPlanner.ts, const.ts and associated tests
- Delete stale docs/users/configuration/memory.md and
docs/developers/tools/memory.md (content superseded)
* refactor(memory): remove all Claude Code references from comments and test names
* test(memory): remove empty placeholder test files that cause vitest to fail
* fix eslint
* fix test in windows
* fix test
* fix(memory): address critical review findings from PR #3087
- fix(read-file): narrow auto-allow from getMemoryBaseDir() (~/.qwen) to
isAutoMemPath(projectRoot) to prevent exposing settings.json / OAuth
credentials without user approval (wenshao review)
- fix(forget): per-entry deletion instead of whole-file unlink
- assign stable per-entry IDs (relativePath:index for multi-entry files)
so the model can target individual entries without removing siblings
- rewrite file keeping unmatched entries; only unlink when file becomes
empty (wenshao review)
- fix(entries): round-trip correctness for multi-entry new-format bodies
- parseAutoMemoryEntries: plain-text line closes current entry and opens
a new one (was silently ignored when current was already set)
- renderAutoMemoryBody: emit blank line between adjacent entries so the
parser can detect entry boundaries on re-read (wenshao review)
- fix(entries): resolve two CodeQL polynomial-regex alerts
- indentedMatch: \s{2,}(?:[-*]\s+)? → [\t ]{2,}(?:[-*][\t ]+)?
- topLevelMatch: :\s*(.+)$ → :[ \t]*(\S.*)$
(github-advanced-security review)
- fix(scan.test): use forward-slash literal for relativePath expectation
since listMarkdownFiles() normalises all separators to '/' on all
platforms including Windows
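The two replacement regex fragments can be exercised in isolation. Only the quoted character classes come from the commit; the anchors and capture groups around them are assumptions for the sake of a runnable example.

```typescript
// Linear-time replacements for the CodeQL polynomial-regex alerts
// (surrounding anchors are illustrative, not the shipped patterns).
const topLevelMatch = /^(.+?):[ \t]*(\S.*)$/;
const indentedMatch = /^[\t ]{2,}(?:[-*][\t ]+)?(.*)$/;

function parseLine(line: string): { key?: string; value: string } | null {
  const top = topLevelMatch.exec(line);
  if (top) return { key: top[1], value: top[2] };
  const indented = indentedMatch.exec(line);
  if (indented) return { value: indented[1] };
  return null;
}
```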
* fix(memory): replace isAutoMemPath startsWith with path.relative()
Using path.relative() instead of string startsWith() is more robust
across platforms — it correctly handles Windows path-separator
differences and avoids potential edge cases where a path prefix match
could succeed on non-separator boundaries.
Addresses github-actions review item 3 (PR #3087).
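The path.relative() containment check described above can be sketched like this (the function name is illustrative, not the real isAutoMemPath signature). A plain startsWith() would wrongly accept a sibling directory that shares the prefix.

```typescript
import * as path from 'node:path';

// Containment via path.relative: inside iff the relative path is
// non-empty, does not climb out with '..', and is not absolute.
function isAutoMemPathSketch(autoMemDir: string, candidate: string): boolean {
  const rel = path.relative(autoMemDir, path.resolve(candidate));
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}
```

Note how `/r/.qwen/memory-evil/x.md` passes a startsWith('/r/.qwen/memory') check but is correctly rejected here.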
* feat(telemetry): add auto-memory telemetry instrumentation
Add OpenTelemetry logs + metrics for the five auto-memory lifecycle
events: extract, dream, recall, forget, and remember.
Telemetry layer (packages/core/src/telemetry/):
- constants.ts: 5 new event-name constants
(qwen-code.memory.{extract,dream,recall,forget,remember})
- types.ts: 5 new event classes with typed constructor params
(MemoryExtractEvent, MemoryDreamEvent, MemoryRecallEvent,
MemoryForgetEvent, MemoryRememberEvent)
- metrics.ts: 8 new OTel instruments (5 Counters + 3 Histograms)
with recordMemoryXxx() helpers; registered inside initializeMetrics()
- loggers.ts: logMemoryExtract/Dream/Recall/Forget/Remember() — each
emits a structured log record and calls its recordXxx() counterpart
- index.ts: re-exports all new symbols
Instrumentation call-sites:
- extractScheduler.ts ManagedAutoMemoryExtractRuntime.runTask():
emits extract event with trigger=auto, completed/failed status,
patches_count, touched_topics, and wall-clock duration
- dream.ts runManagedAutoMemoryDream():
emits dream event with trigger=auto, updated/noop status,
deduped_entries, touched_topics, and duration; covers both
agent-planner and mechanical fallback paths
- recall.ts resolveRelevantAutoMemoryPromptForQuery():
emits recall event with strategy, docs_scanned/selected, and
duration; covers model, heuristic, and none paths
- forget.ts forgetManagedAutoMemoryEntries():
emits forget event with removed_entries_count, touched_topics,
and selection_strategy (model/heuristic/none)
- rememberCommand.ts action():
emits remember event with topic=managed|legacy at command
invocation time (before agent decides the actual memory type)
* refactor(telemetry): remove memory forget/remember telemetry events
Remove EVENT_MEMORY_FORGET and EVENT_MEMORY_REMEMBER along with all
associated infrastructure that is no longer needed:
- constants.ts: remove EVENT_MEMORY_FORGET, EVENT_MEMORY_REMEMBER
- types.ts: remove MemoryForgetEvent, MemoryRememberEvent classes
- metrics.ts: remove MEMORY_FORGET_COUNT, MEMORY_REMEMBER_COUNT constants,
memoryForgetCounter, memoryRememberCounter module vars,
their initialization in initializeMetrics(), and
recordMemoryForgetMetrics(), recordMemoryRememberMetrics() functions
- loggers.ts: remove logMemoryForget(), logMemoryRemember() functions
and their imports
- index.ts: remove all re-exports for the above symbols
- memory/forget.ts: remove logMemoryForget call-site and import
- cli/rememberCommand.ts: remove logMemoryRemember call-sites and import
* change default value
* fix forked agent
* refactor(background): unify fork primitives into runForkedAgent + cleanup
- Merge runForkedQuery into runForkedAgent via TypeScript overloads:
with cacheSafeParams → GeminiChat single-turn path (ForkedQueryResult)
without cacheSafeParams → AgentHeadless multi-turn path (ForkedAgentResult)
- Delete forkedQuery.ts; move its test to background/forkedAgent.cache.test.ts
- Remove forkedQuery export from followup/index.ts
- Migrate all callers (suggestionGenerator, speculation, btwCommand, client)
to import from background/forkedAgent
- Add getFastModel() / setFastModel() to Config; expose in CLI config init
and ModelDialog / modelCommand
- Remove resolveFastModel() from AppContainer — now delegated to config.getFastModel()
- Strip Claude Code references from code comments
* fix(memory): address wenshao's critical review findings
- dream.ts: writeDreamManualRunToMetadata now persists lastDreamSessionId
and resets recentSessionIdsSinceDream, preventing auto-dream from firing
again in the same session after a manual /dream
- config.ts: gate managed auto-memory injection on getManagedAutoMemoryEnabled();
when disabled, previously saved memories are no longer injected into new sessions
- rememberCommand.ts: remove legacy save_memory branch (tool was removed);
fall back to submit_prompt directing agent to write to QWEN.md instead
- BuiltinCommandLoader.ts: only register /dream and /forget when managed
auto-memory is enabled, matching the feature's runtime availability
- forget.ts: return early in forgetManagedAutoMemoryMatches when matches is
empty, avoiding unnecessary directory scaffolding as a side effect
* fix test
* fix ci test
* feat(memory): align extract/dream agents to Claude Code patterns
- fix(client): move saveCacheSafeParams before early-return paths so
extract agents always have cache params available (fixes extract never
triggering in skipNextSpeakerCheck mode)
- feat(extract): add read-only shell tool + memory-scoped write
permissions; create inline createMemoryScopedAgentConfig() with
PermissionManager wrapper (isToolEnabled + evaluate) that allows only
read-only shell commands and write/edit within the auto-memory dir
- feat(extract): align prompt to Claude Code patterns — manifest block
listing existing files, parallel read-then-write strategy, two-step
save (memory file then index)
- feat(dream): remove mechanical fallback; runManagedAutoMemoryDream is
now agent-only and throws without config
- feat(dream): align prompt to Claude Code 4-phase structure
(Orient/Gather/Consolidate/Prune+Index); add narrow transcript grep,
relative→absolute date conversion, stale index pruning, index size cap
- fix(permissions): add isToolEnabled() to MemoryScopedPermissionManager
to prevent TypeError crash in CoreToolScheduler._schedule
- test: update dreamScheduler tests to mock dream.js; replace removed
mechanical-dedup test with scheduler infrastructure verification
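The memory-scoped permission wrapper above can be sketched as follows. The interface is an assumption (the real PermissionManager surface may differ), and the containment check is simplified to a prefix match for brevity.

```typescript
interface PermissionDecision {
  outcome: 'allow' | 'deny';
}

class MemoryScopedPermissions {
  constructor(
    private readonly memoryDir: string,
    private readonly readOnlyShell = new Set(['cat', 'ls', 'grep', 'head']),
  ) {}

  // Without this method CoreToolScheduler._schedule would crash with a
  // TypeError, per the fix described above.
  isToolEnabled(_toolName: string): boolean {
    return true;
  }

  evaluate(
    toolName: string,
    args: { command?: string; file_path?: string },
  ): PermissionDecision {
    if (toolName === 'shell') {
      // Only read-only shell commands are permitted.
      const bin = (args.command ?? '').trim().split(/\s+/)[0];
      return { outcome: this.readOnlyShell.has(bin) ? 'allow' : 'deny' };
    }
    if (toolName === 'write_file' || toolName === 'edit') {
      // Writes are confined to the auto-memory directory
      // (containment simplified for the sketch).
      const ok = (args.file_path ?? '').startsWith(this.memoryDir + '/');
      return { outcome: ok ? 'allow' : 'deny' };
    }
    return { outcome: 'allow' };
  }
}
```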
* move doc to design
* refactor(memory): unify extract+dream background task management into MemoryBackgroundTaskHub
- Add memoryTaskHub.ts: single BackgroundTaskRegistry + BackgroundTaskDrainer shared
by all memory background tasks; exposes listExtractTasks() / listDreamTasks()
typed query helpers and a unified drain() method
- extractScheduler: ManagedAutoMemoryExtractRuntime accepts hub via constructor
(defaults to defaultMemoryTaskHub); test factory gets isolated fresh hub
- dreamScheduler: same pattern — sessionScanner + hub injection; BackgroundTask-
Scheduler initialized from injected hub; test factory gets isolated hub
- status.ts: replace two separate getRegistry() calls with defaultMemoryTaskHub
typed query methods
- Footer.tsx (useDreamRunning): subscribe to shared registry, filter by
DREAM_TASK_TYPE so extract tasks do not trigger the dream spinner
- index.ts: re-export memoryTaskHub.ts so defaultMemoryTaskHub/DREAM_TASK_TYPE/
EXTRACT_TASK_TYPE are available as top-level package exports
* refactor(background): introduce general-purpose BackgroundTaskHub
Replace memory-specific MemoryBackgroundTaskHub with a domain-agnostic
BackgroundTaskHub in the background/ layer. Any future background task
runtime (3rd, 4th, …) plugs in by accepting a hub via constructor
injection — no new infrastructure required.
Changes:
- Add background/taskHub.ts: BackgroundTaskHub (registry + drainer +
createScheduler() + listByType(taskType, projectRoot?)) and the
globalBackgroundTaskHub singleton. Zero knowledge of any task type.
- Delete memory/memoryTaskHub.ts: its narrow listExtractTasks /
listDreamTasks helpers are replaced by the generic listByType() call.
- Move EXTRACT_TASK_TYPE to extractScheduler.ts (owned by the runtime
that defines it); replace 3 hardcoded string literals with the const.
- Move DREAM_TASK_TYPE to dreamScheduler.ts; use hub.createScheduler()
instead of manually wiring new BackgroundTaskScheduler(reg, drain).
- status.ts: globalBackgroundTaskHub.listByType(EXTRACT_TASK_TYPE, ...)
- Footer.tsx: globalBackgroundTaskHub.registry (shared, filtered by type)
- index.ts: export background/taskHub.js; drop memory/memoryTaskHub.js
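The domain-agnostic hub described above can be sketched in a few lines. Shapes are assumed for illustration; only the listByType(taskType, projectRoot?) contract comes from the commit.

```typescript
interface BackgroundTask {
  taskType: string;
  projectRoot: string;
  status: 'pending' | 'running' | 'done';
}

class BackgroundTaskHubSketch {
  private readonly tasks: BackgroundTask[] = [];

  register(task: BackgroundTask): void {
    this.tasks.push(task);
  }

  // The generic query that replaced listExtractTasks/listDreamTasks:
  // the hub has zero knowledge of any specific task type.
  listByType(taskType: string, projectRoot?: string): BackgroundTask[] {
    return this.tasks.filter(
      (t) =>
        t.taskType === taskType &&
        (projectRoot === undefined || t.projectRoot === projectRoot),
    );
  }
}
```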
* test(background): add BackgroundTaskHub unit tests and hub isolation checks
- background/taskHub.test.ts (11 tests):
- createScheduler(): tasks registered via scheduler appear in hub registry;
multiple calls return distinct scheduler instances
- listByType(): filters by taskType, filters by projectRoot, returns []
for unknown types, two types co-exist in registry but stay separated
- drain(): resolves false on timeout, resolves true when tasks complete,
resolves true immediately when no tasks in flight
- isolation: tasks in hubA do not appear in hubB
- globalBackgroundTaskHub: is a BackgroundTaskHub instance with registry/drainer
- extractScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry EXTRACT_TASK_TYPE
- dreamScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry DREAM_TASK_TYPE
* refactor(memory): consolidate all memory state into MemoryManager
Replace BackgroundTaskRegistry/Drainer/Scheduler/Hub helper classes and
module-level globals with a single MemoryManager class owned by Config.
## Changes
### New
- packages/core/src/memory/manager.ts — MemoryManager with:
- scheduleExtract / scheduleDream (inline queuing + deduplication logic)
- recall / forget / selectForgetCandidates / forgetMatches
- getStatus / drain / appendToUserMemory
- subscribe(listener) compatible with useSyncExternalStore
- storeWith() atomic record registration (no double-notify)
- Distinct skippedReason 'scan_throttled' vs 'min_sessions' for dream
- packages/core/src/utils/forkedAgent.ts — pure cache util (moved from background/)
- packages/core/src/utils/sideQuery.ts — pure util (moved from auxiliary/)
### Deleted
- background/taskRegistry, taskDrainer, taskScheduler, taskHub and all tests
- background/forkedAgent (moved to utils/)
- auxiliary/sideQuery (moved to utils/)
- memory/extractScheduler, dreamScheduler, state and all tests
### Modified
- config/config.ts — Config owns MemoryManager instance; getMemoryManager()
- core/client.ts — all memory ops via config.getMemoryManager()
- core/client.test.ts — mock MemoryManager instead of individual modules
- memory/status.ts — accepts MemoryManager param, drops globalBackgroundTaskHub
- index.ts — memory exports reduced from 14 modules to 5 (manager/types/paths/store/const)
- cli/commands/dreamCommand.ts — via config.getMemoryManager()
- cli/commands/forgetCommand.ts — via config.getMemoryManager()
- cli/components/Footer.tsx — useSyncExternalStore replacing setInterval polling
- cli/components/Footer.test.tsx — add getMemoryManager mock
|
||
|
|
07475026f6
|
fix(cli): remember "Start new chat session" until summary changes (#3308)
* fix(cli): remember "Start new chat session" until summary changes
Persist a project-scoped Welcome Back restart choice keyed to the
current PROJECT_SUMMARY fingerprint. This suppresses the Welcome Back
dialog after choosing "Start new chat session", while still showing it
again after the project summary is updated.
* fix conflict
|
||
|
|
70396d1276
|
feat: optimize compact mode UX — shortcuts, settings sync, and safety (#3100)
* feat: optimize compact mode UX — shortcuts, settings sync, and safety improvements
- Add Ctrl+O to keyboard shortcuts list (?) and /help command
- Sync compact mode toggle from Settings dialog with CompactModeContext
- Protect tool approval prompts from being hidden in compact mode
(MainContent forces live rendering during WaitingForConfirmation)
- Remove snapshot freezing on toggle — treat as persistent preference,
not temporary peek (differs from Claude Code's session-scoped model)
- Add compact mode tip to startup Tips rotation for non-intrusive discovery
- Remove compact mode indicator from footer to reduce UI clutter
- Add competitive analysis design doc (EN + ZH) comparing with Claude Code
- Update user docs (settings.md) and i18n translations (en/zh/ru/pt)
Relates to #3047, #2767, #2770
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: remove frozenSnapshot dead code and Chinese design doc
- Remove frozenSnapshot state, useEffect, and all related logic from
AppContainer, MainContent, CompactModeContext, and test files
- Simplify MainContent to always render live pendingHistoryItems
- Delete compact-mode-design-zh.md (redundant Chinese translation)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review feedback for compact mode optimization
- Add refreshStatic() call after setCompactMode in SettingsDialog
so already-rendered Static history updates immediately
- Fix outdated column split comment in KeyboardShortcuts (5+4+4)
- Update design doc: remove all frozenSnapshot references, renumber
optimization recommendations, fix file reference descriptions
- Add missing i18n keys for de.js and ja.js locales
- Add test for SettingsDialog compact mode sync with CompactModeContext
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: prevent subagent confirmation from being hidden in compact mode
hasConfirmingTool only checks ToolCallStatus.Confirming, but subagent
approvals arrive via resultDisplay.pendingConfirmation while the tool
status remains Executing. Add hasSubagentPendingConfirmation to the
showCompact guard so tool groups with pending subagent confirmations
are always force-expanded.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: force show subagent confirmation result in compact mode
The previous fix (
08d3d6eb6f
feat(acp): add complete hooks support for ACP integration (#3248)
* complete hooks for acp * resolve comment * resolve test * resolve comments for SessionEnd/SessionStart/PostToolUseFailure/PostToolUse |
7103c905f7
feat(cli): add startup performance profiler (#3232)
feat(cli): add startup performance profiler (#3219) Add a lightweight startup profiler activated via QWEN_CODE_PROFILE_STARTUP=1. When enabled, collects performance.now() timestamps at 7 key phases in main() and writes a JSON report to ~/.qwen/startup-perf/. Also records process.uptime() at T0 to capture module loading time not covered by checkpoint-based measurement. Key design decisions: - Only profiles inside sandbox child process to avoid duplicate reports - initStartupProfiler() is idempotent (resets state on each call) - Filename uses report.sessionId for consistency with JSON content - Zero overhead when disabled (single env var check) Initial measurement: module loading ~1342ms (94%), main() ~85ms (6%), confirming barrel exports and eager dependency loading as primary optimization targets for #3011. Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
4daf7f9353
feat(core): add microcompaction for idle context cleanup (#3006)
* feat(core): add microcompaction for idle context cleanup
Clear old tool result content from chat history when the user returns
after an idle period (default 60 min). Replaces functionResponse output
with a sentinel string for compactable tools (read_file, shell, grep,
glob, web_fetch, web_search, edit, write_file), keeping the N most
recent results intact (default 5). Runs before full compression so it
can shed tokens cheaply without an API call.
- Time-based trigger reuses lastApiCompletionTimestamp from thinking cleanup
- Per-part counting so keepRecent applies to individual tool results
even when batched in parallel
- Preserves tool error responses (only clears successful outputs)
- Configurable via settings.json (context.microcompaction) with env var
overrides for E2E testing
- Enabled by default
* refactor(config): unify idle cleanup settings under clearContextOnIdle
Consolidate thinking block cleanup and tool results microcompaction
config into a single `context.clearContextOnIdle` settings group:
{
"context": {
"clearContextOnIdle": {
"thinkingThresholdMinutes": 5,
"toolResultsThresholdMinutes": 60,
"toolResultsNumToKeep": 5
}
}
}
- Use -1 on either threshold to disable that cleanup (no enabled bool)
- Remove separate `microcompaction` and `gapThresholdMinutes` settings
- Thinking cleanup: 5 min default (unchanged)
- Tool results cleanup: 60 min default
- Preserve tool error responses (only clear successful outputs)
* feat(vscode-ide-companion): add clearContextOnIdle settings configuration
- Add gapThresholdMinutes settings for thinking blocks, tool results, and retention count
- Remove deprecated gapThresholdMinutes from root settings level
This reorganizes the context clearing settings into a dedicated clearContextOnIdle object with configurable thresholds for thinking blocks and tool results.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
* fix(core): restrict microcompaction to user-initiated messages only
Move microcompactHistory() inside the UserQuery/Cron guard so model
latency during tool-call loops doesn't count as user idle time.
* docs: update settings docs for clearContextOnIdle config rename
Replace stale `context.gapThresholdMinutes` entry with the new
`context.clearContextOnIdle.*` settings group introduced in the
microcompaction feature.
* fix(core): address review comments on microcompaction PR
- Guard against NaN in toolResultsNumToKeep with Number.isFinite()
- Report effective keepRecent (after Math.max) in meta, not raw config
- Fix comment to mention cron messages alongside user messages
---------
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
8d74a0cf0a
feat(subagents): add disallowedTools field to agent definitions (#3064)
* feat(subagents): add disallowedTools field to agent definitions Add a `disallowedTools` blocklist to agent frontmatter, letting agents specify tools they should not have access to. Supports exact tool names, MCP server-level patterns (e.g., `mcp__slack`), and display name aliases. Applied as a post-filter in AgentCore.prepareTools() after the existing `tools` allowlist. Persisted through serialize/parse roundtrips. * docs: document disallowedTools and MCP tool behavior for subagents Add Tool Configuration section to sub-agents docs explaining: - tools allowlist and disallowedTools blocklist - How MCP tools follow the same allowlist/blocklist rules - MCP server-level patterns in disallowedTools * fix(subagents): validate disallowedTools in SubagentValidator Reuse the existing validateTools() method to validate disallowedTools entries at config validation time, catching non-string and empty entries before they reach runtime. * test: remove flaky BaseSelectionList scroll test on Windows |
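The blocklist described in this commit can be pictured as agent frontmatter. This is a hedged sketch, not taken from the repo: only the `disallowedTools` field, the exact-name and `mcp__slack` server-level pattern forms, and the `tools` allowlist come from the commit message; the agent name, description, and tool names are assumed for illustration.

```yaml
# Hypothetical agent definition illustrating the disallowedTools blocklist.
name: release-notes-writer
description: Drafts release notes from recent commits.
tools:             # allowlist, applied first in prepareTools()
  - read_file
  - grep
disallowedTools:   # blocklist, applied as a post-filter
  - shell          # exact tool name
  - mcp__slack     # server-level pattern: blocks every tool from the slack MCP server
```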
b3bc42931e
feat: add contextual tips system with post-response context awareness (#2904)
* feat: add contextual tips system with post-response context awareness Add a context-aware tips system that proactively shows helpful tips based on session state. Post-response tips warn when context usage exceeds 80% or 95%, suggesting /compress. Startup tips rotate across sessions via LRU scheduling with cross-session persistence (~/.qwen/tip_history.json). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: use value import for runtime values in useContextualTips Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address PR review feedback - Use lastSessionTimestamp instead of totalShown for cross-session LRU - Move getTipHistory singleton from Tips.tsx to services/tips/index.ts - Defer TipHistory.load() when hideTips is true (no side effects) - Use os.tmpdir() in tests for cross-platform portability - Add proper translations for de/ja/pt/ru locale files - Accept TipHistory | null in useContextualTips Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: address Copilot review feedback - Validate tips field type in TipHistory.load() to handle corrupted JSON - Split approval-mode tip into platform-specific variants using ctx.platform - Add afterEach cleanup for temp files in all test suites - Guard useContextualTips against null tipHistory Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: import shared DEFAULT_TOKEN_LIMIT, harden tipHistory, set file permissions - Import DEFAULT_TOKEN_LIMIT from @qwen-code/qwen-code-core instead of hardcoding 1_048_576 in tipRegistry.ts and useContextualTips.ts - Add normalizeEntry() to defensively handle corrupted tip history entries - Write tip_history.json with mode 0o600 for privacy on multi-user systems Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: remove unused compressionThreshold from TipContext compressionThreshold was defined in TipContext but never used by any tip's isRelevant check. 
Remove it to avoid misleading consumers into thinking tips respect the user's compression settings. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: sanitize sessionCount and getLastShown against corrupted tip history - Validate sessionCount is finite and non-negative in TipHistory.load() - Use normalizeEntry() in getLastShown() for corrupted lastSessionTimestamp Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: add contextual tips user documentation Add docs/users/features/tips.md covering startup tips, post-response context warnings, tip history persistence, and the hideTips setting. Update settings.md description and register the new page in _meta.ts. --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
1557d93043
feat(cli): support tools.sandboxImage in settings (#3146)
Co-authored-by: jinye.djy <jinye.djy@alibaba-inc.com> |
5482044e59
fix: improve /model --fast description clarity and prevent accidental activation (#3077)
Replace vague "background tasks" with specific "prompt suggestions and speculative execution" in the --fast flag description across all i18n locales, docs, and VS Code schema. Update example model name from qwen3.5-flash to qwen3-coder-flash. Also fix completion logic to require a non-empty partial arg before suggesting --fast, preventing Tab+Enter from accidentally entering fast model mode. Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
746f67f436
refactor: rename verboseMode to compactMode for better UX clarity (#3075)
The "Compact Mode" label is more intuitive than "Verbose Mode" for users, as it directly describes the default compact view experience. This change inverts the boolean semantics (compactMode=false means show full output) and exposes the setting in the /settings dialog (showInDialog: true). - Rename ui.verboseMode → ui.compactMode with inverted default (false) - Rename VerboseModeContext → CompactModeContext (file and exports) - Rename TOGGLE_VERBOSE_MODE → TOGGLE_COMPACT_MODE in key bindings - Update all consumer components with inverted logic - Update i18n keys across 6 locales (verbose → compact) - Update VS Code settings schema - Add ui.compactMode documentation to settings.md - Fix Ctrl+O description in keyboard-shortcuts.md |
bcd0b5efe6
docs: update status line documentation to reflect inline footer layout
- Update architecture diagram to show status line in footer left section instead of separate row below - Document 1-row (default mode) and 2-row (non-default mode) layouts - Note suppressHint behavior and truncation - Update settings reference description |
0be4d32cb0 | Merge remote-tracking branch 'origin/main' into feature/status-line-customization
1e8bc031cc
feat(core): adaptive output token escalation (8K default + 64K retry) (#2898)
* feat(core): adaptive output token escalation (8K default + 64K retry) 99% of model responses are under 5K tokens, but we previously reserved 32K for every request. This wastes GPU slot capacity by ~4x. Now the default output limit is 8K. When a response hits this cap (stop_reason=max_tokens), it automatically retries once at 64K — only the ~1% of requests that actually need more tokens pay the cost. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: add design doc and user doc for adaptive output token escalation - Add design doc covering problem, architecture, token limit determination, escalation mechanism, and design decisions - Document QWEN_CODE_MAX_OUTPUT_TOKENS env var in settings.md - Add max_tokens adaptive behavior explanation in model config section --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
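The retry-on-cap behavior described above can be sketched as follows. This is an illustration of the mechanism, not the actual implementation: the function name, the `stopReason` response shape, and the exact constants are assumptions based on the commit message (8K default, one retry at 64K when the cap is hit).

```typescript
// Illustrative sketch of adaptive output-token escalation (names assumed).
interface ModelResponse {
  text: string;
  stopReason: string; // e.g. 'stop' or 'max_tokens'
}

const DEFAULT_MAX_OUTPUT = 8 * 1024;    // 8K default per the commit
const ESCALATED_MAX_OUTPUT = 64 * 1024; // 64K retry ceiling

function generateWithEscalation(
  send: (maxOutputTokens: number) => ModelResponse,
): ModelResponse {
  const first = send(DEFAULT_MAX_OUTPUT);
  // Only responses that actually hit the cap pay for the larger reservation.
  if (first.stopReason !== 'max_tokens') return first;
  // Retry exactly once at the escalated limit.
  return send(ESCALATED_MAX_OUTPUT);
}
```

The point of the design is that the ~99% of responses under the cap never reserve more than 8K of output budget; only the rare long response costs a second request.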
6a55a9aeea
feat(config): make thinking idle threshold configurable and lower default to 5min
Align with observed provider prompt-cache TTL (~5 min). Add `context.gapThresholdMinutes` setting so users can tune the threshold for providers with different cache TTLs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
7902806e63
docs: add ui.statusLine entry to settings reference
Add the missing ui.statusLine setting to the settings.md reference table with a link to the status-line feature documentation. |
3bce84d5da
feat(cli, webui): add follow-up suggestions feature (#2525)
* feat(cli, webui): add follow-up suggestions feature Implement context-aware follow-up suggestions that appear after task completion, suggesting relevant next actions like "commit this", "run tests", etc. - Add `followup/` module with types, generator, and rule-based provider - Export follow-up types and functions from core index - 8 default suggestion rules covering common workflows - Add `useFollowupSuggestionsCLI` hook for Ink/React - Integrate suggestion generation in AppContainer when streaming completes - Add Tab key to accept, arrow keys to cycle through suggestions - Display suggestions as ghost text in input prompt - Add `useFollowupSuggestions` hook for React - Update InputForm to display suggestions as placeholder - Add CSS styling for suggestion appearance with counter - Add keyboard handlers (Tab, arrow keys) - After streaming completes with tool calls, suggestions appear - Tab accepts the current suggestion - Left/Right arrows cycle through multiple suggestions - Typing or pasting dismisses the suggestion - Shell command rules (tests, git, npm install) don't work yet due to history not storing tool arguments - VSCode extension integration pending - Web UI needs parent app integration for suggestion generation Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: resolve merge conflicts and build errors - Rebased on upstream main ( |
f9d9a985ce | Merge branch 'main' into feat/support-permission
080271031d
Merge pull request #2400 from QwenLM/feat/system-prompt-sdk
feat: add system prompt customization options in SDK and CLI |
d129ddc489 | Merge branch 'main' into feat/support-permission
ce6be9aadd | feat: add system prompt customization options for CLI and SDK
e484dfbbad
refactor: remove summarizeToolOutput feature
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> Remove the summarizeToolOutput setting and related functionality. This feature allowed LLM-based summarization of shell tool output but is no longer needed. This simplifies the codebase by removing unused summarization logic and configuration options. |
fed08cb1dd
fix(config): remove enableToolOutputTruncation setting
Remove the enableToolOutputTruncation boolean setting. Users can now disable truncation by setting truncateToolOutputThreshold to 0 or a negative value instead of using a separate toggle. This simplifies the configuration and prevents users from accidentally disabling truncation which could cause memory issues with large tool outputs. Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
7450067e37 | Merge branch 'main' into feat/support-permission
6fee1ebeb8 | fix workspace dirs
3a549419ba | Merge branch 'main' into feat/sandbox-config-improvements
eeb4d85785 | feat(permissions): add permission system and rename folder trust command
112318d092 | Merge branch 'main' into feat/improve-auth-dialog-ux
d3cdad5100
feat(cli): improve auth dialog UX with clearer three-option layout
- Replace nested API-KEY submenu with flat three-option layout - Add descriptive labels for each authentication method: - Qwen OAuth: Free, up to 1,000 requests/day - Alibaba Cloud Coding Plan: Paid, multiple model providers - API Key: Bring your own API key - Simplify region selection for Coding Plan (China vs Global) - Use DescriptiveRadioButtonSelect for better visual hierarchy - Add itemGap prop to BaseSelectionList for spacing - Update i18n strings in en.js, zh.js, and ru.js - Simplify custom API key configuration info view - Clean up unused region-specific strings Closes #2016 Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
89f5f9c4c4
fix(openai): improve modality error messages and docs
- Update error messages for unsupported image/PDF inputs with clearer guidance - Add `modalities` setting to override auto-detected input modalities - Document `modalities` config in model-providers.md and settings.md - Update converter tests to match new error message format This provides users with actionable alternatives when their selected model doesn't support certain input types, and allows manual modality overrides for models not recognized by auto-detection. Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
9cb624f79f
docs: extract modelProviders section to standalone model-providers.md document
- Create new model-providers.md with complete model provider configuration guide - Add Bailian Coding Plan documentation with setup and auto-update details - Remove modelProviders content from settings.md to avoid duplication - Document reserved envKey BAILIAN_CODING_PLAN_API_KEY and security recommendations Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
1c9ff66c76
docs: enhance modelProviders configuration documentation
- Add comprehensive configuration examples for all auth types (openai, anthropic, gemini, vertex-ai) - Add local self-hosted model examples (vLLM, Ollama, LM Studio) - Clarify generation config layering with impermeable provider layer concept - Add Provider Model vs Runtime Model explanation - Document duplicate model ID limitation - Deprecate security.auth.apiKey and security.auth.baseUrl settings - Add notes about extra_body parameter support limitations Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
8b3aeb4550
refactor: unify sandbox configuration naming and improve telemetry config
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
a4ffc6eb24
feat: promote Agent Skills from experimental to stable
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
ff0ba0cc4e
Merge branch and resolve conflicts
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
0bcc5a96f4 | docs: add settings migration instructions
561fda297e | Rename disable* settings to enable* and consolidated 1 setting
831d74dbfe
feat: Preserve UTF-8 BOM when editing files (Fix #1672)
- Add FileEncoding constants (UTF8, UTF8_BOM) - Add detectFileBOM() to detect existing file encoding - Modify writeTextFile() to support BOM option - Add defaultFileEncoding configuration option - Preserve BOM when editing existing files - Use configured encoding for new files - Add comprehensive tests (unit, integration, e2e) - Update documentation Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
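A minimal sketch of the BOM handling this commit describes. `detectFileBOM` and a BOM-aware write path are named in the commit message, but the signatures, return values, and the `encodeTextFile` helper here are assumptions for illustration only.

```typescript
// Hypothetical sketch of UTF-8 BOM detection and preservation (signatures assumed).
const UTF8_BOM = Buffer.from([0xef, 0xbb, 0xbf]);

function detectFileBOM(bytes: Buffer): 'utf8-bom' | 'utf8' {
  // A UTF-8 BOM is the 3-byte prefix EF BB BF.
  return bytes.length >= 3 && bytes.subarray(0, 3).equals(UTF8_BOM)
    ? 'utf8-bom'
    : 'utf8';
}

function encodeTextFile(content: string, encoding: 'utf8-bom' | 'utf8'): Buffer {
  const body = Buffer.from(content, 'utf8');
  // Re-attach the BOM only when the original file (or configuration) used one,
  // so edits don't silently strip it.
  return encoding === 'utf8-bom' ? Buffer.concat([UTF8_BOM, body]) : body;
}
```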
1c5b74ebd9 | Merge branch 'main' into pr-1539
a896fc4a70
Merge pull request #1401 from QwenLM/feat/support-lsp
Add experimental LSP support for code intelligence |
532d97670b
feat: add extra_body support for OpenAI-compatible providers
Add extra_body configuration option to model.generationConfig for passing custom parameters to OpenAI-compatible API request bodies. - Add extra_body to ContentGeneratorConfig type - Add extra_body to MODEL_GENERATION_CONFIG_FIELDS and ModelGenerationConfig - Implement extra_body merging in DefaultOpenAICompatibleProvider - Implement extra_body merging in DashScopeOpenAICompatibleProvider - Update documentation with examples and provider compatibility notes - Note: This feature is only for OpenAI-compatible providers (openai, qwen-oauth) Resolves #1647 Resolves #1644 Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com> |
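As a hedged example of the settings shape described above: the `model.generationConfig.extra_body` path comes from the commit message, while the parameter inside it (`enable_thinking`) is purely illustrative — any provider-specific field could appear there, and it is passed through verbatim in the request body for OpenAI-compatible providers only.

```json
{
  "model": {
    "generationConfig": {
      "extra_body": {
        "enable_thinking": false
      }
    }
  }
}
```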
e77a67cd5e
refactor: remove maxOutputTokens user configuration
- Remove maxOutputTokens field from ContentGeneratorConfig - Remove maxOutputTokens auto-calculation logic in contentGenerator.ts - Remove maxOutputTokens sync logic in config.ts hot-update - Update dashscope.ts to use tokenLimit() function for dynamic calculation - Remove maxOutputTokens from documentation (settings.md) - Update all test cases to use correct model output limits - All 143 tests passing (config.test.ts: 105, dashscope.test.ts: 38) Changes: - Users can no longer configure maxOutputTokens via settings - Output token limits are now always dynamically calculated based on model - contextWindowSize (input) remains user-configurable - Prevents long text output truncation caused by default value settings |