From e384338145985973436e9cdc937a29e2f0c230c2 Mon Sep 17 00:00:00 2001 From: jinye Date: Sat, 25 Apr 2026 07:02:58 +0800 Subject: [PATCH 01/36] feat(SDK) Add Python SDK implementation for #3010 (#3494) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Codex worktree snapshot: startup-cleanup Co-authored-by: Codex * Add Python SDK real smoke test Adds a repository-only real E2E smoke script for the Python SDK, plus npm and developer documentation entry points. Co-authored-by: Qwen-Coder * fix(sdk-python): address review findings — bugs, type safety, and test coverage - Fix prepare_spawn_info: JS files now use "node" instead of sys.executable - Fix protocol.py: correct total=False misuse on 7 TypedDicts (required fields were optional) - Fix query.py: add _closed guard in _ensure_started, suppress exceptions in close() - Fix sync_query.py: prevent close() deadlock, add context manager, add timeouts - Fix transport.py: handle malformed JSON lines, add _closed guard in start() - Fix validation.py: use uuid.RFC_4122 instead of magic UUID - Fix __init__.py: export TextBlock, widen query_sync signature - Remove dead code: ensure_not_aborted, write_json_line, _thread_error - Add 12 new tests (29 → 41): context managers, JSON skip, closed guards, spawn info, timeouts Co-authored-by: Qwen-Coder * fix(sdk-python): address wenshao review — session_id, bool validation, debug stderr - Fix continue_session=True generating a wrong random session_id - Add _as_optional_bool helper for strict type validation on bool fields - Default debug stderr to sys.stderr when no custom callback is provided Co-authored-by: Qwen-Coder * fix(sdk-python): address remaining wenshao review feedback Co-authored-by: Qwen-Coder * test(cli): harden settings dialog restart prompt test Co-authored-by: Qwen-Coder * fix(sdk-python): review fixes — UUID compat, stderr fallback, sync cleanup - Remove UUID version restriction to support v6/v7/v8 (RFC 9562) - Always write 
to sys.stderr when stderr callback raises (was silent when debug=False) - Prevent duplicate _STOP sentinel in SyncQuery.close() via _stop_sent flag - Add ruff format --check to CI workflow - Fix smoke_real.py version guard: fail early before imports instead of NameError - Apply ruff format to existing files Co-authored-by: Qwen-Coder * fix(sdk-python): remaining review fixes — exit_code attr, guard strictness, sync timeout - Add exit_code attribute to ProcessExitError for programmatic access - Strengthen is_control_response/is_control_cancel guards to require payload fields, preventing misrouting of malformed messages - Expose control_request_timeout property on Query so SyncQuery uses the configured timeout instead of a hardcoded 30s default - Use dataclasses.replace() instead of direct mutation on frozen-style QueryOptions in query() factory - Add ResourceWarning in SyncQuery.__del__ when not properly closed Co-authored-by: Qwen-Coder * fix(sdk-python): add exit_code default and guard __del__ against partial GC - Give ProcessExitError.exit_code a default value (-1) so user code can construct the exception with just a message string - Wrap SyncQuery.__del__ in try/except AttributeError to prevent crashes when the object is partially garbage-collected Co-authored-by: Qwen-Coder * fix(sdk-python): review fixes — resource leak, type safety, CI matrix, docs - Fix SyncQuery.__del__ to call close() on GC instead of only warning - Replace hasattr duck-type check with isinstance(prompt, AsyncIterable) - Type-validate permission_mode/auth_type in QueryOptions.from_mapping - Use TypeGuard return types on all is_sdk_*/is_control_* predicates - Add 5s margin to sync wrapper timeouts to prevent error type masking - Expand CI matrix to test Python 3.10, 3.11, 3.12 - Change ProcessExitError.exit_code default from -1 to None - Add stderr to docs QueryOptions listing - Update README sync example to use context manager pattern Co-authored-by: Qwen-Coder * fix(sdk-python): preserve 
iterator exhaustion state and suppress detached task warning - Add _exhausted flag to Query.__anext__ and SyncQuery.__next__ so repeated iteration after end-of-stream raises Stop(Async)Iteration instead of blocking forever. - Remove re-raise in _initialize() to prevent asyncio "Task exception was never retrieved" warning on detached tasks; the error is already surfaced via _finish_with_error(). Co-authored-by: Qwen-Coder * fix(sdk-python): reject mcp_servers at validation time and add iterator/init tests - Reject mcp_servers in validate_query_options() with a clear error instead of advertising MCP support to the CLI and then failing at runtime when mcp_message arrives. - Remove dead mcp_servers branch from _initialize(). - Add tests for async/sync iterator exhaustion, detached init task warning suppression, and mcp_servers validation. Co-authored-by: Qwen-Coder * fix(sdk-python): fix ruff lint errors in new tests - Use ControlRequestTimeoutError instead of bare Exception (B017) - Fix import sorting for stdlib vs third-party (I001) - Break long line to stay within 88-char limit (E501) Co-authored-by: Qwen-Coder * style(sdk-python): apply ruff format to new tests Co-authored-by: Qwen-Coder --------- Co-authored-by: jinye.djy Co-authored-by: Qwen-Coder --- .github/workflows/sdk-python.yml | 59 ++ README.md | 35 +- docs/developers/_meta.ts | 3 +- docs/developers/sdk-python.md | 168 +++++ package.json | 4 + .../src/ui/components/SettingsDialog.test.tsx | 30 +- packages/sdk-python/README.md | 117 ++++ packages/sdk-python/pyproject.toml | 61 ++ packages/sdk-python/scripts/smoke_real.py | 388 +++++++++++ .../sdk-python/src/qwen_code_sdk/__init__.py | 108 ++++ .../sdk-python/src/qwen_code_sdk/errors.py | 27 + .../src/qwen_code_sdk/json_lines.py | 14 + .../sdk-python/src/qwen_code_sdk/protocol.py | 357 ++++++++++ .../sdk-python/src/qwen_code_sdk/py.typed | 0 .../sdk-python/src/qwen_code_sdk/query.py | 607 +++++++++++++++++ .../src/qwen_code_sdk/sync_query.py | 217 +++++++ 
.../sdk-python/src/qwen_code_sdk/transport.py | 243 +++++++ .../sdk-python/src/qwen_code_sdk/types.py | 323 +++++++++ .../src/qwen_code_sdk/validation.py | 94 +++ .../sdk-python/tests/integration/conftest.py | 400 ++++++++++++ .../tests/integration/test_async_query.py | 276 ++++++++ .../tests/integration/test_sync_query.py | 82 +++ .../sdk-python/tests/unit/test_query_core.py | 612 ++++++++++++++++++ .../sdk-python/tests/unit/test_transport.py | 291 +++++++++ .../sdk-python/tests/unit/test_validation.py | 174 +++++ 25 files changed, 4676 insertions(+), 14 deletions(-) create mode 100644 .github/workflows/sdk-python.yml create mode 100644 docs/developers/sdk-python.md create mode 100644 packages/sdk-python/README.md create mode 100644 packages/sdk-python/pyproject.toml create mode 100644 packages/sdk-python/scripts/smoke_real.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/__init__.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/errors.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/json_lines.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/protocol.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/py.typed create mode 100644 packages/sdk-python/src/qwen_code_sdk/query.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/sync_query.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/transport.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/types.py create mode 100644 packages/sdk-python/src/qwen_code_sdk/validation.py create mode 100644 packages/sdk-python/tests/integration/conftest.py create mode 100644 packages/sdk-python/tests/integration/test_async_query.py create mode 100644 packages/sdk-python/tests/integration/test_sync_query.py create mode 100644 packages/sdk-python/tests/unit/test_query_core.py create mode 100644 packages/sdk-python/tests/unit/test_transport.py create mode 100644 packages/sdk-python/tests/unit/test_validation.py diff --git a/.github/workflows/sdk-python.yml 
b/.github/workflows/sdk-python.yml new file mode 100644 index 000000000..43e76dedb --- /dev/null +++ b/.github/workflows/sdk-python.yml @@ -0,0 +1,59 @@ +name: 'SDK Python' + +on: + pull_request: + branches: + - 'main' + - 'release/**' + paths: + - 'packages/sdk-python/**' + - 'docs/developers/sdk-python.md' + - 'docs/developers/_meta.ts' + - 'README.md' + - 'package.json' + - '.github/workflows/sdk-python.yml' + push: + branches: + - 'main' + - 'release/**' + paths: + - 'packages/sdk-python/**' + - 'docs/developers/sdk-python.md' + - 'docs/developers/_meta.ts' + - 'README.md' + - 'package.json' + - '.github/workflows/sdk-python.yml' + +jobs: + sdk-python: + name: 'SDK Python (${{ matrix.python-version }})' + runs-on: 'ubuntu-latest' + strategy: + fail-fast: false + matrix: + python-version: ['3.10', '3.11', '3.12'] + steps: + - name: 'Checkout' + uses: 'actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8' # ratchet:actions/checkout@v5 + + - name: 'Set up Python' + uses: 'actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065' # ratchet:actions/setup-python@v5 + with: + python-version: '${{ matrix.python-version }}' + + - name: 'Install SDK test dependencies' + run: | + python -m pip install --upgrade pip + python -m pip install -e 'packages/sdk-python[dev]' + + - name: 'Run Ruff' + run: 'python -m ruff check --config packages/sdk-python/pyproject.toml packages/sdk-python' + + - name: 'Run Ruff Format' + run: 'python -m ruff format --check --config packages/sdk-python/pyproject.toml packages/sdk-python' + + - name: 'Run Mypy' + run: 'python -m mypy --config-file packages/sdk-python/pyproject.toml packages/sdk-python/src' + + - name: 'Run Pytest' + run: 'python -m pytest -c packages/sdk-python/pyproject.toml packages/sdk-python/tests -q' diff --git a/README.md b/README.md index b56c72120..5358f9d0e 100644 --- a/README.md +++ b/README.md @@ -424,7 +424,7 @@ As an open-source terminal agent, you can use Qwen Code in four primary ways: 1. 
Interactive mode (terminal UI) 2. Headless mode (scripts, CI) 3. IDE integration (VS Code, Zed) -4. TypeScript SDK +4. SDKs (TypeScript, Python, Java) #### Interactive mode @@ -452,11 +452,38 @@ Use Qwen Code inside your editor (VS Code, Zed, and JetBrains IDEs): - [Use in Zed](https://qwenlm.github.io/qwen-code-docs/en/users/integration-zed/) - [Use in JetBrains IDEs](https://qwenlm.github.io/qwen-code-docs/en/users/integration-jetbrains/) -#### TypeScript SDK +#### SDKs -Build on top of Qwen Code with the TypeScript SDK: +Build on top of Qwen Code with the available SDKs: -- [Use the Qwen Code SDK](./packages/sdk-typescript/README.md) +- TypeScript: [Use the Qwen Code SDK](./packages/sdk-typescript/README.md) +- Python: [Use the Python SDK](./packages/sdk-python/README.md) +- Java: [Use the Java SDK](./packages/sdk-java/qwencode/README.md) + +Python SDK example: + +```python +import asyncio + +from qwen_code_sdk import is_sdk_result_message, query + + +async def main() -> None: + result = query( + "Summarize the repository layout.", + { + "cwd": "/path/to/project", + "path_to_qwen_executable": "qwen", + }, + ) + + async for message in result: + if is_sdk_result_message(message): + print(message["result"]) + + +asyncio.run(main()) +``` ## Commands & Shortcuts diff --git a/docs/developers/_meta.ts b/docs/developers/_meta.ts index 42c938c9f..ed6597257 100644 --- a/docs/developers/_meta.ts +++ b/docs/developers/_meta.ts @@ -11,7 +11,8 @@ export default { type: 'separator', }, 'sdk-typescript': 'Typescript SDK', - 'sdk-java': 'Java SDK(alpha)', + 'sdk-python': 'Python SDK (alpha)', + 'sdk-java': 'Java SDK (alpha)', 'Dive Into Qwen Code': { title: 'Dive Into Qwen Code', type: 'separator', diff --git a/docs/developers/sdk-python.md b/docs/developers/sdk-python.md new file mode 100644 index 000000000..7ef2c0ab6 --- /dev/null +++ b/docs/developers/sdk-python.md @@ -0,0 +1,168 @@ +# Python SDK + +## `qwen-code-sdk` + +`qwen-code-sdk` is an experimental Python SDK for Qwen 
Code. v1 targets the +existing `stream-json` CLI protocol and keeps the transport surface small and +testable. + +## Scope + +- Package name: `qwen-code-sdk` +- Import path: `qwen_code_sdk` +- Runtime requirement: Python `>=3.10` +- CLI dependency: external `qwen` executable is required in v1 +- Transport scope: process transport only +- Not included in v1: ACP transport, SDK-embedded MCP servers + +## Install + +```bash +pip install qwen-code-sdk +``` + +If `qwen` is not on `PATH`, pass `path_to_qwen_executable` explicitly. + +## Quick Start + +```python +import asyncio + +from qwen_code_sdk import is_sdk_result_message, query + + +async def main() -> None: + result = query( + "Explain the repository structure.", + { + "cwd": "/path/to/project", + "path_to_qwen_executable": "qwen", + }, + ) + + async for message in result: + if is_sdk_result_message(message): + print(message["result"]) + + +asyncio.run(main()) +``` + +## API Surface + +### Top-level entry points + +- `query(prompt, options=None) -> Query` +- `query_sync(prompt, options=None) -> SyncQuery` + +`prompt` supports either: + +- `str` for single-turn requests +- `AsyncIterable[SDKUserMessage]` for multi-turn streams + +### `Query` + +- Async iterable over SDK messages +- `close()` +- `interrupt()` +- `set_model(model)` +- `set_permission_mode(mode)` +- `supported_commands()` +- `mcp_server_status()` +- `get_session_id()` +- `is_closed()` + +### `QueryOptions` + +Supported options in v1: + +- `cwd` +- `model` +- `path_to_qwen_executable` +- `permission_mode` +- `can_use_tool` +- `env` +- `system_prompt` +- `append_system_prompt` +- `debug` +- `max_session_turns` +- `core_tools` +- `exclude_tools` +- `allowed_tools` +- `auth_type` +- `include_partial_messages` +- `resume` +- `continue_session` +- `session_id` +- `timeout` +- `stderr` + +`mcp_servers` is rejected by `validate_query_options()` with a clear error in +v1; MCP server configuration is not yet supported through the SDK. + +Session argument priority is fixed as: + +1. `resume` +2. `continue_session` +3.
`session_id` + +## Permission Handling + +When the CLI emits a `can_use_tool` control request, the SDK routes it through +`can_use_tool(tool_name, tool_input, context)`. + +- Default behavior: deny +- Default timeout: 60 seconds +- Timeout fallback: deny +- Callback exceptions: converted to deny with an error message +- Callback context: `cancel_event`, `suggestions`, and `blocked_path` +- Callback contract: `can_use_tool` must be async with 3 positional arguments; + `stderr` must accept 1 positional string argument + +## Error Model + +- `ValidationError`: invalid options, invalid UUIDs, unsupported combinations +- `ControlRequestTimeoutError`: initialize, interrupt, or other control request + timed out +- `ProcessExitError`: CLI exited non-zero +- `AbortError`: control request or session was cancelled + +## Troubleshooting + +If the SDK cannot start the CLI: + +- Verify `qwen --version` works in the target environment +- Pass `path_to_qwen_executable` if your shell uses `nvm`, `pyenv`, or other + non-standard PATH setup +- Use `debug=True` or `stderr=print` to surface CLI stderr while debugging + +If session control calls time out: + +- Check that the target `qwen` version supports `--input-format stream-json` +- Increase `timeout.control_request` +- Verify that no wrapper script is swallowing stdout/stderr + +## Repository Integration + +Repository-level helper commands: + +- `npm run test:sdk:python` +- `npm run lint:sdk:python` +- `npm run typecheck:sdk:python` +- `npm run smoke:sdk:python -- --qwen qwen` + +## Real E2E Smoke + +For a real runtime check (actual `qwen` process + real model call), run from +the repository root. 
The npm helper uses `python3`, so ensure it resolves to a +Python `>=3.10` interpreter: + +```bash +npm run smoke:sdk:python -- --qwen qwen +``` + +This script runs: + +- async single-turn query +- async control flow (`supported_commands`, permission mode updates) +- sync `query_sync` query + +It prints JSON and returns non-zero on failure. diff --git a/package.json b/package.json index 5611c59ea..507ea5da2 100644 --- a/package.json +++ b/package.json @@ -43,6 +43,7 @@ "test:integration:sandbox:podman": "cross-env QWEN_SANDBOX=podman vitest run --root ./integration-tests", "test:integration:sdk:sandbox:none": "cross-env QWEN_SANDBOX=false vitest run --root ./integration-tests --poolOptions.threads.maxThreads 2 sdk-typescript", "test:integration:sdk:sandbox:docker": "cross-env QWEN_SANDBOX=docker npm run build:sandbox && QWEN_SANDBOX=docker vitest run --root ./integration-tests --poolOptions.threads.maxThreads 2 sdk-typescript", + "test:sdk:python": "python3 -m pytest -c packages/sdk-python/pyproject.toml packages/sdk-python/tests -q", "test:integration:cli:sandbox:none": "cross-env QWEN_SANDBOX=false vitest run --root ./integration-tests cli", "test:integration:cli:sandbox:docker": "cross-env QWEN_SANDBOX=docker npm run build:sandbox && QWEN_SANDBOX=docker vitest run --root ./integration-tests cli", "test:integration:interactive:sandbox:none": "cross-env QWEN_SANDBOX=false vitest run --root ./integration-tests interactive", @@ -53,9 +54,12 @@ "lint": "eslint . --ext .ts,.tsx && eslint integration-tests", "lint:fix": "eslint . --fix && eslint integration-tests --fix", "lint:ci": "eslint . 
--ext .ts,.tsx --max-warnings 0 && eslint integration-tests --max-warnings 0", + "lint:sdk:python": "python3 -m ruff check --config packages/sdk-python/pyproject.toml packages/sdk-python", "lint:all": "node scripts/lint.js", "format": "prettier --experimental-cli --write .", "typecheck": "npm run typecheck --workspaces --if-present", + "typecheck:sdk:python": "python3 -m mypy --config-file packages/sdk-python/pyproject.toml packages/sdk-python/src", + "smoke:sdk:python": "python3 packages/sdk-python/scripts/smoke_real.py", "check-i18n": "npm run check-i18n --workspace=packages/cli", "preflight": "npm run clean && npm ci && npm run format && npm run lint:ci && npm run build && npm run typecheck && npm run test:ci", "prepare": "husky && npm run build && npm run bundle", diff --git a/packages/cli/src/ui/components/SettingsDialog.test.tsx b/packages/cli/src/ui/components/SettingsDialog.test.tsx index c72750737..c1b75f755 100644 --- a/packages/cli/src/ui/components/SettingsDialog.test.tsx +++ b/packages/cli/src/ui/components/SettingsDialog.test.tsx @@ -963,11 +963,25 @@ describe('SettingsDialog', () => { , ); - // Trigger a restart-required setting change: navigate to "Language: UI" (2nd item) and toggle it. - stdin.write(TerminalKeys.DOWN_ARROW as string); - await wait(); - stdin.write(TerminalKeys.ENTER as string); - await wait(); + await waitFor(() => { + expect(lastFrame()).toContain('Tool Approval Mode'); + }); + + const languageIndex = getDialogSettingKeys().indexOf('general.language'); + expect(languageIndex).toBeGreaterThanOrEqual(0); + + const press = async (key: string) => { + act(() => { + stdin.write(key); + }); + await wait(); + }; + + // Trigger a restart-required setting change by toggling the UI language setting. 
+ for (let i = 0; i < languageIndex; i++) { + await press(TerminalKeys.DOWN_ARROW as string); + } + await press(TerminalKeys.ENTER as string); await waitFor(() => { expect(lastFrame()).toContain( @@ -976,10 +990,8 @@ describe('SettingsDialog', () => { }); // Switch scopes; restart prompt should remain visible. - stdin.write(TerminalKeys.TAB as string); - await wait(); - stdin.write('2'); - await wait(); + await press(TerminalKeys.TAB as string); + await press('2'); await waitFor(() => { expect(lastFrame()).toContain( diff --git a/packages/sdk-python/README.md b/packages/sdk-python/README.md new file mode 100644 index 000000000..67cf3ae34 --- /dev/null +++ b/packages/sdk-python/README.md @@ -0,0 +1,117 @@ +# qwen-code-sdk + +Experimental Python SDK for programmatic access to Qwen Code through the +`stream-json` protocol. + +## Installation + +```bash +pip install qwen-code-sdk +``` + +## Requirements + +- Python `>=3.10` +- External `qwen` CLI installed and available in `PATH` + +You can also point the SDK at an explicit CLI binary or script with +`path_to_qwen_executable`. 
+ +## Quick Start + +```python +import asyncio + +from qwen_code_sdk import is_sdk_result_message, query + + +async def main() -> None: + result = query( + "List the top-level packages in this repository.", + { + "cwd": "/path/to/project", + "path_to_qwen_executable": "qwen", + }, + ) + + async for message in result: + if is_sdk_result_message(message): + print(message["result"]) + + +asyncio.run(main()) +``` + +## Sync API + +```python +from qwen_code_sdk import query_sync + + +with query_sync( + "Say hello", + { + "path_to_qwen_executable": "qwen", + }, +) as result: + for message in result: + print(message) +``` + +## Main APIs + +- `query(prompt, options=None) -> Query` +- `query_sync(prompt, options=None) -> SyncQuery` +- `Query.close()`, `interrupt()`, `set_model()`, `set_permission_mode()` +- `Query.supported_commands()`, `mcp_server_status()`, `get_session_id()` + +`prompt` accepts either a single `str` or an `AsyncIterable[SDKUserMessage]` +for multi-turn sessions. + +## Permission Callback + +```python +from qwen_code_sdk import query + + +async def can_use_tool(tool_name, tool_input, context): + if tool_name == "write_file": + return {"behavior": "deny", "message": "Writes disabled in this app"} + return {"behavior": "allow", "updatedInput": tool_input} + + +result = query( + "Create hello.txt", + { + "path_to_qwen_executable": "qwen", + "can_use_tool": can_use_tool, + }, +) +``` + +The callback defaults to deny. If it does not return within 60 seconds, the SDK +auto-denies the tool request. + +The `context` argument includes `cancel_event`, `suggestions`, and +`blocked_path` when the CLI provides a path-specific permission target. +`can_use_tool` must be an `async def` callback accepting +`(tool_name, tool_input, context)`. `stderr` must accept a single `str`. 
+ +## Errors + +- `ValidationError`: invalid query options or malformed session identifiers +- `ControlRequestTimeoutError`: CLI control operation exceeded timeout +- `ProcessExitError`: `qwen` exited with a non-zero code +- `AbortError`: query or control request was cancelled + +## Current Scope + +`0.1.x` is intentionally narrow: + +- Uses external `qwen` CLI via process transport +- Targets `stream-json` parity with the TypeScript SDK core flow +- Does not yet implement ACP transport +- Does not yet embed MCP servers inside the SDK process + +See [developer documentation](../../docs/developers/sdk-python.md) for more +detail. diff --git a/packages/sdk-python/pyproject.toml b/packages/sdk-python/pyproject.toml new file mode 100644 index 000000000..10745ef69 --- /dev/null +++ b/packages/sdk-python/pyproject.toml @@ -0,0 +1,61 @@ +[build-system] +requires = ["hatchling>=1.27.0"] +build-backend = "hatchling.build" + +[project] +name = "qwen-code-sdk" +version = "0.1.0" +description = "Python SDK for programmatic access to qwen-code CLI" +readme = "README.md" +requires-python = ">=3.10" +license = { text = "Apache-2.0" } +authors = [{ name = "Qwen Team" }] +keywords = ["qwen", "qwen-code", "sdk", "python", "ai", "code-assistant"] +classifiers = [ + "Development Status :: 3 - Alpha", + "Intended Audience :: Developers", + "License :: OSI Approved :: Apache Software License", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", + "Topic :: Software Development :: Libraries :: Python Modules", +] +dependencies = ["typing-extensions>=4.12.0"] + +[project.urls] +Homepage = "https://qwenlm.github.io/qwen-code-docs/" +Repository = "https://github.com/QwenLM/qwen-code" +Issues = "https://github.com/QwenLM/qwen-code/issues" + +[project.optional-dependencies] +dev = [ + "mypy>=1.11.0", + "pytest>=8.3.0", + "pytest-asyncio>=0.24.0", + "ruff>=0.8.0", +] + 
+[tool.hatch.build.targets.wheel] +packages = ["src/qwen_code_sdk"] + +[tool.pytest.ini_options] +testpaths = ["tests"] +pythonpath = ["src"] +asyncio_mode = "auto" + +[tool.ruff] +line-length = 88 +target-version = "py310" + +[tool.ruff.lint] +select = ["E", "F", "I", "B", "UP", "ASYNC", "RUF"] + +[tool.mypy] +python_version = "3.10" +strict = true +warn_unused_configs = true +warn_redundant_casts = true +warn_unused_ignores = true +show_error_codes = true +mypy_path = "src" diff --git a/packages/sdk-python/scripts/smoke_real.py b/packages/sdk-python/scripts/smoke_real.py new file mode 100644 index 000000000..820b5a21b --- /dev/null +++ b/packages/sdk-python/scripts/smoke_real.py @@ -0,0 +1,388 @@ +#!/usr/bin/env python3 +"""Run real end-to-end smoke tests against qwen CLI using qwen_code_sdk. + +This script is intentionally lightweight and avoids any test doubles. +It is useful for manual verification after changing SDK runtime behavior. +""" + +from __future__ import annotations + +import sys + +if sys.version_info < (3, 10): # noqa: UP036 + import json + + version = ".".join(str(part) for part in sys.version_info[:3]) + payload = { + "ok": False, + "stage": "startup", + "error": f"Python >=3.10 is required, current version is {version}", + "error_type": "RuntimeError", + } + print(json.dumps(payload, ensure_ascii=False, indent=2)) + raise SystemExit(2) + +import argparse +import asyncio +import json +import subprocess +import threading +from collections.abc import AsyncIterator, Awaitable, Callable +from dataclasses import asdict, dataclass +from pathlib import Path +from queue import Empty, Queue +from typing import Any, TypeVar + +SDK_ROOT = Path(__file__).resolve().parents[1] +SRC_ROOT = SDK_ROOT / "src" +if str(SRC_ROOT) not in sys.path: + sys.path.insert(0, str(SRC_ROOT)) + +from qwen_code_sdk import ( # noqa: E402 + SDKUserMessage, + SyncQuery, + is_sdk_assistant_message, + is_sdk_result_message, + is_sdk_system_message, + query, + query_sync, +) +from 
qwen_code_sdk.transport import prepare_spawn_info # noqa: E402 + +T = TypeVar("T") + + +@dataclass +class AsyncSingleResult: + ok: bool + assistant_text: str | None + result_text: str | None + session_id: str + + +@dataclass +class AsyncControlResult: + ok: bool + supported_commands_type: str + saw_system_message: bool + saw_result_message: bool + session_id: str + + +@dataclass +class SyncResult: + ok: bool + saw_result_message: bool + result_text: str | None + session_id: str + + +def parse_args() -> argparse.Namespace: + parser = argparse.ArgumentParser( + description="Run real qwen_code_sdk smoke tests using qwen CLI", + ) + parser.add_argument( + "--qwen", + default="qwen", + help="Path or command for qwen executable (default: qwen)", + ) + parser.add_argument( + "--cwd", + default=str(Path.cwd()), + help="Working directory passed to SDK query options", + ) + parser.add_argument( + "--model", + default=None, + help="Optional model name. If set, script will call set_model(model).", + ) + parser.add_argument( + "--timeout-seconds", + type=float, + default=90.0, + help="Timeout used for control/callback/stream-close options", + ) + parser.add_argument( + "--json-only", + action="store_true", + help="Print only JSON result (no progress logs)", + ) + return parser.parse_args() + + +def check_qwen_cli_available(qwen_cmd: str, timeout_seconds: float) -> str: + spawn_info = prepare_spawn_info(qwen_cmd) + completed = subprocess.run( + [spawn_info.command, *spawn_info.args, "--version"], + check=True, + capture_output=True, + text=True, + timeout=timeout_seconds, + ) + return completed.stdout.strip() + + +def build_options(args: argparse.Namespace) -> dict[str, Any]: + return { + "cwd": args.cwd, + "path_to_qwen_executable": args.qwen, + "permission_mode": "yolo", + "max_session_turns": 1, + "timeout": { + "control_request": args.timeout_seconds, + "can_use_tool": args.timeout_seconds, + "stream_close": args.timeout_seconds, + }, + } + + +def 
extract_assistant_text(message: dict[str, Any]) -> str: + content = message["message"].get("content", []) + if not isinstance(content, list): + return "" + + text_parts: list[str] = [] + for block in content: + if isinstance(block, dict) and block.get("type") == "text": + text_parts.append(str(block.get("text", ""))) + return "".join(text_parts) + + +async def run_async_single(args: argparse.Namespace) -> AsyncSingleResult: + token = "SDK_REAL_ASYNC_OK" + options = build_options(args) + q = query( + f"Reply exactly with {token}", + options, + ) + + assistant_text: str | None = None + result_text: str | None = None + try: + async for message in q: + if is_sdk_assistant_message(message): + assistant_text = (assistant_text or "") + extract_assistant_text( + message + ) + if is_sdk_result_message(message): + result_text = str(message.get("result", "")) + finally: + await q.close() + + ok = token in (assistant_text or "") and token in (result_text or "") + return AsyncSingleResult( + ok=ok, + assistant_text=assistant_text, + result_text=result_text, + session_id=q.get_session_id(), + ) + + +async def run_async_controls(args: argparse.Namespace) -> AsyncControlResult: + token = "SDK_REAL_CONTROL_OK" + options = build_options(args) + release_prompt = asyncio.Event() + + async def prompts() -> AsyncIterator[SDKUserMessage]: + await release_prompt.wait() + yield { + "type": "user", + "session_id": "00000000-0000-4000-8000-000000000001", + "message": { + "role": "user", + "content": f"Reply exactly with {token}", + }, + "parent_tool_use_id": None, + } + + q = query(prompts(), options) + + supported: dict[str, Any] | None = None + saw_system_message = False + saw_result_message = False + try: + supported = await q.supported_commands() + await q.set_permission_mode("plan") + await q.set_permission_mode("yolo") + if args.model: + await q.set_model(args.model) + + release_prompt.set() + async for message in q: + if is_sdk_system_message(message): + saw_system_message = True + if 
is_sdk_result_message(message): + saw_result_message = True + break + finally: + await q.close() + + ok = isinstance(supported, dict) and saw_result_message + return AsyncControlResult( + ok=ok, + supported_commands_type=type(supported).__name__, + saw_system_message=saw_system_message, + saw_result_message=saw_result_message, + session_id=q.get_session_id(), + ) + + +async def run_stage(stage: str, coro: Awaitable[T], timeout_seconds: float) -> T: + try: + return await asyncio.wait_for(coro, timeout=timeout_seconds) + except asyncio.TimeoutError as exc: + # On Python 3.10, asyncio.wait_for raises asyncio.TimeoutError, which is + # not the builtin TimeoutError until 3.11 made them aliases. + message = f"{stage} timed out after {timeout_seconds} seconds" + raise TimeoutError(message) from exc + + +def run_sync( + args: argparse.Namespace, + on_query: Callable[[SyncQuery], None] | None = None, +) -> SyncResult: + token = "SDK_REAL_SYNC_OK" + options = build_options(args) + q = query_sync( + f"Reply exactly with {token}", + options, + ) + if on_query is not None: + on_query(q) + + saw_result_message = False + result_text: str | None = None + try: + for message in q: + if is_sdk_result_message(message): + saw_result_message = True + result_text = str(message.get("result", "")) + break + finally: + q.close() + + ok = saw_result_message and token in (result_text or "") + return SyncResult( + ok=ok, + saw_result_message=saw_result_message, + result_text=result_text, + session_id=q.get_session_id(), + ) + + +def run_sync_with_timeout(args: argparse.Namespace) -> SyncResult: + result_queue: Queue[SyncResult | BaseException] = Queue(maxsize=1) + query_holder: dict[str, SyncQuery] = {} + + def remember_query(q: SyncQuery) -> None: + query_holder["query"] = q + + def worker() -> None: + try: + result_queue.put(run_sync(args, on_query=remember_query)) + except BaseException as exc: + result_queue.put(exc) + + thread = threading.Thread( + target=worker, + name="qwen-sdk-real-smoke-sync", + daemon=True, + ) + thread.start() + + try: + item = result_queue.get(timeout=args.timeout_seconds) + except Empty as exc: + q =
query_holder.get("query") + if q is not None: + q.close() + raise TimeoutError( + f"sync check timed out after {args.timeout_seconds} seconds" + ) from exc + + thread.join(timeout=1.0) + if isinstance(item, BaseException): + raise item + return item + + +def build_failure_payload( + *, + stage: str, + exc: BaseException, + qwen_version: str | None = None, + completed: dict[str, Any] | None = None, +) -> dict[str, Any]: + payload: dict[str, Any] = { + "ok": False, + "stage": stage, + "error": str(exc), + "error_type": type(exc).__name__, + } + if qwen_version is not None: + payload["qwen_version"] = qwen_version + if completed: + payload["completed"] = completed + return payload + + +async def main() -> int: + args = parse_args() + + try: + qwen_version = check_qwen_cli_available(args.qwen, args.timeout_seconds) + except (subprocess.CalledProcessError, OSError, subprocess.TimeoutExpired) as exc: + payload = build_failure_payload(stage="preflight", exc=exc) + print(json.dumps(payload, ensure_ascii=False, indent=2)) + return 2 + + stage = "async single-turn check" + completed: dict[str, Any] = {} + try: + if not args.json_only: + print(f"[smoke] qwen version: {qwen_version}") + print(f"[smoke] running {stage}...") + async_single = await run_stage( + stage, + run_async_single(args), + args.timeout_seconds, + ) + completed["async_single"] = asdict(async_single) + + stage = "async control check" + if not args.json_only: + print(f"[smoke] running {stage}...") + async_controls = await run_stage( + stage, + run_async_controls(args), + args.timeout_seconds, + ) + completed["async_controls"] = asdict(async_controls) + + stage = "sync check" + if not args.json_only: + print(f"[smoke] running {stage}...") + sync_result = run_sync_with_timeout(args) + completed["sync"] = asdict(sync_result) + except Exception as exc: + payload = build_failure_payload( + stage=stage, + exc=exc, + qwen_version=qwen_version, + completed=completed, + ) + print(json.dumps(payload, ensure_ascii=False, 
indent=2))
+        return 1
+
+    all_ok = async_single.ok and async_controls.ok and sync_result.ok
+    payload = {
+        "ok": all_ok,
+        "qwen_version": qwen_version,
+        "async_single": asdict(async_single),
+        "async_controls": asdict(async_controls),
+        "sync": asdict(sync_result),
+    }
+    print(json.dumps(payload, ensure_ascii=False, indent=2))
+    return 0 if all_ok else 1
+
+
+if __name__ == "__main__":
+    raise SystemExit(asyncio.run(main()))
diff --git a/packages/sdk-python/src/qwen_code_sdk/__init__.py b/packages/sdk-python/src/qwen_code_sdk/__init__.py
new file mode 100644
index 000000000..b2de65ad2
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/__init__.py
@@ -0,0 +1,108 @@
+"""qwen_code_sdk package exports."""
+
+from __future__ import annotations
+
+from collections.abc import AsyncIterable, Iterable, Mapping
+from typing import Any
+
+from .errors import (
+    AbortError,
+    ControlRequestTimeoutError,
+    ProcessExitError,
+    QwenSDKError,
+    ValidationError,
+)
+from .protocol import (
+    APIAssistantMessage,
+    APIUserMessage,
+    ContentBlock,
+    SDKAssistantMessage,
+    SDKMessage,
+    SDKPartialAssistantMessage,
+    SDKResultMessage,
+    SDKSystemMessage,
+    SDKUserMessage,
+    TextBlock,
+    ThinkingBlock,
+    ToolResultBlock,
+    ToolUseBlock,
+    Usage,
+    is_control_cancel,
+    is_control_request,
+    is_control_response,
+    is_sdk_assistant_message,
+    is_sdk_partial_assistant_message,
+    is_sdk_result_message,
+    is_sdk_system_message,
+    is_sdk_user_message,
+)
+from .query import Query, query
+from .sync_query import SyncQuery
+from .types import (
+    AuthType,
+    CanUseTool,
+    CanUseToolContext,
+    PermissionAllowResult,
+    PermissionDenyResult,
+    PermissionMode,
+    PermissionResult,
+    PermissionSuggestion,
+    QueryOptions,
+    QueryOptionsDict,
+    TimeoutOptions,
+    TimeoutOptionsDict,
+)
+
+
+def query_sync(
+    prompt: str | Iterable[SDKUserMessage] | AsyncIterable[SDKUserMessage],
+    options: QueryOptions | QueryOptionsDict | Mapping[str, Any] | None = None,
+) -> SyncQuery:
+    return SyncQuery(prompt=prompt, options=options)
+
+
+__all__ = [
+    "APIAssistantMessage",
+    "APIUserMessage",
+    "AbortError",
+    "AuthType",
+    "CanUseTool",
+    "CanUseToolContext",
+    "ContentBlock",
+    "ControlRequestTimeoutError",
+    "PermissionAllowResult",
+    "PermissionDenyResult",
+    "PermissionMode",
+    "PermissionResult",
+    "PermissionSuggestion",
+    "ProcessExitError",
+    "Query",
+    "QueryOptions",
+    "QueryOptionsDict",
+    "QwenSDKError",
+    "SDKAssistantMessage",
+    "SDKMessage",
+    "SDKPartialAssistantMessage",
+    "SDKResultMessage",
+    "SDKSystemMessage",
+    "SDKUserMessage",
+    "SyncQuery",
+    "TextBlock",
+    "ThinkingBlock",
+    "TimeoutOptions",
+    "TimeoutOptionsDict",
+    "ToolResultBlock",
+    "ToolUseBlock",
+    "Usage",
+    "ValidationError",
+    "is_control_cancel",
+    "is_control_request",
+    "is_control_response",
+    "is_sdk_assistant_message",
+    "is_sdk_partial_assistant_message",
+    "is_sdk_result_message",
+    "is_sdk_system_message",
+    "is_sdk_user_message",
+    "query",
+    "query_sync",
]
diff --git a/packages/sdk-python/src/qwen_code_sdk/errors.py b/packages/sdk-python/src/qwen_code_sdk/errors.py
new file mode 100644
index 000000000..e8837f917
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/errors.py
@@ -0,0 +1,27 @@
+"""Error types for qwen_code_sdk."""
+
+from __future__ import annotations
+
+
+class QwenSDKError(Exception):
+    """Base error for all SDK failures."""
+
+
+class ValidationError(QwenSDKError):
+    """Raised when query options are invalid."""
+
+
+class AbortError(QwenSDKError):
+    """Raised when an operation is aborted by caller or transport."""
+
+
+class ProcessExitError(QwenSDKError):
+    """Raised when qwen CLI exits with non-zero status or signal."""
+
+    def __init__(self, message: str, exit_code: int | None = None) -> None:
+        super().__init__(message)
+        self.exit_code = exit_code
+
+
+class ControlRequestTimeoutError(QwenSDKError):
+    """Raised when a control request times out waiting for response."""
diff --git a/packages/sdk-python/src/qwen_code_sdk/json_lines.py b/packages/sdk-python/src/qwen_code_sdk/json_lines.py
new file mode 100644
index 000000000..8b4b33ef7
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/json_lines.py
@@ -0,0 +1,14 @@
+"""JSON lines utilities."""
+
+from __future__ import annotations
+
+import json
+from typing import Any
+
+
+def serialize_json_line(payload: Any) -> str:
+    return json.dumps(payload, ensure_ascii=False, separators=(",", ":")) + "\n"
+
+
+def parse_json_line(line: str) -> Any:
+    return json.loads(line)
diff --git a/packages/sdk-python/src/qwen_code_sdk/protocol.py b/packages/sdk-python/src/qwen_code_sdk/protocol.py
new file mode 100644
index 000000000..7e5e50b70
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/protocol.py
@@ -0,0 +1,357 @@
+"""Protocol message types and helpers for qwen stream-json."""
+
+from __future__ import annotations
+
+from typing import Any, Literal, TypeAlias, TypeGuard
+
+from typing_extensions import NotRequired, TypedDict
+
+from .types import PermissionMode, PermissionSuggestion
+
+
+class Annotation(TypedDict):
+    type: str
+    value: str
+
+
+class Usage(TypedDict):
+    input_tokens: int
+    output_tokens: int
+    cache_creation_input_tokens: NotRequired[int]
+    cache_read_input_tokens: NotRequired[int]
+    total_tokens: NotRequired[int]
+
+
+class ExtendedUsage(Usage, total=False):
+    server_tool_use: dict[str, int]
+    service_tier: str
+    cache_creation: dict[str, int]
+
+
+class CLIPermissionDenial(TypedDict):
+    tool_name: str
+    tool_use_id: str
+    tool_input: Any
+
+
+class TextBlock(TypedDict):
+    type: Literal["text"]
+    text: str
+    annotations: NotRequired[list[Annotation]]
+
+
+class ThinkingBlock(TypedDict):
+    type: Literal["thinking"]
+    thinking: str
+    signature: NotRequired[str]
+    annotations: NotRequired[list[Annotation]]
+
+
+class ToolUseBlock(TypedDict):
+    type: Literal["tool_use"]
+    id: str
+    name: str
+    input: Any
+    annotations: NotRequired[list[Annotation]]
+
+
+class
ToolResultBlock(TypedDict): + type: Literal["tool_result"] + tool_use_id: str + content: NotRequired[str | list[ContentBlock]] + is_error: NotRequired[bool] + annotations: NotRequired[list[Annotation]] + + +ContentBlock: TypeAlias = TextBlock | ThinkingBlock | ToolUseBlock | ToolResultBlock + + +class APIUserMessage(TypedDict): + role: Literal["user"] + content: str | list[ContentBlock] + + +class APIAssistantMessage(TypedDict): + role: Literal["assistant"] + content: list[ContentBlock] + id: NotRequired[str] + type: NotRequired[Literal["message"]] + model: NotRequired[str] + stop_reason: NotRequired[str | None] + usage: NotRequired[Usage] + + +class SDKUserMessage(TypedDict): + type: Literal["user"] + session_id: str + message: APIUserMessage + parent_tool_use_id: str | None + uuid: NotRequired[str] + options: NotRequired[dict[str, Any]] + + +class SDKAssistantMessage(TypedDict): + type: Literal["assistant"] + uuid: str + session_id: str + message: APIAssistantMessage + parent_tool_use_id: str | None + + +class MCPServerState(TypedDict): + name: str + status: str + + +class SDKSystemMessage(TypedDict): + type: Literal["system"] + subtype: str + uuid: str + session_id: str + data: NotRequired[Any] + cwd: NotRequired[str] + tools: NotRequired[list[str]] + mcp_servers: NotRequired[list[MCPServerState]] + model: NotRequired[str] + permission_mode: NotRequired[str] + slash_commands: NotRequired[list[str]] + qwen_code_version: NotRequired[str] + output_style: NotRequired[str] + agents: NotRequired[list[str]] + skills: NotRequired[list[str]] + capabilities: NotRequired[dict[str, Any]] + + +class SDKResultMessageSuccess(TypedDict): + type: Literal["result"] + subtype: Literal["success"] + uuid: str + session_id: str + is_error: Literal[False] + duration_ms: int + duration_api_ms: int + num_turns: int + result: str + usage: ExtendedUsage + permission_denials: list[CLIPermissionDenial] + + +class ResultErrorObject(TypedDict): + message: str + type: NotRequired[str] + + 
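Editor's note: the `NotRequired` markers used throughout these message TypedDicts only constrain static checkers; at runtime the messages are plain dicts, so consumers must use `dict.get` for optional keys. A minimal standalone sketch (a toy `Usage`-like type re-declared here for illustration, not imported from the SDK):

```python
# Toy re-declaration for illustration; the SDK's real Usage type lives in protocol.py.
try:
    from typing import NotRequired, TypedDict  # Python 3.11+
except ImportError:  # pragma: no cover - older interpreters
    from typing_extensions import NotRequired, TypedDict


class Usage(TypedDict):
    input_tokens: int
    output_tokens: int
    total_tokens: NotRequired[int]  # optional key: may be absent at runtime


def total(usage: Usage) -> int:
    # Fall back to summing the required fields when the optional key is missing.
    return usage.get("total_tokens", usage["input_tokens"] + usage["output_tokens"])


print(total({"input_tokens": 3, "output_tokens": 5}))  # 8
print(total({"input_tokens": 3, "output_tokens": 5, "total_tokens": 9}))  # 9
```

This is also why `ExtendedUsage(Usage, total=False)` above is safe: `total=False` makes only the subclass's own keys optional, leaving the required keys inherited from `Usage` intact.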
+class SDKResultMessageError(TypedDict): + type: Literal["result"] + subtype: Literal["error_max_turns", "error_during_execution"] + uuid: str + session_id: str + is_error: Literal[True] + duration_ms: int + duration_api_ms: int + num_turns: int + usage: ExtendedUsage + permission_denials: list[CLIPermissionDenial] + error: NotRequired[ResultErrorObject] + + +SDKResultMessage: TypeAlias = SDKResultMessageSuccess | SDKResultMessageError + + +class MessageStartStreamEvent(TypedDict): + type: Literal["message_start"] + message: dict[str, Any] + + +class ContentBlockStartEvent(TypedDict): + type: Literal["content_block_start"] + index: int + content_block: ContentBlock + + +class ContentBlockDeltaEvent(TypedDict): + type: Literal["content_block_delta"] + index: int + delta: dict[str, Any] + + +class ContentBlockStopEvent(TypedDict): + type: Literal["content_block_stop"] + index: int + + +class MessageStopStreamEvent(TypedDict): + type: Literal["message_stop"] + + +StreamEvent: TypeAlias = ( + MessageStartStreamEvent + | ContentBlockStartEvent + | ContentBlockDeltaEvent + | ContentBlockStopEvent + | MessageStopStreamEvent +) + + +class SDKPartialAssistantMessage(TypedDict): + type: Literal["stream_event"] + uuid: str + session_id: str + event: StreamEvent + parent_tool_use_id: str | None + + +class CLIControlInterruptRequest(TypedDict): + subtype: Literal["interrupt"] + + +class CLIControlPermissionRequest(TypedDict): + subtype: Literal["can_use_tool"] + tool_name: str + tool_use_id: str + input: Any + permission_suggestions: list[PermissionSuggestion] | None + blocked_path: str | None + + +class CLIControlInitializeRequest(TypedDict): + subtype: Literal["initialize"] + hooks: NotRequired[Any] + mcpServers: NotRequired[dict[str, dict[str, Any]]] + + +class CLIControlSetPermissionModeRequest(TypedDict): + subtype: Literal["set_permission_mode"] + mode: PermissionMode + + +class CLIControlSetModelRequest(TypedDict): + subtype: Literal["set_model"] + model: str + + +class 
CLIControlMcpStatusRequest(TypedDict): + subtype: Literal["mcp_server_status"] + + +class CLIControlSupportedCommandsRequest(TypedDict): + subtype: Literal["supported_commands"] + + +ControlRequestPayload: TypeAlias = ( + CLIControlInterruptRequest + | CLIControlPermissionRequest + | CLIControlInitializeRequest + | CLIControlSetPermissionModeRequest + | CLIControlSetModelRequest + | CLIControlMcpStatusRequest + | CLIControlSupportedCommandsRequest + | dict[str, Any] +) + + +class CLIControlRequest(TypedDict): + type: Literal["control_request"] + request_id: str + request: ControlRequestPayload + + +class ControlResponseSuccess(TypedDict): + subtype: Literal["success"] + request_id: str + response: Any + + +class ControlResponseError(TypedDict): + subtype: Literal["error"] + request_id: str + error: str | dict[str, Any] + + +class CLIControlResponse(TypedDict): + type: Literal["control_response"] + response: ControlResponseSuccess | ControlResponseError + + +class ControlCancelRequest(TypedDict): + type: Literal["control_cancel_request"] + request_id: NotRequired[str] + + +SDKMessage: TypeAlias = ( + SDKUserMessage + | SDKAssistantMessage + | SDKSystemMessage + | SDKResultMessage + | SDKPartialAssistantMessage +) + + +ControlMessage: TypeAlias = ( + CLIControlRequest | CLIControlResponse | ControlCancelRequest +) + + +def is_sdk_user_message(msg: Any) -> TypeGuard[SDKUserMessage]: + return isinstance(msg, dict) and msg.get("type") == "user" and "message" in msg + + +def is_sdk_assistant_message(msg: Any) -> TypeGuard[SDKAssistantMessage]: + return ( + isinstance(msg, dict) + and msg.get("type") == "assistant" + and "session_id" in msg + and "message" in msg + ) + + +def is_sdk_system_message(msg: Any) -> TypeGuard[SDKSystemMessage]: + return ( + isinstance(msg, dict) + and msg.get("type") == "system" + and "subtype" in msg + and "session_id" in msg + ) + + +def is_sdk_result_message(msg: Any) -> TypeGuard[SDKResultMessage]: + return ( + isinstance(msg, dict) + and 
msg.get("type") == "result" + and "subtype" in msg + and "session_id" in msg + ) + + +def is_sdk_partial_assistant_message(msg: Any) -> TypeGuard[SDKPartialAssistantMessage]: + return ( + isinstance(msg, dict) + and msg.get("type") == "stream_event" + and "session_id" in msg + and "event" in msg + ) + + +def is_control_request(msg: Any) -> TypeGuard[CLIControlRequest]: + return ( + isinstance(msg, dict) + and msg.get("type") == "control_request" + and "request_id" in msg + and "request" in msg + ) + + +def is_control_response(msg: Any) -> TypeGuard[CLIControlResponse]: + return ( + isinstance(msg, dict) + and msg.get("type") == "control_response" + and "response" in msg + ) + + +def is_control_cancel(msg: Any) -> TypeGuard[ControlCancelRequest]: + return ( + isinstance(msg, dict) + and msg.get("type") == "control_cancel_request" + and "request_id" in msg + ) diff --git a/packages/sdk-python/src/qwen_code_sdk/py.typed b/packages/sdk-python/src/qwen_code_sdk/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/packages/sdk-python/src/qwen_code_sdk/query.py b/packages/sdk-python/src/qwen_code_sdk/query.py new file mode 100644 index 000000000..9bef85781 --- /dev/null +++ b/packages/sdk-python/src/qwen_code_sdk/query.py @@ -0,0 +1,607 @@ +"""Async Query implementation for qwen_code_sdk.""" + +from __future__ import annotations + +import asyncio +import contextlib +from collections.abc import AsyncIterable, Mapping, MutableMapping +from dataclasses import dataclass, replace +from types import TracebackType +from typing import Any, cast +from uuid import uuid4 + +from .errors import AbortError, ControlRequestTimeoutError +from .json_lines import serialize_json_line +from .protocol import ( + CLIControlRequest, + CLIControlResponse, + SDKMessage, + SDKUserMessage, + is_control_cancel, + is_control_request, + is_control_response, + is_sdk_assistant_message, + is_sdk_partial_assistant_message, + is_sdk_result_message, + is_sdk_system_message, + 
is_sdk_user_message, +) +from .transport import ProcessTransport +from .types import ( + CanUseToolContext, + PermissionDenyResult, + QueryOptions, + QueryOptionsDict, +) +from .validation import validate_query_options + +_DONE = object() + + +@dataclass +class _PendingControlRequest: + future: asyncio.Future[dict[str, Any] | None] + cancel_event: asyncio.Event + timeout_handle: asyncio.TimerHandle + + +@dataclass +class _IncomingControlRequest: + task: asyncio.Task[None] + cancel_event: asyncio.Event + + +class Query: + def __init__( + self, + transport: ProcessTransport, + options: QueryOptions, + prompt: str | AsyncIterable[SDKUserMessage], + session_id: str, + ) -> None: + self._transport = transport + self._options = options + self._prompt = prompt + self._single_turn = isinstance(prompt, str) + self._session_id = session_id + self._session_id_locked = bool(options.resume or options.session_id) + + self._message_queue: asyncio.Queue[SDKMessage | Exception | object] = ( + asyncio.Queue() + ) + self._closed = False + self._started = False + self._start_lock = asyncio.Lock() + self._cancel_event = asyncio.Event() + + self._router_task: asyncio.Task[None] | None = None + self._input_task: asyncio.Task[None] | None = None + self._initialize_task: asyncio.Task[None] | None = None + self._first_result_event = asyncio.Event() + self._terminal_event_sent = False + self._exhausted = False + + self._pending_control_requests: dict[str, _PendingControlRequest] = {} + self._incoming_control_requests: dict[str, _IncomingControlRequest] = {} + + async def _ensure_started(self) -> None: + if self._closed: + raise RuntimeError("Query is closed") + if self._started: + return + + async with self._start_lock: + if self._closed: + raise RuntimeError("Query is closed") + if self._started: + return + await self._transport.start() + self._router_task = asyncio.create_task(self._message_router()) + self._initialize_task = asyncio.create_task(self._initialize()) + + if 
self._single_turn: + self._input_task = asyncio.create_task(self._send_single_turn_prompt()) + else: + self._input_task = asyncio.create_task( + self.stream_input(self._prompt) # type: ignore[arg-type] + ) + self._started = True + + async def _initialize(self) -> None: + try: + payload: dict[str, Any] = {"hooks": None} + await self._send_control_request("initialize", payload) + except Exception as exc: + await self._finish_with_error(exc) + + async def _send_single_turn_prompt(self) -> None: + try: + assert isinstance(self._prompt, str) + await self._wait_initialized() + message: SDKUserMessage = { + "type": "user", + "session_id": self._session_id, + "message": { + "role": "user", + "content": self._prompt, + }, + "parent_tool_use_id": None, + } + + await self._write_payload(message) + except Exception as exc: + await self._finish_with_error(exc) + raise + + async def _wait_initialized(self) -> None: + if self._initialize_task is None: + return + await self._initialize_task + + async def _message_router(self) -> None: + try: + async for message in self._transport.read_messages(): + await self._route_message(message) + if self._closed: + break + + if self._closed: + return + + if self._transport.exit_error is not None: + await self._finish_with_error(self._transport.exit_error) + return + + await self._finish() + except Exception as exc: # pragma: no cover - critical propagation path + await self._finish_with_error(exc) + + async def _route_message(self, message: Any) -> None: + self._maybe_update_session_id(message) + + if is_control_request(message): + self._start_incoming_control_request(message) + return + + if is_control_response(message): + self._handle_control_response(message) + return + + if is_control_cancel(message): + self._handle_control_cancel_request(message) + return + + if is_sdk_result_message(message): + self._first_result_event.set() + if self._single_turn: + self._transport.end_input() + await self._message_queue.put(message) + return + + if ( 
+ is_sdk_system_message(message) + or is_sdk_assistant_message(message) + or is_sdk_user_message(message) + or is_sdk_partial_assistant_message(message) + ): + await self._message_queue.put(message) + return + + def _maybe_update_session_id(self, message: Any) -> None: + if self._session_id_locked or not isinstance(message, Mapping): + return + + session_id = message.get("session_id") + if isinstance(session_id, str) and session_id: + self._session_id = session_id + self._session_id_locked = True + + def _start_incoming_control_request(self, request: CLIControlRequest) -> None: + request_id = request["request_id"] + cancel_event = asyncio.Event() + + async def runner() -> None: + try: + await self._handle_control_request(request, cancel_event) + except asyncio.CancelledError: + pass + except Exception as exc: # pragma: no cover - fatal background path + await self._finish_with_error(exc) + finally: + self._incoming_control_requests.pop(request_id, None) + + task = asyncio.create_task(runner()) + self._incoming_control_requests[request_id] = _IncomingControlRequest( + task=task, + cancel_event=cancel_event, + ) + + async def _handle_control_request( + self, + request: CLIControlRequest, + cancel_event: asyncio.Event, + ) -> None: + request_id = request["request_id"] + payload = request["request"] + subtype = payload.get("subtype") + + try: + if subtype == "can_use_tool": + response = await self._handle_permission_request( + cast(MutableMapping[str, Any], payload), + cancel_event, + ) + elif subtype == "mcp_message": + raise RuntimeError("mcp_message is unsupported in python sdk v1") + else: + raise RuntimeError(f"Unknown control request subtype: {subtype}") + + if cancel_event.is_set(): + return + + await self._send_control_response( + request_id, success=True, response=response + ) + except Exception as exc: + if cancel_event.is_set(): + return + await self._send_control_response( + request_id, + success=False, + response=str(exc), + ) + + async def 
_handle_permission_request( + self, + payload: MutableMapping[str, Any], + cancel_event: asyncio.Event, + ) -> dict[str, Any]: + tool_name = str(payload.get("tool_name", "")) + tool_input = payload.get("input") + if not isinstance(tool_input, dict): + tool_input = {} + + if self._options.can_use_tool is None: + return {"behavior": "deny", "message": "Denied"} + + context: CanUseToolContext = { + "cancel_event": cancel_event, + "suggestions": payload.get("permission_suggestions"), + "blocked_path": payload.get("blocked_path"), + } + + try: + result = await asyncio.wait_for( + self._options.can_use_tool(tool_name, tool_input, context), + timeout=self._options.timeout.can_use_tool, + ) + except asyncio.TimeoutError: + return { + "behavior": "deny", + "message": "Permission request timed out", + } + except asyncio.CancelledError: + if cancel_event.is_set(): + raise + return { + "behavior": "deny", + "message": "Permission check failed: callback cancelled", + } + except Exception as exc: + return { + "behavior": "deny", + "message": f"Permission check failed: {exc}", + } + + behavior = result.get("behavior") + if behavior == "allow": + return { + "behavior": "allow", + "updatedInput": result.get("updatedInput", tool_input), + } + + deny_result = cast(PermissionDenyResult, result) + return { + "behavior": "deny", + "message": deny_result.get("message", "Denied"), + **( + {"interrupt": deny_result["interrupt"]} + if "interrupt" in deny_result + else {} + ), + } + + def _handle_control_response(self, response: CLIControlResponse) -> None: + payload = response["response"] + request_id = payload["request_id"] + + pending = self._pending_control_requests.pop(request_id, None) + if pending is None: + return + + pending.timeout_handle.cancel() + + if payload["subtype"] == "success": + if not pending.future.done(): + pending.future.set_result(payload.get("response")) + else: + error = payload.get("error", "Unknown control error") + if isinstance(error, dict): + error_message = 
str(error.get("message", "Unknown control error")) + else: + error_message = str(error) + if not pending.future.done(): + pending.future.set_exception(RuntimeError(error_message)) + + def _handle_control_cancel_request(self, message: Mapping[str, Any]) -> None: + request_id = message.get("request_id") + if not isinstance(request_id, str): + return + + pending = self._pending_control_requests.pop(request_id, None) + if pending is not None: + pending.timeout_handle.cancel() + pending.cancel_event.set() + if not pending.future.done(): + pending.future.set_exception(AbortError("Control request cancelled")) + + incoming = self._incoming_control_requests.get(request_id) + if incoming is None: + return + + incoming.cancel_event.set() + incoming.task.cancel() + + async def _send_control_request( + self, + subtype: str, + data: dict[str, Any] | None = None, + ) -> dict[str, Any] | None: + if self._closed: + raise RuntimeError("Query is closed") + + if subtype != "initialize": + await self._wait_initialized() + + request_id = str(uuid4()) + + loop = asyncio.get_running_loop() + future: asyncio.Future[dict[str, Any] | None] = loop.create_future() + cancel_event = asyncio.Event() + + def on_timeout() -> None: + pending = self._pending_control_requests.pop(request_id, None) + if pending is None: + return + pending.cancel_event.set() + if not pending.future.done(): + pending.future.set_exception( + ControlRequestTimeoutError(f"Control request timeout: {subtype}") + ) + + timeout_handle = loop.call_later( + self._options.timeout.control_request, + on_timeout, + ) + + self._pending_control_requests[request_id] = _PendingControlRequest( + future=future, + cancel_event=cancel_event, + timeout_handle=timeout_handle, + ) + + request_payload: dict[str, Any] = {"subtype": subtype} + if data: + request_payload.update(data) + + payload: CLIControlRequest = { + "type": "control_request", + "request_id": request_id, + "request": request_payload, + } + + await self._write_payload(payload) + 
return await future + + async def _send_control_response( + self, + request_id: str, + *, + success: bool, + response: Any, + ) -> None: + payload: CLIControlResponse + if success: + payload = { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": request_id, + "response": response, + }, + } + else: + payload = { + "type": "control_response", + "response": { + "subtype": "error", + "request_id": request_id, + "error": str(response), + }, + } + + await self._write_payload(payload) + + async def _write_payload(self, payload: Any) -> None: + self._transport.write(serialize_json_line(payload)) + await self._transport.drain() + + async def stream_input(self, messages: AsyncIterable[SDKUserMessage]) -> None: + try: + if self._closed: + raise RuntimeError("Query is closed") + + await self._wait_initialized() + + async for message in messages: + if self._cancel_event.is_set() or self._closed: + break + await self._write_payload(message) + + if not self._single_turn: + try: + await asyncio.wait_for( + self._first_result_event.wait(), + timeout=self._options.timeout.stream_close, + ) + except asyncio.TimeoutError: + pass + + self._transport.end_input() + except Exception as exc: + await self._finish_with_error(exc) + raise + + async def interrupt(self) -> None: + await self._ensure_started() + await self._send_control_request("interrupt") + + async def set_permission_mode(self, mode: str) -> None: + await self._ensure_started() + await self._send_control_request("set_permission_mode", {"mode": mode}) + + async def set_model(self, model: str) -> None: + await self._ensure_started() + await self._send_control_request("set_model", {"model": model}) + + async def supported_commands(self) -> dict[str, Any] | None: + await self._ensure_started() + return await self._send_control_request("supported_commands") + + async def mcp_server_status(self) -> dict[str, Any] | None: + await self._ensure_started() + return await 
self._send_control_request("mcp_server_status") + + @property + def control_request_timeout(self) -> float: + return self._options.timeout.control_request + + def get_session_id(self) -> str: + return self._session_id + + def is_closed(self) -> bool: + return self._closed + + def _fail_pending_control_requests(self, error: Exception) -> None: + for request_id, pending in list(self._pending_control_requests.items()): + pending.timeout_handle.cancel() + pending.cancel_event.set() + if not pending.future.done(): + pending.future.set_exception(error) + self._pending_control_requests.pop(request_id, None) + + async def _cancel_incoming_control_requests(self) -> None: + current_task = asyncio.current_task() + tasks: list[asyncio.Task[None]] = [] + + for incoming in list(self._incoming_control_requests.values()): + incoming.cancel_event.set() + if incoming.task is current_task: + continue + incoming.task.cancel() + tasks.append(incoming.task) + + if tasks: + await asyncio.gather(*tasks, return_exceptions=True) + + async def close(self) -> None: + if self._closed: + return + + self._closed = True + self._cancel_event.set() + + error = RuntimeError("Query is closed") + self._fail_pending_control_requests(error) + await self._cancel_incoming_control_requests() + + await self._transport.close() + + if self._input_task is not None: + self._input_task.cancel() + with contextlib.suppress(asyncio.CancelledError, Exception): + await self._input_task + + if self._router_task is not None: + with contextlib.suppress(Exception): + await self._router_task + + await self._finish() + + async def _finish(self) -> None: + if self._terminal_event_sent: + return + self._terminal_event_sent = True + await self._message_queue.put(_DONE) + + async def _finish_with_error(self, exc: Exception) -> None: + if self._terminal_event_sent: + return + self._closed = True + self._terminal_event_sent = True + self._cancel_event.set() + self._fail_pending_control_requests(exc) + await 
self._cancel_incoming_control_requests() + await self._transport.close() + await self._message_queue.put(exc) + await self._message_queue.put(_DONE) + + def __aiter__(self) -> Query: + return self + + async def __anext__(self) -> SDKMessage: + if self._exhausted: + raise StopAsyncIteration + await self._ensure_started() + item = await self._message_queue.get() + + if item is _DONE: + self._exhausted = True + raise StopAsyncIteration + + if isinstance(item, Exception): + raise item + + return cast(SDKMessage, item) + + async def __aenter__(self) -> Query: + return self + + async def __aexit__( + self, + exc_type: type[BaseException] | None, + exc_val: BaseException | None, + exc_tb: TracebackType | None, + ) -> None: + await self.close() + + +def query( + prompt: str | AsyncIterable[SDKUserMessage], + options: QueryOptions | QueryOptionsDict | Mapping[str, Any] | None = None, +) -> Query: + if isinstance(options, QueryOptions): + parsed_options = replace(options) + else: + parsed_options = QueryOptions.from_mapping(options) + + validate_query_options(parsed_options) + + session_id = parsed_options.resume or parsed_options.session_id + if session_id is None and not parsed_options.continue_session: + session_id = str(uuid4()) + if parsed_options.resume is None and not parsed_options.continue_session: + parsed_options = replace(parsed_options, session_id=session_id) + + transport = ProcessTransport(parsed_options) + return Query(transport, parsed_options, prompt, session_id or "") diff --git a/packages/sdk-python/src/qwen_code_sdk/sync_query.py b/packages/sdk-python/src/qwen_code_sdk/sync_query.py new file mode 100644 index 000000000..c08713dd6 --- /dev/null +++ b/packages/sdk-python/src/qwen_code_sdk/sync_query.py @@ -0,0 +1,217 @@ +"""Synchronous wrapper around the async Query API.""" + +from __future__ import annotations + +import asyncio +import threading +import warnings +from collections.abc import AsyncIterable, AsyncIterator, Iterable, Mapping +from queue 
import Queue
+from typing import Any, cast
+
+from .protocol import SDKMessage, SDKUserMessage
+from .query import Query, query
+from .types import QueryOptions, QueryOptionsDict
+
+_STOP = object()
+_SYNC_TIMEOUT_MARGIN = 5.0
+
+
+class SyncQuery:
+    def __init__(
+        self,
+        prompt: str | Iterable[SDKUserMessage] | AsyncIterable[SDKUserMessage],
+        options: QueryOptions | QueryOptionsDict | Mapping[str, Any] | None = None,
+    ) -> None:
+        self._queue: Queue[SDKMessage | Exception | object] = Queue()
+        self._ready = threading.Event()
+        self._shutdown = threading.Event()
+        self._stop_sent = threading.Event()
+        self._exhausted = False
+        self._query: Query | None = None
+        self._consumer_task: asyncio.Task[None] | None = None
+
+        self._loop = asyncio.new_event_loop()
+        self._thread = threading.Thread(
+            target=self._run_loop,
+            name="qwen-sdk-sync-loop",
+            daemon=True,
+        )
+        self._thread.start()
+
+        if isinstance(prompt, str) or isinstance(prompt, AsyncIterable):
+            source_prompt: str | AsyncIterable[SDKUserMessage] = prompt
+        else:
+            source_prompt = _iterable_to_async(prompt)
+
+        future = asyncio.run_coroutine_threadsafe(
+            self._bootstrap(source_prompt, options),
+            self._loop,
+        )
+        try:
+            future.result()
+        except Exception:
+            self._stop_loop()
+            raise
+
+    def _run_loop(self) -> None:
+        asyncio.set_event_loop(self._loop)
+        self._loop.run_forever()
+
+    async def _bootstrap(
+        self,
+        prompt: str | AsyncIterable[SDKUserMessage],
+        options: QueryOptions | QueryOptionsDict | Mapping[str, Any] | None,
+    ) -> None:
+        self._query = query(prompt=prompt, options=options)
+        self._ready.set()
+        self._consumer_task = asyncio.create_task(self._consume())
+
+    async def _consume(self) -> None:
+        assert self._query is not None
+        try:
+            async for message in self._query:
+                self._queue.put(message)
+        except Exception as exc:
+            self._queue.put(exc)
+        finally:
+            if not self._stop_sent.is_set():
+                self._stop_sent.set()
+                self._queue.put(_STOP)
+
+    def _require_query(self) -> Query:
+        self._ready.wait(timeout=30)
+        if self._query is None:
+            raise RuntimeError("SyncQuery failed to initialize")
+        return self._query
+
+    def __iter__(self) -> SyncQuery:
+        return self
+
+    def __next__(self) -> SDKMessage:
+        if self._exhausted:
+            raise StopIteration
+        item = self._queue.get()
+
+        if item is _STOP:
+            self._exhausted = True
+            raise StopIteration
+
+        if isinstance(item, Exception):
+            raise item
+
+        return cast(SDKMessage, item)
+
+    def __enter__(self) -> SyncQuery:
+        return self
+
+    def __exit__(self, *_args: object) -> None:
+        self.close()
+
+    def interrupt(self) -> None:
+        q = self._require_query()
+        asyncio.run_coroutine_threadsafe(q.interrupt(), self._loop).result(
+            timeout=q.control_request_timeout + _SYNC_TIMEOUT_MARGIN
+        )
+
+    def set_model(self, model: str) -> None:
+        q = self._require_query()
+        asyncio.run_coroutine_threadsafe(q.set_model(model), self._loop).result(
+            timeout=q.control_request_timeout + _SYNC_TIMEOUT_MARGIN
+        )
+
+    def set_permission_mode(self, mode: str) -> None:
+        q = self._require_query()
+        asyncio.run_coroutine_threadsafe(
+            q.set_permission_mode(mode),
+            self._loop,
+        ).result(timeout=q.control_request_timeout + _SYNC_TIMEOUT_MARGIN)
+
+    def supported_commands(self) -> Any:
+        q = self._require_query()
+        return asyncio.run_coroutine_threadsafe(
+            q.supported_commands(),
+            self._loop,
+        ).result(timeout=q.control_request_timeout + _SYNC_TIMEOUT_MARGIN)
+
+    def mcp_server_status(self) -> Any:
+        q = self._require_query()
+        return asyncio.run_coroutine_threadsafe(
+            q.mcp_server_status(),
+            self._loop,
+        ).result(timeout=q.control_request_timeout + _SYNC_TIMEOUT_MARGIN)
+
+    def get_session_id(self) -> str:
+        q = self._require_query()
+        return q.get_session_id()
+
+    def is_closed(self) -> bool:
+        q = self._require_query()
+        return q.is_closed()
+
+    def close(self) -> None:
+        if self._shutdown.is_set():
+            return
+
+        self._shutdown.set()
+
+        q = self._query
+        if q is not None:
+            try:
+                asyncio.run_coroutine_threadsafe(q.close(), self._loop).result(
+                    timeout=30
+                )
+            except Exception:
+                pass
+
+        # Wait for _consume() to put _STOP before stopping the loop,
+        # otherwise consumers blocked on queue.get() will deadlock.
+        if self._consumer_task is not None:
+            try:
+                asyncio.run_coroutine_threadsafe(
+                    self._await_consumer(), self._loop
+                ).result(timeout=5)
+            except Exception:
+                pass
+
+        if not self._stop_sent.is_set():
+            self._stop_sent.set()
+            self._queue.put(_STOP)
+        self._stop_loop()
+
+    async def _await_consumer(self) -> None:
+        if self._consumer_task is not None:
+            try:
+                await asyncio.wait_for(self._consumer_task, timeout=5.0)
+            except Exception:
+                pass
+
+    def _stop_loop(self) -> None:
+        if self._loop.is_running():
+            self._loop.call_soon_threadsafe(self._loop.stop)
+        self._thread.join(timeout=5)
+        if not self._loop.is_closed():
+            self._loop.close()
+
+    def __del__(self) -> None:
+        try:
+            if not self._shutdown.is_set():
+                warnings.warn(
+                    "SyncQuery was not closed. "
+                    "Use 'with SyncQuery(...) as q:' or call q.close() explicitly.",
+                    ResourceWarning,
+                    stacklevel=1,
+                )
+                try:
+                    self.close()
+                except Exception:
+                    pass
+        except AttributeError:
+            pass
+
+
+async def _iterable_to_async(
+    messages: Iterable[SDKUserMessage],
+) -> AsyncIterator[SDKUserMessage]:
+    for message in messages:
+        yield message
diff --git a/packages/sdk-python/src/qwen_code_sdk/transport.py b/packages/sdk-python/src/qwen_code_sdk/transport.py
new file mode 100644
index 000000000..854236494
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/transport.py
@@ -0,0 +1,243 @@
+"""Process transport for qwen CLI stream-json protocol."""
+
+from __future__ import annotations
+
+import asyncio
+import json
+import os
+import subprocess
+import sys
+from collections.abc import AsyncIterator
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Any
+
+from .errors import ProcessExitError
+from .json_lines import parse_json_line
+from .types import QueryOptions
+
+
+@dataclass(frozen=True)
+class SpawnInfo:
+    command: str
+    args: list[str]
+
+
+def prepare_spawn_info(path_to_qwen_executable: str | None) -> SpawnInfo:
+    if path_to_qwen_executable is None:
+        return SpawnInfo(command="qwen", args=[])
+
+    spec = path_to_qwen_executable
+    if os.path.sep not in spec and (
+        os.path.altsep is None or os.path.altsep not in spec
+    ):
+        return SpawnInfo(command=spec, args=[])
+
+    path = Path(spec).expanduser().resolve()
+
+    suffix = path.suffix.lower()
+    if suffix == ".py":
+        return SpawnInfo(command=sys.executable, args=[str(path)])
+    if suffix in {".js", ".mjs", ".cjs"}:
+        return SpawnInfo(command="node", args=[str(path)])
+
+    return SpawnInfo(command=str(path), args=[])
+
+
+class ProcessTransport:
+    def __init__(self, options: QueryOptions):
+        self._options = options
+        self._process: asyncio.subprocess.Process | None = None
+        self._stderr_task: asyncio.Task[None] | None = None
+        self._closed = False
+        self._input_closed = False
+        self._exit_error: Exception | None = None
+
+    @property
+    def exit_error(self) -> Exception | None:
+        return self._exit_error
+
+    @property
+    def is_closed(self) -> bool:
+        return self._closed
+
+    async def start(self) -> None:
+        if self._closed:
+            raise RuntimeError("Transport is closed")
+        if self._process is not None:
+            return
+
+        spawn_info = prepare_spawn_info(self._options.path_to_qwen_executable)
+        args = [*spawn_info.args, *build_cli_arguments(self._options)]
+        stderr_target = (
+            asyncio.subprocess.PIPE
+            if self._options.debug or self._options.stderr is not None
+            else subprocess.DEVNULL
+        )
+
+        self._process = await asyncio.create_subprocess_exec(
+            spawn_info.command,
+            *args,
+            cwd=self._options.cwd,
+            env={**os.environ, **(self._options.env or {})},
+            stdin=asyncio.subprocess.PIPE,
+            stdout=asyncio.subprocess.PIPE,
+            stderr=stderr_target,
+        )
+
+        if self._options.debug or self._options.stderr is not None:
+            self._stderr_task = asyncio.create_task(self._forward_stderr())
+
+    async def _forward_stderr(self) -> None:
+        if self._process is None or self._process.stderr is None:
+            return
+
+        while True:
+            chunk = await self._process.stderr.readline()
+            if not chunk:
+                return
+            text = chunk.decode("utf-8", errors="replace").rstrip("\n")
+            try:
+                if self._options.stderr is not None:
+                    self._options.stderr(text)
+                elif self._options.debug:
+                    print(text, file=sys.stderr)
+            except Exception:
+                print(text, file=sys.stderr)
+
+    def write(self, data: str) -> None:
+        if self._closed:
+            raise RuntimeError("Transport is closed")
+        if self._process is None or self._process.stdin is None:
+            raise RuntimeError("Transport is not started")
+        if self._input_closed:
+            raise RuntimeError("Transport input is already closed")
+
+        self._process.stdin.write(data.encode("utf-8"))
+
+    async def drain(self) -> None:
+        if self._process is None or self._process.stdin is None:
+            return
+        await self._process.stdin.drain()
+
+    def end_input(self) -> None:
+        if self._closed or self._input_closed:
+            return
+        if self._process is None or self._process.stdin is None:
+            return
+        self._process.stdin.close()
+        self._input_closed = True
+
+    async def read_messages(self) -> AsyncIterator[Any]:
+        if self._process is None or self._process.stdout is None:
+            raise RuntimeError("Transport is not started")
+
+        while True:
+            line = await self._process.stdout.readline()
+            if not line:
+                break
+
+            raw = line.decode("utf-8", errors="replace").strip()
+            if not raw:
+                continue
+            try:
+                yield parse_json_line(raw)
+            except json.JSONDecodeError:
+                continue
+
+        await self._finalize_exit()
+
+    async def wait_for_exit(self) -> None:
+        if self._process is None:
+            return
+        await self._finalize_exit()
+
+    async def _finalize_exit(self) -> None:
+        if self._process is None:
+            return
+
+        return_code = self._process.returncode
+        if return_code is None:
+            return_code = await self._process.wait()
+
+        if return_code != 0 and self._exit_error is None:
+            self._exit_error = ProcessExitError(
+                f"CLI process exited with code {return_code}",
+                exit_code=return_code,
+            )
+
+        if self._stderr_task is not None:
+            await self._stderr_task
+            self._stderr_task = None
+
+    async def close(self) -> None:
+        if self._closed:
+            return
+
+        self._closed = True
+
+        if self._process is None:
+            return
+
+        if self._process.stdin is not None and not self._input_closed:
+            self._process.stdin.close()
+            self._input_closed = True
+
+        if self._process.returncode is None:
+            self._process.terminate()
+            try:
+                await asyncio.wait_for(self._process.wait(), timeout=5.0)
+            except asyncio.TimeoutError:
+                self._process.kill()
+                await self._process.wait()
+
+        await self._finalize_exit()
+
+
+def build_cli_arguments(options: QueryOptions) -> list[str]:
+    args: list[str] = [
+        "--input-format",
+        "stream-json",
+        "--output-format",
+        "stream-json",
+        "--channel=SDK",
+    ]
+
+    if options.model:
+        args.extend(["--model", options.model])
+
+    if options.system_prompt:
+        args.extend(["--system-prompt", options.system_prompt])
+
+    if options.append_system_prompt:
+        args.extend(["--append-system-prompt", options.append_system_prompt])
+
+    if options.permission_mode:
+        args.extend(["--approval-mode", options.permission_mode])
+
+    if options.max_session_turns is not None:
+        args.extend(["--max-session-turns", str(options.max_session_turns)])
+
+    if options.core_tools:
+        args.extend(["--core-tools", ",".join(options.core_tools)])
+
+    if options.exclude_tools:
+        args.extend(["--exclude-tools", ",".join(options.exclude_tools)])
+
+    if options.allowed_tools:
+        args.extend(["--allowed-tools", ",".join(options.allowed_tools)])
+
+    if options.auth_type:
+        args.extend(["--auth-type", options.auth_type])
+
+    if options.include_partial_messages:
+        args.append("--include-partial-messages")
+
+    if options.resume:
+        args.extend(["--resume", options.resume])
+    elif options.continue_session:
+        args.append("--continue")
+    elif options.session_id:
+        args.extend(["--session-id", options.session_id])
+
+    return args
diff --git
a/packages/sdk-python/src/qwen_code_sdk/types.py b/packages/sdk-python/src/qwen_code_sdk/types.py
new file mode 100644
index 000000000..3d8ec7203
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/types.py
@@ -0,0 +1,323 @@
+"""Public type definitions for qwen_code_sdk."""
+
+from __future__ import annotations
+
+from collections.abc import Awaitable, Callable, Mapping, MutableMapping
+from dataclasses import dataclass
+from inspect import Parameter, Signature, iscoroutinefunction, signature
+from typing import (
+    Any,
+    Literal,
+    TypeAlias,
+    TypedDict,
+    cast,
+)
+
+from typing_extensions import NotRequired
+
+PermissionMode: TypeAlias = Literal["default", "plan", "auto-edit", "yolo"]
+AuthType: TypeAlias = Literal[
+    "openai",
+    "anthropic",
+    "qwen-oauth",
+    "gemini",
+    "vertex-ai",
+]
+
+
+class PermissionSuggestion(TypedDict):
+    type: Literal["allow", "deny", "modify"]
+    label: str
+    description: NotRequired[str]
+    modifiedInput: NotRequired[Any]
+
+
+class PermissionAllowResult(TypedDict):
+    behavior: Literal["allow"]
+    updatedInput: NotRequired[dict[str, Any]]
+
+
+class PermissionDenyResult(TypedDict):
+    behavior: Literal["deny"]
+    message: NotRequired[str]
+    interrupt: NotRequired[bool]
+
+
+PermissionResult: TypeAlias = PermissionAllowResult | PermissionDenyResult
+
+
+class CanUseToolContext(TypedDict):
+    cancel_event: Any
+    suggestions: list[PermissionSuggestion] | None
+    blocked_path: str | None
+
+
+CanUseTool: TypeAlias = Callable[
+    [str, dict[str, Any], CanUseToolContext],
+    Awaitable[PermissionResult],
+]
+
+
+class TimeoutOptionsDict(TypedDict, total=False):
+    """Timeout configuration. All values are in seconds."""
+
+    can_use_tool: float
+    control_request: float
+    stream_close: float
+
+
+@dataclass(frozen=True)
+class TimeoutOptions:
+    can_use_tool: float = 60.0
+    control_request: float = 60.0
+    stream_close: float = 60.0
+
+    @classmethod
+    def from_mapping(cls, value: Mapping[str, Any] | None) -> TimeoutOptions:
+        if value is None:
+            return cls()
+
+        def _read(name: str, default: float) -> float:
+            raw = value.get(name, default)
+            if isinstance(raw, bool) or not isinstance(raw, (int, float)):
+                raise TypeError(f"timeout.{name} must be a positive number")
+            if raw <= 0:
+                raise ValueError(f"timeout.{name} must be a positive number")
+            return float(raw)
+
+        return cls(
+            can_use_tool=_read("can_use_tool", 60.0),
+            control_request=_read("control_request", 60.0),
+            stream_close=_read("stream_close", 60.0),
+        )
+
+
+class QueryOptionsDict(TypedDict, total=False):
+    cwd: str
+    model: str
+    path_to_qwen_executable: str
+    permission_mode: PermissionMode
+    can_use_tool: CanUseTool
+    env: dict[str, str]
+    system_prompt: str
+    append_system_prompt: str
+    debug: bool
+    max_session_turns: int
+    core_tools: list[str]
+    exclude_tools: list[str]
+    allowed_tools: list[str]
+    auth_type: AuthType
+    include_partial_messages: bool
+    resume: str
+    continue_session: bool
+    session_id: str
+    timeout: TimeoutOptionsDict
+    mcp_servers: dict[str, dict[str, Any]]
+    stderr: Callable[[str], None]
+
+
+@dataclass
+class QueryOptions:
+    cwd: str | None = None
+    model: str | None = None
+    path_to_qwen_executable: str | None = None
+    permission_mode: PermissionMode | None = None
+    can_use_tool: CanUseTool | None = None
+    env: dict[str, str] | None = None
+    system_prompt: str | None = None
+    append_system_prompt: str | None = None
+    debug: bool = False
+    max_session_turns: int | None = None
+    core_tools: list[str] | None = None
+    exclude_tools: list[str] | None = None
+    allowed_tools: list[str] | None = None
+    auth_type: AuthType | None = None
+    include_partial_messages: bool = False
+    resume: str | None = None
+    continue_session: bool = False
+    session_id: str | None = None
+    timeout: TimeoutOptions = TimeoutOptions()
+    mcp_servers: dict[str, dict[str, Any]] | None = None
+    stderr: Callable[[str], None] | None = None
+
+    @classmethod
+    def from_mapping(cls, value: Mapping[str, Any] | None) -> QueryOptions:
+        if value is None:
+            return cls()
+
+        data: MutableMapping[str, Any] = dict(value)
+        timeout = TimeoutOptions.from_mapping(data.get("timeout"))
+
+        return cls(
+            cwd=_as_optional_str(data, "cwd"),
+            model=_as_optional_str(data, "model"),
+            path_to_qwen_executable=_as_optional_str(data, "path_to_qwen_executable"),
+            permission_mode=cast(
+                PermissionMode | None,
+                _as_optional_str(data, "permission_mode"),
+            ),
+            can_use_tool=cast(
+                CanUseTool | None,
+                _as_optional_callable(data, "can_use_tool"),
+            ),
+            env=_as_optional_str_dict(data, "env"),
+            system_prompt=_as_optional_str(data, "system_prompt"),
+            append_system_prompt=_as_optional_str(data, "append_system_prompt"),
+            debug=_as_optional_bool(data, "debug") or False,
+            max_session_turns=_as_optional_int(data, "max_session_turns"),
+            core_tools=_as_optional_str_list(data, "core_tools"),
+            exclude_tools=_as_optional_str_list(data, "exclude_tools"),
+            allowed_tools=_as_optional_str_list(data, "allowed_tools"),
+            auth_type=cast(
+                AuthType | None,
+                _as_optional_str(data, "auth_type"),
+            ),
+            include_partial_messages=_as_optional_bool(data, "include_partial_messages")
+            or False,
+            resume=_as_optional_str(data, "resume"),
+            continue_session=_as_optional_bool(data, "continue_session") or False,
+            session_id=_as_optional_str(data, "session_id"),
+            timeout=timeout,
+            mcp_servers=_as_optional_nested_dict(data, "mcp_servers"),
+            stderr=cast(
+                Callable[[str], None] | None,
+                _as_optional_callable(data, "stderr"),
+            ),
+        )
+
+
+def _as_optional_str(data: Mapping[str, Any], key: str) -> str | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not isinstance(raw, str):
+        raise TypeError(f"{key} must be a string")
+    return raw
+
+
+def _as_optional_int(data: Mapping[str, Any], key: str) -> int | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if isinstance(raw, bool) or not isinstance(raw, int):
+        raise TypeError(f"{key} must be an integer")
+    return int(raw)
+
+
+def _as_optional_bool(data: Mapping[str, Any], key: str) -> bool | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not isinstance(raw, bool):
+        raise TypeError(f"{key} must be a boolean")
+    return raw
+
+
+def _as_optional_callable(
+    data: Mapping[str, Any], key: str
+) -> Callable[..., Any] | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not callable(raw):
+        raise TypeError(f"{key} must be callable")
+    if key == "can_use_tool":
+        _validate_can_use_tool_callable(raw, error_type=TypeError)
+    elif key == "stderr":
+        _validate_stderr_callable(raw, error_type=TypeError)
+    return cast(Callable[..., Any], raw)
+
+
+def _validate_can_use_tool_callable(value: object, error_type: type[Exception]) -> None:
+    if not callable(value):
+        raise error_type("can_use_tool must be callable")
+
+    if not iscoroutinefunction(value):
+        raise error_type("can_use_tool must be an async callable")
+
+    try:
+        sig = signature(value)
+    except (TypeError, ValueError):
+        return
+
+    if not _supports_argument_count(sig, 3):
+        raise error_type("can_use_tool must accept exactly 3 positional arguments")
+
+
+def _validate_stderr_callable(value: object, error_type: type[Exception]) -> None:
+    if not callable(value):
+        raise error_type("stderr must be callable")
+
+    try:
+        sig = signature(value)
+    except (TypeError, ValueError):
+        return
+
+    if not _supports_argument_count(sig, 1):
+        raise error_type("stderr must accept exactly 1 positional argument")
+
+
+def _supports_argument_count(sig: Signature, count: int) -> bool:
+    params = list(sig.parameters.values())
+    positional_params = [
+        param
+        for param in params
+        if param.kind
+        in (
+            Parameter.POSITIONAL_ONLY,
+            Parameter.POSITIONAL_OR_KEYWORD,
+        )
+    ]
+    required_positional = [
+        param for param in positional_params if param.default is Parameter.empty
+    ]
+    has_var_positional = any(param.kind is Parameter.VAR_POSITIONAL for param in params)
+
+    if len(required_positional) > count:
+        return False
+    if has_var_positional:
+        return True
+    return len(positional_params) >= count
+
+
+def _as_optional_str_dict(data: Mapping[str, Any], key: str) -> dict[str, str] | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not isinstance(raw, Mapping):
+        raise TypeError(f"{key} must be a mapping of string to string")
+
+    parsed: dict[str, str] = {}
+    for k, v in raw.items():
+        if not isinstance(k, str) or not isinstance(v, str):
+            raise TypeError(f"{key} must be a mapping of string to string")
+        parsed[k] = v
+    return parsed
+
+
+def _as_optional_str_list(data: Mapping[str, Any], key: str) -> list[str] | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not isinstance(raw, list):
+        raise TypeError(f"{key} must be a list of strings")
+    if any(not isinstance(item, str) for item in raw):
+        raise TypeError(f"{key} must be a list of strings")
+    return list(raw)
+
+
+def _as_optional_nested_dict(
+    data: Mapping[str, Any], key: str
+) -> dict[str, dict[str, Any]] | None:
+    raw = data.get(key)
+    if raw is None:
+        return None
+    if not isinstance(raw, Mapping):
+        raise TypeError(f"{key} must be a mapping")
+
+    parsed: dict[str, dict[str, Any]] = {}
+    for k, v in raw.items():
+        if not isinstance(k, str) or not isinstance(v, Mapping):
+            raise TypeError(f"{key} must be a mapping of string to mapping")
+        parsed[k] = dict(v)
+    return parsed
diff --git a/packages/sdk-python/src/qwen_code_sdk/validation.py b/packages/sdk-python/src/qwen_code_sdk/validation.py
new file mode 100644
index 000000000..f19fe4cfb
--- /dev/null
+++ b/packages/sdk-python/src/qwen_code_sdk/validation.py
@@ -0,0 +1,94 @@
+"""Validation helpers for query options."""
+
+from __future__ import annotations
+
+from collections.abc import Callable
+from uuid import RFC_4122, UUID
+
+from .errors import ValidationError
+from .types import (
+    QueryOptions,
+    _validate_can_use_tool_callable,
+    _validate_stderr_callable,
+)
+
+_VALID_PERMISSION_MODES = {"default", "plan", "auto-edit", "yolo"}
+_VALID_AUTH_TYPES = {"openai", "anthropic", "qwen-oauth", "gemini", "vertex-ai"}
+
+
+def validate_query_options(options: QueryOptions) -> None:
+    if (
+        options.permission_mode
+        and options.permission_mode not in _VALID_PERMISSION_MODES
+    ):
+        raise ValidationError(
+            f"Invalid permission_mode: {options.permission_mode!r}. "
+            "Expected one of: default, plan, auto-edit, yolo."
+        )
+
+    if options.auth_type and options.auth_type not in _VALID_AUTH_TYPES:
+        raise ValidationError(
+            f"Invalid auth_type: {options.auth_type!r}. "
+            "Expected one of: openai, anthropic, qwen-oauth, gemini, vertex-ai."
+        )
+
+    _validate_optional_callable(options.can_use_tool, _validate_can_use_tool_callable)
+    _validate_optional_callable(options.stderr, _validate_stderr_callable)
+
+    if options.resume and options.continue_session:
+        raise ValidationError(
+            "Cannot use resume together with continue_session. "
+            "Use continue_session for latest session "
+            "or resume for a specific session ID."
+        )
+
+    if options.session_id and (options.resume or options.continue_session):
+        raise ValidationError(
+            "Cannot use session_id with resume or continue_session. "
+            "session_id starts a new session, "
+            "resume/continue_session restore existing sessions."
+        )
+
+    if options.session_id:
+        validate_session_id(options.session_id, "session_id")
+
+    if options.resume:
+        validate_session_id(options.resume, "resume")
+
+    if options.max_session_turns is not None and options.max_session_turns < -1:
+        raise ValidationError("max_session_turns must be -1 or a non-negative integer")
+
+    if (
+        options.path_to_qwen_executable is not None
+        and not options.path_to_qwen_executable.strip()
+    ):
+        raise ValidationError("path_to_qwen_executable cannot be empty")
+
+    if options.mcp_servers:
+        raise ValidationError(
+            "mcp_servers is not supported in Python SDK v1. "
+            "Remove the mcp_servers option or use the TypeScript SDK."
+        )
+
+
+def _validate_optional_callable(
+    value: object,
+    validator: Callable[[object, type[ValidationError]], None],
+) -> None:
+    if value is None:
+        return
+    validator(value, ValidationError)
+
+
+def validate_session_id(value: str, param_name: str) -> None:
+    try:
+        parsed = UUID(value)
+    except ValueError as exc:
+        raise ValidationError(
+            f"Invalid {param_name}: {value!r}. Must be a valid UUID."
+        ) from exc
+
+    if parsed.variant != RFC_4122:
+        raise ValidationError(
+            f"Invalid {param_name}: {value!r}. UUID variant must be RFC 4122."
+        )
diff --git a/packages/sdk-python/tests/integration/conftest.py b/packages/sdk-python/tests/integration/conftest.py
new file mode 100644
index 000000000..f73a5e6ac
--- /dev/null
+++ b/packages/sdk-python/tests/integration/conftest.py
@@ -0,0 +1,400 @@
+from __future__ import annotations
+
+import os
+import stat
+import textwrap
+from pathlib import Path
+
+import pytest
+
+
+@pytest.fixture()
+def fake_qwen_path(tmp_path: Path) -> str:
+    script_path = tmp_path / "fake_qwen.py"
+    script_path.write_text(
+        textwrap.dedent(
+            """
+            #!/usr/bin/env python3
+            import argparse
+            import json
+            import sys
+            import uuid
+
+
+            def send(message):
+                sys.stdout.write(json.dumps(message, separators=(",", ":")) + "\\n")
+                sys.stdout.flush()
+
+
+            def parse_user_content(message):
+                payload = message.get("message", {})
+                content = payload.get("content", "")
+                if isinstance(content, str):
+                    return content
+                if isinstance(content, list):
+                    text_parts = []
+                    for block in content:
+                        if isinstance(block, dict) and block.get("type") == "text":
+                            text_parts.append(str(block.get("text", "")))
+                    return " ".join(text_parts)
+                return str(content)
+
+
+            def build_system_message():
+                return {
+                    "type": "system",
+                    "subtype": "init",
+                    "uuid": session_id,
+                    "session_id": session_id,
+                    "cwd": ".",
+                    "tools": ["Read", "Edit", "Bash"],
+                    "mcp_servers": [],
+                    "model": state["model"],
+                    "permission_mode": state["permission_mode"],
+                    "qwen_code_version": "fake-1.0.0",
+                    "capabilities": {
+                        "canSetModel": True,
+                        "canSetPermissionMode": True,
+                    },
+                }
+
+
+            def build_assistant_message(text):
+                return {
+                    "type": "assistant",
+                    "uuid": str(uuid.uuid4()),
+                    "session_id": session_id,
+                    "message": {
+                        "id": str(uuid.uuid4()),
+                        "type": "message",
+                        "role": "assistant",
+                        "model": state["model"],
+                        "content": [
+                            {
+                                "type": "text",
+                                "text": text,
+                            }
+                        ],
+                        "usage": {
+                            "input_tokens": 1,
+                            "output_tokens": 1,
+                        },
+                    },
+                    "parent_tool_use_id": None,
+                }
+
+
+            def build_result_message(result_text):
+                return {
+                    "type": "result",
+                    "subtype": "success",
+                    "uuid": str(uuid.uuid4()),
+                    "session_id": session_id,
+                    "is_error": False,
+                    "duration_ms": 5,
+                    "duration_api_ms": 1,
+                    "num_turns": 1,
+                    "result": result_text,
+                    "usage": {
+                        "input_tokens": 1,
+                        "output_tokens": 1,
+                    },
+                    "permission_denials": [],
+                }
+
+
+            parser = argparse.ArgumentParser(add_help=False)
+            parser.add_argument("--model")
+            parser.add_argument("--approval-mode")
+            parser.add_argument("--include-partial-messages", action="store_true")
+            parser.add_argument("--session-id")
+            parser.add_argument("--resume")
+            parser.add_argument(
+                "--continue",
+                dest="continue_session",
+                action="store_true",
+            )
+            args, _ = parser.parse_known_args()
+
+            session_id = (
+                args.resume
+                or args.session_id
+                or (
+                    "aaaaaaaa-bbbb-4ccc-8ddd-eeeeeeeeeeee"
+                    if args.continue_session
+                    else str(uuid.uuid4())
+                )
+            )
+            state = {
+                "model": args.model or "coder-model",
+                "permission_mode": args.approval_mode or "default",
+                "include_partial": bool(args.include_partial_messages),
+            }
+
+            pending_permission = None
+            pending_unknown_control = None
+
+            for line in sys.stdin:
+                line = line.strip()
+                if not line:
+                    continue
+                message = json.loads(line)
+                msg_type = message.get("type")
+
+                if msg_type == "control_request":
+                    request_id = message["request_id"]
+                    request = message["request"]
+                    subtype = request.get("subtype")
+
+                    if subtype == "initialize":
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {},
+                                },
+                            }
+                        )
+                        send(build_system_message())
+                    elif subtype == "set_model":
+                        state["model"] = request["model"]
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {},
+                                },
+                            }
+                        )
+                        send(build_system_message())
+                    elif subtype == "set_permission_mode":
+                        state["permission_mode"] = request["mode"]
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {},
+                                },
+                            }
+                        )
+                        send(build_system_message())
+                    elif subtype == "interrupt":
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {},
+                                },
+                            }
+                        )
+                    elif subtype == "supported_commands":
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {
+                                        "commands": [
+                                            "initialize",
+                                            "interrupt",
+                                            "set_model",
+                                            "set_permission_mode",
+                                        ]
+                                    },
+                                },
+                            }
+                        )
+                    elif subtype == "mcp_server_status":
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "success",
+                                    "request_id": request_id,
+                                    "response": {"servers": []},
+                                },
+                            }
+                        )
+                    else:
+                        send(
+                            {
+                                "type": "control_response",
+                                "response": {
+                                    "subtype": "error",
+                                    "request_id": request_id,
+                                    "error": f"unsupported request: {subtype}",
+                                },
+                            }
+                        )
+
+                elif msg_type == "user":
+                    prompt = parse_user_content(message)
+
+                    if "exit nonzero" in prompt:
+                        sys.exit(9)
+
+                    if "request unknown control" in prompt:
+                        request_id = str(uuid.uuid4())
+                        pending_unknown_control = {
+                            "request_id": request_id,
+                            "prompt": prompt,
+                        }
+                        send(
+                            {
+                                "type": "control_request",
+                                "request_id": request_id,
+                                "request": {
+                                    "subtype": "something_new",
+                                    "payload": {},
+                                },
+                            }
+                        )
+                        continue
+
+                    if "use tool" in prompt or "create file" in prompt:
+                        tool_use_id = str(uuid.uuid4())
+                        send(
+                            {
+                                "type": "assistant",
+                                "uuid": str(uuid.uuid4()),
+                                "session_id": session_id,
+                                "message": {
+                                    "id": str(uuid.uuid4()),
+                                    "type": "message",
+                                    "role": "assistant",
+                                    "model": state["model"],
+                                    "content": [
+                                        {
+                                            "type": "tool_use",
+                                            "id": tool_use_id,
+                                            "name": "write_file",
+                                            "input": {
+                                                "path": "demo.txt",
+                                                "content": "hello",
+                                            },
+                                        }
+                                    ],
+                                    "usage": {"input_tokens": 1, "output_tokens": 1},
+                                },
+                                "parent_tool_use_id": None,
+                            }
+                        )
+                        request_id = str(uuid.uuid4())
+                        pending_permission = {
+                            "request_id": request_id,
+                            "tool_use_id": tool_use_id,
+                            "prompt": prompt,
+                        }
+                        send(
+                            {
+                                "type": "control_request",
+                                "request_id": request_id,
+                                "request": {
+                                    "subtype": "can_use_tool",
+                                    "tool_name": "write_file",
+                                    "tool_use_id": tool_use_id,
+                                    "input": {"path": "demo.txt", "content": "hello"},
+                                    "permission_suggestions": [
+                                        {"type": "allow", "label": "Allow write"}
+                                    ],
+                                    "blocked_path": None,
+                                },
+                            }
+                        )
+                        continue
+
+                    if state["include_partial"]:
+                        send(
+                            {
+                                "type": "stream_event",
+                                "uuid": str(uuid.uuid4()),
+                                "session_id": session_id,
+                                "event": {
+                                    "type": "content_block_delta",
+                                    "index": 0,
+                                    "delta": {"type": "text_delta", "text": "partial"},
+                                },
+                                "parent_tool_use_id": None,
+                            }
+                        )
+
+                    send(build_assistant_message(f"Echo: {prompt}"))
+                    send(build_result_message(f"done: {prompt}"))
+
+                elif msg_type == "control_response":
+                    payload = message.get("response", {})
+                    request_id = payload.get("request_id")
+
+                    if (
+                        pending_unknown_control
+                        and request_id == pending_unknown_control["request_id"]
+                    ):
+                        if payload.get("subtype") != "error":
+                            sys.exit(3)
+                        prompt = pending_unknown_control["prompt"]
+                        pending_unknown_control = None
+                        send(
+                            build_assistant_message(
+                                f"Unknown control handled for: {prompt}"
+                            )
+                        )
+                        send(build_result_message(f"unknown-control: {prompt}"))
+                        continue
+
+                    if (
+                        pending_permission
+                        and request_id == pending_permission["request_id"]
+                    ):
+                        prompt = pending_permission["prompt"]
+                        tool_use_id = pending_permission["tool_use_id"]
+                        pending_permission = None
+
+                        behavior = "deny"
+                        if payload.get("subtype") == "success":
+                            response_payload = payload.get("response") or {}
+                            behavior = response_payload.get("behavior", "deny")
+
+                        is_allowed = behavior == "allow"
+                        send(
+                            {
+                                "type": "user",
+                                "session_id": session_id,
+                                "message": {
+                                    "role": "user",
+                                    "content": [
+                                        {
+                                            "type": "tool_result",
+                                            "tool_use_id": tool_use_id,
+                                            "is_error": not is_allowed,
+                                            "content": "ok" if is_allowed else "denied",
+                                        }
+                                    ],
+                                },
+                                "parent_tool_use_id": tool_use_id,
+                            }
+                        )
+                        send(build_assistant_message(f"tool handled: {prompt}"))
+                        send(build_result_message(f"tool-result: {prompt}"))
+                        continue
+            """
+        ).strip()
+        + "\n",
+        encoding="utf-8",
+    )
+    script_path.chmod(script_path.stat().st_mode | stat.S_IEXEC)
+    return str(script_path)
+
+
+@pytest.fixture(autouse=True)
+def disable_history_expansion() -> None:
+    # Ensure a deterministic, UTF-8 environment for the spawned fake CLI.
+    os.environ.setdefault("PYTHONUTF8", "1")
diff --git a/packages/sdk-python/tests/integration/test_async_query.py b/packages/sdk-python/tests/integration/test_async_query.py
new file mode 100644
index 000000000..3aca1e807
--- /dev/null
+++ b/packages/sdk-python/tests/integration/test_async_query.py
@@ -0,0 +1,276 @@
+from __future__ import annotations
+
+import asyncio
+from collections.abc import AsyncIterator, Callable
+from typing import Any
+
+import pytest
+from qwen_code_sdk import (
+    ProcessExitError,
+    SDKUserMessage,
+    is_sdk_assistant_message,
+    is_sdk_partial_assistant_message,
+    is_sdk_result_message,
+    is_sdk_system_message,
+    is_sdk_user_message,
+    query,
+)
+
+CONTINUED_SESSION_ID = "aaaaaaaa-bbbb-4ccc-8ddd-eeeeeeeeeeee"
+VALID_UUID = "123e4567-e89b-12d3-a456-426614174000"
+RESUME_UUID = "223e4567-e89b-12d3-a456-426614174000"
+
+
+async def _collect_messages(result: Any) -> list[dict[str, Any]]:
+    messages: list[dict[str, Any]] = []
+    async for message in result:
+        messages.append(message)
+    return messages
+
+
+async def _wait_for(predicate: Callable[[], bool], timeout: float = 2.0) -> None:
+    loop = asyncio.get_running_loop()
+    deadline = loop.time() + timeout
+    while loop.time() < deadline:
+        if predicate():
+            return
+        await asyncio.sleep(0.01)
+    raise AssertionError("timed out waiting for expected SDK state")
+
+
+def _tool_result_error_flag(message: dict[str, Any]) -> bool:
+    content = message["message"]["content"]
+    assert isinstance(content, list)
+    return bool(content[0]["is_error"])
+
+
+@pytest.mark.asyncio
+async def test_single_turn_query(fake_qwen_path: str) -> None:
+    result = query(
+        "hello world",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+        },
+    )
+    messages = await _collect_messages(result)
+
+    assistant = next(
+        message for message in messages if is_sdk_assistant_message(message)
+    )
+    final = next(message for message in messages if is_sdk_result_message(message))
+
+    assert assistant["message"]["content"][0]["text"] == "Echo: hello world"
+    assert final["result"] == "done: hello world"
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_include_partial_messages(fake_qwen_path: str) -> None:
+    result = query(
+        "stream partial",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "include_partial_messages": True,
+        },
+    )
+    messages = await _collect_messages(result)
+
+    partial = next(
+        message for message in messages if is_sdk_partial_assistant_message(message)
+    )
+    assert partial["event"]["type"] == "content_block_delta"
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_default_permission_callback_denies_tool_use(fake_qwen_path: str) -> None:
+    result = query(
+        "use tool now",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+        },
+    )
+    messages = await _collect_messages(result)
+
+    tool_result = next(
+        message
+        for message in messages
+        if is_sdk_user_message(message)
+        and isinstance(message["message"]["content"], list)
+    )
+    assert _tool_result_error_flag(tool_result) is True
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_permission_callback_can_allow_tool_use(fake_qwen_path: str) -> None:
+    async def can_use_tool(
+        tool_name: str,
+        tool_input: dict[str, Any],
+        context: dict[str, Any],
+    ) -> dict[str, Any]:
+        assert tool_name == "write_file"
+        assert tool_input["path"] == "demo.txt"
+        assert context["suggestions"][0]["type"] == "allow"
+        return {"behavior": "allow", "updatedInput": tool_input}
+
+    result = query(
+        "create file with use tool",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "can_use_tool": can_use_tool,
+        },
+    )
+    messages = await _collect_messages(result)
+
+    tool_result = next(
+        message
+        for message in messages
+        if is_sdk_user_message(message)
+        and isinstance(message["message"]["content"], list)
+    )
+    assert _tool_result_error_flag(tool_result) is False
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_unknown_control_requests_are_rejected(fake_qwen_path: str) -> None:
+    result = query(
+        "request unknown control",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+        },
+    )
+    messages = await _collect_messages(result)
+
+    final = next(message for message in messages if is_sdk_result_message(message))
+    assert final["result"] == "unknown-control: request unknown control"
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_dynamic_controls_and_status(fake_qwen_path: str) -> None:
+    release_input = asyncio.Event()
+
+    async def prompts() -> AsyncIterator[SDKUserMessage]:
+        yield {
+            "type": "user",
+            "session_id": VALID_UUID,
+            "message": {
+                "role": "user",
+                "content": "first turn",
+            },
+            "parent_tool_use_id": None,
+        }
+        await release_input.wait()
+
+    result = query(
+        prompts(),
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "session_id": VALID_UUID,
+        },
+    )
+
+    messages: list[dict[str, Any]] = []
+
+    async def consume() -> list[dict[str, Any]]:
+        async for message in result:
+            messages.append(message)
+        return messages
+
+    collector = asyncio.create_task(consume())
+    await _wait_for(lambda: any(is_sdk_result_message(message) for message in messages))
+
+    assert await result.supported_commands() == {
+        "commands": [
+            "initialize",
+            "interrupt",
+            "set_model",
+            "set_permission_mode",
+        ]
+    }
+    assert await result.mcp_server_status() == {"servers": []}
+
+    await result.set_model("new-model")
+    await result.set_permission_mode("plan")
+    release_input.set()
+    await collector
+
+    system_messages = [
+        message for message in messages if is_sdk_system_message(message)
+    ]
+    assert any(message["model"] == "new-model" for message in system_messages)
+    assert any(message["permission_mode"] == "plan" for message in system_messages)
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_session_id_resume_and_continue(fake_qwen_path: str) -> None:
+    explicit = query(
+        "hello explicit",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "session_id": VALID_UUID,
+        },
+    )
+    explicit_messages = await _collect_messages(explicit)
+    assert explicit.get_session_id() == VALID_UUID
+    assert all(message["session_id"] == VALID_UUID for message in explicit_messages)
+    await explicit.close()
+
+    resumed = query(
+        "hello resume",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "resume": RESUME_UUID,
+        },
+    )
+    resumed_messages = await _collect_messages(resumed)
+    assert resumed.get_session_id() == RESUME_UUID
+    assert all(message["session_id"] == RESUME_UUID for message in resumed_messages)
+    await resumed.close()
+
+    continued = query(
+        "hello continue",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+            "continue_session": True,
+        },
+    )
+    continued_messages = await _collect_messages(continued)
+    assert continued.get_session_id() == CONTINUED_SESSION_ID
+    assert any(
+        message["session_id"] == CONTINUED_SESSION_ID for message in continued_messages
+    )
+    await continued.close()
+
+
+@pytest.mark.asyncio
+async def test_non_zero_process_exit_is_propagated(fake_qwen_path: str) -> None:
+    result = query(
+        "please exit nonzero",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+        },
+    )
+
+    with pytest.raises(ProcessExitError, match="code 9"):
+        await _collect_messages(result)
+
+    await result.close()
+
+
+@pytest.mark.asyncio
+async def test_async_context_manager(fake_qwen_path: str) -> None:
+    async with query(
+        "hello context",
+        {
+            "path_to_qwen_executable": fake_qwen_path,
+        },
+    ) as result:
+        messages = await _collect_messages(result)
+
+    assert result.is_closed()
+    final = next(m for m in messages if is_sdk_result_message(m))
+    assert
final["result"] == "done: hello context" diff --git a/packages/sdk-python/tests/integration/test_sync_query.py b/packages/sdk-python/tests/integration/test_sync_query.py new file mode 100644 index 000000000..6f129ccbf --- /dev/null +++ b/packages/sdk-python/tests/integration/test_sync_query.py @@ -0,0 +1,82 @@ +from __future__ import annotations + +import threading +import time + +import pytest +import qwen_code_sdk.sync_query as sync_query_module +from qwen_code_sdk import is_sdk_result_message, query_sync +from qwen_code_sdk.sync_query import SyncQuery + + +def test_sync_query_single_turn(fake_qwen_path: str) -> None: + result = query_sync( + "hello sync", + { + "path_to_qwen_executable": fake_qwen_path, + }, + ) + + commands = result.supported_commands() + messages = list(result) + + assert commands["commands"][0] == "initialize" + assert any( + is_sdk_result_message(message) and message["result"] == "done: hello sync" + for message in messages + ) + + result.close() + # close() must be idempotent; a second call is a no-op and must not raise. + result.close() + + +def test_sync_query_bootstrap_failure_cleans_up_loop_thread( + monkeypatch: pytest.MonkeyPatch, +) -> None: + def raising_query(*args: object, **kwargs: object) -> object: + raise RuntimeError("bootstrap failed") + + monkeypatch.setattr(sync_query_module, "query", raising_query) + + baseline_threads = { + thread.ident + for thread in threading.enumerate() + if thread.name == "qwen-sdk-sync-loop" + } + + with pytest.raises(RuntimeError, match="bootstrap failed"): + SyncQuery("hello") + + deadline = time.time() + 1.0 + while time.time() < deadline: + active_threads = { + thread.ident + for thread in threading.enumerate() + if thread.name == "qwen-sdk-sync-loop" + } + if active_threads == baseline_threads: + break + time.sleep(0.01) + + active_threads = { + thread.ident + for thread in threading.enumerate() + if thread.name == "qwen-sdk-sync-loop" + } + assert active_threads == baseline_threads + + +def test_sync_query_context_manager(fake_qwen_path: str) -> None: + with 
query_sync( + "hello context", + { + "path_to_qwen_executable": fake_qwen_path, + }, + ) as result: + messages = list(result) + assert any( + is_sdk_result_message(m) and m["result"] == "done: hello context" + for m in messages + ) + + assert result.is_closed() diff --git a/packages/sdk-python/tests/unit/test_query_core.py b/packages/sdk-python/tests/unit/test_query_core.py new file mode 100644 index 000000000..0dd8f3f62 --- /dev/null +++ b/packages/sdk-python/tests/unit/test_query_core.py @@ -0,0 +1,612 @@ +from __future__ import annotations + +import asyncio +from collections.abc import Callable +from typing import Any, cast + +import pytest +from qwen_code_sdk.errors import AbortError, ControlRequestTimeoutError +from qwen_code_sdk.json_lines import parse_json_line +from qwen_code_sdk.query import Query +from qwen_code_sdk.types import QueryOptions, TimeoutOptions + +VALID_UUID = "123e4567-e89b-12d3-a456-426614174000" +_EOF = object() + + +class FakeTransport: + def __init__(self) -> None: + self.writes: list[dict[str, Any]] = [] + self.exit_error: Exception | None = None + self.closed = False + self.close_calls = 0 + self.input_closed = False + self._queue: asyncio.Queue[dict[str, Any] | object] = asyncio.Queue() + + async def start(self) -> None: + return None + + def write(self, data: str) -> None: + self.writes.append(parse_json_line(data)) + + async def drain(self) -> None: + return None + + def end_input(self) -> None: + self.input_closed = True + + async def read_messages(self): # type: ignore[no-untyped-def] + while True: + item = await self._queue.get() + if item is _EOF: + break + yield item + + async def close(self) -> None: + self.closed = True + self.close_calls += 1 + self.input_closed = True + self._queue.put_nowait(_EOF) + + def push(self, payload: dict[str, Any]) -> None: + self._queue.put_nowait(payload) + + +async def _wait_for(predicate: Callable[[], bool], timeout: float = 1.0) -> None: + loop = asyncio.get_running_loop() + deadline = 
loop.time() + timeout + while loop.time() < deadline: + if predicate(): + return + await asyncio.sleep(0.01) + raise AssertionError("timed out waiting for test condition") + + +async def _wait_for_request( + transport: FakeTransport, + subtype: str, + timeout: float = 1.0, +) -> dict[str, Any]: + await _wait_for( + lambda: any( + payload.get("type") == "control_request" + and payload.get("request", {}).get("subtype") == subtype + for payload in transport.writes + ), + timeout=timeout, + ) + for payload in transport.writes: + if ( + payload.get("type") == "control_request" + and payload.get("request", {}).get("subtype") == subtype + ): + return payload + raise AssertionError(f"missing control request: {subtype}") + + +async def _wait_for_control_response( + transport: FakeTransport, + request_id: str, + timeout: float = 1.0, +) -> dict[str, Any]: + await _wait_for( + lambda: any( + payload.get("type") == "control_response" + and payload.get("response", {}).get("request_id") == request_id + for payload in transport.writes + ), + timeout=timeout, + ) + for payload in transport.writes: + if ( + payload.get("type") == "control_response" + and payload.get("response", {}).get("request_id") == request_id + ): + return payload + raise AssertionError(f"missing control response: {request_id}") + + +async def _start_query(transport: FakeTransport) -> Query: + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + timeout=TimeoutOptions( + can_use_tool=0.05, + control_request=0.05, + stream_close=0.05, + ) + ), + prompt="hello", + session_id=VALID_UUID, + ) + await query._ensure_started() + + init_request = await _wait_for_request(transport, "initialize") + transport.push( + { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": init_request["request_id"], + "response": {}, + }, + } + ) + await _wait_for( + lambda: any(payload.get("type") == "user" for payload in transport.writes) + ) + return query + + 
+@pytest.mark.asyncio +async def test_unknown_control_request_returns_error_response() -> None: + transport = FakeTransport() + query = await _start_query(transport) + + transport.push( + { + "type": "control_request", + "request_id": "unknown-1", + "request": { + "subtype": "something_new", + }, + } + ) + + response = await _wait_for_control_response(transport, "unknown-1") + + assert response["response"]["subtype"] == "error" + assert "Unknown control request subtype" in response["response"]["error"] + await query.close() + + +@pytest.mark.asyncio +async def test_control_request_times_out() -> None: + transport = FakeTransport() + query = await _start_query(transport) + + with pytest.raises(ControlRequestTimeoutError, match="supported_commands"): + await query.supported_commands() + + await query.close() + + +@pytest.mark.asyncio +async def test_control_request_cancel_propagates_abort_error() -> None: + transport = FakeTransport() + query = await _start_query(transport) + + task = asyncio.create_task(query.supported_commands()) + request = await _wait_for_request(transport, "supported_commands") + transport.push( + { + "type": "control_cancel_request", + "request_id": request["request_id"], + } + ) + + with pytest.raises(AbortError, match="Control request cancelled"): + await task + + await query.close() + + +@pytest.mark.asyncio +async def test_incoming_control_request_cancel_does_not_block_router() -> None: + transport = FakeTransport() + started = asyncio.Event() + cancelled = asyncio.Event() + captured_cancel_events: list[asyncio.Event] = [] + + async def can_use_tool( + tool_name: str, + tool_input: dict[str, Any], + context: dict[str, Any], + ) -> dict[str, Any]: + assert tool_name == "write_file" + assert tool_input["path"] == "demo.txt" + cancel_event = cast(asyncio.Event, context["cancel_event"]) + captured_cancel_events.append(cancel_event) + started.set() + try: + await cancel_event.wait() + cancelled.set() + return {"behavior": "deny", "message": 
"Cancelled"} + except asyncio.CancelledError: + if cancel_event.is_set(): + cancelled.set() + raise + + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + can_use_tool=can_use_tool, + timeout=TimeoutOptions( + can_use_tool=1.0, + control_request=0.2, + stream_close=0.05, + ), + ), + prompt="hello", + session_id=VALID_UUID, + ) + await query._ensure_started() + + init_request = await _wait_for_request(transport, "initialize") + transport.push( + { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": init_request["request_id"], + "response": {}, + }, + } + ) + await _wait_for( + lambda: any(payload.get("type") == "user" for payload in transport.writes) + ) + + transport.push( + { + "type": "control_request", + "request_id": "incoming-1", + "request": { + "subtype": "can_use_tool", + "tool_name": "write_file", + "tool_use_id": "tool-1", + "input": {"path": "demo.txt", "content": "hello"}, + "permission_suggestions": [], + "blocked_path": None, + }, + } + ) + + await _wait_for(lambda: started.is_set()) + assert captured_cancel_events[0] is not query._cancel_event + + supported_commands_task = asyncio.create_task(query.supported_commands()) + supported_request = await _wait_for_request(transport, "supported_commands") + + transport.push( + { + "type": "control_cancel_request", + "request_id": "incoming-1", + } + ) + await _wait_for(lambda: cancelled.is_set()) + + transport.push( + { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": supported_request["request_id"], + "response": {"commands": ["supported_commands"]}, + }, + } + ) + + assert await supported_commands_task == {"commands": ["supported_commands"]} + assert all( + not ( + payload.get("type") == "control_response" + and payload.get("response", {}).get("request_id") == "incoming-1" + ) + for payload in transport.writes + ) + await query.close() + + +@pytest.mark.asyncio +async def 
test_permission_request_passes_blocked_path_to_callback() -> None: + transport = FakeTransport() + captured_context: dict[str, Any] | None = None + + async def can_use_tool( + tool_name: str, + tool_input: dict[str, Any], + context: dict[str, Any], + ) -> dict[str, Any]: + nonlocal captured_context + assert tool_name == "write_file" + assert tool_input["path"] == "demo.txt" + captured_context = context + return {"behavior": "deny", "message": "blocked"} + + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + can_use_tool=can_use_tool, + timeout=TimeoutOptions( + can_use_tool=1.0, + control_request=0.2, + stream_close=0.05, + ), + ), + prompt="hello", + session_id=VALID_UUID, + ) + await query._ensure_started() + + init_request = await _wait_for_request(transport, "initialize") + transport.push( + { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": init_request["request_id"], + "response": {}, + }, + } + ) + await _wait_for( + lambda: any(payload.get("type") == "user" for payload in transport.writes) + ) + + transport.push( + { + "type": "control_request", + "request_id": "incoming-2", + "request": { + "subtype": "can_use_tool", + "tool_name": "write_file", + "tool_use_id": "tool-2", + "input": {"path": "demo.txt", "content": "hello"}, + "permission_suggestions": [], + "blocked_path": "/tmp/demo.txt", + }, + } + ) + + response = await _wait_for_control_response(transport, "incoming-2") + + assert captured_context is not None + assert isinstance(captured_context["cancel_event"], asyncio.Event) + assert captured_context["suggestions"] == [] + assert captured_context["blocked_path"] == "/tmp/demo.txt" + assert response["response"]["subtype"] == "success" + assert response["response"]["response"] == { + "behavior": "deny", + "message": "blocked", + } + await query.close() + + +@pytest.mark.asyncio +async def test_permission_request_cancelled_callback_returns_deny() -> None: + transport = 
FakeTransport() + + async def can_use_tool( + tool_name: str, + tool_input: dict[str, Any], + context: dict[str, Any], + ) -> dict[str, Any]: + assert tool_name == "write_file" + assert tool_input["path"] == "demo.txt" + assert isinstance(context["cancel_event"], asyncio.Event) + raise asyncio.CancelledError() + + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + can_use_tool=can_use_tool, + timeout=TimeoutOptions( + can_use_tool=1.0, + control_request=0.2, + stream_close=0.05, + ), + ), + prompt="hello", + session_id=VALID_UUID, + ) + await query._ensure_started() + + init_request = await _wait_for_request(transport, "initialize") + transport.push( + { + "type": "control_response", + "response": { + "subtype": "success", + "request_id": init_request["request_id"], + "response": {}, + }, + } + ) + await _wait_for( + lambda: any(payload.get("type") == "user" for payload in transport.writes) + ) + + transport.push( + { + "type": "control_request", + "request_id": "incoming-3", + "request": { + "subtype": "can_use_tool", + "tool_name": "write_file", + "tool_use_id": "tool-3", + "input": {"path": "demo.txt", "content": "hello"}, + "permission_suggestions": [], + "blocked_path": None, + }, + } + ) + + response = await _wait_for_control_response(transport, "incoming-3") + + assert response["response"]["subtype"] == "success" + assert response["response"]["response"] == { + "behavior": "deny", + "message": "Permission check failed: callback cancelled", + } + await query.close() + + +@pytest.mark.asyncio +async def test_finish_with_error_closes_transport_and_fails_pending_requests() -> None: + transport = FakeTransport() + query = await _start_query(transport) + + supported_commands_task = asyncio.create_task(query.supported_commands()) + await _wait_for_request(transport, "supported_commands") + + await query._finish_with_error(RuntimeError("boom")) + + with pytest.raises(RuntimeError, match="boom"): + await supported_commands_task + 
+ assert query.is_closed() is True + assert transport.closed is True + + +@pytest.mark.asyncio +async def test_ensure_started_raises_after_close() -> None: + transport = FakeTransport() + query = await _start_query(transport) + + await query.close() + + with pytest.raises(RuntimeError, match="Query is closed"): + await query.supported_commands() + + +@pytest.mark.asyncio +async def test_anext_after_exhaustion_raises_stop_async_iteration() -> None: + """After the async iterator is exhausted, subsequent __anext__ calls must + raise StopAsyncIteration immediately instead of blocking.""" + transport = FakeTransport() + query = await _start_query(transport) + + # Deliver one assistant message, then a result to end the turn. + transport.push( + { + "type": "assistant", + "session_id": VALID_UUID, + "message": { + "role": "assistant", + "content": [{"type": "text", "text": "hi"}], + }, + } + ) + transport.push( + { + "type": "result", + "session_id": VALID_UUID, + "result": "done", + "is_error": False, + "duration_ms": 10, + "duration_api_ms": 5, + "num_turns": 1, + } + ) + # Signal end of transport stream so the router finishes naturally. + await transport.close() + + # Consume all messages until exhaustion. + messages: list[Any] = [] + with pytest.raises(StopAsyncIteration): + while True: + messages.append(await query.__anext__()) + + assert len(messages) >= 1 + + # The iterator is now exhausted — a second call must raise immediately. 
with pytest.raises(StopAsyncIteration): + await query.__anext__() + + +@pytest.mark.asyncio +async def test_initialize_failure_no_unhandled_task_exception() -> None: + """When _initialize fails, no 'Task exception was never retrieved' report + should be emitted: _finish_with_error already surfaces the error. Such + reports go through the event loop's exception handler (not the warnings + machinery), so capture them with a handler rather than recwarn.""" + import gc + + unhandled: list[dict[str, Any]] = [] + loop = asyncio.get_running_loop() + previous_handler = loop.get_exception_handler() + loop.set_exception_handler(lambda _loop, context: unhandled.append(context)) + try: + transport = FakeTransport() + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + timeout=TimeoutOptions( + can_use_tool=0.05, + control_request=0.05, + stream_close=0.05, + ) + ), + prompt="hello", + session_id=VALID_UUID, + ) + await query._ensure_started() + + # Let the initialize request time out; this triggers _finish_with_error + # inside _initialize. + init_request = await _wait_for_request(transport, "initialize") + assert init_request is not None # init was sent + + # Don't respond to initialize; let the control-request timeout fire. + # The error propagates through _message_queue. + with pytest.raises(ControlRequestTimeoutError): + await query.__anext__() + + await query.close() + + # Let pending callbacks run, then force finalization so any task whose + # exception was never retrieved reports via the handler installed above. + await asyncio.sleep(0.1) + gc.collect() + assert unhandled == [] + finally: + loop.set_exception_handler(previous_handler) + + +@pytest.mark.asyncio +async def test_async_context_manager_closes_on_exit() -> None: + transport = FakeTransport() + query = Query( + transport=transport, # type: ignore[arg-type] + options=QueryOptions( + timeout=TimeoutOptions( + can_use_tool=0.05, + control_request=0.05, + stream_close=0.05, + ) + ), + prompt="hello", + session_id=VALID_UUID, + ) + + async with query as q: + assert q is query + assert not q.is_closed() + + assert query.is_closed() is True + + +def test_sync_next_after_exhaustion_raises_stop_iteration() -> None: + """After the sync iterator is exhausted, subsequent __next__ calls must + raise StopIteration immediately instead of blocking on queue.get().""" + from queue import Queue + + from qwen_code_sdk.sync_query import _STOP, SyncQuery + + # Build a minimal SyncQuery without spawning the real event-loop thread. + sq = object.__new__(SyncQuery) + sq._queue = Queue() + sq._exhausted = False + + # Put one message then the sentinel. + msg_payload = { + "type": "assistant", + "message": {"role": "assistant", "content": []}, + } + sq._queue.put(msg_payload) + sq._queue.put(_STOP) + + # First call returns the message. + msg = next(sq) + assert msg["type"] == "assistant" + + # Second call should exhaust. + with pytest.raises(StopIteration): + next(sq) + + # Third call must raise immediately, not block. 
+ with pytest.raises(StopIteration): + next(sq) diff --git a/packages/sdk-python/tests/unit/test_transport.py b/packages/sdk-python/tests/unit/test_transport.py new file mode 100644 index 000000000..340d1c1e6 --- /dev/null +++ b/packages/sdk-python/tests/unit/test_transport.py @@ -0,0 +1,291 @@ +from __future__ import annotations + +import asyncio +import subprocess +import sys +from pathlib import Path +from typing import Any + +import pytest +from qwen_code_sdk.transport import build_cli_arguments, prepare_spawn_info +from qwen_code_sdk.types import QueryOptions, TimeoutOptions + +VALID_UUID = "123e4567-e89b-12d3-a456-426614174000" + + +class DummyProcess: + def __init__(self) -> None: + self.stdin = None + self.stdout = None + self.stderr = None + self.returncode = 0 + + +def test_build_cli_arguments_maps_supported_options() -> None: + args = build_cli_arguments( + QueryOptions( + model="qwen3-coder", + system_prompt="system prompt", + append_system_prompt="append prompt", + permission_mode="auto-edit", + max_session_turns=7, + core_tools=["Read", "Edit"], + exclude_tools=["Bash(rm *)"], + allowed_tools=["Bash(git status)"], + auth_type="openai", + include_partial_messages=True, + session_id=VALID_UUID, + ) + ) + + assert args == [ + "--input-format", + "stream-json", + "--output-format", + "stream-json", + "--channel=SDK", + "--model", + "qwen3-coder", + "--system-prompt", + "system prompt", + "--append-system-prompt", + "append prompt", + "--approval-mode", + "auto-edit", + "--max-session-turns", + "7", + "--core-tools", + "Read,Edit", + "--exclude-tools", + "Bash(rm *)", + "--allowed-tools", + "Bash(git status)", + "--auth-type", + "openai", + "--include-partial-messages", + "--session-id", + VALID_UUID, + ] + + +def test_cli_argument_precedence_prefers_resume_then_continue_then_session_id() -> None: + args = build_cli_arguments( + QueryOptions( + resume=VALID_UUID, + continue_session=True, + session_id="223e4567-e89b-12d3-a456-426614174000", + ) + ) + + 
assert "--resume" in args + assert "--continue" not in args + assert "--session-id" not in args + + +def test_prepare_spawn_info_uses_runtime_for_python_scripts(tmp_path: Path) -> None: + script_path = tmp_path / "fake-qwen.py" + script_path.write_text("print('ok')\n", encoding="utf-8") + + spawn_info = prepare_spawn_info(str(script_path)) + + assert spawn_info.command == sys.executable + assert spawn_info.args == [str(script_path.resolve())] + + +def test_prepare_spawn_info_uses_node_for_javascript_files(tmp_path: Path) -> None: + script_path = tmp_path / "fake-qwen.js" + script_path.write_text("console.log('ok');\n", encoding="utf-8") + + spawn_info = prepare_spawn_info(str(script_path)) + + assert spawn_info.command == "node" + assert spawn_info.args == [str(script_path.resolve())] + + +def test_prepare_spawn_info_keeps_plain_command_names() -> None: + spawn_info = prepare_spawn_info("qwen-custom") + + assert spawn_info.command == "qwen-custom" + assert spawn_info.args == [] + + +@pytest.mark.asyncio +async def test_transport_discards_stderr_when_debug_is_disabled( + monkeypatch: pytest.MonkeyPatch, +) -> None: + captured: dict[str, Any] = {} + + async def fake_create_subprocess_exec(*args: Any, **kwargs: Any) -> DummyProcess: + captured["args"] = args + captured["kwargs"] = kwargs + return DummyProcess() + + monkeypatch.setattr( + asyncio, + "create_subprocess_exec", + fake_create_subprocess_exec, + ) + + transport_module = __import__( + "qwen_code_sdk.transport", + fromlist=["ProcessTransport"], + ) + transport = transport_module.ProcessTransport( + QueryOptions(timeout=TimeoutOptions()) + ) + + await transport.start() + + assert captured["kwargs"]["stderr"] is subprocess.DEVNULL + + +def test_prepare_spawn_info_defaults_to_qwen_when_none() -> None: + spawn_info = prepare_spawn_info(None) + + assert spawn_info.command == "qwen" + assert spawn_info.args == [] + + +def test_prepare_spawn_info_uses_node_for_mjs_files(tmp_path: Path) -> None: + script_path = 
tmp_path / "cli.mjs" + script_path.write_text("export default {};\n", encoding="utf-8") + + spawn_info = prepare_spawn_info(str(script_path)) + + assert spawn_info.command == "node" + assert spawn_info.args == [str(script_path.resolve())] + + +def test_prepare_spawn_info_uses_node_for_cjs_files(tmp_path: Path) -> None: + script_path = tmp_path / "cli.cjs" + script_path.write_text("module.exports = {};\n", encoding="utf-8") + + spawn_info = prepare_spawn_info(str(script_path)) + + assert spawn_info.command == "node" + assert spawn_info.args == [str(script_path.resolve())] + + +@pytest.mark.asyncio +async def test_transport_start_raises_after_close( + monkeypatch: pytest.MonkeyPatch, +) -> None: + async def fake_create_subprocess_exec(*args: Any, **kwargs: Any) -> DummyProcess: + return DummyProcess() + + monkeypatch.setattr( + asyncio, + "create_subprocess_exec", + fake_create_subprocess_exec, + ) + + transport_module = __import__( + "qwen_code_sdk.transport", + fromlist=["ProcessTransport"], + ) + transport = transport_module.ProcessTransport( + QueryOptions(timeout=TimeoutOptions()) + ) + transport._closed = True + + with pytest.raises(RuntimeError, match="Transport is closed"): + await transport.start() + + +@pytest.mark.asyncio +async def test_read_messages_skips_malformed_json_lines() -> None: + """Malformed JSON lines should be skipped, not crash the stream.""" + + class FakeStdout: + def __init__(self, lines: list[bytes]) -> None: + self._lines = iter(lines) + + async def readline(self) -> bytes: + return next(self._lines, b"") + + transport_module = __import__( + "qwen_code_sdk.transport", + fromlist=["ProcessTransport"], + ) + transport = transport_module.ProcessTransport( + QueryOptions(timeout=TimeoutOptions()) + ) + + class FakeProcess: + returncode = 0 + stdin = None + stderr = None + + def __init__(self) -> None: + self.stdout = FakeStdout( + [ + b"not valid json\n", + b'{"type":"system","subtype":"init","uuid":"u","session_id":"s"}\n', + b"also 
bad\n", + b"", + ] + ) + + async def wait(self) -> int: + return 0 + + transport._process = FakeProcess() + + messages: list[Any] = [] + async for msg in transport.read_messages(): + messages.append(msg) + + assert len(messages) == 1 + assert messages[0]["type"] == "system" + + +@pytest.mark.asyncio +async def test_stderr_callback_exceptions_do_not_fail_transport() -> None: + class FakeStdout: + async def readline(self) -> bytes: + return b"" + + class FakeStderr: + def __init__(self) -> None: + self._lines = iter([b"error message\n", b""]) + + async def readline(self) -> bytes: + return next(self._lines, b"") + + transport_module = __import__( + "qwen_code_sdk.transport", + fromlist=["ProcessTransport"], + ) + + callback_calls = 0 + + def stderr_callback(text: str) -> None: + nonlocal callback_calls + callback_calls += 1 + assert text == "error message" + raise RuntimeError("sink failed") + + transport = transport_module.ProcessTransport( + QueryOptions( + stderr=stderr_callback, + timeout=TimeoutOptions(), + ) + ) + + class FakeProcess: + returncode = 0 + stdin = None + + def __init__(self) -> None: + self.stdout = FakeStdout() + self.stderr = FakeStderr() + + async def wait(self) -> int: + return 0 + + transport._process = FakeProcess() + transport._stderr_task = asyncio.create_task(transport._forward_stderr()) + + await transport.wait_for_exit() + + assert callback_calls == 1 diff --git a/packages/sdk-python/tests/unit/test_validation.py b/packages/sdk-python/tests/unit/test_validation.py new file mode 100644 index 000000000..dd85ef2e9 --- /dev/null +++ b/packages/sdk-python/tests/unit/test_validation.py @@ -0,0 +1,174 @@ +from __future__ import annotations + +from typing import Any, cast + +import pytest +from qwen_code_sdk.errors import ValidationError +from qwen_code_sdk.types import QueryOptions, TimeoutOptions +from qwen_code_sdk.validation import validate_query_options + +VALID_UUID = "123e4567-e89b-12d3-a456-426614174000" + + +def 
test_rejects_resume_with_continue_session() -> None: + with pytest.raises(ValidationError, match="resume together with continue_session"): + validate_query_options( + QueryOptions( + resume=VALID_UUID, + continue_session=True, + ) + ) + + +def test_rejects_session_id_with_resume() -> None: + with pytest.raises(ValidationError, match="Cannot use session_id with resume"): + validate_query_options( + QueryOptions( + session_id=VALID_UUID, + resume="223e4567-e89b-12d3-a456-426614174000", + ) + ) + + +def test_rejects_invalid_session_id() -> None: + with pytest.raises(ValidationError, match="Invalid session_id"): + validate_query_options(QueryOptions(session_id="not-a-uuid")) + + +def test_rejects_invalid_resume() -> None: + with pytest.raises(ValidationError, match="Invalid resume"): + validate_query_options(QueryOptions(resume="not-a-uuid")) + + +def test_rejects_invalid_permission_mode() -> None: + with pytest.raises(ValidationError, match="Invalid permission_mode"): + validate_query_options( + QueryOptions.from_mapping({"permission_mode": "unsafe-mode"}) + ) + + +def test_rejects_invalid_auth_type() -> None: + with pytest.raises(ValidationError, match="Invalid auth_type"): + validate_query_options(QueryOptions.from_mapping({"auth_type": "custom"})) + + +def test_from_mapping_rejects_non_callable_can_use_tool() -> None: + with pytest.raises(TypeError, match="can_use_tool must be callable"): + QueryOptions.from_mapping({"can_use_tool": "bad"}) + + +def test_from_mapping_rejects_non_callable_stderr() -> None: + with pytest.raises(TypeError, match="stderr must be callable"): + QueryOptions.from_mapping({"stderr": "bad"}) + + +def test_validation_rejects_non_callable_can_use_tool() -> None: + with pytest.raises(ValidationError, match="can_use_tool must be callable"): + validate_query_options(QueryOptions(can_use_tool=cast(Any, "bad"))) + + +def test_validation_rejects_non_callable_stderr() -> None: + with pytest.raises(ValidationError, match="stderr must be callable"): + 
validate_query_options(QueryOptions(stderr=cast(Any, "bad"))) + + +def test_from_mapping_rejects_sync_can_use_tool() -> None: + def can_use_tool( # type: ignore[no-untyped-def] + tool_name, tool_input, context + ): + return {"behavior": "deny", "message": "bad"} + + with pytest.raises(TypeError, match="can_use_tool must be an async callable"): + QueryOptions.from_mapping({"can_use_tool": can_use_tool}) + + +def test_validation_rejects_sync_can_use_tool() -> None: + def can_use_tool( # type: ignore[no-untyped-def] + tool_name, tool_input, context + ): + return {"behavior": "deny", "message": "bad"} + + with pytest.raises(ValidationError, match="can_use_tool must be an async callable"): + validate_query_options(QueryOptions(can_use_tool=cast(Any, can_use_tool))) + + +def test_from_mapping_rejects_can_use_tool_with_wrong_arity() -> None: + async def can_use_tool( + tool_name: str, + tool_input: dict[str, Any], + ) -> dict[str, str]: + return {"behavior": "deny"} + + with pytest.raises( + TypeError, match="can_use_tool must accept exactly 3 positional arguments" + ): + QueryOptions.from_mapping({"can_use_tool": can_use_tool}) + + +def test_validation_rejects_can_use_tool_with_wrong_arity() -> None: + async def can_use_tool( + tool_name: str, + tool_input: dict[str, Any], + ) -> dict[str, str]: + return {"behavior": "deny"} + + with pytest.raises( + ValidationError, + match="can_use_tool must accept exactly 3 positional arguments", + ): + validate_query_options(QueryOptions(can_use_tool=cast(Any, can_use_tool))) + + +def test_from_mapping_rejects_stderr_with_wrong_arity() -> None: + def stderr() -> None: + return None + + with pytest.raises( + TypeError, match="stderr must accept exactly 1 positional argument" + ): + QueryOptions.from_mapping({"stderr": stderr}) + + +def test_validation_rejects_stderr_with_wrong_arity() -> None: + def stderr() -> None: + return None + + with pytest.raises( + ValidationError, match="stderr must accept exactly 1 positional argument" + ): 
+ validate_query_options(QueryOptions(stderr=cast(Any, stderr))) + + +def test_rejects_invalid_max_session_turns() -> None: + with pytest.raises(ValidationError, match="max_session_turns"): + validate_query_options(QueryOptions(max_session_turns=-2)) + + +def test_rejects_empty_qwen_executable_path() -> None: + with pytest.raises( + ValidationError, match="path_to_qwen_executable cannot be empty" + ): + validate_query_options(QueryOptions(path_to_qwen_executable=" ")) + + +def test_timeout_rejects_non_numeric_value() -> None: + with pytest.raises(TypeError, match=r"timeout\.can_use_tool must be a positive"): + TimeoutOptions.from_mapping({"can_use_tool": "fast"}) + + +def test_timeout_rejects_negative_value() -> None: + pattern = r"timeout\.control_request must be a positive" + with pytest.raises(ValueError, match=pattern): + TimeoutOptions.from_mapping({"control_request": -1}) + + +def test_timeout_rejects_boolean_value() -> None: + with pytest.raises(TypeError, match=r"timeout\.stream_close must be a positive"): + TimeoutOptions.from_mapping({"stream_close": True}) + + +def test_rejects_mcp_servers() -> None: + with pytest.raises(ValidationError, match="mcp_servers is not supported"): + validate_query_options( + QueryOptions(mcp_servers={"my-server": {"command": "node", "args": []}}) + ) From 007a109db8efdd120c923214a0ad9e973a1ee6a3 Mon Sep 17 00:00:00 2001 From: John London Date: Fri, 24 Apr 2026 18:19:27 -0500 Subject: [PATCH 02/36] fix(cli): respect OPENAI_MODEL precedence in CLI model resolution (#3567) * fix(cli): respect OPENAI_MODEL precedence in CLI model resolution * test(cli): cover env-driven model precedence for OpenAI-compatible auth * fix(cli): scope model env precedence by auth type * test(cli): cover QWEN_MODEL fallback precedence --- .../cli/src/utils/modelConfigUtils.test.ts | 247 ++++++++++++++++++ packages/cli/src/utils/modelConfigUtils.ts | 9 +- 2 files changed, 254 insertions(+), 2 deletions(-) diff --git 
a/packages/cli/src/utils/modelConfigUtils.test.ts b/packages/cli/src/utils/modelConfigUtils.test.ts index 97cea9974..abd556308 100644 --- a/packages/cli/src/utils/modelConfigUtils.test.ts +++ b/packages/cli/src/utils/modelConfigUtils.test.ts @@ -442,6 +442,187 @@ describe('modelConfigUtils', () => { ); }); + it('should find modelProvider from OPENAI_MODEL when argv.model is not provided', () => { + const argv = {}; + const modelProvider: ProviderModelConfig = { + id: 'env-openai-model', + name: 'Env OpenAI Model', + generationConfig: { + samplingParams: { temperature: 0.6 }, + }, + }; + const settings = makeMockSettings({ + model: { name: 'settings-model' }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'settings-model', name: 'Settings Model' }, + modelProvider, + ], + }, + }); + const selectedAuthType = AuthType.USE_OPENAI; + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { + model: 'env-openai-model', + apiKey: '', + baseUrl: '', + }, + sources: {}, + warnings: [], + }); + + resolveCliGenerationConfig({ + argv, + settings, + selectedAuthType, + env: { OPENAI_MODEL: 'env-openai-model' }, + }); + + expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( + expect.objectContaining({ + modelProvider, + }), + ); + }); + + it('should find modelProvider from QWEN_MODEL when OPENAI_MODEL is not provided', () => { + const argv = {}; + const modelProvider: ProviderModelConfig = { + id: 'qwen-env-model', + name: 'Qwen Env Model', + generationConfig: { + samplingParams: { temperature: 0.7 }, + }, + }; + const settings = makeMockSettings({ + model: { name: 'settings-model' }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'settings-model', name: 'Settings Model' }, + modelProvider, + ], + }, + }); + const selectedAuthType = AuthType.USE_OPENAI; + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { + model: 'qwen-env-model', + apiKey: '', + baseUrl: '', + }, + sources: {}, + warnings: [], + }); + + resolveCliGenerationConfig({ + 
argv, + settings, + selectedAuthType, + env: { QWEN_MODEL: 'qwen-env-model' }, + }); + + expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( + expect.objectContaining({ + modelProvider, + }), + ); + }); + + it('should prefer OPENAI_MODEL over QWEN_MODEL and settings.model.name for USE_OPENAI provider lookup', () => { + const argv = {}; + const openAIProvider: ProviderModelConfig = { + id: 'openai-env-model', + name: 'OpenAI Env Model', + }; + const qwenProvider: ProviderModelConfig = { + id: 'qwen-env-model', + name: 'Qwen Env Model', + }; + const settings = makeMockSettings({ + model: { name: 'settings-model' }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'settings-model', name: 'Settings Model' }, + qwenProvider, + openAIProvider, + ], + }, + }); + const selectedAuthType = AuthType.USE_OPENAI; + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { + model: 'openai-env-model', + apiKey: '', + baseUrl: '', + }, + sources: {}, + warnings: [], + }); + + resolveCliGenerationConfig({ + argv, + settings, + selectedAuthType, + env: { + OPENAI_MODEL: 'openai-env-model', + QWEN_MODEL: 'qwen-env-model', + }, + }); + + expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( + expect.objectContaining({ + modelProvider: openAIProvider, + }), + ); + }); + + it('should ignore OPENAI_MODEL for non-USE_OPENAI provider lookup', () => { + const argv = {}; + const settingsModelProvider: ProviderModelConfig = { + id: 'settings-model', + name: 'Settings Model', + }; + const unrelatedOpenAIProvider: ProviderModelConfig = { + id: 'openai-env-model', + name: 'OpenAI Env Model', + }; + const settings = makeMockSettings({ + model: { name: 'settings-model' }, + modelProviders: { + [AuthType.USE_ANTHROPIC]: [ + settingsModelProvider, + unrelatedOpenAIProvider, + ], + }, + }); + const selectedAuthType = AuthType.USE_ANTHROPIC; + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { + model: 'settings-model', + apiKey: '', + baseUrl: '', + }, + sources: 
{}, + warnings: [], + }); + + resolveCliGenerationConfig({ + argv, + settings, + selectedAuthType, + env: { OPENAI_MODEL: 'openai-env-model' }, + }); + + expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( + expect.objectContaining({ + modelProvider: settingsModelProvider, + }), + ); + }); it('should not find modelProvider when authType is undefined', () => { const argv = { model: 'test-model' }; const settings = makeMockSettings({ @@ -723,5 +904,71 @@ describe('modelConfigUtils', () => { }), ); }); + + it('should respect precedence: argv.model > OPENAI_MODEL > QWEN_MODEL > settings.model.name', () => { + const mockSettings = makeMockSettings({ + model: { name: 'settings-model' }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'settings-model' } as ProviderModelConfig, + { id: 'openai-env-model' } as ProviderModelConfig, + { id: 'qwen-env-model' } as ProviderModelConfig, + { id: 'cli-model' } as ProviderModelConfig, + ], + }, + }); + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { model: 'cli-model', apiKey: '', baseUrl: '' }, + sources: {}, + warnings: [], + }); + const result1 = resolveCliGenerationConfig({ + argv: { model: 'cli-model' }, + settings: mockSettings, + selectedAuthType: AuthType.USE_OPENAI, + env: { OPENAI_MODEL: 'openai-env-model' }, + }); + expect(result1.model).toBe('cli-model'); + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { model: 'openai-env-model', apiKey: '', baseUrl: '' }, + sources: {}, + warnings: [], + }); + const result2 = resolveCliGenerationConfig({ + argv: {}, + settings: mockSettings, + selectedAuthType: AuthType.USE_OPENAI, + env: { OPENAI_MODEL: 'openai-env-model', QWEN_MODEL: 'qwen-env-model' }, + }); + expect(result2.model).toBe('openai-env-model'); + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { model: 'openai-env-model', apiKey: '', baseUrl: '' }, + sources: {}, + warnings: [], + }); + const result3 = resolveCliGenerationConfig({ + argv: {}, + settings: 
mockSettings, + selectedAuthType: AuthType.USE_OPENAI, + env: { OPENAI_MODEL: 'openai-env-model' }, + }); + expect(result3.model).toBe('openai-env-model'); + + vi.mocked(resolveModelConfig).mockReturnValue({ + config: { model: 'settings-model', apiKey: '', baseUrl: '' }, + sources: {}, + warnings: [], + }); + const result4 = resolveCliGenerationConfig({ + argv: {}, + settings: mockSettings, + selectedAuthType: AuthType.USE_OPENAI, + env: {}, + }); + expect(result4.model).toBe('settings-model'); + }); }); }); diff --git a/packages/cli/src/utils/modelConfigUtils.ts b/packages/cli/src/utils/modelConfigUtils.ts index aa5ac5e82..83dbae1cc 100644 --- a/packages/cli/src/utils/modelConfigUtils.ts +++ b/packages/cli/src/utils/modelConfigUtils.ts @@ -100,8 +100,13 @@ export function resolveCliGenerationConfig( if (authType && settings.modelProviders) { const providers = settings.modelProviders[authType]; if (providers && Array.isArray(providers)) { - // Try to find by requested model (from CLI or settings) - const requestedModel = argv.model || settings.model?.name; + const requestedModel = + authType === AuthType.USE_OPENAI + ? 
argv.model || + env['OPENAI_MODEL'] || + env['QWEN_MODEL'] || + settings.model?.name + : argv.model || settings.model?.name; if (requestedModel) { modelProvider = providers.find((p) => p.id === requestedModel) as | ProviderModelConfig From 54465b0c024930a12343740ffe1cc66748bbdab0 Mon Sep 17 00:00:00 2001 From: ChiGao Date: Sat, 25 Apr 2026 10:13:34 +0800 Subject: [PATCH 03/36] fix(cli): add TUI flicker foundation fixes (#3591) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(cli): reduce main screen flicker * fix(cli): pre-slice large tool text output * fix(cli): slice tool output by visual height * fix(core): preserve shell transcript across narrow wraps * fix(core): suppress soft-wrap-only shell rerenders * fix(core): compare default shell output by logical wraps * fix(cli): gate synchronized terminal output --------- Co-authored-by: 秦奇 --- packages/cli/src/gemini.tsx | 6 + packages/cli/src/ui/AppContainer.test.tsx | 78 ++++ packages/cli/src/ui/AppContainer.tsx | 14 +- .../components/messages/ToolMessage.test.tsx | 70 ++++ .../ui/components/messages/ToolMessage.tsx | 76 +++- .../cli/src/ui/hooks/useGeminiStream.test.tsx | 176 +++++++++- packages/cli/src/ui/hooks/useGeminiStream.ts | 332 ++++++++++++------ .../src/ui/utils/synchronizedOutput.test.ts | 191 ++++++++++ .../cli/src/ui/utils/synchronizedOutput.ts | 131 +++++++ .../ui/utils/terminalRedrawOptimizer.test.ts | 27 ++ .../src/ui/utils/terminalRedrawOptimizer.ts | 127 +++++-- .../services/shellExecutionService.test.ts | 158 +++++++++ .../src/services/shellExecutionService.ts | 162 ++++++--- .../core/src/utils/terminalSerializer.test.ts | 74 ++++ packages/core/src/utils/terminalSerializer.ts | 51 ++- 15 files changed, 1473 insertions(+), 200 deletions(-) create mode 100644 packages/cli/src/ui/utils/synchronizedOutput.test.ts create mode 100644 packages/cli/src/ui/utils/synchronizedOutput.ts diff --git a/packages/cli/src/gemini.tsx b/packages/cli/src/gemini.tsx 
index 49c29532d..f8283cca3 100644 --- a/packages/cli/src/gemini.tsx +++ b/packages/cli/src/gemini.tsx @@ -85,6 +85,7 @@ import { DualOutputContext } from './dualOutput/DualOutputContext.js'; import { RemoteInputWatcher } from './remoteInput/RemoteInputWatcher.js'; import { RemoteInputContext } from './remoteInput/RemoteInputContext.js'; import { installTerminalRedrawOptimizer } from './ui/utils/terminalRedrawOptimizer.js'; +import { installSynchronizedOutput } from './ui/utils/synchronizedOutput.js'; const debugLogger = createDebugLogger('STARTUP'); @@ -174,6 +175,10 @@ export async function startInteractiveUI( process.stdout.isTTY && !config.getScreenReader() ? installTerminalRedrawOptimizer(process.stdout) : () => {}; + const restoreSynchronizedOutput = + process.stdout.isTTY && !config.getScreenReader() + ? installSynchronizedOutput(process.stdout) + : () => {}; // Create dual output bridge if --json-fd or --json-file is specified. // Errors are caught so a bad fd/path degrades gracefully instead of @@ -297,6 +302,7 @@ export async function startInteractiveUI( // operational, preventing garbled terminal output after the app exits. 
disableKittyProtocol(); instance.unmount(); + restoreSynchronizedOutput(); restoreTerminalRedrawOptimizer(); }); } diff --git a/packages/cli/src/ui/AppContainer.test.tsx b/packages/cli/src/ui/AppContainer.test.tsx index 2c4392cf9..a213c2fa9 100644 --- a/packages/cli/src/ui/AppContainer.test.tsx +++ b/packages/cli/src/ui/AppContainer.test.tsx @@ -15,6 +15,7 @@ import { } from 'vitest'; import { render, cleanup } from 'ink-testing-library'; import { AppContainer, dedupeNewestFirst } from './AppContainer.js'; +import ansiEscapes from 'ansi-escapes'; import { type Config, makeFakeConfig, @@ -428,6 +429,83 @@ describe('AppContainer State Management', () => { }).not.toThrow(); }); + it('refreshStatic clears the terminal before remounting history', () => { + render( + , + ); + + capturedUIActions.refreshStatic(); + + expect(mockStdout.write).toHaveBeenCalledWith(ansiEscapes.clearTerminal); + }); + + it('handleClearScreen avoids a second clearTerminal write', () => { + const clearSpy = vi.spyOn(console, 'clear').mockImplementation(() => {}); + + render( + , + ); + + capturedUIActions.handleClearScreen(); + + expect(clearSpy).toHaveBeenCalledTimes(1); + expect(mockStdout.write).not.toHaveBeenCalledWith( + ansiEscapes.clearTerminal, + ); + + clearSpy.mockRestore(); + }); + + it('passes a remount-only refresh callback to slash commands', () => { + let slashRefreshStatic: (() => void) | undefined; + mockedUseSlashCommandProcessor.mockImplementation( + ( + _config, + _settings, + _addItem, + _clearItems, + _loadHistory, + refreshStatic, + ) => { + slashRefreshStatic = refreshStatic; + return { + handleSlashCommand: vi.fn(), + slashCommands: [], + pendingHistoryItems: [], + commandContext: {}, + shellConfirmationRequest: null, + confirmationRequest: null, + }; + }, + ); + + render( + , + ); + + slashRefreshStatic?.(); + + expect(slashRefreshStatic).toBeDefined(); + expect(mockStdout.write).not.toHaveBeenCalledWith( + ansiEscapes.clearTerminal, + ); + }); + it('provides 
ConfigContext with config object', () => { expect(() => { render( diff --git a/packages/cli/src/ui/AppContainer.tsx b/packages/cli/src/ui/AppContainer.tsx index e8156b59e..9df46b62c 100644 --- a/packages/cli/src/ui/AppContainer.tsx +++ b/packages/cli/src/ui/AppContainer.tsx @@ -478,10 +478,14 @@ export const AppContainer = (props: AppContainerProps) => { fetchUserMessages(); }, [historyManager.history, logger]); + const remountStaticHistory = useCallback(() => { + setHistoryRemountKey((prev) => prev + 1); + }, []); + const refreshStatic = useCallback(() => { stdout.write(ansiEscapes.clearTerminal); - setHistoryRemountKey((prev) => prev + 1); - }, [setHistoryRemountKey, stdout]); + remountStaticHistory(); + }, [remountStaticHistory, stdout]); const { isThemeDialogOpen, @@ -704,7 +708,7 @@ export const AppContainer = (props: AppContainerProps) => { historyManager.addItem, historyManager.clearItems, historyManager.loadHistory, - refreshStatic, + remountStaticHistory, toggleVimEnabled, isProcessing, setIsProcessing, @@ -1246,8 +1250,8 @@ export const AppContainer = (props: AppContainerProps) => { const handleClearScreen = useCallback(() => { historyManager.clearItems(); clearScreen(); - refreshStatic(); - }, [historyManager, refreshStatic]); + remountStaticHistory(); + }, [historyManager, remountStaticHistory]); const { handleInput: vimHandleInput } = useVim(buffer, handleFinalSubmit); diff --git a/packages/cli/src/ui/components/messages/ToolMessage.test.tsx b/packages/cli/src/ui/components/messages/ToolMessage.test.tsx index a8e57c8a1..8e2fb0d33 100644 --- a/packages/cli/src/ui/components/messages/ToolMessage.test.tsx +++ b/packages/cli/src/ui/components/messages/ToolMessage.test.tsx @@ -547,6 +547,76 @@ describe('', () => { expect(output).toContain('line 30'); }); + it('pre-slices large non-shell string output before MaxSizedBox layout', () => { + const longString = Array.from( + { length: 5000 }, + (_, i) => `line ${i + 1}`, + ).join('\n'); + const { lastFrame } = 
renderWithContext( + , + StreamingState.Idle, + ); + const output = lastFrame()!; + + expect(output).toContain('... first 4995 lines hidden ...'); + expect(output).not.toContain('line 4995'); + expect(output).toContain('line 4996'); + expect(output).toContain('line 4997'); + expect(output).toContain('line 4998'); + expect(output).toContain('line 4999'); + expect(output).toContain('line 5000'); + }); + + it('pre-slices single-line output by visual width before MaxSizedBox layout', () => { + const longSingleLine = Array.from({ length: 1000 }, (_, i) => + String(i % 10), + ).join(''); + const { lastFrame } = renderWithContext( + , + StreamingState.Idle, + ); + const output = lastFrame()!; + + expect(output).toMatch(/\.\.\. first \d+ lin/); + expect(output).not.toContain(longSingleLine); + expect(output).toContain(longSingleLine.slice(-10)); + }); + + it('does not pre-slice string output that exactly fits available height', () => { + const exactFitString = Array.from( + { length: 6 }, + (_, i) => `line ${i + 1}`, + ).join('\n'); + const { lastFrame } = renderWithContext( + , + StreamingState.Idle, + ); + const output = lastFrame()!; + + expect(output).not.toContain('lines hidden'); + expect(output).toContain('line 1'); + expect(output).toContain('line 6'); + }); + it.each([ ['negative', -1], ['fractional', 1.5], diff --git a/packages/cli/src/ui/components/messages/ToolMessage.tsx b/packages/cli/src/ui/components/messages/ToolMessage.tsx index 64f616cfc..417c00010 100644 --- a/packages/cli/src/ui/components/messages/ToolMessage.tsx +++ b/packages/cli/src/ui/components/messages/ToolMessage.tsx @@ -12,7 +12,7 @@ import { DiffRenderer } from './DiffRenderer.js'; import { MarkdownDisplay } from '../../utils/MarkdownDisplay.js'; import { AnsiOutputText, ShellStatsBar } from '../AnsiOutput.js'; import type { ShellStatsBarProps } from '../AnsiOutput.js'; -import { MaxSizedBox } from '../shared/MaxSizedBox.js'; +import { MaxSizedBox, MINIMUM_MAX_HEIGHT } from 
'../shared/MaxSizedBox.js'; import { TodoDisplay } from '../TodoDisplay.js'; import type { TodoResultDisplay, @@ -31,6 +31,7 @@ import { theme } from '../../semantic-colors.js'; import { useSettings } from '../../contexts/SettingsContext.js'; import type { LoadedSettings } from '../../../config/settings.js'; import { useCompactMode } from '../../contexts/CompactModeContext.js'; +import { getCachedStringWidth, toCodePoints } from '../../utils/textUtils.js'; import { ToolStatusIndicator, @@ -48,6 +49,65 @@ const DEFAULT_SHELL_OUTPUT_MAX_LINES = 5; const MAXIMUM_RESULT_DISPLAY_CHARACTERS = 1000000; export type TextEmphasis = 'high' | 'medium' | 'low'; +function sliceTextForMaxHeight( + text: string, + maxHeight: number | undefined, + maxWidth: number, +): { text: string; hiddenLinesCount: number } { + if (maxHeight === undefined) { + return { text, hiddenLinesCount: 0 }; + } + + const targetMaxHeight = Math.max(Math.round(maxHeight), MINIMUM_MAX_HEIGHT); + const visibleContentHeight = targetMaxHeight - 1; + const visualWidth = Math.max(1, Math.floor(maxWidth)); + const visibleLines: string[] = []; + let visualLineCount = 0; + let currentLine = ''; + let currentLineWidth = 0; + + const appendVisibleLine = (line: string) => { + visualLineCount += 1; + visibleLines.push(line); + if (visibleLines.length > visibleContentHeight) { + visibleLines.shift(); + } + }; + + const flushCurrentLine = () => { + appendVisibleLine(currentLine); + currentLine = ''; + currentLineWidth = 0; + }; + + for (const char of toCodePoints(text)) { + if (char === '\n') { + flushCurrentLine(); + continue; + } + + const charWidth = Math.max(getCachedStringWidth(char), 1); + if (currentLineWidth > 0 && currentLineWidth + charWidth > visualWidth) { + flushCurrentLine(); + } + + currentLine += char; + currentLineWidth += charWidth; + } + + flushCurrentLine(); + + if (visualLineCount <= targetMaxHeight) { + return { text, hiddenLinesCount: 0 }; + } + + const hiddenLinesCount = visualLineCount - 
visibleContentHeight; + return { + text: visibleLines.join('\n'), + hiddenLinesCount, + }; +} + type DisplayRendererResult = | { type: 'none' } | { type: 'todo'; data: TodoResultDisplay } @@ -234,11 +294,21 @@ const StringResultRenderer: React.FC<{ ); } + const sliced = sliceTextForMaxHeight( + displayData, + availableHeight, + childWidth, + ); + return ( - + - {displayData} + {sliced.text} diff --git a/packages/cli/src/ui/hooks/useGeminiStream.test.tsx b/packages/cli/src/ui/hooks/useGeminiStream.test.tsx index 7483e9349..b9ad835f3 100644 --- a/packages/cli/src/ui/hooks/useGeminiStream.test.tsx +++ b/packages/cli/src/ui/hooks/useGeminiStream.test.tsx @@ -6,7 +6,7 @@ /* eslint-disable @typescript-eslint/no-explicit-any */ import type { Mock, MockInstance } from 'vitest'; -import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import { renderHook, act, waitFor } from '@testing-library/react'; import { useGeminiStream, classifyApiError } from './useGeminiStream.js'; import * as atCommandProcessor from './atCommandProcessor.js'; @@ -238,6 +238,10 @@ describe('useGeminiStream', () => { handleAtCommandSpy = vi.spyOn(atCommandProcessor, 'handleAtCommand'); }); + afterEach(() => { + vi.useRealTimers(); + }); + const mockLoadedSettings: LoadedSettings = { merged: { preferredEditor: 'vscode' }, user: { path: '/user/settings.json', settings: {} }, @@ -823,6 +827,165 @@ describe('useGeminiStream', () => { }); describe('Cancellation', () => { + it('buffers streamed content until the throttle interval elapses', async () => { + vi.useFakeTimers(); + + let releaseStream!: () => void; + const holdStream = new Promise((resolve) => { + releaseStream = resolve; + }); + + const mockStream = (async function* () { + yield { + type: ServerGeminiEventType.Content, + value: 'Hel', + }; + yield { + type: ServerGeminiEventType.Content, + value: 'lo', + }; + await holdStream; + })(); + 
mockSendMessageStream.mockReturnValue(mockStream); + + const { result } = renderTestHook(); + + act(() => { + void result.current.submitQuery('test query'); + }); + + await act(async () => { + await Promise.resolve(); + await Promise.resolve(); + }); + + expect(mockSendMessageStream).toHaveBeenCalledTimes(1); + + expect(result.current.pendingHistoryItems).toEqual([]); + + await act(async () => { + vi.advanceTimersByTime(60); + }); + + expect(result.current.pendingHistoryItems).toEqual([ + expect.objectContaining({ + type: 'gemini', + text: 'Hello', + }), + ]); + + act(() => { + result.current.cancelOngoingRequest(); + }); + + await act(async () => { + releaseStream(); + }); + }); + + it('buffers streamed thoughts until the throttle interval elapses', async () => { + vi.useFakeTimers(); + + let releaseStream!: () => void; + const holdStream = new Promise((resolve) => { + releaseStream = resolve; + }); + + const mockStream = (async function* () { + yield { + type: ServerGeminiEventType.Thought, + value: { description: 'Think' }, + }; + yield { + type: ServerGeminiEventType.Thought, + value: { description: 'ing' }, + }; + await holdStream; + })(); + mockSendMessageStream.mockReturnValue(mockStream); + + const { result } = renderTestHook(); + + act(() => { + void result.current.submitQuery('test query'); + }); + + await act(async () => { + await Promise.resolve(); + await Promise.resolve(); + }); + + expect(mockSendMessageStream).toHaveBeenCalledTimes(1); + expect(result.current.pendingHistoryItems).toEqual([]); + + await act(async () => { + vi.advanceTimersByTime(60); + }); + + expect(result.current.pendingHistoryItems).toEqual([ + expect.objectContaining({ + type: 'gemini_thought', + text: 'Thinking', + }), + ]); + expect(result.current.thought).toEqual({ description: 'Thinking' }); + + act(() => { + result.current.cancelOngoingRequest(); + }); + + await act(async () => { + releaseStream(); + }); + }); + + it('flushes buffered content before cancellation', async () 
=> { + vi.useFakeTimers(); + + let releaseStream!: () => void; + const holdStream = new Promise((resolve) => { + releaseStream = resolve; + }); + + const mockStream = (async function* () { + yield { + type: ServerGeminiEventType.Content, + value: 'Initial', + }; + await holdStream; + })(); + mockSendMessageStream.mockReturnValue(mockStream); + + const { result } = renderTestHook(); + + act(() => { + void result.current.submitQuery('test query'); + }); + + await act(async () => { + await Promise.resolve(); + await Promise.resolve(); + }); + + expect(mockSendMessageStream).toHaveBeenCalledTimes(1); + + act(() => { + result.current.cancelOngoingRequest(); + }); + + expect(mockAddItem).toHaveBeenCalledWith( + { + type: 'gemini', + text: 'Initial', + }, + expect.any(Number), + ); + + await act(async () => { + releaseStream(); + }); + }); + it('should cancel an in-progress stream when cancelOngoingRequest is called', async () => { const mockStream = (async function* () { yield { type: 'content', value: 'Part 1' }; @@ -992,6 +1155,10 @@ describe('useGeminiStream', () => { expect(mockSendMessageStream).toHaveBeenCalledTimes(1); }); + await act(async () => { + await Promise.resolve(); + }); + // Cancel the request act(() => { result.current.cancelOngoingRequest(); @@ -2811,6 +2978,13 @@ describe('useGeminiStream', () => { result.current.cancelOngoingRequest(); }); + expect(mockAddItem).toHaveBeenCalledWith( + { + type: 'gemini', + text: 'First call content', + }, + expect.any(Number), + ); expect(mainAbortSignal?.aborted).toBe(true); } finally { resolveFirstCall(); diff --git a/packages/cli/src/ui/hooks/useGeminiStream.ts b/packages/cli/src/ui/hooks/useGeminiStream.ts index d680a736d..fd80a6ec4 100644 --- a/packages/cli/src/ui/hooks/useGeminiStream.ts +++ b/packages/cli/src/ui/hooks/useGeminiStream.ts @@ -178,6 +178,11 @@ enum StreamProcessingStatus { } const EDIT_TOOL_NAMES = new Set(['replace', 'write_file']); +const STREAM_UPDATE_THROTTLE_MS = 60; + +type 
BufferedStreamEvent = + | { kind: 'content'; value: string } + | { kind: 'thought'; value: ThoughtSummary }; function showCitations(settings: LoadedSettings): boolean { const enabled = settings?.merged?.ui?.showCitations; @@ -216,6 +221,7 @@ export const useGeminiStream = ( ) => { const [initError, setInitError] = useState(null); const abortControllerRef = useRef(null); + const flushBufferedStreamEventsRef = useRef void>>(new Set()); const turnCancelledRef = useRef(false); const isSubmittingQueryRef = useRef(false); const lastPromptRef = useRef(null); @@ -483,6 +489,9 @@ export const useGeminiStream = ( if (turnCancelledRef.current) { return; } + for (const flushBufferedStreamEvents of flushBufferedStreamEventsRef.current) { + flushBufferedStreamEvents(); + } turnCancelledRef.current = true; isSubmittingQueryRef.current = false; abortControllerRef.current?.abort(); @@ -1121,133 +1130,226 @@ export const useGeminiStream = ( let geminiMessageBuffer = ''; let thoughtBuffer = ''; const toolCallRequests: ToolCallRequestInfo[] = []; - dualOutput?.startAssistantMessage(); - for await (const event of stream) { - dualOutput?.processEvent(event); - switch (event.type) { - case ServerGeminiEventType.Thought: - // If the thought has a subject, it's a discrete status update rather than - // a streamed textual thought, so we update the thought state directly. 
- if (event.value.subject) { - setThought(event.value); - } else { - thoughtBuffer = handleThoughtEvent( - event.value, - thoughtBuffer, - userMessageTimestamp, - ); + const bufferedEvents: BufferedStreamEvent[] = []; + let flushTimer: ReturnType | null = null; + + const discardBufferedStreamEvents = () => { + if (flushTimer) { + clearTimeout(flushTimer); + flushTimer = null; + } + bufferedEvents.length = 0; + }; + + const flushBufferedStreamEvents = () => { + if (flushTimer) { + clearTimeout(flushTimer); + flushTimer = null; + } + + if (bufferedEvents.length === 0) { + return; + } + + while (bufferedEvents.length > 0) { + const nextEvent = bufferedEvents.shift()!; + + if (nextEvent.kind === 'content') { + let mergedContent = nextEvent.value; + + while (bufferedEvents[0]?.kind === 'content') { + const queuedContent = bufferedEvents.shift(); + if (queuedContent?.kind !== 'content') { + break; + } + mergedContent += queuedContent.value; } - break; - case ServerGeminiEventType.Content: + geminiMessageBuffer = handleContentEvent( - event.value, + mergedContent, geminiMessageBuffer, userMessageTimestamp, ); - break; - case ServerGeminiEventType.ToolCallRequest: - toolCallRequests.push(event.value); - // Count tool call args JSON toward token estimation (matches - // Claude Code's input_json_delta handling). 
- try { - const argsJson = JSON.stringify(event.value.args); - streamingResponseLengthRef.current += argsJson.length; - } catch { - // Best-effort — don't block on serialization errors + continue; + } + + let mergedThought = nextEvent.value; + + while (bufferedEvents[0]?.kind === 'thought') { + const queuedThought = bufferedEvents.shift(); + if (queuedThought?.kind !== 'thought') { + break; } - break; - case ServerGeminiEventType.UserCancelled: - handleUserCancelledEvent(userMessageTimestamp); - break; - case ServerGeminiEventType.Error: - handleErrorEvent(event.value, userMessageTimestamp); - break; - case ServerGeminiEventType.ChatCompressed: - handleChatCompressionEvent(event.value, userMessageTimestamp); - break; - case ServerGeminiEventType.ToolCallConfirmation: - case ServerGeminiEventType.ToolCallResponse: - // do nothing - break; - case ServerGeminiEventType.MaxSessionTurns: - handleMaxSessionTurnsEvent(); - break; - case ServerGeminiEventType.SessionTokenLimitExceeded: - handleSessionTokenLimitExceededEvent(event.value); - break; - case ServerGeminiEventType.Finished: - handleFinishedEvent( - event as ServerGeminiFinishedEvent, - userMessageTimestamp, - ); - break; - case ServerGeminiEventType.Citation: - handleCitationEvent(event.value, userMessageTimestamp); - break; - case ServerGeminiEventType.LoopDetected: - // handle later because we want to move pending history to history - // before we add loop detected message to history - loopDetectedRef.current = true; - break; - case ServerGeminiEventType.Retry: - // On fresh restart (escalation / rate-limit / invalid stream), - // clear pending content and buffers to discard the failed attempt. - // On continuation (recovery), keep the pending gemini item AND - // buffers so the model's continuation text appends to them — - // otherwise handleContentEvent would see a null pending item, - // create a fresh one, and reset the buffer to just the new chunk, - // losing the partial text we meant to preserve. 
- if (!event.isContinuation) { + mergedThought = { + subject: queuedThought.value.subject || mergedThought.subject, + description: `${mergedThought.description ?? ''}${ + queuedThought.value.description ?? '' + }`, + }; + } + + thoughtBuffer = handleThoughtEvent( + mergedThought, + thoughtBuffer, + userMessageTimestamp, + ); + } + }; + + const scheduleBufferedStreamFlush = () => { + if (flushTimer) { + return; + } + + flushTimer = setTimeout(() => { + flushBufferedStreamEvents(); + }, STREAM_UPDATE_THROTTLE_MS); + }; + + flushBufferedStreamEventsRef.current.add(flushBufferedStreamEvents); + dualOutput?.startAssistantMessage(); + try { + for await (const event of stream) { + dualOutput?.processEvent(event); + switch (event.type) { + case ServerGeminiEventType.Thought: + // If the thought has a subject, it's a discrete status update rather than + // a streamed textual thought, so we update the thought state directly. + if (event.value.subject) { + flushBufferedStreamEvents(); + setThought(event.value); + } else { + bufferedEvents.push({ kind: 'thought', value: event.value }); + scheduleBufferedStreamFlush(); + } + break; + case ServerGeminiEventType.Content: + bufferedEvents.push({ kind: 'content', value: event.value }); + scheduleBufferedStreamFlush(); + break; + case ServerGeminiEventType.ToolCallRequest: + flushBufferedStreamEvents(); + toolCallRequests.push(event.value); + // Count tool call args JSON toward token estimation (matches + // Claude Code's input_json_delta handling). 
+ try { + const argsJson = JSON.stringify(event.value.args); + streamingResponseLengthRef.current += argsJson.length; + } catch { + // Best-effort — don't block on serialization errors + } + break; + case ServerGeminiEventType.UserCancelled: + flushBufferedStreamEvents(); + handleUserCancelledEvent(userMessageTimestamp); + break; + case ServerGeminiEventType.Error: + flushBufferedStreamEvents(); + handleErrorEvent(event.value, userMessageTimestamp); + break; + case ServerGeminiEventType.ChatCompressed: + flushBufferedStreamEvents(); + handleChatCompressionEvent(event.value, userMessageTimestamp); + break; + case ServerGeminiEventType.ToolCallConfirmation: + case ServerGeminiEventType.ToolCallResponse: + flushBufferedStreamEvents(); + break; + case ServerGeminiEventType.MaxSessionTurns: + flushBufferedStreamEvents(); + handleMaxSessionTurnsEvent(); + break; + case ServerGeminiEventType.SessionTokenLimitExceeded: + flushBufferedStreamEvents(); + handleSessionTokenLimitExceededEvent(event.value); + break; + case ServerGeminiEventType.Finished: + flushBufferedStreamEvents(); + handleFinishedEvent( + event as ServerGeminiFinishedEvent, + userMessageTimestamp, + ); + break; + case ServerGeminiEventType.Citation: + flushBufferedStreamEvents(); + handleCitationEvent(event.value, userMessageTimestamp); + break; + case ServerGeminiEventType.LoopDetected: + flushBufferedStreamEvents(); + // handle later because we want to move pending history to history + // before we add loop detected message to history + loopDetectedRef.current = true; + break; + case ServerGeminiEventType.Retry: + // On fresh restart (escalation / rate-limit / invalid stream), + // clear pending content and buffers to discard the failed attempt. 
+ // On continuation (recovery), keep the pending gemini item AND + // buffers so the model's continuation text appends to them — + // otherwise handleContentEvent would see a null pending item, + // create a fresh one, and reset the buffer to just the new chunk, + // losing the partial text we meant to preserve. + if (!event.isContinuation) { + discardBufferedStreamEvents(); + if (pendingHistoryItemRef.current) { + setPendingHistoryItem(null); + } + geminiMessageBuffer = ''; + thoughtBuffer = ''; + } else { + flushBufferedStreamEvents(); + } + // Always discard tool call requests from the truncated/failed + // attempt to prevent duplicate execution after escalation or + // recovery. The recovery path now skips turns that already + // contain a functionCall (see geminiChat.ts), so this only + // clears stale requests from pre-RETRY accumulation. + toolCallRequests.length = 0; + // Show retry info if available (rate-limit / throttling errors) + if (event.retryInfo) { + startRetryCountdown(event.retryInfo); + } else { + // The retry attempt is starting now, so any prior retry UI is stale. 
+ clearRetryCountdown(); + } + break; + case ServerGeminiEventType.HookSystemMessage: + flushBufferedStreamEvents(); + // Display system message from Stop hooks with "Stop says:" prefix + // First commit any pending AI response to ensure correct ordering if (pendingHistoryItemRef.current) { + addItem(pendingHistoryItemRef.current, userMessageTimestamp); setPendingHistoryItem(null); } - geminiMessageBuffer = ''; - thoughtBuffer = ''; + addItem( + { + type: 'stop_hook_system_message', + message: event.value, + } as HistoryItemWithoutId, + userMessageTimestamp, + ); + break; + case ServerGeminiEventType.UserPromptSubmitBlocked: + flushBufferedStreamEvents(); + handleUserPromptSubmitBlockedEvent( + event.value, + userMessageTimestamp, + ); + break; + case ServerGeminiEventType.StopHookLoop: + flushBufferedStreamEvents(); + handleStopHookLoopEvent(event.value, userMessageTimestamp); + break; + default: { + // enforces exhaustive switch-case + const unreachable: never = event; + return unreachable; } - // Always discard tool call requests from the truncated/failed - // attempt to prevent duplicate execution after escalation or - // recovery. The recovery path now skips turns that already - // contain a functionCall (see geminiChat.ts), so this only - // clears stale requests from pre-RETRY accumulation. - toolCallRequests.length = 0; - // Show retry info if available (rate-limit / throttling errors) - if (event.retryInfo) { - startRetryCountdown(event.retryInfo); - } else { - // The retry attempt is starting now, so any prior retry UI is stale. 
- clearRetryCountdown(); - } - break; - case ServerGeminiEventType.HookSystemMessage: - // Display system message from Stop hooks with "Stop says:" prefix - // First commit any pending AI response to ensure correct ordering - if (pendingHistoryItemRef.current) { - addItem(pendingHistoryItemRef.current, userMessageTimestamp); - setPendingHistoryItem(null); - } - addItem( - { - type: 'stop_hook_system_message', - message: event.value, - } as HistoryItemWithoutId, - userMessageTimestamp, - ); - break; - case ServerGeminiEventType.UserPromptSubmitBlocked: - handleUserPromptSubmitBlockedEvent( - event.value, - userMessageTimestamp, - ); - break; - case ServerGeminiEventType.StopHookLoop: - handleStopHookLoopEvent(event.value, userMessageTimestamp); - break; - default: { - // enforces exhaustive switch-case - const unreachable: never = event; - return unreachable; } } + } finally { + flushBufferedStreamEvents(); + discardBufferedStreamEvents(); + flushBufferedStreamEventsRef.current.delete(flushBufferedStreamEvents); } dualOutput?.finalizeAssistantMessage(); if (toolCallRequests.length > 0) { diff --git a/packages/cli/src/ui/utils/synchronizedOutput.test.ts b/packages/cli/src/ui/utils/synchronizedOutput.test.ts new file mode 100644 index 000000000..f9c9da0e1 --- /dev/null +++ b/packages/cli/src/ui/utils/synchronizedOutput.test.ts @@ -0,0 +1,191 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { afterEach, describe, expect, it, vi } from 'vitest'; +import { + BEGIN_SYNCHRONIZED_UPDATE, + END_SYNCHRONIZED_UPDATE, + getSynchronizedOutputStatsSnapshot, + installSynchronizedOutput, + resetSynchronizedOutputStats, + terminalSupportsSynchronizedOutput, +} from './synchronizedOutput.js'; +import { installTerminalRedrawOptimizer } from './terminalRedrawOptimizer.js'; + +const ESC = '\u001B['; +const ERASE_LINE = `${ESC}2K`; +const CURSOR_UP_ONE = `${ESC}1A`; +const CURSOR_DOWN_ONE = `${ESC}1B`; +const CURSOR_LEFT = 
`${ESC}G`; + +function createStdout(write: NodeJS.WriteStream['write']): NodeJS.WriteStream { + return { + isTTY: true, + write, + } as NodeJS.WriteStream; +} + +describe('terminalSupportsSynchronizedOutput', () => { + it.each([ + [{ TERM_PROGRAM: 'WezTerm' }, true], + [{ TERM_PROGRAM: 'iTerm.app' }, true], + [{ TERM: 'xterm-kitty' }, true], + [{ KITTY_WINDOW_ID: '1' }, true], + [{ TERM_PROGRAM: 'Apple_Terminal' }, false], + [{ TERM_PROGRAM: 'JetBrains-JediTerm' }, false], + [{ TERM_PROGRAM: 'WezTerm', TMUX: '/tmp/tmux' }, false], + [{ TERM_PROGRAM: 'WezTerm', SSH_TTY: '/dev/pts/1' }, false], + [{ TERM_PROGRAM: 'WezTerm', SSH_CLIENT: '127.0.0.1 1 2' }, false], + [{ TERM_PROGRAM: 'WezTerm', QWEN_CODE_SYNCHRONIZED_OUTPUT: '0' }, false], + [ + { + TERM_PROGRAM: 'WezTerm', + QWEN_CODE_DISABLE_SYNCHRONIZED_OUTPUT: '1', + QWEN_CODE_FORCE_SYNCHRONIZED_OUTPUT: '1', + }, + false, + ], + [ + { + TERM_PROGRAM: 'Apple_Terminal', + QWEN_CODE_FORCE_SYNCHRONIZED_OUTPUT: '1', + }, + true, + ], + ])('detects support for %j', (env, expected) => { + expect(terminalSupportsSynchronizedOutput(env)).toBe(expected); + }); +}); + +describe('installSynchronizedOutput', () => { + afterEach(() => { + resetSynchronizedOutputStats(); + }); + + it('does not install for non-TTY stdout', () => { + const write = vi.fn(() => true) as NodeJS.WriteStream['write']; + const stdout = { + isTTY: false, + write, + } as NodeJS.WriteStream; + + const restore = installSynchronizedOutput(stdout, { + TERM_PROGRAM: 'WezTerm', + }); + + expect(stdout.write).toBe(write); + restore(); + }); + + it('wraps one synchronous write burst in balanced BSU and ESU markers', async () => { + const writes: string[] = []; + const write = vi.fn((chunk: string | Uint8Array) => { + writes.push(typeof chunk === 'string' ? 
chunk : chunk.toString()); + return true; + }) as NodeJS.WriteStream['write']; + const stdout = createStdout(write); + + const restore = installSynchronizedOutput(stdout, { + TERM_PROGRAM: 'WezTerm', + }); + + stdout.write('frame-a'); + stdout.write(Buffer.from('frame-b')); + await Promise.resolve(); + + expect(writes).toEqual([ + BEGIN_SYNCHRONIZED_UPDATE, + 'frame-a', + 'frame-b', + END_SYNCHRONIZED_UPDATE, + ]); + expect(getSynchronizedOutputStatsSnapshot()).toEqual({ + synchronizedOutputFrameCount: 1, + synchronizedOutputBeginCount: 1, + synchronizedOutputEndCount: 1, + }); + + restore(); + expect(stdout.write).toBe(write); + }); + + it('preserves write return values and callbacks', async () => { + const callback = vi.fn(); + const write = vi.fn( + ( + _chunk: string | Uint8Array, + encodingOrCallback?: BufferEncoding | ((error?: Error | null) => void), + ) => { + if (typeof encodingOrCallback === 'function') { + encodingOrCallback(); + } + return false; + }, + ) as NodeJS.WriteStream['write']; + const stdout = createStdout(write); + + const restore = installSynchronizedOutput(stdout, { + TERM_PROGRAM: 'iTerm.app', + }); + + const result = stdout.write('payload', callback); + await Promise.resolve(); + + expect(result).toBe(false); + expect(callback).toHaveBeenCalledTimes(1); + restore(); + }); + + it('composes after terminal redraw optimization without losing erase optimization', async () => { + const writes: string[] = []; + const write = vi.fn((chunk: string | Uint8Array) => { + writes.push(typeof chunk === 'string' ? 
chunk : chunk.toString()); + return true; + }) as NodeJS.WriteStream['write']; + const stdout = createStdout(write); + const restoreRedrawOptimizer = installTerminalRedrawOptimizer(stdout); + const restoreSynchronizedOutput = installSynchronizedOutput(stdout, { + TERM_PROGRAM: 'WezTerm', + }); + + stdout.write( + `${ERASE_LINE}${CURSOR_UP_ONE}${ERASE_LINE}${CURSOR_UP_ONE}${ERASE_LINE}${CURSOR_LEFT}`, + ); + await Promise.resolve(); + + expect(writes).toEqual([ + BEGIN_SYNCHRONIZED_UPDATE, + `${ESC}2A${ERASE_LINE}${CURSOR_DOWN_ONE}${ERASE_LINE}${CURSOR_DOWN_ONE}${ERASE_LINE}${ESC}2A${CURSOR_LEFT}`, + END_SYNCHRONIZED_UPDATE, + ]); + + restoreSynchronizedOutput(); + restoreRedrawOptimizer(); + expect(stdout.write).toBe(write); + }); + + it('closes an open frame before restore', () => { + const writes: string[] = []; + const write = vi.fn((chunk: string | Uint8Array) => { + writes.push(typeof chunk === 'string' ? chunk : chunk.toString()); + return true; + }) as NodeJS.WriteStream['write']; + const stdout = createStdout(write); + const restore = installSynchronizedOutput(stdout, { + TERM_PROGRAM: 'WezTerm', + }); + + stdout.write('payload'); + restore(); + + expect(writes).toEqual([ + BEGIN_SYNCHRONIZED_UPDATE, + 'payload', + END_SYNCHRONIZED_UPDATE, + ]); + expect(stdout.write).toBe(write); + }); +}); diff --git a/packages/cli/src/ui/utils/synchronizedOutput.ts b/packages/cli/src/ui/utils/synchronizedOutput.ts new file mode 100644 index 000000000..fb75786ad --- /dev/null +++ b/packages/cli/src/ui/utils/synchronizedOutput.ts @@ -0,0 +1,131 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +export const BEGIN_SYNCHRONIZED_UPDATE = '\u001B[?2026h'; +export const END_SYNCHRONIZED_UPDATE = '\u001B[?2026l'; + +export interface SynchronizedOutputStatsSnapshot { + synchronizedOutputFrameCount: number; + synchronizedOutputBeginCount: number; + synchronizedOutputEndCount: number; +} + +const synchronizedOutputStats: 
SynchronizedOutputStatsSnapshot = { + synchronizedOutputFrameCount: 0, + synchronizedOutputBeginCount: 0, + synchronizedOutputEndCount: 0, +}; + +let installed = false; + +export function getSynchronizedOutputStatsSnapshot(): SynchronizedOutputStatsSnapshot { + return { ...synchronizedOutputStats }; +} + +export function resetSynchronizedOutputStats(): void { + synchronizedOutputStats.synchronizedOutputFrameCount = 0; + synchronizedOutputStats.synchronizedOutputBeginCount = 0; + synchronizedOutputStats.synchronizedOutputEndCount = 0; +} + +export function terminalSupportsSynchronizedOutput( + env: NodeJS.ProcessEnv = process.env, +): boolean { + if ( + env['QWEN_CODE_DISABLE_SYNCHRONIZED_OUTPUT'] === '1' || + env['QWEN_CODE_SYNCHRONIZED_OUTPUT'] === '0' + ) { + return false; + } + + if ( + env['QWEN_CODE_FORCE_SYNCHRONIZED_OUTPUT'] === '1' || + env['QWEN_CODE_SYNCHRONIZED_OUTPUT'] === '1' + ) { + return true; + } + + if (env['TMUX'] || env['SSH_TTY'] || env['SSH_CLIENT']) { + return false; + } + + const termProgram = env['TERM_PROGRAM']; + if (termProgram === 'WezTerm' || termProgram === 'iTerm.app') { + return true; + } + + const term = env['TERM']; + return Boolean(env['KITTY_WINDOW_ID'] || term?.includes('kitty')); +} + +export function installSynchronizedOutput( + stdout: NodeJS.WriteStream = process.stdout, + env: NodeJS.ProcessEnv = process.env, +): () => void { + if (installed || !stdout.isTTY || !terminalSupportsSynchronizedOutput(env)) { + return () => {}; + } + + const originalWrite = stdout.write; + let inFrame = false; + + const writeControlSequence = (sequence: string) => { + originalWrite.call(stdout, sequence); + }; + + const endFrame = () => { + if (!inFrame) { + return; + } + + inFrame = false; + synchronizedOutputStats.synchronizedOutputEndCount += 1; + writeControlSequence(END_SYNCHRONIZED_UPDATE); + }; + + const patchedWrite = function ( + this: NodeJS.WriteStream, + chunk: unknown, + encodingOrCallback?: BufferEncoding | ((error?: Error | null) 
=> void), + callback?: (error?: Error | null) => void, + ) { + if (!inFrame) { + inFrame = true; + synchronizedOutputStats.synchronizedOutputFrameCount += 1; + synchronizedOutputStats.synchronizedOutputBeginCount += 1; + writeControlSequence(BEGIN_SYNCHRONIZED_UPDATE); + queueMicrotask(endFrame); + } + + return originalWrite.call( + this, + chunk as string | Uint8Array, + encodingOrCallback as BufferEncoding, + callback, + ); + } as typeof stdout.write; + + const exitHandler = () => { + try { + endFrame(); + } catch { + // stdout may already be closed during process shutdown. + } + }; + + stdout.write = patchedWrite; + installed = true; + process.once('exit', exitHandler); + + return () => { + if (stdout.write === patchedWrite) { + endFrame(); + stdout.write = originalWrite; + } + process.removeListener('exit', exitHandler); + installed = false; + }; +} diff --git a/packages/cli/src/ui/utils/terminalRedrawOptimizer.test.ts b/packages/cli/src/ui/utils/terminalRedrawOptimizer.test.ts index 53d747c14..2e411758e 100644 --- a/packages/cli/src/ui/utils/terminalRedrawOptimizer.test.ts +++ b/packages/cli/src/ui/utils/terminalRedrawOptimizer.test.ts @@ -6,8 +6,10 @@ import { afterEach, describe, expect, it, vi } from 'vitest'; import { + getTerminalRedrawStatsSnapshot, installTerminalRedrawOptimizer, optimizeMultilineEraseLines, + resetTerminalRedrawStats, } from './terminalRedrawOptimizer.js'; const ESC = '\u001B['; @@ -56,6 +58,7 @@ describe('optimizeMultilineEraseLines', () => { describe('installTerminalRedrawOptimizer', () => { afterEach(() => { vi.unstubAllEnvs(); + resetTerminalRedrawStats(); }); it('optimizes string writes and restores the original writer', () => { @@ -87,6 +90,30 @@ describe('installTerminalRedrawOptimizer', () => { expect(write).toHaveBeenCalledWith(input, undefined, undefined); }); + it('tracks write, byte, clear, and erase optimization counters', () => { + const write = vi.fn(() => true); + const stdout = { write } as unknown as 
NodeJS.WriteStream; + installTerminalRedrawOptimizer(stdout); + + stdout.write( + `${ERASE_LINE}${CURSOR_UP_ONE}${ERASE_LINE}${CURSOR_UP_ONE}${ERASE_LINE}${CURSOR_LEFT}`, + ); + stdout.write(Buffer.from('ok')); + stdout.write('\u001B[2J\u001B[3J\u001B[H'); + + expect(getTerminalRedrawStatsSnapshot()).toEqual({ + stdoutWriteCount: 3, + stdoutBytes: + Buffer.byteLength( + `${ESC}2A${ERASE_LINE}${CURSOR_DOWN_ONE}${ERASE_LINE}${CURSOR_DOWN_ONE}${ERASE_LINE}${ESC}2A${CURSOR_LEFT}`, + ) + + Buffer.byteLength('ok') + + Buffer.byteLength('\u001B[2J\u001B[3J\u001B[H'), + clearTerminalCount: 1, + eraseLinesOptimizedCount: 1, + }); + }); + it('can be disabled for terminal compatibility fallback', () => { vi.stubEnv('QWEN_CODE_LEGACY_ERASE_LINES', '1'); const write = vi.fn(() => true); diff --git a/packages/cli/src/ui/utils/terminalRedrawOptimizer.ts b/packages/cli/src/ui/utils/terminalRedrawOptimizer.ts index 166c687ed..b82186a6d 100644 --- a/packages/cli/src/ui/utils/terminalRedrawOptimizer.ts +++ b/packages/cli/src/ui/utils/terminalRedrawOptimizer.ts @@ -4,6 +4,8 @@ * SPDX-License-Identifier: Apache-2.0 */ +import ansiEscapes from 'ansi-escapes'; + const ESC = '\u001B['; const ERASE_LINE = `${ESC}2K`; const CURSOR_UP_ONE = `${ESC}1A`; @@ -33,6 +35,79 @@ function countOccurrences(value: string, search: string): number { return count; } +export interface TerminalRedrawStatsSnapshot { + stdoutWriteCount: number; + stdoutBytes: number; + clearTerminalCount: number; + eraseLinesOptimizedCount: number; +} + +const terminalRedrawStats: TerminalRedrawStatsSnapshot = { + stdoutWriteCount: 0, + stdoutBytes: 0, + clearTerminalCount: 0, + eraseLinesOptimizedCount: 0, +}; + +export function getTerminalRedrawStatsSnapshot(): TerminalRedrawStatsSnapshot { + return { ...terminalRedrawStats }; +} + +export function resetTerminalRedrawStats(): void { + terminalRedrawStats.stdoutWriteCount = 0; + terminalRedrawStats.stdoutBytes = 0; + terminalRedrawStats.clearTerminalCount = 0; + 
terminalRedrawStats.eraseLinesOptimizedCount = 0; +} + +function getChunkByteLength( + chunk: string | Uint8Array, + encodingOrCallback?: BufferEncoding | ((error?: Error | null) => void), +): number { + if (typeof chunk === 'string') { + const encoding = + typeof encodingOrCallback === 'string' ? encodingOrCallback : undefined; + return Buffer.byteLength(chunk, encoding); + } + + return chunk.byteLength; +} + +function optimizeMultilineEraseLinesWithCount(output: string): { + output: string; + optimizedSequenceCount: number; +} { + let optimizedSequenceCount = 0; + + const optimizedOutput = output.replace( + MULTILINE_ERASE_LINES_PATTERN, + (sequence) => { + const lineCount = countOccurrences(sequence, ERASE_LINE); + const cursorUpCount = lineCount - 1; + + if (cursorUpCount <= 1) { + return sequence; + } + + optimizedSequenceCount += 1; + + let boundedErase = `${ESC}${cursorUpCount}A`; + + for (let line = 0; line < lineCount; line++) { + boundedErase += ERASE_LINE; + + if (line < lineCount - 1) { + boundedErase += CURSOR_DOWN_ONE; + } + } + + return `${boundedErase}${ESC}${cursorUpCount}A${CURSOR_LEFT}`; + }, + ); + + return { output: optimizedOutput, optimizedSequenceCount }; +} + /** * Ink clears dynamic output via ansi-escapes.eraseLines(), which emits a * clear-line + cursor-up pair for every previous line. That can make terminal @@ -40,26 +115,7 @@ function countOccurrences(value: string, search: string): number { * upward cursor movement while still clearing only the same old frame lines. 
*/ export function optimizeMultilineEraseLines(output: string): string { - return output.replace(MULTILINE_ERASE_LINES_PATTERN, (sequence) => { - const lineCount = countOccurrences(sequence, ERASE_LINE); - const cursorUpCount = lineCount - 1; - - if (cursorUpCount <= 1) { - return sequence; - } - - let boundedErase = `${ESC}${cursorUpCount}A`; - - for (let line = 0; line < lineCount; line++) { - boundedErase += ERASE_LINE; - - if (line < lineCount - 1) { - boundedErase += CURSOR_DOWN_ONE; - } - } - - return `${boundedErase}${ESC}${cursorUpCount}A${CURSOR_LEFT}`; - }); + return optimizeMultilineEraseLinesWithCount(output).output; } export function installTerminalRedrawOptimizer( @@ -77,8 +133,35 @@ export function installTerminalRedrawOptimizer( encodingOrCallback?: BufferEncoding | ((error?: Error | null) => void), callback?: (error?: Error | null) => void, ) { - const optimizedChunk = - typeof chunk === 'string' ? optimizeMultilineEraseLines(chunk) : chunk; + const optimizedResult = + typeof chunk === 'string' + ? optimizeMultilineEraseLinesWithCount(chunk) + : undefined; + const optimizedChunk = optimizedResult?.output ?? 
chunk; + + if ( + typeof optimizedChunk === 'string' || + optimizedChunk instanceof Uint8Array || + Buffer.isBuffer(optimizedChunk) + ) { + terminalRedrawStats.stdoutWriteCount += 1; + terminalRedrawStats.stdoutBytes += getChunkByteLength( + optimizedChunk, + encodingOrCallback, + ); + + if (typeof optimizedChunk === 'string') { + terminalRedrawStats.clearTerminalCount += countOccurrences( + optimizedChunk, + ansiEscapes.clearTerminal, + ); + } + } + + if (optimizedResult) { + terminalRedrawStats.eraseLinesOptimizedCount += + optimizedResult.optimizedSequenceCount; + } return originalWrite.call( this, diff --git a/packages/core/src/services/shellExecutionService.test.ts b/packages/core/src/services/shellExecutionService.test.ts index 17a0a525e..5cc203929 100644 --- a/packages/core/src/services/shellExecutionService.test.ts +++ b/packages/core/src/services/shellExecutionService.test.ts @@ -33,6 +33,26 @@ const mockIsBinary = vi.hoisted(() => vi.fn()); const mockPlatform = vi.hoisted(() => vi.fn()); const mockGetPty = vi.hoisted(() => vi.fn()); const mockSerializeTerminalToObject = vi.hoisted(() => vi.fn()); +const mockSerializeTerminalToText = vi.hoisted(() => + vi.fn((terminal: pkg.Terminal): string => { + const buffer = terminal.buffer.active; + const lines: string[] = []; + + for (let i = 0; i < buffer.length; i++) { + const line = buffer.getLine(i); + const lineContent = line ? 
line.translateToString(true) : ''; + + if (line?.isWrapped && lines.length > 0) { + lines[lines.length - 1] += lineContent; + continue; + } + + lines.push(lineContent); + } + + return lines.join('\n').trimEnd(); + }), +); const mockGetShellConfiguration = vi.hoisted(() => vi.fn().mockReturnValue({ executable: 'bash', @@ -74,6 +94,7 @@ vi.mock('../utils/getPty.js', () => ({ })); vi.mock('../utils/terminalSerializer.js', () => ({ serializeTerminalToObject: mockSerializeTerminalToObject, + serializeTerminalToText: mockSerializeTerminalToText, })); vi.mock('../utils/shell-utils.js', () => ({ getShellConfiguration: mockGetShellConfiguration, @@ -122,6 +143,17 @@ const createExpectedAnsiOutput = (text: string | string[]): AnsiOutput => { return expected; }; +const createAnsiToken = (text: string) => ({ + text, + bold: false, + italic: false, + underline: false, + dim: false, + inverse: false, + fg: '', + bg: '', +}); + const setupConflictingPathEnv = () => { process.env = { ...originalProcessEnv, @@ -135,6 +167,21 @@ const expectNormalizedWindowsPathEnv = (env: NodeJS.ProcessEnv) => { expect(env['Path']).toBeUndefined(); }; +const waitForDataEventCount = async ( + onOutputEventMock: Mock<(event: ShellOutputEvent) => void>, + expectedCount: number, +) => { + for (let attempt = 0; attempt < 20; attempt++) { + const dataEvents = onOutputEventMock.mock.calls.filter( + ([event]) => event.type === 'data', + ); + if (dataEvents.length >= expectedCount) { + return; + } + await new Promise((resolve) => setTimeout(resolve, 0)); + } +}; + describe('ShellExecutionService', () => { let mockPtyProcess: EventEmitter & { pid: number; @@ -364,6 +411,23 @@ describe('ShellExecutionService', () => { expect(result.output).toBe('Compressing objects: 100% (7/7), done.'); }); + + it('should not persist narrow terminal soft wraps as transcript newlines', async () => { + const { result } = await simulateExecution( + 'narrow-output', + (pty) => { + 
pty.onData.mock.calls[0][0]('abcdefghijklmnopqrstuvwxyz\nshort\n'); + pty.onExit.mock.calls[0][0]({ exitCode: 0, signal: null }); + }, + { + ...shellExecutionConfig, + terminalWidth: 8, + terminalHeight: 4, + }, + ); + + expect(result.output).toBe('abcdefghijklmnopqrstuvwxyz\nshort'); + }); }); describe('pty interaction', () => { @@ -709,6 +773,59 @@ describe('ShellExecutionService', () => { ); }); + it('does not re-emit live output when only soft-wrap segmentation changes', async () => { + const coloredShellExecutionConfig = { + ...shellExecutionConfig, + showColor: true, + disableDynamicLineTrimming: true, + }; + const firstWrappedOutput = [ + [createAnsiToken('abcd')], + [createAnsiToken('efgh')], + ]; + const rewrappedOutput = [ + [createAnsiToken('ab')], + [createAnsiToken('cdef')], + [createAnsiToken('gh')], + ]; + const logicalOutput = [[createAnsiToken('abcdefgh')]]; + let rawRenderCount = 0; + + mockSerializeTerminalToObject.mockImplementation( + ( + _terminal, + _scrollOffset, + options?: { unwrapWrappedLines?: boolean }, + ) => { + if (options?.unwrapWrappedLines) { + return logicalOutput; + } + + rawRenderCount += 1; + return rawRenderCount === 1 ? 
firstWrappedOutput : rewrappedOutput; + }, + ); + + await simulateExecution( + 'narrow-output', + (pty) => { + pty.onData.mock.calls[0][0]('abcdefgh'); + pty.onData.mock.calls[0][0]('\r'); + pty.onExit.mock.calls[0][0]({ exitCode: 0, signal: null }); + }, + coloredShellExecutionConfig, + ); + + const dataEvents = onOutputEventMock.mock.calls.filter( + ([event]) => event.type === 'data', + ); + expect(dataEvents).toHaveLength(1); + expect(dataEvents[0][0]).toEqual({ + type: 'data', + chunk: firstWrappedOutput, + }); + }); + it('should call onOutputEvent with AnsiOutput when showColor is false', async () => { await simulateExecution( 'ls --color=auto', @@ -733,6 +850,47 @@ describe('ShellExecutionService', () => { ); }); + it('does not re-emit default plain live output when only soft-wrap segmentation changes', async () => { + const abortController = new AbortController(); + const handle = await ShellExecutionService.execute( + 'narrow-output', + '/test/dir', + onOutputEventMock, + abortController.signal, + true, + { + ...shellExecutionConfig, + terminalWidth: 4, + terminalHeight: 4, + showColor: false, + disableDynamicLineTrimming: false, + }, + ); + + await new Promise((resolve) => process.nextTick(resolve)); + mockPtyProcess.onData.mock.calls[0][0]('abcdefgh'); + await waitForDataEventCount(onOutputEventMock, 1); + + ShellExecutionService.resizePty(handle.pid!, 2, 4); + mockPtyProcess.onData.mock.calls[0][0]('\r'); + mockPtyProcess.onExit.mock.calls[0][0]({ exitCode: 0, signal: null }); + await handle.result; + + const dataEvents = onOutputEventMock.mock.calls.filter( + ([event]) => event.type === 'data', + ); + expect(dataEvents).toHaveLength(1); + const firstDataEvent = dataEvents[0][0]; + if (firstDataEvent.type !== 'data') { + throw new Error('Expected a shell data event.'); + } + const chunk = firstDataEvent.chunk as AnsiOutput; + expect(chunk.map((line) => line[0]?.text).filter(Boolean)).toEqual([ + 'abcd', + 'efgh', + ]); + }); + it('should handle 
multi-line output correctly when showColor is false', async () => { await simulateExecution( 'ls --color=auto', diff --git a/packages/core/src/services/shellExecutionService.ts b/packages/core/src/services/shellExecutionService.ts index 113ddb5a2..06a628881 100644 --- a/packages/core/src/services/shellExecutionService.ts +++ b/packages/core/src/services/shellExecutionService.ts @@ -17,6 +17,7 @@ import { getShellConfiguration } from '../utils/shell-utils.js'; import pkg from '@xterm/headless'; import { serializeTerminalToObject, + serializeTerminalToText, type AnsiOutput, } from '../utils/terminalSerializer.js'; import { normalizePathEnvForWindows } from '../utils/windowsPath.js'; @@ -135,17 +136,6 @@ const isExpectedPtyExitRaceError = (error: unknown): boolean => { ); }; -const getFullBufferText = (terminal: pkg.Terminal): string => { - const buffer = terminal.buffer.active; - const lines: string[] = []; - for (let i = 0; i < buffer.length; i++) { - const line = buffer.getLine(i); - const lineContent = line ? 
line.translateToString(true) : ''; - lines.push(lineContent); - } - return lines.join('\n').trimEnd(); -}; - const replayTerminalOutput = async ( output: string, cols: number, @@ -163,7 +153,90 @@ const replayTerminalOutput = async ( replayTerminal.write(output, () => resolve()); }); - return getFullBufferText(replayTerminal); + return serializeTerminalToText(replayTerminal); +}; + +const getLastNonEmptyAnsiLineIndex = (output: AnsiOutput): number => { + for (let i = output.length - 1; i >= 0; i--) { + const line = output[i]; + if ( + line + .map((segment) => segment.text) + .join('') + .trim().length > 0 + ) { + return i; + } + } + + return -1; +}; + +const trimTrailingEmptyAnsiLines = (output: AnsiOutput): AnsiOutput => + output.slice(0, getLastNonEmptyAnsiLineIndex(output) + 1); + +const areAnsiOutputsEqual = ( + left: AnsiOutput | null, + right: AnsiOutput, +): boolean => { + if (!left || left.length !== right.length) { + return false; + } + + return left.every((leftLine, lineIndex) => { + const rightLine = right[lineIndex]; + if (leftLine.length !== rightLine.length) { + return false; + } + + return leftLine.every((leftToken, tokenIndex) => { + const rightToken = rightLine[tokenIndex]; + return ( + leftToken.text === rightToken.text && + leftToken.bold === rightToken.bold && + leftToken.italic === rightToken.italic && + leftToken.underline === rightToken.underline && + leftToken.dim === rightToken.dim && + leftToken.inverse === rightToken.inverse && + leftToken.fg === rightToken.fg && + leftToken.bg === rightToken.bg + ); + }); + }); +}; + +const createPlainAnsiLine = (text: string) => [ + { + text, + bold: false, + italic: false, + underline: false, + dim: false, + inverse: false, + fg: '', + bg: '', + }, +]; + +const serializePlainViewportToAnsiOutput = ( + terminal: pkg.Terminal, + unwrapWrappedLines = false, +): AnsiOutput => { + const buffer = terminal.buffer.active; + const lines: AnsiOutput = []; + + for (let y = 0; y < terminal.rows; y++) { + const 
line = buffer.getLine(buffer.viewportY + y); + const lineContent = line ? line.translateToString(true) : ''; + + if (unwrapWrappedLines && line?.isWrapped && lines.length > 0) { + lines[lines.length - 1][0].text += lineContent; + } else { + lines.push(createPlainAnsiLine(lineContent)); + } + } + + return lines; }; interface ProcessCleanupStrategy { @@ -536,7 +609,7 @@ export class ShellExecutionService { let processingChain = Promise.resolve(); let decoder: TextDecoder | null = null; - let output: string | AnsiOutput | null = null; + let outputComparison: AnsiOutput | null = null; const outputChunks: Buffer[] = []; const error: Error | null = null; let exited = false; @@ -558,7 +631,7 @@ export class ShellExecutionService { if (!shellExecutionConfig.disableDynamicLineTrimming) { if (!hasStartedOutput) { - const bufferText = getFullBufferText(headlessTerminal); + const bufferText = serializeTerminalToText(headlessTerminal); if (bufferText.trim().length === 0) { return; } @@ -567,53 +640,36 @@ export class ShellExecutionService { } let newOutput: AnsiOutput; + let newOutputComparison: AnsiOutput; if (shellExecutionConfig.showColor) { newOutput = serializeTerminalToObject(headlessTerminal); + newOutputComparison = serializeTerminalToObject( + headlessTerminal, + 0, + { unwrapWrappedLines: true }, + ); } else { - const buffer = headlessTerminal.buffer.active; - const lines: AnsiOutput = []; - for (let y = 0; y < headlessTerminal.rows; y++) { - const line = buffer.getLine(buffer.viewportY + y); - const lineContent = line ? 
line.translateToString(true) : ''; - lines.push([ - { - text: lineContent, - bold: false, - italic: false, - underline: false, - dim: false, - inverse: false, - fg: '', - bg: '', - }, - ]); - } - newOutput = lines; + newOutput = serializePlainViewportToAnsiOutput(headlessTerminal); + newOutputComparison = serializePlainViewportToAnsiOutput( + headlessTerminal, + true, + ); } - let lastNonEmptyLine = -1; - for (let i = newOutput.length - 1; i >= 0; i--) { - const line = newOutput[i]; - if ( - line - .map((segment) => segment.text) - .join('') - .trim().length > 0 - ) { - lastNonEmptyLine = i; - break; - } - } - - const trimmedOutput = newOutput.slice(0, lastNonEmptyLine + 1); + const trimmedOutput = trimTrailingEmptyAnsiLines(newOutput); + const trimmedOutputComparison = + trimTrailingEmptyAnsiLines(newOutputComparison); const finalOutput = shellExecutionConfig.disableDynamicLineTrimming ? newOutput : trimmedOutput; + const finalOutputComparison = shellExecutionConfig + .disableDynamicLineTrimming + ? newOutputComparison + : trimmedOutputComparison; - // Using stringify for a quick deep comparison. - if (JSON.stringify(output) !== JSON.stringify(finalOutput)) { - output = finalOutput; + if (!areAnsiOutputsEqual(outputComparison, finalOutputComparison)) { + outputComparison = finalOutputComparison; onOutputEvent({ type: 'data', chunk: finalOutput, @@ -759,11 +815,11 @@ export class ShellExecutionService { rows, ); } else { - fullOutput = getFullBufferText(headlessTerminal); + fullOutput = serializeTerminalToText(headlessTerminal); } } catch { try { - fullOutput = getFullBufferText(headlessTerminal); + fullOutput = serializeTerminalToText(headlessTerminal); } catch { // Ignore fallback rendering errors and resolve with empty text. 
} diff --git a/packages/core/src/utils/terminalSerializer.test.ts b/packages/core/src/utils/terminalSerializer.test.ts index fd6241d04..fe3123bef 100644 --- a/packages/core/src/utils/terminalSerializer.test.ts +++ b/packages/core/src/utils/terminalSerializer.test.ts @@ -8,6 +8,7 @@ import { describe, it, expect } from 'vitest'; import { Terminal } from '@xterm/headless'; import { serializeTerminalToObject, + serializeTerminalToText, convertColorToHex, ColorMode, } from './terminalSerializer.js'; @@ -171,7 +172,80 @@ describe('terminalSerializer', () => { expect(result[0][0].bg).toBe('#008000'); expect(result[0][0].text).toBe('Styled text'); }); + + it('can unwrap soft-wrapped ANSI rows for live output comparison', async () => { + const terminal = new Terminal({ + cols: 8, + rows: 4, + allowProposedApi: true, + scrollback: 100, + convertEol: true, + }); + + await writeToTerminal(terminal, 'abcdefghijkl\nshort\n'); + + const result = serializeTerminalToObject(terminal, 0, { + unwrapWrappedLines: true, + }); + const visibleText = result + .map((line) => line.map((token) => token.text).join('').trimEnd()) + .filter(Boolean); + + expect(visibleText).toEqual(['abcdefghijkl', 'short']); + expect(result[0]).toHaveLength(1); + }); }); + + describe('serializeTerminalToText', () => { + it('unwraps soft-wrapped narrow terminal lines for transcript text', async () => { + const terminal = new Terminal({ + cols: 10, + rows: 4, + allowProposedApi: true, + scrollback: 100, + convertEol: true, + }); + + await writeToTerminal(terminal, 'abcdefghijklmnopqrstuvwxyz\n'); + + expect(serializeTerminalToText(terminal)).toBe( + 'abcdefghijklmnopqrstuvwxyz', + ); + }); + + it('keeps explicit newlines while unwrapping visual continuation rows', async () => { + const terminal = new Terminal({ + cols: 8, + rows: 4, + allowProposedApi: true, + scrollback: 100, + convertEol: true, + }); + + await writeToTerminal(terminal, 'abcdefghijkl\nshort\n'); + + 
expect(serializeTerminalToText(terminal)).toBe('abcdefghijkl\nshort'); + }); + + it('does not treat resize reflow as duplicated transcript lines', async () => { + const terminal = new Terminal({ + cols: 12, + rows: 4, + allowProposedApi: true, + scrollback: 100, + convertEol: true, + }); + + await writeToTerminal(terminal, 'abcdefghijklmnopqrstuvwxyz\n123456\n'); + terminal.resize(6, 4); + await writeToTerminal(terminal, 'done\n'); + + expect(serializeTerminalToText(terminal)).toBe( + 'abcdefghijklmnopqrstuvwxyz\n123456\ndone', + ); + }); + }); + describe('convertColorToHex', () => { it('should convert RGB color to hex', () => { const color = (100 << 16) | (200 << 8) | 50; diff --git a/packages/core/src/utils/terminalSerializer.ts b/packages/core/src/utils/terminalSerializer.ts index e12fe25aa..94f85a5c3 100644 --- a/packages/core/src/utils/terminalSerializer.ts +++ b/packages/core/src/utils/terminalSerializer.ts @@ -5,6 +5,7 @@ */ import type { IBufferCell, Terminal } from '@xterm/headless'; + export interface AnsiToken { text: string; bold: boolean; @@ -19,6 +20,49 @@ export interface AnsiToken { export type AnsiLine = AnsiToken[]; export type AnsiOutput = AnsiLine[]; +export interface SerializeTerminalToObjectOptions { + unwrapWrappedLines?: boolean; +} + +const canMergeAnsiTokens = (left: AnsiToken, right: AnsiToken): boolean => + left.bold === right.bold && + left.italic === right.italic && + left.underline === right.underline && + left.dim === right.dim && + left.inverse === right.inverse && + left.fg === right.fg && + left.bg === right.bg; + +const appendAnsiLineTokens = (target: AnsiLine, source: AnsiLine) => { + for (const token of source) { + const previous = target[target.length - 1]; + if (previous && canMergeAnsiTokens(previous, token)) { + previous.text += token.text; + } else { + target.push({ ...token }); + } + } +}; + +export function serializeTerminalToText(terminal: Terminal): string { + const buffer = terminal.buffer.active; + const lines: 
string[] = []; + + for (let i = 0; i < buffer.length; i++) { + const line = buffer.getLine(i); + const lineContent = line ? line.translateToString(true) : ''; + + if (line?.isWrapped && lines.length > 0) { + lines[lines.length - 1] += lineContent; + continue; + } + + lines.push(lineContent); + } + + return lines.join('\n').trimEnd(); +} + const enum Attribute { inverse = 1, bold = 2, @@ -134,6 +178,7 @@ class Cell { export function serializeTerminalToObject( terminal: Terminal, scrollOffset: number = 0, + options: SerializeTerminalToObjectOptions = {}, ): AnsiOutput { const buffer = terminal.buffer.active; const defaultFg = ''; @@ -199,7 +244,11 @@ export function serializeTerminalToObject( currentLine.push(token); } - result.push(currentLine); + if (options.unwrapWrappedLines && line.isWrapped && result.length > 0) { + appendAnsiLineTokens(result[result.length - 1], currentLine); + } else { + result.push(currentLine); + } } return result; From e937d15b904291fa2d41acbcb02664fe11777273 Mon Sep 17 00:00:00 2001 From: Shaojin Wen Date: Sat, 25 Apr 2026 11:07:30 +0800 Subject: [PATCH 04/36] fix(cli): drain runExitCleanup before process.exit in error handlers (#3602) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit handleError / handleCancellationError / handleMaxTurnsExceededError all called process.exit synchronously, bypassing the caller's runExitCleanup -> Config.shutdown -> chat-recording flush() chain on SIGINT, max-turn, and fatal-error paths. Same family as the EPIPE bypass fixed in bf24fff1f, just on a different code path. Makes the three handlers async and routes the exit through a shared exitAfterCleanup() helper that awaits runExitCleanup() before the actual process.exit. 
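The shared exitAfterCleanup() helper this message describes reduces to a small drain-once latch. A hedged sketch, outside the patch's codebase: the names registerCleanup and exitAfterCleanup mirror the patch, but here the exit function is injected (standing in for process.exit) so the behavior is observable in a test instead of terminating the process.

```typescript
type Cleanup = () => void | Promise<void>;

const cleanups: Cleanup[] = [];
let exiting = false;

function registerCleanup(fn: Cleanup): void {
  cleanups.push(fn);
}

// `exitFn` stands in for process.exit: it is expected to throw (in tests)
// or terminate the process (for real), so control never returns normally.
async function exitAfterCleanup(
  code: number,
  exitFn: (code: number) => void,
): Promise<never> {
  if (exiting) {
    // A concurrent second caller parks on a promise that never settles;
    // in the real process it is killed when the first caller's exit fires.
    return new Promise<never>(() => {});
  }
  exiting = true;
  for (const fn of cleanups.splice(0)) {
    await fn();
  }
  exitFn(code);
  return new Promise<never>(() => {});
}
```

Calling it twice concurrently runs the cleanup list exactly once and "exits" exactly once; the losing caller's promise simply never resolves, which is why the regression test below waits a few milliseconds and then asserts the second handler still has not settled.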
The helper carries an exit-once latch so a SIGINT racing a stream rejection (handleCancellationError + handleError fired concurrently) doesn't end up running cleanup twice or interleaving exit calls — only the first caller drains and exits, the second parks on a never-resolving promise that's killed when process.exit fires. Text-mode handleError still throws to the caller (unchanged behavior), but now drains the queue first so the unhandled-rejection path doesn't lose chat-recording records. Five call sites in nonInteractiveCli.ts updated to await. Existing 11 errors.test.ts cases adapted to async + rejects.toThrow; added 5 new regression guards covering cleanup-before-exit ordering for each handler plus the concurrent-handler race. Co-authored-by: wenshao --- packages/cli/src/nonInteractiveCli.test.ts | 8 + packages/cli/src/nonInteractiveCli.ts | 10 +- packages/cli/src/utils/errors.test.ts | 207 ++++++++++++++++----- packages/cli/src/utils/errors.ts | 51 ++++- 4 files changed, 218 insertions(+), 58 deletions(-) diff --git a/packages/cli/src/nonInteractiveCli.test.ts b/packages/cli/src/nonInteractiveCli.test.ts index c1f8ab9ba..ecda79ff9 100644 --- a/packages/cli/src/nonInteractiveCli.test.ts +++ b/packages/cli/src/nonInteractiveCli.test.ts @@ -28,6 +28,8 @@ import { vi, type Mock, type MockInstance } from 'vitest'; import type { LoadedSettings } from './config/settings.js'; import { CommandKind, type ExecutionMode } from './ui/commands/types.js'; import { filterCommandsForMode } from './services/commandUtils.js'; +import { _resetCleanupFunctionsForTest } from './utils/cleanup.js'; +import { _resetExitLatchForTest } from './utils/errors.js'; // Mock core modules vi.mock('./ui/hooks/atCommandProcessor.js'); @@ -79,6 +81,12 @@ describe('runNonInteractive', () => { let mockGetDebugResponses: Mock; beforeEach(async () => { + // Reset module-level state from any prior test in this file. 
Without + // these resets the once-set exit latch parks subsequent JSON-mode + // handleError tests in the never-resolving promise (5s vitest timeout). + _resetCleanupFunctionsForTest(); + _resetExitLatchForTest(); + mockCoreExecuteToolCall = vi.mocked(executeToolCall); mockShutdownTelemetry = vi.mocked(shutdownTelemetry); mockGetCommandsForMode.mockImplementation((mode: ExecutionMode) => diff --git a/packages/cli/src/nonInteractiveCli.ts b/packages/cli/src/nonInteractiveCli.ts index 23965c7d9..1541660b2 100644 --- a/packages/cli/src/nonInteractiveCli.ts +++ b/packages/cli/src/nonInteractiveCli.ts @@ -349,7 +349,7 @@ export async function runNonInteractive( config.getMaxSessionTurns() >= 0 && turnCount > config.getMaxSessionTurns() ) { - handleMaxTurnsExceededError(config); + await handleMaxTurnsExceededError(config); } const toolCallRequests: ToolCallRequestInfo[] = []; @@ -372,7 +372,7 @@ export async function runNonInteractive( for await (const event of responseStream) { if (abortController.signal.aborted) { - handleCancellationError(config); + await handleCancellationError(config); } // Use adapter for all event processing adapter.processEvent(event); @@ -498,7 +498,7 @@ export async function runNonInteractive( config.getMaxSessionTurns() >= 0 && turnCount > config.getMaxSessionTurns() ) { - handleMaxTurnsExceededError(config); + await handleMaxTurnsExceededError(config); } const inputFormat = @@ -711,7 +711,7 @@ export async function runNonInteractive( while (localQueue.length > 0) { emitNotificationToSdk(localQueue.shift()!); } - handleCancellationError(config); + await handleCancellationError(config); } await drainLocalQueue(); const running = registry.getRunning(); @@ -766,7 +766,7 @@ export async function runNonInteractive( usage, stats, }); - handleError(error, config); + await handleError(error, config); } finally { const reg = config.getBackgroundTaskRegistry(); reg.setNotificationCallback(undefined); diff --git a/packages/cli/src/utils/errors.test.ts 
b/packages/cli/src/utils/errors.test.ts index 1dfa0deda..b899ab1b5 100644 --- a/packages/cli/src/utils/errors.test.ts +++ b/packages/cli/src/utils/errors.test.ts @@ -12,12 +12,14 @@ import { ToolErrorType, } from '@qwen-code/qwen-code-core'; import { + _resetExitLatchForTest, getErrorMessage, handleError, handleToolError, handleCancellationError, handleMaxTurnsExceededError, } from './errors.js'; +import { _resetCleanupFunctionsForTest, registerCleanup } from './cleanup.js'; const mockWriteStderrLine = vi.hoisted(() => vi.fn()); const debugLoggerSpy = vi.hoisted(() => ({ @@ -99,6 +101,8 @@ describe('errors', () => { debugLoggerSpy.info.mockClear(); debugLoggerSpy.warn.mockClear(); debugLoggerSpy.error.mockClear(); + _resetCleanupFunctionsForTest(); + _resetExitLatchForTest(); // Mock process.stderr.write processStderrWriteSpy = vi @@ -176,24 +180,24 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.TEXT); }); - it('should log error message and re-throw', () => { + it('should log error message and re-throw', async () => { const testError = new Error('Test error'); - expect(() => { - handleError(testError, mockConfig); - }).toThrow(testError); + await expect(handleError(testError, mockConfig)).rejects.toThrow( + testError, + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( 'API Error: Test error', ); }); - it('should handle non-Error objects', () => { + it('should handle non-Error objects', async () => { const testError = 'String error'; - expect(() => { - handleError(testError, mockConfig); - }).toThrow(testError); + await expect(handleError(testError, mockConfig)).rejects.toThrow( + testError, + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( 'API Error: String error', @@ -208,12 +212,12 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.JSON); }); - it('should format error as JSON and exit with default code', () => { + it('should format error as JSON and exit with default code', async () => { const testError = new Error('Test error'); 
- expect(() => { - handleError(testError, mockConfig); - }).toThrow('process.exit called with code: 1'); + await expect(handleError(testError, mockConfig)).rejects.toThrow( + 'process.exit called with code: 1', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -230,12 +234,12 @@ describe('errors', () => { ); }); - it('should use custom error code when provided', () => { + it('should use custom error code when provided', async () => { const testError = new Error('Test error'); - expect(() => { - handleError(testError, mockConfig, 42); - }).toThrow('process.exit called with code: 42'); + await expect(handleError(testError, mockConfig, 42)).rejects.toThrow( + 'process.exit called with code: 42', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -252,12 +256,12 @@ describe('errors', () => { ); }); - it('should extract exitCode from FatalError instances', () => { + it('should extract exitCode from FatalError instances', async () => { const fatalError = new FatalInputError('Fatal error'); - expect(() => { - handleError(fatalError, mockConfig); - }).toThrow('process.exit called with code: 42'); + await expect(handleError(fatalError, mockConfig)).rejects.toThrow( + 'process.exit called with code: 42', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -274,26 +278,26 @@ describe('errors', () => { ); }); - it('should handle error with code property', () => { + it('should handle error with code property', async () => { const errorWithCode = new Error('Error with code') as Error & { code: number; }; errorWithCode.code = 404; - expect(() => { - handleError(errorWithCode, mockConfig); - }).toThrow('process.exit called with code: 404'); + await expect(handleError(errorWithCode, mockConfig)).rejects.toThrow( + 'process.exit called with code: 404', + ); }); - it('should handle error with status property', () => { + it('should handle error with status property', async () => { const errorWithStatus = new 
Error('Error with status') as Error & { status: string; }; errorWithStatus.status = 'TIMEOUT'; - expect(() => { - handleError(errorWithStatus, mockConfig); - }).toThrow('process.exit called with code: 1'); // string codes become 1 + await expect(handleError(errorWithStatus, mockConfig)).rejects.toThrow( + 'process.exit called with code: 1', // string codes become 1 + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -590,10 +594,10 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.TEXT); }); - it('should log cancellation message and exit with 130', () => { - expect(() => { - handleCancellationError(mockConfig); - }).toThrow('process.exit called with code: 130'); + it('should log cancellation message and exit with 130', async () => { + await expect(handleCancellationError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 130', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( 'Operation cancelled.', @@ -608,10 +612,10 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.JSON); }); - it('should format cancellation as JSON and exit with 130', () => { - expect(() => { - handleCancellationError(mockConfig); - }).toThrow('process.exit called with code: 130'); + it('should format cancellation as JSON and exit with 130', async () => { + await expect(handleCancellationError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 130', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -638,10 +642,10 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.TEXT); }); - it('should log max turns message and exit with 53', () => { - expect(() => { - handleMaxTurnsExceededError(mockConfig); - }).toThrow('process.exit called with code: 53'); + it('should log max turns message and exit with 53', async () => { + await expect(handleMaxTurnsExceededError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 53', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( 'Reached max 
session turns for this session. Increase the number of turns by specifying maxSessionTurns in settings.json.', @@ -656,10 +660,10 @@ describe('errors', () => { ).mockReturnValue(OutputFormat.JSON); }); - it('should format max turns error as JSON and exit with 53', () => { - expect(() => { - handleMaxTurnsExceededError(mockConfig); - }).toThrow('process.exit called with code: 53'); + it('should format max turns error as JSON and exit with 53', async () => { + await expect(handleMaxTurnsExceededError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 53', + ); expect(mockWriteStderrLine).toHaveBeenCalledWith( JSON.stringify( @@ -678,4 +682,119 @@ describe('errors', () => { }); }); }); + + describe('cleanup-before-exit invariant', () => { + // Regression: previously these handlers called process.exit synchronously, + // bypassing the caller's runExitCleanup → flush() chain on SIGINT, max- + // turn, and fatal-error paths. Same family as the EPIPE/process.exit + // bug fixed for stdout in nonInteractiveCli. 
+ it('handleCancellationError drains registered cleanups before exit', async () => { + const cleanupOrder: string[] = []; + registerCleanup(() => { + cleanupOrder.push('cleanup'); + }); + processExitSpy.mockImplementation((code) => { + cleanupOrder.push(`exit:${code}`); + throw new Error(`process.exit called with code: ${code}`); + }); + + await expect(handleCancellationError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 130', + ); + + expect(cleanupOrder).toEqual(['cleanup', 'exit:130']); + }); + + it('handleMaxTurnsExceededError drains registered cleanups before exit', async () => { + const cleanupOrder: string[] = []; + registerCleanup(() => { + cleanupOrder.push('cleanup'); + }); + processExitSpy.mockImplementation((code) => { + cleanupOrder.push(`exit:${code}`); + throw new Error(`process.exit called with code: ${code}`); + }); + + await expect(handleMaxTurnsExceededError(mockConfig)).rejects.toThrow( + 'process.exit called with code: 53', + ); + + expect(cleanupOrder).toEqual(['cleanup', 'exit:53']); + }); + + it('handleError drains registered cleanups before exit (JSON mode)', async () => { + (mockConfig.getOutputFormat as ReturnType<typeof vi.fn>).mockReturnValue( + OutputFormat.JSON, + ); + const cleanupOrder: string[] = []; + registerCleanup(() => { + cleanupOrder.push('cleanup'); + }); + processExitSpy.mockImplementation((code) => { + cleanupOrder.push(`exit:${code}`); + throw new Error(`process.exit called with code: ${code}`); + }); + + await expect(handleError(new Error('boom'), mockConfig)).rejects.toThrow( + 'process.exit called with code: 1', + ); + + expect(cleanupOrder).toEqual(['cleanup', 'exit:1']); + }); + + it('a second terminating handler does not race the first into double-exit', async () => { + // Models the real concurrency: SIGINT → handleCancellationError fires + // while a stream rejection lands in the catch → handleError(JSON). 
+ // Without the exit-once latch we'd get duplicate cleanup runs + + // duplicate process.exit calls + interleaved stderr writes. + // (Text-mode handleError throws instead of exiting, so it isn't part + // of the race — the latch lives on the exit path.) + (mockConfig.getOutputFormat as ReturnType<typeof vi.fn>).mockReturnValue( + OutputFormat.JSON, + ); + + let exitCalls = 0; + processExitSpy.mockImplementation((code) => { + exitCalls += 1; + throw new Error(`process.exit called with code: ${code}`); + }); + + const first = handleCancellationError(mockConfig); + const second = handleError(new Error('boom'), mockConfig); + + await expect(first).rejects.toThrow('process.exit called with code: 130'); + + // The second handler is parked in the latch's unresolved promise. + let secondSettled = false; + void second.then( + () => { + secondSettled = true; + }, + () => { + secondSettled = true; + }, + ); + await new Promise((r) => setTimeout(r, 20)); + + expect(exitCalls).toBe(1); + expect(secondSettled).toBe(false); + }); + + it('handleError drains registered cleanups before re-throw (text mode)', async () => { + // Text mode re-throws to the caller; we still want the queue drained + // first so the unhandled-rejection path doesn't lose records. 
+ (mockConfig.getOutputFormat as ReturnType<typeof vi.fn>).mockReturnValue( + OutputFormat.TEXT, + ); + const events: string[] = []; + registerCleanup(() => { + events.push('cleanup'); + }); + + const original = new Error('boom'); + await expect(handleError(original, mockConfig)).rejects.toBe(original); + + expect(events).toEqual(['cleanup']); + }); + }); }); diff --git a/packages/cli/src/utils/errors.ts b/packages/cli/src/utils/errors.ts index 598f535b7..76d95fcfa 100644 --- a/packages/cli/src/utils/errors.ts +++ b/packages/cli/src/utils/errors.ts @@ -14,6 +14,7 @@ import { ToolErrorType, createDebugLogger, } from '@qwen-code/qwen-code-core'; +import { runExitCleanup } from './cleanup.js'; import { writeStderrLine } from './stdioHelpers.js'; const debugLogger = createDebugLogger('CLI_ERRORS'); @@ -81,16 +82,45 @@ function getNumericExitCode(errorCode: string | number): number { return typeof errorCode === 'number' ? errorCode : 1; } +/** + * Drains pending cleanup before terminating. Routing every "we're about + * to die" path through here keeps async exit-side I/O (chat-recording + * flush, telemetry shutdown, MCP disconnect) from being skipped — the + * earlier sync writes were inherently bounded so a bare `process.exit` + * was safe; with the async-jsonl change it is not. + */ +// Guards against double-entry when two terminating paths race (e.g. SIGINT +// fires `handleCancellationError` while a stream rejection routes through +// `handleError`): only the first caller drains cleanup + exits; the second +// suspends forever in the unresolved promise and gets killed when the first +// caller's process.exit fires. 
+let exiting = false; + +async function exitAfterCleanup(code: number): Promise<never> { + if (exiting) return new Promise<never>(() => {}); + exiting = true; + await runExitCleanup(); + // `return` so process.exit's `never` narrows the function's terminating + // statement — without it TS reports "function returning 'never' cannot + // have a reachable end point" because await doesn't propagate `never`. + return process.exit(code); +} + +/** Test-only — reset the exit-once latch between cases. */ +export function _resetExitLatchForTest(): void { + exiting = false; +} + /** * Handles errors consistently for both JSON and text output formats. * In JSON mode, outputs formatted JSON error and exits. * In text mode, outputs error message and re-throws. */ -export function handleError( +export async function handleError( error: unknown, config: Config, customErrorCode?: string | number, -): never { +): Promise<never> { const errorMessage = parseAndFormatApiError( error, config.getContentGeneratorConfig()?.authType, @@ -106,9 +136,12 @@ export function handleError( ); writeStderrLine(formattedError); - process.exit(getNumericExitCode(errorCode)); + return exitAfterCleanup(getNumericExitCode(errorCode)); } else { writeStderrLine(errorMessage); + // Drain queued writes before re-throwing so the unhandled rejection + // path doesn't lose chat-recording records that are still in the queue. + await runExitCleanup(); throw error; } } @@ -155,7 +188,7 @@ export function handleToolError( /** * Handles cancellation/abort signals consistently. 
*/ -export function handleCancellationError(config: Config): never { +export async function handleCancellationError(config: Config): Promise<never> { const cancellationError = new FatalCancellationError('Operation cancelled.'); if (config.getOutputFormat() === OutputFormat.JSON) { @@ -166,17 +199,18 @@ export function handleCancellationError(config: Config): never { ); writeStderrLine(formattedError); - process.exit(cancellationError.exitCode); } else { writeStderrLine(cancellationError.message); - process.exit(cancellationError.exitCode); } + return exitAfterCleanup(cancellationError.exitCode); } /** * Handles max session turns exceeded consistently. */ -export function handleMaxTurnsExceededError(config: Config): never { +export async function handleMaxTurnsExceededError( + config: Config, +): Promise<never> { const maxTurnsError = new FatalTurnLimitedError( 'Reached max session turns for this session. Increase the number of turns by specifying maxSessionTurns in settings.json.', ); @@ -189,9 +223,8 @@ export function handleMaxTurnsExceededError(config: Config): never { ); writeStderrLine(formattedError); - process.exit(maxTurnsError.exitCode); } else { writeStderrLine(maxTurnsError.message); - process.exit(maxTurnsError.exitCode); } + return exitAfterCleanup(maxTurnsError.exitCode); } From c406c73509d713a0096aa545b33296043beaca0e Mon Sep 17 00:00:00 2001 From: jinye Date: Sat, 25 Apr 2026 22:12:29 +0800 Subject: [PATCH 05/36] feat(cli): add conversation rewind feature with double-ESC and /rewind command (#3441) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(cli): add conversation rewind feature with double-ESC and /rewind command (#3186) Add the ability to rewind conversation to a previous user turn, similar to Claude Code's message selector. 
Users can trigger rewind via: - Double-ESC on empty prompt while idle - /rewind (or /rollback) slash command The RewindSelector component provides a two-phase UI: a scrollable pick-list of user turns followed by a confirmation dialog. On confirm, both UI history and API history are truncated consistently, the terminal is re-rendered, and the original prompt text is pre-populated in the input for editing. Key implementation details: - historyMapping.ts correctly handles tool-call loops (functionResponse entries) and the startup context pair when mapping UI turns to API Content[] indices - useDoublePress hook provides generic double-press detection with 800ms timeout and proper cleanup on unmount - ESC handler guards against WaitingForConfirmation state to prevent accidental rewind during tool approval - Chat recording service records rewind events with tree-branching via parentUuid for session replay support Closes #3186 Co-authored-by: Qwen-Coder * fix: call recordRewind() in handleRewindConfirm and simplify payload - Actually invoke chatRecordingService.recordRewind() after rewind - Remove tree-branching from recordRewind (no UI-to-recording UUID mapping exists yet) to avoid corrupting the parentUuid chain - Simplify RewindRecordPayload to just truncatedCount Co-authored-by: Qwen-Coder * test: add tmux-based E2E script for rewind feature Automated verification of all 5 manual test items from PR description: 1. /rewind command flow (pick turn, confirm, verify truncation) 2. Double-ESC opens selector (with btw dismiss handling) 3. ESC during streaming cancels (no rewind) 4. /rewind with no history (guard blocks) 5. After rewind, model ignores removed turns Co-authored-by: Qwen-Coder * fix(rewind): resolve resume persistence and IDE mode issues - chatRecordingService: add turnParentUuids tracking and rewindRecording() which re-roots the parentUuid chain so rewound messages land on a dead branch; reconstructHistory() then skips them automatically on resume. 
Add rebuildTurnBoundaries() for re-populating the index after /resume. - AppContainer: fix truncatedCount bug (was always 0 after loadHistory), wire handleRewindConfirm to rewindRecording() with correct targetTurnIndex, add config.getIdeMode() guard to openRewindSelector so rewind is disabled in IDE sessions where extra user Content entries break the API boundary mapping. - useResumeCommand: call rebuildTurnBoundaries() after startNewSession so rewind works correctly within resumed sessions. - resumeHistoryUtils: surface "Conversation rewound." info item when a rewind record is encountered during history reconstruction. - historyMapping.test.ts: add 9 unit tests for computeApiTruncationIndex covering normal flow, startup context pair, tool responses, and compression fallback. - Copyright headers: standardize new files to "Copyright 2025 Qwen Code". 🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code) * fix(rewind): close slash-command, compression, and IDE bypass holes Three bugs found by Codex review: 1. P1: `/rewind` slash command bypassed the IDE-mode guard because `slashCommandActions.openRewindSelector` called `setIsRewindSelectorOpen` directly. Fixed by introducing a ref bridge (`openRewindSelectorRef`) that delegates to the guarded callback. 2. P1: Slash-command invocations (`/help`, `/stats`, etc.) are stored as `type: 'user'` in UI history but never reach the API or recording service. The turn-index counter in `handleRewindConfirm` and `computeApiTruncationIndex` counted them, producing off-by-N errors. Added `isRealUserTurn()` helper that excludes items starting with `/` or `?`, applied in all three counting sites (AppContainer, historyMapping, RewindSelector). 3. P2: After chat compression, `computeApiTruncationIndex` returned `apiHistory.length` when the target turn was unreachable, silently keeping the full API history while the UI was truncated. 
Changed to return `-1`; `handleRewindConfirm` now aborts with an error message when the target turn was absorbed by compression. Tests: 14 unit tests for historyMapping (including slash-command and compression cases), full suite 616/616 passed. 🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code) --------- Co-authored-by: jinye.djy Co-authored-by: Qwen-Coder --- .../cli/src/services/BuiltinCommandLoader.ts | 2 + packages/cli/src/ui/AppContainer.tsx | 138 ++++- packages/cli/src/ui/commands/rewindCommand.ts | 22 + packages/cli/src/ui/commands/types.ts | 3 +- .../cli/src/ui/components/DialogManager.tsx | 11 + packages/cli/src/ui/components/Footer.tsx | 4 + .../cli/src/ui/components/RewindSelector.tsx | 328 ++++++++++++ .../cli/src/ui/contexts/UIActionsContext.tsx | 6 +- .../cli/src/ui/contexts/UIStateContext.tsx | 3 + .../cli/src/ui/hooks/slashCommandProcessor.ts | 4 + packages/cli/src/ui/hooks/useDoublePress.ts | 56 +++ .../cli/src/ui/hooks/useHistoryManager.ts | 12 +- .../cli/src/ui/hooks/useResumeCommand.test.ts | 1 + packages/cli/src/ui/hooks/useResumeCommand.ts | 4 + .../cli/src/ui/utils/historyMapping.test.ts | 243 +++++++++ packages/cli/src/ui/utils/historyMapping.ts | 126 +++++ .../cli/src/ui/utils/resumeHistoryUtils.ts | 3 + packages/core/src/core/client.ts | 5 + packages/core/src/core/geminiChat.ts | 4 + .../core/src/services/chatRecordingService.ts | 91 +++- scripts/test-rewind-e2e.sh | 473 ++++++++++++++++++ 21 files changed, 1533 insertions(+), 6 deletions(-) create mode 100644 packages/cli/src/ui/commands/rewindCommand.ts create mode 100644 packages/cli/src/ui/components/RewindSelector.tsx create mode 100644 packages/cli/src/ui/hooks/useDoublePress.ts create mode 100644 packages/cli/src/ui/utils/historyMapping.test.ts create mode 100644 packages/cli/src/ui/utils/historyMapping.ts create mode 100755 scripts/test-rewind-e2e.sh diff --git a/packages/cli/src/services/BuiltinCommandLoader.ts b/packages/cli/src/services/BuiltinCommandLoader.ts 
index cdde266b8..9d7fd4722 100644 --- a/packages/cli/src/services/BuiltinCommandLoader.ts +++ b/packages/cli/src/services/BuiltinCommandLoader.ts @@ -45,6 +45,7 @@ import { recapCommand } from '../ui/commands/recapCommand.js'; import { renameCommand } from '../ui/commands/renameCommand.js'; import { restoreCommand } from '../ui/commands/restoreCommand.js'; import { resumeCommand } from '../ui/commands/resumeCommand.js'; +import { rewindCommand } from '../ui/commands/rewindCommand.js'; import { settingsCommand } from '../ui/commands/settingsCommand.js'; import { skillsCommand } from '../ui/commands/skillsCommand.js'; import { statsCommand } from '../ui/commands/statsCommand.js'; @@ -126,6 +127,7 @@ export class BuiltinCommandLoader implements ICommandLoader { renameCommand, restoreCommand(this.config), resumeCommand, + rewindCommand, skillsCommand, statsCommand, summaryCommand, diff --git a/packages/cli/src/ui/AppContainer.tsx b/packages/cli/src/ui/AppContainer.tsx index 9df46b62c..5a3deeb27 100644 --- a/packages/cli/src/ui/AppContainer.tsx +++ b/packages/cli/src/ui/AppContainer.tsx @@ -74,6 +74,11 @@ import { useApprovalModeCommand } from './hooks/useApprovalModeCommand.js'; import { useResumeCommand } from './hooks/useResumeCommand.js'; import { useDeleteCommand } from './hooks/useDeleteCommand.js'; import { useSlashCommandProcessor } from './hooks/slashCommandProcessor.js'; +import { useDoublePress } from './hooks/useDoublePress.js'; +import { + computeApiTruncationIndex, + isRealUserTurn, +} from './utils/historyMapping.js'; import { useVimMode } from './contexts/VimModeContext.js'; import { CompactModeProvider } from './contexts/CompactModeContext.js'; import { useTerminalSize } from './hooks/useTerminalSize.js'; @@ -636,6 +641,12 @@ export const AppContainer = (props: AppContainerProps) => { const { isHooksDialogOpen, openHooksDialog, closeHooksDialog } = useHooksDialog(); + // Ref bridge: the guarded openRewindSelector callback is defined later + // (after 
useDoublePress), but slashCommandActions needs it now. The ref + // lets the useMemo capture a stable function pointer whose implementation + // is swapped in once the real callback exists. + const openRewindSelectorRef = useRef<() => void>(() => {}); + const slashCommandActions = useMemo( () => ({ openAuthDialog, @@ -664,6 +675,7 @@ export const AppContainer = (props: AppContainerProps) => { openMcpDialog, openHooksDialog, openResumeDialog, + openRewindSelector: () => openRewindSelectorRef.current(), handleResume, openDeleteDialog, }), @@ -1534,6 +1546,8 @@ export const AppContainer = (props: AppContainerProps) => { const [escapePressedOnce, setEscapePressedOnce] = useState(false); const escapeTimerRef = useRef(null); const dialogsVisibleRef = useRef(false); + const [isRewindSelectorOpen, setIsRewindSelectorOpen] = useState(false); + const [rewindEscPending, setRewindEscPending] = useState(false); const [constrainHeight, setConstrainHeight] = useState(true); const [ideContextState, setIdeContextState] = useState< IdeContext | undefined @@ -1581,6 +1595,98 @@ export const AppContainer = (props: AppContainerProps) => { setShowEscapePrompt(showPrompt); }, []); + // --- Rewind selector callbacks --- + const openRewindSelector = useCallback(() => { + if (streamingState !== StreamingState.Idle) return; + if (config.getIdeMode()) return; + if (dialogsVisibleRef.current) return; + const hasUserTurns = historyManager.history.some((h) => h.type === 'user'); + if (!hasUserTurns) return; + setIsRewindSelectorOpen(true); + }, [streamingState, config, historyManager.history]); + openRewindSelectorRef.current = openRewindSelector; + + const closeRewindSelector = useCallback(() => { + setIsRewindSelectorOpen(false); + }, []); + + const handleRewindConfirm = useCallback( + (userItem: HistoryItem) => { + const geminiClient = config.getGeminiClient(); + if (!geminiClient) return; + + // 1. 
Compute values from current history BEFORE truncation + const originalHistory = historyManager.history; + const originalLength = originalHistory.length; + + let targetTurnIndex = 0; + for (const h of originalHistory) { + if (h.id === userItem.id) break; + if (isRealUserTurn(h)) targetTurnIndex++; + } + + // 2. Compute API truncation point + const apiHistory = geminiClient.getHistory(); + const apiTruncateIndex = computeApiTruncationIndex( + originalHistory, + userItem.id, + apiHistory, + ); + + // Abort if the target turn is unreachable (e.g., absorbed by compression) + if (apiTruncateIndex < 0) { + historyManager.addItem( + { + type: 'error', + text: 'Cannot rewind to a turn that was compressed. Try a more recent turn.', + }, + Date.now(), + ); + setIsRewindSelectorOpen(false); + return; + } + + // 3. Truncate API history and strip stale thinking blocks + geminiClient.truncateHistory(apiTruncateIndex); + geminiClient.stripThoughtsFromHistory(); + + // 4. Truncate UI history (keep everything before the target item) + const truncatedUi = originalHistory.filter((h) => h.id < userItem.id); + historyManager.loadHistory(truncatedUi); + + // 5. Re-render the terminal + refreshStatic(); + + // 6. Pre-populate input with the original user text + if (userItem.type === 'user' && userItem.text) { + buffer.setText(userItem.text); + } + + // 7. Add info message + historyManager.addItem( + { + type: 'info', + text: 'Conversation rewound. Edit your prompt and press Enter to continue.', + }, + Date.now(), + ); + + // 8. Record the rewind event — re-roots the parentUuid chain so + // rewound messages end up on a dead branch during resume. + config.getChatRecordingService()?.rewindRecording(targetTurnIndex, { + truncatedCount: originalLength - truncatedUi.length, + }); + + // 9. 
Close the selector + setIsRewindSelectorOpen(false); + }, + [config, historyManager, refreshStatic, buffer], + ); + + const handleDoubleEscRewind = useDoublePress(openRewindSelector, (pending) => + setRewindEscPending(pending), + ); + const handleIdePromptComplete = useCallback( (result: IdeIntegrationNudgeResult) => { if (result.userSelection === 'yes') { @@ -1874,6 +1980,20 @@ export const AppContainer = (props: AppContainerProps) => { return; } + // Input is empty and idle — double-ESC opens rewind selector + if ( + streamingState === StreamingState.Idle && + !dialogsVisibleRef.current + ) { + if (escapeTimerRef.current) { + clearTimeout(escapeTimerRef.current); + escapeTimerRef.current = null; + } + setEscapePressedOnce(false); + handleDoubleEscRewind(); + return; + } + // No action available, reset the flag if (escapeTimerRef.current) { clearTimeout(escapeTimerRef.current); @@ -1970,6 +2090,7 @@ export const AppContainer = (props: AppContainerProps) => { compactMode, setCompactMode, refreshStatic, + handleDoubleEscRewind, ], ); @@ -2043,7 +2164,8 @@ export const AppContainer = (props: AppContainerProps) => { isApprovalModeDialogOpen || isResumeDialogOpen || isDeleteDialogOpen || - isExtensionsManagerDialogOpen; + isExtensionsManagerDialogOpen || + isRewindSelectorOpen; dialogsVisibleRef.current = dialogsVisible; // Drain queued messages when idle. 
`queueDrainNonce` re-fires the effect @@ -2210,6 +2332,9 @@ export const AppContainer = (props: AppContainerProps) => { // Prompt suggestion promptSuggestion, dismissPromptSuggestion, + // Rewind selector + isRewindSelectorOpen, + rewindEscPending, }), [ isThemeDialogOpen, @@ -2326,6 +2451,9 @@ export const AppContainer = (props: AppContainerProps) => { // Prompt suggestion promptSuggestion, dismissPromptSuggestion, + // Rewind selector + isRewindSelectorOpen, + rewindEscPending, ], ); @@ -2395,6 +2523,10 @@ export const AppContainer = (props: AppContainerProps) => { closeFeedbackDialog, temporaryCloseFeedbackDialog, submitFeedback, + // Rewind selector + openRewindSelector, + closeRewindSelector, + handleRewindConfirm, }), [ openThemeDialog, @@ -2459,6 +2591,10 @@ export const AppContainer = (props: AppContainerProps) => { closeFeedbackDialog, temporaryCloseFeedbackDialog, submitFeedback, + // Rewind selector + openRewindSelector, + closeRewindSelector, + handleRewindConfirm, ], ); diff --git a/packages/cli/src/ui/commands/rewindCommand.ts b/packages/cli/src/ui/commands/rewindCommand.ts new file mode 100644 index 000000000..07f78a1a5 --- /dev/null +++ b/packages/cli/src/ui/commands/rewindCommand.ts @@ -0,0 +1,22 @@ +/** + * @license + * Copyright 2026 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +import type { SlashCommand, SlashCommandActionReturn } from './types.js'; +import { CommandKind } from './types.js'; +import { t } from '../../i18n/index.js'; + +export const rewindCommand: SlashCommand = { + name: 'rewind', + altNames: ['rollback'], + get description() { + return t('Rewind conversation to a previous turn'); + }, + kind: CommandKind.BUILT_IN, + action: async (): Promise => ({ + type: 'dialog', + dialog: 'rewind', + }), +}; diff --git a/packages/cli/src/ui/commands/types.ts b/packages/cli/src/ui/commands/types.ts index 01e6d6564..2e3f8df62 100644 --- a/packages/cli/src/ui/commands/types.ts +++ b/packages/cli/src/ui/commands/types.ts @@ -182,7 
+182,8 @@ export interface OpenDialogActionReturn { | 'delete' | 'extensions_manage' | 'hooks' - | 'mcp'; + | 'mcp' + | 'rewind'; } /** diff --git a/packages/cli/src/ui/components/DialogManager.tsx b/packages/cli/src/ui/components/DialogManager.tsx index dc0c64d27..dc35c33d1 100644 --- a/packages/cli/src/ui/components/DialogManager.tsx +++ b/packages/cli/src/ui/components/DialogManager.tsx @@ -43,6 +43,7 @@ import { ExtensionsManagerDialog } from './extensions/ExtensionsManagerDialog.js import { MCPManagementDialog } from './mcp/MCPManagementDialog.js'; import { HooksManagementDialog } from './hooks/HooksManagementDialog.js'; import { SessionPicker } from './SessionPicker.js'; +import { RewindSelector } from './RewindSelector.js'; import { MemoryDialog } from './MemoryDialog.js'; import { t } from '../../i18n/index.js'; @@ -397,5 +398,15 @@ export const DialogManager = ({ ); } + if (uiState.isRewindSelectorOpen) { + return ( + + ); + } + return null; }; diff --git a/packages/cli/src/ui/components/Footer.tsx b/packages/cli/src/ui/components/Footer.tsx index 1debdcb9c..cb2e2f47f 100644 --- a/packages/cli/src/ui/components/Footer.tsx +++ b/packages/cli/src/ui/components/Footer.tsx @@ -100,6 +100,10 @@ export const Footer: React.FC = () => { {t('Press Ctrl+D again to exit.')} ) : uiState.showEscapePrompt ? ( {t('Press Esc again to clear.')} + ) : uiState.rewindEscPending ? ( + + {t('Press Esc again to rewind conversation.')} + ) : vimEnabled && vimMode === 'INSERT' ? ( -- INSERT -- ) : uiState.shellModeActive ? 
( diff --git a/packages/cli/src/ui/components/RewindSelector.tsx b/packages/cli/src/ui/components/RewindSelector.tsx new file mode 100644 index 000000000..5da006cad --- /dev/null +++ b/packages/cli/src/ui/components/RewindSelector.tsx @@ -0,0 +1,328 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useState, useMemo, useCallback } from 'react'; +import { Box, Text } from 'ink'; +import type { HistoryItem } from '../types.js'; +import { theme } from '../semantic-colors.js'; +import { useTerminalSize } from '../hooks/useTerminalSize.js'; +import { useKeypress } from '../hooks/useKeypress.js'; +import { truncateText } from '../utils/sessionPickerUtils.js'; +import { isRealUserTurn } from '../utils/historyMapping.js'; +import { t } from '../../i18n/index.js'; + +export interface RewindSelectorProps { + history: HistoryItem[]; + onRewind: (userItem: HistoryItem) => void; + onCancel: () => void; +} + +const MAX_VISIBLE_ITEMS = 7; + +/** + * Extract user-type items from UI history for the rewind pick list. + */ +function getUserTurns(history: HistoryItem[]): HistoryItem[] { + return history.filter(isRealUserTurn); +} + +interface TurnItemViewProps { + item: HistoryItem; + isSelected: boolean; + isFirst: boolean; + isLast: boolean; + showScrollUp: boolean; + showScrollDown: boolean; + maxPromptWidth: number; + turnNumber: number; +} + +function TurnItemView({ + item, + isSelected, + isFirst, + isLast, + showScrollUp, + showScrollDown, + maxPromptWidth, + turnNumber, +}: TurnItemViewProps): React.JSX.Element { + const showUpIndicator = isFirst && showScrollUp; + const showDownIndicator = isLast && showScrollDown; + + const prefix = isSelected + ? '› ' + : showUpIndicator + ? '↑ ' + : showDownIndicator + ? 
'↓ ' + : ' '; + + const promptText = item.text || '(empty prompt)'; + const truncatedPrompt = truncateText(promptText, maxPromptWidth); + + return ( + + + + {prefix} + + {`#${turnNumber} `} + + {truncatedPrompt} + + + + ); +} + +/** + * Two-phase rewind selector: + * 1. Pick list — choose which user turn to rewind to + * 2. Confirm — confirm the rewind action + */ +export function RewindSelector({ + history, + onRewind, + onCancel, +}: RewindSelectorProps) { + const { columns: width, rows: height } = useTerminalSize(); + const userTurns = useMemo(() => getUserTurns(history), [history]); + + const [selectedIndex, setSelectedIndex] = useState(userTurns.length - 1); + const [confirmItem, setConfirmItem] = useState(null); + + const boxWidth = width - 4; + const maxVisibleItems = Math.min(MAX_VISIBLE_ITEMS, userTurns.length); + + // Centered scroll offset + const scrollOffset = useMemo(() => { + if (userTurns.length <= maxVisibleItems) return 0; + const halfVisible = Math.floor(maxVisibleItems / 2); + let offset = selectedIndex - halfVisible; + offset = Math.max(0, offset); + offset = Math.min(userTurns.length - maxVisibleItems, offset); + return offset; + }, [userTurns.length, maxVisibleItems, selectedIndex]); + + const visibleTurns = useMemo( + () => userTurns.slice(scrollOffset, scrollOffset + maxVisibleItems), + [userTurns, scrollOffset, maxVisibleItems], + ); + const showScrollUp = scrollOffset > 0; + const showScrollDown = scrollOffset + maxVisibleItems < userTurns.length; + + const handleConfirmSelect = useCallback( + (confirmed: boolean) => { + if (confirmed && confirmItem) { + onRewind(confirmItem); + } else { + setConfirmItem(null); + } + }, + [confirmItem, onRewind], + ); + + // Pick-list key handler + useKeypress( + (key) => { + const { name, ctrl } = key; + + if (name === 'escape' || (ctrl && name === 'c')) { + onCancel(); + return; + } + + if (name === 'return') { + const selected = userTurns[selectedIndex]; + if (selected) { + setConfirmItem(selected); + 
} + return; + } + + if (name === 'up' || name === 'k') { + setSelectedIndex((prev) => Math.max(0, prev - 1)); + return; + } + + if (name === 'down' || name === 'j') { + setSelectedIndex((prev) => Math.min(userTurns.length - 1, prev + 1)); + return; + } + }, + { isActive: confirmItem === null }, + ); + + // Confirm key handler + useKeypress( + (key) => { + const { name, ctrl, sequence } = key; + + if (name === 'escape' || (ctrl && name === 'c')) { + setConfirmItem(null); + return; + } + + if (name === 'return' || sequence === 'y' || sequence === 'Y') { + handleConfirmSelect(true); + return; + } + + if (sequence === 'n' || sequence === 'N') { + handleConfirmSelect(false); + return; + } + }, + { isActive: confirmItem !== null }, + ); + + if (userTurns.length === 0) { + return ( + + + + + {t('No user turns to rewind to.')} + + + + + ); + } + + // Confirm phase + if (confirmItem) { + const promptPreview = truncateText( + confirmItem.text || '(empty)', + boxWidth - 10, + ); + return ( + + + + + {t('Rewind Conversation')} + + + + {'─'.repeat(boxWidth - 2)} + + + + {t('Rewind to: ')} + + {promptPreview} + + + + {t( + 'This will remove all conversation after this turn. 
The prompt will be pre-populated in the input for editing.', + )} + + + + {'─'.repeat(boxWidth - 2)} + + + + {t('Enter/Y to confirm · Esc/N to go back')} + + + + + ); + } + + // Pick-list phase + return ( + + + {/* Header */} + + + {t('Rewind Conversation')} + + + {' '} + {t('({{count}} turns)', { count: String(userTurns.length) })} + + + + {/* Separator */} + + {'─'.repeat(boxWidth - 2)} + + + {/* Turn list */} + + {visibleTurns.map((item, visibleIndex) => { + const actualIndex = scrollOffset + visibleIndex; + return ( + + ); + })} + + + {/* Separator */} + + {'─'.repeat(boxWidth - 2)} + + + {/* Footer */} + + + {t('↑↓ to navigate · Enter to select · Esc to cancel')} + + + + + ); +} diff --git a/packages/cli/src/ui/contexts/UIActionsContext.tsx b/packages/cli/src/ui/contexts/UIActionsContext.tsx index 5aac4e66a..c035e6421 100644 --- a/packages/cli/src/ui/contexts/UIActionsContext.tsx +++ b/packages/cli/src/ui/contexts/UIActionsContext.tsx @@ -17,7 +17,7 @@ import { } from '@qwen-code/qwen-code-core'; import { type SettingScope } from '../../config/settings.js'; import { type AlibabaStandardRegion } from '../../constants/alibabaStandardApiKey.js'; -import type { AuthState } from '../types.js'; +import type { AuthState, HistoryItem } from '../types.js'; import { type ArenaDialogType } from '../hooks/useArenaCommand.js'; // OpenAICredentials type (previously imported from OpenAIKeyPrompt) export interface OpenAICredentials { @@ -110,6 +110,10 @@ export interface UIActions { closeFeedbackDialog: () => void; temporaryCloseFeedbackDialog: () => void; submitFeedback: (rating: number) => void; + // Rewind selector + openRewindSelector: () => void; + closeRewindSelector: () => void; + handleRewindConfirm: (userItem: HistoryItem) => void; } export const UIActionsContext = createContext(null); diff --git a/packages/cli/src/ui/contexts/UIStateContext.tsx b/packages/cli/src/ui/contexts/UIStateContext.tsx index b2450ec2c..a51961128 100644 --- 
a/packages/cli/src/ui/contexts/UIStateContext.tsx +++ b/packages/cli/src/ui/contexts/UIStateContext.tsx @@ -158,6 +158,9 @@ export interface UIState { promptSuggestion: string | null; /** Dismiss prompt suggestion (clears state, aborts speculation) */ dismissPromptSuggestion: () => void; + // Rewind selector + isRewindSelectorOpen: boolean; + rewindEscPending: boolean; } export const UIStateContext = createContext(null); diff --git a/packages/cli/src/ui/hooks/slashCommandProcessor.ts b/packages/cli/src/ui/hooks/slashCommandProcessor.ts index 135ec7d11..fea4b12a0 100644 --- a/packages/cli/src/ui/hooks/slashCommandProcessor.ts +++ b/packages/cli/src/ui/hooks/slashCommandProcessor.ts @@ -101,6 +101,7 @@ interface SlashCommandProcessorActions { openExtensionsManagerDialog: () => void; openMcpDialog: () => void; openHooksDialog: () => void; + openRewindSelector: () => void; } /** @@ -628,6 +629,9 @@ export const useSlashCommandProcessor = ( case 'extensions_manage': actions.openExtensionsManagerDialog(); return { type: 'handled' }; + case 'rewind': + actions.openRewindSelector(); + return { type: 'handled' }; case 'help': return { type: 'handled' }; default: { diff --git a/packages/cli/src/ui/hooks/useDoublePress.ts b/packages/cli/src/ui/hooks/useDoublePress.ts new file mode 100644 index 000000000..89d6c1b97 --- /dev/null +++ b/packages/cli/src/ui/hooks/useDoublePress.ts @@ -0,0 +1,56 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useRef, useCallback, useEffect } from 'react'; + +const DOUBLE_PRESS_TIMEOUT_MS = 800; + +/** + * Generic double-press detection hook. + * + * Returns a callback that should be invoked on each press. On the first + * press, optionally calls `onPending(true)` and starts a timer. If a + * second press arrives within 800ms, calls `onDoublePress`. Otherwise, + * the pending state is cleared after the timeout. 
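The two-state timing behavior described above (first press arms an 800 ms window, second press within it fires the callback) can be modeled without React. This is a hedged, framework-free sketch — `DoublePressDetector` and the injected clock are illustrative names, not exports of this patch:

```typescript
// Hedged sketch: a framework-free model of the double-press window logic.
// The clock is injected so the window check is deterministic in tests;
// DoublePressDetector is an illustrative name, not part of this patch.
class DoublePressDetector {
  private lastPress: number | null = null;

  constructor(
    private readonly windowMs: number,
    private readonly now: () => number = Date.now,
  ) {}

  /** Call on each press; returns true when the press completes a pair. */
  press(): boolean {
    const t = this.now();
    if (this.lastPress !== null && t - this.lastPress <= this.windowMs) {
      this.lastPress = null; // pair consumed — next press starts fresh
      return true;
    }
    this.lastPress = t; // first press — arm the window
    return false;
  }
}

// Walkthrough with a fake clock (800 ms window, as in the hook):
let fakeTime = 0;
const esc = new DoublePressDetector(800, () => fakeTime);
const first = esc.press(); // false — arms the window
fakeTime = 500;
const second = esc.press(); // true — within 800 ms
fakeTime = 2000;
const late = esc.press(); // false — window expired, re-arms
```

The hook itself adds the React-specific parts on top of this logic: the timer lives in a `useRef` so re-renders do not reset it, and the unmount effect clears it to avoid a stray `onPending(false)` after teardown.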
+ * + * @param onDoublePress Callback fired when a double-press is detected + * @param onPending Optional callback to update pending state (for UI hints) + * @returns A callback to invoke on each press + */ +export function useDoublePress( + onDoublePress: () => void, + onPending?: (pending: boolean) => void, +): () => void { + const timeoutRef = useRef | null>(null); + + // Clean up timer on unmount + useEffect( + () => () => { + if (timeoutRef.current !== null) { + clearTimeout(timeoutRef.current); + timeoutRef.current = null; + } + }, + [], + ); + + return useCallback(() => { + if (timeoutRef.current !== null) { + // Second press within the timeout window + clearTimeout(timeoutRef.current); + timeoutRef.current = null; + onPending?.(false); + onDoublePress(); + } else { + // First press — start the timer + onPending?.(true); + timeoutRef.current = setTimeout(() => { + timeoutRef.current = null; + onPending?.(false); + }, DOUBLE_PRESS_TIMEOUT_MS); + } + }, [onDoublePress, onPending]); +} diff --git a/packages/cli/src/ui/hooks/useHistoryManager.ts b/packages/cli/src/ui/hooks/useHistoryManager.ts index c25fc84a2..1d3fc8de1 100644 --- a/packages/cli/src/ui/hooks/useHistoryManager.ts +++ b/packages/cli/src/ui/hooks/useHistoryManager.ts @@ -21,6 +21,7 @@ export interface UseHistoryManagerReturn { ) => void; clearItems: () => void; loadHistory: (newHistory: HistoryItem[]) => void; + truncateToItem: (itemId: number) => void; } /** @@ -101,6 +102,14 @@ export function useHistory(): UseHistoryManagerReturn { messageIdCounterRef.current = 0; }, []); + // Truncates history to exclude the item with the given ID and everything after it. + const truncateToItem = useCallback((itemId: number) => { + setHistory((prev) => { + const index = prev.findIndex((h) => h.id === itemId); + return index === -1 ? 
prev : prev.slice(0, index); + }); + }, []); + return useMemo( () => ({ history, @@ -108,7 +117,8 @@ export function useHistory(): UseHistoryManagerReturn { updateItem, clearItems, loadHistory, + truncateToItem, }), - [history, addItem, updateItem, clearItems, loadHistory], + [history, addItem, updateItem, clearItems, loadHistory, truncateToItem], ); } diff --git a/packages/cli/src/ui/hooks/useResumeCommand.test.ts b/packages/cli/src/ui/hooks/useResumeCommand.test.ts index 4d5a5a68d..a346e0ad9 100644 --- a/packages/cli/src/ui/hooks/useResumeCommand.test.ts +++ b/packages/cli/src/ui/hooks/useResumeCommand.test.ts @@ -145,6 +145,7 @@ describe('useResumeCommand', () => { getTargetDir: () => '/tmp', getGeminiClient: () => geminiClient, startNewSession: vi.fn(), + getChatRecordingService: () => ({ rebuildTurnBoundaries: vi.fn() }), getDebugLogger: () => ({ warn: vi.fn(), debug: vi.fn(), diff --git a/packages/cli/src/ui/hooks/useResumeCommand.ts b/packages/cli/src/ui/hooks/useResumeCommand.ts index 037b75f3c..f9ca2df23 100644 --- a/packages/cli/src/ui/hooks/useResumeCommand.ts +++ b/packages/cli/src/ui/hooks/useResumeCommand.ts @@ -95,6 +95,10 @@ export function useResumeCommand( // Update session history core. config.startNewSession(sessionId, sessionData); + // Rebuild turn boundary tracking so rewind works within resumed sessions. 
+ config + .getChatRecordingService() + ?.rebuildTurnBoundaries(sessionData.conversation.messages); await config.getGeminiClient()?.initialize?.(); // Fire SessionStart event after resuming session diff --git a/packages/cli/src/ui/utils/historyMapping.test.ts b/packages/cli/src/ui/utils/historyMapping.test.ts new file mode 100644 index 000000000..84c18a2ff --- /dev/null +++ b/packages/cli/src/ui/utils/historyMapping.test.ts @@ -0,0 +1,243 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect } from 'vitest'; +import { computeApiTruncationIndex, isRealUserTurn } from './historyMapping.js'; +import type { HistoryItem } from '../types.js'; +import type { Content, Part } from '@google/genai'; + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +function userContent(text: string): Content { + return { role: 'user', parts: [{ text } as Part] }; +} + +function modelContent(text: string): Content { + return { role: 'model', parts: [{ text } as Part] }; +} + +function functionResponseContent(): Content { + return { + role: 'user', + parts: [ + { + functionResponse: { name: 'tool', response: { result: 'ok' } }, + } as unknown as Part, + ], + }; +} + +function startupPair(): [Content, Content] { + return [ + userContent('Environment context...'), + modelContent('Got it. 
Thanks for the context!'), + ]; +} + +function userItem(id: number, text = `prompt ${id}`): HistoryItem { + return { type: 'user', id, text } as HistoryItem; +} + +function geminiItem(id: number): HistoryItem { + return { type: 'gemini', id, text: `response ${id}` } as HistoryItem; +} + +// --------------------------------------------------------------------------- +// Tests +// --------------------------------------------------------------------------- + +describe('computeApiTruncationIndex', () => { + it('returns 0 for empty API history', () => { + const ui: HistoryItem[] = [userItem(1)]; + const api: Content[] = []; + expect(computeApiTruncationIndex(ui, 1, api)).toBe(0); + }); + + describe('without startup context', () => { + it('rewinds to the first user turn (keep nothing)', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + userItem(3), + geminiItem(4), + ]; + const api: Content[] = [ + userContent('prompt 1'), + modelContent('response 1'), + userContent('prompt 3'), + modelContent('response 3'), + ]; + // Rewind to turn 1 → keep 0 entries before it + expect(computeApiTruncationIndex(ui, 1, api)).toBe(0); + }); + + it('rewinds to the second user turn (keep first turn)', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + userItem(3), + geminiItem(4), + ]; + const api: Content[] = [ + userContent('prompt 1'), + modelContent('response 1'), + userContent('prompt 3'), + modelContent('response 3'), + ]; + // Rewind to turn 3 → keep entries before the second user Content + expect(computeApiTruncationIndex(ui, 3, api)).toBe(2); + }); + + it('rewinds to the third user turn', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + userItem(3), + geminiItem(4), + userItem(5), + geminiItem(6), + ]; + const api: Content[] = [ + userContent('prompt 1'), + modelContent('response 1'), + userContent('prompt 3'), + modelContent('response 3'), + userContent('prompt 5'), + modelContent('response 5'), + ]; + 
expect(computeApiTruncationIndex(ui, 5, api)).toBe(4); + }); + }); + + describe('with startup context pair', () => { + it('keeps startup context when rewinding to the first turn', () => { + const ui: HistoryItem[] = [userItem(1), geminiItem(2)]; + const api: Content[] = [ + ...startupPair(), + userContent('prompt 1'), + modelContent('response 1'), + ]; + // Rewind to turn 1 → keep startup pair (2 entries) + expect(computeApiTruncationIndex(ui, 1, api)).toBe(2); + }); + + it('keeps startup + first turn when rewinding to second turn', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + userItem(3), + geminiItem(4), + ]; + const api: Content[] = [ + ...startupPair(), + userContent('prompt 1'), + modelContent('response 1'), + userContent('prompt 3'), + modelContent('response 3'), + ]; + // startup(2) + turn1(2) = 4 entries to keep + expect(computeApiTruncationIndex(ui, 3, api)).toBe(4); + }); + }); + + describe('with tool call entries (functionResponse)', () => { + it('skips functionResponse entries when counting user prompts', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + // tool_group items are not type 'user', they don't affect the count + userItem(5), + geminiItem(6), + ]; + const api: Content[] = [ + userContent('prompt 1'), + modelContent('response with tool call'), + functionResponseContent(), // tool result — should be skipped + modelContent('response after tool'), + userContent('prompt 5'), + modelContent('response 5'), + ]; + // Rewind to turn 5: 1 user turn before it → find the 2nd user text + // API walk: idx 0 = user text (count=1), idx 4 = user text (count=2 > 1) → return 4 + expect(computeApiTruncationIndex(ui, 5, api)).toBe(4); + }); + }); + + describe('compression fallback', () => { + it('returns -1 when not enough user prompts found', () => { + const ui: HistoryItem[] = [ + userItem(1), + geminiItem(2), + userItem(3), + geminiItem(4), + userItem(5), + geminiItem(6), + ]; + // After compression, API history 
may be shorter than expected + const api: Content[] = [ + modelContent('compressed summary'), + userContent('prompt 5'), + modelContent('response 5'), + ]; + // Rewind to turn 5 → 2 user turns before it, but API only has 1 user text + expect(computeApiTruncationIndex(ui, 5, api)).toBe(-1); + }); + }); + + describe('with slash-command items in UI history', () => { + it('ignores slash-command items when counting user turns', () => { + const ui: HistoryItem[] = [ + userItem(1, 'hello'), + geminiItem(2), + userItem(3, '/help'), // slash command — should be skipped + userItem(5, 'world'), + geminiItem(6), + ]; + const api: Content[] = [ + userContent('hello'), + modelContent('response 1'), + userContent('world'), + modelContent('response 2'), + ]; + // Rewind to 'world' (id=5): 1 real user turn before it (id=1) + // Slash '/help' (id=3) should not be counted + expect(computeApiTruncationIndex(ui, 5, api)).toBe(2); + }); + }); + + describe('single turn', () => { + it('handles rewinding the only turn', () => { + const ui: HistoryItem[] = [userItem(1), geminiItem(2)]; + const api: Content[] = [ + userContent('prompt 1'), + modelContent('response 1'), + ]; + expect(computeApiTruncationIndex(ui, 1, api)).toBe(0); + }); + }); +}); + +describe('isRealUserTurn', () => { + it('returns true for normal user prompts', () => { + expect(isRealUserTurn(userItem(1, 'hello world'))).toBe(true); + }); + + it('returns false for slash commands', () => { + expect(isRealUserTurn(userItem(1, '/help'))).toBe(false); + expect(isRealUserTurn(userItem(1, '/rewind'))).toBe(false); + expect(isRealUserTurn(userItem(1, '/stats'))).toBe(false); + }); + + it('returns false for ? 
commands', () => { + expect(isRealUserTurn(userItem(1, '?help'))).toBe(false); + }); + + it('returns false for non-user items', () => { + expect(isRealUserTurn(geminiItem(1))).toBe(false); + }); +}); diff --git a/packages/cli/src/ui/utils/historyMapping.ts b/packages/cli/src/ui/utils/historyMapping.ts new file mode 100644 index 000000000..389acf2e9 --- /dev/null +++ b/packages/cli/src/ui/utils/historyMapping.ts @@ -0,0 +1,126 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import type { HistoryItem } from '../types.js'; +import type { Content } from '@google/genai'; + +/** + * Returns true when the history item represents a real user prompt that was + * sent to the model, as opposed to a slash-command invocation (`/help`, + * `/stats`, …) which is stored with `type: 'user'` in the UI but never + * reaches the API history or `turnParentUuids`. + */ +export function isRealUserTurn(item: HistoryItem): boolean { + if (item.type !== 'user' || !item.text) return false; + return !item.text.startsWith('/') && !item.text.startsWith('?'); +} + +/** + * The well-known startup context model acknowledgment. + * Used to identify the startup context pair in the API history. + */ +const STARTUP_CONTEXT_MODEL_ACK = 'Got it. Thanks for the context!'; + +/** + * Checks if a Content entry is a user-initiated text prompt + * as opposed to a tool result (functionResponse). + */ +function isUserTextContent(content: Content): boolean { + if (content.role !== 'user') return false; + if (!content.parts || content.parts.length === 0) return false; + + const hasFunctionResponse = content.parts.some( + (part) => 'functionResponse' in part, + ); + if (hasFunctionResponse) return false; + + return content.parts.some((part) => 'text' in part && part.text); +} + +/** + * Detects whether the API history starts with the startup context pair + * (user env context + model acknowledgment). 
+ */ +function hasStartupContext(apiHistory: Content[]): boolean { + if (apiHistory.length < 2) return false; + const first = apiHistory[0]; + const second = apiHistory[1]; + if (first?.role !== 'user' || second?.role !== 'model') return false; + return ( + second.parts?.some( + (part) => 'text' in part && part.text === STARTUP_CONTEXT_MODEL_ACK, + ) ?? false + ); +} + +/** + * Computes the number of API Content[] entries to keep when rewinding + * to a specific user turn in the UI history. + * + * The API history may include: + * - A startup context pair: [user(env), model(ack)] at the beginning + * - User text prompts (corresponding to UI user turns) + * - Model responses (with optional functionCall parts) + * - Tool result entries: user(functionResponse) + model(response) + * + * This function counts user text Content entries (skipping tool results + * and the startup context pair) to find the API boundary corresponding + * to the target UI user turn. + * + * Note: In IDE mode, additional user Content entries may be injected for + * IDE context. This function does not account for those and will produce + * incorrect results. Rewind is therefore disabled in IDE mode (guarded + * in openRewindSelector). + * + * @param uiHistory The full UI history array + * @param targetUserItemId The ID of the user HistoryItem to rewind to + * @param apiHistory The current API Content[] array + * @returns The number of Content entries to keep, or -1 if the target turn + * could not be located (e.g., it was absorbed by chat compression). 
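The contract documented above can be exercised with a simplified standalone model. This is a sketch only — `truncationIndex`, `UiItem`, and `ApiEntry` are hypothetical names, and it omits the startup-context pair and `functionResponse` skipping that the real implementation performs:

```typescript
// Hedged sketch of the truncation-index contract: count real user turns in
// the UI history before the target, then locate the matching user entry in
// the API history. Simplified relative to the patch: no startup-pair skip,
// no functionResponse filtering.
interface UiItem { type: string; id: number; text: string }
interface ApiEntry { role: 'user' | 'model' }

function truncationIndex(ui: UiItem[], targetId: number, api: ApiEntry[]): number {
  let turnsBefore = 0;
  for (const item of ui) {
    if (item.id === targetId) break;
    // Slash/'?' invocations never reach the API, so they must not count.
    if (
      item.type === 'user' &&
      !item.text.startsWith('/') &&
      !item.text.startsWith('?')
    ) {
      turnsBefore++;
    }
  }
  if (turnsBefore === 0) return 0; // first turn: keep nothing
  let seen = 0;
  for (let i = 0; i < api.length; i++) {
    if (api[i].role === 'user' && ++seen > turnsBefore) return i;
  }
  return -1; // target absorbed, e.g. by chat compression
}
```

The `-1` sentinel is what lets `handleRewindConfirm` abort with an error instead of silently keeping the full API history when the target turn was compressed away.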
+ */ +export function computeApiTruncationIndex( + uiHistory: HistoryItem[], + targetUserItemId: number, + apiHistory: Content[], +): number { + // Count how many UI user turns exist before the target + let uiUserTurnCount = 0; + for (const item of uiHistory) { + if (item.id === targetUserItemId) { + break; + } + if (isRealUserTurn(item)) { + uiUserTurnCount++; + } + } + + // Determine the starting index in the API history (skip startup context) + const startIndex = hasStartupContext(apiHistory) ? 2 : 0; + + if (uiUserTurnCount === 0) { + // Rewinding to the first user turn: keep only startup context (if any) + return startIndex; + } + + // Walk the API history from after the startup context, counting + // user text prompts to find the one corresponding to the target turn. + let realUserPromptCount = 0; + + for (let i = startIndex; i < apiHistory.length; i++) { + if (isUserTextContent(apiHistory[i]!)) { + realUserPromptCount++; + // The target turn is the (uiUserTurnCount + 1)th real user prompt. + // We want to truncate right before it. + if (realUserPromptCount > uiUserTurnCount) { + return i; + } + } + } + + // If we didn't find enough user prompts (e.g., after compression), + // signal that the target turn is unreachable. + return -1; +} diff --git a/packages/cli/src/ui/utils/resumeHistoryUtils.ts b/packages/cli/src/ui/utils/resumeHistoryUtils.ts index bf134785a..1807cfd85 100644 --- a/packages/cli/src/ui/utils/resumeHistoryUtils.ts +++ b/packages/cli/src/ui/utils/resumeHistoryUtils.ts @@ -252,6 +252,9 @@ function convertToHistoryItems( if (!payload) continue; pendingAtCommands.push(payload); } + if (record.subtype === 'rewind') { + items.push({ type: 'info', text: 'Conversation rewound.' 
}); + } continue; } switch (record.type) { diff --git a/packages/core/src/core/client.ts index 797f61190..f8961ff04 100644 --- a/packages/core/src/core/client.ts +++ b/packages/core/src/core/client.ts @@ -217,6 +217,11 @@ export class GeminiClient { this.forceFullIdeContext = true; } + truncateHistory(keepCount: number) { + this.getChat().truncateHistory(keepCount); + this.forceFullIdeContext = true; + } + async setTools(): Promise<void> { if (!this.isInitialized()) { return; diff --git a/packages/core/src/core/geminiChat.ts index ff76eb5c6..cf6e37761 100644 --- a/packages/core/src/core/geminiChat.ts +++ b/packages/core/src/core/geminiChat.ts @@ -788,6 +788,10 @@ export class GeminiChat { this.history = history; } + truncateHistory(keepCount: number): void { + this.history = this.history.slice(0, keepCount); + } + stripThoughtsFromHistory(): void { this.history = this.history .map((content) => { diff --git a/packages/core/src/services/chatRecordingService.ts index 7732be1a5..c7976d8ef 100644 --- a/packages/core/src/services/chatRecordingService.ts +++ b/packages/core/src/services/chatRecordingService.ts @@ -91,7 +91,8 @@ export interface ChatRecord { | 'at_command' | 'notification' | 'cron' - | 'custom_title'; + | 'custom_title' + | 'rewind'; /** Working directory at time of message */ cwd: string; /** CLI version for compatibility tracking */ @@ -133,7 +134,8 @@ export interface ChatRecord { | UiTelemetryRecordPayload | AtCommandRecordPayload | CustomTitleRecordPayload - | NotificationRecordPayload; + | NotificationRecordPayload + | RewindRecordPayload; } export interface NotificationRecordPayload { @@ -212,6 +214,14 @@ export interface UiTelemetryRecordPayload { uiEvent: UiEvent; } +/** + * Stored payload for conversation rewind events. + */ +export interface RewindRecordPayload { + /** Number of UI history items truncated. 
*/ + truncatedCount: number; +} + /** * Service for recording the current chat session to disk. * @@ -240,6 +250,18 @@ export class ChatRecordingService { /** UUID of the last written record in the chain */ private lastRecordUuid: string | null = null; private readonly config: Config; + /** + * Tracks the `lastRecordUuid` value just before each user turn was recorded. + * Used by {@link rewindRecording} to re-root the parentUuid chain so that + * rewound messages end up on a dead branch in the tree, making + * `reconstructHistory()` skip them automatically on resume. + * + * Index `i` holds the UUID of the last record written before the (i+1)th + * user message was appended. For example, `turnParentUuids[0]` is the UUID + * right before the very first user message (often `null` or the startup + * context record). + */ + private turnParentUuids: Array<string | null> = []; /** * Cached chats-dir / conversation-file path so per-record appendRecord * doesn't re-stat them on every write. The first call performs the @@ -463,6 +485,7 @@ */ recordUserMessage(message: PartListUnion): void { try { + this.turnParentUuids.push(this.lastRecordUuid); const record: ChatRecord = { ...this.createBaseRecord('user'), message: createUserContent(message), @@ -739,6 +762,70 @@ } } + /** + * Records a conversation rewind and re-roots the parentUuid chain. + * + * Sets `lastRecordUuid` back to the UUID that was current just before the + * target user turn was recorded, then appends a rewind system record. + * This makes all messages after that point sit on a dead branch in the + * UUID tree, so `reconstructHistory()` will skip them on resume. + * + * @param targetTurnIndex 0-based index of the user turn to rewind to. + * For example, 0 means rewind to the very first user message (keeping + * nothing before it), 1 means keep the first user turn, etc. + * @param payload Additional metadata to persist with the rewind record. 
+ */ + rewindRecording(targetTurnIndex: number, payload: RewindRecordPayload): void { + try { + // Re-root: point back to the record just before the target user turn. + this.lastRecordUuid = this.turnParentUuids[targetTurnIndex] ?? null; + // Trim future boundaries — they no longer exist in the active branch. + this.turnParentUuids = this.turnParentUuids.slice(0, targetTurnIndex); + + const record: ChatRecord = { + ...this.createBaseRecord('system'), + type: 'system', + subtype: 'rewind', + systemPayload: payload, + }; + + this.appendRecord(record); + } catch (error) { + debugLogger.error('Error saving rewind record:', error); + } + } + + /** + * Rebuilds `turnParentUuids` from a reconstructed message list. + * + * Call this after resuming a session so that subsequent rewinds within + * the resumed session have correct boundary data. Also updates + * `lastRecordUuid` to the last record in the chain. + */ + rebuildTurnBoundaries(messages: ChatRecord[]): void { + this.turnParentUuids = []; + let prevUuid: string | null = + this.config.getResumedSessionData()?.lastCompletedUuid !== undefined + ? null + : this.lastRecordUuid; + + for (let i = 0; i < messages.length; i++) { + const record = messages[i]; + if ( + record.type === 'user' && + record.subtype !== 'notification' && + record.subtype !== 'cron' + ) { + this.turnParentUuids.push(prevUuid); + } + prevUuid = record.uuid; + } + // Ensure lastRecordUuid points to the end of the reconstructed chain. + if (messages.length > 0) { + this.lastRecordUuid = messages[messages.length - 1].uuid; + } + } + /** * Records a custom title for the session. * Appended as a system record so it persists with the session data. 
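The dead-branch re-rooting that `rewindRecording` and `rebuildTurnBoundaries` rely on can be sketched outside the diff. The `RecordNode` shape and `activeBranch` helper below are illustrative names, not the real `ChatRecord` API: records form a tree via `parentUuid`, and only the chain reachable by walking `parentUuid` links back from the last-written record counts as active on resume.

```typescript
// Illustrative sketch (not the real ChatRecord API) of the parentUuid tree:
// each record points at its parent, and resume reconstructs history by
// walking back from the tail record.
interface RecordNode {
  uuid: string;
  parentUuid: string | null;
}

// Reconstruct the active branch (oldest first) by walking back from the tail.
function activeBranch(
  records: Map<string, RecordNode>,
  tailUuid: string | null,
): string[] {
  const chain: string[] = [];
  let cursor = tailUuid;
  while (cursor !== null) {
    const rec = records.get(cursor);
    if (!rec) break;
    chain.push(rec.uuid);
    cursor = rec.parentUuid;
  }
  return chain.reverse();
}
```

Re-rooting the tail UUID to a turn boundary and appending the next record there leaves the rewound records on disk but unreachable from the new tail, which is why reconstruction skips them without deleting anything.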
diff --git a/scripts/test-rewind-e2e.sh b/scripts/test-rewind-e2e.sh new file mode 100755 index 000000000..9b09f5cbe --- /dev/null +++ b/scripts/test-rewind-e2e.sh @@ -0,0 +1,473 @@ +#!/usr/bin/env bash +# ============================================================================= +# test-rewind-e2e.sh — tmux-based E2E verification for the conversation rewind +# feature (PR #3441). +# +# Covers all 5 manual test items from the PR description: +# 1. /rewind command → pick turn → UI truncated, input pre-populated +# 2. Double-ESC on empty prompt → selector opens → rewind → continue +# 3. ESC during streaming → cancels request, does NOT open selector +# 4. /rewind with no history → selector does not open +# 5. After rewind, model does not reference removed turns +# +# Prerequisites: +# - tmux installed +# - CLI already built: npm run build && npm run bundle +# - Valid model API credentials in environment +# +# Usage: +# bash scripts/test-rewind-e2e.sh +# ============================================================================= + +set -uo pipefail + +SESSION="test-rewind-$$" +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)" +BUNDLE="$PROJECT_DIR/dist/cli.js" +WORKDIR="$(mktemp -d)" +PASS_COUNT=0 +FAIL_COUNT=0 +TIMEOUT=${REWIND_TEST_TIMEOUT:-120} # seconds per wait_for call + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[0;33m' +BOLD='\033[1m' +RESET='\033[0m' + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + +cleanup() { + tmux kill-session -t "$SESSION" 2>/dev/null || true + rm -rf "$WORKDIR" +} +trap cleanup EXIT + +start_session() { + # Deliver ESC immediately — without this, tmux holds ESC for up to 500ms + # thinking it might be the start of an escape sequence, which breaks + # double-ESC detection and other ESC-dependent interactions. 
+ # Must be set as a server option (not session) in tmux 2.6+. + tmux set-option -sg escape-time 0 2>/dev/null || true + tmux new-session -d -s "$SESSION" -x 120 -y 40 \ + "cd '$WORKDIR' && node '$BUNDLE' --approval-mode yolo 2>'$WORKDIR/stderr.log'" + wait_for_prompt 60 +} + +kill_session() { + tmux kill-session -t "$SESSION" 2>/dev/null || true + sleep 1 +} + +# Capture entire pane including scrollback (for content assertions) +capture() { + tmux capture-pane -t "$SESSION" -p -S -200 2>/dev/null || true +} + +# Capture only the visible pane (for prompt detection) +capture_visible() { + tmux capture-pane -t "$SESSION" -p 2>/dev/null || true +} + +send() { + # Type text using literal mode then press Enter + tmux send-keys -t "$SESSION" -l "$1" + sleep 0.5 + tmux send-keys -t "$SESSION" Enter +} + +send_keys() { + tmux send-keys -t "$SESSION" "$@" +} + +# Wait for "Type your message" to appear on the visible pane. +wait_for_prompt() { + local timeout="${1:-$TIMEOUT}" + local elapsed=0 + + while [ $elapsed -lt "$timeout" ]; do + if capture_visible | grep -qF "Type your message"; then + return 0 + fi + sleep 2 + elapsed=$((elapsed + 2)) + done + echo -e "${RED}TIMEOUT waiting for prompt (Type your message)${RESET}" >&2 + echo "--- Visible pane ---" >&2 + capture_visible >&2 + echo "--- End ---" >&2 + return 1 +} + +# Wait for the CLI to be truly idle: +# 1. "Type your message" is visible (prompt ready) +# 2. No "esc to cancel" on screen (no btw/side-query running) +# 3. Screen content unchanged for 3 consecutive seconds +wait_idle() { + local timeout="${1:-$TIMEOUT}" + local elapsed=0 + local last_hash="" + local stable_count=0 + + while [ $elapsed -lt "$timeout" ]; do + local screen + screen=$(capture_visible) + + # Must have prompt visible + if ! 
echo "$screen" | grep -qF "Type your message"; then + stable_count=0 + last_hash="" + sleep 2 + elapsed=$((elapsed + 2)) + continue + fi + + # Must not have btw side-query running + if echo "$screen" | grep -qF "esc to cancel"; then + stable_count=0 + last_hash="" + sleep 2 + elapsed=$((elapsed + 2)) + continue + fi + + # Check screen stability + local current + current=$(echo "$screen" | md5sum | cut -d' ' -f1) + if [ "$current" = "$last_hash" ]; then + stable_count=$((stable_count + 1)) + if [ $stable_count -ge 3 ]; then + return 0 + fi + else + last_hash="$current" + stable_count=0 + fi + sleep 1 + elapsed=$((elapsed + 1)) + done + echo -e "${RED}TIMEOUT waiting for idle${RESET}" >&2 + echo "--- Visible pane ---" >&2 + capture_visible >&2 + echo "--- End ---" >&2 + return 1 +} + +# Wait for text to appear on the visible pane +wait_for() { + local text="$1" + local timeout="${2:-$TIMEOUT}" + local elapsed=0 + while [ $elapsed -lt "$timeout" ]; do + if capture_visible | grep -qF "$text"; then + return 0 + fi + sleep 2 + elapsed=$((elapsed + 2)) + done + echo -e "${RED}TIMEOUT waiting for: ${text}${RESET}" >&2 + echo "--- Visible pane ---" >&2 + capture_visible >&2 + echo "--- End ---" >&2 + return 1 +} + +# Assert text IS on visible pane +assert_screen() { + local text="$1" + if capture_visible | grep -qF "$text"; then + return 0 + fi + echo -e "${RED}ASSERT FAILED: expected '${text}' on screen${RESET}" >&2 + echo "--- Visible pane ---" >&2 + capture_visible >&2 + echo "--- End ---" >&2 + return 1 +} + +# Assert text IS on full capture (including scrollback) +assert_scrollback() { + local text="$1" + if capture | grep -qF "$text"; then + return 0 + fi + echo -e "${RED}ASSERT FAILED: expected '${text}' in scrollback${RESET}" >&2 + return 1 +} + +# Assert text is NOT on visible pane +assert_no_screen() { + local text="$1" + if capture_visible | grep -qF "$text"; then + echo -e "${RED}ASSERT FAILED: did NOT expect '${text}' on screen${RESET}" >&2 + echo "--- Visible 
pane ---" >&2 + capture_visible >&2 + echo "--- End ---" >&2 + return 1 + fi + return 0 +} + +pass() { + echo -e "${GREEN}[PASS]${RESET} $1" + PASS_COUNT=$((PASS_COUNT + 1)) +} + +fail() { + echo -e "${RED}[FAIL]${RESET} $1: $2" + FAIL_COUNT=$((FAIL_COUNT + 1)) +} + +# Run a test function, capturing its exit code properly. +# Usage: run_test "Test Name" test_function_name +run_test() { + local name="$1" + local func="$2" + local rc=0 + local errmsg="" + + errmsg=$($func 2>&1) || rc=$? + + if [ $rc -eq 0 ]; then + pass "$name" + else + # Extract last meaningful error line from stderr + local last_err + last_err=$(echo "$errmsg" | grep -E 'TIMEOUT|ASSERT FAILED' | tail -1) + fail "$name" "${last_err:-exit code $rc}" + echo "$errmsg" | head -30 + fi + + # Always clean up the session between tests + kill_session 2>/dev/null || true +} + +# --------------------------------------------------------------------------- +# Pre-flight checks +# --------------------------------------------------------------------------- + +if ! command -v tmux &>/dev/null; then + echo -e "${RED}Error: tmux is not installed${RESET}" >&2 + exit 1 +fi + +if [ ! 
-f "$BUNDLE" ]; then + echo -e "${YELLOW}Bundle not found at $BUNDLE, building...${RESET}" + (cd "$PROJECT_DIR" && npm run build && npm run bundle) +fi + +echo -e "${BOLD}=== Rewind Feature E2E Tests (tmux) ===${RESET}" +echo "Session: $SESSION" +echo "Workdir: $WORKDIR" +echo "" + +# --------------------------------------------------------------------------- +# Test 1: /rewind command flow +# --------------------------------------------------------------------------- + +test_rewind_command() { + start_session + + # Build 3-turn conversation with unique markers + send "say exactly ALPHA1 and nothing else" + wait_idle || return 1 + + send "say exactly BETA2 and nothing else" + wait_idle || return 1 + + send "say exactly GAMMA3 and nothing else" + wait_idle || return 1 + + # Open rewind selector via /rewind command + send "/rewind" + wait_for "Rewind Conversation" || return 1 + + # Navigate up to select BETA2 turn (selector starts at last turn GAMMA3) + send_keys Up + sleep 0.5 + + # Select the turn + send_keys Enter + sleep 1 + wait_for "confirm" 15 || return 1 + + # Confirm rewind + send_keys y + wait_for "Conversation rewound" || return 1 + + # After rewind: the input should be pre-populated with the selected turn's + # text ("say exactly GAMMA3..."). The GAMMA3 *response* turn should be gone + # from the conversation, but the text appears in the input bar — which is + # the correct pre-population behavior. 
+ # Verify pre-population: the input bar should contain GAMMA3 text + assert_screen "say exactly GAMMA3" || return 1 + # Verify the earlier turns (ALPHA1, BETA2) are still in conversation + assert_scrollback "ALPHA1" || return 1 +} + +run_test "Test 1: /rewind command flow" test_rewind_command + +# --------------------------------------------------------------------------- +# Test 2: Double-ESC opens selector +# --------------------------------------------------------------------------- + +test_double_esc() { + start_session + + send "say exactly DELTA4 and nothing else" + wait_idle || return 1 + + send "say exactly EPSILON5 and nothing else" + wait_idle || return 1 + + # Double-ESC to open rewind selector. + # Complication: a btw side-question (prompt suggestion) may be active after + # the model responds. If btwItem is non-null, the first ESC cancels the btw + # (AppContainer.tsx:1896) and never reaches the rewind handler. We send + # 3 ESCs with proper timing to handle both btw-present and btw-absent cases: + # ESC #1: cancels btw (if present), or starts rewind pending (if absent) + # sleep 1.5s: >800ms to reset any rewind pending from ESC #1 + # ESC #2: starts rewind pending (btw now dismissed) + # sleep 0.3s: within 800ms window + # ESC #3: triggers rewind selector + send_keys Escape + sleep 1.5 + send_keys Escape + sleep 0.5 + wait_for "Esc again to rewind" 15 || return 1 + + # Third ESC within 800ms — should open selector + send_keys Escape + wait_for "Rewind Conversation" || return 1 + + # Select last turn (pre-selected) & confirm + send_keys Enter + sleep 1 + send_keys y + wait_for "Conversation rewound" || return 1 + + # Continue conversation after rewind — verify model still works + send "say exactly ZETA6 and nothing else" + wait_idle || return 1 + assert_scrollback "ZETA6" || return 1 +} + +run_test "Test 2: Double-ESC opens selector" test_double_esc + +# --------------------------------------------------------------------------- +# Test 3: ESC during 
streaming cancels (no rewind) +# --------------------------------------------------------------------------- + +test_esc_during_streaming() { + start_session + + # Send a prompt that will generate a long response + send "write a detailed 500 word essay about the history of computing from 1940 to 2000" + + # Wait for streaming to start (prompt disappears) + sleep 4 + + # Single ESC while streaming — should cancel, NOT open rewind + send_keys Escape + + # Verify rewind selector did NOT open + sleep 3 + assert_no_screen "Rewind Conversation" || return 1 + + # Should eventually return to idle + wait_idle || return 1 +} + +run_test "Test 3: ESC during streaming cancels (no rewind)" test_esc_during_streaming + +# --------------------------------------------------------------------------- +# Test 4: /rewind with no prior conversation +# --------------------------------------------------------------------------- + +test_rewind_no_history() { + start_session + + # Immediately try /rewind with no conversation history. + # The /rewind text itself gets recorded as a user turn before the slash + # command handler runs, so the guard (≥1 user turn) passes and the + # selector opens showing only the "/rewind" entry — which is not a + # meaningful rewindable turn. We verify the selector has only 1 turn. + send "/rewind" + sleep 3 + + # The selector may or may not open depending on implementation. + # If it opens, it should show exactly "1 turns" (only the /rewind itself). 
+ if capture_visible | grep -qF "Rewind Conversation"; then + assert_screen "1 turns" || return 1 + # Close the selector with ESC + send_keys Escape + sleep 1 + fi + + # Either way, after dismissing we should be back at the prompt + wait_for_prompt 10 || return 1 +} + +run_test "Test 4: /rewind with no prior conversation" test_rewind_no_history + +# --------------------------------------------------------------------------- +# Test 5: After rewind, model ignores removed turns +# --------------------------------------------------------------------------- + +test_rewind_context_isolation() { + start_session + + # First turn: give model a unique fact + send "The secret code for this session is XRAY99. Just confirm you received it by saying OK." + wait_idle || return 1 + + # Second turn: different content + send "say exactly YANKEEZ and nothing else" + wait_idle || return 1 + + # Rewind to remove the YANKEEZ turn + send "/rewind" + wait_for "Rewind Conversation" || return 1 + + # Select the most recent turn (YANKEEZ) and confirm + send_keys Enter + sleep 1 + send_keys y + wait_for "Conversation rewound" || return 1 + + # Clear pre-populated input (Ctrl-U clears line in most terminals) + send_keys C-u + sleep 0.5 + + # Ask the model what it remembers + send "What was the secret code I told you? Reply with just the code, nothing else." 
+ wait_idle || return 1 + + # Model should reference XRAY99 (surviving turn) + assert_scrollback "XRAY99" || return 1 +} + +run_test "Test 5: After rewind, model ignores removed turns" test_rewind_context_isolation + +# --------------------------------------------------------------------------- +# Summary +# --------------------------------------------------------------------------- + +echo "" +echo -e "${BOLD}=== Results ===${RESET}" +echo -e "${GREEN}Passed: ${PASS_COUNT}${RESET}" +if [ "$FAIL_COUNT" -gt 0 ]; then + echo -e "${RED}Failed: ${FAIL_COUNT}${RESET}" +else + echo -e "Failed: 0" +fi + +if [ "$FAIL_COUNT" -gt 0 ]; then + exit 1 +fi + +echo -e "${GREEN}All ${PASS_COUNT} tests passed.${RESET}" From b127258328279d13c249e90628a4fcab320e1fd9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=98=93=E8=89=AF?= <1204183885@qq.com> Date: Sat, 25 Apr 2026 22:27:30 +0800 Subject: [PATCH 06/36] fix(review): respect /language output setting for local reviews (#3611) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The /review skill's language rule "match the language of the PR" has no applicable target during local reviews (no PR exists). When a user sets an output language via /language, local review output now honors that preference instead of defaulting to English. PR reviews remain unchanged — they continue matching the PR's language since findings may be published as inline comments visible to all collaborators. Closes #3594 --- packages/core/src/skills/bundled/review/SKILL.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/packages/core/src/skills/bundled/review/SKILL.md b/packages/core/src/skills/bundled/review/SKILL.md index 62022b3c5..ca4e166c9 100644 --- a/packages/core/src/skills/bundled/review/SKILL.md +++ b/packages/core/src/skills/bundled/review/SKILL.md @@ -17,7 +17,7 @@ You are an expert code reviewer. 
Your job is to review code changes and provide **Critical rules (most commonly violated — read these first):** -1. **Match the language of the PR.** If the PR is in English, ALL your output (terminal + PR comments) MUST be in English. If in Chinese, use Chinese. Do NOT switch languages. +1. **Match the language of the PR.** If the PR is in English, ALL your output (terminal + PR comments) MUST be in English. If in Chinese, use Chinese. Do NOT switch languages. For **local reviews** (no PR), if the system prompt includes an output language preference, use that language; otherwise follow the user's input language. 2. **Step 9: use Create Review API** with `comments` array for inline comments. Do NOT use `gh api .../pulls/.../comments` to post individual comments. See Step 9 for the JSON format. **Design philosophy: Silence is better than noise.** Every comment you make should be worth the reader's time. If you're unsure whether something is a problem, DO NOT MENTION IT. Low-quality feedback causes "cry wolf" fatigue — developers stop reading all AI comments and miss real issues. @@ -528,4 +528,4 @@ These criteria apply to both Step 4 (review agents) and Step 5 (verification age - Flag any exposed secrets, credentials, API keys, or tokens in the diff as **Critical**. - Silence is better than noise. If you have nothing important to say, say nothing. - **Do NOT use `#N` notation** (e.g., `#1`, `#2`) in PR comments or summaries — GitHub auto-links these to issues/PRs. Use `(1)`, `[1]`, or descriptive references instead. -- **Match the language of the PR.** Write review comments, findings, and summaries in the same language as the PR title/description/code comments. If the PR is in English, write in English. If in Chinese, write in Chinese. Do NOT switch languages. +- **Match the language of the PR.** Write review comments, findings, and summaries in the same language as the PR title/description/code comments. If the PR is in English, write in English. 
If in Chinese, write in Chinese. Do NOT switch languages. For **local reviews** (no PR), respect the user's output language preference if set; otherwise follow the user's input language. From 70cebc46a80b010514c7ed64fc35418fd7885626 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Sat, 25 Apr 2026 22:30:11 +0800 Subject: [PATCH 07/36] test(arena): cover select dialog key actions (#3614) Add ArenaSelectDialog tests for Escape, discard, and winner selection key paths. Verify Escape only closes the dialog, x discards without applying changes, Enter applies the highlighted successful agent, and failed agents remain inert when selected. --- .../arena/ArenaSelectDialog.test.tsx | 260 +++++++++++++----- 1 file changed, 194 insertions(+), 66 deletions(-) diff --git a/packages/cli/src/ui/components/arena/ArenaSelectDialog.test.tsx b/packages/cli/src/ui/components/arena/ArenaSelectDialog.test.tsx index efcfc9165..e123444a0 100644 --- a/packages/cli/src/ui/components/arena/ArenaSelectDialog.test.tsx +++ b/packages/cli/src/ui/components/arena/ArenaSelectDialog.test.tsx @@ -10,6 +10,7 @@ import { AgentStatus, ArenaSessionStatus, type ArenaManager, + type ArenaAgentResult, type Config, } from '@qwen-code/qwen-code-core'; import { renderWithProviders } from '../../../test-utils/render.js'; @@ -17,72 +18,7 @@ import { ArenaSelectDialog } from './ArenaSelectDialog.js'; describe('ArenaSelectDialog', () => { it('toggles quick preview and detailed diff for the highlighted agent', async () => { - const result = { - sessionId: 'arena-1', - task: 'Update auth', - status: ArenaSessionStatus.IDLE, - agents: [ - { - agentId: 'model-1', - model: { modelId: 'model-1', authType: 'openai' }, - status: AgentStatus.IDLE, - worktree: { - id: 'w1', - name: 'model-1', - path: '/tmp/model-1', - branch: 'arena/model-1', - isActive: true, - createdAt: 1, - }, - stats: { - rounds: 1, - totalTokens: 1000, - inputTokens: 700, - outputTokens: 300, - durationMs: 2000, 
- toolCalls: 2, - successfulToolCalls: 2, - failedToolCalls: 0, - }, - diff: `diff --git a/src/auth.ts b/src/auth.ts ---- a/src/auth.ts -+++ b/src/auth.ts -@@ -1 +1 @@ --old -+new`, - diffSummary: { - files: [{ path: 'src/auth.ts', additions: 1, deletions: 1 }], - additions: 1, - deletions: 1, - }, - modifiedFiles: ['src/auth.ts'], - approachSummary: 'Updated the auth implementation inline.', - startedAt: 1, - }, - ], - startedAt: 1, - wasRepoInitialized: false, - }; - - const manager = { - getResult: vi.fn(() => result), - getAgentStates: vi.fn(() => [ - { - agentId: 'model-1', - model: { modelId: 'model-1', authType: 'openai' }, - status: AgentStatus.IDLE, - stats: result.agents[0]!.stats, - }, - ]), - getAgentState: vi.fn(), - applyAgentResult: vi.fn(), - } as unknown as ArenaManager; - - const config = { - getArenaManager: () => manager, - cleanupArenaRuntime: vi.fn(), - getChatRecordingService: () => undefined, - } as unknown as Config; + const { manager, config } = createDialogHarness(); const { lastFrame, stdin } = renderWithProviders( { }); expect(lastFrame()).toContain('diff --git a/src/auth.ts b/src/auth.ts'); }); + + it('closes without applying or cleaning up when Escape is pressed', async () => { + const { manager, config, closeArenaDialog, applyAgentResult } = + createDialogHarness(); + const cleanupArenaRuntime = config.cleanupArenaRuntime as ReturnType< + typeof vi.fn + >; + + const { stdin } = renderWithProviders( + , + ); + + stdin.write('\x1B'); + + await waitFor(() => { + expect(closeArenaDialog).toHaveBeenCalledTimes(1); + }); + expect(applyAgentResult).not.toHaveBeenCalled(); + expect(cleanupArenaRuntime).not.toHaveBeenCalled(); + }); + + it('discards results without applying changes when x is pressed', async () => { + const { manager, config, closeArenaDialog, applyAgentResult } = + createDialogHarness(); + const cleanupArenaRuntime = config.cleanupArenaRuntime as ReturnType< + typeof vi.fn + >; + + const { stdin } = renderWithProviders( + , + 
); + + stdin.write('x'); + + await waitFor(() => { + expect(cleanupArenaRuntime).toHaveBeenCalledWith(true); + }); + expect(closeArenaDialog).toHaveBeenCalledTimes(1); + expect(applyAgentResult).not.toHaveBeenCalled(); + }); + + it('applies the highlighted successful agent when Enter is pressed', async () => { + const { manager, config, closeArenaDialog, applyAgentResult } = + createDialogHarness(); + const cleanupArenaRuntime = config.cleanupArenaRuntime as ReturnType< + typeof vi.fn + >; + + const { stdin } = renderWithProviders( + , + ); + + stdin.write('\r'); + + await waitFor(() => { + expect(applyAgentResult).toHaveBeenCalledWith('model-1'); + }); + expect(closeArenaDialog).toHaveBeenCalledTimes(1); + await waitFor(() => { + expect(cleanupArenaRuntime).toHaveBeenCalledWith(true); + }); + }); + + it('ignores Enter when the highlighted agent is not selectable', async () => { + const failedAgent = createAgentResult({ + agentId: 'model-1', + status: AgentStatus.FAILED, + }); + const { manager, config, closeArenaDialog, applyAgentResult } = + createDialogHarness([failedAgent]); + const cleanupArenaRuntime = config.cleanupArenaRuntime as ReturnType< + typeof vi.fn + >; + + const { stdin } = renderWithProviders( + , + ); + + stdin.write('\r'); + + await new Promise((resolve) => setTimeout(resolve, 0)); + expect(applyAgentResult).not.toHaveBeenCalled(); + expect(closeArenaDialog).not.toHaveBeenCalled(); + expect(cleanupArenaRuntime).not.toHaveBeenCalled(); + }); }); + +function createDialogHarness(agents = [createAgentResult()]) { + const result = { + sessionId: 'arena-1', + task: 'Update auth', + status: ArenaSessionStatus.IDLE, + agents, + startedAt: 1, + wasRepoInitialized: false, + }; + + const applyAgentResult = vi.fn().mockResolvedValue({ success: true }); + const manager = { + getResult: vi.fn(() => result), + getAgentStates: vi.fn(() => + agents.map((agent) => ({ + agentId: agent.agentId, + model: agent.model, + status: agent.status, + stats: agent.stats, + 
})), + ), + getAgentState: vi.fn((agentId: string) => + agents.find((agent) => agent.agentId === agentId), + ), + applyAgentResult, + } as unknown as ArenaManager; + + const config = { + getArenaManager: () => manager, + cleanupArenaRuntime: vi.fn().mockResolvedValue(undefined), + getChatRecordingService: () => undefined, + } as unknown as Config; + + return { + manager, + config, + closeArenaDialog: vi.fn(), + applyAgentResult, + }; +} + +function createAgentResult({ + agentId = 'model-1', + status = AgentStatus.IDLE, +}: { + agentId?: string; + status?: AgentStatus; +} = {}): ArenaAgentResult { + return { + agentId, + model: { modelId: agentId, authType: 'openai' }, + status, + worktree: { + id: `worktree-${agentId}`, + name: agentId, + path: `/tmp/${agentId}`, + branch: `arena/${agentId}`, + isActive: true, + createdAt: 1, + }, + stats: { + rounds: 1, + totalTokens: 1000, + inputTokens: 700, + outputTokens: 300, + durationMs: 2000, + toolCalls: 2, + successfulToolCalls: 2, + failedToolCalls: 0, + }, + diff: `diff --git a/src/auth.ts b/src/auth.ts +--- a/src/auth.ts ++++ b/src/auth.ts +@@ -1 +1 @@ +-old ++new`, + diffSummary: { + files: [{ path: 'src/auth.ts', additions: 1, deletions: 1 }], + additions: 1, + deletions: 1, + }, + modifiedFiles: ['src/auth.ts'], + approachSummary: 'Updated the auth implementation inline.', + startedAt: 1, + }; +} From 83d1e6dcaeecf0d8ff081a2d7ae513d10a3f0fab Mon Sep 17 00:00:00 2001 From: qqqys Date: Sat, 25 Apr 2026 22:41:03 +0800 Subject: [PATCH 08/36] feat: adds a Space-to-preview affordance to the /resume session picker (#3605) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(cli): add Space-to-preview in resume session picker Press Space on a highlighted session to open a read-only transcript preview; Enter resumes, Esc returns. Works from both in-session `/resume` and standalone `qwen --resume`. 
The standalone path runs before `loadCliConfig`, so no real Config / LoadedSettings exist when its render tree mounts. `StandaloneSessionPicker` wraps the picker in stub Providers — every downstream access in the preview render path is either optional-chained or gated on states (Confirming / Executing) that never occur in resumed session data, so the stubs' methods are only read, never invoked for real work. Tool descriptions degrade to the raw function-call name in preview; users get full fidelity after pressing Enter to resume. Co-Authored-By: Qwen-Coder * fix(cli): guard SessionPreview separator width on narrow terminals `'─'.repeat(boxWidth - 2)` would throw RangeError when columns < 6 (tmux splits, small panes). Clamp boxWidth to a safe minimum and compute separatorWidth with Math.max(0, …). Co-Authored-By: Qwen-Coder * fix(cli): gate Space-to-preview behind enablePreview prop `SessionPicker` is shared by the resume dialog and the delete-session dialog. Preview's Enter shortcut forwards to `onSelect`, which for delete is `handleDelete` — so Space → preview → Enter would silently delete the session while the preview UI still says "Enter to resume". Add `enablePreview?: boolean` (default false). Resume callers (the in-app resume dialog and `--resume` standalone) opt in; the delete dialog stays opt-out and behaves exactly as before. Footer hint and preview render branch are both gated on the prop. Add a regression test that emulates the delete dialog and asserts Space is a no-op, the hint is absent, and Enter still flows straight to onSelect. 
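The narrow-terminal guard above is worth seeing in isolation: `String.prototype.repeat` throws a `RangeError` for negative counts, so `'─'.repeat(boxWidth - 2)` blows up once the column count gets small enough. A standalone distillation of the clamp (the helper name `separatorFor` is illustrative; the real code computes these values inline in `SessionPreview`):

```typescript
// Why the clamp is needed: '─'.repeat(n) throws RangeError when n < 0,
// which happens in tmux splits / tiny panes without the Math.max guards.
function separatorFor(columns: number): string {
  const boxWidth = Math.max(10, columns - 4); // clamp to a safe minimum
  const separatorWidth = Math.max(0, boxWidth - 2); // never negative
  return '─'.repeat(separatorWidth);
}

console.log(separatorFor(80).length); // 74 — normal terminal
console.log(separatorFor(3).length); // 8 — clamped, no RangeError
```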
Co-Authored-By: Qwen-Coder --------- Co-authored-by: Qwen-Coder Co-authored-by: Qwen-Coder --- .../cli/src/ui/components/DialogManager.tsx | 1 + .../cli/src/ui/components/SessionPicker.tsx | 41 ++- .../src/ui/components/SessionPreview.test.tsx | 176 +++++++++++ .../cli/src/ui/components/SessionPreview.tsx | 164 ++++++++++ .../StandaloneSessionPicker.test.tsx | 282 ++++++++++++++++++ .../ui/components/StandaloneSessionPicker.tsx | 62 +++- packages/cli/src/ui/hooks/useSessionPicker.ts | 38 ++- .../cli/src/ui/utils/resumeHistoryUtils.ts | 28 +- 8 files changed, 766 insertions(+), 26 deletions(-) create mode 100644 packages/cli/src/ui/components/SessionPreview.test.tsx create mode 100644 packages/cli/src/ui/components/SessionPreview.tsx diff --git a/packages/cli/src/ui/components/DialogManager.tsx b/packages/cli/src/ui/components/DialogManager.tsx index dc35c33d1..e626a60df 100644 --- a/packages/cli/src/ui/components/DialogManager.tsx +++ b/packages/cli/src/ui/components/DialogManager.tsx @@ -382,6 +382,7 @@ export const DialogManager = ({ onSelect={uiActions.handleResume} onCancel={uiActions.closeResumeDialog} initialSessions={uiState.resumeMatchedSessions} + enablePreview /> ); } diff --git a/packages/cli/src/ui/components/SessionPicker.tsx b/packages/cli/src/ui/components/SessionPicker.tsx index c80c35be1..8bcf3f5b9 100644 --- a/packages/cli/src/ui/components/SessionPicker.tsx +++ b/packages/cli/src/ui/components/SessionPicker.tsx @@ -18,6 +18,7 @@ import { } from '../utils/sessionPickerUtils.js'; import { useTerminalSize } from '../hooks/useTerminalSize.js'; import { t } from '../../i18n/index.js'; +import { SessionPreview } from './SessionPreview.js'; export interface SessionPickerProps { sessionService: SessionService | null; @@ -41,6 +42,14 @@ export interface SessionPickerProps { * When provided, skips initial load and disables pagination. */ initialSessions?: SessionData[]; + + /** + * Enable Space-to-preview. 
Off by default — preview's Enter shortcut + * forwards to `onSelect`, which for resume flows is "resume", but for + * destructive flows (e.g. delete) would commit the action. Only opt in + * for non-destructive selection flows. + */ + enablePreview?: boolean; } const PREFIX_CHARS = { @@ -147,6 +156,7 @@ export function SessionPicker(props: SessionPickerProps) { title, centerSelection = true, initialSessions, + enablePreview = false, } = props; const { columns: width, rows: height } = useTerminalSize(); @@ -172,8 +182,32 @@ export function SessionPicker(props: SessionPickerProps) { centerSelection, initialSessions, isActive: true, + enablePreview, }); + if ( + enablePreview && + picker.viewMode === 'preview' && + picker.previewSessionId && + sessionService + ) { + const previewed = picker.filteredSessions.find( + (s) => s.sessionId === picker.previewSessionId, + ); + return ( + + ); + } + return ( B - {t(' to toggle branch')} · + {t(' to toggle branch · ')} + + )} + {enablePreview && ( + + {t('Space to preview · ')} )} diff --git a/packages/cli/src/ui/components/SessionPreview.test.tsx b/packages/cli/src/ui/components/SessionPreview.test.tsx new file mode 100644 index 000000000..479eeee54 --- /dev/null +++ b/packages/cli/src/ui/components/SessionPreview.test.tsx @@ -0,0 +1,176 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ +import { render } from 'ink-testing-library'; +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; +import { KeypressProvider } from '../contexts/KeypressContext.js'; +import { SessionPreview } from './SessionPreview.js'; + +beforeEach(() => { + Object.defineProperty(process.stdout, 'columns', { + value: 80, + configurable: true, + }); + Object.defineProperty(process.stdout, 'rows', { + value: 24, + configurable: true, + }); +}); + +afterEach(() => vi.clearAllMocks()); + +const wait = (ms = 50) => new Promise((r) => setTimeout(r, ms)); + +function mockService(resolved: unknown) 
{ + return { + loadSession: vi + .fn() + .mockReturnValue( + resolved instanceof Promise ? resolved : Promise.resolve(resolved), + ), + listSessions: vi.fn(), + loadLastSession: vi.fn(), + } as never; +} + +function fakeResumedData() { + return { + conversation: { + sessionId: 's1', + projectHash: 'h', + startTime: '2026-01-01T00:00:00.000Z', + lastUpdated: '2026-01-01T00:00:00.000Z', + messages: [ + { + uuid: 'u1', + parentUuid: null, + sessionId: 's1', + timestamp: '2026-01-01T00:00:00.000Z', + type: 'user', + cwd: '/tmp', + version: 'test', + message: { + role: 'user', + parts: [{ text: 'Hello world PREVIEW-MARKER' }], + }, + }, + { + uuid: 'u2', + parentUuid: 'u1', + sessionId: 's1', + timestamp: '2026-01-01T00:00:01.000Z', + type: 'assistant', + cwd: '/tmp', + version: 'test', + message: { + role: 'model', + parts: [{ text: 'Hi from assistant REPLY-MARKER' }], + }, + }, + ], + }, + filePath: '/tmp/s1.jsonl', + lastCompletedUuid: 'u2', + }; +} + +describe('SessionPreview', () => { + it('shows loading state before data arrives', () => { + const svc = mockService(new Promise(() => {})); // never resolves + const { lastFrame } = render( + + + , + ); + expect(lastFrame()).toContain('Loading session preview'); + }); + + it('renders all messages after load', async () => { + const svc = mockService(fakeResumedData()); + const { lastFrame } = render( + + + , + ); + await wait(100); + const frame = lastFrame() ?? ''; + expect(frame).toContain('PREVIEW-MARKER'); + expect(frame).toContain('REPLY-MARKER'); + }); + + it('renders footer metadata (messageCount · time · branch)', async () => { + const svc = mockService(fakeResumedData()); + const { lastFrame } = render( + + + , + ); + await wait(100); + const frame = lastFrame() ?? 
''; + expect(frame).toMatch(/42\s*messages/); + expect(frame).toContain('feat/preview'); + }); + + it('calls onExit when Escape is pressed', async () => { + const onExit = vi.fn(); + const svc = mockService(fakeResumedData()); + const { stdin } = render( + + + , + ); + await wait(100); + stdin.write('\u001B'); // ESC + await wait(50); + expect(onExit).toHaveBeenCalledTimes(1); + }); + + it('calls onResume(sessionId) when Enter is pressed', async () => { + const onResume = vi.fn(); + const svc = mockService(fakeResumedData()); + const { stdin } = render( + + + , + ); + await wait(100); + stdin.write('\r'); // Enter + await wait(50); + expect(onResume).toHaveBeenCalledWith('s1'); + }); +}); diff --git a/packages/cli/src/ui/components/SessionPreview.tsx b/packages/cli/src/ui/components/SessionPreview.tsx new file mode 100644 index 000000000..c8b9cb4dd --- /dev/null +++ b/packages/cli/src/ui/components/SessionPreview.tsx @@ -0,0 +1,164 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { Box, Text } from 'ink'; +import { useEffect, useMemo, useState } from 'react'; +import type { + ResumedSessionData, + SessionService, +} from '@qwen-code/qwen-code-core'; +import { theme } from '../semantic-colors.js'; +import { HistoryItemDisplay } from './HistoryItemDisplay.js'; +import { useKeypress } from '../hooks/useKeypress.js'; +import { useTerminalSize } from '../hooks/useTerminalSize.js'; +import { buildResumedHistoryItems } from '../utils/resumeHistoryUtils.js'; +import { formatRelativeTime } from '../utils/formatters.js'; +import { formatMessageCount } from '../utils/sessionPickerUtils.js'; +import { t } from '../../i18n/index.js'; + +export interface SessionPreviewProps { + sessionService: SessionService; + sessionId: string; + sessionTitle?: string; + /** Message count from the session list entry, for the footer. 
*/ + messageCount?: number; + /** Last-modified time (ms epoch) from the session list entry, for the footer. */ + mtime?: number; + /** Git branch from the session list entry, for the footer. */ + gitBranch?: string; + onExit: () => void; + onResume: (sessionId: string) => void; +} + +export function SessionPreview(props: SessionPreviewProps) { + const { + sessionService, + sessionId, + sessionTitle, + messageCount, + mtime, + gitBranch, + onExit, + onResume, + } = props; + const { columns } = useTerminalSize(); + const [data, setData] = useState(null); + const [error, setError] = useState(null); + + useEffect(() => { + let cancelled = false; + setData(null); + setError(null); + sessionService + .loadSession(sessionId) + .then((d: ResumedSessionData | undefined) => { + if (cancelled) return; + if (!d) { + setError('Session not found'); + return; + } + setData(d); + }) + .catch((e: unknown) => { + if (cancelled) return; + setError(e instanceof Error ? e.message : String(e)); + }); + return () => { + cancelled = true; + }; + }, [sessionService, sessionId]); + + // Preview passes `null` config: tool_group entries degrade to name-only + // (no description). Users can press Enter to resume for full fidelity. + const items = useMemo(() => { + if (!data) return []; + return buildResumedHistoryItems(data, null); + }, [data]); + + useKeypress( + (key) => { + const { name, ctrl } = key; + if (name === 'escape' || (ctrl && name === 'c')) { + onExit(); + return; + } + if (name === 'return') { + onResume(sessionId); + } + }, + { isActive: true }, + ); + + // Clamp to a safe minimum: `'─'.repeat(boxWidth - 2)` would throw RangeError + // in very narrow terminals (tmux splits, small panes) if boxWidth < 2. 
+ const boxWidth = Math.max(10, columns - 4); + const separatorWidth = Math.max(0, boxWidth - 2); + + const metaParts: string[] = []; + if (typeof messageCount === 'number') { + metaParts.push(formatMessageCount(messageCount)); + } + if (typeof mtime === 'number') { + metaParts.push(formatRelativeTime(mtime)); + } + if (gitBranch) { + metaParts.push(gitBranch); + } + const metaLine = metaParts.join(' · '); + + return ( + + {/* Header */} + + + {sessionTitle ?? t('Session Preview')} + + + + {'─'.repeat(separatorWidth)} + + + {/* Body: render all items, let the terminal's scrollback own overflow. */} + {error ? ( + + {error} + + ) : !data ? ( + + + {t('Loading session preview...')} + + + ) : ( + + {items.map((item) => ( + + ))} + + )} + + {/* Footer */} + + {'─'.repeat(separatorWidth)} + + {metaLine && ( + + {metaLine} + + )} + + + {t('Enter to resume · Esc to back')} + + + + ); +} diff --git a/packages/cli/src/ui/components/StandaloneSessionPicker.test.tsx b/packages/cli/src/ui/components/StandaloneSessionPicker.test.tsx index 9a7c7b193..629231eb9 100644 --- a/packages/cli/src/ui/components/StandaloneSessionPicker.test.tsx +++ b/packages/cli/src/ui/components/StandaloneSessionPicker.test.tsx @@ -4,11 +4,16 @@ * SPDX-License-Identifier: Apache-2.0 */ +import type { ReactNode } from 'react'; import { render } from 'ink-testing-library'; import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import { KeypressProvider } from '../contexts/KeypressContext.js'; +import { ConfigContext } from '../contexts/ConfigContext.js'; +import { SettingsContext } from '../contexts/SettingsContext.js'; import { SessionPicker } from './SessionPicker.js'; +import type { LoadedSettings } from '../../config/settings.js'; import type { + Config, SessionListItem, ListSessionsResult, } from '@qwen-code/qwen-code-core'; @@ -621,4 +626,281 @@ describe('SessionPicker', () => { unmount(); }); }); + + describe('Preview Mode', () => { + // Mirror `StandaloneSessionPicker`'s 
runtime wrapping so the preview + // render tree (ToolGroupMessage, ToolMessage) can safely call + // `useConfig()` / `useSettings()` in tests. Without these, any test + // whose previewed session contains tool calls would crash. + const PREVIEW_CONFIG_STUB = { + getShouldUseNodePtyShell: () => false, + getIdeMode: () => false, + isTrustedFolder: () => false, + getToolRegistry: () => ({ getTool: () => undefined }), + getContentGenerator: () => ({ useSummarizedThinking: () => false }), + } as unknown as Config; + const PREVIEW_SETTINGS_STUB = { + merged: { ui: {} }, + } as unknown as LoadedSettings; + + function renderPicker(children: ReactNode) { + return render( + + + + {children} + + + , + ); + } + + function fakeResumedData(sessionId: string) { + return { + conversation: { + sessionId, + projectHash: 'h', + startTime: '2026-01-01T00:00:00.000Z', + lastUpdated: '2026-01-01T00:00:00.000Z', + messages: [ + { + uuid: 'u1', + parentUuid: null, + sessionId, + timestamp: '2026-01-01T00:00:00.000Z', + type: 'user', + cwd: '/tmp', + version: 'test', + message: { + role: 'user', + parts: [{ text: 'USER-ASKED-THIS' }], + }, + }, + { + uuid: 'u2', + parentUuid: 'u1', + sessionId, + timestamp: '2026-01-01T00:00:01.000Z', + type: 'assistant', + cwd: '/tmp', + version: 'test', + message: { + role: 'model', + parts: [{ text: 'ASSISTANT-REPLIED' }], + }, + }, + ], + }, + filePath: `/tmp/${sessionId}.jsonl`, + lastCompletedUuid: 'u2', + }; + } + + it('opens preview on Space and closes on Esc', async () => { + const sessions = [ + createMockSession({ + sessionId: 's1', + prompt: 'First session', + messageCount: 2, + }), + ]; + const service = createMockSessionService(sessions); + service.loadSession.mockResolvedValue(fakeResumedData('s1')); + + const { stdin, lastFrame } = renderPicker( + , + ); + + await wait(100); + expect(lastFrame()).toContain('First session'); + + stdin.write(' '); // Space + await wait(150); + const previewFrame = lastFrame() ?? 
''; + expect(previewFrame).toContain('USER-ASKED-THIS'); + expect(previewFrame).toContain('ASSISTANT-REPLIED'); + + stdin.write('\u001B'); // Esc + await wait(50); + const afterExitFrame = lastFrame() ?? ''; + expect(afterExitFrame).toContain('First session'); + expect(afterExitFrame).not.toContain('USER-ASKED-THIS'); + }); + + it('renders tool_group items without crashing (stub Providers mounted)', async () => { + // The previewed session contains a function call + tool_result, which + // produces a `tool_group` HistoryItem that exercises ToolGroupMessage + // and ToolMessage — the places that throw without stub Providers. + const toolSession = { + conversation: { + sessionId: 's1', + projectHash: 'h', + startTime: '2026-01-01T00:00:00.000Z', + lastUpdated: '2026-01-01T00:00:00.000Z', + messages: [ + { + uuid: 'u1', + parentUuid: null, + sessionId: 's1', + timestamp: '2026-01-01T00:00:00.000Z', + type: 'user', + cwd: '/tmp', + version: 'test', + message: { role: 'user', parts: [{ text: 'list files' }] }, + }, + { + uuid: 'u2', + parentUuid: 'u1', + sessionId: 's1', + timestamp: '2026-01-01T00:00:01.000Z', + type: 'assistant', + cwd: '/tmp', + version: 'test', + message: { + role: 'model', + parts: [ + { + functionCall: { + id: 'call-1', + name: 'BashTool', + args: { command: 'ls' }, + }, + }, + ], + }, + }, + { + uuid: 'u3', + parentUuid: 'u2', + sessionId: 's1', + timestamp: '2026-01-01T00:00:02.000Z', + type: 'tool_result', + cwd: '/tmp', + version: 'test', + toolCallResult: { + callId: 'call-1', + resultDisplay: 'a.txt\nb.txt', + status: 'success', + }, + }, + ], + }, + filePath: '/tmp/s1.jsonl', + lastCompletedUuid: 'u3', + }; + + const sessions = [ + createMockSession({ + sessionId: 's1', + prompt: 'list files', + messageCount: 3, + }), + ]; + const service = createMockSessionService(sessions); + service.loadSession.mockResolvedValue(toolSession); + + const { stdin, lastFrame } = renderPicker( + , + ); + + await wait(100); + stdin.write(' '); // Space → 
preview + await wait(150); + const frame = lastFrame() ?? ''; + // Tool group renders with raw function name fallback (no registry). + expect(frame).toContain('BashTool'); + }); + + it('Enter inside preview fires onSelect with previewed sessionId', async () => { + const sessions = [ + createMockSession({ + sessionId: 's1', + prompt: 'First', + messageCount: 2, + }), + createMockSession({ + sessionId: 's2', + prompt: 'Second', + messageCount: 2, + }), + ]; + const service = createMockSessionService(sessions); + service.loadSession.mockResolvedValue(fakeResumedData('s1')); + const onSelect = vi.fn(); + + const { stdin } = renderPicker( + , + ); + + await wait(100); + stdin.write(' '); // open preview on s1 + await wait(150); + stdin.write('\r'); // Enter + await wait(50); + expect(onSelect).toHaveBeenCalledWith('s1'); + }); + + it('without enablePreview, Space is a no-op and footer omits the hint', async () => { + // Regression: SessionPicker is also reused by the delete-session + // dialog, where `onSelect = handleDelete`. If preview were on by + // default, Space → preview → Enter would silently delete the session + // while the preview UI still says "Enter to resume". The default must + // stay opt-in. + const sessions = [ + createMockSession({ + sessionId: 's1', + prompt: 'Deletable session', + messageCount: 2, + }), + ]; + const service = createMockSessionService(sessions); + service.loadSession.mockResolvedValue(fakeResumedData('s1')); + const onSelect = vi.fn(); + + const { stdin, lastFrame } = renderPicker( + , + ); + + await wait(100); + const beforeFrame = lastFrame() ?? ''; + expect(beforeFrame).toContain('Deletable session'); + // Hint must not appear, otherwise we are training users to press + // Space in destructive flows. + expect(beforeFrame).not.toContain('Space to preview'); + + stdin.write(' '); // Space + await wait(150); + const afterFrame = lastFrame() ?? ''; + // No preview body, still on the list. 
+ expect(afterFrame).not.toContain('USER-ASKED-THIS'); + expect(afterFrame).toContain('Deletable session'); + + // Enter must still call onSelect on the highlighted row (delete path + // unchanged), not be eaten by a phantom preview. + stdin.write('\r'); + await wait(50); + expect(onSelect).toHaveBeenCalledWith('s1'); + expect(service.loadSession).not.toHaveBeenCalled(); + }); + }); }); diff --git a/packages/cli/src/ui/components/StandaloneSessionPicker.tsx b/packages/cli/src/ui/components/StandaloneSessionPicker.tsx index ba9a3f21f..1a245e0ab 100644 --- a/packages/cli/src/ui/components/StandaloneSessionPicker.tsx +++ b/packages/cli/src/ui/components/StandaloneSessionPicker.tsx @@ -9,12 +9,41 @@ import { render, Box, useApp } from 'ink'; import { getGitBranch, SessionService, + type Config, type SessionListItem, } from '@qwen-code/qwen-code-core'; import { KeypressProvider } from '../contexts/KeypressContext.js'; +import { ConfigContext } from '../contexts/ConfigContext.js'; +import { SettingsContext } from '../contexts/SettingsContext.js'; +import type { LoadedSettings } from '../../config/settings.js'; import { SessionPicker } from './SessionPicker.js'; import { writeStdoutLine } from '../../utils/stdioHelpers.js'; +/** + * `--resume` runs this picker BEFORE `loadCliConfig`, so no real Config / + * LoadedSettings exist yet. But the preview render tree (HistoryItemDisplay + * → ToolGroupMessage → ToolMessage) calls `useConfig()` / `useSettings()`, + * which throw without a Provider mounted. + * + * These stubs satisfy the Context consumers. Every downstream access of + * Config/Settings in the preview path is either optional-chained or gated + * on states (Confirming / Executing) that never occur in resumed session + * data, so the stubbed methods are only read, never invoked for real work. + * Tool descriptions fall back to the raw function-call name (see + * `buildResumedHistoryItems` handling when the registry returns undefined). 
+ */ +const PREVIEW_CONFIG_STUB = { + getShouldUseNodePtyShell: () => false, + getIdeMode: () => false, + isTrustedFolder: () => false, + getToolRegistry: () => ({ getTool: () => undefined }), + getContentGenerator: () => ({ useSummarizedThinking: () => false }), +} as unknown as Config; + +const PREVIEW_SETTINGS_STUB = { + merged: { ui: {} }, +} as unknown as LoadedSettings; + interface StandalonePickerScreenProps { sessionService: SessionService; onSelect: (sessionId: string) => void; @@ -43,20 +72,25 @@ function StandalonePickerScreen({ } return ( - { - onSelect(id); - handleExit(); - }} - onCancel={() => { - onCancel(); - handleExit(); - }} - currentBranch={currentBranch} - centerSelection={true} - initialSessions={initialSessions} - /> + + + { + onSelect(id); + handleExit(); + }} + onCancel={() => { + onCancel(); + handleExit(); + }} + currentBranch={currentBranch} + centerSelection={true} + initialSessions={initialSessions} + enablePreview + /> + + ); } diff --git a/packages/cli/src/ui/hooks/useSessionPicker.ts b/packages/cli/src/ui/hooks/useSessionPicker.ts index 98bba2f65..fef9dbd07 100644 --- a/packages/cli/src/ui/hooks/useSessionPicker.ts +++ b/packages/cli/src/ui/hooks/useSessionPicker.ts @@ -48,6 +48,11 @@ export interface UseSessionPickerOptions { * Enable/disable input handling. */ isActive?: boolean; + /** + * Enable Space-to-preview. See SessionPickerProps.enablePreview for the + * safety rationale (preview's Enter forwards to onSelect). 
+ */ + enablePreview?: boolean; } export interface UseSessionPickerResult { @@ -61,6 +66,9 @@ export interface UseSessionPickerResult { showScrollUp: boolean; showScrollDown: boolean; loadMoreSessions: () => Promise; + viewMode: 'list' | 'preview'; + previewSessionId: string | null; + exitPreview: () => void; } export function useSessionPicker({ @@ -72,6 +80,7 @@ export function useSessionPicker({ centerSelection = false, initialSessions, isActive = true, + enablePreview = false, }: UseSessionPickerOptions): UseSessionPickerResult { const hasInitialSessions = initialSessions !== undefined; const [selectedIndex, setSelectedIndex] = useState(0); @@ -86,6 +95,15 @@ export function useSessionPicker({ // For follow mode (non-centered) const [followScrollOffset, setFollowScrollOffset] = useState(0); + // Preview view state. + const [viewMode, setViewMode] = useState<'list' | 'preview'>('list'); + const [previewSessionId, setPreviewSessionId] = useState(null); + + const exitPreview = useCallback(() => { + setViewMode('list'); + setPreviewSessionId(null); + }, []); + const isLoadingMoreRef = useRef(false); const filteredSessions = useMemo( @@ -213,6 +231,12 @@ export function useSessionPicker({ // Key handling (KeypressContext) useKeypress( (key) => { + // Preview owns the keyboard while active; suppress list-mode + // handlers so we don't double-handle Escape / Enter / navigation. 
+ if (viewMode !== 'list') { + return; + } + const { name, sequence, ctrl } = key; if (name === 'escape' || (ctrl && name === 'c')) { @@ -264,13 +288,22 @@ export function useSessionPicker({ return; } + if (name === 'space' && enablePreview) { + const session = filteredSessions[selectedIndex]; + if (session) { + setPreviewSessionId(session.sessionId); + setViewMode('preview'); + } + return; + } + if (sequence === 'b' || sequence === 'B') { if (currentBranch) { setFilterByBranch((prev) => !prev); } } }, - { isActive }, + { isActive: isActive && viewMode === 'list' }, ); return { @@ -284,5 +317,8 @@ export function useSessionPicker({ showScrollUp, showScrollDown, loadMoreSessions, + viewMode, + previewSessionId, + exitPreview, }; } diff --git a/packages/cli/src/ui/utils/resumeHistoryUtils.ts b/packages/cli/src/ui/utils/resumeHistoryUtils.ts index 1807cfd85..2de73a5b1 100644 --- a/packages/cli/src/ui/utils/resumeHistoryUtils.ts +++ b/packages/cli/src/ui/utils/resumeHistoryUtils.ts @@ -82,7 +82,11 @@ function extractFunctionCalls( return calls; } -function getTool(config: Config, name: string): AnyDeclarativeTool | undefined { +function getTool( + config: Config | null, + name: string, +): AnyDeclarativeTool | undefined { + if (!config) return undefined; const toolRegistry = config.getToolRegistry(); return toolRegistry.getTool(name); } @@ -144,7 +148,7 @@ function restoreHistoryItem(raw: unknown): HistoryItemWithoutId | undefined { */ function convertToHistoryItems( conversation: ConversationRecord, - config: Config, + config: Config | null, ): HistoryItemWithoutId[] { const items: HistoryItemWithoutId[] = []; const pendingAtCommands: AtCommandRecordPayload[] = []; @@ -321,12 +325,13 @@ function convertToHistoryItems( case 'assistant': { const parts = record.message?.parts as Part[] | undefined; - // Extract thought content - const thoughtText = !config - .getContentGenerator() - .useSummarizedThinking() - ? 
extractThoughtTextFromParts(parts) - : ''; + // Extract thought content. With no config (standalone picker preview), + // default to showing thoughts verbatim (same path as + // `!useSummarizedThinking()`). + const thoughtText = + !config || !config.getContentGenerator().useSummarizedThinking() + ? extractThoughtTextFromParts(parts) + : ''; // Extract text content (non-function-call, non-thought) const text = extractTextFromParts(parts); @@ -451,13 +456,16 @@ function convertToHistoryItems( * and assigns unique IDs to each item for use with loadHistory. * * @param sessionData The resumed session data from SessionService - * @param config The config object for accessing tool registry + * @param config The config object for accessing tool registry. Pass `null` + * to render in "preview" mode (no tool metadata lookup, thoughts shown + * verbatim) — used by the standalone resume picker that runs before + * `loadCliConfig`. * @param baseTimestamp Base timestamp for generating unique IDs * @returns Array of HistoryItem with proper IDs */ export function buildResumedHistoryItems( sessionData: ResumedSessionData, - config: Config, + config: Config | null, baseTimestamp: number = Date.now(), ): HistoryItem[] { const items: HistoryItem[] = []; From 72c31d378dba5869dde2d0f110d41b395bc972dd Mon Sep 17 00:00:00 2001 From: jinye Date: Sun, 26 Apr 2026 06:49:42 +0800 Subject: [PATCH 09/36] fix(test): update rewind E2E Test 1 assertion after isRealUserTurn fix (#3622) Test 1 asserted `say exactly GAMMA3` after pressing Up once in the rewind selector, but that only passed because `/rewind` was incorrectly counted as a user turn. After `isRealUserTurn()` excluded slash commands, the turn list is [ALPHA1, BETA2, GAMMA3] and Up from the initial selection (GAMMA3) lands on BETA2. Update the assertion to match. 
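The turn-list arithmetic behind this assertion change can be sketched directly. The predicate below is a simplified stand-in for the real `isRealUserTurn()` (its actual signature and slash-command detection may differ); it shows why, once `/rewind` is filtered out, one Up from the initial selection lands on BETA2.

```typescript
interface UserTurn {
  text: string;
}

// Simplified sketch: slash commands are UI commands, not conversation turns,
// so they must not appear in the rewind selector's turn list.
function isRealUserTurn(turn: UserTurn): boolean {
  return !turn.text.trimStart().startsWith('/');
}

const turns = [
  { text: 'say exactly ALPHA1' },
  { text: 'say exactly BETA2' },
  { text: 'say exactly GAMMA3' },
  { text: '/rewind' }, // excluded by the filter
].filter(isRealUserTurn);

// Initial selection is the last real turn (GAMMA3); pressing Up once
// moves to the second-to-last entry.
const upFromLast = turns[turns.length - 2];
console.log(upFromLast.text); // "say exactly BETA2"
```

Before the fix, `/rewind` itself counted as a turn, so the same Up press landed on GAMMA3 instead, which is why the old assertion happened to pass.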
Ref: https://github.com/QwenLM/qwen-code/pull/3441#issuecomment-4319798259 Co-authored-by: jinye.djy Co-authored-by: Qwen-Coder --- scripts/test-rewind-e2e.sh | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/scripts/test-rewind-e2e.sh b/scripts/test-rewind-e2e.sh index 9b09f5cbe..394780372 100755 --- a/scripts/test-rewind-e2e.sh +++ b/scripts/test-rewind-e2e.sh @@ -299,13 +299,12 @@ test_rewind_command() { send_keys y wait_for "Conversation rewound" || return 1 - # After rewind: the input should be pre-populated with the selected turn's - # text ("say exactly GAMMA3..."). The GAMMA3 *response* turn should be gone - # from the conversation, but the text appears in the input bar — which is - # the correct pre-population behavior. - # Verify pre-population: the input bar should contain GAMMA3 text - assert_screen "say exactly GAMMA3" || return 1 - # Verify the earlier turns (ALPHA1, BETA2) are still in conversation + # After rewind: pressing Up once from the initial selection (GAMMA3, the last + # real user turn) lands on BETA2. Rewind targets BETA2, so its text gets + # pre-populated into the input bar. Slash commands like /rewind are excluded + # from the turn list by isRealUserTurn(). + assert_screen "say exactly BETA2" || return 1 + # Verify the earlier turn (ALPHA1) is still in conversation assert_scrollback "ALPHA1" || return 1 } From f7cfe53c6a869dc8578e12a90258857dbdacd531 Mon Sep 17 00:00:00 2001 From: jinye Date: Sun, 26 Apr 2026 07:37:56 +0800 Subject: [PATCH 10/36] fix(core): preserve settings-sourced apiKey when registry model envKey is absent (#3495) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(core): preserve settings-sourced apiKey when registry model envKey is absent (#3417) On restart, `applyResolvedModelDefaults` unconditionally cleared the apiKey resolved from `settings.security.auth.apiKey` (layer 4 fallback) and only read from `process.env[model.envKey]`. 
When the provider-specific env var was absent (e.g. key stored only in settings), the correctly resolved key was discarded, causing a 401 error. Now capture the previously-resolved apiKey before clearing and fall back to it when `process.env[model.envKey]` is empty, but only for safe source kinds (`settings` and general `env` without `via.modelProviders`). Co-authored-by: Qwen-Coder * fix(core): also preserve CLI-sourced apiKey during syncAfterAuthRefresh Address review feedback: keys passed via CLI flags (e.g. --openaiApiKey) were dropped on restart because source kind 'cli' was not in the fallback allowlist. Add 'cli' to the condition and a regression test. Co-authored-by: Qwen-Coder * fix(core): move apiKey preservation from applyResolvedModelDefaults to syncAfterAuthRefresh The previous fallback logic inside applyResolvedModelDefaults could leak a settings/cli-sourced apiKey to a different provider when switching models within the same authType (e.g. dashscope → openai). This is a credential safety issue because the two providers may have different baseUrls. Move the save/restore logic to syncAfterAuthRefresh Step 1, guarded by an `isUnchanged` check (same authType AND same modelId). This ensures: - Restart scenario: apiKey preserved (same model, no change) - Cross-provider switch: apiKey cleared (different modelId) Also adds two cross-provider switch tests (settings-sourced and CLI-sourced) per review feedback. Co-authored-by: Qwen-Coder * fix(core): replace non-null assertion with truthiness guard and add cold-start test - Replace `savedApiKeySource!` with a truthiness guard for safer source restoration - Add test for cold-start scenario (previousAuthType undefined) to verify no key preservation occurs on first syncAfterAuthRefresh - Fix stale "short-circuit" comment in programmatic key test Co-authored-by: Qwen-Coder * fix(core): detect provider config hot-reload in isUnchanged check When a model provider config is hot-reloaded (e.g. 
via Coding Plan update) changing envKey or baseUrl while keeping the same model id, the save/restore logic must not preserve the old apiKey. Extend the isUnchanged guard to compare apiKeyEnvKey and baseUrl against the resolved model, but only after applyResolvedModelDefaults has run at least once (apiKeyEnvKey !== undefined). On first startup call these fields are still unset, so the check is skipped to preserve the settings/cli-sourced key correctly. Adds two hot-reload tests (envKey change and baseUrl change). Co-authored-by: Qwen-Coder * fix(core): use baseUrl source as hasBeenApplied signal for provider change detection Replace `apiKeyEnvKey !== undefined` guard with `baseUrl source === 'modelProviders'` to reliably detect whether applyResolvedModelDefaults has been called before. This fixes two edge cases: 1. No-envKey models: hot-reload changing baseUrl was undetected because apiKeyEnvKey remained undefined. Now baseUrl source is checked. 2. Startup with envKey but omitted baseUrl: undefined !== default URL could falsely trigger isProviderChanged. Now skipped at startup since baseUrl source is not yet 'modelProviders'. Updates hot-reload test fixtures to simulate post-apply state (baseUrl source as 'modelProviders') and adds no-envKey hot-reload test. Co-authored-by: Qwen-Coder * fix(core): shallow-clone savedApiKeySource to avoid mutation risk Copy the ConfigSource object before applyResolvedModelDefaults runs, so a future refactor that mutates source objects in place won't break the save/restore logic. 
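The decision the commits above converge on can be condensed into one predicate. This is a simplified sketch, not the shipped `syncAfterAuthRefresh` logic (which also compares `apiKeyEnvKey`/`baseUrl` for hot-reload detection); the function name and option shape here are illustrative.

```typescript
interface ConfigSource {
  kind: 'settings' | 'cli' | 'env' | 'modelProviders';
  detail?: string;
}

// Sketch of the save/restore guard: preserve a previously resolved apiKey
// only on a same-auth, same-model restart, and only for source kinds that
// cannot leak one provider's credential to another provider's baseUrl.
function shouldPreserveApiKey(opts: {
  previousAuthType?: string;
  nextAuthType: string;
  previousModelId?: string;
  nextModelId: string;
  apiKeySource?: ConfigSource;
}): boolean {
  // Cold start: nothing was resolved before, so there is nothing to preserve.
  if (opts.previousAuthType === undefined) return false;
  // Cross-provider or cross-model switch: clear the key (credential safety).
  const isUnchanged =
    opts.previousAuthType === opts.nextAuthType &&
    opts.previousModelId === opts.nextModelId;
  if (!isUnchanged) return false;
  // Only settings/cli-sourced keys are safe fallbacks when the registry
  // model's envKey is absent from process.env.
  return (
    opts.apiKeySource?.kind === 'settings' || opts.apiKeySource?.kind === 'cli'
  );
}
```

Under this guard, the restart scenario from issue #3417 (same auth, same model, settings-sourced key) preserves the key, while a dashscope → openai model switch clears it.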
Co-authored-by: Qwen-Coder --------- Co-authored-by: jinye.djy Co-authored-by: Qwen-Coder --- packages/core/src/models/modelsConfig.test.ts | 633 ++++++++++++++++++ packages/core/src/models/modelsConfig.ts | 42 ++ 2 files changed, 675 insertions(+) diff --git a/packages/core/src/models/modelsConfig.test.ts b/packages/core/src/models/modelsConfig.test.ts index 87c8aaf34..f2e4bc1f1 100644 --- a/packages/core/src/models/modelsConfig.test.ts +++ b/packages/core/src/models/modelsConfig.test.ts @@ -597,6 +597,639 @@ describe('ModelsConfig', () => { expect(gc.samplingParams?.temperature).toBe(0.9); // Preserved from initial config }); + it('should fall back to settings-sourced apiKey when registry model envKey is not in process.env (restart scenario)', () => { + // Simulate the restart scenario from issue #3417: + // 1. User has settings.security.auth.apiKey = 'settings-api-key' + // 2. modelProviders.openai has a model with envKey = 'CODING_PLAN_KEY' + // 3. process.env['CODING_PLAN_KEY'] is NOT set + // 4. resolveCliGenerationConfig correctly resolved apiKey from settings (layer 4) + // 5. 
syncAfterAuthRefresh should NOT discard the settings-sourced key + + const envKey = 'CODING_PLAN_KEY_TEST_3417'; + // Ensure the env var is NOT set + delete process.env[envKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'qwen3.5-plus', + name: 'Test Model', + baseUrl: 'https://api.example.com/v1', + envKey, + generationConfig: { + samplingParams: { temperature: 0.3 }, + }, + }, + ], + }; + + // ModelsConfig initialized with settings-sourced apiKey (as would happen at startup) + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'qwen3.5-plus', + apiKey: 'settings-api-key', + baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1', + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + baseUrl: { kind: 'settings', detail: 'security.auth.baseUrl' }, + }, + }); + + // Verify initial state + expect(currentGenerationConfig(modelsConfig).apiKey).toBe( + 'settings-api-key', + ); + + // Simulate what refreshAuth does on startup + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'qwen3.5-plus'); + + const gc = currentGenerationConfig(modelsConfig); + // The settings-sourced apiKey should be preserved as fallback + expect(gc.apiKey).toBe('settings-api-key'); + // envKey metadata should still be set for diagnostics + expect(gc.apiKeyEnvKey).toBe(envKey); + // Model and other provider config should be applied + expect(gc.model).toBe('qwen3.5-plus'); + expect(gc.samplingParams?.temperature).toBe(0.3); + + // Source should still reflect settings origin + const sources = modelsConfig.getGenerationConfigSources(); + expect(sources['apiKey']?.kind).toBe('settings'); + }); + + it('should prefer env var over settings apiKey when both exist (restart scenario)', () => { + const envKey = 'CODING_PLAN_KEY_TEST_3417_PREFER'; + // Set the env var + 
process.env[envKey] = 'env-api-key'; + + try { + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'test-model', + name: 'Test Model', + baseUrl: 'https://api.example.com/v1', + envKey, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'test-model', + apiKey: 'settings-api-key', + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + }, + }); + + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'test-model'); + + const gc = currentGenerationConfig(modelsConfig); + // Env var should take priority over settings apiKey + expect(gc.apiKey).toBe('env-api-key'); + expect(gc.apiKeyEnvKey).toBe(envKey); + + const sources = modelsConfig.getGenerationConfigSources(); + expect(sources['apiKey']?.kind).toBe('env'); + } finally { + delete process.env[envKey]; + } + }); + + it('should preserve programmatic apiKey when authType and modelId unchanged (restart scenario)', () => { + // When apiKey was set via updateCredentials (programmatic source) and + // syncAfterAuthRefresh is called with the same authType+modelId, + // the short-circuit should preserve the existing key. 
+ const envKey = 'CODING_PLAN_KEY_TEST_3417_PROG'; + delete process.env[envKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'provider-model', + name: 'Provider Model', + baseUrl: 'https://api.example.com/v1', + envKey, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'provider-model', + apiKey: 'programmatic-key', + }, + generationConfigSources: { + model: { kind: 'programmatic', detail: 'updateCredentials' }, + apiKey: { kind: 'programmatic', detail: 'updateCredentials' }, + }, + }); + + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'provider-model'); + + const gc = currentGenerationConfig(modelsConfig); + // Same authType + same modelId → apiKey preserved via save/restore around applyResolvedModelDefaults + expect(gc.apiKey).toBe('programmatic-key'); + }); + + it('should NOT preserve env apiKey with via.modelProviders during model switch', () => { + // When switching from model-a to model-b, model-a's provider-specific + // envKey value should NOT be reused for model-b — they may target + // different services with different credentials. 
+ const envKeyA = 'PROVIDER_KEY_A_TEST_3417'; + const envKeyB = 'PROVIDER_KEY_B_TEST_3417'; + delete process.env[envKeyA]; + delete process.env[envKeyB]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'model-a', + name: 'Model A', + baseUrl: 'https://api-a.example.com/v1', + envKey: envKeyA, + }, + { + id: 'model-b', + name: 'Model B', + baseUrl: 'https://api-b.example.com/v1', + envKey: envKeyB, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'model-a', + apiKey: 'key-for-model-a', + }, + generationConfigSources: { + model: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'model-a', + detail: 'model.id', + }, + apiKey: { + kind: 'env', + envKey: envKeyA, + via: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'model-a', + detail: 'envKey', + }, + }, + }, + }); + + // Switch to model-b whose envKey is also not set + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'model-b'); + + const gc = currentGenerationConfig(modelsConfig); + // model-a's key should NOT be reused for model-b + expect(gc.apiKey).toBeUndefined(); + expect(gc.model).toBe('model-b'); + }); + + it('should NOT preserve settings-sourced apiKey when switching to a different provider within same authType', () => { + // Cross-provider switch: provider-A (settings-sourced key) → provider-B + // Settings key must NOT leak to provider-B which may have a different baseUrl. 
+ const envKeyA = 'PROVIDER_KEY_A_SETTINGS_TEST'; + const envKeyB = 'PROVIDER_KEY_B_SETTINGS_TEST'; + delete process.env[envKeyA]; + delete process.env[envKeyB]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'provider-a', + name: 'Provider A', + baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1', + envKey: envKeyA, + }, + { + id: 'provider-b', + name: 'Provider B', + baseUrl: 'https://api.openai.com/v1', + envKey: envKeyB, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'provider-a', + apiKey: 'settings-api-key', + baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1', + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + baseUrl: { kind: 'settings', detail: 'security.auth.baseUrl' }, + }, + }); + + // Switch to provider-b (different model, same authType) + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'provider-b'); + + const gc = currentGenerationConfig(modelsConfig); + // settings-sourced key for provider-a must NOT be sent to provider-b + expect(gc.apiKey).toBeUndefined(); + expect(gc.model).toBe('provider-b'); + expect(gc.baseUrl).toBe('https://api.openai.com/v1'); + }); + + it('should NOT preserve CLI-sourced apiKey when switching to a different provider within same authType', () => { + // Cross-provider switch: provider-A (CLI-sourced key) → provider-B + const envKeyA = 'PROVIDER_KEY_A_CLI_TEST'; + const envKeyB = 'PROVIDER_KEY_B_CLI_TEST'; + delete process.env[envKeyA]; + delete process.env[envKeyB]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'cli-provider-a', + name: 'CLI Provider A', + baseUrl: 'https://api-a.example.com/v1', + envKey: envKeyA, + }, + { + id: 'cli-provider-b', + name: 'CLI Provider B', + baseUrl: 'https://api-b.example.com/v1', + 
envKey: envKeyB, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'cli-provider-a', + apiKey: 'cli-provided-key', + }, + generationConfigSources: { + model: { kind: 'cli', detail: '--model' }, + apiKey: { kind: 'cli', detail: '--openaiApiKey' }, + }, + }); + + // Switch to cli-provider-b (different model, same authType) + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'cli-provider-b'); + + const gc = currentGenerationConfig(modelsConfig); + // CLI key for provider-a must NOT be sent to provider-b + expect(gc.apiKey).toBeUndefined(); + expect(gc.model).toBe('cli-provider-b'); + expect(gc.baseUrl).toBe('https://api-b.example.com/v1'); + }); + + it('should NOT preserve apiKey on first syncAfterAuthRefresh when previousAuthType is undefined (cold start)', () => { + // Cold start: ModelsConfig created without initialAuthType, then + // syncAfterAuthRefresh is called for the first time. previousAuthType + // is undefined, so isUnchanged must be false — no key preservation. 
+ const envKey = 'COLD_START_KEY_TEST_3417'; + delete process.env[envKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'cold-start-model', + name: 'Cold Start Model', + baseUrl: 'https://api.example.com/v1', + envKey, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + modelProvidersConfig, + generationConfig: { + model: 'cold-start-model', + apiKey: 'stale-key-from-previous-session', + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + }, + }); + + // First auth refresh — previousAuthType is undefined + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'cold-start-model'); + + const gc = currentGenerationConfig(modelsConfig); + // previousAuthType (undefined) !== USE_OPENAI → isUnchanged is false → no preservation + expect(gc.apiKey).toBeUndefined(); + expect(gc.model).toBe('cold-start-model'); + }); + + it('should NOT preserve apiKey when same modelId but envKey changed (hot-reload)', () => { + // Hot-reload scenario: model provider config is reloaded, changing the + // envKey for the same model id. The old apiKey must NOT be restored. 
+ const oldEnvKey = 'OLD_ENV_KEY_HOT_RELOAD_TEST'; + const newEnvKey = 'NEW_ENV_KEY_HOT_RELOAD_TEST'; + delete process.env[oldEnvKey]; + delete process.env[newEnvKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'hot-reload-model', + name: 'Hot Reload Model', + baseUrl: 'https://api.example.com/v1', + envKey: oldEnvKey, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'hot-reload-model', + apiKey: 'old-api-key', + baseUrl: 'https://api.example.com/v1', + apiKeyEnvKey: oldEnvKey, + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + baseUrl: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'hot-reload-model', + detail: 'baseUrl', + }, + apiKeyEnvKey: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'hot-reload-model', + detail: 'envKey', + }, + }, + }); + + // Simulate hot-reload: update registry with new envKey + modelsConfig.reloadModelProvidersConfig({ + openai: [ + { + id: 'hot-reload-model', + name: 'Hot Reload Model', + baseUrl: 'https://api.example.com/v1', + envKey: newEnvKey, + }, + ], + }); + + // syncAfterAuthRefresh with same authType and modelId + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'hot-reload-model'); + + const gc = currentGenerationConfig(modelsConfig); + // envKey changed → isUnchanged is false → old key must NOT be preserved + expect(gc.apiKey).toBeUndefined(); + expect(gc.apiKeyEnvKey).toBe(newEnvKey); + expect(gc.model).toBe('hot-reload-model'); + }); + + it('should NOT preserve apiKey when same modelId but baseUrl changed (hot-reload)', () => { + // Hot-reload scenario: model provider config is reloaded, changing the + // baseUrl for the same model id. The old apiKey must NOT be restored. 
+ const envKey = 'BASE_URL_HOT_RELOAD_TEST'; + delete process.env[envKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'url-reload-model', + name: 'URL Reload Model', + baseUrl: 'https://old-api.example.com/v1', + envKey, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'url-reload-model', + apiKey: 'old-api-key', + baseUrl: 'https://old-api.example.com/v1', + apiKeyEnvKey: envKey, + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + baseUrl: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'url-reload-model', + detail: 'baseUrl', + }, + apiKeyEnvKey: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'url-reload-model', + detail: 'envKey', + }, + }, + }); + + // Simulate hot-reload: update registry with new baseUrl + modelsConfig.reloadModelProvidersConfig({ + openai: [ + { + id: 'url-reload-model', + name: 'URL Reload Model', + baseUrl: 'https://new-api.example.com/v1', + envKey, + }, + ], + }); + + // syncAfterAuthRefresh with same authType and modelId + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'url-reload-model'); + + const gc = currentGenerationConfig(modelsConfig); + // baseUrl changed → isUnchanged is false → old key must NOT be preserved + expect(gc.apiKey).toBeUndefined(); + expect(gc.baseUrl).toBe('https://new-api.example.com/v1'); + expect(gc.model).toBe('url-reload-model'); + }); + + it('should NOT preserve apiKey when no-envKey model has baseUrl changed (hot-reload)', () => { + // Hot-reload scenario for a model without envKey: baseUrl changes but + // modelId stays the same. The old apiKey must NOT be restored. 
+ const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'no-envkey-model', + name: 'No EnvKey Model', + baseUrl: 'https://old-api.example.com/v1', + // no envKey + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'no-envkey-model', + apiKey: 'old-settings-key', + baseUrl: 'https://old-api.example.com/v1', + }, + // Simulate post-apply state: baseUrl source is modelProviders + generationConfigSources: { + model: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'no-envkey-model', + detail: 'model.id', + }, + apiKey: { kind: 'settings', detail: 'security.auth.apiKey' }, + baseUrl: { + kind: 'modelProviders', + authType: 'openai', + modelId: 'no-envkey-model', + detail: 'baseUrl', + }, + }, + }); + + // Simulate hot-reload: update registry with new baseUrl + modelsConfig.reloadModelProvidersConfig({ + openai: [ + { + id: 'no-envkey-model', + name: 'No EnvKey Model', + baseUrl: 'https://new-api.example.com/v1', + }, + ], + }); + + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'no-envkey-model'); + + const gc = currentGenerationConfig(modelsConfig); + // baseUrl changed → isProviderChanged is true even without envKey + expect(gc.apiKey).toBeUndefined(); + expect(gc.baseUrl).toBe('https://new-api.example.com/v1'); + expect(gc.model).toBe('no-envkey-model'); + }); + + it('should preserve general env var apiKey (e.g. OPENAI_API_KEY) when provider envKey is absent', () => { + // If the user has OPENAI_API_KEY set but NOT the provider-specific envKey, + // the general env var should be preserved as a fallback. 
+ const providerEnvKey = 'SPECIFIC_PROVIDER_KEY_TEST_3417'; + delete process.env[providerEnvKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'test-model', + name: 'Test Model', + baseUrl: 'https://api.example.com/v1', + envKey: providerEnvKey, + }, + ], + }; + + // resolveCliGenerationConfig resolved apiKey from OPENAI_API_KEY (layer 3) + // — source has kind:'env' but no 'via' (general env var, not provider-specific) + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'test-model', + apiKey: 'openai-api-key-value', + }, + generationConfigSources: { + model: { kind: 'settings', detail: 'settings.model.name' }, + apiKey: { kind: 'env', envKey: 'OPENAI_API_KEY' }, + }, + }); + + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'test-model'); + + const gc = currentGenerationConfig(modelsConfig); + // General env var key should be preserved + expect(gc.apiKey).toBe('openai-api-key-value'); + + const sources = modelsConfig.getGenerationConfigSources(); + expect(sources['apiKey']?.kind).toBe('env'); + expect(sources['apiKey']?.envKey).toBe('OPENAI_API_KEY'); + }); + + it('should preserve CLI-sourced apiKey (--openaiApiKey) when registry model envKey is absent', () => { + // Regression: CLI-passed keys (source kind 'cli') must not be discarded + // during syncAfterAuthRefresh when the provider's envKey is unset. 
+ const envKey = 'CODING_PLAN_KEY_TEST_3417_CLI'; + delete process.env[envKey]; + + const modelProvidersConfig: ModelProvidersConfig = { + openai: [ + { + id: 'cli-test-model', + name: 'CLI Test Model', + baseUrl: 'https://api.example.com/v1', + envKey, + generationConfig: { + samplingParams: { temperature: 0.5 }, + }, + }, + ], + }; + + const modelsConfig = new ModelsConfig({ + initialAuthType: AuthType.USE_OPENAI, + modelProvidersConfig, + generationConfig: { + model: 'cli-test-model', + apiKey: 'cli-provided-key', + }, + generationConfigSources: { + model: { kind: 'cli', detail: '--model' }, + apiKey: { kind: 'cli', detail: '--openaiApiKey' }, + }, + }); + + // Verify initial state + expect(currentGenerationConfig(modelsConfig).apiKey).toBe( + 'cli-provided-key', + ); + + modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'cli-test-model'); + + const gc = currentGenerationConfig(modelsConfig); + // CLI-sourced apiKey should be preserved as fallback + expect(gc.apiKey).toBe('cli-provided-key'); + expect(gc.apiKeyEnvKey).toBe(envKey); + expect(gc.model).toBe('cli-test-model'); + expect(gc.samplingParams?.temperature).toBe(0.5); + + const sources = modelsConfig.getGenerationConfigSources(); + expect(sources['apiKey']?.kind).toBe('cli'); + expect(sources['apiKey']?.detail).toBe('--openaiApiKey'); + }); + it('should maintain consistency between currentModelId and _generationConfig.model after initialization', () => { const modelProvidersConfig: ModelProvidersConfig = { openai: [ diff --git a/packages/core/src/models/modelsConfig.ts b/packages/core/src/models/modelsConfig.ts index f7086bbca..d34cc08c6 100644 --- a/packages/core/src/models/modelsConfig.ts +++ b/packages/core/src/models/modelsConfig.ts @@ -877,7 +877,49 @@ export class ModelsConfig { if (modelId && this.modelRegistry.hasModel(authType, modelId)) { const resolved = this.modelRegistry.getModel(authType, modelId); if (resolved) { + // When authType and modelId haven't changed (startup/restart 
scenario), + // the current apiKey was already correctly resolved by + // resolveCliGenerationConfig. Save it so we can restore it if + // applyResolvedModelDefaults clears it (i.e. process.env[envKey] is + // absent). For cross-provider switches (different modelId), we must + // NOT preserve the previous key — it may belong to a different + // service. Also detect hot-reload scenarios where the provider + // config changed in place (same modelId, different envKey/baseUrl) + // by comparing fields that applyResolvedModelDefaults sets. Use + // baseUrl source === 'modelProviders' as the "has been applied" + // signal — it covers both envKey and no-envKey models, and avoids + // false positives when startup baseUrl differs from registry + // default. (See #3417) + const hasBeenApplied = + this.generationConfigSources['baseUrl']?.kind === 'modelProviders'; + const isProviderChanged = + hasBeenApplied && + (this._generationConfig.apiKeyEnvKey !== resolved.envKey || + this._generationConfig.baseUrl !== resolved.baseUrl); + const isUnchanged = + previousAuthType === authType && + this._generationConfig.model === modelId && + !isProviderChanged; + const savedApiKey = isUnchanged + ? this._generationConfig.apiKey + : undefined; + const savedApiKeySource = isUnchanged + ? this.generationConfigSources['apiKey'] + ? { ...this.generationConfigSources['apiKey'] } + : undefined + : undefined; + this.applyResolvedModelDefaults(resolved); + + // Restore the previously-resolved apiKey if applyResolvedModelDefaults + // cleared it (env var not found) and this is the same model. 
+ if (isUnchanged && !this._generationConfig.apiKey && savedApiKey) { + this._generationConfig.apiKey = savedApiKey; + if (savedApiKeySource) { + this.generationConfigSources['apiKey'] = savedApiKeySource; + } + } + this.strictModelProviderSelection = true; // Clear active runtime model snapshot since we're now using a registry model this.activeRuntimeModelSnapshotId = undefined; From 4be0234d100b6860a5f94abbfd0280fe31deb424 Mon Sep 17 00:00:00 2001 From: jinye Date: Sun, 26 Apr 2026 07:40:35 +0800 Subject: [PATCH 11/36] docs(telemetry): clarify Alibaba Cloud console entry (#3498) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * docs(telemetry): clarify Alibaba Cloud console entry Co-authored-by: Qwen-Coder * docs(telemetry): fix unreachable intl console URL and split new/legacy console guidance - Replace unreachable tracing-sgnew.console.alibabacloud.com with the verified arms.console.alibabacloud.com for international users - Separate OTLP endpoint retrieval steps by console version: new console uses Integration Center, legacy console uses Cluster Configurations → Access point information 🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code) * docs(telemetry): align target example with current implementation Co-authored-by: Qwen-Coder * docs(telemetry): clarify Alibaba Cloud OTLP setup Co-authored-by: Qwen-Coder * docs(telemetry): remove stale TOC entry Co-authored-by: Qwen-Coder --------- Co-authored-by: jinye.djy Co-authored-by: Qwen-Coder --- docs/developers/development/telemetry.md | 72 ++++++++++++++++++------ 1 file changed, 55 insertions(+), 17 deletions(-) diff --git a/docs/developers/development/telemetry.md b/docs/developers/development/telemetry.md index 94859048e..006a668a7 100644 --- a/docs/developers/development/telemetry.md +++ b/docs/developers/development/telemetry.md @@ -7,8 +7,7 @@ Learn how to enable and setup OpenTelemetry for Qwen Code. 
- [OpenTelemetry Integration](#opentelemetry-integration) - [Configuration](#configuration) - [Aliyun Telemetry](#aliyun-telemetry) - - [Prerequisites](#prerequisites) - - [Direct Export (Recommended)](#direct-export-recommended) + - [Manual OTLP Export](#manual-otlp-export) - [Local Telemetry](#local-telemetry) - [File-based Output (Recommended)](#file-based-output-recommended) - [Collector-Based Export (Advanced)](#collector-based-export-advanced) @@ -44,6 +43,11 @@ observability framework — Qwen Code's observability system provides: instrumentation [OpenTelemetry]: https://opentelemetry.io/ +[aliyun-opentelemetry-overview]: https://www.alibabacloud.com/help/en/arms/tracing-analysis/product-overview/what-is-tracing-analysis +[aliyun-opentelemetry-get-started]: https://www.alibabacloud.com/help/en/arms/tracing-analysis/before-you-begin +[aliyun-opentelemetry-console-cn]: https://trace.console.aliyun.com +[aliyun-opentelemetry-console-cn-legacy]: https://tracing.console.aliyun.com +[aliyun-opentelemetry-console-intl]: https://arms.console.alibabacloud.com ## Configuration @@ -54,15 +58,15 @@ observability framework — Qwen Code's observability system provides: All telemetry behavior is controlled through your `.qwen/settings.json` file. These settings can be overridden by environment variables or CLI flags. 
-| Setting | Environment Variable | CLI Flag | Description | Values | Default | -| -------------- | ------------------------------ | -------------------------------------------------------- | ------------------------------------------------- | ------------------ | ----------------------- | -| `enabled` | `QWEN_TELEMETRY_ENABLED` | `--telemetry` / `--no-telemetry` | Enable or disable telemetry | `true`/`false` | `false` | -| `target` | `QWEN_TELEMETRY_TARGET` | `--telemetry-target ` | Where to send telemetry data | `"qwen"`/`"local"` | `"local"` | -| `otlpEndpoint` | `QWEN_TELEMETRY_OTLP_ENDPOINT` | `--telemetry-otlp-endpoint ` | OTLP collector endpoint | URL string | `http://localhost:4317` | -| `otlpProtocol` | `QWEN_TELEMETRY_OTLP_PROTOCOL` | `--telemetry-otlp-protocol ` | OTLP transport protocol | `"grpc"`/`"http"` | `"grpc"` | -| `outfile` | `QWEN_TELEMETRY_OUTFILE` | `--telemetry-outfile ` | Save telemetry to file (overrides `otlpEndpoint`) | file path | - | -| `logPrompts` | `QWEN_TELEMETRY_LOG_PROMPTS` | `--telemetry-log-prompts` / `--no-telemetry-log-prompts` | Include prompts in telemetry logs | `true`/`false` | `true` | -| `useCollector` | `QWEN_TELEMETRY_USE_COLLECTOR` | - | Use external OTLP collector (advanced) | `true`/`false` | `false` | +| Setting | Environment Variable | CLI Flag | Description | Values | Default | +| -------------- | ------------------------------ | -------------------------------------------------------- | ------------------------------------------------- | ----------------- | ----------------------- | +| `enabled` | `QWEN_TELEMETRY_ENABLED` | `--telemetry` / `--no-telemetry` | Enable or disable telemetry | `true`/`false` | `false` | +| `target` | `QWEN_TELEMETRY_TARGET` | `--telemetry-target ` | Where to send telemetry data | `"gcp"`/`"local"` | `"local"` | +| `otlpEndpoint` | `QWEN_TELEMETRY_OTLP_ENDPOINT` | `--telemetry-otlp-endpoint ` | OTLP collector endpoint | URL string | `http://localhost:4317` | +| `otlpProtocol` | 
`QWEN_TELEMETRY_OTLP_PROTOCOL` | `--telemetry-otlp-protocol ` | OTLP transport protocol | `"grpc"`/`"http"` | `"grpc"` | +| `outfile` | `QWEN_TELEMETRY_OUTFILE` | `--telemetry-outfile ` | Save telemetry to file (overrides `otlpEndpoint`) | file path | - | +| `logPrompts` | `QWEN_TELEMETRY_LOG_PROMPTS` | `--telemetry-log-prompts` / `--no-telemetry-log-prompts` | Include prompts in telemetry logs | `true`/`false` | `true` | +| `useCollector` | `QWEN_TELEMETRY_USE_COLLECTOR` | - | Use external OTLP collector (advanced) | `true`/`false` | `false` | **Note on boolean environment variables:** For the boolean settings (`enabled`, `logPrompts`, `useCollector`), setting the corresponding environment variable to @@ -73,21 +77,55 @@ For detailed information about all configuration options, see the ## Aliyun Telemetry -### Direct Export (Recommended) +### Manual OTLP Export -Sends telemetry directly to Aliyun services. No collector needed. +To view Qwen Code telemetry in Alibaba Cloud Managed Service for +OpenTelemetry, configure Qwen Code to export to the OTLP endpoint +provided by ARMS. -1. Enable telemetry in your `.qwen/settings.json`: +Setting `"target": "gcp"` alone does not configure the export +destination. If `otlpEndpoint` is not set, Qwen Code still defaults to +`http://localhost:4317`. If `outfile` is set, it overrides +`otlpEndpoint` and telemetry is written to the file instead of being +sent to Alibaba Cloud. + +1. Enable telemetry in your `.qwen/settings.json` and set the OTLP + endpoint: ```json { "telemetry": { "enabled": true, - "target": "qwen" + "target": "gcp", + "otlpEndpoint": "https://", + "otlpProtocol": "grpc" } } ``` -2. Run Qwen Code and send prompts. -3. View logs and metrics in the Aliyun Console. +2. If your Alibaba Cloud endpoint requires authentication, provide OTLP + headers through standard OpenTelemetry environment variables such as + `OTEL_EXPORTER_OTLP_HEADERS` (or the signal-specific variants). 
Qwen + Code does not currently expose OTLP auth headers directly in + `.qwen/settings.json`. +3. Run Qwen Code and send prompts. +4. View telemetry in Managed Service for OpenTelemetry: + - Product overview: + [What is Managed Service for OpenTelemetry?][aliyun-opentelemetry-overview] + - Getting started: + [Get started with Managed Service for OpenTelemetry][aliyun-opentelemetry-get-started] + - Console entry points: + - China mainland: + [trace.console.aliyun.com][aliyun-opentelemetry-console-cn] + (legacy console: + [tracing.console.aliyun.com][aliyun-opentelemetry-console-cn-legacy]) + - International: + [arms.console.alibabacloud.com][aliyun-opentelemetry-console-intl] + - In the console, use `Applications` to inspect traces and service + topology. + - To locate the OTLP endpoint and access information: + - **New console** (`trace.console.aliyun.com` or international): + navigate to `Integration Center`. + - **Legacy console** (`tracing.console.aliyun.com`): navigate to + `Cluster Configurations` → `Access point information`. 
## Local Telemetry From eea4e10eea1d74b42f8efdba00bac6d0447ab998 Mon Sep 17 00:00:00 2001 From: Yan Shen Date: Sun, 26 Apr 2026 12:21:30 +0800 Subject: [PATCH 12/36] feat(cli): add sticky todo panel to app layouts (#3507) * feat(cli): add sticky todo panel to app layouts * fix(cli): hide sticky todos during feedback dialog --- packages/cli/src/ui/App.test.tsx | 1 + packages/cli/src/ui/AppContainer.test.tsx | 4 +- packages/cli/src/ui/AppContainer.tsx | 199 ++++++++++-------- .../src/ui/components/StickyTodoList.test.tsx | 57 +++++ .../cli/src/ui/components/StickyTodoList.tsx | 85 ++++++++ .../cli/src/ui/contexts/UIStateContext.tsx | 2 + .../src/ui/layouts/DefaultAppLayout.test.tsx | 183 ++++++++++++++++ .../cli/src/ui/layouts/DefaultAppLayout.tsx | 14 ++ .../ui/layouts/ScreenReaderAppLayout.test.tsx | 123 +++++++++++ .../src/ui/layouts/ScreenReaderAppLayout.tsx | 14 ++ .../cli/src/ui/utils/todoSnapshot.test.ts | 199 ++++++++++++++++++ packages/cli/src/ui/utils/todoSnapshot.ts | 109 ++++++++++ 12 files changed, 895 insertions(+), 95 deletions(-) create mode 100644 packages/cli/src/ui/components/StickyTodoList.test.tsx create mode 100644 packages/cli/src/ui/components/StickyTodoList.tsx create mode 100644 packages/cli/src/ui/layouts/DefaultAppLayout.test.tsx create mode 100644 packages/cli/src/ui/layouts/ScreenReaderAppLayout.test.tsx create mode 100644 packages/cli/src/ui/utils/todoSnapshot.test.ts create mode 100644 packages/cli/src/ui/utils/todoSnapshot.ts diff --git a/packages/cli/src/ui/App.test.tsx b/packages/cli/src/ui/App.test.tsx index 8df422f4b..7dcc98d90 100644 --- a/packages/cli/src/ui/App.test.tsx +++ b/packages/cli/src/ui/App.test.tsx @@ -57,6 +57,7 @@ describe('App', () => { streamingState: StreamingState.Idle, quittingMessages: null, dialogsVisible: false, + stickyTodos: null, mainControlsRef: { current: null }, historyManager: { addItem: vi.fn(), diff --git a/packages/cli/src/ui/AppContainer.test.tsx b/packages/cli/src/ui/AppContainer.test.tsx index 
a213c2fa9..0ecd6dd6c 100644 --- a/packages/cli/src/ui/AppContainer.test.tsx +++ b/packages/cli/src/ui/AppContainer.test.tsx @@ -31,6 +31,7 @@ import { } from './contexts/UIActionsContext.js'; import { ToolCallStatus } from './types.js'; import { useContext } from 'react'; +import { Box, measureElement } from 'ink'; // Mock useStdout to capture terminal title writes let mockStdout: { write: ReturnType }; @@ -50,7 +51,7 @@ let capturedUIActions: UIActions; function TestContextConsumer() { capturedUIState = useContext(UIStateContext)!; capturedUIActions = useContext(UIActionsContext)!; - return null; + return ; } vi.mock('./App.js', () => ({ @@ -122,7 +123,6 @@ import { useSessionStats } from './contexts/SessionContext.js'; import { useTextBuffer } from './components/shared/text-buffer.js'; import { useLogger } from './hooks/useLogger.js'; import { useLoadingIndicator } from './hooks/useLoadingIndicator.js'; -import { measureElement } from 'ink'; import { useTerminalSize } from './hooks/useTerminalSize.js'; import { ShellExecutionService } from '@qwen-code/qwen-code-core'; diff --git a/packages/cli/src/ui/AppContainer.tsx b/packages/cli/src/ui/AppContainer.tsx index 5a3deeb27..c16e5305d 100644 --- a/packages/cli/src/ui/AppContainer.tsx +++ b/packages/cli/src/ui/AppContainer.tsx @@ -58,6 +58,7 @@ import { type WaitingToolCall, } from '@qwen-code/qwen-code-core'; import { buildResumedHistoryItems } from './utils/resumeHistoryUtils.js'; +import { getStickyTodos } from './utils/todoSnapshot.js'; import { validateAuthMethod } from '../config/auth.js'; import { loadHierarchicalGeminiMemory } from '../config/config.js'; import process from 'node:process'; @@ -1229,6 +1230,10 @@ export const AppContainer = (props: AppContainerProps) => { () => [...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems], [pendingSlashCommandHistoryItems, pendingGeminiHistoryItems], ); + const stickyTodos = useMemo( + () => getStickyTodos(historyManager.history, pendingHistoryItems), + 
[historyManager.history, pendingHistoryItems], + ); // Terminal tab progress bar (OSC 9;4) for iTerm2/Ghostty useTerminalProgress(streamingState, isToolExecuting(pendingHistoryItems)); @@ -1281,40 +1286,6 @@ export const AppContainer = (props: AppContainerProps) => { (streamingState === StreamingState.Idle || streamingState === StreamingState.Responding); - const [controlsHeight, setControlsHeight] = useState(0); - - useLayoutEffect(() => { - if (mainControlsRef.current) { - const fullFooterMeasurement = measureElement(mainControlsRef.current); - if (fullFooterMeasurement.height > 0) { - setControlsHeight(fullFooterMeasurement.height); - } - } - }, [buffer, terminalWidth, terminalHeight, btwItem]); - - // agentViewState is declared earlier (before handleFinalSubmit) so it - // is available for input routing. Referenced here for layout computation. - - // Compute available terminal height based on controls measurement. - // When in-process agents are present the AgentTabBar renders an extra - // row at the top of the layout; subtract it so downstream consumers - // (shell, transcript, etc.) don't overestimate available space. - const tabBarHeight = agentViewState.agents.size > 0 ? 
1 : 0; - const availableTerminalHeight = Math.max( - 0, - terminalHeight - controlsHeight - staticExtraHeight - 2 - tabBarHeight, - ); - - config.setShellExecutionConfig({ - terminalWidth: Math.floor(terminalWidth * SHELL_WIDTH_FRACTION), - terminalHeight: Math.max( - Math.floor(availableTerminalHeight - SHELL_HEIGHT_PADDING), - 1, - ), - pager: settings.merged.tools?.shell?.pager, - showColor: settings.merged.tools?.shell?.showColor, - }); - const isFocused = useFocus(); useBracketedPaste(); @@ -1342,16 +1313,6 @@ export const AppContainer = (props: AppContainerProps) => { const initialPrompt = useMemo(() => config.getQuestion(), [config]); const initialPromptSubmitted = useRef(false); - useEffect(() => { - if (activePtyId) { - ShellExecutionService.resizePty( - activePtyId, - Math.floor(terminalWidth * SHELL_WIDTH_FRACTION), - Math.max(Math.floor(availableTerminalHeight - SHELL_HEIGHT_PADDING), 1), - ); - } - }, [terminalWidth, availableTerminalHeight, activePtyId]); - useEffect(() => { if ( initialPrompt && @@ -1561,8 +1522,106 @@ export const AppContainer = (props: AppContainerProps) => { needsRestart: ideNeedsRestart, restartReason: ideTrustRestartReason, } = useIdeTrustListener(); + const { + isFeedbackDialogOpen, + openFeedbackDialog, + closeFeedbackDialog, + temporaryCloseFeedbackDialog, + submitFeedback, + } = useFeedbackDialog({ + config, + settings, + streamingState, + history: historyManager.history, + sessionStats, + }); + const dialogsVisible = + showWelcomeBackDialog || + shouldShowIdePrompt || + shouldShowCommandMigrationNudge || + isFolderTrustDialogOpen || + !!shellConfirmationRequest || + !!confirmationRequest || + confirmUpdateExtensionRequests.length > 0 || + !!codingPlanUpdateRequest || + settingInputRequests.length > 0 || + pluginChoiceRequests.length > 0 || + !!loopDetectionConfirmationRequest || + isThemeDialogOpen || + isSettingsDialogOpen || + isMemoryDialogOpen || + isModelDialogOpen || + isTrustDialogOpen || + activeArenaDialog !== null 
|| + isPermissionsDialogOpen || + isAuthDialogOpen || + isAuthenticating || + isEditorDialogOpen || + showIdeRestartPrompt || + isSubagentCreateDialogOpen || + isAgentsManagerDialogOpen || + isMcpDialogOpen || + isHooksDialogOpen || + isApprovalModeDialogOpen || + isResumeDialogOpen || + isDeleteDialogOpen || + isExtensionsManagerDialogOpen || + isRewindSelectorOpen; + dialogsVisibleRef.current = dialogsVisible; + const shouldShowStickyTodos = + stickyTodos !== null && + !dialogsVisible && + !isFeedbackDialogOpen && + streamingState !== StreamingState.WaitingForConfirmation; + const [controlsHeight, setControlsHeight] = useState(0); + + useLayoutEffect(() => { + if (!mainControlsRef.current) { + setControlsHeight(0); + return; + } + + const fullFooterMeasurement = measureElement(mainControlsRef.current); + setControlsHeight(fullFooterMeasurement.height); + }, [ + buffer, + terminalWidth, + terminalHeight, + btwItem, + dialogsVisible, + shouldShowStickyTodos, + stickyTodos, + ]); + + // agentViewState is declared earlier (before handleFinalSubmit) so it + // is available for input routing. Referenced here for layout computation. + const tabBarHeight = agentViewState.agents.size > 0 ? 
1 : 0; + const availableTerminalHeight = Math.max( + 0, + terminalHeight - controlsHeight - staticExtraHeight - 2 - tabBarHeight, + ); + + config.setShellExecutionConfig({ + terminalWidth: Math.floor(terminalWidth * SHELL_WIDTH_FRACTION), + terminalHeight: Math.max( + Math.floor(availableTerminalHeight - SHELL_HEIGHT_PADDING), + 1, + ), + pager: settings.merged.tools?.shell?.pager, + showColor: settings.merged.tools?.shell?.showColor, + }); const isInitialMount = useRef(true); + useEffect(() => { + if (activePtyId) { + ShellExecutionService.resizePty( + activePtyId, + Math.floor(terminalWidth * SHELL_WIDTH_FRACTION), + Math.max(Math.floor(availableTerminalHeight - SHELL_HEIGHT_PADDING), 1), + ); + } + }, [terminalWidth, availableTerminalHeight, activePtyId]); + useEffect(() => { if (ideNeedsRestart) { // IDE trust changed, force a restart. @@ -2132,42 +2191,6 @@ export const AppContainer = (props: AppContainerProps) => { stdout, ]); - const nightly = props.version.includes('nightly'); - - const dialogsVisible = - showWelcomeBackDialog || - shouldShowIdePrompt || - shouldShowCommandMigrationNudge || - isFolderTrustDialogOpen || - !!shellConfirmationRequest || - !!confirmationRequest || - confirmUpdateExtensionRequests.length > 0 || - !!codingPlanUpdateRequest || - settingInputRequests.length > 0 || - pluginChoiceRequests.length > 0 || - !!loopDetectionConfirmationRequest || - isThemeDialogOpen || - isSettingsDialogOpen || - isMemoryDialogOpen || - isModelDialogOpen || - isTrustDialogOpen || - activeArenaDialog !== null || - isPermissionsDialogOpen || - isAuthDialogOpen || - isAuthenticating || - isEditorDialogOpen || - showIdeRestartPrompt || - isSubagentCreateDialogOpen || - isAgentsManagerDialogOpen || - isMcpDialogOpen || - isHooksDialogOpen || - isApprovalModeDialogOpen || - isResumeDialogOpen || - isDeleteDialogOpen || - isExtensionsManagerDialogOpen || - isRewindSelectorOpen; - dialogsVisibleRef.current = dialogsVisible; - // Drain queued messages when idle. 
`queueDrainNonce` re-fires the effect // after each submission settles so multi-step queues drain end-to-end. const queueDrainingRef = useRef(false); @@ -2201,19 +2224,7 @@ export const AppContainer = (props: AppContainerProps) => { queueDrainNonce, ]); - const { - isFeedbackDialogOpen, - openFeedbackDialog, - closeFeedbackDialog, - temporaryCloseFeedbackDialog, - submitFeedback, - } = useFeedbackDialog({ - config, - settings, - streamingState, - history: historyManager.history, - sessionStats, - }); + const nightly = props.version.includes('nightly'); const uiState: UIState = useMemo( () => ({ @@ -2289,6 +2300,7 @@ export const AppContainer = (props: AppContainerProps) => { staticExtraHeight, dialogsVisible, pendingHistoryItems, + stickyTodos, btwItem, setBtwItem, cancelBtw, @@ -2406,6 +2418,7 @@ export const AppContainer = (props: AppContainerProps) => { staticExtraHeight, dialogsVisible, pendingHistoryItems, + stickyTodos, btwItem, setBtwItem, cancelBtw, diff --git a/packages/cli/src/ui/components/StickyTodoList.test.tsx b/packages/cli/src/ui/components/StickyTodoList.test.tsx new file mode 100644 index 000000000..0c2af5454 --- /dev/null +++ b/packages/cli/src/ui/components/StickyTodoList.test.tsx @@ -0,0 +1,57 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { render } from 'ink-testing-library'; +import { describe, expect, it } from 'vitest'; +import { StickyTodoList } from './StickyTodoList.js'; +import type { TodoItem } from './TodoDisplay.js'; + +describe('StickyTodoList', () => { + it('keeps each task number attached to the original task after sorting', () => { + const todos: TodoItem[] = [ + { + id: 'done', + content: 'Summarize results', + status: 'completed', + }, + { + id: 'pending', + content: 'Run cli tests', + status: 'pending', + }, + { + id: 'active', + content: 'Run core tests', + status: 'in_progress', + }, + ]; + + const { lastFrame } = render(); + const output = lastFrame() ?? 
''; + const lines = output + .split('\n') + .map((line) => line.trim()) + .filter(Boolean); + + expect(output).toContain('Current tasks'); + expect(output).toContain('╭'); + expect( + lines.find((line) => line.includes('Run core tests')) ?? '', + ).toContain('3.'); + expect( + lines.find((line) => line.includes('Run cli tests')) ?? '', + ).toContain('2.'); + expect( + lines.find((line) => line.includes('Summarize results')) ?? '', + ).toContain('1.'); + expect(output.indexOf('Run core tests')).toBeLessThan( + output.indexOf('Run cli tests'), + ); + expect(output.indexOf('Run cli tests')).toBeLessThan( + output.indexOf('Summarize results'), + ); + }); +}); diff --git a/packages/cli/src/ui/components/StickyTodoList.tsx b/packages/cli/src/ui/components/StickyTodoList.tsx new file mode 100644 index 000000000..7f68591a8 --- /dev/null +++ b/packages/cli/src/ui/components/StickyTodoList.tsx @@ -0,0 +1,85 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import type React from 'react'; +import { useMemo } from 'react'; +import { Box, Text } from 'ink'; +import { t } from '../../i18n/index.js'; +import { Colors } from '../colors.js'; +import { theme } from '../semantic-colors.js'; +import { getOrderedStickyTodos } from '../utils/todoSnapshot.js'; +import type { TodoItem } from './TodoDisplay.js'; + +interface StickyTodoListProps { + todos: TodoItem[]; + width: number; +} + +const STATUS_ICONS = { + pending: '○', + in_progress: '◐', + completed: '●', +} as const; + +export const StickyTodoList: React.FC = ({ + todos, + width, +}) => { + const orderedTodos = useMemo(() => getOrderedStickyTodos(todos), [todos]); + const todoNumberById = useMemo( + () => + new Map(todos.map((todo, index) => [todo.id, `${index + 1}.`] as const)), + [todos], + ); + + if (todos.length === 0) { + return null; + } + + const numberColumnWidth = String(orderedTodos.length).length + 2; + + return ( + + + {t('Current tasks')} + + {orderedTodos.map((todo, 
index) => { + const todoNumber = todoNumberById.get(todo.id) ?? `${index + 1}.`; + const itemColor = + todo.status === 'in_progress' + ? Colors.AccentGreen + : Colors.Foreground; + + return ( + + + {todoNumber} + + + {STATUS_ICONS[todo.status]} + + + + {todo.content} + + + + ); + })} + + ); +}; diff --git a/packages/cli/src/ui/contexts/UIStateContext.tsx b/packages/cli/src/ui/contexts/UIStateContext.tsx index a51961128..a0ddd9c9d 100644 --- a/packages/cli/src/ui/contexts/UIStateContext.tsx +++ b/packages/cli/src/ui/contexts/UIStateContext.tsx @@ -17,6 +17,7 @@ import type { SettingInputRequest, PluginChoiceRequest, } from '../types.js'; +import type { TodoItem } from '../components/TodoDisplay.js'; import type { QwenAuthState } from '../hooks/useQwenAuth.js'; import type { CommandContext, SlashCommand } from '../commands/types.js'; import type { TextBuffer } from '../components/shared/text-buffer.js'; @@ -110,6 +111,7 @@ export interface UIState { staticExtraHeight: number; dialogsVisible: boolean; pendingHistoryItems: HistoryItemWithoutId[]; + stickyTodos: TodoItem[] | null; btwItem: HistoryItemBtw | null; setBtwItem: (item: HistoryItemBtw | null) => void; cancelBtw: () => void; diff --git a/packages/cli/src/ui/layouts/DefaultAppLayout.test.tsx b/packages/cli/src/ui/layouts/DefaultAppLayout.test.tsx new file mode 100644 index 000000000..29aab2f93 --- /dev/null +++ b/packages/cli/src/ui/layouts/DefaultAppLayout.test.tsx @@ -0,0 +1,183 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, expect, it, vi, type Mock } from 'vitest'; +import { render } from 'ink-testing-library'; +import { Text } from 'ink'; +import { DefaultAppLayout } from './DefaultAppLayout.js'; +import { UIStateContext, type UIState } from '../contexts/UIStateContext.js'; +import { + UIActionsContext, + type UIActions, +} from '../contexts/UIActionsContext.js'; +import { useAgentViewState } from '../contexts/AgentViewContext.js'; 
+import { StreamingState } from '../types.js'; + +vi.mock('../components/MainContent.js', () => ({ + MainContent: () => MainContent, +})); + +vi.mock('../components/DialogManager.js', () => ({ + DialogManager: () => DialogManager, +})); + +vi.mock('../components/Composer.js', () => ({ + Composer: () => Composer, +})); + +vi.mock('../components/ExitWarning.js', () => ({ + ExitWarning: () => ExitWarning, +})); + +vi.mock('../components/messages/BtwMessage.js', () => ({ + BtwMessage: () => BtwMessage, +})); + +vi.mock('../components/StickyTodoList.js', () => ({ + StickyTodoList: () => StickyTodoList, +})); + +vi.mock('../components/agent-view/AgentTabBar.js', () => ({ + AgentTabBar: () => AgentTabBar, +})); + +vi.mock('../components/agent-view/AgentChatView.js', () => ({ + AgentChatView: () => AgentChatView, +})); + +vi.mock('../components/agent-view/AgentComposer.js', () => ({ + AgentComposer: () => AgentComposer, +})); + +vi.mock('../hooks/useTerminalSize.js', () => ({ + useTerminalSize: () => ({ columns: 80 }), +})); + +vi.mock('../contexts/AgentViewContext.js', () => ({ + useAgentViewState: vi.fn(), +})); + +const mockedUseAgentViewState = useAgentViewState as Mock; + +const mockUIActions = { + refreshStatic: vi.fn(), +} as unknown as UIActions; + +const baseUIState: Partial = { + dialogsVisible: false, + isFeedbackDialogOpen: false, + mainControlsRef: { current: null }, + mainAreaWidth: 80, + terminalWidth: 80, + streamingState: StreamingState.Idle, + historyManager: { + addItem: vi.fn(), + history: [], + updateItem: vi.fn(), + clearItems: vi.fn(), + loadHistory: vi.fn(), + truncateToItem: vi.fn(), + }, + stickyTodos: [ + { + id: 'todo-1', + content: 'Pinned task', + status: 'pending', + }, + ], + btwItem: null, +}; + +const renderLayout = (uiState: Partial) => + render( + + + + + , + ); + +describe('DefaultAppLayout', () => { + it('renders sticky todo list before the composer in the main view', () => { + mockedUseAgentViewState.mockReturnValue({ + activeView: 
'main', + agents: new Map(), + }); + + const { lastFrame } = renderLayout(baseUIState); + const output = lastFrame() ?? ''; + + expect(output).toContain('StickyTodoList'); + expect(output.indexOf('StickyTodoList')).toBeGreaterThan( + output.indexOf('MainContent'), + ); + expect(output.indexOf('StickyTodoList')).toBeLessThan( + output.indexOf('Composer'), + ); + }); + + it('does not render sticky todo list when dialogs are visible', () => { + mockedUseAgentViewState.mockReturnValue({ + activeView: 'main', + agents: new Map(), + }); + + const { lastFrame } = renderLayout({ + ...baseUIState, + dialogsVisible: true, + }); + + const output = lastFrame() ?? ''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('DialogManager'); + }); + + it('does not render sticky todo list while waiting for confirmation', () => { + mockedUseAgentViewState.mockReturnValue({ + activeView: 'main', + agents: new Map(), + }); + + const { lastFrame } = renderLayout({ + ...baseUIState, + streamingState: StreamingState.WaitingForConfirmation, + }); + + const output = lastFrame() ?? ''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('Composer'); + }); + + it('does not render sticky todo list when feedback dialog is open', () => { + mockedUseAgentViewState.mockReturnValue({ + activeView: 'main', + agents: new Map(), + }); + + const { lastFrame } = renderLayout({ + ...baseUIState, + isFeedbackDialogOpen: true, + }); + + const output = lastFrame() ?? ''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('Composer'); + }); + + it('does not render sticky todo list in an agent tab view', () => { + mockedUseAgentViewState.mockReturnValue({ + activeView: 'agent-1', + agents: new Map([['agent-1', {}]]), + }); + + const { lastFrame } = renderLayout(baseUIState); + + const output = lastFrame() ?? 
''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('AgentChatView'); + expect(output).toContain('AgentComposer'); + }); +}); diff --git a/packages/cli/src/ui/layouts/DefaultAppLayout.tsx b/packages/cli/src/ui/layouts/DefaultAppLayout.tsx index 88efdbefd..f66e5a713 100644 --- a/packages/cli/src/ui/layouts/DefaultAppLayout.tsx +++ b/packages/cli/src/ui/layouts/DefaultAppLayout.tsx @@ -11,6 +11,7 @@ import { MainContent } from '../components/MainContent.js'; import { DialogManager } from '../components/DialogManager.js'; import { Composer } from '../components/Composer.js'; import { ExitWarning } from '../components/ExitWarning.js'; +import { StickyTodoList } from '../components/StickyTodoList.js'; import { BtwMessage } from '../components/messages/BtwMessage.js'; import { AgentTabBar } from '../components/agent-view/AgentTabBar.js'; import { AgentChatView } from '../components/agent-view/AgentChatView.js'; @@ -19,6 +20,7 @@ import { useUIState } from '../contexts/UIStateContext.js'; import { useUIActions } from '../contexts/UIActionsContext.js'; import { useAgentViewState } from '../contexts/AgentViewContext.js'; import { useTerminalSize } from '../hooks/useTerminalSize.js'; +import { StreamingState } from '../types.js'; export const DefaultAppLayout: React.FC = () => { const uiState = useUIState(); @@ -27,6 +29,12 @@ export const DefaultAppLayout: React.FC = () => { const { columns: terminalWidth } = useTerminalSize(); const hasAgents = agents.size > 0; const isAgentTab = activeView !== 'main' && agents.has(activeView); + const stickyTodoWidth = Math.min(uiState.mainAreaWidth, 64); + const shouldShowStickyTodos = + uiState.stickyTodos !== null && + !uiState.dialogsVisible && + !uiState.isFeedbackDialogOpen && + uiState.streamingState !== StreamingState.WaitingForConfirmation; // Clear terminal on view switch so previous view's output // is removed. 
refreshStatic clears the terminal and bumps the @@ -69,6 +77,12 @@ export const DefaultAppLayout: React.FC = () => { ) : ( <> + {shouldShowStickyTodos && ( + + )} {uiState.btwItem && ( ({ + Notifications: () => Notifications, +})); + +vi.mock('../components/MainContent.js', () => ({ + MainContent: () => MainContent, +})); + +vi.mock('../components/DialogManager.js', () => ({ + DialogManager: () => DialogManager, +})); + +vi.mock('../components/Composer.js', () => ({ + Composer: () => Composer, +})); + +vi.mock('../components/Footer.js', () => ({ + Footer: () => Footer, +})); + +vi.mock('../components/ExitWarning.js', () => ({ + ExitWarning: () => ExitWarning, +})); + +vi.mock('../components/messages/BtwMessage.js', () => ({ + BtwMessage: () => BtwMessage, +})); + +vi.mock('../components/StickyTodoList.js', () => ({ + StickyTodoList: () => StickyTodoList, +})); + +const baseUIState: Partial = { + dialogsVisible: false, + isFeedbackDialogOpen: false, + mainAreaWidth: 80, + terminalWidth: 80, + streamingState: StreamingState.Idle, + historyManager: { + addItem: vi.fn(), + history: [], + updateItem: vi.fn(), + clearItems: vi.fn(), + loadHistory: vi.fn(), + truncateToItem: vi.fn(), + }, + stickyTodos: [ + { + id: 'todo-1', + content: 'Pinned task', + status: 'pending', + }, + ], + btwItem: null, +}; + +const renderLayout = (uiState: Partial) => + render( + + + , + ); + +describe('ScreenReaderAppLayout', () => { + it('renders sticky todo list before the composer', () => { + const { lastFrame } = renderLayout(baseUIState); + const output = lastFrame() ?? 
''; + + expect(output).toContain('StickyTodoList'); + expect(output.indexOf('StickyTodoList')).toBeGreaterThan( + output.indexOf('MainContent'), + ); + expect(output.indexOf('StickyTodoList')).toBeLessThan( + output.indexOf('Composer'), + ); + }); + + it('does not render sticky todo list when dialogs are visible', () => { + const { lastFrame } = renderLayout({ + ...baseUIState, + dialogsVisible: true, + }); + + const output = lastFrame() ?? ''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('DialogManager'); + }); + + it('does not render sticky todo list while waiting for confirmation', () => { + const { lastFrame } = renderLayout({ + ...baseUIState, + streamingState: StreamingState.WaitingForConfirmation, + }); + + const output = lastFrame() ?? ''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('Composer'); + }); + + it('does not render sticky todo list when feedback dialog is open', () => { + const { lastFrame } = renderLayout({ + ...baseUIState, + isFeedbackDialogOpen: true, + }); + + const output = lastFrame() ?? 
''; + expect(output).not.toContain('StickyTodoList'); + expect(output).toContain('Composer'); + }); +}); diff --git a/packages/cli/src/ui/layouts/ScreenReaderAppLayout.tsx b/packages/cli/src/ui/layouts/ScreenReaderAppLayout.tsx index e601db4b9..b79eda8ec 100644 --- a/packages/cli/src/ui/layouts/ScreenReaderAppLayout.tsx +++ b/packages/cli/src/ui/layouts/ScreenReaderAppLayout.tsx @@ -12,11 +12,19 @@ import { DialogManager } from '../components/DialogManager.js'; import { Composer } from '../components/Composer.js'; import { Footer } from '../components/Footer.js'; import { ExitWarning } from '../components/ExitWarning.js'; +import { StickyTodoList } from '../components/StickyTodoList.js'; import { BtwMessage } from '../components/messages/BtwMessage.js'; import { useUIState } from '../contexts/UIStateContext.js'; +import { StreamingState } from '../types.js'; export const ScreenReaderAppLayout: React.FC = () => { const uiState = useUIState(); + const stickyTodoWidth = Math.min(uiState.mainAreaWidth, 64); + const shouldShowStickyTodos = + uiState.stickyTodos !== null && + !uiState.dialogsVisible && + !uiState.isFeedbackDialogOpen && + uiState.streamingState !== StreamingState.WaitingForConfirmation; return ( @@ -35,6 +43,12 @@ export const ScreenReaderAppLayout: React.FC = () => { ) : ( <> + {shouldShowStickyTodos && ( + + )} {uiState.btwItem && ( , + withId?: number, +): HistoryItem | HistoryItemWithoutId { + const item = { + type: 'tool_group' as const, + tools: [ + { + callId: 'todo-custom', + name: 'TodoWrite', + description: 'Update todos', + resultDisplay: { + type: 'todo_list' as const, + todos, + }, + status: ToolCallStatus.Success, + confirmationDetails: undefined, + }, + ], + }; + + if (withId !== undefined) { + return { ...item, id: withId }; + } + + return item; +} + +function makeEmptyTodoToolGroup( + withId?: number, +): HistoryItem | HistoryItemWithoutId { + const item = { + type: 'tool_group' as const, + tools: [ + { + callId: 'todo-clear', + name: 
'TodoWrite', + description: 'Clear todos', + resultDisplay: { + type: 'todo_list' as const, + todos: [], + }, + status: ToolCallStatus.Success, + confirmationDetails: undefined, + }, + ], + }; + + if (withId !== undefined) { + return { ...item, id: withId }; + } + + return item; +} + +describe('getStickyTodos', () => { + it('returns the latest todo snapshot from history', () => { + const history = [ + makeTodoToolGroup('first task', 1), + makeTodoToolGroup('latest history task', 2), + ] as HistoryItem[]; + + expect(getStickyTodos(history, [])).toEqual([ + { + id: 'todo-latest history task', + content: 'latest history task', + status: 'pending', + }, + ]); + }); + + it('prefers pending todo snapshots over history', () => { + const history = [makeTodoToolGroup('history task', 1)] as HistoryItem[]; + const pendingHistoryItems = [ + makeTodoToolGroup('pending task'), + ] as HistoryItemWithoutId[]; + + expect(getStickyTodos(history, pendingHistoryItems)).toEqual([ + { + id: 'todo-pending task', + content: 'pending task', + status: 'pending', + }, + ]); + }); + + it('returns null when the latest todo snapshot clears the list', () => { + const history = [makeTodoToolGroup('history task', 1)] as HistoryItem[]; + const pendingHistoryItems = [ + makeEmptyTodoToolGroup(), + ] as HistoryItemWithoutId[]; + + expect(getStickyTodos(history, pendingHistoryItems)).toBeNull(); + }); + + it('returns null when the latest history todo snapshot is fully completed', () => { + const history = [ + makeCustomTodoToolGroup( + [ + { + id: 'todo-1', + content: 'Run tests', + status: 'completed', + }, + { + id: 'todo-2', + content: 'Summarize results', + status: 'completed', + }, + ], + 1, + ), + ] as HistoryItem[]; + + expect(getStickyTodos(history, [])).toBeNull(); + }); + + it('keeps showing a fully completed pending snapshot until the turn finishes', () => { + const history = [ + makeTodoToolGroup('older history task', 1), + ] as HistoryItem[]; + const pendingHistoryItems = [ + 
makeCustomTodoToolGroup([ + { + id: 'todo-1', + content: 'Run tests', + status: 'completed', + }, + { + id: 'todo-2', + content: 'Summarize results', + status: 'completed', + }, + ]), + ] as HistoryItemWithoutId[]; + + expect(getStickyTodos(history, pendingHistoryItems)).toEqual([ + { + id: 'todo-1', + content: 'Run tests', + status: 'completed', + }, + { + id: 'todo-2', + content: 'Summarize results', + status: 'completed', + }, + ]); + }); +}); diff --git a/packages/cli/src/ui/utils/todoSnapshot.ts b/packages/cli/src/ui/utils/todoSnapshot.ts new file mode 100644 index 000000000..3f297e30d --- /dev/null +++ b/packages/cli/src/ui/utils/todoSnapshot.ts @@ -0,0 +1,109 @@ +/** + * @license + * Copyright 2025 Qwen Code + * SPDX-License-Identifier: Apache-2.0 + */ + +import type { TodoItem } from '../components/TodoDisplay.js'; +import type { + HistoryItem, + HistoryItemWithoutId, + IndividualToolCallDisplay, +} from '../types.js'; + +type HistoryLikeItem = HistoryItem | HistoryItemWithoutId; +type SnapshotSearchResult = TodoItem[] | null | undefined; +const STICKY_TODO_STATUS_PRIORITY: Record = { + in_progress: 0, + pending: 1, + completed: 2, +}; + +function extractTodosFromResultDisplay( + resultDisplay: unknown, +): TodoItem[] | null { + if (!resultDisplay) { + return null; + } + + if (typeof resultDisplay === 'object') { + const candidate = resultDisplay as Record; + if ( + candidate['type'] === 'todo_list' && + Array.isArray(candidate['todos']) + ) { + return candidate['todos'] as TodoItem[]; + } + } + + if (typeof resultDisplay === 'string') { + try { + const parsed = JSON.parse(resultDisplay) as Record; + if (parsed['type'] === 'todo_list' && Array.isArray(parsed['todos'])) { + return parsed['todos'] as TodoItem[]; + } + } catch { + return null; + } + } + + return null; +} + +function findLatestTodoSnapshot( + items: readonly HistoryLikeItem[], +): SnapshotSearchResult { + for (let itemIndex = items.length - 1; itemIndex >= 0; itemIndex -= 1) { + const item = 
items[itemIndex]; + if (item.type !== 'tool_group') { + continue; + } + + for ( + let toolIndex = item.tools.length - 1; + toolIndex >= 0; + toolIndex -= 1 + ) { + const tool = item.tools[toolIndex] as IndividualToolCallDisplay; + const todos = extractTodosFromResultDisplay(tool.resultDisplay); + if (todos) { + return todos.length > 0 ? todos : null; + } + } + } + + return undefined; +} + +function areAllTodosCompleted(todos: readonly TodoItem[]): boolean { + return todos.length > 0 && todos.every((todo) => todo.status === 'completed'); +} + +export function getStickyTodos( + history: readonly HistoryItem[], + pendingHistoryItems: readonly HistoryItemWithoutId[], +): TodoItem[] | null { + const pendingSnapshot = findLatestTodoSnapshot(pendingHistoryItems); + if (pendingSnapshot !== undefined) { + return pendingSnapshot; + } + + const historySnapshot = findLatestTodoSnapshot(history); + if (historySnapshot && areAllTodosCompleted(historySnapshot)) { + return null; + } + + return historySnapshot ?? null; +} + +export function getOrderedStickyTodos(todos: readonly TodoItem[]): TodoItem[] { + return todos + .map((todo, index) => ({ todo, index })) + .sort( + (left, right) => + STICKY_TODO_STATUS_PRIORITY[left.todo.status] - + STICKY_TODO_STATUS_PRIORITY[right.todo.status] || + left.index - right.index, + ) + .map(({ todo }) => todo); +} From 569cfe10fabd02f881a6b1f1211f14094e297270 Mon Sep 17 00:00:00 2001 From: Shaojin Wen Date: Sun, 26 Apr 2026 12:55:39 +0800 Subject: [PATCH 13/36] fix(telemetry): use safeJsonStringify in FileExporter to avoid circular reference crash (#3630) When --telemetry-outfile is configured, FileSpanExporter.serialize called JSON.stringify directly on OTel ReadableSpan instances. The spans hold a back-reference to BatchSpanProcessor (._shutdownOnce -> BindOnceFuture._that -> BatchSpanProcessor), which forms a cycle and triggers "TypeError: Converting circular structure to JSON" on every export. 
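The circular-structure crash this patch fixes can be illustrated with a minimal cycle-tolerant stringify. This is a sketch in the spirit of `safeJsonStringify`, not the utility's actual implementation; `safeStringify` is an illustrative name:

```typescript
// Minimal sketch of a cycle-tolerant JSON.stringify replacement.
// A WeakSet tracks objects already visited; revisiting one means a cycle,
// which is replaced with the string "[Circular]" instead of throwing.
function safeStringify(data: unknown, indent = 2): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(
    data,
    (_key, value) => {
      if (typeof value === 'object' && value !== null) {
        if (seen.has(value)) {
          return '[Circular]';
        }
        seen.add(value);
      }
      return value;
    },
    indent,
  );
}

// Reproduce the BatchSpanProcessor-shaped cycle from the regression test:
// proc._shutdownOnce -> future, future._that -> proc.
const proc: Record<string, unknown> = { kind: 'BatchSpanProcessor' };
const future: Record<string, unknown> = { kind: 'BindOnceFuture' };
proc['_shutdownOnce'] = future;
future['_that'] = proc;

const out = safeStringify({ name: 'span-1', _spanProcessor: proc });
```

Note the WeakSet is never cleared, so repeated (non-circular) references to the same object would also be marked `"[Circular]"`; a production utility would track only the current ancestor path.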
Combined with DiagConsoleLogger, the error was repeatedly printed to stderr and polluted the Ink TUI. Switch FileExporter.serialize to the existing safeJsonStringify utility, matching the upstream gemini-cli fix so future merges stay clean. Add a focused regression test that mimics the BatchSpanProcessor cycle shape; broader cycle behavior is already covered by safeJsonStringify.test.ts. Co-authored-by: wenshao --- .../core/src/telemetry/file-exporters.test.ts | 50 +++++++++++++++++++ packages/core/src/telemetry/file-exporters.ts | 3 +- 2 files changed, 52 insertions(+), 1 deletion(-) create mode 100644 packages/core/src/telemetry/file-exporters.test.ts diff --git a/packages/core/src/telemetry/file-exporters.test.ts b/packages/core/src/telemetry/file-exporters.test.ts new file mode 100644 index 000000000..40a871de1 --- /dev/null +++ b/packages/core/src/telemetry/file-exporters.test.ts @@ -0,0 +1,50 @@ +/** + * @license + * Copyright 2026 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +import * as fs from 'node:fs'; +import * as os from 'node:os'; +import * as path from 'node:path'; +import { afterEach, beforeEach, describe, expect, it } from 'vitest'; +import { FileSpanExporter } from './file-exporters.js'; + +type SerializeAccess = { serialize: (data: unknown) => string }; + +describe('FileExporter.serialize', () => { + let tmpDir: string; + let exporter: FileSpanExporter; + let serialize: (data: unknown) => string; + + beforeEach(() => { + tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'file-exporters-test-')); + exporter = new FileSpanExporter(path.join(tmpDir, 'out.jsonl')); + serialize = (exporter as unknown as SerializeAccess).serialize.bind( + exporter, + ); + }); + + afterEach(async () => { + await exporter.shutdown(); + fs.rmSync(tmpDir, { recursive: true, force: true }); + }); + + // Regression for upstream PR #4689: a raw JSON.stringify on a ReadableSpan + // crashed because BatchSpanProcessor._shutdownOnce -> BindOnceFuture._that + // forms a 
cycle. The exporter must delegate to safeJsonStringify so cycles + become "[Circular]" instead of throwing. + it('does not throw on BatchSpanProcessor-shaped cycle', () => { + const proc: Record<string, unknown> = { kind: 'BatchSpanProcessor' }; + const future: Record<string, unknown> = { kind: 'BindOnceFuture' }; + proc['_shutdownOnce'] = future; + future['_that'] = proc; + const span = { name: 'span-1', _spanProcessor: proc }; + + expect(() => serialize(span)).not.toThrow(); + const out = serialize(span); + expect(out).toContain('"name": "span-1"'); + expect(out).toContain('"[Circular]"'); + expect(out.endsWith('\n')).toBe(true); + }); +}); diff --git a/packages/core/src/telemetry/file-exporters.ts b/packages/core/src/telemetry/file-exporters.ts index 55fe64103..def6e91f4 100644 --- a/packages/core/src/telemetry/file-exporters.ts +++ b/packages/core/src/telemetry/file-exporters.ts @@ -17,6 +17,7 @@ import type { PushMetricExporter, } from '@opentelemetry/sdk-metrics'; import { AggregationTemporality } from '@opentelemetry/sdk-metrics'; +import { safeJsonStringify } from '../utils/safeJsonStringify.js'; class FileExporter { protected writeStream: fs.WriteStream; @@ -26,7 +27,7 @@ } protected serialize(data: unknown): string { - return JSON.stringify(data, null, 2) + '\n'; + return safeJsonStringify(data, 2) + '\n'; } shutdown(): Promise<void> { From 29887ddfefe52a4da8e306dc9cc145a758b78b57 Mon Sep 17 00:00:00 2001 From: Shaojin Wen Date: Sun, 26 Apr 2026 13:17:34 +0800 Subject: [PATCH 14/36] fix(core): match DeepSeek provider by model name for sglang/vllm (#3613) (#3620) Some OpenAI-compatible servers (notably sglang's deepseek-v4 jinja template) crash on the array form of message content even when it carries a single text block, with `TypeError: sequence item 0: expected str instance, list found` at `encoding_dsv4.py:336`.
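The crash above stems from templates that only accept plain-string message content. A minimal sketch of the content-flattening idea the fix relies on (the `TextBlock` shape and `flattenContent` helper here are hypothetical, for illustration; the real code lives in the provider's buildRequest):

```typescript
// Hypothetical block shape, for illustration only: array-form text content
// is joined into the single string such jinja templates expect.
type TextBlock = { type: 'text'; text: string };

function flattenContent(content: string | TextBlock[]): string {
  if (typeof content === 'string') {
    return content; // already in the string form every template accepts
  }
  // Join each block's text; a single-block array collapses to its own text.
  return content.map((block) => block.text).join('\n');
}
```

Even a single-element array such as `[{ type: 'text', text: 'hello' }]` becomes the bare string `'hello'`, which sidesteps the `sequence item 0` TypeError.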
The DeepSeekOpenAICompatibleProvider already flattens content arrays into joined strings in buildRequest, but isDeepSeekProvider only matched on the official api.deepseek.com baseUrl. DeepSeek models served behind sglang / vllm / ollama / etc. bypass the workaround and hit the bug. Extend the matcher to also detect by model name (case-insensitive substring 'deepseek'), so any OpenAI-compatible endpoint serving a DeepSeek model picks up the same content-format flattening. Fixes #3613 Co-authored-by: wenshao --- .../provider/deepseek.test.ts | 27 ++++++++++++++++++- .../provider/deepseek.ts | 11 +++++++- 2 files changed, 36 insertions(+), 2 deletions(-) diff --git a/packages/core/src/core/openaiContentGenerator/provider/deepseek.test.ts b/packages/core/src/core/openaiContentGenerator/provider/deepseek.test.ts index f4ced4c45..6570e4aec 100644 --- a/packages/core/src/core/openaiContentGenerator/provider/deepseek.test.ts +++ b/packages/core/src/core/openaiContentGenerator/provider/deepseek.test.ts @@ -49,16 +49,41 @@ describe('DeepSeekOpenAICompatibleProvider', () => { expect(result).toBe(true); }); - it('returns false for non deepseek baseUrl', () => { + it('returns false when neither baseUrl nor model match deepseek', () => { const config = { ...mockContentGeneratorConfig, baseUrl: 'https://api.example.com/v1', + model: 'gpt-4o', } as ContentGeneratorConfig; const result = DeepSeekOpenAICompatibleProvider.isDeepSeekProvider(config); expect(result).toBe(false); }); + + it('returns true for deepseek model on a non-deepseek baseUrl (e.g. 
sglang) — issue #3613', () => { + const config = { + ...mockContentGeneratorConfig, + baseUrl: 'https://my-sglang.example.com:8000/v1', + model: 'deepseek-v4-pro', + } as ContentGeneratorConfig; + + const result = + DeepSeekOpenAICompatibleProvider.isDeepSeekProvider(config); + expect(result).toBe(true); + }); + + it('matches model name case-insensitively', () => { + const config = { + ...mockContentGeneratorConfig, + baseUrl: 'https://my-vllm.example.com/v1', + model: 'DeepSeek-R1', + } as ContentGeneratorConfig; + + const result = + DeepSeekOpenAICompatibleProvider.isDeepSeekProvider(config); + expect(result).toBe(true); + }); }); describe('buildRequest', () => { diff --git a/packages/core/src/core/openaiContentGenerator/provider/deepseek.ts b/packages/core/src/core/openaiContentGenerator/provider/deepseek.ts index e34dc724d..5c8fc1212 100644 --- a/packages/core/src/core/openaiContentGenerator/provider/deepseek.ts +++ b/packages/core/src/core/openaiContentGenerator/provider/deepseek.ts @@ -22,8 +22,17 @@ export class DeepSeekOpenAICompatibleProvider extends DefaultOpenAICompatiblePro contentGeneratorConfig: ContentGeneratorConfig, ): boolean { const baseUrl = contentGeneratorConfig.baseUrl ?? ''; + if (baseUrl.toLowerCase().includes('api.deepseek.com')) { + return true; + } - return baseUrl.toLowerCase().includes('api.deepseek.com'); + // DeepSeek models served behind any OpenAI-compatible endpoint (sglang, + // vllm, ollama, etc.) share the same content-format constraint that the + // official api.deepseek.com endpoint has. Detect them by model name so + // the buildRequest flattening below kicks in. + // See https://github.com/QwenLM/qwen-code/issues/3613 + const model = contentGeneratorConfig.model ?? 
''; + return model.toLowerCase().includes('deepseek'); } /** From a6b0b7e579049364ec79eeadcb5bd8db217ed8de Mon Sep 17 00:00:00 2001 From: tanzhenxin Date: Sun, 26 Apr 2026 14:29:18 +0800 Subject: [PATCH 15/36] Revert "fix(cli): respect OPENAI_MODEL precedence in CLI model resolution (#3567)" (#3633) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This reverts commit 007a109db8efdd120c923214a0ad9e973a1ee6a3. The change made `OPENAI_MODEL` outrank `settings.model.name` when looking up the active entry in `settings.modelProviders`. Combined with the core resolver's `modelProvider > cli > env > settings` priority, this caused a regression: a `/model` selection (which writes `settings.model.name`) was silently overridden whenever `OPENAI_MODEL` was set in the user's shell, with no warning surfaced. Restoring the previous behavior — looking up the provider entry by `argv.model || settings.model?.name` — preserves the implicit contract that an explicit `modelProviders` config takes precedence over stale shell defaults. Users without a `modelProviders` config are unaffected: env vars still drive model selection through the core resolver. See discussion on #3567. 
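The restored lookup order can be sketched as follows (the `ProviderEntry` interface and `findProviderEntry` helper are hypothetical, for illustration; the real logic sits inside `resolveCliGenerationConfig`):

```typescript
// Hypothetical shapes, for illustration: after the revert, only an explicit
// CLI --model or the persisted settings.model.name selects a provider entry;
// shell env vars such as OPENAI_MODEL no longer participate in this lookup.
interface ProviderEntry {
  id: string;
}

function findProviderEntry(
  argvModel: string | undefined,
  settingsModelName: string | undefined,
  providers: ProviderEntry[],
): ProviderEntry | undefined {
  // CLI flag wins; otherwise fall back to the /model selection in settings.
  const requestedModel = argvModel || settingsModelName;
  if (!requestedModel) {
    return undefined;
  }
  return providers.find((p) => p.id === requestedModel);
}
```

With this ordering, a stale `OPENAI_MODEL` in the shell can no longer silently override a `/model` selection written to settings.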
--- .../cli/src/utils/modelConfigUtils.test.ts | 247 ------------------ packages/cli/src/utils/modelConfigUtils.ts | 9 +- 2 files changed, 2 insertions(+), 254 deletions(-) diff --git a/packages/cli/src/utils/modelConfigUtils.test.ts b/packages/cli/src/utils/modelConfigUtils.test.ts index abd556308..97cea9974 100644 --- a/packages/cli/src/utils/modelConfigUtils.test.ts +++ b/packages/cli/src/utils/modelConfigUtils.test.ts @@ -442,187 +442,6 @@ describe('modelConfigUtils', () => { ); }); - it('should find modelProvider from OPENAI_MODEL when argv.model is not provided', () => { - const argv = {}; - const modelProvider: ProviderModelConfig = { - id: 'env-openai-model', - name: 'Env OpenAI Model', - generationConfig: { - samplingParams: { temperature: 0.6 }, - }, - }; - const settings = makeMockSettings({ - model: { name: 'settings-model' }, - modelProviders: { - [AuthType.USE_OPENAI]: [ - { id: 'settings-model', name: 'Settings Model' }, - modelProvider, - ], - }, - }); - const selectedAuthType = AuthType.USE_OPENAI; - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { - model: 'env-openai-model', - apiKey: '', - baseUrl: '', - }, - sources: {}, - warnings: [], - }); - - resolveCliGenerationConfig({ - argv, - settings, - selectedAuthType, - env: { OPENAI_MODEL: 'env-openai-model' }, - }); - - expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( - expect.objectContaining({ - modelProvider, - }), - ); - }); - - it('should find modelProvider from QWEN_MODEL when OPENAI_MODEL is not provided', () => { - const argv = {}; - const modelProvider: ProviderModelConfig = { - id: 'qwen-env-model', - name: 'Qwen Env Model', - generationConfig: { - samplingParams: { temperature: 0.7 }, - }, - }; - const settings = makeMockSettings({ - model: { name: 'settings-model' }, - modelProviders: { - [AuthType.USE_OPENAI]: [ - { id: 'settings-model', name: 'Settings Model' }, - modelProvider, - ], - }, - }); - const selectedAuthType = AuthType.USE_OPENAI; - - 
vi.mocked(resolveModelConfig).mockReturnValue({ - config: { - model: 'qwen-env-model', - apiKey: '', - baseUrl: '', - }, - sources: {}, - warnings: [], - }); - - resolveCliGenerationConfig({ - argv, - settings, - selectedAuthType, - env: { QWEN_MODEL: 'qwen-env-model' }, - }); - - expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( - expect.objectContaining({ - modelProvider, - }), - ); - }); - - it('should prefer OPENAI_MODEL over QWEN_MODEL and settings.model.name for USE_OPENAI provider lookup', () => { - const argv = {}; - const openAIProvider: ProviderModelConfig = { - id: 'openai-env-model', - name: 'OpenAI Env Model', - }; - const qwenProvider: ProviderModelConfig = { - id: 'qwen-env-model', - name: 'Qwen Env Model', - }; - const settings = makeMockSettings({ - model: { name: 'settings-model' }, - modelProviders: { - [AuthType.USE_OPENAI]: [ - { id: 'settings-model', name: 'Settings Model' }, - qwenProvider, - openAIProvider, - ], - }, - }); - const selectedAuthType = AuthType.USE_OPENAI; - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { - model: 'openai-env-model', - apiKey: '', - baseUrl: '', - }, - sources: {}, - warnings: [], - }); - - resolveCliGenerationConfig({ - argv, - settings, - selectedAuthType, - env: { - OPENAI_MODEL: 'openai-env-model', - QWEN_MODEL: 'qwen-env-model', - }, - }); - - expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( - expect.objectContaining({ - modelProvider: openAIProvider, - }), - ); - }); - - it('should ignore OPENAI_MODEL for non-USE_OPENAI provider lookup', () => { - const argv = {}; - const settingsModelProvider: ProviderModelConfig = { - id: 'settings-model', - name: 'Settings Model', - }; - const unrelatedOpenAIProvider: ProviderModelConfig = { - id: 'openai-env-model', - name: 'OpenAI Env Model', - }; - const settings = makeMockSettings({ - model: { name: 'settings-model' }, - modelProviders: { - [AuthType.USE_ANTHROPIC]: [ - settingsModelProvider, - unrelatedOpenAIProvider, - ], - }, - 
}); - const selectedAuthType = AuthType.USE_ANTHROPIC; - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { - model: 'settings-model', - apiKey: '', - baseUrl: '', - }, - sources: {}, - warnings: [], - }); - - resolveCliGenerationConfig({ - argv, - settings, - selectedAuthType, - env: { OPENAI_MODEL: 'openai-env-model' }, - }); - - expect(vi.mocked(resolveModelConfig)).toHaveBeenCalledWith( - expect.objectContaining({ - modelProvider: settingsModelProvider, - }), - ); - }); it('should not find modelProvider when authType is undefined', () => { const argv = { model: 'test-model' }; const settings = makeMockSettings({ @@ -904,71 +723,5 @@ describe('modelConfigUtils', () => { }), ); }); - - it('should respect precedence: argv.model > OPENAI_MODEL > QWEN_MODEL > settings.model.name', () => { - const mockSettings = makeMockSettings({ - model: { name: 'settings-model' }, - modelProviders: { - [AuthType.USE_OPENAI]: [ - { id: 'settings-model' } as ProviderModelConfig, - { id: 'openai-env-model' } as ProviderModelConfig, - { id: 'qwen-env-model' } as ProviderModelConfig, - { id: 'cli-model' } as ProviderModelConfig, - ], - }, - }); - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { model: 'cli-model', apiKey: '', baseUrl: '' }, - sources: {}, - warnings: [], - }); - const result1 = resolveCliGenerationConfig({ - argv: { model: 'cli-model' }, - settings: mockSettings, - selectedAuthType: AuthType.USE_OPENAI, - env: { OPENAI_MODEL: 'openai-env-model' }, - }); - expect(result1.model).toBe('cli-model'); - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { model: 'openai-env-model', apiKey: '', baseUrl: '' }, - sources: {}, - warnings: [], - }); - const result2 = resolveCliGenerationConfig({ - argv: {}, - settings: mockSettings, - selectedAuthType: AuthType.USE_OPENAI, - env: { OPENAI_MODEL: 'openai-env-model', QWEN_MODEL: 'qwen-env-model' }, - }); - expect(result2.model).toBe('openai-env-model'); - - 
vi.mocked(resolveModelConfig).mockReturnValue({ - config: { model: 'openai-env-model', apiKey: '', baseUrl: '' }, - sources: {}, - warnings: [], - }); - const result3 = resolveCliGenerationConfig({ - argv: {}, - settings: mockSettings, - selectedAuthType: AuthType.USE_OPENAI, - env: { OPENAI_MODEL: 'openai-env-model' }, - }); - expect(result3.model).toBe('openai-env-model'); - - vi.mocked(resolveModelConfig).mockReturnValue({ - config: { model: 'settings-model', apiKey: '', baseUrl: '' }, - sources: {}, - warnings: [], - }); - const result4 = resolveCliGenerationConfig({ - argv: {}, - settings: mockSettings, - selectedAuthType: AuthType.USE_OPENAI, - env: {}, - }); - expect(result4.model).toBe('settings-model'); - }); }); }); diff --git a/packages/cli/src/utils/modelConfigUtils.ts b/packages/cli/src/utils/modelConfigUtils.ts index 83dbae1cc..aa5ac5e82 100644 --- a/packages/cli/src/utils/modelConfigUtils.ts +++ b/packages/cli/src/utils/modelConfigUtils.ts @@ -100,13 +100,8 @@ export function resolveCliGenerationConfig( if (authType && settings.modelProviders) { const providers = settings.modelProviders[authType]; if (providers && Array.isArray(providers)) { - const requestedModel = - authType === AuthType.USE_OPENAI - ? 
argv.model || - env['OPENAI_MODEL'] || - env['QWEN_MODEL'] || - settings.model?.name - : argv.model || settings.model?.name; + // Try to find by requested model (from CLI or settings) + const requestedModel = argv.model || settings.model?.name; if (requestedModel) { modelProvider = providers.find((p) => p.id === requestedModel) as | ProviderModelConfig From b5c7abd28ea3c534e67fadc83145691161ad69f0 Mon Sep 17 00:00:00 2001 From: Jordi Mas Date: Sun, 26 Apr 2026 16:26:53 +0200 Subject: [PATCH 16/36] feat: Adds Catalan language support (#3643) * Initial version * Some fixes * Fix sentences * More fixes * Fix * Latest fixes --- package-lock.json | 1 + packages/cli/src/i18n/languages.ts | 7 + packages/cli/src/i18n/locales/ca.js | 2143 +++++++++++++++++ .../schemas/settings.schema.json | 5 +- 4 files changed, 2154 insertions(+), 2 deletions(-) create mode 100644 packages/cli/src/i18n/locales/ca.js diff --git a/package-lock.json b/package-lock.json index ecb84a6b5..1c6f4b57f 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12842,6 +12842,7 @@ "os": [ "darwin" ], + "peer": true, "engines": { "node": "^8.16.0 || ^10.6.0 || >=11.0.0" } diff --git a/packages/cli/src/i18n/languages.ts b/packages/cli/src/i18n/languages.ts index 0f72c7c96..02c17d14f 100644 --- a/packages/cli/src/i18n/languages.ts +++ b/packages/cli/src/i18n/languages.ts @@ -13,6 +13,7 @@ export type SupportedLanguage = | 'ja' | 'pt' | 'fr' + | 'ca' | string; export interface LanguageDefinition { @@ -75,6 +76,12 @@ export const SUPPORTED_LANGUAGES: readonly LanguageDefinition[] = [ fullName: 'French', nativeName: 'Français', }, + { + code: 'ca', + id: 'ca-ES', + fullName: 'Catalan', + nativeName: 'Català', + }, ]; /** diff --git a/packages/cli/src/i18n/locales/ca.js b/packages/cli/src/i18n/locales/ca.js new file mode 100644 index 000000000..c8599fd04 --- /dev/null +++ b/packages/cli/src/i18n/locales/ca.js @@ -0,0 +1,2143 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: 
Apache-2.0 + */ + +// Traduccions en català per al CLI de Qwen Code per Jordi Mas i Hernàndez + +export default { + // ============================================================================ + // Ajuda / Components de la interfície + // ============================================================================ + '↑ to manage attachments': '↑ per gestionar els adjunts', + '← → select, Delete to remove, ↓ to exit': + '← → seleccionar, Supr per eliminar, ↓ per sortir', + 'Attachments: ': 'Adjunts: ', + + 'Basics:': 'Bàsic:', + 'Add context': 'Afegir context', + 'Use {{symbol}} to specify files for context (e.g., {{example}}) to target specific files or folders.': + 'Useu {{symbol}} per especificar fitxers de context (p. ex., {{example}}) per seleccionar fitxers o carpetes específics.', + '@': '@', + '@src/myFile.ts': '@src/myFile.ts', + 'Shell mode': 'Mode shell', + 'YOLO mode': 'Mode YOLO', + 'plan mode': 'mode de planificació', + 'auto-accept edits': 'acceptació automàtica de canvis', + 'Accepting edits': 'Acceptant canvis', + '(shift + tab to cycle)': '(maj + tab per canviar)', + '(tab to cycle)': '(tab per canviar)', + 'Execute shell commands via {{symbol}} (e.g., {{example1}}) or use natural language (e.g., {{example2}}).': + 'Executeu ordres shell amb {{symbol}} (p. ex., {{example1}}) o useu el llenguatge natural (p. 
ex., {{example2}}).', + '!': '!', + '!npm run start': '!npm run start', + 'start server': 'iniciar el servidor', + 'Commands:': 'Ordres:', + 'shell command': 'ordre shell', + 'Model Context Protocol command (from external servers)': + 'Ordre del protocol de context del model (des de servidors externs)', + 'Keyboard Shortcuts:': 'Dreceres de teclat:', + 'Toggle this help display': 'Mostrar/amagar aquesta ajuda', + 'Toggle shell mode': 'Canviar el mode shell', + 'Open command menu': "Obrir el menú d'ordres", + 'Add file context': 'Afegir context de fitxer', + 'Accept suggestion / Autocomplete': 'Acceptar suggeriment / Autocompleció', + 'Reverse search history': "Cerca inversa a l'historial", + 'Press ? again to close': 'Premeu ? de nou per tancar', + 'for shell mode': 'per al mode shell', + 'for commands': 'per a les ordres', + 'for file paths': 'per als camins de fitxers', + 'to clear input': "per esborrar l'entrada", + 'to cycle approvals': 'per canviar les aprovacions', + 'to quit': 'per sortir', + 'for newline': 'per a nova línia', + 'to clear screen': 'per netejar la pantalla', + 'to search history': "per cercar a l'historial", + 'to paste images': 'per enganxar imatges', + 'for external editor': 'per a editor extern', + 'to toggle compact mode': 'per canviar el mode compacte', + 'Jump through words in the input': "Saltar entre paraules a l'entrada", + 'Close dialogs, cancel requests, or quit application': + "Tancar diàlegs, cancel·lar peticions o sortir de l'aplicació", + 'New line': 'Nova línia', + 'New line (Alt+Enter works for certain linux distros)': + 'Nova línia (Alt+Retorn funciona en certes distribucions de Linux)', + 'Clear the screen': 'Netejar la pantalla', + 'Open input in external editor': "Obrir l'entrada en un editor extern", + 'Send message': 'Enviar missatge', + 'Initializing...': 'Inicialitzant...', + 'Connecting to MCP servers... ({{connected}}/{{total}})': + 'Connectant als servidors MCP... 
({{connected}}/{{total}})', + 'Type your message or @path/to/file': + 'Escriviu el vostre missatge o @camí/al/fitxer', + '? for shortcuts': '? per a dreceres', + "Press 'i' for INSERT mode and 'Esc' for NORMAL mode.": + "Premeu 'i' per al mode INSERCIÓ i 'Esc' per al mode NORMAL.", + 'Cancel operation / Clear input (double press)': + 'Cancel·lar operació / Esborrar entrada (doble premuda)', + 'Cycle approval modes': "Canviar els modes d'aprovació", + 'Cycle through your prompt history': "Navegar per l'historial de missatges", + 'For a full list of shortcuts, see {{docPath}}': + 'Per a una llista completa de dreceres, vegeu {{docPath}}', + 'docs/keyboard-shortcuts.md': 'docs/keyboard-shortcuts.md', + 'for help on Qwen Code': 'per a ajuda sobre Qwen Code', + 'show version info': 'mostrar informació de la versió', + 'submit a bug report': "enviar un informe d'error", + 'About Qwen Code': 'Sobre Qwen Code', + Status: 'Estat', + + // ============================================================================ + // Informació del sistema + // ============================================================================ + 'Qwen Code': 'Qwen Code', + Runtime: "Entorn d'execució", + OS: 'SO', + Auth: 'Autenticació', + 'CLI Version': 'Versió del CLI', + 'Git Commit': 'Commit de Git', + Model: 'Model', + 'Fast Model': 'Model ràpid', + Sandbox: 'Entorn aïllat', + 'OS Platform': 'Plataforma del SO', + 'OS Arch': 'Arquitectura del SO', + 'OS Release': 'Versió del SO', + 'Node.js Version': 'Versió de Node.js', + 'NPM Version': 'Versió de NPM', + 'Session ID': 'ID de sessió', + 'Auth Method': "Mètode d'autenticació", + 'Base URL': 'URL base', + Proxy: 'Proxy', + 'Memory Usage': 'Ús de memòria', + 'IDE Client': 'Client IDE', + + // ============================================================================ + // Ordres - General + // ============================================================================ + 'Analyzes the project and creates a tailored QWEN.md file.': + 'Analitza 
el projecte i crea un fitxer QWEN.md personalitzat.', + 'List available Qwen Code tools. Usage: /tools [desc]': + 'Llistar les eines disponibles de Qwen Code. Ús: /tools [desc]', + 'List available skills.': 'Llistar les habilitats disponibles.', + 'Available Qwen Code CLI tools:': 'Eines del CLI de Qwen Code disponibles:', + 'No tools available': 'No hi ha eines disponibles', + 'View or change the approval mode for tool usage': + "Veure o canviar el mode d'aprovació per a l'ús d'eines", + 'Invalid approval mode "{{arg}}". Valid modes: {{modes}}': + 'Mode d\'aprovació no vàlid "{{arg}}". Modes vàlids: {{modes}}', + 'Approval mode set to "{{mode}}"': 'Mode d\'aprovació establert a "{{mode}}"', + 'View or change the language setting': + "Veure o canviar la configuració d'idioma", + 'change the theme': 'canviar el tema', + 'Select Theme': 'Seleccionar tema', + Preview: 'Previsualització', + '(Use Enter to select, Tab to configure scope)': + "(Useu Retorn per seleccionar, Tab per configurar l'àmbit)", + '(Use Enter to apply scope, Tab to go back)': + "(Useu Retorn per aplicar l'àmbit, Tab per tornar enrere)", + 'Theme configuration unavailable due to NO_COLOR env variable.': + "La configuració del tema no està disponible degut a la variable d'entorn NO_COLOR.", + 'Theme "{{themeName}}" not found.': 'Tema "{{themeName}}" no trobat.', + 'Theme "{{themeName}}" not found in selected scope.': + 'Tema "{{themeName}}" no trobat en l\'àmbit seleccionat.', + 'Clear conversation history and free up context': + "Esborrar l'historial de la conversa i alliberar context", + 'Compresses the context by replacing it with a summary.': + 'Comprimeix el context substituint-lo per un resum.', + 'open full Qwen Code documentation in your browser': + 'obrir la documentació completa de Qwen Code al navegador', + 'Configuration not available.': 'Configuració no disponible.', + 'change the auth method': "canviar el mètode d'autenticació", + 'Configure authentication information for login': + 
"Configurar la informació d'autenticació per a iniciar sessió", + 'Copy the last result or code snippet to clipboard': + "Copiar l'últim resultat o fragment de codi al porta-retalls", + + // ============================================================================ + // Ordres - Agents + // ============================================================================ + 'Manage subagents for specialized task delegation.': + 'Gestionar subagents per a la delegació de tasques especialitzades.', + 'Manage existing subagents (view, edit, delete).': + 'Gestionar subagents existents (veure, editar, eliminar).', + 'Create a new subagent with guided setup.': + 'Crear un nou subagent amb configuració guiada.', + + // ============================================================================ + // Agents - Diàleg de gestió + // ============================================================================ + Agents: 'Agents', + 'Choose Action': 'Triar acció', + 'Edit {{name}}': 'Editar {{name}}', + 'Edit Tools: {{name}}': 'Editar eines: {{name}}', + 'Edit Color: {{name}}': 'Editar color: {{name}}', + 'Delete {{name}}': 'Eliminar {{name}}', + 'Unknown Step': 'Pas desconegut', + 'Esc to close': 'Esc per tancar', + 'Enter to select, ↑↓ to navigate, Esc to close': + 'Retorn per seleccionar, ↑↓ per navegar, Esc per tancar', + 'Esc to go back': 'Esc per tornar enrere', + 'Enter to confirm, Esc to cancel': 'Retorn per confirmar, Esc per cancel·lar', + 'Enter to select, ↑↓ to navigate, Esc to go back': + 'Retorn per seleccionar, ↑↓ per navegar, Esc per tornar enrere', + 'Enter to submit, Esc to go back': 'Retorn per enviar, Esc per tornar enrere', + 'Invalid step: {{step}}': 'Pas no vàlid: {{step}}', + 'No subagents found.': "No s'han trobat subagents.", + "Use '/agents create' to create your first subagent.": + "Useu '/agents create' per crear el vostre primer subagent.", + '(built-in)': '(integrat)', + '(overridden by project level agent)': + '(sobreescrit per un agent de nivell de 
projecte)', + 'Project Level ({{path}})': 'Nivell de projecte ({{path}})', + 'User Level ({{path}})': "Nivell d'usuari ({{path}})", + 'Built-in Agents': 'Agents integrats', + 'Extension Agents': "Agents d'extensió", + 'Using: {{count}} agents': 'En ús: {{count}} agents', + 'View Agent': 'Veure agent', + 'Edit Agent': 'Editar agent', + 'Delete Agent': 'Eliminar agent', + Back: 'Enrere', + 'No agent selected': 'Cap agent seleccionat', + 'File Path: ': 'Camí del fitxer: ', + 'Tools: ': 'Eines: ', + 'Color: ': 'Color: ', + 'Description:': 'Descripció:', + 'System Prompt:': 'Missatge del sistema:', + 'Open in editor': "Obrir a l'editor", + 'Edit tools': 'Editar eines', + 'Edit color': 'Editar color', + '❌ Error:': '❌ Error:', + 'Are you sure you want to delete agent "{{name}}"?': + 'Esteu segur que voleu eliminar l\'agent "{{name}}"?', + + // ============================================================================ + // Agents - Assistent de creació + // ============================================================================ + 'Project Level (.qwen/agents/)': 'Nivell de projecte (.qwen/agents/)', + 'User Level (~/.qwen/agents/)': "Nivell d'usuari (~/.qwen/agents/)", + '✅ Subagent Created Successfully!': '✅ Subagent creat correctament!', + 'Subagent "{{name}}" has been saved to {{level}} level.': + 'El subagent "{{name}}" s\'ha desat al nivell {{level}}.', + 'Name: ': 'Nom: ', + 'Location: ': 'Ubicació: ', + '❌ Error saving subagent:': '❌ Error en desar el subagent:', + 'Warnings:': 'Advertències:', + 'Name "{{name}}" already exists at {{level}} level - will overwrite existing subagent': + 'El nom "{{name}}" ja existeix al nivell {{level}} - sobreescriurà el subagent existent', + 'Name "{{name}}" exists at user level - project level will take precedence': + 'El nom "{{name}}" existeix al nivell d\'usuari - el nivell de projecte tindrà prioritat', + 'Name "{{name}}" exists at project level - existing subagent will take precedence': + 'El nom "{{name}}" existeix al 
nivell de projecte - el subagent existent tindrà prioritat', + 'Description is over {{length}} characters': + 'La descripció supera els {{length}} caràcters', + 'System prompt is over {{length}} characters': + 'El missatge del sistema supera els {{length}} caràcters', + 'Step {{n}}: Choose Location': 'Pas {{n}}: Triar ubicació', + 'Step {{n}}: Choose Generation Method': + 'Pas {{n}}: Triar mètode de generació', + 'Generate with Qwen Code (Recommended)': 'Generar amb Qwen Code (Recomanat)', + 'Manual Creation': 'Creació manual', + 'Describe what this subagent should do and when it should be used. (Be comprehensive for best results)': + "Descriviu què ha de fer aquest subagent i quan s'ha d'usar. (Sigueu exhaustiu per obtenir els millors resultats)", + 'e.g., Expert code reviewer that reviews code based on best practices...': + 'p. ex., Revisor de codi expert que revisa el codi seguint les millors pràctiques...', + 'Generating subagent configuration...': + 'Generant la configuració del subagent...', + 'Failed to generate subagent: {{error}}': + 'Error en generar el subagent: {{error}}', + 'Step {{n}}: Describe Your Subagent': + 'Pas {{n}}: Descriure el vostre subagent', + 'Step {{n}}: Enter Subagent Name': 'Pas {{n}}: Introduir el nom del subagent', + 'Step {{n}}: Enter System Prompt': + 'Pas {{n}}: Introduir el missatge del sistema', + 'Step {{n}}: Enter Description': 'Pas {{n}}: Introduir la descripció', + 'Step {{n}}: Select Tools': 'Pas {{n}}: Seleccionar eines', + 'All Tools (Default)': 'Totes les eines (per defecte)', + 'All Tools': 'Totes les eines', + 'Read-only Tools': 'Eines de només lectura', + 'Read & Edit Tools': 'Eines de lectura i edició', + 'Read & Edit & Execution Tools': 'Eines de lectura, edició i execució', + 'All tools selected, including MCP tools': + 'Totes les eines seleccionades, incloses les eines MCP', + 'Selected tools:': 'Eines seleccionades:', + 'Read-only tools:': 'Eines de només lectura:', + 'Edit tools:': "Eines d'edició:", + 
'Execution tools:': "Eines d'execució:", + 'Step {{n}}: Choose Background Color': 'Pas {{n}}: Triar el color de fons', + 'Step {{n}}: Confirm and Save': 'Pas {{n}}: Confirmar i desar', + 'Esc to cancel': 'Esc per cancel·lar', + 'Press Enter to save, e to save and edit, Esc to go back': + 'Premeu Retorn per desar, e per desar i editar, Esc per tornar enrere', + 'Press Enter to continue, {{navigation}}Esc to {{action}}': + 'Premeu Retorn per continuar, {{navigation}}Esc per {{action}}', + cancel: 'cancel·lar', + 'go back': 'tornar enrere', + '↑↓ to navigate, ': '↑↓ per navegar, ', + 'Enter a clear, unique name for this subagent.': + 'Introduïu un nom clar i únic per a aquest subagent.', + 'e.g., Code Reviewer': 'p. ex., Revisor de codi', + 'Name cannot be empty.': 'El nom no pot estar buit.', + "Write the system prompt that defines this subagent's behavior. Be comprehensive for best results.": + "Escriviu el missatge del sistema que defineix el comportament d'aquest subagent. Sigueu exhaustiu per obtenir els millors resultats.", + 'e.g., You are an expert code reviewer...': + 'p. ex., Sou un revisor de codi expert...', + 'System prompt cannot be empty.': + 'El missatge del sistema no pot estar buit.', + 'Describe when and how this subagent should be used.': + "Descriviu quan i com s'ha d'usar aquest subagent.", + 'e.g., Reviews code for best practices and potential bugs.': + 'p. 
ex., Revisa el codi seguint les millors pràctiques i detectant errors potencials.', + 'Description cannot be empty.': 'La descripció no pot estar buida.', + 'Failed to launch editor: {{error}}': "Error en iniciar l'editor: {{error}}", + 'Failed to save and edit subagent: {{error}}': + 'Error en desar i editar el subagent: {{error}}', + + // ============================================================================ + // Extensions - Diàleg de gestió + // ============================================================================ + 'Manage Extensions': 'Gestionar extensions', + 'Extension Details': "Detalls de l'extensió", + 'View Extension': "Veure l'extensió", + 'Update Extension': "Actualitzar l'extensió", + 'Disable Extension': "Desactivar l'extensió", + 'Enable Extension': "Activar l'extensió", + 'Uninstall Extension': "Desinstal·lar l'extensió", + 'Select Scope': "Seleccionar l'àmbit", + 'User Scope': "Àmbit d'usuari", + 'Workspace Scope': "Àmbit de l'espai de treball", + 'No extensions found.': "No s'han trobat extensions.", + Active: 'Activa', + Disabled: 'Desactivada', + 'Update available': 'Actualització disponible', + 'Up to date': 'Al dia', + 'Checking...': 'Comprovant...', + 'Updating...': 'Actualitzant...', + Unknown: 'Desconegut', + Error: 'Error', + 'Version:': 'Versió:', + 'Status:': 'Estat:', + 'Are you sure you want to uninstall extension "{{name}}"?': + 'Esteu segur que voleu desinstal·lar l\'extensió "{{name}}"?', + 'This action cannot be undone.': 'Aquesta acció no es pot desfer.', + 'Extension "{{name}}" disabled successfully.': + 'L\'extensió "{{name}}" s\'ha desactivat correctament.', + 'Extension "{{name}}" enabled successfully.': + 'L\'extensió "{{name}}" s\'ha activat correctament.', + 'Extension "{{name}}" updated successfully.': + 'L\'extensió "{{name}}" s\'ha actualitzat correctament.', + 'Failed to update extension "{{name}}": {{error}}': + 'Error en actualitzar l\'extensió "{{name}}": {{error}}', + 'Select the scope for this 
action:': + "Seleccioneu l'àmbit per a aquesta acció:", + 'User - Applies to all projects': "Usuari - S'aplica a tots els projectes", + 'Workspace - Applies to current project only': + "Espai de treball - S'aplica només al projecte actual", + 'Name:': 'Nom:', + 'MCP Servers:': 'Servidors MCP:', + 'Settings:': 'Configuració:', + active: 'activa', + disabled: 'desactivada', + 'View Details': 'Veure detalls', + 'Update failed:': "Error en l'actualització:", + 'Updating {{name}}...': 'Actualitzant {{name}}...', + 'Update complete!': 'Actualització completada!', + 'User (global)': 'Usuari (global)', + 'Workspace (project-specific)': 'Espai de treball (específic del projecte)', + 'Disable "{{name}}" - Select Scope': + 'Desactivar "{{name}}" - Seleccionar àmbit', + 'Enable "{{name}}" - Select Scope': 'Activar "{{name}}" - Seleccionar àmbit', + 'No extension selected': 'Cap extensió seleccionada', + 'Press Y/Enter to confirm, N/Esc to cancel': + 'Premeu Y/Retorn per confirmar, N/Esc per cancel·lar', + 'Y/Enter to confirm, N/Esc to cancel': + 'Y/Retorn per confirmar, N/Esc per cancel·lar', + '{{count}} extensions installed': '{{count}} extensions instal·lades', + "Use '/extensions install' to install your first extension.": + "Useu '/extensions install' per instal·lar la vostra primera extensió.", + 'up to date': 'al dia', + 'update available': 'actualització disponible', + 'checking...': 'comprovant...', + 'not updatable': 'no actualitzable', + error: 'error', + + // ============================================================================ + // Ordres - General (continuació) + // ============================================================================ + 'View and edit Qwen Code settings': + 'Veure i editar la configuració de Qwen Code', + Settings: 'Configuració', + 'To see changes, Qwen Code must be restarted. Press r to exit and apply changes now.': + 'Per veure els canvis, cal reiniciar Qwen Code. 
Premeu r per sortir i aplicar els canvis ara.', + 'The command "/{{command}}" is not supported in non-interactive mode.': + 'L\'ordre "/{{command}}" no és compatible en mode no interactiu.', + + // ============================================================================ + // Etiquetes de configuració + // ============================================================================ + 'Vim Mode': 'Mode Vim', + 'Disable Auto Update': 'Desactivar actualització automàtica', + 'Attribution: commit': 'Atribució: commit', + 'Terminal Bell Notification': 'Notificació de campana del terminal', + 'Enable Usage Statistics': "Activar estadístiques d'ús", + Theme: 'Tema', + 'Preferred Editor': 'Editor preferit', + 'Auto-connect to IDE': 'Connexió automàtica a IDE', + 'Enable Prompt Completion': 'Activar la compleció de missatges', + 'Debug Keystroke Logging': 'Registre de tecles per a depuració', + 'Language: UI': 'Idioma: Interfície', + 'Language: Model': 'Idioma: Model', + 'Output Format': 'Format de sortida', + 'Hide Window Title': 'Amagar el títol de la finestra', + 'Show Status in Title': "Mostrar l'estat al títol", + 'Hide Tips': 'Amagar consells', + 'Show Line Numbers in Code': 'Mostrar números de línia al codi', + 'Show Citations': 'Mostrar cites', + 'Custom Witty Phrases': 'Frases enginyoses personalitzades', + 'Show Welcome Back Dialog': 'Mostrar el diàleg de benvinguda', + 'Enable User Feedback': 'Activar les valoracions dels usuaris', + 'How is Qwen doing this session? (optional)': + 'Com va Qwen en aquesta sessió? 
(opcional)', + Bad: 'Malament', + Fine: 'Bé', + Good: 'Molt bé', + Dismiss: 'Descartar', + 'Not Sure Yet': 'Encara no estic segur', + 'Any other key': 'Qualsevol altra tecla', + 'Disable Loading Phrases': 'Desactivar frases de càrrega', + 'Screen Reader Mode': 'Mode de lector de pantalla', + 'IDE Mode': 'Mode IDE', + 'Max Session Turns': 'Torns màxims de sessió', + 'Skip Next Speaker Check': 'Ometre la comprovació del proper parlant', + 'Skip Loop Detection': 'Ometre la detecció de bucles', + 'Skip Startup Context': "Ometre el context d'inici", + 'Enable OpenAI Logging': "Activar el registre d'OpenAI", + 'OpenAI Logging Directory': "Directori de registres d'OpenAI", + Timeout: "Temps d'espera", + 'Max Retries': 'Reintents màxims', + 'Disable Cache Control': 'Desactivar el control de memòria cau', + 'Memory Discovery Max Dirs': 'Directoris màxims de descoberta de memòria', + 'Load Memory From Include Directories': + 'Carregar memòria des dels directoris inclosos', + 'Respect .gitignore': 'Respectar .gitignore', + 'Respect .qwenignore': 'Respectar .qwenignore', + 'Enable Recursive File Search': 'Activar la cerca recursiva de fitxers', + 'Disable Fuzzy Search': 'Desactivar la cerca difusa', + 'Interactive Shell (PTY)': 'Shell interactiva (PTY)', + 'Show Color': 'Mostrar color', + 'Auto Accept': 'Acceptació automàtica', + 'Use Ripgrep': 'Usar Ripgrep', + 'Use Builtin Ripgrep': 'Usar Ripgrep integrat', + 'Enable Tool Output Truncation': + "Activar el truncament de la sortida d'eines", + 'Tool Output Truncation Threshold': + "Llindar de truncament de la sortida d'eines", + 'Tool Output Truncation Lines': "Línies de truncament de la sortida d'eines", + 'Folder Trust': 'Confiança de carpeta', + 'Vision Model Preview': 'Previsualització del model de visió', + 'Tool Schema Compliance': "Compliment de l'esquema d'eines", + 'Auto (detect from system)': 'Automàtic (detectar del sistema)', + 'Auto (detect terminal theme)': 'Automàtic (detectar el tema del terminal)', + Auto: 
'Automàtic', + Text: 'Text', + JSON: 'JSON', + Plan: 'Planificació', + Default: 'Per defecte', + 'Auto Edit': 'Edició automàtica', + YOLO: 'YOLO', + 'toggle vim mode on/off': 'activar/desactivar el mode Vim', + 'check session stats. Usage: /stats [model|tools]': + 'comprovar les estadístiques de la sessió. Ús: /stats [model|tools]', + 'Show model-specific usage statistics.': + "Mostrar les estadístiques d'ús específiques del model.", + 'Show tool-specific usage statistics.': + "Mostrar les estadístiques d'ús específiques de les eines.", + 'exit the cli': 'sortir del CLI', + 'Open MCP management dialog, or authenticate with OAuth-enabled servers': + 'Obrir el diàleg de gestió MCP o autenticar-se amb servidors OAuth', + 'List configured MCP servers and tools, or authenticate with OAuth-enabled servers': + 'Llistar els servidors MCP configurats i les seves eines, o autenticar-se amb servidors OAuth', + 'Manage workspace directories': + "Gestionar els directoris de l'espai de treball", + 'Add directories to the workspace. Use comma to separate multiple paths': + "Afegir directoris a l'espai de treball. Useu comes per separar múltiples camins", + 'Show all directories in the workspace': + "Mostrar tots els directoris de l'espai de treball", + 'set external editor preference': "establir la preferència d'editor extern", + 'Select Editor': 'Seleccionar editor', + 'Editor Preference': "Preferència d'editor", + 'These editors are currently supported. Please note that some editors cannot be used in sandbox mode.': + 'Aquests editors són compatibles actualment. Cal tenir en compte que alguns editors no es poden usar en mode aïllat.', + 'Your preferred editor is:': 'El vostre editor preferit és:', + 'Manage extensions': 'Gestionar extensions', + 'Manage installed extensions': 'Gestionar les extensions instal·lades', + 'List active extensions': 'Llistar les extensions actives', + 'Update extensions. Usage: update |--all': + "Actualitzar extensions. 
Ús: update |--all", + 'Disable an extension': 'Desactivar una extensió', + 'Enable an extension': 'Activar una extensió', + 'Install an extension from a git repo or local path': + "Instal·lar una extensió des d'un repositori git o camí local", + 'Uninstall an extension': 'Desinstal·lar una extensió', + 'No extensions installed.': 'No hi ha extensions instal·lades.', + 'Usage: /extensions update |--all': + "Ús: /extensions update |--all", + 'Extension "{{name}}" not found.': 'Extensió "{{name}}" no trobada.', + 'No extensions to update.': 'No hi ha extensions per actualitzar.', + 'Usage: /extensions install ': 'Ús: /extensions install ', + 'Installing extension from "{{source}}"...': + 'Instal·lant extensió des de "{{source}}"...', + 'Extension "{{name}}" installed successfully.': + 'L\'extensió "{{name}}" s\'ha instal·lat correctament.', + 'Failed to install extension from "{{source}}": {{error}}': + 'Error en instal·lar l\'extensió des de "{{source}}": {{error}}', + 'Usage: /extensions uninstall ': + "Ús: /extensions uninstall ", + 'Uninstalling extension "{{name}}"...': + 'Desinstal·lant l\'extensió "{{name}}"...', + 'Extension "{{name}}" uninstalled successfully.': + 'L\'extensió "{{name}}" s\'ha desinstal·lat correctament.', + 'Failed to uninstall extension "{{name}}": {{error}}': + 'Error en desinstal·lar l\'extensió "{{name}}": {{error}}', + 'Usage: /extensions {{command}} [--scope=]': + 'Ús: /extensions {{command}} [--scope=]', + 'Unsupported scope "{{scope}}", should be one of "user" or "workspace"': + 'Àmbit no admès "{{scope}}", ha de ser "user" o "workspace"', + 'Extension "{{name}}" disabled for scope "{{scope}}"': + 'L\'extensió "{{name}}" desactivada per a l\'àmbit "{{scope}}"', + 'Extension "{{name}}" enabled for scope "{{scope}}"': + 'L\'extensió "{{name}}" activada per a l\'àmbit "{{scope}}"', + 'Do you want to continue? [Y/n]: ': 'Voleu continuar? 
[S/n]: ', + 'Do you want to continue?': 'Voleu continuar?', + 'Installing extension "{{name}}".': 'Instal·lant l\'extensió "{{name}}".', + '**Extensions may introduce unexpected behavior. Ensure you have investigated the extension source and trust the author.**': + "**Les extensions poden introduir comportaments inesperats. Assegureu-vos d'haver investigat la font de l'extensió i de confiar en l'autor.**", + 'This extension will run the following MCP servers:': + 'Aquesta extensió executarà els servidors MCP següents:', + local: 'local', + remote: 'remot', + 'This extension will add the following commands: {{commands}}.': + 'Aquesta extensió afegirà les ordres següents: {{commands}}.', + 'This extension will append info to your QWEN.md context using {{fileName}}': + 'Aquesta extensió afegirà informació al vostre context QWEN.md usant {{fileName}}', + 'This extension will exclude the following core tools: {{tools}}': + 'Aquesta extensió exclourà les eines principals següents: {{tools}}', + 'This extension will install the following skills:': + 'Aquesta extensió instal·larà les habilitats següents:', + 'This extension will install the following subagents:': + 'Aquesta extensió instal·larà els subagents següents:', + 'Installation cancelled for "{{name}}".': + 'Instal·lació cancel·lada per a "{{name}}".', + 'You are installing an extension from {{originSource}}. Some features may not work perfectly with Qwen Code.': + 'Esteu instal·lant una extensió des de {{originSource}}. 
Algunes funcions poden no funcionar perfectament amb Qwen Code.', + '--ref and --auto-update are not applicable for marketplace extensions.': + "--ref i --auto-update no s'apliquen a les extensions del mercat.", + 'Extension "{{name}}" installed successfully and enabled.': + 'L\'extensió "{{name}}" s\'ha instal·lat i activat correctament.', + 'Installs an extension from a git repository URL, local path, or claude marketplace (marketplace-url:plugin-name).': + "Instal·la una extensió des d'una URL de repositori git, un camí local o el mercat (marketplace-url:nom-del-connector).", + 'The github URL, local path, or marketplace source (marketplace-url:plugin-name) of the extension to install.': + "La URL de GitHub, el camí local o la font del mercat (marketplace-url:nom-del-connector) de l'extensió a instal·lar.", + 'The git ref to install from.': + 'La referència git des de la qual instal·lar.', + 'Enable auto-update for this extension.': + "Activar l'actualització automàtica per a aquesta extensió.", + 'Enable pre-release versions for this extension.': + 'Activar les versions preliminars per a aquesta extensió.', + 'Acknowledge the security risks of installing an extension and skip the confirmation prompt.': + "Acceptar els riscos de seguretat d'instal·lar una extensió i ometre el missatge de confirmació.", + 'The source argument must be provided.': "Cal proporcionar l'argument font.", + 'Extension "{{name}}" successfully uninstalled.': + 'L\'extensió "{{name}}" s\'ha desinstal·lat correctament.', + 'Uninstalls an extension.': 'Desinstal·la una extensió.', + 'The name or source path of the extension to uninstall.': + "El nom o camí font de l'extensió a desinstal·lar.", + 'Please include the name of the extension to uninstall as a positional argument.': + "Incloeu el nom de l'extensió a desinstal·lar com a argument posicional.", + 'Enables an extension.': 'Activa una extensió.', + 'The name of the extension to enable.': "El nom de l'extensió a activar.", + 'The scope 
to enable the extenison in. If not set, will be enabled in all scopes.': + "L'àmbit en el qual activar l'extensió. Si no s'estableix, s'activarà en tots els àmbits.", + 'Extension "{{name}}" successfully enabled for scope "{{scope}}".': + 'L\'extensió "{{name}}" s\'ha activat correctament per a l\'àmbit "{{scope}}".', + 'Extension "{{name}}" successfully enabled in all scopes.': + 'L\'extensió "{{name}}" s\'ha activat correctament en tots els àmbits.', + 'Invalid scope: {{scope}}. Please use one of {{scopes}}.': + 'Àmbit no vàlid: {{scope}}. Useu un dels següents: {{scopes}}.', + 'Disables an extension.': 'Desactiva una extensió.', + 'The name of the extension to disable.': "El nom de l'extensió a desactivar.", + 'The scope to disable the extenison in.': + "L'àmbit en el qual desactivar l'extensió.", + 'Extension "{{name}}" successfully disabled for scope "{{scope}}".': + 'L\'extensió "{{name}}" s\'ha desactivat correctament per a l\'àmbit "{{scope}}".', + 'Extension "{{name}}" successfully updated: {{oldVersion}} → {{newVersion}}.': + 'L\'extensió "{{name}}" s\'ha actualitzat correctament: {{oldVersion}} → {{newVersion}}.', + 'Unable to install extension "{{name}}" due to missing install metadata': + 'No es pot instal·lar l\'extensió "{{name}}" per manca de metadades d\'instal·lació', + 'Extension "{{name}}" is already up to date.': + 'L\'extensió "{{name}}" ja està al dia.', + 'Updates all extensions or a named extension to the latest version.': + "Actualitza totes les extensions o una extensió específica a l'última versió.", + 'Update all extensions.': 'Actualitzar totes les extensions.', + 'The name of the extension to update.': "El nom de l'extensió a actualitzar.", + 'Either an extension name or --all must be provided': + "Cal proporcionar un nom d'extensió o --all", + 'Lists installed extensions.': 'Llista les extensions instal·lades.', + 'Path:': 'Camí:', + 'Source:': 'Font:', + 'Type:': 'Tipus:', + 'Ref:': 'Ref:', + 'Release tag:': 'Etiqueta de versió:', + 
'Enabled (User):': 'Activada (Usuari):', + 'Enabled (Workspace):': 'Activada (Espai de treball):', + 'Context files:': 'Fitxers de context:', + 'Skills:': 'Habilitats:', + 'Agents:': 'Agents:', + 'MCP servers:': 'Servidors MCP:', + 'Link extension failed to install.': + "No s'ha pogut instal·lar l'extensió d'enllaç.", + 'Extension "{{name}}" linked successfully and enabled.': + 'L\'extensió "{{name}}" s\'ha enllaçat i activat correctament.', + 'Links an extension from a local path. Updates made to the local path will always be reflected.': + "Enllaça una extensió des d'un camí local. Els canvis al camí local sempre es reflectiran.", + 'The name of the extension to link.': "El nom de l'extensió a enllaçar.", + 'Set a specific setting for an extension.': + 'Establir una configuració específica per a una extensió.', + 'Name of the extension to configure.': "Nom de l'extensió a configurar.", + 'The setting to configure (name or env var).': + "La configuració a establir (nom o variable d'entorn).", + 'The scope to set the setting in.': "L'àmbit on establir la configuració.", + 'List all settings for an extension.': + "Llistar tota la configuració d'una extensió.", + 'Name of the extension.': "Nom de l'extensió.", + 'Extension "{{name}}" has no settings to configure.': + 'L\'extensió "{{name}}" no té cap configuració.', + 'Settings for "{{name}}":': 'Configuració per a "{{name}}":', + '(workspace)': '(espai de treball)', + '(user)': '(usuari)', + '[not set]': '[no establert]', + '[value stored in keychain]': '[valor emmagatzemat al clauer]', + 'Value:': 'Valor:', + 'Manage extension settings.': 'Gestionar la configuració de les extensions.', + 'You need to specify a command (set or list).': + 'Cal especificar una ordre (set o list).', + + // ============================================================================ + // Selecció de connector / Mercat + // ============================================================================ + 'No plugins available in this 
marketplace.': + 'No hi ha connectors disponibles en aquest mercat.', + 'Select a plugin to install from marketplace "{{name}}":': + 'Seleccioneu un connector per instal·lar des del mercat "{{name}}":', + 'Plugin selection cancelled.': 'Selecció de connector cancel·lada.', + 'Select a plugin from "{{name}}"': 'Seleccionar un connector de "{{name}}"', + 'Use ↑↓ or j/k to navigate, Enter to select, Escape to cancel': + 'Useu ↑↓ o j/k per navegar, Retorn per seleccionar, Esc per cancel·lar', + '{{count}} more above': '{{count}} més amunt', + '{{count}} more below': '{{count}} més avall', + 'manage IDE integration': "gestionar la integració de l'IDE", + 'check status of IDE integration': + "comprovar l'estat de la integració de l'IDE", + 'install required IDE companion for {{ideName}}': + 'instal·lar el complement IDE necessari per a {{ideName}}', + 'enable IDE integration': "activar la integració de l'IDE", + 'disable IDE integration': "desactivar la integració de l'IDE", + 'IDE integration is not supported in your current environment. To use this feature, run Qwen Code in one of these supported IDEs: VS Code or VS Code forks.': + "La integració de l'IDE no és compatible en el vostre entorn actual. 
Per usar aquesta funció, executeu Qwen Code en un dels IDEs compatibles: VS Code o bifurcacions de VS Code.", + 'Set up GitHub Actions': 'Configurar GitHub Actions', + 'Configure terminal keybindings for multiline input (VS Code, Cursor, Windsurf, Trae)': + 'Configurar les dreceres del terminal per a entrada multilínia (VS Code, Cursor, Windsurf, Trae)', + 'Please restart your terminal for the changes to take effect.': + 'Reinicieu el terminal perquè els canvis tinguin efecte.', + 'Failed to configure terminal: {{error}}': + 'Error en configurar el terminal: {{error}}', + 'Could not determine {{terminalName}} config path on Windows: APPDATA environment variable is not set.': + "No s'ha pogut determinar el camí de configuració de {{terminalName}} a Windows: la variable d'entorn APPDATA no està establerta.", + '{{terminalName}} keybindings.json exists but is not a valid JSON array. Please fix the file manually or delete it to allow automatic configuration.': + '{{terminalName}} keybindings.json existeix però no és un array JSON vàlid. Corregiu el fitxer manualment o elimineu-lo per permetre la configuració automàtica.', + 'File: {{file}}': 'Fitxer: {{file}}', + 'Failed to parse {{terminalName}} keybindings.json. The file contains invalid JSON. Please fix the file manually or delete it to allow automatic configuration.': + 'Error en analitzar {{terminalName}} keybindings.json. El fitxer conté JSON no vàlid. Corregiu el fitxer manualment o elimineu-lo per permetre la configuració automàtica.', + 'Error: {{error}}': 'Error: {{error}}', + 'Shift+Enter binding already exists': 'La drecera Shift+Retorn ja existeix', + 'Ctrl+Enter binding already exists': 'La drecera Ctrl+Retorn ja existeix', + 'Existing keybindings detected. Will not modify to avoid conflicts.': + "S'han detectat dreceres existents. 
No es modificaran per evitar conflictes.", + 'Please check and modify manually if needed: {{file}}': + 'Comproveu i modifiqueu manualment si cal: {{file}}', + 'Added Shift+Enter and Ctrl+Enter keybindings to {{terminalName}}.': + "S'han afegit les dreceres Shift+Retorn i Ctrl+Retorn a {{terminalName}}.", + 'Modified: {{file}}': 'Modificat: {{file}}', + '{{terminalName}} keybindings already configured.': + 'Les dreceres de {{terminalName}} ja estan configurades.', + 'Failed to configure {{terminalName}}.': + 'Error en configurar {{terminalName}}.', + 'Your terminal is already configured for an optimal experience with multiline input (Shift+Enter and Ctrl+Enter).': + 'El vostre terminal ja està configurat per a una experiència òptima amb entrada multilínia (Shift+Retorn i Ctrl+Retorn).', + + // ============================================================================ + // Ordres - Hooks + // ============================================================================ + 'Manage Qwen Code hooks': 'Gestionar els hooks de Qwen Code', + 'List all configured hooks': 'Llistar tots els hooks configurats', + 'Enable a disabled hook': 'Activar un hook desactivat', + 'Disable an active hook': 'Desactivar un hook actiu', + Hooks: 'Hooks', + 'Loading hooks...': 'Carregant hooks...', + 'Error loading hooks:': 'Error en carregar els hooks:', + 'Press Escape to close': 'Premeu Esc per tancar', + 'Press Escape, Ctrl+C, or Ctrl+D to cancel': + 'Premeu Esc, Ctrl+C o Ctrl+D per cancel·lar', + 'Press Space, Enter, or Escape to dismiss': + 'Premeu Espai, Retorn o Esc per descartar', + 'No hook selected': 'Cap hook seleccionat', + 'No hook events found.': "No s'han trobat esdeveniments de hook.", + '{{count}} hook configured': '{{count}} hook configurat', + '{{count}} hooks configured': '{{count}} hooks configurats', + 'This menu is read-only. To add or modify hooks, edit settings.json directly or ask Qwen Code.': + 'Aquest menú és de només lectura. 
Per afegir o modificar hooks, editeu settings.json directament o demaneu-ho a Qwen Code.', + 'Enter to select · Esc to cancel': + 'Retorn per seleccionar · Esc per cancel·lar', + 'Exit codes:': 'Codis de sortida:', + 'Configured hooks:': 'Hooks configurats:', + 'No hooks configured for this event.': + 'No hi ha hooks configurats per a aquest esdeveniment.', + 'To add hooks, edit settings.json directly or ask Qwen.': + 'Per afegir hooks, editeu settings.json directament o demaneu-ho a Qwen.', + 'Enter to select · Esc to go back': + 'Retorn per seleccionar · Esc per tornar enrere', + 'Hook details': 'Detalls del hook', + 'Event:': 'Esdeveniment:', + 'Extension:': 'Extensió:', + 'Desc:': 'Desc:', + 'No hook config selected': 'Cap configuració de hook seleccionada', + 'To modify or remove this hook, edit settings.json directly or ask Qwen to help.': + 'Per modificar o eliminar aquest hook, editeu settings.json directament o demaneu ajuda a Qwen.', + 'Hook Configuration - Disabled': 'Configuració de hooks - Desactivats', + 'All hooks are currently disabled. You have {{count}} that are not running.': + 'Tots els hooks estan desactivats. 
En teniu {{count}} que no estan en execució.', + '{{count}} configured hook': '{{count}} hook configurat', + '{{count}} configured hooks': '{{count}} hooks configurats', + 'When hooks are disabled:': 'Quan els hooks estan desactivats:', + 'No hook commands will execute': "Cap ordre de hook s'executarà", + 'StatusLine will not be displayed': "La barra d'estat no es mostrarà", + 'Tool operations will proceed without hook validation': + "Les operacions d'eines continuaran sense validació de hook", + 'To re-enable hooks, remove "disableAllHooks" from settings.json or ask Qwen Code.': + 'Per tornar a activar els hooks, elimineu "disableAllHooks" de settings.json o demaneu-ho a Qwen Code.', + Project: 'Projecte', + User: 'Usuari', + System: 'Sistema', + Extension: 'Extensió', + 'Local Settings': 'Configuració local', + 'User Settings': "Configuració d'usuari", + 'System Settings': 'Configuració del sistema', + Extensions: 'Extensions', + 'Session (temporary)': 'Sessió (temporal)', + '✓ Enabled': '✓ Activat', + '✗ Disabled': '✗ Desactivat', + 'Before tool execution': "Abans de l'execució de l'eina", + 'After tool execution': "Després de l'execució de l'eina", + 'After tool execution fails': "Quan falla l'execució de l'eina", + 'When notifications are sent': "Quan s'envien notificacions", + 'When the user submits a prompt': "Quan l'usuari envia un missatge", + 'When a new session is started': "Quan s'inicia una nova sessió", + 'Right before Qwen Code concludes its response': + 'Immediatament abans que Qwen Code conclogui la seva resposta', + 'When a subagent (Agent tool call) is started': + "Quan s'inicia un subagent (crida a l'eina Agent)", + 'Right before a subagent concludes its response': + 'Immediatament abans que un subagent conclogui la seva resposta', + 'Before conversation compaction': 'Abans de la compactació de la conversa', + 'When a session is ending': "Quan una sessió s'està acabant", + 'When a permission dialog is displayed': + 'Quan es mostra un diàleg de 
permisos', + 'Input to command is JSON of tool call arguments.': + "L'entrada a l'ordre és JSON dels arguments de la crida a l'eina.", + 'Input to command is JSON with fields "inputs" (tool call arguments) and "response" (tool call response).': + 'L\'entrada a l\'ordre és JSON amb els camps "inputs" (arguments de la crida a l\'eina) i "response" (resposta de la crida a l\'eina).', + 'Input to command is JSON with tool_name, tool_input, tool_use_id, error, error_type, is_interrupt, and is_timeout.': + "L'entrada a l'ordre és JSON amb tool_name, tool_input, tool_use_id, error, error_type, is_interrupt i is_timeout.", + 'Input to command is JSON with notification message and type.': + "L'entrada a l'ordre és JSON amb el missatge de notificació i el tipus.", + 'Input to command is JSON with original user prompt text.': + "L'entrada a l'ordre és JSON amb el text original del missatge de l'usuari.", + 'Input to command is JSON with session start source.': + "L'entrada a l'ordre és JSON amb la font d'inici de sessió.", + 'Input to command is JSON with session end reason.': + "L'entrada a l'ordre és JSON amb el motiu de fi de sessió.", + 'Input to command is JSON with agent_id and agent_type.': + "L'entrada a l'ordre és JSON amb agent_id i agent_type.", + 'Input to command is JSON with agent_id, agent_type, and agent_transcript_path.': + "L'entrada a l'ordre és JSON amb agent_id, agent_type i agent_transcript_path.", + 'Input to command is JSON with compaction details.': + "L'entrada a l'ordre és JSON amb els detalls de compactació.", + 'Input to command is JSON with tool_name, tool_input, and tool_use_id. Output JSON with hookSpecificOutput containing decision to allow or deny.': + "L'entrada a l'ordre és JSON amb tool_name, tool_input i tool_use_id. 
La sortida JSON amb hookSpecificOutput conté la decisió de permetre o denegar.", + 'stdout/stderr not shown': 'stdout/stderr no es mostra', + 'show stderr to model and continue conversation': + 'mostrar stderr al model i continuar la conversa', + 'show stderr to user only': "mostrar stderr només a l'usuari", + 'stdout shown in transcript mode (ctrl+o)': + 'stdout mostrat en mode transcripció (ctrl+o)', + 'show stderr to model immediately': 'mostrar stderr al model immediatament', + 'show stderr to user only but continue with tool call': + "mostrar stderr només a l'usuari però continuar amb la crida a l'eina", + 'block processing, erase original prompt, and show stderr to user only': + "blocar el processament, esborrar el missatge original i mostrar stderr només a l'usuari", + 'stdout shown to Qwen': 'stdout mostrat a Qwen', + 'show stderr to user only (blocking errors ignored)': + "mostrar stderr només a l'usuari (errors de bloqueig ignorats)", + 'command completes successfully': "l'ordre es completa correctament", + 'stdout shown to subagent': 'stdout mostrat al subagent', + 'show stderr to subagent and continue having it run': + 'mostrar stderr al subagent i continuar la seva execució', + 'stdout appended as custom compact instructions': + 'stdout afegit com a instruccions compactes personalitzades', + 'block compaction': 'blocar la compactació', + 'show stderr to user only but continue with compaction': + "mostrar stderr només a l'usuari però continuar amb la compactació", + 'use hook decision if provided': 'usar la decisió del hook si es proporciona', + 'Config not loaded.': 'Configuració no carregada.', + 'Hooks are not enabled. Enable hooks in settings to use this feature.': + 'Els hooks no estan activats. Activeu els hooks a la configuració per usar aquesta funció.', + 'No hooks configured. Add hooks in your settings.json file.': + 'No hi ha hooks configurats. 
Afegiu hooks al vostre fitxer settings.json.',
+  'Configured Hooks ({{count}} total)':
+    'Hooks configurats ({{count}} en total)',
+
+  // ============================================================================
+  // Ordres - Exportació de sessió
+  // ============================================================================
+  'Export current session message history to a file':
+    "Exportar l'historial de missatges de la sessió actual a un fitxer",
+  'Export session to HTML format': 'Exportar la sessió en format HTML',
+  'Export session to JSON format': 'Exportar la sessió en format JSON',
+  'Export session to JSONL format (one message per line)':
+    'Exportar la sessió en format JSONL (un missatge per línia)',
+  'Export session to markdown format': 'Exportar la sessió en format markdown',
+
+  // ============================================================================
+  // Ordres - Idees
+  // ============================================================================
+  'generate personalized programming insights from your chat history':
+    'generar idees de programació personalitzades a partir del vostre historial de xat',
+
+  // ============================================================================
+  // Ordres - Historial de sessió
+  // ============================================================================
+  'Resume a previous session': 'Reprendre una sessió anterior',
+  'Restore a tool call. This will reset the conversation and file history to the state it was in when the tool call was suggested':
+    "Restaurar una crida a una eina. Això restablirà la conversa i l'historial de fitxers a l'estat en què es trobaven quan es va suggerir la crida a l'eina",
+  'Could not detect terminal type. Supported terminals: VS Code, Cursor, Windsurf, and Trae.':
+    "No s'ha pogut detectar el tipus de terminal. Terminals compatibles: VS Code, Cursor, Windsurf i Trae.",
+  'Terminal "{{terminal}}" is not supported yet.':
+    'El terminal "{{terminal}}" no és compatible encara.',
+
+  // ============================================================================
+  // Ordres - Idioma
+  // ============================================================================
+  'Invalid language. Available: {{options}}':
+    'Idioma no vàlid. Disponibles: {{options}}',
+  'Language subcommands do not accept additional arguments.':
+    "Les subordres d'idioma no accepten arguments addicionals.",
+  'Current UI language: {{lang}}': 'Idioma actual de la interfície: {{lang}}',
+  'Current LLM output language: {{lang}}':
+    'Idioma actual de la sortida del model: {{lang}}',
+  'LLM output language not set': 'Idioma de sortida del model no establert',
+  'Set UI language': "Establir l'idioma de la interfície",
+  'Set LLM output language': "Establir l'idioma de sortida del model",
+  'Usage: /language ui [{{options}}]': 'Ús: /language ui [{{options}}]',
+  'Usage: /language output ': 'Ús: /language output ',
+  'Example: /language output 中文': 'Exemple: /language output 中文',
+  'Example: /language output English': 'Exemple: /language output English',
+  'Example: /language output 日本語': 'Exemple: /language output 日本語',
+  'Example: /language output Português': 'Exemple: /language output Português',
+  'UI language changed to {{lang}}':
+    'Idioma de la interfície canviat a {{lang}}',
+  'LLM output language set to {{lang}}':
+    'Idioma de sortida del model establert a {{lang}}',
+  'LLM output language rule file generated at {{path}}':
+    "Fitxer de regles d'idioma de sortida del model generat a {{path}}",
+  'Please restart the application for the changes to take effect.':
+    "Reinicieu l'aplicació perquè els canvis tinguin efecte.",
+  'Failed to generate LLM output language rule file: {{error}}':
+    "Error en generar el fitxer de regles d'idioma de sortida del model: {{error}}",
+  'Invalid command. Available subcommands:':
+    'Ordre no vàlida. Subordres disponibles:',
+  'Available subcommands:': 'Subordres disponibles:',
+  'To request additional UI language packs, please open an issue on GitHub.':
+    "Per sol·licitar paquets d'idioma addicionals per a la interfície, obriu una incidència a GitHub.",
+  'Available options:': 'Opcions disponibles:',
+  'Set UI language to {{name}}':
+    "Establir l'idioma de la interfície a {{name}}",
+
+  // ============================================================================
+  // Ordres - Mode d'aprovació
+  // ============================================================================
+  'Tool Approval Mode': "Mode d'aprovació d'eines",
+  'Current approval mode: {{mode}}': "Mode d'aprovació actual: {{mode}}",
+  'Available approval modes:': "Modes d'aprovació disponibles:",
+  'Approval mode changed to: {{mode}}': "Mode d'aprovació canviat a: {{mode}}",
+  'Approval mode changed to: {{mode}} (saved to {{scope}} settings{{location}})':
+    "Mode d'aprovació canviat a: {{mode}} (desat a la configuració {{scope}}{{location}})",
+  'Usage: /approval-mode [--session|--user|--project]':
+    'Ús: /approval-mode [--session|--user|--project]',
+  'Scope subcommands do not accept additional arguments.':
+    "Les subordres d'àmbit no accepten arguments addicionals.",
+  'Plan mode - Analyze only, do not modify files or execute commands':
+    'Mode de planificació - Analitzar només, sense modificar fitxers ni executar ordres',
+  'Default mode - Require approval for file edits or shell commands':
+    'Mode per defecte - Requerir aprovació per a edicions de fitxers o ordres shell',
+  'Auto-edit mode - Automatically approve file edits':
+    "Mode d'edició automàtica - Aprovar automàticament les edicions de fitxers",
+  'YOLO mode - Automatically approve all tools':
+    'Mode YOLO - Aprovar automàticament totes les eines',
+  '{{mode}} mode': 'Mode {{mode}}',
+  'Settings service is not available; unable to persist the approval mode.':
+    "El servei de configuració no està disponible; no es pot persistir el mode d'aprovació.",
+  'Failed to save approval mode: {{error}}':
+    "Error en desar el mode d'aprovació: {{error}}",
+  'Failed to change approval mode: {{error}}':
+    "Error en canviar el mode d'aprovació: {{error}}",
+  'Apply to current session only (temporary)':
+    'Aplicar només a la sessió actual (temporal)',
+  'Persist for this project/workspace':
+    'Persistir per a aquest projecte/espai de treball',
+  'Persist for this user on this machine':
+    'Persistir per a aquest usuari en aquesta màquina',
+  'Analyze only, do not modify files or execute commands':
+    'Analitzar només, sense modificar fitxers ni executar ordres',
+  'Require approval for file edits or shell commands':
+    'Requerir aprovació per a edicions de fitxers o ordres shell',
+  'Automatically approve file edits':
+    'Aprovar automàticament les edicions de fitxers',
+  'Automatically approve all tools': 'Aprovar automàticament totes les eines',
+  'Workspace approval mode exists and takes priority. User-level change will have no effect.':
+    "Existeix un mode d'aprovació de l'espai de treball i té prioritat. El canvi a nivell d'usuari no tindrà cap efecte.",
+  'Apply To': 'Aplicar a',
+  'Workspace Settings': "Configuració de l'espai de treball",
+
+  // ============================================================================
+  // Ordres - Memòria
+  // ============================================================================
+  'Commands for interacting with memory.':
+    'Ordres per interactuar amb la memòria.',
+  'Show the current memory contents.':
+    'Mostrar el contingut actual de la memòria.',
+  'Show project-level memory contents.':
+    'Mostrar el contingut de la memòria a nivell de projecte.',
+  'Show global memory contents.': 'Mostrar el contingut de la memòria global.',
+  'Add content to project-level memory.':
+    'Afegir contingut a la memòria a nivell de projecte.',
+  'Add content to global memory.': 'Afegir contingut a la memòria global.',
+  'Refresh the memory from the source.':
+    'Actualitzar la memòria des de la font.',
+  'Usage: /memory add --project ':
+    'Ús: /memory add --project ',
+  'Usage: /memory add --global ':
+    'Ús: /memory add --global ',
+  'Attempting to save to project memory: "{{text}}"':
+    'Intentant desar a la memòria del projecte: "{{text}}"',
+  'Attempting to save to global memory: "{{text}}"':
+    'Intentant desar a la memòria global: "{{text}}"',
+  'Current memory content from {{count}} file(s):':
+    'Contingut actual de la memòria de {{count}} fitxer(s):',
+  'Memory is currently empty.': 'La memòria és buida actualment.',
+  'Project memory file not found or is currently empty.':
+    "El fitxer de memòria del projecte no s'ha trobat o és buit.",
+  'Global memory file not found or is currently empty.':
+    "El fitxer de memòria global no s'ha trobat o és buit.",
+  'Global memory is currently empty.': 'La memòria global és buida actualment.',
+  'Global memory content:\n\n---\n{{content}}\n---':
+    'Contingut de la memòria global:\n\n---\n{{content}}\n---',
+  'Project memory content from {{path}}:\n\n---\n{{content}}\n---':
+    'Contingut de la memòria del projecte des de {{path}}:\n\n---\n{{content}}\n---',
+  'Project memory is currently empty.': 'La memòria del projecte és buida.',
+  'Refreshing memory from source files...':
+    'Actualitzant la memòria des dels fitxers font...',
+  'Add content to the memory. Use --global for global memory or --project for project memory.':
+    'Afegir contingut a la memòria. Useu --global per a la memòria global o --project per a la memòria del projecte.',
+  'Usage: /memory add [--global|--project] ':
+    'Ús: /memory add [--global|--project] ',
+  'Attempting to save to memory {{scope}}: "{{fact}}"':
+    'Intentant desar a la memòria {{scope}}: "{{fact}}"',
+  'Open auto-memory folder': 'Obrir la carpeta de memòria automàtica',
+  'Auto-memory: {{status}}': 'Memòria automàtica: {{status}}',
+  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
+    'Auto-dream: {{status}} · {{lastDream}} · /dream per executar',
+  never: 'mai',
+  on: 'activada',
+  off: 'desactivada',
+  '✦ dreaming': '✦ somniant',
+  'Remove matching entries from managed auto-memory.':
+    'Eliminar les entrades coincidents de la memòria automàtica gestionada.',
+  'Usage: /forget ':
+    'Ús: /forget ',
+  'No managed auto-memory entries matched: {{query}}':
+    'Cap entrada de memòria automàtica gestionada coincideix: {{query}}',
+  'Show managed auto-memory status.':
+    "Mostrar l'estat de la memòria automàtica gestionada.",
+  'Run managed auto-memory extraction for the current session.':
+    "Executar l'extracció de memòria automàtica gestionada per a la sessió actual.",
+  'Managed auto-memory root: {{root}}':
+    'Arrel de la memòria automàtica gestionada: {{root}}',
+  'Managed auto-memory topics:': 'Temes de la memòria automàtica gestionada:',
+  'No extraction cursor found yet.':
+    "Encara no s'ha trobat cap cursor d'extracció.",
+  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
+    'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}',
+  'No chat client available to extract memory.':
+    'No hi ha cap client de xat disponible per extreure memòria.',
+  'Managed auto-memory extraction is already running.':
+    "L'extracció de memòria automàtica gestionada ja s'està executant.",
+  'Managed auto-memory extraction found no new durable memories.':
+    "L'extracció de memòria automàtica gestionada no ha trobat noves memòries durables.",
+  'Consolidate managed auto-memory topic files.':
+    'Consolidar els fitxers de temes de memòria automàtica gestionada.',
+  'Managed auto-memory dream found nothing to improve.':
+    "L'auto-dream de memòria gestionada no ha trobat res a millorar.",
+  'Deduplicated entries: {{count}}': 'Entrades desduplicades: {{count}}',
+  'Save a durable memory using the save_memory tool.':
+    "Desar una memòria durable usant l'eina save_memory.",
+  'Usage: /remember [--global|--project] ':
+    'Ús: /remember [--global|--project] ',
+
+  // ============================================================================
+  // Ordres - MCP
+  // ============================================================================
+  'Authenticate with an OAuth-enabled MCP server':
+    'Autenticar-se amb un servidor MCP amb OAuth',
+  'List configured MCP servers and tools':
+    'Llistar els servidors MCP configurats i les seves eines',
+  'Restarts MCP servers.': 'Reinicia els servidors MCP.',
+  'Open MCP management dialog': 'Obrir el diàleg de gestió MCP',
+  'Could not retrieve tool registry.':
+    "No s'ha pogut recuperar el registre d'eines.",
+  'No MCP servers configured with OAuth authentication.':
+    'No hi ha servidors MCP configurats amb autenticació OAuth.',
+  'MCP servers with OAuth authentication:':
+    'Servidors MCP amb autenticació OAuth:',
+  'Use /mcp auth to authenticate.':
+    'Useu /mcp auth per autenticar-vos.',
+  "MCP server '{{name}}' not found.":
+    "El servidor MCP '{{name}}' no s'ha trobat.",
+  "Successfully authenticated and refreshed tools for '{{name}}'.":
+    "S'ha autenticat correctament i s'han actualitzat les eines per a '{{name}}'.",
+  "Failed to authenticate with MCP server '{{name}}': {{error}}":
+    "Error en autenticar-se amb el servidor MCP '{{name}}': {{error}}",
+  "Re-discovering tools from '{{name}}'...":
+    "Redescobrint les eines de '{{name}}'...",
+  "Discovered {{count}} tool(s) from '{{name}}'.":
+    "S'han descobert {{count}} eina(es) de '{{name}}'.",
+  'Authentication complete. Returning to server details...':
+    'Autenticació completada. Tornant als detalls del servidor...',
+  'Authentication successful.': 'Autenticació correcta.',
+  'If the browser does not open, copy and paste this URL into your browser:':
+    "Si el navegador no s'obre, copieu i enganxeu aquesta URL al vostre navegador:",
+  'Make sure to copy the COMPLETE URL - it may wrap across multiple lines.':
+    'Assegureu-vos de copiar la URL COMPLETA - pot ocupar múltiples línies.',
+
+  // ============================================================================
+  // Diàleg de gestió MCP
+  // ============================================================================
+  'Manage MCP servers': 'Gestionar els servidors MCP',
+  'Server Detail': 'Detalls del servidor',
+  'Disable Server': 'Desactivar el servidor',
+  Tools: 'Eines',
+  'Tool Detail': "Detalls de l'eina",
+  'MCP Management': 'Gestió MCP',
+  'Loading...': 'Carregant...',
+  'Unknown step': 'Pas desconegut',
+  'Esc to back': 'Esc per tornar',
+  '↑↓ to navigate · Enter to select · Esc to close':
+    '↑↓ per navegar · Retorn per seleccionar · Esc per tancar',
+  '↑↓ to navigate · Enter to select · Esc to back':
+    '↑↓ per navegar · Retorn per seleccionar · Esc per tornar',
+  '↑↓ to navigate · Enter to confirm · Esc to back':
+    '↑↓ per navegar · Retorn per confirmar · Esc per tornar',
+  'User Settings (global)': "Configuració d'usuari (global)",
+  'Workspace Settings (project-specific)':
+    "Configuració de l'espai de treball (específica del projecte)",
+  'Disable server:': 'Desactivar el servidor:',
+  'Select where to add the server to the exclude list:':
+    "Seleccioneu on afegir el servidor a la llista d'exclusió:",
+  'Press Enter to confirm, Esc to cancel':
+    'Premeu Retorn per confirmar, Esc per cancel·lar',
+  'View tools': 'Veure eines',
+  Reconnect: 'Reconnectar',
+  Enable: 'Activar',
+  Disable: 'Desactivar',
+  Authenticate: 'Autenticar',
+  'Re-authenticate': 'Tornar a autenticar',
+  'Clear Authentication': "Esborrar l'autenticació",
+  'Server:': 'Servidor:',
+  'Command:': 'Ordre:',
+  'Working Directory:': 'Directori de treball:',
+  'Capabilities:': 'Capacitats:',
+  'No server selected': 'Cap servidor seleccionat',
+  prompts: 'missatges',
+  '(disabled)': '(desactivat)',
+  'Error:': 'Error:',
+  tool: 'eina',
+  tools: 'eines',
+  connected: 'connectat',
+  connecting: 'connectant',
+  disconnected: 'desconnectat',
+  'User MCPs': "MCPs de l'usuari",
+  'Project MCPs': 'MCPs del projecte',
+  'Extension MCPs': 'MCPs de les extensions',
+  server: 'servidor',
+  servers: 'servidors',
+  'Add MCP servers to your settings to get started.':
+    'Afegiu servidors MCP a la configuració per començar.',
+  'Run qwen --debug to see error logs':
+    "Executeu qwen --debug per veure els registres d'errors",
+  'OAuth Authentication': 'Autenticació OAuth',
+  'Press Enter to start authentication, Esc to go back':
+    "Premeu Retorn per iniciar l'autenticació, Esc per tornar enrere",
+  'Authenticating... Please complete the login in your browser.':
+    "Autenticant... Completeu l'inici de sessió al vostre navegador.",
+  'Press c to copy the authorization URL to your clipboard.':
+    "Premeu c per copiar la URL d'autorització al porta-retalls.",
+  'Copy request sent to your terminal. If paste is empty, copy the URL above manually.':
+    'Sol·licitud de còpia enviada al vostre terminal. Si el que enganxeu és buit, copieu la URL anterior manualment.',
+  'Cannot write to terminal — copy the URL above manually.':
+    'No es pot escriure al terminal — copieu la URL anterior manualment.',
+  'Press Enter or Esc to go back': 'Premeu Retorn o Esc per tornar enrere',
+  'No tools available for this server.':
+    'No hi ha eines disponibles per a aquest servidor.',
+  destructive: 'destructiu',
+  'read-only': 'només lectura',
+  'open-world': 'món obert',
+  idempotent: 'idempotent',
+  'Tools for {{name}}': 'Eines per a {{name}}',
+  'Tools for {{serverName}}': 'Eines per a {{serverName}}',
+  '{{current}}/{{total}}': '{{current}}/{{total}}',
+  required: 'obligatori',
+  Type: 'Tipus',
+  Enum: 'Enumeració',
+  Parameters: 'Paràmetres',
+  'No tool selected': 'Cap eina seleccionada',
+  Annotations: 'Anotacions',
+  Title: 'Títol',
+  'Read Only': 'Només lectura',
+  Destructive: 'Destructiu',
+  Idempotent: 'Idempotent',
+  'Open World': 'Món obert',
+  Server: 'Servidor',
+  '{{count}} invalid tools': '{{count}} eines no vàlides',
+  invalid: 'no vàlid',
+  'invalid: {{reason}}': 'no vàlid: {{reason}}',
+  'missing name': 'nom absent',
+  'missing description': 'descripció absent',
+  '(unnamed)': '(sense nom)',
+  'Warning: This tool cannot be called by the LLM':
+    'Advertència: el model no pot cridar aquesta eina',
+  Reason: 'Motiu',
+  'Tools must have both name and description to be used by the LLM.':
+    'Les eines han de tenir nom i descripció per poder ser usades pel model.',
+
+  // ============================================================================
+  // Ordres - Xat
+  // ============================================================================
+  'Manage conversation history.': "Gestionar l'historial de la conversa.",
+  'List saved conversation checkpoints':
+    'Llistar els punts de control desats de la conversa',
+  'No saved conversation checkpoints found.':
+    "No s'han trobat punts de control de conversa desats.",
+  'List of saved conversations:': 'Llista de converses desades:',
+  'Note: Newest last, oldest first':
+    'Nota: Les més recents al final, les més antigues al principi',
+  'Save the current conversation as a checkpoint. Usage: /chat save ':
+    'Desar la conversa actual com a punt de control. Ús: /chat save ',
+  'Missing tag. Usage: /chat save ':
+    'Etiqueta absent. Ús: /chat save ',
+  'Delete a conversation checkpoint. Usage: /chat delete ':
+    'Eliminar un punt de control de conversa. Ús: /chat delete ',
+  'Missing tag. Usage: /chat delete ':
+    'Etiqueta absent. Ús: /chat delete ',
+  "Conversation checkpoint '{{tag}}' has been deleted.":
+    "El punt de control de conversa '{{tag}}' s'ha eliminat.",
+  "Error: No checkpoint found with tag '{{tag}}'.":
+    "Error: No s'ha trobat cap punt de control amb l'etiqueta '{{tag}}'.",
+  'Resume a conversation from a checkpoint. Usage: /chat resume ':
+    "Reprendre una conversa des d'un punt de control. Ús: /chat resume ",
+  'Missing tag. Usage: /chat resume ':
+    'Etiqueta absent. Ús: /chat resume ',
+  'No saved checkpoint found with tag: {{tag}}.':
+    "No s'ha trobat cap punt de control desat amb l'etiqueta: {{tag}}.",
+  'A checkpoint with the tag {{tag}} already exists. Do you want to overwrite it?':
+    "Ja existeix un punt de control amb l'etiqueta {{tag}}. Voleu sobreescriure'l?",
+  'No chat client available to save conversation.':
+    'No hi ha cap client de xat disponible per desar la conversa.',
+  'Conversation checkpoint saved with tag: {{tag}}.':
+    "Punt de control de conversa desat amb l'etiqueta: {{tag}}.",
+  'No conversation found to save.': "No s'ha trobat cap conversa per desar.",
+  'No chat client available to share conversation.':
+    'No hi ha cap client de xat disponible per compartir la conversa.',
+  'Invalid file format. Only .md and .json are supported.':
+    'Format de fitxer no vàlid. Només es suporten .md i .json.',
+  'Error sharing conversation: {{error}}':
+    'Error en compartir la conversa: {{error}}',
+  'Conversation shared to {{filePath}}': 'Conversa compartida a {{filePath}}',
+  'No conversation found to share.':
+    "No s'ha trobat cap conversa per compartir.",
+  'Share the current conversation to a markdown or json file. Usage: /chat share ':
+    'Compartir la conversa actual a un fitxer markdown o json. Ús: /chat share ',
+
+  // ============================================================================
+  // Ordres - Resum
+  // ============================================================================
+  'Generate a project summary and save it to .qwen/PROJECT_SUMMARY.md':
+    'Generar un resum del projecte i desar-lo a .qwen/PROJECT_SUMMARY.md',
+  'No chat client available to generate summary.':
+    'No hi ha cap client de xat disponible per generar el resum.',
+  'Already generating summary, wait for previous request to complete':
+    "Ja s'està generant el resum, espereu que acabi la sol·licitud anterior",
+  'No conversation found to summarize.':
+    "No s'ha trobat cap conversa per resumir.",
+  'Failed to generate project context summary: {{error}}':
+    'Error en generar el resum del context del projecte: {{error}}',
+  'Saved project summary to {{filePathForDisplay}}.':
+    'Resum del projecte desat a {{filePathForDisplay}}.',
+  'Saving project summary...': 'Desant el resum del projecte...',
+  'Generating project summary...': 'Generant el resum del projecte...',
+  'Failed to generate summary - no text content received from LLM response':
+    "Error en generar el resum - no s'ha rebut contingut de text de la resposta del model",
+
+  // ============================================================================
+  // Ordres - Model
+  // ============================================================================
+  'Switch the model for this session (--fast for suggestion model)':
+    'Canviar el model per a aquesta sessió (--fast per al model de suggeriments)',
+  'Set a lighter model for prompt suggestions and speculative execution':
+    'Establir un model més lleuger per a suggeriments de missatges i execució especulativa',
+  'Content generator configuration not available.':
+    'Configuració del generador de contingut no disponible.',
+  'Authentication type not available.': "Tipus d'autenticació no disponible.",
+  'No models available for the current authentication type ({{authType}}).':
+    "No hi ha models disponibles per al tipus d'autenticació actual ({{authType}}).",
+
+  // ============================================================================
+  // Ordres - Netejar
+  // ============================================================================
+  'Starting a new session, resetting chat, and clearing terminal.':
+    'Iniciant una nova sessió, restablint el xat i netejant el terminal.',
+  'Starting a new session and clearing.':
+    'Iniciant una nova sessió i netejant.',
+
+  // ============================================================================
+  // Ordres - Comprimir
+  // ============================================================================
+  'Already compressing, wait for previous request to complete':
+    "Ja s'està comprimint, espereu que acabi la sol·licitud anterior",
+  'Failed to compress chat history.': "Error en comprimir l'historial del xat.",
+  'Failed to compress chat history: {{error}}':
+    "Error en comprimir l'historial del xat: {{error}}",
+  'Compressing chat history': "Comprimint l'historial del xat",
+  'Chat history compressed from {{originalTokens}} to {{newTokens}} tokens.':
+    "L'historial del xat s'ha comprimit de {{originalTokens}} a {{newTokens}} tokens.",
+  'Compression was not beneficial for this history size.':
+    "La compressió no ha estat beneficiosa per a aquesta mida d'historial.",
+  'Chat history compression did not reduce size. This may indicate issues with the compression prompt.':
+    "La compressió de l'historial del xat no ha reduït la mida. Això pot indicar problemes amb el missatge de compressió.",
+  'Could not compress chat history due to a token counting error.':
+    "No s'ha pogut comprimir l'historial del xat per un error de recompte de tokens.",
+  'Chat history is already compressed.':
+    "L'historial del xat ja està comprimit.",
+
+  // ============================================================================
+  // Ordres - Directori
+  // ============================================================================
+  'Configuration is not available.': 'Configuració no disponible.',
+  'Please provide at least one path to add.':
+    'Proporcioneu almenys un camí per afegir.',
+  'The /directory add command is not supported in restrictive sandbox profiles. Please use --include-directories when starting the session instead.':
+    "L'ordre /directory add no és compatible en perfils d'entorn aïllat restrictius. En el seu lloc, useu --include-directories en iniciar la sessió.",
+  "Error adding '{{path}}': {{error}}": "Error en afegir '{{path}}': {{error}}",
+  'Successfully added QWEN.md files from the following directories if there are:\n- {{directories}}':
+    "S'han afegit correctament els fitxers QWEN.md dels directoris següents si n'hi ha:\n- {{directories}}",
+  'Error refreshing memory: {{error}}':
+    'Error en actualitzar la memòria: {{error}}',
+  'Successfully added directories:\n- {{directories}}':
+    "S'han afegit correctament els directoris:\n- {{directories}}",
+  'Current workspace directories:\n{{directories}}':
+    "Directoris actuals de l'espai de treball:\n{{directories}}",
+
+  // ============================================================================
+  // Ordres - Documentació
+  // ============================================================================
+  'Please open the following URL in your browser to view the documentation:\n{{url}}':
+    'Obriu la URL següent al vostre navegador per veure la documentació:\n{{url}}',
+  'Opening documentation in your browser: {{url}}':
+    'Obrint la documentació al vostre navegador: {{url}}',
+
+  // ============================================================================
+  // Diàlegs - Confirmació d'eines
+  // ============================================================================
+  'Do you want to proceed?': 'Voleu continuar?',
+  'Yes, allow once': 'Sí, permetre una vegada',
+  'Allow always': 'Permetre sempre',
+  Yes: 'Sí',
+  No: 'No',
+  'No (esc)': 'No (esc)',
+  'Yes, allow always for this session':
+    'Sí, permetre sempre per a aquesta sessió',
+  'Modify in progress:': 'Modificació en curs:',
+  'Save and close external editor to continue':
+    "Deseu i tanqueu l'editor extern per continuar",
+  'Apply this change?': 'Aplicar aquest canvi?',
+  'Yes, allow always': 'Sí, permetre sempre',
+  'Modify with external editor': 'Modificar amb editor extern',
+  'No, suggest changes (esc)': 'No, suggerir canvis (esc)',
+  "Allow execution of: '{{command}}'?":
+    "Permetre l'execució de: '{{command}}'?",
+  'Yes, allow always ...': 'Sí, permetre sempre...',
+  'Always allow in this project': 'Permetre sempre en aquest projecte',
+  'Always allow {{action}} in this project':
+    'Permetre sempre {{action}} en aquest projecte',
+  'Always allow for this user': 'Permetre sempre per a aquest usuari',
+  'Always allow {{action}} for this user':
+    'Permetre sempre {{action}} per a aquest usuari',
+  'Yes, restore previous mode ({{mode}})':
+    'Sí, restaurar el mode anterior ({{mode}})',
+  'Yes, and auto-accept edits': 'Sí, i acceptar els canvis automàticament',
+  'Yes, and manually approve edits': 'Sí, i aprovar els canvis manualment',
+  'No, keep planning (esc)': 'No, seguir planificant (esc)',
+  'URLs to fetch:': 'URLs a recuperar:',
+  'MCP Server: {{server}}': 'Servidor MCP: {{server}}',
+  'Tool: {{tool}}': 'Eina: {{tool}}',
+  'Allow execution of MCP tool "{{tool}}" from server "{{server}}"?':
+    'Permetre l\'execució de l\'eina MCP "{{tool}}" del servidor "{{server}}"?',
+  'Yes, always allow tool "{{tool}}" from server "{{server}}"':
+    'Sí, permetre sempre l\'eina "{{tool}}" del servidor "{{server}}"',
+  'Yes, always allow all tools from server "{{server}}"':
+    'Sí, permetre sempre totes les eines del servidor "{{server}}"',
+
+  // ============================================================================
+  // Diàlegs - Confirmació de shell
+  // ============================================================================
+  'Shell Command Execution': "Execució d'ordres shell",
+  'A custom command wants to run the following shell commands:':
+    'Una ordre personalitzada vol executar les ordres shell següents:',
+
+  // ============================================================================
+  // Diàlegs - Quota Pro
+  // ============================================================================
+  'Pro quota limit reached for {{model}}.':
+    "S'ha assolit el límit de quota Pro per a {{model}}.",
+  'Change auth (executes the /auth command)':
+    "Canviar autenticació (executa l'ordre /auth)",
+  'Continue with {{model}}': 'Continuar amb {{model}}',
+
+  // ============================================================================
+  // Diàlegs - Benvinguda
+  // ============================================================================
+  'Current Plan:': 'Pla actual:',
+  'Progress: {{done}}/{{total}} tasks completed':
+    'Progrés: {{done}}/{{total}} tasques completades',
+  ', {{inProgress}} in progress': ', {{inProgress}} en curs',
+  'Pending Tasks:': 'Tasques pendents:',
+  'What would you like to do?': 'Què voleu fer?',
+  'Choose how to proceed with your session:':
+    'Trieu com voleu continuar la vostra sessió:',
+  'Start new chat session': 'Iniciar una nova sessió de xat',
+  'Continue previous conversation': 'Continuar la conversa anterior',
+  '👋 Welcome back! (Last updated: {{timeAgo}})':
+    '👋 Benvingut de nou! (Darrera actualització: {{timeAgo}})',
+  '🎯 Overall Goal:': '🎯 Objectiu general:',
+
+  // ============================================================================
+  // Diàlegs - Autenticació
+  // ============================================================================
+  'Get started': 'Comencem',
+  'Select Authentication Method': "Seleccioneu el mètode d'autenticació",
+  'OpenAI API key is required to use OpenAI authentication.':
+    "Cal una clau API d'OpenAI per usar l'autenticació d'OpenAI.",
+  'You must select an auth method to proceed. Press Ctrl+C again to exit.':
+    "Cal seleccionar un mètode d'autenticació per continuar. Premeu Ctrl+C de nou per sortir.",
+  'Terms of Services and Privacy Notice':
+    'Termes de servei i avís de privacitat',
+  'Qwen OAuth': 'Qwen OAuth',
+  'Discontinued — switch to Coding Plan or API Key':
+    'Descontinuat — canvieu a Coding Plan o clau API',
+  'Qwen OAuth free tier was discontinued on 2026-04-15. Run /auth to switch provider.':
+    'El nivell gratuït de Qwen OAuth es va descontinuar el 15-04-2026. Executeu /auth per canviar de proveïdor.',
+  'Qwen OAuth free tier was discontinued on 2026-04-15. Please select Coding Plan or API Key instead.':
+    'El nivell gratuït de Qwen OAuth es va descontinuar el 15-04-2026. Seleccioneu Coding Plan o clau API en el seu lloc.',
+  'Qwen OAuth free tier was discontinued on 2026-04-15. Please select a model from another provider or run /auth to switch.':
+    "El nivell gratuït de Qwen OAuth es va descontinuar el 15-04-2026. Seleccioneu un model d'un altre proveïdor o executeu /auth per canviar.",
+  '\n⚠ Qwen OAuth free tier was discontinued on 2026-04-15. Please select another option.\n':
+    '\n⚠ El nivell gratuït de Qwen OAuth es va descontinuar el 15-04-2026. Seleccioneu una altra opció.\n',
+  'Paid · Up to 6,000 requests/5 hrs · All Alibaba Cloud Coding Plan Models':
+    "De pagament · Fins a 6.000 sol·licituds/5 h · Tots els models del Coding Plan d'Alibaba Cloud",
+  'Alibaba Cloud Coding Plan': "Coding Plan d'Alibaba Cloud",
+  'Bring your own API key': 'Porteu la vostra pròpia clau API',
+  'API-KEY': 'API-KEY',
+  'Use coding plan credentials or your own api-keys/providers.':
+    'Useu les credencials del coding plan o les vostres pròpies claus API/proveïdors.',
+  OpenAI: 'OpenAI',
+  'Failed to login. Message: {{message}}':
+    'Error en iniciar sessió. Missatge: {{message}}',
+  'Authentication is enforced to be {{enforcedType}}, but you are currently using {{currentType}}.':
+    "L'autenticació ha de ser {{enforcedType}}, però actualment esteu usant {{currentType}}.",
+  'Qwen OAuth authentication timed out. Please try again.':
+    "L'autenticació Qwen OAuth ha expirat. Torneu-ho a intentar.",
+  'Qwen OAuth authentication cancelled.':
+    "L'autenticació Qwen OAuth s'ha cancel·lat.",
+  'Qwen OAuth Authentication': 'Autenticació Qwen OAuth',
+  'Please visit this URL to authorize:': 'Visiteu aquesta URL per autoritzar:',
+  'Or scan the QR code below:': 'O escanegeu el codi QR de sota:',
+  'Waiting for authorization': "Esperant l'autorització",
+  'Time remaining:': 'Temps restant:',
+  '(Press ESC or CTRL+C to cancel)': '(Premeu ESC o CTRL+C per cancel·lar)',
+  'Qwen OAuth Authentication Timeout':
+    "Temps d'espera de l'autenticació Qwen OAuth esgotat",
+  'OAuth token expired (over {{seconds}} seconds). Please select authentication method again.':
+    "El token OAuth ha expirat (més de {{seconds}} segons). Seleccioneu el mètode d'autenticació de nou.",
+  'Press any key to return to authentication type selection.':
+    "Premeu qualsevol tecla per tornar a la selecció del tipus d'autenticació.",
+  'Waiting for Qwen OAuth authentication...':
+    "Esperant l'autenticació Qwen OAuth...",
+  'Note: Your existing API key in settings.json will not be cleared when using Qwen OAuth. You can switch back to OpenAI authentication later if needed.':
+    "Nota: La vostra clau API existent a settings.json no s'esborrarà en usar Qwen OAuth. Podeu tornar a l'autenticació d'OpenAI més endavant si cal.",
+  'Note: Your existing API key will not be cleared when using Qwen OAuth.':
+    "Nota: La vostra clau API existent no s'esborrarà en usar Qwen OAuth.",
+  'Authentication timed out. Please try again.':
+    "L'autenticació ha expirat. Torneu-ho a intentar.",
+  'Waiting for auth... (Press ESC or CTRL+C to cancel)':
+    "Esperant l'autenticació... (Premeu ESC o CTRL+C per cancel·lar)",
+  'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
+    "Manca la clau API per a l'autenticació compatible amb OpenAI. Establiu settings.security.auth.apiKey o la variable d'entorn {{envKeyHint}}.",
+  '{{envKeyHint}} environment variable not found.':
+    "La variable d'entorn {{envKeyHint}} no s'ha trobat.",
+  '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
+    "La variable d'entorn {{envKeyHint}} no s'ha trobat. Establiu-la al fitxer .env o a les variables d'entorn.",
+  '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
+    "La variable d'entorn {{envKeyHint}} no s'ha trobat (o establiu settings.security.auth.apiKey). Establiu-la al fitxer .env o a les variables d'entorn.",
+  'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
+    "Manca la clau API per a l'autenticació compatible amb OpenAI. Establiu la variable d'entorn {{envKeyHint}}.",
+  'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
+    'El proveïdor Anthropic no té la baseUrl obligatòria a modelProviders[].baseUrl.',
+  'ANTHROPIC_BASE_URL environment variable not found.':
+    "La variable d'entorn ANTHROPIC_BASE_URL no s'ha trobat.",
+  'Invalid auth method selected.':
+    "S'ha seleccionat un mètode d'autenticació no vàlid.",
+  'Failed to authenticate. Message: {{message}}':
+    'Error en autenticar-se. Missatge: {{message}}',
+  'Authenticated successfully with {{authType}} credentials.':
+    "S'ha autenticat correctament amb les credencials {{authType}}.",
+  'Invalid QWEN_DEFAULT_AUTH_TYPE value: "{{value}}". Valid values are: {{validValues}}':
+    'Valor de QWEN_DEFAULT_AUTH_TYPE no vàlid: "{{value}}". Els valors vàlids són: {{validValues}}',
+  'OpenAI Configuration Required': "Configuració d'OpenAI necessària",
+  'Please enter your OpenAI configuration. You can get an API key from':
+    "Introduïu la vostra configuració d'OpenAI. Podeu obtenir una clau API de",
+  'API Key:': 'Clau API:',
+  'Invalid credentials: {{errorMessage}}':
+    'Credencials no vàlides: {{errorMessage}}',
+  'Failed to validate credentials': 'Error en validar les credencials',
+  'Press Enter to continue, Tab/↑↓ to navigate, Esc to cancel':
+    'Premeu Retorn per continuar, Tab/↑↓ per navegar, Esc per cancel·lar',
+
+  // ============================================================================
+  // Diàlegs - Model
+  // ============================================================================
+  'Select Model': 'Seleccioneu el model',
+  '(Press Esc to close)': '(Premeu Esc per tancar)',
+  'Current (effective) configuration': 'Configuració actual (efectiva)',
+  AuthType: "Tipus d'autenticació",
+  'API Key': 'Clau API',
+  unset: 'no establert',
+  '(default)': '(per defecte)',
+  '(set)': '(establert)',
+  '(not set)': '(no establert)',
+  Modality: 'Modalitat',
+  'Context Window': 'Fin. de context',
+  text: 'text',
+  'text-only': 'només text',
+  image: 'imatge',
+  pdf: 'pdf',
+  audio: 'àudio',
+  video: 'vídeo',
+  'not set': 'no establert',
+  none: 'cap',
+  unknown: 'desconegut',
+  "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
+    "Error en canviar al model '{{modelId}}'.\n\n{{error}}",
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — model híbrid eficient amb un rendiment de codificació líder',
+  'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
+    "L'últim model de visió Qwen d'Alibaba Cloud ModelStudio (versió: qwen3-vl-plus-2025-09-23)",
+
+  // ============================================================================
+  // Diàlegs - Permisos
+  // ============================================================================
+  'Manage folder trust settings':
+    'Gestionar la configuració de confiança de carpetes',
+  'Manage permission rules': 'Gestionar les regles de permisos',
+  Allow: 'Permetre',
+  Ask: 'Preguntar',
+  Deny: 'Denegar',
+  Workspace: 'Espai de treball',
+  "Qwen Code won't ask before using allowed tools.":
+    "Qwen Code no preguntarà abans d'usar les eines permeses.",
+  'Qwen Code will ask before using these tools.':
+    "Qwen Code preguntarà abans d'usar aquestes eines.",
+  'Qwen Code is not allowed to use denied tools.':
+    'Qwen Code no té permís per usar les eines denegades.',
+  'Manage trusted directories for this workspace.':
+    "Gestionar els directoris de confiança d'aquest espai de treball.",
+  'Any use of the {{tool}} tool': "Qualsevol ús de l'eina {{tool}}",
+  "{{tool}} commands matching '{{pattern}}'":
+    "Ordres de {{tool}} que coincideixen amb '{{pattern}}'",
+  'From user settings': "Des de la configuració d'usuari",
+  'From project settings': 'Des de la configuració del projecte',
+  'From session': 'Des de la sessió',
+  'Project settings (local)': 'Configuració del projecte (local)',
+  'Saved in .qwen/settings.local.json': 'Desat a .qwen/settings.local.json',
+  'Project settings': 'Configuració del projecte',
+  'Checked in at .qwen/settings.json': 'Registrat a .qwen/settings.json',
+  'User settings': "Configuració d'usuari",
+  'Saved in at ~/.qwen/settings.json': 'Desat a ~/.qwen/settings.json',
+  'Add a new rule…': 'Afegir una nova regla…',
+  'Add {{type}} permission rule': 'Afegir una regla de permís {{type}}',
+  'Permission rules are a tool name, optionally followed by a specifier in parentheses.':
+    "Les regles de permisos són un nom d'eina, seguit opcionalment d'un especificador entre parèntesis.",
+  'e.g.,': 'p. 
ex.,', + or: 'o', + 'Enter permission rule…': 'Introduïu la regla de permís…', + 'Enter to submit · Esc to cancel': 'Retorn per enviar · Esc per cancel·lar', + 'Where should this rule be saved?': "On s'ha de desar aquesta regla?", + 'Enter to confirm · Esc to cancel': + 'Retorn per confirmar · Esc per cancel·lar', + 'Delete {{type}} rule?': 'Eliminar la regla {{type}}?', + 'Are you sure you want to delete this permission rule?': + 'Esteu segur que voleu eliminar aquesta regla de permisos?', + 'Permissions:': 'Permisos:', + '(←/→ or tab to cycle)': '(←/→ o tab per canviar)', + 'Press ↑↓ to navigate · Enter to select · Type to search · Esc to cancel': + 'Premeu ↑↓ per navegar · Retorn per seleccionar · Escriviu per cercar · Esc per cancel·lar', + 'Search…': 'Cercar…', + 'Use /trust to manage folder trust settings for this workspace.': + "Useu /trust per gestionar la configuració de confiança de carpetes d'aquest espai de treball.", + 'Add directory…': 'Afegir directori…', + 'Add directory to workspace': "Afegir directori a l'espai de treball", + 'Qwen Code can read files in the workspace, and make edits when auto-accept edits is on.': + "Qwen Code pot llegir fitxers a l'espai de treball i fer canvis quan l'acceptació automàtica de canvis està activada.", + 'Qwen Code will be able to read files in this directory and make edits when auto-accept edits is on.': + "Qwen Code podrà llegir fitxers en aquest directori i fer canvis quan l'acceptació automàtica de canvis està activada.", + 'Enter the path to the directory:': 'Introduïu el camí del directori:', + 'Enter directory path…': 'Introduïu el camí del directori…', + 'Tab to complete · Enter to add · Esc to cancel': + 'Tab per completar · Retorn per afegir · Esc per cancel·lar', + 'Remove directory?': 'Eliminar el directori?', + 'Are you sure you want to remove this directory from the workspace?': + "Esteu segur que voleu eliminar aquest directori de l'espai de treball?", + ' (Original working directory)': ' (Directori 
de treball original)', + ' (from settings)': ' (des de la configuració)', + 'Directory does not exist.': 'El directori no existeix.', + 'Path is not a directory.': 'El camí no és un directori.', + 'This directory is already in the workspace.': + "Aquest directori ja és a l'espai de treball.", + 'Already covered by existing directory: {{dir}}': + 'Ja cobert per un directori existent: {{dir}}', + + // ============================================================================ + // Barra d'estat + // ============================================================================ + 'Using:': 'En ús:', + '{{count}} open file': '{{count}} fitxer obert', + '{{count}} open files': '{{count}} fitxers oberts', + '(ctrl+g to view)': '(ctrl+g per veure)', + '{{count}} {{name}} file': '{{count}} fitxer {{name}}', + '{{count}} {{name}} files': '{{count}} fitxers {{name}}', + '{{count}} MCP server': '{{count}} servidor MCP', + '{{count}} MCP servers': '{{count}} servidors MCP', + '{{count}} Blocked': '{{count}} bloquejats', + '(ctrl+t to view)': '(ctrl+t per veure)', + '(ctrl+t to toggle)': '(ctrl+t per canviar)', + 'Press Ctrl+C again to exit.': 'Premeu Ctrl+C de nou per sortir.', + 'Press Ctrl+D again to exit.': 'Premeu Ctrl+D de nou per sortir.', + 'Press Esc again to clear.': 'Premeu Esc de nou per esborrar.', + 'Press ↑ to edit queued messages': 'Premeu ↑ per editar els missatges en cua', + + // ============================================================================ + // Estat MCP + // ============================================================================ + 'No MCP servers configured.': 'No hi ha servidors MCP configurats.', + '⏳ MCP servers are starting up ({{count}} initializing)...': + "⏳ Els servidors MCP s'estan iniciant ({{count}} inicialitzant)...", + 'Note: First startup may take longer. Tool availability will update automatically.': + "Nota: El primer inici pot tardar més. 
La disponibilitat de les eines s'actualitzarà automàticament.", + 'Configured MCP servers:': 'Servidors MCP configurats:', + Ready: 'Preparat', + 'Starting... (first startup may take longer)': + 'Iniciant... (el primer inici pot tardar més)', + Disconnected: 'Desconnectat', + '{{count}} tool': '{{count}} eina', + '{{count}} tools': '{{count}} eines', + '{{count}} prompt': '{{count}} missatge', + '{{count}} prompts': '{{count}} missatges', + '(from {{extensionName}})': '(de {{extensionName}})', + OAuth: 'OAuth', + 'OAuth expired': 'OAuth expirat', + 'OAuth not authenticated': 'OAuth no autenticat', + 'tools and prompts will appear when ready': + 'les eines i els missatges apareixeran quan estiguin a punt', + '{{count}} tools cached': '{{count}} eines en memòria cau', + 'Tools:': 'Eines:', + 'Parameters:': 'Paràmetres:', + 'Prompts:': 'Missatges:', + Blocked: 'Bloquejat', + '💡 Tips:': '💡 Consells:', + Use: 'Useu', + 'to show server and tool descriptions': + 'per mostrar les descripcions del servidor i de les eines', + 'to show tool parameter schemas': + 'per mostrar els esquemes de paràmetres de les eines', + 'to hide descriptions': 'per amagar les descripcions', + 'to authenticate with OAuth-enabled servers': + 'per autenticar-vos amb servidors OAuth', + Press: 'Premeu', + 'to toggle tool descriptions on/off': + 'per activar/desactivar les descripcions de les eines', + "Starting OAuth authentication for MCP server '{{name}}'...": + "Iniciant l'autenticació OAuth per al servidor MCP '{{name}}'...", + 'Restarting MCP servers...': 'Reiniciant els servidors MCP...', + + // ============================================================================ + // Consells d'inici + // ============================================================================ + 'Tips:': 'Consells:', + 'Use /compress when the conversation gets long to summarize history and free up context.': + "Useu /compress quan la conversa sigui llarga per resumir l'historial i alliberar context.", + 'Start a 
fresh idea with /clear or /new; the previous session stays available in history.': + "Comenceu una idea nova amb /clear o /new; la sessió anterior segueix disponible a l'historial.", + 'Use /bug to submit issues to the maintainers when something goes off.': + 'Useu /bug per enviar incidències als mantenidors quan alguna cosa vagi malament.', + 'Switch auth type quickly with /auth.': + "Canvieu ràpidament el tipus d'autenticació amb /auth.", + 'You can run any shell commands from Qwen Code using ! (e.g. !ls).': + 'Podeu executar qualsevol ordre shell des de Qwen Code usant ! (p. ex. !ls).', + 'Type / to open the command popup; Tab autocompletes slash commands and saved prompts.': + "Escriviu / per obrir el menú emergent d'ordres; Tab completa automàticament les ordres de barra i els missatges desats.", + 'You can resume a previous conversation by running qwen --continue or qwen --resume.': + 'Podeu reprendre una conversa anterior executant qwen --continue o qwen --resume.', + 'You can switch permission mode quickly with Shift+Tab or /approval-mode.': + 'Podeu canviar ràpidament el mode de permisos amb Maj+Tab o /approval-mode.', + 'You can switch permission mode quickly with Tab or /approval-mode.': + 'Podeu canviar ràpidament el mode de permisos amb Tab o /approval-mode.', + 'Try /insight to generate personalized insights from your chat history.': + 'Proveu /insight per generar idees personalitzades a partir del vostre historial de xat.', + 'Press Ctrl+O to toggle compact mode — hide tool output and thinking for a cleaner view.': + 'Premeu Ctrl+O per canviar el mode compacte — amagueu la sortida de les eines i el pensament per a una vista més neta.', + 'Add a QWEN.md file to give Qwen Code persistent project context.': + 'Afegiu un fitxer QWEN.md per donar a Qwen Code un context persistent del projecte.', + 'Use /btw to ask a quick side question without disrupting the conversation.': + 'Useu /btw per fer una pregunta ràpida sense interrompre la conversa.', + 
'Context is almost full! Run /compress now or start /new to continue.': + 'El context gairebé és ple! Executeu /compress ara o inicieu /new per continuar.', + 'Context is getting full. Use /compress to free up space.': + "El context s'omple. Useu /compress per alliberar espai.", + 'Long conversation? /compress summarizes history to free context.': + "Conversa llarga? /compress resumeix l'historial per alliberar context.", + + // ============================================================================ + // Pantalla de sortida / Estadístiques + // ============================================================================ + 'Agent powering down. Goodbye!': "L'agent s'apaga. Fins aviat!", + 'To continue this session, run': 'Per continuar aquesta sessió, executeu', + 'Interaction Summary': 'Resum de la interacció', + 'Session ID:': 'ID de sessió:', + 'Tool Calls:': 'Crides a eines:', + 'Success Rate:': "Taxa d'èxit:", + 'User Agreement:': "Acord de l'usuari:", + reviewed: 'revisades', + 'Code Changes:': 'Canvis de codi:', + Performance: 'Rendiment', + 'Wall Time:': 'Temps real:', + 'Agent Active:': 'Agent actiu:', + 'API Time:': "Temps de l'API:", + 'Tool Time:': "Temps d'eines:", + 'Session Stats': 'Estadístiques de la sessió', + 'Model Usage': 'Ús del model', + Reqs: 'Sol·licituds', + 'Input Tokens': "Tokens d'entrada", + 'Output Tokens': 'Tokens de sortida', + 'Savings Highlight:': 'Estalvis destacats:', + 'of input tokens were served from the cache, reducing costs.': + "dels tokens d'entrada s'han servit des de la memòria cau, reduint els costos.", + 'Tip: For a full token breakdown, run `/stats model`.': + 'Consell: Per a un desglossament complet de tokens, executeu `/stats model`.', + 'Model Stats For Nerds': 'Estadístiques del model per a nerds', + 'Tool Stats For Nerds': "Estadístiques d'eines per a nerds", + Metric: 'Mètrica', + API: 'API', + Requests: 'Sol·licituds', + Errors: 'Errors', + 'Avg Latency': 'Latència mitjana', + Tokens: 'Tokens', + Total: 
'Total', + Prompt: 'Missatge', + Cached: 'En memòria cau', + Thoughts: 'Pensaments', + Tool: 'Eina', + Output: 'Sortida', + 'No API calls have been made in this session.': + "No s'ha realitzat cap crida a l'API en aquesta sessió.", + 'Tool Name': "Nom de l'eina", + Calls: 'Crides', + 'Success Rate': "Taxa d'èxit", + 'Avg Duration': 'Durada mitjana', + 'User Decision Summary': "Resum de decisions de l'usuari", + 'Total Reviewed Suggestions:': 'Total de suggeriments revisats:', + ' » Accepted:': ' » Acceptats:', + ' » Rejected:': ' » Rebutjats:', + ' » Modified:': ' » Modificats:', + ' Overall Agreement Rate:': " Taxa d'acord global:", + 'No tool calls have been made in this session.': + "No s'ha realitzat cap crida a eines en aquesta sessió.", + 'Session start time is unavailable, cannot calculate stats.': + "L'hora d'inici de la sessió no està disponible, no es poden calcular les estadístiques.", + + // ============================================================================ + // Migració del format d'ordres + // ============================================================================ + 'Command Format Migration': "Migració del format d'ordres", + 'Found {{count}} TOML command file:': + "S'ha trobat {{count}} fitxer d'ordres TOML:", + 'Found {{count}} TOML command files:': + "S'han trobat {{count}} fitxers d'ordres TOML:", + '... and {{count}} more': '... i {{count}} més', + 'The TOML format is deprecated. Would you like to migrate them to Markdown format?': + 'El format TOML és obsolet. 
Voleu migrar-los al format Markdown?', + '(Backups will be created and original files will be preserved)': + '(Es crearan còpies de seguretat i els fitxers originals es conservaran)', + + // ============================================================================ + // Frases de càrrega + // ============================================================================ + 'Waiting for user confirmation...': "Esperant la confirmació de l'usuari...", + '(esc to cancel, {{time}})': '(esc per cancel·lar, {{time}})', + + // ============================================================================ + // Frases de càrrega enginyoses + // ============================================================================ + WITTY_LOADING_PHRASES: [ + 'Em sento afortunat', + 'Enviant el millor...', + "Setze jutges d'un jutjat mengen fetge d'un penjat.", + 'Navegant pel fong mucilaginós...', + 'Consultant els esperits digitals...', + 'Desperta ferro...', + 'Escalfant els hàmsters de la IA...', + 'Preguntant a la petxina màgica...', + 'Generant una rèplica enginyosa...', + 'Polint els algorismes...', + 'No apresseu la perfecció (ni el meu codi)...', + 'Preparant bytes frescos...', + 'Comptant electrons...', + 'Activant els processadors cognitius...', + "Buscant errors de sintaxi a l'univers...", + "Un moment, optimitzant l'humor...", + 'Barrejant les gràcies...', + 'Desenredant les xarxes neuronals...', + 'Compilant la brillantor...', + 'Carregant gràcia.exe...', + 'Invocant el núvol de saviesa...', + 'Preparant una resposta enginyosa...', + 'Un segon, estic depurant la realitat...', + 'Donant els últims tocs...', + 'Afinant les freqüències còsmiques...', + 'Elaborant una resposta digna de la vostra paciència...', + 'Compilant els 1 i els 0...', + 'Resolent dependències... i crisis existencials...', + 'Desfragmentant records... 
tant de RAM com personals...', + "Reiniciant el mòdul de l'humor...", + 'Emmagatzemant en memòria cau el necessari (principalment mems de gats)...', + 'Optimitzant per a velocitat ridícula', + 'Intercanviant bits... que no ho sàpiguen els bytes...', + 'Recollint brossa... torno de seguida...', + 'Assemblant les internets...', + 'Convertint cafè en codi...', + 'Actualitzant la sintaxi de la realitat...', + 'Reconnectant les sinapsis...', + 'Buscant un punt i coma mal posat...', + 'Engreixant els engranatges de la màquina...', + 'Precalfant els servidors...', + 'Calibrant el condensador de flux...', + 'Activant el motor de improbabilitat...', + 'Canalitzant la Força...', + 'Alineant les estrelles per a una resposta òptima...', + 'I tots ho diem...', + 'Carregant la propera gran idea...', + 'Un moment, estic en el meu element...', + 'Preparant-me per impressionar-vos amb brillantor...', + 'Un moment, polint el meu enginy...', + 'Aguanteu, estic creant una obra mestra...', + "Un moment, depurant l'univers...", + 'Un moment, alineant els píxels...', + "Un segon, optimitzant l'humor...", + 'Un moment, afinant els algorismes...', + 'Velocitat de curvatura activada...', + 'Preparant la següent jugada mestre...', + 'No us espanteu...', + 'Seguint el conill blanc...', + 'La veritat és aquí... en algun lloc...', + 'Bufant al cartutx...', + 'Carregant... Feu un gir de barril!', + 'Esperant la reaparició...', + 'Paciència, pensa que Rodalies encara va més lent...', + "El pastís no és una mentida, simplement s'està carregant...", + 'Tafanejant la pantalla de creació de personatge...', + 'Un moment, trobo el meme adequat...', + "Prement 'A' per continuar...", + 'Pasturant gats digitals...', + 'Polint els píxels...', + 'Buscant un acudit per a la pantalla de càrrega...', + 'Distreu-vos amb aquesta frase enginyosa...', + 'Gairebé a punt... 
probablement...', + 'Els nostres hàmsters treballen tan ràpid com poden...', + 'Donant un copet al cap a Cloudy...', + 'Fent festes al gat...', + 'Endavant les atxes...', + 'Mai no us deixaré anar, mai no us decebré...', + 'Tocant el baix...', + 'Vaig a buscar ratafia...', + 'Vaig a tota velocitat, vaig a tota marxa...', + 'És la vida real? És sols fantasia?...', + 'Tinc un bon pressentiment sobre això...', + 'Tocant el tigre...', + 'Investigant els últims mems...', + 'Pensant com fer això més enginyós...', + 'Hmm... deixeu-me pensar...', + 'Suant la cansalada...', + 'Traient el fetge per la boca...', + "Posant fil a l'agulla...", + 'Un moment, ho tenim a tocar...', + 'Això és bufar i fer ampolles', + 'Què pots fer amb un llapis trencat? Res, no té punta...', + 'Aplicant manteniment percussiu...', + "Buscant l'orientació correcta de l'USB...", + 'Assegurant que el fum màgic quedi dins dels cables...', + 'Intentant sortir del Vim...', + "Girant la roda de l'hàmster...", + 'Això no és un error, és una característica no documentada...', + 'Endavant.', + 'Tornaré... amb una resposta.', + 'El meu altre procés és una TARDIS...', + 'Posant oli als engranatges...', + 'Deixant que els pensaments macerin...', + 'Acabo de recordar on he deixat les claus...', + "Ponderant l'orbe...", + 'He vist coses que no creuríeu... com un usuari que llegeix els missatges de càrrega.', + 'Iniciant la mirada pensativa...', + "Quin és el berenar preferit d'un ordinador? Xips micro.", + 'Per què els programadors de Java porten ulleres? Perquè no veuen en C#.', + 'Carregant el làser... piu piu!', + 'Dividint per zero... és broma!', + 'Buscant un supervisor adult... és a dir, processant.', + 'Fent que faci xup-xup.', + 'Emmarcant... perquè fins i tot les IA necessiten un moment.', + 'Entrellaçant partícules quàntiques per a una resposta més ràpida...', + 'Polint el crom... dels algorismes.', + 'No esteu entretinguts? (Hi estem treballant!)', + 'Invocant els follets del codi... 
per ajudar, és clar.', + 'Esperant que acabi el so del mòdem de marcació...', + "Recalibrant el mesurador de l'humor.", + 'La meva altra pantalla de càrrega és fins i tot més divertida.', + 'Estic bastant segur que hi ha un gat caminant per algun teclat...', + 'Millorant... millorant... encara carregant.', + "No és un error, és una característica... d'aquesta pantalla de càrrega.", + 'Heu provat apagar-la i tornar-la a encendre? (La pantalla de càrrega, no jo.)', + 'Construint pilons addicionals...', + ], + + // ============================================================================ + // Entrada de configuració d'extensions + // ============================================================================ + 'Enter value...': 'Introduïu el valor...', + 'Enter sensitive value...': 'Introduïu el valor sensible...', + 'Press Enter to submit, Escape to cancel': + 'Premeu Retorn per enviar, Esc per cancel·lar', + + // ============================================================================ + // Eina de migració d'ordres + // ============================================================================ + 'Markdown file already exists: {(unknown)}': + 'El fitxer Markdown ja existeix: {(unknown)}', + 'TOML Command Format Deprecation Notice': + "Avís d'obsolescència del format d'ordres TOML", + 'Found {{count}} command file(s) in TOML format:': + "S'ha(n) trobat {{count}} fitxer(s) d'ordres en format TOML:", + 'The TOML format for commands is being deprecated in favor of Markdown format.': + "El format TOML per a ordres s'està fent obsolet en favor del format Markdown.", + 'Markdown format is more readable and easier to edit.': + "El format Markdown és més llegible i fàcil d'editar.", + 'You can migrate these files automatically using:': + 'Podeu migrar aquests fitxers automàticament usant:', + 'Or manually convert each file:': 'O convertiu cada fitxer manualment:', + 'TOML: prompt = "..." / description = "..."': + 'TOML: prompt = "..." 
/ description = "..."', + 'Markdown: YAML frontmatter + content': + 'Markdown: capçalera YAML + contingut', + 'The migration tool will:': "L'eina de migració farà:", + 'Convert TOML files to Markdown': 'Convertir fitxers TOML a Markdown', + 'Create backups of original files': + 'Crear còpies de seguretat dels fitxers originals', + 'Preserve all command functionality': + 'Preservar tota la funcionalitat de les ordres', + 'TOML format will continue to work for now, but migration is recommended.': + 'El format TOML seguirà funcionant de moment, però es recomana la migració.', + + // ============================================================================ + // Extensions - Ordre d'explorar + // ============================================================================ + 'Open extensions page in your browser': + "Obrir la pàgina d'extensions al vostre navegador", + 'Unknown extensions source: {{source}}.': + "Font d'extensions desconeguda: {{source}}.", + 'Would open extensions page in your browser: {{url}} (skipped in test environment)': + "Obriria la pàgina d'extensions al vostre navegador: {{url}} (omès en entorn de proves)", + 'View available extensions at {{url}}': + 'Veure les extensions disponibles a {{url}}', + 'Opening extensions page in your browser: {{url}}': + "Obrint la pàgina d'extensions al vostre navegador: {{url}}", + 'Failed to open browser. Check out the extensions gallery at {{url}}': + "Error en obrir el navegador. 
Visiteu la galeria d'extensions a {{url}}", + + // ============================================================================ + // Reintents / Límit de velocitat + // ============================================================================ + 'Rate limit error: {{reason}}': 'Error de límit de velocitat: {{reason}}', + 'Retrying in {{seconds}} seconds… (attempt {{attempt}}/{{maxRetries}})': + 'Reintentant en {{seconds}} segons… (intent {{attempt}}/{{maxRetries}})', + 'Press Ctrl+Y to retry': 'Premeu Ctrl+Y per reintentar', + 'No failed request to retry.': + 'No hi ha cap sol·licitud fallida per reintentar.', + 'to retry last request': "per reintentar l'última sol·licitud", + + // ============================================================================ + // Autenticació del Coding Plan + // ============================================================================ + 'API key cannot be empty.': 'La clau API no pot estar buida.', + 'Invalid API key. Coding Plan API keys start with "sk-sp-". Please check.': + 'Clau API no vàlida. Les claus API del Coding Plan comencen per "sk-sp-". Comproveu-la.', + 'You can get your Coding Plan API key here': + 'Podeu obtenir la vostra clau API del Coding Plan aquí', + 'API key is stored in settings.env. You can migrate it to a .env file for better security.': + "La clau API s'emmagatzema a settings.env. Podeu migrar-la a un fitxer .env per a una millor seguretat.", + 'New model configurations are available for Alibaba Cloud Coding Plan. Update now?': + "Hi ha noves configuracions de model disponibles per al Coding Plan d'Alibaba Cloud. Actualitzeu ara?", + 'Coding Plan configuration updated successfully. New models are now available.': + "La configuració del Coding Plan s'ha actualitzat correctament. Ara hi ha nous models disponibles.", + 'Coding Plan API key not found. Please re-authenticate with Coding Plan.': + "No s'ha trobat la clau API del Coding Plan. 
Torneu a autenticar-vos amb el Coding Plan.", + 'Failed to update Coding Plan configuration: {{message}}': + 'Error en actualitzar la configuració del Coding Plan: {{message}}', + + // ============================================================================ + // Configuració de clau API personalitzada + // ============================================================================ + 'You can configure your API key and models in settings.json': + 'Podeu configurar la vostra clau API i els models a settings.json', + 'Refer to the documentation for setup instructions': + 'Consulteu la documentació per a les instruccions de configuració', + + // ============================================================================ + // Diàleg d'autenticació - Títols i etiquetes + // ============================================================================ + 'Coding Plan': 'Coding Plan', + "Paste your api key of ModelStudio Coding Plan and you're all set!": + 'Enganxeu la vostra clau API del Coding Plan de ModelStudio i ja esteu a punt!', + Custom: 'Personalitzat', + 'More instructions about configuring `modelProviders` manually.': + 'Més instruccions sobre la configuració manual de `modelProviders`.', + 'Select API-KEY configuration mode:': + 'Seleccioneu el mode de configuració de la clau API:', + '(Press Escape to go back)': '(Premeu Esc per tornar enrere)', + '(Press Enter to submit, Escape to cancel)': + '(Premeu Retorn per enviar, Esc per cancel·lar)', + 'Select Region for Coding Plan': 'Seleccioneu la regió per al Coding Plan', + 'Choose based on where your account is registered': + "Trieu en funció d'on teniu registrat el compte", + 'Enter Coding Plan API Key': 'Introduïu la clau API del Coding Plan', + + // ============================================================================ + // Actualitzacions internacionals del Coding Plan + // ============================================================================ + 'New model configurations are available for 
{{region}}. Update now?': + 'Hi ha noves configuracions de model disponibles per a {{region}}. Actualitzeu ara?', + '{{region}} configuration updated successfully. Model switched to "{{model}}".': + 'La configuració de {{region}} s\'ha actualitzat correctament. El model ha canviat a "{{model}}".', + 'Authenticated successfully with {{region}}. API key and model configs saved to settings.json (backed up).': + "S'ha autenticat correctament amb {{region}}. La clau API i les configuracions del model s'han desat a settings.json (amb còpia de seguretat).", + + // ============================================================================ + // Component d'ús del context + // ============================================================================ + 'Context Usage': 'Ús del context', + '% used': '% usat', + '% context used': '% del context usat', + 'Context exceeds limit! Use /compress or /clear to reduce.': + 'El context supera el límit! Useu /compress o /clear per reduir-lo.', + 'Use /compress or /clear': 'Useu /compress o /clear', + 'No API response yet. Send a message to see actual usage.': + "Encara no hi ha cap resposta de l'API. Envieu un missatge per veure l'ús real.", + 'Estimated pre-conversation overhead': + 'Càrrega estimada prèvia a la conversa', + 'Context window': 'Finestra de context', + tokens: 'tokens', + Used: 'Usat', + Free: 'Lliure', + 'Autocompact buffer': 'Memòria intermèdia de compactació automàtica', + 'Usage by category': 'Ús per categoria', + 'System prompt': 'Missatge del sistema', + 'Built-in tools': 'Eines integrades', + 'MCP tools': 'Eines MCP', + 'Memory files': 'Fitxers de memòria', + Skills: 'Habilitats', + Messages: 'Missatges', + 'Show context window usage breakdown.': + "Mostrar el desglossament de l'ús de la finestra de context.", + 'Run /context detail for per-item breakdown.': + 'Executeu /context detail per a un desglossament per element.', + 'Show context window usage breakdown. 
Use "/context detail" for per-item breakdown.': + 'Mostrar el desglossament de l\'ús de la finestra de context. Useu "/context detail" per a un desglossament per element.', + 'body loaded': 'cos carregat', + memory: 'memòria', + '{{region}} configuration updated successfully.': + "La configuració de {{region}} s'ha actualitzat correctament.", + 'Authenticated successfully with {{region}}. API key and model configs saved to settings.json.': + "S'ha autenticat correctament amb {{region}}. La clau API i les configuracions del model s'han desat a settings.json.", + 'Tip: Use /model to switch between available Coding Plan models.': + 'Consell: Useu /model per canviar entre els models del Coding Plan disponibles.', + + // ============================================================================ + // Eina de preguntes a l'usuari + // ============================================================================ + 'Please answer the following question(s):': + 'Responeu la(es) pregunta(es) següent(s):', + 'Cannot ask user questions in non-interactive mode. Please run in interactive mode to use this tool.': + "No es poden fer preguntes a l'usuari en mode no interactiu. 
Executeu en mode interactiu per usar aquesta eina.", + 'User declined to answer the questions.': + "L'usuari ha declinat respondre les preguntes.", + 'User has provided the following answers:': + "L'usuari ha proporcionat les respostes següents:", + 'Failed to process user answers:': + "Error en processar les respostes de l'usuari:", + 'Type something...': 'Escriviu alguna cosa...', + Submit: 'Enviar', + 'Submit answers': 'Enviar respostes', + Cancel: 'Cancel·lar', + 'Your answers:': 'Les vostres respostes:', + '(not answered)': '(sense resposta)', + 'Ready to submit your answers?': + 'Preparats per enviar les vostres respostes?', + '↑/↓: Navigate | ←/→: Switch tabs | Enter: Select': + '↑/↓: Navegar | ←/→: Canviar pestanyes | Retorn: Seleccionar', + '↑/↓: Navigate | ←/→: Switch tabs | Space/Enter: Toggle | Esc: Cancel': + '↑/↓: Navegar | ←/→: Canviar pestanyes | Espai/Retorn: Canviar | Esc: Cancel·lar', + '↑/↓: Navigate | Space/Enter: Toggle | Esc: Cancel': + '↑/↓: Navegar | Espai/Retorn: Canviar | Esc: Cancel·lar', + '↑/↓: Navigate | Enter: Select | Esc: Cancel': + '↑/↓: Navegar | Retorn: Seleccionar | Esc: Cancel·lar', + + // ============================================================================ + // Ordres - Autenticació + // ============================================================================ + 'Configure Qwen authentication information with Qwen-OAuth or Alibaba Cloud Coding Plan': + "Configurar la informació d'autenticació de Qwen amb Qwen-OAuth o el Coding Plan d'Alibaba Cloud", + 'Authenticate using Qwen OAuth': 'Autenticar-se usant Qwen OAuth', + 'Authenticate using Alibaba Cloud Coding Plan': + "Autenticar-se usant el Coding Plan d'Alibaba Cloud", + 'Region for Coding Plan (china/global)': + 'Regió per al Coding Plan (china/global)', + 'API key for Coding Plan': 'Clau API per al Coding Plan', + 'Show current authentication status': "Mostrar l'estat d'autenticació actual", + 'Authentication completed successfully.': + "L'autenticació s'ha 
completat correctament.", + 'Starting Qwen OAuth authentication...': + "Iniciant l'autenticació Qwen OAuth...", + 'Successfully authenticated with Qwen OAuth.': + "S'ha autenticat correctament amb Qwen OAuth.", + 'Failed to authenticate with Qwen OAuth: {{error}}': + 'Error en autenticar-se amb Qwen OAuth: {{error}}', + 'Processing Alibaba Cloud Coding Plan authentication...': + "Processant l'autenticació del Coding Plan d'Alibaba Cloud...", + 'Successfully authenticated with Alibaba Cloud Coding Plan.': + "S'ha autenticat correctament amb el Coding Plan d'Alibaba Cloud.", + 'Failed to authenticate with Coding Plan: {{error}}': + 'Error en autenticar-se amb el Coding Plan: {{error}}', + '中国 (China)': '中国 (China)', + '阿里云百炼 (aliyun.com)': '阿里云百炼 (aliyun.com)', + Global: 'Global', + 'Alibaba Cloud (alibabacloud.com)': 'Alibaba Cloud (alibabacloud.com)', + 'Select region for Coding Plan:': 'Seleccioneu la regió per al Coding Plan:', + 'Enter your Coding Plan API key: ': + 'Introduïu la vostra clau API del Coding Plan: ', + 'Select authentication method:': "Seleccioneu el mètode d'autenticació:", + '\n=== Authentication Status ===\n': "\n=== Estat d'autenticació ===\n", + '⚠️ No authentication method configured.\n': + "⚠️ Cap mètode d'autenticació configurat.\n", + 'Run one of the following commands to get started:\n': + 'Executeu una de les ordres següents per començar:\n', + ' qwen auth qwen-oauth - Authenticate with Qwen OAuth (discontinued)': + ' qwen auth qwen-oauth - Autenticar-se amb Qwen OAuth (descontinuat)', + ' qwen auth coding-plan - Authenticate with Alibaba Cloud Coding Plan\n': + " qwen auth coding-plan - Autenticar-se amb el Coding Plan d'Alibaba Cloud\n", + 'Or simply run:': 'O simplement executeu:', + ' qwen auth - Interactive authentication setup\n': + " qwen auth - Configuració interactiva de l'autenticació\n", + '✓ Authentication Method: Qwen OAuth': "✓ Mètode d'autenticació: Qwen OAuth", + ' Type: Free tier (discontinued 2026-04-15)': + ' Tipus: 
Nivell gratuït (descontinuat el 15-04-2026)', + ' Limit: No longer available': ' Límit: Ja no disponible', + 'Qwen OAuth free tier was discontinued on 2026-04-15. Run /auth to switch to Coding Plan, OpenRouter, Fireworks AI, or another provider.': + 'El nivell gratuït de Qwen OAuth es va descontinuar el 15-04-2026. Executeu /auth per canviar al Coding Plan, OpenRouter, Fireworks AI o un altre proveïdor.', + ' Models: Qwen latest models\n': ' Models: Últims models Qwen\n', + '✓ Authentication Method: Alibaba Cloud Coding Plan': + "✓ Mètode d'autenticació: Coding Plan d'Alibaba Cloud", + '中国 (China) - 阿里云百炼': '中国 (China) - 阿里云百炼', + 'Global - Alibaba Cloud': 'Global - Alibaba Cloud', + ' Region: {{region}}': ' Regió: {{region}}', + ' Current Model: {{model}}': ' Model actual: {{model}}', + ' Config Version: {{version}}': ' Versió de configuració: {{version}}', + ' Status: API key configured\n': ' Estat: Clau API configurada\n', + '⚠️ Authentication Method: Alibaba Cloud Coding Plan (Incomplete)': + "⚠️ Mètode d'autenticació: Coding Plan d'Alibaba Cloud (Incomplet)", + ' Issue: API key not found in environment or settings\n': + " Problema: Clau API no trobada a l'entorn o la configuració\n", + ' Run `qwen auth coding-plan` to re-configure.\n': + ' Executeu `qwen auth coding-plan` per tornar a configurar.\n', + '✓ Authentication Method: {{type}}': "✓ Mètode d'autenticació: {{type}}", + ' Status: Configured\n': ' Estat: Configurat\n', + 'Failed to check authentication status: {{error}}': + "Error en comprovar l'estat d'autenticació: {{error}}", + 'Select an option:': 'Seleccioneu una opció:', + 'Raw mode not available. Please run in an interactive terminal.': + 'El mode raw no està disponible. 
Executeu en un terminal interactiu.', + '(Use ↑ ↓ arrows to navigate, Enter to select, Ctrl+C to exit)\n': + '(Useu les fletxes ↑ ↓ per navegar, Retorn per seleccionar, Ctrl+C per sortir)\n', + compact: 'compacte', + 'compact mode: on (Ctrl+O off)': + 'mode compacte: activat (Ctrl+O per desactivar)', + 'Hide tool output and thinking for a cleaner view (toggle with Ctrl+O).': + 'Amagueu la sortida de les eines i el pensament per a una vista més neta (canvieu amb Ctrl+O).', + 'Press Ctrl+O to show full tool output': + 'Premeu Ctrl+O per mostrar la sortida completa de les eines', + + 'Switch to plan mode or exit plan mode': + 'Canviar al mode de planificació o sortir del mode de planificació', + 'Exited plan mode. Previous approval mode restored.': + "S'ha sortit del mode de planificació. S'ha restaurat el mode d'aprovació anterior.", + 'Enabled plan mode. The agent will analyze and plan without executing tools.': + "S'ha activat el mode de planificació. L'agent analitzarà i planificarà sense executar eines.", + 'Already in plan mode. Use "/plan exit" to exit plan mode.': + 'Ja esteu en mode de planificació. Useu "/plan exit" per sortir del mode de planificació.', + 'Not in plan mode. Use "/plan" to enter plan mode first.': + 'No esteu en mode de planificació. Useu "/plan" per entrar al mode de planificació primer.', + + "Set up Qwen Code's status line UI": + "Configurar la interfície de la barra d'estat de Qwen Code", +}; diff --git a/packages/vscode-ide-companion/schemas/settings.schema.json b/packages/vscode-ide-companion/schemas/settings.schema.json index 13e230672..ba5d3be73 100644 --- a/packages/vscode-ide-companion/schemas/settings.schema.json +++ b/packages/vscode-ide-companion/schemas/settings.schema.json @@ -83,7 +83,7 @@ "default": false }, "language": { - "description": "The language for the user interface. Use \"auto\" to detect from system settings. 
You can also use custom language codes (e.g., \"es\", \"fr\") by placing JS language files in ~/.qwen/locales/ (e.g., ~/.qwen/locales/es.js). Options: auto, en, zh-TW, zh, ru, de, ja, pt, fr", + "description": "The language for the user interface. Use \"auto\" to detect from system settings. You can also use custom language codes (e.g., \"es\", \"fr\") by placing JS language files in ~/.qwen/locales/ (e.g., ~/.qwen/locales/es.js). Options: auto, en, zh-TW, zh, ru, de, ja, pt, fr, ca", "enum": [ "auto", "en", @@ -93,7 +93,8 @@ "de", "ja", "pt", - "fr" + "fr", + "ca" ], "default": "auto" }, From 04afc610ea1e212b60fdcc5a8cba0c23d10722ce Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=98=93=E8=89=AF?= <1204183885@qq.com> Date: Sun, 26 Apr 2026 22:27:54 +0800 Subject: [PATCH 17/36] fix(vscode-companion): slash command completion not triggering after message submit (#3609) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(vscode-companion): slash command completion not triggering after message submit After submitting a message, the input field is cleared with a zero-width space (\u200B) to maintain contentEditable height. When the user then types "/", the DOM content becomes "\u200B/" and the trigger character lands at position 1 instead of 0. The word boundary check only recognized regular space and newline, so the zero-width space was rejected as an invalid boundary — preventing the completion popup from appearing. Add \u200B to the valid word boundary characters so "/" and "@" triggers work correctly after message submission without requiring an extra backspace. Closes #3592 * refactor(webui): extract zero-width space placeholder into shared constant Replace scattered `\u200B` magic strings with a shared `ZERO_WIDTH_SPACE` constant and `stripZeroWidthSpaces()` helper exported from @qwen-code/webui. 
This also improves the slash command completion fix: instead of adding \u200B to the word boundary check, strip it at the source in handleInput (consistent with InputForm's onInput handler) and clamp the cursor position to the stripped text length. Closes #3592 * test: add tests for zero-width space handling and shouldSendMessage - Add unit tests for ZERO_WIDTH_SPACE constant and stripZeroWidthSpaces helper (via @qwen-code/webui import) - Add shouldSendMessage tests covering empty, whitespace, zero-width space, and attachment scenarios - Add parseExportSlashCommand tests for zero-width space input * fix(test): use correct ImageAttachment type in shouldSendMessage tests Fix CI lint failure by providing all required ImageAttachment fields (id, name, type, size, data, timestamp) instead of non-existent mediaType property. --- .../src/services/sessionExportService.test.ts | 9 ++ .../src/services/sessionExportService.ts | 3 +- .../webview/handlers/SessionMessageHandler.ts | 3 +- .../src/webview/hooks/useCompletionTrigger.ts | 13 +- .../webview/hooks/useMessageSubmit.test.ts | 131 ++++++++++++++++++ .../src/webview/hooks/useMessageSubmit.ts | 13 +- .../webui/src/components/layout/InputForm.tsx | 7 +- packages/webui/src/index.ts | 4 + packages/webui/src/utils/inputPlaceholder.ts | 25 ++++ 9 files changed, 192 insertions(+), 16 deletions(-) create mode 100644 packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.test.ts create mode 100644 packages/webui/src/utils/inputPlaceholder.ts diff --git a/packages/vscode-ide-companion/src/services/sessionExportService.test.ts b/packages/vscode-ide-companion/src/services/sessionExportService.test.ts index e67f54071..e9679e3f9 100644 --- a/packages/vscode-ide-companion/src/services/sessionExportService.test.ts +++ b/packages/vscode-ide-companion/src/services/sessionExportService.test.ts @@ -128,6 +128,15 @@ describe('sessionExportService', () => { 'Unsupported /export format. 
Use /export html, /export md, /export json, or /export jsonl.', ); }); + + it('strips leading zero-width space placeholder before parsing', () => { + expect(parseExportSlashCommand('\u200B/export html')).toBe('html'); + expect(parseExportSlashCommand('\u200B/export md')).toBe('md'); + }); + + it('returns null for zero-width space only input', () => { + expect(parseExportSlashCommand('\u200B')).toBeNull(); + }); }); describe('exportSessionToFile', () => { diff --git a/packages/vscode-ide-companion/src/services/sessionExportService.ts b/packages/vscode-ide-companion/src/services/sessionExportService.ts index 3f4ab50ea..7f202c2f1 100644 --- a/packages/vscode-ide-companion/src/services/sessionExportService.ts +++ b/packages/vscode-ide-companion/src/services/sessionExportService.ts @@ -23,6 +23,7 @@ import { isSessionExportFormat, type SessionExportFormat, } from '../utils/exportSlashCommand.js'; +import { stripZeroWidthSpaces } from '@qwen-code/webui'; export { EXPORT_SESSION_FORMATS as SESSION_EXPORT_FORMATS }; export type { SessionExportFormat } from '../utils/exportSlashCommand.js'; @@ -40,7 +41,7 @@ const EXPORT_CONFIG = { export function parseExportSlashCommand( text: string, ): SessionExportFormat | null { - const trimmed = text.replace(/\u200B/g, '').trim(); + const trimmed = stripZeroWidthSpaces(text).trim(); if (!trimmed.startsWith('/')) { return null; } diff --git a/packages/vscode-ide-companion/src/webview/handlers/SessionMessageHandler.ts b/packages/vscode-ide-companion/src/webview/handlers/SessionMessageHandler.ts index ef73630ae..1c19f9db4 100644 --- a/packages/vscode-ide-companion/src/webview/handlers/SessionMessageHandler.ts +++ b/packages/vscode-ide-companion/src/webview/handlers/SessionMessageHandler.ts @@ -15,6 +15,7 @@ import { } from '../utils/imageHandler.js'; import { isAuthenticationRequiredError } from '../../utils/authErrors.js'; import { getErrorMessage } from '../../utils/errorMessage.js'; +import { stripZeroWidthSpaces } from 
'@qwen-code/webui'; import { exportSessionToFile, parseExportSlashCommand, @@ -392,7 +393,7 @@ export class SessionMessageHandler extends BaseMessageHandler { // Guard: do not process empty or whitespace-only messages. // This prevents ghost user-message bubbles when slash-command completions // or model-selector interactions clear the input but still trigger a submit. - const trimmedText = text.replace(/\u200B/g, '').trim(); + const trimmedText = stripZeroWidthSpaces(text).trim(); const hasAttachments = (attachments?.length ?? 0) > 0; if (!trimmedText && !hasAttachments) { console.warn('[SessionMessageHandler] Ignoring empty message'); diff --git a/packages/vscode-ide-companion/src/webview/hooks/useCompletionTrigger.ts b/packages/vscode-ide-companion/src/webview/hooks/useCompletionTrigger.ts index d61bd23c0..76aeb74eb 100644 --- a/packages/vscode-ide-companion/src/webview/hooks/useCompletionTrigger.ts +++ b/packages/vscode-ide-companion/src/webview/hooks/useCompletionTrigger.ts @@ -8,6 +8,7 @@ import type { RefObject } from 'react'; import { useState, useEffect, useCallback, useRef, useMemo } from 'react'; import type { CompletionItem } from '../../types/completionItemTypes.js'; import { shouldAllowCompletionQuery } from '../utils/slashCommandUtils.js'; +import { stripZeroWidthSpaces } from '@qwen-code/webui'; interface CompletionTriggerState { isOpen: boolean; @@ -252,7 +253,9 @@ export function useCompletionTrigger( }; const handleInput = async () => { - const text = inputElement.textContent || ''; + // Strip zero-width space placeholders before processing, consistent + // with InputForm's onInput handler that strips them for React state. 
+ const text = stripZeroWidthSpaces(inputElement.textContent || ''); const selection = window.getSelection(); if (!selection || selection.rangeCount === 0) { console.log('[useCompletionTrigger] No selection or rangeCount === 0'); @@ -304,8 +307,12 @@ export function useCompletionTrigger( // Find trigger character before cursor // Use text length if cursorPosition is 0 but we have text (edge case for first character) - const effectiveCursorPosition = - cursorPosition === 0 && text.length > 0 ? text.length : cursorPosition; + // Clamp to text.length because the DOM cursor offset may exceed the + // stripped text length (e.g. after removing a leading zero-width space). + const effectiveCursorPosition = Math.min( + cursorPosition === 0 && text.length > 0 ? text.length : cursorPosition, + text.length, + ); const textBeforeCursor = text.substring(0, effectiveCursorPosition); const lastAtMatch = textBeforeCursor.lastIndexOf('@'); diff --git a/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.test.ts b/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.test.ts new file mode 100644 index 000000000..dd0479dbb --- /dev/null +++ b/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.test.ts @@ -0,0 +1,131 @@ +/** + * @license + * Copyright 2025 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, expect, it } from 'vitest'; +import { ZERO_WIDTH_SPACE, stripZeroWidthSpaces } from '@qwen-code/webui'; +import { shouldSendMessage } from './useMessageSubmit.js'; + +describe('ZERO_WIDTH_SPACE and stripZeroWidthSpaces', () => { + it('ZERO_WIDTH_SPACE is U+200B', () => { + expect(ZERO_WIDTH_SPACE).toBe('\u200B'); + expect(ZERO_WIDTH_SPACE.length).toBe(1); + }); + + it('strips a single leading zero-width space', () => { + expect(stripZeroWidthSpaces('\u200B')).toBe(''); + }); + + it('strips zero-width space before real text', () => { + expect(stripZeroWidthSpaces('\u200B/help')).toBe('/help'); + }); + + it('strips 
multiple zero-width spaces', () => { + expect(stripZeroWidthSpaces('\u200Bhello\u200B world\u200B')).toBe( + 'hello world', + ); + }); + + it('returns unchanged text when no zero-width spaces present', () => { + expect(stripZeroWidthSpaces('hello world')).toBe('hello world'); + }); + + it('returns empty string for empty input', () => { + expect(stripZeroWidthSpaces('')).toBe(''); + }); + + it('preserves other whitespace characters', () => { + expect(stripZeroWidthSpaces('\u200B \t\n')).toBe(' \t\n'); + }); +}); + +describe('shouldSendMessage', () => { + const defaults = { + isStreaming: false, + isWaitingForResponse: false, + }; + + it('returns false when streaming', () => { + expect( + shouldSendMessage({ ...defaults, inputText: 'hello', isStreaming: true }), + ).toBe(false); + }); + + it('returns false when waiting for response', () => { + expect( + shouldSendMessage({ + ...defaults, + inputText: 'hello', + isWaitingForResponse: true, + }), + ).toBe(false); + }); + + it('returns true for non-empty text', () => { + expect(shouldSendMessage({ ...defaults, inputText: 'hello' })).toBe(true); + }); + + it('returns false for empty text', () => { + expect(shouldSendMessage({ ...defaults, inputText: '' })).toBe(false); + }); + + it('returns false for whitespace-only text', () => { + expect(shouldSendMessage({ ...defaults, inputText: ' ' })).toBe(false); + }); + + it('returns false when input is only a zero-width space placeholder', () => { + expect(shouldSendMessage({ ...defaults, inputText: '\u200B' })).toBe(false); + }); + + it('returns false when input is zero-width space plus whitespace', () => { + expect(shouldSendMessage({ ...defaults, inputText: '\u200B ' })).toBe( + false, + ); + }); + + it('returns true when input has real text after zero-width space', () => { + expect(shouldSendMessage({ ...defaults, inputText: '\u200Bhello' })).toBe( + true, + ); + }); + + it('returns true when input has only attachments and no text', () => { + expect( + shouldSendMessage({ + 
...defaults, + inputText: '', + attachedImages: [ + { + id: '1', + name: 'test.png', + type: 'image/png', + size: 100, + data: 'base64data', + timestamp: Date.now(), + }, + ], + }), + ).toBe(true); + }); + + it('returns true when input has only attachments and zero-width space', () => { + expect( + shouldSendMessage({ + ...defaults, + inputText: '\u200B', + attachedImages: [ + { + id: '1', + name: 'test.png', + type: 'image/png', + size: 100, + data: 'base64data', + timestamp: Date.now(), + }, + ], + }), + ).toBe(true); + }); +}); diff --git a/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.ts b/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.ts index 517d2db4e..4fad1df0f 100644 --- a/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.ts +++ b/packages/vscode-ide-companion/src/webview/hooks/useMessageSubmit.ts @@ -8,6 +8,7 @@ import { useCallback } from 'react'; import type { VSCodeAPI } from './useVSCode.js'; import { getRandomLoadingMessage } from '../../constants/loadingMessages.js'; import type { ImageAttachment } from './useImage.js'; +import { ZERO_WIDTH_SPACE, stripZeroWidthSpaces } from '@qwen-code/webui'; interface UseMessageSubmitProps { vscode: VSCodeAPI; @@ -49,7 +50,7 @@ export const shouldSendMessage = ({ return false; } - const hasText = inputText.replace(/\u200B/g, '').trim().length > 0; + const hasText = stripZeroWidthSpaces(inputText).trim().length > 0; const hasAttachments = (attachedImages?.length ?? 
0) > 0; return hasText || hasAttachments; }; @@ -93,7 +94,7 @@ export const useMessageSubmit = ({ if (textToSend.trim() === '/account') { setInputText(''); if (inputFieldRef.current) { - inputFieldRef.current.textContent = '\u200B'; + inputFieldRef.current.textContent = ZERO_WIDTH_SPACE; inputFieldRef.current.setAttribute('data-empty', 'true'); } vscode.postMessage({ type: 'getAccountInfo', data: {} }); @@ -104,9 +105,7 @@ export const useMessageSubmit = ({ if (textToSend.trim() === '/login') { setInputText(''); if (inputFieldRef.current) { - // Use a zero-width space to maintain the height of the contentEditable element - inputFieldRef.current.textContent = '\u200B'; - // Set the data-empty attribute to show the placeholder + inputFieldRef.current.textContent = ZERO_WIDTH_SPACE; inputFieldRef.current.setAttribute('data-empty', 'true'); } vscode.postMessage({ @@ -194,9 +193,7 @@ export const useMessageSubmit = ({ setInputText(''); if (inputFieldRef.current) { - // Use a zero-width space to maintain the height of the contentEditable element - inputFieldRef.current.textContent = '\u200B'; - // Set the data-empty attribute to show the placeholder + inputFieldRef.current.textContent = ZERO_WIDTH_SPACE; inputFieldRef.current.setAttribute('data-empty', 'true'); } fileContext.clearFileReferences(); diff --git a/packages/webui/src/components/layout/InputForm.tsx b/packages/webui/src/components/layout/InputForm.tsx index ca28e996e..09a8a71e7 100644 --- a/packages/webui/src/components/layout/InputForm.tsx +++ b/packages/webui/src/components/layout/InputForm.tsx @@ -23,6 +23,7 @@ import { ContextIndicator } from './ContextIndicator.js'; import type { CompletionItem } from '../../types/completion.js'; import type { ContextUsage } from './ContextIndicator.js'; import type { FollowupState } from '../../types/followup.js'; +import { stripZeroWidthSpaces } from '../../utils/inputPlaceholder.js'; /** * Edit mode display information @@ -203,7 +204,7 @@ export const InputForm: FC = 
({ }) => { const composerDisabled = isStreaming || isWaitingForResponse; const hasDraftContent = - canSubmit ?? inputText.replace(/\u200B/g, '').trim().length > 0; + canSubmit ?? stripZeroWidthSpaces(inputText).trim().length > 0; const completionItemsResolved = completionItems ?? []; const completionActive = completionIsOpen && @@ -337,14 +338,14 @@ export const InputForm: FC = ({ // Use a data flag so CSS can show placeholder even if the browser // inserts an invisible
<br> into contentEditable (so :empty no longer matches) data-empty={ - inputText.replace(/\u200B/g, '').trim().length === 0 + stripZeroWidthSpaces(inputText).trim().length === 0 ? 'true' : 'false' } onInput={(e) => { const target = e.target as HTMLDivElement; // Filter out zero-width space that we use to maintain height - const text = target.textContent?.replace(/\u200B/g, '') || ''; + const text = stripZeroWidthSpaces(target.textContent ?? ''); onInputChange(text); // Dismiss follow-up suggestion when user starts typing if (hasFollowup && !inputText && text) { diff --git a/packages/webui/src/index.ts b/packages/webui/src/index.ts index 06acc35ac..56bbc89aa 100644 --- a/packages/webui/src/index.ts +++ b/packages/webui/src/index.ts @@ -260,6 +260,10 @@ export type { CompletionItem, CompletionItemType } from './types/completion'; // Utils export { groupSessionsByDate, getTimeAgo } from './utils/sessionGrouping'; export type { SessionGroup } from './utils/sessionGrouping'; +export { + ZERO_WIDTH_SPACE, + stripZeroWidthSpaces, +} from './utils/inputPlaceholder'; // Adapters - for normalizing different data formats export { diff --git a/packages/webui/src/utils/inputPlaceholder.ts b/packages/webui/src/utils/inputPlaceholder.ts new file mode 100644 index 000000000..44acf3c3a --- /dev/null +++ b/packages/webui/src/utils/inputPlaceholder.ts @@ -0,0 +1,25 @@ +/** + * @license + * Copyright 2025 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +/** + * Zero-width space used as a height placeholder in contentEditable inputs. + * + * After clearing a contentEditable element (e.g. on message submit), setting + * its textContent to this character keeps the element at its normal line height + * instead of collapsing to zero height. All downstream consumers must strip + * this character before treating the text as real user input. + */ +export const ZERO_WIDTH_SPACE = '\u200B'; + +/** + * Strip {@link ZERO_WIDTH_SPACE} placeholders from text. 
+ * + * @param text - raw text that may contain zero-width spaces + * @returns text with all zero-width spaces removed + */ +export function stripZeroWidthSpaces(text: string): string { + return text.replace(/\u200B/g, ''); +} From 310eb88fba0299791624a2ded34b16643ee5d4f8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=98=93=E8=89=AF?= <1204183885@qq.com> Date: Mon, 27 Apr 2026 00:52:56 +0800 Subject: [PATCH 18/36] fix(cli): guard gradient rendering without colors (#3640) --- .../cli/src/ui/components/Header.test.tsx | 21 +++++++- packages/cli/src/ui/components/Header.tsx | 14 ++++-- .../src/ui/components/StatsDisplay.test.tsx | 48 ++++++++++++++++++- .../cli/src/ui/components/StatsDisplay.tsx | 6 ++- packages/cli/src/ui/utils/gradientUtils.ts | 18 +++++++ 5 files changed, 98 insertions(+), 9 deletions(-) create mode 100644 packages/cli/src/ui/utils/gradientUtils.ts diff --git a/packages/cli/src/ui/components/Header.test.tsx b/packages/cli/src/ui/components/Header.test.tsx index 72da62aba..087618654 100644 --- a/packages/cli/src/ui/components/Header.test.tsx +++ b/packages/cli/src/ui/components/Header.test.tsx @@ -5,7 +5,7 @@ */ import { render } from 'ink-testing-library'; -import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import { Header, AuthDisplayType } from './Header.js'; import * as useTerminalSize from '../hooks/useTerminalSize.js'; @@ -20,10 +20,21 @@ const defaultProps = { }; describe('
<Header />', () => { + const originalNoColor = process.env['NO_COLOR']; + beforeEach(() => { + delete process.env['NO_COLOR']; useTerminalSizeMock.mockReturnValue({ columns: 120, rows: 24 }); }); + afterEach(() => { + if (originalNoColor === undefined) { + delete process.env['NO_COLOR']; + } else { + process.env['NO_COLOR'] = originalNoColor; + } + }); + it('renders the ASCII logo on wide terminal', () => { const { lastFrame } = render(
<Header {...defaultProps} />); expect(lastFrame()).toContain('██╔═══██╗'); @@ -81,4 +92,12 @@ describe('
<Header />', () => { expect(lastFrame()).toContain('┌'); expect(lastFrame()).toContain('┐'); }); + + it('renders plain text when NO_COLOR disables gradient colors', () => { + process.env['NO_COLOR'] = '1'; + + const { lastFrame } = render(
<Header {...defaultProps} />); + + expect(lastFrame()).toContain('██╔═══██╗'); + }); }); diff --git a/packages/cli/src/ui/components/Header.tsx b/packages/cli/src/ui/components/Header.tsx index 2d919385f..345bb9fe4 100644 --- a/packages/cli/src/ui/components/Header.tsx +++ b/packages/cli/src/ui/components/Header.tsx @@ -12,6 +12,7 @@ import { theme } from '../semantic-colors.js'; import { shortAsciiLogo } from './AsciiArt.js'; import { getAsciiArtWidth, getCachedStringWidth } from '../utils/textUtils.js'; import { useTerminalSize } from '../hooks/useTerminalSize.js'; +import { getRenderableGradientColors } from '../utils/gradientUtils.js'; /** * Auth display type for the Header component. @@ -98,12 +99,11 @@ export const Header: React.FC = ({ ? shortenedPath.slice(0, maxPathLength) : shortenedPath; - // Use theme gradient colors if available, otherwise use text colors (excluding primary) - const gradientColors = theme.ui.gradient || [ + const gradientColors = getRenderableGradientColors(theme.ui.gradient, [ theme.text.secondary, theme.text.link, theme.text.accent, - ]; + ]); return ( = ({ {showLogo && ( <> - <Gradient colors={gradientColors}> + {gradientColors ?
( + <Gradient colors={gradientColors}> + {displayLogo} + </Gradient> + ) : ( {displayLogo} - </Gradient> + )} {/* Fixed gap between logo and info panel */} diff --git a/packages/cli/src/ui/components/StatsDisplay.test.tsx b/packages/cli/src/ui/components/StatsDisplay.test.tsx index b75b3c2dc..0a4540ca2 100644 --- a/packages/cli/src/ui/components/StatsDisplay.test.tsx +++ b/packages/cli/src/ui/components/StatsDisplay.test.tsx @@ -5,7 +5,7 @@ */ import { render } from 'ink-testing-library'; -import { describe, it, expect, vi } from 'vitest'; +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import { StatsDisplay } from './StatsDisplay.js'; import * as SessionContext from '../contexts/SessionContext.js'; import type { @@ -14,6 +14,7 @@ import type { SessionMetrics, } from '../contexts/SessionContext.js'; import { MAIN_SOURCE } from '@qwen-code/qwen-code-core'; +import { DEFAULT_THEME, themeManager } from '../themes/theme-manager.js'; // Wraps a core metrics object as a ModelMetrics with a single `main` source // bucket, matching the shape produced by processing an API call with no @@ -33,6 +34,21 @@ vi.mock('../contexts/SessionContext.js', async (importOriginal) => { }); const useSessionStatsMock = vi.mocked(SessionContext.useSessionStats); +const originalNoColor = process.env['NO_COLOR']; + +beforeEach(() => { + delete process.env['NO_COLOR']; +}); + +afterEach(() => { + if (originalNoColor === undefined) { + delete process.env['NO_COLOR']; + } else { + process.env['NO_COLOR'] = originalNoColor; + } + themeManager.loadCustomThemes({}); + themeManager.setActiveTheme(DEFAULT_THEME.name); +}); const renderWithMockedStats = (metrics: SessionMetrics) => { useSessionStatsMock.mockReturnValue({ @@ -558,5 +574,35 @@ describe('<StatsDisplay />', () => { expect(output).not.toContain('Session Stats'); expect(output).toMatchSnapshot(); }); + + it('renders a custom title as plain text when the theme has too few gradient colors', () => { + themeManager.loadCustomThemes({ + OneColorGradient: { + name: 'OneColorGradient', + type:
'custom', + ui: { gradient: ['red'] }, + }, + }); + themeManager.setActiveTheme('OneColorGradient'); + useSessionStatsMock.mockReturnValue({ + stats: { + sessionId: 'test-session-id', + sessionStartTime: new Date(), + metrics: zeroMetrics, + lastPromptTokenCount: 0, + promptCount: 5, + }, + + getPromptCount: () => 5, + startNewPrompt: vi.fn(), + }); + + const { lastFrame } = render( + <StatsDisplay title="Agent powering down. Goodbye!" duration="1s" />, + ); + const output = lastFrame(); + expect(output).toContain('Agent powering down. Goodbye!'); + expect(output).not.toContain('Invalid number of stops'); + }); }); }); diff --git a/packages/cli/src/ui/components/StatsDisplay.tsx b/packages/cli/src/ui/components/StatsDisplay.tsx index 2a766f698..e66d4cd19 100644 --- a/packages/cli/src/ui/components/StatsDisplay.tsx +++ b/packages/cli/src/ui/components/StatsDisplay.tsx @@ -18,6 +18,7 @@ import { USER_AGREEMENT_RATE_HIGH, USER_AGREEMENT_RATE_MEDIUM, } from '../utils/displayUtils.js'; +import { getRenderableGradientColors } from '../utils/gradientUtils.js'; import { computeSessionStats } from '../utils/computeStats.js'; import { flattenModelsBySource } from '../utils/modelsBySource.js'; import { t } from '../../i18n/index.js'; @@ -194,8 +195,9 @@ export const StatsDisplay: React.FC = ({ const renderTitle = () => { if (title) { - return theme.ui.gradient && theme.ui.gradient.length > 0 ? ( - <Gradient colors={theme.ui.gradient}> + const gradientColors = getRenderableGradientColors(theme.ui.gradient); + return gradientColors ?
( + <Gradient colors={gradientColors}> + {title} + </Gradient> diff --git a/packages/cli/src/ui/utils/gradientUtils.ts b/packages/cli/src/ui/utils/gradientUtils.ts new file mode 100644 index 000000000..0c164768a --- /dev/null +++ b/packages/cli/src/ui/utils/gradientUtils.ts @@ -0,0 +1,18 @@ +/** + * @license + * Copyright 2026 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +export function getRenderableGradientColors( + ...candidates: Array<string[] | undefined> +): string[] | undefined { + return candidates.find( + (colors): colors is string[] => + Array.isArray(colors) && + colors.length >= 2 && + colors.every( + (color) => typeof color === 'string' && color.trim().length > 0, + ), + ); +} From 70127b5cd88babe826fac721ed12064b93f49554 Mon Sep 17 00:00:00 2001 From: John London Date: Sun, 26 Apr 2026 16:59:06 -0500 Subject: [PATCH 19/36] fix(config): support QWEN_CODE_API_TIMEOUT_MS across OAuth and non-OAuth paths (#3629) * feat(config): support API timeout env override Adds support for QWEN_CODE_API_TIMEOUT_MS as an environment override for model generation timeout. Qwen Code already supports timeout configuration via: settings.model.generationConfig.timeout This change introduces an env-based override for users running slow local/OpenAI-compatible backends where editing config is less convenient. Precedence: modelProvider > env var > settings > default (120000ms) Behavior: - Valid positive env values override configured timeout - Invalid values are ignored - Default behavior remains unchanged (applied in buildClient()) Note: The 5-minute timeout reported in #1045 originally came from undici's default bodyTimeout, which is now disabled (bodyTimeout:0). The modelConfigResolver default is 120000ms (2 minutes). Includes unit tests covering precedence and validation. Closes #1045 Co-Authored-By: Claude Opus 4.7 * test(core): add edge-case tests for QWEN_CODE_API_TIMEOUT_MS Covers: large timeout values, whitespace-padded env values, negative env values, and reinforces provider > env > settings precedence.
Co-Authored-By: Claude Opus 4.7 * feat(config): support QWEN_CODE_API_TIMEOUT_MS override Adds support for QWEN_CODE_API_TIMEOUT_MS as an environment override for model generation timeout. Closes #13 Co-Authored-By: Claude Opus 4.7 --------- Co-authored-by: Claude Opus 4.7 --- .../src/models/modelConfigResolver.test.ts | 337 ++++++++++++++++++ .../core/src/models/modelConfigResolver.ts | 45 ++- 2 files changed, 377 insertions(+), 5 deletions(-) diff --git a/packages/core/src/models/modelConfigResolver.test.ts b/packages/core/src/models/modelConfigResolver.test.ts index 978949b2c..dd2ca7b04 100644 --- a/packages/core/src/models/modelConfigResolver.test.ts +++ b/packages/core/src/models/modelConfigResolver.test.ts @@ -185,6 +185,114 @@ describe('modelConfigResolver', () => { expect(result.warnings).toHaveLength(1); expect(result.warnings[0]).toContain('unsupported-model'); }); + + it('QWEN_CODE_API_TIMEOUT_MS applies in Qwen OAuth path', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '45000', + }, + }); + + expect(result.config.timeout).toBe(45000); + expect(result.sources['timeout']).toBeDefined(); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe( + 'QWEN_CODE_API_TIMEOUT_MS', + ); + expect(result.config.model).toBe(DEFAULT_QWEN_MODEL); + }); + + it('modelProvider timeout takes precedence over QWEN_CODE_API_TIMEOUT_MS in OAuth', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '45000', + }, + modelProvider: { + id: 'qwen-oauth', + name: 'Qwen OAuth', + generationConfig: { + timeout: 120000, + }, + }, + }); + + expect(result.config.timeout).toBe(120000); + expect(result.sources['timeout'].kind).toBe('modelProviders'); + }); + + it('invalid QWEN_CODE_API_TIMEOUT_MS ignored in OAuth path', () => { + const result = 
resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: 'not-a-number', + }, + }); + + expect(result.config.timeout).toBeUndefined(); + }); + + it('negative QWEN_CODE_API_TIMEOUT_MS ignored in OAuth path', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '-100', + }, + }); + + expect(result.config.timeout).toBeUndefined(); + }); + + it('zero QWEN_CODE_API_TIMEOUT_MS ignored in OAuth path', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '0', + }, + }); + + expect(result.config.timeout).toBeUndefined(); + }); + + it('QWEN_CODE_API_TIMEOUT_MS works with float value in OAuth', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '12345.67', + }, + }); + + expect(result.config.timeout).toBe(12345); + }); + + it('QWEN_CODE_API_TIMEOUT_MS works with proxy in OAuth path', () => { + const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '60000', + }, + proxy: 'http://proxy.example.com:8080', + }); + + expect(result.config.timeout).toBe(60000); + expect(result.config.proxy).toBe('http://proxy.example.com:8080'); + expect(result.sources['timeout'].kind).toBe('env'); + }); }); describe('Anthropic auth type', () => { @@ -258,6 +366,235 @@ describe('modelConfigResolver', () => { expect(result.config.timeout).toBe(60000); expect(result.sources['timeout'].kind).toBe('modelProviders'); }); + + it('QWEN_CODE_API_TIMEOUT_MS env var overrides settings timeout', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + generationConfig: { + timeout: 30000, + }, + }, + env: { + OPENAI_API_KEY: 'key', + 
QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + }); + + expect(result.config.timeout).toBe(900000); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe( + 'QWEN_CODE_API_TIMEOUT_MS', + ); + }); + + it('modelProvider timeout wins over QWEN_CODE_API_TIMEOUT_MS', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: {}, + env: { + MY_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + modelProvider: { + id: 'model', + name: 'Model', + envKey: 'MY_KEY', + baseUrl: 'https://api.example.com', + generationConfig: { + timeout: 60000, + }, + }, + }); + + // modelProvider > env: modelProvider timeout should win + expect(result.config.timeout).toBe(60000); + expect(result.sources['timeout'].kind).toBe('modelProviders'); + }); + + it('QWEN_CODE_API_TIMEOUT_MS applies when modelProvider has no timeout', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: {}, + env: { + MY_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + modelProvider: { + id: 'model', + name: 'Model', + envKey: 'MY_KEY', + baseUrl: 'https://api.example.com', + generationConfig: {}, + }, + }); + + expect(result.config.timeout).toBe(900000); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe( + 'QWEN_CODE_API_TIMEOUT_MS', + ); + }); + + it('ignores invalid QWEN_CODE_API_TIMEOUT_MS values', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + generationConfig: { + timeout: 30000, + }, + }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: 'invalid', + }, + }); + + // Should fall back to settings value + expect(result.config.timeout).toBe(30000); + expect(result.sources['timeout'].kind).toBe('settings'); + }); + + it('ignores negative or zero QWEN_CODE_API_TIMEOUT_MS values', () => { + const result = resolveModelConfig({ + 
authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + generationConfig: { + timeout: 30000, + }, + }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '0', + }, + }); + + // Should fall back to settings value + expect(result.config.timeout).toBe(30000); + expect(result.sources['timeout'].kind).toBe('settings'); + }); + + it('timeout is undefined when not configured, default applied in buildClient', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + }, + env: { + OPENAI_API_KEY: 'key', + }, + }); + + // timeout is undefined here; DEFAULT_TIMEOUT (120000) is applied in + // the provider's buildClient() when timeout is not set. + expect(result.config.timeout).toBeUndefined(); + }); + + it('QWEN_CODE_API_TIMEOUT_MS works for Anthropic auth type', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_ANTHROPIC, + cli: {}, + settings: {}, + env: { + ANTHROPIC_API_KEY: 'key', + ANTHROPIC_BASE_URL: 'https://api.anthropic.com', + QWEN_CODE_API_TIMEOUT_MS: '600000', + }, + }); + + expect(result.config.timeout).toBe(600000); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe( + 'QWEN_CODE_API_TIMEOUT_MS', + ); + }); + + it('env var actually changes resolved timeout value', () => { + // Integration-style test: proves the env var flows through to the resolved config + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + generationConfig: { + timeout: 30000, + }, + }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + }); + + // Timeout should be the env var value, not the settings value + expect(result.config.timeout).toBe(900000); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe( + 'QWEN_CODE_API_TIMEOUT_MS', + ); + + // Prove it would be used by the client 
(default.ts:48 reads config.timeout) + const clientTimeout = result.config.timeout; + expect(clientTimeout).toBe(900000); + }); + + it('handles extremely large timeout values safely', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key' }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '999999999', + }, + }); + + expect(result.config.timeout).toBe(999999999); + expect(result.sources['timeout'].kind).toBe('env'); + }); + + it('handles whitespace-padded env values', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key' }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: ' 300000 ', + }, + }); + + // Number() implicitly trims whitespace, so this should parse correctly + expect(result.config.timeout).toBe(300000); + expect(result.sources['timeout'].kind).toBe('env'); + }); + + it('ignores negative QWEN_CODE_API_TIMEOUT_MS values', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { + apiKey: 'key', + generationConfig: { timeout: 30000 }, + }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '-100', + }, + }); + + expect(result.config.timeout).toBe(30000); + expect(result.sources['timeout'].kind).toBe('settings'); + }); }); describe('proxy handling', () => { diff --git a/packages/core/src/models/modelConfigResolver.ts b/packages/core/src/models/modelConfigResolver.ts index c7db1611c..3eb663c83 100644 --- a/packages/core/src/models/modelConfigResolver.ts +++ b/packages/core/src/models/modelConfigResolver.ts @@ -231,11 +231,10 @@ export function resolveModelConfig( let apiKeyEnvKey: string | undefined; if (authType && modelProvider?.envKey) { apiKeyEnvKey = modelProvider.envKey; - sources['apiKeyEnvKey'] = modelProvidersSource( - authType, - modelProvider.id, - 'envKey', - ); + sources['apiKeyEnvKey'] = { + ...modelProvidersSource(authType, 
modelProvider.id, 'envKey'), + envKey: modelProvider.envKey, + }; } // ---- Generation Config (from settings or modelProvider) ---- @@ -247,6 +246,24 @@ export function resolveModelConfig( sources, ); + // ---- Env override: QWEN_CODE_API_TIMEOUT_MS ---- + // Precedence: modelProvider > env > settings > default (CLI doesn't set timeout) + const modelProviderSetTimeout = + modelProvider?.generationConfig?.timeout !== undefined; + if (!modelProviderSetTimeout) { + const envTimeout = env['QWEN_CODE_API_TIMEOUT_MS']; + if (envTimeout !== undefined) { + const parsed = Number(envTimeout); + if (Number.isFinite(parsed) && parsed > 0) { + generationConfig.timeout = Math.floor(parsed); + sources['timeout'] = { + kind: 'env', + envKey: 'QWEN_CODE_API_TIMEOUT_MS', + }; + } + } + } + // Build final config const config: ContentGeneratorConfig = { authType, @@ -325,6 +342,24 @@ function resolveQwenOAuthConfig( sources, ); + // ---- Env override: QWEN_CODE_API_TIMEOUT_MS ---- + // Precedence: modelProvider > env > settings > default + const modelProviderSetTimeoutOAuth = + modelProvider?.generationConfig?.timeout !== undefined; + if (!modelProviderSetTimeoutOAuth) { + const envTimeoutOAuth = input.env['QWEN_CODE_API_TIMEOUT_MS']; + if (envTimeoutOAuth !== undefined) { + const parsed = Number(envTimeoutOAuth); + if (Number.isFinite(parsed) && parsed > 0) { + generationConfig.timeout = Math.floor(parsed); + sources['timeout'] = { + kind: 'env', + envKey: 'QWEN_CODE_API_TIMEOUT_MS', + }; + } + } + } + const config: ContentGeneratorConfig = { authType: AuthType.QWEN_OAUTH, model: resolvedModel, From 3b0b6c052be5928e551f1ead8e4407d9163eb4b8 Mon Sep 17 00:00:00 2001 From: jinye Date: Mon, 27 Apr 2026 06:54:55 +0800 Subject: [PATCH 20/36] feat(cli): add API preconnect to reduce first-call latency (#3318) Fire a fire-and-forget HEAD request early in startup to warm the TCP+TLS connection. 
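Stripped of its skip conditions, the warm-up amounts to a one-shot HEAD request issued through whatever fetch implementation is in play. The sketch below is illustrative only — `warmConnection` and the injectable `fetchImpl` are hypothetical names, and the real patch additionally routes the request through a shared undici dispatcher:

```typescript
// Illustrative one-shot warm-up; not the actual apiPreconnect.ts code.
type FetchLike = (url: string, init: { method: string }) => Promise<unknown>;

let preconnectFired = false;

function warmConnection(baseUrl: string, fetchImpl: FetchLike): boolean {
  if (preconnectFired) {
    return false; // at most one warm-up per process
  }
  preconnectFired = true;
  // HEAD establishes TCP+TLS without transferring a body; the request is
  // fire-and-forget, so failures must never surface to startup
  fetchImpl(baseUrl, { method: 'HEAD' }).catch(() => {});
  return true;
}
```

Because the warm-up request and the SDK clients share one dispatcher, the handshake completed here lands in the same connection pool that later API calls draw from.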
Subsequent SDK calls share an undici dispatcher with preconnect, reusing the warmed connection to save 100-200ms on the first request. Skip conditions: - NODE_EXTRA_CA_CERTS set (enterprise TLS inspection) - Sandbox mode (process-restart context) - Non-default baseUrl (mTLS / private deployment) - Non-Node runtimes (Bun) Disable via QWEN_CODE_DISABLE_PRECONNECT=1. Closes #3223 --- eslint.config.js | 3 +- packages/cli/src/gemini.tsx | 16 ++ packages/cli/src/utils/apiPreconnect.test.ts | 241 ++++++++++++++++++ packages/cli/src/utils/apiPreconnect.ts | 213 ++++++++++++++++ packages/core/src/index.ts | 4 + .../src/utils/runtimeFetchOptions.test.ts | 44 +++- .../core/src/utils/runtimeFetchOptions.ts | 56 +++- scripts/benchmark-api-latency.mjs | 167 ++++++++++++ 8 files changed, 731 insertions(+), 13 deletions(-) create mode 100644 packages/cli/src/utils/apiPreconnect.test.ts create mode 100644 packages/cli/src/utils/apiPreconnect.ts create mode 100644 scripts/benchmark-api-latency.mjs diff --git a/eslint.config.js b/eslint.config.js index 27214f6d6..f2eb93ecf 100644 --- a/eslint.config.js +++ b/eslint.config.js @@ -190,10 +190,11 @@ export default tseslint.config( }, // extra settings for scripts that we run directly with node { - files: ['./scripts/**/*.js', 'esbuild.config.js', 'packages/*/scripts/**/*.js'], + files: ['./scripts/**/*.js', './scripts/**/*.mjs', 'esbuild.config.js', 'packages/*/scripts/**/*.js'], languageOptions: { globals: { ...globals.node, + ...globals.browser, process: 'readonly', console: 'readonly', }, diff --git a/packages/cli/src/gemini.tsx b/packages/cli/src/gemini.tsx index f8283cca3..e7da7582a 100644 --- a/packages/cli/src/gemini.tsx +++ b/packages/cli/src/gemini.tsx @@ -77,6 +77,7 @@ import { startEarlyInputCapture, stopAndGetCapturedInput, } from './utils/earlyInputCapture.js'; +import { preconnectApi } from './utils/apiPreconnect.js'; import { validateNonInteractiveAuth } from './validateNonInterActiveAuth.js'; import { 
showResumeSessionPicker } from './ui/components/StandaloneSessionPicker.js'; import { initializeLlmOutputLanguage } from './utils/languageUtils.js'; @@ -528,6 +529,21 @@ export async function main() { // This ensures MCP server subprocesses are properly terminated on exit registerCleanup(() => config.shutdown()); + // Startup optimization: preconnect API to warm TCP+TLS connection + // Fires early; cost is one HEAD request even for local-only commands + try { + const modelsConfig = config.getModelsConfig(); + const authType = modelsConfig.getCurrentAuthType(); + const resolvedBaseUrl = modelsConfig.getGenerationConfig().baseUrl; + const proxy = config.getProxy(); + preconnectApi(authType, { resolvedBaseUrl, proxy }); + } catch (error) { + // If we can't get authType, skip preconnect - it's optional optimization + debugLogger.debug( + `Preconnect skipped due to error getting authType: ${error}`, + ); + } + // FIXME: list extensions after the config initialize // if (config.getListExtensions()) { // console.log('Installed extensions:'); diff --git a/packages/cli/src/utils/apiPreconnect.test.ts b/packages/cli/src/utils/apiPreconnect.test.ts new file mode 100644 index 000000000..ba50f1ff1 --- /dev/null +++ b/packages/cli/src/utils/apiPreconnect.test.ts @@ -0,0 +1,241 @@ +/** + * @license + * Copyright 2026 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; +import { preconnectApi, resetPreconnectState } from './apiPreconnect.js'; + +// Mock fetch +const mockFetch = vi.fn().mockResolvedValue(undefined); +global.fetch = mockFetch; + +// Mock the shared dispatcher functions from core +const { mockGetOrCreateSharedDispatcher, mockDebugLogger } = vi.hoisted(() => { + const dispatcher = { fake: 'dispatcher' }; + const mockDebugLogger = { debug: vi.fn() }; + return { + mockGetOrCreateSharedDispatcher: vi.fn(() => dispatcher), + mockDebugLogger, + }; +}); +vi.mock('@qwen-code/qwen-code-core', () 
=> ({ + createDebugLogger: () => mockDebugLogger, + detectRuntime: () => 'node', + getOrCreateSharedDispatcher: mockGetOrCreateSharedDispatcher, +})); + +describe('apiPreconnect', () => { + beforeEach(() => { + resetPreconnectState(); + mockFetch.mockClear(); + mockFetch.mockResolvedValue(undefined); + mockGetOrCreateSharedDispatcher.mockClear(); + delete process.env['HTTPS_PROXY']; + delete process.env['https_proxy']; + delete process.env['HTTP_PROXY']; + delete process.env['http_proxy']; + delete process.env['QWEN_CODE_DISABLE_PRECONNECT']; + delete process.env['NODE_EXTRA_CA_CERTS']; + delete process.env['SANDBOX']; + }); + + afterEach(() => { + vi.restoreAllMocks(); + }); + + describe('shouldSkipPreconnect', () => { + it('should skip when NODE_EXTRA_CA_CERTS is set', () => { + process.env['NODE_EXTRA_CA_CERTS'] = '/path/to/ca.pem'; + preconnectApi('qwen-oauth'); + expect(mockFetch).not.toHaveBeenCalled(); + }); + }); + + describe('resolvedBaseUrl handling', () => { + it('should use resolvedBaseUrl when it is a default URL', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://api.openai.com/v1', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://api.openai.com/v1', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should skip when resolvedBaseUrl is a custom (non-default) URL', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://custom.api.com/v1', + }); + expect(mockFetch).not.toHaveBeenCalled(); + }); + + it('should skip when resolvedBaseUrl is a subdomain-spoofed URL', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://api.openai.com.malicious.com/v1', + }); + expect(mockFetch).not.toHaveBeenCalled(); + }); + + it('should use resolvedBaseUrl when it is a dashscope compatible-mode URL', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://dashscope.aliyuncs.com/compatible-mode/v1', + 
expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should skip when resolvedBaseUrl is a dashscope subdomain-spoofed URL', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://dashscope.aliyuncs.com.malicious.com/v1', + }); + expect(mockFetch).not.toHaveBeenCalled(); + }); + + it('should accept DashScope regional endpoint (sg-singapore)', () => { + preconnectApi('openai', { + resolvedBaseUrl: + 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should accept DashScope regional endpoint (us-virginia)', () => { + preconnectApi('openai', { + resolvedBaseUrl: 'https://dashscope-us.aliyuncs.com/compatible-mode/v1', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://dashscope-us.aliyuncs.com/compatible-mode/v1', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should accept DashScope regional endpoint (cn-hongkong)', () => { + preconnectApi('openai', { + resolvedBaseUrl: + 'https://cn-hongkong.dashscope.aliyuncs.com/compatible-mode/v1', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://cn-hongkong.dashscope.aliyuncs.com/compatible-mode/v1', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should fall back to authType default when resolvedBaseUrl is a non-URL sentinel', () => { + preconnectApi('qwen-oauth', { + resolvedBaseUrl: 'DYNAMIC_QWEN_OAUTH_BASE_URL', + }); + expect(mockFetch).toHaveBeenCalledWith( + 'https://coding.dashscope.aliyuncs.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should fall back to default URL when resolvedBaseUrl is undefined', () => { + preconnectApi('qwen-oauth'); + expect(mockFetch).toHaveBeenCalledWith( + 'https://coding.dashscope.aliyuncs.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + }); + + describe('preconnect behavior', () => { + it('should 
use default baseUrl for qwen-oauth', () => { + preconnectApi('qwen-oauth'); + expect(mockFetch).toHaveBeenCalledWith( + 'https://coding.dashscope.aliyuncs.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should use default baseUrl for openai', () => { + preconnectApi('openai'); + expect(mockFetch).toHaveBeenCalledWith( + 'https://api.openai.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should use default baseUrl for anthropic', () => { + preconnectApi('anthropic'); + expect(mockFetch).toHaveBeenCalledWith( + 'https://api.anthropic.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should pass shared dispatcher on Node.js runtime', () => { + preconnectApi('qwen-oauth'); + expect(mockFetch).toHaveBeenCalledWith( + expect.any(String), + expect.objectContaining({ + dispatcher: { fake: 'dispatcher' }, + }), + ); + }); + + it('should pass undefined proxy to shared dispatcher by default', () => { + preconnectApi('qwen-oauth'); + expect(mockGetOrCreateSharedDispatcher).toHaveBeenCalledWith(undefined); + }); + + it('should pass configured proxy to shared dispatcher', () => { + preconnectApi('qwen-oauth', { proxy: 'http://proxy.example.com:8080' }); + expect(mockGetOrCreateSharedDispatcher).toHaveBeenCalledWith( + 'http://proxy.example.com:8080', + ); + }); + + it('should not fire twice', () => { + preconnectApi('qwen-oauth'); + preconnectApi('openai'); + expect(mockFetch).toHaveBeenCalledTimes(1); + }); + + it('should retry when targetUrl was unavailable on first call', () => { + // First call: unknown authType, no resolvedBaseUrl → no targetUrl + preconnectApi('unknown-auth'); + expect(mockFetch).not.toHaveBeenCalled(); + + // Second call: valid authType → should fire + preconnectApi('qwen-oauth'); + expect(mockFetch).toHaveBeenCalledTimes(1); + expect(mockFetch).toHaveBeenCalledWith( + 'https://coding.dashscope.aliyuncs.com', + expect.objectContaining({ method: 'HEAD' }), + ); + }); + + it('should 
handle fetch errors gracefully', async () => { + mockFetch.mockRejectedValue(new Error('Network error')); + // Should not throw + expect(() => preconnectApi('qwen-oauth')).not.toThrow(); + }); + + it('should handle synchronous dispatcher errors gracefully', () => { + mockGetOrCreateSharedDispatcher.mockImplementation(() => { + throw new Error('Failed to create dispatcher'); + }); + expect(() => preconnectApi('qwen-oauth')).not.toThrow(); + }); + + it('should skip when QWEN_CODE_DISABLE_PRECONNECT is set', () => { + process.env['QWEN_CODE_DISABLE_PRECONNECT'] = '1'; + preconnectApi('qwen-oauth'); + expect(mockFetch).not.toHaveBeenCalled(); + }); + + it('should skip in sandbox mode', () => { + process.env['SANDBOX'] = '1'; + preconnectApi('qwen-oauth'); + expect(mockFetch).not.toHaveBeenCalled(); + }); + }); +}); diff --git a/packages/cli/src/utils/apiPreconnect.ts b/packages/cli/src/utils/apiPreconnect.ts new file mode 100644 index 000000000..611a8b5a1 --- /dev/null +++ b/packages/cli/src/utils/apiPreconnect.ts @@ -0,0 +1,213 @@ +/** + * @license + * Copyright 2026 Qwen Team + * SPDX-License-Identifier: Apache-2.0 + */ + +/** + * API Preconnect - Warm API connections to reduce TCP+TLS handshake latency + * + * Principle: Fire a fire-and-forget HEAD request early in startup to warm + * the TCP+TLS connection. Subsequent actual API calls reuse this connection, + * saving 100-200ms. + * + * The preconnect uses the same shared undici dispatcher as the SDK clients, + * ensuring the warmed TCP+TLS connection is reused by subsequent API calls. + */ + +import { + createDebugLogger, + detectRuntime, + getOrCreateSharedDispatcher, +} from '@qwen-code/qwen-code-core'; + +import { ALIBABA_STANDARD_API_KEY_ENDPOINTS } from '../constants/alibabaStandardApiKey.js'; + +const debugLogger = createDebugLogger('PRECONNECT'); + +let preconnectFired = false; + +/** + * Default API base URLs by AuthType. 
+ * DashScope regional endpoints are derived from ALIBABA_STANDARD_API_KEY_ENDPOINTS + * so preconnect covers all supported regions (cn-beijing, sg-singapore, us-virginia, cn-hongkong). + */ +const DEFAULT_BASE_URLS: Record<string, string> = { + openai: 'https://api.openai.com', + 'qwen-oauth': 'https://coding.dashscope.aliyuncs.com', + anthropic: 'https://api.anthropic.com', + dashscope: 'https://dashscope.aliyuncs.com', +}; + +/** + * All known default base URLs, including DashScope regional endpoints. + * Used by isDefaultBaseUrl() to accept any supported default endpoint. + */ +const ALL_DEFAULT_URLS: string[] = [ + ...Object.values(DEFAULT_BASE_URLS), + ...Object.values(ALIBABA_STANDARD_API_KEY_ENDPOINTS), +]; + +/** + * Check if preconnect should be skipped due to environment conditions + */ +function shouldSkipPreconnect(): boolean { + // Skip for custom CA certificate (enterprise TLS inspection may interfere) + if (process.env['NODE_EXTRA_CA_CERTS']) { + debugLogger.debug('Skipping preconnect: custom CA certificate configured'); + return true; + } + + return false; +} + +/** + * Check if running in sandbox mode + * In sandbox mode, preconnect is ineffective because the process will restart + */ +function isInSandboxMode(): boolean { + return process.env['SANDBOX'] !== undefined; +} + +/** + * Check if baseUrl is a default URL + */ +function isDefaultBaseUrl(baseUrl: string): boolean { + const normalizedInput = baseUrl + .toLowerCase() + .replace(/^https?:\/\//, '') + .replace(/\/+$/, ''); + return ALL_DEFAULT_URLS.some((defaultUrl) => { + const normalizedDefault = defaultUrl + .toLowerCase() + .replace(/^https?:\/\//, '') + .replace(/\/+$/, ''); + return ( + normalizedInput === normalizedDefault || + normalizedInput.startsWith(normalizedDefault + '/') + ); + }); +} + +/** + * Get the target URL for preconnect. + * Uses the already-resolved base URL from the model config, falling back + * to default URLs by authType.
+ * + * Only preconnects to known default URLs — custom URLs may not accept HEAD + * requests or may require mTLS / private deployment configurations. + */ +function getPreconnectTargetUrl( + authType: string | undefined, + resolvedBaseUrl: string | undefined, +): string | undefined { + // 1. Use the resolved base URL from model config (already incorporates + // modelProviders > cli > env > settings priority chain) + if (resolvedBaseUrl && /^https?:\/\//i.test(resolvedBaseUrl)) { + if (isDefaultBaseUrl(resolvedBaseUrl)) { + return resolvedBaseUrl; + } + debugLogger.debug( + 'Skipping preconnect: resolved baseUrl is not a default URL', + ); + return undefined; + } + + // 2. Fall back to default value by authType + if (authType && DEFAULT_BASE_URLS[authType]) { + return DEFAULT_BASE_URLS[authType]; + } + + return undefined; +} + +/** + * Execute API preconnect + * Use HEAD request to establish TCP+TLS connection without sending actual request body. + * Uses the shared undici dispatcher to ensure connection pool is shared with SDK clients. + * + * @param authType - Authentication type (openai, qwen-oauth, anthropic, etc.) + * @param options - Configuration options + */ +export function preconnectApi( + authType: string | undefined, + options: { + resolvedBaseUrl?: string; + proxy?: string; + } = {}, +): void { + if (preconnectFired) { + return; + } + + // Check if disabled + if (process.env['QWEN_CODE_DISABLE_PRECONNECT'] === '1') { + debugLogger.debug('Preconnect disabled by environment variable'); + preconnectFired = true; + return; + } + + // Check if in sandbox mode (process will restart, preconnect is ineffective) + if (isInSandboxMode()) { + debugLogger.debug('Skipping preconnect: sandbox mode detected'); + preconnectFired = true; + return; + } + + // Check environment skip conditions (custom CA) + if (shouldSkipPreconnect()) { + preconnectFired = true; + return; + } + + // Skip on non-Node runtimes (e.g. 
Bun) — they use independent connection + // pools, so warming undici's pool provides no benefit. + if (detectRuntime() !== 'node') { + debugLogger.debug('Skipping preconnect: unsupported runtime'); + preconnectFired = true; + return; + } + + const targetUrl = getPreconnectTargetUrl(authType, options.resolvedBaseUrl); + + if (!targetUrl) { + debugLogger.debug('No target URL for preconnect'); + return; + } + + preconnectFired = true; + debugLogger.debug(`Preconnecting to: ${targetUrl}`); + + try { + // Use the same shared undici dispatcher that SDK clients will use, + // so the warmed TCP+TLS connection is reused by subsequent API calls. + const dispatcher = getOrCreateSharedDispatcher(options.proxy); + + // Fire HEAD request to warm connection (fire-and-forget) + fetch(targetUrl, { + method: 'HEAD', + signal: AbortSignal.timeout(5_000), + headers: { + 'User-Agent': 'QwenCode-Preconnect/1.0', + }, + dispatcher, + } as RequestInit) + .then(() => { + debugLogger.debug('Preconnect completed'); + }) + .catch((error) => { + debugLogger.debug(`Preconnect failed (ignored): ${error}`); + }); + } catch (error) { + // Preconnect failure doesn't affect main flow + debugLogger.debug(`Preconnect failed (ignored): ${error}`); + } +} + +/** + * Reset preconnect state (for testing only) + * @internal + */ +export function resetPreconnectState(): void { + preconnectFired = false; +} diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 80210ee03..985dda01c 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -298,6 +298,10 @@ export * from './utils/request-tokenizer/supportedImageFormats.js'; export { TextTokenizer } from './utils/request-tokenizer/textTokenizer.js'; export * from './utils/retry.js'; export * from './utils/ripgrepUtils.js'; +export { + detectRuntime, + getOrCreateSharedDispatcher, +} from './utils/runtimeFetchOptions.js'; export * from './utils/schemaValidator.js'; export * from './utils/shell-utils.js'; export * from 
'./utils/subagentGenerator.js'; diff --git a/packages/core/src/utils/runtimeFetchOptions.test.ts b/packages/core/src/utils/runtimeFetchOptions.test.ts index fd4e7a089..62a4b0455 100644 --- a/packages/core/src/utils/runtimeFetchOptions.test.ts +++ b/packages/core/src/utils/runtimeFetchOptions.test.ts @@ -4,8 +4,12 @@ * SPDX-License-Identifier: Apache-2.0 */ -import { describe, it, expect, vi } from 'vitest'; -import { buildRuntimeFetchOptions } from './runtimeFetchOptions.js'; +import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { + buildRuntimeFetchOptions, + getOrCreateSharedDispatcher, + resetDispatcherCache, +} from './runtimeFetchOptions.js'; type UndiciOptions = Record; @@ -31,6 +35,9 @@ vi.mock('undici', () => { }); describe('buildRuntimeFetchOptions (node runtime)', () => { + beforeEach(() => { + resetDispatcherCache(); + }); it('disables undici timeouts for Agent in OpenAI options', () => { const result = buildRuntimeFetchOptions('openai'); @@ -93,3 +100,36 @@ describe('buildRuntimeFetchOptions (node runtime)', () => { }); }); }); + +describe('getOrCreateSharedDispatcher', () => { + beforeEach(() => { + resetDispatcherCache(); + }); + + it('returns the same instance for repeated calls without proxy', () => { + const d1 = getOrCreateSharedDispatcher(); + const d2 = getOrCreateSharedDispatcher(); + expect(d1).toBe(d2); + }); + + it('returns the same instance for repeated calls with the same proxy', () => { + const d1 = getOrCreateSharedDispatcher('http://proxy.local'); + const d2 = getOrCreateSharedDispatcher('http://proxy.local'); + expect(d1).toBe(d2); + }); + + it('returns different instances for different proxy URLs', () => { + const d1 = getOrCreateSharedDispatcher(); + const d2 = getOrCreateSharedDispatcher('http://proxy.local'); + expect(d1).not.toBe(d2); + }); + + it('shares the same dispatcher with buildRuntimeFetchOptions', () => { + const shared = getOrCreateSharedDispatcher(); + const result = buildRuntimeFetchOptions('openai'); 
+ const sdkDispatcher = ( + result as { fetchOptions?: { dispatcher?: unknown } } + ).fetchOptions?.dispatcher; + expect(sdkDispatcher).toBe(shared); + }); +}); diff --git a/packages/core/src/utils/runtimeFetchOptions.ts b/packages/core/src/utils/runtimeFetchOptions.ts index 1e0ef4806..86f1d81a2 100644 --- a/packages/core/src/utils/runtimeFetchOptions.ts +++ b/packages/core/src/utils/runtimeFetchOptions.ts @@ -131,21 +131,57 @@ export function buildRuntimeFetchOptions( } } +/** + * Cache of shared dispatcher instances keyed by proxy URL (undefined = no proxy). + * Ensures preconnect and SDK clients share the same connection pool. + */ +const dispatcherCache = new Map<string | undefined, Dispatcher>(); + +/** + * Get or create a shared undici dispatcher for the given proxy configuration. + * The dispatcher is cached so that preconnect and subsequent SDK requests + * share the same connection pool, enabling TCP+TLS connection reuse. + * + * @param proxyUrl - Optional proxy URL; undefined for direct connections + * @returns A cached undici Dispatcher (Agent or ProxyAgent) + */ +export function getOrCreateSharedDispatcher(proxyUrl?: string): Dispatcher { + const cached = dispatcherCache.get(proxyUrl); + if (cached) { + return cached; + } + + const dispatcher = proxyUrl + ? new ProxyAgent({ + uri: proxyUrl, + headersTimeout: 0, + bodyTimeout: 0, + keepAliveTimeout: 60_000, + }) + : new Agent({ + headersTimeout: 0, + bodyTimeout: 0, + keepAliveTimeout: 60_000, + }); + + dispatcherCache.set(proxyUrl, dispatcher); + return dispatcher; +} + +/** + * Reset the dispatcher cache (for testing only) + * @internal + */ +export function resetDispatcherCache(): void { + dispatcherCache.clear(); +} function buildFetchOptionsWithDispatcher( sdkType: SDKType, proxyUrl?: string, ): OpenAIRuntimeFetchOptions | AnthropicRuntimeFetchOptions { try { - const dispatcher = proxyUrl - ?
new ProxyAgent({ - uri: proxyUrl, - headersTimeout: 0, - bodyTimeout: 0, - }) - : new Agent({ - headersTimeout: 0, - bodyTimeout: 0, - }); + const dispatcher = getOrCreateSharedDispatcher(proxyUrl); return { fetchOptions: { dispatcher } }; } catch { return sdkType === 'openai' ? undefined : {}; diff --git a/scripts/benchmark-api-latency.mjs b/scripts/benchmark-api-latency.mjs new file mode 100644 index 000000000..3ed38c4c3 --- /dev/null +++ b/scripts/benchmark-api-latency.mjs @@ -0,0 +1,167 @@ +#!/usr/bin/env node +/** + * API Preconnect Latency Benchmark + * + * Measures the real TCP+TLS connection reuse benefit of preconnect by using + * undici (the same library as apiPreconnect.ts) within a single process. + * + * Unlike the previous curl-based approach, this correctly measures connection + * pool reuse: the same dispatcher instance is shared between the preconnect + * HEAD request and the subsequent measured request, just like in production. + * + * Usage: + * node scripts/benchmark-api-latency.mjs + * + * Environment variables: + * ITERATIONS=3 Number of cold/warm pairs per endpoint (default: 3) + * REQUEST_TIMEOUT_MS=5000 Per-request timeout in ms (default: 5000) + * BENCHMARK_URLS Space-separated extra URLs to benchmark + */ + +import { createRequire } from 'module'; +import { performance } from 'perf_hooks'; + +// Resolve undici from the core package (same version used by preconnect) +const require = createRequire(import.meta.url); +const { Agent } = require('../packages/core/node_modules/undici/index.js'); + +const ITERATIONS = parseInt(process.env['ITERATIONS'] ?? '3', 10); +const REQUEST_TIMEOUT_MS = parseInt(process.env['REQUEST_TIMEOUT_MS'] ?? 
'5000', 10); + +const DEFAULT_ENDPOINTS = [ + { url: 'https://api.openai.com', label: 'OpenAI' }, + { url: 'https://api.anthropic.com', label: 'Anthropic' }, + { url: 'https://dashscope.aliyuncs.com/compatible-mode/v1', label: 'DashScope (openai-compatible)' }, +]; + +const extraUrls = process.env['BENCHMARK_URLS'] + ? process.env['BENCHMARK_URLS'].split(' ').filter(Boolean).map((url) => ({ url, label: url })) + : []; + +const ENDPOINTS = [...DEFAULT_ENDPOINTS, ...extraUrls]; + +// --------------------------------------------------------------------------- + +function newDispatcher() { + return new Agent({ + headersTimeout: 0, + bodyTimeout: 0, + keepAliveTimeout: 60_000, + }); +} + +async function fetchOnce(url, dispatcher, method = 'HEAD') { + const start = performance.now(); + try { + await fetch(url, { + method, + signal: AbortSignal.timeout(REQUEST_TIMEOUT_MS), + headers: { 'User-Agent': 'QwenCode-Benchmark/1.0' }, + dispatcher, + }); + } catch (err) { + // Timeouts and non-2xx are fine — we only care about connection timing + if (err?.name === 'TimeoutError') { + return performance.now() - start; // still records the time spent + } + } + return performance.now() - start; +} + +/** + * Cold measurement: brand-new dispatcher, no preconnect. + * Returns elapsed ms of the measured request. + */ +async function measureCold(url) { + const dispatcher = newDispatcher(); + return fetchOnce(url, dispatcher, 'HEAD'); +} + +/** + * Warm measurement: same dispatcher for preconnect HEAD + measured request. + * Returns elapsed ms of the measured request only (not the preconnect time). 
+ */ +async function measureWarm(url) { + const dispatcher = newDispatcher(); + // Preconnect — mirrors apiPreconnect.ts behaviour + await fetchOnce(url, dispatcher, 'HEAD').catch(() => {}); + // Measured request reuses the warmed connection from the same pool + return fetchOnce(url, dispatcher, 'HEAD'); +} + +// --------------------------------------------------------------------------- + +function fmt(ms) { + return `${ms.toFixed(1)}ms`; +} + +function avg(arr) { + return arr.reduce((a, b) => a + b, 0) / arr.length; +} + +async function benchmarkEndpoint({ url, label }) { + console.log(`\n ${label}`); + console.log(` ${url}`); + + const coldTimes = []; + const warmTimes = []; + + for (let i = 0; i < ITERATIONS; i++) { + const cold = await measureCold(url); + coldTimes.push(cold); + + // Brief pause so the OS can release the cold connection + await new Promise((r) => setTimeout(r, 500)); + + const warm = await measureWarm(url); + warmTimes.push(warm); + + console.log(` run ${i + 1}: cold=${fmt(cold)} warm=${fmt(warm)}`); + + await new Promise((r) => setTimeout(r, 500)); + } + + const avgCold = avg(coldTimes); + const avgWarm = avg(warmTimes); + const saved = avgCold - avgWarm; + const pct = avgCold > 0 ? 
(saved / avgCold) * 100 : 0; + + return { label, url, avgCold, avgWarm, saved, pct }; +} + +// --------------------------------------------------------------------------- + +console.log('=== Qwen Code API Preconnect Latency Benchmark ==='); +console.log(`Iterations per endpoint : ${ITERATIONS}`); +console.log(`Request timeout : ${REQUEST_TIMEOUT_MS}ms`); +console.log('\nRunning...'); + +const results = []; +for (const endpoint of ENDPOINTS) { + const result = await benchmarkEndpoint(endpoint); + results.push(result); +} + +// Summary table +console.log('\n\n=== Results ===\n'); +console.log( + 'Endpoint'.padEnd(36) + + 'Cold (avg)'.padStart(12) + + 'Warm (avg)'.padStart(12) + + 'Saved'.padStart(10) + + 'Improvement'.padStart(13), +); +console.log('─'.repeat(83)); + +for (const r of results) { + const status = r.pct >= 30 ? '✓' : r.pct >= 10 ? '~' : '✗'; + console.log( + r.label.slice(0, 35).padEnd(36) + + fmt(r.avgCold).padStart(12) + + fmt(r.avgWarm).padStart(12) + + fmt(r.saved).padStart(10) + + `${r.pct.toFixed(1)}% ${status}`.padStart(13), + ); +} + +console.log('\nLegend: ✓ ≥30% improvement ~ 10–30% ✗ <10%'); From 534ca986eb8cc542469ee64e1750eb40d09ee7d1 Mon Sep 17 00:00:00 2001 From: Dragon <52599892+DragonnZhang@users.noreply.github.com> Date: Mon, 27 Apr 2026 08:29:50 +0800 Subject: [PATCH 21/36] feat(cli): add argument-hint support for slash commands (#3593) Adds argument-hint support across the slash command pipeline. Skill and command authors specify an argument-hint field in markdown frontmatter, which renders as inline ghost text when the user has typed the command name but not yet provided arguments. 
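For example, a custom command markdown file declares the hint in its frontmatter alongside the description (illustrative sketch mirroring the fixture used in FileCommandLoader-markdown.test.ts; the prompt body is arbitrary):

```markdown
---
description: Test markdown command
argument-hint: "[issue-number]"
---

This is a test prompt from markdown.
```

With this file saved as `test-command.md`, typing `/test-command` renders `[issue-number]` as dimmed ghost text after the cursor, and the hint disappears once the user starts typing an argument.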
Pipeline: - Skill parsing: SkillConfig.argumentHint parsed from SKILL.md frontmatter - Command loaders: propagated through SkillCommandLoader, BundledSkillLoader, FileCommandLoader, command-factory - UI: useCommandCompletion shows hint as ghost text with showCursorBeforeText layout; InputPrompt separates display text from Tab-accept text - ACP: passed as input.hint per spec - Bundled skills (batch, loop, qc-helper, review) get hints Hint is excluded from completion menu labels to keep the dropdown clean and disappears as soon as the user starts typing arguments. --- .../acp-integration/session/Session.test.ts | 3 +- .../src/acp-integration/session/Session.ts | 2 +- .../src/services/BundledSkillLoader.test.ts | 10 +++ .../cli/src/services/BundledSkillLoader.ts | 1 + .../FileCommandLoader-markdown.test.ts | 2 + .../cli/src/services/FileCommandLoader.ts | 1 + .../src/services/SkillCommandLoader.test.ts | 13 ++++ .../cli/src/services/SkillCommandLoader.ts | 1 + packages/cli/src/services/command-factory.ts | 2 + .../services/markdown-command-parser.test.ts | 3 + .../src/services/markdown-command-parser.ts | 1 + .../cli/src/ui/components/InputPrompt.tsx | 33 +++++++--- .../src/ui/hooks/useCommandCompletion.test.ts | 61 ++++++++++++++++++- .../cli/src/ui/hooks/useCommandCompletion.tsx | 47 ++++++++++++-- .../src/ui/hooks/useSlashCompletion.test.ts | 29 +++++++++ .../core/src/skills/bundled/batch/SKILL.md | 1 + .../core/src/skills/bundled/loop/SKILL.md | 1 + .../src/skills/bundled/qc-helper/SKILL.md | 1 + .../core/src/skills/bundled/review/SKILL.md | 1 + packages/core/src/skills/skill-load.test.ts | 22 +++++++ packages/core/src/skills/skill-load.ts | 5 ++ .../core/src/skills/skill-manager.test.ts | 26 ++++++++ packages/core/src/skills/skill-manager.ts | 7 ++- packages/core/src/skills/types.ts | 6 ++ 24 files changed, 259 insertions(+), 20 deletions(-) diff --git a/packages/cli/src/acp-integration/session/Session.test.ts 
b/packages/cli/src/acp-integration/session/Session.test.ts index 1f806b510..8178267b2 100644 --- a/packages/cli/src/acp-integration/session/Session.test.ts +++ b/packages/cli/src/acp-integration/session/Session.test.ts @@ -221,6 +221,7 @@ describe('Session', () => { { name: 'init', description: 'Initialize project context', + argumentHint: '[path]', }, ]); @@ -239,7 +240,7 @@ describe('Session', () => { { name: 'init', description: 'Initialize project context', - input: null, + input: { hint: '[path]' }, }, ], }, diff --git a/packages/cli/src/acp-integration/session/Session.ts b/packages/cli/src/acp-integration/session/Session.ts index f4426faab..6521f3e12 100644 --- a/packages/cli/src/acp-integration/session/Session.ts +++ b/packages/cli/src/acp-integration/session/Session.ts @@ -981,7 +981,7 @@ export class Session implements SessionContext { (cmd) => ({ name: cmd.name, description: cmd.description, - input: null, + input: cmd.argumentHint ? { hint: cmd.argumentHint } : null, }), ); diff --git a/packages/cli/src/services/BundledSkillLoader.test.ts b/packages/cli/src/services/BundledSkillLoader.test.ts index a2bd4544c..5c99f3afd 100644 --- a/packages/cli/src/services/BundledSkillLoader.test.ts +++ b/packages/cli/src/services/BundledSkillLoader.test.ts @@ -69,6 +69,16 @@ describe('BundledSkillLoader', () => { expect(mockSkillManager.listSkills).not.toHaveBeenCalled(); }); + it('should propagate argumentHint from bundled skills to slash commands', async () => { + const skill = makeSkill({ argumentHint: '[topic]' }); + mockSkillManager.listSkills.mockResolvedValue([skill]); + + const loader = new BundledSkillLoader(mockConfig); + const commands = await loader.loadCommands(signal); + + expect(commands[0]?.argumentHint).toBe('[topic]'); + }); + it('should load bundled skills as slash commands', async () => { const skill = makeSkill(); mockSkillManager.listSkills.mockResolvedValue([skill]); diff --git a/packages/cli/src/services/BundledSkillLoader.ts 
b/packages/cli/src/services/BundledSkillLoader.ts index 5ea6682b5..d5f36c93e 100644 --- a/packages/cli/src/services/BundledSkillLoader.ts +++ b/packages/cli/src/services/BundledSkillLoader.ts @@ -66,6 +66,7 @@ export class BundledSkillLoader implements ICommandLoader { source: 'bundled-skill' as const, sourceLabel: 'Skill', modelInvocable: !skill.disableModelInvocation, + argumentHint: skill.argumentHint, whenToUse: skill.whenToUse, action: async (context, _args): Promise => { // Resolve template variables in skill body diff --git a/packages/cli/src/services/FileCommandLoader-markdown.test.ts b/packages/cli/src/services/FileCommandLoader-markdown.test.ts index 737cc39db..816bc1269 100644 --- a/packages/cli/src/services/FileCommandLoader-markdown.test.ts +++ b/packages/cli/src/services/FileCommandLoader-markdown.test.ts @@ -27,6 +27,7 @@ describe('FileCommandLoader - Markdown support', () => { // Create a test markdown command file const mdContent = `--- description: Test markdown command +argument-hint: "[issue-number]" --- This is a test prompt from markdown.`; @@ -47,6 +48,7 @@ This is a test prompt from markdown.`; expect(commands).toHaveLength(1); expect(commands[0].name).toBe('test-command'); expect(commands[0].description).toBe('Test markdown command'); + expect(commands[0].argumentHint).toBe('[issue-number]'); } finally { // Restore original method loader['getCommandDirectories'] = originalMethod; diff --git a/packages/cli/src/services/FileCommandLoader.ts b/packages/cli/src/services/FileCommandLoader.ts index ba03c9567..61bb78210 100644 --- a/packages/cli/src/services/FileCommandLoader.ts +++ b/packages/cli/src/services/FileCommandLoader.ts @@ -355,6 +355,7 @@ export class FileCommandLoader implements ICommandLoader { ? 
validDef.frontmatter.description : undefined, whenToUse: validDef.frontmatter?.when_to_use, + argumentHint: validDef.frontmatter?.['argument-hint'], disableModelInvocation: validDef.frontmatter?.['disable-model-invocation'], }; diff --git a/packages/cli/src/services/SkillCommandLoader.test.ts b/packages/cli/src/services/SkillCommandLoader.test.ts index 00c189123..a884d2c4f 100644 --- a/packages/cli/src/services/SkillCommandLoader.test.ts +++ b/packages/cli/src/services/SkillCommandLoader.test.ts @@ -58,6 +58,19 @@ describe('SkillCommandLoader', () => { expect(mockSkillManager.listSkills).not.toHaveBeenCalled(); }); + it('should propagate argumentHint from skills to slash commands', async () => { + const skill = makeSkill({ argumentHint: '[topic]' }); + mockSkillManager.listSkills.mockImplementation( + ({ level }: { level: string }) => + Promise.resolve(level === 'user' ? [skill] : []), + ); + + const loader = new SkillCommandLoader(mockConfig); + const commands = await loader.loadCommands(signal); + + expect(commands[0]?.argumentHint).toBe('[topic]'); + }); + it('should query user, project, and extension levels', async () => { const loader = new SkillCommandLoader(mockConfig); await loader.loadCommands(signal); diff --git a/packages/cli/src/services/SkillCommandLoader.ts b/packages/cli/src/services/SkillCommandLoader.ts index 58e297633..daf7bc918 100644 --- a/packages/cli/src/services/SkillCommandLoader.ts +++ b/packages/cli/src/services/SkillCommandLoader.ts @@ -84,6 +84,7 @@ export class SkillCommandLoader implements ICommandLoader { : 'skill-dir-command') as CommandSource, sourceLabel, modelInvocable, + argumentHint: skill.argumentHint, whenToUse: skill.whenToUse, action: async (context, _args): Promise => { const body = skill.body; diff --git a/packages/cli/src/services/command-factory.ts b/packages/cli/src/services/command-factory.ts index 1aa363254..ef3f91bee 100644 --- a/packages/cli/src/services/command-factory.ts +++ 
b/packages/cli/src/services/command-factory.ts @@ -37,6 +37,7 @@ import { AtFileProcessor } from './prompt-processors/atFileProcessor.js'; export interface CommandDefinition { prompt: string; description?: string; + argumentHint?: string; whenToUse?: string; disableModelInvocation?: boolean; } @@ -121,6 +122,7 @@ export function createSlashCommandFromDefinition( modelInvocable: definition.disableModelInvocation ? false : !extensionName || !!(definition.description || definition.whenToUse), + argumentHint: definition.argumentHint, whenToUse: definition.whenToUse, action: async ( context: CommandContext, diff --git a/packages/cli/src/services/markdown-command-parser.test.ts b/packages/cli/src/services/markdown-command-parser.test.ts index bbefa43a4..da71db1a8 100644 --- a/packages/cli/src/services/markdown-command-parser.test.ts +++ b/packages/cli/src/services/markdown-command-parser.test.ts @@ -14,6 +14,7 @@ describe('parseMarkdownCommand', () => { it('should parse markdown with YAML frontmatter', () => { const content = `--- description: Test command +argument-hint: "[issue-number]" --- This is the prompt content.`; @@ -23,6 +24,7 @@ This is the prompt content.`; expect(result).toEqual({ frontmatter: { description: 'Test command', + 'argument-hint': '[issue-number]', }, prompt: 'This is the prompt content.', }); @@ -146,6 +148,7 @@ describe('MarkdownCommandDefSchema', () => { const validDef = { frontmatter: { description: 'Test description', + 'argument-hint': '[issue-number]', }, prompt: 'Test prompt', }; diff --git a/packages/cli/src/services/markdown-command-parser.ts b/packages/cli/src/services/markdown-command-parser.ts index bfe5a3e2b..8e0e0139c 100644 --- a/packages/cli/src/services/markdown-command-parser.ts +++ b/packages/cli/src/services/markdown-command-parser.ts @@ -18,6 +18,7 @@ export const MarkdownCommandDefSchema = z.object({ frontmatter: z .object({ description: z.string().optional(), + 'argument-hint': z.string().optional(), when_to_use: 
z.string().optional(), 'disable-model-invocation': z.boolean().optional(), }) diff --git a/packages/cli/src/ui/components/InputPrompt.tsx b/packages/cli/src/ui/components/InputPrompt.tsx index bbdf36f5c..cf457454d 100644 --- a/packages/cli/src/ui/components/InputPrompt.tsx +++ b/packages/cli/src/ui/components/InputPrompt.tsx @@ -194,6 +194,8 @@ export const InputPrompt: React.FC = ({ const midInputGhostTextRef = useRef<{ text: string; insertPosition: number; + acceptText?: string; + showCursorBeforeText?: boolean; } | null>(null); midInputGhostTextRef.current = completion.midInputGhostText; @@ -827,9 +829,9 @@ export const InputPrompt: React.FC = ({ !key.paste && !key.shift && !completion.showSuggestions && - midInputGhostTextRef.current + midInputGhostTextRef.current?.acceptText ) { - buffer.insert(midInputGhostTextRef.current.text); + buffer.insert(midInputGhostTextRef.current.acceptText); return true; } @@ -1169,18 +1171,29 @@ export const InputPrompt: React.FC = ({ // Check for mid-input ghost text (only renders when cursor is at end of input) const ghostText = midInputGhostTextRef.current; if (ghostText && showCursorOpt && ghostText.text.length > 0) { - // First ghost char: inverted (as cursor). Rest: dimmed gray. - const firstChar = ghostText.text[0]!; - const rest = ghostText.text.slice(firstChar.length); - renderedLine.push( - {chalk.inverse(firstChar)}, - ); - if (rest.length > 0) { + if (ghostText.showCursorBeforeText) { + renderedLine.push( + {chalk.inverse(' ')}, + ); renderedLine.push( - {rest} + {ghostText.text} , ); + } else { + // First ghost char: inverted (as cursor). Rest: dimmed gray. 
+ const firstChar = ghostText.text[0]!; + const rest = ghostText.text.slice(firstChar.length); + renderedLine.push( + {chalk.inverse(firstChar)}, + ); + if (rest.length > 0) { + renderedLine.push( + + {rest} + , + ); + } } renderedLine.push({`\u200B`}); } else { diff --git a/packages/cli/src/ui/hooks/useCommandCompletion.test.ts b/packages/cli/src/ui/hooks/useCommandCompletion.test.ts index 74e5e942d..25f2f4924 100644 --- a/packages/cli/src/ui/hooks/useCommandCompletion.test.ts +++ b/packages/cli/src/ui/hooks/useCommandCompletion.test.ts @@ -9,7 +9,8 @@ import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest'; import { renderHook, act, waitFor } from '@testing-library/react'; import { useCommandCompletion } from './useCommandCompletion.js'; -import type { CommandContext } from '../commands/types.js'; +import type { CommandContext, SlashCommand } from '../commands/types.js'; +import { CommandKind } from '../commands/types.js'; import type { Config } from '@qwen-code/qwen-code-core'; import { useTextBuffer } from '../components/shared/text-buffer.js'; import { useEffect } from 'react'; @@ -585,4 +586,62 @@ describe('useCommandCompletion', () => { ); }); }); + + describe('argument hint ghost text', () => { + it('shows argumentHint as inline ghost text for a complete slash command', () => { + const slashCommands: SlashCommand[] = [ + { + name: 'fix-issue', + description: 'Fix GitHub issue', + argumentHint: '[issue-number]', + kind: CommandKind.FILE, + }, + ]; + + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('/fix-issue'); + const completion = useCommandCompletion( + textBuffer, + testRootDir, + slashCommands, + mockCommandContext, + false, + mockConfig, + ); + return completion; + }); + + expect(result.current.midInputGhostText).toEqual({ + text: '[issue-number]', + insertPosition: '/fix-issue'.length, + showCursorBeforeText: true, + }); + }); + + it('does not show argumentHint after arguments have started', () => { + 
const slashCommands: SlashCommand[] = [ + { + name: 'fix-issue', + description: 'Fix GitHub issue', + argumentHint: '[issue-number]', + kind: CommandKind.FILE, + }, + ]; + + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('/fix-issue 123'); + const completion = useCommandCompletion( + textBuffer, + testRootDir, + slashCommands, + mockCommandContext, + false, + mockConfig, + ); + return completion; + }); + + expect(result.current.midInputGhostText).toBeNull(); + }); + }); }); diff --git a/packages/cli/src/ui/hooks/useCommandCompletion.tsx b/packages/cli/src/ui/hooks/useCommandCompletion.tsx index cedea5c56..0b1cfd72f 100644 --- a/packages/cli/src/ui/hooks/useCommandCompletion.tsx +++ b/packages/cli/src/ui/hooks/useCommandCompletion.tsx @@ -19,6 +19,7 @@ import { useAtCompletion } from './useAtCompletion.js'; import { useSlashCompletion } from './useSlashCompletion.js'; import type { Config } from '@qwen-code/qwen-code-core'; import { useCompletion } from './useCompletion.js'; +import { parseSlashCommand } from '../../utils/commands.js'; export enum CompletionMode { IDLE = 'IDLE', @@ -40,7 +41,12 @@ export interface UseCommandCompletionReturn { navigateDown: () => void; handleAutocomplete: (indexToUse: number) => void; /** Inline ghost text for mid-input slash commands (not at line start). 
*/ - midInputGhostText: { text: string; insertPosition: number } | null; + midInputGhostText: { + text: string; + insertPosition: number; + acceptText?: string; + showCursorBeforeText?: boolean; + } | null; } export function useCommandCompletion( @@ -243,17 +249,46 @@ export function useCommandCompletion( const midInputGhostText = useMemo((): { text: string; insertPosition: number; + acceptText?: string; + showCursorBeforeText?: boolean; } | null => { if (!active || reverseSearchActive) return null; const cursorOffset = logicalPosToOffset(buffer.lines, cursorRow, cursorCol); const midCmd = findMidInputSlashCommand(buffer.text, cursorOffset); - if (!midCmd) return null; - const match = getBestSlashCommandMatch( - midCmd.partialCommand, + if (midCmd) { + const match = getBestSlashCommandMatch( + midCmd.partialCommand, + slashCommands, + ); + if (!match) return null; + return { + text: match.suffix, + insertPosition: cursorOffset, + acceptText: match.suffix, + }; + } + + if (cursorRow !== 0) return null; + const currentLine = buffer.lines[cursorRow] || ''; + const lineCodePoints = toCodePoints(currentLine); + if (cursorCol !== lineCodePoints.length) return null; + + const lineToCursor = lineCodePoints.slice(0, cursorCol).join(''); + if (!isSlashCommand(lineToCursor.trim())) return null; + + const { commandToExecute, args } = parseSlashCommand( + lineToCursor, slashCommands, ); - if (!match) return null; - return { text: match.suffix, insertPosition: cursorOffset }; + if (!commandToExecute?.argumentHint || args.trim().length > 0) { + return null; + } + + return { + text: commandToExecute.argumentHint, + insertPosition: cursorOffset, + showCursorBeforeText: true, + }; }, [ buffer.text, buffer.lines, diff --git a/packages/cli/src/ui/hooks/useSlashCompletion.test.ts b/packages/cli/src/ui/hooks/useSlashCompletion.test.ts index a526ce6b3..dbe7a601c 100644 --- a/packages/cli/src/ui/hooks/useSlashCompletion.test.ts +++ b/packages/cli/src/ui/hooks/useSlashCompletion.test.ts @@ 
-246,6 +246,35 @@ describe('useSlashCompletion', () => { }); }); + it('should keep argumentHint out of command suggestion labels', async () => { + const slashCommands = [ + createTestCommand({ + name: 'fix-issue', + description: 'Fix GitHub issue', + argumentHint: '[issue-number]', + }), + ]; + const { result } = renderHook(() => + useTestHarnessForSlashCompletion( + true, + '/fix', + slashCommands, + mockCommandContext, + ), + ); + + await waitFor(() => { + expect(result.current.suggestions).toEqual([ + { + label: 'fix-issue', + value: 'fix-issue', + description: 'Fix GitHub issue', + commandKind: CommandKind.BUILT_IN, + }, + ]); + }); + }); + it('should prefer higher completionPriority when match quality ties', async () => { const slashCommands = [ createTestCommand({ diff --git a/packages/core/src/skills/bundled/batch/SKILL.md b/packages/core/src/skills/bundled/batch/SKILL.md index b6b9c9e8d..a44d1cdac 100644 --- a/packages/core/src/skills/bundled/batch/SKILL.md +++ b/packages/core/src/skills/bundled/batch/SKILL.md @@ -1,6 +1,7 @@ --- name: batch description: Execute batch operations on multiple files in parallel. Automatically discovers files, splits into chunks, and processes with parallel worker agents. Use `/batch` followed by operation and file pattern. +argument-hint: ' ' allowedTools: - task - glob diff --git a/packages/core/src/skills/bundled/loop/SKILL.md b/packages/core/src/skills/bundled/loop/SKILL.md index ea9ae7cf3..70bc49418 100644 --- a/packages/core/src/skills/bundled/loop/SKILL.md +++ b/packages/core/src/skills/bundled/loop/SKILL.md @@ -1,6 +1,7 @@ --- name: loop description: Create a recurring loop that runs a prompt on a schedule. Usage - /loop 5m check the build, /loop check the PR every 30m, /loop run tests (defaults to 10m). /loop list to show jobs, /loop clear to cancel all. 
+argument-hint: '[interval] | list | clear' allowedTools: - cron_create - cron_list diff --git a/packages/core/src/skills/bundled/qc-helper/SKILL.md b/packages/core/src/skills/bundled/qc-helper/SKILL.md index 14fbaa152..bc68011a5 100644 --- a/packages/core/src/skills/bundled/qc-helper/SKILL.md +++ b/packages/core/src/skills/bundled/qc-helper/SKILL.md @@ -1,6 +1,7 @@ --- name: qc-helper description: Answer any question about Qwen Code usage, features, configuration, and troubleshooting by referencing the official user documentation. Also helps users view or modify their settings.json. Invoke with `/qc-helper` followed by a question, e.g. `/qc-helper how do I configure MCP servers?` or `/qc-helper change approval mode to yolo`. +argument-hint: '' allowedTools: - read_file - edit_file diff --git a/packages/core/src/skills/bundled/review/SKILL.md b/packages/core/src/skills/bundled/review/SKILL.md index ca4e166c9..2a0571184 100644 --- a/packages/core/src/skills/bundled/review/SKILL.md +++ b/packages/core/src/skills/bundled/review/SKILL.md @@ -1,6 +1,7 @@ --- name: review description: Review changed code for correctness, security, code quality, and performance. Use when the user asks to review code changes, a PR, or specific files. Invoke with `/review`, `/review `, `/review `, or `/review --comment` to post inline comments on the PR. 
+argument-hint: '[pr-number|file-path] [--comment]' allowedTools: - task - run_shell_command diff --git a/packages/core/src/skills/skill-load.test.ts b/packages/core/src/skills/skill-load.test.ts index af382b7b8..fde14b389 100644 --- a/packages/core/src/skills/skill-load.test.ts +++ b/packages/core/src/skills/skill-load.test.ts @@ -43,6 +43,13 @@ describe('skill-load', () => { allowedTools: ['read_file', 'write_file'], }; } + if (yamlString.includes('argument-hint:')) { + return { + name: 'test-skill', + description: 'A test skill', + 'argument-hint': '[topic]', + }; + } // Default case return { name: 'test-skill', @@ -174,6 +181,21 @@ You are a helpful assistant with this skill. expect(config.allowedTools).toEqual(['read_file', 'write_file']); }); + it('should parse argument-hint from frontmatter', () => { + const markdownWithArgumentHint = `--- +name: test-skill +description: A test skill +argument-hint: "[topic]" +--- + +Skill body. +`; + + const config = parseSkillContent(markdownWithArgumentHint, testFilePath); + + expect(config.argumentHint).toBe('[topic]'); + }); + it('should throw error for invalid format without frontmatter', () => { const invalidMarkdown = `# Just a heading Some content without frontmatter. diff --git a/packages/core/src/skills/skill-load.ts b/packages/core/src/skills/skill-load.ts index 970441588..9744c7eef 100644 --- a/packages/core/src/skills/skill-load.ts +++ b/packages/core/src/skills/skill-load.ts @@ -114,11 +114,16 @@ export function parseSkillContent( // Extract optional model field const model = parseModelField(frontmatter); + const argumentHint = + typeof frontmatter['argument-hint'] === 'string' + ? 
frontmatter['argument-hint'] + : undefined; const config: SkillConfig = { name, description, allowedTools, + argumentHint, model, filePath, body: body.trim(), diff --git a/packages/core/src/skills/skill-manager.test.ts b/packages/core/src/skills/skill-manager.test.ts index 9dd23ed62..841bc350f 100644 --- a/packages/core/src/skills/skill-manager.test.ts +++ b/packages/core/src/skills/skill-manager.test.ts @@ -82,6 +82,13 @@ describe('SkillManager', () => { allowedTools: ['read_file', 'write_file'], }; } + if (yamlString.includes('argument-hint:')) { + return { + name: 'test-skill', + description: 'A test skill', + 'argument-hint': '[topic]', + }; + } if (yamlString.includes('name: skill1')) { return { name: 'skill1', description: 'First skill' }; } @@ -239,6 +246,25 @@ You are a helpful assistant with this skill. expect(config.allowedTools).toEqual(['read_file', 'write_file']); }); + it('should parse argument-hint from frontmatter', () => { + const markdownWithArgumentHint = `--- +name: test-skill +description: A test skill +argument-hint: "[topic]" +--- + +Skill body. 
+`; + + const config = manager.parseSkillContent( + markdownWithArgumentHint, + validSkillConfig.filePath, + 'project', + ); + + expect(config.argumentHint).toBe('[topic]'); + }); + it('should determine level from file path', () => { const projectPath = '/test/project/.qwen/skills/test-skill/SKILL.md'; const userPath = '/home/user/.qwen/skills/test-skill/SKILL.md'; diff --git a/packages/core/src/skills/skill-manager.ts b/packages/core/src/skills/skill-manager.ts index 0c92e0761..2ab6f9ebd 100644 --- a/packages/core/src/skills/skill-manager.ts +++ b/packages/core/src/skills/skill-manager.ts @@ -450,7 +450,11 @@ export class SkillManager { // Extract optional model field const model = parseModelField(frontmatter); - // Extract when_to_use and disable-model-invocation + // Extract argument-hint, when_to_use, and disable-model-invocation + const argumentHint = + typeof frontmatter['argument-hint'] === 'string' + ? frontmatter['argument-hint'] + : undefined; const whenToUse = typeof frontmatter['when_to_use'] === 'string' ? frontmatter['when_to_use'] @@ -468,6 +472,7 @@ export class SkillManager { allowedTools, hooks, skillRoot, + argumentHint, model, level, filePath, diff --git a/packages/core/src/skills/types.ts b/packages/core/src/skills/types.ts index fe7332466..8e0655e30 100644 --- a/packages/core/src/skills/types.ts +++ b/packages/core/src/skills/types.ts @@ -81,6 +81,12 @@ export interface SkillConfig { */ extensionName?: string; + /** + * Argument hint shown after the slash command name in completion menus. + * Parsed from the `argument-hint` frontmatter field in SKILL.md. + */ + argumentHint?: string; + /** * Describes when to invoke this skill — shown to the model in the SkillTool * description so it can decide whether to use it. 
Parsed from the From ccb9857a5c51cf8ac39101dbe909c2a3648166dd Mon Sep 17 00:00:00 2001 From: John London Date: Sun, 26 Apr 2026 19:44:18 -0500 Subject: [PATCH 22/36] refactor(config): dedupe QWEN_CODE_API_TIMEOUT_MS env override logic (#3653) Extract duplicated timeout env override block into a shared helper applyTimeoutEnvOverride(), used by both resolveModelConfig() and resolveQwenOAuthConfig(). Preserves precedence: modelProvider > env > settings > default. Adds [Regression] and [Additional] tests guarding against the original OAuth-path bug and covering edge cases. --- .../src/models/modelConfigResolver.test.ts | 169 ++++++++++++++++++ .../core/src/models/modelConfigResolver.ts | 60 +++---- 2 files changed, 197 insertions(+), 32 deletions(-) diff --git a/packages/core/src/models/modelConfigResolver.test.ts b/packages/core/src/models/modelConfigResolver.test.ts index dd2ca7b04..4aeb8fbee 100644 --- a/packages/core/src/models/modelConfigResolver.test.ts +++ b/packages/core/src/models/modelConfigResolver.test.ts @@ -685,4 +685,173 @@ describe('modelConfigResolver', () => { expect(result.errors[0].message).toContain('envKey'); }); }); + + describe('[Regression] timeout env override refactor', () => { + it('[Regression] OAuth path must apply QWEN_CODE_API_TIMEOUT_MS (was broken before fix #3629)', () => { + // Guards against the original bug where resolveQwenOAuthConfig() + // returned before applying the env override. 
+ const result = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '45000', + }, + }); + + expect(result.config.timeout).toBe(45000); + expect(result.sources['timeout']).toBeDefined(); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe('QWEN_CODE_API_TIMEOUT_MS'); + expect(result.config.model).toBe(DEFAULT_QWEN_MODEL); + }); + + it('[Regression] non-OAuth path must apply QWEN_CODE_API_TIMEOUT_MS', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key' }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + }); + + expect(result.config.timeout).toBe(900000); + expect(result.sources['timeout'].kind).toBe('env'); + expect(result.sources['timeout'].envKey).toBe('QWEN_CODE_API_TIMEOUT_MS'); + }); + + it('[Regression] modelProvider timeout must win over env in both paths', () => { + // Non-OAuth + const nonOAuth = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: {}, + env: { + MY_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + modelProvider: { + id: 'model', + name: 'Model', + envKey: 'MY_KEY', + baseUrl: 'https://api.example.com', + generationConfig: { timeout: 60000 }, + }, + }); + expect(nonOAuth.config.timeout).toBe(60000); + expect(nonOAuth.sources['timeout'].kind).toBe('modelProviders'); + + // OAuth + const oauth = resolveModelConfig({ + authType: AuthType.QWEN_OAUTH, + cli: {}, + settings: {}, + env: { + QWEN_CODE_API_TIMEOUT_MS: '45000', + }, + modelProvider: { + id: 'qwen-oauth', + name: 'Qwen OAuth', + generationConfig: { timeout: 120000 }, + }, + }); + expect(oauth.config.timeout).toBe(120000); + expect(oauth.sources['timeout'].kind).toBe('modelProviders'); + }); + + it('[Regression] refactor must not alter precedence: env > settings', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + 
cli: {}, + settings: { + apiKey: 'key', + generationConfig: { timeout: 30000 }, + }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '900000', + }, + }); + + // env must override settings + expect(result.config.timeout).toBe(900000); + expect(result.sources['timeout'].kind).toBe('env'); + }); + }); + + describe('[Additional] timeout env override edge cases', () => { + it('handles scientific notation in QWEN_CODE_API_TIMEOUT_MS', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key' }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '1.5e5', + }, + }); + + expect(result.config.timeout).toBe(150000); + expect(result.sources['timeout'].kind).toBe('env'); + }); + + it('handles hex values in QWEN_CODE_API_TIMEOUT_MS', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key', generationConfig: { timeout: 30000 } }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '0x2BF20', // 180000 in hex + }, + }); + + expect(result.config.timeout).toBe(180000); + expect(result.sources['timeout'].kind).toBe('env'); + }); + + it('ignores empty string QWEN_CODE_API_TIMEOUT_MS', () => { + const result = resolveModelConfig({ + authType: AuthType.USE_OPENAI, + cli: {}, + settings: { apiKey: 'key', generationConfig: { timeout: 30000 } }, + env: { + OPENAI_API_KEY: 'key', + QWEN_CODE_API_TIMEOUT_MS: '', + }, + }); + + expect(result.config.timeout).toBe(30000); + expect(result.sources['timeout'].kind).toBe('settings'); + }); + + it('applies env override for every supported auth type', () => { + const authTypes = [ + { type: AuthType.USE_OPENAI, env: { OPENAI_API_KEY: 'key' } }, + { + type: AuthType.USE_ANTHROPIC, + env: { + ANTHROPIC_API_KEY: 'key', + ANTHROPIC_BASE_URL: 'https://api.anthropic.com', + }, + }, + ]; + + for (const { type, env } of authTypes) { + const result = resolveModelConfig({ + authType: type, + cli: {}, + 
settings: { + ...(type === AuthType.USE_OPENAI ? { apiKey: 'key' } : {}), + }, + env: { ...env, QWEN_CODE_API_TIMEOUT_MS: '99999' }, + }); + + expect(result.config.timeout).toBe(99999); + expect(result.sources['timeout'].kind).toBe('env'); + } + }); + }); }); diff --git a/packages/core/src/models/modelConfigResolver.ts b/packages/core/src/models/modelConfigResolver.ts index 3eb663c83..81c916523 100644 --- a/packages/core/src/models/modelConfigResolver.ts +++ b/packages/core/src/models/modelConfigResolver.ts @@ -105,6 +105,32 @@ export interface ModelConfigResolutionResult { warnings: string[]; } +/** + * Applies QWEN_CODE_API_TIMEOUT_MS env override if modelProvider has not set a timeout. + * Precedence: modelProvider > env > settings > default + * Mutates generationConfig and sources in-place. + */ +function applyTimeoutEnvOverride( + env: Record, + generationConfig: Partial, + sources: ConfigSources, + modelProvider?: ModelProviderConfig, +): void { + if (modelProvider?.generationConfig?.timeout !== undefined) return; + + const raw = env['QWEN_CODE_API_TIMEOUT_MS']; + if (raw === undefined) return; + + const parsed = Number(raw); + if (Number.isFinite(parsed) && parsed > 0) { + generationConfig.timeout = Math.floor(parsed); + sources['timeout'] = { + kind: 'env', + envKey: 'QWEN_CODE_API_TIMEOUT_MS', + }; + } +} + /** * Resolve model configuration from all input sources. 
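The helper relies entirely on JavaScript's `Number()` coercion, which is what the `[Additional]` edge-case tests above exercise. A standalone sketch of that parsing behavior follows — the `ConfigSources` and `GenerationConfig` types from the patch are simplified here to plain object shapes, and `providerTimeout` stands in for `modelProvider?.generationConfig?.timeout`; the precedence logic itself matches the extracted helper:

```typescript
// Simplified sketch of applyTimeoutEnvOverride's parsing and precedence.
type Sources = Record<string, { kind: string; envKey?: string }>;

function applyTimeoutEnvOverride(
  env: Record<string, string | undefined>,
  generationConfig: { timeout?: number },
  sources: Sources,
  providerTimeout?: number, // stands in for modelProvider?.generationConfig?.timeout
): void {
  if (providerTimeout !== undefined) return; // modelProvider always wins
  const raw = env['QWEN_CODE_API_TIMEOUT_MS'];
  if (raw === undefined) return;
  // Number() accepts decimal ('45000'), scientific ('1.5e5'), and hex
  // ('0x2BF20') notation; '' coerces to 0 and 'abc' to NaN, both of which
  // are rejected by the guard below, leaving any settings value intact.
  const parsed = Number(raw);
  if (Number.isFinite(parsed) && parsed > 0) {
    generationConfig.timeout = Math.floor(parsed);
    sources['timeout'] = { kind: 'env', envKey: 'QWEN_CODE_API_TIMEOUT_MS' };
  }
}

// Empty string: the settings-provided timeout survives.
const fromSettings = { timeout: 30000 };
applyTimeoutEnvOverride({ QWEN_CODE_API_TIMEOUT_MS: '' }, fromSettings, {});

// Scientific notation: env overrides settings.
const fromEnv = { timeout: 30000 };
applyTimeoutEnvOverride({ QWEN_CODE_API_TIMEOUT_MS: '1.5e5' }, fromEnv, {});
```

This is why the hex and scientific-notation tests pass without any special-casing in the helper: the behavior falls out of `Number()` rather than being deliberately implemented.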
* @@ -247,22 +273,7 @@ export function resolveModelConfig( ); // ---- Env override: QWEN_CODE_API_TIMEOUT_MS ---- - // Precedence: modelProvider > env > settings > default (CLI doesn't set timeout) - const modelProviderSetTimeout = - modelProvider?.generationConfig?.timeout !== undefined; - if (!modelProviderSetTimeout) { - const envTimeout = env['QWEN_CODE_API_TIMEOUT_MS']; - if (envTimeout !== undefined) { - const parsed = Number(envTimeout); - if (Number.isFinite(parsed) && parsed > 0) { - generationConfig.timeout = Math.floor(parsed); - sources['timeout'] = { - kind: 'env', - envKey: 'QWEN_CODE_API_TIMEOUT_MS', - }; - } - } - } + applyTimeoutEnvOverride(env, generationConfig, sources, modelProvider); // Build final config const config: ContentGeneratorConfig = { @@ -343,22 +354,7 @@ function resolveQwenOAuthConfig( ); // ---- Env override: QWEN_CODE_API_TIMEOUT_MS ---- - // Precedence: modelProvider > env > settings > default - const modelProviderSetTimeoutOAuth = - modelProvider?.generationConfig?.timeout !== undefined; - if (!modelProviderSetTimeoutOAuth) { - const envTimeoutOAuth = input.env['QWEN_CODE_API_TIMEOUT_MS']; - if (envTimeoutOAuth !== undefined) { - const parsed = Number(envTimeoutOAuth); - if (Number.isFinite(parsed) && parsed > 0) { - generationConfig.timeout = Math.floor(parsed); - sources['timeout'] = { - kind: 'env', - envKey: 'QWEN_CODE_API_TIMEOUT_MS', - }; - } - } - } + applyTimeoutEnvOverride(input.env, generationConfig, sources, modelProvider); const config: ContentGeneratorConfig = { authType: AuthType.QWEN_OAUTH, From 96bc8741977b00dd01847f12fd483e15193495f5 Mon Sep 17 00:00:00 2001 From: jinye Date: Mon, 27 Apr 2026 14:38:56 +0800 Subject: [PATCH 23/36] chore(gitignore): add .codex directory to gitignore (#3665) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 🤖 Generated with [Qwen Code](https://github.com/QwenLM/qwen-code) Co-authored-by: jinye.djy --- .gitignore | 1 + 1 file changed, 1 
insertion(+) diff --git a/.gitignore b/.gitignore index 9644bc90b..d4b35d0e7 100644 --- a/.gitignore +++ b/.gitignore @@ -26,6 +26,7 @@ package-lock.json .qoder .claude CLAUDE.md +.codex # Qwen Code Configs .qwen/* From 7fe853a7827e0f7dbe07331c6ffcb302e2b426e7 Mon Sep 17 00:00:00 2001 From: pomelo Date: Mon, 27 Apr 2026 14:47:44 +0800 Subject: [PATCH 24/36] Feat/openrouter auth (#3576) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(cli): add OpenRouter auth flow Co-authored-by: Qwen-Coder * feat(cli): add OpenRouter model management UI Co-authored-by: Qwen-Coder * fix(cli): align OpenRouter OAuth fallback session Co-authored-by: Qwen-Coder * refactor(cli): unify OpenRouter model setup flow Co-authored-by: Qwen-Coder * feat(auth): update OAuth description with provider examples and i18n support - Updated OAuth option description to include provider examples (OpenRouter, ModelScope) - Added internationalization support for new description text - Updated all language files (en, zh, de, fr, ja, pt, ru) with translations Co-authored-by: Qwen-Coder * docs: simplify OpenRouter design docs Co-authored-by: Qwen-Coder * test(auth): fix OpenRouter OAuth mock typing Co-authored-by: Qwen-Coder * test(auth): sync AuthDialog tests with new three-option main menu layout Co-authored-by: Qwen-Coder Update assertions that referenced removed 'Qwen OAuth' and 'OpenRouter' options in the main/API-key views to match the refactored OAUTH / CODING_PLAN / API_KEY structure. * fix(i18n): add missing zh-TW translation for browser-based auth key Co-authored-by: Qwen-Coder zh-TW.js was generated from main's en.js which had already removed this key, but the PR re-adds it in en.js. Sync zh-TW with the new translation. 
* feat(cli): Improve custom auth wizard with step indicators and cleaner advanced config (#3607) * feat(cli): Add custom API key auth wizard with 6-step setup flow Replace the documentation-only "Custom API Key" screen with an in-terminal wizard: Protocol select → Base URL input → API Key input → Model ID input → JSON review → Save. - Add 5 new ViewLevels and render functions in AuthDialog - Implement utility functions: generateCustomApiKeyEnvKey (normalization), normalizeCustomModelIds (split/trim/dedupe), maskApiKey (display) - Implement handleCustomApiKeySubmit in useAuth with backup, env key generation, modelProviders merge, auth refresh, and user feedback - Wire handler through UIActionsContext and AppContainer - Add 18 unit tests for utilities, 4 wizard flow integration tests Co-authored-by: Qwen-Coder * feat(cli): Improve custom auth wizard with step indicators and cleaner advanced config - Add step indicators (Step 1/6 · Protocol) to each wizard screen - Remove redundant Protocol/Endpoint context from each step for focus - Redesign advanced config: add descriptions to thinking/modality toggles - Remove max tokens option; keep only thinking and modality settings - Add ↑↓ arrow navigation with Space toggle and Enter to continue - Generation config flows through review JSON and final submit Co-authored-by: Qwen-Coder * test: Fix Windows CI failures in fileUtils and AuthDialog tests - fileUtils.test.ts: Mock node:child_process execFile to prevent pdftotext spawn that times out on Windows (ENOENT, 5s timeout) - AuthDialog.test.tsx: Add char-by-char typeText() helper to work around Node 24.x + ink TextInput compatibility issue on Windows Co-authored-by: Qwen-Coder * fix(cli): Reset advanced wizard state and use JSON.stringify for settings preview - Reset advancedThinkingEnabled, advancedModalityEnabled, and focusedConfigIndex when re-entering custom wizard to prevent state leakage between configurations - Replace hand-rolled JSON string concatenation with
JSON.stringify for settings.json preview to properly escape special characters in model IDs and base URLs Co-authored-by: Qwen-Coder --------- Co-authored-by: Qwen-Coder * fix(cli): harden OpenRouter OAuth callback handling Co-authored-by: Qwen-Coder * test(cli): stabilize OpenRouter state mismatch test Co-authored-by: Qwen-Coder * test(cli): stabilize custom auth wizard navigation Co-authored-by: Qwen-Coder --------- Co-authored-by: Qwen-Coder --- .gitignore | 1 + docs/design/openrouter-auth-and-models.md | 89 +++ packages/cli/src/commands/auth.ts | 18 +- packages/cli/src/commands/auth/handler.ts | 241 +++++- .../cli/src/commands/auth/openrouter.test.ts | 304 +++++++ .../src/commands/auth/openrouterOAuth.test.ts | 730 +++++++++++++++++ .../cli/src/commands/auth/openrouterOAuth.ts | 751 ++++++++++++++++++ packages/cli/src/commands/auth/status.test.ts | 79 +- packages/cli/src/i18n/locales/de.js | 2 + packages/cli/src/i18n/locales/en.js | 2 + packages/cli/src/i18n/locales/fr.js | 2 + packages/cli/src/i18n/locales/ja.js | 2 + packages/cli/src/i18n/locales/pt.js | 2 + packages/cli/src/i18n/locales/ru.js | 2 + packages/cli/src/i18n/locales/zh-TW.js | 2 + packages/cli/src/i18n/locales/zh.js | 2 + .../cli/src/services/BuiltinCommandLoader.ts | 2 + packages/cli/src/ui/AppContainer.test.tsx | 18 + packages/cli/src/ui/AppContainer.tsx | 24 + packages/cli/src/ui/auth/AuthDialog.test.tsx | 641 ++++++++++++++- packages/cli/src/ui/auth/AuthDialog.tsx | 653 ++++++++++++++- packages/cli/src/ui/auth/useAuth.test.ts | 351 ++++++++ packages/cli/src/ui/auth/useAuth.ts | 350 +++++++- .../ui/commands/manageModelsCommand.test.ts | 38 + .../src/ui/commands/manageModelsCommand.ts | 24 + packages/cli/src/ui/commands/rewindCommand.ts | 6 +- packages/cli/src/ui/commands/types.ts | 1 + .../cli/src/ui/components/DialogManager.tsx | 27 + .../components/ExternalAuthProgress.test.tsx | 29 + .../ui/components/ExternalAuthProgress.tsx | 60 ++ .../ui/components/ManageModelsDialog.test.tsx | 132 +++ 
.../src/ui/components/ManageModelsDialog.tsx | 648 +++++++++++++++ .../src/ui/components/shared/MultiSelect.tsx | 51 +- .../src/ui/components/shared/TextInput.tsx | 4 +- .../cli/src/ui/contexts/UIActionsContext.tsx | 21 + .../cli/src/ui/contexts/UIStateContext.tsx | 4 +- .../cli/src/ui/hooks/slashCommandProcessor.ts | 4 + .../ui/hooks/useManageModelsCommand.test.ts | 40 + .../src/ui/hooks/useManageModelsCommand.ts | 32 + packages/cli/src/ui/hooks/useQwenAuth.ts | 6 + .../src/ui/manageModels/manageModels.test.ts | 140 ++++ .../cli/src/ui/manageModels/manageModels.ts | 211 +++++ .../src/services/shellExecutionService.ts | 8 +- packages/core/src/utils/fileUtils.test.ts | 34 + .../core/src/utils/terminalSerializer.test.ts | 7 +- 45 files changed, 5666 insertions(+), 129 deletions(-) create mode 100644 docs/design/openrouter-auth-and-models.md create mode 100644 packages/cli/src/commands/auth/openrouter.test.ts create mode 100644 packages/cli/src/commands/auth/openrouterOAuth.test.ts create mode 100644 packages/cli/src/commands/auth/openrouterOAuth.ts create mode 100644 packages/cli/src/ui/auth/useAuth.test.ts create mode 100644 packages/cli/src/ui/commands/manageModelsCommand.test.ts create mode 100644 packages/cli/src/ui/commands/manageModelsCommand.ts create mode 100644 packages/cli/src/ui/components/ExternalAuthProgress.test.tsx create mode 100644 packages/cli/src/ui/components/ExternalAuthProgress.tsx create mode 100644 packages/cli/src/ui/components/ManageModelsDialog.test.tsx create mode 100644 packages/cli/src/ui/components/ManageModelsDialog.tsx create mode 100644 packages/cli/src/ui/hooks/useManageModelsCommand.test.ts create mode 100644 packages/cli/src/ui/hooks/useManageModelsCommand.ts create mode 100644 packages/cli/src/ui/manageModels/manageModels.test.ts create mode 100644 packages/cli/src/ui/manageModels/manageModels.ts diff --git a/.gitignore b/.gitignore index d4b35d0e7..2dae5710a 100644 --- a/.gitignore +++ b/.gitignore @@ -89,3 +89,4 @@ 
storybook-static # Dev symlink: qc-helper bundled skill docs (created by scripts/dev.js) packages/core/src/skills/bundled/qc-helper/docs +tmp/ \ No newline at end of file diff --git a/docs/design/openrouter-auth-and-models.md b/docs/design/openrouter-auth-and-models.md new file mode 100644 index 000000000..7c82476a6 --- /dev/null +++ b/docs/design/openrouter-auth-and-models.md @@ -0,0 +1,89 @@ +# OpenRouter Auth and Model Management Design + +This document captures the design intent behind the OpenRouter auth flow and the +model management changes introduced with it. It intentionally focuses on the +product and architectural choices, not implementation history. + +## Goals + +- Let users authenticate with OpenRouter from both CLI and `/auth`. +- Reuse the existing OpenAI-compatible provider path instead of adding a new auth + type for OpenRouter. +- Make the first-run experience usable without asking users to manage hundreds of + models immediately. +- Keep a clear path toward richer model management via `/manage-models`. + +## OpenRouter Auth + +OpenRouter is integrated as an OpenAI-compatible provider: + +- auth type: `AuthType.USE_OPENAI` +- provider settings: `modelProviders.openai` +- API key env var: `OPENROUTER_API_KEY` +- base URL: `https://openrouter.ai/api/v1` + +This avoids introducing an OpenRouter-specific `AuthType` when the runtime model +provider path is already OpenAI-compatible. It keeps auth status, model +resolution, provider selection, and settings schema aligned with the existing +provider abstraction. + +The user-facing flows are: + +- `qwen auth openrouter --key ` for automation or direct API-key setup. +- `qwen auth openrouter` for browser-based OAuth. +- `/auth` → API Key → OpenRouter for the TUI flow. + +Browser OAuth uses OpenRouter's PKCE flow and writes the exchanged API key into +settings before refreshing auth as `AuthType.USE_OPENAI`. + +## Model Management + +OpenRouter exposes a large dynamic model catalog. 
Writing every discovered model +into `modelProviders.openai` would make `/model` noisy and would turn a long-term +settings field into a cache of a remote catalog. + +The key design split is: + +- **Catalog**: the full set of models discovered from a source such as + OpenRouter. +- **Enabled set**: the smaller set of models that should appear in `/model` and + be persisted in user settings. + +For the initial OpenRouter flow, auth should finish with a useful default enabled +set instead of interrupting the user with a large picker. The recommended set +should be small, stable, and biased toward models that let users try the product +successfully, including free models when available. + +`/model` remains a fast model switcher. It should not become the place where +users browse and curate a full provider catalog. + +## `/manage-models` + +Richer model management belongs in a separate `/manage-models` entry point. That +flow should let users: + +- browse discovered models; +- search by id, display name, provider prefix, and derived tags such as `free` or + `vision`; +- see which models are currently enabled; +- enable or disable models in batches. + +The source dimension must remain part of this design. OpenRouter is only the +first dynamic catalog source; future sources such as ModelScope and ModelStudio +should fit the same shape. UI complexity can be reduced, but the underlying +source abstraction should stay available as the extension point. + +## Current Boundary + +This change should do the minimum needed to make OpenRouter auth and model setup +pleasant: + +- OAuth or key-based auth configures OpenRouter through the existing + OpenAI-compatible provider path. +- The initial enabled model set is curated instead of dumping the full catalog + into settings. +- Full catalog storage, browsing, filtering, and batch management are deferred to + `/manage-models`. 
+ +The design principle is simple: authentication should get users to a working +state quickly, while model curation should live in a dedicated management flow. diff --git a/packages/cli/src/commands/auth.ts b/packages/cli/src/commands/auth.ts index b90795bc7..c99ce7e8d 100644 --- a/packages/cli/src/commands/auth.ts +++ b/packages/cli/src/commands/auth.ts @@ -50,6 +50,21 @@ const codePlanCommand = { }, }; +const openRouterCommand = { + command: 'openrouter', + describe: t('Authenticate using OpenRouter API key setup'), + builder: (yargs: Argv) => + yargs.option('key', { + alias: 'k', + describe: t('API key for OpenRouter'), + type: 'string', + }), + handler: async (argv: { key?: string }) => { + const key = argv['key'] as string | undefined; + await handleQwenAuth('openrouter', { key }); + }, +}; + const statusCommand = { command: 'status', describe: t('Show current authentication status'), @@ -61,12 +76,13 @@ const statusCommand = { export const authCommand: CommandModule = { command: 'auth', describe: t( - 'Configure Qwen authentication information with Qwen-OAuth or Alibaba Cloud Coding Plan', + 'Configure Qwen authentication information with Qwen-OAuth, OpenRouter, or Alibaba Cloud Coding Plan', ), builder: (yargs: Argv) => yargs .command(qwenOauthCommand) .command(codePlanCommand) + .command(openRouterCommand) .command(statusCommand) .demandCommand(0) // Don't require a subcommand .version(false), diff --git a/packages/cli/src/commands/auth/handler.ts b/packages/cli/src/commands/auth/handler.ts index 25a7d44fa..099f5e639 100644 --- a/packages/cli/src/commands/auth/handler.ts +++ b/packages/cli/src/commands/auth/handler.ts @@ -12,18 +12,29 @@ import { } from '@qwen-code/qwen-code-core'; import { writeStdoutLine, writeStderrLine } from '../../utils/stdioHelpers.js'; import { t } from '../../i18n/index.js'; +import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js'; import { getCodingPlanConfig, isCodingPlanConfig, CodingPlanRegion, 
CODING_PLAN_ENV_KEY, -} from '@qwen-code/qwen-code-core'; -import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js'; +} from '../../constants/codingPlan.js'; import { backupSettingsFile } from '../../utils/settingsUtils.js'; import { loadSettings, type LoadedSettings } from '../../config/settings.js'; import { loadCliConfig } from '../../config/config.js'; import type { CliArgs } from '../../config/config.js'; import { InteractiveSelector } from './interactiveSelector.js'; +import { + applyOpenRouterModelsConfiguration, + createOpenRouterOAuthSession, + isOpenRouterConfig, + OPENROUTER_ENV_KEY, + runOpenRouterOAuthLogin, +} from './openrouterOAuth.js'; + +function formatElapsedTime(startMs: number): string { + return `${((Date.now() - startMs) / 1000).toFixed(2)}s`; +} interface QwenAuthOptions { region?: string; @@ -53,7 +64,7 @@ interface MergedSettingsWithCodingPlan { * Handles the authentication process based on the specified command and options */ export async function handleQwenAuth( - command: 'qwen-oauth' | 'coding-plan', + command: 'qwen-oauth' | 'coding-plan' | 'openrouter', options: QwenAuthOptions, ) { try { @@ -126,6 +137,8 @@ export async function handleQwenAuth( await handleQwenOAuth(config, settings); } else if (command === 'coding-plan') { await handleCodePlanAuth(config, settings, options); + } else if (command === 'openrouter') { + await handleOpenRouterAuth(config, settings, options); } // Exit after authentication is complete @@ -192,7 +205,7 @@ async function handleCodePlanAuth( } else { // Otherwise, prompt interactively selectedRegion = await promptForRegion(); - selectedKey = await promptForKey(); + selectedKey = await promptForAuthKey(t('Enter your Coding Plan API key: ')); } writeStdoutLine(t('Processing Alibaba Cloud Coding Plan authentication...')); @@ -279,6 +292,105 @@ async function handleCodePlanAuth( } } +/** + * Handles OpenRouter API key setup. 
+ */ +async function handleOpenRouterAuth( + config: Config, + settings: LoadedSettings, + options: QwenAuthOptions, +): Promise { + writeStdoutLine(t('Processing OpenRouter authentication...')); + + try { + const authStartMs = Date.now(); + let selectedKey = options.key; + + if (!selectedKey) { + const oauthStartMs = Date.now(); + const oauthSession = createOpenRouterOAuthSession(); + writeStdoutLine( + t( + 'Starting OpenRouter OAuth in your browser. If needed, open this link manually: {{authorizationUrl}}', + { + authorizationUrl: oauthSession.authorizationUrl, + }, + ), + ); + const oauthResult = await runOpenRouterOAuthLogin(undefined, { + session: oauthSession, + }); + writeStdoutLine( + t('Waited for OpenRouter browser authorization in {{elapsed}}.', { + elapsed: + typeof oauthResult.authorizationCodeWaitMs === 'number' + ? `${(oauthResult.authorizationCodeWaitMs / 1000).toFixed(2)}s` + : formatElapsedTime(oauthStartMs), + }), + ); + writeStdoutLine( + t('Exchanged OpenRouter auth code for API key in {{elapsed}}.', { + elapsed: + typeof oauthResult.apiKeyExchangeMs === 'number' + ? 
`${(oauthResult.apiKeyExchangeMs / 1000).toFixed(2)}s` + : formatElapsedTime(oauthStartMs), + }), + ); + writeStdoutLine( + t('OpenRouter OAuth callback completed in {{elapsed}}.', { + elapsed: formatElapsedTime(oauthStartMs), + }), + ); + selectedKey = oauthResult.apiKey; + } + + if (!selectedKey) { + throw new Error( + 'OpenRouter authentication completed without an API key.', + ); + } + + const authTypeScope = getPersistScopeForModelSelection(settings); + const settingsFile = settings.forScope(authTypeScope); + backupSettingsFile(settingsFile.path); + + const modelsStartMs = Date.now(); + await applyOpenRouterModelsConfiguration({ + settings, + config, + apiKey: selectedKey, + reloadConfig: true, + }); + writeStdoutLine( + t('Fetched OpenRouter models in {{elapsed}}.', { + elapsed: formatElapsedTime(modelsStartMs), + }), + ); + + const refreshStartMs = Date.now(); + await config.refreshAuth(AuthType.USE_OPENAI); + writeStdoutLine( + t('Refreshed OpenRouter auth in {{elapsed}}.', { + elapsed: formatElapsedTime(refreshStartMs), + }), + ); + writeStdoutLine( + t('Total OpenRouter setup time: {{elapsed}}.', { + elapsed: formatElapsedTime(authStartMs), + }), + ); + + writeStdoutLine(t('Successfully configured OpenRouter.')); + } catch (error) { + writeStderrLine( + t('Failed to configure OpenRouter: {{error}}', { + error: getErrorMessage(error), + }), + ); + process.exit(1); + } +} + /** * Prompts the user to select a region using an interactive selector */ @@ -305,12 +417,12 @@ async function promptForRegion(): Promise { /** * Prompts the user to enter an API key */ -async function promptForKey(): Promise { +async function promptForAuthKey(prompt: string): Promise { // Create a simple password-style input (without echoing characters) const stdin = process.stdin; const stdout = process.stdout; - stdout.write(t('Enter your Coding Plan API key: ')); + stdout.write(prompt); // Set raw mode to capture keystrokes const wasRaw = stdin.isRaw; @@ -370,6 +482,13 @@ async 
function promptForKey(): Promise { export async function runInteractiveAuth() { const selector = new InteractiveSelector( [ + { + value: 'openrouter' as const, + label: t('OpenRouter'), + description: t( + 'API key setup · OpenAI-compatible provider via OpenRouter', + ), + }, { value: 'coding-plan' as const, label: t('Alibaba Cloud Coding Plan'), @@ -400,6 +519,8 @@ export async function runInteractiveAuth() { if (choice === 'coding-plan') { await handleQwenAuth('coding-plan', {}); + } else if (choice === 'openrouter') { + await handleQwenAuth('openrouter', {}); } } @@ -419,6 +540,9 @@ export async function showAuthStatus(): Promise { if (!selectedType) { writeStdoutLine(t('⚠️ No authentication method configured.\n')); writeStdoutLine(t('Run one of the following commands to get started:\n')); + writeStdoutLine( + t(' qwen auth openrouter - Configure OpenRouter API key'), + ); writeStdoutLine( t( ' qwen auth qwen-oauth - Authenticate with Qwen OAuth (free tier)', @@ -446,54 +570,83 @@ export async function showAuthStatus(): Promise { t('\n ⚠ Run /auth to switch to Coding Plan or another provider.\n'), ); } else if (selectedType === AuthType.USE_OPENAI) { - // Check for Coding Plan configuration const codingPlanRegion = mergedSettings.codingPlan?.region; const codingPlanVersion = mergedSettings.codingPlan?.version; const modelName = mergedSettings.model?.name; + const openAiProviders = + mergedSettings.modelProviders?.[AuthType.USE_OPENAI] || []; + const hasOpenRouterConfig = openAiProviders.some(isOpenRouterConfig); + const hasOpenRouterApiKey = + !!process.env[OPENROUTER_ENV_KEY] || + !!mergedSettings.env?.[OPENROUTER_ENV_KEY]; - // Check if API key is set in environment - const hasApiKey = - !!process.env[CODING_PLAN_ENV_KEY] || - !!mergedSettings.env?.[CODING_PLAN_ENV_KEY]; - - if (hasApiKey) { - writeStdoutLine( - t('✓ Authentication Method: Alibaba Cloud Coding Plan'), - ); - - if (codingPlanRegion) { - const regionDisplay = - codingPlanRegion === 
CodingPlanRegion.CHINA - ? t('中国 (China) - 阿里云百炼') - : t('Global - Alibaba Cloud'); - writeStdoutLine(t(' Region: {{region}}', { region: regionDisplay })); - } - - if (modelName) { + if (hasOpenRouterConfig) { + if (hasOpenRouterApiKey) { + writeStdoutLine(t('✓ Authentication Method: OpenRouter')); + if (modelName) { + writeStdoutLine( + t(' Current Model: {{model}}', { model: modelName }), + ); + } + writeStdoutLine(t(' Status: API key configured\n')); + } else { writeStdoutLine( - t(' Current Model: {{model}}', { model: modelName }), + t('⚠️ Authentication Method: OpenRouter (Incomplete)'), ); - } - - if (codingPlanVersion) { writeStdoutLine( - t(' Config Version: {{version}}', { - version: codingPlanVersion.substring(0, 8) + '...', - }), + t(' Issue: API key not found in environment or settings\n'), ); + writeStdoutLine(t(' Run `qwen auth openrouter` to re-configure.\n')); } - - writeStdoutLine(t(' Status: API key configured\n')); } else { - writeStdoutLine( - t( - '⚠️ Authentication Method: Alibaba Cloud Coding Plan (Incomplete)', - ), - ); - writeStdoutLine( - t(' Issue: API key not found in environment or settings\n'), - ); - writeStdoutLine(t(' Run `qwen auth coding-plan` to re-configure.\n')); + // Check for Coding Plan configuration + const hasApiKey = + !!process.env[CODING_PLAN_ENV_KEY] || + !!mergedSettings.env?.[CODING_PLAN_ENV_KEY]; + + if (hasApiKey) { + writeStdoutLine( + t('✓ Authentication Method: Alibaba Cloud Coding Plan'), + ); + + if (codingPlanRegion) { + const regionDisplay = + codingPlanRegion === CodingPlanRegion.CHINA + ? 
t('中国 (China) - 阿里云百炼') + : t('Global - Alibaba Cloud'); + writeStdoutLine( + t(' Region: {{region}}', { region: regionDisplay }), + ); + } + + if (modelName) { + writeStdoutLine( + t(' Current Model: {{model}}', { model: modelName }), + ); + } + + if (codingPlanVersion) { + writeStdoutLine( + t(' Config Version: {{version}}', { + version: codingPlanVersion.substring(0, 8) + '...', + }), + ); + } + + writeStdoutLine(t(' Status: API key configured\n')); + } else { + writeStdoutLine( + t( + '⚠️ Authentication Method: Alibaba Cloud Coding Plan (Incomplete)', + ), + ); + writeStdoutLine( + t(' Issue: API key not found in environment or settings\n'), + ); + writeStdoutLine( + t(' Run `qwen auth coding-plan` to re-configure.\n'), + ); + } } } else { writeStdoutLine( diff --git a/packages/cli/src/commands/auth/openrouter.test.ts b/packages/cli/src/commands/auth/openrouter.test.ts new file mode 100644 index 000000000..d4aedf05f --- /dev/null +++ b/packages/cli/src/commands/auth/openrouter.test.ts @@ -0,0 +1,304 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; +import { handleQwenAuth } from './handler.js'; +import { AuthType } from '@qwen-code/qwen-code-core'; +import type { LoadedSettings } from '../../config/settings.js'; + +const { + mockRefreshAuth, + mockSetValue, + mockForScope, + mockBackupSettingsFile, + mockLoadCliConfig, +} = vi.hoisted(() => { + const mockRefreshAuth = vi.fn(); + return { + mockRefreshAuth, + mockSetValue: vi.fn(), + mockForScope: vi.fn(() => ({ path: '/user.json' })), + mockBackupSettingsFile: vi.fn(), + mockLoadCliConfig: vi.fn(async () => ({ + refreshAuth: mockRefreshAuth, + })), + }; +}); + +vi.mock('../../config/settings.js', () => ({ + loadSettings: vi.fn(), +})); + +vi.mock('../../config/config.js', () => ({ + loadCliConfig: mockLoadCliConfig, +})); + +vi.mock('../../utils/settingsUtils.js', () => ({ + 
backupSettingsFile: mockBackupSettingsFile, +})); + +vi.mock('../../config/modelProvidersScope.js', () => ({ + getPersistScopeForModelSelection: vi.fn(() => 'user'), +})); + +vi.mock('../../utils/stdioHelpers.js', () => ({ + writeStdoutLine: vi.fn(), + writeStderrLine: vi.fn(), +})); + +vi.mock('./openrouterOAuth.js', () => ({ + OPENROUTER_ENV_KEY: 'OPENROUTER_API_KEY', + OPENROUTER_OAUTH_CALLBACK_URL: 'http://localhost:3000/openrouter/callback', + createOpenRouterOAuthSession: vi.fn(() => ({ + callbackUrl: 'http://localhost:3000/openrouter/callback', + codeVerifier: 'test-verifier', + authorizationUrl: 'https://openrouter.ai/auth?manual=1', + })), + applyOpenRouterModelsConfiguration: vi.fn(async ({ settings, apiKey }) => { + process.env['OPENROUTER_API_KEY'] = apiKey; + settings.setValue('user', 'env.OPENROUTER_API_KEY', apiKey); + settings.setValue( + 'user', + 'security.auth.selectedType', + AuthType.USE_OPENAI, + ); + settings.setValue('user', 'model.name', 'openai/gpt-4o-mini:free'); + settings.setValue('user', `modelProviders.${AuthType.USE_OPENAI}`, [ + { + id: 'openai/gpt-4o-mini:free', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ]); + return { + updatedConfigs: [ + { + id: 'openai/gpt-4o-mini:free', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ], + 
activeModelId: 'openai/gpt-4o-mini:free', + persistScope: 'user', + }; + }), + runOpenRouterOAuthLogin: vi.fn(), +})); + +import { loadSettings } from '../../config/settings.js'; +import { + applyOpenRouterModelsConfiguration, + runOpenRouterOAuthLogin, +} from './openrouterOAuth.js'; + +describe('handleQwenAuth openrouter', () => { + beforeEach(() => { + vi.clearAllMocks(); + vi.spyOn(process, 'exit').mockImplementation((() => undefined) as never); + delete process.env['OPENROUTER_API_KEY']; + }); + + afterEach(() => { + vi.restoreAllMocks(); + delete process.env['OPENROUTER_API_KEY']; + }); + + const createMockSettings = ( + merged: Record, + ): LoadedSettings => + ({ + merged, + system: { settings: {}, path: '/system.json' }, + systemDefaults: { settings: {}, path: '/system-defaults.json' }, + user: { settings: {}, path: '/user.json' }, + workspace: { settings: {}, path: '/workspace.json' }, + forScope: mockForScope, + setValue: mockSetValue, + getUserHooks: vi.fn(() => []), + getProjectHooks: vi.fn(() => []), + isTrusted: true, + }) as unknown as LoadedSettings; + + it('stores OpenRouter key and model provider config', async () => { + vi.mocked(loadSettings).mockReturnValue( + createMockSettings({ + modelProviders: { + [AuthType.USE_OPENAI]: [ + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ], + }, + }), + ); + + await handleQwenAuth('openrouter', { key: 'or-key-123' }); + + expect(mockBackupSettingsFile).toHaveBeenCalledWith('/user.json'); + expect(mockSetValue).toHaveBeenCalledWith( + 'user', + 'env.OPENROUTER_API_KEY', + 'or-key-123', + ); + expect(mockSetValue).toHaveBeenCalledWith( + 'user', + 'security.auth.selectedType', + AuthType.USE_OPENAI, + ); + expect(mockSetValue).toHaveBeenCalledWith( + 'user', + 'model.name', + 'openai/gpt-4o-mini:free', + ); + + const modelProvidersCall = mockSetValue.mock.calls.find( + (call) => call[1] === `modelProviders.${AuthType.USE_OPENAI}`, + ); + 
expect(modelProvidersCall).toBeDefined(); + expect(modelProvidersCall?.[2]).toEqual([ + { + id: 'openai/gpt-4o-mini:free', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ]); + expect(applyOpenRouterModelsConfiguration).toHaveBeenCalledWith( + expect.objectContaining({ + settings: expect.anything(), + config: expect.anything(), + apiKey: 'or-key-123', + reloadConfig: true, + }), + ); + expect(mockRefreshAuth).toHaveBeenCalledWith(AuthType.USE_OPENAI); + expect(process.env['OPENROUTER_API_KEY']).toBe('or-key-123'); + }); + + it('replaces existing OpenRouter configs instead of duplicating them', async () => { + vi.mocked(loadSettings).mockReturnValue( + createMockSettings({ + modelProviders: { + [AuthType.USE_OPENAI]: [ + { + id: 'old/model', + name: 'Old OpenRouter Model', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ], + }, + }), + ); + + await handleQwenAuth('openrouter', { key: 'or-key-456' }); + + const modelProvidersCall = mockSetValue.mock.calls.find( + (call) => call[1] === `modelProviders.${AuthType.USE_OPENAI}`, + ); + expect(modelProvidersCall?.[2]).toEqual([ + { + id: 'openai/gpt-4o-mini:free', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + 
envKey: 'OPENAI_API_KEY', + }, + ]); + }); + + it('uses OAuth flow when key is not provided', async () => { + vi.mocked(loadSettings).mockReturnValue(createMockSettings({})); + vi.mocked(runOpenRouterOAuthLogin).mockResolvedValue({ + apiKey: 'oauth-key-123', + userId: 'user-1', + authorizationUrl: 'https://openrouter.ai/auth?manual=1', + }); + + await handleQwenAuth('openrouter', {}); + + expect(runOpenRouterOAuthLogin).toHaveBeenCalledTimes(1); + expect(mockSetValue).toHaveBeenCalledWith( + 'user', + 'env.OPENROUTER_API_KEY', + 'oauth-key-123', + ); + expect(process.env['OPENROUTER_API_KEY']).toBe('oauth-key-123'); + }); + + it('delegates OpenRouter provider updates to the shared configuration helper', async () => { + vi.mocked(loadSettings).mockReturnValue(createMockSettings({})); + + await handleQwenAuth('openrouter', { key: 'or-key-dynamic' }); + + expect(applyOpenRouterModelsConfiguration).toHaveBeenCalledWith( + expect.objectContaining({ + settings: expect.anything(), + config: expect.anything(), + apiKey: 'or-key-dynamic', + reloadConfig: true, + }), + ); + }); +}); diff --git a/packages/cli/src/commands/auth/openrouterOAuth.test.ts b/packages/cli/src/commands/auth/openrouterOAuth.test.ts new file mode 100644 index 000000000..81fe89d77 --- /dev/null +++ b/packages/cli/src/commands/auth/openrouterOAuth.test.ts @@ -0,0 +1,730 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; +import { AuthType, type Config } from '@qwen-code/qwen-code-core'; +import type { LoadedSettings } from '../../config/settings.js'; +import { + buildOpenRouterAuthorizationUrl, + createOpenRouterOAuthSession, + createOAuthState, + createPkcePair, + exchangeAuthCodeForApiKey, + fetchOpenRouterModels, + getOpenRouterModelsWithFallback, + getPreferredOpenRouterModelId, + mergeOpenRouterConfigs, + OPENROUTER_DEFAULT_MODELS, + OPENROUTER_MODELS_URL, + 
OPENROUTER_OAUTH_AUTHORIZE_URL, + OPENROUTER_OAUTH_EXCHANGE_URL, + runOpenRouterOAuthLogin, + selectRecommendedOpenRouterModels, + startOAuthCallbackListener, + applyOpenRouterModelsConfiguration, +} from './openrouterOAuth.js'; +import { request } from 'node:http'; + +describe('openrouterOAuth', () => { + beforeEach(() => { + vi.unstubAllGlobals(); + }); + + afterEach(() => { + vi.restoreAllMocks(); + vi.unstubAllGlobals(); + }); + + it('creates a valid PKCE pair', () => { + const pkce = createPkcePair(); + + expect(pkce.codeVerifier).toMatch(/^[A-Za-z0-9\-_]+$/); + expect(pkce.codeChallenge).toMatch(/^[A-Za-z0-9\-_]+$/); + expect(pkce.codeVerifier.length).toBeGreaterThan(20); + expect(pkce.codeChallenge.length).toBeGreaterThan(20); + }); + + it('builds OpenRouter authorization URL with required params', () => { + const url = buildOpenRouterAuthorizationUrl({ + callbackUrl: 'http://localhost:3000/openrouter/callback', + codeChallenge: 'challenge123', + state: 'state-123', + codeChallengeMethod: 'S256', + limit: 100, + }); + + const parsed = new URL(url); + expect(parsed.origin + parsed.pathname).toBe( + OPENROUTER_OAUTH_AUTHORIZE_URL, + ); + expect(parsed.searchParams.get('callback_url')).toBe( + 'http://localhost:3000/openrouter/callback', + ); + expect(parsed.searchParams.get('code_challenge')).toBe('challenge123'); + expect(parsed.searchParams.get('state')).toBe('state-123'); + expect(parsed.searchParams.get('code_challenge_method')).toBe('S256'); + expect(parsed.searchParams.get('limit')).toBe('100'); + }); + + it('creates a random OAuth state token', () => { + const state = createOAuthState(); + + expect(state).toMatch(/^[A-Za-z0-9\-_]+$/); + expect(state.length).toBeGreaterThan(20); + }); + + it('exchanges auth code for API key', async () => { + const fetchMock = vi.fn(async () => ({ + ok: true, + json: async () => ({ + key: 'or-key-123', + user_id: 'user-1', + }), + })); + vi.stubGlobal('fetch', fetchMock); + + const result = await 
exchangeAuthCodeForApiKey({ + code: 'auth-code-123', + codeVerifier: 'verifier-123', + }); + + expect(fetchMock).toHaveBeenCalledWith( + OPENROUTER_OAUTH_EXCHANGE_URL, + expect.objectContaining({ + method: 'POST', + headers: expect.objectContaining({ + Accept: 'application/json', + 'Content-Type': 'application/json', + }), + }), + ); + expect(result).toEqual({ + apiKey: 'or-key-123', + userId: 'user-1', + }); + expect(typeof result.apiKey).toBe('string'); + }); + + it('throws when exchange response does not contain key', async () => { + vi.stubGlobal( + 'fetch', + vi.fn(async () => ({ + ok: true, + json: async () => ({}), + })), + ); + + await expect( + exchangeAuthCodeForApiKey({ + code: 'auth-code-123', + codeVerifier: 'verifier-123', + }), + ).rejects.toThrow('no key was returned'); + }); + + it('resolves callback code without waiting for server close completion', async () => { + const listener = startOAuthCallbackListener( + 'http://localhost:3100/openrouter/callback', + 5000, + 'state-123', + ); + await listener.ready; + + const codePromise = listener.waitForCode; + await new Promise((resolve, reject) => { + const req = request( + 'http://localhost:3100/openrouter/callback?code=fast-code-123&state=state-123', + (res) => { + res.resume(); + res.on('end', resolve); + }, + ); + req.on('error', reject); + req.end(); + }); + + await expect(codePromise).resolves.toBe('fast-code-123'); + }); + + it('rejects callback codes with mismatched OAuth state', async () => { + const listener = startOAuthCallbackListener( + 'http://localhost:3101/openrouter/callback', + 5000, + 'expected-state', + ); + await listener.ready; + + const codePromise = listener.waitForCode.catch((error: unknown) => error); + await new Promise((resolve, reject) => { + const req = request( + 'http://localhost:3101/openrouter/callback?code=fast-code-123&state=wrong-state', + (res) => { + expect(res.statusCode).toBe(400); + res.resume(); + res.on('end', resolve); + }, + ); + req.on('error', reject); + 
req.end(); + }); + + await expect(codePromise).resolves.toEqual( + expect.objectContaining({ + message: expect.stringContaining('Invalid OAuth state'), + }), + ); + }, 15_000); + + it('creates a reusable OAuth session for manual fallback links', () => { + const session = createOpenRouterOAuthSession( + 'http://localhost:3000/openrouter/callback', + { + codeVerifier: 'verifier-123', + codeChallenge: 'challenge-123', + }, + 'state-123', + ); + + expect(session).toEqual({ + callbackUrl: 'http://localhost:3000/openrouter/callback', + codeVerifier: 'verifier-123', + state: 'state-123', + authorizationUrl: expect.stringContaining('code_challenge=challenge-123'), + }); + expect(session.authorizationUrl).toContain('state=state-123'); + }); + + it('returns OAuth result without waiting for slow listener close', async () => { + let resolveClose!: () => void; + const listener = { + ready: Promise.resolve(), + waitForCode: Promise.resolve('auth-code-123'), + close: vi.fn( + () => + new Promise((resolve) => { + resolveClose = resolve; + }), + ), + }; + const openBrowser = vi.fn(async () => ({}) as never); + const exchangeApiKey = vi.fn(async () => ({ + apiKey: 'or-key-123', + userId: 'user-1', + })); + const resultPromise = runOpenRouterOAuthLogin( + 'http://localhost:3000/openrouter/callback', + { + openBrowser, + startListener: vi.fn(() => listener), + exchangeApiKey, + now: () => 1000, + }, + ); + + await expect(resultPromise).resolves.toMatchObject({ + apiKey: 'or-key-123', + userId: 'user-1', + authorizationUrl: expect.stringContaining('https://openrouter.ai/auth'), + }); + expect(listener.close).toHaveBeenCalled(); + resolveClose(); + }); + + it('passes the session state to the OAuth callback listener', async () => { + const listener = { + ready: Promise.resolve(), + waitForCode: Promise.resolve('auth-code-123'), + close: vi.fn(async () => undefined), + }; + const openBrowser = vi.fn(async () => ({}) as never); + const startListener = vi.fn(() => listener); + const 
exchangeApiKey = vi.fn(async () => ({ + apiKey: 'or-key-123', + userId: 'user-1', + })); + + await runOpenRouterOAuthLogin('http://localhost:3000/openrouter/callback', { + openBrowser, + startListener, + exchangeApiKey, + session: { + callbackUrl: 'http://localhost:3000/openrouter/callback', + codeVerifier: 'verifier-123', + state: 'state-123', + authorizationUrl: 'https://openrouter.ai/auth?state=state-123', + }, + }); + + expect(startListener).toHaveBeenCalledWith( + 'http://localhost:3000/openrouter/callback', + expect.any(Number), + 'state-123', + ); + }); + + it('records wait and exchange timings during OAuth login', async () => { + const listener = { + ready: Promise.resolve(), + waitForCode: Promise.resolve('auth-code-123'), + close: vi.fn(async () => undefined), + }; + const openBrowser = vi.fn(async () => ({}) as never); + const exchangeApiKey = vi.fn(async () => ({ + apiKey: 'or-key-123', + userId: 'user-1', + })); + const now = vi + .fn<() => number>() + .mockReturnValueOnce(1000) + .mockReturnValueOnce(2200) + .mockReturnValueOnce(3000) + .mockReturnValueOnce(3450); + + const result = await runOpenRouterOAuthLogin( + 'http://localhost:3000/openrouter/callback', + { + openBrowser, + startListener: () => listener, + exchangeApiKey, + now, + }, + ); + + expect(openBrowser).toHaveBeenCalledWith( + expect.stringContaining('https://openrouter.ai/auth'), + ); + expect(exchangeApiKey).toHaveBeenCalledWith({ + code: 'auth-code-123', + codeVerifier: expect.any(String), + }); + expect(result).toEqual({ + apiKey: 'or-key-123', + userId: 'user-1', + authorizationUrl: expect.stringContaining('https://openrouter.ai/auth'), + authorizationCodeWaitMs: 1200, + apiKeyExchangeMs: 450, + }); + expect(listener.close).toHaveBeenCalled(); + }); + + it('allows cancelling OAuth wait with process signals after opening the browser', async () => { + let sigintHandler: ((signal: NodeJS.Signals) => void) | undefined; + let sigtermHandler: ((signal: NodeJS.Signals) => void) | 
undefined; + const signalTarget = { + once: vi.fn( + ( + event: 'SIGINT' | 'SIGTERM', + handler: (signal: NodeJS.Signals) => void, + ) => { + if (event === 'SIGINT') { + sigintHandler = handler; + } else { + sigtermHandler = handler; + } + }, + ), + removeListener: vi.fn( + ( + _event: 'SIGINT' | 'SIGTERM', + _handler: (signal: NodeJS.Signals) => void, + ) => undefined, + ), + }; + const listener = { + ready: Promise.resolve(), + waitForCode: new Promise(() => undefined), + close: vi.fn(async () => undefined), + }; + const openBrowser = vi.fn(async () => ({}) as never); + const exchangeApiKey = vi.fn(); + + const resultPromise = runOpenRouterOAuthLogin( + 'http://localhost:3000/openrouter/callback', + { + openBrowser, + startListener: () => listener, + exchangeApiKey, + signalTarget, + }, + ); + + await vi.waitFor(() => { + expect(openBrowser).toHaveBeenCalledTimes(1); + expect(sigintHandler).toBeTypeOf('function'); + expect(sigtermHandler).toBeTypeOf('function'); + }); + + sigintHandler?.('SIGINT'); + + await expect(resultPromise).rejects.toThrow( + 'OpenRouter OAuth cancelled by user (SIGINT) while waiting for browser authorization.', + ); + expect(exchangeApiKey).not.toHaveBeenCalled(); + expect(listener.close).toHaveBeenCalled(); + expect(signalTarget.removeListener).toHaveBeenCalledWith( + 'SIGINT', + sigintHandler, + ); + expect(signalTarget.removeListener).toHaveBeenCalledWith( + 'SIGTERM', + sigtermHandler, + ); + }); + + it('allows cancelling OAuth wait with an abort signal', async () => { + const abortController = new AbortController(); + const listener = { + ready: Promise.resolve(), + waitForCode: new Promise(() => undefined), + close: vi.fn(async () => undefined), + }; + const openBrowser = vi.fn(async () => ({}) as never); + const exchangeApiKey = vi.fn(); + + const resultPromise = runOpenRouterOAuthLogin( + 'http://localhost:3000/openrouter/callback', + { + openBrowser, + startListener: () => listener, + exchangeApiKey, + abortSignal: 
abortController.signal, + }, + ); + + await vi.waitFor(() => { + expect(openBrowser).toHaveBeenCalledTimes(1); + }); + + abortController.abort(); + + await expect(resultPromise).rejects.toMatchObject({ + name: 'AbortError', + message: 'OpenRouter OAuth cancelled.', + }); + expect(exchangeApiKey).not.toHaveBeenCalled(); + expect(listener.close).toHaveBeenCalled(); + }); + + it('fetches dynamic OpenRouter text models with free-first ordering', async () => { + const fetchMock = vi.fn(async () => ({ + ok: true, + json: async () => ({ + data: [ + { + id: 'openai/gpt-5-mini', + name: 'GPT-5 Mini', + context_length: 128000, + architecture: { + input_modalities: ['text', 'image'], + output_modalities: ['text'], + }, + pricing: { + prompt: '0.000001', + completion: '0.000003', + }, + }, + { + id: 'minimax/minimax-m1', + name: 'MiniMax M1', + architecture: { + input_modalities: ['text'], + output_modalities: ['text'], + }, + pricing: { + prompt: '0', + completion: '0', + }, + }, + { + id: 'qwen/qwen3-coder:free', + name: 'Qwen3 Coder', + architecture: { + input_modalities: ['text'], + output_modalities: ['text'], + }, + pricing: { + prompt: '0', + completion: '0', + }, + }, + { + id: 'zhipu/glm-4.5', + name: 'GLM 4.5', + architecture: { + input_modalities: ['text'], + output_modalities: ['text'], + }, + pricing: { + prompt: '0.000002', + completion: '0.000004', + }, + }, + { + id: 'black-forest-labs/flux', + name: 'Flux', + architecture: { + input_modalities: ['text'], + output_modalities: ['image'], + }, + }, + ], + }), + })); + vi.stubGlobal('fetch', fetchMock); + + const models = await fetchOpenRouterModels(); + + expect(fetchMock).toHaveBeenCalledWith( + OPENROUTER_MODELS_URL, + expect.objectContaining({ method: 'GET' }), + ); + expect(models).toEqual([ + { + id: 'qwen/qwen3-coder:free', + name: 'OpenRouter · Qwen3 Coder', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'minimax/minimax-m1', + name: 'OpenRouter · MiniMax M1', + 
baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'openai/gpt-5-mini', + name: 'OpenRouter · GPT-5 Mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + capabilities: { vision: true }, + generationConfig: { contextWindowSize: 128000 }, + }, + { + id: 'zhipu/glm-4.5', + name: 'OpenRouter · GLM 4.5', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ]); + }); + + it('selects a recommended OpenRouter subset instead of returning the full catalog', () => { + const recommended = selectRecommendedOpenRouterModels( + [ + { + id: 'qwen/qwen3-coder:free', + name: 'OpenRouter · Qwen3 Coder', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'qwen/qwen3-max', + name: 'OpenRouter · Qwen3 Max', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'glm/glm-4.5-air:free', + name: 'OpenRouter · GLM 4.5 Air', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'minimax/minimax-m1', + name: 'OpenRouter · MiniMax M1', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'google/gemini-2.5-flash', + name: 'OpenRouter · Gemini 2.5 Flash', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'openai/gpt-5-mini', + name: 'OpenRouter · GPT-5 Mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + capabilities: { vision: true }, + }, + { + id: 'deepseek/deepseek-r1', + name: 'OpenRouter · DeepSeek R1', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + generationConfig: { contextWindowSize: 1048576 }, + }, + { + id: 'meta/llama-3.3-70b', + name: 'OpenRouter · Llama 3.3 70B', + baseUrl: 
'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ], + 6, + ); + + expect(recommended.map((model) => model.id)).toEqual([ + 'qwen/qwen3-coder:free', + 'glm/glm-4.5-air:free', + 'qwen/qwen3-max', + 'minimax/minimax-m1', + 'anthropic/claude-3.7-sonnet', + 'google/gemini-2.5-flash', + ]); + }); + + it('applies OpenRouter configuration to settings and reloads providers', async () => { + const settings = { + merged: { + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'custom/model', baseUrl: 'https://example.com/v1' }, + ], + }, + }, + user: { settings: { modelProviders: {} }, path: '/user.json' }, + workspace: { settings: {}, path: '/workspace.json' }, + system: { settings: {}, path: '/system.json' }, + systemDefaults: { settings: {}, path: '/system-defaults.json' }, + setValue: vi.fn(), + forScope: vi.fn(), + } as unknown as LoadedSettings; + const config = { + reloadModelProvidersConfig: vi.fn(), + } as unknown as Config; + const fetchSpy = vi + .spyOn( + await import('./openrouterOAuth.js'), + 'getOpenRouterModelsWithFallback', + ) + .mockResolvedValue([ + { + id: 'openai/gpt-4o-mini', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ]); + + const result = await applyOpenRouterModelsConfiguration({ + settings, + config, + apiKey: 'or-key-123', + reloadConfig: true, + }); + + expect(settings.setValue).toHaveBeenCalledWith( + expect.anything(), + 'env.OPENROUTER_API_KEY', + 'or-key-123', + ); + + const modelProvidersCall = vi + .mocked(settings.setValue) + .mock.calls.find( + (call) => call[1] === `modelProviders.${AuthType.USE_OPENAI}`, + ); + expect(modelProvidersCall).toBeDefined(); + expect(modelProvidersCall?.[2]).toEqual( + expect.arrayContaining([ + expect.objectContaining({ + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }), + expect.objectContaining({ + id: 'custom/model', + baseUrl: 'https://example.com/v1', + }), + ]), + ); + + 
expect(config.reloadModelProvidersConfig).toHaveBeenCalled(); + expect(result.activeModelId).toBeDefined(); + fetchSpy.mockRestore(); + }); + + it('prefers the default OpenRouter model when it remains enabled', () => { + expect( + getPreferredOpenRouterModelId([ + { id: 'anthropic/claude-3.7-sonnet' }, + { id: 'openai/gpt-4o-mini' }, + ] as never), + ).toBe('openai/gpt-4o-mini'); + }); + + it('falls back to the first enabled OpenRouter model when the default is unavailable', () => { + expect( + getPreferredOpenRouterModelId([ + { id: 'anthropic/claude-3.7-sonnet' }, + ] as never), + ).toBe('anthropic/claude-3.7-sonnet'); + }); + + it('falls back to default models when dynamic fetch fails', async () => { + vi.stubGlobal( + 'fetch', + vi.fn(async () => ({ + ok: false, + status: 500, + text: async () => 'server error', + })), + ); + + await expect(getOpenRouterModelsWithFallback()).resolves.toEqual( + OPENROUTER_DEFAULT_MODELS, + ); + }); + + it('replaces only existing OpenRouter configs when merging dynamic models', () => { + const merged = mergeOpenRouterConfigs( + [ + { + id: 'old/model', + name: 'Old OpenRouter Model', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ], + [ + { + id: 'openai/gpt-5-mini', + name: 'OpenRouter · GPT-5 Mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ], + ); + + expect(merged).toEqual([ + { + id: 'openai/gpt-5-mini', + name: 'OpenRouter · GPT-5 Mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + { + id: 'gpt-4.1', + name: 'OpenAI GPT-4.1', + baseUrl: 'https://api.openai.com/v1', + envKey: 'OPENAI_API_KEY', + }, + ]); + }); +}); diff --git a/packages/cli/src/commands/auth/openrouterOAuth.ts b/packages/cli/src/commands/auth/openrouterOAuth.ts new file mode 100644 index 000000000..5d36da75b --- /dev/null +++ 
b/packages/cli/src/commands/auth/openrouterOAuth.ts @@ -0,0 +1,751 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { createServer, type Server } from 'node:http'; +import { createHash, randomBytes } from 'node:crypto'; +import open from 'open'; + +import { + AuthType, + type Config, + type ModelProvidersConfig, + type ProviderModelConfig as ModelConfig, +} from '@qwen-code/qwen-code-core'; +import type { LoadedSettings } from '../../config/settings.js'; +import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js'; + +export const OPENROUTER_ENV_KEY = 'OPENROUTER_API_KEY'; +export const OPENROUTER_DEFAULT_MODEL = 'openai/gpt-4o-mini'; +export const OPENROUTER_BASE_URL = 'https://openrouter.ai/api/v1'; +export const OPENROUTER_OAUTH_AUTHORIZE_URL = 'https://openrouter.ai/auth'; +export const OPENROUTER_OAUTH_EXCHANGE_URL = + 'https://openrouter.ai/api/v1/auth/keys'; +export const OPENROUTER_MODELS_URL = 'https://openrouter.ai/api/v1/models'; +export const OPENROUTER_OAUTH_CALLBACK_URL = + 'http://localhost:3000/openrouter/callback'; +const OPENROUTER_CODE_CHALLENGE_METHOD = 'S256'; +const OPENROUTER_OAUTH_TIMEOUT_MS = 5 * 60 * 1000; +const OPENROUTER_MINIMUM_TEXT_MODELS = 1; + +export const OPENROUTER_DEFAULT_MODELS: ModelConfig[] = [ + { + id: 'openai/gpt-4o-mini', + name: 'OpenRouter · GPT-4o mini', + baseUrl: OPENROUTER_BASE_URL, + envKey: OPENROUTER_ENV_KEY, + }, + { + id: 'anthropic/claude-3.7-sonnet', + name: 'OpenRouter · Claude 3.7 Sonnet', + baseUrl: OPENROUTER_BASE_URL, + envKey: OPENROUTER_ENV_KEY, + }, + { + id: 'google/gemini-2.5-flash', + name: 'OpenRouter · Gemini 2.5 Flash', + baseUrl: OPENROUTER_BASE_URL, + envKey: OPENROUTER_ENV_KEY, + }, +]; + +export interface OpenRouterOAuthResult { + apiKey: string; + userId?: string; + authorizationUrl?: string; + authorizationCodeWaitMs?: number; + apiKeyExchangeMs?: number; +} + +export interface PkcePair { + codeVerifier: 
string; + codeChallenge: string; +} + +export interface OpenRouterOAuthSession { + callbackUrl: string; + codeVerifier: string; + state: string; + authorizationUrl: string; +} + +export interface OAuthCallbackListener { + ready: Promise<void>; + waitForCode: Promise<string>; + close: () => Promise<void>; +} + +interface OpenRouterModelApiRecord { + id?: string; + name?: string; + description?: string; + context_length?: number; + architecture?: { + input_modalities?: string[]; + output_modalities?: string[]; + }; + pricing?: { + prompt?: string; + completion?: string; + }; +} + +function toBase64Url(input: Buffer): string { + return input + .toString('base64') + .replace(/\+/g, '-') + .replace(/\//g, '_') + .replace(/=+$/g, ''); +} + +export function createPkcePair(): PkcePair { + const codeVerifier = toBase64Url(randomBytes(32)); + const codeChallenge = toBase64Url( + createHash('sha256').update(codeVerifier).digest(), + ); + return { codeVerifier, codeChallenge }; +} + +export function buildOpenRouterAuthorizationUrl(params: { + callbackUrl: string; + codeChallenge: string; + state: string; + codeChallengeMethod?: 'S256'; + limit?: number; +}): string { + const url = new URL(OPENROUTER_OAUTH_AUTHORIZE_URL); + url.searchParams.set('callback_url', params.callbackUrl); + url.searchParams.set('code_challenge', params.codeChallenge); + url.searchParams.set('state', params.state); + url.searchParams.set( + 'code_challenge_method', + params.codeChallengeMethod || OPENROUTER_CODE_CHALLENGE_METHOD, + ); + if (typeof params.limit === 'number') { + url.searchParams.set('limit', String(params.limit)); + } + return url.toString(); +} + +export function createOAuthState(): string { + return toBase64Url(randomBytes(32)); +} + +export function createOpenRouterOAuthSession( + callbackUrl = OPENROUTER_OAUTH_CALLBACK_URL, + pkcePair = createPkcePair(), + state = createOAuthState(), +): OpenRouterOAuthSession { + return { + callbackUrl, + codeVerifier: pkcePair.codeVerifier, + state, +
authorizationUrl: buildOpenRouterAuthorizationUrl({ + callbackUrl, + codeChallenge: pkcePair.codeChallenge, + state, + codeChallengeMethod: OPENROUTER_CODE_CHALLENGE_METHOD, + }), + }; +} + +export function startOAuthCallbackListener( + callbackUrl = OPENROUTER_OAUTH_CALLBACK_URL, + timeoutMs = OPENROUTER_OAUTH_TIMEOUT_MS, + expectedState?: string, +): OAuthCallbackListener { + const parsedUrl = new URL(callbackUrl); + if (parsedUrl.protocol !== 'http:') { + throw new Error( + 'Only http localhost callback URLs are currently supported.', + ); + } + + let server: Server | undefined; + let timeout: NodeJS.Timeout | undefined; + let settled = false; + + const close = async () => { + if (timeout) { + clearTimeout(timeout); + timeout = undefined; + } + if (!server) { + return; + } + await new Promise<void>((resolve, reject) => { + server!.close((error) => { + if (error) { + reject(error); + return; + } + resolve(); + }); + }); + server = undefined; + }; + + let resolveReady!: () => void; + let rejectReady!: (error: Error) => void; + const ready = new Promise<void>((resolve, reject) => { + resolveReady = resolve; + rejectReady = reject; + }); + + let resolveCode!: (code: string) => void; + let rejectCode!: (error: Error) => void; + const waitForCode = new Promise<string>((resolve, reject) => { + resolveCode = resolve; + rejectCode = reject; + }); + + const finish = (action: 'resolve' | 'reject', payload: string | Error) => { + if (settled) { + return; + } + settled = true; + + if (action === 'resolve') { + resolveCode(payload as string); + } else { + rejectCode(payload as Error); + } + + void close().catch(() => undefined); + }; + + server = createServer((req, res) => { + const requestUrl = new URL(req.url || '/', parsedUrl.origin); + if (requestUrl.pathname !== parsedUrl.pathname) { + res.statusCode = 404; + res.end('Not found'); + return; + } + + const error = requestUrl.searchParams.get('error'); + if (error) { + res.statusCode = 400; + res.setHeader('Content-Type', 'text/plain; 
charset=utf-8'); + res.end(`OpenRouter authorization failed: ${error}`); + void finish( + 'reject', + new Error(`OpenRouter authorization failed: ${error}`), + ); + return; + } + + const callbackState = requestUrl.searchParams.get('state'); + if (expectedState && callbackState !== expectedState) { + res.statusCode = 400; + res.setHeader('Content-Type', 'text/plain; charset=utf-8'); + res.end('Invalid OAuth state.'); + void finish( + 'reject', + new Error('Invalid OAuth state from OpenRouter callback.'), + ); + return; + } + + const code = requestUrl.searchParams.get('code'); + if (!code) { + res.statusCode = 400; + res.setHeader('Content-Type', 'text/plain; charset=utf-8'); + res.end('Missing authorization code.'); + void finish( + 'reject', + new Error('Missing authorization code from OpenRouter callback.'), + ); + return; + } + + res.statusCode = 200; + res.setHeader('Content-Type', 'text/html; charset=utf-8'); + res.end( + '

<html><body><p>OpenRouter authentication complete.</p><p>You can return to Qwen Code.</p></body></html>

', + ); + void finish('resolve', code); + }); + + server.once('error', (error) => { + rejectReady(error instanceof Error ? error : new Error(String(error))); + void finish( + 'reject', + error instanceof Error ? error : new Error(String(error)), + ); + }); + + const port = parsedUrl.port ? Number(parsedUrl.port) : 80; + server.listen(port, parsedUrl.hostname, () => { + resolveReady(); + }); + + timeout = setTimeout(() => { + void finish( + 'reject', + new Error('Timed out waiting for OpenRouter OAuth callback.'), + ); + }, timeoutMs); + + return { + ready, + waitForCode, + close, + }; +} + +function buildOpenRouterHeaders() { + return { + Accept: 'application/json', + 'Content-Type': 'application/json', + 'HTTP-Referer': 'https://github.com/QwenLM/qwen-code.git', + 'X-OpenRouter-Title': 'Qwen Code', + }; +} + +const OPENROUTER_MODEL_PRIORITY_PREFIXES = ['qwen/', 'glm/', 'minimax/']; +const OPENROUTER_RECOMMENDED_MODEL_LIMIT = 16; +const OPENROUTER_FREE_MODEL_ID_HINT = ':free'; + +export function getPreferredOpenRouterModelId( + models: ModelConfig[], +): string | undefined { + return ( + models.find((model) => model.id === OPENROUTER_DEFAULT_MODEL)?.id || + models[0]?.id + ); +} + +function isOpenRouterFreeModelId(modelId: string): boolean { + const normalizedId = modelId.toLowerCase(); + return ( + normalizedId.includes(OPENROUTER_FREE_MODEL_ID_HINT) || + normalizedId === 'openrouter/free' + ); +} + +function getOpenRouterModelPriority(modelId: string): number { + const normalizedId = modelId.toLowerCase(); + const matchedIndex = OPENROUTER_MODEL_PRIORITY_PREFIXES.findIndex((prefix) => + normalizedId.startsWith(prefix), + ); + return matchedIndex === -1 + ? 
OPENROUTER_MODEL_PRIORITY_PREFIXES.length + : matchedIndex; +} + +function isOpenRouterFreeConfig(model: ModelConfig): boolean { + return isOpenRouterFreeModelId(model.id); +} + +function compareOpenRouterModels(a: ModelConfig, b: ModelConfig): number { + const freeDiff = + Number(isOpenRouterFreeConfig(b)) - Number(isOpenRouterFreeConfig(a)); + if (freeDiff !== 0) { + return freeDiff; + } + + const priorityDiff = + getOpenRouterModelPriority(a.id) - getOpenRouterModelPriority(b.id); + if (priorityDiff !== 0) { + return priorityDiff; + } + + return a.id.localeCompare(b.id); +} + +function toOpenRouterModelConfig( + model: OpenRouterModelApiRecord, +): ModelConfig | null { + if (!model.id) { + return null; + } + + const outputModalities = model.architecture?.output_modalities || []; + const supportsTextOutput = outputModalities.length + ? outputModalities.includes('text') + : true; + + if (!supportsTextOutput) { + return null; + } + + const inputModalities = model.architecture?.input_modalities || []; + const supportsVision = inputModalities.includes('image'); + + return { + id: model.id, + name: model.name + ? `OpenRouter · ${model.name}` + : `OpenRouter · ${model.id}`, + baseUrl: OPENROUTER_BASE_URL, + envKey: OPENROUTER_ENV_KEY, + capabilities: supportsVision ? { vision: true } : undefined, + generationConfig: + typeof model.context_length === 'number' + ? 
{ contextWindowSize: model.context_length } + : undefined, + }; +} + +function chooseRepresentativeModel( + models: ModelConfig[], + predicate: (model: ModelConfig) => boolean, + selectedIds: Set<string>, +): ModelConfig | undefined { + return models.find((model) => predicate(model) && !selectedIds.has(model.id)); +} + +function addRecommendedModel( + target: ModelConfig[], + model: ModelConfig | undefined, + selectedIds: Set<string>, + limit: number, +): void { + if (!model || selectedIds.has(model.id) || target.length >= limit) { + return; + } + target.push(model); + selectedIds.add(model.id); +} + +export function selectRecommendedOpenRouterModels( + models: ModelConfig[], + limit = OPENROUTER_RECOMMENDED_MODEL_LIMIT, +): ModelConfig[] { + if (models.length <= limit) { + return models; + } + + const sorted = [...models].sort(compareOpenRouterModels); + const recommended: ModelConfig[] = []; + const selectedIds = new Set<string>(); + + const freeModels = sorted.filter((model) => isOpenRouterFreeConfig(model)); + for (const model of freeModels.slice(0, Math.min(limit, 6))) { + addRecommendedModel(recommended, model, selectedIds, limit); + } + + for (const prefix of OPENROUTER_MODEL_PRIORITY_PREFIXES) { + addRecommendedModel( + recommended, + chooseRepresentativeModel( + sorted, + (model) => model.id.toLowerCase().startsWith(prefix), + selectedIds, + ), + selectedIds, + limit, + ); + } + + for (const family of ['anthropic/', 'google/', 'openai/']) { + addRecommendedModel( + recommended, + chooseRepresentativeModel( + sorted, + (model) => model.id.toLowerCase().startsWith(family), + selectedIds, + ), + selectedIds, + limit, + ); + } + + addRecommendedModel( + recommended, + chooseRepresentativeModel( + sorted, + (model) => model.capabilities?.vision === true, + selectedIds, + ), + selectedIds, + limit, + ); + + addRecommendedModel( + recommended, + chooseRepresentativeModel( + sorted, + (model) => (model.generationConfig?.contextWindowSize || 0) >= 1000000, + selectedIds, + ), +
selectedIds, + limit, + ); + + for (const model of sorted) { + if (recommended.length >= limit) { + break; + } + addRecommendedModel(recommended, model, selectedIds, limit); + } + + return recommended; +} + +export function isOpenRouterConfig(config: ModelConfig): boolean { + return (config.baseUrl || '').includes('openrouter.ai'); +} + +export function mergeOpenRouterConfigs( + existingConfigs: ModelConfig[], + openRouterModels = OPENROUTER_DEFAULT_MODELS, +): ModelConfig[] { + const nonOpenRouterConfigs = existingConfigs.filter( + (existing) => !isOpenRouterConfig(existing), + ); + return [...openRouterModels, ...nonOpenRouterConfigs]; +} + +export interface ApplyOpenRouterModelsResult { + updatedConfigs: ModelConfig[]; + activeModelId?: string; + persistScope: ReturnType<typeof getPersistScopeForModelSelection>; +} + +export async function applyOpenRouterModelsConfiguration(params: { + settings: LoadedSettings; + config: Config; + apiKey: string; + reloadConfig: boolean; +}): Promise<ApplyOpenRouterModelsResult> { + const { settings, config, apiKey, reloadConfig } = params; + const persistScope = getPersistScopeForModelSelection(settings); + + settings.setValue(persistScope, `env.${OPENROUTER_ENV_KEY}`, apiKey); + process.env[OPENROUTER_ENV_KEY] = apiKey; + + const existingConfigs = + (settings.merged.modelProviders as ModelProvidersConfig | undefined)?.[ + AuthType.USE_OPENAI + ] || []; + const openRouterCatalog = await getOpenRouterModelsWithFallback(); + const openRouterModels = selectRecommendedOpenRouterModels(openRouterCatalog); + const updatedConfigs = mergeOpenRouterConfigs( + existingConfigs, + openRouterModels, + ); + + settings.setValue( + persistScope, + `modelProviders.${AuthType.USE_OPENAI}`, + updatedConfigs, + ); + settings.setValue( + persistScope, + 'security.auth.selectedType', + AuthType.USE_OPENAI, + ); + + const activeModelId = getPreferredOpenRouterModelId(updatedConfigs); + if (activeModelId) { + settings.setValue(persistScope, 'model.name', activeModelId); + } + + if (reloadConfig) { + const 
updatedModelProviders: ModelProvidersConfig = { + ...(settings.merged.modelProviders as ModelProvidersConfig | undefined), + [AuthType.USE_OPENAI]: updatedConfigs, + }; + config.reloadModelProvidersConfig(updatedModelProviders); + } + + return { + updatedConfigs, + activeModelId, + persistScope, + }; +} + +export async function fetchOpenRouterModels(): Promise<ModelConfig[]> { + const response = await fetch(OPENROUTER_MODELS_URL, { + method: 'GET', + headers: buildOpenRouterHeaders(), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error( + `OpenRouter models request failed (${response.status}): ${errorText}`, + ); + } + + const data = (await response.json()) as { + data?: OpenRouterModelApiRecord[]; + }; + const records = Array.isArray(data.data) ? data.data : []; + const models = records + .map((record) => toOpenRouterModelConfig(record)) + .filter((model): model is ModelConfig => model !== null) + .sort(compareOpenRouterModels); + + if (models.length < OPENROUTER_MINIMUM_TEXT_MODELS) { + throw new Error( + 'OpenRouter models request returned too few usable text models.', + ); + } + + return models; +} + +export async function getOpenRouterModelsWithFallback(): Promise< + ModelConfig[] +> { + try { + return await fetchOpenRouterModels(); + } catch { + return OPENROUTER_DEFAULT_MODELS; + } +} + +export async function exchangeAuthCodeForApiKey(params: { + code: string; + codeVerifier: string; +}): Promise<{ apiKey: string; userId?: string }> { + const response = await fetch(OPENROUTER_OAUTH_EXCHANGE_URL, { + method: 'POST', + headers: buildOpenRouterHeaders(), + body: JSON.stringify({ + code: params.code, + code_verifier: params.codeVerifier, + code_challenge_method: OPENROUTER_CODE_CHALLENGE_METHOD, + }), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error( + `OpenRouter API key exchange failed (${response.status}): ${errorText}`, + ); + } + + const data = (await response.json()) as { + key?: string; + user_id?: string; + }; + + if (!data.key) { 
+ throw new Error( + 'OpenRouter API key exchange succeeded but no key was returned.', + ); + } + + return { + apiKey: data.key, + userId: data.user_id, + }; +} + +interface OAuthSignalTarget { + once(event: NodeJS.Signals, listener: (signal: NodeJS.Signals) => void): void; + removeListener( + event: NodeJS.Signals, + listener: (signal: NodeJS.Signals) => void, + ): void; +} + +interface OpenRouterOAuthLoginDeps { + openBrowser?: typeof open; + startListener?: typeof startOAuthCallbackListener; + exchangeApiKey?: typeof exchangeAuthCodeForApiKey; + now?: () => number; + signalTarget?: OAuthSignalTarget; + abortSignal?: AbortSignal; + session?: OpenRouterOAuthSession; +} + +export async function runOpenRouterOAuthLogin( + callbackUrl = OPENROUTER_OAUTH_CALLBACK_URL, + deps: OpenRouterOAuthLoginDeps = {}, +): Promise<{ apiKey: string; userId?: string; authorizationUrl: string; authorizationCodeWaitMs: number; apiKeyExchangeMs: number }> { + const session = deps.session || createOpenRouterOAuthSession(callbackUrl); + const { + callbackUrl: effectiveCallbackUrl, + codeVerifier, + state, + authorizationUrl: authUrl, + } = session; + + const openBrowser = deps.openBrowser || open; + const startListener = deps.startListener || startOAuthCallbackListener; + const exchangeApiKey = deps.exchangeApiKey || exchangeAuthCodeForApiKey; + const now = deps.now || Date.now; + const signalTarget = deps.signalTarget || process; + const abortSignal = deps.abortSignal; + + const listener = startListener( + effectiveCallbackUrl, + OPENROUTER_OAUTH_TIMEOUT_MS, + state, + ); + let cleanupSignalHandlers = () => {}; + let cleanupAbortListener = () => {}; + try { + await listener.ready; + await openBrowser(authUrl); + + const waitForCancel = new Promise<never>((_, reject) => { + const handleSignal = (signal: NodeJS.Signals) => { + reject( + new Error( + `OpenRouter OAuth cancelled by user (${signal}) while waiting for browser authorization.`, + ), + ); + }; + + signalTarget.once('SIGINT', handleSignal); + signalTarget.once('SIGTERM', handleSignal); + cleanupSignalHandlers = () => { + 
signalTarget.removeListener('SIGINT', handleSignal); + signalTarget.removeListener('SIGTERM', handleSignal); + }; + }); + + const waitForAbort = new Promise<never>((_, reject) => { + if (!abortSignal) { + return; + } + + const handleAbort = () => { + reject(new DOMException('OpenRouter OAuth cancelled.', 'AbortError')); + }; + + if (abortSignal.aborted) { + handleAbort(); + return; + } + + abortSignal.addEventListener('abort', handleAbort, { once: true }); + cleanupAbortListener = () => { + abortSignal.removeEventListener('abort', handleAbort); + }; + }); + + const waitStartMs = now(); + const code = await Promise.race([ + listener.waitForCode, + waitForCancel, + waitForAbort, + ]); + cleanupSignalHandlers(); + cleanupAbortListener(); + const authorizationCodeWaitMs = now() - waitStartMs; + + const exchangeStartMs = now(); + const exchangeResult = await exchangeApiKey({ code, codeVerifier }); + const apiKeyExchangeMs = now() - exchangeStartMs; + + return { + ...exchangeResult, + authorizationUrl: authUrl, + authorizationCodeWaitMs, + apiKeyExchangeMs, + }; + } finally { + cleanupSignalHandlers(); + cleanupAbortListener(); + void listener.close().catch(() => undefined); + } +} diff --git a/packages/cli/src/commands/auth/status.test.ts b/packages/cli/src/commands/auth/status.test.ts index ca5a52718..b7b649a82 100644 --- a/packages/cli/src/commands/auth/status.test.ts +++ b/packages/cli/src/commands/auth/status.test.ts @@ -6,7 +6,8 @@ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import { showAuthStatus } from './handler.js'; -import { AuthType, CODING_PLAN_ENV_KEY } from '@qwen-code/qwen-code-core'; +import { AuthType } from '@qwen-code/qwen-code-core'; +import { CODING_PLAN_ENV_KEY } from '../../constants/codingPlan.js'; import type { LoadedSettings } from '../../config/settings.js'; vi.mock('../../config/settings.js', () => ({ @@ -26,11 +27,13 @@ describe('showAuthStatus', () => { vi.clearAllMocks(); vi.spyOn(process, 
'exit').mockImplementation((() => undefined) as never); delete process.env[CODING_PLAN_ENV_KEY]; + delete process.env['OPENROUTER_API_KEY']; }); afterEach(() => { vi.restoreAllMocks(); delete process.env[CODING_PLAN_ENV_KEY]; + delete process.env['OPENROUTER_API_KEY']; }); const createMockSettings = ( @@ -55,6 +58,9 @@ describe('showAuthStatus', () => { expect(writeStdoutLine).toHaveBeenCalledWith( expect.stringContaining('No authentication method configured'), ); + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('qwen auth openrouter'), + ); expect(writeStdoutLine).toHaveBeenCalledWith( expect.stringContaining('qwen auth qwen-oauth'), ); @@ -120,6 +126,77 @@ describe('showAuthStatus', () => { expect(process.exit).toHaveBeenCalledWith(0); }); + it('should show OpenRouter status when configured with API key', async () => { + process.env['OPENROUTER_API_KEY'] = 'test-openrouter-key'; + + vi.mocked(loadSettings).mockReturnValue( + createMockSettings({ + security: { + auth: { + selectedType: AuthType.USE_OPENAI, + }, + }, + model: { + name: 'openai/gpt-4o-mini', + }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { + id: 'openai/gpt-4o-mini', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ], + }, + }), + ); + + await showAuthStatus(); + + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('OpenRouter'), + ); + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('openai/gpt-4o-mini'), + ); + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('API key configured'), + ); + expect(process.exit).toHaveBeenCalledWith(0); + }); + + it('should show OpenRouter as incomplete when API key is missing', async () => { + vi.mocked(loadSettings).mockReturnValue( + createMockSettings({ + security: { + auth: { + selectedType: AuthType.USE_OPENAI, + }, + }, + modelProviders: { + [AuthType.USE_OPENAI]: [ + { + id: 
'openai/gpt-4o-mini', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ], + }, + }), + ); + + await showAuthStatus(); + + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('OpenRouter (Incomplete)'), + ); + expect(writeStdoutLine).toHaveBeenCalledWith( + expect.stringContaining('qwen auth openrouter'), + ); + }); + it('should show Coding Plan as incomplete when API key is missing', async () => { vi.mocked(loadSettings).mockReturnValue( createMockSettings({ diff --git a/packages/cli/src/i18n/locales/de.js b/packages/cli/src/i18n/locales/de.js index d4b6e94bf..4cc0d50e5 100644 --- a/packages/cli/src/i18n/locales/de.js +++ b/packages/cli/src/i18n/locales/de.js @@ -1301,6 +1301,8 @@ export default { 'Kostenpflichtig \u00B7 Bis zu 6.000 Anfragen/5 Std. \u00B7 Alle Alibaba Cloud Coding Plan Modelle', 'Alibaba Cloud Coding Plan': 'Alibaba Cloud Coding Plan', 'Bring your own API key': 'Eigenen API-Schlüssel verwenden', + 'Browser-based authentication with third-party providers (e.g. OpenRouter, ModelScope)': + 'Browserbasierte Authentifizierung mit externen Anbietern (z. B. OpenRouter, ModelScope)', 'API-KEY': 'API-KEY', 'Use coding plan credentials or your own api-keys/providers.': 'Verwenden Sie Coding Plan-Anmeldedaten oder Ihre eigenen API-Schlüssel/Anbieter.', diff --git a/packages/cli/src/i18n/locales/en.js b/packages/cli/src/i18n/locales/en.js index c2a427bdb..268aa5048 100644 --- a/packages/cli/src/i18n/locales/en.js +++ b/packages/cli/src/i18n/locales/en.js @@ -1361,6 +1361,8 @@ export default { 'Paid \u00B7 Up to 6,000 requests/5 hrs \u00B7 All Alibaba Cloud Coding Plan Models', 'Alibaba Cloud Coding Plan': 'Alibaba Cloud Coding Plan', 'Bring your own API key': 'Bring your own API key', + 'Browser-based authentication with third-party providers (e.g. OpenRouter, ModelScope)': + 'Browser-based authentication with third-party providers (e.g. 
OpenRouter, ModelScope)', 'API-KEY': 'API-KEY', 'Use coding plan credentials or your own api-keys/providers.': 'Use coding plan credentials or your own api-keys/providers.', diff --git a/packages/cli/src/i18n/locales/fr.js b/packages/cli/src/i18n/locales/fr.js index 138f7bf8d..433b61f4a 100644 --- a/packages/cli/src/i18n/locales/fr.js +++ b/packages/cli/src/i18n/locales/fr.js @@ -1345,6 +1345,8 @@ export default { "Payant · Jusqu'à 6 000 requêtes/5h · Tous les modèles Alibaba Cloud Coding Plan", 'Alibaba Cloud Coding Plan': 'Plan de codage Alibaba Cloud', 'Bring your own API key': 'Apportez votre propre clé API', + 'Browser-based authentication with third-party providers (e.g. OpenRouter, ModelScope)': + 'Authentification basée sur le navigateur avec des fournisseurs tiers (par exemple OpenRouter, ModelScope)', 'API-KEY': 'CLÉ-API', 'Use coding plan credentials or your own api-keys/providers.': 'Utilisez les identifiants du plan de codage ou vos propres clés API/fournisseurs.', diff --git a/packages/cli/src/i18n/locales/ja.js b/packages/cli/src/i18n/locales/ja.js index 2c7225639..94a4e5ffc 100644 --- a/packages/cli/src/i18n/locales/ja.js +++ b/packages/cli/src/i18n/locales/ja.js @@ -1022,6 +1022,8 @@ export default { '有料 \u00B7 5時間最大6,000リクエスト \u00B7 すべての Alibaba Cloud Coding Plan モデル', 'Alibaba Cloud Coding Plan': 'Alibaba Cloud Coding Plan', 'Bring your own API key': '自分のAPIキーを使用', + 'Browser-based authentication with third-party providers (e.g. 
OpenRouter, ModelScope)': + 'サードパーティプロバイダーによるブラウザベースの認証(例:OpenRouter、ModelScope)', 'API-KEY': 'API-KEY', 'Use coding plan credentials or your own api-keys/providers.': 'Coding Planの認証情報またはご自身のAPIキー/プロバイダーをご利用ください。', diff --git a/packages/cli/src/i18n/locales/pt.js b/packages/cli/src/i18n/locales/pt.js index db681a19a..74e9630dc 100644 --- a/packages/cli/src/i18n/locales/pt.js +++ b/packages/cli/src/i18n/locales/pt.js @@ -1309,6 +1309,8 @@ export default { 'Pago \u00B7 Até 6.000 solicitações/5 hrs \u00B7 Todos os modelos Alibaba Cloud Coding Plan', 'Alibaba Cloud Coding Plan': 'Alibaba Cloud Coding Plan', 'Bring your own API key': 'Traga sua própria chave API', + 'Browser-based authentication with third-party providers (e.g. OpenRouter, ModelScope)': + 'Autenticação baseada em navegador com provedores terceiros (por exemplo, OpenRouter, ModelScope)', 'API-KEY': 'API-KEY', 'Use coding plan credentials or your own api-keys/providers.': 'Use credenciais do Coding Plan ou suas próprias chaves API/provedores.', diff --git a/packages/cli/src/i18n/locales/ru.js b/packages/cli/src/i18n/locales/ru.js index ed2af31b3..f2071ffed 100644 --- a/packages/cli/src/i18n/locales/ru.js +++ b/packages/cli/src/i18n/locales/ru.js @@ -1231,6 +1231,8 @@ export default { 'Платно \u00B7 До 6 000 запросов/5 часов \u00B7 Все модели Alibaba Cloud Coding Plan', 'Alibaba Cloud Coding Plan': 'Alibaba Cloud Coding Plan', 'Bring your own API key': 'Используйте свой API-ключ', + 'Browser-based authentication with third-party providers (e.g. 
OpenRouter, ModelScope)': + 'Браузерная аутентификация с использованием сторонних провайдеров (например, OpenRouter, ModelScope)', 'API-KEY': 'API-KEY', 'Use coding plan credentials or your own api-keys/providers.': 'Используйте учетные данные Coding Plan или свои собственные API-ключи/провайдеры.', diff --git a/packages/cli/src/i18n/locales/zh-TW.js b/packages/cli/src/i18n/locales/zh-TW.js index 9460ba71d..5ab279b51 100644 --- a/packages/cli/src/i18n/locales/zh-TW.js +++ b/packages/cli/src/i18n/locales/zh-TW.js @@ -1141,6 +1141,8 @@ export default { 'Alibaba Cloud Coding Plan': '阿里雲百鍊 Coding Plan', 'Bring your own API key': '使用自己的 API 密鑰', 'API-KEY': 'API-KEY', + 'Browser-based authentication with third-party providers (e.g. OpenRouter, ModelScope)': + '基於瀏覽器的第三方提供商認證(例如 OpenRouter、ModelScope)', 'Use coding plan credentials or your own api-keys/providers.': '使用 Coding Plan 憑證或您自己的 API 密鑰/提供商。', OpenAI: 'OpenAI', diff --git a/packages/cli/src/i18n/locales/zh.js b/packages/cli/src/i18n/locales/zh.js index 4c3c98ba6..701732e87 100644 --- a/packages/cli/src/i18n/locales/zh.js +++ b/packages/cli/src/i18n/locales/zh.js @@ -1291,6 +1291,8 @@ export default { '付费 \u00B7 每 5 小时最多 6,000 次请求 \u00B7 支持阿里云百炼 Coding Plan 全部模型', 'Alibaba Cloud Coding Plan': '阿里云百炼 Coding Plan', 'Bring your own API key': '使用自己的 API 密钥', + 'Browser-based authentication with third-party providers (e.g. 
OpenRouter, ModelScope)': + '基于浏览器的第三方提供商认证(例如 OpenRouter、ModelScope)', 'Use coding plan credentials or your own api-keys/providers.': '使用 Coding Plan 凭证或您自己的 API 密钥/提供商。', OpenAI: 'OpenAI', diff --git a/packages/cli/src/services/BuiltinCommandLoader.ts b/packages/cli/src/services/BuiltinCommandLoader.ts index 9d7fd4722..086542695 100644 --- a/packages/cli/src/services/BuiltinCommandLoader.ts +++ b/packages/cli/src/services/BuiltinCommandLoader.ts @@ -36,6 +36,7 @@ import { dreamCommand } from '../ui/commands/dreamCommand.js'; import { forgetCommand } from '../ui/commands/forgetCommand.js'; import { memoryCommand } from '../ui/commands/memoryCommand.js'; import { modelCommand } from '../ui/commands/modelCommand.js'; +import { manageModelsCommand } from '../ui/commands/manageModelsCommand.js'; import { rememberCommand } from '../ui/commands/rememberCommand.js'; import { planCommand } from '../ui/commands/planCommand.js'; import { permissionsCommand } from '../ui/commands/permissionsCommand.js'; @@ -118,6 +119,7 @@ export class BuiltinCommandLoader implements ICommandLoader { : []), memoryCommand, modelCommand, + manageModelsCommand, rememberCommand, planCommand, permissionsCommand, diff --git a/packages/cli/src/ui/AppContainer.test.tsx b/packages/cli/src/ui/AppContainer.test.tsx index 0ecd6dd6c..9b15d302b 100644 --- a/packages/cli/src/ui/AppContainer.test.tsx +++ b/packages/cli/src/ui/AppContainer.test.tsx @@ -191,9 +191,17 @@ describe('AppContainer State Management', () => { onAuthError: vi.fn(), isAuthDialogOpen: false, isAuthenticating: false, + pendingAuthType: undefined, + externalAuthState: null, + qwenAuthState: { + deviceAuth: null, + authStatus: 'idle', + authMessage: null, + }, handleAuthSelect: vi.fn(), handleCodingPlanSubmit: vi.fn(), handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), openAuthDialog: vi.fn(), cancelAuthentication: vi.fn(), }); @@ -1397,7 +1405,17 @@ describe('AppContainer State Management', () => { onAuthError: 
vi.fn(), isAuthDialogOpen: false, isAuthenticating: true, + pendingAuthType: undefined, + externalAuthState: null, + qwenAuthState: { + deviceAuth: null, + authStatus: 'idle', + authMessage: null, + }, handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), openAuthDialog: vi.fn(), cancelAuthentication: vi.fn(), }); diff --git a/packages/cli/src/ui/AppContainer.tsx b/packages/cli/src/ui/AppContainer.tsx index c16e5305d..0e88f0c02 100644 --- a/packages/cli/src/ui/AppContainer.tsx +++ b/packages/cli/src/ui/AppContainer.tsx @@ -70,6 +70,7 @@ import { useAuthCommand } from './auth/useAuth.js'; import { useEditorSettings } from './hooks/useEditorSettings.js'; import { useSettingsCommand } from './hooks/useSettingsCommand.js'; import { useModelCommand } from './hooks/useModelCommand.js'; +import { useManageModelsCommand } from './hooks/useManageModelsCommand.js'; import { useArenaCommand } from './hooks/useArenaCommand.js'; import { useApprovalModeCommand } from './hooks/useApprovalModeCommand.js'; import { useResumeCommand } from './hooks/useResumeCommand.js'; @@ -518,10 +519,13 @@ export const AppContainer = (props: AppContainerProps) => { isAuthDialogOpen, isAuthenticating, pendingAuthType, + externalAuthState, qwenAuthState, handleAuthSelect, handleCodingPlanSubmit, handleAlibabaStandardSubmit, + handleOpenRouterSubmit, + handleCustomApiKeySubmit, openAuthDialog, cancelAuthentication, } = useAuthCommand(settings, config, historyManager.addItem, refreshStatic); @@ -591,6 +595,11 @@ export const AppContainer = (props: AppContainerProps) => { openModelDialog, closeModelDialog, } = useModelCommand(); + const { + isManageModelsDialogOpen, + openManageModelsDialog, + closeManageModelsDialog, + } = useManageModelsCommand(); const { activeArenaDialog, openArenaDialog, closeArenaDialog } = useArenaCommand(); @@ -656,6 +665,7 @@ export const AppContainer = (props: AppContainerProps) => { 
openMemoryDialog, openSettingsDialog, openModelDialog, + openManageModelsDialog, openTrustDialog, openArenaDialog, openPermissionsDialog, @@ -687,6 +697,7 @@ export const AppContainer = (props: AppContainerProps) => { openMemoryDialog, openSettingsDialog, openModelDialog, + openManageModelsDialog, openArenaDialog, setDebugMessage, dispatchExtensionStateUpdate, @@ -1551,6 +1562,7 @@ export const AppContainer = (props: AppContainerProps) => { isSettingsDialogOpen || isMemoryDialogOpen || isModelDialogOpen || + isManageModelsDialogOpen || isTrustDialogOpen || activeArenaDialog !== null || isPermissionsDialogOpen || @@ -2237,6 +2249,7 @@ export const AppContainer = (props: AppContainerProps) => { authError, isAuthDialogOpen, pendingAuthType, + externalAuthState, // Qwen OAuth state qwenAuthState, editorError, @@ -2247,6 +2260,7 @@ export const AppContainer = (props: AppContainerProps) => { isMemoryDialogOpen, isModelDialogOpen, isFastModelMode, + isManageModelsDialogOpen, isTrustDialogOpen, activeArenaDialog, isPermissionsDialogOpen, @@ -2356,6 +2370,7 @@ export const AppContainer = (props: AppContainerProps) => { authError, isAuthDialogOpen, pendingAuthType, + externalAuthState, // Qwen OAuth state qwenAuthState, editorError, @@ -2366,6 +2381,7 @@ export const AppContainer = (props: AppContainerProps) => { isMemoryDialogOpen, isModelDialogOpen, isFastModelMode, + isManageModelsDialogOpen, isTrustDialogOpen, activeArenaDialog, isPermissionsDialogOpen, @@ -2484,12 +2500,16 @@ export const AppContainer = (props: AppContainerProps) => { cancelAuthentication, handleCodingPlanSubmit, handleAlibabaStandardSubmit, + handleOpenRouterSubmit, + handleCustomApiKeySubmit, handleEditorSelect, exitEditorDialog, closeSettingsDialog, closeMemoryDialog, closeModelDialog, openModelDialog, + openManageModelsDialog, + closeManageModelsDialog, openArenaDialog, closeArenaDialog, handleArenaModelsSelected, @@ -2554,12 +2574,16 @@ export const AppContainer = (props: AppContainerProps) => { 
cancelAuthentication, handleCodingPlanSubmit, handleAlibabaStandardSubmit, + handleOpenRouterSubmit, + handleCustomApiKeySubmit, handleEditorSelect, exitEditorDialog, closeSettingsDialog, closeMemoryDialog, closeModelDialog, openModelDialog, + openManageModelsDialog, + closeManageModelsDialog, openArenaDialog, closeArenaDialog, handleArenaModelsSelected, diff --git a/packages/cli/src/ui/auth/AuthDialog.test.tsx b/packages/cli/src/ui/auth/AuthDialog.test.tsx index b540ae56e..ad62f46ff 100644 --- a/packages/cli/src/ui/auth/AuthDialog.test.tsx +++ b/packages/cli/src/ui/auth/AuthDialog.test.tsx @@ -34,6 +34,7 @@ const createMockUIActions = (overrides: Partial<UIActions> = {}): UIActions => { handleAuthSelect: vi.fn(), handleCodingPlanSubmit: vi.fn(), handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), onAuthError: vi.fn(), handleRetryLastPrompt: vi.fn(), } as Partial<UIActions>; @@ -69,6 +70,118 @@ const renderAuthDialog = ( ); }; +/** + * Type text into the terminal one character at a time. + * Works around a Node 24.x + ink compatibility issue on Windows + * where bulk stdin.write() may not propagate to TextInput correctly. 
+ */ +const typeText = async ( + stdin: { write: (s: string) => void }, + text: string, +) => { + const delay = (ms = 5) => new Promise((resolve) => setTimeout(resolve, ms)); + for (const char of text) { + stdin.write(char); + await delay(5); + } + await delay(30); +}; + +const escapeRegExp = (text: string) => + text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); + +const expectSelectedOption = (frame: string | undefined, label: string) => { + expect(frame).toMatch( + new RegExp(`›\\s*(?:\\d+\\.\\s*)?${escapeRegExp(label)}`), + ); +}; + +const waitForSelectedOption = async ( + lastFrame: () => string | undefined, + label: string, +) => { + await vi.waitFor(() => { + expectSelectedOption(lastFrame(), label); + }); +}; + +const pressEnterAndWaitFor = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, + expectedText: string, +) => { + stdin.write('\r'); + await vi.waitFor(() => { + expect(lastFrame()).toContain(expectedText); + }); +}; + +const moveDownAndWaitForSelection = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, + label: string, +) => { + stdin.write('\u001b[B'); + await waitForSelectedOption(lastFrame, label); +}; + +const navigateToCustomProtocolSelect = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, +) => { + await waitForSelectedOption(lastFrame, 'OAuth'); + await moveDownAndWaitForSelection( + stdin, + lastFrame, + 'Alibaba Cloud Coding Plan', + ); + await moveDownAndWaitForSelection(stdin, lastFrame, 'API Key'); + await pressEnterAndWaitFor(stdin, lastFrame, 'Select API Key Type'); + await waitForSelectedOption( + lastFrame, + 'Alibaba Cloud ModelStudio Standard API Key', + ); + await moveDownAndWaitForSelection(stdin, lastFrame, 'Custom API Key'); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 1/6 · Protocol'); +}; + +const navigateToCustomBaseUrlInput = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | 
undefined, +) => { + await navigateToCustomProtocolSelect(stdin, lastFrame); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 2/6 · Base URL'); +}; + +const navigateToCustomApiKeyInput = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, +) => { + await navigateToCustomBaseUrlInput(stdin, lastFrame); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 3/6 · API Key'); +}; + +const navigateToCustomModelIdInput = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, + apiKey = 'sk-test', +) => { + await navigateToCustomApiKeyInput(stdin, lastFrame); + await typeText(stdin, apiKey); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 4/6 · Model IDs'); +}; + +const navigateToCustomAdvancedConfig = async ( + stdin: { write: (s: string) => void }, + lastFrame: () => string | undefined, + apiKey = 'sk-test', + modelIds = 'model-1,model-2', +) => { + await navigateToCustomModelIdInput(stdin, lastFrame, apiKey); + await typeText(stdin, modelIds); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 5/6 · Advanced Config'); +}; + describe('AuthDialog', () => { const wait = (ms = 50) => new Promise((resolve) => setTimeout(resolve, ms)); @@ -308,8 +421,8 @@ describe('AuthDialog', () => { const { lastFrame } = renderAuthDialog(settings); - // QWEN_OAUTH is the first option, so it should be selected - expect(lastFrame()).toContain('Qwen OAuth'); + // QWEN_OAUTH maps to 'OAUTH' in the new three-option main menu + expect(lastFrame()).toContain('OAuth'); }); it('should fall back to default if QWEN_DEFAULT_AUTH_TYPE is not set', () => { @@ -391,8 +504,8 @@ describe('AuthDialog', () => { const { lastFrame } = renderAuthDialog(settings); // Since the auth dialog doesn't show QWEN_DEFAULT_AUTH_TYPE errors anymore, - // it will just show the default Qwen OAuth option - expect(lastFrame()).toContain('Qwen OAuth'); + // it will just show the default OAuth option + expect(lastFrame()).toContain('OAuth'); }); }); 
@@ -558,4 +671,524 @@ describe('AuthDialog', () => { expect(handleAuthSelect).toHaveBeenCalledWith(undefined); unmount(); }); + + it('should show OpenRouter in API key options', async () => { + const settings: LoadedSettings = new LoadedSettings( + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + { + settings: {}, + originalSettings: {}, + path: '', + }, + { + settings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + originalSettings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + path: '', + }, + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + true, + new Set(), + ); + + const { stdin, lastFrame, unmount } = renderAuthDialog(settings); + await wait(); + + // OAuth is selected by default, press Enter to enter OAuth provider list + stdin.write('\r'); + await wait(); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('OpenRouter'); + expect(frame).toContain('Browser OAuth'); + }); + + unmount(); + }); + + it('should trigger OpenRouter OAuth from API key options', async () => { + const handleOpenRouterSubmit = vi.fn().mockResolvedValue(undefined); + const settings: LoadedSettings = new LoadedSettings( + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + { + settings: {}, + originalSettings: {}, + path: '', + }, + { + settings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + originalSettings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + path: '', + }, + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + 
originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + true, + new Set(), + ); + + const { stdin, unmount } = renderAuthDialog( + settings, + {}, + { handleOpenRouterSubmit }, + ); + await wait(); + + // OAuth is selected by default, press Enter to enter OAuth provider list + stdin.write('\r'); + await wait(); + // OpenRouter is the first option, press Enter to trigger OAuth + stdin.write('\r'); + await wait(); + + await vi.waitFor(() => { + expect(handleOpenRouterSubmit).toHaveBeenCalledTimes(1); + }); + + unmount(); + }); +}); + +const isUnreliableTuiInputEnvironment = + process.platform === 'win32' || + (process.env['CI'] === 'true' && process.version.startsWith('v20.')); +const itWhenTuiInputReliable = isUnreliableTuiInputEnvironment ? it.skip : it; + +describe('AuthDialog Custom API Key Wizard', () => { + const wait = (ms = 50) => new Promise((resolve) => setTimeout(resolve, ms)); + + const createStandardSettings = (): LoadedSettings => + new LoadedSettings( + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + { + settings: {}, + originalSettings: {}, + path: '', + }, + { + settings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + originalSettings: { + security: { auth: { selectedType: undefined } }, + ui: { customThemes: {} }, + mcpServers: {}, + }, + path: '', + }, + { + settings: { ui: { customThemes: {} }, mcpServers: {} }, + originalSettings: { ui: { customThemes: {} }, mcpServers: {} }, + path: '', + }, + true, + new Set(), + ); + + itWhenTuiInputReliable( + 'navigates to protocol selection when Custom API Key is selected', + async () => { + const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn(); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + 
handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + } as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomProtocolSelect(stdin, lastFrame); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Step 1/6 · Protocol'); + expect(frame).toContain('OpenAI-compatible'); + expect(frame).toContain('Anthropic-compatible'); + expect(frame).toContain('Gemini-compatible'); + }); + + unmount(); + }, + ); + + itWhenTuiInputReliable( + 'navigates to base URL input after selecting a protocol', + async () => { + const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn(); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + } as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomBaseUrlInput(stdin, lastFrame); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Step 2/6 · Base URL'); + expect(frame).toContain('Enter the API endpoint'); + }); + + unmount(); + }, + ); + + itWhenTuiInputReliable( + 'shows review screen with JSON after entering model IDs', + async () => { + 
const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn(); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + } as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomAdvancedConfig( + stdin, + lastFrame, + 'sk-test-key-12345', + 'qwen/qwen3-coder,gpt-4.1', + ); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 6/6 · Review'); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Step 6/6 · Review'); + expect(frame).toContain('The following JSON will be saved'); + expect(frame).toContain('QWEN_CUSTOM_API_KEY_OPENAI'); + expect(frame).toContain('qwen/qwen3-coder'); + expect(frame).toContain('gpt-4.1'); + expect(frame).toContain('Enter to save'); + }); + + unmount(); + }, + ); + + itWhenTuiInputReliable( + 'calls handleCustomApiKeySubmit on Enter in review view', + async () => { + const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn().mockResolvedValue(undefined); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + 
} as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomAdvancedConfig( + stdin, + lastFrame, + 'sk-test', + 'model-1,model-2', + ); + await pressEnterAndWaitFor(stdin, lastFrame, 'Step 6/6 · Review'); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Enter to save'); + }); + + stdin.write('\r'); // Enter to save + await wait(); + + await vi.waitFor(() => { + expect(handleCustomApiKeySubmit).toHaveBeenCalledWith( + AuthType.USE_OPENAI, + 'https://api.openai.com/v1', + 'sk-test', + 'model-1,model-2', + undefined, + ); + }); + + unmount(); + }, + ); + + itWhenTuiInputReliable( + 'shows advanced config screen after entering model IDs', + async () => { + const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn(); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + } as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomAdvancedConfig( + stdin, + lastFrame, + 'sk-test', + 'model-1,model-2', + ); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Step 5/6 · Advanced Config'); + expect(frame).toContain( + 'Optional: configure advanced generation settings', + ); + expect(frame).toContain('Enable thinking'); + expect(frame).toContain('Enable modality'); + expect(frame).toContain('Enter to continue'); + }); + + unmount(); + }, + ); + + 
itWhenTuiInputReliable( + 'passes generationConfig when advanced options are toggled', + async () => { + const settings = createStandardSettings(); + const handleCustomApiKeySubmit = vi.fn().mockResolvedValue(undefined); + + const mockUIState = { + authError: null, + pendingAuthType: undefined, + } as UIState; + + const mockUIActions = { + handleAuthSelect: vi.fn(), + handleCodingPlanSubmit: vi.fn(), + handleAlibabaStandardSubmit: vi.fn(), + handleOpenRouterSubmit: vi.fn(), + handleCustomApiKeySubmit, + onAuthError: vi.fn(), + handleRetryLastPrompt: vi.fn(), + } as unknown as UIActions; + + const mockConfig = { + getAuthType: vi.fn(() => undefined), + getContentGeneratorConfig: vi.fn(() => ({})), + } as unknown as Config; + + const { stdin, lastFrame, unmount } = renderWithProviders( + + + + + , + { settings, config: mockConfig }, + ); + + await navigateToCustomAdvancedConfig( + stdin, + lastFrame, + 'sk-test', + 'model-1', + ); + + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('Step 5/6 · Advanced Config'); + }); + + // Toggle thinking (press Space — thinking is initially focused) + stdin.write(' '); + await wait(); + + // Navigate down to modality, toggle (press ↓ then Space) + stdin.write('\u001b[B'); + await wait(); + stdin.write(' '); + await wait(); + + // Press Enter to continue to review + stdin.write('\r'); + await wait(); + + // Verify review includes generationConfig + await vi.waitFor(() => { + const frame = lastFrame(); + expect(frame).toContain('"generationConfig"'); + expect(frame).toContain('"enable_thinking"'); + expect(frame).toContain('"image": true'); + expect(frame).toContain('"video": true'); + expect(frame).toContain('"audio": true'); + }); + + // Press Enter to save + stdin.write('\r'); + await wait(); + + await vi.waitFor(() => { + expect(handleCustomApiKeySubmit).toHaveBeenCalledWith( + AuthType.USE_OPENAI, + 'https://api.openai.com/v1', + 'sk-test', + 'model-1', + { + enableThinking: true, + multimodal: { 
+ image: true, + video: true, + audio: true, + }, + }, + ); + }); + + unmount(); + }, + ); }); diff --git a/packages/cli/src/ui/auth/AuthDialog.tsx b/packages/cli/src/ui/auth/AuthDialog.tsx index f7fb402b2..4d32b003f 100644 --- a/packages/cli/src/ui/auth/AuthDialog.tsx +++ b/packages/cli/src/ui/auth/AuthDialog.tsx @@ -26,6 +26,11 @@ import { ALIBABA_STANDARD_API_KEY_ENDPOINTS, type AlibabaStandardRegion, } from '../../constants/alibabaStandardApiKey.js'; +import { + generateCustomApiKeyEnvKey, + normalizeCustomModelIds, + maskApiKey, +} from './useAuth.js'; const MODEL_PROVIDERS_DOCUMENTATION_URL = 'https://qwenlm.github.io/qwen-code-docs/en/users/configuration/model-providers/'; @@ -43,8 +48,15 @@ function parseDefaultAuthType( } // Main menu option type -type MainOption = typeof AuthType.QWEN_OAUTH | 'CODING_PLAN' | 'API_KEY'; -type ApiKeyOption = 'ALIBABA_STANDARD_API_KEY' | 'CUSTOM_API_KEY'; +type MainOption = 'OAUTH' | 'CODING_PLAN' | 'API_KEY'; +type ApiKeyOption = + | 'OPENROUTER_OAUTH' + | 'ALIBABA_STANDARD_API_KEY' + | 'CUSTOM_API_KEY'; +type OAuthOption = + | 'OPENROUTER_OAUTH' + | 'MODELSCOPE_OAUTH' + | 'QWEN_OAUTH_DISCONTINUED'; // View level for navigation type ViewLevel = @@ -55,7 +67,13 @@ type ViewLevel = | 'alibaba-standard-region-select' | 'alibaba-standard-api-key-input' | 'alibaba-standard-model-id-input' - | 'custom-info'; + | 'custom-protocol-select' + | 'custom-base-url-input' + | 'custom-api-key-input' + | 'custom-model-id-input' + | 'custom-advanced-config' + | 'custom-review-json' + | 'oauth-provider-select'; const ALIBABA_STANDARD_MODEL_IDS_PLACEHOLDER = 'qwen3.5-plus,glm-5,kimi-k2.5'; const ALIBABA_STANDARD_API_DOCUMENTATION_URLS: Record< @@ -77,6 +95,8 @@ export function AuthDialog(): React.JSX.Element { handleAuthSelect: onAuthSelect, handleCodingPlanSubmit, handleAlibabaStandardSubmit, + handleOpenRouterSubmit, + handleCustomApiKeySubmit, onAuthError, } = useUIActions(); const config = useConfig(); @@ -90,6 +110,7 @@ export function 
AuthDialog(): React.JSX.Element { const [alibabaStandardRegionIndex, setAlibabaStandardRegionIndex] = useState(0); const [apiKeyTypeIndex, setApiKeyTypeIndex] = useState(0); + const [oauthProviderIndex, setOAuthProviderIndex] = useState(0); const [alibabaStandardRegion, setAlibabaStandardRegion] = useState('cn-beijing'); const [alibabaStandardApiKey, setAlibabaStandardApiKey] = useState(''); @@ -100,6 +121,30 @@ export function AuthDialog(): React.JSX.Element { const [alibabaStandardModelIdError, setAlibabaStandardModelIdError] = useState(null); + // Custom API Key wizard state + const [customProtocolIndex, setCustomProtocolIndex] = useState(0); + const [customProtocol, setCustomProtocol] = useState( + AuthType.USE_OPENAI, + ); + const [customBaseUrl, setCustomBaseUrl] = useState(''); + const [customBaseUrlError, setCustomBaseUrlError] = useState( + null, + ); + const [customApiKey, setCustomApiKey] = useState(''); + const [customApiKeyError, setCustomApiKeyError] = useState( + null, + ); + const [customModelIds, setCustomModelIds] = useState(''); + const [customModelIdsError, setCustomModelIdsError] = useState( + null, + ); + + // Advanced generation config state + const [advancedThinkingEnabled, setAdvancedThinkingEnabled] = useState(false); + const [advancedModalityEnabled, setAdvancedModalityEnabled] = useState(false); + const [focusedConfigIndex, setFocusedConfigIndex] = useState(0); + // 0 = thinking, 1 = modality + // Main authentication entries (flat three-option layout) const mainItems = [ { @@ -119,11 +164,13 @@ export function AuthDialog(): React.JSX.Element { value: 'API_KEY' as MainOption, }, { - key: AuthType.QWEN_OAUTH, - title: t('Qwen OAuth'), - label: t('Qwen OAuth'), - description: t('Discontinued — switch to Coding Plan or API Key'), - value: AuthType.QWEN_OAUTH as MainOption, + key: 'OAUTH', + title: t('OAuth'), + label: t('OAuth'), + description: t( + 'Browser-based authentication with third-party providers (e.g. 
OpenRouter, ModelScope)', + ), + value: 'OAUTH' as MainOption, }, ]; @@ -210,6 +257,38 @@ export function AuthDialog(): React.JSX.Element { }, ]; + const protocolItems = [ + { + key: AuthType.USE_OPENAI, + title: t('OpenAI-compatible'), + label: t('OpenAI-compatible'), + description: t( + 'OpenAI Chat Completions API (OpenRouter, vLLM, Ollama, LM Studio, Fireworks, etc.)', + ), + value: AuthType.USE_OPENAI as AuthType, + }, + { + key: AuthType.USE_ANTHROPIC, + title: t('Anthropic-compatible'), + label: t('Anthropic-compatible'), + description: t('Anthropic Messages API'), + value: AuthType.USE_ANTHROPIC as AuthType, + }, + { + key: AuthType.USE_GEMINI, + title: t('Gemini-compatible'), + label: t('Gemini-compatible'), + description: t('Google Gemini API'), + value: AuthType.USE_GEMINI as AuthType, + }, + ]; + + const DEFAULT_CUSTOM_BASE_URLS: Partial> = { + [AuthType.USE_OPENAI]: 'https://api.openai.com/v1', + [AuthType.USE_ANTHROPIC]: 'https://api.anthropic.com/v1', + [AuthType.USE_GEMINI]: 'https://generativelanguage.googleapis.com', + }; + const apiKeyTypeItems = [ { key: 'ALIBABA_STANDARD_API_KEY', @@ -229,8 +308,36 @@ export function AuthDialog(): React.JSX.Element { }, ]; + const oauthProviderItems = [ + { + key: 'OPENROUTER_OAUTH', + title: t('OpenRouter'), + label: t('OpenRouter'), + description: t( + 'Browser OAuth · Auto-configure API key and OpenRouter models', + ), + value: 'OPENROUTER_OAUTH' as OAuthOption, + }, + { + key: 'MODELSCOPE_OAUTH', + title: t('ModelScope'), + label: t('ModelScope'), + description: t( + 'Browser OAuth · Auto-configure API key and ModelScope models', + ), + value: 'MODELSCOPE_OAUTH' as OAuthOption, + }, + { + key: 'QWEN_OAUTH_DISCONTINUED', + title: t('Qwen'), + label: t('Qwen'), + description: t('Discontinued — switch to Coding Plan or API Key'), + value: 'QWEN_OAUTH_DISCONTINUED' as OAuthOption, + }, + ]; + // Map an AuthType to the corresponding main menu option. 
- // QWEN_OAUTH maps directly; USE_OPENAI maps to: + // QWEN_OAUTH maps to 'OAUTH'; USE_OPENAI maps to: // - CODING_PLAN when current config matches coding plan // - API_KEY for other OpenAI / Anthropic / Gemini-compatible configs const contentGenConfig = config.getContentGeneratorConfig(); @@ -240,7 +347,7 @@ export function AuthDialog(): React.JSX.Element { contentGenConfig?.apiKeyEnvKey, ) !== false; const authTypeToMainOption = (authType: AuthType): MainOption => { - if (authType === AuthType.QWEN_OAUTH) return AuthType.QWEN_OAUTH; + if (authType === AuthType.QWEN_OAUTH) return 'OAUTH'; if (authType === AuthType.USE_OPENAI && isCurrentlyCodingPlan) { return 'CODING_PLAN'; } @@ -269,8 +376,8 @@ export function AuthDialog(): React.JSX.Element { return item.value === authTypeToMainOption(defaultAuthType); } - // Priority 4: default to QWEN_OAUTH - return item.value === AuthType.QWEN_OAUTH; + // Priority 4: default to OAUTH + return item.value === 'OAUTH'; }), ); @@ -289,13 +396,8 @@ export function AuthDialog(): React.JSX.Element { return; } - // Qwen OAuth free tier discontinued — show warning instead of proceeding - if (value === AuthType.QWEN_OAUTH) { - setErrorMessage( - t( - 'Qwen OAuth free tier was discontinued on 2026-04-15. 
Please select Coding Plan or API Key instead.', - ), - ); + if (value === 'OAUTH') { + setViewLevel('oauth-provider-select'); return; } @@ -313,7 +415,53 @@ export function AuthDialog(): React.JSX.Element { return; } - setViewLevel('custom-info'); + // Reset custom wizard state and go to protocol selection + setCustomProtocolIndex(0); + setCustomProtocol(AuthType.USE_OPENAI); + setCustomBaseUrl(''); + setCustomBaseUrlError(null); + setCustomApiKey(''); + setCustomApiKeyError(null); + setCustomModelIds(''); + setCustomModelIdsError(null); + setAdvancedThinkingEnabled(false); + setAdvancedModalityEnabled(false); + setFocusedConfigIndex(0); + setViewLevel('custom-protocol-select'); + }; + + const handleOAuthProviderSelect = async (value: OAuthOption) => { + setErrorMessage(null); + onAuthError(null); + + if (value === 'OPENROUTER_OAUTH') { + await handleOpenRouterSubmit(); + return; + } + + // Qwen OAuth free tier discontinued — show warning instead of proceeding + if (value === 'QWEN_OAUTH_DISCONTINUED') { + setErrorMessage( + t( + 'Qwen OAuth free tier was discontinued on 2026-04-15. Please select Coding Plan or API Key instead.', + ), + ); + return; + } + + // Future: Add support for ModelScope OAuth when implemented + if (value === 'MODELSCOPE_OAUTH') { + // Currently not implemented, show message + setErrorMessage( + t( + 'ModelScope OAuth is not yet implemented. Please select another option.', + ), + ); + return; + } + + // For other OAuth providers, you can extend the functionality here + await onAuthSelect(AuthType.USE_OPENAI); }; const handleRegionSelect = async (selectedRegion: CodingPlanRegion) => { @@ -381,6 +529,89 @@ export function AuthDialog(): React.JSX.Element { ); }; + const handleCustomProtocolSelect = (protocol: AuthType) => { + setErrorMessage(null); + onAuthError(null); + setCustomProtocol(protocol); + const defaultUrl = DEFAULT_CUSTOM_BASE_URLS[protocol] ?? 
''; + setCustomBaseUrl(defaultUrl); + setCustomBaseUrlError(null); + setViewLevel('custom-base-url-input'); + }; + + const handleCustomBaseUrlSubmit = () => { + const trimmedUrl = customBaseUrl.trim(); + if (!trimmedUrl) { + setCustomBaseUrlError(t('Base URL cannot be empty.')); + return; + } + if (!/^https?:\/\//i.test(trimmedUrl)) { + setCustomBaseUrlError(t('Base URL must start with http:// or https://.')); + return; + } + setCustomBaseUrlError(null); + setCustomApiKey(''); + setCustomApiKeyError(null); + setViewLevel('custom-api-key-input'); + }; + + const handleCustomApiKeySubmitLocal = () => { + const trimmedKey = customApiKey.trim(); + if (!trimmedKey) { + setCustomApiKeyError(t('API key cannot be empty.')); + return; + } + setCustomApiKeyError(null); + setCustomModelIds(''); + setCustomModelIdsError(null); + setViewLevel('custom-model-id-input'); + }; + + const handleCustomModelIdSubmit = () => { + const normalized = normalizeCustomModelIds(customModelIds); + if (normalized.length === 0) { + setCustomModelIdsError(t('Model IDs cannot be empty.')); + return; + } + setCustomModelIdsError(null); + setViewLevel('custom-advanced-config'); + }; + + const handleAdvancedConfigSubmit = () => { + setViewLevel('custom-review-json'); + }; + + const handleCustomReviewSubmit = () => { + const trimmedBaseUrl = customBaseUrl.trim(); + const trimmedApiKey = customApiKey.trim(); + const trimmedModelIds = customModelIds; + + // Build generationConfig only if any advanced option is set + const hasThinking = advancedThinkingEnabled; + const hasModality = advancedModalityEnabled; + + const generationConfig = + hasThinking || hasModality + ? { + enableThinking: hasThinking ? true : undefined, + multimodal: hasModality + ? 
{ image: true, video: true, audio: true } + : undefined, + } + : undefined; + + void handleCustomApiKeySubmit( + customProtocol as + | AuthType.USE_OPENAI + | AuthType.USE_ANTHROPIC + | AuthType.USE_GEMINI, + trimmedBaseUrl, + trimmedApiKey, + trimmedModelIds, + generationConfig, + ); + }; + const handleGoBack = () => { setErrorMessage(null); onAuthError(null); @@ -391,14 +622,26 @@ export function AuthDialog(): React.JSX.Element { setViewLevel('region-select'); } else if (viewLevel === 'api-key-type-select') { setViewLevel('main'); - } else if (viewLevel === 'custom-info') { + } else if (viewLevel === 'custom-protocol-select') { setViewLevel('api-key-type-select'); + } else if (viewLevel === 'custom-base-url-input') { + setViewLevel('custom-protocol-select'); + } else if (viewLevel === 'custom-api-key-input') { + setViewLevel('custom-base-url-input'); + } else if (viewLevel === 'custom-model-id-input') { + setViewLevel('custom-api-key-input'); + } else if (viewLevel === 'custom-advanced-config') { + setViewLevel('custom-model-id-input'); + } else if (viewLevel === 'custom-review-json') { + setViewLevel('custom-advanced-config'); } else if (viewLevel === 'alibaba-standard-region-select') { setViewLevel('api-key-type-select'); } else if (viewLevel === 'alibaba-standard-api-key-input') { setViewLevel('alibaba-standard-region-select'); } else if (viewLevel === 'alibaba-standard-model-id-input') { setViewLevel('alibaba-standard-api-key-input'); + } else if (viewLevel === 'oauth-provider-select') { + setViewLevel('main'); } }; @@ -411,7 +654,18 @@ export function AuthDialog(): React.JSX.Element { return; } - if (viewLevel === 'api-key-input' || viewLevel === 'custom-info') { + if (viewLevel === 'api-key-input') { + handleGoBack(); + return; + } + if ( + viewLevel === 'custom-protocol-select' || + viewLevel === 'custom-base-url-input' || + viewLevel === 'custom-api-key-input' || + viewLevel === 'custom-model-id-input' || + viewLevel === 'custom-advanced-config' || + 
viewLevel === 'custom-review-json' + ) { handleGoBack(); return; } @@ -419,7 +673,8 @@ export function AuthDialog(): React.JSX.Element { viewLevel === 'api-key-type-select' || viewLevel === 'alibaba-standard-region-select' || viewLevel === 'alibaba-standard-api-key-input' || - viewLevel === 'alibaba-standard-model-id-input' + viewLevel === 'alibaba-standard-model-id-input' || + viewLevel === 'oauth-provider-select' ) { handleGoBack(); return; @@ -443,6 +698,50 @@ export function AuthDialog(): React.JSX.Element { { isActive: true }, ); + // Handle Enter key for review view to save + useKeypress( + (key) => { + if (key.name === 'return' && viewLevel === 'custom-review-json') { + handleCustomReviewSubmit(); + } + }, + { isActive: true }, + ); + + // Advanced config keypress: ↑↓ to navigate, Space to toggle, Enter to submit + useKeypress( + (key) => { + if (viewLevel !== 'custom-advanced-config') return; + + const { name } = key; + + if (name === 'up') { + setFocusedConfigIndex((v) => (v <= 0 ? 1 : v - 1)); + return; + } + + if (name === 'down') { + setFocusedConfigIndex((v) => (v >= 1 ? 0 : v + 1)); + return; + } + + if (name === 'space') { + if (focusedConfigIndex === 0) { + setAdvancedThinkingEnabled((v) => !v); + } else { + setAdvancedModalityEnabled((v) => !v); + } + return; + } + + if (name === 'return') { + handleAdvancedConfigSubmit(); + return; + } + }, + { isActive: true }, + ); + // Render main auth selection const renderMainView = () => ( <> @@ -625,26 +924,294 @@ export function AuthDialog(): React.JSX.Element {
); - // Render custom mode info - const renderCustomInfoView = () => ( + // Render custom protocol selection + const renderCustomProtocolSelectView = () => ( <> + + { + const index = protocolItems.findIndex( + (item) => item.value === value, + ); + setCustomProtocolIndex(index); + }} + itemGap={1} + /> + + + + {t('Enter to select, ↑↓ to navigate, Esc to go back')} + + + + ); + + // Render custom base URL input + const renderCustomBaseUrlInputView = () => ( + - {t('You can configure your API key and models in settings.json')} + {t('Enter the API endpoint for this protocol.')} - {t('Refer to the documentation for setup instructions')} + { + setCustomBaseUrl(value); + if (customBaseUrlError) { + setCustomBaseUrlError(null); + } + }} + onSubmit={handleCustomBaseUrlSubmit} + placeholder="https://api.openai.com/v1" + /> - + {customBaseUrlError && ( + + {customBaseUrlError} + + )} + - {MODEL_PROVIDERS_DOCUMENTATION_URL} + {t( + 'Need advanced generationConfig or capabilities? See documentation', + )} - {t('Esc to go back')} + + {t('Enter to submit, Esc to go back')} + + + + ); + + // Render custom API key input + const renderCustomApiKeyInputView = () => ( + + + + {t('Enter the API key for this endpoint.')} + + + + { + setCustomApiKey(value); + if (customApiKeyError) { + setCustomApiKeyError(null); + } + }} + onSubmit={handleCustomApiKeySubmitLocal} + placeholder="sk-..." 
+ /> + + {customApiKeyError && ( + + {customApiKeyError} + + )} + + + {t('Enter to submit, Esc to go back')} + + + + ); + + // Render custom model ID input + const renderCustomModelIdInputView = () => ( + + + + {t('Enter one or more model IDs, separated by commas.')} + + + + { + setCustomModelIds(value); + if (customModelIdsError) { + setCustomModelIdsError(null); + } + }} + onSubmit={handleCustomModelIdSubmit} + placeholder="qwen/qwen3-coder,openai/gpt-4.1" + /> + + {customModelIdsError && ( + + {customModelIdsError} + + )} + + + {t('Enter to submit, Esc to go back')} + + + + ); + + // Render custom advanced config + const renderCustomAdvancedConfigView = () => { + const checkmark = (v: boolean) => (v ? '◉' : '○'); + const cursor = (index: number) => + focusedConfigIndex === index ? '›' : ' '; + + return ( + + + + {t('Optional: configure advanced generation settings.')} + + + + + {cursor(0)} {checkmark(advancedThinkingEnabled)}{' '} + {t('Enable thinking')} + + + + + {t( + 'Allows the model to perform extended reasoning before responding.', + )} + + + + + {cursor(1)} {checkmark(advancedModalityEnabled)}{' '} + {t('Enable modality')} + + + + + {t('Enables image, video, and audio input/output capabilities.')} + + + + + {t( + '\u2191\u2193 to navigate, Space to toggle, Enter to continue, Esc to go back', + )} + + + + ); + }; + + // Render custom review JSON + const renderCustomReviewJsonView = () => { + const generatedEnvKey = generateCustomApiKeyEnvKey( + customProtocol, + customBaseUrl.trim(), + ); + const normalizedIds = normalizeCustomModelIds(customModelIds); + const maskedKey = maskApiKey(customApiKey); + + // Build generationConfig preview lines + const hasThinking = advancedThinkingEnabled; + const hasModality = advancedModalityEnabled; + const hasGenConfig = hasThinking || hasModality; + + let genConfig: Record | undefined; + if (hasGenConfig) { + genConfig = {}; + if (hasModality) { + genConfig['modalities'] = { + image: true, + video: true, + audio: true, 
+ }; + } + if (hasThinking) { + genConfig['extra_body'] = { + enable_thinking: true, + }; + } + } + + const modelEntries = normalizedIds.map((id) => { + const entry: Record = { + id, + name: id, + baseUrl: customBaseUrl.trim(), + envKey: generatedEnvKey, + }; + if (genConfig) { + entry['generationConfig'] = genConfig; + } + return entry; + }); + + const preview = { + env: { [generatedEnvKey]: maskedKey }, + modelProviders: { + [customProtocol]: modelEntries, + }, + security: { + auth: { + selectedType: customProtocol, + }, + }, + model: { + name: normalizedIds[0], + }, + }; + + const jsonPreview = JSON.stringify(preview, null, 2); + + return ( + + + + {t('The following JSON will be saved to settings.json:')} + + + + {jsonPreview} + + + + {t('Enter to save, Esc to go back')} + + + + ); + }; + + const renderOAuthProviderSelectView = () => ( + <> + + { + const index = oauthProviderItems.findIndex( + (item) => item.value === value, + ); + setOAuthProviderIndex(index); + }} + itemGap={1} + /> + + + + {t('Enter to select, ↑↓ to navigate, Esc to go back')} + ); @@ -659,8 +1226,18 @@ export function AuthDialog(): React.JSX.Element { return t('Enter Coding Plan API Key'); case 'api-key-type-select': return t('Select API Key Type'); - case 'custom-info': - return t('Custom Configuration'); + case 'custom-protocol-select': + return t('Step 1/6 \u00B7 Protocol'); + case 'custom-base-url-input': + return t('Step 2/6 \u00B7 Base URL'); + case 'custom-api-key-input': + return t('Step 3/6 \u00B7 API Key'); + case 'custom-model-id-input': + return t('Step 4/6 \u00B7 Model IDs'); + case 'custom-advanced-config': + return t('Step 5/6 \u00B7 Advanced Config'); + case 'custom-review-json': + return t('Step 6/6 \u00B7 Review'); case 'alibaba-standard-region-select': return t( 'Select Region for Alibaba Cloud ModelStudio Standard API Key', @@ -669,6 +1246,8 @@ export function AuthDialog(): React.JSX.Element { return t('Enter Alibaba Cloud ModelStudio Standard API Key'); case 
'alibaba-standard-model-id-input': return t('Enter Model IDs'); + case 'oauth-provider-select': + return t('Select OAuth Provider'); default: return t('Select Authentication Method'); } @@ -694,7 +1273,15 @@ export function AuthDialog(): React.JSX.Element { renderAlibabaStandardApiKeyInputView()} {viewLevel === 'alibaba-standard-model-id-input' && renderAlibabaStandardModelIdInputView()} - {viewLevel === 'custom-info' && renderCustomInfoView()} + {viewLevel === 'custom-protocol-select' && + renderCustomProtocolSelectView()} + {viewLevel === 'custom-base-url-input' && renderCustomBaseUrlInputView()} + {viewLevel === 'custom-api-key-input' && renderCustomApiKeyInputView()} + {viewLevel === 'custom-model-id-input' && renderCustomModelIdInputView()} + {viewLevel === 'custom-advanced-config' && + renderCustomAdvancedConfigView()} + {viewLevel === 'custom-review-json' && renderCustomReviewJsonView()} + {viewLevel === 'oauth-provider-select' && renderOAuthProviderSelectView()} {(authError || errorMessage) && ( diff --git a/packages/cli/src/ui/auth/useAuth.test.ts b/packages/cli/src/ui/auth/useAuth.test.ts new file mode 100644 index 000000000..53ca65b86 --- /dev/null +++ b/packages/cli/src/ui/auth/useAuth.test.ts @@ -0,0 +1,351 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { renderHook, act } from '@testing-library/react'; +import { AuthType } from '@qwen-code/qwen-code-core'; +import { + useAuthCommand, + generateCustomApiKeyEnvKey, + normalizeCustomModelIds, + maskApiKey, +} from './useAuth.js'; +import { + OPENROUTER_OAUTH_CALLBACK_URL, + applyOpenRouterModelsConfiguration, + createOpenRouterOAuthSession, + runOpenRouterOAuthLogin, +} from '../../commands/auth/openrouterOAuth.js'; + +vi.mock('../hooks/useQwenAuth.js', () => ({ + useQwenAuth: vi.fn(() => ({ + qwenAuthState: {}, + cancelQwenAuth: vi.fn(), + })), +})); + 
+vi.mock('../../utils/settingsUtils.js', () => ({ + backupSettingsFile: vi.fn(), +})); + +vi.mock('../../config/modelProvidersScope.js', () => ({ + getPersistScopeForModelSelection: vi.fn(() => 'user'), +})); + +vi.mock('../../commands/auth/openrouterOAuth.js', () => ({ + OPENROUTER_OAUTH_CALLBACK_URL: 'http://localhost:3000/openrouter/callback', + createOpenRouterOAuthSession: vi.fn(() => ({ + callbackUrl: 'http://localhost:3000/openrouter/callback', + codeVerifier: 'test-verifier', + state: 'test-state', + authorizationUrl: + 'https://openrouter.ai/auth?callback_url=http%3A%2F%2Flocalhost%3A3000%2Fopenrouter%2Fcallback&code_challenge=test-challenge&state=test-state', + })), + applyOpenRouterModelsConfiguration: vi.fn(async () => ({ + updatedConfigs: [ + { + id: 'openai/gpt-4o-mini:free', + name: 'OpenRouter · GPT-4o mini', + baseUrl: 'https://openrouter.ai/api/v1', + envKey: 'OPENROUTER_API_KEY', + }, + ], + activeModelId: 'openai/gpt-4o-mini:free', + persistScope: 'user', + })), + runOpenRouterOAuthLogin: vi.fn( + () => new Promise(() => undefined) as Promise<{ apiKey: string }>, + ), +})); + +const createSettings = () => ({ + merged: { + modelProviders: {}, + }, + setValue: vi.fn(), + forScope: vi.fn(() => ({ + path: '/tmp/settings.json', + })), +}); + +const createConfig = () => ({ + getAuthType: vi.fn(() => AuthType.USE_OPENAI), + getUsageStatisticsEnabled: vi.fn(() => false), + reloadModelProvidersConfig: vi.fn(), + refreshAuth: vi.fn(async () => undefined), +}); + +describe('useAuthCommand', () => { + beforeEach(() => { + vi.clearAllMocks(); + }); + + it('closes auth dialog immediately when starting OpenRouter OAuth', async () => { + const settings = createSettings(); + const config = createConfig(); + const addItem = vi.fn(); + + const { result } = renderHook(() => + useAuthCommand(settings as never, config as never, addItem), + ); + + act(() => { + result.current.openAuthDialog(); + }); + + expect(result.current.isAuthDialogOpen).toBe(true); + + await 
act(async () => { + void result.current.handleOpenRouterSubmit(); + await Promise.resolve(); + }); + + expect(result.current.pendingAuthType).toBe(AuthType.USE_OPENAI); + expect(result.current.isAuthenticating).toBe(true); + expect(result.current.externalAuthState).toEqual({ + title: 'OpenRouter Authentication', + message: + 'Open the authorization page if your browser does not launch automatically.', + detail: expect.stringContaining('https://openrouter.ai/auth'), + }); + expect(result.current.isAuthDialogOpen).toBe(false); + expect(addItem).not.toHaveBeenCalled(); + }); + + it('cancels OpenRouter OAuth wait and reopens the auth dialog', async () => { + const settings = createSettings(); + const config = createConfig(); + const addItem = vi.fn(); + + const { result } = renderHook(() => + useAuthCommand(settings as never, config as never, addItem), + ); + + act(() => { + result.current.openAuthDialog(); + }); + + await act(async () => { + void result.current.handleOpenRouterSubmit(); + await Promise.resolve(); + }); + + expect(result.current.isAuthenticating).toBe(true); + expect(createOpenRouterOAuthSession).toHaveBeenCalledWith( + OPENROUTER_OAUTH_CALLBACK_URL, + ); + expect(runOpenRouterOAuthLogin).toHaveBeenCalledWith( + OPENROUTER_OAUTH_CALLBACK_URL, + expect.objectContaining({ + abortSignal: expect.any(AbortSignal), + session: expect.objectContaining({ + authorizationUrl: expect.stringContaining( + 'https://openrouter.ai/auth', + ), + }), + }), + ); + + act(() => { + result.current.cancelAuthentication(); + }); + + const abortSignal = vi.mocked(runOpenRouterOAuthLogin).mock.calls[0]?.[1] + ?.abortSignal; + expect(abortSignal?.aborted).toBe(true); + expect(result.current.isAuthenticating).toBe(false); + expect(result.current.externalAuthState).toBe(null); + expect(result.current.pendingAuthType).toBe(AuthType.USE_OPENAI); + expect(result.current.isAuthDialogOpen).toBe(true); + }); + + it('cleans up UI state when OpenRouter OAuth rejects with AbortError', async 
() => { + const settings = createSettings(); + const config = createConfig(); + const addItem = vi.fn(); + vi.mocked(runOpenRouterOAuthLogin).mockRejectedValueOnce( + new DOMException('OpenRouter OAuth cancelled.', 'AbortError'), + ); + + const { result } = renderHook(() => + useAuthCommand(settings as never, config as never, addItem), + ); + + await act(async () => { + await result.current.handleOpenRouterSubmit(); + }); + + expect(result.current.isAuthenticating).toBe(false); + expect(result.current.externalAuthState).toBe(null); + expect(result.current.pendingAuthType).toBeUndefined(); + expect(result.current.isAuthDialogOpen).toBe(true); + expect(addItem).not.toHaveBeenCalled(); + }); + + it('adds /model and /manage-models guidance after OpenRouter auth succeeds', async () => { + const settings = createSettings(); + const config = createConfig(); + const addItem = vi.fn(); + vi.mocked(runOpenRouterOAuthLogin).mockResolvedValueOnce({ + apiKey: 'oauth-key-123', + userId: 'user-1', + }); + + const { result } = renderHook(() => + useAuthCommand(settings as never, config as never, addItem), + ); + + await act(async () => { + await result.current.handleOpenRouterSubmit(); + }); + + expect(applyOpenRouterModelsConfiguration).toHaveBeenCalledWith( + expect.objectContaining({ + settings: expect.anything(), + config: expect.anything(), + apiKey: 'oauth-key-123', + reloadConfig: true, + }), + ); + expect(addItem).toHaveBeenCalledWith( + expect.objectContaining({ text: 'Successfully configured OpenRouter.' }), + expect.any(Number), + ); + expect(addItem).toHaveBeenCalledWith( + expect.objectContaining({ text: 'Use /model to switch models.' }), + expect.any(Number), + ); + expect(addItem).toHaveBeenCalledWith( + expect.objectContaining({ + text: 'Want more OpenRouter models? 
Use /manage-models to browse and enable them.', + }), + expect.any(Number), + ); + }); +}); + +describe('generateCustomApiKeyEnvKey', () => { + it('generates env key from openai protocol and base URL', () => { + const key = generateCustomApiKeyEnvKey( + 'openai', + 'https://api.openai.com/v1', + ); + expect(key).toBe('QWEN_CUSTOM_API_KEY_OPENAI_HTTPS_API_OPENAI_COM_V1'); + }); + + it('generates env key from anthropic protocol and base URL', () => { + const key = generateCustomApiKeyEnvKey( + 'anthropic', + 'https://api.anthropic.com/v1', + ); + expect(key).toBe( + 'QWEN_CUSTOM_API_KEY_ANTHROPIC_HTTPS_API_ANTHROPIC_COM_V1', + ); + }); + + it('generates env key from gemini protocol and base URL', () => { + const key = generateCustomApiKeyEnvKey( + 'gemini', + 'https://generativelanguage.googleapis.com', + ); + expect(key).toBe( + 'QWEN_CUSTOM_API_KEY_GEMINI_HTTPS_GENERATIVELANGUAGE_GOOGLEAPIS_COM', + ); + }); + + it('handles localhost URLs', () => { + const key = generateCustomApiKeyEnvKey( + 'openai', + 'http://localhost:11434/v1', + ); + expect(key).toBe('QWEN_CUSTOM_API_KEY_OPENAI_HTTP_LOCALHOST_11434_V1'); + }); + + it('normalizes trailing slashes and special chars', () => { + const key = generateCustomApiKeyEnvKey( + 'openai', + 'https://openrouter.ai/api/v1/', + ); + expect(key).toBe('QWEN_CUSTOM_API_KEY_OPENAI_HTTPS_OPENROUTER_AI_API_V1'); + }); + + it('different protocols with same base URL produce different keys', () => { + const baseUrl = 'https://api.example.com/v1'; + const openaiKey = generateCustomApiKeyEnvKey('openai', baseUrl); + const anthropicKey = generateCustomApiKeyEnvKey('anthropic', baseUrl); + expect(openaiKey).not.toBe(anthropicKey); + expect(openaiKey).toContain('OPENAI'); + expect(anthropicKey).toContain('ANTHROPIC'); + }); +}); + +describe('normalizeCustomModelIds', () => { + it('splits comma-separated model IDs', () => { + const result = normalizeCustomModelIds('qwen/qwen3-coder,openai/gpt-4.1'); + 
expect(result).toEqual(['qwen/qwen3-coder', 'openai/gpt-4.1']); + }); + + it('trims whitespace from each model ID', () => { + const result = normalizeCustomModelIds( + ' qwen/qwen3-coder , openai/gpt-4.1 ', + ); + expect(result).toEqual(['qwen/qwen3-coder', 'openai/gpt-4.1']); + }); + + it('deduplicates while preserving order', () => { + const result = normalizeCustomModelIds( + 'qwen/qwen3-coder,openai/gpt-4.1,qwen/qwen3-coder', + ); + expect(result).toEqual(['qwen/qwen3-coder', 'openai/gpt-4.1']); + }); + + it('removes empty entries', () => { + const result = normalizeCustomModelIds('qwen/qwen3-coder,,openai/gpt-4.1'); + expect(result).toEqual(['qwen/qwen3-coder', 'openai/gpt-4.1']); + }); + + it('returns empty array for empty input', () => { + const result = normalizeCustomModelIds(''); + expect(result).toEqual([]); + }); + + it('returns empty array for whitespace-only input', () => { + const result = normalizeCustomModelIds(' , , '); + expect(result).toEqual([]); + }); + + it('handles single model ID', () => { + const result = normalizeCustomModelIds('qwen/qwen3-coder'); + expect(result).toEqual(['qwen/qwen3-coder']); + }); +}); + +describe('maskApiKey', () => { + it('masks a standard API key showing first 3 and last 4 chars', () => { + const result = maskApiKey('sk-or-v1-1234567890abcdef'); + expect(result).toBe('sk-...cdef'); + }); + + it('shows placeholder for empty string', () => { + const result = maskApiKey(''); + expect(result).toBe('(not set)'); + }); + + it('masks short keys with asterisks', () => { + const result = maskApiKey('abc'); + expect(result).toBe('***'); + }); + + it('masks 6-char keys with asterisks', () => { + const result = maskApiKey('abcdef'); + expect(result).toBe('***'); + }); + + it('trims whitespace before masking', () => { + const result = maskApiKey(' sk-or-v1-1234567890abcdef '); + expect(result).toBe('sk-...cdef'); + }); +}); diff --git a/packages/cli/src/ui/auth/useAuth.ts b/packages/cli/src/ui/auth/useAuth.ts index 
eb32d7f87..c16c6060e 100644 --- a/packages/cli/src/ui/auth/useAuth.ts +++ b/packages/cli/src/ui/auth/useAuth.ts @@ -39,6 +39,53 @@ import { DASHSCOPE_STANDARD_API_KEY_ENV_KEY, type AlibabaStandardRegion, } from '../../constants/alibabaStandardApiKey.js'; +import { + applyOpenRouterModelsConfiguration, + createOpenRouterOAuthSession, + OPENROUTER_OAUTH_CALLBACK_URL, + runOpenRouterOAuthLogin, +} from '../../commands/auth/openrouterOAuth.js'; + +/** + * Generate a Qwen-managed env key from protocol and base URL. + * Format: QWEN_CUSTOM_API_KEY_${PROTOCOL}_${NORMALIZED_BASE_URL} + */ +export function generateCustomApiKeyEnvKey( + protocol: string, + baseUrl: string, +): string { + const normalize = (value: string) => + value + .trim() + .toUpperCase() + .replace(/[^A-Z0-9]+/g, '_') + .replace(/_+/g, '_') + .replace(/^_+|_+$/g, ''); + + return `QWEN_CUSTOM_API_KEY_${normalize(protocol)}_${normalize(baseUrl)}`; +} + +/** + * Normalize model IDs: split by comma, trim, deduplicate, remove empty. + */ +export function normalizeCustomModelIds(modelIdsInput: string): string[] { + return modelIdsInput + .split(',') + .map((id) => id.trim()) + .filter((id, index, array) => id.length > 0 && array.indexOf(id) === index); +} + +/** + * Mask an API key for display: show first 3 and last 4 chars. 
+ */ +export function maskApiKey(apiKey: string): string { + const trimmed = apiKey.trim(); + if (trimmed.length === 0) return '(not set)'; + if (trimmed.length <= 6) return '***'; + const head = trimmed.slice(0, 3); + const tail = trimmed.slice(-4); + return `${head}...${tail}`; +} export type { QwenAuthState } from '../hooks/useQwenAuth.js'; @@ -61,6 +108,13 @@ export const useAuthCommand = ( const [pendingAuthType, setPendingAuthType] = useState<AuthType | undefined>( undefined, ); + const [externalAuthState, setExternalAuthState] = useState<{ + title: string; + message: string; + detail?: string; + } | null>(null); + const [openRouterAuthAbortController, setOpenRouterAuthAbortController] = + useState<AbortController | null>(null); const { qwenAuthState, cancelQwenAuth } = useQwenAuth( pendingAuthType, @@ -81,6 +135,7 @@ export const useAuthCommand = ( const handleAuthFailure = useCallback( (error: unknown) => { setIsAuthenticating(false); + setExternalAuthState(null); const errorMessage = t('Failed to authenticate. Message: {{message}}', { message: getErrorMessage(error), }); @@ -276,6 +331,11 @@ export const useAuthCommand = ( cancelQwenAuth(); } + if (isAuthenticating && pendingAuthType === AuthType.USE_OPENAI) { + openRouterAuthAbortController?.abort(); + setOpenRouterAuthAbortController(null); + } + // Log authentication cancellation if (isAuthenticating && pendingAuthType) { const authEvent = new AuthEvent(pendingAuthType, 'manual', 'cancelled'); @@ -284,9 +344,16 @@ export const useAuthCommand = ( // Do not reset pendingAuthType here, persist the previously selected type. 
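For reference, the three helpers this diff adds to useAuth.ts (generateCustomApiKeyEnvKey, normalizeCustomModelIds, maskApiKey) can be exercised standalone. This is an illustrative restatement of their logic outside the patch, not the canonical code — the exported versions in packages/cli/src/ui/auth/useAuth.ts remain the source of truth:

```typescript
// Standalone mirrors of the helpers added in this patch, for illustration only.

function generateCustomApiKeyEnvKey(protocol: string, baseUrl: string): string {
  // Uppercase, collapse non-alphanumeric runs to '_', strip leading/trailing '_'.
  const normalize = (value: string) =>
    value
      .trim()
      .toUpperCase()
      .replace(/[^A-Z0-9]+/g, '_')
      .replace(/^_+|_+$/g, '');
  return `QWEN_CUSTOM_API_KEY_${normalize(protocol)}_${normalize(baseUrl)}`;
}

function normalizeCustomModelIds(modelIdsInput: string): string[] {
  // Split on commas, trim, drop empties, deduplicate preserving first occurrence.
  return modelIdsInput
    .split(',')
    .map((id) => id.trim())
    .filter((id, index, array) => id.length > 0 && array.indexOf(id) === index);
}

function maskApiKey(apiKey: string): string {
  // Show first 3 and last 4 characters; short or empty keys are fully masked.
  const trimmed = apiKey.trim();
  if (trimmed.length === 0) return '(not set)';
  if (trimmed.length <= 6) return '***';
  return `${trimmed.slice(0, 3)}...${trimmed.slice(-4)}`;
}

console.log(generateCustomApiKeyEnvKey('openai', 'https://api.openai.com/v1'));
// QWEN_CUSTOM_API_KEY_OPENAI_HTTPS_API_OPENAI_COM_V1
console.log(normalizeCustomModelIds(' qwen/qwen3-coder , openai/gpt-4.1 , qwen/qwen3-coder '));
// [ 'qwen/qwen3-coder', 'openai/gpt-4.1' ]
console.log(maskApiKey('sk-or-v1-1234567890abcdef'));
// sk-...cdef
```

These outputs match the expectations asserted by the new unit tests earlier in the patch.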
setIsAuthenticating(false); + setExternalAuthState(null); setIsAuthDialogOpen(true); setAuthError(null); - }, [isAuthenticating, pendingAuthType, cancelQwenAuth, config]); + }, [ + isAuthenticating, + pendingAuthType, + cancelQwenAuth, + config, + openRouterAuthAbortController, + ]); /** * Handle coding plan submission - generates configs from template and stores api-key @@ -552,6 +619,284 @@ export const useAuthCommand = ( [settings, config, handleAuthFailure, addItem, onAuthChange], ); + const handleOpenRouterSubmit = useCallback(async () => { + try { + setPendingAuthType(AuthType.USE_OPENAI); + setIsAuthenticating(true); + setAuthError(null); + setIsAuthDialogOpen(false); + + const oauthSession = createOpenRouterOAuthSession( + OPENROUTER_OAUTH_CALLBACK_URL, + ); + setExternalAuthState({ + title: t('OpenRouter Authentication'), + message: t( + 'Open the authorization page if your browser does not launch automatically.', + ), + detail: oauthSession.authorizationUrl, + }); + + const abortController = new AbortController(); + setOpenRouterAuthAbortController(abortController); + const oauthResult = await runOpenRouterOAuthLogin( + OPENROUTER_OAUTH_CALLBACK_URL, + { + abortSignal: abortController.signal, + session: oauthSession, + }, + ); + setOpenRouterAuthAbortController(null); + setExternalAuthState({ + title: t('OpenRouter Authentication'), + message: t('Finalizing OpenRouter setup...'), + detail: t( + 'Syncing OpenRouter models and updating your local configuration.', + ), + }); + const selectedKey = oauthResult.apiKey; + if (!selectedKey) { + throw new Error( + t('OpenRouter authentication completed without an API key.'), + ); + } + + const persistScope = getPersistScopeForModelSelection(settings); + const settingsFile = settings.forScope(persistScope); + backupSettingsFile(settingsFile.path); + + await applyOpenRouterModelsConfiguration({ + settings, + config, + apiKey: selectedKey, + reloadConfig: true, + }); + await config.refreshAuth(AuthType.USE_OPENAI); + 
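The cancellation wiring above — cancelAuthentication() aborting the shared AbortController while handleOpenRouterSubmit awaits the OAuth callback and treats AbortError as a clean user cancel — can be sketched in isolation. fakeOAuthLogin below is a hypothetical stand-in for runOpenRouterOAuthLogin; only the control flow mirrors the patch:

```typescript
// Sketch of the abort-driven cancellation pattern used by the OpenRouter flow.
// `fakeOAuthLogin` is a stand-in: it resolves after a delay unless aborted first.

function fakeOAuthLogin(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('api-key'), 10_000);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      // Same error shape the hook's catch branch checks for.
      reject(new DOMException('OpenRouter OAuth cancelled.', 'AbortError'));
    });
  });
}

async function main(): Promise<void> {
  const controller = new AbortController();
  const login = fakeOAuthLogin(controller.signal);

  // Simulates the user pressing Esc: cancelAuthentication() calls abort().
  controller.abort();

  try {
    await login;
  } catch (error) {
    if (error instanceof DOMException && error.name === 'AbortError') {
      // Mirrors the hook: reset UI state and reopen the auth dialog, no failure toast.
      console.log('cancelled cleanly');
      return;
    }
    throw error;
  }
}

void main();
// prints "cancelled cleanly"
```

Distinguishing AbortError from real failures is what lets the hook reopen the auth dialog silently on cancel while still routing genuine OAuth errors through handleAuthFailure.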
+ setAuthError(null); + setExternalAuthState(null); + setAuthState(AuthState.Authenticated); + setPendingAuthType(undefined); + setIsAuthDialogOpen(false); + setIsAuthenticating(false); + onAuthChange?.(); + + addItem( + { + type: MessageType.INFO, + text: t('Successfully configured OpenRouter.'), + }, + Date.now(), + ); + + addItem( + { + type: MessageType.INFO, + text: t('Use /model to switch models.'), + }, + Date.now(), + ); + + addItem( + { + type: MessageType.INFO, + text: t( + 'Want more OpenRouter models? Use /manage-models to browse and enable them.', + ), + }, + Date.now(), + ); + + const authEvent = new AuthEvent(AuthType.USE_OPENAI, 'manual', 'success'); + logAuth(config, authEvent); + } catch (error) { + setOpenRouterAuthAbortController(null); + if (error instanceof DOMException && error.name === 'AbortError') { + setExternalAuthState(null); + setPendingAuthType(undefined); + setIsAuthenticating(false); + setIsAuthDialogOpen(true); + return; + } + handleAuthFailure(error); + } + }, [ + settings, + config, + handleAuthFailure, + addItem, + onAuthChange, + setOpenRouterAuthAbortController, + ]); + + /** + * Handle custom API key setup wizard submission. + * Persists key to env[generatedEnvKey] and creates modelProviders entries. 
+ */ + const handleCustomApiKeySubmit = useCallback( + async ( + protocol: + | AuthType.USE_OPENAI + | AuthType.USE_ANTHROPIC + | AuthType.USE_GEMINI, + baseUrl: string, + apiKey: string, + modelIdsInput: string, + generationConfig?: { + enableThinking?: boolean; + multimodal?: { + image?: boolean; + video?: boolean; + audio?: boolean; + }; + maxTokens?: number; + }, + ) => { + try { + setIsAuthenticating(true); + setAuthError(null); + + const trimmedApiKey = apiKey.trim(); + const trimmedBaseUrl = baseUrl.trim(); + const modelIds = normalizeCustomModelIds(modelIdsInput); + + if (!trimmedApiKey) { + throw new Error(t('API key cannot be empty.')); + } + if (!trimmedBaseUrl) { + throw new Error(t('Base URL cannot be empty.')); + } + if (!/^https?:\/\//i.test(trimmedBaseUrl)) { + throw new Error(t('Base URL must start with http:// or https://.')); + } + if (modelIds.length === 0) { + throw new Error(t('Model IDs cannot be empty.')); + } + + const generatedEnvKey = generateCustomApiKeyEnvKey( + protocol, + trimmedBaseUrl, + ); + const persistScope = getPersistScopeForModelSelection(settings); + + const settingsFile = settings.forScope(persistScope); + backupSettingsFile(settingsFile.path); + + // Persist API key to env + settings.setValue( + persistScope, + `env.${generatedEnvKey}`, + trimmedApiKey, + ); + process.env[generatedEnvKey] = trimmedApiKey; + + // Build generationConfig if any option is set + let genConfig: ProviderModelConfig['generationConfig'] | undefined; + if (generationConfig) { + const hasThinking = generationConfig.enableThinking === true; + const hasMultimodal = + generationConfig.multimodal && + (generationConfig.multimodal.image === true || + generationConfig.multimodal.video === true || + generationConfig.multimodal.audio === true); + const hasMaxTokens = + generationConfig.maxTokens !== undefined && + generationConfig.maxTokens > 0; + + if (hasThinking || hasMultimodal || hasMaxTokens) { + genConfig = {}; + if (hasMultimodal) { + 
genConfig.modalities = { + image: generationConfig.multimodal!.image ?? false, + video: generationConfig.multimodal!.video ?? false, + audio: generationConfig.multimodal!.audio ?? false, + }; + } + if (hasThinking) { + genConfig.extra_body = { enable_thinking: true }; + } + if (hasMaxTokens) { + genConfig.samplingParams = { + max_tokens: generationConfig.maxTokens, + }; + } + } + } + + // Build new model configs + const newConfigs: ProviderModelConfig[] = modelIds.map((modelId) => ({ + id: modelId, + name: modelId, + baseUrl: trimmedBaseUrl, + envKey: generatedEnvKey, + ...(genConfig ? { generationConfig: genConfig } : {}), + })); + + // Merge with existing configs: replace same generatedEnvKey, preserve rest + const existingConfigs = + ( + settings.merged.modelProviders as ModelProvidersConfig | undefined + )?.[protocol] || []; + + const preservedConfigs = existingConfigs.filter( + (existing) => existing.envKey !== generatedEnvKey, + ); + + const updatedConfigs = [...newConfigs, ...preservedConfigs]; + + // Persist modelProviders, security, model + settings.setValue( + persistScope, + `modelProviders.${protocol}`, + updatedConfigs, + ); + settings.setValue(persistScope, 'security.auth.selectedType', protocol); + settings.setValue(persistScope, 'model.name', modelIds[0]); + + // Hot-reload before refreshAuth + const updatedModelProviders: ModelProvidersConfig = { + ...(settings.merged.modelProviders as + | ModelProvidersConfig + | undefined), + [protocol]: updatedConfigs, + }; + config.reloadModelProvidersConfig(updatedModelProviders); + await config.refreshAuth(protocol); + + setAuthError(null); + setAuthState(AuthState.Authenticated); + setPendingAuthType(undefined); + setIsAuthDialogOpen(false); + setIsAuthenticating(false); + onAuthChange?.(); + + addItem( + { + type: MessageType.INFO, + text: t( + 'Custom API Key authenticated successfully. 
Settings updated with generated env key and model provider config.', + ), + }, + Date.now(), + ); + + addItem( + { + type: MessageType.INFO, + text: t('Tip: Use /model to switch between configured models.'), + }, + Date.now(), + ); + + const authEvent = new AuthEvent(protocol, 'manual', 'success'); + logAuth(config, authEvent); + } catch (error) { + handleAuthFailure(error); + } + }, + [settings, config, handleAuthFailure, addItem, onAuthChange], + ); + /** /** * We previously used a useEffect to trigger authentication automatically when @@ -600,10 +945,13 @@ export const useAuthCommand = ( isAuthDialogOpen, isAuthenticating, pendingAuthType, + externalAuthState, qwenAuthState, handleAuthSelect, handleCodingPlanSubmit, handleAlibabaStandardSubmit, + handleOpenRouterSubmit, + handleCustomApiKeySubmit, openAuthDialog, cancelAuthentication, }; diff --git a/packages/cli/src/ui/commands/manageModelsCommand.test.ts b/packages/cli/src/ui/commands/manageModelsCommand.test.ts new file mode 100644 index 000000000..2666d4f8b --- /dev/null +++ b/packages/cli/src/ui/commands/manageModelsCommand.test.ts @@ -0,0 +1,38 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, beforeEach } from 'vitest'; +import { manageModelsCommand } from './manageModelsCommand.js'; +import type { CommandContext } from './types.js'; +import { createMockCommandContext } from '../../test-utils/mockCommandContext.js'; + +describe('manageModelsCommand', () => { + let mockContext: CommandContext; + + beforeEach(() => { + mockContext = createMockCommandContext(); + }); + + it('should return a dialog action to open the manage-models dialog', () => { + if (!manageModelsCommand.action) { + throw new Error('The manage-models command must have an action.'); + } + + const result = manageModelsCommand.action(mockContext, ''); + + expect(result).toEqual({ + type: 'dialog', + dialog: 'manage-models', + }); + }); + + it('should have the correct 
name and description', () => { + expect(manageModelsCommand.name).toBe('manage-models'); + expect(manageModelsCommand.description).toBe( + 'Browse dynamic model catalogs and choose which models stay enabled locally', + ); + }); +}); diff --git a/packages/cli/src/ui/commands/manageModelsCommand.ts b/packages/cli/src/ui/commands/manageModelsCommand.ts new file mode 100644 index 000000000..e16b18016 --- /dev/null +++ b/packages/cli/src/ui/commands/manageModelsCommand.ts @@ -0,0 +1,24 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import type { OpenDialogActionReturn, SlashCommand } from './types.js'; +import { CommandKind } from './types.js'; +import { t } from '../../i18n/index.js'; + +export const manageModelsCommand: SlashCommand = { + name: 'manage-models', + get description() { + return t( + 'Browse dynamic model catalogs and choose which models stay enabled locally', + ); + }, + kind: CommandKind.BUILT_IN, + supportedModes: ['interactive'] as const, + action: (): OpenDialogActionReturn => ({ + type: 'dialog', + dialog: 'manage-models', + }), +}; diff --git a/packages/cli/src/ui/commands/rewindCommand.ts b/packages/cli/src/ui/commands/rewindCommand.ts index 07f78a1a5..020091795 100644 --- a/packages/cli/src/ui/commands/rewindCommand.ts +++ b/packages/cli/src/ui/commands/rewindCommand.ts @@ -16,7 +16,7 @@ export const rewindCommand: SlashCommand = { }, kind: CommandKind.BUILT_IN, action: async (): Promise => ({ - type: 'dialog', - dialog: 'rewind', - }), + type: 'dialog', + dialog: 'rewind', + }), }; diff --git a/packages/cli/src/ui/commands/types.ts b/packages/cli/src/ui/commands/types.ts index 2e3f8df62..9afd2ac9c 100644 --- a/packages/cli/src/ui/commands/types.ts +++ b/packages/cli/src/ui/commands/types.ts @@ -173,6 +173,7 @@ export interface OpenDialogActionReturn { | 'memory' | 'model' | 'fast-model' + | 'manage-models' | 'subagent_create' | 'subagent_list' | 'trust' diff --git 
a/packages/cli/src/ui/components/DialogManager.tsx b/packages/cli/src/ui/components/DialogManager.tsx index e626a60df..f884d089a 100644 --- a/packages/cli/src/ui/components/DialogManager.tsx +++ b/packages/cli/src/ui/components/DialogManager.tsx @@ -16,11 +16,13 @@ import { PluginChoicePrompt } from './PluginChoicePrompt.js'; import { ThemeDialog } from './ThemeDialog.js'; import { SettingsDialog } from './SettingsDialog.js'; import { QwenOAuthProgress } from './QwenOAuthProgress.js'; +import { ExternalAuthProgress } from './ExternalAuthProgress.js'; import { AuthDialog } from '../auth/AuthDialog.js'; import { EditorSettingsDialog } from './EditorSettingsDialog.js'; import { TrustDialog } from './TrustDialog.js'; import { PermissionsDialog } from './PermissionsDialog.js'; import { ModelDialog } from './ModelDialog.js'; +import { ManageModelsDialog } from './ManageModelsDialog.js'; import { ArenaStartDialog } from './arena/ArenaStartDialog.js'; import { ArenaSelectDialog } from './arena/ArenaSelectDialog.js'; import { ArenaStopDialog } from './arena/ArenaStopDialog.js'; @@ -213,6 +215,14 @@ export const DialogManager = ({ /> ); } + if (uiState.isManageModelsDialogOpen) { + return ( + + ); + } if (uiState.isSettingsDialogOpen) { return ( @@ -310,6 +320,23 @@ export const DialogManager = ({ } if (uiState.isAuthenticating) { + if ( + uiState.pendingAuthType === AuthType.USE_OPENAI && + uiState.externalAuthState + ) { + return ( + { + uiActions.cancelAuthentication(); + uiActions.setAuthState(AuthState.Updating); + }} + /> + ); + } + // OpenAI authentication now handled through AuthDialog with coding-plan/custom sub-modes // Qwen OAuth remains as a separate flow if (uiState.pendingAuthType === AuthType.QWEN_OAUTH) { diff --git a/packages/cli/src/ui/components/ExternalAuthProgress.test.tsx b/packages/cli/src/ui/components/ExternalAuthProgress.test.tsx new file mode 100644 index 000000000..528d20237 --- /dev/null +++ 
b/packages/cli/src/ui/components/ExternalAuthProgress.test.tsx @@ -0,0 +1,29 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, expect, it, vi } from 'vitest'; +import { render } from 'ink-testing-library'; +import { ExternalAuthProgress } from './ExternalAuthProgress.js'; + +vi.mock('../hooks/useKeypress.js', () => ({ + useKeypress: vi.fn(), +})); + +describe('ExternalAuthProgress', () => { + it('shows cancel hint when cancel is available', () => { + const onCancel = vi.fn(); + const { lastFrame } = render( + , + ); + + expect(lastFrame()).toContain('Esc to cancel'); + }); +}); diff --git a/packages/cli/src/ui/components/ExternalAuthProgress.tsx b/packages/cli/src/ui/components/ExternalAuthProgress.tsx new file mode 100644 index 000000000..cfea9a81d --- /dev/null +++ b/packages/cli/src/ui/components/ExternalAuthProgress.tsx @@ -0,0 +1,60 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import type React from 'react'; +import { Box, Text } from 'ink'; +import { theme } from '../semantic-colors.js'; +import { t } from '../../i18n/index.js'; +import { useKeypress } from '../hooks/useKeypress.js'; + +interface ExternalAuthProgressProps { + title: string; + message: string; + detail?: string; + onCancel?: () => void; +} + +export function ExternalAuthProgress({ + title, + message, + detail, + onCancel, +}: ExternalAuthProgressProps): React.JSX.Element { + useKeypress( + (key) => { + if (key.name === 'escape' || (key.ctrl && key.name === 'c')) { + onCancel?.(); + } + }, + { isActive: Boolean(onCancel) }, + ); + + return ( + + {title} + + + {message} + {detail ? {detail} : null} + + + + + {t('Please wait while authentication completes...')} + + {onCancel ? 
( + {t('Esc to cancel')} + ) : null} + + + ); +} diff --git a/packages/cli/src/ui/components/ManageModelsDialog.test.tsx b/packages/cli/src/ui/components/ManageModelsDialog.test.tsx new file mode 100644 index 000000000..a39e3c1f8 --- /dev/null +++ b/packages/cli/src/ui/components/ManageModelsDialog.test.tsx @@ -0,0 +1,132 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, expect, it } from 'vitest'; +import { + applyCatalogFilters, + buildModelLabel, + getNextEnabledTabSource, + getNextFocusMode, + type FilterMode, +} from './ManageModelsDialog.js'; +import type { ManageModelsCatalogEntry } from '../manageModels/manageModels.js'; + +function makeEntry( + id: string, + options: { + badges?: string[]; + supportsVision?: boolean; + contextWindowSize?: number; + } = {}, +): ManageModelsCatalogEntry { + return { + id, + label: id, + searchText: `${id} ${(options.badges || []).join(' ')}`, + supportsVision: options.supportsVision ?? 
false, + contextWindowSize: options.contextWindowSize, + badges: options.badges || [], + model: { + id, + name: id, + baseUrl: 'https://openrouter.ai/api/v1', + }, + }; +} + +describe('ManageModelsDialog helpers', () => { + it('buildModelLabel uses the short display label only', () => { + expect( + buildModelLabel( + makeEntry('qwen/qwen3-coder:free', { + badges: ['free', 'vision'], + contextWindowSize: 1_000_000, + }), + ), + ).toBe('qwen/qwen3-coder:free'); + }); + + it.each<[FilterMode, string[]]>([ + ['all', ['qwen/qwen3-coder:free', 'openai/gpt-4o-mini']], + ['enabled', ['openai/gpt-4o-mini']], + ['free', ['qwen/qwen3-coder:free']], + ['vision', ['qwen/qwen3-coder:free']], + ])('applyCatalogFilters supports %s filter', (filterMode, expectedIds) => { + const entries = [ + makeEntry('qwen/qwen3-coder:free', { + badges: ['free', 'vision'], + supportsVision: true, + }), + makeEntry('openai/gpt-4o-mini'), + ]; + + expect( + applyCatalogFilters({ + entries, + query: '', + selectedIds: ['openai/gpt-4o-mini'], + filterMode, + }).map((entry) => entry.id), + ).toEqual(expectedIds); + }); + + it('applyCatalogFilters combines query and filter mode', () => { + const entries = [ + makeEntry('qwen/qwen3-coder:free', { + badges: ['free'], + }), + makeEntry('glm/glm-4.5-air:free', { + badges: ['free'], + }), + ]; + + expect( + applyCatalogFilters({ + entries, + query: 'qwen', + selectedIds: [], + filterMode: 'free', + }).map((entry) => entry.id), + ).toEqual(['qwen/qwen3-coder:free']); + }); + + it('applyCatalogFilters supports enabled quick filter in search', () => { + const entries = [ + makeEntry('qwen/qwen3-coder:free'), + makeEntry('openai/gpt-4o-mini'), + ]; + + expect( + applyCatalogFilters({ + entries, + query: 'enabled', + selectedIds: ['openai/gpt-4o-mini'], + filterMode: 'all', + }).map((entry) => entry.id), + ).toEqual(['openai/gpt-4o-mini']); + + expect( + applyCatalogFilters({ + entries, + query: 'is:enabled gpt', + selectedIds: ['openai/gpt-4o-mini'], + 
filterMode: 'all', + }).map((entry) => entry.id), + ).toEqual(['openai/gpt-4o-mini']); + }); + + it('cycles focus across tabs, search, and list', () => { + expect(getNextFocusMode('tabs', 'forward', true)).toBe('search'); + expect(getNextFocusMode('search', 'forward', true)).toBe('list'); + expect(getNextFocusMode('list', 'forward', true)).toBe('tabs'); + expect(getNextFocusMode('search', 'backward', false)).toBe('tabs'); + }); + + it('keeps provider tab on the only enabled source', () => { + expect(getNextEnabledTabSource('openrouter', 'left')).toBe('openrouter'); + expect(getNextEnabledTabSource('openrouter', 'right')).toBe('openrouter'); + }); +}); diff --git a/packages/cli/src/ui/components/ManageModelsDialog.tsx b/packages/cli/src/ui/components/ManageModelsDialog.tsx new file mode 100644 index 000000000..98998d54f --- /dev/null +++ b/packages/cli/src/ui/components/ManageModelsDialog.tsx @@ -0,0 +1,648 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { Box, Text } from 'ink'; +import { useCallback, useEffect, useMemo, useState } from 'react'; +import process from 'node:process'; +import { + type Config, + type ProviderModelConfig as ModelConfig, +} from '@qwen-code/qwen-code-core'; +import { useSettings } from '../contexts/SettingsContext.js'; +import { useKeypress } from '../hooks/useKeypress.js'; +import { theme } from '../semantic-colors.js'; +import { TextInput } from './shared/TextInput.js'; +import type { LoadedSettings } from '../../config/settings.js'; +import { + type ManageModelsCatalog, + type ManageModelsCatalogEntry, + type ManageModelsSource, + fetchManageModelsCatalog, + getEnabledModelIdsForSource, + saveManageModelsSelection, +} from '../manageModels/manageModels.js'; + +interface ManageModelsDialogProps { + config: Config; + onClose: () => void; +} + +type DialogStatus = 'loading' | 'ready' | 'saving' | 'error'; +type FocusMode = 'tabs' | 'search' | 'list'; +export type FilterMode = 'all' | 
'enabled' | 'free' | 'vision'; + +const MAX_VISIBLE_MODELS = 12; +const MANAGE_MODELS_TABS = [ + { source: 'openrouter', label: 'OpenRouter', enabled: true }, + { source: 'modelstudio', label: 'ModelStudio', enabled: false }, +] as const; + +type ManageModelsTabSource = (typeof MANAGE_MODELS_TABS)[number]['source']; + +export function buildModelLabel(entry: ManageModelsCatalogEntry): string { + return entry.label; +} + +export function applyCatalogFilters(params: { + entries: ManageModelsCatalogEntry[]; + query: string; + selectedIds: string[]; + filterMode: FilterMode; +}): ManageModelsCatalogEntry[] { + const { entries, query, selectedIds, filterMode } = params; + const normalized = query.trim().toLowerCase(); + const rawTokens = normalized ? normalized.split(/\s+/).filter(Boolean) : []; + const quickFilterEnabled = rawTokens.some( + (token) => token === 'enabled' || token === 'is:enabled', + ); + const tokens = rawTokens.filter( + (token) => token !== 'enabled' && token !== 'is:enabled', + ); + const selectedSet = new Set(selectedIds); + + return entries.filter((entry) => { + if ( + (filterMode === 'enabled' || quickFilterEnabled) && + !selectedSet.has(entry.id) + ) { + return false; + } + if (filterMode === 'free' && !entry.badges.includes('free')) { + return false; + } + if (filterMode === 'vision' && !entry.supportsVision) { + return false; + } + + if (tokens.length === 0) { + return true; + } + + const haystack = `${entry.searchText} ${entry.id}`.toLowerCase(); + return tokens.every((token) => haystack.includes(token)); + }); +} + +function getFilterLabel(filterMode: FilterMode): string { + switch (filterMode) { + case 'enabled': + return 'Enabled'; + case 'free': + return 'Free'; + case 'vision': + return 'Vision'; + case 'all': + default: + return 'All'; + } +} + +function cycleFilter( + current: FilterMode, + direction: 'left' | 'right', +): FilterMode { + const modes: FilterMode[] = ['all', 'enabled', 'free', 'vision']; + const currentIndex = 
modes.indexOf(current); + const nextIndex = + direction === 'right' + ? (currentIndex + 1) % modes.length + : (currentIndex - 1 + modes.length) % modes.length; + return modes[nextIndex] || 'all'; +} + +function formatContextWindowSize(value?: number): string { + return typeof value === 'number' ? value.toLocaleString('en-US') : 'unknown'; +} + +export function getNextFocusMode( + current: FocusMode, + direction: 'forward' | 'backward', + hasList: boolean, +): FocusMode { + const order: FocusMode[] = hasList + ? ['tabs', 'search', 'list'] + : ['tabs', 'search']; + const currentIndex = order.indexOf(current); + const safeIndex = currentIndex >= 0 ? currentIndex : 0; + const nextIndex = + direction === 'forward' + ? (safeIndex + 1) % order.length + : (safeIndex - 1 + order.length) % order.length; + return order[nextIndex] || 'tabs'; +} + +export function getNextEnabledTabSource( + current: ManageModelsTabSource, + direction: 'left' | 'right', +): ManageModelsTabSource { + const currentIndex = MANAGE_MODELS_TABS.findIndex( + (tab) => tab.source === current, + ); + const safeIndex = currentIndex >= 0 ? currentIndex : 0; + + for (let offset = 1; offset <= MANAGE_MODELS_TABS.length; offset += 1) { + const candidateIndex = + direction === 'right' + ? 
(safeIndex + offset) % MANAGE_MODELS_TABS.length + : (safeIndex - offset + MANAGE_MODELS_TABS.length) % + MANAGE_MODELS_TABS.length; + const candidate = MANAGE_MODELS_TABS[candidateIndex]; + if (candidate?.enabled) { + return candidate.source; + } + } + + return current; +} + +export function ManageModelsDialog({ + config, + onClose, +}: ManageModelsDialogProps): React.JSX.Element { + const settings = useSettings(); + const [activeTabSource, setActiveTabSource] = + useState<ManageModelsTabSource>('openrouter'); + const source: ManageModelsSource = 'openrouter'; + + const [status, setStatus] = useState<DialogStatus>('loading'); + const [error, setError] = useState<string | null>(null); + const [catalog, setCatalog] = useState<ManageModelsCatalog | null>(null); + const [query, setQuery] = useState(''); + const [focusMode, setFocusMode] = useState<FocusMode>('tabs'); + const [filterMode, setFilterMode] = useState<FilterMode>('all'); + const [selectedIds, setSelectedIds] = useState<string[]>([]); + const [highlightedId, setHighlightedId] = useState<string | null>(null); + const [statusMessage, setStatusMessage] = useState<string | null>(null); + + const loadCatalog = useCallback(async () => { + setStatus('loading'); + setError(null); + setStatusMessage(null); + + try { + const nextCatalog = await fetchManageModelsCatalog(source); + const enabledIds = getEnabledModelIdsForSource(source, settings); + setCatalog(nextCatalog); + setSelectedIds(enabledIds); + setHighlightedId(nextCatalog.entries[0]?.id || null); + setStatus('ready'); + } catch (loadError) { + setError( + loadError instanceof Error ? 
loadError.message : String(loadError), + ); + setStatus('error'); + } + }, [settings, source]); + + useEffect(() => { + void loadCatalog(); + }, [loadCatalog]); + + const filteredEntries = useMemo( + () => + applyCatalogFilters({ + entries: catalog?.entries || [], + query, + selectedIds, + filterMode, + }), + [catalog?.entries, query, selectedIds, filterMode], + ); + + useEffect(() => { + if (filteredEntries.length === 0) { + setHighlightedId(null); + if (focusMode === 'list') { + setFocusMode('search'); + } + return; + } + + if ( + highlightedId && + filteredEntries.some((entry) => entry.id === highlightedId) + ) { + return; + } + + setHighlightedId(filteredEntries[0]?.id || null); + }, [filteredEntries, focusMode, highlightedId]); + + const highlightedIndex = useMemo(() => { + if (!highlightedId) { + return 0; + } + const index = filteredEntries.findIndex( + (entry) => entry.id === highlightedId, + ); + return index >= 0 ? index : 0; + }, [filteredEntries, highlightedId]); + + const highlightedEntry = useMemo(() => { + if (!highlightedId) { + return null; + } + return catalog?.entries.find((entry) => entry.id === highlightedId) || null; + }, [catalog?.entries, highlightedId]); + + const visibleWindow = useMemo(() => { + if (filteredEntries.length <= MAX_VISIBLE_MODELS) { + return { + start: 0, + entries: filteredEntries, + }; + } + + const centeredStart = Math.max( + 0, + Math.min( + highlightedIndex - Math.floor(MAX_VISIBLE_MODELS / 2), + filteredEntries.length - MAX_VISIBLE_MODELS, + ), + ); + + return { + start: centeredStart, + entries: filteredEntries.slice( + centeredStart, + centeredStart + MAX_VISIBLE_MODELS, + ), + }; + }, [filteredEntries, highlightedIndex]); + + const moveHighlight = useCallback( + (direction: 'up' | 'down') => { + if (filteredEntries.length === 0) { + return; + } + + if (direction === 'up') { + if (highlightedIndex <= 0) { + setFocusMode('search'); + return; + } + setHighlightedId(filteredEntries[highlightedIndex - 1]?.id || null); + 
return; + } + + const nextIndex = Math.min( + highlightedIndex + 1, + filteredEntries.length - 1, + ); + setHighlightedId(filteredEntries[nextIndex]?.id || null); + }, + [filteredEntries, highlightedIndex], + ); + + const toggleHighlightedSelection = useCallback(() => { + const currentEntry = filteredEntries[highlightedIndex]; + if (!currentEntry) { + return; + } + + setSelectedIds((current) => { + const next = new Set(current); + if (next.has(currentEntry.id)) { + next.delete(currentEntry.id); + } else { + next.add(currentEntry.id); + } + return Array.from(next); + }); + }, [filteredEntries, highlightedIndex]); + + const handleSave = useCallback(async () => { + if (!catalog) { + return; + } + + const selectedEntries = catalog.entries.filter((entry) => + selectedIds.includes(entry.id), + ); + + if (selectedEntries.length === 0) { + setError('Select at least one model to keep enabled.'); + return; + } + + setStatus('saving'); + setError(null); + setStatusMessage(null); + + try { + const selectedModels: ModelConfig[] = selectedEntries.map( + (entry) => entry.model, + ); + const result = await saveManageModelsSelection({ + source, + selectedModels, + settings: settings as LoadedSettings, + config, + }); + setSelectedIds(result.selectedIds); + setStatus('ready'); + setStatusMessage( + result.activeModelId + ? `Saved ${result.selectedIds.length} enabled models · active model: ${result.activeModelId} · use /model to switch models` + : `Saved ${result.selectedIds.length} enabled models · use /model to switch models`, + ); + } catch (saveError) { + setError( + saveError instanceof Error ? 
saveError.message : String(saveError), + ); + setStatus('error'); + } + }, [catalog, config, selectedIds, settings, source]); + + useKeypress( + (key) => { + if (key.name === 'escape') { + onClose(); + return; + } + + if (key.ctrl && key.name === 'r' && status !== 'saving') { + void loadCatalog(); + return; + } + + if (status === 'saving') { + return; + } + + if (key.name === 'tab') { + setFocusMode((current) => + getNextFocusMode( + current, + key.shift ? 'backward' : 'forward', + filteredEntries.length > 0, + ), + ); + return; + } + + if (focusMode === 'tabs') { + if (key.name === 'left') { + setActiveTabSource((current) => + getNextEnabledTabSource(current, 'left'), + ); + return; + } + if (key.name === 'right') { + setActiveTabSource((current) => + getNextEnabledTabSource(current, 'right'), + ); + return; + } + if (key.name === 'down') { + setFocusMode('search'); + } + return; + } + + if (focusMode === 'search') { + if (key.name === 'left') { + setFilterMode((current) => cycleFilter(current, 'left')); + return; + } + if (key.name === 'right') { + setFilterMode((current) => cycleFilter(current, 'right')); + return; + } + if (key.name === 'up') { + setFocusMode('tabs'); + return; + } + if (key.name === 'down' && filteredEntries.length > 0) { + setFocusMode('list'); + } + return; + } + + if (focusMode === 'list') { + if (key.name === 'up') { + moveHighlight('up'); + return; + } + if (key.name === 'down') { + moveHighlight('down'); + return; + } + if (key.name === 'space' || key.sequence === ' ') { + toggleHighlightedSelection(); + return; + } + if (key.name === 'return') { + void handleSave(); + } + } + }, + { isActive: true }, + ); + + const terminalWidth = process.stdout.columns || 120; + const searchInputWidth = Math.max(40, Math.min(100, terminalWidth - 16)); + + const enabledSet = useMemo(() => new Set(selectedIds), [selectedIds]); + const hiddenAboveCount = visibleWindow.start; + const hiddenBelowCount = Math.max( + 0, + filteredEntries.length - + 
(visibleWindow.start + visibleWindow.entries.length), + ); + + return ( + + + + {'─'.repeat(200)} + + + + + + Manage Models:{' '} + + {MANAGE_MODELS_TABS.map((tab) => { + const isActive = activeTabSource === tab.source; + const isFocused = focusMode === 'tabs' && isActive; + + return ( + + {isActive ? ( + + {` ${tab.label} `} + + ) : ( + + {` ${tab.label}${tab.enabled ? '' : ' (soon)'} `} + + )} + + ); + })} + + + + {(status === 'loading' || status === 'saving') && ( + + + {status === 'loading' + ? 'Loading OpenRouter catalog…' + : 'Saving enabled models…'} + + + )} + + {error && ( + + {error} + + )} + + {statusMessage && ( + + {statusMessage} + + )} + + + { + if (filteredEntries.length > 0) { + setFocusMode('list'); + } + }} + onDown={() => { + if (filteredEntries.length > 0) { + setFocusMode('list'); + } + }} + placeholder="Search models… (type enabled to filter)" + height={1} + isActive={status !== 'saving' && focusMode === 'search'} + inputWidth={searchInputWidth} + /> + + + + + + {getFilterLabel(filterMode)} · {catalog?.entries.length || 0} total + · {filteredEntries.length} shown · {selectedIds.length} enabled + + + {filteredEntries.length === 0 ? ( + + No models match the current search and filter. + + ) : ( + + {hiddenAboveCount > 0 && ( + + ↑ {hiddenAboveCount} more above + + )} + + {visibleWindow.entries.map((entry, index) => { + const absoluteIndex = visibleWindow.start + index; + const isActive = + focusMode === 'list' && absoluteIndex === highlightedIndex; + const isEnabled = enabledSet.has(entry.id); + const prefix = isActive ? '›' : ' '; + const checkbox = isEnabled ? '[✓]' : '[ ]'; + const rowColor = isActive + ? theme.status.success + : isEnabled + ? theme.text.accent + : theme.text.primary; + + return ( + + {prefix} {checkbox} {buildModelLabel(entry)} + + ); + })} + + {hiddenBelowCount > 0 && ( + + ↓ {hiddenBelowCount} more below + + )} + + )} + + + + Details + {highlightedEntry ? 
( + + {highlightedEntry.label} + + Model ID: {highlightedEntry.id} + + + Enabled: {enabledSet.has(highlightedEntry.id) ? 'yes' : 'no'} + + + Vision: {highlightedEntry.supportsVision ? 'yes' : 'no'} + + + Context:{' '} + {formatContextWindowSize(highlightedEntry.contextWindowSize)} + + + Tags:{' '} + {highlightedEntry.badges.length > 0 + ? highlightedEntry.badges.join(', ') + : 'none'} + + + ) : ( + + Move to the model list to inspect a model. + + )} + + + + + + ←/→ tab switch · ↓ enter list · Space toggle · Enter save · Esc cancel + + + + ); +} diff --git a/packages/cli/src/ui/components/shared/MultiSelect.tsx b/packages/cli/src/ui/components/shared/MultiSelect.tsx index b910430ba..7191d4fd6 100644 --- a/packages/cli/src/ui/components/shared/MultiSelect.tsx +++ b/packages/cli/src/ui/components/shared/MultiSelect.tsx @@ -19,9 +19,10 @@ export interface MultiSelectItem extends SelectionListItem { export interface MultiSelectProps { items: Array>; initialIndex?: number; - initialSelectedKeys?: string[]; + selectedKeys?: string[]; onConfirm: (selectedValues: T[]) => void; onChange?: (selectedValues: T[]) => void; + onSelectedKeysChange?: (selectedKeys: string[]) => void; onHighlight?: (value: T) => void; isFocused?: boolean; showNumbers?: boolean; @@ -43,32 +44,18 @@ function getSelectedValues( export function MultiSelect({ items, initialIndex = 0, - initialSelectedKeys = EMPTY_SELECTED_KEYS, + selectedKeys = EMPTY_SELECTED_KEYS, onConfirm, onChange, + onSelectedKeysChange, onHighlight, isFocused = true, showNumbers = true, showScrollArrows = false, maxItemsToShow = 10, }: MultiSelectProps): React.JSX.Element { - const [selectedKeys, setSelectedKeys] = useState>( - () => new Set(initialSelectedKeys), - ); const [scrollOffset, setScrollOffset] = useState(0); - - useEffect(() => { - setSelectedKeys((prev) => { - const next = new Set(initialSelectedKeys); - if ( - prev.size === next.size && - Array.from(next).every((key) => prev.has(key)) - ) { - return prev; - } - return 
next; - }); - }, [initialSelectedKeys]); + const selectedKeySet = useMemo(() => new Set(selectedKeys), [selectedKeys]); const { activeIndex } = useSelectionList({ items, @@ -81,7 +68,7 @@ export function MultiSelect({ showNumbers: false, onHighlight, onSelect: () => { - onConfirm(getSelectedValues(items, selectedKeys)); + onConfirm(getSelectedValues(items, selectedKeySet)); }, }); @@ -92,23 +79,19 @@ export function MultiSelect({ return; } - setSelectedKeys((prev) => { - const next = new Set(prev); - if (next.has(item.key)) { - next.delete(item.key); - } else { - next.add(item.key); - } - return next; - }); + const next = new Set(selectedKeySet); + if (next.has(item.key)) { + next.delete(item.key); + } else { + next.add(item.key); + } + const nextKeys = Array.from(next); + onSelectedKeysChange?.(nextKeys); + onChange?.(getSelectedValues(items, next)); }, - [items], + [items, onChange, onSelectedKeysChange, selectedKeySet], ); - useEffect(() => { - onChange?.(getSelectedValues(items, selectedKeys)); - }, [items, selectedKeys, onChange]); - useKeypress( (key) => { if (key.name === 'space' || key.sequence === ' ') { @@ -152,7 +135,7 @@ export function MultiSelect({ {visibleItems.map((item, index) => { const itemIndex = scrollOffset + index; const isActive = activeIndex === itemIndex; - const isChecked = selectedKeys.has(item.key); + const isChecked = selectedKeySet.has(item.key); const itemNumberText = `${String(itemIndex + 1).padStart( numberColumnWidth, diff --git a/packages/cli/src/ui/components/shared/TextInput.tsx b/packages/cli/src/ui/components/shared/TextInput.tsx index 3e7b62258..b77848d0e 100644 --- a/packages/cli/src/ui/components/shared/TextInput.tsx +++ b/packages/cli/src/ui/components/shared/TextInput.tsx @@ -22,7 +22,7 @@ export interface TextInputProps { onChange: (text: string) => void; onSubmit?: () => void; /** Called when Tab is pressed; if provided, prevents the default tab-insertion behaviour. 
*/ - onTab?: () => void; + onTab?: (key: Key) => void; /** Called when ↑ is pressed; if provided, prevents cursor-up in the buffer. */ onUp?: () => void; /** Called when ↓ is pressed; if provided, prevents cursor-down in the buffer. */ @@ -80,7 +80,7 @@ export function TextInput({ // Tab completion: delegate to caller instead of inserting a tab character // During paste, let tab through as literal content (e.g. Excel tab-separated data) if (key.name === 'tab' && !key.paste) { - onTab?.(); + onTab?.(key); return; } diff --git a/packages/cli/src/ui/contexts/UIActionsContext.tsx b/packages/cli/src/ui/contexts/UIActionsContext.tsx index c035e6421..052ff5491 100644 --- a/packages/cli/src/ui/contexts/UIActionsContext.tsx +++ b/packages/cli/src/ui/contexts/UIActionsContext.tsx @@ -52,6 +52,25 @@ export interface UIActions { region: AlibabaStandardRegion, modelIdsInput: string, ) => Promise; + handleOpenRouterSubmit: () => Promise; + handleCustomApiKeySubmit: ( + protocol: + | AuthType.USE_OPENAI + | AuthType.USE_ANTHROPIC + | AuthType.USE_GEMINI, + baseUrl: string, + apiKey: string, + modelIdsInput: string, + generationConfig?: { + enableThinking?: boolean; + multimodal?: { + image?: boolean; + video?: boolean; + audio?: boolean; + }; + maxTokens?: number; + }, + ) => Promise; setAuthState: (state: AuthState) => void; onAuthError: (error: string | null) => void; cancelAuthentication: () => void; @@ -64,6 +83,8 @@ export interface UIActions { closeMemoryDialog: () => void; closeModelDialog: () => void; openModelDialog: (options?: { fastModelMode?: boolean }) => void; + openManageModelsDialog: () => void; + closeManageModelsDialog: () => void; openArenaDialog: (type: Exclude) => void; closeArenaDialog: () => void; handleArenaModelsSelected?: (models: string[]) => void; diff --git a/packages/cli/src/ui/contexts/UIStateContext.tsx b/packages/cli/src/ui/contexts/UIStateContext.tsx index a0ddd9c9d..c987d99cd 100644 --- a/packages/cli/src/ui/contexts/UIStateContext.tsx +++ 
b/packages/cli/src/ui/contexts/UIStateContext.tsx @@ -18,7 +18,7 @@ import type { PluginChoiceRequest, } from '../types.js'; import type { TodoItem } from '../components/TodoDisplay.js'; -import type { QwenAuthState } from '../hooks/useQwenAuth.js'; +import type { ExternalAuthState, QwenAuthState } from '../hooks/useQwenAuth.js'; import type { CommandContext, SlashCommand } from '../commands/types.js'; import type { TextBuffer } from '../components/shared/text-buffer.js'; import type { @@ -48,6 +48,7 @@ export interface UIState { authError: string | null; isAuthDialogOpen: boolean; pendingAuthType: AuthType | undefined; + externalAuthState: ExternalAuthState | null; // Qwen OAuth state qwenAuthState: QwenAuthState; editorError: string | null; @@ -58,6 +59,7 @@ export interface UIState { isMemoryDialogOpen: boolean; isModelDialogOpen: boolean; isFastModelMode: boolean; + isManageModelsDialogOpen: boolean; isTrustDialogOpen: boolean; activeArenaDialog: ArenaDialogType; isPermissionsDialogOpen: boolean; diff --git a/packages/cli/src/ui/hooks/slashCommandProcessor.ts b/packages/cli/src/ui/hooks/slashCommandProcessor.ts index fea4b12a0..fb1281f16 100644 --- a/packages/cli/src/ui/hooks/slashCommandProcessor.ts +++ b/packages/cli/src/ui/hooks/slashCommandProcessor.ts @@ -86,6 +86,7 @@ interface SlashCommandProcessorActions { openMemoryDialog: () => void; openSettingsDialog: () => void; openModelDialog: (options?: { fastModelMode?: boolean }) => void; + openManageModelsDialog: () => void; openTrustDialog: () => void; openPermissionsDialog: () => void; openApprovalModeDialog: () => void; @@ -595,6 +596,9 @@ export const useSlashCommandProcessor = ( case 'fast-model': actions.openModelDialog({ fastModelMode: true }); return { type: 'handled' }; + case 'manage-models': + actions.openManageModelsDialog(); + return { type: 'handled' }; case 'trust': actions.openTrustDialog(); return { type: 'handled' }; diff --git a/packages/cli/src/ui/hooks/useManageModelsCommand.test.ts 
b/packages/cli/src/ui/hooks/useManageModelsCommand.test.ts new file mode 100644 index 000000000..f9d46218f --- /dev/null +++ b/packages/cli/src/ui/hooks/useManageModelsCommand.test.ts @@ -0,0 +1,40 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect } from 'vitest'; +import { renderHook, act } from '@testing-library/react'; +import { useManageModelsCommand } from './useManageModelsCommand.js'; + +describe('useManageModelsCommand', () => { + it('should initialize with the dialog closed', () => { + const { result } = renderHook(() => useManageModelsCommand()); + expect(result.current.isManageModelsDialogOpen).toBe(false); + }); + + it('should open the dialog when openManageModelsDialog is called', () => { + const { result } = renderHook(() => useManageModelsCommand()); + + act(() => { + result.current.openManageModelsDialog(); + }); + + expect(result.current.isManageModelsDialogOpen).toBe(true); + }); + + it('should close the dialog when closeManageModelsDialog is called', () => { + const { result } = renderHook(() => useManageModelsCommand()); + + act(() => { + result.current.openManageModelsDialog(); + }); + expect(result.current.isManageModelsDialogOpen).toBe(true); + + act(() => { + result.current.closeManageModelsDialog(); + }); + expect(result.current.isManageModelsDialogOpen).toBe(false); + }); +}); diff --git a/packages/cli/src/ui/hooks/useManageModelsCommand.ts b/packages/cli/src/ui/hooks/useManageModelsCommand.ts new file mode 100644 index 000000000..ed1f2965e --- /dev/null +++ b/packages/cli/src/ui/hooks/useManageModelsCommand.ts @@ -0,0 +1,32 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useCallback, useState } from 'react'; + +interface UseManageModelsCommandReturn { + isManageModelsDialogOpen: boolean; + openManageModelsDialog: () => void; + closeManageModelsDialog: () => void; +} + +export function useManageModelsCommand(): 
UseManageModelsCommandReturn { + const [isManageModelsDialogOpen, setIsManageModelsDialogOpen] = + useState(false); + + const openManageModelsDialog = useCallback(() => { + setIsManageModelsDialogOpen(true); + }, []); + + const closeManageModelsDialog = useCallback(() => { + setIsManageModelsDialogOpen(false); + }, []); + + return { + isManageModelsDialogOpen, + openManageModelsDialog, + closeManageModelsDialog, + }; +} diff --git a/packages/cli/src/ui/hooks/useQwenAuth.ts b/packages/cli/src/ui/hooks/useQwenAuth.ts index 2b1819c1c..f3bbf7e19 100644 --- a/packages/cli/src/ui/hooks/useQwenAuth.ts +++ b/packages/cli/src/ui/hooks/useQwenAuth.ts @@ -24,6 +24,12 @@ export interface QwenAuthState { authMessage: string | null; } +export interface ExternalAuthState { + title: string; + message: string; + detail?: string; +} + export const useQwenAuth = ( pendingAuthType: AuthType | undefined, isAuthenticating: boolean, diff --git a/packages/cli/src/ui/manageModels/manageModels.test.ts b/packages/cli/src/ui/manageModels/manageModels.test.ts new file mode 100644 index 000000000..8ad3568c8 --- /dev/null +++ b/packages/cli/src/ui/manageModels/manageModels.test.ts @@ -0,0 +1,140 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { beforeEach, describe, expect, it, vi } from 'vitest'; +import { + AuthType, + type Config, + type ModelProvidersConfig, +} from '@qwen-code/qwen-code-core'; +import type { LoadedSettings } from '../../config/settings.js'; +import { + fetchManageModelsCatalog, + getEnabledModelIdsForSource, + saveManageModelsSelection, +} from './manageModels.js'; + +const { + mockFetchOpenRouterModels, + mockMergeOpenRouterConfigs, + mockIsOpenRouterConfig, +} = vi.hoisted(() => ({ + mockFetchOpenRouterModels: vi.fn(), + mockMergeOpenRouterConfigs: vi.fn(), + mockIsOpenRouterConfig: vi.fn(), +})); + +vi.mock('../../commands/auth/openrouterOAuth.js', () => ({ + OPENROUTER_DEFAULT_MODEL: 'openai/gpt-4o-mini', + 
fetchOpenRouterModels: mockFetchOpenRouterModels, + mergeOpenRouterConfigs: mockMergeOpenRouterConfigs, + isOpenRouterConfig: mockIsOpenRouterConfig, +})); + +describe('manageModels', () => { + beforeEach(() => { + vi.clearAllMocks(); + }); + + it('fetchManageModelsCatalog maps OpenRouter models into catalog entries', async () => { + mockFetchOpenRouterModels.mockResolvedValue([ + { + id: 'qwen/qwen3-coder:free', + name: 'OpenRouter · Qwen3 Coder', + capabilities: { vision: true }, + generationConfig: { contextWindowSize: 1_000_000 }, + }, + ]); + + const catalog = await fetchManageModelsCatalog('openrouter'); + + expect(catalog.source).toBe('openrouter'); + expect(catalog.entries).toHaveLength(1); + expect(catalog.entries[0]?.label).toBe('Qwen3 Coder'); + expect(catalog.entries[0]?.badges).toEqual( + expect.arrayContaining(['free', 'vision', 'long-context']), + ); + }); + + it('getEnabledModelIdsForSource only returns OpenRouter-enabled ids', () => { + mockIsOpenRouterConfig.mockImplementation( + (config: { baseUrl?: string }) => + config.baseUrl?.includes('openrouter') ?? 
false, + ); + + const settings = { + merged: { + modelProviders: { + [AuthType.USE_OPENAI]: [ + { + id: 'openai/gpt-4o-mini', + baseUrl: 'https://openrouter.ai/api/v1', + }, + { id: 'custom/model', baseUrl: 'https://example.com/v1' }, + ], + }, + }, + } as unknown as LoadedSettings; + + expect(getEnabledModelIdsForSource('openrouter', settings)).toEqual([ + 'openai/gpt-4o-mini', + ]); + }); + + it('saveManageModelsSelection merges selected OpenRouter models and reloads config', async () => { + const settings = { + isTrusted: false, + user: { settings: { modelProviders: {} } }, + workspace: { settings: {} }, + merged: { + modelProviders: { + [AuthType.USE_OPENAI]: [ + { id: 'old-openrouter', baseUrl: 'https://openrouter.ai/api/v1' }, + { id: 'custom/model', baseUrl: 'https://example.com/v1' }, + ], + } satisfies ModelProvidersConfig, + }, + setValue: vi.fn(), + } as unknown as LoadedSettings; + + const config = { + getContentGeneratorConfig: vi + .fn() + .mockReturnValue({ authType: AuthType.USE_OPENAI }), + getModel: vi.fn().mockReturnValue('old-openrouter'), + reloadModelProvidersConfig: vi.fn(), + refreshAuth: vi.fn().mockResolvedValue(undefined), + } as unknown as Config; + + mockMergeOpenRouterConfigs.mockReturnValue([ + { id: 'openai/gpt-4o-mini', baseUrl: 'https://openrouter.ai/api/v1' }, + { id: 'custom/model', baseUrl: 'https://example.com/v1' }, + ]); + + const result = await saveManageModelsSelection({ + source: 'openrouter', + selectedModels: [ + { id: 'openai/gpt-4o-mini', baseUrl: 'https://openrouter.ai/api/v1' }, + ], + settings, + config, + }); + + expect(mockMergeOpenRouterConfigs).toHaveBeenCalled(); + expect(settings.setValue).toHaveBeenCalledWith( + expect.anything(), + `modelProviders.${AuthType.USE_OPENAI}`, + [ + { id: 'openai/gpt-4o-mini', baseUrl: 'https://openrouter.ai/api/v1' }, + { id: 'custom/model', baseUrl: 'https://example.com/v1' }, + ], + ); + expect(config.reloadModelProvidersConfig).toHaveBeenCalled(); + 
expect(config.refreshAuth).toHaveBeenCalledWith(AuthType.USE_OPENAI); + expect(result.selectedIds).toEqual(['openai/gpt-4o-mini']); + expect(result.activeModelId).toBe('openai/gpt-4o-mini'); + }); +}); diff --git a/packages/cli/src/ui/manageModels/manageModels.ts b/packages/cli/src/ui/manageModels/manageModels.ts new file mode 100644 index 000000000..c2d4bfe12 --- /dev/null +++ b/packages/cli/src/ui/manageModels/manageModels.ts @@ -0,0 +1,211 @@ +/** + * @license + * Copyright 2025 Qwen + * SPDX-License-Identifier: Apache-2.0 + */ + +import { + AuthType, + type Config, + type ProviderModelConfig as ModelConfig, + type ModelProvidersConfig, +} from '@qwen-code/qwen-code-core'; +import type { LoadedSettings } from '../../config/settings.js'; +import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js'; +import { + OPENROUTER_DEFAULT_MODEL, + fetchOpenRouterModels, + isOpenRouterConfig, + mergeOpenRouterConfigs, +} from '../../commands/auth/openrouterOAuth.js'; + +export const MANAGE_MODELS_SOURCES = ['openrouter'] as const; + +export type ManageModelsSource = (typeof MANAGE_MODELS_SOURCES)[number]; + +export interface ManageModelsCatalogEntry { + id: string; + label: string; + searchText: string; + supportsVision: boolean; + contextWindowSize?: number; + badges: string[]; + model: ModelConfig; +} + +export interface ManageModelsCatalog { + source: ManageModelsSource; + title: string; + description: string; + authType: AuthType; + entries: ManageModelsCatalogEntry[]; +} + +export interface ManageModelsSaveResult { + updatedConfigs: ModelConfig[]; + selectedIds: string[]; + activeModelId?: string; +} + +function isFreeOpenRouterModel(modelId: string): boolean { + const normalizedId = modelId.toLowerCase(); + return normalizedId.includes(':free') || normalizedId === 'openrouter/free'; +} + +function getManageModelsDisplayLabel( + source: ManageModelsSource, + model: ModelConfig, +): string { + const rawLabel = model.name || model.id; + + 
switch (source) { + case 'openrouter': + return rawLabel.replace(/^OpenRouter\s*·\s*/i, '').trim() || model.id; + default: + return rawLabel; + } +} + +function createEntry( + source: ManageModelsSource, + model: ModelConfig, +): ManageModelsCatalogEntry { + const contextWindowSize = model.generationConfig?.contextWindowSize; + const supportsVision = model.capabilities?.vision === true; + const badges: string[] = []; + + if (isFreeOpenRouterModel(model.id)) { + badges.push('free'); + } + if (supportsVision) { + badges.push('vision'); + } + if (typeof contextWindowSize === 'number' && contextWindowSize >= 1_000_000) { + badges.push('long-context'); + } + + const displayLabel = getManageModelsDisplayLabel(source, model); + + return { + id: model.id, + label: displayLabel, + searchText: [model.id, model.name, displayLabel, ...badges] + .filter(Boolean) + .join(' '), + supportsVision, + contextWindowSize, + badges, + model, + }; +} + +export async function fetchManageModelsCatalog( + source: ManageModelsSource, +): Promise { + switch (source) { + case 'openrouter': { + const models = await fetchOpenRouterModels(); + return { + source, + title: 'OpenRouter', + description: + 'Browse the latest OpenRouter model catalog and choose which models are enabled locally.', + authType: AuthType.USE_OPENAI, + entries: models.map((model) => createEntry(source, model)), + }; + } + default: + throw new Error(`Unsupported manage models source: ${source}`); + } +} + +export function getEnabledModelIdsForSource( + source: ManageModelsSource, + settings: LoadedSettings, +): string[] { + const modelProviders = settings.merged.modelProviders as + | ModelProvidersConfig + | undefined; + const openaiConfigs = modelProviders?.[AuthType.USE_OPENAI] || []; + + switch (source) { + case 'openrouter': + return openaiConfigs + .filter((config) => isOpenRouterConfig(config)) + .map((config) => config.id); + default: + return []; + } +} + +export async function saveManageModelsSelection(params: { + 
source: ManageModelsSource; + selectedModels: ModelConfig[]; + settings: LoadedSettings; + config: Config; +}): Promise { + const { source, selectedModels, settings, config } = params; + const persistScope = getPersistScopeForModelSelection(settings); + const mergedModelProviders = settings.merged.modelProviders as + | ModelProvidersConfig + | undefined; + const existingOpenAIConfigs = + mergedModelProviders?.[AuthType.USE_OPENAI] || []; + + switch (source) { + case 'openrouter': { + const updatedConfigs = mergeOpenRouterConfigs( + existingOpenAIConfigs, + selectedModels, + ); + + if (updatedConfigs.length === 0) { + throw new Error( + 'At least one OpenAI-compatible model must remain enabled.', + ); + } + + settings.setValue( + persistScope, + `modelProviders.${AuthType.USE_OPENAI}`, + updatedConfigs, + ); + + const selectedIds = selectedModels.map((model) => model.id); + const currentAuthType = config.getContentGeneratorConfig()?.authType; + const currentModelId = config.getModel(); + const currentModelStillAvailable = currentModelId + ? 
updatedConfigs.some((model) => model.id === currentModelId) + : false; + + let activeModelId = currentModelId; + if (!currentModelStillAvailable) { + const preferredDefault = updatedConfigs.find( + (model) => model.id === OPENROUTER_DEFAULT_MODEL, + ); + activeModelId = preferredDefault?.id || updatedConfigs[0]?.id; + if (activeModelId) { + settings.setValue(persistScope, 'model.name', activeModelId); + } + } + + const updatedModelProviders: ModelProvidersConfig = { + ...(mergedModelProviders || {}), + [AuthType.USE_OPENAI]: updatedConfigs, + }; + config.reloadModelProvidersConfig(updatedModelProviders); + + if (currentAuthType === AuthType.USE_OPENAI) { + await config.refreshAuth(AuthType.USE_OPENAI); + } + + return { + updatedConfigs, + selectedIds, + activeModelId, + }; + } + default: + throw new Error(`Unsupported manage models source: ${source}`); + } +} diff --git a/packages/core/src/services/shellExecutionService.ts b/packages/core/src/services/shellExecutionService.ts index 06a628881..816f41fa5 100644 --- a/packages/core/src/services/shellExecutionService.ts +++ b/packages/core/src/services/shellExecutionService.ts @@ -663,10 +663,10 @@ export class ShellExecutionService { const finalOutput = shellExecutionConfig.disableDynamicLineTrimming ? newOutput : trimmedOutput; - const finalOutputComparison = shellExecutionConfig - .disableDynamicLineTrimming - ? newOutputComparison - : trimmedOutputComparison; + const finalOutputComparison = + shellExecutionConfig.disableDynamicLineTrimming + ? 
newOutputComparison + : trimmedOutputComparison; if (!areAnsiOutputsEqual(outputComparison, finalOutputComparison)) { outputComparison = finalOutputComparison; diff --git a/packages/core/src/utils/fileUtils.test.ts b/packages/core/src/utils/fileUtils.test.ts index 5f1bc1034..a1f529cdf 100644 --- a/packages/core/src/utils/fileUtils.test.ts +++ b/packages/core/src/utils/fileUtils.test.ts @@ -40,6 +40,40 @@ vi.mock('mime/lite', () => ({ getType: vi.fn(), })); +// Mock execFile so isPdftotextAvailable does not spawn a real process. +// On platforms where pdftotext is not installed (e.g. Windows CI), +// the 5-second execFile timeout can exceed the default 5s test timeout. +vi.mock('node:child_process', async (importOriginal) => { + const actual = await importOriginal(); + return { + ...actual, + execFile: vi.fn( + ( + _command: string, + _args: string[], + _optionsOrCallback: unknown, + _callback?: unknown, + ) => { + // Resolve the callback (supports both signatures of execFile) + const cb = + typeof _optionsOrCallback === 'function' + ? 
_optionsOrCallback + : _callback; + const error = Object.assign(new Error('Command not found'), { + code: 'ENOENT', + }); + if (typeof cb === 'function') { + setImmediate(() => cb(error, '', '')); + } + return { + kill: vi.fn(), + on: vi.fn(), + } as unknown as import('node:child_process').ChildProcess; + }, + ), + }; +}); + const mockMimeGetType = mime.getType as Mock; describe('fileUtils', () => { diff --git a/packages/core/src/utils/terminalSerializer.test.ts b/packages/core/src/utils/terminalSerializer.test.ts index fe3123bef..ab067d186 100644 --- a/packages/core/src/utils/terminalSerializer.test.ts +++ b/packages/core/src/utils/terminalSerializer.test.ts @@ -188,7 +188,12 @@ describe('terminalSerializer', () => { unwrapWrappedLines: true, }); const visibleText = result - .map((line) => line.map((token) => token.text).join('').trimEnd()) + .map((line) => + line + .map((token) => token.text) + .join('') + .trimEnd(), + ) .filter(Boolean); expect(visibleText).toEqual(['abcdefghijkl', 'short']); From f4207428317c207834fec228998ba01dec72a4e1 Mon Sep 17 00:00:00 2001 From: Shaojin Wen Date: Mon, 27 Apr 2026 16:54:10 +0800 Subject: [PATCH 25/36] feat(cli,core): LLM-generated summary labels for tool-call batches (#3538) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(cli,core): generate tool-use summaries for compact mode After each tool batch completes, fire a parallel fast-model call to generate a short git-commit-subject-style label summarizing what the batch accomplished (e.g. "Read txt files", "Searched in auth/"). In compact mode the label replaces the generic "Tool × N" header so N parallel tool calls collapse to a single semantic row. The fast-model call (~1s) runs fire-and-forget, overlapped with the next turn's API stream, so there is no perceived latency. Missing fast model, aborted turns, and model failures all degrade silently to the existing rendering. 
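
The paragraph above describes the whole label-generation flow in prose. A minimal TypeScript sketch of that shape — every name here (`FastModel`, `ToolResult`, `summarizeToolBatch`) is an illustrative assumption, not the actual qwen-code implementation:

```typescript
interface FastModel {
  complete(prompt: string): Promise<string>;
}

interface ToolResult {
  name: string;
  output: string;
}

async function summarizeToolBatch(
  fastModel: FastModel | undefined,
  results: ToolResult[],
  signal: AbortSignal,
): Promise<string | null> {
  // Missing fast model or an aborted turn degrades silently: the caller
  // keeps the generic "Tool × N" header.
  if (!fastModel || signal.aborted || results.length === 0) {
    return null;
  }
  // Cap each tool field (the commit message cites 300 chars per field).
  const prompt = results
    .map((r) => `${r.name}: ${r.output.slice(0, 300)}`)
    .join('\n');
  try {
    const raw = await fastModel.complete(prompt);
    // Bounded quote-strip: {1,10} instead of + caps regex work on
    // pathological model output, per the ReDoS fix in this patch series.
    const label = raw.trim().replace(/^["'`]{1,10}|["'`]{1,10}$/g, '').trim();
    return label || null;
  } catch {
    return null; // model failure also falls back to the existing rendering
  }
}
```

The caller would fire this without awaiting it, overlapping the ~1s call with the next turn's API stream, and decorate the tool group only once the promise resolves.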
The summary is also emitted as a `tool_use_summary` history entry with
`precedingToolUseIds`, keeping the shape compatible with SDK clients that
want to render collapsed tool views on their own.

Gated by `experimental.emitToolUseSummaries` (default on). Can be overridden
per-session with `QWEN_CODE_EMIT_TOOL_USE_SUMMARIES=0|1`.

The system prompt and truncation rules (300 chars per tool field, 200 chars
of trailing assistant text as intent prefix) match the existing behavior seen
in other tools that emit the same message type, so SDK consumers see a
consistent shape across clients.

* fix(core): bound cleanSummary quote-strip regex to avoid ReDoS

CodeQL js/polynomial-redos flagged the /^["'`]+|["'`]+$/g pattern in
cleanSummary because its input comes from an LLM (treated as uncontrolled).
The original regex is anchored and linear in practice, but tightening the
quantifier to {1,10} both satisfies the static check and caps engine work on
pathological model output with a long run of quotes. Ten opening/closing
quotes is well past anything a real label would produce.

* fix(cli): render tool_use_summary inline so full mode also shows the label

The summary was only visible in compact mode because the full-mode
ToolGroupMessage ignored the compactLabel prop. Compact mode got away with
this because mergeCompactToolGroups triggers refreshStatic(), which
re-renders the merged tool_group with its newly-looked-up label. Full mode
has no such refresh path, so when the fast-model call resolves *after* the
tool_group has been committed to the append-only `<Static>` region, there is
no way to retroactively decorate it. Switch to rendering `tool_use_summary`
as its own inline history item (a single dim `●