mirror of
https://github.com/QwenLM/qwen-code.git
synced 2026-04-28 11:41:04 +00:00
* docs: add auto-memory implementation log
* feat(core): add managed auto-memory storage scaffold
* feat(core): load managed auto-memory index
* feat(core): add managed auto-memory recall
* feat(core): add managed auto-memory extraction
* feat(cli): add managed auto-memory dream commands
* feat(core): add auxiliary side-query foundation
* feat(memory): add model-driven recall selection
* feat(memory): add model-driven extraction planner
* feat(core): add background task runtime foundation
* feat(memory): schedule auto dream in background
* feat(core): add background agent runner foundation
* feat(memory): add extraction agent planner
* feat(core): add dream agent planner
* feat(core): rebuild managed memory index
* feat(memory): add governance status commands
* feat(memory): add managed forget flow
* feat(core): harden background agent planning
* feat(memory): complete managed parity closure
* test(memory): add managed lifecycle integration coverage
* feat: same as cc
* feat(memory-ui): add memory saved notification and memory count badge
Feature 3 - Memory Saved Notification:
- Add HistoryItemMemorySaved type to types.ts
- Create MemorySavedMessage component for rendering '● Saved/Updated N memories'
- In useGeminiStream: detect in-turn memory writes via mapToDisplay's
memoryWriteCount field and emit 'memory_saved' history item after turn
- In client.ts: capture background dream/extract promises and expose
via consumePendingMemoryTaskPromises(); useGeminiStream listens
post-turn and emits 'Updated N memories' notification for background tasks
Feature 4 - Memory Count Badge:
- Add isMemoryOp field to IndividualToolCallDisplay
- Add memoryWriteCount/memoryReadCount to HistoryItemToolGroup
- Add detectMemoryOp() in useReactToolScheduler using isAutoMemPath
- ToolGroupMessage renders '● Recalled N memories, Wrote N memories' badge
at the top of tool groups that touch memory files
Fix: process.env bracket-access in paths.ts (noPropertyAccessFromIndexSignature)
Fix: MemoryDialog.test.tsx mock useSettings to satisfy SettingsProvider requirement
* fix(memory-ui): auto-approve memory writes, collapse memory tool groups, fix MEMORY.md path
Problem 1 - Auto-approve memory file operations:
- write-file.ts: getDefaultPermission() checks isAutoMemPath; returns 'allow'
for managed auto-memory files, 'ask' for all other files
- edit.ts: same pattern
Problem 2 - Feature 4 UX: collapse memory-only tool groups:
- ToolGroupMessage: detect when all tool calls have isMemoryOp set (pure memory
group) and all are complete; render compact '● Recalled/Wrote N memories
(ctrl+o to expand)' instead of individual tool call rows
- ctrl+o toggles expand/collapse when isFocused and group is memory-only
- Mixed groups (memory + other tools) keep badge-at-top behaviour
- Expanded state shows individual tool calls with '● Memory operations
(ctrl+o to collapse)' header
Problem 3 - MEMORY.md path mismatch:
- prompt.ts: Step 2 now references full absolute path ${memoryDir}/MEMORY.md
so the model writes to the correct location inside the memory directory,
not to the parent project directory
Fix tests:
- write-file.test.ts: add getProjectRoot to mockConfigInternal
- prompt.test.ts: update assertion to match full-path section header
* fix(memory-ui): fix duplicate notification, broken ctrl+o, and Edit tool detection
- Remove duplicate 'Saved N memories' notification: the tool group badge already
shows 'Wrote N memories'; the separate HistoryItemMemorySaved addItem after
onComplete was double-counting. Keep only the background-task path
(consumePendingMemoryTaskPromises).
- Remove ctrl+o expand: Ink's Static area freezes items on first render and
cannot respond to user input. useInput/useState(isExpanded) in a Static item
is a no-op. Removed the dead code; memory-only groups now always render as
the compact summary (no fake interactive hint).
- Fix Edit tool detection: detectMemoryOp was checking for 'edit_file' but the
real tool name constant is 'edit'. Also removed non-existent 'create_file'
(write_file covers all writes). Now editing MEMORY.md is correctly identified
as a memory write op, collapses to 'Wrote N memories', and is auto-approved.
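The detection fix above can be sketched as follows. The tool names `write_file` and `edit` (not `edit_file` or `create_file`) come from the commit text; the `ToolCall` shape and the path-containment helper are illustrative assumptions, not the actual qwen-code API:

```typescript
import * as path from 'node:path';

// Hypothetical shape of a scheduled tool call (assumption for illustration).
interface ToolCall {
  name: string;
  args: { file_path?: string };
}

// The real tool name constants are 'write_file' and 'edit' — not 'edit_file'.
const MEMORY_WRITE_TOOLS = new Set(['write_file', 'edit']);

// Containment check: is filePath inside the managed auto-memory directory?
function isAutoMemPath(memoryDir: string, filePath: string): boolean {
  const rel = path.relative(memoryDir, filePath);
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}

function detectMemoryOp(memoryDir: string, call: ToolCall): boolean {
  const filePath = call.args.file_path;
  if (!filePath) return false;
  return MEMORY_WRITE_TOOLS.has(call.name) && isAutoMemPath(memoryDir, filePath);
}
```

With the wrong name `edit_file` in the set, an `edit` call touching MEMORY.md would silently fail this check and render as an ordinary tool call.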
* fix(dream): run /dream as a visible submit_prompt turn, not a silent background agent
The previous implementation ran an AgentHeadless background agent that could
take 5+ minutes with zero UI feedback — user saw a blank screen for the entire
duration and then at most one line of text.
Fix: /dream now returns submit_prompt with the consolidation task prompt so it
runs as a regular AI conversation turn. Tool calls (read_file, write_file, edit,
grep_search, list_directory, glob) are immediately visible as collapsed tool
groups as the model works through the memory files — identical UX to Claude Code.
Also export buildConsolidationTaskPrompt from dreamAgentPlanner so dreamCommand
can reuse the same detailed consolidation prompt that was already written.
* fix(memory): auto-allow ls/glob/grep on memory base directory
Add getMemoryBaseDir() to getDefaultPermission() allow list in ls.ts,
glob.ts, and grep.ts — mirrors the existing pattern in read-file.ts.
Without this, ListFiles/Glob/Grep on ~/.qwen/* would trigger an
approval dialog, blocking /dream at its very first step.
* fix(background): prevent permission prompt hangs in background agents
Match Claude Code's headless-agent intent: background memory agents must never
block on interactive permission prompts.
Wrap background runtime config so getApprovalMode() returns YOLO, ensuring any
ask decision is auto-approved instead of hanging forever. Add regression test
covering the wrapped approval mode.
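The wrapping described above can be sketched with a Proxy that overrides a single accessor. The `ApprovalMode` values mirror the commit text; the config shape is a minimal assumption:

```typescript
// Assumed minimal slice of the runtime config interface.
enum ApprovalMode {
  DEFAULT = 'default',
  YOLO = 'yolo',
}

interface RuntimeConfig {
  getApprovalMode(): ApprovalMode;
}

// Wrap the config so background agents always see YOLO: any 'ask'
// decision downstream is auto-approved instead of hanging forever.
function withYoloApproval<T extends RuntimeConfig>(config: T): T {
  return new Proxy(config, {
    get(target, prop, receiver) {
      if (prop === 'getApprovalMode') {
        return () => ApprovalMode.YOLO;
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}
```

The original config object is untouched; only the background runtime sees the overridden approval mode.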
* fix(memory): run auto extract through forked agent
Make managed auto-memory extraction follow the Claude Code architecture:
background extraction now uses a forked agent to read/write memory files
directly, instead of planning patches and applying them with a separate
filesystem pipeline.
Keep the old patch/model path only as fallback if the forked agent fails.
Add regression tests covering the new execution path and tool whitelist.
* refactor(memory): remove legacy extract fallback pipeline
Delete the old patch/model/heuristic extraction path entirely.
Managed auto-memory extract now runs only through the forked-agent
execution flow, with no planner/apply fallback stages remaining.
Also remove obsolete exports/tests and update scheduler/integration
coverage to use the forked-agent-only architecture.
* refactor(memory): move auxiliary files out of memory/ directory
meta.json, extract-cursor.json, and consolidation.lock are internal
bookkeeping files, not user-visible memories. Move them one level up
to the project state dir (parent of memory/) so that the memory/
directory contains only MEMORY.md and topic files, matching the
clean layout of the upstream reference implementation.
Add getAutoMemoryProjectStateDir() helper in paths.ts and update the
three path accessors + store.test.ts path assertions accordingly.
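The resulting layout can be sketched as a pair of helpers. `getAutoMemoryProjectStateDir` is named in the commit; the sibling helper and directory names are assumptions for illustration:

```typescript
import * as path from 'node:path';

// memory/ holds only user-visible memories: MEMORY.md and topic files.
function getAutoMemoryDir(projectStateDir: string): string {
  return path.join(projectStateDir, 'memory');
}

// Internal bookkeeping (meta.json, extract-cursor.json, consolidation.lock)
// lives one level up, in the project state dir.
function getAutoMemoryProjectStateDir(memoryDir: string): string {
  return path.dirname(memoryDir);
}
```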
* fix(memory): record lastDreamAt after manual /dream run
The /dream command submits a prompt to the main agent (submit_prompt),
which writes memory files directly. Because it bypasses dreamScheduler,
meta.json was never updated and /memory always showed 'never'.
Fix by:
- Exporting writeDreamManualRunToMetadata() from dream.ts
- Adding optional onComplete callback to SubmitPromptActionReturn and
SubmitPromptResult (types.ts / commands/types.ts)
- Propagating onComplete through slashCommandProcessor.ts
- Firing onComplete after turn completion in useGeminiStream.ts
- Providing the callback in dreamCommand.ts to write lastDreamAt
* fix(memory): remove scope params from /remember in managed auto-memory mode
--global/--project are legacy save_memory tool concepts. In managed
auto-memory mode the forked agent decides the appropriate type
(user/feedback/project/reference) based on the content of the fact.
Also improve the prompt wording to explicitly ask the agent to choose
the correct type, reducing the tendency to default to 'project'.
* feat(ui): show '✦ dreaming' indicator in footer during background dream
Subscribe to getManagedAutoMemoryDreamTaskRegistry() in Footer via a
useDreamRunning() hook. While any dream task for the current project is
pending or running, display '✦ dreaming' in the right section of the
footer bar, between Debug Mode and context usage.
* refactor(memory): align dream/extract infrastructure with Claude Code patterns
Five improvements based on Claude Code parity audit:
1. Memoize getAutoMemoryRoot (paths.ts)
- Add _autoMemoryRootCache Map, keyed by projectRoot
- findCanonicalGitRoot() walks the filesystem per call; memoize avoids
repeated git-tree traversal on hot-path schedulers/scanners
- Expose clearAutoMemoryRootCache() for test teardown
2. Lock file stores PID + isProcessRunning reclaim (dreamScheduler.ts)
- acquireDreamLock() writes process.pid to the lock file body
- lockExists() reads PID and calls process.kill(pid, 0); dead/missing
PID reclaims the lock immediately instead of waiting 2h
- Stale threshold reduced to 1h (PID-reuse guard, same as CC)
3. Session scan throttle (dreamScheduler.ts)
- Add SESSION_SCAN_INTERVAL_MS = 10min (same as CC)
- Add lastSessionScanAt Map<projectRoot, number> to ManagedAutoMemoryDreamRuntime
- When time-gate passes but session-gate doesn't, throttle prevents
re-scanning the filesystem on every user turn
4. mtime-based session counting (dreamScheduler.ts)
- Replace fragile recentSessionIdsSinceDream Set in meta.json with
filesystem mtime scan (listSessionsTouchedSince)
- Mirrors Claude Code's listSessionsTouchedSince: reads session JSONL
files from Storage.getProjectDir()/chats/, filters by mtime > lastDreamAt
- Immune to meta.json corruption/loss; no per-turn metadata write
- ManagedAutoMemoryDreamRuntime accepts injectable SessionScannerFn
for clean unit testing without real session files
5. Extraction mutual exclusion extended to write_file/edit (extractScheduler.ts)
- historySliceUsesMemoryTool() now checks write_file/edit/replace/create_file
tool calls whose file_path is within isAutoMemPath()
- Previously only detected save_memory; missed direct file writes by
the main agent, causing redundant background extraction
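Point 2's PID-stamped lock can be sketched as below. The 1h stale threshold and the `process.kill(pid, 0)` liveness probe come from the commit text; file names and the rest are assumptions:

```typescript
import * as fs from 'node:fs';

const STALE_LOCK_MS = 60 * 60 * 1000; // 1h PID-reuse guard

// Signal 0 performs an existence check without sending a signal;
// it throws if the PID is not running (or not ours to signal).
function isProcessRunning(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

function lockIsHeld(lockPath: string): boolean {
  let stat: fs.Stats;
  try {
    stat = fs.statSync(lockPath);
  } catch {
    return false; // no lock file at all
  }
  const pid = Number.parseInt(fs.readFileSync(lockPath, 'utf8'), 10);
  if (!Number.isInteger(pid) || !isProcessRunning(pid)) {
    return false; // dead or unparsable PID: reclaim immediately
  }
  return Date.now() - stat.mtimeMs < STALE_LOCK_MS;
}

function acquireLock(lockPath: string): void {
  fs.writeFileSync(lockPath, String(process.pid));
}
```

A crashed dream run thus frees the lock as soon as its PID disappears, instead of blocking consolidation for the full stale window.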
* docs(memory): add user-facing memory docs, i18n for all locales, simplify /forget
- Add docs/users/features/memory.md: comprehensive user-facing guide covering
QWEN.md instructions, auto-memory behaviour, all memory commands, and
troubleshooting; replaces the placeholder auto-memory.md
- Update docs/users/features/_meta.ts: rename entry auto-memory → memory
- Update docs/users/features/commands.md: add /init, /remember, /forget,
/dream rows; fix /memory description; remove /init duplicate
- Update docs/users/configuration/settings.md: add memory.* settings section
(enableManagedAutoMemory, enableManagedAutoDream) between tools and permissions
- Remove /forget --apply flag: preview-then-apply flow replaced with direct
deletion; update forgetCommand.ts, en.js, zh.js accordingly
- Add all auto-memory i18n keys to de, ja, pt, ru locales (18 keys each):
Open auto-memory folder, Auto-memory/Auto-dream status lines, never/on/off,
✦ dreaming, /forget and /remember usage strings, all managed-memory messages
- Remove dead save_memory branch from extractScheduler.partWritesToMemory()
- Add ✦ dreaming indicator to Footer.tsx with i18n; fix Footer.test.tsx mocks
- Refactor MemoryDialog.tsx auto-dream status line to use i18n
- Remove save_memory tool (memoryTool.ts/test); clean up webui references
- Add extractionPlanner.ts, const.ts and associated tests
- Delete stale docs/users/configuration/memory.md and
docs/developers/tools/memory.md (content superseded)
* refactor(memory): remove all Claude Code references from comments and test names
* test(memory): remove empty placeholder test files that cause vitest to fail
* fix eslint
* fix test on Windows
* fix test
* fix(memory): address critical review findings from PR #3087
- fix(read-file): narrow auto-allow from getMemoryBaseDir() (~/.qwen) to
isAutoMemPath(projectRoot) to prevent exposing settings.json / OAuth
credentials without user approval (wenshao review)
- fix(forget): per-entry deletion instead of whole-file unlink
- assign stable per-entry IDs (relativePath:index for multi-entry files)
so the model can target individual entries without removing siblings
- rewrite file keeping unmatched entries; only unlink when file becomes
empty (wenshao review)
- fix(entries): round-trip correctness for multi-entry new-format bodies
- parseAutoMemoryEntries: plain-text line closes current entry and opens
a new one (was silently ignored when current was already set)
- renderAutoMemoryBody: emit blank line between adjacent entries so the
parser can detect entry boundaries on re-read (wenshao review)
- fix(entries): resolve two CodeQL polynomial-regex alerts
- indentedMatch: \s{2,}(?:[-*]\s+)? → [\t ]{2,}(?:[-*][\t ]+)?
- topLevelMatch: :\s*(.+)$ → :[ \t]*(\S.*)$
(github-advanced-security review)
- fix(scan.test): use forward-slash literal for relativePath expectation
since listMarkdownFiles() normalises all separators to '/' on all
platforms including Windows
* fix(memory): replace isAutoMemPath startsWith with path.relative()
Using path.relative() instead of string startsWith() is more robust
across platforms — it correctly handles Windows path-separator
differences and avoids potential edge cases where a path prefix match
could succeed on non-separator boundaries.
Addresses github-actions review item 3 (PR #3087).
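The pitfall this commit fixes is easy to demonstrate: a naive prefix check accepts sibling directories whose names merely start with the root, while the `path.relative()` form does not. Both helpers below are illustrative sketches:

```typescript
import * as path from 'node:path';

// Naive version: '/a/memx/f' "starts with" '/a/mem' — a false positive
// on a non-separator boundary.
function isAutoMemPathNaive(root: string, p: string): boolean {
  return p.startsWith(root);
}

// Robust version: a path is inside root iff the relative path is non-empty,
// does not escape upward, and is not absolute (different drive on Windows).
function isAutoMemPathRobust(root: string, p: string): boolean {
  const rel = path.relative(root, p);
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}
```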
* feat(telemetry): add auto-memory telemetry instrumentation
Add OpenTelemetry logs + metrics for the five auto-memory lifecycle
events: extract, dream, recall, forget, and remember.
Telemetry layer (packages/core/src/telemetry/):
- constants.ts: 5 new event-name constants
(qwen-code.memory.{extract,dream,recall,forget,remember})
- types.ts: 5 new event classes with typed constructor params
(MemoryExtractEvent, MemoryDreamEvent, MemoryRecallEvent,
MemoryForgetEvent, MemoryRememberEvent)
- metrics.ts: 8 new OTel instruments (5 Counters + 3 Histograms)
with recordMemoryXxx() helpers; registered inside initializeMetrics()
- loggers.ts: logMemoryExtract/Dream/Recall/Forget/Remember() — each
emits a structured log record and calls its recordXxx() counterpart
- index.ts: re-exports all new symbols
Instrumentation call-sites:
- extractScheduler.ts ManagedAutoMemoryExtractRuntime.runTask():
emits extract event with trigger=auto, completed/failed status,
patches_count, touched_topics, and wall-clock duration
- dream.ts runManagedAutoMemoryDream():
emits dream event with trigger=auto, updated/noop status,
deduped_entries, touched_topics, and duration; covers both
agent-planner and mechanical fallback paths
- recall.ts resolveRelevantAutoMemoryPromptForQuery():
emits recall event with strategy, docs_scanned/selected, and
duration; covers model, heuristic, and none paths
- forget.ts forgetManagedAutoMemoryEntries():
emits forget event with removed_entries_count, touched_topics,
and selection_strategy (model/heuristic/none)
- rememberCommand.ts action():
emits remember event with topic=managed|legacy at command
invocation time (before agent decides the actual memory type)
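The `recordMemoryXxx()` helper pattern can be sketched without the OTel dependency. Real code would use `@opentelemetry/api` Counter and Histogram instruments; here in-memory maps stand in (with status folded into the counter key where real code would use attributes). The event name follows the commit text; the event shape is an assumption:

```typescript
const counters = new Map<string, number>();
const histograms = new Map<string, number[]>();

function addToCounter(name: string, value: number): void {
  counters.set(name, (counters.get(name) ?? 0) + value);
}

function recordToHistogram(name: string, value: number): void {
  const bucket = histograms.get(name) ?? [];
  bucket.push(value);
  histograms.set(name, bucket);
}

// Assumed shape of the typed event constructed at the call-site.
interface MemoryExtractEvent {
  trigger: 'auto' | 'manual';
  status: 'completed' | 'failed';
  patchesCount: number;
  durationMs: number;
}

// Mirrors the logger/metric pairing: one counter bump per lifecycle event,
// plus a duration histogram sample.
function recordMemoryExtract(event: MemoryExtractEvent): void {
  addToCounter('qwen-code.memory.extract', 1);
  addToCounter(`qwen-code.memory.extract.${event.status}`, 1);
  recordToHistogram('qwen-code.memory.extract.duration', event.durationMs);
}
```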
* refactor(telemetry): remove memory forget/remember telemetry events
Remove EVENT_MEMORY_FORGET and EVENT_MEMORY_REMEMBER along with all
associated infrastructure that is no longer needed:
- constants.ts: remove EVENT_MEMORY_FORGET, EVENT_MEMORY_REMEMBER
- types.ts: remove MemoryForgetEvent, MemoryRememberEvent classes
- metrics.ts: remove MEMORY_FORGET_COUNT, MEMORY_REMEMBER_COUNT constants,
memoryForgetCounter, memoryRememberCounter module vars,
their initialization in initializeMetrics(), and
recordMemoryForgetMetrics(), recordMemoryRememberMetrics() functions
- loggers.ts: remove logMemoryForget(), logMemoryRemember() functions
and their imports
- index.ts: remove all re-exports for the above symbols
- memory/forget.ts: remove logMemoryForget call-site and import
- cli/rememberCommand.ts: remove logMemoryRemember call-sites and import
* change default value
* fix forked agent
* refactor(background): unify fork primitives into runForkedAgent + cleanup
- Merge runForkedQuery into runForkedAgent via TypeScript overloads:
with cacheSafeParams → GeminiChat single-turn path (ForkedQueryResult)
without cacheSafeParams → AgentHeadless multi-turn path (ForkedAgentResult)
- Delete forkedQuery.ts; move its test to background/forkedAgent.cache.test.ts
- Remove forkedQuery export from followup/index.ts
- Migrate all callers (suggestionGenerator, speculation, btwCommand, client)
to import from background/forkedAgent
- Add getFastModel() / setFastModel() to Config; expose in CLI config init
and ModelDialog / modelCommand
- Remove resolveFastModel() from AppContainer — now delegated to config.getFastModel()
- Strip Claude Code references from code comments
* fix(memory): address wenshao's critical review findings
- dream.ts: writeDreamManualRunToMetadata now persists lastDreamSessionId
and resets recentSessionIdsSinceDream, preventing auto-dream from firing
again in the same session after a manual /dream
- config.ts: gate managed auto-memory injection on getManagedAutoMemoryEnabled();
when disabled, previously saved memories are no longer injected into new sessions
- rememberCommand.ts: remove legacy save_memory branch (tool was removed);
fall back to submit_prompt directing agent to write to QWEN.md instead
- BuiltinCommandLoader.ts: only register /dream and /forget when managed
auto-memory is enabled, matching the feature's runtime availability
- forget.ts: return early in forgetManagedAutoMemoryMatches when matches is
empty, avoiding unnecessary directory scaffolding as a side effect
* fix test
* fix ci test
* feat(memory): align extract/dream agents to Claude Code patterns
- fix(client): move saveCacheSafeParams before early-return paths so
extract agents always have cache params available (fixes extract never
triggering in skipNextSpeakerCheck mode)
- feat(extract): add read-only shell tool + memory-scoped write
permissions; create inline createMemoryScopedAgentConfig() with
PermissionManager wrapper (isToolEnabled + evaluate) that allows only
read-only shell commands and write/edit within the auto-memory dir
- feat(extract): align prompt to Claude Code patterns — manifest block
listing existing files, parallel read-then-write strategy, two-step
save (memory file then index)
- feat(dream): remove mechanical fallback; runManagedAutoMemoryDream is
now agent-only and throws without config
- feat(dream): align prompt to Claude Code 4-phase structure
(Orient/Gather/Consolidate/Prune+Index); add narrow transcript grep,
relative→absolute date conversion, stale index pruning, index size cap
- fix(permissions): add isToolEnabled() to MemoryScopedPermissionManager
to prevent TypeError crash in CoreToolScheduler._schedule
- test: update dreamScheduler tests to mock dream.js; replace removed
mechanical-dedup test with scheduler infrastructure verification
* move doc to design
* refactor(memory): unify extract+dream background task management into MemoryBackgroundTaskHub
- Add memoryTaskHub.ts: single BackgroundTaskRegistry + BackgroundTaskDrainer shared
by all memory background tasks; exposes listExtractTasks() / listDreamTasks()
typed query helpers and a unified drain() method
- extractScheduler: ManagedAutoMemoryExtractRuntime accepts hub via constructor
(defaults to defaultMemoryTaskHub); test factory gets isolated fresh hub
- dreamScheduler: same pattern — sessionScanner + hub injection; BackgroundTask-
Scheduler initialized from injected hub; test factory gets isolated hub
- status.ts: replace two separate getRegistry() calls with defaultMemoryTaskHub
typed query methods
- Footer.tsx (useDreamRunning): subscribe to shared registry, filter by
DREAM_TASK_TYPE so extract tasks do not trigger the dream spinner
- index.ts: re-export memoryTaskHub.ts so defaultMemoryTaskHub/DREAM_TASK_TYPE/
EXTRACT_TASK_TYPE are available as top-level package exports
* refactor(background): introduce general-purpose BackgroundTaskHub
Replace memory-specific MemoryBackgroundTaskHub with a domain-agnostic
BackgroundTaskHub in the background/ layer. Any future background task
runtime (3rd, 4th, …) plugs in by accepting a hub via constructor
injection — no new infrastructure required.
Changes:
- Add background/taskHub.ts: BackgroundTaskHub (registry + drainer +
createScheduler() + listByType(taskType, projectRoot?)) and the
globalBackgroundTaskHub singleton. Zero knowledge of any task type.
- Delete memory/memoryTaskHub.ts: its narrow listExtractTasks /
listDreamTasks helpers are replaced by the generic listByType() call.
- Move EXTRACT_TASK_TYPE to extractScheduler.ts (owned by the runtime
that defines it); replace 3 hardcoded string literals with the const.
- Move DREAM_TASK_TYPE to dreamScheduler.ts; use hub.createScheduler()
instead of manually wiring new BackgroundTaskScheduler(reg, drain).
- status.ts: globalBackgroundTaskHub.listByType(EXTRACT_TASK_TYPE, ...)
- Footer.tsx: globalBackgroundTaskHub.registry (shared, filtered by type)
- index.ts: export background/taskHub.js; drop memory/memoryTaskHub.js
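The domain-agnostic hub described above reduces to a registry plus a generic query. Method and constant names follow the commit text; the task shape and implementation are a minimal sketch:

```typescript
interface BackgroundTask {
  taskType: string;
  projectRoot: string;
  status: 'pending' | 'running' | 'done';
}

class BackgroundTaskHub {
  private readonly tasks: BackgroundTask[] = [];

  register(task: BackgroundTask): void {
    this.tasks.push(task);
  }

  // The hub has zero knowledge of concrete task types; each runtime owns
  // its own constant (e.g. DREAM_TASK_TYPE, EXTRACT_TASK_TYPE) and queries
  // through this generic filter.
  listByType(taskType: string, projectRoot?: string): BackgroundTask[] {
    return this.tasks.filter(
      (t) =>
        t.taskType === taskType &&
        (projectRoot === undefined || t.projectRoot === projectRoot),
    );
  }
}
```

A third or fourth runtime plugs in by accepting a hub via constructor injection and registering tasks under its own type constant.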
* test(background): add BackgroundTaskHub unit tests and hub isolation checks
- background/taskHub.test.ts (11 tests):
- createScheduler(): tasks registered via scheduler appear in hub registry;
multiple calls return distinct scheduler instances
- listByType(): filters by taskType, filters by projectRoot, returns []
for unknown types, two types co-exist in registry but stay separated
- drain(): resolves false on timeout, resolves true when tasks complete,
resolves true immediately when no tasks in flight
- isolation: tasks in hubA do not appear in hubB
- globalBackgroundTaskHub: is a BackgroundTaskHub instance with registry/drainer
- extractScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry EXTRACT_TASK_TYPE
- dreamScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry DREAM_TASK_TYPE
* refactor(memory): consolidate all memory state into MemoryManager
Replace BackgroundTaskRegistry/Drainer/Scheduler/Hub helper classes and
module-level globals with a single MemoryManager class owned by Config.
## Changes
### New
- packages/core/src/memory/manager.ts — MemoryManager with:
- scheduleExtract / scheduleDream (inline queuing + deduplication logic)
- recall / forget / selectForgetCandidates / forgetMatches
- getStatus / drain / appendToUserMemory
- subscribe(listener) compatible with useSyncExternalStore
- storeWith() atomic record registration (no double-notify)
- Distinct skippedReason 'scan_throttled' vs 'min_sessions' for dream
- packages/core/src/utils/forkedAgent.ts — pure cache util (moved from background/)
- packages/core/src/utils/sideQuery.ts — pure util (moved from auxiliary/)
### Deleted
- background/taskRegistry, taskDrainer, taskScheduler, taskHub and all tests
- background/forkedAgent (moved to utils/)
- auxiliary/sideQuery (moved to utils/)
- memory/extractScheduler, dreamScheduler, state and all tests
### Modified
- config/config.ts — Config owns MemoryManager instance; getMemoryManager()
- core/client.ts — all memory ops via config.getMemoryManager()
- core/client.test.ts — mock MemoryManager instead of individual modules
- memory/status.ts — accepts MemoryManager param, drops globalBackgroundTaskHub
- index.ts — memory exports reduced from 14 modules to 5 (manager/types/paths/store/const)
- cli/commands/dreamCommand.ts — via config.getMemoryManager()
- cli/commands/forgetCommand.ts — via config.getMemoryManager()
- cli/components/Footer.tsx — useSyncExternalStore replacing setInterval polling
- cli/components/Footer.test.tsx — add getMemoryManager mock
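The `subscribe(listener)` contract that lets Footer drop its `setInterval` polling can be sketched as follows. The shape matches what React's `useSyncExternalStore(subscribe, getSnapshot)` expects; the MemoryManager internals here are assumptions:

```typescript
type Listener = () => void;

class MemoryManagerStore {
  private listeners = new Set<Listener>();
  private dreamRunning = false;

  // subscribe() returns an unsubscribe function, as useSyncExternalStore requires.
  subscribe = (listener: Listener): (() => void) => {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  };

  // Snapshot must be stable between notifications so React can skip renders.
  getSnapshot = (): boolean => this.dreamRunning;

  setDreamRunning(running: boolean): void {
    if (this.dreamRunning === running) return; // no-op: no notification
    this.dreamRunning = running;
    this.listeners.forEach((l) => l());
  }
}
```

In the component this would read `const dreaming = useSyncExternalStore(store.subscribe, store.getSnapshot)`, re-rendering only when the dream state actually changes.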
1186 lines
43 KiB
TypeScript
Executable file
/**
|
||
* @license
|
||
* Copyright 2025 Google LLC
|
||
* SPDX-License-Identifier: Apache-2.0
|
||
*/
|
||
|
||
import {
|
||
ApprovalMode,
|
||
AuthType,
|
||
Config,
|
||
DEFAULT_QWEN_EMBEDDING_MODEL,
|
||
FileDiscoveryService,
|
||
getAllGeminiMdFilenames,
|
||
loadServerHierarchicalMemory,
|
||
setGeminiMdFilename as setServerGeminiMdFilename,
|
||
resolveTelemetrySettings,
|
||
FatalConfigError,
|
||
Storage,
|
||
InputFormat,
|
||
OutputFormat,
|
||
SessionService,
|
||
ideContextStore,
|
||
type ResumedSessionData,
|
||
type LspClient,
|
||
type ToolName,
|
||
EditTool,
|
||
ShellTool,
|
||
WriteFileTool,
|
||
NativeLspClient,
|
||
createDebugLogger,
|
||
NativeLspService,
|
||
isToolEnabled,
|
||
} from '@qwen-code/qwen-code-core';
|
||
import { extensionsCommand } from '../commands/extensions.js';
|
||
import { hooksCommand } from '../commands/hooks.js';
|
||
import type { Settings } from './settings.js';
|
||
import { loadSettings, SettingScope } from './settings.js';
|
||
import { authCommand } from '../commands/auth.js';
|
||
import {
|
||
resolveCliGenerationConfig,
|
||
getAuthTypeFromEnv,
|
||
} from '../utils/modelConfigUtils.js';
|
||
import yargs, { type Argv } from 'yargs';
|
||
import { hideBin } from 'yargs/helpers';
|
||
import * as fs from 'node:fs';
|
||
import * as path from 'node:path';
|
||
import { homedir } from 'node:os';
|
||
|
||
import { resolvePath } from '../utils/resolvePath.js';
|
||
import { getCliVersion } from '../utils/version.js';
|
||
import { loadSandboxConfig } from './sandboxConfig.js';
|
||
import { appEvents } from '../utils/events.js';
|
||
import { mcpCommand } from '../commands/mcp.js';
|
||
import { channelCommand } from '../commands/channel.js';
|
||
|
||
// UUID v4 regex pattern for validation
|
||
const SESSION_ID_REGEX =
|
||
/^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}(-agent-[a-zA-Z0-9_.-]+)?$/i;
|
||
|
||
/**
|
||
* Validates if a string is a valid session ID format.
|
||
* Accepts a standard UUID, or a UUID followed by `-agent-{suffix}`
|
||
* (used by Arena to give each agent a deterministic session ID).
|
||
*/
|
||
function isValidSessionId(value: string): boolean {
|
||
return SESSION_ID_REGEX.test(value);
|
||
}
|
||
|
||
import { isWorkspaceTrusted } from './trustedFolders.js';
|
||
import { buildWebSearchConfig } from './webSearch.js';
|
||
import { writeStderrLine } from '../utils/stdioHelpers.js';
|
||
|
||
const debugLogger = createDebugLogger('CONFIG');
|
||
|
||
const VALID_APPROVAL_MODE_VALUES = [
|
||
'plan',
|
||
'default',
|
||
'auto-edit',
|
||
'yolo',
|
||
] as const;
|
||
|
||
function formatApprovalModeError(value: string): Error {
|
||
return new Error(
|
||
`Invalid approval mode: ${value}. Valid values are: ${VALID_APPROVAL_MODE_VALUES.join(
|
||
', ',
|
||
)}`,
|
||
);
|
||
}
|
||
|
||
function parseApprovalModeValue(value: string): ApprovalMode {
|
||
const normalized = value.trim().toLowerCase();
|
||
switch (normalized) {
|
||
case 'plan':
|
||
return ApprovalMode.PLAN;
|
||
case 'default':
|
||
return ApprovalMode.DEFAULT;
|
||
case 'yolo':
|
||
return ApprovalMode.YOLO;
|
||
case 'auto_edit':
|
||
case 'autoedit':
|
||
case 'auto-edit':
|
||
return ApprovalMode.AUTO_EDIT;
|
||
default:
|
||
throw formatApprovalModeError(value);
|
||
}
|
||
}
|
||
|
||
export interface CliArgs {
|
||
query: string | undefined;
|
||
model: string | undefined;
|
||
sandbox: boolean | string | undefined;
|
||
sandboxImage: string | undefined;
|
||
debug: boolean | undefined;
|
||
prompt: string | undefined;
|
||
promptInteractive: string | undefined;
|
||
systemPrompt: string | undefined;
|
||
appendSystemPrompt: string | undefined;
|
||
yolo: boolean | undefined;
|
||
approvalMode: string | undefined;
|
||
telemetry: boolean | undefined;
|
||
checkpointing: boolean | undefined;
|
||
telemetryTarget: string | undefined;
|
||
telemetryOtlpEndpoint: string | undefined;
|
||
telemetryOtlpProtocol: string | undefined;
|
||
telemetryLogPrompts: boolean | undefined;
|
||
telemetryOutfile: string | undefined;
|
||
allowedMcpServerNames: string[] | undefined;
|
||
allowedTools: string[] | undefined;
|
||
acp: boolean | undefined;
|
||
experimentalAcp: boolean | undefined;
|
||
experimentalLsp: boolean | undefined;
|
||
extensions: string[] | undefined;
|
||
listExtensions: boolean | undefined;
|
||
openaiLogging: boolean | undefined;
|
||
openaiApiKey: string | undefined;
|
||
openaiBaseUrl: string | undefined;
|
||
openaiLoggingDir: string | undefined;
|
||
proxy: string | undefined;
|
||
includeDirectories: string[] | undefined;
|
||
tavilyApiKey: string | undefined;
|
||
googleApiKey: string | undefined;
|
||
googleSearchEngineId: string | undefined;
|
||
webSearchDefault: string | undefined;
|
||
screenReader: boolean | undefined;
|
||
inputFormat?: string | undefined;
|
||
outputFormat: string | undefined;
|
||
includePartialMessages?: boolean;
|
||
/**
|
||
* If chat recording is disabled, the chat history would not be recorded,
|
||
* so --continue and --resume would not take effect.
|
||
*/
|
||
chatRecording: boolean | undefined;
|
||
/** Resume the most recent session for the current project */
|
||
continue: boolean | undefined;
|
||
/** Resume a specific session by its ID */
|
||
resume: string | undefined;
|
||
/** Specify a session ID without session resumption */
|
||
sessionId: string | undefined;
|
||
maxSessionTurns: number | undefined;
|
||
coreTools: string[] | undefined;
|
||
excludeTools: string[] | undefined;
|
||
authType: string | undefined;
|
||
channel: string | undefined;
|
||
}
|
||
|
||
function normalizeOutputFormat(
|
||
format: string | OutputFormat | undefined,
|
||
): OutputFormat | undefined {
|
||
if (!format) {
|
||
return undefined;
|
||
}
|
||
if (format === OutputFormat.STREAM_JSON) {
|
||
return OutputFormat.STREAM_JSON;
|
||
}
|
||
if (format === 'json' || format === OutputFormat.JSON) {
|
||
return OutputFormat.JSON;
|
||
}
|
||
return OutputFormat.TEXT;
|
||
}
|
||
|
||
export async function parseArguments(): Promise<CliArgs> {
|
||
let rawArgv = hideBin(process.argv);
|
||
|
||
// hack: if the first argument is the CLI entry point, remove it
|
||
if (
|
||
rawArgv.length > 0 &&
|
||
(rawArgv[0].endsWith('/dist/qwen-cli/cli.js') ||
|
||
rawArgv[0].endsWith('/dist/cli.js') ||
|
||
rawArgv[0].endsWith('/dist/cli/cli.js'))
|
||
) {
|
||
rawArgv = rawArgv.slice(1);
|
||
}
|
||
|
||
  const yargsInstance = yargs(rawArgv)
    .locale('en')
    .scriptName('qwen')
    .usage(
      'Usage: qwen [options] [command]\n\nQwen Code - Launch an interactive CLI, use -p/--prompt for non-interactive mode',
    )
    .option('telemetry', {
      type: 'boolean',
      description:
        'Enable telemetry? This flag specifically controls if telemetry is sent. Other --telemetry-* flags set specific values but do not enable telemetry on their own.',
    })
    .option('telemetry-target', {
      type: 'string',
      choices: ['local', 'gcp'],
      description:
        'Set the telemetry target (local or gcp). Overrides settings files.',
    })
    .option('telemetry-otlp-endpoint', {
      type: 'string',
      description:
        'Set the OTLP endpoint for telemetry. Overrides environment variables and settings files.',
    })
    .option('telemetry-otlp-protocol', {
      type: 'string',
      choices: ['grpc', 'http'],
      description:
        'Set the OTLP protocol for telemetry (grpc or http). Overrides settings files.',
    })
    .option('telemetry-log-prompts', {
      type: 'boolean',
      description:
        'Enable or disable logging of user prompts for telemetry. Overrides settings files.',
    })
    .option('telemetry-outfile', {
      type: 'string',
      description: 'Redirect all telemetry output to the specified file.',
    })
    .deprecateOption(
      'telemetry',
      'Use the "telemetry.enabled" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .deprecateOption(
      'telemetry-target',
      'Use the "telemetry.target" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .deprecateOption(
      'telemetry-otlp-endpoint',
      'Use the "telemetry.otlpEndpoint" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .deprecateOption(
      'telemetry-otlp-protocol',
      'Use the "telemetry.otlpProtocol" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .deprecateOption(
      'telemetry-log-prompts',
      'Use the "telemetry.logPrompts" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .deprecateOption(
      'telemetry-outfile',
      'Use the "telemetry.outfile" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .option('debug', {
      alias: 'd',
      type: 'boolean',
      description: 'Run in debug mode?',
      default: false,
    })
    .option('proxy', {
      type: 'string',
      description: 'Proxy for Qwen Code, like scheme://user:password@host:port',
    })
    .deprecateOption(
      'proxy',
      'Use the "proxy" setting in settings.json instead. This flag will be removed in a future version.',
    )
    .option('chat-recording', {
      type: 'boolean',
      description:
        'Enable chat recording to disk. If false, chat history is not saved and --continue/--resume will not work.',
    })
    .command('$0 [query..]', 'Launch Qwen Code CLI', (yargsInstance: Argv) =>
      yargsInstance
        .positional('query', {
          description:
            'Positional prompt. Defaults to one-shot; use -i/--prompt-interactive for interactive.',
        })
        .option('model', {
          alias: 'm',
          type: 'string',
          description: `Model`,
        })
        .option('prompt', {
          alias: 'p',
          type: 'string',
          description: 'Prompt. Appended to input on stdin (if any).',
        })
        .option('prompt-interactive', {
          alias: 'i',
          type: 'string',
          description:
            'Execute the provided prompt and continue in interactive mode',
        })
        .option('system-prompt', {
          type: 'string',
          description:
            'Override the main session system prompt for this run. Can be combined with --append-system-prompt.',
        })
        .option('append-system-prompt', {
          type: 'string',
          description:
            'Append instructions to the main session system prompt for this run. Can be combined with --system-prompt.',
        })
        .option('sandbox', {
          alias: 's',
          type: 'boolean',
          description: 'Run in sandbox?',
        })
        .option('sandbox-image', {
          type: 'string',
          description: 'Sandbox image URI.',
        })
        .option('yolo', {
          alias: 'y',
          type: 'boolean',
          description:
            'Automatically accept all actions (aka YOLO mode, see https://www.youtube.com/watch?v=xvFZjo5PgG0 for more details)?',
          default: false,
        })
        .option('approval-mode', {
          type: 'string',
          choices: ['plan', 'default', 'auto-edit', 'yolo'],
          description:
            'Set the approval mode: plan (plan only), default (prompt for approval), auto-edit (auto-approve edit tools), yolo (auto-approve all tools)',
        })
        .option('checkpointing', {
          type: 'boolean',
          description: 'Enables checkpointing of file edits',
          default: false,
        })
        .option('acp', {
          type: 'boolean',
          description: 'Starts the agent in ACP mode',
        })
        .option('experimental-acp', {
          type: 'boolean',
          description:
            'Starts the agent in ACP mode (deprecated, use --acp instead)',
          hidden: true,
        })
        .option('experimental-skills', {
          type: 'boolean',
          description:
            'Deprecated: Skills are now enabled by default. This flag is ignored.',
          hidden: true,
        })
        .option('experimental-lsp', {
          type: 'boolean',
          description:
            'Enable experimental LSP (Language Server Protocol) feature for code intelligence',
          default: false,
        })
        .option('channel', {
          type: 'string',
          choices: ['VSCode', 'ACP', 'SDK', 'CI'],
          description: 'Channel identifier (VSCode, ACP, SDK, CI)',
        })
        .option('allowed-mcp-server-names', {
          type: 'array',
          string: true,
          description: 'Allowed MCP server names',
          coerce: (mcpServerNames: string[]) =>
            // Handle comma-separated values
            mcpServerNames.flatMap((mcpServerName) =>
              mcpServerName.split(',').map((m) => m.trim()),
            ),
        })
        .option('allowed-tools', {
          type: 'array',
          string: true,
          description: 'Tools that are allowed to run without confirmation',
          coerce: (tools: string[]) =>
            // Handle comma-separated values
            tools.flatMap((tool) => tool.split(',').map((t) => t.trim())),
        })
        .option('extensions', {
          alias: 'e',
          type: 'array',
          string: true,
          description:
            'A list of extensions to use. If not provided, all extensions are used.',
          coerce: (extensions: string[]) =>
            // Handle comma-separated values
            extensions.flatMap((extension) =>
              extension.split(',').map((e) => e.trim()),
            ),
        })
        .option('list-extensions', {
          alias: 'l',
          type: 'boolean',
          description: 'List all available extensions and exit.',
        })
        .option('include-directories', {
          alias: 'add-dir',
          type: 'array',
          string: true,
          description:
            'Additional directories to include in the workspace (comma-separated or multiple --include-directories)',
          coerce: (dirs: string[]) =>
            // Handle comma-separated values
            dirs.flatMap((dir) => dir.split(',').map((d) => d.trim())),
        })
        .option('openai-logging', {
          type: 'boolean',
          description:
            'Enable logging of OpenAI API calls for debugging and analysis',
        })
        .option('openai-logging-dir', {
          type: 'string',
          description:
            'Custom directory path for OpenAI API logs. Overrides settings files.',
        })
        .option('openai-api-key', {
          type: 'string',
          description: 'OpenAI API key to use for authentication',
        })
        .option('openai-base-url', {
          type: 'string',
          description: 'OpenAI base URL (for custom endpoints)',
        })
        .option('tavily-api-key', {
          type: 'string',
          description: 'Tavily API key for web search',
        })
        .option('google-api-key', {
          type: 'string',
          description: 'Google Custom Search API key',
        })
        .option('google-search-engine-id', {
          type: 'string',
          description: 'Google Custom Search Engine ID',
        })
        .option('web-search-default', {
          type: 'string',
          description:
            'Default web search provider (dashscope, tavily, google)',
        })
        .option('screen-reader', {
          type: 'boolean',
          description: 'Enable screen reader mode for accessibility.',
        })
        .option('input-format', {
          type: 'string',
          choices: ['text', 'stream-json'],
          description: 'The format consumed from standard input.',
          default: 'text',
        })
        .option('output-format', {
          alias: 'o',
          type: 'string',
          description: 'The format of the CLI output.',
          choices: ['text', 'json', 'stream-json'],
        })
        .option('include-partial-messages', {
          type: 'boolean',
          description:
            'Include partial assistant messages when using stream-json output.',
          default: false,
        })
        .option('continue', {
          alias: 'c',
          type: 'boolean',
          description:
            'Resume the most recent session for the current project.',
          default: false,
        })
        .option('resume', {
          alias: 'r',
          type: 'string',
          description:
            'Resume a specific session by its ID. Use without an ID to show session picker.',
        })
        .option('session-id', {
          type: 'string',
          description: 'Specify a session ID for this run.',
        })
        .option('max-session-turns', {
          type: 'number',
          description: 'Maximum number of session turns',
        })
        .option('core-tools', {
          type: 'array',
          string: true,
          description: 'Core tool paths',
          coerce: (tools: string[]) =>
            tools.flatMap((tool) => tool.split(',').map((t) => t.trim())),
        })
        .option('exclude-tools', {
          type: 'array',
          string: true,
          description: 'Tools to exclude',
          coerce: (tools: string[]) =>
            tools.flatMap((tool) => tool.split(',').map((t) => t.trim())),
        })
        .option('allowed-tools', {
          type: 'array',
          string: true,
          description: 'Tools to allow, will bypass confirmation',
          coerce: (tools: string[]) =>
            tools.flatMap((tool) => tool.split(',').map((t) => t.trim())),
        })
        .option('auth-type', {
          type: 'string',
          choices: [
            AuthType.USE_OPENAI,
            AuthType.USE_ANTHROPIC,
            AuthType.QWEN_OAUTH,
            AuthType.USE_GEMINI,
            AuthType.USE_VERTEX_AI,
          ],
          description: 'Authentication type',
        })
        .deprecateOption(
          'sandbox-image',
          'Use the "tools.sandboxImage" setting in settings.json instead. This flag will be removed in a future version.',
        )
        .deprecateOption(
          'checkpointing',
          'Use the "general.checkpointing.enabled" setting in settings.json instead. This flag will be removed in a future version.',
        )
        .deprecateOption(
          'prompt',
          'Use the positional prompt instead. This flag will be removed in a future version.',
        )
        // Ensure validation flows through .fail() for clean UX
        .fail((msg: string, err: Error | undefined, yargs: Argv) => {
          writeStderrLine(msg || err?.message || 'Unknown error');
          yargs.showHelp();
          process.exit(1);
        })
        .check((argv: { [x: string]: unknown }) => {
          // The 'query' positional can be a string (for one arg) or string[] (for multiple).
          // This guard safely checks if any positional argument was provided.
          const query = argv['query'] as string | string[] | undefined;
          const hasPositionalQuery = Array.isArray(query)
            ? query.length > 0
            : !!query;

          if (argv['prompt'] && hasPositionalQuery) {
            return 'Cannot use both a positional prompt and the --prompt (-p) flag together';
          }
          if (argv['prompt'] && argv['promptInteractive']) {
            return 'Cannot use both --prompt (-p) and --prompt-interactive (-i) together';
          }
          if (argv['yolo'] && argv['approvalMode']) {
            return 'Cannot use both --yolo (-y) and --approval-mode together. Use --approval-mode=yolo instead.';
          }
          if (
            argv['includePartialMessages'] &&
            argv['outputFormat'] !== OutputFormat.STREAM_JSON
          ) {
            return '--include-partial-messages requires --output-format stream-json';
          }
          if (
            argv['inputFormat'] === 'stream-json' &&
            argv['outputFormat'] !== OutputFormat.STREAM_JSON
          ) {
            return '--input-format stream-json requires --output-format stream-json';
          }
          if (argv['continue'] && argv['resume']) {
            return 'Cannot use both --continue and --resume together. Use --continue to resume the latest session, or --resume <sessionId> to resume a specific session.';
          }
          if (argv['sessionId'] && (argv['continue'] || argv['resume'])) {
            return 'Cannot use --session-id with --continue or --resume. Use --session-id to start a new session with a specific ID, or use --continue/--resume to resume an existing session.';
          }
          if (
            argv['sessionId'] &&
            !isValidSessionId(argv['sessionId'] as string)
          ) {
            return `Invalid --session-id: "${argv['sessionId']}". Must be a valid UUID (e.g., "123e4567-e89b-12d3-a456-426614174000").`;
          }
          if (argv['resume'] && !isValidSessionId(argv['resume'] as string)) {
            return `Invalid --resume: "${argv['resume']}". Must be a valid UUID (e.g., "123e4567-e89b-12d3-a456-426614174000").`;
          }
          return true;
        }),
    )
    // Register MCP subcommands
    .command(mcpCommand)
    // Register Extension subcommands
    .command(extensionsCommand)
    // Register Auth subcommands
    .command(authCommand)
    // Register Hooks subcommands
    .command(hooksCommand)
    // Register Channel subcommands
    .command(channelCommand);

  yargsInstance
    .version(await getCliVersion()) // This will enable the --version flag based on package.json
    .alias('v', 'version')
    .help()
    .alias('h', 'help')
    .strict()
    .demandCommand(0, 0); // Allow base command to run with no subcommands

  yargsInstance.wrap(yargsInstance.terminalWidth());
  const result = await yargsInstance.parse();

  // If yargs handled --help/--version it will have exited; nothing to do here.

  // Handle case where MCP subcommands are executed - they should exit the process
  // and not return to main CLI logic
  if (
    result._.length > 0 &&
    (result._[0] === 'mcp' ||
      result._[0] === 'extensions' ||
      result._[0] === 'hooks' ||
      result._[0] === 'channel')
  ) {
    // MCP/Extensions/Hooks commands handle their own execution and process exit
    process.exit(0);
  }

  // Normalize query args: handle both quoted "@path file" and unquoted @path file
  const queryArg = (result as { query?: string | string[] | undefined }).query;
  const q: string | undefined = Array.isArray(queryArg)
    ? queryArg.join(' ')
    : queryArg;

  // Route positional args: explicit -i flag -> interactive; else -> one-shot (even for @commands)
  if (q && !result['prompt']) {
    const hasExplicitInteractive =
      result['promptInteractive'] === '' || !!result['promptInteractive'];
    if (hasExplicitInteractive) {
      result['promptInteractive'] = q;
    } else {
      result['prompt'] = q;
    }
  }

  // Keep CliArgs.query as a string for downstream typing
  (result as Record<string, unknown>)['query'] = q || undefined;

  // The import format is now only controlled by settings.memoryImportFormat
  // We no longer accept it as a CLI argument

  // Handle deprecated --experimental-acp flag
  if (result['experimentalAcp']) {
    writeStderrLine(
      '\x1b[33m⚠ Warning: --experimental-acp is deprecated and will be removed in a future release. Please use --acp instead.\x1b[0m',
    );
    // Map experimental-acp to acp if acp is not explicitly set
    if (!result['acp']) {
      (result as Record<string, unknown>)['acp'] = true;
    }
  }

  // Apply ACP fallback: if acp or experimental-acp is present but no explicit --channel, treat as ACP
  if ((result['acp'] || result['experimentalAcp']) && !result['channel']) {
    (result as Record<string, unknown>)['channel'] = 'ACP';
  }

  return result as unknown as CliArgs;
}

// This function is now a thin wrapper around the server's implementation.
// It's kept in the CLI for now as App.tsx directly calls it for memory refresh.
// TODO: Consider if App.tsx should get memory via a server call or if Config should refresh itself.
export async function loadHierarchicalGeminiMemory(
  currentWorkingDirectory: string,
  includeDirectoriesToReadGemini: readonly string[] = [],
  fileService: FileDiscoveryService,
  extensionContextFilePaths: string[] = [],
  folderTrust: boolean,
  memoryImportFormat: 'flat' | 'tree' = 'tree',
): Promise<{ memoryContent: string; fileCount: number }> {
  // FIX: Use real, canonical paths for a reliable comparison to handle symlinks.
  const realCwd = fs.realpathSync(path.resolve(currentWorkingDirectory));
  const realHome = fs.realpathSync(path.resolve(homedir()));
  const isHomeDirectory = realCwd === realHome;

  // If it is the home directory, pass an empty string to the core memory
  // function to signal that it should skip the workspace search.
  const effectiveCwd = isHomeDirectory ? '' : currentWorkingDirectory;

  // Directly call the server function with the corrected path.
  return loadServerHierarchicalMemory(
    effectiveCwd,
    includeDirectoriesToReadGemini,
    fileService,
    extensionContextFilePaths,
    folderTrust,
    memoryImportFormat,
  );
}

export function isDebugMode(argv: CliArgs): boolean {
  return (
    argv.debug ||
    [process.env['DEBUG'], process.env['DEBUG_MODE']].some(
      (v) => v === 'true' || v === '1',
    )
  );
}

export async function loadCliConfig(
  settings: Settings,
  argv: CliArgs,
  cwd: string = process.cwd(),
  overrideExtensions?: string[],
  /**
   * Optional separated hooks for proper source attribution.
   * If provided, these override settings.hooks for hook loading.
   */
  hooksConfig?: {
    userHooks?: Record<string, unknown>;
    projectHooks?: Record<string, unknown>;
  },
): Promise<Config> {
  const debugMode = isDebugMode(argv);

  // Set runtime output directory from settings (env var QWEN_RUNTIME_DIR
  // is auto-detected inside getRuntimeBaseDir() at each call site).
  // Pass cwd so that relative paths like ".qwen" resolve per-project.
  Storage.setRuntimeBaseDir(settings.advanced?.runtimeOutputDir, cwd);

  const ideMode = settings.ide?.enabled ?? false;

  const folderTrust = settings.security?.folderTrust?.enabled ?? false;
  const trustedFolder = isWorkspaceTrusted(settings)?.isTrusted ?? true;

  // Set the context filename in the server's memoryTool module BEFORE loading memory
  // TODO(b/343434939): This is a bit of a hack. The contextFileName should ideally be passed
  // directly to the Config constructor in core, and have core handle setGeminiMdFilename.
  // However, loadHierarchicalGeminiMemory is called *before* createServerConfig.
  if (settings.context?.fileName) {
    setServerGeminiMdFilename(settings.context.fileName);
  } else {
    // Reset to default context filenames if not provided in settings.
    setServerGeminiMdFilename(getAllGeminiMdFilenames());
  }

  // Automatically load output-language.md if it exists
  const projectStorage = new Storage(cwd);
  const projectOutputLanguagePath = path.join(
    projectStorage.getQwenDir(),
    'output-language.md',
  );
  const globalOutputLanguagePath = path.join(
    Storage.getGlobalQwenDir(),
    'output-language.md',
  );

  let outputLanguageFilePath: string | undefined;
  if (fs.existsSync(projectOutputLanguagePath)) {
    outputLanguageFilePath = projectOutputLanguagePath;
  } else if (fs.existsSync(globalOutputLanguagePath)) {
    outputLanguageFilePath = globalOutputLanguagePath;
  }

  const fileService = new FileDiscoveryService(cwd);

  const includeDirectories = (settings.context?.includeDirectories || [])
    .map(resolvePath)
    .concat((argv.includeDirectories || []).map(resolvePath));

  // LSP configuration: enabled only via --experimental-lsp flag
  const lspEnabled = argv.experimentalLsp === true;
  let lspClient: LspClient | undefined;
  const question = argv.promptInteractive || argv.prompt || '';
  const inputFormat: InputFormat =
    (argv.inputFormat as InputFormat | undefined) ?? InputFormat.TEXT;
  const argvOutputFormat = normalizeOutputFormat(
    argv.outputFormat as string | OutputFormat | undefined,
  );
  const settingsOutputFormat = normalizeOutputFormat(settings.output?.format);
  const outputFormat =
    argvOutputFormat ?? settingsOutputFormat ?? OutputFormat.TEXT;
  const outputSettingsFormat: OutputFormat =
    outputFormat === OutputFormat.STREAM_JSON
      ? settingsOutputFormat &&
        settingsOutputFormat !== OutputFormat.STREAM_JSON
        ? settingsOutputFormat
        : OutputFormat.TEXT
      : (outputFormat as OutputFormat);
  const includePartialMessages = Boolean(argv.includePartialMessages);

  // Determine approval mode with backward compatibility
  let approvalMode: ApprovalMode;
  if (argv.approvalMode) {
    approvalMode = parseApprovalModeValue(argv.approvalMode);
  } else if (argv.yolo) {
    approvalMode = ApprovalMode.YOLO;
  } else if (settings.tools?.approvalMode) {
    approvalMode = parseApprovalModeValue(settings.tools.approvalMode);
  } else {
    approvalMode = ApprovalMode.DEFAULT;
  }

  // Force approval mode to default if the folder is not trusted.
  if (
    !trustedFolder &&
    approvalMode !== ApprovalMode.DEFAULT &&
    approvalMode !== ApprovalMode.PLAN
  ) {
    writeStderrLine(
      `Approval mode overridden to "default" because the current folder is not trusted.`,
    );
    approvalMode = ApprovalMode.DEFAULT;
  }

  let telemetrySettings;
  try {
    telemetrySettings = await resolveTelemetrySettings({
      argv,
      env: process.env as unknown as Record<string, string | undefined>,
      settings: settings.telemetry,
    });
  } catch (err) {
    if (err instanceof FatalConfigError) {
      throw new FatalConfigError(
        `Invalid telemetry configuration: ${err.message}.`,
      );
    }
    throw err;
  }

  // Interactive mode determination with priority:
  // 1. If promptInteractive (-i flag) is provided, it is explicitly interactive
  // 2. If outputFormat is stream-json or json (no matter input-format) along with query or prompt, it is non-interactive
  // 3. If no query or prompt is provided, check isTTY: TTY means interactive, non-TTY means non-interactive
  const hasQuery = !!argv.query;
  const hasPrompt = !!argv.prompt;
  let interactive: boolean;
  if (argv.promptInteractive) {
    // Priority 1: Explicit -i flag means interactive
    interactive = true;
  } else if (
    (outputFormat === OutputFormat.STREAM_JSON ||
      outputFormat === OutputFormat.JSON) &&
    (hasQuery || hasPrompt)
  ) {
    // Priority 2: JSON/stream-json output with query/prompt means non-interactive
    interactive = false;
  } else if (!hasQuery && !hasPrompt) {
    // Priority 3: No query or prompt means interactive only if TTY (format arguments ignored)
    interactive = process.stdin.isTTY ?? false;
  } else {
    // Default: If we have query/prompt but output format is TEXT, assume non-interactive
    // (fallback for edge cases where query/prompt is provided with TEXT output)
    interactive = false;
  }

  // ── Unified permissions construction ─────────────────────────────────────
  // All permission sources are merged here, before constructing Config.
  // The resulting three arrays are the single source of truth that Config /
  // PermissionManager will use.
  //
  // Sources (in order of precedence within each list):
  // 1. settings.permissions.{allow,ask,deny} (persistent, merged by LoadedSettings)
  // 2. argv.coreTools → allow (allowlist mode: only these tools are available)
  // 3. argv.allowedTools → allow (auto-approve these tools/commands)
  // 4. argv.excludeTools → deny (block these tools completely)
  // 5. Non-interactive mode exclusions → deny (unless explicitly allowed above)

  // Start from settings-level rules.
  // Read from both new `permissions` and legacy `tools` paths for compatibility.
  // Note: settings.tools.core / argv.coreTools are intentionally NOT merged into
  // mergedAllow — they have whitelist semantics (only listed tools are registered),
  // not auto-approve semantics. They are passed via the `coreTools` Config param
  // and handled by PermissionManager.coreToolsAllowList.
  const resolvedCoreTools: string[] = [
    ...(argv.coreTools ?? []),
    ...(settings.tools?.core ?? []),
  ];
  const mergedAllow: string[] = [
    ...(settings.permissions?.allow ?? []),
    ...(settings.tools?.allowed ?? []),
  ];
  const mergedAsk: string[] = [...(settings.permissions?.ask ?? [])];
  const mergedDeny: string[] = [
    ...(settings.permissions?.deny ?? []),
    ...(settings.tools?.exclude ?? []),
  ];

  // argv.allowedTools adds allow rules (auto-approve).
  for (const t of argv.allowedTools ?? []) {
    if (t && !mergedAllow.includes(t)) mergedAllow.push(t);
  }

  // argv.excludeTools adds deny rules.
  for (const t of argv.excludeTools ?? []) {
    if (t && !mergedDeny.includes(t)) mergedDeny.push(t);
  }

  // Helper: check if a tool is explicitly covered by an allow rule OR by the
  // coreTools whitelist. Uses alias matching for coreTools (via isToolEnabled)
  // to preserve the original behaviour where "ShellTool", "Shell", and
  // "run_shell_command" are all accepted as the same tool.
  const isExplicitlyAllowed = (toolName: ToolName): boolean => {
    const name = toolName as string;
    // 1. Check permissions.allow / allowedTools rules.
    if (
      mergedAllow.some((rule) => {
        const openParen = rule.indexOf('(');
        const ruleName =
          openParen === -1 ? rule.trim() : rule.substring(0, openParen).trim();
        return ruleName === name;
      })
    ) {
      return true;
    }
    // 2. Check coreTools whitelist (with alias matching).
    // If coreTools is non-empty and explicitly includes this tool, it is
    // considered allowed for non-interactive mode exclusion purposes.
    if (resolvedCoreTools.length > 0) {
      return isToolEnabled(toolName, resolvedCoreTools, []);
    }
    return false;
  };

  // In non-interactive mode, tools that require a user prompt are denied unless
  // the caller has explicitly allowed them. Stream-JSON input is excluded from
  // this logic because approval can be sent programmatically via JSON messages.
  const isAcpMode = argv.acp || argv.experimentalAcp;
  if (!interactive && !isAcpMode && inputFormat !== InputFormat.STREAM_JSON) {
    const denyUnlessAllowed = (toolName: ToolName): void => {
      if (!isExplicitlyAllowed(toolName)) {
        const name = toolName as string;
        if (!mergedDeny.includes(name)) mergedDeny.push(name);
      }
    };

    switch (approvalMode) {
      case ApprovalMode.PLAN:
      case ApprovalMode.DEFAULT:
        // Deny all write/execute tools unless explicitly allowed.
        denyUnlessAllowed(ShellTool.Name as ToolName);
        denyUnlessAllowed(EditTool.Name as ToolName);
        denyUnlessAllowed(WriteFileTool.Name as ToolName);
        break;
      case ApprovalMode.AUTO_EDIT:
        // Only shell requires a prompt in auto-edit mode.
        denyUnlessAllowed(ShellTool.Name as ToolName);
        break;
      case ApprovalMode.YOLO:
        // No extra denials for YOLO mode.
        break;
      default:
        break;
    }
  }

  let allowedMcpServers: Set<string> | undefined;
  let excludedMcpServers: Set<string> | undefined;
  if (argv.allowedMcpServerNames) {
    allowedMcpServers = new Set(argv.allowedMcpServerNames.filter(Boolean));
    excludedMcpServers = undefined;
  } else {
    allowedMcpServers = settings.mcp?.allowed
      ? new Set(settings.mcp.allowed.filter(Boolean))
      : undefined;
    excludedMcpServers = settings.mcp?.excluded
      ? new Set(settings.mcp.excluded.filter(Boolean))
      : undefined;
  }

  const selectedAuthType =
    (argv.authType as AuthType | undefined) ||
    settings.security?.auth?.selectedType ||
    /* getAuthTypeFromEnv means no authType was explicitly provided, we infer the authType from env vars */
    getAuthTypeFromEnv();

  // Unified resolution of generation config with source attribution
  const resolvedCliConfig = resolveCliGenerationConfig({
    argv: {
      model: argv.model,
      openaiApiKey: argv.openaiApiKey,
      openaiBaseUrl: argv.openaiBaseUrl,
      openaiLogging: argv.openaiLogging,
      openaiLoggingDir: argv.openaiLoggingDir,
    },
    settings,
    selectedAuthType,
    env: process.env as Record<string, string | undefined>,
  });

  const { model: resolvedModel } = resolvedCliConfig;

  const sandboxConfig = await loadSandboxConfig(settings, argv);
  const screenReader =
    argv.screenReader !== undefined
      ? argv.screenReader
      : (settings.ui?.accessibility?.screenReader ?? false);

  let sessionId: string | undefined;
  let sessionData: ResumedSessionData | undefined;

  if (argv.continue || argv.resume) {
    const sessionService = new SessionService(cwd);
    if (argv.continue) {
      sessionData = await sessionService.loadLastSession();
      if (sessionData) {
        sessionId = sessionData.conversation.sessionId;
      }
    }

    if (argv.resume) {
      sessionId = argv.resume;
      sessionData = await sessionService.loadSession(argv.resume);
      if (!sessionData) {
        const message = `No saved session found with ID ${argv.resume}. Run \`qwen --resume\` without an ID to choose from existing sessions.`;
        writeStderrLine(message);
        process.exit(1);
      }
    }
  } else if (argv['sessionId']) {
    // Use provided session ID without session resumption
    // Check if session ID is already in use
    const sessionService = new SessionService(cwd);
    const exists = await sessionService.sessionExists(argv['sessionId']);
    if (exists) {
      const message = `Error: Session Id ${argv['sessionId']} is already in use.`;
      writeStderrLine(message);
      process.exit(1);
    }
    sessionId = argv['sessionId'];
  }

  const modelProvidersConfig = settings.modelProviders;

  const config = new Config({
    sessionId,
    sessionData,
    embeddingModel: DEFAULT_QWEN_EMBEDDING_MODEL,
    sandbox: sandboxConfig,
    targetDir: cwd,
    includeDirectories,
    loadMemoryFromIncludeDirectories:
      settings.context?.loadFromIncludeDirectories || false,
    importFormat: settings.context?.importFormat || 'tree',
    debugMode,
    question,
    systemPrompt: argv.systemPrompt,
    appendSystemPrompt: argv.appendSystemPrompt,
    // Legacy fields – kept for backward compatibility with getCoreTools() etc.
    coreTools: argv.coreTools || settings.tools?.core || undefined,
    allowedTools: argv.allowedTools || settings.tools?.allowed || undefined,
    excludeTools: mergedDeny,
    // New unified permissions (PermissionManager source of truth).
    permissions: {
      allow: mergedAllow.length > 0 ? mergedAllow : undefined,
      ask: mergedAsk.length > 0 ? mergedAsk : undefined,
      deny: mergedDeny.length > 0 ? mergedDeny : undefined,
    },
    // Permission rule persistence callback (writes to settings files).
    onPersistPermissionRule: async (scope, ruleType, rule) => {
      const currentSettings = loadSettings(cwd);
      const settingScope =
        scope === 'project' ? SettingScope.Workspace : SettingScope.User;
      const key = `permissions.${ruleType}`;
      const currentRules: string[] =
        currentSettings.forScope(settingScope).settings.permissions?.[
          ruleType
        ] ?? [];
      if (!currentRules.includes(rule)) {
        currentSettings.setValue(settingScope, key, [...currentRules, rule]);
      }
    },
    toolDiscoveryCommand: settings.tools?.discoveryCommand,
    toolCallCommand: settings.tools?.callCommand,
    mcpServerCommand: settings.mcp?.serverCommand,
    mcpServers: settings.mcpServers || {},
    allowedMcpServers: allowedMcpServers
      ? Array.from(allowedMcpServers)
      : undefined,
    excludedMcpServers: excludedMcpServers
      ? Array.from(excludedMcpServers)
      : undefined,
    approvalMode,
    accessibility: {
      ...settings.ui?.accessibility,
      screenReader,
    },
    telemetry: telemetrySettings,
    usageStatisticsEnabled: settings.privacy?.usageStatisticsEnabled ?? true,
    clearContextOnIdle: settings.context?.clearContextOnIdle,
    fileFiltering: settings.context?.fileFiltering,
    checkpointing:
      argv.checkpointing || settings.general?.checkpointing?.enabled,
    proxy:
      argv.proxy ||
      process.env['HTTPS_PROXY'] ||
      process.env['https_proxy'] ||
      process.env['HTTP_PROXY'] ||
      process.env['http_proxy'],
    cwd,
    fileDiscoveryService: fileService,
    bugCommand: settings.advanced?.bugCommand,
    model: resolvedModel,
    outputLanguageFilePath,
    sessionTokenLimit: settings.model?.sessionTokenLimit ?? -1,
    maxSessionTurns:
      argv.maxSessionTurns ?? settings.model?.maxSessionTurns ?? -1,
    experimentalZedIntegration: argv.acp || argv.experimentalAcp || false,
    cronEnabled: settings.experimental?.cron ?? false,
    listExtensions: argv.listExtensions || false,
    overrideExtensions: overrideExtensions || argv.extensions,
    noBrowser: !!process.env['NO_BROWSER'],
    authType: selectedAuthType,
    inputFormat,
    outputFormat,
    includePartialMessages,
    modelProvidersConfig,
    generationConfigSources: resolvedCliConfig.sources,
    generationConfig: resolvedCliConfig.generationConfig,
    warnings: resolvedCliConfig.warnings,
    allowedHttpHookUrls: settings.security?.allowedHttpHookUrls ?? [],
    cliVersion: await getCliVersion(),
    webSearch: buildWebSearchConfig(argv, settings, selectedAuthType),
    ideMode,
    chatCompression: settings.model?.chatCompression,
    folderTrust,
    interactive,
    trustedFolder,
    useRipgrep: settings.tools?.useRipgrep,
    useBuiltinRipgrep: settings.tools?.useBuiltinRipgrep,
    shouldUseNodePtyShell: settings.tools?.shell?.enableInteractiveShell,
    skipNextSpeakerCheck: settings.model?.skipNextSpeakerCheck,
    skipLoopDetection: settings.model?.skipLoopDetection ?? true,
    skipStartupContext: settings.model?.skipStartupContext ?? false,
    truncateToolOutputThreshold: settings.tools?.truncateToolOutputThreshold,
    truncateToolOutputLines: settings.tools?.truncateToolOutputLines,
    eventEmitter: appEvents,
    gitCoAuthor: settings.general?.gitCoAuthor,
    output: {
      format: outputSettingsFormat,
    },
    enableManagedAutoMemory: settings.memory?.enableManagedAutoMemory ?? true,
|
||
enableManagedAutoDream: settings.memory?.enableManagedAutoDream ?? false,
|
||
fastModel: settings.fastModel || undefined,
|
||
// Use separated hooks if provided, otherwise fall back to merged hooks
|
||
userHooks: hooksConfig?.userHooks ?? settings.hooks,
|
||
projectHooks: hooksConfig?.projectHooks,
|
||
hooks: settings.hooks, // Keep for backward compatibility
|
||
disableAllHooks: settings.disableAllHooks ?? false,
|
||
channel: argv.channel,
|
||
// Precedence: explicit CLI flag > settings file > default(true).
|
||
// NOTE: do NOT set a yargs default for `chat-recording`, otherwise argv will
|
||
// always be true and the settings file can never disable recording.
|
||
chatRecording:
|
||
argv.chatRecording ?? settings.general?.chatRecording ?? true,
|
||
defaultFileEncoding: settings.general?.defaultFileEncoding,
|
||
lsp: {
|
||
enabled: lspEnabled,
|
||
},
|
||
agents: settings.agents
|
||
? {
|
||
displayMode: settings.agents.displayMode,
|
||
arena: settings.agents.arena
|
||
? {
|
||
worktreeBaseDir: settings.agents.arena.worktreeBaseDir,
|
||
preserveArtifacts:
|
||
settings.agents.arena.preserveArtifacts ?? false,
|
||
}
|
||
: undefined,
|
||
}
|
||
: undefined,
|
||
});
|
||
|
||
if (lspEnabled) {
|
||
try {
|
||
const lspService = new NativeLspService(
|
||
config,
|
||
config.getWorkspaceContext(),
|
||
appEvents,
|
||
fileService,
|
||
ideContextStore,
|
||
{
|
||
requireTrustedWorkspace: folderTrust,
|
||
},
|
||
);
|
||
|
||
await lspService.discoverAndPrepare();
|
||
await lspService.start();
|
||
lspClient = new NativeLspClient(lspService);
|
||
config.setLspClient(lspClient);
|
||
} catch (err) {
|
||
debugLogger.warn('Failed to initialize native LSP service:', err);
|
||
}
|
||
}
|
||
|
||
return config;
|
||
}
|