Mirror of https://github.com/QwenLM/qwen-code.git, synced 2026-04-28 03:30:40 +00:00
feat(memory): managed auto-memory and auto-dream system (#3087)
* docs: add auto-memory implementation log
* feat(core): add managed auto-memory storage scaffold
* feat(core): load managed auto-memory index
* feat(core): add managed auto-memory recall
* feat(core): add managed auto-memory extraction
* feat(cli): add managed auto-memory dream commands
* feat(core): add auxiliary side-query foundation
* feat(memory): add model-driven recall selection
* feat(memory): add model-driven extraction planner
* feat(core): add background task runtime foundation
* feat(memory): schedule auto dream in background
* feat(core): add background agent runner foundation
* feat(memory): add extraction agent planner
* feat(core): add dream agent planner
* feat(core): rebuild managed memory index
* feat(memory): add governance status commands
* feat(memory): add managed forget flow
* feat(core): harden background agent planning
* feat(memory): complete managed parity closure
* test(memory): add managed lifecycle integration coverage
* feat: match Claude Code behaviour
* feat(memory-ui): add memory saved notification and memory count badge
Feature 3 - Memory Saved Notification:
- Add HistoryItemMemorySaved type to types.ts
- Create MemorySavedMessage component for rendering '● Saved/Updated N memories'
- In useGeminiStream: detect in-turn memory writes via mapToDisplay's
memoryWriteCount field and emit 'memory_saved' history item after turn
- In client.ts: capture background dream/extract promises and expose
via consumePendingMemoryTaskPromises(); useGeminiStream listens
post-turn and emits 'Updated N memories' notification for background tasks
Feature 4 - Memory Count Badge:
- Add isMemoryOp field to IndividualToolCallDisplay
- Add memoryWriteCount/memoryReadCount to HistoryItemToolGroup
- Add detectMemoryOp() in useReactToolScheduler using isAutoMemPath
- ToolGroupMessage renders '● Recalled N memories, Wrote N memories' badge
at the top of tool groups that touch memory files
Fix: process.env bracket-access in paths.ts (noPropertyAccessFromIndexSignature)
Fix: MemoryDialog.test.tsx mock useSettings to satisfy SettingsProvider requirement
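The bracket-access fix above follows from TypeScript's `noPropertyAccessFromIndexSignature` option: properties that exist only through an index signature, as all of `process.env` does, must be read with brackets. A minimal sketch (the `envOr` helper is illustrative, not from the codebase):

```typescript
// process.env is typed { [key: string]: string | undefined }, so under
// "noPropertyAccessFromIndexSignature": true dot access is a compile error:
//   const home = process.env.HOME;      // error TS4111
const home: string | undefined = process.env['HOME']; // bracket access is fine

// Hypothetical helper centralising the pattern with a fallback value:
function envOr(name: string, fallback: string): string {
  return process.env[name] ?? fallback;
}
```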
* fix(memory-ui): auto-approve memory writes, collapse memory tool groups, fix MEMORY.md path
Problem 1 - Auto-approve memory file operations:
- write-file.ts: getDefaultPermission() checks isAutoMemPath; returns 'allow'
for managed auto-memory files, 'ask' for all other files
- edit.ts: same pattern
Problem 2 - Feature 4 UX: collapse memory-only tool groups:
- ToolGroupMessage: detect when all tool calls have isMemoryOp set (pure memory
group) and all are complete; render compact '● Recalled/Wrote N memories
(ctrl+o to expand)' instead of individual tool call rows
- ctrl+o toggles expand/collapse when isFocused and group is memory-only
- Mixed groups (memory + other tools) keep badge-at-top behaviour
- Expanded state shows individual tool calls with '● Memory operations
(ctrl+o to collapse)' header
Problem 3 - MEMORY.md path mismatch:
- prompt.ts: Step 2 now references full absolute path ${memoryDir}/MEMORY.md
so the model writes to the correct location inside the memory directory,
not to the parent project directory
Fix tests:
- write-file.test.ts: add getProjectRoot to mockConfigInternal
- prompt.test.ts: update assertion to match full-path section header
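Problem 1's permission gating follows a small pattern that can be sketched as below. The stand-in `isAutoMemPath` (a plain prefix check) is only illustrative; the real implementation lives in paths.ts:

```typescript
type Permission = 'allow' | 'ask';

// Illustrative stand-in for the real isAutoMemPath() in paths.ts: treat any
// file under the managed memory directory as an auto-memory file.
function isAutoMemPath(filePath: string, memoryDir: string): boolean {
  return filePath.startsWith(memoryDir + '/');
}

// Mirrors the write-file.ts / edit.ts pattern described above: memory files
// are written constantly by the agent, so they default to 'allow'; every
// other file still asks the user.
function getDefaultPermission(filePath: string, memoryDir: string): Permission {
  return isAutoMemPath(filePath, memoryDir) ? 'allow' : 'ask';
}
```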
* fix(memory-ui): fix duplicate notification, broken ctrl+o, and Edit tool detection
- Remove duplicate 'Saved N memories' notification: the tool group badge already
shows 'Wrote N memories'; the separate HistoryItemMemorySaved addItem after
onComplete was double-counting. Keep only the background-task path
(consumePendingMemoryTaskPromises).
- Remove ctrl+o expand: Ink's Static area freezes items on first render and
cannot respond to user input. useInput/useState(isExpanded) in a Static item
is a no-op. Removed the dead code; memory-only groups now always render as
the compact summary (no fake interactive hint).
- Fix Edit tool detection: detectMemoryOp was checking for 'edit_file' but the
real tool name constant is 'edit'. Also removed non-existent 'create_file'
(write_file covers all writes). Now editing MEMORY.md is correctly identified
as a memory write op, collapses to 'Wrote N memories', and is auto-approved.
* fix(dream): run /dream as a visible submit_prompt turn, not a silent background agent
The previous implementation ran an AgentHeadless background agent that could
take 5+ minutes with zero UI feedback; the user saw a blank screen for the
entire duration and then at most one line of text.
Fix: /dream now returns submit_prompt with the consolidation task prompt so it
runs as a regular AI conversation turn. Tool calls (read_file, write_file, edit,
grep_search, list_directory, glob) are immediately visible as collapsed tool
groups as the model works through the memory files — identical UX to Claude Code.
Also export buildConsolidationTaskPrompt from dreamAgentPlanner so dreamCommand
can reuse the same detailed consolidation prompt that was already written.
* fix(memory): auto-allow ls/glob/grep on memory base directory
Add getMemoryBaseDir() to getDefaultPermission() allow list in ls.ts,
glob.ts, and grep.ts — mirrors the existing pattern in read-file.ts.
Without this, ListFiles/Glob/Grep on ~/.qwen/* would trigger an
approval dialog, blocking /dream at its very first step.
* fix(background): prevent permission prompt hangs in background agents
Match Claude Code's headless-agent intent: background memory agents must never
block on interactive permission prompts.
Wrap background runtime config so getApprovalMode() returns YOLO, ensuring any
ask decision is auto-approved instead of hanging forever. Add regression test
covering the wrapped approval mode.
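The approval-mode wrap can be sketched as follows; the `AgentConfig` shape here is a simplified assumption, not the real Config class:

```typescript
type ApprovalMode = 'default' | 'autoEdit' | 'yolo';

interface AgentConfig {
  getApprovalMode(): ApprovalMode;
  getModel(): string;
}

// Background agents have no UI, so any 'ask' decision would hang forever.
// Wrapping the config forces every permission evaluation to auto-approve
// while delegating everything else to the original config.
function wrapForBackground(config: AgentConfig): AgentConfig {
  return {
    getModel: () => config.getModel(),
    getApprovalMode: () => 'yolo',
  };
}
```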
* fix(memory): run auto extract through forked agent
Make managed auto-memory extraction follow the Claude Code architecture:
background extraction now uses a forked agent to read/write memory files
directly, instead of planning patches and applying them with a separate
filesystem pipeline.
Keep the old patch/model path only as fallback if the forked agent fails.
Add regression tests covering the new execution path and tool whitelist.
* refactor(memory): remove legacy extract fallback pipeline
Delete the old patch/model/heuristic extraction path entirely.
Managed auto-memory extract now runs only through the forked-agent
execution flow, with no planner/apply fallback stages remaining.
Also remove obsolete exports/tests and update scheduler/integration
coverage to use the forked-agent-only architecture.
* refactor(memory): move auxiliary files out of memory/ directory
meta.json, extract-cursor.json, and consolidation.lock are internal
bookkeeping files, not user-visible memories. Move them one level up
to the project state dir (parent of memory/) so that the memory/
directory contains only MEMORY.md and topic files, matching the
clean layout of the upstream reference implementation.
Add getAutoMemoryProjectStateDir() helper in paths.ts and update the
three path accessors + store.test.ts path assertions accordingly.
* fix(memory): record lastDreamAt after manual /dream run
The /dream command submits a prompt to the main agent (submit_prompt),
which writes memory files directly. Because it bypasses dreamScheduler,
meta.json was never updated and /memory always showed 'never'.
Fix by:
- Exporting writeDreamManualRunToMetadata() from dream.ts
- Adding optional onComplete callback to SubmitPromptActionReturn and
SubmitPromptResult (types.ts / commands/types.ts)
- Propagating onComplete through slashCommandProcessor.ts
- Firing onComplete after turn completion in useGeminiStream.ts
- Providing the callback in dreamCommand.ts to write lastDreamAt
* fix(memory): remove scope params from /remember in managed auto-memory mode
--global/--project are legacy save_memory tool concepts. In managed
auto-memory mode the forked agent decides the appropriate type
(user/feedback/project/reference) based on the content of the fact.
Also improve the prompt wording to explicitly ask the agent to choose
the correct type, reducing the tendency to default to 'project'.
* feat(ui): show '✦ dreaming' indicator in footer during background dream
Subscribe to getManagedAutoMemoryDreamTaskRegistry() in Footer via a
useDreamRunning() hook. While any dream task for the current project is
pending or running, display '✦ dreaming' in the right section of the
footer bar, between Debug Mode and context usage.
* refactor(memory): align dream/extract infrastructure with Claude Code patterns
Five improvements based on Claude Code parity audit:
1. Memoize getAutoMemoryRoot (paths.ts)
- Add _autoMemoryRootCache Map, keyed by projectRoot
- findCanonicalGitRoot() walks the filesystem per call; memoizing avoids
repeated git-tree traversal on hot-path schedulers/scanners
- Expose clearAutoMemoryRootCache() for test teardown
2. Lock file stores PID + isProcessRunning reclaim (dreamScheduler.ts)
- acquireDreamLock() writes process.pid to the lock file body
- lockExists() reads PID and calls process.kill(pid, 0); dead/missing
PID reclaims the lock immediately instead of waiting 2h
- Stale threshold reduced to 1h (PID-reuse guard, same as CC)
3. Session scan throttle (dreamScheduler.ts)
- Add SESSION_SCAN_INTERVAL_MS = 10min (same as CC)
- Add lastSessionScanAt Map<projectRoot, number> to ManagedAutoMemoryDreamRuntime
- When time-gate passes but session-gate doesn't, throttle prevents
re-scanning the filesystem on every user turn
4. mtime-based session counting (dreamScheduler.ts)
- Replace fragile recentSessionIdsSinceDream Set in meta.json with
filesystem mtime scan (listSessionsTouchedSince)
- Mirrors Claude Code's listSessionsTouchedSince: reads session JSONL
files from Storage.getProjectDir()/chats/, filters by mtime > lastDreamAt
- Immune to meta.json corruption/loss; no per-turn metadata write
- ManagedAutoMemoryDreamRuntime accepts injectable SessionScannerFn
for clean unit testing without real session files
5. Extraction mutual exclusion extended to write_file/edit (extractScheduler.ts)
- historySliceUsesMemoryTool() now checks write_file/edit/replace/create_file
tool calls whose file_path is within isAutoMemPath()
- Previously only detected save_memory; missed direct file writes by
the main agent, causing redundant background extraction
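Item 2's liveness probe relies on a POSIX convention: `kill` with signal 0 performs only the existence/permission check and delivers nothing. A minimal sketch of the reclaim logic (the lock-file format is an assumption; the real lock body may carry more than the PID):

```typescript
import * as fs from 'node:fs';

// process.kill(pid, 0) throws ESRCH when no process with that PID exists.
// (EPERM would mean the process exists but belongs to another user; a fuller
// check would treat that case as still running.)
function isProcessRunning(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

// A lock whose recorded owner is dead can be reclaimed immediately instead of
// waiting out the 1h stale threshold.
function lockIsHeld(lockPath: string): boolean {
  let body: string;
  try {
    body = fs.readFileSync(lockPath, 'utf8');
  } catch {
    return false; // no lock file at all
  }
  const pid = Number.parseInt(body.trim(), 10);
  return Number.isInteger(pid) && isProcessRunning(pid);
}
```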
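Item 4's session scan can be sketched like this; the `chats/` layout and `.jsonl` extension follow the commit text, while the function body is an illustrative reconstruction:

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Count sessions purely from filesystem mtimes: no per-turn metadata write,
// and nothing in meta.json to corrupt. A missing directory means no sessions.
function listSessionsTouchedSince(chatsDir: string, sinceMs: number): string[] {
  let names: string[];
  try {
    names = fs.readdirSync(chatsDir);
  } catch {
    return [];
  }
  return names
    .filter((name) => name.endsWith('.jsonl'))
    .filter((name) => fs.statSync(path.join(chatsDir, name)).mtimeMs > sinceMs);
}
```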
* docs(memory): add user-facing memory docs, i18n for all locales, simplify /forget
- Add docs/users/features/memory.md: comprehensive user-facing guide covering
QWEN.md instructions, auto-memory behaviour, all memory commands, and
troubleshooting; replaces the placeholder auto-memory.md
- Update docs/users/features/_meta.ts: rename entry auto-memory → memory
- Update docs/users/features/commands.md: add /init, /remember, /forget,
/dream rows; fix /memory description; remove /init duplicate
- Update docs/users/configuration/settings.md: add memory.* settings section
(enableManagedAutoMemory, enableManagedAutoDream) between tools and permissions
- Remove /forget --apply flag: preview-then-apply flow replaced with direct
deletion; update forgetCommand.ts, en.js, zh.js accordingly
- Add all auto-memory i18n keys to de, ja, pt, ru locales (18 keys each):
Open auto-memory folder, Auto-memory/Auto-dream status lines, never/on/off,
✦ dreaming, /forget and /remember usage strings, all managed-memory messages
- Remove dead save_memory branch from extractScheduler.partWritesToMemory()
- Add ✦ dreaming indicator to Footer.tsx with i18n; fix Footer.test.tsx mocks
- Refactor MemoryDialog.tsx auto-dream status line to use i18n
- Remove save_memory tool (memoryTool.ts/test); clean up webui references
- Add extractionPlanner.ts, const.ts and associated tests
- Delete stale docs/users/configuration/memory.md and
docs/developers/tools/memory.md (content superseded)
* refactor(memory): remove all Claude Code references from comments and test names
* test(memory): remove empty placeholder test files that cause vitest to fail
* fix eslint errors
* fix tests on Windows
* fix failing test
* fix(memory): address critical review findings from PR #3087
- fix(read-file): narrow auto-allow from getMemoryBaseDir() (~/.qwen) to
isAutoMemPath(projectRoot) to prevent exposing settings.json / OAuth
credentials without user approval (wenshao review)
- fix(forget): per-entry deletion instead of whole-file unlink
- assign stable per-entry IDs (relativePath:index for multi-entry files)
so the model can target individual entries without removing siblings
- rewrite file keeping unmatched entries; only unlink when file becomes
empty (wenshao review)
- fix(entries): round-trip correctness for multi-entry new-format bodies
- parseAutoMemoryEntries: plain-text line closes current entry and opens
a new one (was silently ignored when current was already set)
- renderAutoMemoryBody: emit blank line between adjacent entries so the
parser can detect entry boundaries on re-read (wenshao review)
- fix(entries): resolve two CodeQL polynomial-regex alerts
- indentedMatch: \s{2,}(?:[-*]\s+)? → [\t ]{2,}(?:[-*][\t ]+)?
- topLevelMatch: :\s*(.+)$ → :[ \t]*(\S.*)$
(github-advanced-security review)
- fix(scan.test): use forward-slash literal for relativePath expectation
since listMarkdownFiles() normalises all separators to '/' on all
platforms including Windows
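The regex hardening in the CodeQL fix above can be seen in isolation. Only the fragments quoted in the commit are real; the anchors around them are assumptions added for illustration:

```typescript
// Rewritten fragments (anchors added here for illustration only):
const indentedMatch = /^[\t ]{2,}(?:[-*][\t ]+)?/; // was ^\s{2,}(?:[-*]\s+)?
const topLevelMatch = /:[ \t]*(\S.*)$/;            // was :\s*(.+)$

// In the old topLevelMatch, \s* and (.+) could both consume the same run of
// spaces, giving the engine many split points to retry on a failing match.
// (\S.*) pins the capture to a non-space first character, removing the overlap.
```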
* fix(memory): replace isAutoMemPath startsWith with path.relative()
Using path.relative() instead of string startsWith() is more robust
across platforms — it correctly handles Windows path-separator
differences and avoids potential edge cases where a path prefix match
could succeed on non-separator boundaries.
Addresses github-actions review item 3 (PR #3087).
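The containment check can be sketched as below; the helper name is illustrative (the real logic lives inside `isAutoMemPath`):

```typescript
import * as path from 'node:path';

// startsWith('/a/b') would wrongly accept '/a/bc'; path.relative() only
// produces a result without '..' when child really sits inside parent, and
// it normalises separators, so the same check also works for Windows paths.
function isPathInside(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}
```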
* feat(telemetry): add auto-memory telemetry instrumentation
Add OpenTelemetry logs + metrics for the five auto-memory lifecycle
events: extract, dream, recall, forget, and remember.
Telemetry layer (packages/core/src/telemetry/):
- constants.ts: 5 new event-name constants
(qwen-code.memory.{extract,dream,recall,forget,remember})
- types.ts: 5 new event classes with typed constructor params
(MemoryExtractEvent, MemoryDreamEvent, MemoryRecallEvent,
MemoryForgetEvent, MemoryRememberEvent)
- metrics.ts: 8 new OTel instruments (5 Counters + 3 Histograms)
with recordMemoryXxx() helpers; registered inside initializeMetrics()
- loggers.ts: logMemoryExtract/Dream/Recall/Forget/Remember() — each
emits a structured log record and calls its recordXxx() counterpart
- index.ts: re-exports all new symbols
Instrumentation call-sites:
- extractScheduler.ts ManagedAutoMemoryExtractRuntime.runTask():
emits extract event with trigger=auto, completed/failed status,
patches_count, touched_topics, and wall-clock duration
- dream.ts runManagedAutoMemoryDream():
emits dream event with trigger=auto, updated/noop status,
deduped_entries, touched_topics, and duration; covers both
agent-planner and mechanical fallback paths
- recall.ts resolveRelevantAutoMemoryPromptForQuery():
emits recall event with strategy, docs_scanned/selected, and
duration; covers model, heuristic, and none paths
- forget.ts forgetManagedAutoMemoryEntries():
emits forget event with removed_entries_count, touched_topics,
and selection_strategy (model/heuristic/none)
- rememberCommand.ts action():
emits remember event with topic=managed|legacy at command
invocation time (before agent decides the actual memory type)
* refactor(telemetry): remove memory forget/remember telemetry events
Remove EVENT_MEMORY_FORGET and EVENT_MEMORY_REMEMBER along with all
associated infrastructure that is no longer needed:
- constants.ts: remove EVENT_MEMORY_FORGET, EVENT_MEMORY_REMEMBER
- types.ts: remove MemoryForgetEvent, MemoryRememberEvent classes
- metrics.ts: remove MEMORY_FORGET_COUNT, MEMORY_REMEMBER_COUNT constants,
memoryForgetCounter, memoryRememberCounter module vars,
their initialization in initializeMetrics(), and
recordMemoryForgetMetrics(), recordMemoryRememberMetrics() functions
- loggers.ts: remove logMemoryForget(), logMemoryRemember() functions
and their imports
- index.ts: remove all re-exports for the above symbols
- memory/forget.ts: remove logMemoryForget call-site and import
- cli/rememberCommand.ts: remove logMemoryRemember call-sites and import
* change default value
* fix forked agent
* refactor(background): unify fork primitives into runForkedAgent + cleanup
- Merge runForkedQuery into runForkedAgent via TypeScript overloads:
with cacheSafeParams → GeminiChat single-turn path (ForkedQueryResult)
without cacheSafeParams → AgentHeadless multi-turn path (ForkedAgentResult)
- Delete forkedQuery.ts; move its test to background/forkedAgent.cache.test.ts
- Remove forkedQuery export from followup/index.ts
- Migrate all callers (suggestionGenerator, speculation, btwCommand, client)
to import from background/forkedAgent
- Add getFastModel() / setFastModel() to Config; expose in CLI config init
and ModelDialog / modelCommand
- Remove resolveFastModel() from AppContainer — now delegated to config.getFastModel()
- Strip Claude Code references from code comments
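The overload merge in the first bullet can be sketched with simplified result types (every name below is a stand-in, not the real signature):

```typescript
interface ForkedQueryResult { kind: 'query'; text: string }
interface ForkedAgentResult { kind: 'agent'; turns: number }

// Overload 1: with cacheSafeParams, the single-turn GeminiChat-style path.
function runForkedAgent(prompt: string, cacheSafeParams: object): ForkedQueryResult;
// Overload 2: without it, the multi-turn AgentHeadless-style path.
function runForkedAgent(prompt: string): ForkedAgentResult;
function runForkedAgent(
  prompt: string,
  cacheSafeParams?: object,
): ForkedQueryResult | ForkedAgentResult {
  return cacheSafeParams !== undefined
    ? { kind: 'query', text: prompt }
    : { kind: 'agent', turns: 1 };
}
```

Callers get a precise return type from the call shape alone, which is what lets the old runForkedQuery call-sites migrate without casts.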
* fix(memory): address wenshao's critical review findings
- dream.ts: writeDreamManualRunToMetadata now persists lastDreamSessionId
and resets recentSessionIdsSinceDream, preventing auto-dream from firing
again in the same session after a manual /dream
- config.ts: gate managed auto-memory injection on getManagedAutoMemoryEnabled();
when disabled, previously saved memories are no longer injected into new sessions
- rememberCommand.ts: remove legacy save_memory branch (tool was removed);
fall back to submit_prompt directing agent to write to QWEN.md instead
- BuiltinCommandLoader.ts: only register /dream and /forget when managed
auto-memory is enabled, matching the feature's runtime availability
- forget.ts: return early in forgetManagedAutoMemoryMatches when matches is
empty, avoiding unnecessary directory scaffolding as a side effect
* fix tests
* fix CI tests
* feat(memory): align extract/dream agents to Claude Code patterns
- fix(client): move saveCacheSafeParams before early-return paths so
extract agents always have cache params available (fixes extract never
triggering in skipNextSpeakerCheck mode)
- feat(extract): add read-only shell tool + memory-scoped write
permissions; create inline createMemoryScopedAgentConfig() with
PermissionManager wrapper (isToolEnabled + evaluate) that allows only
read-only shell commands and write/edit within the auto-memory dir
- feat(extract): align prompt to Claude Code patterns — manifest block
listing existing files, parallel read-then-write strategy, two-step
save (memory file then index)
- feat(dream): remove mechanical fallback; runManagedAutoMemoryDream is
now agent-only and throws without config
- feat(dream): align prompt to Claude Code 4-phase structure
(Orient/Gather/Consolidate/Prune+Index); add narrow transcript grep,
relative→absolute date conversion, stale index pruning, index size cap
- fix(permissions): add isToolEnabled() to MemoryScopedPermissionManager
to prevent TypeError crash in CoreToolScheduler._schedule
- test: update dreamScheduler tests to mock dream.js; replace removed
mechanical-dedup test with scheduler infrastructure verification
* move doc to design
* refactor(memory): unify extract+dream background task management into MemoryBackgroundTaskHub
- Add memoryTaskHub.ts: single BackgroundTaskRegistry + BackgroundTaskDrainer shared
by all memory background tasks; exposes listExtractTasks() / listDreamTasks()
typed query helpers and a unified drain() method
- extractScheduler: ManagedAutoMemoryExtractRuntime accepts hub via constructor
(defaults to defaultMemoryTaskHub); test factory gets isolated fresh hub
- dreamScheduler: same pattern — sessionScanner + hub injection; BackgroundTask-
Scheduler initialized from injected hub; test factory gets isolated hub
- status.ts: replace two separate getRegistry() calls with defaultMemoryTaskHub
typed query methods
- Footer.tsx (useDreamRunning): subscribe to shared registry, filter by
DREAM_TASK_TYPE so extract tasks do not trigger the dream spinner
- index.ts: re-export memoryTaskHub.ts so defaultMemoryTaskHub/DREAM_TASK_TYPE/
EXTRACT_TASK_TYPE are available as top-level package exports
* refactor(background): introduce general-purpose BackgroundTaskHub
Replace memory-specific MemoryBackgroundTaskHub with a domain-agnostic
BackgroundTaskHub in the background/ layer. Any future background task
runtime (3rd, 4th, …) plugs in by accepting a hub via constructor
injection — no new infrastructure required.
Changes:
- Add background/taskHub.ts: BackgroundTaskHub (registry + drainer +
createScheduler() + listByType(taskType, projectRoot?)) and the
globalBackgroundTaskHub singleton. Zero knowledge of any task type.
- Delete memory/memoryTaskHub.ts: its narrow listExtractTasks /
listDreamTasks helpers are replaced by the generic listByType() call.
- Move EXTRACT_TASK_TYPE to extractScheduler.ts (owned by the runtime
that defines it); replace 3 hardcoded string literals with the const.
- Move DREAM_TASK_TYPE to dreamScheduler.ts; use hub.createScheduler()
instead of manually wiring new BackgroundTaskScheduler(reg, drain).
- status.ts: globalBackgroundTaskHub.listByType(EXTRACT_TASK_TYPE, ...)
- Footer.tsx: globalBackgroundTaskHub.registry (shared, filtered by type)
- index.ts: export background/taskHub.js; drop memory/memoryTaskHub.js
* test(background): add BackgroundTaskHub unit tests and hub isolation checks
- background/taskHub.test.ts (11 tests):
- createScheduler(): tasks registered via scheduler appear in hub registry;
multiple calls return distinct scheduler instances
- listByType(): filters by taskType, filters by projectRoot, returns []
for unknown types, two types co-exist in registry but stay separated
- drain(): resolves false on timeout, resolves true when tasks complete,
resolves true immediately when no tasks in flight
- isolation: tasks in hubA do not appear in hubB
- globalBackgroundTaskHub: is a BackgroundTaskHub instance with registry/drainer
- extractScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry EXTRACT_TASK_TYPE
- dreamScheduler.test.ts (+1 test):
- factory-created runtimes have isolated registries; tasks in runtimeA
are invisible to runtimeB; all tasks carry DREAM_TASK_TYPE
* refactor(memory): consolidate all memory state into MemoryManager
Replace BackgroundTaskRegistry/Drainer/Scheduler/Hub helper classes and
module-level globals with a single MemoryManager class owned by Config.
## Changes
### New
- packages/core/src/memory/manager.ts — MemoryManager with:
- scheduleExtract / scheduleDream (inline queuing + deduplication logic)
- recall / forget / selectForgetCandidates / forgetMatches
- getStatus / drain / appendToUserMemory
- subscribe(listener) compatible with useSyncExternalStore
- storeWith() atomic record registration (no double-notify)
- Distinct skippedReason 'scan_throttled' vs 'min_sessions' for dream
- packages/core/src/utils/forkedAgent.ts — pure cache util (moved from background/)
- packages/core/src/utils/sideQuery.ts — pure util (moved from auxiliary/)
### Deleted
- background/taskRegistry, taskDrainer, taskScheduler, taskHub and all tests
- background/forkedAgent (moved to utils/)
- auxiliary/sideQuery (moved to utils/)
- memory/extractScheduler, dreamScheduler, state and all tests
### Modified
- config/config.ts — Config owns MemoryManager instance; getMemoryManager()
- core/client.ts — all memory ops via config.getMemoryManager()
- core/client.test.ts — mock MemoryManager instead of individual modules
- memory/status.ts — accepts MemoryManager param, drops globalBackgroundTaskHub
- index.ts — memory exports reduced from 14 modules to 5 (manager/types/paths/store/const)
- cli/commands/dreamCommand.ts — via config.getMemoryManager()
- cli/commands/forgetCommand.ts — via config.getMemoryManager()
- cli/components/Footer.tsx — useSyncExternalStore replacing setInterval polling
- cli/components/Footer.test.tsx — add getMemoryManager mock
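The subscribe/snapshot contract that lets Footer.tsx drop its setInterval polling can be sketched as a tiny store (a stand-in for MemoryManager, with a boolean in place of the real task records):

```typescript
type Listener = () => void;

class TinyStore {
  private listeners = new Set<Listener>();
  private dreamRunning = false;

  // React calls subscribe(listener) once and re-reads the snapshot whenever
  // the listener fires; the returned function unsubscribes on unmount.
  subscribe = (listener: Listener): (() => void) => {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  };

  getSnapshot = (): boolean => this.dreamRunning;

  setDreamRunning(value: boolean): void {
    if (value === this.dreamRunning) return; // no change, no notification
    this.dreamRunning = value;
    for (const listener of this.listeners) listener();
  }
}
```

In the component, `useSyncExternalStore(store.subscribe, store.getSnapshot)` then re-renders only when a notification fires, instead of on a timer.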
parent 07475026f6
commit 9e2f63a1ca
137 changed files with 9809 additions and 2737 deletions
@@ -1128,6 +1128,9 @@ export async function loadCliConfig(
     output: {
       format: outputSettingsFormat,
     },
+    enableManagedAutoMemory: settings.memory?.enableManagedAutoMemory ?? true,
+    enableManagedAutoDream: settings.memory?.enableManagedAutoDream ?? false,
+    fastModel: settings.fastModel || undefined,
     // Use separated hooks if provided, otherwise fall back to merged hooks
     userHooks: hooksConfig?.userHooks ?? settings.hooks,
     projectHooks: hooksConfig?.projectHooks,
@@ -1015,6 +1015,38 @@ const SETTINGS_SCHEMA = {
       },
     },
+
+    memory: {
+      type: 'object',
+      label: 'Memory',
+      category: 'Memory',
+      requiresRestart: false,
+      default: {},
+      description: 'Settings for managed auto-memory.',
+      showInDialog: false,
+      properties: {
+        enableManagedAutoMemory: {
+          type: 'boolean',
+          label: 'Enable Managed Auto-Memory',
+          category: 'Memory',
+          requiresRestart: false,
+          default: true,
+          description:
+            'Enable background extraction of memories from conversations.',
+          showInDialog: false,
+        },
+        enableManagedAutoDream: {
+          type: 'boolean',
+          label: 'Enable Managed Auto-Dream',
+          category: 'Memory',
+          requiresRestart: false,
+          default: false,
+          description:
+            'Enable automatic consolidation (dream) of collected memories.',
+          showInDialog: false,
+        },
+      },
+    },

     permissions: {
       type: 'object',
       label: 'Permissions',
@@ -887,6 +887,45 @@ export default {
     'Verwendung: /memory add [--global|--project] <zu merkender Text>',
   'Attempting to save to memory {{scope}}: "{{fact}}"':
     'Versuche im Speicher {{scope}} zu speichern: "{{fact}}"',
+  'Open auto-memory folder': 'Auto-Speicher-Ordner öffnen',
+  'Auto-memory: {{status}}': 'Auto-Speicher: {{status}}',
+  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
+    'Auto-Konsolidierung: {{status}} · {{lastDream}} · /dream zum Ausführen',
+  never: 'nie',
+  on: 'ein',
+  off: 'aus',
+  '✦ dreaming': '✦ konsolidiert',
+  'Remove matching entries from managed auto-memory.':
+    'Passende Einträge aus dem verwalteten Auto-Speicher entfernen.',
+  'Usage: /forget <memory text to remove>':
+    'Verwendung: /forget <zu entfernender Erinnerungstext>',
+  'No managed auto-memory entries matched: {{query}}':
+    'Keine verwalteten Auto-Speicher-Einträge gefunden: {{query}}',
+  'Show managed auto-memory status.':
+    'Status des verwalteten Auto-Speichers anzeigen.',
+  'Run managed auto-memory extraction for the current session.':
+    'Verwaltete Auto-Speicher-Extraktion für die aktuelle Sitzung ausführen.',
+  'Managed auto-memory root: {{root}}':
+    'Verwalteter Auto-Speicher-Stamm: {{root}}',
+  'Managed auto-memory topics:': 'Verwaltete Auto-Speicher-Themen:',
+  'No extraction cursor found yet.': 'Noch kein Extraktions-Cursor gefunden.',
+  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
+    'Cursor: Sitzung={{sessionId}}, Offset={{offset}}, Aktualisiert={{updatedAt}}',
+  'No chat client available to extract memory.':
+    'Kein Chat-Client verfügbar, um Erinnerungen zu extrahieren.',
+  'Managed auto-memory extraction is already running.':
+    'Verwaltete Auto-Speicher-Extraktion läuft bereits.',
+  'Managed auto-memory extraction found no new durable memories.':
+    'Verwaltete Auto-Speicher-Extraktion hat keine neuen dauerhaften Erinnerungen gefunden.',
+  'Consolidate managed auto-memory topic files.':
+    'Verwaltete Auto-Speicher-Themendateien konsolidieren.',
+  'Managed auto-memory dream found nothing to improve.':
+    'Auto-Speicher-Konsolidierung hat nichts zu verbessern gefunden.',
+  'Deduplicated entries: {{count}}': 'Deduplizierte Einträge: {{count}}',
+  'Save a durable memory using the save_memory tool.':
+    'Eine dauerhafte Erinnerung mit dem save_memory-Tool speichern.',
+  'Usage: /remember [--global|--project] <text to remember>':
+    'Verwendung: /remember [--global|--project] <zu merkender Text>',

   // ============================================================================
   // Commands - MCP
@@ -949,6 +949,43 @@ export default {
     'Usage: /memory add [--global|--project] <text to remember>',
   'Attempting to save to memory {{scope}}: "{{fact}}"':
     'Attempting to save to memory {{scope}}: "{{fact}}"',
+  'Open auto-memory folder': 'Open auto-memory folder',
+  'Auto-memory: {{status}}': 'Auto-memory: {{status}}',
+  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
+    'Auto-dream: {{status}} · {{lastDream}} · /dream to run',
+  never: 'never',
+  on: 'on',
+  off: 'off',
+  '✦ dreaming': '✦ dreaming',
+  'Remove matching entries from managed auto-memory.':
+    'Remove matching entries from managed auto-memory.',
+  'Usage: /forget <memory text to remove>':
+    'Usage: /forget <memory text to remove>',
+  'No managed auto-memory entries matched: {{query}}':
+    'No managed auto-memory entries matched: {{query}}',
+  'Show managed auto-memory status.': 'Show managed auto-memory status.',
+  'Run managed auto-memory extraction for the current session.':
+    'Run managed auto-memory extraction for the current session.',
+  'Managed auto-memory root: {{root}}': 'Managed auto-memory root: {{root}}',
+  'Managed auto-memory topics:': 'Managed auto-memory topics:',
+  'No extraction cursor found yet.': 'No extraction cursor found yet.',
+  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
+    'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}',
+  'No chat client available to extract memory.':
+    'No chat client available to extract memory.',
+  'Managed auto-memory extraction is already running.':
+    'Managed auto-memory extraction is already running.',
+  'Managed auto-memory extraction found no new durable memories.':
+    'Managed auto-memory extraction found no new durable memories.',
+  'Consolidate managed auto-memory topic files.':
+    'Consolidate managed auto-memory topic files.',
+  'Managed auto-memory dream found nothing to improve.':
+    'Managed auto-memory dream found nothing to improve.',
+  'Deduplicated entries: {{count}}': 'Deduplicated entries: {{count}}',
+  'Save a durable memory using the save_memory tool.':
+    'Save a durable memory using the save_memory tool.',
+  'Usage: /remember [--global|--project] <text to remember>':
+    'Usage: /remember [--global|--project] <text to remember>',

   // ============================================================================
   // Commands - MCP
@@ -649,6 +649,45 @@ export default {
     '使い方: /memory add [--global|--project] <記憶するテキスト>',
   'Attempting to save to memory {{scope}}: "{{fact}}"':
     'メモリ {{scope}} への保存を試行中: "{{fact}}"',
+  'Open auto-memory folder': '自動メモリフォルダを開く',
+  'Auto-memory: {{status}}': '自動メモリ: {{status}}',
+  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
+    '自動統合: {{status}} · {{lastDream}} · /dream で実行',
+  never: '未実行',
+  on: 'オン',
+  off: 'オフ',
+  '✦ dreaming': '✦ 整理中',
+  'Remove matching entries from managed auto-memory.':
+    'マネージド自動メモリから一致するエントリを削除する。',
+  'Usage: /forget <memory text to remove>':
+    '使い方: /forget <削除するメモリテキスト>',
+  'No managed auto-memory entries matched: {{query}}':
+    '一致するマネージド自動メモリエントリなし: {{query}}',
+  'Show managed auto-memory status.':
+    'マネージド自動メモリのステータスを表示する。',
+  'Run managed auto-memory extraction for the current session.':
+    '現在のセッションのマネージド自動メモリ抽出を実行する。',
+  'Managed auto-memory root: {{root}}':
+    'マネージド自動メモリのルート: {{root}}',
+  'Managed auto-memory topics:': 'マネージド自動メモリのトピック:',
+  'No extraction cursor found yet.': 'まだ抽出カーソルが見つかりません。',
+  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
+    'カーソル: セッション={{sessionId}}, オフセット={{offset}}, 更新={{updatedAt}}',
+  'No chat client available to extract memory.':
+    'メモリを抽出できるチャットクライアントがありません。',
+  'Managed auto-memory extraction is already running.':
+    'マネージド自動メモリ抽出はすでに実行中です。',
+  'Managed auto-memory extraction found no new durable memories.':
+    'マネージド自動メモリ抽出で新しい永続メモリは見つかりませんでした。',
+  'Consolidate managed auto-memory topic files.':
+    'マネージド自動メモリトピックファイルを統合する。',
+  'Managed auto-memory dream found nothing to improve.':
+    '自動メモリ統合で改善するものは見つかりませんでした。',
+  'Deduplicated entries: {{count}}': '重複除去したエントリ: {{count}}',
+  'Save a durable memory using the save_memory tool.':
+    'save_memoryツールを使用して永続メモリを保存する。',
+  'Usage: /remember [--global|--project] <text to remember>':
+    '使い方: /remember [--global|--project] <覚えておくテキスト>',
   // MCP
   'Authenticate with an OAuth-enabled MCP server':
     'OAuth対応のMCPサーバーで認証',
@@ -893,6 +893,46 @@ export default {
    'Uso: /memory add [--global|--project] <texto para lembrar>',
  'Attempting to save to memory {{scope}}: "{{fact}}"':
    'Tentando salvar na memória {{scope}}: "{{fact}}"',
  'Open auto-memory folder': 'Abrir pasta de memória automática',
  'Auto-memory: {{status}}': 'Memória automática: {{status}}',
  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
    'Consolidação automática: {{status}} · {{lastDream}} · /dream para executar',
  never: 'nunca',
  on: 'ativado',
  off: 'desativado',
  '❆ dreaming': '❆ consolidando',
  'Remove matching entries from managed auto-memory.':
    'Remover entradas correspondentes da memória automática gerenciada.',
  'Usage: /forget <memory text to remove>':
    'Uso: /forget <texto de memória a remover>',
  'No managed auto-memory entries matched: {{query}}':
    'Nenhuma entrada de memória automática gerenciada correspondeu: {{query}}',
  'Show managed auto-memory status.':
    'Mostrar status da memória automática gerenciada.',
  'Run managed auto-memory extraction for the current session.':
    'Executar extração de memória automática gerenciada para a sessão atual.',
  'Managed auto-memory root: {{root}}':
    'Raiz da memória automática gerenciada: {{root}}',
  'Managed auto-memory topics:': 'Tópicos de memória automática gerenciada:',
  'No extraction cursor found yet.':
    'Nenhum cursor de extração encontrado ainda.',
  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
    'Cursor: sessão={{sessionId}}, offset={{offset}}, atualizado={{updatedAt}}',
  'No chat client available to extract memory.':
    'Nenhum cliente de chat disponível para extrair memória.',
  'Managed auto-memory extraction is already running.':
    'A extração de memória automática gerenciada já está em execução.',
  'Managed auto-memory extraction found no new durable memories.':
    'A extração de memória automática gerenciada não encontrou novas memórias duráveis.',
  'Consolidate managed auto-memory topic files.':
    'Consolidar arquivos de tópicos de memória automática gerenciada.',
  'Managed auto-memory dream found nothing to improve.':
    'A consolidação de memória automática não encontrou nada para melhorar.',
  'Deduplicated entries: {{count}}': 'Entradas desduplicadas: {{count}}',
  'Save a durable memory using the save_memory tool.':
    'Salvar uma memória durável usando a ferramenta save_memory.',
  'Usage: /remember [--global|--project] <text to remember>':
    'Uso: /remember [--global|--project] <texto a lembrar>',

  // ============================================================================
  // Commands - MCP
@@ -896,6 +896,44 @@ export default {
    'Использование: /memory add [--global|--project] <текст для запоминания>',
  'Attempting to save to memory {{scope}}: "{{fact}}"':
    'Попытка сохранить в память {{scope}}: "{{fact}}"',
  'Open auto-memory folder': 'Открыть папку автопамяти',
  'Auto-memory: {{status}}': 'Автопамять: {{status}}',
  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
    'Автоконсолидация: {{status}} · {{lastDream}} · /dream для запуска',
  never: 'никогда',
  on: 'вкл',
  off: 'выкл',
  '❆ dreaming': '❆ консолидация',
  'Remove matching entries from managed auto-memory.':
    'Удалить совпадающие записи из управляемой автопамяти.',
  'Usage: /forget <memory text to remove>':
    'Использование: /forget <текст воспоминания для удаления>',
  'No managed auto-memory entries matched: {{query}}':
    'Не найдено совпадающих записей автопамяти: {{query}}',
  'Show managed auto-memory status.': 'Показать статус управляемой автопамяти.',
  'Run managed auto-memory extraction for the current session.':
    'Запустить извлечение управляемой автопамяти для текущей сессии.',
  'Managed auto-memory root: {{root}}':
    'Корневая директория управляемой автопамяти: {{root}}',
  'Managed auto-memory topics:': 'Темы управляемой автопамяти:',
  'No extraction cursor found yet.': 'Курсор извлечения ещё не найден.',
  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
    'Курсор: сессия={{sessionId}}, смещение={{offset}}, обновлено={{updatedAt}}',
  'No chat client available to extract memory.':
    'Нет доступного чат-клиента для извлечения памяти.',
  'Managed auto-memory extraction is already running.':
    'Извлечение управляемой автопамяти уже выполняется.',
  'Managed auto-memory extraction found no new durable memories.':
    'Извлечение управляемой автопамяти не нашло новых долгосрочных воспоминаний.',
  'Consolidate managed auto-memory topic files.':
    'Консолидировать файлы тем управляемой автопамяти.',
  'Managed auto-memory dream found nothing to improve.':
    'Консолидация автопамяти не нашла чего улучшать.',
  'Deduplicated entries: {{count}}': 'Удалено дубликатов: {{count}}',
  'Save a durable memory using the save_memory tool.':
    'Сохранить долгосрочную память с помощью инструмента save_memory.',
  'Usage: /remember [--global|--project] <text to remember>':
    'Использование: /remember [--global|--project] <текст для запоминания>',

  // ============================================================================
  // Команды - MCP
@@ -899,6 +899,41 @@ export default {
    '用法:/memory add [--global|--project] <要记住的文本>',
  'Attempting to save to memory {{scope}}: "{{fact}}"':
    '正在尝试保存到记忆 {{scope}}:"{{fact}}"',
  'Open auto-memory folder': '打开自动记忆文件夹',
  'Auto-memory: {{status}}': '自动记忆:{{status}}',
  'Auto-dream: {{status}} · {{lastDream}} · /dream to run':
    '自动整理:{{status}} · {{lastDream}} · /dream 立即运行',
  never: '从未',
  on: '开',
  off: '关',
  '❆ dreaming': '❆ 整理中',
  'Remove matching entries from managed auto-memory.':
    '从托管自动记忆中删除匹配的条目。',
  'Usage: /forget <memory text to remove>': '用法:/forget <要删除的记忆文本>',
  'No managed auto-memory entries matched: {{query}}':
    '没有匹配的托管自动记忆条目:{{query}}',
  'Show managed auto-memory status.': '显示托管自动记忆状态',
  'Run managed auto-memory extraction for the current session.':
    '为当前会话运行托管自动记忆提炼',
  'Managed auto-memory root: {{root}}': '托管自动记忆根目录:{{root}}',
  'Managed auto-memory topics:': '托管自动记忆主题:',
  'No extraction cursor found yet.': '尚未找到提炼游标。',
  'Cursor: session={{sessionId}}, offset={{offset}}, updated={{updatedAt}}':
    '游标:session={{sessionId}},offset={{offset}},updated={{updatedAt}}',
  'No chat client available to extract memory.':
    '没有可用于提炼记忆的聊天客户端。',
  'Managed auto-memory extraction is already running.':
    '托管自动记忆提炼已在运行中。',
  'Managed auto-memory extraction found no new durable memories.':
    '托管自动记忆提炼未发现新的持久记忆。',
  'Consolidate managed auto-memory topic files.': '整理托管自动记忆主题文件',
  'Managed auto-memory dream found nothing to improve.':
    '托管自动记忆 dream 未发现可改进内容。',
  'Deduplicated entries: {{count}}': '去重条目数:{{count}}',
  'Save a durable memory using the save_memory tool.':
    '使用 save_memory 工具保存一条持久记忆',
  'Usage: /remember [--global|--project] <text to remember>':
    '用法:/remember [--global|--project] <要记住的文本>',

  // ============================================================================
  // Commands - MCP
@@ -81,12 +81,14 @@ vi.mock('../ui/commands/bugCommand.js', () => ({ bugCommand: {} }));
vi.mock('../ui/commands/clearCommand.js', () => ({ clearCommand: {} }));
vi.mock('../ui/commands/compressCommand.js', () => ({ compressCommand: {} }));
vi.mock('../ui/commands/docsCommand.js', () => ({ docsCommand: {} }));
vi.mock('../ui/commands/exportCommand.js', () => ({ exportCommand: {} }));
vi.mock('../ui/commands/editorCommand.js', () => ({ editorCommand: {} }));
vi.mock('../ui/commands/extensionsCommand.js', () => ({
  extensionsCommand: {},
}));
vi.mock('../ui/commands/helpCommand.js', () => ({ helpCommand: {} }));
vi.mock('../ui/commands/memoryCommand.js', () => ({ memoryCommand: {} }));
vi.mock('../ui/commands/insightCommand.js', () => ({ insightCommand: {} }));
vi.mock('../ui/commands/modelCommand.js', () => ({
  modelCommand: { name: 'model' },
}));

@@ -122,6 +124,7 @@ describe('BuiltinCommandLoader', () => {
      getFolderTrust: vi.fn().mockReturnValue(true),
      getUseModelRouter: () => false,
      getDisableAllHooks: vi.fn().mockReturnValue(false),
      getManagedAutoMemoryEnabled: vi.fn().mockReturnValue(true),
    } as unknown as Config;

    restoreCommandMock.mockReturnValue({
@@ -30,8 +30,11 @@ import { createDebugLogger } from '@qwen-code/qwen-code-core';
import { initCommand } from '../ui/commands/initCommand.js';
import { languageCommand } from '../ui/commands/languageCommand.js';
import { mcpCommand } from '../ui/commands/mcpCommand.js';
import { dreamCommand } from '../ui/commands/dreamCommand.js';
import { forgetCommand } from '../ui/commands/forgetCommand.js';
import { memoryCommand } from '../ui/commands/memoryCommand.js';
import { modelCommand } from '../ui/commands/modelCommand.js';
import { rememberCommand } from '../ui/commands/rememberCommand.js';
import { planCommand } from '../ui/commands/planCommand.js';
import { permissionsCommand } from '../ui/commands/permissionsCommand.js';
import { trustCommand } from '../ui/commands/trustCommand.js';

@@ -103,8 +106,12 @@ export class BuiltinCommandLoader implements ICommandLoader {
      initCommand,
      languageCommand,
      mcpCommand,
      ...(this.config?.getManagedAutoMemoryEnabled()
        ? [dreamCommand, forgetCommand]
        : []),
      memoryCommand,
      modelCommand,
      rememberCommand,
      planCommand,
      permissionsCommand,
      ...(this.config?.getFolderTrust() ? [trustCommand] : []),
@@ -123,6 +123,7 @@ import { useAgentsManagerDialog } from './hooks/useAgentsManagerDialog.js';
import { useExtensionsManagerDialog } from './hooks/useExtensionsManagerDialog.js';
import { useMcpDialog } from './hooks/useMcpDialog.js';
import { useHooksDialog } from './hooks/useHooksDialog.js';
import { useMemoryDialog } from './hooks/useMemoryDialog.js';
import { useAttentionNotifications } from './hooks/useAttentionNotifications.js';
import { useContextualTips } from './hooks/useContextualTips.js';
import { getTipHistory } from '../services/tips/index.js';

@@ -531,6 +532,8 @@ export const AppContainer = (props: AppContainerProps) => {

  const { isSettingsDialogOpen, openSettingsDialog, closeSettingsDialog } =
    useSettingsCommand();
  const { isMemoryDialogOpen, openMemoryDialog, closeMemoryDialog } =
    useMemoryDialog();

  const {
    isModelDialogOpen,

@@ -579,6 +582,7 @@ export const AppContainer = (props: AppContainerProps) => {
    openAuthDialog,
    openThemeDialog,
    openEditorDialog,
    openMemoryDialog,
    openSettingsDialog,
    openModelDialog,
    openTrustDialog,

@@ -606,6 +610,7 @@ export const AppContainer = (props: AppContainerProps) => {
    openAuthDialog,
    openThemeDialog,
    openEditorDialog,
    openMemoryDialog,
    openSettingsDialog,
    openModelDialog,
    openArenaDialog,

@@ -1141,24 +1146,6 @@ export const AppContainer = (props: AppContainerProps) => {
  const followupSuggestionsEnabled =
    settings.merged.ui?.enableFollowupSuggestions === true;

  // Resolve fastModel, validating it belongs to the current authType.
  // If the configured fastModel is from a different provider, the API call
  // would fail silently (DashScope/Qwen client rejects unknown model IDs),
  // so fall back to the main model instead.
  const resolveFastModel = useCallback((): string | undefined => {
    const fastModel = settings.merged.fastModel;
    if (!fastModel) return undefined;
    const currentAuthType = config.getContentGeneratorConfig()?.authType;
    if (!currentAuthType) return undefined;
    const availableModels = config
      .getModelsConfig()
      .getAvailableModelsForAuthType(currentAuthType);
    const belongsToCurrentAuth = availableModels.some(
      (m) => m.id === fastModel,
    );
    return belongsToCurrentAuth ? fastModel : undefined;
  }, [settings.merged.fastModel, config]);

  useEffect(() => {
    // Clear suggestion when feature is disabled at runtime
    if (!followupSuggestionsEnabled) {

@@ -1210,7 +1197,7 @@ export const AppContainer = (props: AppContainerProps) => {
    const fullHistory = geminiClient.getChat().getHistory(true);
    const conversationHistory =
      fullHistory.length > 40 ? fullHistory.slice(-40) : fullHistory;
    const fastModel = resolveFastModel();
    const fastModel = config.getFastModel();
    generatePromptSuggestion(config, conversationHistory, ac.signal, {
      enableCacheSharing: settings.merged.ui?.enableCacheSharing === true,
      model: fastModel,

@@ -1503,6 +1490,8 @@ export const AppContainer = (props: AppContainerProps) => {
    exitEditorDialog,
    isSettingsDialogOpen,
    closeSettingsDialog,
    isMemoryDialogOpen,
    closeMemoryDialog,
    activeArenaDialog,
    closeArenaDialog,
    isFolderTrustDialogOpen,

@@ -1811,6 +1800,7 @@ export const AppContainer = (props: AppContainerProps) => {
    !!loopDetectionConfirmationRequest ||
    isThemeDialogOpen ||
    isSettingsDialogOpen ||
    isMemoryDialogOpen ||
    isModelDialogOpen ||
    isTrustDialogOpen ||
    activeArenaDialog !== null ||

@@ -1860,6 +1850,7 @@ export const AppContainer = (props: AppContainerProps) => {
    debugMessage,
    quittingMessages,
    isSettingsDialogOpen,
    isMemoryDialogOpen,
    isModelDialogOpen,
    isFastModelMode,
    isTrustDialogOpen,

@@ -1966,6 +1957,7 @@ export const AppContainer = (props: AppContainerProps) => {
    debugMessage,
    quittingMessages,
    isSettingsDialogOpen,
    isMemoryDialogOpen,
    isModelDialogOpen,
    isFastModelMode,
    isTrustDialogOpen,

@@ -2064,6 +2056,7 @@ export const AppContainer = (props: AppContainerProps) => {
    () => ({
      openThemeDialog,
      openEditorDialog,
      openMemoryDialog,
      handleThemeSelect,
      handleThemeHighlight,
      handleApprovalModeSelect,

@@ -2076,6 +2069,7 @@ export const AppContainer = (props: AppContainerProps) => {
      handleEditorSelect,
      exitEditorDialog,
      closeSettingsDialog,
      closeMemoryDialog,
      closeModelDialog,
      openModelDialog,
      openArenaDialog,

@@ -2124,6 +2118,7 @@ export const AppContainer = (props: AppContainerProps) => {
    [
      openThemeDialog,
      openEditorDialog,
      openMemoryDialog,
      handleThemeSelect,
      handleThemeHighlight,
      handleApprovalModeSelect,

@@ -2136,6 +2131,7 @@ export const AppContainer = (props: AppContainerProps) => {
      handleEditorSelect,
      exitEditorDialog,
      closeSettingsDialog,
      closeMemoryDialog,
      closeModelDialog,
      openModelDialog,
      openArenaDialog,
@@ -23,26 +23,35 @@ vi.mock('../../i18n/index.js', () => ({
  },
}));

// Must use vi.hoisted so the mock factory can reference it before module eval.
const mockRunForkedAgent = vi.hoisted(() => vi.fn());
const mockGetCacheSafeParams = vi.hoisted(() =>
  vi.fn().mockReturnValue({
    generationConfig: {},
    history: [],
    model: 'test-model',
    version: 1,
  }),
);

vi.mock('@qwen-code/qwen-code-core', () => ({
  runForkedAgent: mockRunForkedAgent,
  getCacheSafeParams: mockGetCacheSafeParams,
}));

describe('btwCommand', () => {
  let mockContext: CommandContext;
  let mockGenerateContent: ReturnType<typeof vi.fn>;
  let mockGetHistory: ReturnType<typeof vi.fn>;

  const createConfig = (overrides: Record<string, unknown> = {}) => ({
    getGeminiClient: () => ({
      getHistory: mockGetHistory,
      generateContent: mockGenerateContent,
    }),
    getGeminiClient: () => ({}),
    getModel: () => 'test-model',
    getSessionId: () => 'test-session-id',
    getApprovalMode: () => 'default',
    ...overrides,
  });

  beforeEach(() => {
    vi.clearAllMocks();

    mockGenerateContent = vi.fn();
    mockGetHistory = vi.fn().mockReturnValue([]);

    mockContext = createMockCommandContext({
      services: {
        config: createConfig(),

@@ -90,37 +99,14 @@ describe('btwCommand', () => {
    });
  });

  it('should return error when model is not configured', async () => {
    const noModelContext = createMockCommandContext({
      services: {
        config: createConfig({
          getModel: () => '',
        }),
      },
    });

    const result = await btwCommand.action!(noModelContext, 'test question');

    expect(result).toEqual({
      type: 'message',
      messageType: 'error',
      content: 'No model configured.',
    });
  });

  describe('interactive mode', () => {
    const flushPromises = () =>
      new Promise<void>((resolve) => setTimeout(resolve, 0));

    it('should set btwItem and update it on success', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [
          {
            content: {
              parts: [{ text: 'The answer is 42.' }],
            },
          },
        ],
      mockRunForkedAgent.mockResolvedValue({
        text: 'The answer is 42.',
        usage: { inputTokens: 10, outputTokens: 5, cacheHitTokens: 3 },
      });

      await btwCommand.action!(mockContext, 'what is the meaning of life?');

@@ -154,89 +140,25 @@ describe('btwCommand', () => {
      expect(mockContext.ui.addItem).not.toHaveBeenCalled();
    });

    it('should pass conversation history to generateContent', async () => {
      const history = [
        { role: 'user', parts: [{ text: 'Hello' }] },
        { role: 'model', parts: [{ text: 'Hi!' }] },
      ];
      mockGetHistory.mockReturnValue(history);
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
    it('should invoke runForkedAgent with cacheSafeParams and userMessage', async () => {
      mockRunForkedAgent.mockResolvedValue({
        text: 'answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      await btwCommand.action!(mockContext, 'my question');
      await flushPromises();

      expect(mockGenerateContent).toHaveBeenCalledWith(
        [
          ...history,
          {
            role: 'user',
            parts: [
              {
                text: expect.stringContaining('my question'),
              },
            ],
          },
        ],
        {},
        expect.any(AbortSignal),
        'test-model',
        expect.stringMatching(/^test-session-id########btw-/),
      expect(mockRunForkedAgent).toHaveBeenCalledWith(
        expect.objectContaining({
          cacheSafeParams: expect.objectContaining({ model: 'test-model' }),
          userMessage: expect.stringContaining('my question'),
        }),
      );
    });

    it('should trim history to last 20 messages for long conversations', async () => {
      // Build 24 history entries — exceeds the 20-message limit
      const longHistory = Array.from({ length: 12 }, (_, i) => [
        { role: 'user', parts: [{ text: `Q${i}` }] },
        { role: 'model', parts: [{ text: `A${i}` }] },
      ]).flat();
      mockGetHistory.mockReturnValue(longHistory);
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      });

      await btwCommand.action!(mockContext, 'test');
      await flushPromises();

      const calledContents = mockGenerateContent.mock.calls[0][0];
      // 20 history entries + 1 btw question = 21
      expect(calledContents).toHaveLength(21);
      // First entry should be user (Q2, since slice(-20) on 24 starts at index 4)
      expect(calledContents[0].role).toBe('user');
      expect(calledContents[0].parts[0].text).toBe('Q2');
    });

    it('should trim history and skip leading model entry to preserve alternation', async () => {
      // Build 21 entries: 10 full turns + 1 trailing user message.
      // slice(-20) yields [M0, U1, M1, ..., U9, M9, U10] — starts with model.
      // trimHistory should drop that leading model entry.
      const oddHistory = [
        ...Array.from({ length: 11 }, (_, i) => [
          { role: 'user', parts: [{ text: `Q${i}` }] },
          { role: 'model', parts: [{ text: `A${i}` }] },
        ]).flat(),
      ].slice(0, 21); // [U0, M0, U1, M1, ..., U9, M9, U10]
      expect(oddHistory).toHaveLength(21);

      mockGetHistory.mockReturnValue(oddHistory);
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      });

      await btwCommand.action!(mockContext, 'test');
      await flushPromises();

      const calledContents = mockGenerateContent.mock.calls[0][0];
      // slice(-20) = 20 entries starting with M0 (model) → slice(1) = 19, + 1 btw = 20
      expect(calledContents).toHaveLength(20);
      expect(calledContents[0].role).toBe('user');
      expect(calledContents[0].parts[0].text).toBe('Q1');
    });

    it('should add error item on failure and clear btwItem', async () => {
      mockGenerateContent.mockRejectedValue(new Error('API error'));
      mockRunForkedAgent.mockRejectedValue(new Error('API error'));

      await btwCommand.action!(mockContext, 'test question');
      await flushPromises();

@@ -255,7 +177,7 @@ describe('btwCommand', () => {
    });

    it('should handle non-Error exceptions', async () => {
      mockGenerateContent.mockRejectedValue('string error');
      mockRunForkedAgent.mockRejectedValue('string error');

      await btwCommand.action!(mockContext, 'test question');
      await flushPromises();

@@ -270,6 +192,11 @@ describe('btwCommand', () => {
    });

    it('should not block when another pendingItem exists', async () => {
      mockRunForkedAgent.mockResolvedValue({
        text: 'answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      const busyContext = createMockCommandContext({
        services: {
          config: createConfig(),

@@ -279,26 +206,21 @@ describe('btwCommand', () => {
        },
      });

      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      });

      // btw should NOT be blocked by pendingItem anymore
      // btw should NOT be blocked by pendingItem
      const result = await btwCommand.action!(busyContext, 'test question');
      expect(result).toBeUndefined();
      expect(busyContext.ui.setBtwItem).toHaveBeenCalled();
    });

    it('should not update btwItem when cancelled via btwAbortControllerRef', async () => {
      mockGenerateContent.mockImplementation(
      mockRunForkedAgent.mockImplementation(
        () =>
          new Promise((resolve) =>
            setTimeout(
              () =>
                resolve({
                  candidates: [
                    { content: { parts: [{ text: 'late answer' }] } },
                  ],
                  text: 'late answer',
                  usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
                }),
              50,
            ),

@@ -307,7 +229,6 @@ describe('btwCommand', () => {

      await btwCommand.action!(mockContext, 'test question');

      // The btw command should have registered its AbortController
      expect(mockContext.ui.btwAbortControllerRef.current).toBeInstanceOf(
        AbortController,
      );

@@ -323,25 +244,24 @@ describe('btwCommand', () => {
    });

    it('should clear btwAbortControllerRef after successful completion', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      mockRunForkedAgent.mockResolvedValue({
        text: 'answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      await btwCommand.action!(mockContext, 'test question');

      // Ref is set during the call
      expect(mockContext.ui.btwAbortControllerRef.current).toBeInstanceOf(
        AbortController,
      );

      await flushPromises();

      // After completion, ref should be cleaned up
      expect(mockContext.ui.btwAbortControllerRef.current).toBeNull();
    });

    it('should clear btwAbortControllerRef after error', async () => {
      mockGenerateContent.mockRejectedValue(new Error('API error'));
      mockRunForkedAgent.mockRejectedValue(new Error('API error'));

      await btwCommand.action!(mockContext, 'test question');

@@ -355,25 +275,24 @@ describe('btwCommand', () => {
    });

    it('should cancel previous btw when starting a new one', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      mockRunForkedAgent.mockResolvedValue({
        text: 'answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      await btwCommand.action!(mockContext, 'first question');

      // cancelBtw should have been called to clean up any previous btw
      expect(mockContext.ui.cancelBtw).toHaveBeenCalledTimes(1);

      // Second btw call
      await btwCommand.action!(mockContext, 'second question');

      // cancelBtw called again for the second invocation
      expect(mockContext.ui.cancelBtw).toHaveBeenCalledTimes(2);
    });

    it('should return fallback text when response has no parts', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [] } }],
    it('should return fallback text when text is null', async () => {
      mockRunForkedAgent.mockResolvedValue({
        text: null,
        usage: { inputTokens: 5, outputTokens: 0, cacheHitTokens: 0 },
      });

      await btwCommand.action!(mockContext, 'test question');

@@ -390,8 +309,9 @@ describe('btwCommand', () => {
    });

    it('should return void immediately without blocking', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'answer' }] } }],
      mockRunForkedAgent.mockResolvedValue({
        text: 'answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      const result = await btwCommand.action!(mockContext, 'test question');

@@ -421,8 +341,9 @@ describe('btwCommand', () => {
    });

    it('should return info message on success', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'the answer' }] } }],
      mockRunForkedAgent.mockResolvedValue({
        text: 'the answer',
        usage: { inputTokens: 5, outputTokens: 2, cacheHitTokens: 0 },
      });

      const result = await btwCommand.action!(

@@ -438,7 +359,7 @@ describe('btwCommand', () => {
    });

    it('should return error message on failure', async () => {
      mockGenerateContent.mockRejectedValue(new Error('network error'));
      mockRunForkedAgent.mockRejectedValue(new Error('network error'));

      const result = await btwCommand.action!(
        nonInteractiveContext,

@@ -466,8 +387,9 @@ describe('btwCommand', () => {
    });

    it('should return stream_messages generator on success', async () => {
      mockGenerateContent.mockResolvedValue({
        candidates: [{ content: { parts: [{ text: 'streamed answer' }] } }],
      mockRunForkedAgent.mockResolvedValue({
        text: 'streamed answer',
        usage: { inputTokens: 5, outputTokens: 3, cacheHitTokens: 0 },
      });

      const result = (await btwCommand.action!(acpContext, 'my question')) as {

@@ -489,7 +411,7 @@ describe('btwCommand', () => {
    });

    it('should yield error message on failure', async () => {
      mockGenerateContent.mockRejectedValue(new Error('api failure'));
      mockRunForkedAgent.mockRejectedValue(new Error('api failure'));

      const result = (await btwCommand.action!(acpContext, 'my question')) as {
        type: string;
@@ -13,12 +13,7 @@ import { CommandKind } from './types.js';
import { MessageType } from '../types.js';
import type { HistoryItemBtw } from '../types.js';
import { t } from '../../i18n/index.js';
import type { GeminiClient } from '@qwen-code/qwen-code-core';
import type { Content } from '@google/genai';

function makeBtwPromptId(sessionId: string): string {
  return `${sessionId}########btw-${Date.now()}`;
}
import { getCacheSafeParams, runForkedAgent } from '@qwen-code/qwen-code-core';

function formatBtwError(error: unknown): string {
  return t('Failed to answer btw question: {{error}}', {
@@ -27,83 +22,59 @@ function formatBtwError(error: unknown): string {
  });
}

// Keep only the most recent history messages to limit token usage for side
// questions. MAX_BTW_HISTORY_MESSAGES caps the number of history Content
// entries included as context before the /btw question is appended.
const MAX_BTW_HISTORY_MESSAGES = 20;

function trimHistory(history: Content[]): Content[] {
  if (history.length <= MAX_BTW_HISTORY_MESSAGES) {
    return history;
  }
  // Slice from the end, ensuring we start on a 'user' message so the
  // alternating user/model pattern is preserved.
  const sliced = history.slice(-MAX_BTW_HISTORY_MESSAGES);
  if (sliced[0]?.role === 'model' && sliced.length > 1) {
    return sliced.slice(1);
  }
  return sliced;
/**
 * Wrap the user's side question with constraints so the model knows it must
 * answer without tools in a single response.
 *
 * The system-reminder is embedded in the user message rather than overriding
 * systemInstruction, because runForkedAgent inherits systemInstruction from
 * CacheSafeParams (changing it would bust the prompt cache).
 */
function buildBtwPrompt(question: string): string {
  return [
    '<system-reminder>',
    'This is a side question from the user. Answer directly in a single response.',
    '',
    'CRITICAL CONSTRAINTS:',
    '- You have NO tools available — you cannot read files, run commands, or take any actions.',
    '- You can ONLY use information already present in the conversation context.',
    '- NEVER promise to look something up or investigate further.',
    '- If you do not know the answer, say so.',
    '- The main conversation is NOT interrupted; you are a separate, lightweight fork.',
    '</system-reminder>',
    '',
    question,
  ].join('\n');
}

/**
 * Helper to make the ephemeral generateContent call and extract the answer.
 * Uses a snapshot of the current conversation history as context.
 * Run a side question using runForkedAgent (cache path).
 *
 * runForkedAgent with cacheSafeParams shares the main conversation's
 * CacheSafeParams (systemInstruction + history) so the fork sees the full
 * conversation context and benefits from prompt-cache hits. Tools are denied
 * at the per-request level (NO_TOOLS) — single-turn, text-only.
 */
async function askBtw(
  geminiClient: GeminiClient,
  model: string,
  context: CommandContext,
  question: string,
  abortSignal: AbortSignal,
  promptId: string,
): Promise<string> {
  const history = trimHistory(geminiClient.getHistory(true));
  const { config } = context.services;
  if (!config) throw new Error('Config not loaded');

  // Side-question guidance sent as a user message (not a system instruction).
  // Inspired by Claude Code's design:
  // - Emphasizes direct answering without tools
  // - Clarifies the isolated nature of the side question
  // - Prevents the model from promising actions it can't take
  const response = await geminiClient.generateContent(
    [
      ...history,
      {
        role: 'user',
        parts: [
          {
            text: `[This is a side question - answer directly and concisely.
  const cacheSafeParams = getCacheSafeParams();
  if (!cacheSafeParams)
    throw new Error(t('No conversation context available for /btw'));

IMPORTANT:
- You are a separate, lightweight agent spawned to answer this one question
- The main conversation continues independently in the background
- Do NOT reference being interrupted or what you were "previously doing"

CRITICAL CONSTRAINTS:
- You have NO tools available - you cannot read files, run commands, search, or take any actions
- This is a one-off response in a single turn
- You can ONLY provide information based on what you already know from the conversation context
- NEVER say things like "Let me try...", "I'll now...", "Let me check...", or promise to take any action
- If you don't know the answer, say so - do not offer to look it up or investigate

Simply answer the question directly with the information you have.]

${question}`,
          },
        ],
      },
    ],
    {},
  const result = await runForkedAgent({
    config,
    userMessage: buildBtwPrompt(question),
    cacheSafeParams,
    abortSignal,
    model,
    promptId,
  );
  });

  const parts = response.candidates?.[0]?.content?.parts;
  return (
    parts
      ?.map((part) => part.text)
      .filter((text): text is string => typeof text === 'string')
      .join('') || t('No response received.')
  );
  return result.text || t('No response received.');
}

export const btwCommand: SlashCommand = {
@@ -141,21 +112,8 @@ export const btwCommand: SlashCommand = {
      };
    }

    const geminiClient = config.getGeminiClient();
    const model = config.getModel();
    const sessionId = config.getSessionId();

    if (!model) {
      return {
        type: 'message',
        messageType: 'error',
        content: t('No model configured.'),
      };
    }

    // ACP mode: return a stream_messages async generator
    if (executionMode === 'acp') {
      const btwPromptId = makeBtwPromptId(sessionId);
      const messages = async function* () {
        try {
          yield {
@@ -163,13 +121,7 @@ export const btwCommand: SlashCommand = {
            content: t('Thinking...'),
          };

          const answer = await askBtw(
            geminiClient,
            model,
            question,
            abortSignal,
            btwPromptId,
          );
          const answer = await askBtw(context, question, abortSignal);

          yield {
            messageType: 'info' as const,
@@ -189,14 +141,7 @@ export const btwCommand: SlashCommand = {
    // Non-interactive mode: return a simple message result
    if (executionMode === 'non_interactive') {
      try {
        const btwPromptId = makeBtwPromptId(sessionId);
        const answer = await askBtw(
          geminiClient,
          model,
          question,
          abortSignal,
          btwPromptId,
        );
        const answer = await askBtw(context, question, abortSignal);
        return {
          type: 'message',
          messageType: 'info',
@@ -231,10 +176,9 @@ export const btwCommand: SlashCommand = {
    };
    ui.setBtwItem(pendingItem);

    // Fire-and-forget: run the API call in the background so the main
    // Fire-and-forget: runForkedAgent runs in the background so the main
    // conversation is not blocked while waiting for the btw answer.
    const btwPromptId = makeBtwPromptId(sessionId);
    void askBtw(geminiClient, model, question, btwSignal, btwPromptId)
    void askBtw(context, question, btwSignal)
      .then((answer) => {
        if (btwSignal.aborted) return;
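The new `buildBtwPrompt` helper above embeds the side-question constraints in the user message itself, so the fork's shared `systemInstruction` (and therefore the prompt cache) stays untouched. A minimal standalone sketch of that wrapper, with the constraint list abridged:

```typescript
// Sketch of the system-reminder wrapper from buildBtwPrompt above
// (constraint list abridged; the full list is in the diff).
function buildBtwPrompt(question: string): string {
  return [
    '<system-reminder>',
    'This is a side question from the user. Answer directly in a single response.',
    '- You have NO tools available.',
    '</system-reminder>',
    '',
    question,
  ].join('\n');
}

const prompt = buildBtwPrompt('why did the build fail?');
// The reminder leads, the raw question trails, nothing else is rewritten.
console.log(prompt.startsWith('<system-reminder>')); // true
console.log(prompt.endsWith('why did the build fail?')); // true
```

Because the wrapper only prepends a reminder block, any prompt-cache prefix built from the shared history is preserved verbatim.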
packages/cli/src/ui/commands/dreamCommand.ts (new file, 51 lines)
@@ -0,0 +1,51 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import {
  getAutoMemoryRoot,
  getProjectHash,
  QWEN_DIR,
} from '@qwen-code/qwen-code-core';
import { t } from '../../i18n/index.js';
import type { SlashCommand } from './types.js';
import { CommandKind } from './types.js';

export const dreamCommand: SlashCommand = {
  name: 'dream',
  get description() {
    return t('Consolidate managed auto-memory topic files.');
  },
  kind: CommandKind.BUILT_IN,
  action: async (context) => {
    const config = context.services.config;
    if (!config) {
      return {
        type: 'message',
        messageType: 'error',
        content: t('Config not loaded.'),
      };
    }

    const projectRoot = config.getProjectRoot();
    const memoryRoot = getAutoMemoryRoot(projectRoot);
    const projectHash = getProjectHash(projectRoot);
    const transcriptDir = `${QWEN_DIR}/tmp/${projectHash}/chats`;

    const prompt = config
      .getMemoryManager()
      .buildConsolidationPrompt(memoryRoot, transcriptDir);

    return {
      type: 'submit_prompt',
      content: prompt,
      onComplete: async () => {
        await config
          .getMemoryManager()
          .writeDreamManualRun(projectRoot, config.getSessionId());
      },
    };
  },
};
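`dreamCommand` derives the transcript directory from a per-project hash, so consolidation only ever reads this project's chat logs. A sketch of that construction, assuming (both are assumptions — the real helpers live in `qwen-code-core`) that `QWEN_DIR` is `.qwen` and that `getProjectHash` is a SHA-256 over the project root path:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical stand-ins for the qwen-code-core helpers used above.
const QWEN_DIR = '.qwen'; // assumed value
function getProjectHash(projectRoot: string): string {
  // assumed: content hash of the absolute project root path
  return createHash('sha256').update(projectRoot).digest('hex');
}

const projectRoot = '/home/user/my-project';
const transcriptDir = `${QWEN_DIR}/tmp/${getProjectHash(projectRoot)}/chats`;
// Distinct project roots hash to distinct transcript directories.
```

Keying the temp path on a hash rather than the raw path keeps the directory name filesystem-safe regardless of where the project lives.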
packages/cli/src/ui/commands/forgetCommand.ts (new file, 52 lines)
@@ -0,0 +1,52 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { t } from '../../i18n/index.js';
import type { SlashCommand } from './types.js';
import { CommandKind } from './types.js';

export const forgetCommand: SlashCommand = {
  name: 'forget',
  get description() {
    return t('Remove matching entries from managed auto-memory.');
  },
  kind: CommandKind.BUILT_IN,
  action: async (context, args) => {
    const query = args.trim();

    if (!query) {
      return {
        type: 'message',
        messageType: 'error',
        content: t('Usage: /forget <memory text to remove>'),
      };
    }

    const config = context.services.config;
    if (!config) {
      return {
        type: 'message',
        messageType: 'error',
        content: t('Config not loaded.'),
      };
    }

    const selection = await config
      .getMemoryManager()
      .selectForgetCandidates(config.getProjectRoot(), query, { config });

    const result = await config
      .getMemoryManager()
      .forgetMatches(config.getProjectRoot(), selection.matches);
    return {
      type: 'message',
      messageType: 'info',
      content:
        result.systemMessage ??
        t('No managed auto-memory entries matched: {{query}}', { query }),
    };
  },
};
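`forgetCommand` splits deletion into two phases: a `selectForgetCandidates` pass picks the entries for the query, then `forgetMatches` removes exactly that set. A toy in-memory sketch of the same select-then-delete shape (the store and matching here are illustrative, not the real `MemoryManager` API, which drives selection through the model):

```typescript
interface MemoryEntry {
  id: number;
  text: string;
}

// Phase 1: pick candidates (the real flow asks the model; here, substring match).
function selectForgetCandidates(store: MemoryEntry[], query: string): MemoryEntry[] {
  const q = query.toLowerCase();
  return store.filter((e) => e.text.toLowerCase().includes(q));
}

// Phase 2: delete exactly the selected set, nothing else.
function forgetMatches(store: MemoryEntry[], matches: MemoryEntry[]): MemoryEntry[] {
  const doomed = new Set(matches.map((m) => m.id));
  return store.filter((e) => !doomed.has(e.id));
}

const store: MemoryEntry[] = [
  { id: 1, text: 'prefers pnpm over npm' },
  { id: 2, text: 'project uses vitest' },
  { id: 3, text: 'prefers tabs' },
];
const matches = selectForgetCandidates(store, 'prefers');
const remaining = forgetMatches(store, matches);
// remaining: only the 'project uses vitest' entry survives
```

Keeping selection separate from deletion means an empty candidate set deletes nothing, which is what produces the "No managed auto-memory entries matched" fallback message above.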
@@ -4,518 +4,36 @@
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Mock } from 'vitest';
import { vi, describe, it, expect, beforeEach } from 'vitest';
import { describe, expect, it } from 'vitest';
import { memoryCommand } from './memoryCommand.js';
import type { SlashCommand, CommandContext } from './types.js';
import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
import { MessageType } from '../types.js';
import type { LoadedSettings } from '../../config/settings.js';
import { readFile } from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';
import {
  getErrorMessage,
  loadServerHierarchicalMemory,
  QWEN_DIR,
  setGeminiMdFilename,
  type FileDiscoveryService,
  type LoadServerHierarchicalMemoryResponse,
} from '@qwen-code/qwen-code-core';

vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
  const original =
    await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
  return {
    ...original,
    getErrorMessage: vi.fn((error: unknown) => {
      if (error instanceof Error) return error.message;
      return String(error);
    }),
    loadServerHierarchicalMemory: vi.fn(),
  };
});

vi.mock('node:fs/promises', () => {
  const readFile = vi.fn();
  return {
    readFile,
    default: {
      readFile,
    },
  };
});

const mockLoadServerHierarchicalMemory = loadServerHierarchicalMemory as Mock;
const mockReadFile = readFile as unknown as Mock;

describe('memoryCommand', () => {
  let mockContext: CommandContext;

  const getSubCommand = (name: 'show' | 'add' | 'refresh'): SlashCommand => {
    const subCommand = memoryCommand.subCommands?.find(
      (cmd) => cmd.name === name,
    );
    if (!subCommand) {
      throw new Error(`/memory ${name} command not found.`);
    }
    return subCommand;
  };

  describe('/memory show', () => {
    let showCommand: SlashCommand;
    let mockGetUserMemory: Mock;
    let mockGetGeminiMdFileCount: Mock;

    beforeEach(() => {
      setGeminiMdFilename('QWEN.md');
      mockReadFile.mockReset();
      vi.restoreAllMocks();

      showCommand = getSubCommand('show');

      mockGetUserMemory = vi.fn();
      mockGetGeminiMdFileCount = vi.fn();

      mockContext = createMockCommandContext({
        services: {
          config: {
            getUserMemory: mockGetUserMemory,
            getGeminiMdFileCount: mockGetGeminiMdFileCount,
          },
        },
      });
  it('opens the memory dialog in interactive mode', async () => {
    const context = createMockCommandContext({
      executionMode: 'interactive',
    });

    it('should display a message if memory is empty', async () => {
      if (!showCommand.action) throw new Error('Command has no action');
    const result = await memoryCommand.action?.(context, '');

      mockGetUserMemory.mockReturnValue('');
      mockGetGeminiMdFileCount.mockReturnValue(0);

      await showCommand.action(mockContext, '');

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: 'Memory is currently empty.',
        },
        expect.any(Number),
      );
    });

    it('should display the memory content and file count if it exists', async () => {
      if (!showCommand.action) throw new Error('Command has no action');

      const memoryContent = 'This is a test memory.';

      mockGetUserMemory.mockReturnValue(memoryContent);
      mockGetGeminiMdFileCount.mockReturnValue(1);

      await showCommand.action(mockContext, '');

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: `Current memory content from 1 file(s):\n\n---\n${memoryContent}\n---`,
        },
        expect.any(Number),
      );
    });

    it('should show project memory from the configured context file', async () => {
      const projectCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--project',
      );
      if (!projectCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename('AGENTS.md');
      vi.spyOn(process, 'cwd').mockReturnValue('/test/project');
      mockReadFile.mockResolvedValue('project memory');

      await projectCommand.action(mockContext, '');

      const expectedProjectPath = path.join('/test/project', 'AGENTS.md');
      expect(mockReadFile).toHaveBeenCalledWith(expectedProjectPath, 'utf-8');
      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: expect.stringContaining(expectedProjectPath),
        },
        expect.any(Number),
      );
    });

    it('should show global memory from the configured context file', async () => {
      const globalCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--global',
      );
      if (!globalCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename('AGENTS.md');
      vi.spyOn(os, 'homedir').mockReturnValue('/home/user');
      mockReadFile.mockResolvedValue('global memory');

      await globalCommand.action(mockContext, '');

      const expectedGlobalPath = path.join('/home/user', QWEN_DIR, 'AGENTS.md');
      expect(mockReadFile).toHaveBeenCalledWith(expectedGlobalPath, 'utf-8');
      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: expect.stringContaining('Global memory content'),
        },
        expect.any(Number),
      );
    });

    it('should fall back to AGENTS.md when QWEN.md does not exist for --project', async () => {
      const projectCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--project',
      );
      if (!projectCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename(['QWEN.md', 'AGENTS.md']);
      vi.spyOn(process, 'cwd').mockReturnValue('/test/project');
      mockReadFile.mockImplementation(async (filePath: string) => {
        if (filePath.endsWith('AGENTS.md')) return 'agents memory content';
        throw new Error('ENOENT');
      });

      await projectCommand.action(mockContext, '');

      const expectedPath = path.join('/test/project', 'AGENTS.md');
      expect(mockReadFile).toHaveBeenCalledWith(expectedPath, 'utf-8');
      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: expect.stringContaining('agents memory content'),
        },
        expect.any(Number),
      );
    });

    it('should fall back to AGENTS.md when QWEN.md does not exist for --global', async () => {
      const globalCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--global',
      );
      if (!globalCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename(['QWEN.md', 'AGENTS.md']);
      vi.spyOn(os, 'homedir').mockReturnValue('/home/user');
      mockReadFile.mockImplementation(async (filePath: string) => {
        if (filePath.endsWith('AGENTS.md')) return 'global agents memory';
        throw new Error('ENOENT');
      });

      await globalCommand.action(mockContext, '');

      const expectedPath = path.join('/home/user', QWEN_DIR, 'AGENTS.md');
      expect(mockReadFile).toHaveBeenCalledWith(expectedPath, 'utf-8');
      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: expect.stringContaining('global agents memory'),
        },
        expect.any(Number),
      );
    });

    it('should show content from both QWEN.md and AGENTS.md for --project when both exist', async () => {
      const projectCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--project',
      );
      if (!projectCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename(['QWEN.md', 'AGENTS.md']);
      vi.spyOn(process, 'cwd').mockReturnValue('/test/project');
      mockReadFile.mockImplementation(async (filePath: string) => {
        if (filePath.endsWith('QWEN.md')) return 'qwen memory';
        if (filePath.endsWith('AGENTS.md')) return 'agents memory';
        throw new Error('ENOENT');
      });

      await projectCommand.action(mockContext, '');

      expect(mockReadFile).toHaveBeenCalledWith(
        path.join('/test/project', 'QWEN.md'),
        'utf-8',
      );
      expect(mockReadFile).toHaveBeenCalledWith(
        path.join('/test/project', 'AGENTS.md'),
        'utf-8',
      );
      const addItemCall = (mockContext.ui.addItem as Mock).mock.calls[0][0];
      expect(addItemCall.text).toContain('qwen memory');
      expect(addItemCall.text).toContain('agents memory');
    });

    it('should show content from both files for --global when both exist', async () => {
      const globalCommand = showCommand.subCommands?.find(
        (cmd) => cmd.name === '--global',
      );
      if (!globalCommand?.action) throw new Error('Command has no action');

      setGeminiMdFilename(['QWEN.md', 'AGENTS.md']);
      vi.spyOn(os, 'homedir').mockReturnValue('/home/user');
      mockReadFile.mockImplementation(async (filePath: string) => {
        if (filePath.endsWith('QWEN.md')) return 'global qwen memory';
        if (filePath.endsWith('AGENTS.md')) return 'global agents memory';
        throw new Error('ENOENT');
      });

      await globalCommand.action(mockContext, '');

      expect(mockReadFile).toHaveBeenCalledWith(
        path.join('/home/user', QWEN_DIR, 'QWEN.md'),
        'utf-8',
      );
      expect(mockReadFile).toHaveBeenCalledWith(
        path.join('/home/user', QWEN_DIR, 'AGENTS.md'),
        'utf-8',
      );
      const addItemCall = (mockContext.ui.addItem as Mock).mock.calls[0][0];
      expect(addItemCall.text).toContain('global qwen memory');
      expect(addItemCall.text).toContain('global agents memory');
    expect(result).toEqual({
      type: 'dialog',
      dialog: 'memory',
    });
  });

  describe('/memory add', () => {
    let addCommand: SlashCommand;

    beforeEach(() => {
      addCommand = getSubCommand('add');
      mockContext = createMockCommandContext();
  it('returns a non-interactive fallback message outside the interactive UI', async () => {
    const context = createMockCommandContext({
      executionMode: 'non_interactive',
    });

    it('should return an error message if no arguments are provided', () => {
      if (!addCommand.action) throw new Error('Command has no action');
    const result = await memoryCommand.action?.(context, '');

      const result = addCommand.action(mockContext, ' ');
      expect(result).toEqual({
        type: 'message',
        messageType: 'error',
        content: 'Usage: /memory add [--global|--project] <text to remember>',
      });

      expect(mockContext.ui.addItem).not.toHaveBeenCalled();
    });

    it('should return a tool action and add an info message when arguments are provided', () => {
      if (!addCommand.action) throw new Error('Command has no action');

      const fact = 'remember this';
      const result = addCommand.action(mockContext, ` ${fact} `);

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: `Attempting to save to memory : "${fact}"`,
        },
        expect.any(Number),
      );

      expect(result).toEqual({
        type: 'tool',
        toolName: 'save_memory',
        toolArgs: { fact },
      });
    });

    it('should handle --global flag and add scope to tool args', () => {
      if (!addCommand.action) throw new Error('Command has no action');

      const fact = 'remember this globally';
      const result = addCommand.action(mockContext, `--global ${fact}`);

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: `Attempting to save to memory (global): "${fact}"`,
        },
        expect.any(Number),
      );

      expect(result).toEqual({
        type: 'tool',
        toolName: 'save_memory',
        toolArgs: { fact, scope: 'global' },
      });
    });

    it('should handle --project flag and add scope to tool args', () => {
      if (!addCommand.action) throw new Error('Command has no action');

      const fact = 'remember this for project';
      const result = addCommand.action(mockContext, `--project ${fact}`);

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: `Attempting to save to memory (project): "${fact}"`,
        },
        expect.any(Number),
      );

      expect(result).toEqual({
        type: 'tool',
        toolName: 'save_memory',
        toolArgs: { fact, scope: 'project' },
      });
    });

    it('should return error if flag is provided but no fact follows', () => {
      if (!addCommand.action) throw new Error('Command has no action');

      const result = addCommand.action(mockContext, '--global ');
      expect(result).toEqual({
        type: 'message',
        messageType: 'error',
        content: 'Usage: /memory add [--global|--project] <text to remember>',
      });

      expect(mockContext.ui.addItem).not.toHaveBeenCalled();
    });
  });

  describe('/memory refresh', () => {
    let refreshCommand: SlashCommand;
    let mockSetUserMemory: Mock;
    let mockSetGeminiMdFileCount: Mock;

    beforeEach(() => {
      refreshCommand = getSubCommand('refresh');
      mockSetUserMemory = vi.fn();
      mockSetGeminiMdFileCount = vi.fn();
      const mockConfig = {
        setUserMemory: mockSetUserMemory,
        setGeminiMdFileCount: mockSetGeminiMdFileCount,
        getWorkingDir: () => '/test/dir',
        getDebugMode: () => false,
        getFileService: () => ({}) as FileDiscoveryService,
        getExtensionContextFilePaths: () => [],
        shouldLoadMemoryFromIncludeDirectories: () => false,
        getWorkspaceContext: () => ({
          getDirectories: () => [],
        }),
        getFileFilteringOptions: () => ({
          ignore: [],
          include: [],
        }),
        getFolderTrust: () => false,
      };

      mockContext = createMockCommandContext({
        services: {
          config: mockConfig,
          settings: {
            merged: {},
          } as LoadedSettings,
        },
      });
      mockLoadServerHierarchicalMemory.mockClear();
    });

    it('should display success message when memory is refreshed with content', async () => {
      if (!refreshCommand.action) throw new Error('Command has no action');

      const refreshResult: LoadServerHierarchicalMemoryResponse = {
        memoryContent: 'new memory content',
        fileCount: 2,
      };
      mockLoadServerHierarchicalMemory.mockResolvedValue(refreshResult);

      await refreshCommand.action(mockContext, '');

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: 'Refreshing memory from source files...',
        },
        expect.any(Number),
      );

      expect(loadServerHierarchicalMemory).toHaveBeenCalledOnce();
      expect(mockSetUserMemory).toHaveBeenCalledWith(
        refreshResult.memoryContent,
      );
      expect(mockSetGeminiMdFileCount).toHaveBeenCalledWith(
        refreshResult.fileCount,
      );

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: 'Memory refreshed successfully. Loaded 18 characters from 2 file(s).',
        },
        expect.any(Number),
      );
    });

    it('should display success message when memory is refreshed with no content', async () => {
      if (!refreshCommand.action) throw new Error('Command has no action');

      const refreshResult = { memoryContent: '', fileCount: 0 };
      mockLoadServerHierarchicalMemory.mockResolvedValue(refreshResult);

      await refreshCommand.action(mockContext, '');

      expect(loadServerHierarchicalMemory).toHaveBeenCalledOnce();
      expect(mockSetUserMemory).toHaveBeenCalledWith('');
      expect(mockSetGeminiMdFileCount).toHaveBeenCalledWith(0);

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: 'Memory refreshed successfully. No memory content found.',
        },
        expect.any(Number),
      );
    });

    it('should display an error message if refreshing fails', async () => {
      if (!refreshCommand.action) throw new Error('Command has no action');

      const error = new Error('Failed to read memory files.');
      mockLoadServerHierarchicalMemory.mockRejectedValue(error);

      await refreshCommand.action(mockContext, '');

      expect(loadServerHierarchicalMemory).toHaveBeenCalledOnce();
      expect(mockSetUserMemory).not.toHaveBeenCalled();
      expect(mockSetGeminiMdFileCount).not.toHaveBeenCalled();

      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.ERROR,
          text: `Error refreshing memory: ${error.message}`,
        },
        expect.any(Number),
      );

      expect(getErrorMessage).toHaveBeenCalledWith(error);
    });

    it('should not throw if config service is unavailable', async () => {
      if (!refreshCommand.action) throw new Error('Command has no action');

      const nullConfigContext = createMockCommandContext({
        services: { config: null },
      });

      await expect(
        refreshCommand.action(nullConfigContext, ''),
      ).resolves.toBeUndefined();

      expect(nullConfigContext.ui.addItem).toHaveBeenCalledWith(
        {
          type: MessageType.INFO,
          text: 'Refreshing memory from source files...',
        },
        expect.any(Number),
      );

      expect(loadServerHierarchicalMemory).not.toHaveBeenCalled();
    expect(result).toEqual({
      type: 'message',
      messageType: 'info',
      content:
        'The memory manager is only available in the interactive UI. In non-interactive mode, open the user or project memory files directly.',
    });
  });
});
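The rewritten tests above pin down the new dispatch: in the interactive UI, /memory opens a dialog; everywhere else it returns a plain info message. A reduced sketch of that pattern, with result shapes mirroring the test expectations (the surrounding command plumbing is omitted):

```typescript
type ExecutionMode = 'interactive' | 'non_interactive' | 'acp';

type CommandResult =
  | { type: 'dialog'; dialog: 'memory' }
  | { type: 'message'; messageType: 'info'; content: string };

// Dispatch on execution mode, defaulting to interactive as the action does.
function memoryAction(executionMode?: ExecutionMode): CommandResult {
  const mode = executionMode ?? 'interactive';
  if (mode === 'interactive') {
    return { type: 'dialog', dialog: 'memory' };
  }
  return {
    type: 'message',
    messageType: 'info',
    content:
      'The memory manager is only available in the interactive UI. ' +
      'In non-interactive mode, open the user or project memory files directly.',
  };
}
```

Returning a tagged union keeps the caller's handling exhaustive: a result is either a dialog to open or a message to print, never both.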
|
|
@ -4,349 +4,32 @@
|
|||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import {
|
||||
getErrorMessage,
|
||||
getAllGeminiMdFilenames,
|
||||
loadServerHierarchicalMemory,
|
||||
QWEN_DIR,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import path from 'node:path';
|
||||
import os from 'node:os';
|
||||
import fs from 'node:fs/promises';
|
||||
import { MessageType } from '../types.js';
|
||||
import type { SlashCommand, SlashCommandActionReturn } from './types.js';
|
||||
import type { SlashCommand } from './types.js';
|
||||
import { CommandKind } from './types.js';
|
||||
import { t } from '../../i18n/index.js';
|
||||
|
||||
/**
|
||||
* Read all existing memory files from the configured filenames in a directory.
|
||||
* Returns an array of found files with their paths and contents.
|
||||
*/
|
||||
async function findAllExistingMemoryFiles(
|
||||
dir: string,
|
||||
): Promise<Array<{ filePath: string; content: string }>> {
|
||||
const results: Array<{ filePath: string; content: string }> = [];
|
||||
for (const filename of getAllGeminiMdFilenames()) {
|
||||
const filePath = path.join(dir, filename);
|
||||
try {
|
||||
const content = await fs.readFile(filePath, 'utf-8');
|
||||
if (content.trim().length > 0) {
|
||||
results.push({ filePath, content });
|
||||
}
|
||||
} catch {
|
||||
// File doesn't exist, try next
|
||||
}
|
||||
}
|
||||
return results;
|
||||
}
|
||||
|
||||
export const memoryCommand: SlashCommand = {
|
||||
name: 'memory',
|
||||
get description() {
|
||||
return t('Commands for interacting with memory.');
|
||||
return t('Open the memory manager.');
|
||||
},
|
||||
kind: CommandKind.BUILT_IN,
|
||||
subCommands: [
|
||||
{
|
||||
name: 'show',
|
||||
get description() {
|
||||
return t('Show the current memory contents.');
|
||||
},
|
||||
kind: CommandKind.BUILT_IN,
|
||||
action: async (context) => {
|
||||
const memoryContent = context.services.config?.getUserMemory() || '';
|
||||
const fileCount = context.services.config?.getGeminiMdFileCount() || 0;
|
||||
action: async (context) => {
|
||||
const executionMode = context.executionMode ?? 'interactive';
|
||||
|
||||
const messageContent =
|
||||
memoryContent.length > 0
|
||||
? `${t('Current memory content from {{count}} file(s):', { count: String(fileCount) })}\n\n---\n${memoryContent}\n---`
|
||||
: t('Memory is currently empty.');
|
||||
if (executionMode === 'interactive') {
|
||||
return {
|
||||
type: 'dialog',
|
||||
dialog: 'memory',
|
||||
};
|
||||
}
|
||||
|
||||
context.ui.addItem(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: messageContent,
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
},
|
||||
      subCommands: [
        {
          name: '--project',
          get description() {
            return t('Show project-level memory contents.');
          },
          kind: CommandKind.BUILT_IN,
          action: async (context) => {
            const workingDir =
              context.services.config?.getWorkingDir?.() ?? process.cwd();
            const results = await findAllExistingMemoryFiles(workingDir);

            if (results.length > 0) {
              const combined = results
                .map((r) =>
                  t(
                    'Project memory content from {{path}}:\n\n---\n{{content}}\n---',
                    { path: r.filePath, content: r.content },
                  ),
                )
                .join('\n\n');
              context.ui.addItem(
                {
                  type: MessageType.INFO,
                  text: combined,
                },
                Date.now(),
              );
            } else {
              context.ui.addItem(
                {
                  type: MessageType.INFO,
                  text: t(
                    'Project memory file not found or is currently empty.',
                  ),
                },
                Date.now(),
              );
            }
          },
        },
        {
          name: '--global',
          get description() {
            return t('Show global memory contents.');
          },
          kind: CommandKind.BUILT_IN,
          action: async (context) => {
            const globalDir = path.join(os.homedir(), QWEN_DIR);
            const results = await findAllExistingMemoryFiles(globalDir);

            if (results.length > 0) {
              const combined = results
                .map((r) =>
                  t('Global memory content:\n\n---\n{{content}}\n---', {
                    content: r.content,
                  }),
                )
                .join('\n\n');
              context.ui.addItem(
                {
                  type: MessageType.INFO,
                  text: combined,
                },
                Date.now(),
              );
            } else {
              context.ui.addItem(
                {
                  type: MessageType.INFO,
                  text: t(
                    'Global memory file not found or is currently empty.',
                  ),
                },
                Date.now(),
              );
            }
          },
        },
      ],
    },
    {
      name: 'add',
      get description() {
        return t(
          'Add content to the memory. Use --global for global memory or --project for project memory.',
        );
      },
      kind: CommandKind.BUILT_IN,
      action: (context, args): SlashCommandActionReturn | void => {
        if (!args || args.trim() === '') {
          return {
            type: 'message',
            messageType: 'error',
            content: t(
              'Usage: /memory add [--global|--project] <text to remember>',
            ),
          };
        }

        const trimmedArgs = args.trim();
        let scope: 'global' | 'project' | undefined;
        let fact: string;

        // Check for scope flags
        if (trimmedArgs.startsWith('--global ')) {
          scope = 'global';
          fact = trimmedArgs.substring('--global '.length).trim();
        } else if (trimmedArgs.startsWith('--project ')) {
          scope = 'project';
          fact = trimmedArgs.substring('--project '.length).trim();
        } else if (trimmedArgs === '--global' || trimmedArgs === '--project') {
          // Flag provided but no text after it
          return {
            type: 'message',
            messageType: 'error',
            content: t(
              'Usage: /memory add [--global|--project] <text to remember>',
            ),
          };
        } else {
          // No scope specified, will be handled by the tool
          fact = trimmedArgs;
        }

        if (!fact || fact.trim() === '') {
          return {
            type: 'message',
            messageType: 'error',
            content: t(
              'Usage: /memory add [--global|--project] <text to remember>',
            ),
          };
        }

        const scopeText = scope ? `(${scope})` : '';
        context.ui.addItem(
          {
            type: MessageType.INFO,
            text: t('Attempting to save to memory {{scope}}: "{{fact}}"', {
              scope: scopeText,
              fact,
            }),
          },
          Date.now(),
        );

        return {
          type: 'tool',
          toolName: 'save_memory',
          toolArgs: scope ? { fact, scope } : { fact },
        };
      },
      subCommands: [
        {
          name: '--project',
          get description() {
            return t('Add content to project-level memory.');
          },
          kind: CommandKind.BUILT_IN,
          action: (context, args): SlashCommandActionReturn | void => {
            if (!args || args.trim() === '') {
              return {
                type: 'message',
                messageType: 'error',
                content: t('Usage: /memory add --project <text to remember>'),
              };
            }

            context.ui.addItem(
              {
                type: MessageType.INFO,
                text: t('Attempting to save to project memory: "{{text}}"', {
                  text: args.trim(),
                }),
              },
              Date.now(),
            );

            return {
              type: 'tool',
              toolName: 'save_memory',
              toolArgs: { fact: args.trim(), scope: 'project' },
            };
          },
        },
        {
          name: '--global',
          get description() {
            return t('Add content to global memory.');
          },
          kind: CommandKind.BUILT_IN,
          action: (context, args): SlashCommandActionReturn | void => {
            if (!args || args.trim() === '') {
              return {
                type: 'message',
                messageType: 'error',
                content: t('Usage: /memory add --global <text to remember>'),
              };
            }

            context.ui.addItem(
              {
                type: MessageType.INFO,
                text: t('Attempting to save to global memory: "{{text}}"', {
                  text: args.trim(),
                }),
              },
              Date.now(),
            );

            return {
              type: 'tool',
              toolName: 'save_memory',
              toolArgs: { fact: args.trim(), scope: 'global' },
            };
          },
        },
      ],
    },
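The `--global`/`--project` flag handling in `/memory add` above can be factored into a small pure helper so the three error branches collapse into one. This is an illustrative sketch; `parseScopeFlags` is a hypothetical name, not part of the diff.

```typescript
type MemoryScope = 'global' | 'project';

interface ParsedMemoryArgs {
  scope?: MemoryScope;
  fact: string;
}

// Returns null for any input that should produce the usage error.
function parseScopeFlags(args: string): ParsedMemoryArgs | null {
  const trimmed = args.trim();
  if (trimmed === '') return null; // nothing to remember
  if (trimmed.startsWith('--global ')) {
    const fact = trimmed.substring('--global '.length).trim();
    return fact ? { scope: 'global', fact } : null;
  }
  if (trimmed.startsWith('--project ')) {
    const fact = trimmed.substring('--project '.length).trim();
    return fact ? { scope: 'project', fact } : null;
  }
  // A bare flag with no text after it is a usage error.
  if (trimmed === '--global' || trimmed === '--project') return null;
  return { fact: trimmed }; // no scope: left for the save_memory tool to decide
}
```

A helper like this keeps the slash-command action down to "parse, report, dispatch tool call" and makes the edge cases unit-testable.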
    {
      name: 'refresh',
      get description() {
        return t('Refresh the memory from the source.');
      },
      kind: CommandKind.BUILT_IN,
      action: async (context) => {
        context.ui.addItem(
          {
            type: MessageType.INFO,
            text: t('Refreshing memory from source files...'),
          },
          Date.now(),
        );

        try {
          const config = context.services.config;
          if (config) {
            const { memoryContent, fileCount } =
              await loadServerHierarchicalMemory(
                config.getWorkingDir(),
                config.shouldLoadMemoryFromIncludeDirectories()
                  ? config.getWorkspaceContext().getDirectories()
                  : [],
                config.getFileService(),
                config.getExtensionContextFilePaths(),
                config.getFolderTrust(),
                context.services.settings.merged.context?.importFormat ||
                  'tree', // Use setting or default to 'tree'
              );
            config.setUserMemory(memoryContent);
            config.setGeminiMdFileCount(fileCount);

            const successMessage =
              memoryContent.length > 0
                ? `Memory refreshed successfully. Loaded ${memoryContent.length} characters from ${fileCount} file(s).`
                : 'Memory refreshed successfully. No memory content found.';

            context.ui.addItem(
              {
                type: MessageType.INFO,
                text: successMessage,
              },
              Date.now(),
            );
          }
        } catch (error) {
          const errorMessage = getErrorMessage(error);
          context.ui.addItem(
            {
              type: MessageType.ERROR,
              text: `Error refreshing memory: ${errorMessage}`,
            },
            Date.now(),
          );
        }
      },
    },
  ],
  action: async (context) => {
    const executionMode = context.executionMode ?? 'interactive';
    if (executionMode === 'interactive') {
      return {
        type: 'dialog',
        dialog: 'memory',
      };
    }
    return {
      type: 'message',
      messageType: 'info',
      content: t(
        'The memory manager is only available in the interactive UI. In non-interactive mode, open the user or project memory files directly.',
      ),
    };
  },
};

@@ -72,6 +72,9 @@ export const modelCommand: SlashCommand = {
        'fastModel',
        modelName,
      );
      // Sync the runtime Config so forked agents pick up the change immediately
      // without requiring a restart.
      config.setFastModel(modelName);
      return {
        type: 'message',
        messageType: 'info',

58  packages/cli/src/ui/commands/rememberCommand.ts  Normal file

@@ -0,0 +1,58 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { getAutoMemoryRoot } from '@qwen-code/qwen-code-core';
import { t } from '../../i18n/index.js';
import type {
  CommandContext,
  SlashCommand,
  SlashCommandActionReturn,
} from './types.js';
import { CommandKind } from './types.js';

export const rememberCommand: SlashCommand = {
  name: 'remember',
  get description() {
    return t('Save a durable memory to the memory system.');
  },
  kind: CommandKind.BUILT_IN,
  action: (context: CommandContext, args): SlashCommandActionReturn | void => {
    const fact = args.trim();
    if (!fact) {
      return {
        type: 'message',
        messageType: 'error',
        content: t('Usage: /remember <text to remember>'),
      };
    }

    const config = context.services.config;
    const useManagedMemory = config?.getManagedAutoMemoryEnabled() ?? false;

    if (useManagedMemory) {
      // In managed auto-memory mode the save_memory tool is not registered.
      // Submit a prompt so the main agent writes the per-entry file directly,
      // choosing the appropriate type (user / feedback / project / reference)
      // based on the content, following the instructions in buildManagedAutoMemoryPrompt.
      const memoryDir = config
        ? getAutoMemoryRoot(config.getProjectRoot())
        : undefined;
      const dirHint = memoryDir ? ` Save it to \`${memoryDir}\`.` : '';
      return {
        type: 'submit_prompt',
        content: `Please save the following to your memory system.${dirHint} Choose the most appropriate memory type (user, feedback, project, or reference) based on the content:\n\n${fact}`,
      };
    }

    // Managed auto-memory is disabled: ask the agent to save to QWEN.md
    // using its native file tools. We do not call save_memory because that
    // tool was removed.
    return {
      type: 'submit_prompt',
      content: `Please save the following fact to memory (e.g. append to QWEN.md in the project root):\n\n${fact}`,
    };
  },
};

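The prompt routing in `rememberCommand` above (managed auto-memory with a directory hint versus the QWEN.md fallback) can be sketched as a pure function. `buildRememberPrompt` is a hypothetical name; the wording mirrors the strings in the command.

```typescript
// Sketch of the /remember prompt routing. A defined memoryDir stands in for
// "managed auto-memory is enabled and the storage root is known".
function buildRememberPrompt(fact: string, memoryDir?: string): string {
  if (memoryDir !== undefined) {
    // Managed mode: hint at the per-entry storage root and let the agent
    // pick the memory type (user / feedback / project / reference).
    const dirHint = ` Save it to \`${memoryDir}\`.`;
    return (
      `Please save the following to your memory system.${dirHint} ` +
      `Choose the most appropriate memory type (user, feedback, project, or reference) based on the content:\n\n${fact}`
    );
  }
  // Fallback: ask the agent to append to QWEN.md with its native file tools.
  return `Please save the following fact to memory (e.g. append to QWEN.md in the project root):\n\n${fact}`;
}
```

Keeping the branch in a pure helper makes it easy to assert that the managed prompt always carries the directory hint and the fallback always names QWEN.md.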
@@ -156,6 +156,7 @@ export interface OpenDialogActionReturn {
    | 'theme'
    | 'editor'
    | 'settings'
    | 'memory'
    | 'model'
    | 'fast-model'
    | 'subagent_create'

@@ -186,6 +187,8 @@ export interface LoadHistoryActionReturn {
export interface SubmitPromptActionReturn {
  type: 'submit_prompt';
  content: PartListUnion;
  /** Optional callback invoked after the agent turn completes successfully. */
  onComplete?: () => Promise<void>;
}

/**

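The `onComplete` hook above pairs with the pattern the commit message describes, where client.ts captures background dream/extract promises and hands them over after the turn via `consumePendingMemoryTaskPromises()`. A minimal sketch of that capture-then-consume buffer, with illustrative names (the real client carries more state):

```typescript
// Each turn, background tasks are tracked as they start; after the turn,
// the UI consumes everything accumulated so far and the buffer is cleared,
// so a task is only ever reported once.
class PendingTaskBuffer {
  private pending: Array<Promise<number>> = [];

  track(task: Promise<number>): void {
    this.pending.push(task);
  }

  // Hands over all tracked promises and resets the buffer.
  consume(): Array<Promise<number>> {
    const out = this.pending;
    this.pending = [];
    return out;
  }
}
```

The caller can then `Promise.all` the consumed batch and emit one "Updated N memories" notification per turn.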
@@ -43,6 +43,7 @@ import { ExtensionsManagerDialog } from './extensions/ExtensionsManagerDialog.js
import { MCPManagementDialog } from './mcp/MCPManagementDialog.js';
import { HooksManagementDialog } from './hooks/HooksManagementDialog.js';
import { SessionPicker } from './SessionPicker.js';
import { MemoryDialog } from './MemoryDialog.js';

interface DialogManagerProps {
  addItem: UseHistoryManagerReturn['addItem'];

@@ -237,6 +238,9 @@ export const DialogManager = ({
      </Box>
    );
  }
  if (uiState.isMemoryDialogOpen) {
    return <MemoryDialog onClose={uiActions.closeMemoryDialog} />;
  }
  if (uiState.isApprovalModeDialogOpen) {
    const currentMode = config.getApprovalMode();
    return (

@@ -17,16 +17,36 @@ import type { LoadedSettings } from '../../config/settings.js';
vi.mock('../hooks/useTerminalSize.js');
const useTerminalSizeMock = vi.mocked(useTerminalSize.useTerminalSize);

vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
  const actual =
    await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
  const registry = {
    list: vi.fn(() => []),
    subscribe: vi.fn(() => () => {}),
  };
  return {
    ...actual,
    getManagedAutoMemoryDreamTaskRegistry: vi.fn(() => registry),
  };
});

const defaultProps = {
  model: 'gemini-pro',
};

const createMockMemoryManager = () => ({
  subscribe: vi.fn(() => () => {}),
  listTasksByType: vi.fn(() => []),
});

const createMockConfig = (overrides = {}) => ({
  getModel: vi.fn(() => defaultProps.model),
  getDebugMode: vi.fn(() => false),
  getContentGeneratorConfig: vi.fn(() => ({ contextWindowSize: 131072 })),
  getMcpServers: vi.fn(() => ({})),
  getBlockedMcpServers: vi.fn(() => []),
  getProjectRoot: vi.fn(() => '/test/project'),
  getMemoryManager: vi.fn(createMockMemoryManager),
  ...overrides,
});

@@ -5,6 +5,7 @@
 */

import type React from 'react';
import { useCallback, useSyncExternalStore } from 'react';
import { Box, Text } from 'ink';
import { theme } from '../semantic-colors.js';
import { ContextUsageDisplay } from './ContextUsageDisplay.js';

@@ -20,10 +21,37 @@ import { useVimMode } from '../contexts/VimModeContext.js';
import { ApprovalMode } from '@qwen-code/qwen-code-core';
import { t } from '../../i18n/index.js';

/**
 * Returns true while any dream task for the current project is in
 * 'pending' or 'running' state. Uses MemoryManager's subscribe/notify
 * mechanism so there is zero polling overhead.
 */
function useDreamRunning(projectRoot: string): boolean {
  const config = useConfig();

  const subscribe = useCallback(
    (onStoreChange: () => void) =>
      config.getMemoryManager().subscribe(onStoreChange),
    [config],
  );

  const getSnapshot = useCallback(
    () =>
      config
        .getMemoryManager()
        .listTasksByType('dream', projectRoot)
        .some((task) => task.status === 'pending' || task.status === 'running'),
    [config, projectRoot],
  );

  return useSyncExternalStore(subscribe, getSnapshot);
}

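`useDreamRunning` relies on the store contract `useSyncExternalStore` expects: a `subscribe` function that returns an unsubscribe callback, plus a cheap snapshot read, with the store pushing change notifications instead of being polled. A minimal sketch of such a store follows; the names are illustrative, and the real MemoryManager tracks far more than a boolean.

```typescript
type Listener = () => void;

// A push-based store: consumers subscribe once and are notified only when
// the value actually changes, so there is no polling loop.
class TaskStore {
  private listeners = new Set<Listener>();
  private running = false;

  subscribe(listener: Listener): () => void {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  }

  getSnapshot(): boolean {
    return this.running;
  }

  setRunning(value: boolean): void {
    if (this.running === value) return; // no-op writes do not notify
    this.running = value;
    for (const listener of this.listeners) listener();
  }
}
```

React calls `getSnapshot` after each notification; because snapshots are referentially stable between changes, the Footer only re-renders when the dream state flips.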
export const Footer: React.FC = () => {
  const uiState = useUIState();
  const config = useConfig();
  const { vimEnabled, vimMode } = useVimMode();
  const dreamRunning = useDreamRunning(config.getProjectRoot());
  const { text: statusLineText } = useStatusLine();

  const { promptTokenCount, showAutoAcceptIndicator } = {

@@ -85,6 +113,12 @@ export const Footer: React.FC = () => {
      node: <Text color={theme.status.warning}>Debug Mode</Text>,
    });
  }
  if (dreamRunning) {
    rightItems.push({
      key: 'dream',
      node: <Text color={theme.text.secondary}>{t('✦ dreaming')}</Text>,
    });
  }
  if (promptTokenCount > 0 && contextWindowSize) {
    rightItems.push({
      key: 'context',

@@ -48,6 +48,7 @@ import { ContextUsage } from './views/ContextUsage.js';
import { ArenaAgentCard, ArenaSessionCard } from './arena/ArenaCards.js';
import { InsightProgressMessage } from './messages/InsightProgressMessage.js';
import { BtwMessage } from './messages/BtwMessage.js';
import { MemorySavedMessage } from './messages/MemorySavedMessage.js';
import { useCompactMode } from '../contexts/CompactModeContext.js';

interface HistoryItemDisplayProps {

@@ -189,6 +190,8 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
          isFocused={isFocused}
          activeShellPtyId={activeShellPtyId}
          embeddedShellFocused={embeddedShellFocused}
          memoryWriteCount={itemForDisplay.memoryWriteCount}
          memoryReadCount={itemForDisplay.memoryReadCount}
          isUserInitiated={itemForDisplay.isUserInitiated}
        />
      )}

@@ -268,6 +271,9 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
          </Box>
        </Box>
      )}
      {itemForDisplay.type === 'memory_saved' && (
        <MemorySavedMessage item={itemForDisplay} />
      )}
    </Box>
  );
};

@@ -630,18 +630,18 @@ describe('InputPrompt', () => {
  });

  it('should handle the "backspace" edge case correctly', async () => {
    // SCENARIO: /config -> Backspace -> /config -> Tab (to accept 'set')
    mockedUseCommandCompletion.mockReturnValue({
      ...mockCommandCompletion,
      showSuggestions: true,
      suggestions: [
        { label: 'set', value: 'set' },
        { label: 'reset', value: 'reset' },
      ],
      activeSuggestionIndex: 0, // 'set' is highlighted
    });
    // The user has backspaced, so the query is now just '/config'
    props.buffer.setText('/config');

    const { stdin, unmount } = renderWithProviders(<InputPrompt {...props} />);
    await wait();
@@ -649,20 +649,20 @@ describe('InputPrompt', () => {
    stdin.write('\t'); // Press Tab
    await wait();

    // It should NOT become '/set'. It should correctly become '/config set'.
    expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0);
    unmount();
  });

  it('should complete a partial argument for a command', async () => {
    // SCENARIO: /config set fi- -> Tab
    mockedUseCommandCompletion.mockReturnValue({
      ...mockCommandCompletion,
      showSuggestions: true,
      suggestions: [{ label: 'fix-foo', value: 'fix-foo' }],
      activeSuggestionIndex: 0,
    });
    props.buffer.setText('/config set fi-');

    const { stdin, unmount } = renderWithProviders(<InputPrompt {...props} />);
    await wait();
@@ -925,8 +925,8 @@ describe('InputPrompt', () => {
  });

  it('should NOT trigger completion when cursor is after space following /', async () => {
    mockBuffer.text = '/config set';
    mockBuffer.lines = ['/config set'];
    mockBuffer.cursor = [0, 11];

    mockedUseCommandCompletion.mockReturnValue({

69  packages/cli/src/ui/components/MemoryDialog.test.tsx  Normal file

@@ -0,0 +1,69 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { act } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { render } from 'ink-testing-library';
import { MemoryDialog } from './MemoryDialog.js';
import { useConfig } from '../contexts/ConfigContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { useLaunchEditor } from '../hooks/useLaunchEditor.js';
import { useKeypress } from '../hooks/useKeypress.js';

vi.mock('../contexts/ConfigContext.js', () => ({
  useConfig: vi.fn(),
}));

vi.mock('../contexts/SettingsContext.js', () => ({
  useSettings: vi.fn(),
}));

vi.mock('../hooks/useLaunchEditor.js', () => ({
  useLaunchEditor: vi.fn(),
}));

vi.mock('../hooks/useKeypress.js', () => ({
  useKeypress: vi.fn(),
}));

const mockedUseConfig = vi.mocked(useConfig);
const mockedUseSettings = vi.mocked(useSettings);
const mockedUseLaunchEditor = vi.mocked(useLaunchEditor);
const mockedUseKeypress = vi.mocked(useKeypress);

describe('MemoryDialog', () => {
  beforeEach(() => {
    vi.clearAllMocks();

    mockedUseConfig.mockReturnValue({
      getWorkingDir: vi.fn(() => '/tmp/project'),
      getProjectRoot: vi.fn(() => '/tmp/project'),
      getManagedAutoMemoryEnabled: vi.fn(() => false),
      getManagedAutoDreamEnabled: vi.fn(() => false),
    } as never);

    mockedUseSettings.mockReturnValue({ setValue: vi.fn() } as never);
    mockedUseLaunchEditor.mockReturnValue(vi.fn());
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('moves selection with down arrow key events', () => {
    const { lastFrame } = render(<MemoryDialog onClose={vi.fn()} />);

    expect(lastFrame()).toContain('› 1. User memory');

    const keypressHandler = mockedUseKeypress.mock.calls[0][0];

    act(() => {
      keypressHandler({ name: 'down' } as never);
    });

    expect(lastFrame()).toContain('› 2. Project memory');
  });
});
410  packages/cli/src/ui/components/MemoryDialog.tsx  Normal file

@@ -0,0 +1,410 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import { Box, Text } from 'ink';
import { useCallback, useEffect, useMemo, useState } from 'react';
import fs from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';
import { spawnSync } from 'node:child_process';
import {
  getAllGeminiMdFilenames,
  QWEN_DIR,
  getAutoMemoryRoot,
  getAutoMemoryProjectStateDir,
} from '@qwen-code/qwen-code-core';
import { useConfig } from '../contexts/ConfigContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { SettingScope } from '../../config/settings.js';
import { useLaunchEditor } from '../hooks/useLaunchEditor.js';
import { useKeypress } from '../hooks/useKeypress.js';
import { theme } from '../semantic-colors.js';
import { formatRelativeTime } from '../utils/formatters.js';
import { t } from '../../i18n/index.js';

type MemoryDialogTarget = 'project' | 'global' | 'managed';

interface MemoryDialogProps {
  onClose: () => void;
}

interface DialogItem {
  label: string;
  value: MemoryDialogTarget;
  description?: string;
}

async function resolvePreferredMemoryFile(
  dir: string,
  fallbackFilename: string,
): Promise<string> {
  for (const filename of getAllGeminiMdFilenames()) {
    const filePath = path.join(dir, filename);
    try {
      await fs.access(filePath);
      return filePath;
    } catch {
      // Try the next configured file name.
    }
  }

  return path.join(dir, fallbackFilename);
}

function openFolderPath(folderPath: string): void {
  let command = 'xdg-open';

  switch (process.platform) {
    case 'darwin':
      command = 'open';
      break;
    case 'win32':
      command = 'explorer';
      break;
    default:
      command = 'xdg-open';
      break;
  }

  const needsShell =
    process.platform === 'win32' &&
    (command.endsWith('.cmd') || command.endsWith('.bat'));

  const result = spawnSync(command, [folderPath], {
    stdio: 'inherit',
    shell: needsShell,
  });

  if (result.error) {
    throw result.error;
  }
  if (typeof result.status === 'number' && result.status !== 0) {
    throw new Error(`Folder opener exited with status ${result.status}`);
  }
}

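The platform switch in `openFolderPath` above can be isolated as a pure function so the selection logic is testable without spawning anything. `pickOpenCommand` is a hypothetical name for this sketch.

```typescript
// Maps a Node.js process.platform value to the OS file-manager opener:
// Finder's `open` on macOS, `explorer` on Windows, and the freedesktop.org
// `xdg-open` everywhere else.
function pickOpenCommand(platform: string): string {
  switch (platform) {
    case 'darwin':
      return 'open';
    case 'win32':
      return 'explorer';
    default:
      return 'xdg-open';
  }
}
```

The side-effecting wrapper then reduces to `spawnSync(pickOpenCommand(process.platform), [folderPath], ...)` plus error handling.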
async function ensureFileExists(filePath: string): Promise<void> {
  await fs.mkdir(path.dirname(filePath), { recursive: true });
  try {
    await fs.access(filePath);
  } catch {
    await fs.writeFile(filePath, '', 'utf-8');
  }
}

function formatDisplayPath(filePath: string): string {
  const home = os.homedir();
  if (filePath.startsWith(home)) {
    return `~${filePath.slice(home.length)}`;
  }
  return filePath;
}

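`formatDisplayPath` becomes trivially testable once the home directory is injected instead of read from `os.homedir()`. A standalone sketch with an assumed name:

```typescript
// Same tilde-shortening rule as formatDisplayPath, but deterministic:
// the caller supplies the home prefix to strip.
function shortenHomePath(filePath: string, home: string): string {
  if (filePath.startsWith(home)) {
    return `~${filePath.slice(home.length)}`;
  }
  return filePath;
}
```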
export function MemoryDialog({ onClose }: MemoryDialogProps) {
  const config = useConfig();
  const loadedSettings = useSettings();
  const launchEditor = useLaunchEditor();
  const [error, setError] = useState<string | null>(null);
  const [highlightedIndex, setHighlightedIndex] = useState(0);
  // 'autoMemory' | 'autoDream' = focus on that toggle row; 'list' = focus on the file list
  const [focusedSection, setFocusedSection] = useState<
    'autoMemory' | 'autoDream' | 'list'
  >('list');
  const [autoMemoryOn, setAutoMemoryOn] = useState(() =>
    config.getManagedAutoMemoryEnabled(),
  );
  const [autoDreamOn, setAutoDreamOn] = useState(() =>
    config.getManagedAutoDreamEnabled(),
  );
  const [lastDreamAt, setLastDreamAt] = useState<number | null>(null);

  const globalMemoryPath = useMemo(
    () =>
      path.join(
        os.homedir(),
        QWEN_DIR,
        getAllGeminiMdFilenames()[0] ?? 'QWEN.md',
      ),
    [],
  );
  const projectMemoryPath = useMemo(
    () =>
      path.join(
        config.getWorkingDir(),
        getAllGeminiMdFilenames()[0] ?? 'QWEN.md',
      ),
    [config],
  );
  const managedMemoryPath = useMemo(
    () => getAutoMemoryRoot(config.getProjectRoot()),
    [config],
  );

  const memoryStatePath = useMemo(
    () => getAutoMemoryProjectStateDir(config.getProjectRoot()),
    [config],
  );

  const items = useMemo<DialogItem[]>(
    () => [
      {
        label: t('User memory'),
        value: 'global',
        description: t('Saved in {{path}}', {
          path: formatDisplayPath(globalMemoryPath),
        }),
      },
      {
        label: t('Project memory'),
        value: 'project',
        description: t('Saved in {{path}}', {
          path:
            path.relative(config.getWorkingDir(), projectMemoryPath) ||
            path.basename(projectMemoryPath),
        }),
      },
      {
        label: t('Open auto-memory folder'),
        value: 'managed',
      },
    ],
    [config, globalMemoryPath, projectMemoryPath],
  );

  // Load lastDreamAt from meta.json
  useEffect(() => {
    let cancelled = false;

    async function loadMeta() {
      try {
        const metadataPath = path.join(memoryStatePath, 'meta.json');
        const content = await fs.readFile(metadataPath, 'utf-8');
        const parsed = JSON.parse(content) as { lastDreamAt?: string };
        if (!cancelled && parsed.lastDreamAt) {
          const ts = new Date(parsed.lastDreamAt).getTime();
          if (!Number.isNaN(ts)) {
            setLastDreamAt(ts);
          }
        }
      } catch {
        // meta.json not found or invalid; keep null
      }
    }

    void loadMeta();
    return () => {
      cancelled = true;
    };
  }, [memoryStatePath]);

  const dreamStatusText = useMemo(() => {
    if (lastDreamAt !== null) return formatRelativeTime(lastDreamAt);
    return t('never');
  }, [lastDreamAt]);

  const resolveTargetPath = useCallback(
    async (target: MemoryDialogTarget): Promise<string> => {
      switch (target) {
        case 'project':
          return resolvePreferredMemoryFile(
            config.getWorkingDir(),
            getAllGeminiMdFilenames()[0] ?? 'QWEN.md',
          );
        case 'global':
          return resolvePreferredMemoryFile(
            path.join(os.homedir(), QWEN_DIR),
            getAllGeminiMdFilenames()[0] ?? 'QWEN.md',
          );
        case 'managed':
          return managedMemoryPath;
        default:
          return managedMemoryPath;
      }
    },
    [config, managedMemoryPath],
  );

  const handleSelect = useCallback(
    async (target: MemoryDialogTarget) => {
      try {
        setError(null);
        const targetPath = await resolveTargetPath(target);
        if (target === 'managed') {
          await fs.mkdir(targetPath, { recursive: true });
          openFolderPath(targetPath);
        } else {
          await ensureFileExists(targetPath);
          await launchEditor(targetPath);
        }
        onClose();
      } catch (selectionError) {
        setError(
          selectionError instanceof Error
            ? selectionError.message
            : String(selectionError),
        );
      }
    },
    [launchEditor, onClose, resolveTargetPath],
  );

  const handleToggleAutoMemory = useCallback(() => {
    const newValue = !autoMemoryOn;
    loadedSettings.setValue(
      SettingScope.Workspace,
      'memory.enableManagedAutoMemory',
      newValue,
    );
    setAutoMemoryOn(newValue);
  }, [autoMemoryOn, loadedSettings]);

  const handleToggleAutoDream = useCallback(() => {
    const newValue = !autoDreamOn;
    loadedSettings.setValue(
      SettingScope.Workspace,
      'memory.enableManagedAutoDream',
      newValue,
    );
    setAutoDreamOn(newValue);
  }, [autoDreamOn, loadedSettings]);

  useKeypress(
    (key) => {
      if (key.name === 'escape') {
        onClose();
        return;
      }

      if (focusedSection === 'autoMemory') {
        if (key.name === 'down') {
          setFocusedSection('autoDream');
          return;
        }
        if (key.name === 'return') {
          handleToggleAutoMemory();
          return;
        }
        return;
      }

      if (focusedSection === 'autoDream') {
        if (key.name === 'up') {
          setFocusedSection('autoMemory');
          return;
        }
        if (key.name === 'down') {
          setFocusedSection('list');
          setHighlightedIndex(0);
          return;
        }
        if (key.name === 'return') {
          handleToggleAutoDream();
          return;
        }
        return;
      }

      // focusedSection === 'list'
      if (key.name === 'up') {
        if (highlightedIndex === 0) {
          setFocusedSection('autoDream');
        } else {
          setHighlightedIndex((current) => current - 1);
        }
        return;
      }

      if (key.name === 'down') {
        setHighlightedIndex((current) => (current + 1) % items.length);
        return;
      }

      if (key.name === 'return') {
        void handleSelect(items[highlightedIndex]?.value ?? 'project');
        return;
      }

      if (key.sequence && /^[1-3]$/.test(key.sequence)) {
        const nextIndex = Number(key.sequence) - 1;
        if (items[nextIndex]) {
          setHighlightedIndex(nextIndex);
          void handleSelect(items[nextIndex].value);
        }
      }
    },
    { isActive: true },
  );

  return (
    <Box
      borderStyle="round"
      borderColor={theme.border.default}
      flexDirection="column"
      padding={1}
      width="100%"
    >
      <Text bold>{t('Memory')}</Text>

      <Box marginTop={1} flexDirection="column">
        <Text
          color={
            focusedSection === 'autoMemory'
              ? theme.status.success
              : theme.text.secondary
          }
        >
          {focusedSection === 'autoMemory' ? '› ' : '  '}
          {t('Auto-memory: {{status}}', {
            status: autoMemoryOn ? t('on') : t('off'),
          })}
        </Text>
        <Text
          color={
            focusedSection === 'autoDream'
              ? theme.status.success
              : theme.text.secondary
          }
        >
          {focusedSection === 'autoDream' ? '› ' : '  '}
          {t('Auto-dream: {{status}} · {{lastDream}} · /dream to run', {
            status: autoDreamOn ? t('on') : t('off'),
            lastDream: dreamStatusText,
          })}
        </Text>
      </Box>

      {error && (
        <Box marginTop={1}>
          <Text color={theme.status.error}>{error}</Text>
        </Box>
      )}

      <Box marginTop={1} flexDirection="column">
        {items.map((item, index) => {
          const isSelected =
            focusedSection === 'list' && index === highlightedIndex;
          return (
            <Box key={item.value} flexDirection="row">
              <Text color={isSelected ? theme.status.success : undefined}>
                {isSelected ? '› ' : '  '}
                {index + 1}. {item.label}
              </Text>
              {item.description ? (
                <Text color={theme.text.secondary}>{` ${item.description}`}</Text>
              ) : null}
            </Box>
          );
        })}
      </Box>

      <Box marginTop={1}>
        <Text color={theme.text.secondary}>
          {t('Enter to confirm · Esc to cancel')}
        </Text>
      </Box>
    </Box>
  );
}

@@ -326,6 +326,8 @@ export function ModelDialog({
    }
    const scope = getPersistScopeForModelSelection(settings);
    settings.setValue(scope, 'fastModel', modelId);
    // Sync the runtime Config so forked agents pick up the change immediately.
    config?.setFastModel(modelId);
    uiState?.historyManager.addItem(
      {
        type: 'success',
@@ -0,0 +1,38 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import type React from 'react';
import { Box, Text } from 'ink';
import type { HistoryItemMemorySaved } from '../../types.js';

interface MemorySavedMessageProps {
  item: HistoryItemMemorySaved;
}

/**
 * Displays a post-turn notification that managed-auto-memory files were written.
 * Shown when:
 * - The model directly wrote to memory files in-turn (via write_file / edit_file).
 * - The background dream / extraction pipeline completed and touched memory files.
 */
export const MemorySavedMessage: React.FC<MemorySavedMessageProps> = ({
  item,
}) => {
  const verb = item.verb ?? 'Saved';
  const n = item.writtenCount;
  const label = n === 1 ? 'memory' : 'memories';

  return (
    <Box flexDirection="row">
      <Box minWidth={2}>
        <Text dimColor>●</Text>
      </Box>
      <Text dimColor>
        {verb} {n} {label}
      </Text>
    </Box>
  );
};
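The notification text the component above renders reduces to a small formatting rule: default the verb to `Saved`, and pluralize `memory` when the count is not 1. A standalone sketch (`formatMemorySaved` is a hypothetical helper, not part of this diff):

```typescript
// Hypothetical helper mirroring MemorySavedMessage's label logic:
// verb defaults to 'Saved'; 'memory' pluralizes on count !== 1.
function formatMemorySaved(writtenCount: number, verb: string = 'Saved'): string {
  const label = writtenCount === 1 ? 'memory' : 'memories';
  return `● ${verb} ${writtenCount} ${label}`;
}

console.log(formatMemorySaved(1)); // ● Saved 1 memory
console.log(formatMemorySaved(3, 'Updated')); // ● Updated 3 memories
```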
@@ -5,8 +5,8 @@
 */

import type React from 'react';
import { Box, Text } from 'ink';
import { useMemo, useRef } from 'react';
import { Box } from 'ink';
import type { IndividualToolCallDisplay } from '../../types.js';
import { ToolCallStatus } from '../../types.js';
import { ToolMessage } from './ToolMessage.js';
@@ -39,6 +39,10 @@ interface ToolGroupMessageProps {
  activeShellPtyId?: number | null;
  embeddedShellFocused?: boolean;
  onShellInputSubmit?: (input: string) => void;
  /** Pre-computed count of write ops to managed-auto-memory files. */
  memoryWriteCount?: number;
  /** Pre-computed count of read ops from managed-auto-memory files. */
  memoryReadCount?: number;
  isUserInitiated?: boolean;
}

@@ -50,6 +54,8 @@ export const ToolGroupMessage: React.FC<ToolGroupMessageProps> = ({
  isFocused = true,
  activeShellPtyId,
  embeddedShellFocused,
  memoryWriteCount,
  memoryReadCount,
  isUserInitiated,
}) => {
  const config = useConfig();

@@ -68,11 +74,28 @@ export const ToolGroupMessage: React.FC<ToolGroupMessageProps> = ({

  // useMemo must be called unconditionally (Rules of Hooks) — before any early return
  // only prompt for tool approval on the first 'confirming' tool in the list
  // note, after the CTA, this automatically moves over to the next 'confirming' tool
  const toolAwaitingApproval = useMemo(
    () => toolCalls.find((tc) => tc.status === ToolCallStatus.Confirming),
    [toolCalls],
  );

  // Detect if this is a "memory-only" group (all tool calls are memory ops)
  const isMemoryOnlyGroup = useMemo(
    () => toolCalls.length > 0 && toolCalls.every((t) => t.isMemoryOp != null),
    [toolCalls],
  );

  const allComplete = useMemo(
    () =>
      toolCalls.every(
        (t) =>
          t.status === ToolCallStatus.Success ||
          t.status === ToolCallStatus.Error,
      ),
    [toolCalls],
  );

  // Determine which subagent tools currently have a pending confirmation.
  // Must be called unconditionally (Rules of Hooks) — before any early return.
  const subagentsAwaitingApproval = useMemo(
@@ -155,6 +178,37 @@ export const ToolGroupMessage: React.FC<ToolGroupMessageProps> = ({
        )
      : undefined;

  // For completed memory-only groups, show a compact summary instead of individual tool calls
  if (isMemoryOnlyGroup && allComplete) {
    const readCount = memoryReadCount ?? 0;
    const writeCount = memoryWriteCount ?? 0;
    return (
      <Box
        flexDirection="column"
        borderStyle="round"
        width={contentWidth}
        borderColor={theme.border.default}
      >
        {readCount > 0 && (
          <Box paddingLeft={1}>
            <Text dimColor>
              {'● '}
              Recalled {readCount} {readCount === 1 ? 'memory' : 'memories'}
            </Text>
          </Box>
        )}
        {writeCount > 0 && (
          <Box paddingLeft={1}>
            <Text dimColor>
              {'● '}
              Wrote {writeCount} {writeCount === 1 ? 'memory' : 'memories'}
            </Text>
          </Box>
        )}
      </Box>
    );
  }

  return (
    <Box
      flexDirection="column"
@@ -172,6 +226,25 @@ export const ToolGroupMessage: React.FC<ToolGroupMessageProps> = ({
      borderColor={borderColor}
      gap={1}
    >
      {/* Memory badge for mixed groups (some memory ops + other ops) */}
      {!isMemoryOnlyGroup &&
        ((memoryWriteCount ?? 0) > 0 || (memoryReadCount ?? 0) > 0) &&
        (() => {
          const parts: string[] = [];
          if ((memoryReadCount ?? 0) > 0) {
            const n = memoryReadCount!;
            parts.push(`Recalled ${n} ${n === 1 ? 'memory' : 'memories'}`);
          }
          if ((memoryWriteCount ?? 0) > 0) {
            const n = memoryWriteCount!;
            parts.push(`Wrote ${n} ${n === 1 ? 'memory' : 'memories'}`);
          }
          return (
            <Box paddingLeft={1}>
              <Text dimColor>● {parts.join(', ')}</Text>
            </Box>
          );
        })()}
      {toolCalls.map((tool) => {
        const isConfirming = toolAwaitingApproval?.callId === tool.callId;
        // A subagent's inline confirmation should only receive keyboard focus
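The mixed-group badge above is just string assembly over the two pre-computed counts. Pulled out as a pure function (`buildMemoryBadge` is a hypothetical helper for illustration; the diff keeps this inline in the JSX):

```typescript
// Hypothetical pure helper mirroring the mixed-group badge text:
// each non-zero count contributes a pluralized segment, joined by ', '.
function buildMemoryBadge(readCount: number, writeCount: number): string | undefined {
  const parts: string[] = [];
  if (readCount > 0) {
    parts.push(`Recalled ${readCount} ${readCount === 1 ? 'memory' : 'memories'}`);
  }
  if (writeCount > 0) {
    parts.push(`Wrote ${writeCount} ${writeCount === 1 ? 'memory' : 'memories'}`);
  }
  // No memory ops at all → no badge rendered.
  return parts.length > 0 ? `● ${parts.join(', ')}` : undefined;
}

console.log(buildMemoryBadge(1, 2)); // ● Recalled 1 memory, Wrote 2 memories
```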
@@ -367,6 +367,10 @@ describe('BaseSelectionList', () => {
    expect(output).not.toContain('Item 1');

    await updateActiveIndex(5); // Scroll further
    // Wait for scrollOffset state to settle after the second jump
    await waitFor(() => {
      expect(lastFrame()).toContain('Item 6');
    });
    output = lastFrame();
    expect(output).toContain('Item 4');
    expect(output).toContain('Item 6');
@@ -29,6 +29,7 @@ export interface OpenAICredentials {
export interface UIActions {
  openThemeDialog: () => void;
  openEditorDialog: () => void;
  openMemoryDialog: () => void;
  handleThemeSelect: (
    themeName: string | undefined,
    scope: SettingScope,

@@ -60,6 +61,7 @@ export interface UIActions {
  ) => void;
  exitEditorDialog: () => void;
  closeSettingsDialog: () => void;
  closeMemoryDialog: () => void;
  closeModelDialog: () => void;
  openModelDialog: (options?: { fastModelMode?: boolean }) => void;
  openArenaDialog: (type: Exclude<ArenaDialogType, null>) => void;

@@ -53,6 +53,7 @@ export interface UIState {
  debugMessage: string;
  quittingMessages: HistoryItem[] | null;
  isSettingsDialogOpen: boolean;
  isMemoryDialogOpen: boolean;
  isModelDialogOpen: boolean;
  isFastModelMode: boolean;
  isTrustDialogOpen: boolean;
@@ -110,6 +110,7 @@ describe('useSlashCommandProcessor', () => {
  const mockLoadHistory = vi.fn();
  const mockOpenThemeDialog = vi.fn();
  const mockOpenAuthDialog = vi.fn();
  const mockOpenMemoryDialog = vi.fn();
  const mockOpenModelDialog = vi.fn();
  const mockSetQuittingMessages = vi.fn();

@@ -126,6 +127,7 @@ describe('useSlashCommandProcessor', () => {
    mockFileLoadCommands.mockResolvedValue([]);
    mockMcpLoadCommands.mockResolvedValue([]);
    mockOpenModelDialog.mockClear();
    mockOpenMemoryDialog.mockClear();
  });

  const setupProcessorHook = (

@@ -154,6 +156,7 @@ describe('useSlashCommandProcessor', () => {
    openAuthDialog: mockOpenAuthDialog,
    openThemeDialog: mockOpenThemeDialog,
    openEditorDialog: vi.fn(),
    openMemoryDialog: mockOpenMemoryDialog,
    openSettingsDialog: vi.fn(),
    openModelDialog: mockOpenModelDialog,
    openTrustDialog: vi.fn(),

@@ -429,6 +432,44 @@ describe('useSlashCommandProcessor', () => {
    expect(mockOpenModelDialog).toHaveBeenCalled();
  });

  it('should handle "dialog: memory" action', async () => {
    const command = createTestCommand({
      name: 'memorycmd',
      action: vi.fn().mockResolvedValue({ type: 'dialog', dialog: 'memory' }),
    });
    const result = setupProcessorHook([command]);
    await waitFor(() => expect(result.current.slashCommands).toHaveLength(1));

    await act(async () => {
      await result.current.handleSlashCommand('/memorycmd');
    });

    expect(mockOpenMemoryDialog).toHaveBeenCalled();
  });

  it('should pass interactive execution mode to command actions', async () => {
    const action = vi.fn().mockResolvedValue({
      type: 'message',
      messageType: 'info',
      content: 'ok',
    });
    const command = createTestCommand({
      name: 'interactivecmd',
      action,
    });
    const result = setupProcessorHook([command]);
    await waitFor(() => expect(result.current.slashCommands).toHaveLength(1));

    await act(async () => {
      await result.current.handleSlashCommand('/interactivecmd');
    });

    expect(action).toHaveBeenCalledWith(
      expect.objectContaining({ executionMode: 'interactive' }),
      '',
    );
  });

  it('should handle "load_history" action', async () => {
    const mockClient = {
      setHistory: vi.fn(),

@@ -928,6 +969,7 @@ describe('useSlashCommandProcessor', () => {
    openAuthDialog: mockOpenAuthDialog,
    openThemeDialog: mockOpenThemeDialog,
    openEditorDialog: vi.fn(),
    openMemoryDialog: mockOpenMemoryDialog,
    openSettingsDialog: vi.fn(),
    openModelDialog: vi.fn(),
    openTrustDialog: vi.fn(),
@@ -73,6 +73,7 @@ interface SlashCommandProcessorActions {
  openArenaDialog?: (type: Exclude<ArenaDialogType, null>) => void;
  openThemeDialog: () => void;
  openEditorDialog: () => void;
  openMemoryDialog: () => void;
  openSettingsDialog: () => void;
  openModelDialog: (options?: { fastModelMode?: boolean }) => void;
  openTrustDialog: () => void;

@@ -248,6 +249,7 @@ export const useSlashCommandProcessor = (
  );
  const commandContext = useMemo(
    (): CommandContext => ({
      executionMode: 'interactive',
      services: {
        config,
        settings,

@@ -513,6 +515,9 @@ export const useSlashCommandProcessor = (
        case 'settings':
          actions.openSettingsDialog();
          return { type: 'handled' };
        case 'memory':
          actions.openMemoryDialog();
          return { type: 'handled' };
        case 'model':
          actions.openModelDialog();
          return { type: 'handled' };

@@ -573,6 +578,7 @@ export const useSlashCommandProcessor = (
        return {
          type: 'submit_prompt',
          content: result.content,
          onComplete: result.onComplete,
        };
      case 'confirm_shell_commands': {
        const { outcome, approvedCommands } = await new Promise<{
@@ -43,6 +43,10 @@ export interface DialogCloseOptions {
  isSettingsDialogOpen: boolean;
  closeSettingsDialog: () => void;

  // Memory dialog
  isMemoryDialogOpen: boolean;
  closeMemoryDialog: () => void;

  // Arena dialogs
  activeArenaDialog: ArenaDialogType;
  closeArenaDialog: () => void;

@@ -88,6 +92,11 @@ export function useDialogClose(options: DialogCloseOptions) {
    return true;
  }

  if (options.isMemoryDialogOpen) {
    options.closeMemoryDialog();
    return true;
  }

  if (options.activeArenaDialog !== null) {
    options.closeArenaDialog();
    return true;
@@ -50,6 +50,7 @@ const MockedGeminiClientClass = vi.hoisted(() =>
    this.startChat = mockStartChat;
    this.sendMessageStream = mockSendMessageStream;
    this.addHistory = vi.fn();
    this.consumePendingMemoryTaskPromises = vi.fn().mockReturnValue([]);
    this.getChatRecordingService = vi.fn().mockReturnValue({
      recordThought: vi.fn(),
      initialize: vi.fn(),

@@ -1060,7 +1061,7 @@ describe('useGeminiStream', () => {
    const { result } = renderTestHook();

    await act(async () => {
      await result.current.submitQuery('/memory add "test fact"');
      await result.current.submitQuery('/save-test-fact "test fact"');
    });

    await waitFor(() => {
@@ -237,6 +237,7 @@ export const useGeminiStream = (
    null,
  );
  const processedMemoryToolsRef = useRef<Set<string>>(new Set());
  const submitPromptOnCompleteRef = useRef<(() => Promise<void>) | null>(null);
  const modelOverrideRef = useRef<string | undefined>(undefined);
  const {
    startNewPrompt,

@@ -257,13 +258,13 @@ export const useGeminiStream = (
    async (completedToolCallsFromScheduler) => {
      // This onComplete is called when ALL scheduled tools for a given batch are done.
      if (completedToolCallsFromScheduler.length > 0) {
        const projectRoot = config.getProjectRoot();
        // Add the final state of these tools to the history for display.
        addItem(
          mapTrackedToolCallsToDisplay(
            completedToolCallsFromScheduler as TrackedToolCall[],
          ),
          Date.now(),
        const toolGroupDisplay = mapTrackedToolCallsToDisplay(
          completedToolCallsFromScheduler as TrackedToolCall[],
          projectRoot,
        );
        addItem(toolGroupDisplay, Date.now());

        // Handle tool response submission immediately when tools complete
        await handleCompletedTools(

@@ -278,8 +279,10 @@ export const useGeminiStream = (

  const pendingToolCallGroupDisplay = useMemo(
    () =>
      toolCalls.length ? mapTrackedToolCallsToDisplay(toolCalls) : undefined,
    [toolCalls],
      toolCalls.length
        ? mapTrackedToolCallsToDisplay(toolCalls, config.getProjectRoot())
        : undefined,
    [toolCalls, config],
  );

  const activeToolPtyId = useMemo(() => {

@@ -563,6 +566,8 @@ export const useGeminiStream = (
        }
        case 'submit_prompt': {
          localQueryToSendToGemini = slashCommandResult.content;
          submitPromptOnCompleteRef.current =
            slashCommandResult.onComplete ?? null;

          return {
            queryToSend: localQueryToSendToGemini,

@@ -1389,6 +1394,35 @@ export const useGeminiStream = (
          loopDetectedRef.current = false;
          handleLoopDetectedEvent();
        }

        // If the turn was initiated by a submit_prompt with an onComplete
        // callback (e.g. /dream recording lastDreamAt), fire it now.
        const onComplete = submitPromptOnCompleteRef.current;
        if (onComplete) {
          submitPromptOnCompleteRef.current = null;
          void onComplete();
        }

        // After the turn completes, wire up notifications for any background
        // dream / extraction tasks that were kicked off by the client.
        if (geminiClient) {
          const memoryTaskPromises =
            geminiClient.consumePendingMemoryTaskPromises();
          for (const p of memoryTaskPromises) {
            void p.then((count) => {
              if (count > 0) {
                addItem(
                  {
                    type: 'memory_saved',
                    writtenCount: count,
                    verb: 'Updated',
                  } as HistoryItemWithoutId,
                  Date.now(),
                );
              }
            });
          }
        }
      }
    } catch (error: unknown) {
      if (error instanceof UnauthorizedError) {
        onAuthError('Session expired or is unauthorized.');
31  packages/cli/src/ui/hooks/useMemoryDialog.ts  Normal file
@@ -0,0 +1,31 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import { useState, useCallback } from 'react';

export interface UseMemoryDialogReturn {
  isMemoryDialogOpen: boolean;
  openMemoryDialog: () => void;
  closeMemoryDialog: () => void;
}

export const useMemoryDialog = (): UseMemoryDialogReturn => {
  const [isMemoryDialogOpen, setIsMemoryDialogOpen] = useState(false);

  const openMemoryDialog = useCallback(() => {
    setIsMemoryDialogOpen(true);
  }, []);

  const closeMemoryDialog = useCallback(() => {
    setIsMemoryDialogOpen(false);
  }, []);

  return {
    isMemoryDialogOpen,
    openMemoryDialog,
    closeMemoryDialog,
  };
};
@@ -23,7 +23,9 @@ import type {
import {
  CoreToolScheduler,
  createDebugLogger,
  isAutoMemPath,
} from '@qwen-code/qwen-code-core';
import * as path from 'node:path';
import { useCallback, useState, useMemo } from 'react';
import type {
  HistoryItemToolGroup,
@@ -209,11 +211,32 @@ function mapCoreStatusToDisplayStatus(coreStatus: CoreStatus): ToolCallStatus {
  }
}

/**
 * Returns 'read' or 'write' if the tool call operates on a managed-auto-memory
 * file; returns undefined otherwise.
 */
function detectMemoryOp(
  toolName: string,
  args: Record<string, unknown>,
  projectRoot: string,
): 'read' | 'write' | undefined {
  const WRITE_TOOLS = new Set(['write_file', 'edit']);
  const READ_TOOLS = new Set(['read_file']);
  const filePath = args?.['file_path'] as string | undefined;
  if (!filePath) return undefined;
  const resolved = path.resolve(filePath);
  if (!isAutoMemPath(resolved, projectRoot)) return undefined;
  if (WRITE_TOOLS.has(toolName)) return 'write';
  if (READ_TOOLS.has(toolName)) return 'read';
  return undefined;
}

/**
 * Transforms `TrackedToolCall` objects into `HistoryItemToolGroup` objects for UI display.
 */
export function mapToDisplay(
  toolOrTools: TrackedToolCall[] | TrackedToolCall,
  projectRoot?: string,
): HistoryItemToolGroup {
  const toolCalls = Array.isArray(toolOrTools) ? toolOrTools : [toolOrTools];
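`detectMemoryOp` classifies a tool call by its tool name and file path. A self-contained sketch of the same decision table, with `isAutoMemPath` stubbed locally (an assumption: the real core helper checks whether a path lives under the project's managed auto-memory directory; the `.qwen/memory/` layout here is hypothetical):

```typescript
// Stub standing in for the core isAutoMemPath helper (hypothetical layout).
const isAutoMemPath = (p: string, root: string): boolean =>
  p.startsWith(`${root}/.qwen/memory/`);

// Sketch of detectMemoryOp's classification: write tools → 'write',
// read tools → 'read', anything else (or a non-memory path) → undefined.
function classifyMemoryOp(
  toolName: string,
  filePath: string | undefined,
  projectRoot: string,
): 'read' | 'write' | undefined {
  if (!filePath) return undefined;
  if (!isAutoMemPath(filePath, projectRoot)) return undefined;
  if (toolName === 'write_file' || toolName === 'edit') return 'write';
  if (toolName === 'read_file') return 'read';
  return undefined;
}

console.log(classifyMemoryOp('write_file', '/repo/.qwen/memory/facts.md', '/repo')); // write
console.log(classifyMemoryOp('read_file', '/repo/src/index.ts', '/repo')); // undefined
```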
@@ -243,6 +266,14 @@ export function mapToDisplay(
    name: displayName,
    description,
    renderOutputAsMarkdown,
    isMemoryOp:
      projectRoot && trackedCall.status !== 'error'
        ? detectMemoryOp(
            trackedCall.request.name,
            trackedCall.request.args as Record<string, unknown>,
            projectRoot,
          )
        : undefined,
  };

  switch (trackedCall.status) {
@@ -310,5 +341,7 @@ export function mapToDisplay(
  return {
    type: 'tool_group',
    tools: toolDisplays,
    memoryWriteCount:
      toolDisplays.filter((t) => t.isMemoryOp === 'write').length || undefined,
    memoryReadCount:
      toolDisplays.filter((t) => t.isMemoryOp === 'read').length || undefined,
  };
}
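The count derivation above is worth noting: `length || undefined` collapses a zero count to `undefined`, so downstream badge rendering can treat "no memory ops" and "field absent" identically. A minimal sketch (`memoryCounts` is a hypothetical extraction of the inline logic):

```typescript
// Simplified stand-in for IndividualToolCallDisplay's memory fields.
interface ToolDisplay {
  isMemoryOp?: 'read' | 'write';
}

// Mirrors mapToDisplay's count derivation: zero collapses to undefined
// via `|| undefined`, so the badge is simply omitted for non-memory groups.
function memoryCounts(tools: ToolDisplay[]) {
  return {
    memoryWriteCount:
      tools.filter((t) => t.isMemoryOp === 'write').length || undefined,
    memoryReadCount:
      tools.filter((t) => t.isMemoryOp === 'read').length || undefined,
  };
}

const counts = memoryCounts([{ isMemoryOp: 'write' }, { isMemoryOp: 'read' }, {}]);
console.log(counts); // { memoryWriteCount: 1, memoryReadCount: 1 }
```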
@@ -562,12 +562,12 @@ describe('useSlashCompletion', () => {

  const slashCommands = [
    createTestCommand({
      name: 'memory',
      description: 'Manage memory',
      name: 'config',
      description: 'Manage configuration',
      subCommands: [
        createTestCommand({
          name: 'show',
          description: 'Show memory',
          name: 'set',
          description: 'Set configuration',
          completion: mockCompletionFn,
        }),
      ],

@@ -577,7 +577,7 @@ describe('useSlashCompletion', () => {
  const { result } = renderHook(() =>
    useTestHarnessForSlashCompletion(
      true,
      '/memory show --project',
      '/config set --project',
      slashCommands,
      mockCommandContext,
    ),

@@ -587,8 +587,8 @@ describe('useSlashCompletion', () => {
  expect(mockCompletionFn).toHaveBeenCalledWith(
    expect.objectContaining({
      invocation: {
        raw: '/memory show --project',
        name: 'show',
        raw: '/config set --project',
        name: 'set',
        args: '--project',
      },
    }),
@@ -69,6 +69,8 @@ export interface IndividualToolCallDisplay {
  confirmationDetails: ToolCallConfirmationDetails | undefined;
  renderOutputAsMarkdown?: boolean;
  ptyId?: number;
  /** If this tool call operated on a managed-auto-memory file, indicates whether it was a read or write. */
  isMemoryOp?: 'read' | 'write';
}

export interface CompressionProps {

@@ -184,9 +186,25 @@ export type HistoryItemQuit = HistoryItemBase & {
  duration: string;
};

/**
 * Displayed after a turn when managed-auto-memory files were written
 * (either in-turn by the model, or by the post-turn dream/extract pipeline).
 */
export type HistoryItemMemorySaved = HistoryItemBase & {
  type: 'memory_saved';
  /** Number of memory files written / updated. */
  writtenCount: number;
  /** Verb to display, e.g. 'Saved' or 'Updated'. Defaults to 'Saved'. */
  verb?: string;
};

export type HistoryItemToolGroup = HistoryItemBase & {
  type: 'tool_group';
  tools: IndividualToolCallDisplay[];
  /** Count of tool calls that wrote to managed-auto-memory files. Pre-computed for badge rendering. */
  memoryWriteCount?: number;
  /** Count of tool calls that read from managed-auto-memory files. Pre-computed for badge rendering. */
  memoryReadCount?: number;
  isUserInitiated?: boolean;
};

@@ -429,6 +447,7 @@ export type HistoryItemWithoutId =
  | HistoryItemArenaSessionComplete
  | HistoryItemInsightProgress
  | HistoryItemBtw
  | HistoryItemMemorySaved
  | HistoryItemUserPromptSubmitBlocked
  | HistoryItemStopHookLoop
  | HistoryItemStopHookSystemMessage;

@@ -554,6 +573,8 @@ export interface ConsoleMessageItem {
export interface SubmitPromptResult {
  type: 'submit_prompt';
  content: PartListUnion;
  /** Optional callback invoked after the agent turn completes successfully. */
  onComplete?: () => Promise<void>;
}

/**
@@ -91,14 +91,14 @@ describe('commandUtils', () => {
  describe('isSlashCommand', () => {
    it('should return true when query starts with /', () => {
      expect(isSlashCommand('/help')).toBe(true);
      expect(isSlashCommand('/memory show')).toBe(true);
      expect(isSlashCommand('/config set')).toBe(true);
      expect(isSlashCommand('/clear')).toBe(true);
      expect(isSlashCommand('/')).toBe(true);
    });

    it('should return false when query does not start with /', () => {
      expect(isSlashCommand('help')).toBe(false);
      expect(isSlashCommand('memory show')).toBe(false);
      expect(isSlashCommand('config set')).toBe(false);
      expect(isSlashCommand('')).toBe(false);
      expect(isSlashCommand('path/to/file')).toBe(false);
      expect(isSlashCommand(' /help')).toBe(false);
@@ -23,20 +23,20 @@ const mockCommands: readonly SlashCommand[] = [
    kind: CommandKind.FILE,
  },
  {
    name: 'memory',
    description: 'Manage memory',
    altNames: ['mem'],
    name: 'config',
    description: 'Manage configuration',
    altNames: ['cfg'],
    subCommands: [
      {
        name: 'add',
        description: 'Add to memory',
        name: 'set',
        description: 'Set configuration',
        action: async () => {},
        kind: CommandKind.BUILT_IN,
      },
      {
        name: 'clear',
        description: 'Clear memory',
        altNames: ['c'],
        name: 'reset',
        description: 'Reset configuration',
        altNames: ['r'],
        action: async () => {},
        kind: CommandKind.BUILT_IN,
      },

@@ -64,34 +64,34 @@ describe('parseSlashCommand', () => {
  });

  it('should parse a subcommand', () => {
    const result = parseSlashCommand('/memory add', mockCommands);
    expect(result.commandToExecute?.name).toBe('add');
    const result = parseSlashCommand('/config set', mockCommands);
    expect(result.commandToExecute?.name).toBe('set');
    expect(result.args).toBe('');
    expect(result.canonicalPath).toEqual(['memory', 'add']);
    expect(result.canonicalPath).toEqual(['config', 'set']);
  });

  it('should parse a subcommand with arguments', () => {
    const result = parseSlashCommand(
      '/memory add some important data',
      '/config set theme dark',
      mockCommands,
    );
    expect(result.commandToExecute?.name).toBe('add');
    expect(result.args).toBe('some important data');
    expect(result.canonicalPath).toEqual(['memory', 'add']);
    expect(result.commandToExecute?.name).toBe('set');
    expect(result.args).toBe('theme dark');
    expect(result.canonicalPath).toEqual(['config', 'set']);
  });

  it('should handle a command alias', () => {
    const result = parseSlashCommand('/mem add some data', mockCommands);
    expect(result.commandToExecute?.name).toBe('add');
    expect(result.args).toBe('some data');
    expect(result.canonicalPath).toEqual(['memory', 'add']);
    const result = parseSlashCommand('/cfg set theme dark', mockCommands);
    expect(result.commandToExecute?.name).toBe('set');
    expect(result.args).toBe('theme dark');
    expect(result.canonicalPath).toEqual(['config', 'set']);
  });

  it('should handle a subcommand alias', () => {
    const result = parseSlashCommand('/memory c', mockCommands);
    expect(result.commandToExecute?.name).toBe('clear');
    const result = parseSlashCommand('/config r', mockCommands);
    expect(result.commandToExecute?.name).toBe('reset');
    expect(result.args).toBe('');
    expect(result.canonicalPath).toEqual(['memory', 'clear']);
    expect(result.canonicalPath).toEqual(['config', 'reset']);
  });

  it('should return undefined for an unknown command', () => {

@@ -103,22 +103,22 @@ describe('parseSlashCommand', () => {

  it('should return the parent command if subcommand is unknown', () => {
    const result = parseSlashCommand(
      '/memory unknownsub some args',
      '/config unknownsub some args',
      mockCommands,
    );
    expect(result.commandToExecute?.name).toBe('memory');
    expect(result.commandToExecute?.name).toBe('config');
    expect(result.args).toBe('unknownsub some args');
    expect(result.canonicalPath).toEqual(['memory']);
    expect(result.canonicalPath).toEqual(['config']);
  });

  it('should handle extra whitespace', () => {
    const result = parseSlashCommand(
      ' /memory add some data ',
      ' /config set theme dark ',
      mockCommands,
    );
    expect(result.commandToExecute?.name).toBe('add');
    expect(result.args).toBe('some data');
    expect(result.canonicalPath).toEqual(['memory', 'add']);
    expect(result.commandToExecute?.name).toBe('set');
    expect(result.args).toBe('theme dark');
    expect(result.canonicalPath).toEqual(['config', 'set']);
  });

  it('should return undefined if query does not start with a slash', () => {
@@ -16,7 +16,7 @@ export type ParsedSlashCommand = {
 * Parses a raw slash command string into its command, arguments, and canonical path.
 * If no valid command is found, the `commandToExecute` property will be `undefined`.
 *
 * @param query The raw input string, e.g., "/memory add some data" or "/help".
 * @param query The raw input string, e.g., "/config set theme dark" or "/help".
 * @param commands The list of available top-level slash commands.
 * @returns An object containing the resolved command, its arguments, and its canonical path.
 */
@@ -27,6 +27,37 @@ export function isInForkChild(messages: Content[]): boolean {
export const FORK_PLACEHOLDER_RESULT =
  'Fork started — processing in background';

/**
 * Build functionResponse parts for every open function call in a model message.
 *
 * Shared by the fork subagent (agent.ts) and background agent history
 * construction (e.g. extractionAgentPlanner.ts) to close open tool calls
 * before injecting history into a new agent session.
 *
 * @param assistantMessage - The model message that may contain functionCall parts.
 * @param placeholderOutput - The placeholder string to use as each response's output.
 */
export function buildFunctionResponseParts(
  assistantMessage: Content,
  placeholderOutput: string,
): Array<{
  functionResponse: {
    id: string | undefined;
    name: string | undefined;
    response: { output: string };
  };
}> {
  return (
    assistantMessage.parts?.filter((part) => part.functionCall) ?? []
  ).map((part) => ({
    functionResponse: {
      id: part.functionCall!.id,
      name: part.functionCall!.name,
      response: { output: placeholderOutput },
    },
  }));
}

/**
 * Build extra history messages for a forked subagent.
 *

@@ -65,13 +96,10 @@ export function buildForkedMessages(
  // Build tool_result blocks for every tool_use, all with identical placeholder text.
  // Include the directive text in the same user message to maintain
  // proper user/model alternation.
  const toolResultParts = toolUseParts.map((part) => ({
    functionResponse: {
      id: part.functionCall!.id,
      name: part.functionCall!.name,
      response: { output: FORK_PLACEHOLDER_RESULT },
    },
  }));
  const toolResultParts = buildFunctionResponseParts(
    assistantMessage,
    FORK_PLACEHOLDER_RESULT,
  );

  const toolResultMessage: Content = {
    role: 'user',
@@ -10,7 +10,7 @@ import type { ConfigParameters, SandboxConfig } from './config.js';
 import { Config, ApprovalMode } from './config.js';
 import * as fs from 'node:fs';
 import * as path from 'node:path';
-import { setGeminiMdFilename as mockSetGeminiMdFilename } from '../tools/memoryTool.js';
+import { setGeminiMdFilename as mockSetGeminiMdFilename } from '../memory/const.js';
 import {
   DEFAULT_TELEMETRY_TARGET,
   DEFAULT_OTLP_ENDPOINT,
@@ -39,6 +39,8 @@ import { RipgrepFallbackEvent } from '../telemetry/types.js';
 import { ToolRegistry } from '../tools/tool-registry.js';
 import { fireNotificationHook } from '../core/toolHookTriggers.js';
 import type { MessageBus } from '../confirmation-bus/message-bus.js';
+import { loadServerHierarchicalMemory } from '../utils/memoryDiscovery.js';
+import { readAutoMemoryIndex } from '../memory/store.js';
 
 function createToolMock(toolName: string) {
   const ToolMock = vi.fn();
@@ -86,6 +88,10 @@ vi.mock('../utils/memoryDiscovery.js', () => ({
     .mockResolvedValue({ memoryContent: '', fileCount: 0 }),
 }));
 
+vi.mock('../memory/store.js', () => ({
+  readAutoMemoryIndex: vi.fn().mockResolvedValue(null),
+}));
+
 // Mock individual tools if their constructors are complex or have side effects
 vi.mock('../tools/ls', () => ({
   LSTool: createToolMock('list_directory'),
@@ -120,8 +126,7 @@ vi.mock('../tools/web-fetch', () => ({
 vi.mock('../tools/read-many-files', () => ({
   ReadManyFilesTool: createToolMock('read_many_files'),
 }));
-vi.mock('../tools/memoryTool', () => ({
-  MemoryTool: createToolMock('save_memory'),
+vi.mock('../memory/const.js', () => ({
   setGeminiMdFilename: vi.fn(),
   getCurrentGeminiMdFilename: vi.fn(() => 'QWEN.md'), // Mock the original filename
   getAllGeminiMdFilenames: vi.fn(() => ['QWEN.md', 'AGENTS.md']),
@@ -562,6 +567,40 @@ describe('Server Config (config.ts)', () => {
     expect(config.getUserMemory()).toBe('');
   });
 
+  it('refreshHierarchicalMemory should append managed auto-memory index when present', async () => {
+    const config = new Config(baseParams);
+
+    vi.mocked(loadServerHierarchicalMemory).mockResolvedValue({
+      memoryContent: '--- Context from: QWEN.md ---\nProject rules',
+      fileCount: 1,
+    });
+    vi.mocked(readAutoMemoryIndex).mockResolvedValue(
+      '# Managed Auto-Memory Index\n\n- [Project Memory](project.md)',
+    );
+
+    await config.refreshHierarchicalMemory();
+
+    expect(config.getUserMemory()).toContain('Project rules');
+    expect(config.getUserMemory()).toContain('# auto memory');
+    expect(config.getUserMemory()).toContain('[Project Memory](project.md)');
+  });
+
+  it('refreshHierarchicalMemory should include empty memory prompt when no managed auto-memory index exists', async () => {
+    const config = new Config(baseParams);
+
+    vi.mocked(loadServerHierarchicalMemory).mockResolvedValue({
+      memoryContent: '--- Context from: QWEN.md ---\nProject rules',
+      fileCount: 1,
+    });
+    vi.mocked(readAutoMemoryIndex).mockResolvedValue(null);
+
+    await config.refreshHierarchicalMemory();
+
+    expect(config.getUserMemory()).toContain('Project rules');
+    expect(config.getUserMemory()).toContain('# auto memory');
+    expect(config.getUserMemory()).toContain('MEMORY.md is currently empty');
+  });
+
   it('Config constructor should call setGeminiMdFilename with contextFileName if provided', () => {
     const contextFileName = 'CUSTOM_AGENTS.md';
     const paramsWithContextFile: ConfigParameters = {
@@ -52,7 +52,7 @@ import { GlobTool } from '../tools/glob.js';
 import { GrepTool } from '../tools/grep.js';
 import { LSTool } from '../tools/ls.js';
 import type { SendSdkMcpMessage } from '../tools/mcp-client.js';
-import { MemoryTool, setGeminiMdFilename } from '../tools/memoryTool.js';
+import { setGeminiMdFilename } from '../memory/const.js';
 import { ReadFileTool } from '../tools/read-file.js';
 import { canUseRipgrep } from '../utils/ripgrepUtils.js';
 import { RipGrepTool } from '../tools/ripGrep.js';
@@ -137,6 +137,9 @@ import {
   setDebugLogSession,
   type DebugLogger,
 } from '../utils/debugLogger.js';
+import { getAutoMemoryRoot } from '../memory/paths.js';
+import { readAutoMemoryIndex } from '../memory/store.js';
+import { MemoryManager } from '../memory/manager.js';
 
 import {
   ModelsConfig,
@@ -442,6 +445,17 @@ export interface ConfigParameters {
   modelProvidersConfig?: ModelProvidersConfig;
   /** Multi-agent collaboration settings (Arena, Team, Swarm) */
   agents?: AgentsCollabSettings;
+  /** Enable managed auto-memory background extraction and dream. Defaults to true. */
+  enableManagedAutoMemory?: boolean;
+  /** Enable managed auto-dream consolidation separately from extraction. Defaults to false. */
+  enableManagedAutoDream?: boolean;
+  /**
+   * Lightweight model for background tasks (memory extraction, dream, /btw side questions).
+   * When set and valid for the current auth type, forked agents use this model instead of
+   * the main session model, reducing latency and cost.
+   * Corresponds to the `fastModel` setting (configurable via `/model --fast`).
+   */
+  fastModel?: string;
   /**
    * Disable all hooks (default: false, hooks enabled).
    * Migration note: This replaces the deprecated hooksConfig.enabled setting.
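The defaulting of the new knobs can be shown with a small sketch. `ManagedMemoryParams` below is an illustrative subset of `ConfigParameters` (not the real type), and `'qwen-flash'` is a hypothetical model id; the `??` defaults match the constructor in this diff (auto-memory on, auto-dream off).

```typescript
// Illustrative subset of ConfigParameters covering only the new fields.
interface ManagedMemoryParams {
  enableManagedAutoMemory?: boolean; // defaults to true
  enableManagedAutoDream?: boolean;  // defaults to false
  fastModel?: string;                // optional cheap model for background agents
}

// A caller opting into extraction but deferring dream consolidation,
// and routing background agents to a cheaper model.
const params: ManagedMemoryParams = {
  enableManagedAutoDream: false,
  fastModel: 'qwen-flash', // hypothetical model id
};

// Mirrors the constructor's defaulting.
const autoMemory = params.enableManagedAutoMemory ?? true;
const autoDream = params.enableManagedAutoDream ?? false;
```

Keeping the two flags separate lets extraction run every turn while the heavier dream consolidation stays opt-in.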
@@ -638,6 +652,9 @@ export class Config {
   private readonly eventEmitter?: EventEmitter;
   private readonly channel: string | undefined;
   private readonly defaultFileEncoding: FileEncodingType | undefined;
+  private readonly enableManagedAutoMemory: boolean;
+  private readonly enableManagedAutoDream: boolean;
+  private fastModel?: string;
   private readonly disableAllHooks: boolean;
   /** User-level hooks (always loaded regardless of trust) */
   private readonly userHooks?: Record<string, unknown>;
@@ -647,6 +664,7 @@ export class Config {
   private readonly hooks?: Record<string, unknown>;
   private hookSystem?: HookSystem;
   private messageBus?: MessageBus;
+  private readonly memoryManager: MemoryManager;
 
   constructor(params: ConfigParameters) {
     this.sessionId = params.sessionId ?? randomUUID();
@@ -819,12 +837,16 @@ export class Config {
       enabledExtensionOverrides: this.overrideExtensions,
       isWorkspaceTrusted: this.isTrustedFolder(),
     });
+    this.enableManagedAutoMemory = params.enableManagedAutoMemory ?? true;
+    this.enableManagedAutoDream = params.enableManagedAutoDream ?? false;
+    this.fastModel = params.fastModel || undefined;
     this.disableAllHooks = params.disableAllHooks ?? false;
     // Store user and project hooks separately for proper source attribution
     this.userHooks = params.userHooks;
     this.projectHooks = params.projectHooks;
     // Legacy: fall back to merged hooks if new fields are not provided
     this.hooks = params.hooks;
+    this.memoryManager = new MemoryManager();
   }
 
   /**
@@ -1062,7 +1084,20 @@ export class Config {
       this.isTrustedFolder(),
       this.getImportFormat(),
     );
-    this.setUserMemory(memoryContent);
+    if (this.getManagedAutoMemoryEnabled()) {
+      const managedAutoMemoryIndex = await readAutoMemoryIndex(
+        this.getProjectRoot(),
+      );
+      this.setUserMemory(
+        this.memoryManager.appendToUserMemory(
+          memoryContent,
+          getAutoMemoryRoot(this.getProjectRoot()),
+          managedAutoMemoryIndex,
+        ),
+      );
+    } else {
+      this.setUserMemory(memoryContent);
+    }
    this.setGeminiMdFileCount(fileCount);
  }
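The append step above has an observable contract pinned by the config tests earlier in this diff: the hierarchical memory survives, a `# auto memory` section is added, and it carries either the managed index or an empty-state note mentioning `MEMORY.md is currently empty`. This sketch implements only that contract; the function name matches `MemoryManager.appendToUserMemory`, but the exact prompt wording beyond the asserted markers is an assumption.

```typescript
// Sketch of appendToUserMemory's contract: compose hierarchical memory with
// a "# auto memory" section rooted at the managed directory. Wording beyond
// the markers asserted in the tests is assumed.
function appendToUserMemory(
  memoryContent: string,
  autoMemoryRoot: string,
  index: string | null,
): string {
  const body =
    index ?? 'MEMORY.md is currently empty; nothing has been extracted yet.';
  return [
    memoryContent,
    '# auto memory',
    `Managed memory root: ${autoMemoryRoot}`,
    body,
  ].join('\n\n');
}

const withIndex = appendToUserMemory(
  '--- Context from: QWEN.md ---\nProject rules',
  '/repo/.qwen/memory', // illustrative root path
  '# Managed Auto-Memory Index\n\n- [Project Memory](project.md)',
);
const withoutIndex = appendToUserMemory(
  '--- Context from: QWEN.md ---\nProject rules',
  '/repo/.qwen/memory',
  null,
);
```

Appending rather than replacing is what lets `refreshHierarchicalMemory` stay a single write path whether or not managed auto-memory is enabled.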
@@ -1249,6 +1284,29 @@ export class Config {
     return this.contentGeneratorConfig?.model || this.modelsConfig.getModel();
   }
 
+  /**
+   * Returns the fast model if one is configured and valid for the current auth type,
+   * otherwise returns undefined. Background agents (memory extraction, dream, /btw)
+   * use this as a cheaper alternative to the main session model.
+   */
+  getFastModel(): string | undefined {
+    if (!this.fastModel) return undefined;
+    const authType = this.contentGeneratorConfig?.authType;
+    if (!authType) return undefined;
+    const available = this.getAvailableModelsForAuthType(authType);
+    return available.some((m) => m.id === this.fastModel)
+      ? this.fastModel
+      : undefined;
+  }
+
+  /**
+   * Update the fast model at runtime (e.g., when the user runs `/model --fast <model>`).
+   * Pass undefined or an empty string to clear the fast model override.
+   */
+  setFastModel(model: string | undefined): void {
+    this.fastModel = model || undefined;
+  }
+
   /**
    * Set model programmatically (e.g., VLM auto-switch, fallback).
    * Delegates to ModelsConfig.
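The gate in `getFastModel` can be exercised standalone. `resolveFastModel` below is a hypothetical free-function restatement of the same logic (the real method reads fields on `Config`), and the model ids and auth-type names in the catalog are illustrative only.

```typescript
// Hypothetical restatement of the getFastModel gate: the override applies
// only when the configured id is actually offered for the active auth type.
interface ModelInfo {
  id: string;
}

function resolveFastModel(
  fastModel: string | undefined,
  authType: string | undefined,
  availableByAuth: Record<string, ModelInfo[]>,
): string | undefined {
  if (!fastModel || !authType) return undefined;
  const available = availableByAuth[authType] ?? [];
  return available.some((m) => m.id === fastModel) ? fastModel : undefined;
}

// Illustrative catalog; ids and auth types are assumptions.
const catalog: Record<string, ModelInfo[]> = {
  oauth: [{ id: 'qwen-max' }, { id: 'qwen-flash' }],
  apiKey: [{ id: 'qwen-max' }],
};

const picked = resolveFastModel('qwen-flash', 'oauth', catalog);
const dropped = resolveFastModel('qwen-flash', 'apiKey', catalog);
```

Validating against the auth type's catalog means a stale `fastModel` setting silently falls back to the main session model instead of failing background tasks.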
@@ -1927,6 +1985,24 @@ export class Config {
     return this.disableAllHooks;
   }
 
+  getManagedAutoMemoryEnabled(): boolean {
+    return this.enableManagedAutoMemory;
+  }
+
+  getManagedAutoDreamEnabled(): boolean {
+    return this.enableManagedAutoDream;
+  }
+
+  /**
+   * Return the MemoryManager instance created for this Config.
+   * Use this to share background-task state (registry, drainer) with memory
+   * module runtimes (extract, dream) instead of relying on module-level
+   * globals.
+   */
+  getMemoryManager(): MemoryManager {
+    return this.memoryManager;
+  }
+
   /**
    * Get the message bus instance.
    * Returns undefined if not set.
@@ -2334,7 +2410,6 @@ export class Config {
     await registerCoreTool(EditTool, this);
     await registerCoreTool(WriteFileTool, this);
     await registerCoreTool(ShellTool, this);
-    await registerCoreTool(MemoryTool);
     await registerCoreTool(TodoWriteTool, this);
     await registerCoreTool(AskUserQuestionTool, this);
     !this.sdkMode && (await registerCoreTool(ExitPlanModeTool, this));
@@ -125,7 +125,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -360,7 +359,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -605,7 +603,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -835,7 +832,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -1065,7 +1061,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -1295,7 +1290,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -1525,7 +1519,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -1755,7 +1748,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -1985,7 +1977,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@@ -2215,7 +2206,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
 - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
 - **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
 - **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
 - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
 
 ## Interaction Details
@ -2468,7 +2458,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
|
|||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
|
||||
|
||||
## Interaction Details
|
||||
|
|
@ -2784,7 +2773,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
|
|||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
|
||||
|
||||
## Interaction Details
|
||||
|
|
@ -3037,7 +3025,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
|
|||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
|
||||
|
||||
## Interaction Details
|
||||
|
|
@ -3349,7 +3336,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
|
|||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
|
||||
|
||||
## Interaction Details
|
||||
|
|
@ -3579,7 +3565,6 @@ IMPORTANT: Always use the todo_write tool to plan and track tasks throughout the
|
|||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the 'todo_write' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the 'agent' tool in order to reduce context usage. You should proactively use the 'agent' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
|
||||
|
||||
## Interaction Details
|
||||
|
|
|
|||
|
|
@@ -268,10 +268,31 @@ describe('Gemini Client (client.ts)', () => {
  let mockConfig: Config;
  let client: GeminiClient;
  let mockGenerateContentFn: Mock;
  let mockMemoryManager: {
    scheduleExtract: ReturnType<typeof vi.fn>;
    scheduleDream: ReturnType<typeof vi.fn>;
    recall: ReturnType<typeof vi.fn>;
  };
  beforeEach(async () => {
    vi.resetAllMocks();
    vi.mocked(uiTelemetryService.setLastPromptTokenCount).mockClear();

    mockMemoryManager = {
      scheduleExtract: vi.fn().mockResolvedValue({
        touchedTopics: [],
        cursor: { updatedAt: new Date(0).toISOString() },
      }),
      scheduleDream: vi.fn().mockResolvedValue({
        status: 'skipped',
        skippedReason: 'min_sessions',
      }),
      recall: vi.fn().mockResolvedValue({
        prompt: '',
        selectedDocs: [],
        strategy: 'none',
      }),
    };

    mockGenerateContentFn = vi.fn().mockResolvedValue({
      candidates: [{ content: { parts: [{ text: '{"key": "value"}' }] } }],
    });

@@ -365,6 +386,8 @@ describe('Gemini Client (client.ts)', () => {
      getChatRecordingService: vi.fn().mockReturnValue(undefined),
      getResumedSessionData: vi.fn().mockReturnValue(undefined),
      getArenaAgentClient: vi.fn().mockReturnValue(null),
      getManagedAutoMemoryEnabled: vi.fn().mockReturnValue(true),
      getMemoryManager: vi.fn().mockReturnValue(mockMemoryManager),
      getDisableAllHooks: vi.fn().mockReturnValue(true),
      getArenaManager: vi.fn().mockReturnValue(null),
      getMessageBus: vi.fn().mockReturnValue(undefined),

@@ -1415,6 +1438,182 @@ hello
    });
  });

  it('should prepend relevant managed auto-memory prompt when recall returns content', async () => {
    mockMemoryManager.recall.mockResolvedValue({
      prompt: '## Relevant memory\n\nUser prefers terse responses.',
      selectedDocs: [
        {
          type: 'user',
          filePath: '/test/project/root/.qwen/memory/user.md',
          relativePath: 'user.md',
          filename: 'user.md',
          title: 'User Memory',
          description: 'User preferences',
          body: '- User prefers terse responses.',
          mtimeMs: 1,
        },
      ],
      strategy: 'model',
    });

    const mockStream = (async function* () {
      yield { type: 'content', value: 'Hello' };
    })();
    mockTurnRunFn.mockReturnValue(mockStream);

    const mockChat: Partial<GeminiChat> = {
      addHistory: vi.fn(),
      getHistory: vi.fn().mockReturnValue([]),
      stripThoughtsFromHistory: vi.fn(),
    };
    client['chat'] = mockChat as GeminiChat;

    const stream = client.sendMessageStream(
      [{ text: 'Please answer tersely' }],
      new AbortController().signal,
      'prompt-id-memory',
    );
    for await (const _ of stream) {
      // consume stream
    }

    expect(mockMemoryManager.recall).toHaveBeenCalledWith(
      '/test/project/root',
      'Please answer tersely',
      expect.objectContaining({
        config: mockConfig,
        excludedFilePaths: expect.any(Set),
      }),
    );
    expect(mockTurnRunFn).toHaveBeenCalledWith(
      'test-model',
      expect.arrayContaining([
        '## Relevant memory\n\nUser prefers terse responses.',
        'Please answer tersely',
      ]),
      expect.any(AbortSignal),
    );
  });

  it('should track surfaced managed memory paths across user queries', async () => {
    mockMemoryManager.recall
      .mockResolvedValueOnce({
        prompt: '## Relevant memory\n\nUser prefers terse responses.',
        selectedDocs: [
          {
            type: 'user',
            filePath: '/test/project/root/.qwen/memory/user.md',
            relativePath: 'user.md',
            filename: 'user.md',
            title: 'User Memory',
            description: 'User preferences',
            body: '- User prefers terse responses.',
            mtimeMs: 1,
          },
        ],
        strategy: 'model',
      })
      .mockResolvedValueOnce({
        prompt: '',
        selectedDocs: [],
        strategy: 'none',
      });

    const mockStream = (async function* () {
      yield { type: 'content', value: 'Hello' };
    })();
    mockTurnRunFn.mockReturnValue(mockStream);

    const mockChat: Partial<GeminiChat> = {
      addHistory: vi.fn(),
      getHistory: vi.fn().mockReturnValue([]),
      stripThoughtsFromHistory: vi.fn(),
    };
    client['chat'] = mockChat as GeminiChat;

    const first = client.sendMessageStream(
      [{ text: 'Please answer tersely' }],
      new AbortController().signal,
      'prompt-id-memory-1',
    );
    for await (const _ of first) {
      // consume stream
    }

    const second = client.sendMessageStream(
      [{ text: 'Keep it short again' }],
      new AbortController().signal,
      'prompt-id-memory-2',
    );
    for await (const _ of second) {
      // consume stream
    }

    expect(mockMemoryManager.recall).toHaveBeenNthCalledWith(
      2,
      '/test/project/root',
      'Keep it short again',
      expect.objectContaining({
        excludedFilePaths: new Set([
          '/test/project/root/.qwen/memory/user.md',
        ]),
      }),
    );
  });

  it('should run managed auto-memory extraction after a completed user query', async () => {
    mockMemoryManager.scheduleExtract.mockResolvedValue({
      touchedTopics: ['user'],
      cursor: {
        sessionId: 'test-session-id',
        processedOffset: 2,
        updatedAt: new Date(0).toISOString(),
      },
      systemMessage: 'Managed auto-memory updated: user.md',
    });

    const mockStream = (async function* () {
      yield { type: GeminiEventType.Content, value: 'Done' };
    })();
    mockTurnRunFn.mockReturnValue(mockStream);

    const mockChat: Partial<GeminiChat> = {
      addHistory: vi.fn(),
      getHistory: vi.fn().mockReturnValue([
        { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
        { role: 'model', parts: [{ text: 'Done' }] },
      ]),
      stripThoughtsFromHistory: vi.fn(),
    };
    client['chat'] = mockChat as GeminiChat;

    const events = await fromAsync(
      client.sendMessageStream(
        [{ text: 'Please answer tersely' }],
        new AbortController().signal,
        'prompt-id-extract',
      ),
    );

    const recordedHistory = mockChat.getHistory?.();

    expect(mockMemoryManager.scheduleExtract).toHaveBeenCalledWith({
      projectRoot: '/test/project/root',
      sessionId: 'test-session-id',
      history: recordedHistory,
      config: mockConfig,
    });
    expect(mockMemoryManager.scheduleDream).toHaveBeenCalledWith({
      projectRoot: '/test/project/root',
      sessionId: 'test-session-id',
      config: mockConfig,
    });
    expect(events).not.toContainEqual({
      type: GeminiEventType.HookSystemMessage,
      value: 'Managed auto-memory updated: user.md',
    });
  });

  it('should add context if ideMode is enabled and there are open files but no active file', async () => {
    // Arrange
    vi.mocked(ideContextStore.get).mockReturnValue({

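The second test above pins down a de-duplication contract: once a memory document has been surfaced to the model, its path is passed back as an exclusion on the next recall, so the same memory is not injected twice in one session. The pattern can be sketched as follows; `MemoryRecaller` and `RecallFn` are illustrative names for this sketch, not the real `GeminiClient`/`MemoryManager` API.

```typescript
// Sketch of surfaced-path tracking: each recall excludes paths that were
// already surfaced, then records the newly surfaced ones for the next call.
type RecallFn = (query: string, excludedFilePaths: Set<string>) => string[];

class MemoryRecaller {
  private readonly surfacedPaths = new Set<string>();

  constructor(private readonly recallFn: RecallFn) {}

  recall(query: string): string[] {
    // Pass a copy so the underlying recall cannot mutate our tracking set.
    const docs = this.recallFn(query, new Set(this.surfacedPaths));
    for (const path of docs) {
      this.surfacedPaths.add(path);
    }
    return docs;
  }

  // Mirrors resetChat(): a fresh chat may surface everything again.
  reset(): void {
    this.surfacedPaths.clear();
  }
}

// A fake recall source with two documents; the filter mimics a manager
// honoring `excludedFilePaths`.
const allDocs = ['/root/.qwen/memory/user.md', '/root/.qwen/memory/project.md'];
const recaller = new MemoryRecaller((_query, excluded) =>
  allDocs.filter((p) => !excluded.has(p)),
);

const first = recaller.recall('be terse');
const second = recaller.recall('again');
```

On the first query both documents surface; on the second, both paths are excluded and nothing new is returned, which is exactly what the `toHaveBeenNthCalledWith(2, …)` assertion verifies against the real client.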
@@ -48,6 +48,7 @@ import { LoopDetectionService } from '../services/loopDetectionService.js';

// Tools
import { AgentTool } from '../tools/agent.js';
import type { RelevantAutoMemoryPromptResult } from '../memory/manager.js';

// Telemetry
import {

@@ -56,11 +57,11 @@ import {
} from '../telemetry/index.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';

// Forked query cache
// Forked agent cache
import {
  saveCacheSafeParams,
  clearCacheSafeParams,
} from '../followup/forkedQuery.js';
} from '../utils/forkedAgent.js';

// Utilities
import {

@@ -114,9 +115,16 @@ export interface SendMessageOptions {
  modelOverride?: string;
}

const EMPTY_RELEVANT_AUTO_MEMORY_RESULT: RelevantAutoMemoryPromptResult = {
  prompt: '',
  selectedDocs: [],
  strategy: 'none',
};

export class GeminiClient {
  private chat?: GeminiChat;
  private sessionTurnCount = 0;
  private readonly surfacedRelevantAutoMemoryPaths = new Set<string>();

  private readonly loopDetector: LoopDetectionService;
  private lastPromptId: string | undefined = undefined;

@@ -129,6 +137,13 @@ export class GeminiClient {
   */
  private hasFailedCompressionAttempt = false;

  /**
   * Promises for pending background memory tasks (dream / extract).
   * Each promise resolves with a count of memory files touched (0 = nothing written).
   * Consumed by the CLI via `consumePendingMemoryTaskPromises()`.
   */
  private pendingMemoryTaskPromises: Array<Promise<number>> = [];

  /**
   * Timestamp (epoch ms) of the last completed API call.
   * Used to detect idle periods for thinking block cleanup.

@@ -221,6 +236,7 @@ export class GeminiClient {
  }

  async resetChat(): Promise<void> {
    this.surfacedRelevantAutoMemoryPaths.clear();
    // Reset thinking clear latch — fresh chat, no prior thinking to clean up
    this.thinkingClearLatched = false;
    this.lastApiCompletionTimestamp = null;

@@ -489,6 +505,77 @@ export class GeminiClient {
    }
  }

  private runManagedAutoMemoryBackgroundTasks(
    messageType: SendMessageType,
  ): void {
    if (messageType !== SendMessageType.UserQuery) {
      return;
    }

    if (!this.config.getManagedAutoMemoryEnabled()) {
      return;
    }

    const projectRoot = this.config.getProjectRoot();
    const sessionId = this.config.getSessionId();
    const history = this.getHistory();
    const mgr = this.config.getMemoryManager();

    const extractPromise = mgr
      .scheduleExtract({
        projectRoot,
        sessionId,
        history,
        config: this.config,
      })
      .then((result) => result.touchedTopics.length)
      .catch((error: unknown) => {
        debugLogger.warn(
          'Failed to schedule managed auto-memory extraction.',
          error,
        );
        return 0;
      });
    this.pendingMemoryTaskPromises.push(extractPromise);

    const dreamPromise = mgr
      .scheduleDream({
        projectRoot,
        sessionId,
        config: this.config,
      })
      .then((schedResult) => {
        if (schedResult.status === 'scheduled' && schedResult.promise) {
          return schedResult.promise.then((state) => {
            const topics = state.metadata?.['touchedTopics'] as
              | string[]
              | undefined;
            return topics ? topics.length : 0;
          });
        }
        return 0;
      })
      .catch((error: unknown) => {
        debugLogger.warn(
          'Failed to schedule managed auto-memory dream.',
          error,
        );
        return 0;
      });
    this.pendingMemoryTaskPromises.push(dreamPromise);
  }

  /**
   * Returns and clears the list of pending background memory task promises.
   * Each promise resolves with the number of memory files touched (0 = nothing
   * was written, caller should ignore).
   */
  consumePendingMemoryTaskPromises(): Array<Promise<number>> {
    const promises = this.pendingMemoryTaskPromises;
    this.pendingMemoryTaskPromises = [];
    return promises;
  }

  async *sendMessageStream(
    request: PartListUnion,
    signal: AbortSignal,

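The consume-and-clear contract above is what lets the CLI attribute background memory writes to the turn that scheduled them: after each turn it drains the current batch of promises, sums the resolved counts, and only shows a notification when something was actually written. A minimal sketch of that consumer side, with `MemoryTaskTracker` and `reportMemoryUpdates` as illustrative stand-ins (the real CLI wiring lives in `useGeminiStream`):

```typescript
// Sketch of the consume-and-clear pattern for pending memory tasks.
class MemoryTaskTracker {
  private pending: Array<Promise<number>> = [];

  push(task: Promise<number>): void {
    this.pending.push(task);
  }

  // Hand the current batch to the caller and start a fresh list, so each
  // UI turn only reports tasks scheduled during that turn.
  consume(): Array<Promise<number>> {
    const batch = this.pending;
    this.pending = [];
    return batch;
  }
}

async function reportMemoryUpdates(
  tracker: MemoryTaskTracker,
): Promise<string | null> {
  const counts = await Promise.all(tracker.consume());
  const total = counts.reduce((a, b) => a + b, 0);
  // A resolved count of 0 means nothing was written; stay silent.
  return total > 0 ? `Updated ${total} memories` : null;
}

const tracker = new MemoryTaskTracker();
tracker.push(Promise.resolve(2)); // e.g. extraction touched two topics
tracker.push(Promise.resolve(0)); // e.g. dream was skipped
```

Because errors are mapped to `0` inside the client (see the `.catch` handlers above), the consumer never needs its own error handling and a failed background task simply produces no notification.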
@@ -497,6 +584,9 @@ export class GeminiClient {
    turns: number = MAX_TURNS,
  ): AsyncGenerator<ServerGeminiStreamEvent, Turn> {
    const messageType = options?.type ?? SendMessageType.UserQuery;
    let relevantAutoMemoryPromise:
      | Promise<RelevantAutoMemoryPromptResult>
      | undefined;

    if (messageType === SendMessageType.Retry) {
      this.stripOrphanedUserEntriesFromHistory();

@@ -559,6 +649,22 @@ export class GeminiClient {
    this.loopDetector.reset(prompt_id);
    this.lastPromptId = prompt_id;

    if (this.config.getManagedAutoMemoryEnabled()) {
      relevantAutoMemoryPromise = this.config
        .getMemoryManager()
        .recall(this.config.getProjectRoot(), partToString(request), {
          config: this.config,
          excludedFilePaths: this.surfacedRelevantAutoMemoryPaths,
        })
        .catch((error: unknown) => {
          debugLogger.warn(
            'Managed auto-memory recall prefetch failed.',
            error,
          );
          return EMPTY_RELEVANT_AUTO_MEMORY_RESULT;
        });
    }

    // record user message for session management
    this.config.getChatRecordingService()?.recordUserMessage(request);

@@ -700,6 +806,17 @@ export class GeminiClient {
      messageType === SendMessageType.Cron
    ) {
      const systemReminders = [];
      const relevantAutoMemory = relevantAutoMemoryPromise
        ? await relevantAutoMemoryPromise
        : EMPTY_RELEVANT_AUTO_MEMORY_RESULT;
      const relevantAutoMemoryPrompt = relevantAutoMemory.prompt;

      if (relevantAutoMemoryPrompt) {
        systemReminders.push(relevantAutoMemoryPrompt);
        for (const doc of relevantAutoMemory.selectedDocs) {
          this.surfacedRelevantAutoMemoryPaths.add(doc.filePath);
        }
      }

      // add subagent system reminder if there are subagents
      const hasAgentTool = this.config

@@ -880,7 +997,28 @@ export class GeminiClient {
    }

    if (!turn.pendingToolCalls.length && signal && !signal.aborted) {
      // Save cache-safe params here — before any early return — so that
      // background extract/dream agents calling getCacheSafeParams() always
      // see the current turn's history regardless of which path exits below.
      try {
        const chat = this.getChat();
        const fullHistory = chat.getHistory(true);
        const maxHistoryForCache = 40;
        const cachedHistory =
          fullHistory.length > maxHistoryForCache
            ? fullHistory.slice(-maxHistoryForCache)
            : fullHistory;
        saveCacheSafeParams(
          chat.getGenerationConfig(),
          cachedHistory,
          this.config.getModel(),
        );
      } catch {
        // Best-effort — don't block the main flow
      }

      if (this.config.getSkipNextSpeakerCheck()) {
        this.runManagedAutoMemoryBackgroundTasks(messageType);
        // Report completed before returning — agent has no more work to do
        if (arenaAgentClient) {
          await arenaAgentClient.reportCompleted();

@@ -913,7 +1051,11 @@ export class GeminiClient {
        options,
        boundedTurns - 1,
      );
    } else if (arenaAgentClient) {
    }

    this.runManagedAutoMemoryBackgroundTasks(messageType);

    if (arenaAgentClient) {
      // No continuation needed — agent completed its task
      await arenaAgentClient.reportCompleted();
    }

@@ -924,27 +1066,6 @@ export class GeminiClient {
      await arenaAgentClient.reportCancelled();
    }

    // Save cache-safe params on successful completion (non-abort) for forked queries
    if (!signal?.aborted && this.isInitialized()) {
      try {
        const chat = this.getChat();
        // Clone history then truncate to last 40 entries to avoid full-session deep copy overhead
        const fullHistory = chat.getHistory(true);
        const maxHistoryForCache = 40;
        const cachedHistory =
          fullHistory.length > maxHistoryForCache
            ? fullHistory.slice(-maxHistoryForCache)
            : fullHistory;
        saveCacheSafeParams(
          chat.getGenerationConfig(),
          cachedHistory,
          this.config.getModel(),
        );
      } catch {
        // Best-effort — don't block the main flow
      }
    }

    return turn;
  }

@@ -16,7 +16,7 @@ import { isGitRepository } from '../utils/gitUtils.js';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import { QWEN_CONFIG_DIR } from '../tools/memoryTool.js';
import { QWEN_CONFIG_DIR } from '../memory/const.js';

// Mock tool names if they are dynamically generated or complex
vi.mock('../tools/ls', () => ({ LSTool: { Name: 'list_directory' } }));

@@ -10,7 +10,7 @@ import os from 'node:os';
import { ToolNames } from '../tools/tool-names.js';
import process from 'node:process';
import { isGitRepository } from '../utils/gitUtils.js';
import { QWEN_CONFIG_DIR } from '../tools/memoryTool.js';
import { QWEN_CONFIG_DIR } from '../memory/const.js';
import type { GenerateContentConfig } from '@google/genai';
import { createDebugLogger } from '../utils/debugLogger.js';

@@ -267,7 +266,6 @@ IMPORTANT: Always use the ${ToolNames.TODO_WRITE} tool to plan and track tasks t
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
- **Task Management:** Use the '${ToolNames.TODO_WRITE}' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
- **Subagent Delegation:** When doing file search, prefer to use the '${ToolNames.AGENT}' tool in order to reduce context usage. You should proactively use the '${ToolNames.AGENT}' tool with specialized agents when the task at hand matches the agent's description.
- **Remembering Facts:** Use the '${ToolNames.MEMORY}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.

## Interaction Details

@@ -1,267 +0,0 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 *
 * Forked Query Infrastructure
 *
 * Enables cache-aware secondary LLM calls that share the main conversation's
 * prompt prefix (systemInstruction + history) for cache hits.
 *
 * DashScope already enables cache_control via X-DashScope-CacheControl header.
 * By constructing the forked GeminiChat with identical generationConfig and
 * history prefix, the fork automatically benefits from prefix caching.
 *
 * Note: `runForkedQuery` overrides `tools: []` at the per-request level so the
 * model cannot produce function calls. `createForkedChat` retains the full
 * generationConfig (including tools) for callers like speculation that need them.
 */

import type {
  Content,
  GenerateContentConfig,
  GenerateContentResponseUsageMetadata,
} from '@google/genai';
import { GeminiChat, StreamEventType } from '../core/geminiChat.js';
import type { Config } from '../config/config.js';

/** Per-request config that strips tools so the model never produces function calls. */
const NO_TOOLS = Object.freeze({ tools: [] as const }) as Pick<
  GenerateContentConfig,
  'tools'
>;

/**
 * Snapshot of the main conversation's cache-critical parameters.
 * Captured after each successful main turn so forked queries share the same prefix.
 */
export interface CacheSafeParams {
  /** Full generation config including systemInstruction and tools */
  generationConfig: GenerateContentConfig;
  /** Curated conversation history (deep clone) */
  history: Content[];
  /** Model identifier */
  model: string;
  /** Version number — increments when systemInstruction or tools change */
  version: number;
}

/**
 * Result from a forked query.
 */
export interface ForkedQueryResult {
  /** Extracted text response, or null if no text */
  text: string | null;
  /** Parsed JSON result if schema was provided */
  jsonResult?: Record<string, unknown>;
  /** Token usage metrics */
  usage: {
    inputTokens: number;
    outputTokens: number;
    cacheHitTokens: number;
  };
}

// ---------------------------------------------------------------------------
// Global cache params slot
// ---------------------------------------------------------------------------

let currentCacheSafeParams: CacheSafeParams | null = null;
let currentVersion = 0;

/**
 * Save cache-safe params after a successful main conversation turn.
 * Called from GeminiClient.sendMessageStream() on successful completion.
 */
export function saveCacheSafeParams(
  generationConfig: GenerateContentConfig,
  history: Content[],
  model: string,
): void {
  // Detect if systemInstruction or tools changed
  const prevConfig = currentCacheSafeParams?.generationConfig;
  const sysChanged =
    !prevConfig ||
    JSON.stringify(prevConfig.systemInstruction) !==
      JSON.stringify(generationConfig.systemInstruction);
  const toolsChanged =
    !prevConfig ||
    JSON.stringify(prevConfig.tools) !== JSON.stringify(generationConfig.tools);

  if (sysChanged || toolsChanged) {
    currentVersion++;
  }

  currentCacheSafeParams = {
    generationConfig: structuredClone(generationConfig),
    history, // caller passes structuredClone'd curated history (from getHistory(true))
    model,
    version: currentVersion,
  };
}

/**
 * Get the current cache-safe params, or null if not yet captured.
 */
export function getCacheSafeParams(): CacheSafeParams | null {
  return currentCacheSafeParams
    ? structuredClone(currentCacheSafeParams)
    : null;
}

/**
 * Clear cache-safe params (e.g., on session reset).
 */
export function clearCacheSafeParams(): void {
  currentCacheSafeParams = null;
}

// ---------------------------------------------------------------------------
// Forked chat creation
// ---------------------------------------------------------------------------

/**
 * Create an isolated GeminiChat that shares the main conversation's
 * generationConfig (including systemInstruction, tools, and history).
 *
 * The full config is retained so that callers like `runSpeculativeLoop`
 * can execute tool calls during speculation. For pure-text callers like
 * `runForkedQuery`, tools are stripped at the per-request level via
 * `NO_TOOLS` — see {@link runForkedQuery}.
 *
 * The fork does NOT have chatRecordingService or telemetryService to avoid
 * polluting the main session's recordings and token counts.
 */
export function createForkedChat(
  config: Config,
  params: CacheSafeParams,
): GeminiChat {
  // Limit history to avoid excessive cost
  const maxHistoryEntries = 40;
  const history =
    params.history.length > maxHistoryEntries
      ? params.history.slice(-maxHistoryEntries)
      : params.history;

  // params.generationConfig and params.history are already deep-cloned snapshots
  // from saveCacheSafeParams (which clones generationConfig) and getHistory(true)
  // (which structuredClones the history). Slice creates a new array but shares
  // Content references — GeminiChat only reads history, never mutates entries,
  // so sharing is safe and avoids a redundant deep clone.
  return new GeminiChat(
    config,
    {
      ...params.generationConfig,
      // Disable thinking for forked queries — suggestions/speculation don't need
      // reasoning tokens and it wastes cost + latency on the fast model path.
      // This doesn't affect cache prefix (system + tools + history).
      thinkingConfig: { includeThoughts: false },
    },
    [...history], // shallow copy — entries are read-only
    undefined, // no chatRecordingService
    undefined, // no telemetryService
  );
}

// ---------------------------------------------------------------------------
// Forked query execution
// ---------------------------------------------------------------------------

function extractUsage(
  metadata?: GenerateContentResponseUsageMetadata,
): ForkedQueryResult['usage'] {
  return {
    inputTokens: metadata?.promptTokenCount ?? 0,
    outputTokens: metadata?.candidatesTokenCount ?? 0,
    cacheHitTokens: metadata?.cachedContentTokenCount ?? 0,
  };
}

/**
 * Run a forked query using a GeminiChat that shares the main conversation's
 * cache prefix. This is a single-turn, tool-free request (no function calls).
 *
 * @param config - App config
 * @param userMessage - The user message to send (e.g., SUGGESTION_PROMPT)
 * @param options - Optional configuration
 * @returns Query result with text, optional JSON, and usage metrics
 */
export async function runForkedQuery(
  config: Config,
  userMessage: string,
  options?: {
    abortSignal?: AbortSignal;
    /** JSON schema for structured output */
    jsonSchema?: Record<string, unknown>;
    /** Override model (e.g., for speculation with a cheaper model) */
    model?: string;
  },
): Promise<ForkedQueryResult> {
  const params = getCacheSafeParams();
  if (!params) {
    throw new Error('CacheSafeParams not available');
  }

  const model = options?.model ?? params.model;
  const chat = createForkedChat(config, params);

  // Build per-request config overrides.
  // NO_TOOLS prevents the model from producing function calls — forked
  // queries are pure text completion and must not appear in tool-call UI.
  const requestConfig: GenerateContentConfig = { ...NO_TOOLS };
  if (options?.abortSignal) {
    requestConfig.abortSignal = options.abortSignal;
  }
  if (options?.jsonSchema) {
    requestConfig.responseMimeType = 'application/json';
    requestConfig.responseJsonSchema = options.jsonSchema;
  }

  const stream = await chat.sendMessageStream(
    model,
    {
      message: [{ text: userMessage }],
      config: requestConfig,
    },
    'forked_query',
  );

  // Collect the full response
  let fullText = '';
  let usage: ForkedQueryResult['usage'] = {
    inputTokens: 0,
    outputTokens: 0,
    cacheHitTokens: 0,
  };

  for await (const event of stream) {
    if (event.type !== StreamEventType.CHUNK) continue;
    const response = event.value;
    // Extract text from candidates, skipping thought/reasoning parts.
    // Some providers may return thinking content even with enable_thinking: false.
    const text = response.candidates?.[0]?.content?.parts
      ?.filter((p) => !(p as Record<string, unknown>)['thought'])
      .map((p) => p.text ?? '')
      .join('');
    if (text) {
      fullText += text;
    }
    if (response.usageMetadata) {
      usage = extractUsage(response.usageMetadata);
    }
  }

  const trimmed = fullText.trim() || null;

  // Parse JSON if schema was provided
  let jsonResult: Record<string, unknown> | undefined;
  if (options?.jsonSchema && trimmed) {
    try {
      jsonResult = JSON.parse(trimmed) as Record<string, unknown>;
    } catch {
      // Model returned non-JSON despite schema constraint — treat as text
    }
  }

  return { text: trimmed, jsonResult, usage };
}
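The deleted `saveCacheSafeParams` above only bumps the snapshot version when a cache-critical field (systemInstruction or tools) changes, comparing by serialization. A minimal standalone sketch of that comparison, with hypothetical helper names (`cacheCriticalChanged`, `bumpIfChanged`) that are not part of the codebase:

```typescript
// Sketch of the change detection in saveCacheSafeParams: the cache prefix
// is considered invalidated whenever systemInstruction or tools differ
// from the previous snapshot (compared via JSON serialization).
interface Snapshot {
  systemInstruction?: unknown;
  tools?: unknown;
}

function cacheCriticalChanged(prev: Snapshot | null, next: Snapshot): boolean {
  if (!prev) return true; // first capture always counts as a change
  return (
    JSON.stringify(prev.systemInstruction) !==
      JSON.stringify(next.systemInstruction) ||
    JSON.stringify(prev.tools) !== JSON.stringify(next.tools)
  );
}

let version = 0;
function bumpIfChanged(prev: Snapshot | null, next: Snapshot): number {
  if (cacheCriticalChanged(prev, next)) version++;
  return version;
}
```

Serialization comparison is cheap here because both fields are small relative to the history, which deliberately does not participate in the version check.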
@@ -10,7 +10,6 @@
 export * from './followupState.js';
 export * from './suggestionGenerator.js';
-export * from './forkedQuery.js';
 export * from './overlayFs.js';
 export * from './speculationToolGate.js';
 export * from './speculation.js';

@@ -17,7 +17,7 @@ import {
   saveCacheSafeParams,
   getCacheSafeParams,
   clearCacheSafeParams,
-} from './forkedQuery.js';
+} from '../utils/forkedAgent.js';
 import { ensureToolResultPairing } from './speculation.js';
 import { ToolNames } from '../tools/tool-names.js';
 import { ApprovalMode } from '../config/config.js';
@@ -24,8 +24,8 @@ import { evaluateToolCall, rewritePathArgs } from './speculationToolGate.js';
 import {
   getCacheSafeParams,
   createForkedChat,
-  runForkedQuery,
-} from './forkedQuery.js';
+  runForkedAgent,
+} from '../utils/forkedAgent.js';
 import { getFilterReason, SUGGESTION_PROMPT } from './suggestionGenerator.js';

 // ---------------------------------------------------------------------------
@@ -197,7 +197,7 @@ interface LoopResult {
 async function runSpeculativeLoop(
   config: Config,
   state: SpeculationState,
-  cacheSafe: import('./forkedQuery.js').CacheSafeParams,
+  cacheSafe: import('../utils/forkedAgent.js').CacheSafeParams,
   modelOverride?: string,
 ): Promise<LoopResult> {
   const chat = createForkedChat(config, cacheSafe);
@@ -537,10 +537,15 @@ The assistant responded: ${speculatedSummary || '(tool calls executed)'}

 ${SUGGESTION_PROMPT}`;

-  const result = await runForkedQuery(config, augmentedPrompt, {
-    abortSignal,
+  const cacheSafeParams = getCacheSafeParams();
+  if (!cacheSafeParams) return null;
+  const result = await runForkedAgent({
+    config,
+    userMessage: augmentedPrompt,
+    cacheSafeParams,
     jsonSchema: PIPELINED_SCHEMA,
     model: modelOverride,
+    abortSignal,
   });

   if (abortSignal.aborted) return null;
@@ -11,7 +11,7 @@

 import type { Content } from '@google/genai';
 import type { Config } from '../config/config.js';
-import { getCacheSafeParams, runForkedQuery } from './forkedQuery.js';
+import { getCacheSafeParams, runForkedAgent } from '../utils/forkedAgent.js';
 import {
   uiTelemetryService,
   EVENT_API_RESPONSE,
@@ -152,9 +152,13 @@ async function generateViaForkedQuery(
   modelOverride?: string,
 ): Promise<string | null> {
   const model = modelOverride || config.getModel();
+  const cacheSafeParams = getCacheSafeParams();
+  if (!cacheSafeParams) return null;
   const startTime = Date.now();
-  const result = await runForkedQuery(config, SUGGESTION_PROMPT, {
-    abortSignal,
+  const result = await runForkedAgent({
+    config,
+    userMessage: SUGGESTION_PROMPT,
+    cacheSafeParams,
     jsonSchema: SUGGESTION_SCHEMA,
     model,
   });
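Both migrated call sites pass a `jsonSchema` and rely on the parse-or-fallback behavior the old `runForkedQuery` implemented: trim the streamed text, try to parse it as JSON, and on failure keep it as plain text. A standalone sketch of that pattern, using a hypothetical `parseStructured` helper not present in the codebase:

```typescript
// Sketch of structured-output handling: the model is asked for JSON via a
// schema, but may return plain text anyway, so parsing must never throw.
interface Structured {
  text: string | null;
  jsonResult?: Record<string, unknown>;
}

function parseStructured(raw: string): Structured {
  const trimmed = raw.trim() || null;
  if (!trimmed) return { text: null };
  try {
    return {
      text: trimmed,
      jsonResult: JSON.parse(trimmed) as Record<string, unknown>,
    };
  } catch {
    // Model returned non-JSON despite the schema constraint: treat as text.
    return { text: trimmed };
  }
}
```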
@@ -84,7 +84,7 @@ export * from './tools/lsp.js';
 export * from './tools/mcp-client.js';
 export * from './tools/mcp-client-manager.js';
 export * from './tools/mcp-tool.js';
-export * from './tools/memoryTool.js';
+export * from './memory/const.js';
 export * from './tools/read-file.js';
 export * from './tools/ripGrep.js';
 export * from './tools/sdk-control-client-transport.js';
@@ -114,6 +114,22 @@ export * from './services/gitWorktreeService.js';
 export * from './services/sessionService.js';
 export * from './services/shellExecutionService.js';

+// ============================================================================
+// Managed Auto-Memory
+// ============================================================================
+
+// MemoryManager is the single public API for all memory operations.
+// Production code: config.getMemoryManager().method(...)
+// Tests: new MemoryManager()
+export * from './memory/manager.js';
+
+// Foundational utilities (paths, storage scaffold, type definitions, constants)
+// that are legitimately needed by UI code (MemoryDialog, commands, etc.)
+export * from './memory/types.js';
+export * from './memory/paths.js';
+export * from './memory/store.js';
+export * from './memory/const.js';
+
 // ============================================================================
 // IDE Support
 // ============================================================================
@@ -251,6 +267,8 @@ export * from './utils/toml-to-markdown-converter.js';
 export * from './utils/tool-utils.js';
 export * from './utils/workspaceContext.js';
 export * from './utils/yaml-parser.js';
+export * from './utils/forkedAgent.js';
+export * from './utils/sideQuery.js';

 // ============================================================================
 // OAuth & Authentication
packages/core/src/memory/const.test.ts (new file, 49 lines)
@@ -0,0 +1,49 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { vi, describe, it, expect } from 'vitest';
import {
  setGeminiMdFilename,
  getCurrentGeminiMdFilename,
  getAllGeminiMdFilenames,
} from './const.js';

// Mock dependencies
vi.mock(import('node:fs/promises'), async (importOriginal) => {
  const actual = await importOriginal();
  return {
    ...actual,
    mkdir: vi.fn(),
    readFile: vi.fn(),
  };
});

vi.mock('os');

describe('setGeminiMdFilename', () => {
  it('should update currentGeminiMdFilename when a valid new name is provided', () => {
    const newName = 'CUSTOM_CONTEXT.md';
    setGeminiMdFilename(newName);
    expect(getCurrentGeminiMdFilename()).toBe(newName);
  });

  it('should not update currentGeminiMdFilename if the new name is empty or whitespace', () => {
    const initialName = getCurrentGeminiMdFilename(); // Get current before trying to change
    setGeminiMdFilename(' ');
    expect(getCurrentGeminiMdFilename()).toBe(initialName);

    setGeminiMdFilename('');
    expect(getCurrentGeminiMdFilename()).toBe(initialName);
  });

  it('should handle an array of filenames', () => {
    const newNames = ['CUSTOM_CONTEXT.md', 'ANOTHER_CONTEXT.md'];
    setGeminiMdFilename(newNames);
    expect(getCurrentGeminiMdFilename()).toBe('CUSTOM_CONTEXT.md');
    expect(getAllGeminiMdFilenames()).toEqual(newNames);
  });
});
packages/core/src/memory/const.ts (new file, 42 lines)
@@ -0,0 +1,42 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

export const QWEN_CONFIG_DIR = '.qwen';
export const DEFAULT_CONTEXT_FILENAME = 'QWEN.md';
export const AGENT_CONTEXT_FILENAME = 'AGENTS.md';
export const MEMORY_SECTION_HEADER = '## Qwen Added Memories';

// This variable will hold the currently configured filename for context files.
// It defaults to include both QWEN.md and AGENTS.md but can be overridden by setGeminiMdFilename.
// QWEN.md is first to maintain backward compatibility (used by /init command tool).
let currentGeminiMdFilename: string | string[] = [
  DEFAULT_CONTEXT_FILENAME,
  AGENT_CONTEXT_FILENAME,
];

export function setGeminiMdFilename(newFilename: string | string[]): void {
  if (Array.isArray(newFilename)) {
    if (newFilename.length > 0) {
      currentGeminiMdFilename = newFilename.map((name) => name.trim());
    }
  } else if (newFilename && newFilename.trim() !== '') {
    currentGeminiMdFilename = newFilename.trim();
  }
}

export function getCurrentGeminiMdFilename(): string {
  if (Array.isArray(currentGeminiMdFilename)) {
    return currentGeminiMdFilename[0];
  }
  return currentGeminiMdFilename;
}

export function getAllGeminiMdFilenames(): string[] {
  if (Array.isArray(currentGeminiMdFilename)) {
    return currentGeminiMdFilename;
  }
  return [currentGeminiMdFilename];
}
packages/core/src/memory/dream.test.ts (new file, 92 lines)
@@ -0,0 +1,92 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { runManagedAutoMemoryDream } from './dream.js';
import { ensureAutoMemoryScaffold } from './store.js';

vi.mock('./dreamAgentPlanner.js', () => ({
  planManagedAutoMemoryDreamByAgent: vi.fn(),
}));

import { planManagedAutoMemoryDreamByAgent } from './dreamAgentPlanner.js';

describe('managed auto-memory dream', () => {
  let tempDir: string;
  let projectRoot: string;
  let mockConfig: Config;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'auto-memory-dream-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(projectRoot);
    vi.mocked(planManagedAutoMemoryDreamByAgent).mockReset();
    mockConfig = {
      getSessionId: vi.fn().mockReturnValue('session-1'),
      getModel: vi.fn().mockReturnValue('qwen-test'),
      getApprovalMode: vi.fn(),
    } as unknown as Config;
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('throws when config is missing', async () => {
    await expect(runManagedAutoMemoryDream(projectRoot)).rejects.toThrow(
      'Managed auto-memory dream requires config',
    );
  });

  it('returns touched topics derived from files touched by the dream agent', async () => {
    vi.mocked(planManagedAutoMemoryDreamByAgent).mockResolvedValue({
      status: 'completed',
      finalText: 'Merged duplicate user memories.',
      filesTouched: [
        path.join(projectRoot, '.qwen', 'memory', 'user', 'prefs.md'),
        path.join(projectRoot, '.qwen', 'memory', 'reference', 'dash.md'),
      ],
    });

    const result = await runManagedAutoMemoryDream(
      projectRoot,
      new Date('2026-04-02T00:00:00.000Z'),
      mockConfig,
    );

    expect(result.touchedTopics).toEqual(
      expect.arrayContaining(['user', 'reference']),
    );
    expect(result.dedupedEntries).toBe(0);
    expect(result.systemMessage).toContain(
      'Managed auto-memory dream (agent):',
    );
  });

  it('propagates planner failures', async () => {
    vi.mocked(planManagedAutoMemoryDreamByAgent).mockRejectedValue(
      new Error('agent failed'),
    );

    await expect(
      runManagedAutoMemoryDream(
        projectRoot,
        new Date('2026-04-02T00:00:00.000Z'),
        mockConfig,
      ),
    ).rejects.toThrow('agent failed');
  });
});
packages/core/src/memory/dream.ts (new file, 147 lines)
@@ -0,0 +1,147 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import type { Config } from '../config/config.js';
import { getAutoMemoryMetadataPath } from './paths.js';
import { planManagedAutoMemoryDreamByAgent } from './dreamAgentPlanner.js';
import { rebuildManagedAutoMemoryIndex } from './indexer.js';
import { ensureAutoMemoryScaffold } from './store.js';
import {
  AUTO_MEMORY_TYPES,
  type AutoMemoryMetadata,
  type AutoMemoryType,
} from './types.js';
import { logMemoryDream, MemoryDreamEvent } from '../telemetry/index.js';

export interface AutoMemoryDreamResult {
  touchedTopics: AutoMemoryType[];
  dedupedEntries: number;
  systemMessage?: string;
}

async function bumpMetadata(projectRoot: string, now: Date): Promise<void> {
  const metadataPath = getAutoMemoryMetadataPath(projectRoot);
  try {
    const content = await fs.readFile(metadataPath, 'utf-8');
    const metadata = JSON.parse(content) as AutoMemoryMetadata;
    metadata.updatedAt = now.toISOString();
    metadata.lastDreamAt = now.toISOString();
    await fs.writeFile(
      metadataPath,
      `${JSON.stringify(metadata, null, 2)}\n`,
      'utf-8',
    );
  } catch {
    // Best-effort metadata bump.
  }
}

async function runDreamByAgent(
  projectRoot: string,
  config: Config,
): Promise<AutoMemoryDreamResult> {
  const result = await planManagedAutoMemoryDreamByAgent(config, projectRoot);

  // Infer which topics were touched from the file paths
  const touchedTopics = new Set<AutoMemoryType>();
  for (const filePath of result.filesTouched) {
    const normalized = filePath.replace(/\\/g, '/');
    for (const type of AUTO_MEMORY_TYPES) {
      if (normalized.includes(`/${type}/`)) {
        touchedTopics.add(type);
      }
    }
  }

  const summary = result.finalText
    ? result.finalText.trim().slice(0, 300)
    : `updated ${result.filesTouched.length} file(s)`;

  return {
    touchedTopics: [...touchedTopics],
    dedupedEntries: 0,
    systemMessage: `Managed auto-memory dream (agent): ${summary}`,
  };
}

export async function runManagedAutoMemoryDream(
  projectRoot: string,
  now = new Date(),
  config?: Config,
): Promise<AutoMemoryDreamResult> {
  await ensureAutoMemoryScaffold(projectRoot, now);
  const t0 = Date.now();

  if (!config) {
    throw new Error(
      'Managed auto-memory dream requires config for forked-agent execution.',
    );
  }

  const agentResult = await runDreamByAgent(projectRoot, config);
  if (agentResult.touchedTopics.length > 0) {
    await bumpMetadata(projectRoot, now);
    await rebuildManagedAutoMemoryIndex(projectRoot);
  }

  await updateDreamMetadataResult(projectRoot, now, agentResult.touchedTopics);

  logMemoryDream(
    config,
    new MemoryDreamEvent({
      trigger: 'auto',
      status: agentResult.touchedTopics.length > 0 ? 'updated' : 'noop',
      deduped_entries: agentResult.dedupedEntries,
      touched_topics: agentResult.touchedTopics,
      duration_ms: Date.now() - t0,
    }),
  );
  return agentResult;
}

async function updateDreamMetadataResult(
  projectRoot: string,
  now: Date,
  touchedTopics: AutoMemoryType[],
  sessionId?: string,
): Promise<void> {
  const metadataPath = getAutoMemoryMetadataPath(projectRoot);
  try {
    const content = await fs.readFile(metadataPath, 'utf-8');
    const metadata = JSON.parse(content) as AutoMemoryMetadata;
    metadata.updatedAt = now.toISOString();
    metadata.lastDreamAt = now.toISOString();
    metadata.lastDreamTouchedTopics = touchedTopics;
    metadata.lastDreamStatus = touchedTopics.length > 0 ? 'updated' : 'noop';
    if (sessionId !== undefined) {
      metadata.lastDreamSessionId = sessionId;
      metadata.recentSessionIdsSinceDream = [];
    }
    await fs.writeFile(
      metadataPath,
      `${JSON.stringify(metadata, null, 2)}\n`,
      'utf-8',
    );
  } catch {
    // Best-effort metadata bump.
  }
}

/**
 * Record that the user manually ran /dream. Called from the CLI command's
 * onComplete callback after the main agent turn finishes writing memory files.
 * Writes lastDreamAt, lastDreamSessionId, and resets recentSessionIdsSinceDream
 * so that the scheduler's same-session dedupe check prevents a redundant
 * auto-dream from firing in the same session.
 */
export async function writeDreamManualRunToMetadata(
  projectRoot: string,
  sessionId: string,
  now = new Date(),
): Promise<void> {
  return updateDreamMetadataResult(projectRoot, now, [], sessionId);
}
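The topic-inference loop in `runDreamByAgent` normalizes Windows separators and then matches `/<type>/` path segments against the known memory types. A standalone sketch of that loop; the topic list here is an assumption for illustration (the real values come from `AUTO_MEMORY_TYPES` in types.ts), and `inferTouchedTopics` is a hypothetical name:

```typescript
// Sketch of the topic-inference loop: a topic counts as touched when any
// touched file path contains that topic as a directory segment.
const TOPIC_TYPES = ['user', 'project', 'reference'] as const; // assumed list
type Topic = (typeof TOPIC_TYPES)[number];

function inferTouchedTopics(filesTouched: string[]): Topic[] {
  const touched = new Set<Topic>();
  for (const filePath of filesTouched) {
    // Normalize Windows separators so the segment match works on all platforms.
    const normalized = filePath.replace(/\\/g, '/');
    for (const type of TOPIC_TYPES) {
      if (normalized.includes(`/${type}/`)) {
        touched.add(type);
      }
    }
  }
  return [...touched];
}
```

Segment matching (`/${type}/`) rather than a bare substring check avoids false positives from filenames that merely contain a topic word.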
packages/core/src/memory/dreamAgentPlanner.test.ts (new file, 104 lines)
@@ -0,0 +1,104 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import type { ForkedAgentResult } from '../utils/forkedAgent.js';
import { runForkedAgent } from '../utils/forkedAgent.js';
import { planManagedAutoMemoryDreamByAgent } from './dreamAgentPlanner.js';
import { ensureAutoMemoryScaffold } from './store.js';

vi.mock('../utils/forkedAgent.js', () => ({
  runForkedAgent: vi.fn(),
}));

describe('dreamAgentPlanner', () => {
  let tempDir: string;
  let projectRoot: string;
  let config: Config;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(
      path.join(os.tmpdir(), 'auto-memory-dream-agent-'),
    );
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(projectRoot);
    config = {
      getSessionId: vi.fn().mockReturnValue('session-1'),
      getModel: vi.fn().mockReturnValue('qwen-test'),
      getApprovalMode: vi.fn(),
    } as unknown as Config;
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('returns the forked agent result', async () => {
    const mockResult: ForkedAgentResult = {
      status: 'completed',
      finalText: 'Merged 2 duplicate Vim entries into prefers-vim.md.',
      filesTouched: [
        path.join(projectRoot, '.qwen', 'memory', 'user', 'prefers-vim.md'),
      ],
    };

    vi.mocked(runForkedAgent).mockResolvedValue(mockResult);

    const result = await planManagedAutoMemoryDreamByAgent(config, projectRoot);

    expect(result).toBe(mockResult);
    expect(runForkedAgent).toHaveBeenCalledWith(
      expect.objectContaining({
        maxTurns: 8,
        maxTimeMinutes: 5,
        tools: [
          'read_file',
          'grep_search',
          'glob',
          'list_directory',
          'run_shell_command',
          'write_file',
          'edit',
        ],
      }),
    );
  });

  it('throws when the agent fails', async () => {
    vi.mocked(runForkedAgent).mockResolvedValue({
      status: 'failed',
      terminateReason: 'Model timed out',
      filesTouched: [],
    } satisfies ForkedAgentResult);

    await expect(
      planManagedAutoMemoryDreamByAgent(config, projectRoot),
    ).rejects.toThrow('Model timed out');
  });

  it('returns cancelled result without throwing', async () => {
    const mockResult: ForkedAgentResult = {
      status: 'cancelled',
      filesTouched: [],
    };

    vi.mocked(runForkedAgent).mockResolvedValue(mockResult);

    const result = await planManagedAutoMemoryDreamByAgent(config, projectRoot);
    expect(result.status).toBe('cancelled');
    expect(result.filesTouched).toHaveLength(0);
  });
});
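The planner in dreamAgentPlanner.ts merges the memory-scoped permission decision with the base PermissionManager's decision by severity, so the stricter answer always wins (deny > ask > allow > default). That merge can be exercised standalone; `mergeDecision` here is a renamed sketch of `mergePermissionDecision`:

```typescript
// Sketch of the severity merge used by the memory-scoped permission
// manager: whichever of the two decisions has higher priority wins,
// with the scoped decision winning ties.
type Decision = 'deny' | 'ask' | 'allow' | 'default';

function mergeDecision(scoped: Decision, base: Decision): Decision {
  const priority: Record<Decision, number> = {
    deny: 4,
    ask: 3,
    allow: 2,
    default: 1,
  };
  return priority[base] > priority[scoped] ? base : scoped;
}
```

This ordering means a user's project-level deny or ask rule can never be weakened by the memory scope's automatic allow for in-scope writes.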
packages/core/src/memory/dreamAgentPlanner.ts (new file, 246 lines)
@@ -0,0 +1,246 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Config } from '../config/config.js';
import {
  runForkedAgent,
  type ForkedAgentResult,
} from '../utils/forkedAgent.js';
import { getProjectHash, QWEN_DIR } from '../utils/paths.js';
import {
  AUTO_MEMORY_INDEX_FILENAME,
  getAutoMemoryRoot,
  isAutoMemPath,
} from './paths.js';
import { ToolNames } from '../tools/tool-names.js';
import type { PermissionManager } from '../permissions/permission-manager.js';
import type {
  PermissionCheckContext,
  PermissionDecision,
} from '../permissions/types.js';
import { isShellCommandReadOnlyAST } from '../utils/shellAstParser.js';
import { stripShellWrapper } from '../utils/shell-utils.js';

const MAX_TURNS = 8;
const MAX_TIME_MINUTES = 5;

type MemoryScopedPermissionManager = Pick<
  PermissionManager,
  | 'evaluate'
  | 'findMatchingDenyRule'
  | 'hasMatchingAskRule'
  | 'hasRelevantRules'
  | 'isToolEnabled'
>;

function isScopedTool(toolName: string): boolean {
  return (
    toolName === ToolNames.SHELL ||
    toolName === ToolNames.EDIT ||
    toolName === ToolNames.WRITE_FILE
  );
}

function mergePermissionDecision(
  scopedDecision: PermissionDecision,
  baseDecision: PermissionDecision,
): PermissionDecision {
  const priority: Record<PermissionDecision, number> = {
    deny: 4,
    ask: 3,
    allow: 2,
    default: 1,
  };
  return priority[baseDecision] > priority[scopedDecision]
    ? baseDecision
    : scopedDecision;
}

async function evaluateScopedDecision(
  ctx: PermissionCheckContext,
  projectRoot: string,
): Promise<PermissionDecision> {
  switch (ctx.toolName) {
    case ToolNames.SHELL: {
      if (!ctx.command) {
        return 'deny';
      }
      const isReadOnly = await isShellCommandReadOnlyAST(
        stripShellWrapper(ctx.command),
      );
      return isReadOnly ? 'allow' : 'deny';
    }
    case ToolNames.EDIT:
    case ToolNames.WRITE_FILE:
      return ctx.filePath && isAutoMemPath(ctx.filePath, projectRoot)
        ? 'allow'
        : 'deny';
    default:
      return 'default';
  }
}

function getScopedDenyRule(
  ctx: PermissionCheckContext,
  projectRoot: string,
): string | undefined {
  switch (ctx.toolName) {
    case ToolNames.SHELL:
      return 'ManagedAutoMemory(run_shell_command: read-only only)';
    case ToolNames.EDIT:
      return `ManagedAutoMemory(edit: only within ${getAutoMemoryRoot(projectRoot)})`;
    case ToolNames.WRITE_FILE:
      return `ManagedAutoMemory(write_file: only within ${getAutoMemoryRoot(projectRoot)})`;
    default:
      return undefined;
  }
}

function createMemoryScopedAgentConfig(
  config: Config,
  projectRoot: string,
): Config {
  const basePm = config.getPermissionManager?.();
  const scopedPm: MemoryScopedPermissionManager = {
    hasRelevantRules(ctx: PermissionCheckContext): boolean {
      return isScopedTool(ctx.toolName) || !!basePm?.hasRelevantRules(ctx);
    },
    hasMatchingAskRule(ctx: PermissionCheckContext): boolean {
      return basePm?.hasMatchingAskRule(ctx) ?? false;
    },
    findMatchingDenyRule(ctx: PermissionCheckContext): string | undefined {
      const scoped = getScopedDenyRule(ctx, projectRoot);
      if (scoped) {
        return scoped;
      }
      return basePm?.findMatchingDenyRule(ctx);
    },
    async evaluate(ctx: PermissionCheckContext): Promise<PermissionDecision> {
      const scopedDecision = await evaluateScopedDecision(ctx, projectRoot);
      if (!basePm) {
        return scopedDecision;
      }
      const baseDecision = basePm.hasRelevantRules(ctx)
        ? await basePm.evaluate(ctx)
        : 'default';
      return mergePermissionDecision(scopedDecision, baseDecision);
    },
    async isToolEnabled(toolName: string): Promise<boolean> {
      // Registry-level check: is this tool type allowed at all?
      // Scoped tools (SHELL/EDIT/WRITE_FILE) are enabled — per-invocation
      // restrictions are enforced in evaluate().
      if (isScopedTool(toolName)) {
|
||||
return true;
|
||||
}
|
||||
if (basePm) {
|
||||
return basePm.isToolEnabled(toolName);
|
||||
}
|
||||
return true;
|
||||
},
|
||||
};
|
||||
|
||||
const scopedConfig = Object.create(config) as Config;
|
||||
scopedConfig.getPermissionManager = () =>
|
||||
scopedPm as unknown as PermissionManager;
|
||||
return scopedConfig;
|
||||
}
|
||||
|
||||
const DREAM_AGENT_SYSTEM_PROMPT = `You are performing a managed memory dream — a reflective pass over durable memory files.
|
||||
|
||||
Synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly.
|
||||
|
||||
Rules:
|
||||
- Merge semantically duplicate entries — if the same fact appears in multiple files, consolidate into one file and delete the rest.
|
||||
- Preserve all durable information; do not delete content that is still accurate.
|
||||
- Fix contradicted or stale facts only when the evidence is clear from the existing memory content or recent transcript signal.
|
||||
- Update the MEMORY.md index to accurately reflect surviving files.
|
||||
- Keep the MEMORY.md index concise: one line per file in the format \`- [Title](relative/path.md) — one-line hook\`.
|
||||
- If nothing needs consolidation, do nothing and say so.`;
|
||||
|
||||
function getTranscriptDir(projectRoot: string): string {
|
||||
const projectHash = getProjectHash(projectRoot);
|
||||
return `${QWEN_DIR}/tmp/${projectHash}/chats`;
|
||||
}
|
||||
|
||||
export function buildConsolidationTaskPrompt(
|
||||
memoryRoot: string,
|
||||
transcriptDir: string,
|
||||
): string {
|
||||
return [
|
||||
`Memory directory: \`${memoryRoot}\``,
|
||||
'This directory already exists — write to it directly with the write_file tool (do not run mkdir or check for its existence).',
|
||||
`Session transcripts: \`${transcriptDir}\` (large JSONL files — grep narrowly, don't read whole files)`,
|
||||
'',
|
||||
'## Phase 1 — Orient',
|
||||
'',
|
||||
'- List the memory directory to see what files exist',
|
||||
`- Read \`${memoryRoot}/${AUTO_MEMORY_INDEX_FILENAME}\` to understand the current index`,
|
||||
'- Skim topic subdirectories (`user/`, `project/`, `feedback/`, `reference/`)',
|
||||
'- If `logs/` or `sessions/` subdirectories exist, review recent entries there',
|
||||
'',
|
||||
'## Phase 2 — Gather recent signal',
|
||||
'',
|
||||
'Look for new information worth persisting. Sources in rough priority order:',
|
||||
'',
|
||||
'1. Existing memories that drifted — facts that contradict something you now know from current memory files',
|
||||
'2. Transcript search — if you need specific context, grep session transcripts for narrow terms:',
|
||||
` \`grep -rn "<narrow term>" ${transcriptDir}/ --include="*.jsonl" | tail -50\``,
|
||||
'',
|
||||
"Don't exhaustively read transcripts. Look only for things you already suspect matter.",
|
||||
'',
|
||||
'## Phase 3 — Consolidate',
|
||||
'',
|
||||
'For each topic directory:',
|
||||
'- Identify duplicate or near-duplicate `.md` files (same fact expressed differently)',
|
||||
'- Merge duplicates: write the canonical version into one file, delete the redundant files',
|
||||
'- Fix stale or contradicted facts when clear from the existing content',
|
||||
'- Convert relative dates (for example: "yesterday", "last week") to absolute dates when preserving them',
|
||||
'',
|
||||
'## Phase 4 — Prune and index',
|
||||
'',
|
||||
`Update \`${memoryRoot}/${AUTO_MEMORY_INDEX_FILENAME}\` to reflect surviving files.`,
|
||||
'Each entry: `- [Title](relative/path.md) — one-line hook`',
|
||||
'Keep the index under roughly 200 lines and ~25KB.',
|
||||
'Remove pointers to deleted, stale, wrong, or superseded files. Add pointers to any newly created files.',
|
||||
'If an index line is too verbose, shorten it and move the detail back into the memory file itself.',
|
||||
'',
|
||||
'---',
|
||||
'',
|
||||
'Return a brief summary of what you consolidated, updated, or pruned. If nothing needed consolidation, say so briefly.',
|
||||
].join('\n');
|
||||
}
|
||||
|
||||
export async function planManagedAutoMemoryDreamByAgent(
|
||||
config: Config,
|
||||
projectRoot: string,
|
||||
): Promise<ForkedAgentResult> {
|
||||
const memoryRoot = getAutoMemoryRoot(projectRoot);
|
||||
const transcriptDir = getTranscriptDir(projectRoot);
|
||||
const scopedConfig = createMemoryScopedAgentConfig(config, projectRoot);
|
||||
const result = await runForkedAgent({
|
||||
name: 'managed-auto-memory-dreamer',
|
||||
config: scopedConfig,
|
||||
taskPrompt: buildConsolidationTaskPrompt(memoryRoot, transcriptDir),
|
||||
systemPrompt: DREAM_AGENT_SYSTEM_PROMPT,
|
||||
maxTurns: MAX_TURNS,
|
||||
maxTimeMinutes: MAX_TIME_MINUTES,
|
||||
tools: [
|
||||
ToolNames.READ_FILE,
|
||||
ToolNames.GREP,
|
||||
ToolNames.GLOB,
|
||||
ToolNames.LS,
|
||||
ToolNames.SHELL,
|
||||
ToolNames.WRITE_FILE,
|
||||
ToolNames.EDIT,
|
||||
],
|
||||
});
|
||||
|
||||
if (result.status === 'failed') {
|
||||
throw new Error(result.terminateReason || 'Dream agent failed');
|
||||
}
|
||||
|
||||
return result;
|
||||
}
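The permission merge above always prefers the stricter decision. Here is a minimal, self-contained sketch of that precedence rule (a local `merge` helper that mirrors `mergePermissionDecision`; nothing here is part of the real module):

```typescript
// Stricter-wins precedence: deny > ask > allow > default.
type PermissionDecision = 'deny' | 'ask' | 'allow' | 'default';

const priority: Record<PermissionDecision, number> = {
  deny: 4,
  ask: 3,
  allow: 2,
  default: 1,
};

function merge(
  scoped: PermissionDecision,
  base: PermissionDecision,
): PermissionDecision {
  // The base decision only wins when it is strictly stricter than the scoped one.
  return priority[base] > priority[scoped] ? base : scoped;
}

console.log(merge('allow', 'deny')); // a base-level deny overrides a scoped allow
console.log(merge('deny', 'allow')); // a scoped deny cannot be loosened by the base
console.log(merge('default', 'ask')); // a base ask tightens a scoped default
```

This is why the scoped manager can never grant more than the user's own rules allow: any base-level `deny` or `ask` survives the merge.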
34
packages/core/src/memory/entries.test.ts
Normal file
@@ -0,0 +1,34 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, expect, it } from 'vitest';
import { parseAutoMemoryEntries, renderAutoMemoryBody } from './entries.js';

describe('managed auto-memory entries', () => {
  it('parses and renders why/apply fields', () => {
    const body = [
      '# User Memory',
      '',
      '- User prefers terse responses.',
      '  - Why: This reduces back-and-forth.',
      '  - How to apply: Prefer concise summaries first.',
    ].join('\n');

    const entries = parseAutoMemoryEntries(body);
    expect(entries).toEqual([
      {
        summary: 'User prefers terse responses.',
        why: 'This reduces back-and-forth.',
        howToApply: 'Prefer concise summaries first.',
      },
    ]);

    const rendered = renderAutoMemoryBody('# User Memory', entries);
    expect(rendered).toContain('User prefers terse responses.');
    expect(rendered).toContain('Why: This reduces back-and-forth.');
    expect(rendered).toContain('How to apply: Prefer concise summaries first.');
  });
});
189
packages/core/src/memory/entries.ts
Normal file
@@ -0,0 +1,189 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

export interface ManagedAutoMemoryEntry {
  summary: string;
  why?: string;
  howToApply?: string;
}

function normalizeText(text: string): string {
  return text.replace(/\s+/g, ' ').trim();
}

/**
 * Returns the `# Heading` line from a body, or a default.
 * Used when reading old-format multi-entry topic files.
 */
export function getAutoMemoryBodyHeading(body: string): string {
  return (
    body
      .split('\n')
      .map((line) => line.trim())
      .find((line) => line.startsWith('# ')) ?? '# Memory'
  );
}

/**
 * Parses memory entries from a body string.
 *
 * Supports two formats:
 *
 * **New (per-entry file) format** — the body starts with the plain-text summary,
 * followed by optional top-level `Why:` / `How to apply:` lines:
 * ```
 * Use short responses when debugging
 *
 * Why: The user prefers brevity in debug sessions.
 * How to apply: Keep replies to 3 sentences max.
 * ```
 *
 * **Legacy (multi-entry topic file) format** — each entry begins with a `- bullet`
 * prefix; nested fields use 2-space indent:
 * ```
 * # Feedback Memory
 *
 * - Use short responses when debugging
 *   - Why: The user prefers brevity in debug sessions.
 * - Always use TypeScript strict mode
 *   - Why: Catches bugs early.
 * ```
 */
export function parseAutoMemoryEntries(body: string): ManagedAutoMemoryEntry[] {
  const entries: ManagedAutoMemoryEntry[] = [];
  let current: ManagedAutoMemoryEntry | null = null;

  for (const rawLine of body.split('\n')) {
    const trimmed = rawLine.trim();
    if (
      !trimmed ||
      trimmed === '_No entries yet._' ||
      trimmed.startsWith('# ')
    ) {
      continue;
    }

    // Indented nested field — legacy format: `  - Why: ...` or `  Why: ...`
    if (current) {
      const indentedMatch = rawLine.match(
        /^[\t ]{2,}(?:[-*][\t ]+)?(Why|How to apply|How_to_apply):[\t ]*(\S.*)$/i,
      );
      if (indentedMatch) {
        const [, rawKey, rawValue] = indentedMatch;
        const value = normalizeText(rawValue);
        if (value) {
          switch (rawKey.toLowerCase()) {
            case 'why':
              current.why = value;
              break;
            case 'how to apply':
            case 'how_to_apply':
              current.howToApply = value;
              break;
            default:
              break;
          }
        }
        continue;
      }
    }

    // Top-level named field — new format: `Why: ...` or `**How to apply**: ...`
    const topLevelMatch = trimmed.match(
      /^(?:\*\*)?(Why|How to apply|How_to_apply)(?:\*\*)?:[ \t]*(\S.*)$/i,
    );
    if (topLevelMatch) {
      const [, rawKey, rawValue] = topLevelMatch;
      const value = normalizeText(rawValue);
      if (value && current) {
        switch (rawKey.toLowerCase()) {
          case 'why':
            current.why = value;
            break;
          case 'how to apply':
          case 'how_to_apply':
            current.howToApply = value;
            break;
          default:
            break;
        }
      }
      continue;
    }

    // Bullet prefix — legacy format: `- Summary text`
    if (/^[-*]\s+/.test(trimmed)) {
      if (current) {
        entries.push(current);
      }
      current = {
        summary: normalizeText(trimmed.replace(/^[-*]\s+/, '')),
      };
      continue;
    }

    // Plain text — new per-entry format: each plain-text line starts a new
    // entry. If a current entry is already open, close it first so that
    // multi-entry bodies produced by renderAutoMemoryBody can round-trip
    // correctly through parse→rewrite without losing later entries.
    if (current) {
      entries.push(current);
    }
    current = { summary: normalizeText(trimmed) };
  }

  if (current) {
    entries.push(current);
  }

  return entries;
}

export function renderAutoMemoryBody(
  _heading: string,
  entries: ManagedAutoMemoryEntry[],
): string {
  if (entries.length === 0) {
    return '_No entries yet._';
  }

  const lines: string[] = [];
  for (let i = 0; i < entries.length; i++) {
    if (i > 0) {
      lines.push('');
    }
    const entry = entries[i];
    lines.push(normalizeText(entry.summary));
    if (entry.why) {
      lines.push('', `Why: ${normalizeText(entry.why)}`);
    }
    if (entry.howToApply) {
      lines.push('', `How to apply: ${normalizeText(entry.howToApply)}`);
    }
  }

  return lines.join('\n');
}

export function mergeAutoMemoryEntry(
  current: ManagedAutoMemoryEntry,
  incoming: ManagedAutoMemoryEntry,
): ManagedAutoMemoryEntry {
  return {
    summary: incoming.summary || current.summary,
    why: current.why ?? incoming.why,
    howToApply: current.howToApply ?? incoming.howToApply,
  };
}

export function buildAutoMemoryEntrySearchText(
  entry: ManagedAutoMemoryEntry,
): string {
  return [entry.summary, entry.why, entry.howToApply]
    .filter((value): value is string => Boolean(value))
    .join(' ')
    .toLowerCase();
}
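A quick sketch of how `mergeAutoMemoryEntry` resolves conflicts: the incoming summary replaces the current one, while existing `why` / `howToApply` annotations are kept rather than overwritten. This is a self-contained copy of the merge logic for illustration only; the local `Entry` and `mergeEntry` names are stand-ins:

```typescript
interface Entry {
  summary: string;
  why?: string;
  howToApply?: string;
}

// Incoming summary wins; existing annotations are preserved.
function mergeEntry(current: Entry, incoming: Entry): Entry {
  return {
    summary: incoming.summary || current.summary,
    why: current.why ?? incoming.why,
    howToApply: current.howToApply ?? incoming.howToApply,
  };
}

const merged = mergeEntry(
  { summary: 'User prefers terse responses.', why: 'Less back-and-forth.' },
  { summary: 'User prefers terse replies.', howToApply: 'Lead with the answer.' },
);
console.log(merged);
```

The asymmetry is deliberate: re-extracting the same fact may reword the summary, but it should not silently discard rationale that was already recorded.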
115
packages/core/src/memory/extract.test.ts
Normal file
@@ -0,0 +1,115 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { getAutoMemoryExtractCursorPath } from './paths.js';
import {
  buildTranscriptMessages,
  loadUnprocessedTranscriptSlice,
  runAutoMemoryExtract,
} from './extract.js';
import { runAutoMemoryExtractionByAgent } from './extractionAgentPlanner.js';
import { ensureAutoMemoryScaffold } from './store.js';

vi.mock('./extractionAgentPlanner.js', () => ({
  runAutoMemoryExtractionByAgent: vi.fn(),
}));

describe('auto-memory extraction', () => {
  let tempDir: string;
  let projectRoot: string;
  let mockConfig: Config;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'auto-memory-extract-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(projectRoot);
    mockConfig = {
      getSessionId: vi.fn().mockReturnValue('session-1'),
      getModel: vi.fn().mockReturnValue('qwen3-coder-plus'),
    } as unknown as Config;
    vi.clearAllMocks();
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('builds transcript slices from history and cursor state', () => {
    const transcript = buildTranscriptMessages([
      { role: 'user', parts: [{ text: 'hello' }] },
      { role: 'model', parts: [{ text: 'world' }] },
      { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
    ]);

    const slice = loadUnprocessedTranscriptSlice('session-1', transcript, {
      sessionId: 'session-1',
      processedOffset: 2,
      updatedAt: new Date().toISOString(),
    });

    expect(slice.messages).toHaveLength(1);
    expect(slice.messages[0]?.text).toBe('I prefer terse responses.');
    expect(slice.nextProcessedOffset).toBe(3);
  });

  it('updates cursor and avoids duplicate writes for repeated extraction', async () => {
    vi.mocked(runAutoMemoryExtractionByAgent).mockResolvedValue({
      touchedTopics: [],
      systemMessage: undefined,
    });

    const history = [
      { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
      { role: 'model', parts: [{ text: 'Understood.' }] },
    ];

    const first = await runAutoMemoryExtract({
      projectRoot,
      sessionId: 'session-1',
      config: mockConfig,
      history: [...history],
    });
    const second = await runAutoMemoryExtract({
      projectRoot,
      sessionId: 'session-1',
      config: mockConfig,
      history: [...history],
    });

    expect(first.touchedTopics).toEqual([]);
    expect(second.touchedTopics).toEqual([]);

    const cursor = JSON.parse(
      await fs.readFile(getAutoMemoryExtractCursorPath(projectRoot), 'utf-8'),
    ) as { processedOffset: number; sessionId: string };

    expect(cursor.sessionId).toBe('session-1');
    expect(cursor.processedOffset).toBe(2);
  });

  it('throws when config is missing because heuristic fallback was removed', async () => {
    await expect(
      runAutoMemoryExtract({
        projectRoot,
        sessionId: 'session-1',
        history: [
          { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
        ],
      }),
    ).rejects.toThrow('Managed auto-memory extraction requires config');
  });
});
195
packages/core/src/memory/extract.ts
Normal file
@@ -0,0 +1,195 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import { createDebugLogger } from '../utils/debugLogger.js';
import { partToString } from '../utils/partUtils.js';
import {
  getAutoMemoryExtractCursorPath,
  getAutoMemoryMetadataPath,
} from './paths.js';
import { ensureAutoMemoryScaffold } from './store.js';
import { runAutoMemoryExtractionByAgent } from './extractionAgentPlanner.js';
import { rebuildManagedAutoMemoryIndex } from './indexer.js';
import {
  type AutoMemoryExtractCursor,
  type AutoMemoryMetadata,
  type AutoMemoryType,
} from './types.js';

const debugLogger = createDebugLogger('AUTO_MEMORY_EXTRACT');

export interface AutoMemoryTranscriptMessage {
  offset: number;
  role: 'user' | 'model';
  text: string;
}

export interface AutoMemoryExtractResult {
  touchedTopics: AutoMemoryType[];
  skippedReason?: 'already_running' | 'queued' | 'memory_tool';
  systemMessage?: string;
  cursor: AutoMemoryExtractCursor;
}

export function buildTranscriptMessages(
  history: Content[],
): AutoMemoryTranscriptMessage[] {
  return history
    .map((message, index) => ({
      offset: index,
      role: message.role,
      text: partToString(message.parts ?? [])
        .replace(/\s+/g, ' ')
        .trim(),
    }))
    .filter(
      (message): message is AutoMemoryTranscriptMessage =>
        (message.role === 'user' || message.role === 'model') &&
        message.text.length > 0,
    );
}

export function loadUnprocessedTranscriptSlice(
  sessionId: string,
  messages: AutoMemoryTranscriptMessage[],
  cursor: AutoMemoryExtractCursor,
): { messages: AutoMemoryTranscriptMessage[]; nextProcessedOffset: number } {
  const startOffset =
    cursor.sessionId === sessionId ? (cursor.processedOffset ?? 0) : 0;
  return {
    messages: messages.filter((message) => message.offset >= startOffset),
    nextProcessedOffset: messages.length,
  };
}

async function readExtractCursor(
  projectRoot: string,
): Promise<AutoMemoryExtractCursor> {
  try {
    const content = await fs.readFile(
      getAutoMemoryExtractCursorPath(projectRoot),
      'utf-8',
    );
    return JSON.parse(content) as AutoMemoryExtractCursor;
  } catch (error) {
    const nodeError = error as NodeJS.ErrnoException;
    if (nodeError.code === 'ENOENT') {
      return { updatedAt: new Date(0).toISOString() };
    }
    throw error;
  }
}

async function writeExtractCursor(
  projectRoot: string,
  cursor: AutoMemoryExtractCursor,
): Promise<void> {
  await fs.writeFile(
    getAutoMemoryExtractCursorPath(projectRoot),
    `${JSON.stringify(cursor, null, 2)}\n`,
    'utf-8',
  );
}

async function bumpMetadata(
  projectRoot: string,
  now: Date,
  sessionId: string,
  touchedTopics: AutoMemoryType[],
): Promise<void> {
  try {
    const content = await fs.readFile(
      getAutoMemoryMetadataPath(projectRoot),
      'utf-8',
    );
    const metadata = JSON.parse(content) as AutoMemoryMetadata;
    metadata.updatedAt = now.toISOString();
    metadata.lastExtractionAt = now.toISOString();
    metadata.lastExtractionSessionId = sessionId;
    metadata.lastExtractionTouchedTopics = touchedTopics;
    metadata.lastExtractionStatus =
      touchedTopics.length > 0 ? 'updated' : 'noop';
    await fs.writeFile(
      getAutoMemoryMetadataPath(projectRoot),
      `${JSON.stringify(metadata, null, 2)}\n`,
      'utf-8',
    );
  } catch {
    // Scaffold creation already writes metadata; ignore non-critical update errors.
  }
}

export async function runAutoMemoryExtract(params: {
  projectRoot: string;
  sessionId: string;
  history: Content[];
  now?: Date;
  config?: Config;
}): Promise<AutoMemoryExtractResult> {
  const now = params.now ?? new Date();
  await ensureAutoMemoryScaffold(params.projectRoot, now);

  const transcript = buildTranscriptMessages(params.history);
  const currentCursor = await readExtractCursor(params.projectRoot);
  const slice = loadUnprocessedTranscriptSlice(
    params.sessionId,
    transcript,
    currentCursor,
  );

  if (!params.config) {
    throw new Error(
      'Managed auto-memory extraction requires config for forked-agent execution.',
    );
  }

  // Skip if no new user messages in the unprocessed slice.
  const hasNewUserMessages = slice.messages.some((m) => m.role === 'user');
  if (!hasNewUserMessages) {
    const cursor: AutoMemoryExtractCursor = {
      sessionId: params.sessionId,
      processedOffset: slice.nextProcessedOffset,
      updatedAt: now.toISOString(),
    };
    await writeExtractCursor(params.projectRoot, cursor);
    return { touchedTopics: [], cursor };
  }

  const agentResult = await runAutoMemoryExtractionByAgent(
    params.config,
    params.projectRoot,
  );

  if (agentResult.touchedTopics.length > 0) {
    await bumpMetadata(
      params.projectRoot,
      now,
      params.sessionId,
      agentResult.touchedTopics,
    );
    await rebuildManagedAutoMemoryIndex(params.projectRoot);
  }

  const cursor: AutoMemoryExtractCursor = {
    sessionId: params.sessionId,
    processedOffset: slice.nextProcessedOffset,
    updatedAt: now.toISOString(),
  };
  await writeExtractCursor(params.projectRoot, cursor);

  debugLogger.debug(
    `Managed auto-memory extract completed with ${agentResult.touchedTopics.length} touched topic(s).`,
  );

  return {
    touchedTopics: agentResult.touchedTopics,
    cursor,
    systemMessage: agentResult.systemMessage,
  };
}
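The cursor logic above can be pictured with a small self-contained sketch (mirroring `loadUnprocessedTranscriptSlice`; local `Msg`, `Cursor`, and `sliceUnprocessed` names are illustrative): messages at or past the processed offset are still eligible, and a cursor recorded for a different session is ignored entirely.

```typescript
interface Msg {
  offset: number;
  role: 'user' | 'model';
  text: string;
}

interface Cursor {
  sessionId?: string;
  processedOffset?: number;
}

function sliceUnprocessed(sessionId: string, messages: Msg[], cursor: Cursor) {
  // A cursor only applies to its own session; otherwise start from offset 0.
  const start =
    cursor.sessionId === sessionId ? (cursor.processedOffset ?? 0) : 0;
  return {
    messages: messages.filter((m) => m.offset >= start),
    nextProcessedOffset: messages.length,
  };
}

const transcript: Msg[] = [
  { offset: 0, role: 'user', text: 'hello' },
  { offset: 1, role: 'model', text: 'world' },
  { offset: 2, role: 'user', text: 'I prefer terse responses.' },
];

const same = sliceUnprocessed('s1', transcript, {
  sessionId: 's1',
  processedOffset: 2,
});
const other = sliceUnprocessed('s2', transcript, {
  sessionId: 's1',
  processedOffset: 2,
});
```

Because `nextProcessedOffset` is always the full history length, writing the cursor after a run (even a no-op run) is what prevents the repeated-extraction duplicates that the tests above exercise.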
95
packages/core/src/memory/extractAgent.test.ts
Normal file
@@ -0,0 +1,95 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { runAutoMemoryExtractionByAgent } from './extractionAgentPlanner.js';
import { runAutoMemoryExtract } from './extract.js';
import { getAutoMemoryRoot } from './paths.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { ensureAutoMemoryScaffold } from './store.js';

vi.mock('./extractionAgentPlanner.js', () => ({
  runAutoMemoryExtractionByAgent: vi.fn(),
}));

describe('auto-memory extraction with agent planner', () => {
  let tempDir: string;
  let projectRoot: string;
  const mockConfig = {} as Config;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(
      path.join(os.tmpdir(), 'auto-memory-extract-agent-'),
    );
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(projectRoot);
    vi.clearAllMocks();
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('uses the forked-agent execution path when config is provided', async () => {
    vi.mocked(runAutoMemoryExtractionByAgent).mockImplementation(async () => {
      const memoryRoot = getAutoMemoryRoot(projectRoot);
      const userPath = path.join(memoryRoot, 'user', 'terse-responses.md');
      await fs.mkdir(path.dirname(userPath), { recursive: true });
      await fs.writeFile(
        userPath,
        [
          '---',
          'name: Terse responses',
          'description: User prefers terse responses.',
          'type: user',
          '---',
          '',
          '- User prefers terse responses.',
          '',
        ].join('\n'),
        'utf-8',
      );

      return {
        touchedTopics: ['user'],
        systemMessage: 'Managed auto-memory updated: user.md',
      };
    });

    const result = await runAutoMemoryExtract({
      projectRoot,
      sessionId: 'session-1',
      config: mockConfig,
      history: [
        {
          role: 'user',
          parts: [{ text: 'I prefer terse responses.' }],
        },
      ],
    });

    expect(result.touchedTopics).toEqual(['user']);
    expect(runAutoMemoryExtractionByAgent).toHaveBeenCalledWith(
      mockConfig,
      projectRoot,
    );

    const docs = await scanAutoMemoryTopicDocuments(projectRoot);
    expect(docs.find((doc) => doc.type === 'user')?.body).toContain(
      'User prefers terse responses.',
    );
  });
});
143
packages/core/src/memory/extractionAgentPlanner.test.ts
Normal file
@@ -0,0 +1,143 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { runAutoMemoryExtractionByAgent } from './extractionAgentPlanner.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { runForkedAgent, getCacheSafeParams } from '../utils/forkedAgent.js';

vi.mock('./scan.js', async (importOriginal) => {
  const actual = await importOriginal<typeof import('./scan.js')>();
  return {
    ...actual,
    scanAutoMemoryTopicDocuments: vi.fn(),
  };
});

vi.mock('./paths.js', async (importOriginal) => {
  const actual = await importOriginal<typeof import('./paths.js')>();
  return {
    ...actual,
    getAutoMemoryRoot: vi.fn().mockReturnValue('/tmp/auto-memory'),
  };
});

vi.mock('../utils/forkedAgent.js', () => ({
  runForkedAgent: vi.fn(),
  getCacheSafeParams: vi.fn(),
}));

describe('runAutoMemoryExtractionByAgent', () => {
  const mockConfig = {
    getSessionId: vi.fn().mockReturnValue('session-1'),
    getModel: vi.fn().mockReturnValue('qwen3-coder-plus'),
    getApprovalMode: vi.fn(),
  } as unknown as Config;

  beforeEach(() => {
    vi.clearAllMocks();
    vi.mocked(getCacheSafeParams).mockReturnValue({
      generationConfig: {},
      history: [
        { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
        { role: 'model', parts: [{ text: 'Understood.' }] },
      ],
      model: 'qwen3-coder-plus',
      version: 1,
    });
    vi.mocked(scanAutoMemoryTopicDocuments).mockResolvedValue([
      {
        type: 'user',
        filePath: '/tmp/auto-memory/user/prefs.md',
        relativePath: 'user/prefs.md',
        filename: 'prefs.md',
        title: 'User Memory',
        description: 'User preferences',
        body: '- Existing terse preference.',
        mtimeMs: 1,
      },
    ]);
  });

  it('derives touchedTopics from filesTouched and returns systemMessage', async () => {
    vi.mocked(runForkedAgent).mockResolvedValue({
      status: 'completed',
      finalText: '',
      filesTouched: ['/tmp/auto-memory/user/prefs.md'],
    });

    const result = await runAutoMemoryExtractionByAgent(mockConfig, '/tmp');

    expect(result).toEqual({
      touchedTopics: ['user'],
      systemMessage: 'Managed auto-memory updated: user.md',
    });
    expect(runForkedAgent).toHaveBeenCalledWith(
      expect.objectContaining({
        tools: [
          'read_file',
          'grep_search',
          'glob',
          'list_directory',
          'run_shell_command',
          'write_file',
          'edit',
        ],
        maxTurns: 5,
        maxTimeMinutes: 2,
      }),
    );
  });

  it('returns empty touchedTopics when agent touches no files', async () => {
    vi.mocked(runForkedAgent).mockResolvedValue({
      status: 'completed',
      finalText: '',
      filesTouched: [],
    });

    const result = await runAutoMemoryExtractionByAgent(mockConfig, '/tmp');
    expect(result).toEqual({ touchedTopics: [] });
  });

  it('throws when getCacheSafeParams returns null', async () => {
    vi.mocked(getCacheSafeParams).mockReturnValue(null);
    await expect(
      runAutoMemoryExtractionByAgent(mockConfig, '/tmp'),
    ).rejects.toThrow('no cache-safe params');
  });

  it('throws when the agent fails to complete', async () => {
    vi.mocked(runForkedAgent).mockResolvedValue({
      status: 'failed',
      terminateReason: 'timeout',
      filesTouched: [],
    });

    await expect(
      runAutoMemoryExtractionByAgent(mockConfig, '/tmp/project'),
    ).rejects.toThrow('timeout');
  });

  it('ignores non-memory file paths in filesTouched', async () => {
    vi.mocked(runForkedAgent).mockResolvedValue({
      status: 'completed',
      finalText: '',
      filesTouched: [
        '/tmp/auto-memory/project/arch.md',
        '/tmp/auto-memory/reference/api.md',
        '/tmp/some/other/file.ts',
      ],
    });

    const result = await runAutoMemoryExtractionByAgent(mockConfig, '/tmp');
    expect(result.touchedTopics).toEqual(
      expect.arrayContaining(['project', 'reference']),
    );
    expect(result.touchedTopics).not.toContain('user');
  });
});
|
||||
349 packages/core/src/memory/extractionAgentPlanner.ts Normal file
@@ -0,0 +1,349 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Config } from '../config/config.js';
import { runForkedAgent, getCacheSafeParams } from '../utils/forkedAgent.js';
import { buildFunctionResponseParts } from '../agents/runtime/forkSubagent.js';
import type { Content } from '@google/genai';
import type { PermissionManager } from '../permissions/permission-manager.js';
import type {
  PermissionCheckContext,
  PermissionDecision,
} from '../permissions/types.js';
import {
  MEMORY_FRONTMATTER_EXAMPLE,
  TYPES_SECTION_INDIVIDUAL,
  WHAT_NOT_TO_SAVE_SECTION,
} from './prompt.js';
import {
  AUTO_MEMORY_INDEX_FILENAME,
  getAutoMemoryRoot,
  isAutoMemPath,
} from './paths.js';
import type { AutoMemoryType } from './types.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { ToolNames } from '../tools/tool-names.js';
import { isShellCommandReadOnlyAST } from '../utils/shellAstParser.js';
import { stripShellWrapper } from '../utils/shell-utils.js';

const MAX_TOPIC_SUMMARY_CHARS = 280;

type MemoryScopedPermissionManager = Pick<
  PermissionManager,
  | 'evaluate'
  | 'findMatchingDenyRule'
  | 'hasMatchingAskRule'
  | 'hasRelevantRules'
  | 'isToolEnabled'
>;

function isScopedTool(toolName: string): boolean {
  return (
    toolName === ToolNames.SHELL ||
    toolName === ToolNames.EDIT ||
    toolName === ToolNames.WRITE_FILE
  );
}

function mergePermissionDecision(
  scopedDecision: PermissionDecision,
  baseDecision: PermissionDecision,
): PermissionDecision {
  const priority: Record<PermissionDecision, number> = {
    deny: 4,
    ask: 3,
    allow: 2,
    default: 1,
  };
  return priority[baseDecision] > priority[scopedDecision]
    ? baseDecision
    : scopedDecision;
}
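The most-restrictive-wins merge above can be exercised in isolation. This is a minimal sketch; the standalone `PermissionDecision` alias mirrors the diff, and nothing here depends on the surrounding package:

```typescript
// Standalone sketch of the deny > ask > allow > default merge used above.
type PermissionDecision = 'deny' | 'ask' | 'allow' | 'default';

function mergePermissionDecision(
  scopedDecision: PermissionDecision,
  baseDecision: PermissionDecision,
): PermissionDecision {
  const priority: Record<PermissionDecision, number> = {
    deny: 4,
    ask: 3,
    allow: 2,
    default: 1,
  };
  // Strict '>' means ties resolve to the scoped decision.
  return priority[baseDecision] > priority[scopedDecision]
    ? baseDecision
    : scopedDecision;
}

console.log(mergePermissionDecision('allow', 'deny')); // base deny overrides a scoped allow
console.log(mergePermissionDecision('deny', 'ask')); // a scoped deny survives a base ask
```

Because the merge is symmetric in severity, neither layer can weaken the other: a deny from either the memory scope or the base permission rules always wins.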

async function evaluateScopedDecision(
  ctx: PermissionCheckContext,
  projectRoot: string,
): Promise<PermissionDecision> {
  switch (ctx.toolName) {
    case ToolNames.SHELL: {
      if (!ctx.command) {
        return 'deny';
      }
      const isReadOnly = await isShellCommandReadOnlyAST(
        stripShellWrapper(ctx.command),
      );
      return isReadOnly ? 'allow' : 'deny';
    }
    case ToolNames.EDIT:
    case ToolNames.WRITE_FILE:
      return ctx.filePath && isAutoMemPath(ctx.filePath, projectRoot)
        ? 'allow'
        : 'deny';
    default:
      return 'default';
  }
}

function getScopedDenyRule(
  ctx: PermissionCheckContext,
  projectRoot: string,
): string | undefined {
  switch (ctx.toolName) {
    case ToolNames.SHELL:
      return 'ManagedAutoMemory(run_shell_command: read-only only)';
    case ToolNames.EDIT:
      return `ManagedAutoMemory(edit: only within ${getAutoMemoryRoot(projectRoot)})`;
    case ToolNames.WRITE_FILE:
      return `ManagedAutoMemory(write_file: only within ${getAutoMemoryRoot(projectRoot)})`;
    default:
      return undefined;
  }
}

function createMemoryScopedAgentConfig(
  config: Config,
  projectRoot: string,
): Config {
  const basePm = config.getPermissionManager?.();
  const scopedPm: MemoryScopedPermissionManager = {
    hasRelevantRules(ctx: PermissionCheckContext): boolean {
      return isScopedTool(ctx.toolName) || !!basePm?.hasRelevantRules(ctx);
    },
    hasMatchingAskRule(ctx: PermissionCheckContext): boolean {
      return basePm?.hasMatchingAskRule(ctx) ?? false;
    },
    findMatchingDenyRule(ctx: PermissionCheckContext): string | undefined {
      const scoped = getScopedDenyRule(ctx, projectRoot);
      if (scoped) {
        return scoped;
      }
      return basePm?.findMatchingDenyRule(ctx);
    },
    async evaluate(ctx: PermissionCheckContext): Promise<PermissionDecision> {
      const scopedDecision = await evaluateScopedDecision(ctx, projectRoot);
      if (!basePm) {
        return scopedDecision;
      }
      const baseDecision = basePm.hasRelevantRules(ctx)
        ? await basePm.evaluate(ctx)
        : 'default';
      return mergePermissionDecision(scopedDecision, baseDecision);
    },
    async isToolEnabled(toolName: string): Promise<boolean> {
      // Registry-level check: is this tool type allowed at all?
      // Scoped tools (SHELL/EDIT/WRITE_FILE) are enabled — per-invocation
      // restrictions are enforced in evaluate().
      if (isScopedTool(toolName)) {
        return true;
      }
      if (basePm) {
        return basePm.isToolEnabled(toolName);
      }
      return true;
    },
  };

  const scopedConfig = Object.create(config) as Config;
  scopedConfig.getPermissionManager = () =>
    scopedPm as unknown as PermissionManager;
  return scopedConfig;
}

const EXTRACTION_AGENT_SYSTEM_PROMPT = [
  'You are now acting as the managed memory extraction subagent for an AI coding assistant.',
  '',
  'The recent conversation history is already in your context. Analyze only that recent conversation and use it to update persistent managed memory.',
  '',
  'Rules:',
  '- Read existing memory files first to avoid creating duplicates.',
  '- Extract only durable facts stated by the user.',
  '- Ignore temporary, session-specific, speculative, or question content.',
  '- If the user explicitly asks the assistant to remember something durable, preserve it.',
  '- Use one of the allowed topics: user, feedback, project, reference.',
  '- Keep entries concise and suitable for bullet points. No leading bullet markers.',
  '- Do not investigate repository code, git history, or unrelated files.',
  '- Work only from the conversation history in your context and the existing memory files.',
  '- If nothing durable should be saved, make no file changes.',
  '',
  ...TYPES_SECTION_INDIVIDUAL,
  ...WHAT_NOT_TO_SAVE_SECTION,
  '',
  'Memory file format reference:',
  ...MEMORY_FRONTMATTER_EXAMPLE,
].join('\n');

export interface AutoMemoryExtractionExecutionResult {
  touchedTopics: AutoMemoryType[];
  systemMessage?: string;
}

/**
 * Ensure the history slice ends with a `model` text message so that
 * agent-headless can send the task prompt as the first user turn without
 * creating consecutive user messages (Gemini API constraint).
 *
 * - Trailing `user` message: drop it.
 * - Last `model` message has open function calls: close them with placeholder
 *   responses and append a model ack so the sequence stays valid.
 * - Otherwise: return a shallow copy as-is.
 */
function buildAgentHistory(history: Content[]): Content[] {
  if (history.length === 0) return [];
  const last = history[history.length - 1];
  if (last.role !== 'model') {
    return history.slice(0, -1);
  }
  const openCalls = (last.parts ?? []).filter((p) => p.functionCall);
  if (openCalls.length === 0) {
    return [...history];
  }
  const toolResponses = buildFunctionResponseParts(
    last,
    'Background extraction started.',
  );
  return [
    ...history,
    { role: 'user' as const, parts: toolResponses },
    { role: 'model' as const, parts: [{ text: 'Acknowledged.' }] },
  ];
}
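The history normalization above can be sketched standalone. The minimal `Content`/`Part` shapes below are assumptions standing in for the `@google/genai` types, and `closeOpenCalls` is a hypothetical stand-in for `buildFunctionResponseParts`:

```typescript
// Minimal stand-ins (assumptions) for the @google/genai Content/Part types.
interface Part {
  text?: string;
  functionCall?: { name: string };
  functionResponse?: { name: string; response: unknown };
}
interface Content { role: 'user' | 'model'; parts: Part[] }

// Hypothetical stand-in for buildFunctionResponseParts: one placeholder
// response per open functionCall on the last model message.
function closeOpenCalls(last: Content, note: string): Part[] {
  return last.parts
    .filter((p) => p.functionCall)
    .map((p) => ({
      functionResponse: { name: p.functionCall!.name, response: { output: note } },
    }));
}

function buildAgentHistory(history: Content[]): Content[] {
  if (history.length === 0) return [];
  const last = history[history.length - 1];
  if (last.role !== 'model') return history.slice(0, -1); // drop trailing user turn
  if (!last.parts.some((p) => p.functionCall)) return [...history];
  // Close open calls with placeholders, then append a model ack.
  return [
    ...history,
    { role: 'user', parts: closeOpenCalls(last, 'Background extraction started.') },
    { role: 'model', parts: [{ text: 'Acknowledged.' }] },
  ];
}

const trailingUser: Content[] = [
  { role: 'user', parts: [{ text: 'hi' }] },
  { role: 'model', parts: [{ text: 'hello' }] },
  { role: 'user', parts: [{ text: 'remember this' }] },
];
console.log(buildAgentHistory(trailingUser).length); // 2: trailing user turn dropped
```

Either way, the normalized slice always ends on a `model` turn, so the extraction task prompt can be appended as a fresh `user` turn without violating the alternating-role constraint.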

function truncate(text: string, maxChars: number): string {
  const normalized = text.replace(/\s+/g, ' ').trim();
  if (normalized.length <= maxChars) {
    return normalized;
  }
  return `${normalized.slice(0, maxChars).trimEnd()}…`;
}

async function buildTopicSummaryBlock(projectRoot: string): Promise<string> {
  const docs = await scanAutoMemoryTopicDocuments(projectRoot);
  if (docs.length === 0) {
    return '';
  }
  return docs
    .map((doc) => {
      const body = truncate(
        doc.body === '_No entries yet._' ? '' : doc.body,
        MAX_TOPIC_SUMMARY_CHARS,
      );
      return [
        `- [${doc.title}](${doc.relativePath}) — ${doc.description || '(no description)'}`,
        `  topic=${doc.type}`,
        `  path=${doc.filePath}`,
        `  current=${body || '(empty)'}`,
      ].join('\n');
    })
    .join('\n\n');
}

function buildTaskPrompt(memoryRoot: string, topicSummaries: string): string {
  return [
    `Managed memory directory: \`${memoryRoot}\``,
    '',
    'Scan the recent conversation history in your context and update durable managed memory.',
    '',
    'Available tools in this run: `read_file`, `grep_search`, `glob`, `list_directory`, read-only `run_shell_command`, and `write_file`/`edit` for paths inside the managed memory directory only.',
    '- Do not use any other tools.',
    '- You have a limited turn budget. `edit` requires a prior `read_file` of the same file, so the efficient strategy is: first issue all reads in parallel for every file you might update; then issue all `write_file`/`edit` calls in parallel. Do not interleave reads and writes across multiple turns.',
    '- You MUST only use content from the recent conversation history in your context plus the current managed memory files.',
    '- Do not inspect repository code, git history, or unrelated files.',
    '- Prefer updating an existing memory file over creating a duplicate.',
    '- Keep one durable memory per file under `user/`, `feedback/`, `project/`, or `reference/`.',
    '',
    '## How to save memories',
    '',
    '**Step 1** — write or update the memory file itself using the required frontmatter format.',
    `**Step 2** — update \`${memoryRoot}/${AUTO_MEMORY_INDEX_FILENAME}\`. It is an index, not a memory: each entry must be one line in the form \`- [Title](relative/path.md) — one-line hook\`. Never write memory content directly into the index.`,
    '- If you create or delete a memory file, also update the managed memory index.',
    '- If nothing durable should be saved, make no file changes.',
    '',
    '## Existing memory files',
    '',
    topicSummaries || '(none yet)',
  ].join('\n');
}

/**
 * Derive which memory topics were touched from the list of file paths written
 * during the agent run. Avoids requiring JSON output from the agent.
 */
function touchedTopicsFromFilePaths(
  filePaths: string[],
  projectRoot: string,
): AutoMemoryType[] {
  const memoryRoot = getAutoMemoryRoot(projectRoot);
  const topicSet = new Set<AutoMemoryType>();
  for (const p of filePaths) {
    if (!p.startsWith(memoryRoot)) continue;
    const rel = p.slice(memoryRoot.length).replace(/^\//, '');
    const segment = rel.split('/')[0] as AutoMemoryType;
    if (
      segment === 'user' ||
      segment === 'feedback' ||
      segment === 'project' ||
      segment === 'reference'
    ) {
      topicSet.add(segment);
    }
  }
  return [...topicSet];
}
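The topic derivation above is pure string handling and easy to check in isolation. In this sketch the memory root is passed directly instead of going through `getAutoMemoryRoot` (an assumption for illustration):

```typescript
// Standalone sketch of deriving touched topics from written file paths.
type AutoMemoryType = 'user' | 'feedback' | 'project' | 'reference';

function touchedTopicsFromFilePaths(
  filePaths: string[],
  memoryRoot: string,
): AutoMemoryType[] {
  const topicSet = new Set<AutoMemoryType>();
  for (const p of filePaths) {
    if (!p.startsWith(memoryRoot)) continue; // ignore writes outside the memory root
    const segment = p.slice(memoryRoot.length).replace(/^\//, '').split('/')[0];
    if (
      segment === 'user' ||
      segment === 'feedback' ||
      segment === 'project' ||
      segment === 'reference'
    ) {
      topicSet.add(segment);
    }
  }
  return [...topicSet]; // Set deduplicates; insertion order is preserved
}

const topics = touchedTopicsFromFilePaths(
  [
    '/tmp/auto-memory/user/prefs.md',
    '/tmp/auto-memory/user/editor.md',
    '/tmp/src/index.ts',
  ],
  '/tmp/auto-memory',
);
console.log(topics); // ['user']
```

Deriving topics from the agent's `filesTouched` side channel means the subagent never has to emit structured JSON; its writes are the output.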

export async function runAutoMemoryExtractionByAgent(
  config: Config,
  projectRoot: string,
): Promise<AutoMemoryExtractionExecutionResult> {
  const cacheSafe = getCacheSafeParams();
  if (!cacheSafe) {
    throw new Error(
      'runAutoMemoryExtractionByAgent: no cache-safe params available; ' +
        'extraction must run after a completed main turn.',
    );
  }
  const extraHistory = buildAgentHistory(cacheSafe.history);

  const topicSummaries = await buildTopicSummaryBlock(projectRoot);
  const memoryRoot = getAutoMemoryRoot(projectRoot);
  const scopedConfig = createMemoryScopedAgentConfig(config, projectRoot);

  const result = await runForkedAgent({
    name: 'managed-auto-memory-extractor',
    config: scopedConfig,
    taskPrompt: buildTaskPrompt(memoryRoot, topicSummaries),
    systemPrompt: EXTRACTION_AGENT_SYSTEM_PROMPT,
    maxTurns: 5,
    maxTimeMinutes: 2,
    tools: [
      ToolNames.READ_FILE,
      ToolNames.GREP,
      ToolNames.GLOB,
      ToolNames.LS,
      ToolNames.SHELL,
      ToolNames.WRITE_FILE,
      ToolNames.EDIT,
    ],
    extraHistory,
    skipEnvHistory: true,
  });

  if (result.status !== 'completed') {
    throw new Error(
      result.terminateReason ||
        'Extraction agent did not complete successfully',
    );
  }

  const touchedTopics = touchedTopicsFromFilePaths(
    result.filesTouched,
    projectRoot,
  );

  return {
    touchedTopics,
    systemMessage:
      touchedTopics.length > 0
        ? `Managed auto-memory updated: ${touchedTopics.map((t) => `${t}.md`).join(', ')}`
        : undefined,
  };
}
11 packages/core/src/memory/extractionPlanner.ts Normal file
@@ -0,0 +1,11 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

// Deprecated: managed auto-memory extraction no longer has a separate
// model-planner stage. Extraction now runs directly through the forked agent
// path implemented in extractionAgentPlanner.ts.

export {};
342 packages/core/src/memory/forget.ts Normal file
@@ -0,0 +1,342 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import { runSideQuery } from '../utils/sideQuery.js';
import {
  buildAutoMemoryEntrySearchText,
  getAutoMemoryBodyHeading,
  parseAutoMemoryEntries,
  renderAutoMemoryBody,
} from './entries.js';
import { rebuildManagedAutoMemoryIndex } from './indexer.js';
import { getAutoMemoryMetadataPath } from './paths.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { ensureAutoMemoryScaffold } from './store.js';
import type { AutoMemoryMetadata, AutoMemoryType } from './types.js';

export interface AutoMemoryForgetMatch {
  topic: AutoMemoryType;
  summary: string;
  filePath: string;
}

export interface AutoMemoryForgetResult {
  query: string;
  removedEntries: AutoMemoryForgetMatch[];
  touchedTopics: AutoMemoryType[];
  systemMessage?: string;
}

export interface AutoMemoryForgetSelectionResult {
  matches: AutoMemoryForgetMatch[];
  strategy: 'none' | 'heuristic' | 'model';
  reasoning?: string;
}

interface IndexedForgetCandidate extends AutoMemoryForgetMatch {
  id: string;
  why?: string;
  howToApply?: string;
}

const FORGET_SELECTION_RESPONSE_SCHEMA: Record<string, unknown> = {
  type: 'object',
  properties: {
    selectedCandidateIds: {
      type: 'array',
      items: { type: 'string' },
    },
    reasoning: {
      type: 'string',
    },
  },
  required: ['selectedCandidateIds'],
};

interface ForgetSelectionResponse {
  selectedCandidateIds: string[];
  reasoning?: string;
}

async function listIndexedForgetCandidates(
  projectRoot: string,
): Promise<IndexedForgetCandidate[]> {
  const docs = await scanAutoMemoryTopicDocuments(projectRoot);
  const candidates: IndexedForgetCandidate[] = [];

  for (const doc of docs) {
    const entries = parseAutoMemoryEntries(doc.body);
    for (let i = 0; i < entries.length; i++) {
      const entry = entries[i];
      candidates.push({
        // Use a stable per-entry ID so the model can target individual entries
        // in multi-entry files without accidentally removing siblings.
        id:
          entries.length === 1 ? doc.relativePath : `${doc.relativePath}:${i}`,
        topic: doc.type,
        summary: entry.summary,
        filePath: doc.filePath,
        why: entry.why,
        howToApply: entry.howToApply,
      });
    }
  }

  return candidates;
}

function buildForgetSelectionPrompt(
  query: string,
  candidates: IndexedForgetCandidate[],
  limit: number,
): string {
  return [
    'Select the managed auto-memory entries that most likely match the user request to forget something.',
    `Return at most ${limit} candidate ids.`,
    'Prefer semantically matching entries even if the wording differs slightly.',
    'If nothing should be forgotten, return an empty array.',
    '',
    `Forget request: ${query.trim()}`,
    '',
    'Candidates:',
    ...candidates.map((candidate, index) =>
      [
        `Candidate ${index + 1}`,
        `id: ${candidate.id}`,
        `topic: ${candidate.topic}`,
        `summary: ${candidate.summary}`,
        `why: ${candidate.why ?? '(none)'}`,
        `howToApply: ${candidate.howToApply ?? '(none)'}`,
      ].join('\n'),
    ),
  ].join('\n');
}

async function selectByModel(
  candidates: IndexedForgetCandidate[],
  query: string,
  config: Config,
  limit: number,
): Promise<AutoMemoryForgetSelectionResult> {
  const response = await runSideQuery<ForgetSelectionResponse>(config, {
    purpose: 'auto-memory-forget-selection',
    contents: [
      {
        role: 'user',
        parts: [
          {
            text: buildForgetSelectionPrompt(query, candidates, limit),
          },
        ],
      },
    ] as Content[],
    schema: FORGET_SELECTION_RESPONSE_SCHEMA,
    abortSignal: AbortSignal.timeout(8_000),
    config: {
      temperature: 0,
    },
    validate: (value) => {
      const candidateIds = new Set(candidates.map((c) => c.id));
      for (const id of value.selectedCandidateIds) {
        if (!candidateIds.has(id)) {
          return `Unknown candidate id: ${id}`;
        }
      }
      return null;
    },
  });

  const selectedIds = new Set(response.selectedCandidateIds);
  const matches = candidates
    .filter((candidate) => selectedIds.has(candidate.id))
    .slice(0, limit)
    .map(({ topic, summary, filePath }) => ({ topic, summary, filePath }));

  return {
    matches,
    strategy: matches.length > 0 ? 'model' : 'none',
    reasoning: response.reasoning,
  };
}

function selectByHeuristic(
  candidates: IndexedForgetCandidate[],
  query: string,
  limit: number,
): AutoMemoryForgetSelectionResult {
  const normalizedQuery = query.replace(/\s+/g, ' ').trim();
  const queryLower = normalizedQuery.toLowerCase();
  const matches = candidates
    .filter((candidate) =>
      buildAutoMemoryEntrySearchText(candidate).includes(queryLower),
    )
    .slice(0, limit)
    .map(({ topic, summary, filePath }) => ({ topic, summary, filePath }));

  return {
    matches,
    strategy: matches.length > 0 ? 'heuristic' : 'none',
  };
}

export async function selectManagedAutoMemoryForgetCandidates(
  projectRoot: string,
  query: string,
  options: {
    config?: Config;
    limit?: number;
  } = {},
): Promise<AutoMemoryForgetSelectionResult> {
  const limit = options.limit ?? 5;
  const candidates = await listIndexedForgetCandidates(projectRoot);
  if (candidates.length === 0) {
    return { matches: [], strategy: 'none' };
  }

  if (options.config) {
    try {
      return await selectByModel(candidates, query, options.config, limit);
    } catch {
      // Fall through to heuristic.
    }
  }

  return selectByHeuristic(candidates, query, limit);
}
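The heuristic fallback is a plain case-insensitive substring match over a flattened search text. In this sketch, `buildSearchText` is a simplified stand-in (an assumption) for `buildAutoMemoryEntrySearchText`:

```typescript
// Standalone sketch of the heuristic forget-candidate selection fallback.
interface Candidate { topic: string; summary: string; filePath: string; why?: string }

// Simplified stand-in (assumption) for buildAutoMemoryEntrySearchText.
function buildSearchText(c: Candidate): string {
  return [c.summary, c.why ?? ''].join(' ').toLowerCase();
}

function selectByHeuristic(candidates: Candidate[], query: string, limit: number) {
  // Collapse whitespace and lowercase the query to match the flattened text.
  const queryLower = query.replace(/\s+/g, ' ').trim().toLowerCase();
  const matches = candidates
    .filter((c) => buildSearchText(c).includes(queryLower))
    .slice(0, limit);
  return { matches, strategy: matches.length > 0 ? 'heuristic' : 'none' };
}

const result = selectByHeuristic(
  [
    { topic: 'user', summary: 'Prefers terse commit messages', filePath: 'user/commits.md' },
    { topic: 'project', summary: 'Release freeze every Friday', filePath: 'project/freeze.md' },
  ],
  'terse commit',
  5,
);
console.log(result.strategy, result.matches.length); // heuristic 1
```

Substring matching is deliberately strict: it only fires when the user's wording appears verbatim in the entry, which is why the model-driven selection is tried first and this path serves as the offline fallback.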

async function bumpMetadata(projectRoot: string, now: Date): Promise<void> {
  try {
    const content = await fs.readFile(
      getAutoMemoryMetadataPath(projectRoot),
      'utf-8',
    );
    const metadata = JSON.parse(content) as AutoMemoryMetadata;
    metadata.updatedAt = now.toISOString();
    await fs.writeFile(
      getAutoMemoryMetadataPath(projectRoot),
      `${JSON.stringify(metadata, null, 2)}\n`,
      'utf-8',
    );
  } catch {
    // Best-effort metadata bump.
  }
}

export async function forgetManagedAutoMemoryMatches(
  projectRoot: string,
  matches: AutoMemoryForgetMatch[],
  now = new Date(),
): Promise<AutoMemoryForgetResult> {
  if (matches.length === 0) {
    return {
      query: '',
      removedEntries: [],
      touchedTopics: [],
      systemMessage: undefined,
    };
  }
  await ensureAutoMemoryScaffold(projectRoot, now);

  const removedEntries: AutoMemoryForgetMatch[] = [];
  const touchedTopics = new Set<AutoMemoryType>();

  // Group matches by file so we can do per-entry removal rather than
  // blindly deleting entire files (which would destroy unrelated entries in
  // legacy multi-entry files).
  const matchesByFile = new Map<string, AutoMemoryForgetMatch[]>();
  for (const match of matches) {
    const existing = matchesByFile.get(match.filePath) ?? [];
    existing.push(match);
    matchesByFile.set(match.filePath, existing);
  }

  for (const [filePath, fileMatches] of matchesByFile) {
    try {
      const rawContent = await fs.readFile(filePath, 'utf-8');
      const fmMatch = rawContent.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);

      if (!fmMatch) {
        // No frontmatter — delete the whole file.
        await fs.unlink(filePath);
        removedEntries.push(...fileMatches);
        for (const m of fileMatches) touchedTopics.add(m.topic);
        continue;
      }

      const [, frontmatter, rawBody] = fmMatch;
      const allEntries = parseAutoMemoryEntries(rawBody.trim());
      const matchedSummaries = new Set(
        fileMatches.map((m) => m.summary.toLowerCase()),
      );
      const kept = allEntries.filter(
        (e) => !matchedSummaries.has(e.summary.toLowerCase()),
      );

      if (kept.length === 0) {
        await fs.unlink(filePath);
      } else {
        const heading = getAutoMemoryBodyHeading(rawBody);
        const newBody = renderAutoMemoryBody(heading, kept);
        await fs.writeFile(
          filePath,
          `---\n${frontmatter}\n---\n\n${newBody}\n`,
          'utf-8',
        );
      }

      // Record the entries that were actually removed (by summary match count).
      const removedCount = allEntries.length - kept.length;
      removedEntries.push(...fileMatches.slice(0, removedCount));
      for (const m of fileMatches.slice(0, removedCount)) {
        touchedTopics.add(m.topic);
      }
    } catch {
      // File may have already been removed; continue.
    }
  }

  if (touchedTopics.size > 0) {
    await bumpMetadata(projectRoot, now);
    await rebuildManagedAutoMemoryIndex(projectRoot);
  }

  return {
    query: '',
    removedEntries,
    touchedTopics: [...touchedTopics],
    systemMessage:
      removedEntries.length > 0
        ? `Managed auto-memory forgot ${removedEntries.length} entr${removedEntries.length === 1 ? 'y' : 'ies'} from: ${[...touchedTopics].map((topic) => `${topic}/`).join(', ')}`
        : undefined,
  };
}

export async function forgetManagedAutoMemoryEntries(
  projectRoot: string,
  query: string,
  options: { config?: Config } = {},
  now = new Date(),
): Promise<AutoMemoryForgetResult> {
  const trimmedQuery = query.trim();
  if (!trimmedQuery) {
    return { query: trimmedQuery, removedEntries: [], touchedTopics: [] };
  }

  const selection = await selectManagedAutoMemoryForgetCandidates(
    projectRoot,
    trimmedQuery,
    { ...options, limit: Number.MAX_SAFE_INTEGER },
  );
  const result = await forgetManagedAutoMemoryMatches(
    projectRoot,
    selection.matches,
    now,
  );
  return { ...result, query: trimmedQuery };
}
352 packages/core/src/memory/governance.ts Normal file
@@ -0,0 +1,352 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import { runSideQuery } from '../utils/sideQuery.js';
import { parseAutoMemoryEntries } from './entries.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import type { AutoMemoryType } from './types.js';

export type AutoMemoryGovernanceSuggestionType =
  | 'duplicate'
  | 'conflict'
  | 'outdated'
  | 'promote'
  | 'migrate'
  | 'forget';

export interface AutoMemoryGovernanceSuggestion {
  type: AutoMemoryGovernanceSuggestionType;
  topic: AutoMemoryType;
  summary: string;
  rationale: string;
  relatedTopic?: AutoMemoryType;
  relatedSummary?: string;
  suggestedTargetTopic?: AutoMemoryType;
}

export interface AutoMemoryGovernanceReview {
  suggestions: AutoMemoryGovernanceSuggestion[];
  strategy: 'none' | 'heuristic' | 'model';
}

interface IndexedGovernanceEntry {
  /** Relative path of the file (used as stable ID). */
  id: string;
  filePath: string;
  topic: AutoMemoryType;
  summary: string;
  why?: string;
  howToApply?: string;
}

const RESPONSE_SCHEMA: Record<string, unknown> = {
  type: 'object',
  properties: {
    suggestions: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          type: {
            type: 'string',
            enum: [
              'duplicate',
              'conflict',
              'outdated',
              'promote',
              'migrate',
              'forget',
            ],
          },
          entryId: { type: 'string' },
          relatedEntryId: { type: 'string' },
          suggestedTargetTopic: {
            type: 'string',
            enum: ['user', 'feedback', 'project', 'reference'],
          },
          rationale: { type: 'string' },
        },
        required: ['type', 'entryId', 'rationale'],
      },
    },
  },
  required: ['suggestions'],
};

interface GovernanceResponse {
  suggestions: Array<{
    type: AutoMemoryGovernanceSuggestionType;
    entryId: string;
    relatedEntryId?: string;
    suggestedTargetTopic?: AutoMemoryType;
    rationale: string;
  }>;
}

async function listGovernanceEntries(
  projectRoot: string,
): Promise<IndexedGovernanceEntry[]> {
  const docs = await scanAutoMemoryTopicDocuments(projectRoot);
  const entries: IndexedGovernanceEntry[] = [];

  for (const doc of docs) {
    const docEntries = parseAutoMemoryEntries(doc.body);
    for (const entry of docEntries) {
      entries.push({
        id: doc.relativePath,
        filePath: doc.filePath,
        topic: doc.type,
        summary: entry.summary,
        why: entry.why,
        howToApply: entry.howToApply,
      });
    }
  }

  return entries;
}

function classifyExpectedTopic(summary: string): AutoMemoryType | null {
  if (
    /https?:\/\/|\b(grafana|dashboard|runbook|ticket|docs?|wiki|notion|jira)\b/i.test(
      summary,
    )
  ) {
    return 'reference';
  }
  if (
    /\b(i|we)\s+(prefer|like|need|want)\b|\bmy\s+(preferred|favorite)\b/i.test(
      summary,
    )
  ) {
    return 'user';
  }
  if (
    /\b(please|always|never|avoid|respond|format|style|terse|concise|detailed)\b/i.test(
      summary,
    )
  ) {
    return 'feedback';
  }
  if (
    /\b(project|repo|repository|service|release|deadline|freeze|incident|environment|stack)\b/i.test(
      summary,
    )
  ) {
    return 'project';
  }
  return null;
}

function maybeConflict(a: string, b: string): boolean {
  const pairChecks: Array<[RegExp, RegExp]> = [
    [/\balways\b/i, /\bnever\b/i],
    // Group the alternation so the word boundaries apply to both words
    // (/\bterse|concise\b/ would bind the boundaries to only one side each).
    [/\b(terse|concise)\b/i, /\bdetailed\b/i],
  ];
  return pairChecks.some(
    ([left, right]) =>
      (left.test(a) && right.test(b)) || (left.test(b) && right.test(a)),
  );
}
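The contradiction check can be demonstrated standalone: two summaries conflict when one regex of a pair matches each side, in either order. A minimal sketch, assuming only the pairs shown in the diff:

```typescript
// Standalone sketch of the order-insensitive regex-pair conflict check.
const pairChecks: Array<[RegExp, RegExp]> = [
  [/\balways\b/i, /\bnever\b/i],
  [/\b(terse|concise)\b/i, /\bdetailed\b/i],
];

function maybeConflict(a: string, b: string): boolean {
  return pairChecks.some(
    ([left, right]) =>
      (left.test(a) && right.test(b)) || (left.test(b) && right.test(a)),
  );
}

console.log(maybeConflict('Always squash commits', 'Never squash commits')); // true
console.log(maybeConflict('Prefers detailed reviews', 'Keep replies terse')); // true (order-insensitive)
console.log(maybeConflict('Always squash commits', 'Prefers dark mode')); // false
```

Checking both orderings of each pair means the caller never has to care which entry was saved first; the symmetric test covers `(a, b)` and `(b, a)` in one pass.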

function buildModelPrompt(entries: IndexedGovernanceEntry[]): string {
  return [
    'Review managed auto-memory entries and emit governance suggestions.',
    'Only suggest duplicate, conflict, outdated, promote, migrate, or forget when the case is strong.',
    'Prefer promote suggestions for entries that are durable but still missing why/howToApply context.',
    '',
    'Entries:',
    ...entries.map((entry, index) =>
      [
        `Entry ${index + 1}`,
        `id: ${entry.id}`,
        `topic: ${entry.topic}`,
        `summary: ${entry.summary}`,
        `why: ${entry.why ?? '(none)'}`,
        `howToApply: ${entry.howToApply ?? '(none)'}`,
      ].join('\n'),
    ),
    '',
    'Return JSON matching the response schema.',
  ].join('\n');
}

function buildHeuristicSuggestions(
  entries: IndexedGovernanceEntry[],
): AutoMemoryGovernanceSuggestion[] {
  const suggestions: AutoMemoryGovernanceSuggestion[] = [];

  // Duplicate detection: same summary (case-insensitive) in same topic
  const summaryByTopic = new Map<string, IndexedGovernanceEntry>();
  for (const entry of entries) {
    const key = `${entry.topic}:${entry.summary.toLowerCase()}`;
    const existing = summaryByTopic.get(key);
    if (existing) {
      suggestions.push({
        type: 'duplicate',
        topic: entry.topic,
        summary: entry.summary,
        relatedTopic: existing.topic,
        relatedSummary: existing.summary,
        rationale: 'Two entries share the same summary text.',
      });
    } else {
      summaryByTopic.set(key, entry);
    }
  }

  for (const entry of entries) {
    // Migration suggestion: entry may belong in a different topic
    const expectedTopic = classifyExpectedTopic(entry.summary);
    if (expectedTopic && expectedTopic !== entry.topic) {
      suggestions.push({
        type: 'migrate',
        topic: entry.topic,
        summary: entry.summary,
        suggestedTargetTopic: expectedTopic,
        rationale: `Entry heuristically belongs in '${expectedTopic}' rather than '${entry.topic}'.`,
      });
    }

    // Outdated markers
    if (
      /\b(today|now|currently|for this task|this session|temporary|temporarily)\b/i.test(
        entry.summary,
      )
    ) {
      suggestions.push({
        type: 'outdated',
        topic: entry.topic,
        summary: entry.summary,
        rationale: 'The entry appears temporary rather than durable.',
      });
    }

    if (/\b(deprecated|obsolete|sunset|legacy|old)\b/i.test(entry.summary)) {
      suggestions.push({
        type: 'outdated',
        topic: entry.topic,
        summary: entry.summary,
        rationale:
          'The entry contains wording that suggests it may be outdated.',
      });
    }

    // Promote: durable entry missing why/howToApply metadata
    if (!entry.why || !entry.howToApply) {
      suggestions.push({
        type: 'promote',
        topic: entry.topic,
        summary: entry.summary,
        rationale:
          'This durable entry could be upgraded with why/howToApply metadata.',
      });
    }
  }

  // Conflict detection: entries in the same topic that contradict each other
  for (let i = 0; i < entries.length; i += 1) {
    for (let j = i + 1; j < entries.length; j += 1) {
      const left = entries[i];
      const right = entries[j];
|
||||
if (left.topic !== right.topic) {
|
||||
continue;
|
||||
}
|
||||
if (maybeConflict(left.summary, right.summary)) {
|
||||
suggestions.push({
|
||||
type: 'conflict',
|
||||
topic: right.topic,
|
||||
summary: right.summary,
|
||||
relatedTopic: left.topic,
|
||||
relatedSummary: left.summary,
|
||||
rationale: 'These entries may encode conflicting guidance.',
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return suggestions.slice(0, 20);
|
||||
}
|
||||
|
||||
export async function reviewManagedAutoMemoryGovernance(
|
||||
projectRoot: string,
|
||||
options: {
|
||||
config?: Config;
|
||||
} = {},
|
||||
): Promise<AutoMemoryGovernanceReview> {
|
||||
const entries = await listGovernanceEntries(projectRoot);
|
||||
if (entries.length === 0) {
|
||||
return { suggestions: [], strategy: 'none' };
|
||||
}
|
||||
|
||||
if (options.config) {
|
||||
try {
|
||||
const entryById = new Map(entries.map((entry) => [entry.id, entry]));
|
||||
const response = await runSideQuery<GovernanceResponse>(options.config, {
|
||||
purpose: 'auto-memory-governance-review',
|
||||
contents: [
|
||||
{
|
||||
role: 'user',
|
||||
parts: [{ text: buildModelPrompt(entries) }],
|
||||
},
|
||||
] as Content[],
|
||||
schema: RESPONSE_SCHEMA,
|
||||
abortSignal: AbortSignal.timeout(8_000),
|
||||
config: {
|
||||
temperature: 0,
|
||||
},
|
||||
validate: (value) => {
|
||||
if (
|
||||
value.suggestions.some(
|
||||
(suggestion) => !entryById.has(suggestion.entryId),
|
||||
)
|
||||
) {
|
||||
return 'Governance reviewer returned an unknown entry id';
|
||||
}
|
||||
if (
|
||||
value.suggestions.some(
|
||||
(suggestion) =>
|
||||
suggestion.relatedEntryId &&
|
||||
!entryById.has(suggestion.relatedEntryId),
|
||||
)
|
||||
) {
|
||||
return 'Governance reviewer returned an unknown related entry id';
|
||||
}
|
||||
return null;
|
||||
},
|
||||
});
|
||||
|
||||
return {
|
||||
suggestions: response.suggestions.map((suggestion) => {
|
||||
const entry = entryById.get(suggestion.entryId)!;
|
||||
const related = suggestion.relatedEntryId
|
||||
? entryById.get(suggestion.relatedEntryId)
|
||||
: undefined;
|
||||
return {
|
||||
type: suggestion.type,
|
||||
topic: entry.topic,
|
||||
summary: entry.summary,
|
||||
rationale: suggestion.rationale,
|
||||
relatedTopic: related?.topic,
|
||||
relatedSummary: related?.summary,
|
||||
suggestedTargetTopic: suggestion.suggestedTargetTopic,
|
||||
} satisfies AutoMemoryGovernanceSuggestion;
|
||||
}),
|
||||
strategy: response.suggestions.length > 0 ? 'model' : 'none',
|
||||
};
|
||||
} catch {
|
||||
// Fall back to heuristics.
|
||||
}
|
||||
}
|
||||
|
||||
const suggestions = buildHeuristicSuggestions(entries);
|
||||
return {
|
||||
suggestions,
|
||||
strategy: suggestions.length > 0 ? 'heuristic' : 'none',
|
||||
};
|
||||
}
|
||||
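The pair-based conflict heuristic above can be exercised on its own; this is a minimal standalone sketch of the same pattern (the entry strings here are made up for illustration), with the alternation grouped as `(terse|concise)` so the word boundary applies to both alternatives:

```typescript
// Standalone sketch of the pair-based conflict heuristic.
// Each pair encodes two mutually contradictory phrasings.
const pairChecks: Array<[RegExp, RegExp]> = [
  [/\balways\b/i, /\bnever\b/i],
  [/\b(terse|concise)\b/i, /\bdetailed\b/i],
];

function maybeConflict(a: string, b: string): boolean {
  // A pair fires in either direction: the left pattern may match
  // either summary as long as the right pattern matches the other.
  return pairChecks.some(
    ([left, right]) =>
      (left.test(a) && right.test(b)) || (left.test(b) && right.test(a)),
  );
}

console.log(maybeConflict('Always run lint first', 'Never run lint first')); // true
console.log(maybeConflict('Prefer concise replies', 'Prefer detailed replies')); // true
console.log(maybeConflict('Use pnpm', 'Use workspaces')); // false
```

Because `test()` is symmetric across the two summaries, entry order never affects whether a conflict is reported, only which entry ends up in the `relatedSummary` slot.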
packages/core/src/memory/indexer.test.ts (new file, 83 lines)
@@ -0,0 +1,83 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import { getAutoMemoryFilePath, getAutoMemoryIndexPath } from './paths.js';
import {
  buildManagedAutoMemoryIndex,
  rebuildManagedAutoMemoryIndex,
} from './indexer.js';
import { ensureAutoMemoryScaffold } from './store.js';

describe('managed auto-memory indexer', () => {
  let tempDir: string;
  let projectRoot: string;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'auto-memory-indexer-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(
      projectRoot,
      new Date('2026-04-01T00:00:00.000Z'),
    );
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('formats a compact file-based MEMORY.md index view', () => {
    const content = buildManagedAutoMemoryIndex([
      {
        type: 'user',
        filePath: '/tmp/user/terse.md',
        relativePath: 'user/terse.md',
        filename: 'terse.md',
        title: 'User Memory',
        description: 'User profile',
        body: 'User prefers terse responses.',
        mtimeMs: 0,
      },
    ]);

    expect(content).toBe('- [User Memory](user/terse.md) — User profile');
  });

  it('rewrites MEMORY.md from topic file contents', async () => {
    const projectFile = getAutoMemoryFilePath(
      projectRoot,
      path.join('project', 'repo-workspaces.md'),
    );
    await fs.mkdir(path.dirname(projectFile), { recursive: true });
    await fs.writeFile(
      projectFile,
      [
        '---',
        'type: project',
        'name: Project Memory',
        'description: The repo uses pnpm workspaces.',
        '---',
        '',
        'The repo uses pnpm workspaces.',
      ].join('\n'),
      'utf-8',
    );

    await rebuildManagedAutoMemoryIndex(projectRoot);

    const index = await fs.readFile(
      getAutoMemoryIndexPath(projectRoot),
      'utf-8',
    );
    expect(index).toContain('[Project Memory](project/repo-workspaces.md)');
    expect(index).toContain('The repo uses pnpm workspaces.');
  });
});
packages/core/src/memory/indexer.ts (new file, 72 lines)
@@ -0,0 +1,72 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import { getAutoMemoryIndexPath, getAutoMemoryMetadataPath } from './paths.js';
import {
  scanAutoMemoryTopicDocuments,
  type ScannedAutoMemoryDocument,
} from './scan.js';
import type { AutoMemoryMetadata } from './types.js';

const MAX_INDEX_LINE_CHARS = 150;
const MAX_INDEX_LINES = 200;
const MAX_INDEX_BYTES = 25_000;

function truncateIndexLine(text: string): string {
  if (text.length <= MAX_INDEX_LINE_CHARS) {
    return text;
  }
  return `${text.slice(0, MAX_INDEX_LINE_CHARS - 1).trimEnd()}…`;
}

export function buildManagedAutoMemoryIndex(
  docs: ScannedAutoMemoryDocument[],
  _metadata?: Pick<
    AutoMemoryMetadata,
    'updatedAt' | 'lastDreamAt' | 'lastDreamSessionId'
  >,
): string {
  const raw = docs
    .map((doc) =>
      truncateIndexLine(
        `- [${doc.title}](${doc.relativePath}) — ${doc.description || doc.type}`,
      ),
    )
    .join('\n');

  const lines = raw.split('\n');
  const wasLineTruncated = lines.length > MAX_INDEX_LINES;
  let truncated = wasLineTruncated
    ? lines.slice(0, MAX_INDEX_LINES).join('\n')
    : raw;

  if (truncated.length > MAX_INDEX_BYTES) {
    const cutAt = truncated.lastIndexOf('\n', MAX_INDEX_BYTES);
    truncated = truncated.slice(0, cutAt > 0 ? cutAt : MAX_INDEX_BYTES);
  }

  if (!wasLineTruncated && truncated.length === raw.length) {
    return truncated;
  }

  return `${truncated}\n\n> WARNING: MEMORY.md is too large; only part of it was written. Keep index entries concise and move detail into topic files.`;
}

async function readAutoMemoryMetadata(
  projectRoot: string,
): Promise<AutoMemoryMetadata | undefined> {
  try {
    const content = await fs.readFile(
      getAutoMemoryMetadataPath(projectRoot),
      'utf-8',
    );
    return JSON.parse(content) as AutoMemoryMetadata;
  } catch {
    return undefined;
  }
}

export async function rebuildManagedAutoMemoryIndex(
  projectRoot: string,
): Promise<string> {
  const [docs, metadata] = await Promise.all([
    scanAutoMemoryTopicDocuments(projectRoot),
    readAutoMemoryMetadata(projectRoot),
  ]);
  const content = buildManagedAutoMemoryIndex(docs, metadata);
  await fs.writeFile(getAutoMemoryIndexPath(projectRoot), content, 'utf-8');
  return content;
}
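The truncation guards in `buildManagedAutoMemoryIndex` compose in a fixed order: per-line character cap, then line-count cap, then byte budget, and a warning footer is appended only when something was actually cut. A standalone sketch of that ordering, using toy limits rather than the production constants (150 chars / 200 lines / 25,000 bytes):

```typescript
// Toy limits so the truncation behavior is easy to observe.
const MAX_LINE_CHARS = 20;
const MAX_LINES = 3;

function truncateLine(text: string): string {
  // Per-line cap first: replace the overflow with an ellipsis.
  return text.length <= MAX_LINE_CHARS
    ? text
    : `${text.slice(0, MAX_LINE_CHARS - 1).trimEnd()}…`;
}

function buildIndex(titles: string[]): string {
  const lines = titles.map((t) => truncateLine(`- ${t}`));
  // Then the line-count cap across the whole index.
  const kept = lines.slice(0, MAX_LINES);
  const body = kept.join('\n');
  // Warn only when something was actually dropped.
  return lines.length > kept.length ? `${body}\n\n> WARNING: truncated` : body;
}

console.log(buildIndex(['a', 'b']));
console.log(buildIndex(['a', 'b', 'c', 'd']));
```

The same "flag only on real truncation" check is what lets the production indexer return the raw index untouched in the common case where everything fits.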
packages/core/src/memory/manager.test.ts (new file, 471 lines)
@@ -0,0 +1,471 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { globalMemoryManager, MemoryManager } from './manager.js';
import { ensureAutoMemoryScaffold } from './store.js';
import {
  getAutoMemoryMetadataPath,
  getAutoMemoryConsolidationLockPath,
  clearAutoMemoryRootCache,
} from './paths.js';
import type { Config } from '../config/config.js';

// ─── Mocks ────────────────────────────────────────────────────────────────────

vi.mock('./extract.js', () => ({
  runAutoMemoryExtract: vi.fn(),
}));

vi.mock('./dream.js', () => ({
  runManagedAutoMemoryDream: vi.fn(),
}));

import { runAutoMemoryExtract } from './extract.js';
import { runManagedAutoMemoryDream } from './dream.js';

// ─── Helpers ──────────────────────────────────────────────────────────────────

function makeMockConfig(overrides: Partial<Config> = {}): Config {
  return {
    getManagedAutoMemoryEnabled: vi.fn().mockReturnValue(true),
    getManagedAutoDreamEnabled: vi.fn().mockReturnValue(true),
    getSessionId: vi.fn().mockReturnValue('session-1'),
    getModel: vi.fn().mockReturnValue('test-model'),
    logEvent: vi.fn(),
    ...overrides,
  } as unknown as Config;
}

// ─── MemoryManager ────────────────────────────────────────────────────────────

describe('MemoryManager', () => {
  describe('globalMemoryManager', () => {
    it('is a MemoryManager instance', () => {
      expect(globalMemoryManager).toBeInstanceOf(MemoryManager);
    });
  });

  // ─── drain() ──────────────────────────────────────────────────────────────

  describe('drain()', () => {
    it('resolves true immediately when there are no in-flight tasks', async () => {
      const mgr = new MemoryManager();
      expect(await mgr.drain()).toBe(true);
    });

    it('resolves false when drain times out while a task is in-flight', async () => {
      const mgr = new MemoryManager();
      let resolveExtract!: (
        v: Awaited<ReturnType<typeof runAutoMemoryExtract>>,
      ) => void;

      vi.mocked(runAutoMemoryExtract).mockReturnValue(
        new Promise<Awaited<ReturnType<typeof runAutoMemoryExtract>>>(
          (resolve) => {
            resolveExtract = resolve;
          },
        ),
      );

      void mgr.scheduleExtract({
        projectRoot: '/project',
        sessionId: 'sess',
        history: [{ role: 'user', parts: [{ text: 'hi' }] }],
      });

      expect(await mgr.drain({ timeoutMs: 20 })).toBe(false);

      resolveExtract({
        touchedTopics: [],
        cursor: { sessionId: 'sess', updatedAt: new Date().toISOString() },
      });
      expect(await mgr.drain()).toBe(true);
    });
  });

  // ─── scheduleExtract() ────────────────────────────────────────────────────

  describe('scheduleExtract()', () => {
    let tempDir: string;
    let projectRoot: string;

    beforeEach(async () => {
      vi.resetAllMocks();
      process.env['QWEN_CODE_MEMORY_LOCAL'] = '1';
      clearAutoMemoryRootCache();
      tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'mgr-extract-'));
      projectRoot = path.join(tempDir, 'project');
      await fs.mkdir(projectRoot, { recursive: true });
      await ensureAutoMemoryScaffold(projectRoot);
    });

    afterEach(async () => {
      delete process.env['QWEN_CODE_MEMORY_LOCAL'];
      clearAutoMemoryRootCache();
      await fs.rm(tempDir, { recursive: true, force: true });
    });

    it('runs extract and records a completed task', async () => {
      vi.mocked(runAutoMemoryExtract).mockResolvedValue({
        touchedTopics: ['user'],
        cursor: { sessionId: 'sess-1', updatedAt: new Date().toISOString() },
      });

      const mgr = new MemoryManager();
      const result = await mgr.scheduleExtract({
        projectRoot,
        sessionId: 'sess-1',
        history: [{ role: 'user', parts: [{ text: 'hi' }] }],
      });

      expect(result.touchedTopics).toEqual(['user']);
      await mgr.drain();
      const tasks = mgr.listTasksByType('extract', projectRoot);
      expect(tasks.some((t) => t.status === 'completed')).toBe(true);
    });

    it('skips extraction when history writes to a memory file', async () => {
      const mgr = new MemoryManager();
      const result = await mgr.scheduleExtract({
        projectRoot,
        sessionId: 'sess-1',
        history: [
          {
            role: 'model',
            parts: [
              {
                functionCall: {
                  name: 'write_file',
                  args: {
                    file_path: `${projectRoot}/.qwen/memory/user/test.md`,
                  },
                },
              },
            ],
          },
        ],
      });

      expect(result.skippedReason).toBe('memory_tool');
      expect(vi.mocked(runAutoMemoryExtract)).not.toHaveBeenCalled();
    });

    it('queues a trailing extract when one is already running', async () => {
      let resolveFirst!: (
        v: Awaited<ReturnType<typeof runAutoMemoryExtract>>,
      ) => void;
      vi.mocked(runAutoMemoryExtract)
        .mockReturnValueOnce(
          new Promise<Awaited<ReturnType<typeof runAutoMemoryExtract>>>(
            (resolve) => {
              resolveFirst = resolve;
            },
          ),
        )
        .mockResolvedValueOnce({
          touchedTopics: ['reference'],
          cursor: { sessionId: 'sess-1', updatedAt: new Date().toISOString() },
        });

      const mgr = new MemoryManager();
      const firstPromise = mgr.scheduleExtract({
        projectRoot,
        sessionId: 'sess-1',
        history: [{ role: 'user', parts: [{ text: 'first' }] }],
      });

      // Second call while first is in-flight — should be queued
      const queued = await mgr.scheduleExtract({
        projectRoot,
        sessionId: 'sess-1',
        history: [{ role: 'user', parts: [{ text: 'second' }] }],
      });
      expect(queued.skippedReason).toBe('queued');

      // Resolve first so queued one can start
      resolveFirst({
        touchedTopics: ['user'],
        cursor: { sessionId: 'sess-1', updatedAt: new Date().toISOString() },
      });
      await firstPromise;
      await mgr.drain({ timeoutMs: 1_000 });

      // Both extractions should have run
      expect(vi.mocked(runAutoMemoryExtract)).toHaveBeenCalledTimes(2);
    });

    it('isolates state between manager instances', async () => {
      vi.mocked(runAutoMemoryExtract).mockResolvedValue({
        touchedTopics: ['user'],
        cursor: { sessionId: 'sess-1', updatedAt: new Date().toISOString() },
      });

      const mgrA = new MemoryManager();
      const mgrB = new MemoryManager();

      await mgrA.scheduleExtract({
        projectRoot,
        sessionId: 'sess-a',
        history: [{ role: 'user', parts: [{ text: 'hi' }] }],
      });
      await mgrA.drain();

      expect(mgrA.listTasksByType('extract', projectRoot)).toHaveLength(1);
      expect(mgrB.listTasksByType('extract', projectRoot)).toHaveLength(0);
    });
  });

  // ─── listTasksByType() ────────────────────────────────────────────────────

  describe('listTasksByType()', () => {
    it('returns empty array when no tasks of that type exist', () => {
      const mgr = new MemoryManager();
      expect(mgr.listTasksByType('extract')).toEqual([]);
      expect(mgr.listTasksByType('dream')).toEqual([]);
    });

    it('filters by projectRoot when provided', async () => {
      vi.mocked(runAutoMemoryExtract).mockResolvedValue({
        touchedTopics: [],
        cursor: { sessionId: 'sess', updatedAt: new Date().toISOString() },
      });

      const mgr = new MemoryManager();

      // Two extractions for different project roots
      await Promise.all([
        mgr.scheduleExtract({
          projectRoot: '/project-a',
          sessionId: 'sess',
          history: [{ role: 'user', parts: [{ text: 'hi' }] }],
        }),
        mgr.scheduleExtract({
          projectRoot: '/project-b',
          sessionId: 'sess',
          history: [{ role: 'user', parts: [{ text: 'hi' }] }],
        }),
      ]);
      await mgr.drain();

      expect(mgr.listTasksByType('extract', '/project-a')).toHaveLength(1);
      expect(mgr.listTasksByType('extract', '/project-b')).toHaveLength(1);
      expect(mgr.listTasksByType('extract')).toHaveLength(2);
    });
  });

  // ─── scheduleDream() ─────────────────────────────────────────────────────

  describe('scheduleDream()', () => {
    let tempDir: string;
    let projectRoot: string;

    beforeEach(async () => {
      vi.resetAllMocks();
      process.env['QWEN_CODE_MEMORY_LOCAL'] = '1';
      clearAutoMemoryRootCache();
      tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'mgr-dream-'));
      projectRoot = path.join(tempDir, 'project');
      await fs.mkdir(projectRoot, { recursive: true });
      await ensureAutoMemoryScaffold(
        projectRoot,
        new Date('2026-04-01T00:00:00.000Z'),
      );
      vi.mocked(runManagedAutoMemoryDream).mockResolvedValue({
        touchedTopics: [],
        dedupedEntries: 0,
        systemMessage: undefined,
      });
    });

    afterEach(async () => {
      delete process.env['QWEN_CODE_MEMORY_LOCAL'];
      clearAutoMemoryRootCache();
      await fs.rm(tempDir, { recursive: true, force: true });
    });

    it('skips when dream is disabled in config', async () => {
      const mgr = new MemoryManager(async () => [
        'sess-0',
        'sess-1',
        'sess-2',
        'sess-3',
        'sess-4',
      ]);
      const config = makeMockConfig({
        getManagedAutoDreamEnabled: vi.fn().mockReturnValue(false),
      });

      const result = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-5',
        config,
        now: new Date('2026-04-01T10:00:00.000Z'),
        minHoursBetweenDreams: 0,
        minSessionsBetweenDreams: 1,
      });

      expect(result).toEqual({ status: 'skipped', skippedReason: 'disabled' });
    });

    it('skips when called again in the same session', async () => {
      const scanner = vi
        .fn()
        .mockResolvedValue(['sess-0', 'sess-1', 'sess-2', 'sess-3', 'sess-4']);
      const mgr = new MemoryManager(scanner);

      const first = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-x',
        now: new Date('2026-04-01T10:00:00.000Z'),
        minHoursBetweenDreams: 0,
        minSessionsBetweenDreams: 1,
      });
      expect(first.status).toBe('scheduled');
      await first.promise;

      const second = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-x',
        now: new Date('2026-04-01T11:00:00.000Z'),
        minHoursBetweenDreams: 0,
        minSessionsBetweenDreams: 1,
      });
      expect(second).toEqual({
        status: 'skipped',
        skippedReason: 'same_session',
      });
    });

    it('skips when min_hours has not elapsed', async () => {
      const mgr = new MemoryManager(async () => [
        'sess-0',
        'sess-1',
        'sess-2',
        'sess-3',
        'sess-4',
      ]);

      // Inject lastDreamAt that is very recent
      const metaPath = getAutoMemoryMetadataPath(projectRoot);
      const metadata = JSON.parse(
        await fs.readFile(metaPath, 'utf-8'),
      ) as Record<string, unknown>;
      metadata['lastDreamAt'] = new Date(
        '2026-04-01T09:00:00.000Z',
      ).toISOString();
      await fs.writeFile(metaPath, JSON.stringify(metadata, null, 2), 'utf-8');

      const result = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-new',
        now: new Date('2026-04-01T10:00:00.000Z'),
        minHoursBetweenDreams: 24,
        minSessionsBetweenDreams: 1,
      });

      expect(result).toEqual({ status: 'skipped', skippedReason: 'min_hours' });
    });

    it('skips when session count is below threshold (via session scanner)', async () => {
      // Only 1 session — need 5
      const mgr = new MemoryManager(async () => ['sess-0']);

      const result = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-new',
        now: new Date('2026-04-01T10:00:00.000Z'),
        minHoursBetweenDreams: 0,
        minSessionsBetweenDreams: 5,
      });

      expect(result.status).toBe('skipped');
      expect(result.skippedReason).toBe('min_sessions');
    });

    it('schedules when all conditions are met, releases lock, and records metadata', async () => {
      vi.mocked(runManagedAutoMemoryDream).mockResolvedValue({
        touchedTopics: ['user'],
        dedupedEntries: 1,
        systemMessage: 'Dream complete.',
      });

      const mgr = new MemoryManager(async () => ['s0', 's1', 's2', 's3', 's4']);

      const result = await mgr.scheduleDream({
        projectRoot,
        sessionId: 'sess-x',
        now: new Date('2026-04-01T10:00:00.000Z'),
        minHoursBetweenDreams: 0,
        minSessionsBetweenDreams: 3,
      });

      expect(result.status).toBe('scheduled');
      const finalRecord = await result.promise;
      expect(finalRecord?.status).toBe('completed');
      expect(finalRecord?.metadata?.['touchedTopics']).toEqual(['user']);

      // Lock must be released
      await expect(
        fs.access(getAutoMemoryConsolidationLockPath(projectRoot)),
      ).rejects.toThrow();

      // Metadata must be updated
      const meta = JSON.parse(
        await fs.readFile(getAutoMemoryMetadataPath(projectRoot), 'utf-8'),
      ) as { lastDreamSessionId?: string; lastDreamAt?: string };
      expect(meta.lastDreamSessionId).toBe('sess-x');
      expect(meta.lastDreamAt).toBe('2026-04-01T10:00:00.000Z');
    });
  });

  // ─── resetExtractStateForTests() ─────────────────────────────────────────

  describe('resetExtractStateForTests()', () => {
    it('clears in-flight extract state so subsequent calls are not blocked', async () => {
      let resolveExtract!: (
        v: Awaited<ReturnType<typeof runAutoMemoryExtract>>,
      ) => void;
      vi.mocked(runAutoMemoryExtract)
        .mockReturnValueOnce(
          new Promise<Awaited<ReturnType<typeof runAutoMemoryExtract>>>(
            (resolve) => {
              resolveExtract = resolve;
            },
          ),
        )
        .mockResolvedValueOnce({
          touchedTopics: [],
          cursor: { sessionId: 'sess', updatedAt: new Date().toISOString() },
        });

      const mgr = new MemoryManager();
      void mgr.scheduleExtract({
        projectRoot: '/project',
        sessionId: 'sess',
        history: [{ role: 'user', parts: [{ text: 'hi' }] }],
      });

      mgr.resetExtractStateForTests();

      // After reset, a new schedule call should not return 'already_running'
      const result = await mgr.scheduleExtract({
        projectRoot: '/project',
        sessionId: 'sess-2',
        history: [{ role: 'user', parts: [{ text: 'hi' }] }],
      });
      expect(result.skippedReason).not.toBe('already_running');

      resolveExtract({
        touchedTopics: [],
        cursor: { sessionId: 'sess', updatedAt: new Date().toISOString() },
      });
    });
  });
});
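The "queues a trailing extract" test above exercises a single-flight scheduler: one extraction runs at a time, and at most one trailing request is coalesced and replayed when the in-flight one finishes. A minimal standalone sketch of that pattern (the `SingleFlight` class and its names are illustrative, not the MemoryManager implementation):

```typescript
// Single-flight with trailing coalescing: concurrent schedule() calls are
// collapsed to "one running now, at most one remembered for later".
class SingleFlight {
  private running: Promise<void> | null = null;
  private pending: (() => Promise<void>) | null = null;
  runs = 0;

  schedule(task: () => Promise<void>): 'started' | 'queued' {
    if (this.running) {
      this.pending = task; // coalesce: keep only the latest trailing task
      return 'queued';
    }
    this.running = task().finally(() => {
      this.running = null;
      const next = this.pending;
      this.pending = null;
      if (next) this.schedule(next); // replay the trailing task
    });
    return 'started';
  }

  // Wait until no task is in-flight (including a replayed trailing task).
  async drain(): Promise<void> {
    while (this.running) await this.running;
  }
}

const sf = new SingleFlight();
const work = async () => {
  sf.runs += 1;
  await new Promise((r) => setTimeout(r, 10));
};
console.log(sf.schedule(work)); // 'started'
console.log(sf.schedule(work)); // 'queued'
```

`drain()` re-checks `this.running` after each await because finishing the first task may synchronously start the replayed trailing task, which mirrors why the test drains before asserting both extractions ran.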
packages/core/src/memory/manager.ts (new file, 900 lines; truncated below)
@@ -0,0 +1,900 @@
/**
|
||||
* @license
|
||||
* Copyright 2026 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/**
|
||||
* MemoryManager — the single entry-point for all memory module operations.
|
||||
*
|
||||
* # Design
|
||||
* All background-task state (in-flight promises, per-project extraction queues,
|
||||
* per-project dream-scan timestamps, task records) is owned directly by
|
||||
* MemoryManager using plain Maps and sets. There are no separate
|
||||
* BackgroundTaskRegistry / BackgroundTaskDrainer / BackgroundTaskScheduler
|
||||
* helper classes; those abstractions are replaced by straightforward inline
|
||||
* state management inside this class.
|
||||
*
|
||||
* Public API — everything external callers need:
|
||||
* config.getMemoryManager().scheduleExtract(params)
|
||||
* config.getMemoryManager().scheduleDream(params)
|
||||
* config.getMemoryManager().recall(projectRoot, query, options)
|
||||
* config.getMemoryManager().forget(projectRoot, query, options)
|
||||
* config.getMemoryManager().getStatus(projectRoot)
|
||||
* config.getMemoryManager().drain(options?)
|
||||
* config.getMemoryManager().appendToUserMemory(userMemory, projectRoot)
|
||||
*
|
||||
* # Task records
|
||||
* Each scheduled operation is tracked as a lightweight MemoryTaskRecord.
|
||||
* These are queryable by type and projectRoot for status display.
|
||||
*
|
||||
* # Injection for tests
|
||||
* Production code uses `config.getMemoryManager()`. Tests that need isolation
|
||||
* construct `new MemoryManager()` directly.
|
||||
*/
|
||||
|
||||
import * as fs from 'node:fs/promises';
|
||||
import * as path from 'node:path';
|
||||
import { randomUUID } from 'node:crypto';
|
||||
import type { Content, Part } from '@google/genai';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { Storage } from '../config/storage.js';
|
||||
import { logMemoryExtract, MemoryExtractEvent } from '../telemetry/index.js';
|
||||
import { isAutoMemPath } from './paths.js';
|
||||
import {
|
||||
getAutoMemoryConsolidationLockPath,
|
||||
getAutoMemoryMetadataPath,
|
||||
} from './paths.js';
|
||||
import { ensureAutoMemoryScaffold } from './store.js';
|
||||
import { runAutoMemoryExtract } from './extract.js';
|
||||
import { runManagedAutoMemoryDream } from './dream.js';
|
||||
import {
|
||||
forgetManagedAutoMemoryEntries,
|
||||
forgetManagedAutoMemoryMatches,
|
||||
selectManagedAutoMemoryForgetCandidates,
|
||||
type AutoMemoryForgetMatch,
|
||||
type AutoMemoryForgetResult,
|
||||
type AutoMemoryForgetSelectionResult,
|
||||
} from './forget.js';
|
||||
import {
|
||||
resolveRelevantAutoMemoryPromptForQuery,
|
||||
type RelevantAutoMemoryPromptResult,
|
||||
type ResolveRelevantAutoMemoryPromptOptions,
|
||||
} from './recall.js';
|
||||
import { getManagedAutoMemoryStatus } from './status.js';
|
||||
import { appendManagedAutoMemoryToUserMemory } from './prompt.js';
|
||||
import { writeDreamManualRunToMetadata } from './dream.js';
|
||||
import { buildConsolidationTaskPrompt } from './dreamAgentPlanner.js';
|
||||
import type { AutoMemoryMetadata } from './types.js';
|
||||
|
||||
// ─── Re-export public types consumed by callers ───────────────────────────────
|
||||
|
||||
export type {
|
||||
AutoMemoryForgetResult,
|
||||
AutoMemoryForgetMatch,
|
||||
AutoMemoryForgetSelectionResult,
|
||||
};
|
||||
export type {
|
||||
RelevantAutoMemoryPromptResult,
|
||||
ResolveRelevantAutoMemoryPromptOptions,
|
||||
};
|
||||
export type { ManagedAutoMemoryStatus } from './status.js';
|
||||
|
||||
// ─── Task record ──────────────────────────────────────────────────────────────
|
||||
|
||||
export type MemoryTaskStatus =
|
||||
| 'pending'
|
||||
| 'running'
|
||||
| 'completed'
|
||||
| 'failed'
|
||||
| 'skipped';
|
||||
|
||||
export interface MemoryTaskRecord {
|
||||
id: string;
|
||||
taskType: 'extract' | 'dream';
|
||||
projectRoot: string;
|
||||
sessionId?: string;
|
||||
status: MemoryTaskStatus;
|
||||
createdAt: string;
|
||||
updatedAt: string;
|
||||
progressText?: string;
|
||||
error?: string;
|
||||
metadata?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
// ─── Extract params / result ──────────────────────────────────────────────────

export interface ScheduleExtractParams {
  projectRoot: string;
  sessionId: string;
  history: Content[];
  now?: Date;
  config?: Config;
}

// AutoMemoryExtractResult is re-used as the return type
export type { AutoMemoryExtractResult as ExtractResult } from './extract.js';

// ─── Dream params / result ────────────────────────────────────────────────────

export interface ScheduleDreamParams {
  projectRoot: string;
  sessionId: string;
  config?: Config;
  now?: Date;
  minHoursBetweenDreams?: number;
  minSessionsBetweenDreams?: number;
}

export interface DreamScheduleResult {
  status: 'scheduled' | 'skipped';
  taskId?: string;
  skippedReason?:
    | 'disabled'
    | 'same_session'
    | 'min_hours'
    | 'min_sessions'
    | 'scan_throttled'
    | 'locked'
    | 'running';
  promise?: Promise<MemoryTaskRecord>;
}

/** Function type for scanning session files by mtime. Injected for testing. */
export type SessionScannerFn = (
  projectRoot: string,
  sinceMs: number,
  excludeSessionId: string,
) => Promise<string[]>;

// ─── Drain options ────────────────────────────────────────────────────────────

export interface DrainOptions {
  timeoutMs?: number;
}
// ─── Constants ────────────────────────────────────────────────────────────────

export const EXTRACT_TASK_TYPE = 'managed-auto-memory-extraction' as const;
export const DREAM_TASK_TYPE = 'managed-auto-memory-dream' as const;

export const DEFAULT_AUTO_DREAM_MIN_HOURS = 24;
export const DEFAULT_AUTO_DREAM_MIN_SESSIONS = 5;

const DREAM_LOCK_STALE_MS = 60 * 60 * 1000; // 1 hour
const SESSION_SCAN_INTERVAL_MS = 10 * 60 * 1000; // 10 minutes

const WRITE_TOOL_NAMES = new Set([
  'write_file',
  'edit',
  'replace',
  'create_file',
]);

// ─── Internal helpers ─────────────────────────────────────────────────────────

function makeTaskRecord(
  type: 'extract' | 'dream',
  projectRoot: string,
  sessionId?: string,
): MemoryTaskRecord {
  const now = new Date().toISOString();
  return {
    id: randomUUID(),
    taskType: type,
    projectRoot,
    sessionId,
    status: 'pending',
    createdAt: now,
    updatedAt: now,
  };
}

function updateRecord(
  record: MemoryTaskRecord,
  patch: Partial<
    Pick<MemoryTaskRecord, 'status' | 'progressText' | 'error' | 'metadata'>
  >,
): void {
  if (patch.status !== undefined) record.status = patch.status;
  if (patch.progressText !== undefined)
    record.progressText = patch.progressText;
  if (patch.error !== undefined) record.error = patch.error;
  if (patch.metadata !== undefined) {
    record.metadata = { ...(record.metadata ?? {}), ...patch.metadata };
  }
  record.updatedAt = new Date().toISOString();
}

function partWritesToMemory(part: Part, projectRoot: string): boolean {
  const name = part.functionCall?.name;
  if (name && WRITE_TOOL_NAMES.has(name)) {
    const args = part.functionCall?.args as Record<string, unknown> | undefined;
    const filePath =
      args?.['file_path'] ?? args?.['path'] ?? args?.['target_file'];
    if (typeof filePath === 'string' && isAutoMemPath(filePath, projectRoot)) {
      return true;
    }
  }
  return false;
}

function historyWritesToMemory(
  history: Content[],
  projectRoot: string,
): boolean {
  return history.some((msg) =>
    (msg.parts ?? []).some((p) => partWritesToMemory(p, projectRoot)),
  );
}

function isProcessRunning(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

async function readDreamMetadata(
  projectRoot: string,
): Promise<AutoMemoryMetadata> {
  const content = await fs.readFile(
    getAutoMemoryMetadataPath(projectRoot),
    'utf-8',
  );
  return JSON.parse(content) as AutoMemoryMetadata;
}

async function writeDreamMetadata(
  projectRoot: string,
  metadata: AutoMemoryMetadata,
): Promise<void> {
  await fs.writeFile(
    getAutoMemoryMetadataPath(projectRoot),
    `${JSON.stringify(metadata, null, 2)}\n`,
    'utf-8',
  );
}

function hoursSince(lastDreamAt: string | undefined, now: Date): number | null {
  if (!lastDreamAt) return null;
  const timestamp = Date.parse(lastDreamAt);
  if (Number.isNaN(timestamp)) return null;
  return (now.getTime() - timestamp) / (1000 * 60 * 60);
}

const SESSION_FILE_PATTERN = /^[0-9a-fA-F-]{32,36}\.jsonl$/;

async function defaultSessionScanner(
  projectRoot: string,
  sinceMs: number,
  excludeSessionId: string,
): Promise<string[]> {
  const chatsDir = path.join(new Storage(projectRoot).getProjectDir(), 'chats');
  let names: string[];
  try {
    names = await fs.readdir(chatsDir);
  } catch {
    return [];
  }
  const results: string[] = [];
  await Promise.all(
    names.map(async (name) => {
      if (!SESSION_FILE_PATTERN.test(name)) return;
      const sessionId = name.slice(0, -'.jsonl'.length);
      if (sessionId === excludeSessionId) return;
      try {
        const stats = await fs.stat(path.join(chatsDir, name));
        if (stats.mtimeMs > sinceMs) results.push(sessionId);
      } catch {
        // skip unreadable files
      }
    }),
  );
  return results;
}

async function dreamLockExists(projectRoot: string): Promise<boolean> {
  const lockPath = getAutoMemoryConsolidationLockPath(projectRoot);
  let mtimeMs: number;
  let holderPid: number | undefined;
  try {
    const [stats, content] = await Promise.all([
      fs.stat(lockPath),
      fs.readFile(lockPath, 'utf-8').catch(() => ''),
    ]);
    mtimeMs = stats.mtimeMs;
    const parsed = parseInt(content.trim(), 10);
    holderPid = Number.isFinite(parsed) && parsed > 0 ? parsed : undefined;
  } catch {
    return false; // ENOENT — no lock
  }
  const ageMs = Date.now() - mtimeMs;
  if (ageMs <= DREAM_LOCK_STALE_MS) {
    if (holderPid !== undefined && isProcessRunning(holderPid)) return true;
    await fs.rm(lockPath, { force: true });
    return false;
  }
  await fs.rm(lockPath, { force: true });
  return false;
}

async function acquireDreamLock(projectRoot: string): Promise<void> {
  await fs.writeFile(
    getAutoMemoryConsolidationLockPath(projectRoot),
    String(process.pid),
    { flag: 'wx' },
  );
}

async function releaseDreamLock(projectRoot: string): Promise<void> {
  await fs.rm(getAutoMemoryConsolidationLockPath(projectRoot), {
    force: true,
  });
}
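The lock helpers above lean on `fs.writeFile` with the `'wx'` flag, which atomically creates the file or fails with `EEXIST` when another holder got there first. A minimal standalone sketch of that create-or-fail pattern (the lock path and function names here are illustrative, not part of this codebase):

```typescript
import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';

// 'wx' = create the file for writing, but fail with EEXIST if it exists.
async function tryAcquire(lockPath: string): Promise<boolean> {
  try {
    await fs.writeFile(lockPath, String(process.pid), { flag: 'wx' });
    return true;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'EEXIST') return false;
    throw err;
  }
}

async function release(lockPath: string): Promise<void> {
  // force: true makes removal idempotent (no error if already gone).
  await fs.rm(lockPath, { force: true });
}

async function demo(): Promise<[boolean, boolean]> {
  const lockPath = path.join(os.tmpdir(), `demo-${process.pid}.lock`);
  const first = await tryAcquire(lockPath); // true — we created the file
  const second = await tryAcquire(lockPath); // false — EEXIST, lock held
  await release(lockPath);
  return [first, second];
}
```

Writing the holder PID into the lock is what lets `dreamLockExists` later decide whether a still-fresh lock belongs to a live process or can be reclaimed.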
// ─── MemoryManager ────────────────────────────────────────────────────────────

/**
 * MemoryManager owns all runtime state for the memory subsystem and exposes a
 * clean, stable API. It is created once per Config instance and returned by
 * `config.getMemoryManager()`. Tests pass a fresh `new MemoryManager()`.
 */
export class MemoryManager {
  // ── Task records ────────────────────────────────────────────────────────────
  private readonly tasks = new Map<string, MemoryTaskRecord>();
  // ── Subscribers (useSyncExternalStore / custom listeners) ────────────────
  private readonly subscribers = new Set<() => void>();
  // ── In-flight promises (for drain) ──────────────────────────────────────────
  private readonly inFlight = new Map<string, Promise<unknown>>();

  // ── Extract scheduling state ─────────────────────────────────────────────────
  private readonly extractRunning = new Set<string>();
  private readonly extractCurrentTaskId = new Map<string, string>();
  private readonly extractQueued = new Map<
    string,
    { taskId: string; params: ScheduleExtractParams }
  >();

  // ── Dream scheduling state ───────────────────────────────────────────────────
  private readonly dreamInFlightByKey = new Map<string, string>();
  private readonly dreamLastSessionScanAt = new Map<string, number>();
  private readonly sessionScanner: SessionScannerFn;

  constructor(sessionScanner: SessionScannerFn = defaultSessionScanner) {
    this.sessionScanner = sessionScanner;
  }

  // ─── Subscribe ───────────────────────────────────────────────────────────────

  /**
   * Register a listener that is called whenever any task record changes.
   * Compatible with React’s `useSyncExternalStore`.
   * Returns an unsubscribe function.
   */
  subscribe(listener: () => void): () => void {
    this.subscribers.add(listener);
    return () => this.subscribers.delete(listener);
  }

  private notify(): void {
    for (const fn of this.subscribers) fn();
  }

  /** Update a record and notify subscribers. */
  private update(
    record: MemoryTaskRecord,
    patch: Partial<
      Pick<MemoryTaskRecord, 'status' | 'progressText' | 'error' | 'metadata'>
    >,
  ): void {
    updateRecord(record, patch);
    this.notify();
  }

  /**
   * Register a brand-new record in the task map and notify once.
   * Use this for records that start in 'pending' and need no immediate patch.
   */
  private store(record: MemoryTaskRecord): void {
    this.tasks.set(record.id, record);
    this.notify();
  }

  /**
   * Register a brand-new record AND apply an initial status patch in a single
   * notify. Avoids the double-render that separate store()+update() causes.
   */
  private storeWith(
    record: MemoryTaskRecord,
    patch: Partial<
      Pick<MemoryTaskRecord, 'status' | 'progressText' | 'error' | 'metadata'>
    >,
  ): void {
    updateRecord(record, patch);
    this.tasks.set(record.id, record);
    this.notify();
  }
  // ─── Task record query ────────────────────────────────────────────────────────

  /** Return task records filtered by type and optionally by projectRoot. */
  listTasksByType(
    taskType: 'extract' | 'dream',
    projectRoot?: string,
  ): MemoryTaskRecord[] {
    return [...this.tasks.values()]
      .filter(
        (t) =>
          t.taskType === taskType &&
          (!projectRoot || t.projectRoot === projectRoot),
      )
      .sort((a, b) => b.updatedAt.localeCompare(a.updatedAt));
  }

  // ─── Drain ────────────────────────────────────────────────────────────────────

  /** Wait for all in-flight tasks to settle, with optional timeout. */
  async drain(options: DrainOptions = {}): Promise<boolean> {
    const promises = [...this.inFlight.values()];
    if (promises.length === 0) return true;
    const waitAll = Promise.allSettled(promises).then(() => true);
    if (!options.timeoutMs || options.timeoutMs <= 0) return waitAll;
    return Promise.race<boolean>([
      waitAll,
      new Promise<boolean>((resolve) =>
        setTimeout(() => resolve(false), options.timeoutMs),
      ),
    ]);
  }

  private track<T>(taskId: string, promise: Promise<T>): Promise<T> {
    this.inFlight.set(taskId, promise);
    void promise.finally(() => this.inFlight.delete(taskId));
    return promise;
  }
  // ─── Extract ──────────────────────────────────────────────────────────────────

  /**
   * Schedule a managed auto-memory extraction for the given session turn.
   *
   * Returns immediately with a skipped result if:
   * - The last history turn wrote to a memory file (memory_tool)
   * - Extraction is already running for this project (queues trailing request)
   *
   * The trailing request starts automatically when the active extraction
   * completes.
   */
  async scheduleExtract(
    params: ScheduleExtractParams,
  ): Promise<
    ReturnType<typeof runAutoMemoryExtract> extends Promise<infer T> ? T : never
  > {
    if (historyWritesToMemory(params.history, params.projectRoot)) {
      const record = makeTaskRecord(
        'extract',
        params.projectRoot,
        params.sessionId,
      );
      this.storeWith(record, {
        status: 'skipped',
        progressText: 'Skipped: main agent wrote to memory files this turn.',
        metadata: {
          skippedReason: 'memory_tool',
          historyLength: params.history.length,
        },
      });
      return {
        touchedTopics: [],
        skippedReason: 'memory_tool' as const,
        cursor: {
          sessionId: params.sessionId,
          updatedAt: (params.now ?? new Date()).toISOString(),
        },
      } as never;
    }

    if (this.extractRunning.has(params.projectRoot)) {
      const currentTaskId = this.extractCurrentTaskId.get(params.projectRoot);
      if (!currentTaskId) {
        return {
          touchedTopics: [],
          skippedReason: 'already_running' as const,
          cursor: {
            sessionId: params.sessionId,
            updatedAt: (params.now ?? new Date()).toISOString(),
          },
        } as never;
      }

      const queued = this.extractQueued.get(params.projectRoot);
      if (queued) {
        // Supersede the existing queued request with newer params
        queued.params = params;
        const queuedRecord = this.tasks.get(queued.taskId);
        if (queuedRecord) {
          this.update(queuedRecord, {
            status: 'pending',
            progressText:
              'Updated trailing managed auto-memory extraction request while another extraction is running.',
            metadata: {
              queuedBehindTaskId: currentTaskId,
              historyLength: params.history.length,
              supersededAt: new Date().toISOString(),
            },
          });
        }
      } else {
        const record = makeTaskRecord(
          'extract',
          params.projectRoot,
          params.sessionId,
        );
        this.storeWith(record, {
          status: 'pending',
          progressText:
            'Queued trailing managed auto-memory extraction until the active extraction completes.',
          metadata: {
            trailing: true,
            queuedBehindTaskId: currentTaskId,
            historyLength: params.history.length,
          },
        });
        this.extractQueued.set(params.projectRoot, {
          taskId: record.id,
          params,
        });
      }

      return {
        touchedTopics: [],
        skippedReason: 'queued' as const,
        cursor: {
          sessionId: params.sessionId,
          updatedAt: (params.now ?? new Date()).toISOString(),
        },
      } as never;
    }

    const record = makeTaskRecord(
      'extract',
      params.projectRoot,
      params.sessionId,
    );
    this.store(record);
    return this.track(record.id, this.runExtract(record.id, params)) as never;
  }

  private async runExtract(
    taskId: string,
    params: ScheduleExtractParams,
  ): Promise<Awaited<ReturnType<typeof runAutoMemoryExtract>>> {
    const record = this.tasks.get(taskId)!;
    this.extractCurrentTaskId.set(params.projectRoot, taskId);
    this.extractRunning.add(params.projectRoot);
    this.update(record, {
      status: 'running',
      progressText: 'Running managed auto-memory extraction.',
      metadata: { historyLength: params.history.length },
    });

    const t0 = Date.now();
    try {
      const result = await runAutoMemoryExtract(params);
      const durationMs = Date.now() - t0;
      this.update(record, {
        status: result.skippedReason ? 'skipped' : 'completed',
        progressText:
          result.systemMessage ??
          (result.touchedTopics.length > 0
            ? `Managed auto-memory updated: ${result.touchedTopics.join(', ')}.`
            : 'Managed auto-memory extraction completed without durable changes.'),
        metadata: {
          touchedTopics: result.touchedTopics,
          processedOffset: result.cursor.processedOffset,
          skippedReason: result.skippedReason,
        },
      });
      if (params.config) {
        logMemoryExtract(
          params.config,
          new MemoryExtractEvent({
            trigger: 'auto',
            status: 'completed',
            patches_count: result.touchedTopics.length,
            touched_topics: result.touchedTopics,
            duration_ms: durationMs,
          }),
        );
      }
      return result;
    } catch (error) {
      const durationMs = Date.now() - t0;
      this.update(record, {
        status: 'failed',
        error: error instanceof Error ? error.message : String(error),
      });
      if (params.config) {
        logMemoryExtract(
          params.config,
          new MemoryExtractEvent({
            trigger: 'auto',
            status: 'failed',
            patches_count: 0,
            touched_topics: [],
            duration_ms: durationMs,
          }),
        );
      }
      throw error;
    } finally {
      this.extractCurrentTaskId.delete(params.projectRoot);
      this.extractRunning.delete(params.projectRoot);
      void this.startQueuedExtract(params.projectRoot);
    }
  }

  private async startQueuedExtract(projectRoot: string): Promise<void> {
    if (this.extractRunning.has(projectRoot)) return;
    const queued = this.extractQueued.get(projectRoot);
    if (!queued) return;
    this.extractQueued.delete(projectRoot);
    await this.track(
      queued.taskId,
      this.runExtract(queued.taskId, queued.params),
    );
  }
  // ─── Dream ────────────────────────────────────────────────────────────────────

  /**
   * Maybe schedule a managed auto-memory dream (consolidation).
   * Returns immediately if preconditions aren't met (time gate, session count,
   * lock, or duplicate).
   */
  async scheduleDream(
    params: ScheduleDreamParams,
  ): Promise<DreamScheduleResult> {
    if (params.config && !params.config.getManagedAutoDreamEnabled()) {
      return { status: 'skipped', skippedReason: 'disabled' };
    }

    const now = params.now ?? new Date();
    const minHours =
      params.minHoursBetweenDreams ?? DEFAULT_AUTO_DREAM_MIN_HOURS;
    const minSessions =
      params.minSessionsBetweenDreams ?? DEFAULT_AUTO_DREAM_MIN_SESSIONS;

    await ensureAutoMemoryScaffold(params.projectRoot, now);
    const metadata = await readDreamMetadata(params.projectRoot);

    if (metadata.lastDreamSessionId === params.sessionId) {
      return { status: 'skipped', skippedReason: 'same_session' };
    }

    const elapsedHours = hoursSince(metadata.lastDreamAt, now);
    if (elapsedHours !== null && elapsedHours < minHours) {
      return { status: 'skipped', skippedReason: 'min_hours' };
    }

    // Throttle the expensive session-count filesystem scan.
    // Return a distinct reason so callers can tell the difference between
    // "we know there aren't enough sessions" and "we haven't checked yet".
    const lastScan = this.dreamLastSessionScanAt.get(params.projectRoot) ?? 0;
    if (now.getTime() - lastScan < SESSION_SCAN_INTERVAL_MS) {
      return { status: 'skipped', skippedReason: 'scan_throttled' };
    }

    const lastDreamMs = metadata.lastDreamAt
      ? Date.parse(metadata.lastDreamAt)
      : 0;
    const sessionIds = await this.sessionScanner(
      params.projectRoot,
      lastDreamMs,
      params.sessionId,
    );
    // Record scan time only after we actually performed the filesystem scan.
    this.dreamLastSessionScanAt.set(params.projectRoot, now.getTime());
    if (sessionIds.length < minSessions) {
      return { status: 'skipped', skippedReason: 'min_sessions' };
    }

    if (await dreamLockExists(params.projectRoot)) {
      return { status: 'skipped', skippedReason: 'locked' };
    }

    // Deduplication — only one dream per projectRoot at a time
    const dedupeKey = `${DREAM_TASK_TYPE}:${params.projectRoot}`;
    const existingId = this.dreamInFlightByKey.get(dedupeKey);
    if (existingId) {
      return {
        status: 'skipped',
        skippedReason: 'running',
        taskId: existingId,
      };
    }

    const record = makeTaskRecord(
      'dream',
      params.projectRoot,
      params.sessionId,
    );
    this.storeWith(record, {
      status: 'running',
      metadata: { sessionCount: sessionIds.length },
    });
    this.dreamInFlightByKey.set(dedupeKey, record.id);

    const promise = this.track(
      record.id,
      this.runDream(record, dedupeKey, params, now),
    );

    return { status: 'scheduled', taskId: record.id, promise };
  }

  private async runDream(
    record: MemoryTaskRecord,
    dedupeKey: string,
    params: ScheduleDreamParams,
    now: Date,
  ): Promise<MemoryTaskRecord> {
    try {
      try {
        await acquireDreamLock(params.projectRoot);
      } catch (error) {
        if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
          this.update(record, {
            status: 'skipped',
            progressText:
              'Skipped managed auto-memory dream: consolidation lock already exists.',
            metadata: { skippedReason: 'locked' },
          });
          return record;
        }
        throw error;
      }

      try {
        const result = await runManagedAutoMemoryDream(
          params.projectRoot,
          now,
          params.config,
        );
        const nextMetadata = await readDreamMetadata(params.projectRoot);
        nextMetadata.lastDreamAt = now.toISOString();
        nextMetadata.lastDreamSessionId = params.sessionId;
        nextMetadata.updatedAt = now.toISOString();
        await writeDreamMetadata(params.projectRoot, nextMetadata);

        this.update(record, {
          status: 'completed',
          progressText:
            result.systemMessage ?? 'Managed auto-memory dream completed.',
          metadata: {
            touchedTopics: result.touchedTopics,
            dedupedEntries: result.dedupedEntries,
            lastDreamAt: now.toISOString(),
          },
        });
      } finally {
        await releaseDreamLock(params.projectRoot);
      }
    } catch (error) {
      this.update(record, {
        status: 'failed',
        error: error instanceof Error ? error.message : String(error),
      });
    } finally {
      this.dreamInFlightByKey.delete(dedupeKey);
    }
    return record;
  }
  // ─── Recall ───────────────────────────────────────────────────────────────────

  /** Select and format relevant memory for the given query. */
  recall(
    projectRoot: string,
    query: string,
    options: ResolveRelevantAutoMemoryPromptOptions = {},
  ): Promise<RelevantAutoMemoryPromptResult> {
    return resolveRelevantAutoMemoryPromptForQuery(projectRoot, query, options);
  }

  // ─── Forget ───────────────────────────────────────────────────────────────────

  /** Select candidate memory entries matching the given query (step 1 of forget). */
  selectForgetCandidates(
    projectRoot: string,
    query: string,
    options: { config?: Config; limit?: number } = {},
  ): Promise<AutoMemoryForgetSelectionResult> {
    return selectManagedAutoMemoryForgetCandidates(projectRoot, query, options);
  }

  /** Remove the selected memory entries (step 2 of forget). */
  forgetMatches(
    projectRoot: string,
    matches: AutoMemoryForgetMatch[],
    now?: Date,
  ): Promise<AutoMemoryForgetResult> {
    return forgetManagedAutoMemoryMatches(projectRoot, matches, now);
  }

  /** Convenience: select + remove in a single call. */
  forget(
    projectRoot: string,
    query: string,
    options: { config?: Config } = {},
    now?: Date,
  ): Promise<AutoMemoryForgetResult> {
    return forgetManagedAutoMemoryEntries(projectRoot, query, options, now);
  }

  // ─── Status ───────────────────────────────────────────────────────────────────

  /** Return a full status snapshot for the given project's memory. */
  getStatus(projectRoot: string) {
    return getManagedAutoMemoryStatus(projectRoot, this);
  }

  // ─── Prompt append ────────────────────────────────────────────────────────────

  /** Append the managed auto-memory section to a user memory string. */
  appendToUserMemory(
    userMemory: string,
    memoryDir: string,
    indexContent?: string | null,
  ): string {
    return appendManagedAutoMemoryToUserMemory(
      userMemory,
      memoryDir,
      indexContent,
    );
  }

  // ─── Dream utilities ──────────────────────────────────────────────────────────

  /**
   * Record that a manual dream run has completed for the given session.
   * Call this from the dreamCommand's onComplete callback.
   */
  writeDreamManualRun(
    projectRoot: string,
    sessionId: string,
    now?: Date,
  ): Promise<void> {
    return writeDreamManualRunToMetadata(projectRoot, sessionId, now);
  }

  /**
   * Build the consolidation task prompt used by the dream slash command.
   * Returns a prompt string describing what the agent should do.
   */
  buildConsolidationPrompt(memoryRoot: string, transcriptDir: string): string {
    return buildConsolidationTaskPrompt(memoryRoot, transcriptDir);
  }

  // ─── Test helpers ─────────────────────────────────────────────────────────────

  /** Reset all extract scheduling state. Call from afterEach in tests. */
  resetExtractStateForTests(): void {
    this.extractRunning.clear();
    this.extractCurrentTaskId.clear();
    this.extractQueued.clear();
  }

  /** Reset all dream scheduling state. */
  resetDreamStateForTests(): void {
    this.dreamInFlightByKey.clear();
    this.dreamLastSessionScanAt.clear();
  }
}

/**
 * Application-wide singleton. In a fully wired application Config creates its
 * own MemoryManager accessible via `config.getMemoryManager()`.
 */
export const globalMemoryManager = new MemoryManager();
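The `scheduleExtract` flow above implements a single-flight, latest-wins queue: at most one extraction runs per project root, a second request queues behind it, and any further requests supersede the queued one rather than piling up. A minimal standalone sketch of that pattern under those assumptions (class and names here are illustrative, not part of this codebase):

```typescript
// Single-flight with a latest-wins trailing slot: run() executes one job
// at a time; while it runs, only the most recent queued params survive.
class SingleFlight<P> {
  private running = false;
  private queued: P | undefined;

  constructor(private readonly run: (params: P) => Promise<void>) {}

  async schedule(params: P): Promise<void> {
    if (this.running) {
      this.queued = params; // supersede any older queued request
      return;
    }
    this.running = true;
    try {
      await this.run(params);
    } finally {
      this.running = false;
      const next = this.queued;
      this.queued = undefined;
      if (next !== undefined) void this.schedule(next); // start trailing job
    }
  }
}
```

If three requests arrive while the first is still running, only the first and the last actually execute; the middle one is silently replaced, which is exactly the behavior the "Supersede the existing queued request" branch in `scheduleExtract` documents.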
packages/core/src/memory/memoryAge.ts (new file, 51 lines)
@@ -0,0 +1,51 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

/**
 * Days elapsed since mtime. Floor-rounded — 0 for today, 1 for
 * yesterday, 2+ for older. Negative inputs (future mtime, clock skew)
 * clamp to 0.
 */
export function memoryAgeDays(mtimeMs: number): number {
  return Math.max(0, Math.floor((Date.now() - mtimeMs) / 86_400_000));
}

/**
 * Human-readable age string. Models are poor at date arithmetic —
 * a raw ISO timestamp doesn't trigger staleness reasoning the way
 * "47 days ago" does.
 */
export function memoryAge(mtimeMs: number): string {
  const d = memoryAgeDays(mtimeMs);
  if (d === 0) return 'today';
  if (d === 1) return 'yesterday';
  return `${d} days ago`;
}

/**
 * Plain-text staleness caveat for memories >1 day old. Returns ''
 * for fresh (today/yesterday) memories — warning there is noise.
 */
export function memoryFreshnessText(mtimeMs: number): string {
  const d = memoryAgeDays(mtimeMs);
  if (d <= 1) return '';
  return (
    `This memory is ${d} days old. ` +
    'Memories are point-in-time observations, not live state — ' +
    'claims about code behavior or file:line citations may be outdated. ' +
    'Verify against current code before asserting as fact.'
  );
}

/**
 * Per-memory staleness note wrapped in <system-reminder> tags.
 * Returns '' for memories ≤ 1 day old.
 */
export function memoryFreshnessNote(mtimeMs: number): string {
  const text = memoryFreshnessText(mtimeMs);
  if (!text) return '';
  return `<system-reminder>${text}</system-reminder>\n`;
}
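The age helpers in memoryAge.ts reduce to floor division over a day in milliseconds. A self-contained sketch with the two core functions re-declared locally so the snippet runs on its own:

```typescript
const DAY_MS = 86_400_000; // 24 * 60 * 60 * 1000

function memoryAgeDays(mtimeMs: number): number {
  // Negative deltas (future mtime, clock skew) clamp to 0.
  return Math.max(0, Math.floor((Date.now() - mtimeMs) / DAY_MS));
}

function memoryAge(mtimeMs: number): string {
  const d = memoryAgeDays(mtimeMs);
  if (d === 0) return 'today';
  if (d === 1) return 'yesterday';
  return `${d} days ago`;
}

console.log(memoryAge(Date.now())); // "today"
console.log(memoryAge(Date.now() - 47 * DAY_MS - 1000)); // "47 days ago"
console.log(memoryAge(Date.now() + DAY_MS)); // "today" — future mtime clamps
```

Note that anything between 24 and 48 hours old maps to a single day and so reads as "yesterday"; the freshness caveat only kicks in from 2 days onward (`d <= 1` returns '').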
packages/core/src/memory/memoryLifecycle.integration.test.ts (new file, 229 lines)
@@ -0,0 +1,229 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { runAutoMemoryExtractionByAgent } from './extractionAgentPlanner.js';
import { runManagedAutoMemoryDream } from './dream.js';
import { planManagedAutoMemoryDreamByAgent } from './dreamAgentPlanner.js';
import { MemoryManager } from './manager.js';
import { rebuildManagedAutoMemoryIndex } from './indexer.js';
import { getAutoMemoryFilePath, getAutoMemoryIndexPath } from './paths.js';
import { resolveRelevantAutoMemoryPromptForQuery } from './recall.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { ensureAutoMemoryScaffold } from './store.js';

vi.mock('./extractionAgentPlanner.js', () => ({
  runAutoMemoryExtractionByAgent: vi.fn(),
}));

vi.mock('./dreamAgentPlanner.js', () => ({
  planManagedAutoMemoryDreamByAgent: vi.fn(),
}));

describe('managed auto-memory lifecycle integration', () => {
  let tempDir: string;
  let projectRoot: string;
  let mockConfig: Config;
  let extractionCount: number;
  let mgr: MemoryManager;

  beforeEach(async () => {
    mgr = new MemoryManager();
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'memory-lifecycle-int-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(
      projectRoot,
      new Date('2026-04-01T00:00:00.000Z'),
    );
    mockConfig = {
      getSessionId: () => 'session-1',
      getModel: () => 'qwen3-coder-plus',
    } as Config;
    vi.clearAllMocks();
    extractionCount = 0;
    vi.mocked(runAutoMemoryExtractionByAgent).mockImplementation(
      async (_config, root: string) => {
        extractionCount += 1;
        const topic = extractionCount > 1 ? 'reference' : 'user';
        const relativePath =
          topic === 'reference'
            ? path.join('reference', 'latency-dashboard.md')
            : path.join('user', 'terse-responses.md');
        const filePath = getAutoMemoryFilePath(root, relativePath);
        await fs.mkdir(path.dirname(filePath), { recursive: true });
        const description =
          topic === 'reference'
            ? 'https://grafana.example/d/api-latency'
            : 'I prefer terse responses.';
        await fs.writeFile(
          filePath,
          [
            '---',
            `type: ${topic}`,
            `name: ${topic === 'reference' ? 'Latency Dashboard' : 'Terse Responses'}`,
            `description: ${description}`,
            '---',
            '',
            description,
            '',
          ].join('\n'),
          'utf-8',
        );

        return {
          touchedTopics: [topic],
          systemMessage: undefined,
        };
      },
    );
    vi.mocked(planManagedAutoMemoryDreamByAgent).mockResolvedValue({
      status: 'completed',
      finalText: 'Consolidated memory files and updated the index.',
      filesTouched: [
        getAutoMemoryFilePath(
          projectRoot,
          path.join('user', 'terse-responses.md'),
        ),
        getAutoMemoryFilePath(
          projectRoot,
          path.join('reference', 'latency-dashboard.md'),
        ),
      ],
    });
  });

  afterEach(async () => {
    mgr.resetExtractStateForTests();
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('supports a durable memory lifecycle across extraction, recall, and dream', async () => {
    const firstExtraction = mgr.scheduleExtract({
      projectRoot,
      sessionId: 'session-1',
      config: mockConfig,
      history: [
        { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
      ],
    });

    const queuedExtraction = await mgr.scheduleExtract({
      projectRoot,
      sessionId: 'session-1',
      config: mockConfig,
      history: [
        { role: 'user', parts: [{ text: 'I prefer terse responses.' }] },
        { role: 'model', parts: [{ text: 'Understood.' }] },
        {
          role: 'user',
          parts: [
            {
              text: 'The latency dashboard is https://grafana.example/d/api-latency',
            },
          ],
        },
      ],
    });

    expect(queuedExtraction.skippedReason).toBe('queued');

    const firstResult = await firstExtraction;
    expect(firstResult.touchedTopics).toEqual(['user']);

    const drained = await mgr.drain({
      timeoutMs: 1_000,
    });
    expect(drained).toBe(true);

    const projectPath = getAutoMemoryFilePath(
      projectRoot,
      path.join('project', 'latency-dashboard.md'),
    );
    await fs.mkdir(path.dirname(projectPath), { recursive: true });
    await fs.writeFile(
      projectPath,
      [
        '---',
        'type: project',
        'name: Latency Dashboard',
        'description: The latency dashboard is https://grafana.example/d/api-latency',
        '---',
        '',
        'The latency dashboard is https://grafana.example/d/api-latency',
        '',
        'Why: This is temporary for this task.',
      ].join('\n'),
      'utf-8',
    );
    await rebuildManagedAutoMemoryIndex(projectRoot);

    const duplicateUserPath = getAutoMemoryFilePath(
      projectRoot,
      path.join('user', 'terse-duplicate.md'),
|
||||
);
|
||||
await fs.mkdir(path.dirname(duplicateUserPath), { recursive: true });
|
||||
await fs.writeFile(
|
||||
duplicateUserPath,
|
||||
[
|
||||
'---',
|
||||
'type: user',
|
||||
'name: User Memory Duplicate',
|
||||
'description: Duplicate terse preference',
|
||||
'---',
|
||||
'',
|
||||
'I prefer terse responses.',
|
||||
'',
|
||||
'Why: User repeatedly asks for concise replies.',
|
||||
].join('\n'),
|
||||
'utf-8',
|
||||
);
|
||||
await rebuildManagedAutoMemoryIndex(projectRoot);
|
||||
|
||||
const dreamResult = await runManagedAutoMemoryDream(
|
||||
projectRoot,
|
||||
new Date('2026-04-01T03:00:00.000Z'),
|
||||
mockConfig,
|
||||
);
|
||||
expect(dreamResult.touchedTopics).toContain('user');
|
||||
expect(dreamResult.dedupedEntries).toBe(0);
|
||||
|
||||
const indexContent = await fs.readFile(
|
||||
getAutoMemoryIndexPath(projectRoot),
|
||||
'utf-8',
|
||||
);
|
||||
const docs = await scanAutoMemoryTopicDocuments(projectRoot);
|
||||
const userDoc = docs.find((doc) => doc.type === 'user');
|
||||
const projectDoc = docs.find((doc) => doc.type === 'project');
|
||||
const referenceDoc = docs.find((doc) => doc.type === 'reference');
|
||||
|
||||
expect(userDoc?.body).toContain('I prefer terse responses.');
|
||||
expect(userDoc?.body).toContain(
|
||||
'Why: User repeatedly asks for concise replies.',
|
||||
);
|
||||
expect(referenceDoc?.body).toContain('grafana.example/d/api-latency');
|
||||
expect(projectDoc?.body).toContain('This is temporary for this task.');
|
||||
expect(indexContent).toContain('user/');
|
||||
|
||||
const recall = await resolveRelevantAutoMemoryPromptForQuery(
|
||||
projectRoot,
|
||||
'Check the latency dashboard and use a terse answer.',
|
||||
);
|
||||
expect(recall.strategy).toBe('heuristic');
|
||||
expect(recall.prompt).toContain('## Relevant memory');
|
||||
expect(recall.prompt).toContain('user/');
|
||||
expect(recall.prompt).toContain('reference/');
|
||||
});
|
||||
});
|
||||
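The queuing behavior this test pins down (a second `scheduleExtract` during an in-flight run returns `skippedReason: 'queued'` instead of running concurrently) can be sketched as a single-flight scheduler. The `SingleFlight` class below is an illustrative sketch, not the real manager API:

```typescript
// Illustrative single-flight scheduler: one task runs at a time; a second
// schedule() call during an in-flight run is coalesced and reported 'queued'.
class SingleFlight {
  private running: Promise<void> | null = null;
  private pendingTask: (() => Promise<void>) | null = null;

  schedule(task: () => Promise<void>): { skippedReason?: 'queued' } {
    if (this.running) {
      this.pendingTask = task; // latest request wins, like a debounced queue
      return { skippedReason: 'queued' };
    }
    this.running = task().finally(() => {
      this.running = null;
      const next = this.pendingTask;
      this.pendingTask = null;
      if (next) this.schedule(next);
    });
    return {};
  }

  // Resolve once no run is active (including any queued follow-up run).
  async drain(): Promise<boolean> {
    while (this.running) await this.running;
    return true;
  }
}

const sf = new SingleFlight();
const sleepTask = () => new Promise<void>((resolve) => setTimeout(resolve, 5));
console.log(sf.schedule(sleepTask)); // {}
console.log(sf.schedule(sleepTask)); // { skippedReason: 'queued' }
```

The queued call is not dropped: the `finally` hook re-schedules it, which is why the test can `drain()` and then observe both extractions' side effects.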
packages/core/src/memory/paths.ts — new file, 199 lines
@@ -0,0 +1,199 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';
import { QWEN_DIR, sanitizeCwd } from '../utils/paths.js';
import type { AutoMemoryType } from './types.js';

export const AUTO_MEMORY_DIRNAME = 'memory';
export const AUTO_MEMORY_INDEX_FILENAME = 'MEMORY.md';
export const AUTO_MEMORY_METADATA_FILENAME = 'meta.json';
export const AUTO_MEMORY_EXTRACT_CURSOR_FILENAME = 'extract-cursor.json';
export const AUTO_MEMORY_CONSOLIDATION_LOCK_FILENAME = 'consolidation.lock';

function findGitRoot(startPath: string): string | null {
  let current = path.resolve(startPath);

  while (true) {
    const gitPath = path.join(current, '.git');
    if (fs.existsSync(gitPath)) {
      return current;
    }

    const parent = path.dirname(current);
    if (parent === current) {
      return null;
    }
    current = parent;
  }
}

function findCanonicalGitRoot(startPath: string): string | null {
  const gitRoot = findGitRoot(startPath);
  if (!gitRoot) {
    return null;
  }

  try {
    const gitContent = fs
      .readFileSync(path.join(gitRoot, '.git'), 'utf-8')
      .trim();
    if (!gitContent.startsWith('gitdir:')) {
      return gitRoot;
    }

    const worktreeGitDir = path.resolve(
      gitRoot,
      gitContent.slice('gitdir:'.length).trim(),
    );
    const commonDir = path.resolve(
      worktreeGitDir,
      fs.readFileSync(path.join(worktreeGitDir, 'commondir'), 'utf-8').trim(),
    );

    if (
      path.resolve(path.dirname(worktreeGitDir)) !==
      path.join(commonDir, 'worktrees')
    ) {
      return gitRoot;
    }

    const backlink = fs.realpathSync(
      fs.readFileSync(path.join(worktreeGitDir, 'gitdir'), 'utf-8').trim(),
    );
    if (backlink !== path.join(fs.realpathSync(gitRoot), '.git')) {
      return gitRoot;
    }

    if (path.basename(commonDir) !== '.git') {
      return commonDir.normalize('NFC');
    }
    return path.dirname(commonDir).normalize('NFC');
  } catch {
    return gitRoot;
  }
}

/**
 * Returns the base directory for all auto-memory storage.
 * Defaults to `~/.qwen`; overridable via QWEN_CODE_MEMORY_BASE_DIR for tests.
 */
export function getMemoryBaseDir(): string {
  if (process.env['QWEN_CODE_MEMORY_BASE_DIR']) {
    return process.env['QWEN_CODE_MEMORY_BASE_DIR'];
  }
  return path.join(os.homedir(), QWEN_DIR);
}

// Memoize by projectRoot — findCanonicalGitRoot() walks the file system (existsSync
// per directory) and is called from hot-path code such as schedulers and scanners.
const _autoMemoryRootCache = new Map<string, string>();

export function getAutoMemoryRoot(projectRoot: string): string {
  const cached = _autoMemoryRootCache.get(projectRoot);
  if (cached !== undefined) return cached;

  let result: string;
  if (process.env['QWEN_CODE_MEMORY_LOCAL'] === '1') {
    result = path.join(projectRoot, QWEN_DIR, AUTO_MEMORY_DIRNAME);
  } else {
    const canonicalRoot =
      findCanonicalGitRoot(projectRoot) ?? path.resolve(projectRoot);
    result = path.join(
      getMemoryBaseDir(),
      'projects',
      sanitizeCwd(canonicalRoot),
      AUTO_MEMORY_DIRNAME,
    );
  }
  _autoMemoryRootCache.set(projectRoot, result);
  return result;
}

/** Clear the memoization cache (for tests that change environment or git layout). */
export function clearAutoMemoryRootCache(): void {
  _autoMemoryRootCache.clear();
}

/**
 * Returns the project-level state directory that holds auxiliary files
 * (meta.json, extract-cursor.json, consolidation.lock) for the given project.
 * This is the parent of getAutoMemoryRoot(), so memory/ stays clean:
 * only MEMORY.md and topic files live inside it.
 */
export function getAutoMemoryProjectStateDir(projectRoot: string): string {
  return path.dirname(getAutoMemoryRoot(projectRoot));
}

/**
 * Returns true if the given absolute path is inside the auto-memory root for
 * the given project.
 *
 * Uses path.relative() instead of startsWith() to correctly handle
 * platform path-separator differences (e.g. Windows backslash vs forward
 * slash) and to be resilient against path-traversal edge cases.
 */
export function isAutoMemPath(
  absolutePath: string,
  projectRoot: string,
): boolean {
  const normalizedPath = path.normalize(absolutePath);
  const memRoot = path.normalize(getAutoMemoryRoot(projectRoot));
  const rel = path.relative(memRoot, normalizedPath);
  // rel === '' means absolutePath IS memRoot itself.
  // !rel.startsWith('..') && !path.isAbsolute(rel) means it's strictly inside.
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}

export function getAutoMemoryIndexPath(projectRoot: string): string {
  return path.join(getAutoMemoryRoot(projectRoot), AUTO_MEMORY_INDEX_FILENAME);
}

export function getAutoMemoryMetadataPath(projectRoot: string): string {
  return path.join(
    getAutoMemoryProjectStateDir(projectRoot),
    AUTO_MEMORY_METADATA_FILENAME,
  );
}

export function getAutoMemoryExtractCursorPath(projectRoot: string): string {
  return path.join(
    getAutoMemoryProjectStateDir(projectRoot),
    AUTO_MEMORY_EXTRACT_CURSOR_FILENAME,
  );
}

export function getAutoMemoryConsolidationLockPath(
  projectRoot: string,
): string {
  return path.join(
    getAutoMemoryProjectStateDir(projectRoot),
    AUTO_MEMORY_CONSOLIDATION_LOCK_FILENAME,
  );
}

export function getAutoMemoryTopicFilename(type: AutoMemoryType): string {
  return `${type}.md`;
}

export function getAutoMemoryTopicPath(
  projectRoot: string,
  type: AutoMemoryType,
): string {
  return path.join(
    getAutoMemoryRoot(projectRoot),
    getAutoMemoryTopicFilename(type),
  );
}

export function getAutoMemoryFilePath(
  projectRoot: string,
  relativePath: string,
): string {
  return path.join(getAutoMemoryRoot(projectRoot), relativePath);
}
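The `path.relative()` containment predicate used by `isAutoMemPath` can be exercised standalone. A minimal sketch — `isInside` is a hypothetical helper mirroring only the predicate, without the memory-root lookup:

```typescript
import * as path from 'node:path';

// Mirror of the containment check: path.relative() handles separator
// differences and '..' traversal, which a naive startsWith() check would not
// (e.g. '/a/bc' startsWith '/a/b' but is a sibling, not a child).
function isInside(root: string, candidate: string): boolean {
  const rel = path.relative(path.normalize(root), path.normalize(candidate));
  // '' → candidate IS the root itself; otherwise it must not escape upward
  // and must not resolve to an absolute path (different drive/root).
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}

console.log(isInside('/home/u/.qwen/memory', '/home/u/.qwen/memory/user/terse.md')); // true
console.log(isInside('/home/u/.qwen/memory', '/home/u/.qwen/memory-other/x.md')); // false
```

The sibling-prefix case is the one `startsWith()` gets wrong, which is presumably why the implementation documents the choice in its doc comment.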
packages/core/src/memory/prompt.test.ts — new file, 73 lines
@@ -0,0 +1,73 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, expect, it } from 'vitest';
import {
  appendManagedAutoMemoryToUserMemory,
  buildManagedAutoMemoryPrompt,
  MAX_MANAGED_AUTO_MEMORY_INDEX_LINES,
} from './prompt.js';

describe('managed auto-memory prompt helpers', () => {
  it('builds the memory mechanics prompt even when MEMORY.md is empty', () => {
    const prompt = buildManagedAutoMemoryPrompt('/tmp/project/.qwen/memory');

    expect(prompt).toContain('# auto memory');
    expect(prompt).toContain('persistent, file-based memory system');
    expect(prompt).toContain('/tmp/project/.qwen/memory');
    expect(prompt).toContain('Your MEMORY.md is currently empty');
  });

  it('embeds the current MEMORY.md index content', () => {
    const prompt = buildManagedAutoMemoryPrompt(
      '/tmp/project/.qwen/memory',
      '- [User Memory](user/terse.md) — User prefers terse responses.',
    );

    expect(prompt).toContain('## /tmp/project/.qwen/memory/MEMORY.md');
    expect(prompt).toContain('[User Memory](user/terse.md)');
    expect(prompt).toContain('User prefers terse responses.');
  });

  it('appends managed auto-memory after existing hierarchical memory', () => {
    const result = appendManagedAutoMemoryToUserMemory(
      '--- Context from: QWEN.md ---\nProject rules',
      '/tmp/project/.qwen/memory',
      '- [Project Memory](project/release-freeze.md) — Release freeze starts Friday.',
    );

    expect(result).toContain('Project rules');
    expect(result).toContain('\n\n---\n\n');
    expect(result).toContain('# auto memory');
  });

  it('returns only managed auto-memory when hierarchical memory is empty', () => {
    const result = appendManagedAutoMemoryToUserMemory(
      ' ',
      '/tmp/project/.qwen/memory',
      '- [Reference](reference/grafana.md) — Grafana dashboard link.',
    );

    expect(result).toContain('# auto memory');
    expect(result.startsWith('# auto memory')).toBe(true);
  });

  it('truncates oversized managed auto-memory index content', () => {
    const oversizedIndex = Array.from(
      { length: MAX_MANAGED_AUTO_MEMORY_INDEX_LINES + 50 },
      (_, index) => `- [Memory ${index}](memory-${index}.md) — hook ${index}`,
    ).join('\n');
    const result = buildManagedAutoMemoryPrompt(
      '/tmp/project/.qwen/memory',
      oversizedIndex,
    );

    expect(result).toContain(
      'WARNING: MEMORY.md is 250 lines (limit: 200). Only part of it was loaded.',
    );
    expect(result.split('\n').length).toBeLessThan(400);
  });
});
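The separator behavior these tests assert reduces to one small rule. As a sketch — `appendSketch` is a hypothetical stand-in for `appendManagedAutoMemoryToUserMemory` with the prompt-building elided:

```typescript
// Sketch of the append rule: the managed prompt goes after existing
// hierarchical memory behind a '---' separator, or stands alone when that
// memory is blank/whitespace-only.
function appendSketch(userMemory: string, managedPrompt: string): string {
  const trimmed = userMemory.trim();
  if (!trimmed) return managedPrompt;
  return `${trimmed}\n\n---\n\n${managedPrompt}`;
}

console.log(appendSketch('Project rules', '# auto memory').includes('\n\n---\n\n')); // true
console.log(appendSketch('   ', '# auto memory')); // # auto memory
```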
packages/core/src/memory/prompt.ts — new file, 236 lines
@@ -0,0 +1,236 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

const MAX_MANAGED_AUTO_MEMORY_INDEX_LINES = 200;
const MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES = 25_000;

const DIR_EXISTS_GUIDANCE =
  'This directory already exists — write to it directly with the write_file tool (do not run mkdir or check for its existence).';

export const MEMORY_FRONTMATTER_EXAMPLE: readonly string[] = [
  '```markdown',
  '---',
  'name: {{memory name}}',
  'description: {{one-line description — used to decide relevance in future conversations, so be specific}}',
  'type: {{user, feedback, project, reference}}',
  '---',
  '',
  '{{memory content — for feedback/project types, structure as: rule/fact, then **Why:** and **How to apply:** lines}}',
  '```',
];

export const TYPES_SECTION_INDIVIDUAL: readonly string[] = [
  '## Types of memory',
  '',
  'There are several discrete types of memory that you can store in your memory system:',
  '',
  '<types>',
  '<type>',
  '  <name>user</name>',
  "  <description>Contain information about the user's role, goals, responsibilities, and knowledge. Great user memories help you tailor your future behavior to the user's preferences and perspective. Your goal in reading and writing these memories is to build up an understanding of who the user is and how you can be most helpful to them specifically. For example, you should collaborate with a senior software engineer differently than a student who is coding for the very first time. Keep in mind, that the aim here is to be helpful to the user. Avoid writing memories about the user that could be viewed as a negative judgement or that are not relevant to the work you're trying to accomplish together.</description>",
  "  <when_to_save>When you learn any details about the user's role, preferences, responsibilities, or knowledge</when_to_save>",
  "  <how_to_use>When your work should be informed by the user's profile or perspective. For example, if the user is asking you to explain a part of the code, you should answer that question in a way that is tailored to the specific details that they will find most valuable or that helps them build their mental model in relation to domain knowledge they already have.</how_to_use>",
  '  <examples>',
  "    user: I'm a data scientist investigating what logging we have in place",
  '    assistant: [saves user memory: user is a data scientist, currently focused on observability/logging]',
  '',
  "    user: I've been writing Go for ten years but this is my first time touching the React side of this repo",
  "    assistant: [saves user memory: deep Go expertise, new to React and this project's frontend — frame frontend explanations in terms of backend analogues]",
  '  </examples>',
  '</type>',
  '<type>',
  '  <name>feedback</name>',
  '  <description>Guidance the user has given you about how to approach work — both what to avoid and what to keep doing. These are a very important type of memory to read and write as they allow you to remain coherent and responsive to the way you should approach work in the project. Record from failure AND success: if you only save corrections, you will avoid past mistakes but drift away from approaches the user has already validated, and may grow overly cautious.</description>',
  '  <when_to_save>Any time the user corrects your approach ("no not that", "don\'t", "stop doing X") OR confirms a non-obvious approach worked ("yes exactly", "perfect, keep doing that", accepting an unusual choice without pushback). Corrections are easy to notice; confirmations are quieter — watch for them. In both cases, save what is applicable to future conversations, especially if surprising or not obvious from the code. Include *why* so you can judge edge cases later.</when_to_save>',
  '  <how_to_use>Let these memories guide your behavior so that the user does not need to offer the same guidance twice.</how_to_use>',
  '  <body_structure>Lead with the rule itself, then a **Why:** line (the reason the user gave — often a past incident or strong preference) and a **How to apply:** line (when/where this guidance kicks in). Knowing *why* lets you judge edge cases instead of blindly following the rule.</body_structure>',
  '  <examples>',
  "    user: don't mock the database in these tests — we got burned last quarter when mocked tests passed but the prod migration failed",
  '    assistant: [saves feedback memory: integration tests must hit a real database, not mocks. Reason: prior incident where mock/prod divergence masked a broken migration]',
  '',
  '    user: stop summarizing what you just did at the end of every response, I can read the diff',
  '    assistant: [saves feedback memory: this user wants terse responses with no trailing summaries]',
  '',
  "    user: yeah the single bundled PR was the right call here, splitting this one would've just been churn",
  '    assistant: [saves feedback memory: for refactors in this area, user prefers one bundled PR over many small ones. Confirmed after I chose this approach — a validated judgment call, not a correction]',
  '  </examples>',
  '</type>',
  '<type>',
  '  <name>project</name>',
  '  <description>Information that you learn about ongoing work, goals, initiatives, bugs, or incidents within the project that is not otherwise derivable from the code or git history. Project memories help you understand the broader context and motivation behind the work the user is doing within this working directory.</description>',
  '  <when_to_save>When you learn who is doing what, why, or by when. These states change relatively quickly so try to keep your understanding of this up to date. Always convert relative dates in user messages to absolute dates when saving (e.g., "Thursday" → "2026-03-05"), so the memory remains interpretable after time passes.</when_to_save>',
  "  <how_to_use>Use these memories to more fully understand the details and nuance behind the user's request and make better informed suggestions.</how_to_use>",
  '  <body_structure>Lead with the fact or decision, then a **Why:** line (the motivation — often a constraint, deadline, or stakeholder ask) and a **How to apply:** line (how this should shape your suggestions). Project memories decay fast, so the why helps future-you judge whether the memory is still load-bearing.</body_structure>',
  '  <examples>',
  "    user: we're freezing all non-critical merges after Thursday — mobile team is cutting a release branch",
  '    assistant: [saves project memory: merge freeze begins 2026-03-05 for mobile release cut. Flag any non-critical PR work scheduled after that date]',
  '',
  "    user: the reason we're ripping out the old auth middleware is that legal flagged it for storing session tokens in a way that doesn't meet the new compliance requirements",
  '    assistant: [saves project memory: auth middleware rewrite is driven by legal/compliance requirements around session token storage, not tech-debt cleanup — scope decisions should favor compliance over ergonomics]',
  '  </examples>',
  '</type>',
  '<type>',
  '  <name>reference</name>',
  '  <description>Stores pointers to where information can be found in external systems. These memories allow you to remember where to look to find up-to-date information outside of the project directory.</description>',
  '  <when_to_save>When you learn about resources in external systems and their purpose. For example, that bugs are tracked in a specific project in Linear or that feedback can be found in a specific Slack channel.</when_to_save>',
  '  <how_to_use>When the user references an external system or information that may be in an external system.</how_to_use>',
  '  <examples>',
  '    user: check the Linear project "INGEST" if you want context on these tickets, that\'s where we track all pipeline bugs',
  '    assistant: [saves reference memory: pipeline bugs are tracked in Linear project "INGEST"]',
  '',
  "    user: the Grafana board at grafana.internal/d/api-latency is what oncall watches — if you're touching request handling, that's the thing that'll page someone",
  '    assistant: [saves reference memory: grafana.internal/d/api-latency is the oncall latency dashboard — check it when editing request-path code]',
  '  </examples>',
  '</type>',
  '</types>',
  '',
];

export const WHAT_NOT_TO_SAVE_SECTION: readonly string[] = [
  '## What NOT to save in memory',
  '',
  '- Code patterns, conventions, architecture, file paths, or project structure — these can be derived by reading the current project state.',
  '- Git history, recent changes, or who-changed-what — `git log` / `git blame` are authoritative.',
  '- Debugging solutions or fix recipes — the fix is in the code; the commit message has the context.',
  '- Anything already documented in QWEN.md or AGENTS.md files.',
  '- Ephemeral task details: in-progress work, temporary state, current conversation context.',
  '',
  'These exclusions apply even when the user explicitly asks you to save. If they ask you to save a PR list or activity summary, ask what was *surprising* or *non-obvious* about it — that is the part worth keeping.',
];

export const MEMORY_DRIFT_CAVEAT =
  '- Memory records can become stale over time. Use memory as context for what was true at a given point in time. Before answering the user or building assumptions based solely on information in memory records, verify that the memory is still correct and up-to-date by reading the current state of the files or resources. If a recalled memory conflicts with current information, trust what you observe now — and update or remove the stale memory rather than acting on it.';

export const WHEN_TO_ACCESS_SECTION: readonly string[] = [
  '## When to access memories',
  '- When memories seem relevant, or the user references prior-conversation work.',
  '- You MUST access memory when the user explicitly asks you to check, recall, or remember.',
  '- If the user says to *ignore* or *not use* memory: proceed as if MEMORY.md were empty. Do not apply remembered facts, cite, compare against, or mention memory content.',
  MEMORY_DRIFT_CAVEAT,
];

export const TRUSTING_RECALL_SECTION: readonly string[] = [
  '## Before recommending from memory',
  '',
  'A memory that names a specific function, file, or flag is a claim that it existed when the memory was written. It may have been renamed, removed, or never merged. Before recommending it:',
  '',
  '- If the memory names a file path: check the file exists.',
  '- If the memory names a function or flag: grep for it.',
  '- If the user is about to act on your recommendation (not just asking about history), verify first.',
  '',
  '"The memory says X exists" is not the same as "X exists now."',
  '',
  'A memory that summarizes repo state (activity logs, architecture snapshots) is frozen in time. If the user asks about *recent* or *current* state, prefer `git log` or reading the code over recalling the snapshot.',
];

function truncateManagedAutoMemoryIndex(indexContent: string): string {
  const trimmed = indexContent.trim();
  const lines = trimmed.split('\n');
  const lineCount = lines.length;
  const byteCount = trimmed.length;
  const wasLineTruncated = lineCount > MAX_MANAGED_AUTO_MEMORY_INDEX_LINES;
  const wasByteTruncated = byteCount > MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES;

  if (!wasLineTruncated && !wasByteTruncated) {
    return trimmed;
  }

  let truncated = wasLineTruncated
    ? lines.slice(0, MAX_MANAGED_AUTO_MEMORY_INDEX_LINES).join('\n')
    : trimmed;

  if (truncated.length > MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES) {
    const cutAt = truncated.lastIndexOf('\n', MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES);
    truncated = truncated.slice(
      0,
      cutAt > 0 ? cutAt : MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES,
    );
  }

  const reason =
    wasByteTruncated && !wasLineTruncated
      ? `${(byteCount / 1024).toFixed(1)} KB (limit: ${(MAX_MANAGED_AUTO_MEMORY_INDEX_BYTES / 1024).toFixed(1)} KB) — index entries are too long`
      : wasLineTruncated && !wasByteTruncated
        ? `${lineCount} lines (limit: ${MAX_MANAGED_AUTO_MEMORY_INDEX_LINES})`
        : `${lineCount} lines and ${(byteCount / 1024).toFixed(1)} KB`;

  return `${truncated}\n\n> WARNING: MEMORY.md is ${reason}. Only part of it was loaded. Keep index entries to one line under ~200 chars; move detail into topic files.`;
}

export function buildManagedAutoMemoryPrompt(
  memoryDir: string,
  indexContent?: string | null,
): string {
  const trimmed = indexContent?.trim();

  const lines = [
    '# auto memory',
    '',
    `You have a persistent, file-based memory system at \`${memoryDir}\`. ${DIR_EXISTS_GUIDANCE}`,
    '',
    "You should build up this memory system over time so that future conversations can have a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you.",
    '',
    'If the user explicitly asks you to remember something, save it immediately as whichever type fits best. If they ask you to forget something, find and remove the relevant entry.',
    '',
    ...TYPES_SECTION_INDIVIDUAL,
    ...WHAT_NOT_TO_SAVE_SECTION,
    '',
    '## How to save memories',
    '',
    'Saving a memory is a two-step process:',
    '',
    '**Step 1** — write the memory to its own file (e.g., `user_role.md`, `feedback_testing.md`) using this frontmatter format:',
    '',
    ...MEMORY_FRONTMATTER_EXAMPLE,
    '',
    `**Step 2** — add a pointer to that file in \`${memoryDir}/MEMORY.md\` (the full absolute path). This index file is an index, not a memory — each entry should be one line, under ~150 characters: \`- [Title](file.md) — one-line hook\`. It has no frontmatter. Never write memory content directly into \`${memoryDir}/MEMORY.md\`.`,
    '',
    `- \`${memoryDir}/MEMORY.md\` is always loaded into your conversation context — lines after ${MAX_MANAGED_AUTO_MEMORY_INDEX_LINES} will be truncated, so keep the index concise`,
    '- Keep the name, description, and type fields in memory files up-to-date with the content',
    '- Organize memory semantically by topic, not chronologically.',
    '- Update or remove memories that turn out to be wrong or outdated.',
    '- Do not write duplicate memories. First check if there is an existing memory you can update before writing a new one.',
    '',
    ...WHEN_TO_ACCESS_SECTION,
    '',
    ...TRUSTING_RECALL_SECTION,
    '',
    '## Memory and other forms of persistence',
    'Memory is one of several persistence mechanisms available to you as you assist the user in a given conversation. The distinction is often that memory can be recalled in future conversations and should not be used for persisting information that is only useful within the scope of the current conversation.',
    '- When to use or update a plan instead of memory: If you are about to start a non-trivial implementation task and would like to reach alignment with the user on your approach you should use a Plan rather than saving this information to memory. Similarly, if you already have a plan within the conversation and you have changed your approach persist that change by updating the plan rather than saving a memory.',
    '- When to use or update tasks instead of memory: When you need to break your work in current conversation into discrete steps or keep track of your progress use tasks instead of saving to memory. Tasks are great for persisting information about the work that needs to be done in the current conversation, but memory should be reserved for information that will be useful in future conversations.',
    '',
    `## ${memoryDir}/MEMORY.md`,
    '',
    trimmed
      ? truncateManagedAutoMemoryIndex(trimmed)
      : 'Your MEMORY.md is currently empty. When you save new memories, they will appear here.',
  ];

  return lines.join('\n');
}

export function appendManagedAutoMemoryToUserMemory(
  userMemory: string,
  memoryDir: string,
  indexContent?: string | null,
): string {
  const managedPrompt = buildManagedAutoMemoryPrompt(memoryDir, indexContent);
  const trimmedUserMemory = userMemory.trim();

  if (!managedPrompt) {
    return userMemory;
  }
  if (!trimmedUserMemory) {
    return managedPrompt;
  }

  return `${trimmedUserMemory}\n\n---\n\n${managedPrompt}`;
}

export { MAX_MANAGED_AUTO_MEMORY_INDEX_LINES };
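The "250 lines (limit: 200)" warning asserted in prompt.test.ts follows directly from the line-limit branch of `truncateManagedAutoMemoryIndex`. A standalone sketch of just that branch — `truncateByLines` is hypothetical, and byte-limit handling is omitted:

```typescript
// Line-limit branch only: cut the index at 200 lines and append the WARNING
// footer the test asserts on (the real function also enforces a byte budget).
const MAX_LINES = 200;

function truncateByLines(index: string): string {
  const lines = index.trim().split('\n');
  if (lines.length <= MAX_LINES) return index.trim();
  const kept = lines.slice(0, MAX_LINES).join('\n');
  return `${kept}\n\n> WARNING: MEMORY.md is ${lines.length} lines (limit: ${MAX_LINES}). Only part of it was loaded.`;
}

const oversized = Array.from({ length: 250 }, (_, i) => `- [M${i}](m-${i}.md)`).join('\n');
console.log(truncateByLines(oversized).includes('250 lines (limit: 200)')); // true
```

Note the warning reports the original line count, not the truncated one, which tells the model how much of its index was lost.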
packages/core/src/memory/recall.test.ts — new file, 132 lines
@@ -0,0 +1,132 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { beforeEach, describe, expect, it, vi } from 'vitest';
import {
  buildRelevantAutoMemoryPrompt,
  resolveRelevantAutoMemoryPromptForQuery,
  selectRelevantAutoMemoryDocuments,
} from './recall.js';
import type { ScannedAutoMemoryDocument } from './scan.js';
import type { Config } from '../config/config.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import { selectRelevantAutoMemoryDocumentsByModel } from './relevanceSelector.js';

vi.mock('./scan.js', async (importOriginal) => {
  const actual = await importOriginal<typeof import('./scan.js')>();
  return {
    ...actual,
    scanAutoMemoryTopicDocuments: vi.fn(),
  };
});

vi.mock('./relevanceSelector.js', () => ({
  selectRelevantAutoMemoryDocumentsByModel: vi.fn(),
}));

const docs: ScannedAutoMemoryDocument[] = [
  {
    type: 'reference',
    filePath: '/tmp/reference.md',
    relativePath: 'reference.md',
    filename: 'reference.md',
    title: 'Reference Memory',
    description: 'Dashboards and external docs',
    body: '# Reference Memory\n\n- Grafana dashboard: grafana.internal/d/api-latency',
    mtimeMs: 3,
  },
  {
    type: 'project',
    filePath: '/tmp/project.md',
    relativePath: 'project.md',
    filename: 'project.md',
    title: 'Project Memory',
    description: 'Project constraints and release context',
    body: '# Project Memory\n\n- Release freeze starts Friday.',
    mtimeMs: 2,
  },
  {
    type: 'user',
    filePath: '/tmp/user.md',
    relativePath: 'user.md',
    filename: 'user.md',
    title: 'User Memory',
    description: 'User preferences',
    body: '# User Memory\n\n- User prefers terse responses.',
    mtimeMs: 1,
  },
];

describe('auto-memory relevant recall', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('selects the most relevant documents for a query', () => {
    const selected = selectRelevantAutoMemoryDocuments(
      'check the dashboard reference for latency',
      docs,
    );

    expect(selected[0]?.type).toBe('reference');
    expect(selected.map((doc) => doc.type)).toContain('reference');
  });

  it('returns an empty list for an empty query', () => {
    expect(selectRelevantAutoMemoryDocuments(' ', docs)).toEqual([]);
  });

  it('formats selected documents as a prompt block', () => {
    const prompt = buildRelevantAutoMemoryPrompt([docs[0], docs[2]]);

    expect(prompt).toContain('## Relevant memory');
    expect(prompt).toContain('Reference Memory (reference.md)');
    expect(prompt).toContain('User Memory (user.md)');
  });

  it('uses model-driven selection when config is provided', async () => {
    vi.mocked(scanAutoMemoryTopicDocuments).mockResolvedValue(docs);
    vi.mocked(selectRelevantAutoMemoryDocumentsByModel).mockResolvedValue([
      docs[0],
    ]);

    const result = await resolveRelevantAutoMemoryPromptForQuery(
      '/tmp/project',
      'check the dashboard reference for latency',
      {
        config: {} as Config,
      },
    );

    expect(result.strategy).toBe('model');
    expect(result.selectedDocs).toEqual([docs[0]]);
    expect(result.prompt).toContain('Reference Memory (reference.md)');
  });

  it('falls back to heuristic selection when model-driven selection fails', async () => {
    vi.mocked(scanAutoMemoryTopicDocuments).mockResolvedValue(docs);
    vi.mocked(selectRelevantAutoMemoryDocumentsByModel).mockRejectedValue(
      new Error('selector failed'),
    );

    const result = await resolveRelevantAutoMemoryPromptForQuery(
      '/tmp/project',
      'check the dashboard reference for latency',
      {
        config: {} as Config,
        excludedFilePaths: ['/tmp/user.md'],
      },
    );

    expect(result.strategy).toBe('heuristic');
    expect(result.selectedDocs.map((doc) => doc.filePath)).toContain(
      '/tmp/reference.md',
    );
    expect(result.selectedDocs.map((doc) => doc.filePath)).not.toContain(
      '/tmp/user.md',
    );
  });
});
packages/core/src/memory/recall.ts (new file, 257 lines)
@@ -0,0 +1,257 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as path from 'node:path';
import type { Config } from '../config/config.js';
import { createDebugLogger } from '../utils/debugLogger.js';
import {
  scanAutoMemoryTopicDocuments,
  type ScannedAutoMemoryDocument,
} from './scan.js';
import { memoryAge, memoryFreshnessText } from './memoryAge.js';
import { selectRelevantAutoMemoryDocumentsByModel } from './relevanceSelector.js';
import { logMemoryRecall, MemoryRecallEvent } from '../telemetry/index.js';

const MAX_RELEVANT_DOCS = 5;
const MAX_DOC_BODY_CHARS = 1_200;
const debugLogger = createDebugLogger('AUTO_MEMORY_RECALL');

const TYPE_KEYWORDS: Record<string, string[]> = {
  user: ['user', 'preference', 'preferences', 'background', 'role', 'terse'],
  feedback: ['feedback', 'rule', 'rules', 'avoid', 'style', 'summary'],
  project: ['project', 'goal', 'goals', 'incident', 'deadline', 'release'],
  reference: ['reference', 'dashboard', 'ticket', 'docs', 'doc', 'link'],
};

function tokenize(text: string): string[] {
  return Array.from(
    new Set(
      text
        .toLowerCase()
        .split(/[^a-z0-9]+/)
        .map((token) => token.trim())
        .filter((token) => token.length >= 3),
    ),
  );
}

function normalizeBody(body: string): string {
  const trimmed = body.trim();
  if (trimmed === '_No entries yet._') {
    return '';
  }
  return trimmed;
}

function scoreDocument(
  queryTokens: string[],
  doc: ScannedAutoMemoryDocument,
): number {
  const normalizedBody = normalizeBody(doc.body);
  const haystack = [doc.type, doc.title, doc.description, normalizedBody]
    .join(' ')
    .toLowerCase();

  let score = 0;
  for (const token of queryTokens) {
    if (haystack.includes(token)) {
      score += 2;
    }
    if (TYPE_KEYWORDS[doc.type]?.includes(token)) {
      score += 1;
    }
  }

  if (normalizedBody.length > 0) {
    score += 1;
  }

  return score;
}

export function selectRelevantAutoMemoryDocuments(
  query: string,
  docs: ScannedAutoMemoryDocument[],
  limit = MAX_RELEVANT_DOCS,
): ScannedAutoMemoryDocument[] {
  const queryTokens = tokenize(query);
  if (queryTokens.length === 0) {
    return [];
  }

  return docs
    .map((doc) => ({ doc, score: scoreDocument(queryTokens, doc) }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score || a.doc.type.localeCompare(b.doc.type))
    .slice(0, limit)
    .map(({ doc }) => doc);
}

function truncateBody(body: string): string {
  const normalized = normalizeBody(body);
  if (normalized.length <= MAX_DOC_BODY_CHARS) {
    return normalized;
  }
  return `${normalized.slice(0, MAX_DOC_BODY_CHARS).trimEnd()}\n\n> NOTE: Relevant memory truncated for prompt budget.`;
}

export function buildRelevantAutoMemoryPrompt(
  docs: ScannedAutoMemoryDocument[],
): string {
  if (docs.length === 0) {
    return '';
  }

  return [
    '## Relevant memory',
    '',
    'Use the following memories only when they are directly relevant to the current request. Verify file/function claims before relying on them.',
    '',
    ...docs.flatMap((doc) => {
      const body = truncateBody(doc.body);
      const staleness = memoryFreshnessText(doc.mtimeMs);
      return [
        `### ${doc.title} (${doc.relativePath || path.basename(doc.filePath)})`,
        `Saved ${memoryAge(doc.mtimeMs)}.`,
        doc.description,
        '',
        body || '_No detailed entries yet._',
        ...(staleness ? ['', `> NOTE: ${staleness}`] : []),
        '',
      ];
    }),
  ].join('\n');
}

export interface ResolveRelevantAutoMemoryPromptOptions {
  config?: Config;
  excludedFilePaths?: Iterable<string>;
  limit?: number;
  recentTools?: readonly string[];
}

export interface RelevantAutoMemoryPromptResult {
  prompt: string;
  selectedDocs: ScannedAutoMemoryDocument[];
  strategy: 'none' | 'heuristic' | 'model';
}

function filterExcludedAutoMemoryDocuments(
  docs: ScannedAutoMemoryDocument[],
  excludedFilePaths?: Iterable<string>,
): ScannedAutoMemoryDocument[] {
  if (!excludedFilePaths) {
    return docs;
  }

  const excluded = new Set(excludedFilePaths);
  if (excluded.size === 0) {
    return docs;
  }

  return docs.filter((doc) => !excluded.has(doc.filePath));
}

export async function resolveRelevantAutoMemoryPromptForQuery(
  projectRoot: string,
  query: string,
  options: ResolveRelevantAutoMemoryPromptOptions = {},
): Promise<RelevantAutoMemoryPromptResult> {
  const t0 = Date.now();
  const docs = filterExcludedAutoMemoryDocuments(
    await scanAutoMemoryTopicDocuments(projectRoot),
    options.excludedFilePaths,
  );
  const limit = options.limit ?? MAX_RELEVANT_DOCS;

  if (query.trim().length === 0 || docs.length === 0 || limit <= 0) {
    if (options.config) {
      logMemoryRecall(
        options.config,
        new MemoryRecallEvent({
          query_length: query.length,
          docs_scanned: docs.length,
          docs_selected: 0,
          strategy: 'none',
          duration_ms: Date.now() - t0,
        }),
      );
    }
    return {
      prompt: '',
      selectedDocs: [],
      strategy: 'none',
    };
  }

  if (options.config) {
    try {
      const selectedDocs = await selectRelevantAutoMemoryDocumentsByModel(
        options.config,
        query,
        docs,
        limit,
        options.recentTools ?? [],
      );
      const strategy: RelevantAutoMemoryPromptResult['strategy'] =
        selectedDocs.length > 0 ? 'model' : 'none';
      logMemoryRecall(
        options.config,
        new MemoryRecallEvent({
          query_length: query.length,
          docs_scanned: docs.length,
          docs_selected: selectedDocs.length,
          strategy,
          duration_ms: Date.now() - t0,
        }),
      );
      return {
        prompt: buildRelevantAutoMemoryPrompt(selectedDocs),
        selectedDocs,
        strategy,
      };
    } catch (error) {
      debugLogger.warn(
        'Model-driven auto-memory recall failed; falling back to heuristic selection.',
        error,
      );
    }
  }

  const selectedDocs = selectRelevantAutoMemoryDocuments(query, docs, limit);
  const strategy: RelevantAutoMemoryPromptResult['strategy'] =
    selectedDocs.length > 0 ? 'heuristic' : 'none';
  if (options.config) {
    logMemoryRecall(
      options.config,
      new MemoryRecallEvent({
        query_length: query.length,
        docs_scanned: docs.length,
        docs_selected: selectedDocs.length,
        strategy,
        duration_ms: Date.now() - t0,
      }),
    );
  }
  return {
    prompt: buildRelevantAutoMemoryPrompt(selectedDocs),
    selectedDocs,
    strategy,
  };
}

export async function buildRelevantAutoMemoryPromptForQuery(
  projectRoot: string,
  query: string,
  options: ResolveRelevantAutoMemoryPromptOptions = {},
): Promise<string> {
  const result = await resolveRelevantAutoMemoryPromptForQuery(
    projectRoot,
    query,
    options,
  );
  return result.prompt;
}
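For intuition, the heuristic fallback above can be exercised in isolation. This is a standalone sketch of the tokenize/score pair from the diff, not an import of the package; the `TYPE_KEYWORDS` bonus and the `_No entries yet._` body normalization are omitted for brevity, and the sample docs are invented for the demo:

```typescript
// Standalone sketch of the heuristic recall scoring used as the model fallback.
interface MemoryDoc {
  type: string;
  title: string;
  description: string;
  body: string;
}

// Lowercase, split on non-alphanumerics, keep unique tokens of length >= 3.
function tokenize(text: string): string[] {
  return Array.from(
    new Set(
      text
        .toLowerCase()
        .split(/[^a-z0-9]+/)
        .filter((token) => token.length >= 3),
    ),
  );
}

// +2 per query token found anywhere in the doc, +1 for a non-empty body.
function scoreDocument(queryTokens: string[], doc: MemoryDoc): number {
  const haystack = [doc.type, doc.title, doc.description, doc.body]
    .join(' ')
    .toLowerCase();
  let score = 0;
  for (const token of queryTokens) {
    if (haystack.includes(token)) {
      score += 2;
    }
  }
  if (doc.body.trim().length > 0) {
    score += 1;
  }
  return score;
}

const reference: MemoryDoc = {
  type: 'reference',
  title: 'Reference Memory',
  description: 'Dashboards and external docs',
  body: '- Grafana dashboard: grafana.internal/d/api-latency',
};
const user: MemoryDoc = {
  type: 'user',
  title: 'User Memory',
  description: 'User preferences',
  body: '- User prefers terse responses.',
};

const tokens = tokenize('check the dashboard reference for latency');
console.log(scoreDocument(tokens, reference) > scoreDocument(tokens, user)); // true
```

A dashboard-related query matches several tokens in the reference doc's type, title, and body, so it outranks the user-preferences doc; a whitespace-only query tokenizes to nothing and earns only the body bonus.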
packages/core/src/memory/relevanceSelector.test.ts (new file, 99 lines)
@@ -0,0 +1,99 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { beforeEach, describe, expect, it, vi } from 'vitest';
import type { Config } from '../config/config.js';
import { runSideQuery } from '../utils/sideQuery.js';
import type { ScannedAutoMemoryDocument } from './scan.js';
import { selectRelevantAutoMemoryDocumentsByModel } from './relevanceSelector.js';

vi.mock('../utils/sideQuery.js', () => ({
  runSideQuery: vi.fn(),
}));

const docs: ScannedAutoMemoryDocument[] = [
  {
    type: 'user',
    filePath: '/tmp/user.md',
    relativePath: 'user.md',
    filename: 'user.md',
    title: 'User Memory',
    description: 'User preferences',
    body: '- User prefers terse responses.',
    mtimeMs: 1,
  },
  {
    type: 'reference',
    filePath: '/tmp/reference.md',
    relativePath: 'reference.md',
    filename: 'reference.md',
    title: 'Reference Memory',
    description: 'Operational references',
    body: '- Grafana dashboard: https://grafana.internal/d/api-latency',
    mtimeMs: 2,
  },
];

describe('selectRelevantAutoMemoryDocumentsByModel', () => {
  const mockConfig = {} as Config;

  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('returns documents chosen by the side-query selector', async () => {
    vi.mocked(runSideQuery).mockResolvedValue({
      selected_memories: ['reference.md'],
    });

    const selected = await selectRelevantAutoMemoryDocumentsByModel(
      mockConfig,
      'check the latency dashboard',
      docs,
      2,
    );

    expect(selected).toEqual([docs[1]]);
    expect(runSideQuery).toHaveBeenCalledWith(
      mockConfig,
      expect.objectContaining({
        purpose: 'auto-memory-recall',
        config: { temperature: 0 },
      }),
    );
  });

  it('returns an empty list for empty query or no docs', async () => {
    await expect(
      selectRelevantAutoMemoryDocumentsByModel(mockConfig, ' ', docs, 2),
    ).resolves.toEqual([]);
    await expect(
      selectRelevantAutoMemoryDocumentsByModel(mockConfig, 'hello', [], 2),
    ).resolves.toEqual([]);
    expect(runSideQuery).not.toHaveBeenCalled();
  });

  it('throws when selector returns unknown relative paths', async () => {
    vi.mocked(runSideQuery).mockImplementation(async (_config, options) => {
      const error = options.validate?.({
        selected_memories: ['unknown.md'],
      });
      if (error) {
        throw new Error(error);
      }
      return { selected_memories: [] };
    });

    await expect(
      selectRelevantAutoMemoryDocumentsByModel(
        mockConfig,
        'check memory',
        docs,
        2,
      ),
    ).rejects.toThrow('Recall selector returned unknown relative path');
  });
});
packages/core/src/memory/relevanceSelector.ts (new file, 120 lines)
@@ -0,0 +1,120 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import { runSideQuery } from '../utils/sideQuery.js';
import type { ScannedAutoMemoryDocument } from './scan.js';

/**
 * System prompt for the selector side-query.
 */
const SELECT_MEMORIES_SYSTEM_PROMPT = `You are selecting memories that will be useful to an AI coding assistant as it processes a user's query. You will be given the user's query and a list of available memory files with their filenames and descriptions.

Return a list of filenames for the memories that will clearly be useful to the assistant as it processes the user's query (up to 5). Only include memories that you are certain will be helpful based on their name and description.
- If you are unsure whether a memory will be useful in processing the user's query, do not include it in your list. Be selective and discerning.
- If there are no memories in the list that would clearly be useful, feel free to return an empty list.
- If a list of recently-used tools is provided, do not select memories that are usage reference or API documentation for those tools (the assistant is already exercising them). DO still select memories containing warnings, gotchas, or known issues about those tools — active use is exactly when those matter.`;

const RESPONSE_SCHEMA: Record<string, unknown> = {
  type: 'object',
  properties: {
    selected_memories: {
      type: 'array',
      items: { type: 'string' },
    },
  },
  required: ['selected_memories'],
  additionalProperties: false,
};

interface RecallSelectorResponse {
  selected_memories: string[];
}

/**
 * Format memory headers as a text manifest: one line per file with
 * [type] relativePath (ISO-timestamp): description.
 * The selector sees only the header (type, path, age, description), not the body content.
 */
function formatMemoryManifest(docs: ScannedAutoMemoryDocument[]): string {
  return docs
    .map((doc) => {
      const tag = `[${doc.type}] `;
      const ts = new Date(doc.mtimeMs).toISOString();
      return doc.description
        ? `- ${tag}${doc.relativePath} (${ts}): ${doc.description}`
        : `- ${tag}${doc.relativePath} (${ts})`;
    })
    .join('\n');
}

export async function selectRelevantAutoMemoryDocumentsByModel(
  config: Config,
  query: string,
  docs: ScannedAutoMemoryDocument[],
  limit: number,
  recentTools: readonly string[] = [],
): Promise<ScannedAutoMemoryDocument[]> {
  if (docs.length === 0 || limit <= 0 || query.trim().length === 0) {
    return [];
  }

  const manifest = formatMemoryManifest(docs);

  // When the assistant is actively using a tool, surfacing that tool's
  // reference docs is noise. Pass the tool list so the selector can skip them.
  const toolsSection =
    recentTools.length > 0
      ? `\n\nRecently used tools: ${recentTools.join(', ')}`
      : '';

  const contents: Content[] = [
    {
      role: 'user',
      parts: [
        {
          text: `Query: ${query.trim()}\n\nAvailable memories:\n${manifest}${toolsSection}`,
        },
      ],
    },
  ];

  const validRelativePaths = new Set(docs.map((doc) => doc.relativePath));
  const byRelativePath = new Map(docs.map((doc) => [doc.relativePath, doc]));

  const response = await runSideQuery<RecallSelectorResponse>(config, {
    purpose: 'auto-memory-recall',
    contents,
    schema: RESPONSE_SCHEMA,
    abortSignal: AbortSignal.timeout(5_000),
    systemInstruction: SELECT_MEMORIES_SYSTEM_PROMPT,
    config: {
      temperature: 0,
    },
    validate: (value) => {
      if (!Array.isArray(value.selected_memories)) {
        return 'Recall selector must return selected_memories array';
      }
      if (value.selected_memories.length > limit) {
        return `Recall selector returned too many documents: ${value.selected_memories.length}`;
      }
      if (
        value.selected_memories.some(
          (relativePath) => !validRelativePaths.has(relativePath),
        )
      ) {
        return 'Recall selector returned unknown relative path';
      }
      return null;
    },
  });

  return response.selected_memories
    .map((relativePath) => byRelativePath.get(relativePath))
    .filter((doc): doc is ScannedAutoMemoryDocument => doc !== undefined)
    .slice(0, limit);
}
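To make the manifest shape concrete, here is a standalone reproduction of the `formatMemoryManifest` helper above (the interface is trimmed to just the fields the formatter reads; the sample entries are invented):

```typescript
// Standalone sketch of the manifest the recall selector receives: one
// "- [type] relativePath (ISO-timestamp): description" line per memory file.
interface ManifestDoc {
  type: string;
  relativePath: string;
  mtimeMs: number;
  description: string;
}

function formatMemoryManifest(docs: ManifestDoc[]): string {
  return docs
    .map((doc) => {
      const tag = `[${doc.type}] `;
      const ts = new Date(doc.mtimeMs).toISOString();
      // The trailing ": description" is dropped when the description is empty.
      return doc.description
        ? `- ${tag}${doc.relativePath} (${ts}): ${doc.description}`
        : `- ${tag}${doc.relativePath} (${ts})`;
    })
    .join('\n');
}

const manifest = formatMemoryManifest([
  {
    type: 'reference',
    relativePath: 'reference.md',
    mtimeMs: 0,
    description: 'Operational references',
  },
  { type: 'user', relativePath: 'user.md', mtimeMs: 0, description: '' },
]);
console.log(manifest);
// - [reference] reference.md (1970-01-01T00:00:00.000Z): Operational references
// - [user] user.md (1970-01-01T00:00:00.000Z)
```

Passing only headers (never bodies) keeps the selector prompt small and prevents memory contents from leaking into the side-query before they are judged relevant.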
packages/core/src/memory/scan.test.ts (new file, 93 lines)
@@ -0,0 +1,93 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import { getAutoMemoryFilePath } from './paths.js';
import {
  parseAutoMemoryTopicDocument,
  scanAutoMemoryTopicDocuments,
} from './scan.js';
import { ensureAutoMemoryScaffold } from './store.js';

describe('auto-memory topic scanning', () => {
  let tempDir: string;
  let projectRoot: string;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'auto-memory-scan-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
    await ensureAutoMemoryScaffold(projectRoot);
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('parses a managed auto-memory topic document', () => {
    const parsed = parseAutoMemoryTopicDocument(
      '/tmp/project.md',
      [
        '---',
        'type: project',
        'title: Project Memory',
        'description: Project context',
        '---',
        '',
        '# Project Memory',
        '',
        '- Release freeze starts Friday.',
      ].join('\n'),
    );

    expect(parsed).toEqual({
      type: 'project',
      filePath: '/tmp/project.md',
      relativePath: 'project.md',
      filename: 'project.md',
      title: 'Project Memory',
      description: 'Project context',
      body: '# Project Memory\n\n- Release freeze starts Friday.',
      mtimeMs: 0,
    });
  });

  it('scans existing auto-memory files from nested topic folders', async () => {
    const referencePath = getAutoMemoryFilePath(
      projectRoot,
      path.join('reference', 'grafana.md'),
    );
    await fs.mkdir(path.dirname(referencePath), { recursive: true });
    await fs.writeFile(
      referencePath,
      [
        '---',
        'type: reference',
        'name: Reference Memory',
        'description: External references',
        '---',
        '',
        'Oncall dashboard: grafana.internal/d/api-latency',
      ].join('\n'),
      'utf-8',
    );

    const docs = await scanAutoMemoryTopicDocuments(projectRoot);
    const referenceDoc = docs.find((doc) => doc.type === 'reference');

    expect(referenceDoc?.description).toBe('External references');
    expect(referenceDoc?.relativePath).toBe('reference/grafana.md');
    expect(referenceDoc?.body).toContain('grafana.internal/d/api-latency');
  });
});
packages/core/src/memory/scan.ts (new file, 118 lines)
@@ -0,0 +1,118 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as path from 'node:path';
import { AUTO_MEMORY_TYPES, type AutoMemoryType } from './types.js';
import { AUTO_MEMORY_INDEX_FILENAME, getAutoMemoryRoot } from './paths.js';

const MAX_SCANNED_MEMORY_FILES = 200;

export interface ScannedAutoMemoryDocument {
  type: AutoMemoryType;
  filePath: string;
  relativePath: string;
  filename: string;
  title: string;
  description: string;
  body: string;
  mtimeMs: number;
}

function parseFrontmatterValue(
  frontmatter: string,
  key: string,
): string | undefined {
  const match = frontmatter.match(new RegExp(`^${key}:\\s*(.+)$`, 'm'));
  return match?.[1]?.trim();
}

export function parseAutoMemoryTopicDocument(
  filePath: string,
  content: string,
  mtimeMs = 0,
  relativePath = path.basename(filePath),
): ScannedAutoMemoryDocument | null {
  const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!frontmatterMatch) {
    return null;
  }

  const [, frontmatter, bodyContent] = frontmatterMatch;
  const rawType = parseFrontmatterValue(frontmatter, 'type');
  if (!rawType || !AUTO_MEMORY_TYPES.includes(rawType as AutoMemoryType)) {
    return null;
  }

  return {
    type: rawType as AutoMemoryType,
    filePath,
    relativePath,
    filename: path.basename(filePath),
    title:
      parseFrontmatterValue(frontmatter, 'name') ??
      parseFrontmatterValue(frontmatter, 'title') ??
      rawType,
    description: parseFrontmatterValue(frontmatter, 'description') ?? '',
    body: bodyContent.trim(),
    mtimeMs,
  };
}

async function listMarkdownFiles(root: string): Promise<string[]> {
  try {
    const entries = await fs.readdir(root, { recursive: true });
    return (
      entries
        .filter(
          (entry): entry is string =>
            typeof entry === 'string' &&
            entry.endsWith('.md') &&
            path.basename(entry) !== AUTO_MEMORY_INDEX_FILENAME,
        )
        // Normalize to forward slashes so relative paths are valid URL segments
        // on all platforms (Windows readdir returns backslash-separated paths).
        .map((entry) => entry.replaceAll('\\', '/'))
        .sort()
    );
  } catch (error) {
    const nodeError = error as NodeJS.ErrnoException;
    if (nodeError.code === 'ENOENT') {
      return [];
    }
    throw error;
  }
}

export async function scanAutoMemoryTopicDocuments(
  projectRoot: string,
): Promise<ScannedAutoMemoryDocument[]> {
  const root = getAutoMemoryRoot(projectRoot);
  const relativePaths = await listMarkdownFiles(root);
  const docs = await Promise.all(
    relativePaths.map(async (relativePath) => {
      const filePath = path.join(root, relativePath);
      const [content, stats] = await Promise.all([
        fs.readFile(filePath, 'utf-8'),
        fs.stat(filePath),
      ]);
      return parseAutoMemoryTopicDocument(
        filePath,
        content,
        stats.mtimeMs,
        relativePath,
      );
    }),
  );

  return docs
    .filter((doc): doc is ScannedAutoMemoryDocument => doc !== null)
    .filter((doc) => AUTO_MEMORY_TYPES.includes(doc.type))
    .sort(
      (a, b) => b.mtimeMs - a.mtimeMs || a.filename.localeCompare(b.filename),
    )
    .slice(0, MAX_SCANNED_MEMORY_FILES);
}
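The frontmatter split at the heart of `parseAutoMemoryTopicDocument` is a single regex: a leading `---` block of `key: value` lines, then the markdown body. This standalone sketch isolates that split so the regex behavior is easy to see (helper names here are illustrative, not the package's exports):

```typescript
// Standalone sketch of the "---"-delimited frontmatter split used by the scanner.
function splitFrontmatter(
  content: string,
): { frontmatter: string; body: string } | null {
  // Lazy group 1 captures everything between the opening and closing "---";
  // group 2 captures the remaining body.
  const match = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) {
    return null;
  }
  return { frontmatter: match[1], body: match[2].trim() };
}

// Pull a single "key: value" line out of the frontmatter block.
function frontmatterValue(
  frontmatter: string,
  key: string,
): string | undefined {
  const match = frontmatter.match(new RegExp(`^${key}:\\s*(.+)$`, 'm'));
  return match?.[1]?.trim();
}

const parsed = splitFrontmatter(
  [
    '---',
    'type: project',
    'description: Project context',
    '---',
    '',
    '# Project Memory',
  ].join('\n'),
);
console.log(parsed && frontmatterValue(parsed.frontmatter, 'type')); // "project"
```

A document with no frontmatter block fails the regex and yields `null`, which is why the scanner silently skips plain markdown files that lack a recognized `type`.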
packages/core/src/memory/status.ts (new file, 98 lines)
@@ -0,0 +1,98 @@
/**
 * @license
 * Copyright 2026 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import { type MemoryManager, type MemoryTaskRecord } from './manager.js';
import {
  getAutoMemoryExtractCursorPath,
  getAutoMemoryIndexPath,
  getAutoMemoryMetadataPath,
  getAutoMemoryRoot,
} from './paths.js';
import { scanAutoMemoryTopicDocuments } from './scan.js';
import type {
  AutoMemoryExtractCursor,
  AutoMemoryMetadata,
  AutoMemoryType,
} from './types.js';
import { AUTO_MEMORY_TYPES } from './types.js';

export interface ManagedAutoMemoryTopicStatus {
  topic: AutoMemoryType;
  entryCount: number;
  filePaths: string[];
}

export interface ManagedAutoMemoryStatus {
  root: string;
  indexPath: string;
  indexContent: string;
  cursor?: AutoMemoryExtractCursor;
  metadata?: AutoMemoryMetadata;
  extractionRunning: boolean;
  topics: ManagedAutoMemoryTopicStatus[];
  extractionTasks: MemoryTaskRecord[];
  dreamTasks: MemoryTaskRecord[];
}

async function readJsonFile<T>(filePath: string): Promise<T | undefined> {
  try {
    const content = await fs.readFile(filePath, 'utf-8');
    return JSON.parse(content) as T;
  } catch {
    return undefined;
  }
}

export async function getManagedAutoMemoryStatus(
  projectRoot: string,
  manager: MemoryManager,
): Promise<ManagedAutoMemoryStatus> {
  const root = getAutoMemoryRoot(projectRoot);
  const indexPath = getAutoMemoryIndexPath(projectRoot);

  const [indexContent, cursor, metadata, docs] = await Promise.all([
    fs.readFile(indexPath, 'utf-8').catch(() => ''),
    readJsonFile<AutoMemoryExtractCursor>(
      getAutoMemoryExtractCursorPath(projectRoot),
    ),
    readJsonFile<AutoMemoryMetadata>(getAutoMemoryMetadataPath(projectRoot)),
    scanAutoMemoryTopicDocuments(projectRoot),
  ]);

  // Aggregate per-entry files by topic.
  const byTopic = new Map<AutoMemoryType, string[]>();
  for (const doc of docs) {
    const list = byTopic.get(doc.type) ?? [];
    list.push(doc.filePath);
    byTopic.set(doc.type, list);
  }

  const topics = AUTO_MEMORY_TYPES.map((topic) => ({
    topic,
    entryCount: byTopic.get(topic)?.length ?? 0,
    filePaths: byTopic.get(topic) ?? [],
  }));

  const extractTaskType = 'extract' as const;
  const dreamTaskType = 'dream' as const;

  return {
    root,
    indexPath,
    indexContent,
    cursor,
    metadata,
    extractionRunning: manager
      .listTasksByType(extractTaskType, projectRoot)
      .some((t) => t.status === 'running'),
    topics,
    extractionTasks: manager
      .listTasksByType(extractTaskType, projectRoot)
      .slice(0, 8),
    dreamTasks: manager.listTasksByType(dreamTaskType, projectRoot).slice(0, 5),
  };
}
packages/core/src/memory/store.test.ts (new file, 109 lines)
@@ -0,0 +1,109 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import {
  getAutoMemoryConsolidationLockPath,
  getAutoMemoryExtractCursorPath,
  getAutoMemoryIndexPath,
  getAutoMemoryMetadataPath,
  getAutoMemoryRoot,
  getAutoMemoryTopicPath,
} from './paths.js';
import {
  createDefaultAutoMemoryIndex,
  createDefaultAutoMemoryMetadata,
  ensureAutoMemoryScaffold,
  readAutoMemoryIndex,
} from './store.js';

describe('auto-memory storage scaffold', () => {
  let tempDir: string;
  let projectRoot: string;

  beforeEach(async () => {
    tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'auto-memory-'));
    projectRoot = path.join(tempDir, 'project');
    await fs.mkdir(projectRoot, { recursive: true });
  });

  afterEach(async () => {
    await fs.rm(tempDir, {
      recursive: true,
      force: true,
      maxRetries: 3,
      retryDelay: 10,
    });
  });

  it('builds stable auto-memory paths under project .qwen directory', () => {
    expect(getAutoMemoryRoot(projectRoot)).toBe(
      path.join(projectRoot, '.qwen', 'memory'),
    );
    expect(getAutoMemoryIndexPath(projectRoot)).toBe(
      path.join(projectRoot, '.qwen', 'memory', 'MEMORY.md'),
    );
    expect(getAutoMemoryMetadataPath(projectRoot)).toBe(
      path.join(projectRoot, '.qwen', 'meta.json'),
    );
    expect(getAutoMemoryExtractCursorPath(projectRoot)).toBe(
      path.join(projectRoot, '.qwen', 'extract-cursor.json'),
    );
    expect(getAutoMemoryConsolidationLockPath(projectRoot)).toBe(
      path.join(projectRoot, '.qwen', 'consolidation.lock'),
    );
    expect(getAutoMemoryTopicPath(projectRoot, 'feedback')).toBe(
      path.join(projectRoot, '.qwen', 'memory', 'feedback.md'),
    );
  });

  it('creates a complete managed auto-memory scaffold', async () => {
    const now = new Date('2026-04-01T08:00:00.000Z');
    await ensureAutoMemoryScaffold(projectRoot, now);

    const index = await fs.readFile(
      getAutoMemoryIndexPath(projectRoot),
      'utf-8',
    );
    expect(index).toBe(createDefaultAutoMemoryIndex());

    const metadata = JSON.parse(
      await fs.readFile(getAutoMemoryMetadataPath(projectRoot), 'utf-8'),
    );
    expect(metadata).toEqual(createDefaultAutoMemoryMetadata(now));

    const cursor = JSON.parse(
      await fs.readFile(getAutoMemoryExtractCursorPath(projectRoot), 'utf-8'),
    );
    expect(cursor).toEqual({
      updatedAt: '2026-04-01T08:00:00.000Z',
    });

    await expect(
      fs.stat(getAutoMemoryRoot(projectRoot)),
    ).resolves.toBeDefined();
    await expect(
      fs.access(getAutoMemoryTopicPath(projectRoot, 'user')),
    ).rejects.toThrow();
  });

  it('is idempotent and preserves existing index content', async () => {
|
||||
await ensureAutoMemoryScaffold(projectRoot, new Date('2026-04-01T08:00:00.000Z'));
|
||||
const customIndex = '# Existing Index\n\n- keep me\n';
|
||||
await fs.writeFile(getAutoMemoryIndexPath(projectRoot), customIndex, 'utf-8');
|
||||
|
||||
await ensureAutoMemoryScaffold(projectRoot, new Date('2026-04-02T08:00:00.000Z'));
|
||||
|
||||
await expect(fs.readFile(getAutoMemoryIndexPath(projectRoot), 'utf-8')).resolves.toBe(
|
||||
customIndex,
|
||||
);
|
||||
});
|
||||
|
||||
it('returns null when the auto-memory index does not exist yet', async () => {
|
||||
await expect(readAutoMemoryIndex(projectRoot)).resolves.toBeNull();
|
||||
});
|
||||
|
||||
it('reads the managed auto-memory index after scaffold creation', async () => {
|
||||
await ensureAutoMemoryScaffold(projectRoot);
|
||||
await expect(readAutoMemoryIndex(projectRoot)).resolves.toBe('');
|
||||
});
|
||||
});
|
||||
98
packages/core/src/memory/store.ts
Normal file
@@ -0,0 +1,98 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import * as fs from 'node:fs/promises';
import {
  AUTO_MEMORY_INDEX_FILENAME,
  getAutoMemoryExtractCursorPath,
  getAutoMemoryIndexPath,
  getAutoMemoryMetadataPath,
  getAutoMemoryRoot,
} from './paths.js';
import {
  AUTO_MEMORY_SCHEMA_VERSION,
  type AutoMemoryExtractCursor,
  type AutoMemoryMetadata,
} from './types.js';

export function createDefaultAutoMemoryMetadata(
  now = new Date(),
): AutoMemoryMetadata {
  const iso = now.toISOString();
  return {
    version: AUTO_MEMORY_SCHEMA_VERSION,
    createdAt: iso,
    updatedAt: iso,
  };
}

export function createDefaultAutoMemoryExtractCursor(
  now = new Date(),
): AutoMemoryExtractCursor {
  return {
    updatedAt: now.toISOString(),
  };
}

export function createDefaultAutoMemoryIndex(): string {
  return '';
}

async function writeFileIfMissing(
  filePath: string,
  content: string,
): Promise<void> {
  try {
    await fs.writeFile(filePath, content, {
      encoding: 'utf-8',
      flag: 'wx',
    });
  } catch (error) {
    const nodeError = error as NodeJS.ErrnoException;
    if (nodeError.code !== 'EEXIST') {
      throw error;
    }
  }
}

export async function ensureAutoMemoryScaffold(
  projectRoot: string,
  now = new Date(),
): Promise<void> {
  const root = getAutoMemoryRoot(projectRoot);
  await fs.mkdir(root, { recursive: true });

  await writeFileIfMissing(
    getAutoMemoryIndexPath(projectRoot),
    createDefaultAutoMemoryIndex(),
  );
  await writeFileIfMissing(
    getAutoMemoryMetadataPath(projectRoot),
    JSON.stringify(createDefaultAutoMemoryMetadata(now), null, 2) + '\n',
  );
  await writeFileIfMissing(
    getAutoMemoryExtractCursorPath(projectRoot),
    JSON.stringify(createDefaultAutoMemoryExtractCursor(now), null, 2) + '\n',
  );
}

export async function readAutoMemoryIndex(
  projectRoot: string,
): Promise<string | null> {
  try {
    return await fs.readFile(getAutoMemoryIndexPath(projectRoot), 'utf-8');
  } catch (error) {
    const nodeError = error as NodeJS.ErrnoException;
    if (nodeError.code === 'ENOENT') {
      return null;
    }
    throw error;
  }
}

export { AUTO_MEMORY_INDEX_FILENAME };
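The scaffold's idempotency rests on a single primitive: `fs.writeFile` with the `'wx'` flag, which creates the file only if it does not already exist and otherwise fails with `EEXIST`. A minimal standalone sketch of that pattern, using a throwaway temp directory (the file name here is illustrative):

```typescript
import * as fs from 'node:fs/promises';
import * as os from 'node:os';
import * as path from 'node:path';

// Create-if-missing write: the 'wx' flag makes the write fail with EEXIST
// when the file already exists, so repeated scaffold runs never clobber
// content a user or a previous run already put there.
async function writeFileIfMissing(filePath: string, content: string): Promise<void> {
  try {
    await fs.writeFile(filePath, content, { encoding: 'utf-8', flag: 'wx' });
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code !== 'EEXIST') {
      throw error;
    }
  }
}

// Demonstrate idempotency: the second write is silently skipped.
async function demo(): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), 'wx-demo-'));
  const file = path.join(dir, 'MEMORY.md');
  await writeFileIfMissing(file, 'first');
  await writeFileIfMissing(file, 'second'); // no-op: file exists
  const content = await fs.readFile(file, 'utf-8');
  await fs.rm(dir, { recursive: true, force: true });
  return content;
}
```

Checking for `EEXIST` after the fact, rather than `stat`-ing before the write, avoids a time-of-check/time-of-use race between concurrent scaffold runs.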
43
packages/core/src/memory/types.ts
Normal file
@@ -0,0 +1,43 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

export const AUTO_MEMORY_TYPES = [
  'user',
  'feedback',
  'project',
  'reference',
] as const;

export type AutoMemoryType = (typeof AUTO_MEMORY_TYPES)[number];

export const AUTO_MEMORY_SCHEMA_VERSION = 1;

export interface AutoMemorySourceRef {
  sessionId?: string;
  recordedAt: string;
  messageIds?: string[];
}

export interface AutoMemoryMetadata {
  version: typeof AUTO_MEMORY_SCHEMA_VERSION;
  createdAt: string;
  updatedAt: string;
  lastExtractionAt?: string;
  lastExtractionSessionId?: string;
  lastExtractionTouchedTopics?: AutoMemoryType[];
  lastExtractionStatus?: 'updated' | 'noop';
  lastDreamAt?: string;
  lastDreamSessionId?: string;
  lastDreamTouchedTopics?: AutoMemoryType[];
  lastDreamStatus?: 'updated' | 'noop';
  recentSessionIdsSinceDream?: string[];
}

export interface AutoMemoryExtractCursor {
  sessionId?: string;
  processedOffset?: number;
  updatedAt: string;
}
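Typing `version` as `typeof AUTO_MEMORY_SCHEMA_VERSION` pins the literal `1` into the metadata shape, which lets a reader reject a `meta.json` written by an incompatible schema before trusting its other fields. A hedged sketch of such a guard; `parseAutoMemoryMetadata` is a hypothetical helper, not part of this module:

```typescript
const AUTO_MEMORY_SCHEMA_VERSION = 1;

interface AutoMemoryMetadata {
  version: typeof AUTO_MEMORY_SCHEMA_VERSION;
  createdAt: string;
  updatedAt: string;
}

// Hypothetical helper: returns null for unknown schema versions instead of
// throwing, so a caller can fall back to re-scaffolding the metadata file.
function parseAutoMemoryMetadata(raw: string): AutoMemoryMetadata | null {
  const data = JSON.parse(raw) as Partial<AutoMemoryMetadata>;
  if (data.version !== AUTO_MEMORY_SCHEMA_VERSION) {
    return null;
  }
  return data as AutoMemoryMetadata;
}

const ok = parseAutoMemoryMetadata(
  '{"version":1,"createdAt":"2026-04-01T08:00:00.000Z","updatedAt":"2026-04-01T08:00:00.000Z"}',
);
const stale = parseAutoMemoryMetadata('{"version":2}');
```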
@@ -80,11 +80,6 @@ export const TOOL_NAME_ALIASES: Readonly<Record<string, string>> = {
  ListFilesTool: 'list_directory',
  ReadFolder: 'list_directory', // legacy display name

  // Memory tool
  save_memory: 'save_memory',
  SaveMemory: 'save_memory',
  SaveMemoryTool: 'save_memory',

  // TodoWrite tool
  todo_write: 'todo_write',
  TodoWrite: 'todo_write',
@@ -53,3 +53,8 @@ export const EVENT_STARTUP_PERFORMANCE = 'qwen-code.startup.performance';
export const EVENT_MEMORY_USAGE = 'qwen-code.memory.usage';
export const EVENT_PERFORMANCE_BASELINE = 'qwen-code.performance.baseline';
export const EVENT_PERFORMANCE_REGRESSION = 'qwen-code.performance.regression';

// Managed Auto-Memory Events
export const EVENT_MEMORY_EXTRACT = 'qwen-code.memory.extract';
export const EVENT_MEMORY_DREAM = 'qwen-code.memory.dream';
export const EVENT_MEMORY_RECALL = 'qwen-code.memory.recall';
@@ -52,6 +52,9 @@ export {
  logArenaSessionStarted,
  logArenaAgentCompleted,
  logArenaSessionEnded,
  logMemoryExtract,
  logMemoryDream,
  logMemoryRecall,
} from './loggers.js';
export type { SlashCommandEvent, ChatCompressionEvent } from './types.js';
export {
@@ -78,6 +81,9 @@ export {
  makeArenaSessionStartedEvent,
  makeArenaAgentCompletedEvent,
  makeArenaSessionEndedEvent,
  MemoryExtractEvent,
  MemoryDreamEvent,
  MemoryRecallEvent,
} from './types.js';
export { makeSlashCommandEvent, makeChatCompressionEvent } from './types.js';
export type {
@@ -117,6 +123,10 @@ export {
  recordArenaSessionStartedMetrics,
  recordArenaAgentCompletedMetrics,
  recordArenaSessionEndedMetrics,
  // Auto-Memory metrics functions
  recordMemoryExtractMetrics,
  recordMemoryDreamMetrics,
  recordMemoryRecallMetrics,
  // Performance monitoring types
  PerformanceMetricType,
  MemoryMetricType,
@@ -47,6 +47,9 @@ import {
  EVENT_ARENA_SESSION_ENDED,
  EVENT_PROMPT_SUGGESTION,
  EVENT_SPECULATION,
  EVENT_MEMORY_EXTRACT,
  EVENT_MEMORY_DREAM,
  EVENT_MEMORY_RECALL,
} from './constants.js';
import {
  recordApiErrorMetrics,
@@ -63,6 +66,9 @@ import {
  recordArenaSessionStartedMetrics,
  recordArenaAgentCompletedMetrics,
  recordArenaSessionEndedMetrics,
  recordMemoryExtractMetrics,
  recordMemoryDreamMetrics,
  recordMemoryRecallMetrics,
} from './metrics.js';
import { QwenLogger } from './qwen-logger/qwen-logger.js';
import { isTelemetrySdkInitialized } from './sdk.js';
@@ -106,6 +112,9 @@ import type {
  ArenaSessionEndedEvent,
  PromptSuggestionEvent,
  SpeculationEvent,
  MemoryExtractEvent,
  MemoryDreamEvent,
  MemoryRecallEvent,
} from './types.js';
import type { HookCallEvent } from './types.js';
import type { UiEvent } from './uiTelemetry.js';
@@ -1155,3 +1164,92 @@ export function logSpeculation(config: Config, event: SpeculationEvent): void {
  };
  logger.emit(logRecord);
}

// ─── Auto-Memory Log Functions ───────────────────────────────────────────────

export function logMemoryExtract(
  config: Config,
  event: MemoryExtractEvent,
): void {
  if (!isTelemetrySdkInitialized()) return;

  const attributes: LogAttributes = {
    ...getCommonAttributes(config),
    'event.name': EVENT_MEMORY_EXTRACT,
    'event.timestamp': event['event.timestamp'],
    trigger: event.trigger,
    status: event.status,
    patches_count: event.patches_count,
    touched_topics: event.touched_topics,
    duration_ms: event.duration_ms,
  };
  if (event.skipped_reason) {
    attributes['skipped_reason'] = event.skipped_reason;
  }

  const logger = logs.getLogger(SERVICE_NAME);
  logger.emit({
    body: `Memory extract: ${event.status}. Patches: ${event.patches_count}. Topics: ${event.touched_topics || 'none'}.`,
    attributes,
  });
  recordMemoryExtractMetrics(config, event.duration_ms, {
    trigger: event.trigger,
    status: event.status,
    patches_count: event.patches_count,
  });
}

export function logMemoryDream(config: Config, event: MemoryDreamEvent): void {
  if (!isTelemetrySdkInitialized()) return;

  const attributes: LogAttributes = {
    ...getCommonAttributes(config),
    'event.name': EVENT_MEMORY_DREAM,
    'event.timestamp': event['event.timestamp'],
    trigger: event.trigger,
    status: event.status,
    deduped_entries: event.deduped_entries,
    touched_topics_count: event.touched_topics_count,
    touched_topics: event.touched_topics,
    duration_ms: event.duration_ms,
  };

  const logger = logs.getLogger(SERVICE_NAME);
  logger.emit({
    body: `Memory dream: ${event.status}. Deduped: ${event.deduped_entries}. Topics: ${event.touched_topics || 'none'}.`,
    attributes,
  });
  recordMemoryDreamMetrics(config, event.duration_ms, {
    trigger: event.trigger,
    status: event.status,
    deduped_entries: event.deduped_entries,
  });
}

export function logMemoryRecall(
  config: Config,
  event: MemoryRecallEvent,
): void {
  if (!isTelemetrySdkInitialized()) return;

  const attributes: LogAttributes = {
    ...getCommonAttributes(config),
    'event.name': EVENT_MEMORY_RECALL,
    'event.timestamp': event['event.timestamp'],
    query_length: event.query_length,
    docs_scanned: event.docs_scanned,
    docs_selected: event.docs_selected,
    strategy: event.strategy,
    duration_ms: event.duration_ms,
  };

  const logger = logs.getLogger(SERVICE_NAME);
  logger.emit({
    body: `Memory recall: strategy=${event.strategy}. Selected ${event.docs_selected}/${event.docs_scanned} docs.`,
    attributes,
  });
  recordMemoryRecallMetrics(config, event.duration_ms, {
    strategy: event.strategy,
    docs_selected: event.docs_selected,
  });
}
@@ -44,6 +44,14 @@ const REGRESSION_DETECTION = `${SERVICE_NAME}.performance.regression`;
const REGRESSION_PERCENTAGE_CHANGE = `${SERVICE_NAME}.performance.regression.percentage_change`;
const BASELINE_COMPARISON = `${SERVICE_NAME}.performance.baseline.comparison`;

// Auto-Memory Metrics
const MEMORY_EXTRACT_COUNT = `${SERVICE_NAME}.memory.extract.count`;
const MEMORY_EXTRACT_DURATION = `${SERVICE_NAME}.memory.extract.duration`;
const MEMORY_DREAM_COUNT = `${SERVICE_NAME}.memory.dream.count`;
const MEMORY_DREAM_DURATION = `${SERVICE_NAME}.memory.dream.duration`;
const MEMORY_RECALL_COUNT = `${SERVICE_NAME}.memory.recall.count`;
const MEMORY_RECALL_DURATION = `${SERVICE_NAME}.memory.recall.duration`;

const baseMetricDefinition = {
  getCommonAttributes: (config: Config): Attributes => ({
    'session.id': config.getSessionId(),
@@ -361,6 +369,14 @@ let arenaAgentDurationHistogram: Histogram | undefined;
let arenaAgentTokensCounter: Counter | undefined;
let arenaResultSelectedCounter: Counter | undefined;

// Auto-Memory Metrics
let memoryExtractCounter: Counter | undefined;
let memoryExtractDurationHistogram: Histogram | undefined;
let memoryDreamCounter: Counter | undefined;
let memoryDreamDurationHistogram: Histogram | undefined;
let memoryRecallCounter: Counter | undefined;
let memoryRecallDurationHistogram: Histogram | undefined;

let isMetricsInitialized = false;
let isPerformanceMonitoringEnabled = false;

@@ -429,6 +445,42 @@ export function initializeMetrics(config: Config): void {
  // Increment session counter after all metrics are initialized
  sessionCounter?.add(1, baseMetricDefinition.getCommonAttributes(config));

  // Auto-Memory metrics
  memoryExtractCounter = meter.createCounter(MEMORY_EXTRACT_COUNT, {
    description:
      'Counts auto-memory extraction runs, tagged by trigger and status.',
    valueType: ValueType.INT,
  });
  memoryExtractDurationHistogram = meter.createHistogram(
    MEMORY_EXTRACT_DURATION,
    {
      description: 'Duration of auto-memory extraction in milliseconds.',
      unit: 'ms',
      valueType: ValueType.INT,
    },
  );
  memoryDreamCounter = meter.createCounter(MEMORY_DREAM_COUNT, {
    description:
      'Counts auto-memory dream (consolidation) runs, tagged by trigger and status.',
    valueType: ValueType.INT,
  });
  memoryDreamDurationHistogram = meter.createHistogram(MEMORY_DREAM_DURATION, {
    description: 'Duration of auto-memory dream runs in milliseconds.',
    unit: 'ms',
    valueType: ValueType.INT,
  });
  memoryRecallCounter = meter.createCounter(MEMORY_RECALL_COUNT, {
    description: 'Counts auto-memory recall operations, tagged by strategy.',
    valueType: ValueType.INT,
  });
  memoryRecallDurationHistogram = meter.createHistogram(
    MEMORY_RECALL_DURATION,
    {
      description: 'Duration of auto-memory recall operations in milliseconds.',
      unit: 'ms',
      valueType: ValueType.INT,
    },
  );
  // Initialize performance monitoring metrics if enabled
  initializePerformanceMonitoring(config);
@@ -876,3 +928,65 @@ export function recordArenaSessionEndedMetrics(
    });
  }
}

// ─── Auto-Memory Metric Recording Functions ─────────────────────────────────

export function recordMemoryExtractMetrics(
  config: Config,
  durationMs: number,
  attrs: {
    trigger: 'auto' | 'manual';
    status: 'completed' | 'skipped' | 'failed';
    patches_count: number;
  },
): void {
  if (!isMetricsInitialized) return;
  const common = baseMetricDefinition.getCommonAttributes(config);
  memoryExtractCounter?.add(1, {
    ...common,
    trigger: attrs.trigger,
    status: attrs.status,
  });
  memoryExtractDurationHistogram?.record(durationMs, {
    ...common,
    trigger: attrs.trigger,
    status: attrs.status,
  });
}

export function recordMemoryDreamMetrics(
  config: Config,
  durationMs: number,
  attrs: {
    trigger: 'auto' | 'manual';
    status: 'updated' | 'noop' | 'failed';
    deduped_entries: number;
  },
): void {
  if (!isMetricsInitialized) return;
  const common = baseMetricDefinition.getCommonAttributes(config);
  memoryDreamCounter?.add(1, {
    ...common,
    trigger: attrs.trigger,
    status: attrs.status,
  });
  memoryDreamDurationHistogram?.record(durationMs, {
    ...common,
    trigger: attrs.trigger,
    status: attrs.status,
  });
}

export function recordMemoryRecallMetrics(
  config: Config,
  durationMs: number,
  attrs: { strategy: 'none' | 'heuristic' | 'model'; docs_selected: number },
): void {
  if (!isMetricsInitialized) return;
  const common = baseMetricDefinition.getCommonAttributes(config);
  memoryRecallCounter?.add(1, { ...common, strategy: attrs.strategy });
  memoryRecallDurationHistogram?.record(durationMs, {
    ...common,
    strategy: attrs.strategy,
  });
}
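These recorders are deliberately safe to call before `initializeMetrics` runs: the `isMetricsInitialized` guard plus optional chaining turn every call into a silent no-op when telemetry is disabled. A dependency-free sketch of that pattern; the `Counter` interface here is a stand-in for the OpenTelemetry one, and the names are illustrative:

```typescript
// Stand-in for an OpenTelemetry Counter; only the add() shape matters here.
interface Counter {
  add(value: number, attributes: Record<string, string>): void;
}

let isMetricsInitialized = false;
let memoryRecallCounter: Counter | undefined;
let recorded = 0; // observable side effect for the demo

function initializeMetrics(): void {
  memoryRecallCounter = {
    add(value) {
      recorded += value;
    },
  };
  isMetricsInitialized = true;
}

function recordMemoryRecallMetrics(strategy: 'none' | 'heuristic' | 'model'): void {
  if (!isMetricsInitialized) return; // telemetry disabled: silent no-op
  memoryRecallCounter?.add(1, { strategy });
}

recordMemoryRecallMetrics('heuristic'); // before init: nothing recorded
initializeMetrics();
recordMemoryRecallMetrics('model'); // after init: counted
```

The double guard (flag plus `?.`) means call sites never need to know whether the telemetry SDK was ever set up.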
@@ -1138,3 +1138,92 @@ export class SpeculationEvent implements BaseTelemetryEvent {
    this.had_pipelined_suggestion = params.had_pipelined_suggestion;
  }
}

// ---------------------------------------------------------------------------
// Managed Auto-Memory Events
// ---------------------------------------------------------------------------

export class MemoryExtractEvent implements BaseTelemetryEvent {
  'event.name': 'qwen-code.memory.extract';
  'event.timestamp': string;
  /** 'auto' = triggered by session turn; 'manual' = user-initiated */
  trigger: 'auto' | 'manual';
  status: 'completed' | 'skipped' | 'failed';
  skipped_reason?: 'already_running' | 'queued' | 'memory_tool';
  patches_count: number;
  touched_topics: string;
  duration_ms: number;

  constructor(params: {
    trigger: 'auto' | 'manual';
    status: 'completed' | 'skipped' | 'failed';
    skipped_reason?: 'already_running' | 'queued' | 'memory_tool';
    patches_count: number;
    touched_topics: string[];
    duration_ms: number;
  }) {
    this['event.name'] = 'qwen-code.memory.extract';
    this['event.timestamp'] = new Date().toISOString();
    this.trigger = params.trigger;
    this.status = params.status;
    this.skipped_reason = params.skipped_reason;
    this.patches_count = params.patches_count;
    this.touched_topics = params.touched_topics.join(',');
    this.duration_ms = params.duration_ms;
  }
}

export class MemoryDreamEvent implements BaseTelemetryEvent {
  'event.name': 'qwen-code.memory.dream';
  'event.timestamp': string;
  /** 'auto' = scheduler-triggered; 'manual' = user ran /dream */
  trigger: 'auto' | 'manual';
  status: 'updated' | 'noop' | 'failed';
  deduped_entries: number;
  touched_topics_count: number;
  touched_topics: string;
  duration_ms: number;

  constructor(params: {
    trigger: 'auto' | 'manual';
    status: 'updated' | 'noop' | 'failed';
    deduped_entries: number;
    touched_topics: string[];
    duration_ms: number;
  }) {
    this['event.name'] = 'qwen-code.memory.dream';
    this['event.timestamp'] = new Date().toISOString();
    this.trigger = params.trigger;
    this.status = params.status;
    this.deduped_entries = params.deduped_entries;
    this.touched_topics_count = params.touched_topics.length;
    this.touched_topics = params.touched_topics.join(',');
    this.duration_ms = params.duration_ms;
  }
}

export class MemoryRecallEvent implements BaseTelemetryEvent {
  'event.name': 'qwen-code.memory.recall';
  'event.timestamp': string;
  query_length: number;
  docs_scanned: number;
  docs_selected: number;
  strategy: 'none' | 'heuristic' | 'model';
  duration_ms: number;

  constructor(params: {
    query_length: number;
    docs_scanned: number;
    docs_selected: number;
    strategy: 'none' | 'heuristic' | 'model';
    duration_ms: number;
  }) {
    this['event.name'] = 'qwen-code.memory.recall';
    this['event.timestamp'] = new Date().toISOString();
    this.query_length = params.query_length;
    this.docs_scanned = params.docs_scanned;
    this.docs_selected = params.docs_selected;
    this.strategy = params.strategy;
    this.duration_ms = params.duration_ms;
  }
}
Some files were not shown because too many files have changed in this diff.