- Add "Merge extraction chunks" checkbox (default: off) so long
chats produce multiple smaller blocks instead of one massive block
- Add "dates and times" to extraction prompt's WHAT TO EXTRACT list
as a gentle nudge for temporal context
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
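A minimal sketch of the chunking behavior this checkbox controls (function and parameter names are illustrative, not the extension's actual code):

```javascript
// Illustrative sketch: with merging off (the default), a long chat is
// split into several smaller extraction blocks; with merging on, it
// collapses back into a single block.
function chunkMessages(messages, chunkSize, mergeChunks) {
  const chunks = [];
  for (let i = 0; i < messages.length; i += chunkSize) {
    chunks.push(messages.slice(i, i + chunkSize));
  }
  return mergeChunks ? [messages.slice()] : chunks;
}
```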
- Main tab gets "Memory Extraction" header with "Automatic" checkbox
- "Select all" moved inline with "Character Attachments" header
- New charMemory_sectionHeader CSS for label + control on same line
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Auto extraction checkbox moved next to the Extract Now and View / Edit buttons
- Batch Extract tab renamed to Batch Extraction
- Added "Character Attachments" header with tooltip above chat list
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Activity Log labeled and always visible below tab content
- Diagnostics moved from Main tab to permanent pane at bottom
- Main tab now only has Extract Now and View/Edit buttons
- Click the activity log to expand; hovering also works
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Show the last 3 log entries, always visible below the tabs; click to
expand for more history. Uses a max-height transition for smooth
expand/collapse.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
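The collapsed-tail logic could look roughly like this (a sketch; the real UI drives a CSS max-height transition rather than re-slicing):

```javascript
// Illustrative: the collapsed view shows only the last three entries;
// expanding (via click) reveals the full history.
function visibleLogEntries(entries, expanded, tailSize = 3) {
  return expanded ? entries.slice() : entries.slice(-tailSize);
}
```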
Replace hidden textarea + Custom option with <details> disclosure
showing the editable prompt for each strategy. A Restore Default
button appears when a preset has been customized.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace nested drawer-based layout with top-level tabs (Main, Consolidate,
Batch Extract, Settings, Log). Move consolidation controls from Settings to
dedicated Consolidate tab. Move diagnostics from sub-tab into Main tab.
Remove Tools & Diagnostics drawer wrapper.
All element IDs preserved for JS compatibility.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rename Test to Test Model. Send a specific echo prompt and check
whether the model follows the instruction. Show model name, response
time, and actual reply in the inline status. Yellow warning if the
model responds but doesn't follow the instruction.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace auto-fetch-on-keystroke with explicit Connect button next to the
API key. Connect fetches the model list with inline status feedback.
Test button moved below the model dropdown for a logical flow: enter
API key → Connect → pick model → Test.
API key reveal now auto-hides after 10 seconds as a security best practice.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Models like GLM-4.7 on NVIDIA use reasoning tokens that consume the
response budget before producing content. The previous max of 2000
was insufficient — bumped to 4000 to give thinking models headroom.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Default source changed from Main LLM to Dedicated API for better
out-of-box extraction quality
- Dropdown reordered: Dedicated API first, WebLLM second, Main LLM last
- Removed "If you just want to get started quickly" section from
getting started guide — steering all users toward the recommended path
- Updated all docs to reflect new ordering and default
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Clearer labels that tell users what each setting actually does.
Internal values (provider, main_llm, webllm) are unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add GETTING-STARTED.md with step-by-step setup instructions covering
installation, provider setup, Vector Storage config, and settings tuning
- Raise auto-extraction interval default from 10 to 20 messages for
better extraction quality (more context per LLM call)
- Raise interval slider max from 50 to 100
- Update README with detailed slider interaction docs, Pin as Memory
explanation, and link to getting started guide
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add {{charCard}} placeholder that injects the character's description
and personality into the extraction prompt. This lets the LLM see what
is baseline character knowledge and avoid extracting it as memories.
Update default prompt with CHARACTER CARD section and strengthened
instructions to skip card-redundant info. Add note recommending API
Provider over Main LLM due to prompt pollution.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add support for 11 LLM providers (OpenAI, Anthropic, OpenRouter, Groq,
DeepSeek, Mistral, xAI, NanoGPT, Ollama, Pollinations, Custom) with
per-provider settings, preset-based dispatch, and automatic migration
from the previous NanoGPT-only configuration.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Retaining only 50 entries meant history was lost during long batch runs.
Bump to 500 and add a download button so the log can be saved as a
timestamped text file.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
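A sketch of the cap and the download filename (names and the exact timestamp format are assumptions):

```javascript
const MAX_LOG_ENTRIES = 500; // raised from 50

// Append an entry, dropping the oldest once the cap is exceeded.
function appendLog(log, entry) {
  log.push(entry);
  if (log.length > MAX_LOG_ENTRIES) {
    log.splice(0, log.length - MAX_LOG_ENTRIES);
  }
  return log;
}

// Timestamped name for the downloaded log file.
function logFileName(date = new Date()) {
  const stamp = date.toISOString().slice(0, 19).replace(/:/g, '-');
  return `charmemory-log-${stamp}.txt`;
}
```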
Previously this only reset the active chat's metadata, so batch
extraction still saw other chats as fully processed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds phase logging (Sending/Waiting/Response received, with timing),
always visible in the activity log. When the "Verbose" checkbox is enabled,
also logs the full prompt and raw LLM response.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Extraction prompt:
- Replace broad "sexual mechanics" AVOID with targeted "repetitive minutiae"
- Add NOTE: capture vivid memorable details, skip sequential play-by-play
- Add clear boundary markers (===== sections) between existing memories and
chat content to prevent weaker models from contaminating extractions
- Add CRITICAL instruction: only extract from RECENT CHAT MESSAGES section
Per-message buttons:
- Add addButtonsToExistingMessages() to inject brain/bookmark buttons on
already-rendered messages when a chat loads (called from onChatChanged)
UX polish:
- Add descriptive tooltips to all UI elements (stats bar, buttons, sliders,
settings, tabs)
- Rename "Extract every N messages" to "Auto-extract every N new messages"
- Improve "no new messages" toast on manual Extract Now to suggest Reset
Extraction State
- Update Extract Now tooltip to explain the reset workflow
Docs:
- Add "Choosing an LLM for Memory Extraction" section to README with
recommended models (DeepSeek V3.1, Qwen3-235B, Mistral Large 3, Hermes 4),
models to avoid, and troubleshooting guide
- Update README for cooldown, tabbed panel, stats bar, per-message buttons
- Update PLAN.md with completed items and new extraction quality ideas
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
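The boundary markers might be assembled along these lines (section wording is illustrative, not the extension's actual prompt):

```javascript
// Illustrative prompt assembly: hard ===== markers separate the
// context-only memories from the chat content that may be extracted.
function buildExtractionPrompt(existingMemories, recentMessages) {
  return [
    '===== EXISTING MEMORIES (context only, do NOT extract from here) =====',
    existingMemories,
    '===== RECENT CHAT MESSAGES (extract ONLY from this section) =====',
    recentMessages,
  ].join('\n');
}
```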
- Replace default extraction prompt with higher-quality version: AVOID list
(sexual mechanics, temporary states, dialogue), FOCUS categories, past
tense rule, consolidation/fact-checking rules, better examples
- Bump default responseLength from 500 to 800
- Add minCooldownMinutes setting (default 10, range 0-30) to prevent
rapid-fire extractions; manual extractions bypass cooldown
- Combine Activity Log + Diagnostics into single tabbed drawer
- Expand stats bar from 2 to 4 items: file, memory count, extraction
progress (msgs/interval), cooldown timer with auto-refresh
- Fix stats bar showing stale count after Clear All Memories
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
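The cooldown gate might reduce to something like this (timestamps in milliseconds; names are assumptions):

```javascript
// Illustrative: automatic extractions wait out the cooldown;
// manual extractions bypass it entirely.
function canExtract(now, lastExtractionTime, minCooldownMinutes, isManual) {
  if (isManual) return true;
  return now - lastExtractionTime >= minCooldownMinutes * 60 * 1000;
}
```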
- Add collapsible Activity Log panel that shows timestamped events:
chat switches, extraction state, message collection, LLM responses
- Fix bug where lastExtractedIndex advanced even when LLM returned
NO_NEW_MEMORIES, preventing subsequent manual extraction from
processing messages on a switched-to chat
- Now only advance lastExtractedIndex when memories are actually saved;
always reset messagesSinceExtraction to prevent re-trigger loops
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
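A sketch of the corrected state update (field names follow the commit text; the surrounding code is assumed):

```javascript
// Only advance lastExtractedIndex when memories were actually saved;
// always reset the pending counter so the same batch cannot re-trigger.
function afterExtraction(state, memoriesSaved, processedUpToIndex) {
  if (memoriesSaved) {
    state.lastExtractedIndex = processedUpToIndex;
  }
  state.messagesSinceExtraction = 0;
  return state;
}
```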
- Add Test button next to API key to verify connection with a minimal request
- Add model filter checkboxes (Subscription, Open Source, Roleplay, Reasoning)
that narrow the dropdown with intersection logic
- Store additional model fields (isOpenSource, category, capabilities, costEstimate)
- Fix bug where switching chats didn't seed messagesSinceExtraction with
unextracted message count, preventing automatic extraction from triggering
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
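The intersection logic could look like this (model field names follow the commit; the filter keys and category values are illustrative):

```javascript
// Illustrative: a model survives only if it satisfies every checked filter.
function filterModels(models, filters) {
  return models.filter((m) =>
    (!filters.subscription || m.subscription) &&
    (!filters.openSource || m.isOpenSource) &&
    (!filters.roleplay || m.category === 'roleplay') &&
    (!filters.reasoning || (m.capabilities || []).includes('reasoning'))
  );
}
```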
Allows using NanoGPT's OpenAI-compatible API for memory extraction
and consolidation, independent of the main chat LLM. Fetches model
list with subscription status, supports custom system prompts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Allows resetting extraction tracking without deleting memories,
so the next extraction re-reads all messages from the beginning.
Useful after manually editing or deleting memories.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Show a side-by-side Before/After preview popup before applying
consolidation results, and add an Undo Consolidation button that
restores the pre-consolidation memories from an in-memory backup.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Switch from ## Memory N headers to <memory chat="..." date="..."> tag blocks
with individual bullet parsing. Memory manager now shows grouped extraction
cards with per-bullet edit/delete controls. Stats bar simplified to file name
and total bullet count. Diagnostics panel shows vectorization status.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
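A rough parser for the new tag-block format (the attribute set and "- " bullet syntax follow the commit; everything else is a sketch):

```javascript
// Illustrative: pull each <memory> block and its "- " bullets out of
// the stored markdown.
function parseMemoryBlocks(text) {
  const blocks = [];
  const re = /<memory chat="([^"]*)" date="([^"]*)">([\s\S]*?)<\/memory>/g;
  let m;
  while ((m = re.exec(text)) !== null) {
    const bullets = m[3]
      .split('\n')
      .map((line) => line.trim())
      .filter((line) => line.startsWith('- '))
      .map((line) => line.slice(2));
    blocks.push({ chat: m[1], date: m[2], bullets });
  }
  return blocks;
}
```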
- Rename extension header from CharMemory to Character Memory
- Add always-visible stats bar showing active file, memory count, and
extraction progress
- Flatten Memory Status sub-drawer so controls are immediately visible
- Merge Advanced section into Settings with separator dividers
- Rename buttons: Manage Memories → View / Edit, Reset → Clear All
Memories (moved to Settings with danger styling)
- Add tooltips, helper text styling, and separator/danger-button CSS
- Call updateStatusDisplay after consolidation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Auto-generate memory file names from character name (CharName-memories.md),
with optional per-chat isolation (CharName-chat{id}-memories.md). Add memory
info to diagnostics panel showing active file, memory count, and last
extraction result. Migrate old hardcoded 'char-memories.md' default to
auto-naming.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
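The auto-naming might reduce to the following (the sanitization rule is an assumption):

```javascript
// Illustrative: derive the Data Bank file name from the character name,
// optionally isolating memories per chat.
function memoryFileName(charName, chatId, perChat) {
  const safe = charName.replace(/[^\w-]+/g, '_');
  return perChat ? `${safe}-chat${chatId}-memories.md` : `${safe}-memories.md`;
}
```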
- Add Reset button to clear extraction state and re-extract from beginning
- Add Advanced section with configurable Data Bank file name (replaces hardcoded char-memories.md)
- Auto-migrate saved prompts with old blank-line separator instruction to --- separator
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Memories now stored as numbered `## Memory N` entries with timestamps
- Auto-migrates existing flat-text memories to structured format
- "Manage Memories" popup with edit/delete per entry (replaces "View Memories")
- "Consolidate" button and `/consolidate-memories` slash command to merge duplicates via LLM
- Extraction now splits LLM output on `---` separators into individual entries
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
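The separator splitting could be as simple as this (a sketch, not the extension's actual code):

```javascript
// Illustrative: divide the LLM output into individual entries on
// standalone --- lines, dropping empty fragments.
function splitMemoryEntries(output) {
  return output
    .split(/\n---\n/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```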
Automatically extracts structured character memories from chat and stores
them in character-scoped Data Bank files for vector retrieval.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>