- Bump version to 2.1.10
- Enable auto_update and set homePage in manifest.json
- Add CHANGELOG entry for truncation fix
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Consolidation: increase max_tokens from responseLength*2 to
Math.max(responseLength*4, 4000) — ensures at least 4k output tokens
regardless of the user's Response Length setting
- Reformat/Convert: increase from responseLength to
Math.max(responseLength*2, 4000) — same floor
- Add truncation detection in consolidation: warns in the activity log and
toastr if the response contains <memory> tags but doesn't end with
</memory>, and points the user to the Response Length setting
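A minimal sketch of the floors and the truncation check described above; the function names are illustrative, not the extension's actual API:

```javascript
// Illustrative names — not the extension's real functions.
function consolidationMaxTokens(responseLength) {
  // Guarantee at least 4k output tokens regardless of Response Length
  return Math.max(responseLength * 4, 4000);
}

function reformatMaxTokens(responseLength) {
  return Math.max(responseLength * 2, 4000); // same floor, lower multiplier
}

function looksTruncated(response) {
  // A <memory> block was opened but the reply doesn't end with </memory>
  return response.includes('<memory') && !response.trimEnd().endsWith('</memory>');
}
```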
Fixes #13
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Step 1 of the wizard now offers a toggle between "Dedicated API"
(default) and "Connection Profile" before configuring the LLM.
Previously, Connection Profile was only available in the Settings
Modal after completing the wizard.
- Source toggle buttons with active/inactive styling
- Profile section: dropdown via CMRS.handleDropdown(), Test Connection button
- Step 3 summary adapts to show profile name or provider/model
- Updated getting-started.md with both paths and new screenshot
- Updated CHANGELOG.md with improvements and tooltip fix
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Settings and Troubleshooter modals used a side-by-side flex layout
(130px nav + content) that left the content panel too narrow on phone
screens. In phone mode, the nav now renders as horizontal tabs above
the content, giving it the full popup width.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swipes re-render an existing message slot without adding a new message,
but CHARACTER_MESSAGE_RENDERED fired for them and incremented
messagesSinceExtraction — causing extraction to trigger early.
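A hedged sketch of the counter guard; the state shape and the isSwipe flag are assumptions for illustration:

```javascript
// Assumed shapes — sketch only, not the extension's actual handler.
function onMessageRendered(state, isSwipe) {
  // Swipes re-render an existing message slot; they are not new messages
  if (isSwipe) return state;
  return { ...state, messagesSinceExtraction: state.messagesSinceExtraction + 1 };
}
```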
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Lets users reuse saved SillyTavern connection profiles for memory
extraction via ConnectionManagerRequestService, instead of configuring
a separate dedicated API. Includes profile picker dropdown, test
connection, system prompt override, and health check integration.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The #charMemory_convertSource dropdown doesn't exist in the
Troubleshooter context, so previewConversion() now accepts an
optional sourceFileUrl parameter to bypass the dropdown lookup.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The convert button in the Data Bank file browser previously just set a
hidden form value and showed a toast telling the user to "open the
Convert section," which was confusing. Now it directly opens the
conversion preview dialog.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Users sometimes paste the full completions endpoint URL instead of the
base URL, causing model fetching to hit /chat/completions/models (404).
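One way the paste could be normalized, assuming only a trailing chat-completions path needs stripping; this is a sketch, not the extension's actual code:

```javascript
// Illustrative normalization: recover the base URL from a pasted
// completions endpoint before appending /models.
function normalizeBaseUrl(url) {
  return url.replace(/\/+$/, '').replace(/\/chat\/completions$/, '');
}
```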
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Address user feedback about not understanding what the extension does
after setup. Adds plain-language explanations of the extraction →
storage → retrieval pipeline, cross-chat memory, Vector Storage
setup steps, and a button quick reference.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The protection buffer calculated a reduced effectiveEnd but passed the
original endIndex (null) to collectRecentMessages(), which used
chat.length — extracting the "protected" messages anyway.
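The shape of the fix can be sketched as follows; the signature and the slice are illustrative, assuming the collector previously fell back to chat.length on a null bound:

```javascript
// Sketch: derive the reduced bound once and use it for collection,
// instead of passing the original (possibly null) endIndex downstream.
function collectWithBuffer(chat, endIndex, protectRecent) {
  const effectiveEnd = Math.max(0, (endIndex ?? chat.length) - protectRecent);
  return chat.slice(0, effectiveEnd);
}
```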
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Complete rewrite of the UI and significant feature additions since v1.6.1.
UX Redesign (v2.0):
- Single-view dashboard replaces 4-tab sidebar
- Settings, Prompts, Troubleshooter, Memory Manager moved to center-screen modals
- Activity log in slide-out drawer
- Setup Wizard for first-run configuration
- Prompt version tracking with update notifications
- Health indicator in stats bar
Injection Viewer (v1.6–v2.1.6):
- Per-message injection data: see exactly what memories, lorebook entries,
and extension prompts were injected for any generation
- Context/Prompt Breakdown with per-category token counts (System, Char card,
Lorebook, Data Bank, Examples, Chat history) via ST Prompt Itemization
- Stacked bar visualization, token hints in headers, Tips popup
- Context overflow and heavy injection warnings
Memory Management:
- Unified block editor across all 5 editing surfaces (Memory Manager,
Consolidation, Conversion, Reformat, Data Bank browser)
- Find & Replace with highlighting across all editors
- Undo support for all edit operations
- Group chat character picker in Memory Manager
Other features:
- Tablet & phone display modes with touch-friendly controls
- Topic-tagged memory format for better vector retrieval
- Self-closing memory tag handling (GLM-4.7 compatibility)
- Protect recent messages from extraction feedback loop
- 9-point health check system with retrieve chunks and score threshold
- Shared editor factory (editor.js), pure utility library (lib.js)
- Vitest test suite: unit, snapshot, and live LLM tests
- Full documentation suite in docs/
See CHANGELOG.md for detailed per-version notes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Bar segments now sized relative to the model's full context window
so unused context shows as grey background — makes it immediately
obvious how much of the context window is consumed
- Summary line now reads "X / Y tk — Z% of context used"
- Breakdown table shows each category as % of context (not % of total)
- Section renamed back to "Context" per user preference
- Tips popup container is left-aligned
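The sizing described in the first bullet can be sketched as percentages of the full context window, with the remainder as the grey "unused" background (names assumed):

```javascript
// Illustrative: each segment's width is its share of the whole context
// window, so unused context is immediately visible.
function contextBarWidths(tokensBySource, contextSize) {
  const used = Object.values(tokensBySource).reduce((a, b) => a + b, 0);
  const widths = Object.fromEntries(
    Object.entries(tokensBySource).map(([k, v]) => [k, (v / contextSize) * 100]),
  );
  widths.unused = Math.max(0, 100 - (used / contextSize) * 100);
  return widths;
}
```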
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Removes the Context Budget section entirely. The injection viewer now
shows a Prompt Breakdown at the top that loads automatically on open —
no collapsed panel to discover or click. Exact per-category token counts
(System, Char card, Lorebook, Data Bank, Examples, Chat history) come
from ST's Prompt Itemization; falls back to injection-only estimates for
snapshots from previous sessions.
Tips popup expanded with a Char Card / System Prompt section and updated
intro text to reflect the full-prompt breakdown.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Combines "Context Budget" and "Prompt Breakdown" into a single "Context"
section at the top of the injection drawer. The header shows the stacked
bar and summary; expanding loads the full breakdown — exact counts from
ST's Prompt Itemization when available, or estimated fallback for old
snapshots.
Fixes a long-standing inaccuracy where Lorebook token estimates were
computed by summing truncated entry content (200 chars/entry from
WORLD_INFO_ACTIVATED), producing large overestimates. Now uses the actual
injected worldInfoString from itemizedPrompts, matching the prompt
breakdown numbers. Data Bank char count similarly updated to use
dataBankVectorsString. Labels changed to "Data Bank" throughout to match
ST terminology.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a "Prompt Breakdown" section at the bottom of the injection drawer
that lazy-loads on first expand. Reads ST's itemizedPrompts data for the
message and calls itemizedParams() to get exact token counts from the
active tokenizer.
Shows:
- Proportional stacked bar across all prompt categories (system, char
card, lorebook, data bank/CharMemory, examples, chat history)
- Total tokens / usable context with % of context used
- Per-category token count table with % breakdown
- Model and tokenizer name
For OAI (chat completion) the counts are already pre-computed and the
load is near-instant. For text completion backends the counts are
computed async with a loading spinner.
Falls back to a friendly message if no itemized data exists (previous
session or Prompt Itemization disabled in ST settings).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Import getMaxContextSize() from script.js instead of hand-rolling
context size detection. This correctly handles all API backends
(OpenAI, text completion, etc.) and subtracts reserved response tokens.
- Bar segments now fill proportionally between tracked sources
(CharMemory / Lorebook / other EPs) so the bar is always visible.
Previously, segments were sized as fraction of total context, making
them < 1px wide on large context models (e.g. 1k tokens / 190k ctx).
- Summary text now shows "~N / M tk (X%)" — the context percentage
is in the label rather than implied by bar width.
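The proportional fill from the second bullet, as a minimal sketch (not the actual rendering code): segments share the bar relative to each other, not to the full context, so they never collapse to sub-pixel widths.

```javascript
// Sketch: widths as a percentage of the tracked total only.
function segmentShares(tokensBySource) {
  const total = Object.values(tokensBySource).reduce((a, b) => a + b, 0);
  if (!total) return {};
  return Object.fromEntries(
    Object.entries(tokensBySource).map(([k, v]) => [k, (v / total) * 100]),
  );
}
```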
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a "Context Budget" collapsible section to the per-message injection
viewer showing estimated token usage across CharMemory, Lorebook, and other
extension prompts as a horizontal stacked bar against the model's context
limit. Expanding the section reveals a per-source breakdown table with a
"Tips to reduce" link that opens an actionable guidance popup.
Also adds:
- ~N tk hints in CharMemory, Lorebook, and Extension Prompts headers
- Token cost + injection position/depth metadata in EP and WI cards
- depth now captured in extension prompt snapshots (was missing)
- Red/yellow health notes for context overflow and heavy injection (>40%)
- getMainContextMaxTokens() reads oai_settings or textCompletionSettings
- estimateTokens() helper (~4 chars/token)
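A minimal sketch of the estimateTokens() helper named above, assuming a plain characters-divided-by-four heuristic:

```javascript
// Rough token estimate at ~4 chars/token (heuristic, not a tokenizer).
function estimateTokens(text) {
  return Math.ceil((text ?? '').length / 4);
}
```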
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Technical design for issue #6. Includes research findings on why the
disabled_attachments toggle approach won't work and details the
recommended chat_metadata + setExtensionPrompt() approach.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the Memory Manager's custom per-bullet popup editing with the
same createMemoryEditor + renderConsolidatedCards used by Consolidation,
Conversion, Reformat, and the Data Bank browser. Adds inline editing,
undo, add/delete blocks and bullets, and uses Save/Cancel buttons.
Group chats now show a character picker instead of all members inline.
Also converts the Data Bank file editor from POPUP_TYPE.TEXT with an
inline Save button to POPUP_TYPE.CONFIRM with Save/Cancel, matching
all other editor dialogs.
Removes dead code: editMemory, deleteMemory, deleteBlock, reindexManager
functions and associated CSS rules.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Some models produce <memory chat="..."></memory> with bullets after the
closing tag instead of inside it. The consolidation parser returned an
empty result, preventing the dialog from appearing. The extraction
pipeline had the same vulnerability but degraded more gracefully.
Both parsers now use three-tier matching: normal content-inside-tags,
then content-after-self-closing-tags, then whole-response fallback.
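The three tiers can be sketched as below; these regexes are illustrative, not the extension's actual parser:

```javascript
// Hedged sketch of the three-tier matching.
function extractMemoryBody(response) {
  // Tier 1: content inside <memory ...>...</memory>
  let m = response.match(/<memory[^>]*>([\s\S]*?)<\/memory>/);
  if (m && m[1].trim()) return m[1].trim();
  // Tier 2: self-closing or empty tag with the bullets after it
  m = response.match(/<memory[^>]*\/>\s*([\s\S]+)/) ||
      response.match(/<memory[^>]*>\s*<\/memory>\s*([\s\S]+)/);
  if (m && m[1].trim()) return m[1].trim();
  // Tier 3: whole-response fallback
  return response.trim();
}
```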
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Consolidation, conversion, and reformat dialogs now show explicit
Save/Cancel instead of ambiguous Yes/No defaults.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replaces ambiguous Yes/No default buttons with explicit Save/Cancel
to prevent accidentally closing without saving.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a group has members but all are in disabled_members, the Activity
Log now says "all group members are disabled in SillyTavern — re-enable
at least one in the group settings" instead of the generic "no targets
found". Verified locally that this is the root cause of issue #4.
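An illustrative version of the check (member and disabled-list shapes are assumptions; the returned strings quote the message above):

```javascript
// Sketch only — shapes assumed, strings quoted from the commit message.
function groupTargetError(members, disabledMembers) {
  const active = members.filter(m => !disabledMembers.includes(m));
  if (active.length) return null;
  return members.length
    ? 'all group members are disabled in SillyTavern — re-enable at least one in the group settings'
    : 'no targets found';
}
```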
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the group's generation_mode value to the Group Debug section of
the Diagnostic Report. This is the key field needed to diagnose issue
#4 — groups using Append (with disabled) mode (mode 2) have all
members in the disabled list by design, which was not visible before.
Also logs generation_mode in the console warning when active member
count is zero.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Users who customized a prompt in 1.x and upgraded to 2.x were never
notified of default prompt changes. checkPromptVersions() now detects
a customized prompt with no version record and sets a 'pre-2.0'
sentinel so hasPromptUpdate() fires. A toast notification appears 2s
after load whenever any prompt has a pending update.
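The sentinel logic can be sketched as follows; the record fields are assumed for illustration:

```javascript
// Sketch: a customized prompt with no version record is treated as
// 'pre-2.0' so the update check fires.
function resolvePromptVersion(record) {
  if (record.customized && !record.version) return 'pre-2.0';
  return record.version;
}

function hasPromptUpdate(record, currentDefaultVersion) {
  return Boolean(record.customized) &&
    resolvePromptVersion(record) !== currentDefaultVersion;
}
```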
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
In group chats, context.characterId flips to whichever member last
replied. The existing context-change guard checked characterId after
each LLM call, causing the entire extraction to be thrown away
silently whenever a character responded during the (sometimes 10s+)
API round-trip.
Fix: in group chats (isMultiTarget), only guard on chatId changing.
characterId is only checked in 1:1 chats where a change genuinely
means the user switched characters.
Also promotes the discard log from console.log to logActivity so
it appears in the Activity Log drawer instead of being invisible.
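The loosened guard can be sketched like this, with the snapshot shape assumed:

```javascript
// Sketch: in group chats only a chatId change discards the extraction;
// characterId flips to whichever member replied last, so it is ignored.
function contextChanged(before, after, isMultiTarget) {
  if (before.chatId !== after.chatId) return true;
  if (isMultiTarget) return false;
  return before.characterId !== after.characterId;
}
```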
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The stale metadata check in onChatChanged was resetting
lastExtractedIndex to -1 every session after consolidation.
Consolidated blocks carry thematic labels (e.g. "First Meeting")
rather than the original chatId, so the chat-specific match
b.chat === chatId always failed, falsely detecting stale metadata.
Fix: reset only when the memory file has no blocks at all (memories
genuinely deleted), not when blocks exist with non-matching labels.
Also brings in group chat diagnostics from issue-4-debug:
- getGroupMembers() now warns in console when group/avatar lookup fails
- Diagnostic report includes Group Debug section with member counts
and unresolved avatar list
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
getCharacterName() returns null in group chats because ST leaves
context.characterId undefined when multiple characters are active.
The guard `!charName || !target` was short-circuiting on that null
and showing "No character selected" even though getMemoryTargets()
had valid members.
Switch the gate to `!targets.length` — the only thing that actually
matters — in both the initial HTML build and rebuildDataBankList.
Also use target.name as fallback for the 1:1 subtitle in case charName
is ever unexpectedly null.
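The gate and the name fallback can be sketched as below (target shape and return value assumed for illustration):

```javascript
// Sketch: gate only on targets.length; fall back to target.name for
// the 1:1 subtitle when charName is unexpectedly null.
function panelState(targets, charName) {
  if (!targets.length) return { blocked: true, subtitle: null };
  const subtitle = targets.length === 1
    ? (charName ?? targets[0].name)
    : `${targets.length} group members`;
  return { blocked: false, subtitle };
}
```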
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The previous auto-refresh captured targets/charName at modal-open time
(stale closure), so if the troubleshooter was opened before a character
context was available (e.g. before CHAT_CHANGED fired), the Data Bank
section would never populate.
Now rebuildDataBankList() calls getMemoryTargets()/getCharacterName()
fresh on each tick. Also fires immediately on modal open and whenever
the user navigates to the Data Bank tab, so there's no 2s wait for
the first render.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Filter out disabled attachments (tracked in extension_settings.disabled_attachments)
from the Data Bank browser — previously all files appeared regardless of enabled state
- Move import row outside #cm_ts_dataBankList so the auto-refresh can safely
replace the file list div without destroying the file input's event handler
- Auto-refresh the Data Bank file list every 2s while the databank section is
active; skips the DOM write when content hasn't changed, cleared on modal close
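The first bullet's filter, as a hedged sketch that assumes disabled_attachments stores attachment URLs (which may not match ST's actual bookkeeping):

```javascript
// Sketch: hide attachments whose URL appears in the disabled list.
function visibleAttachments(attachments, disabledUrls) {
  return attachments.filter(a => !disabledUrls.includes(a.url));
}
```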
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New toggle in Settings > Extraction that excludes the most recent N
messages from auto-extraction (default: 4). This prevents a feedback
loop where just-extracted memories constrain swipes and regenerations
— the model would see memories about events that just happened and
force the swipe to repeat the same plot.
Skipped messages are picked up on the next extraction cycle as newer
messages push them out of the buffer zone. Extract Now and Extract
Here are unaffected (manual actions process the full range).
Closes #3
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
KoboldCPP doesn't store its embedding model name in Vector Storage
settings — it's discovered dynamically from the API at embedding time.
The health check looked up `extension_settings.vectors.koboldcpp_model`
(which doesn't exist), sent no model to `/api/vector/list`, and the
server resolved `String(undefined)` to literal "undefined" — looking
in the wrong folder.
Fix: discover the model by calling `/api/backends/kobold/embed` with
empty items (the same technique VS uses internally), then use the
returned model name for the list query.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Docs:
- troubleshooting: downgrade file vectorization and no-memories-injected
checks from RED to YELLOW (false alarm fixes from v2.0.1)
- managing-memories: update topic tag example to include character name
- injection-viewer: mention Display Mode setting for tablet/phone
- README: add tablet & phone support to Feature Highlights with beta
testing call-to-action
- getting-started: add backup reminder section
- changelog: add nudge banner button rename
Tests:
- Add escaping.test.js (13 tests) and format-detection.test.js (24 tests)
Repo:
- Add .gitignore for screenshots, .DS_Store, .playwright-mcp, drafts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The button opens the Troubleshooter to show health checks — it doesn't
proactively fix anything. "View" accurately describes the action.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>