- Content Settings: Fixed Persona not working when enabled
- Content Settings: Added support for Author's Note, can be enabled
- Settings > Chat Participation: Fixed backspace and input issue with AI Replies per Comment
- Chat Participation: Fixed handling of names that have spaces in them using a regex
- Settings > Chat Participation: Added an override option to set token limit for Chat Participation responses (default is ST's max token response setting.)
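The space-tolerant name matching mentioned above could be sketched roughly like this; the function name and approach are illustrative assumptions, not the extension's actual code:

```javascript
// Match @mentions against a list of known usernames, allowing names
// that contain spaces by building an alternation of escaped names.
function findMentions(text, knownNames) {
    const escaped = knownNames
        .map(name => name.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
        // Longest names first so "Jane D" doesn't shadow "Jane Doe".
        .sort((a, b) => b.length - a.length);
    const pattern = new RegExp(`@(${escaped.join('|')})`, 'g');
    return [...text.matchAll(pattern)].map(match => match[1]);
}
```

Matching against a known-name list sidesteps the ambiguity of guessing where a multi-word name ends.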
* Chat Participation: You can now send messages and chat with others. Supports @mentions and comments in general. Set your username, choose an avatar color, and set how many commenters respond to you. Thanks to RetiredHippie for getting this feature started.
* Live icon is now clickable, allowing you to quickly enable/disable Livestream. It turns orange and pulses to indicate it is processing in the background, then turns red when done and remains Live.
* New chat style: Dark Roast. For when you want comedians to roast your story or roleplay.
* Fancy new settings menu, giving you quick access to all EchoChamber settings and options.
* Pop-out floating panel: now you can create a floating EchoChamber and resize it however you like and place it anywhere in SillyTavern. It remembers the position and size, even after restarting ST.
* Drag and reorder chat styles in any order you'd like.
* Mobile: When minimized, the entire bar can be tapped to restore EchoChamber.
* Narrator-based chat styles like Ava/Kai (NSFW) and HypeBot continue to respond and react when Livestream is enabled.
* Miscellaneous visual improvements and bedazzling.
* Secret cow level added. (Kidding!)
Bugs/Issues Fixed:
* EchoChamber failed to process when a SillyTavern panel was pinned
* World Info setting's token count was too low; it now defaults to 0 (uses ST's max context) and can be set to any amount manually
* EchoChamber erroneously triggering and processing when a very slow or unresponsive LLM is used
* Style Manager not parsing and understanding {{user}} and {{char}}
v4.2.0 update by SpicyMarinara:
* Pop-out window: Open the chat in a separate window to move to another screen
* Improved panel controls: Power button now truly enables/disables the extension (hides panel AND stops generation).
* Separate collapse arrow for just hiding the panel.
* Include Summary: Option to include chat summary from the Summarize extension
* Include World Info: Option to include active World Info/Lorebook entries
* Include Persona/Character: Options to include persona and character descriptions in context (thanks to leDissolution!)
* Style dropdown fix: Menu now opens upward when the panel is at the bottom position
* Livestream resume: Messages continue rolling after page refresh
… Plus some bug fixes.
When Claude extended thinking is used, generateRaw may return just the
first, empty text block ('\n\n') WITHOUT throwing, since the string is
technically truthy. The catch block never fires, and few or no messages
get parsed.
Fix: after generateRaw returns, always compare result length against
the captured raw API data. If raw data has significantly more content,
use that instead.
Also added diagnostic console.warn logging to livestream flow
(stopLivestream caller trace, queue counts, display progress) to help
diagnose if the timer chain is being interrupted.
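The length comparison described above could look something like this; the function name is an illustrative assumption, not the extension's actual API:

```javascript
// generateRaw may return a truthy-but-empty '\n\n' text block without
// throwing, so a simple falsy check misses it. Compare trimmed lengths
// and prefer the text recovered from the captured raw API data when it
// has significantly more content.
function preferLongerResult(generateRawResult, rawText) {
    const fromGenerate = (generateRawResult ?? '').trim();
    const fromRaw = (rawText ?? '').trim();
    return fromRaw.length > fromGenerate.length ? rawText : generateRawResult;
}
```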
SillyTavern's extractMessageFromData uses .find() to get the FIRST
type:'text' block from Claude's response. With extended thinking,
the first text block is just '\n\n' (empty preamble), and the actual
content is in a later text block. This causes generateRaw to throw
'No message generated'.
Fix: temporarily intercept the fetch call to the chat-completions
backend to capture the raw API response. If generateRaw fails with
'No message generated', fall back to extracting text from the captured
raw data using extractTextFromResponse, which correctly handles
content arrays by joining ALL text blocks.
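A hedged sketch of the temporary fetch interception described above; the endpoint substring and callback wiring are illustrative assumptions, and SillyTavern's actual routes may differ:

```javascript
// Wrap a fetch implementation so responses from the chat-completions
// backend are captured as parsed JSON, while the original caller still
// receives the untouched Response object.
function withResponseCapture(fetchImpl, onCapture) {
    return async function capturingFetch(url, options) {
        const response = await fetchImpl(url, options);
        if (String(url).includes('/api/backends/chat-completions')) {
            try {
                // Clone so the original consumer can still read the body.
                onCapture(await response.clone().json());
            } catch {
                // Non-JSON body: nothing to capture.
            }
        }
        return response;
    };
}
```

To keep the interception temporary, install it just before the generateRaw call (`globalThis.fetch = withResponseCapture(globalThis.fetch, cb)`) and restore the original in a `finally` block.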
The previous fix assumed response.content would always be the content
array, but sendRequest with extractData may return the response in
different shapes. Added a unified extractTextFromResponse() helper that
handles: plain strings, content arrays (Anthropic extended thinking),
the response itself being an array, OpenAI choices format, and other
common fields.
Also added temporary debug logging (console.error) to capture the
actual response shape from sendRequest for diagnosis.
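An illustrative reconstruction of the unified helper described above; the exact implementation in the extension may differ:

```javascript
// Normalize the many shapes sendRequest can return into a single string.
function extractTextFromResponse(response) {
    if (typeof response === 'string') return response;
    if (response == null) return '';
    // The response itself may be a content-block array.
    const content = Array.isArray(response) ? response : response.content;
    if (Array.isArray(content)) {
        // Anthropic extended thinking: join every type:'text' block,
        // skipping 'thinking' blocks.
        return content
            .filter(block => block?.type === 'text')
            .map(block => block.text ?? '')
            .join('');
    }
    if (typeof content === 'string') return content;
    // OpenAI chat-completions shape.
    const choiceText = response.choices?.[0]?.message?.content
        ?? response.choices?.[0]?.text;
    if (typeof choiceText === 'string') return choiceText;
    // Other common fields.
    return response.text ?? response.message ?? '';
}
```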
When Claude uses extended thinking, response.content is an array of
content blocks [{type:'text',text:'...'},{type:'thinking',...}] instead
of a plain string. This caused all parsed messages to be empty.
Now filters for text-type blocks and extracts their text content in
all response parsing paths (profile, OpenAI-compatible, connection_utils).
- Don't stop active livestream when new message events trigger
- Add try-catch to prevent errors from killing the queue
- Save fullGeneratedHtml to enable resume after page refresh
- Track livestreamComplete status to know when to resume
- Fix restore logic to handle empty generatedHtml for fresh livestreams
- Create container if missing instead of using fallback that broke queue
- Ensure all queued messages continue to display properly
- Add scroll to top when new message appears
- Handle popout window container creation consistently
- Use proper multi-turn message arrays for all API sources (Profile, Ollama, OpenAI, Default ST)
- Fix persona retrieval using {{persona}} macro
- Fix world info retrieval using getWorldInfoPrompt() function
- Remove names from chat history messages (role indicates speaker)
- Wrap all Include in Context items in <lore> tags
- Remove artificial context depth and character limits
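The multi-turn message arrays referred to above follow the standard chat-completions shape; this is a sketch only, and the extension's actual prompt assembly may differ:

```javascript
// Build a role-based message array from chat history. Speaker names are
// omitted from the content because the role field already indicates
// who is speaking.
function buildMessages(systemPrompt, history) {
    return [
        { role: 'system', content: systemPrompt },
        ...history.map(msg => ({
            role: msg.isUser ? 'user' : 'assistant',
            content: msg.text,
        })),
    ];
}
```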
- Style changes no longer trigger generation
- Added isGenerating flag to prevent concurrent generation requests
- Skip auto-generation when character editor popup is open
- Skip auto-generation when character creation panel is visible
- Skip auto-generation when there's no valid chatId (not in a real conversation)
- Properly abort in-flight requests before starting new ones
- Added separate 'paused' state for power button (keeps panel visible but stops generation)
- Settings checkbox now fully enables/disables extension (hides/shows panel)
- Panel now stays hidden after refresh when disabled via settings
- Removed toastr notifications for pause/unpause
- Restored Pop Out option in panel's layout menu
- Fixed pop-out to properly open window instead of hiding panel
- Added CSS for disabled visual state (50% opacity on content, hidden LIVE indicator)
Updated the generation logic for Connection Profile, Ollama, and OpenAI-Compatible sources to use your global Max Response Length setting in SillyTavern.
Added an optional API Key field to the "OpenAI Compatible" settings section.
v4.1.1
* Fixed 'Generate Upon Receiving A New Message' for Livestream triggering when switching chats or starting new ones.
* Fixed OpenAI/Ollama URL and model settings not being saved.
* You can now edit timeframes from Livestream mode.
- Add Livestream mode with three generation modes (Manual, On Message, On Complete)
- Messages appear gradually over 5-60 second intervals with slide-in animation
- Add Live indicator with pulsing red light in panel header
- Livestream respects user count while using batch size for message count
- Add new AO3/Wattpad comment style
- Update timing intervals and improve logging
Increased max_tokens and num_predict values from ~500 to 4096 for all three generation sources. This should fix the issue of the user count setting not being obeyed.
- Added the ability to toggle on/off auto-generation for new messages.
- Added the ability to customize how many messages from the chat history are included in the prompt.
- You can now include content previously generated by the extension in your next generations.
- You can now freely import and export styles.
- Updated the prompts and the recommended formats to use an improved, XML-based structure.
- Introduced a button that clears the currently generated and cached extension data for a selected chat.
- You can now abort generations.
- Refreshing the page does not clear the last generated content.
- Improved and unified CSS.
- Fixed mobile display.
- Added version indicator.