veguAI 42a8863e65
0.36.0 (#255)
Major Features

- API key encryption at rest using Fernet (OS keyring with file fallback)
- Prompt Manager: unified UI with template groups, priority ordering, override tracking, response extractors
- Scene context history review panel with token budgets and best-fit mode
- Multiple concurrent director chats with auto-generated titles
- Granular scene state reset dialog
- Time passage insert/edit/delete in scene view
- Image analysis via OpenAI-compatible and Talemate Client backends
- Volatile context placement after scene history for improved prompt caching
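The "keyring with file fallback" pattern behind the new API-key encryption can be sketched as follows. This is a hypothetical illustration, not Talemate's actual implementation: `Fernet` comes from the third-party `cryptography` package, the optional `keyring` import is the OS-keyring backend, and the service name, key-file path, and helper name are all invented for the example.

```python
# Hypothetical sketch of API-key encryption at rest with Fernet,
# preferring the OS keyring and falling back to a key file.
import os
import tempfile

from cryptography.fernet import Fernet


def load_or_create_key(key_path: str) -> bytes:
    """Return a Fernet key, creating and persisting one on first use."""
    try:
        import keyring  # optional OS-keyring backend
        stored = keyring.get_password("talemate-demo", "fernet-key")
        if stored:
            return stored.encode()
    except Exception:
        pass  # no usable keyring backend: fall back to a key file
    if os.path.exists(key_path):
        with open(key_path, "rb") as fh:
            return fh.read()
    key = Fernet.generate_key()
    with open(key_path, "wb") as fh:
        fh.write(key)
    os.chmod(key_path, 0o600)  # restrict the fallback file to the owner
    return key


# Encrypt an API key at rest, then decrypt it on a later load.
key_file = os.path.join(tempfile.mkdtemp(), "fernet.key")
fernet = Fernet(load_or_create_key(key_file))
token = fernet.encrypt(b"sk-example-api-key")
plaintext = Fernet(load_or_create_key(key_file)).decrypt(token)
```

Because the key is persisted outside the config file, the encrypted tokens stored at rest survive restarts while remaining opaque on disk.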

Improvements

- Configurable narrator generation length per narration type
- AI Aware conversation mode
- Summarizer: custom instructions, writing style inclusion, short line filtering
- Anthropic: adaptive thinking support, updated model list (opus-4-5/4-6, haiku-4-5)
- Google: gemini-3.1 support
- World editor: generate from topic, quick create state reinforcement, reorganized menus
- Node editor: promote scene modules to global
- Frontend: version mismatch detection, hideable bracket content, required scene name
- TTS: improved pause handling, audio tag support for vocal markers (ElevenLabs v3)
- Writing style template for AI-generated instructions
- Added Kimi.jinja2 LLM prompt template
- Option to disable character names in stopping strings
- Client response length enforcement options
- Graduated token count sliders
- Increased summarizer token threshold max
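Two of the client options above (name-based stopping strings and response length enforcement) can be illustrated with a minimal sketch. The function names, option names, and the `<|im_end|>` stop token are illustrative assumptions, not Talemate's API.

```python
# Hypothetical sketch: stopping strings built from character names
# (optionally disabled) and a hard response-length cap.

def build_stopping_strings(characters, include_character_names=True):
    # Stop generation when the model starts speaking as another character.
    strings = ["<|im_end|>"]
    if include_character_names:
        strings += [f"\n{name}:" for name in characters]
    return strings


def enforce_length(text, max_chars, cut_at_sentence=True):
    # Hard-cap the response; optionally back off to the last sentence end.
    if len(text) <= max_chars:
        return text
    clipped = text[:max_chars]
    if cut_at_sentence:
        end = max(clipped.rfind("."), clipped.rfind("!"), clipped.rfind("?"))
        if end > 0:
            return clipped[: end + 1]
    return clipped


stops = build_stopping_strings(["Alice", "Bob"], include_character_names=False)
short = enforce_length("First sentence. Second sentence runs long.", 20)
```

Disabling name-based stops is useful for models that legitimately emit `Name:` patterns mid-reply; length enforcement then acts as the backstop.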

Bugfixes

- Fix bracket/paren/brace terminators stripped from message ends
- Fix colon in conversation causing content loss
- Fix "Use as reference" navigating to blank page
- Fix avatar regeneration and manual regeneration
- Fix conversation agent ignoring generation length
- Fix duplicate length instructions with reasoning enabled
- Fix trailing newline on message edits
- Fix summarize dialogue sending too much context with layered history
- Fix layered history inspection and construction issues
- Fix empty response handling in summarization
- Fix context ID dot notation with dotted character names
- Fix recursive retry in focal agent
- Fix leading whitespace causing duplicate prepared responses
- Fix summarization not stripping ANALYSIS OF lines
- Fix template group selection/removal in prompt manager
- Fix multiline text in parentheses/brackets parser
- Fix determine_character_name resolution
- Fix character activate/deactivate desyncing creative menu
- Fix character image generation missing context
- Fix LMStudio client not sending token limits
- Fix Recent Scene images on newer Chromium
- Fix sequential reinforcement messages cut off at first linebreak
- Fix reinforcement removal not clearing state
- Fixes #252, #256, #258

Deprecations

- Removed context investigations (replaced by AI-assisted RAG mixin)
- Removed deprecated prompt templates (fix-continuity-errors, fix-exposition, etc.)
- Removed conversation/edit.jinja2, the auto break-repetition feature, and the CLI layered-history reset command
---------

Co-authored-by: theDTV2 <47825738+theDTV2@users.noreply.github.com>
2026-03-15 12:00:57 +02:00