# :material-tune: Presets

Change inference parameters, embedding parameters, and global system prompt overrides.

## :material-matrix: Inference
!!! danger "Advanced settings. Use with caution."

    If these settings don't mean anything to you, you probably shouldn't be changing them. They control the way the AI generates text and can have a big impact on the quality of the output.

    This document will NOT explain what each setting does.



If you're familiar with editing inference parameters from other, similar applications, be aware that TaleMate handles these settings quite differently.

Agents take different actions, and based on the action being taken, one of the presets is selected.

That means that ALL presets are relevant and will be used at some point.

For example, analysis uses the `Analytical` preset, which is configured to be less random and more deterministic.

The `Conversation` preset is used by the conversation agent during dialogue generation.

The other presets are used for various creative tasks.

These are all experimental and will probably change or get merged in the future.
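The practical upshot is that tuning one preset only affects the agent actions mapped to it. As a rough mental model (the preset names, parameter values, and mapping below are illustrative assumptions, not TaleMate's actual code), the selection works roughly like this:

```python
# Hypothetical sketch only: TaleMate's real preset names and values differ.
PRESETS = {
    "analytical": {"temperature": 0.1, "top_p": 0.9},     # deterministic
    "conversation": {"temperature": 0.8, "top_p": 0.95},  # dialogue generation
    "creative": {"temperature": 1.0, "top_p": 1.0},       # creative tasks
}

# Each agent action asks for a preset kind; the client applies those
# sampling parameters to the generation request it sends to the LLM.
ACTION_TO_PRESET = {
    "analyze_scene": "analytical",
    "generate_dialogue": "conversation",
    "create_character": "creative",
}


def params_for(action: str) -> dict:
    """Return the sampling parameters used for a given agent action."""
    return PRESETS[ACTION_TO_PRESET[action]]


print(params_for("generate_dialogue"))  # {'temperature': 0.8, 'top_p': 0.95}
```

In other words, lowering the temperature of the `Analytical` preset changes analysis behaviour without touching dialogue generation.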
## :material-cube-unfolded: Embeddings



Allows you to add, remove, and manage various embedding models for the memory agent to use via chromadb.

--8<-- "docs/user-guide/agents/memory/embeddings.md:embeddings_setup"
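For orientation, this is roughly what an embedding model is used for: text is embedded into vectors, stored in a chromadb collection, and later retrieved by semantic similarity. The snippet below is a generic, self-contained chromadb example with invented collection and document names, not TaleMate's memory agent code.

```python
import chromadb

# Generic chromadb sketch (not TaleMate code): store some scene text and
# retrieve the closest match for a natural-language query.
client = chromadb.Client()  # in-memory client; TaleMate persists its data
collection = client.get_or_create_collection("example_scene_memory")

collection.add(
    ids=["msg-1", "msg-2"],
    documents=[
        "Elara picked up the rusted key near the old well.",
        "The innkeeper warned the party about wolves on the north road.",
    ],
)

results = collection.query(query_texts=["Where was the key found?"], n_results=1)
print(results["documents"][0][0])  # most relevant stored memory
```

Which embedding model performs that vectorization step is exactly what this screen lets you configure.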
## :material-text-box: System Prompts



This allows you to override the global system prompts for the entire application, per overarching prompt kind.

If these are not set, the default system prompt will be read from the templates that exist in `src/talemate/prompts/templates/{agent}/system-*.jinja2`.

This is useful if you want to change the default system prompts for the entire application.

The effect these have varies from model to model.
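Conceptually the resolution order is simple: if an override is configured for a prompt kind, it is used; otherwise the packaged jinja2 template is read. The sketch below illustrates that fallback; the function name, the exact template file name, and the override dictionary are assumptions for illustration, not TaleMate's actual implementation.

```python
from pathlib import Path

TEMPLATE_ROOT = Path("src/talemate/prompts/templates")


def resolve_system_prompt(kind: str, agent: str, overrides: dict) -> str:
    """Illustrative only: prefer a configured override for this prompt kind,
    otherwise fall back to the packaged template for the agent."""
    if overrides.get(kind):
        return overrides[kind]

    # e.g. src/talemate/prompts/templates/conversation/system-conversation.jinja2
    # (the exact file name is an assumption for this sketch)
    template_path = TEMPLATE_ROOT / agent / f"system-{kind}.jinja2"
    return template_path.read_text(encoding="utf-8")
```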
### Prompt types

- Conversation - Used for dialogue generation.
- Narration - Used for narrative generation.
- Creation - Used for other creative tasks like making new characters, locations etc.
- Direction - Used for guidance prompts and general scene direction.
- Analysis (JSON) - Used for analytical tasks that expect a JSON response.
- Analysis - Used for analytical tasks that expect a text response.
- Editing - Used for post-processing tasks like fixing exposition, adding detail etc.
- World State - Used for generating world state information. (This is sort of a mix of analysis and creation prompts.)
- Summarization - Used for summarizing text.
### Normal / Uncensored

Overrides are maintained for both normal and uncensored modes.

Currently the local API clients (koboldcpp, textgenwebui, tabbyapi, lmstudio) will use the uncensored prompts, while the clients targeting official third-party APIs will use the normal prompts.

The uncensored prompts are a workaround to prevent the LLM from refusing to generate text based on topic or content.
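The selection is driven purely by the client type. As a minimal sketch (the set of client identifiers and the function name are assumptions, not TaleMate's code), the decision amounts to:

```python
# Minimal sketch, assuming the client type string identifies local API clients.
LOCAL_API_CLIENTS = {"koboldcpp", "textgenwebui", "tabbyapi", "lmstudio"}


def prompt_variant(client_type: str) -> str:
    """Local API clients get the uncensored overrides; clients for official
    third-party APIs get the normal ones."""
    return "uncensored" if client_type in LOCAL_API_CLIENTS else "normal"


print(prompt_variant("koboldcpp"))  # uncensored
print(prompt_variant("openai"))     # normal
```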
!!! note "Future plans"

    A toggle to switch between normal and uncensored prompts regardless of the client is planned for a future release.