* groq client
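  A minimal sketch of the kind of call the new Groq client wraps, assuming the official `groq` Python SDK and its OpenAI-style chat completion interface; the model name and prompt are illustrative, not what the project actually uses:

  ```python
  import os

  from groq import Groq  # official Groq SDK; pip install groq

  # Assumes GROQ_API_KEY is set in the environment.
  client = Groq(api_key=os.environ["GROQ_API_KEY"])

  # Chat-completion call mirroring the OpenAI interface.
  response = client.chat.completions.create(
      model="llama3-70b-8192",  # illustrative model id
      messages=[{"role": "user", "content": "Say hello in one sentence."}],
      max_tokens=64,
      temperature=0.7,
  )
  print(response.choices[0].message.content)
  ```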
* adjust max token length
* more openai image download fixes
* graphic novel style
* dialogue cleanup
* fix issue where auto-break repetition would trigger on empty responses
* reduce default convo retries to 1
* prompt tweaks
* fix some clients not handling autocomplete well
* screenplay dialogue generation tweaks
* message flags
* better cleanup of redundant change_ai_character calls
* super experimental continuity error fix mode for editor agent
* clamp temperature
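  Illustrative only, with assumed bounds rather than the project's actual ones: clamping here just means pinning a user-supplied temperature into a range the backend accepts, e.g.:

  ```python
  def clamp_temperature(value: float, low: float = 0.1, high: float = 2.0) -> float:
      # Bounds are assumptions for illustration; keep the value inside [low, high].
      return max(low, min(value, high))
  ```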
* tweaks to continuity error fixing and expose to ux
* expose to ux
* allow CmdFixContinuityErrors to work even if editor has check_continuity_errors disabled
* prompt tweak
* support --endofline-- as well
* double coercion client option added
* fix issue with double coercion inserting "None" if not set
* client ux refactor to make room for coercion config
* rest of -- can be treated as *
* disable double coercion when json coercion is active since it kills accuracy
* prompt tweaks
* prompt tweaks
* show coercion status in client list
* change preset for edit_fix_continuity
* interim commit of continuity error handling progress
* tag based presets
* special tokens to keep trailing whitespace if needed
* fix continuity errors finalized for now
* change double coercion formatting
* 0.24.0 and relock
* add groq and cohere to supported services
* linting
* dockerfiles and docker-compose
* containerization fixes
* docker instructions
* readme
* readme
* don't mount src by default, readme
* HF prompt template auto-detection fixes
* auto determine prompt template
* script to start talemate listening only to 127.0.0.1
* prompt tweaks
* auto narrate round every 3 rounds
* tweaks
* Add return to startscreen button
* Only show return to start screen button if scene is active
* improvements to character creation
* dedicated property for scene title separate from the save directory name
* filter out negations into negative keywords
* increase auto narrate delay
* add character portrait keyword
* summarization should ignore most recent message, as it is often regenerated.
* cohere client
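  A rough sketch of what the Cohere client builds on, assuming the official `cohere` Python SDK's v1-style `cohere.Client` chat endpoint; the model name and message are illustrative:

  ```python
  import os

  import cohere  # official Cohere SDK; pip install cohere

  co = cohere.Client(api_key=os.environ["CO_API_KEY"])

  # Single-turn chat call; model and message are placeholders.
  response = co.chat(
      model="command-r-plus",
      message="Describe the scene in two sentences.",
      temperature=0.7,
  )
  print(response.text)
  ```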
* specify python3
* improve viable runpod text gen detection
* fix formatting in template preview
* cohere command-r-plus template (not yet verified as correct)
* mistral client set to decensor
* fix issue with parsing json responses
* command-r prompts updated
* use official mistralai python client
* send max_tokens
* new input autocomplete functionality
* prompt tweaks
* llama 3 templates
* add <|eot_id|> to stopping strings
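  For context on the two entries above: Llama 3's instruct template terminates every turn with `<|eot_id|>`, so that token has to be treated as a stop string or generation runs past the assistant's reply. A minimal sketch of the template shape and the corresponding stop list, not the project's actual template file:

  ```python
  # Llama 3 instruct template: each turn ends with <|eot_id|>.
  LLAMA3_PROMPT = (
      "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
      "{system_prompt}<|eot_id|>"
      "<|start_header_id|>user<|end_header_id|>\n\n"
      "{user_message}<|eot_id|>"
      "<|start_header_id|>assistant<|end_header_id|>\n\n"
  )

  # Stop generation once the model closes its own turn.
  STOPPING_STRINGS = ["<|eot_id|>", "<|end_of_text|>"]
  ```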
* prompt tweak
* tooltip
* llama-3 identifier
* command-r and command-r plus prompt identifiers
* text-gen-webui client tweaks to make llama3 eos tokens work correctly
* better llama-3 detection
* better llama-3 finalizing of parameters
* streamline client prompt finalizers
* reduce YY model smoothing factor from 0.3 to 0.1 for text-generation-webui client
* relock
* linting
* set 0.23.0
* add new gpt-4 models
* set 0.23.0
* add note about connecting to text-gen-webui from docker
* fix openai image generation no longer working
* default to concept_art
* linux dev instance shortcuts
* add voice samples to gitignore
* direction mode: inner monologue
* actor direction fixes
* py script support for scene logic
* fix end_simulation call
* port sim suite logic to python
* remove dupe log
* fix typing
* section off the text
* fix end simulation command
* simulation goal, prompt tweaks
* prompt tweaks
* dialogue format improvements
* director action logged with message
* call director action log and other fixes
* generate character dialogue instructions, prompt fixes, director action ux
* fix question / answer call
* generate dialogue instructions when loading from character cards
* more dialogue format improvements
* set scene content context more reliably.
* fix inner monologue perspective
* conversation prompt should honor the client's decensor setting
* fix comfyui checkpoint list not loading
* more dialogue format fixes
* prompt tweaks
* fix sim suite group characters, prompt fixes
* npm relock
* handle inanimate objects, handle player name change issues
* don't rename details if the original name was "You"
* As the conversation goes on, dialogue instructions are moved further back in the context so they have a weaker effect on immediate generations.
* add more context to character creation prompt
* fix select next talking actor when natural language flow is turned on and the LLM returns multiple character names
* prompt fixes for dialogue generation
* summarization fixes
* default to script format
* separate dialogue prompt by formatting style, tweak conversation system prompt
* remove cruft
* add gen format to agent details
* relock
* relock
* prep 0.22.0
* add claude-3-haiku-20240307
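  A minimal sketch of calling the newly added model, assuming the official `anthropic` Python SDK's Messages API; the prompt is illustrative:

  ```python
  import os

  from anthropic import Anthropic  # official Anthropic SDK; pip install anthropic

  client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

  # Messages API call against the newly added model.
  message = client.messages.create(
      model="claude-3-haiku-20240307",
      max_tokens=256,
      messages=[{"role": "user", "content": "Summarize the last scene in one paragraph."}],
  )
  print(message.content[0].text)
  ```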
* readme