* set 0.29.0

* tweaks for dig layered history (wip)

* move director agent to directory

* relock

* remove "none" from dig_layered_history response

* determine character development

* update character sheet from character development (wip)

* org imports

* alert outdated template overrides during startup

* editor controls normalization of exposition

* dialogue formatting refactor

* fix narrator.clean_result forcing * regardless of editor fix exposition setting

* move more of the dialogue cleanup logic into the editor fix exposition handlers

* remove cruft

* change to normal selects and add some margin

* move formatting option up

* always strip partial sentences

* separates exposition fixes from other dialogue cleanup operations, since we still want those

* add novel formatting style

* honor formatting config when no markers are supplied

* fix issue where sometimes character message formatting would miss character name

* director can now guide actors through scene analysis

* style fixes

* typo

* select correct system message on direction type

* prompt tweaks

* disable by default

* add support for dynamic instruction injection and include missing guide for internal note usage

* change favicon and also indicate busyness through favicon

* img

* support xtc, dry and smoothing in text gen webui

* prompt tweaks

* support xtc, dry, smoothing in koboldcpp client

* reorder

* dry, xtc and smoothing factor exposed to tabby api client

* urls to third party API documentation

* remove bos token

* add missing preset

* focal

* focal progress

* focal progress and generated suggestions progress

* fix issue with discard all suggestions

* apply suggestions

* move suggestion ux into the world state manager

* support generation options for suggestion generation

* unused import

* refactor focal to json based approach

* focal and character suggestion tweaks

* remove cruft

* remove cruft

* relock

* prompt tweaks

* layout spacing updates

* ux elements for removal of scenes from quick load menu

* context investigation refactor WIP

* context investigation refactor

* context investigation refactor

* context investigation refactor

* cleanup

* move scene analysis to summarizer agent

* remove deprecated context investigation logic

* context investigation refactor continued - split into separate file for easier maintenance

* allow direct specification of response context length

* context investigation and scene analysis progress

* change analysis length config to number

* remove old dig-layered-history templates

* summarizer - deep analysis is only available if there is layered history

* move world_state agent to dedicated directory

* remove unused imports

* automatic character progression WIP

* character suggestions progress

* app busy flag based on agent busyness

* indicate suggestions in world state overview

* fix issue with user input cleanup

* move conversation agent to a dedicated submodule

* Response in action analyze_text_and_extract_context is too short #162

* move narrator agent to its own submodule

* narrator improvements WIP

* narration improvements WIP

* fix issue with regen of character exit narration

* narration improvements WIP

* prompt tweaks

* last_message_of_type can set max iterations

* fix multiline parsing

* prompt tweaks

* director guide actors based on scene analysis

* director guidance for actors

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix automatic character proposals not propagating to the ux

* fix analysis length

* support director guidance in legacy chat format

* typo

* prompt tweaks

* prompt tweaks

* error handling

* length config

* prompt tweaks

* typo

* remove cruft

* prompt tweak

* prompt tweak

* time passage style changes

* remove cruft

* deep analysis context investigations honor call limit

* refactor conversation agent long term memory to use new memory rag mixin - also streamline prompts

* tweaks to RAG mixin agent config

* fix narration highlighting

* context investigation fixes
director narration guidance
summarization tweaks

* director guide narration progress
fix context investigation issues that would cause looping of investigations and failure to dig into the correct layers

* prompt tweaks

* summarization improvements

* separate deep analysis chapter selection from analysis into its own prompt

* character entry and exit

* cache analysis per subtype and some narrator prompt tweaks

* separate layered history logic into its own summarizer mixin and expose some additional options

* scene can now set an overall writing style using writing style templates
narrator option to enable writing style

* narrate query writing style support

* scene tools - narrator actions refactor to handler and own component

* narrator query / look at narrations emitted as context investigation messages
refactor context investigation message display
scene message meta data object

* include narrative direction

* improve context investigation message prompt insert

* reorg supported parameters

* fix bug when no message history exists

* WIP make regenerate work nicely with director guidance

* WIP make regenerate work nicely with director guidance

* regenerate conversation fixes

* help text

* ux tweaks

* relock

* turn off deep analysis and context investigations by default

* long term memory options for director and summarizer

* long term memory caching

* fix summarization cache toggle not showing up in ux

* ux tweaks

* layered history summarization includes character information for mentioned characters

* deepseek client added

* Add fork button to narrator message

* analyze and guidance support for time passage narration

* cache based on message fingerprint instead of id

* configurable system prompts WIP

* configurable system prompts WIP

* client overrides for system prompts wired to ux

* system prompt overhaul

* fix issue with unknown system prompt kind

* add button to manually request dynamic choices from the director
move the generate choices logic of the director agent to its own submodule

* remove cruft

* 30 may be too long and is causing the client to disappear temporarily

* support dynamic choice generation for non-player characters

* enable `actor` tab for player characters

* creator agent now has access to rag tools
improve acting instruction generation

* client timeout fixes

* fix issue where scene removal menu stayed open after remove

* expose scene restore functionality to ux

* create initial restore point

* fix creator extra-context template

* didn't mean to remove this

* intro scene should be edited through world editor

* fix alert

* fix partial quotes regardless of editor setting
director guidance for conversation reminds to put speech in quotes

* fix @ instructions not being passed through to director guidance prompt

* anthropic model list updated

* default off

* cohere model list updated

* reset actAs on next scene load

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* remove debug cruft

* relock

* docs on changing host / port

* fix issue with narrator / director actions not available on fresh install

* fix issue with long content classification determination result

* take this reminder to put speech into quotes out for now, it seems to do more harm than good

* fix some remaining issues with auto exposition fixes

* prompt tweaks

* prompt tweaks

* fix issue during reload

* expensive and warning ux passthrough for agent config

* layered summary analysis defaults to on

* what's new info block added

* docs

* what's new updated

* remove old images

* old img cleanup script

* prompt tweaks

* improve auto prompt template detection via huggingface

* add gpt-4o-realtime-preview
add gpt-4o-mini-realtime-preview

* add o1 and o3-mini

* fix o1 and o3

* fix o1 and o3

* more o1 / o3 fixes

* o3 fixes
veguAI committed 2025-02-01 17:44:06 +02:00 (committed by GitHub)
commit 113553c306 (parent 736e6702f5)
GPG key ID: B5690EEEBB952194
319 changed files with 11491 additions and 4604 deletions

@@ -2,10 +2,10 @@
 Roleplay with AI with a focus on strong narration and consistent world and game state tracking.
-|![Screenshot 3](docs/img/0.17.0/ss-1.png)|![Screenshot 3](docs/img/0.17.0/ss-2.png)|
+|![Screenshot 3](docs/img/ss-1.png)|![Screenshot 3](docs/img/ss-2.png)|
 |------------------------------------------|------------------------------------------|
-|![Screenshot 4](docs/img/0.17.0/ss-4.png)|![Screenshot 1](docs/img/0.19.0/Screenshot_15.png)|
+|![Screenshot 4](docs/img/ss-4.png)|![Screenshot 1](docs/img/Screenshot_15.png)|
-|![Screenshot 2](docs/img/0.19.0/Screenshot_16.png)|![Screenshot 3](docs/img/0.19.0/Screenshot_17.png)|
+|![Screenshot 2](docs/img/Screenshot_16.png)|![Screenshot 3](docs/img/Screenshot_17.png)|
 ## Core Features

docs/cleanup.py (new file, 166 lines)

@@ -0,0 +1,166 @@
import os
import re
import subprocess
from pathlib import Path
import argparse
def find_image_references(md_file):
"""Find all image references in a markdown file."""
with open(md_file, 'r', encoding='utf-8') as f:
content = f.read()
pattern = r'!\[.*?\]\((.*?)\)'
matches = re.findall(pattern, content)
cleaned_paths = []
for match in matches:
path = match.lstrip('/')
if 'img/' in path:
path = path[path.index('img/') + 4:]
# Only keep references to versioned images
parts = os.path.normpath(path).split(os.sep)
if len(parts) >= 2 and parts[0].replace('.', '').isdigit():
cleaned_paths.append(path)
return cleaned_paths
def scan_markdown_files(docs_dir):
"""Recursively scan all markdown files in the docs directory."""
md_files = []
for root, _, files in os.walk(docs_dir):
for file in files:
if file.endswith('.md'):
md_files.append(os.path.join(root, file))
return md_files
def find_all_images(img_dir):
"""Find all image files in version subdirectories."""
image_files = []
for root, _, files in os.walk(img_dir):
# Get the relative path from img_dir to current directory
rel_dir = os.path.relpath(root, img_dir)
# Skip if we're in the root img directory
if rel_dir == '.':
continue
# Check if the immediate parent directory is a version number
parent_dir = rel_dir.split(os.sep)[0]
if not parent_dir.replace('.', '').isdigit():
continue
for file in files:
if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.svg')):
rel_path = os.path.relpath(os.path.join(root, file), img_dir)
image_files.append(rel_path)
return image_files
def grep_check_image(docs_dir, image_path):
"""
Check if versioned image is referenced anywhere using grep.
Returns True if any reference is found, False otherwise.
"""
try:
# Split the image path to get version and filename
parts = os.path.normpath(image_path).split(os.sep)
version = parts[0] # e.g., "0.29.0"
filename = parts[-1] # e.g., "world-state-suggestions-2.png"
# For versioned images, require both version and filename to match
version_pattern = f"{version}.*{filename}"
try:
result = subprocess.run(
['grep', '-r', '-l', version_pattern, docs_dir],
capture_output=True,
text=True
)
if result.stdout.strip():
print(f"Found reference to {image_path} with version pattern: {version_pattern}")
return True
except subprocess.CalledProcessError:
pass
except Exception as e:
print(f"Error during grep check for {image_path}: {e}")
return False
def main():
parser = argparse.ArgumentParser(description='Find and optionally delete unused versioned images in MkDocs project')
parser.add_argument('--docs-dir', type=str, required=True, help='Path to the docs directory')
parser.add_argument('--img-dir', type=str, required=True, help='Path to the images directory')
parser.add_argument('--delete', action='store_true', help='Delete unused images')
parser.add_argument('--verbose', action='store_true', help='Show all found references and files')
parser.add_argument('--skip-grep', action='store_true', help='Skip the additional grep validation')
args = parser.parse_args()
# Convert paths to absolute paths
docs_dir = os.path.abspath(args.docs_dir)
img_dir = os.path.abspath(args.img_dir)
print(f"Scanning markdown files in: {docs_dir}")
print(f"Looking for versioned images in: {img_dir}")
# Get all markdown files
md_files = scan_markdown_files(docs_dir)
print(f"Found {len(md_files)} markdown files")
# Collect all image references
used_images = set()
for md_file in md_files:
refs = find_image_references(md_file)
used_images.update(refs)
# Get all actual images (only from version directories)
all_images = set(find_all_images(img_dir))
if args.verbose:
print("\nAll versioned image references found in markdown:")
for img in sorted(used_images):
print(f"- {img}")
print("\nAll versioned images in directory:")
for img in sorted(all_images):
print(f"- {img}")
# Find potentially unused images
unused_images = all_images - used_images
# Additional grep validation if not skipped
if not args.skip_grep and unused_images:
print("\nPerforming additional grep validation...")
actually_unused = set()
for img in unused_images:
if not grep_check_image(docs_dir, img):
actually_unused.add(img)
if len(actually_unused) != len(unused_images):
print(f"\nGrep validation found {len(unused_images) - len(actually_unused)} additional image references!")
unused_images = actually_unused
# Report findings
print("\nResults:")
print(f"Total versioned images found: {len(all_images)}")
print(f"Versioned images referenced in markdown: {len(used_images)}")
print(f"Unused versioned images: {len(unused_images)}")
if unused_images:
print("\nUnused versioned images:")
for img in sorted(unused_images):
print(f"- {img}")
if args.delete:
print("\nDeleting unused versioned images...")
for img in unused_images:
full_path = os.path.join(img_dir, img)
try:
os.remove(full_path)
print(f"Deleted: {img}")
except Exception as e:
print(f"Error deleting {img}: {e}")
print("\nDeletion complete")
else:
print("\nNo unused versioned images found!")
if __name__ == "__main__":
main()
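A typical invocation of the script above (paths are illustrative and assume it is run from the repository root) might look like this:

```bash
# Dry run: report unused versioned images without deleting anything
python docs/cleanup.py --docs-dir docs --img-dir docs/img --verbose

# Once the report looks correct, delete the unused images
python docs/cleanup.py --docs-dir docs --img-dir docs/img --delete
```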

@@ -0,0 +1,14 @@
## Third Party API docs
### Chat completions
- [Anthropic](https://docs.anthropic.com/en/api/messages)
- [Cohere](https://docs.cohere.com/reference/chat)
- [Google AI](https://ai.google.dev/api/generate-content#v1beta.GenerationConfig)
- [Groq](https://console.groq.com/docs/api-reference#chat-create)
- [KoboldCpp](https://lite.koboldai.net/koboldcpp_api#/api/v1)
- [LMStudio](https://lmstudio.ai/docs/api/rest-api)
- [Mistral AI](https://docs.mistral.ai/api/)
- [OpenAI](https://platform.openai.com/docs/api-reference/completions)
- [TabbyAPI](https://theroyallab.github.io/tabbyAPI/#operation/chat_completion_request_v1_chat_completions_post)
- [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/openai/typing.py)

@@ -0,0 +1,3 @@
nav:
- change-host-and-port.md
- ...

@@ -0,0 +1,102 @@
# Changing host and port
## Backend
By default, the backend listens on `localhost:5050`.
To run the server on a different host and port, you need to change the values passed to the `--host` and `--port` parameters during startup and also make sure the frontend knows the new values.
### Changing the host and port for the backend
#### :material-linux: Linux
Copy `start.sh` to `start_custom.sh` and edit the `--host` and `--port` parameters in the `uvicorn` command.
```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 1234
```
#### :material-microsoft-windows: Windows
Copy `start.bat` to `start_custom.bat` and edit the `--host` and `--port` parameters in the `uvicorn` command.
```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 1234"
```
### Letting the frontend know about the new host and port
Copy `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.production.local` and edit the `VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL`.
```env
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=ws://localhost:1234
```
Next rebuild the frontend.
```bash
cd talemate_frontend
npm run build
```
### Start the backend and frontend
Start the backend and frontend as usual.
#### :material-linux: Linux
```bash
./start_custom.sh
```
#### :material-microsoft-windows: Windows
```batch
start_custom.bat
```
## Frontend
By default, the frontend listens on `localhost:8080`.
To change the frontend host and port, you need to change the values passed to the `--frontend-host` and `--frontend-port` parameters during startup.
### Changing the host and port for the frontend
#### :material-linux: Linux
Copy `start.sh` to `start_custom.sh` and edit the `--frontend-host` and `--frontend-port` parameters.
```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5055 \
--frontend-host localhost --frontend-port 8082
```
#### :material-microsoft-windows: Windows
Copy `start.bat` to `start_custom.bat` and edit the `--frontend-host` and `--frontend-port` parameters.
```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 5055 --frontend-host localhost --frontend-port 8082"
```
### Start the backend and frontend
Start the backend and frontend as usual.
#### :material-linux: Linux
```bash
./start_custom.sh
```
#### :material-microsoft-windows: Windows
```batch
start_custom.bat
```

(Binary image files changed: a number of old images were removed and new images added; contents not shown.)

@@ -50,4 +50,49 @@
 Tracked states occassionally re-inforce the state of the world or a character. This re-inforcement is kept in the context sent to the AI during generation, giving it a better understanding about the current truth of the world.
 Some examples could be, tracking a characters physical state, time of day, or the current location of a character.
 <!--- --8<-- [end:what_is_a_tracked_state] -->
+<!--- --8<-- [start:agent_long_term_memory_settings] -->
+![Agent long term memory settings](/talemate/img/0.29.0/agent-long-term-memory-settings.png)
+If enabled, relevant information will be injected into the context based on relevancy, using the [Memory Agent](/talemate/user-guide/agents/memory).
+##### Context Retrieval Method
+What method to use for long term memory selection.
+- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
+- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
+- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.
+##### Number of queries
+This setting means different things depending on the context retrieval method.
+- For `Context queries based on recent context` this is the number of messages to consider.
+- For `Context queries generated by AI` this is the number of queries to generate.
+- For `AI compiled questions and answers` this is the number of questions to generate.
+##### Answer length
+The maximum response length of the generated answers.
+##### Cache
+Enables the agent-wide cache for long term memory retrieval, meaning any agents that share the same long term memory settings will share the same cache. This can be useful to reduce the number of queries to the memory agent.
+<!--- --8<-- [end:agent_long_term_memory_settings] -->
+<!--- --8<-- [start:character_change_proposals] -->
+When a proposal has been generated, and the character is currently acknowledged in the world state, a lightbulb :material-lightbulb-on: will appear next to the character name.
+![Character change proposal](/talemate/img/0.29.0/world-state-suggestions-1.png)
+Click the name to expand the character entry and then click the :material-lightbulb-on: to view the proposal.
+![Character change proposal expanded](/talemate/img/0.29.0/world-state-suggestions-2.png)
+You will be taken to the world editor suggestions tab where you can view the proposal and accept or reject it.
+![Character change proposal expanded](/talemate/img/0.29.0/world-editor-suggestions-1.png)
+<!--- --8<-- [end:character_change_proposals] -->

@@ -2,7 +2,7 @@
 ## General
-![Conversation agent general settings](/talemate/img/0.28.0/conversation-general-settings.png)
+![Conversation agent general settings](/talemate/img/0.29.0/conversation-general-settings.png)
 !!! note "Inference perameters"
     Inference parameters are NOT configured through any individual agent.
@@ -29,21 +29,9 @@ Maximum turns the AI gets in succession, before the player gets a turn no matter what.
 The maximum number of turns a character can go without speaking before the AI will force them to speak.
-##### Long Term Memory
-If checked will inject relevant information into the context using relevancy through the [Memory Agent](/talemate/user-guide/agents/memory).
-##### Context Retrieval Method
-What method to use for long term memory selection
-- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
-- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
-- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.
 ## Generation
-![Conversation agent generation settings](/talemate/img/0.28.0/conversation-generation-settings.png)
+![Conversation agent generation settings](/talemate/img/0.29.0/conversation-generation-settings.png)
 ##### Format
@@ -76,29 +64,6 @@ General, broad isntructions for ALL actors in the scene. This will be appended t
 If > 0 will offset the instructions for the actor (both broad and character specific) into the history by that many turns. Some LLMs struggle to generate coherent continuations if the scene is interrupted by instructions right before the AI is asked to generate dialogue. This allows to shift the instruction backwards.
-## Context Investigation
-A new :material-flask: experimental feature introduced in `0.28.0` alongside the [layered history summarization](/talemate/user-guide/agents/summarizer/settings#layered-history).
-If enabled, the AI will investigate the history for relevant information to include in the conversation prompt. Investigation works by digging through the various layers of the history, and extracting relevant information based on the final message in the scene.
-This can be **very slow** depending on how many layers are enabled and generated. It can lead to a great improvement in the quality of the generated dialogue, but it currently still is a mixed bag. A strong LLM is almost a hard requirement for it produce anything useful. 22B+ models are recommended.
-![Conversation agent context investigation settings](/talemate/img/0.28.0/conversation-context-investigation-settings.png)
-!!! note "Tips"
-    - This is experimental and results WILL vary in quality.
-    - Requires a strong LLM. 22B+ models are recommended.
-    - Good, clean summarization of the history is a hard requirement for this to work well. Regenerate your history if it's messy. (World Editor -> History -> Regenerate)
-##### Enable context investigation
-Enable or disable the context investigation feature.
-##### Trigger
-Allows you to specify when the context investigation should be triggered.
-- Agent decides - the AI will decide when to trigger the context investigation based on the scene.
-- Only when a question is asked - the AI will only trigger the context investigation when a question is asked.
+## Long Term Memory
+--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"

@@ -1,10 +1,10 @@
 # Overview
-The director agent is responsible for orchestrating the scene and directing characters. In the future it will shift / expose more of a game master role, controlling the progression of the story.
-This currently happens in a very limited way and is very much a work in progress.
-It rquires a text-generation client to be configured and assigned.
-!!! warning "Experimental"
-    This agent is currently experimental and may not work as expected.
+The director agent is responsible for guiding the scene progression and generating dynamic actions.
+### Dynamic Actions
+Will occasionally generate clickable choices for the user during scene progression. This can be used to allow the user to make choices that will affect the scene or the story in some way without having to manually type out the choice.
+### Guide Scene
+Will use the summarizer agent's scene analysis to guide characters and the narrator for the next generation, hopefully improving the quality of the generated content.

@@ -2,7 +2,7 @@
 ## General
-![Director agent settings](/talemate/img/0.28.0/director-general-settings.png)
+![Director agent settings](/talemate/img/0.29.0/director-general-settings.png)
 ##### Direct
@@ -35,11 +35,15 @@ If `Direction` is selected, the actor will be given the direction as a direct in
 If `Inner Monologue` is selected, the actor will be given the direction as a thought.
+## Long Term Memory
+--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"
 ## Dynamic Actions
 Dynamic actions are introduced in `0.28.0` and allow the director to generate a set of clickable choices for the player to choose from.
-![Director agent dynamic actions settings](/talemate/img/0.28.0/director-dynamic-actions-settings.png)
+![Director agent dynamic actions settings](/talemate/img/0.29.0/director-dynamic-actions-settings.png)
 ##### Enable Dynamic Actions
@@ -63,4 +67,25 @@ If this is checked and you pick an action, the scene will NOT automatically pass
 Allows you to provide extra specific instructions to director on how to generate the dynamic actions.
 For example you could provide a list of actions to choose from, or a list of actions to avoid. Or specify that you always want a certain action to be included.
+## Guide Scene
+![Director agent guide scene settings](/talemate/img/0.29.0/director-guide-scene-settings.png)
+The director can use the summarizer agent's scene analysis to guide characters and the narrator for the next generation, hopefully improving the quality of the generated content.
+!!! danger "This may break dumber models"
+    The guidance generated is inserted **after** the message history and **right before** the next generation. Some older models may struggle with this and generate incoherent responses.
+##### Guide Actors
+If enabled the director will guide the actors in the scene.
+##### Guide Narrator
+If enabled the director will guide the narrator in the scene.
+##### Max. Guidance Length
+The maximum number of tokens for the guidance (i.e., how long the guidance should be).

@@ -1,8 +1,6 @@
 # Overview
-The editor improves generated text by making sure quotes and actions are correctly formatted.
-Can also add additional details and attempt to fix continuity issues.
-!!! warning "Experimental"
+The editor agent is responsible for post-processing the generated content. It can be used to add additional detail to dialogue and fix exposition markers.
+!!! example "Experimental"
     This agent is currently experimental and may not work as expected.

@@ -1,17 +1,21 @@
 # Settings
-![Editor agent settings](/talemate/img/0.26.0/editor-agent-settings.png)
+![Editor agent settings](/talemate/img/0.29.0/editor-agent-settings.png)
 ##### Fix exposition
 If enabled the editor will attempt to fix exposition in the generated dialogue.
-That means it will ensure that actions are correctly encased in `*` and that quotes are correctly applied to spoken text.
+It will do this based on the selected format.
 ###### Fix narrator messages
 Applies the same rules as above to the narrator messages.
+###### Fix user input
+Applies the same rules as above to the user input messages.
 ##### Add detail
 Will take the generate message and attempt to add more detail to it.
@@ -20,7 +24,7 @@ Will take the generate message and attempt to add more detail to it.
 Will attempt to fix continuity errors in the generated text.
-!!! warning "Experimental, and doesn't work most of the time"
+!!! example "Experimental, and doesn't work most of the time"
     There is something about accurately identifying continuity errors that is currently very
     difficult for AI to do. So this feature is very hit and miss. More miss than hit.

@@ -6,6 +6,7 @@ You can manage your available embeddings through the application settings.
 In the settings dialogue go to **:material-tune: Presets** and then **:material-cube-unfolded: Embeddings**.
+<!--- --8<-- [start:embeddings_setup] -->
 ## Pre-configured Embeddings
 ### all-MiniLM-L6-v2
@@ -78,4 +79,5 @@ This is a tag to mark the embedding as needing a GPU. It doesn't actually do any
 ##### Local
 This is a tag to mark the embedding as local. It doesn't actually do anything, but can be useful for sorting later on.
+<!--- --8<-- [end:embeddings_setup] -->

@@ -1,5 +1,9 @@
 # Overview
-The narrator agent handles the generation of narrative text. It is responsible for setting the scene, describing the environment, and providing context to the player.
-It requires a client to be connected to an AI text generation API.
+The narrator agent handles the generation of narrative text. This could be progressing the story, describing the scene, or providing exposition and answers to questions.
+### :material-script: Content
+The narrator agent is the first agent that can be influenced by one of your writing style templates.
+Make sure a writing style is selected in the [Scene Settings](/talemate/user-guide/world-editor/scene/settings) to apply it to the generated content.

@@ -1,12 +1,12 @@
 # Settings
-![Narrator agent settings](/talemate/img/0.26.0/narrator-agent-settings.png)
+## :material-cog: General
+![Narrator agent settings](/talemate/img/0.29.0/narrator-general-settings.png)
 ##### Client
 The text-generation client to use for conversation generation.
 ##### Generation Override
 Checkbox that exposes further settings to configure the conversation agent generation.
@@ -19,9 +19,21 @@ Extra instructions for the generation. This should be short and generic as it wi
 If checked and talemate detects a repetitive response (based on a threshold), it will automatically re-generate the resposne with increased randomness parameters.
-##### Narrate time passaage
-Whenever you indicate a passage of time using the [Scene tools](/talemate/user-guide/scenario-tools), the narrator will automatically narrate the passage of time.
+## :material-script-text: Content
+![Narrator agent content settings](/talemate/img/0.29.0/narrator-content-settings.png)
+The narrator agent is the first agent that can be influenced by one of your writing style templates.
+Enable this setting to apply a writing style to the generated content.
+Make sure a writing style is selected in the [Scene Settings](/talemate/user-guide/world-editor/scene/settings) to apply it to the generated content.
+## :material-clock-fast: Narrate time passage
+![Narrator agent time passage settings](/talemate/img/0.29.0/narrator-narrate-time-passage-settings.png)
+The narrator can automatically narrate the passage of time when you indicate it using the [Scene tools](/talemate/user-guide/scenario-tools).
 ##### Guide time narration via prompt
@@ -29,6 +41,12 @@ Wheneever you indicate a passage of time using the [Scene tools](/talemate/user-
 This allows you to explain what happens during the passage of time.
-##### Narrate after dialogue
-Whenever a character speaks, the narrator will automatically narrate the scene after.
+## :material-forum-plus-outline: Narrate after dialogue
+![Narrator agent after dialogue settings](/talemate/img/0.29.0/narrator-narrate-after-dialogue-settings.png)
+Whenever a character speaks, the narrator will automatically narrate the scene after.
+## :material-brain: Long Term Memory
+--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"

@@ -1,10 +1,24 @@
 # Overview
-The summarization agent will regularly summarize the current progress of the scene.
-This summarization happens at two points:
-1. When a token threshold is reached.
-2. When a time advance is triggered.
-It rquires a text-generation client to be configured and assigned.
+The summarizer agent is responsible for summarizing the generated content and other analytical tasks.
+### :material-forum: Dialogue summarization
+Dialogue is summarized regularly to keep the conversation backlogs from getting too large.
+### :material-layers: Layered history
+Summarized dialogue is then further summarized into a layered history, where each layer represents a different level of detail.
+Maintaining a layered history should theoretically allow keeping the entire history in the context, albeit at a lower level of detail the further back in history you go.
+### :material-lightbulb: Scene analysis
+As of version 0.29 the summarizer agent also has the ability to analyze the scene and provide this analysis to other agents to hopefully improve the quality of the generated content.
+### :material-layers-search: Context investigation
+Context investigations are when the summarizer agent digs into the layers of the history to find context that may be relevant to the current scene.
+!!! danger "This can result in many extra prompts being generated."
+    This can be useful for generating more contextually relevant content, but can also result in a lot of extra prompts being generated.
+    This is currently only used when scene analysis with **deep analysis** is enabled.
+!!! example "Experimental"
+    The results of this are sort of hit and miss. It can be useful, but it can also be a bit of a mess and actually make the generated content worse. (e.g., context isn't correctly identified as being relevant, which a lot of LLMs still seem to struggle with in my testing.)

@@ -4,7 +4,7 @@
 General summarization settings.
-![Summarizer agent general settings](/talemate/img/0.28.0/summarizer-general-settings.png)
+![Summarizer agent general settings](/talemate/img/0.29.0/summarizer-general-settings.png)
 ##### Summarize to long term memory archive
@@ -37,7 +37,7 @@ Not only does this allow to keep more context in the history, albeit with earlie
 Right now this is considered an experimental feature, and whether or not its feasible in the long term will depend on how well it works in practice.
-![Summarizer agent layered history settings](/talemate/img/0.28.0/summarizer-layered-history-settings.png)
+![Summarizer agent layered history settings](/talemate/img/0.29.0/summarizer-layered-history-settings.png)
 ##### Enable layered history
@@ -58,4 +58,76 @@ The maximum number of layers that can be created. Raising this limit past 3 is l
 Smaller LLMs may struggle with accurately summarizing long texts. This setting will split the text into chunks and summarize each chunk separately, then stitch them together in the next layer. If you're using a strong LLM (70B+), you can try setting this to be the same as the threshold.
 Setting this higher than the token threshold does nothing.
+##### Chunk size
+During the summarization itself, the text will be further split into chunks where each chunk is summarized separately. This setting controls the size of those chunks. This is a character length setting, **NOT** token length.
+##### Enable analyzation
+Enables analysis of the chunks and their relationship to each other before summarization. This can greatly improve the quality of the summarization, but will also result in a bigger size requirement of the output.
+##### Maximum response length
+The maximum length of the response that the summarizer agent will generate.
+!!! info "Analyzation requires a bigger length"
+    If you enable analyzation, you should set this high enough that the response has room for both the analysis and the summary of all the chunks.
+## Long term memory
+--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"
+## Scene Analysis
+![Summarizer agent scene analysis settings](/talemate/img/0.29.0/summarizer-scene-analysis-settings.png)
+When enabled, scene analysis will be performed during conversation and narration tasks. This analysis will be used to provide additional context to other agents, which should hopefully improve the quality of the generated content.
+##### Length of analysis
+The maximum number of tokens for the response (i.e., how long the analysis should be).
+##### Conversation
+Enable scene analysis for conversation tasks.
+##### Narration
+Enable scene analysis for narration tasks.
+##### Deep analysis
+Enable context investigations based on the initial analysis.
+##### Max. content investigations
+The maximum number of content investigations that can be performed. This is a safety feature to prevent the AI from going overboard with the investigations. The number here is per layer in the history. So if this is set to 1 and there are 2 layers, this will perform 2 investigations.
+##### Cache analysis
+Cache the analysis results for the scene. Enable this to prevent regenerating the analysis when you regenerate the most recent output.
+!!! info
+    This cache is anchored to the last message in the scene (excluding the current message). Editing that message will invalidate the cache.
+## Context investigation
+![Summarizer agent context investigation settings](/talemate/img/0.29.0/summarizer-context-investigation-settings.png)
+When enabled, the summarizer agent will dig into the layers of the history to find context that may be relevant to the current scene.
+!!! info
+    This is currently only triggered during deep analysis as part of the scene analysis. Disabling context investigation will also disable the deep analysis.
+##### Answer length
+The maximum length of the answer that the AI will generate.
+##### Update method
+How to update the context with the new information.
+- `Replace` - replace the context with the new information
+- `Smart merge` - merge the new information with the existing context (uses another LLM prompt to generate the merge)

@@ -4,4 +4,12 @@ The world state agent handles the world state snapshot generation and reinforcem
 It requires a text-generation client to be configured and assigned.
 --8<-- "docs/snippets/tips.md:what_is_a_tracked_state"
+### :material-earth: World State
+The world state is a snapshot of the current state of the world. This can include things like the current location, the time of day, the weather, the state of the characters, etc.
+### :material-account-switch: Character Progression
+The world state agent can be used to regularly check the progression of the scene against old character information and then propose changes to a character's description and attributes based on how the story has progressed.

@@ -1,6 +1,8 @@
 # Settings
-![World state agent settings](/talemate/img/0.26.0/world-state-agent-settings.png)
+## General
+![World state agent settings](/talemate/img/0.29.0/world-state-general-settings.png)
 ##### Update world state
@@ -24,4 +26,24 @@ Will attempt to evaluate and update any due [conditional context pins](/talemate
 ###### Turns
 How many turns to wait before the conditional context pins are updated.
+## Character Progression
+![World state agent character progression settings](/talemate/img/0.29.0/world-state-character-progression-settings.png)
+##### Frequency of checks
+How often to check for character progression.
+This is in terms of full rounds, not individual turns.
+##### Propose as suggestions
+If enabled, the proposed changes will be presented as suggestions to the player.
+--8<-- "docs/snippets/tips.md:character_change_proposals"
+##### Player character
+Enable this to have the player character be included in the progression checks.

@@ -0,0 +1,7 @@
# :material-palette-outline: Appearance
## :material-script: Scene
![App settings - Appearance - Scene](/talemate/img/0.29.0/app-settings-appearance-scene.png)
Allows you some control over how the message history is displayed.

@@ -0,0 +1,5 @@
# :material-application-outline: Application
![App settings - Application](/talemate/img/0.29.0/app-settings-application.png)
Configure various API keys for integration with external services. (OpenAI, Anthropic, etc.)

@@ -0,0 +1,26 @@
# Game
## :material-cog: General
![App settings - Game - General](/talemate/img/0.29.0/app-settings-game-general.png)
##### Auto save
If enabled the scene will save every time the game loop completes. This can also be toggled on or off directly from the main screen.
If a scene is set to be immutable, this setting will be disabled.
##### Auto progress
If enabled the game will automatically progress to the next character after your turn. This can also be toggled on or off directly from the main screen.
##### Max backscroll
The maximum number of messages that will be displayed in the backscroll. This is a display only setting and does not affect the game in any way. (If you find your interface feels sluggish, try reducing this number.)
## :material-human-edit: Default character
![App settings - Game - Default Character](/talemate/img/0.29.0/app-settings-game-default-character.png)
Lets you manage a basic default character.
This is only relevant when loading scenes that do not come with a default character. (e.g., mostly from other application exports, like ST character cards.)

@@ -0,0 +1,70 @@
# :material-tune: Presets
Change inference parameters, embedding parameters and global system prompt overrides.
## :material-matrix: Inference
!!! danger "Advanced settings. Use with caution."
If these settings don't mean anything to you, you probably shouldn't be changing them. They control the way the AI generates text and can have a big impact on the quality of the output.
This document will NOT explain what each setting does.
![App settings - Application](/talemate/img/0.29.0/app-settings-presets-inference.png)
If you're familiar with editing inference parameters from other similar applications, be aware that there is a significant difference in how TaleMate handles these settings.
Agents take different actions, and based on that action one of the presets is selected.
That means that ALL presets are relevant and will be used at some point.
For example, analysis will use the `Analytical` preset, which is configured to be less random and more deterministic.
The `Conversation` preset is used by the conversation agent during dialogue generation.
The other presets are used for various creative tasks.
These are all experimental and will probably change / get merged in the future.
## :material-cube-unfolded: Embeddings
![App settings - Application](/talemate/img/0.29.0/app-settings-presets-embeddings.png)
Allows you to add, remove and manage various embedding models for the memory agent to use via chromadb.
--8<-- "docs/user-guide/agents/memory/embeddings.md:embeddings_setup"
## :material-text-box: System Prompts
![App settings - Application](/talemate/img/0.29.0/app-settings-presets-system-prompts.png)
This allows you to override the global system prompts for the entire application for each overarching prompt kind.
If these are not set the default system prompt will be read from the templates that exist in `src/talemate/prompts/templates/{agent}/system-*.jinja2`.
This is useful if you want to change the default system prompts for the entire application.
The effect these have, varies from model to model.
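As a rough way to inspect the defaults (assuming a source checkout and the template path quoted above), you could list the per-agent system prompt templates:

```bash
# List the default system prompt templates that the overrides replace
ls src/talemate/prompts/templates/*/system-*.jinja2
```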
### Prompt types
- Conversation - Use for dialogue generation.
- Narration - Used for narrative generation.
- Creation - Used for other creative tasks like making new characters, locations etc.
- Direction - Used for guidance prompts and general scene direction.
- Analysis (JSON) - Used for analytical tasks that expect a JSON response.
- Analysis - Used for analytical tasks that expect a text response.
- Editing - Used for post-processing tasks like fixing exposition, adding detail etc.
- World State - Used for generating world state information. (This is sort of a mix of analysis and creation prompts.)
- Summarization - Used for summarizing text.
### Normal / Uncensored
Overrides are maintained for both normal and uncensored modes.
Currently local API clients (koboldcpp, textgenwebui, tabbyapi, llmstudio) will use the uncensored prompts, while the clients targeting official third party APIs will use the normal prompts.
The uncensored prompts are a work-around to prevent the LLM from refusing to generate text based on topic or content.
!!! note "Future plans"
A toggle to switch between normal and uncensored prompts regardless of the client is planned for a future release.

@@ -2,7 +2,11 @@
 The `Settings` tab allows you to configure various settings for the scene.
-![World editor scene settings 1](/talemate/img/0.26.0/world-editor-scene-settings-1.png)
+![World editor scene settings 1](/talemate/img/0.29.0/world-editor-scene-settings-1.png)
+### Writing Style
+If you have any [writing style templates](/talemate/user-guide/world-editor/templates/writing-style/) set up, you can select one here. Some agents may use this to influence their output.
 ### Locked save file
@@ -12,4 +16,10 @@ The user (or you) will be forced to save a new copy of the scene if they want to
 ### Experimental
 This is simply a tag that lets the user know that this scene is experimental, and may take a strong LLM to perform well.
+### Restoration Settings
+Allows you to specify another save file of the same project to serve as a restoration point. Once set you can use the **:material-backup-restore: Restore Scene** button to restore the scene to that point.
+This will create a new copy of the scene with the restoration point as the base.

Some files were not shown because too many files have changed in this diff.