0.29.0 (#167)
* set 0.29.0
* tweaks for dig layered history (wip)
* move director agent to directory
* relock
* remove "none" from dig_layered_history response
* determine character development
* update character sheet from character development (wip)
* org imports
* alert outdated template overrides during startup
* editor controls normalization of exposition
* dialogue formatting refactor
* fix narrator.clean_result forcing `*` regardless of editor fix exposition setting
* move more of the dialogue cleanup logic into the editor fix exposition handlers
* remove cruft
* change to normal selects and add some margin
* move formatting option up
* always strip partial sentences
* separates exposition fixes from other dialogue cleanup operations, since we still want those
* add novel formatting style
* honor formatting config when no markers are supplied
* fix issue where sometimes character message formatting would miss character name
* director can now guide actors through scene analysis
* style fixes
* typo
* select correct system message on direction type
* prompt tweaks
* disable by default
* add support for dynamic instruction injection and include missing guide for internal note usage
* change favicon and also indicate busyness through favicon
* img
* support xtc, dry and smoothing in text gen webui
* prompt tweaks
* support xtc, dry, smoothing in koboldcpp client
* reorder
* dry, xtc and smoothing factor exposed to tabby api client
* urls to third party API documentation
* remove bos token
* add missing preset
* focal
* focal progress
* focal progress and generated suggestions progress
* fix issue with discard all suggestions
* apply suggestions
* move suggestion ux into the world state manager
* support generation options for suggestion generation
* unused import
* refactor focal to json based approach
* focal and character suggestion tweaks
* remove cruft
* remove cruft
* relock
* prompt tweaks
* layout spacing updates
* ux elements for removal of scenes from quick load menu
* context investigation refactor WIP
* context investigation refactor
* context investigation refactor
* context investigation refactor
* cleanup
* move scene analysis to summarizer agent
* remove deprecated context investigation logic
* context investigation refactor continued - split into separate file for easier maintenance
* allow direct specification of response context length
* context investigation and scene analysis progress
* change analysis length config to number
* remove old dig-layered-history templates
* summarizer - deep analysis is only available if there is layered history
* move world_state agent to dedicated directory
* remove unused imports
* automatic character progression WIP
* character suggestions progress
* app busy flag based on agent busyness
* indicate suggestions in world state overview
* fix issue with user input cleanup
* move conversation agent to a dedicated submodule
* Response in action analyze_text_and_extract_context is too short #162
* move narrator agent to its own submodule
* narrator improvements WIP
* narration improvements WIP
* fix issue with regen of character exit narration
* narration improvements WIP
* prompt tweaks
* last_message_of_type can set max iterations
* fix multiline parsing
* prompt tweaks
* director guides actors based on scene analysis
* director guidance for actors
* prompt tweaks
* prompt tweaks
* prompt tweaks
* fix automatic character proposals not propagating to the ux
* fix analysis length
* support director guidance in legacy chat format
* typo
* prompt tweaks
* prompt tweaks
* error handling
* length config
* prompt tweaks
* typo
* remove cruft
* prompt tweak
* prompt tweak
* time passage style changes
* remove cruft
* deep analysis context investigations honor call limit
* refactor conversation agent long term memory to use new memory rag mixin - also streamline prompts
* tweaks to RAG mixin agent config
* fix narration highlighting
* context investigation fixes, director narration guidance, summarization tweaks
* director guide narration progress; context investigation fixes that would cause looping of investigations and failure to dig into the correct layers
* prompt tweaks
* summarization improvements
* separate deep analysis chapter selection from analysis into its own prompt
* character entry and exit
* cache analysis per subtype and some narrator prompt tweaks
* separate layered history logic into its own summarizer mixin and expose some additional options
* scene can now set an overall writing style using writing style templates; narrator option to enable writing style
* narrate query writing style support
* scene tools - narrator actions refactor to handler and own component
* narrator query / look at narrations emitted as context investigation messages; refactor context investigation message display; scene message metadata object
* include narrative direction
* improve context investigation message prompt insert
* reorg supported parameters
* fix bug when no message history exists
* WIP make regenerate work nicely with director guidance
* WIP make regenerate work nicely with director guidance
* regenerate conversation fixes
* help text
* ux tweaks
* relock
* turn off deep analysis and context investigations by default
* long term memory options for director and summarizer
* long term memory caching
* fix summarization cache toggle not showing up in ux
* ux tweaks
* layered history summarization includes character information for mentioned characters
* deepseek client added
* Add fork button to narrator message
* analyze and guidance support for time passage narration
* cache based on message fingerprint instead of id
* configurable system prompts WIP
* configurable system prompts WIP
* client overrides for system prompts wired to ux
* system prompt overhaul
* fix issue with unknown system prompt kind
* add button to manually request dynamic choices from the director; move the generate choices logic of the director agent to its own submodule
* remove cruft
* 30 may be too long and is causing the client to disappear temporarily
* support dynamic choice generation for non player characters
* enable `actor` tab for player characters
* creator agent now has access to rag tools; improve acting instruction generation
* client timeout fixes
* fix issue where scene removal menu stayed open after remove
* expose scene restore functionality to ux
* create initial restore point
* fix creator extra-context template
* didn't mean to remove this
* intro scene should be edited through world editor
* fix alert
* fix partial quotes regardless of editor setting; director guidance for conversation reminds to put speech in quotes
* fix @ instructions not being passed through to director guidance prompt
* anthropic model list updated
* default off
* cohere model list updated
* reset actAs on next scene load
* prompt tweaks
* prompt tweaks
* prompt tweaks
* prompt tweaks
* prompt tweaks
* remove debug cruft
* relock
* docs on changing host / port
* fix issue with narrator / director actions not available on fresh install
* fix issue with long content classification determination result
* take this reminder to put speech into quotes out for now, it seems to do more harm than good
* fix some remaining issues with auto exposition fixes
* prompt tweaks
* prompt tweaks
* fix issue during reload
* expensive and warning ux passthrough for agent config
* layered summary analysis defaults to on
* what's new info block added
* docs
* what's new updated
* remove old images
* old img cleanup script
* prompt tweaks
* improve auto prompt template detection via huggingface
* add gpt-4o-realtime-preview; add gpt-4o-mini-realtime-preview
* add o1 and o3-mini
* fix o1 and o3
* fix o1 and o3
* more o1 / o3 fixes
* o3 fixes
@@ -2,10 +2,10 @@

Roleplay with AI with a focus on strong narration and consistent world and game state tracking.

## Core Features
docs/cleanup.py (new file, 166 lines)

```python
import os
import re
import subprocess
from pathlib import Path
import argparse


def find_image_references(md_file):
    """Find all image references in a markdown file."""
    with open(md_file, 'r', encoding='utf-8') as f:
        content = f.read()

    pattern = r'!\[.*?\]\((.*?)\)'
    matches = re.findall(pattern, content)

    cleaned_paths = []
    for match in matches:
        path = match.lstrip('/')
        if 'img/' in path:
            path = path[path.index('img/') + 4:]
        # Only keep references to versioned images
        parts = os.path.normpath(path).split(os.sep)
        if len(parts) >= 2 and parts[0].replace('.', '').isdigit():
            cleaned_paths.append(path)

    return cleaned_paths


def scan_markdown_files(docs_dir):
    """Recursively scan all markdown files in the docs directory."""
    md_files = []
    for root, _, files in os.walk(docs_dir):
        for file in files:
            if file.endswith('.md'):
                md_files.append(os.path.join(root, file))
    return md_files


def find_all_images(img_dir):
    """Find all image files in version subdirectories."""
    image_files = []
    for root, _, files in os.walk(img_dir):
        # Get the relative path from img_dir to current directory
        rel_dir = os.path.relpath(root, img_dir)

        # Skip if we're in the root img directory
        if rel_dir == '.':
            continue

        # Check if the immediate parent directory is a version number
        parent_dir = rel_dir.split(os.sep)[0]
        if not parent_dir.replace('.', '').isdigit():
            continue

        for file in files:
            if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.svg')):
                rel_path = os.path.relpath(os.path.join(root, file), img_dir)
                image_files.append(rel_path)
    return image_files


def grep_check_image(docs_dir, image_path):
    """
    Check if versioned image is referenced anywhere using grep.
    Returns True if any reference is found, False otherwise.
    """
    try:
        # Split the image path to get version and filename
        parts = os.path.normpath(image_path).split(os.sep)
        version = parts[0]  # e.g., "0.29.0"
        filename = parts[-1]  # e.g., "world-state-suggestions-2.png"

        # For versioned images, require both version and filename to match
        version_pattern = f"{version}.*{filename}"
        try:
            result = subprocess.run(
                ['grep', '-r', '-l', version_pattern, docs_dir],
                capture_output=True,
                text=True
            )
            if result.stdout.strip():
                print(f"Found reference to {image_path} with version pattern: {version_pattern}")
                return True
        except subprocess.CalledProcessError:
            pass

    except Exception as e:
        print(f"Error during grep check for {image_path}: {e}")

    return False


def main():
    parser = argparse.ArgumentParser(description='Find and optionally delete unused versioned images in MkDocs project')
    parser.add_argument('--docs-dir', type=str, required=True, help='Path to the docs directory')
    parser.add_argument('--img-dir', type=str, required=True, help='Path to the images directory')
    parser.add_argument('--delete', action='store_true', help='Delete unused images')
    parser.add_argument('--verbose', action='store_true', help='Show all found references and files')
    parser.add_argument('--skip-grep', action='store_true', help='Skip the additional grep validation')
    args = parser.parse_args()

    # Convert paths to absolute paths
    docs_dir = os.path.abspath(args.docs_dir)
    img_dir = os.path.abspath(args.img_dir)

    print(f"Scanning markdown files in: {docs_dir}")
    print(f"Looking for versioned images in: {img_dir}")

    # Get all markdown files
    md_files = scan_markdown_files(docs_dir)
    print(f"Found {len(md_files)} markdown files")

    # Collect all image references
    used_images = set()
    for md_file in md_files:
        refs = find_image_references(md_file)
        used_images.update(refs)

    # Get all actual images (only from version directories)
    all_images = set(find_all_images(img_dir))

    if args.verbose:
        print("\nAll versioned image references found in markdown:")
        for img in sorted(used_images):
            print(f"- {img}")

        print("\nAll versioned images in directory:")
        for img in sorted(all_images):
            print(f"- {img}")

    # Find potentially unused images
    unused_images = all_images - used_images

    # Additional grep validation if not skipped
    if not args.skip_grep and unused_images:
        print("\nPerforming additional grep validation...")
        actually_unused = set()
        for img in unused_images:
            if not grep_check_image(docs_dir, img):
                actually_unused.add(img)

        if len(actually_unused) != len(unused_images):
            print(f"\nGrep validation found {len(unused_images) - len(actually_unused)} additional image references!")
        unused_images = actually_unused

    # Report findings
    print("\nResults:")
    print(f"Total versioned images found: {len(all_images)}")
    print(f"Versioned images referenced in markdown: {len(used_images)}")
    print(f"Unused versioned images: {len(unused_images)}")

    if unused_images:
        print("\nUnused versioned images:")
        for img in sorted(unused_images):
            print(f"- {img}")

        if args.delete:
            print("\nDeleting unused versioned images...")
            for img in unused_images:
                full_path = os.path.join(img_dir, img)
                try:
                    os.remove(full_path)
                    print(f"Deleted: {img}")
                except Exception as e:
                    print(f"Error deleting {img}: {e}")
            print("\nDeletion complete")
    else:
        print("\nNo unused versioned images found!")


if __name__ == "__main__":
    main()
```
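For reference, a typical invocation of the script would look something like the following. The flags come from the argparse definitions above; the directory paths are assumptions based on the MkDocs layout referenced in this PR, not commands documented in it.

```bash
# Dry run: report unused versioned images (assumed paths: docs/ and docs/img/)
python docs/cleanup.py --docs-dir docs --img-dir docs/img --verbose

# Once the report looks right, actually delete the unused images
python docs/cleanup.py --docs-dir docs --img-dir docs/img --delete
```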
docs/dev/third-party-reference.md (new file, 14 lines)

## Third Party API docs

### Chat completions

- [Anthropic](https://docs.anthropic.com/en/api/messages)
- [Cohere](https://docs.cohere.com/reference/chat)
- [Google AI](https://ai.google.dev/api/generate-content#v1beta.GenerationConfig)
- [Groq](https://console.groq.com/docs/api-reference#chat-create)
- [KoboldCpp](https://lite.koboldai.net/koboldcpp_api#/api/v1)
- [LMStudio](https://lmstudio.ai/docs/api/rest-api)
- [Mistral AI](https://docs.mistral.ai/api/)
- [OpenAI](https://platform.openai.com/docs/api-reference/completions)
- [TabbyAPI](https://theroyallab.github.io/tabbyAPI/#operation/chat_completion_request_v1_chat_completions_post)
- [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/openai/typing.py)
docs/getting-started/advanced/.pages (new file, 3 lines)

nav:
  - change-host-and-port.md
  - ...
docs/getting-started/advanced/change-host-and-port.md (new file, 102 lines)

# Changing host and port

## Backend

By default, the backend listens on `localhost:5050`.

To run the server on a different host and port, you need to change the values passed to the `--host` and `--port` parameters during startup and also make sure the frontend knows the new values.

### Changing the host and port for the backend

#### :material-linux: Linux

Copy `start.sh` to `start_custom.sh` and edit the `--host` and `--port` parameters in the `uvicorn` command.

```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 1234
```

#### :material-microsoft-windows: Windows

Copy `start.bat` to `start_custom.bat` and edit the `--host` and `--port` parameters in the `uvicorn` command.

```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 1234"
```

### Letting the frontend know about the new host and port

Copy `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.production.local` and edit the `VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL`.

```env
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=ws://localhost:1234
```

Next, rebuild the frontend.

```bash
cd talemate_frontend
npm run build
```

### Start the backend and frontend

Start the backend and frontend as usual.

#### :material-linux: Linux

```bash
./start_custom.sh
```

#### :material-microsoft-windows: Windows

```batch
start_custom.bat
```

## Frontend

By default, the frontend listens on `localhost:8080`.

To change the frontend host and port, you need to change the values passed to the `--frontend-host` and `--frontend-port` parameters during startup.

### Changing the host and port for the frontend

#### :material-linux: Linux

Copy `start.sh` to `start_custom.sh` and edit the `--frontend-host` and `--frontend-port` parameters.

```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5055 \
    --frontend-host localhost --frontend-port 8082
```

#### :material-microsoft-windows: Windows

Copy `start.bat` to `start_custom.bat` and edit the `--frontend-host` and `--frontend-port` parameters.

```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 5055 --frontend-host localhost --frontend-port 8082"
```

### Start the backend and frontend

Start the backend and frontend as usual.

#### :material-linux: Linux

```bash
./start_custom.sh
```

#### :material-microsoft-windows: Windows

```batch
start_custom.bat
```
New images added under docs/img/0.29.0/:

- agent-long-term-memory-settings.png (32 KiB)
- app-settings-appearance-scene.png (63 KiB)
- app-settings-application.png (39 KiB)
- app-settings-game-default-character.png (40 KiB)
- app-settings-game-general.png (37 KiB)
- app-settings-presets-embeddings.png (76 KiB)
- app-settings-presets-inference.png (86 KiB)
- app-settings-presets-system-prompts.png (62 KiB)
- conversation-general-settings.png (33 KiB)
- conversation-generation-settings.png (39 KiB)
- director-dynamic-actions-settings.png (51 KiB)
- director-general-settings.png (45 KiB)
- director-guide-scene-settings.png (47 KiB)
- editor-agent-settings.png (58 KiB)
- narrator-content-settings.png (22 KiB)
- narrator-general-settings.png (39 KiB)
- narrator-narrate-after-dialogue-settings.png (27 KiB)
- narrator-narrate-time-passage-settings.png (28 KiB)
- summarizer-context-investigation-settings.png (36 KiB)
- summarizer-general-settings.png (63 KiB)
- summarizer-layered-history-settings.png (74 KiB)
- summarizer-scene-analysis-settings.png (60 KiB)
- world-editor-scene-settings-1.png (71 KiB)
- world-editor-suggestions-1.png (85 KiB)
- world-state-character-progression-settings.png (41 KiB)
- world-state-general-settings.png (50 KiB)
- world-state-suggestions-1.png (5.5 KiB)
- world-state-suggestions-2.png (12 KiB)
@@ -50,4 +50,49 @@

Tracked states occasionally reinforce the state of the world or a character. This reinforcement is kept in the context sent to the AI during generation, giving it a better understanding of the current truth of the world.

Some examples could be tracking a character's physical state, the time of day, or the current location of a character.
<!--- --8<-- [end:what_is_a_tracked_state] -->

<!--- --8<-- [start:agent_long_term_memory_settings] -->
![agent long term memory settings](/talemate/img/0.29.0/agent-long-term-memory-settings.png)

If enabled, will inject relevant information into the context using relevancy through the [Memory Agent](/talemate/user-guide/agents/memory).

##### Context Retrieval Method

Which method to use for long term memory selection.

- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.

##### Number of queries

This setting means different things depending on the context retrieval method.

- For `Context queries based on recent context` this is the number of messages to consider.
- For `Context queries generated by AI` this is the number of queries to generate.
- For `AI compiled questions and answers` this is the number of questions to generate.

##### Answer length

The maximum response length of the generated answers.

##### Cache

Enables the agent-wide cache for long term memory retrieval. That means any agents that share the same long term memory settings will share the same cache. This can be useful to reduce the number of queries to the memory agent.

<!--- --8<-- [end:agent_long_term_memory_settings] -->

<!--- --8<-- [start:character_change_proposals] -->
When a proposal has been generated, and the character is currently acknowledged in the world state, a lightbulb :material-lightbulb-on: will appear next to the character name.

![world state suggestions](/talemate/img/0.29.0/world-state-suggestions-1.png)

Click the name to expand the character entry and then click the :material-lightbulb-on: to view the proposal.

![world state suggestions](/talemate/img/0.29.0/world-state-suggestions-2.png)

You will be taken to the world editor suggestions tab, where you can view the proposal and accept or reject it.

![world editor suggestions](/talemate/img/0.29.0/world-editor-suggestions-1.png)
<!--- --8<-- [end:character_change_proposals] -->
@@ -2,7 +2,7 @@

## General

![conversation agent general settings](/talemate/img/0.29.0/conversation-general-settings.png)

!!! note "Inference parameters"
    Inference parameters are NOT configured through any individual agent.
@@ -29,21 +29,9 @@ Maximum turns the AI gets in succession, before the player gets a turn no matter

The maximum number of turns a character can go without speaking before the AI will force them to speak.

##### Long Term Memory

If checked, relevant information will be injected into the context using relevancy through the [Memory Agent](/talemate/user-guide/agents/memory).

##### Context Retrieval Method

Which method to use for long term memory selection.

- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.

## Generation

![conversation agent generation settings](/talemate/img/0.29.0/conversation-generation-settings.png)

##### Format
@@ -76,29 +64,6 @@ General, broad instructions for ALL actors in the scene. This will be appended t

If > 0, will offset the instructions for the actor (both broad and character specific) into the history by that many turns. Some LLMs struggle to generate coherent continuations if the scene is interrupted by instructions right before the AI is asked to generate dialogue. This allows you to shift the instructions backwards.

## Context Investigation

A new :material-flask: experimental feature introduced in `0.28.0` alongside the [layered history summarization](/talemate/user-guide/agents/summarizer/settings#layered-history).

If enabled, the AI will investigate the history for relevant information to include in the conversation prompt. Investigation works by digging through the various layers of the history, and extracting relevant information based on the final message in the scene.

This can be **very slow** depending on how many layers are enabled and generated. It can lead to a great improvement in the quality of the generated dialogue, but it currently still is a mixed bag. A strong LLM is almost a hard requirement for it to produce anything useful. 22B+ models are recommended.

![conversation agent context investigation settings](/talemate/img/0.28.0/conversation-context-investigation.png)

!!! note "Tips"
    - This is experimental and results WILL vary in quality.
    - Requires a strong LLM. 22B+ models are recommended.
    - Good, clean summarization of the history is a hard requirement for this to work well. Regenerate your history if it's messy. (World Editor -> History -> Regenerate)

##### Enable context investigation

Enable or disable the context investigation feature.

##### Trigger

Allows you to specify when the context investigation should be triggered.

- Agent decides - the AI will decide when to trigger the context investigation based on the scene.
- Only when a question is asked - the AI will only trigger the context investigation when a question is asked.

## Long Term Memory

--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"
@@ -1,10 +1,10 @@

# Overview
The director agent is responsible for guiding the scene progression and generating dynamic actions.

The director agent is responsible for orchestrating the scene and directing characters.
In the future it will shift / expose more of a game master role, controlling the progression of the story.

This currently happens in a very limited way and is very much a work in progress.
### Dynamic Actions
Will occasionally generate clickable choices for the user during scene progression. This can be used to allow the user to make choices that will affect the scene or the story in some way without having to manually type out the choice.

It requires a text-generation client to be configured and assigned.

!!! warning "Experimental"
    This agent is currently experimental and may not work as expected.
### Guide Scene
Will use the summarizer agent's scene analysis to guide characters and the narrator for the next generation, hopefully improving the quality of the generated content.
@@ -2,7 +2,7 @@

## General

![director agent general settings](/talemate/img/0.29.0/director-general-settings.png)

##### Direct
@@ -35,11 +35,15 @@ If `Direction` is selected, the actor will be given the direction as a direct in

If `Inner Monologue` is selected, the actor will be given the direction as a thought.

## Long Term Memory

--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"

## Dynamic Actions

Dynamic actions were introduced in `0.28.0` and allow the director to generate a set of clickable choices for the player to choose from.

![director dynamic actions settings](/talemate/img/0.29.0/director-dynamic-actions-settings.png)

##### Enable Dynamic Actions

@@ -63,4 +67,25 @@ If this is checked and you pick an action, the scene will NOT automatically pass

Allows you to provide extra specific instructions to the director on how to generate the dynamic actions.

For example, you could provide a list of actions to choose from, a list of actions to avoid, or specify that you always want a certain action to be included.

## Guide Scene

![director guide scene settings](/talemate/img/0.29.0/director-guide-scene-settings.png)

The director can use the summarizer agent's scene analysis to guide characters and the narrator for the next generation, hopefully improving the quality of the generated content.

!!! danger "This may break dumber models"
    The guidance generated is inserted **after** the message history and **right before** the next generation. Some older models may struggle with this and generate incoherent responses.

##### Guide Actors

If enabled, the director will guide the actors in the scene.

##### Guide Narrator

If enabled, the director will guide the narrator in the scene.

##### Max. Guidance Length

The maximum number of tokens for the guidance (e.g., how long the guidance should be).
@@ -1,8 +1,6 @@

# Overview

The editor improves generated text by making sure quotes and actions are correctly formatted.
The editor agent is responsible for post-processing the generated content. It can be used to add additional detail to dialogue and fix exposition markers.

It can also add additional details and attempt to fix continuity issues.

!!! example "Experimental"
    This agent is currently experimental and may not work as expected.
@@ -1,17 +1,21 @@

# Settings

![editor agent settings](/talemate/img/0.29.0/editor-agent-settings.png)

##### Fix exposition

If enabled, the editor will attempt to fix exposition in the generated dialogue.

That means it will ensure that actions are correctly encased in `*` and that quotes are correctly applied to spoken text.
It will do this based on the selected format.

###### Fix narrator messages

Applies the same rules as above to the narrator messages.

###### Fix user input

Applies the same rules as above to the user input messages.

##### Add detail

Will take the generated message and attempt to add more detail to it.

@@ -20,7 +24,7 @@ Will take the generated message and attempt to add more detail to it.

Will attempt to fix continuity errors in the generated text.

!!! example "Experimental, and doesn't work most of the time"
    There is something about accurately identifying continuity errors that is currently very difficult for AI to do. So this feature is very hit and miss. More miss than hit.
@@ -6,6 +6,7 @@ You can manage your available embeddings through the application settings.

In the settings dialogue go to **:material-tune: Presets** and then **:material-cube-unfolded: Embeddings**.

<!--- --8<-- [start:embeddings_setup] -->
## Pre-configured Embeddings

### all-MiniLM-L6-v2

@@ -78,4 +79,5 @@ This is a tag to mark the embedding as needing a GPU. It doesn't actually do any

##### Local

This is a tag to mark the embedding as local. It doesn't actually do anything, but can be useful for sorting later on.
<!--- --8<-- [end:embeddings_setup] -->
@@ -1,5 +1,9 @@

# Overview

The narrator agent handles the generation of narrative text. It is responsible for setting the scene, describing the environment, and providing context to the player.
The narrator agent handles the generation of narrative text. This could be progressing the story, describing the scene, or providing exposition and answers to questions.

It requires a client to be connected to an AI text generation API.
### :material-script: Content

The narrator agent is the first agent that can be influenced by one of your writing style templates.

Make sure a writing style is selected in the [Scene Settings](/talemate/user-guide/world-editor/scene/settings) to apply the writing style to the generated content.
@@ -1,12 +1,12 @@

# Settings

## :material-cog: General

![narrator general settings](/talemate/img/0.29.0/narrator-general-settings.png)

##### Client

The text-generation client to use for conversation generation.

##### Generation Override

Checkbox that exposes further settings to configure the conversation agent generation.

@@ -19,9 +19,21 @@ Extra instructions for the generation. This should be short and generic as it wi

If checked and talemate detects a repetitive response (based on a threshold), it will automatically re-generate the response with increased randomness parameters.

## :material-script-text: Content

![narrator content settings](/talemate/img/0.29.0/narrator-content-settings.png)

The narrator agent is the first agent that can be influenced by one of your writing style templates.

Enable this setting to apply a writing style to the generated content.

Make sure a writing style is selected in the [Scene Settings](/talemate/user-guide/world-editor/scene/settings) to apply the writing style to the generated content.

## :material-clock-fast: Narrate time passage

![narrator narrate time passage settings](/talemate/img/0.29.0/narrator-narrate-time-passage-settings.png)

The narrator can automatically narrate the passage of time when you indicate it using the [Scene tools](/talemate/user-guide/scenario-tools).

##### Guide time narration via prompt

@@ -29,6 +41,12 @@ Whenever you indicate a passage of time using the [Scene tools](/talemate/user-

This allows you to explain what happens during the passage of time.

## :material-forum-plus-outline: Narrate after dialogue

![narrator narrate after dialogue settings](/talemate/img/0.29.0/narrator-narrate-after-dialogue-settings.png)

Whenever a character speaks, the narrator will automatically narrate the scene after.

## :material-brain: Long Term Memory

--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"
@@ -1,10 +1,24 @@

# Overview
The summarizer agent is responsible for summarizing the generated content and other analytical tasks.

The summarization agent will regularly summarize the current progress of the scene.
### :material-forum: Dialogue summarization
Dialogue is summarized regularly to keep the conversation backlogs from getting too large.

This summarization happens at two points:
### :material-layers: Layered history
Summarized dialogue is then further summarized into a layered history, where each layer represents a different level of detail.

1. When a token threshold is reached.
2. When a time advance is triggered.
Maintaining a layered history should theoretically allow you to keep the entire history in the context, albeit at a lower level of detail the further back in history you go.

It requires a text-generation client to be configured and assigned.
### :material-lightbulb: Scene analysis
As of version 0.29 the summarizer agent also has the ability to analyze the scene and provide this analysis to other agents to hopefully improve the quality of the generated content.

### :material-layers-search: Context investigation
Context investigations are when the summarizer agent digs into the layers of the history to find context that may be relevant to the current scene.

!!! danger "This can result in many extra prompts being generated."
    This can be useful for generating more contextually relevant content, but can also result in a lot of extra prompts being generated.

This is currently only used when scene analysis with **deep analysis** is enabled.

!!! example "Experimental"
    The results of this are sort of hit and miss. It can be useful, but it can also be a bit of a mess and actually make the generated content worse. (e.g., context isn't correctly identified as being relevant, which A LOT of LLMs still seem to struggle with in my testing.)
@@ -4,7 +4,7 @@

General summarization settings.

![summarizer general settings](/talemate/img/0.29.0/summarizer-general-settings.png)

##### Summarize to long term memory archive

@@ -37,7 +37,7 @@ Not only does this allow to keep more context in the history, albeit with earlie

Right now this is considered an experimental feature, and whether or not it's feasible in the long term will depend on how well it works in practice.

![summarizer layered history settings](/talemate/img/0.29.0/summarizer-layered-history-settings.png)

##### Enable layered history

@@ -58,4 +58,76 @@ The maximum number of layers that can be created. Raising this limit past 3 is l

Smaller LLMs may struggle with accurately summarizing long texts. This setting will split the text into chunks and summarize each chunk separately, then stitch them together in the next layer. If you're using a strong LLM (70B+), you can try setting this to be the same as the threshold.

Setting this higher than the token threshold does nothing.

##### Chunk size

During the summarization itself, the text will be further split into chunks where each chunk is summarized separately. This setting controls the size of those chunks. This is a character length setting, **NOT** token length.

##### Enable analyzation

Enables analysis of the chunks and their relationship to each other before summarization. This can greatly improve the quality of the summarization, but will also result in a bigger size requirement for the output.

##### Maximum response length

The maximum length of the response that the summarizer agent will generate.

!!! info "Analyzation requires a bigger length"
    If you enable analyzation, you should set this high enough so the response has room for both the analysis and the summary of all the chunks.

## Long term memory

--8<-- "docs/snippets/tips.md:agent_long_term_memory_settings"

## Scene Analysis

![summarizer scene analysis settings](/talemate/img/0.29.0/summarizer-scene-analysis-settings.png)

When enabled, scene analysis will be performed during conversation and narration tasks. This analysis will be used to provide additional context to other agents, which should hopefully improve the quality of the generated content.

##### Length of analysis

The maximum number of tokens for the response (e.g., how long the analysis should be).

##### Conversation

Enable scene analysis for conversation tasks.

##### Narration

Enable scene analysis for narration tasks.

##### Deep analysis

Enable context investigations based on the initial analysis.

##### Max. context investigations

The maximum number of context investigations that can be performed. This is a safety feature to prevent the AI from going overboard with the investigations. The number here is per layer in the history, so if this is set to 1 and there are 2 layers, up to 2 investigations will be performed.

##### Cache analysis

Cache the analysis results for the scene. Enable this to prevent regenerating the analysis when you regenerate the most recent output.

!!! info
    This cache is anchored to the last message in the scene (excluding the current message). Editing that message will invalidate the cache.

## Context investigation

![summarizer context investigation settings](/talemate/img/0.29.0/summarizer-context-investigation-settings.png)

When enabled, the summarizer agent will dig into the layers of the history to find context that may be relevant to the current scene.

!!! info
    This is currently only triggered during deep analysis as part of the scene analysis. Disabling context investigation will also disable the deep analysis.

##### Answer length

The maximum length of the answer that the AI will generate.

##### Update method

How to update the context with the new information.

- `Replace` - replace the context with the new information
- `Smart merge` - merge the new information with the existing context (uses another LLM prompt to generate the merge)
@@ -4,4 +4,12 @@ The world state agent handles the world state snapshot generation and reinforcem

It requires a text-generation client to be configured and assigned.

--8<-- "docs/snippets/tips.md:what_is_a_tracked_state"

### :material-earth: World State

The world state is a snapshot of the current state of the world. This can include things like the current location, the time of day, the weather, the state of the characters, etc.

### :material-account-switch: Character Progression

The world state agent can be used to regularly check progression of the scene against old character information and then propose changes to a character's description and attributes based on how the story has progressed.
@@ -1,6 +1,8 @@

# Settings

## General

![world state general settings](/talemate/img/0.29.0/world-state-general-settings.png)

##### Update world state

@@ -24,4 +26,24 @@ Will attempt to evaluate and update any due [conditional context pins](/talemate

###### Turns

How many turns to wait before the conditional context pins are updated.

## Character Progression

![world state character progression settings](/talemate/img/0.29.0/world-state-character-progression-settings.png)

##### Frequency of checks

How often to check for character progression.

This is in terms of full rounds, not individual turns.

##### Propose as suggestions

If enabled, the proposed changes will be presented as suggestions to the player.

--8<-- "docs/snippets/tips.md:character_change_proposals"

##### Player character

Enable this to have the player character be included in the progression checks.
docs/user-guide/app-settings/.pages (new empty file)

docs/user-guide/app-settings/appearance.md (new file, 7 lines)

# :material-palette-outline: Appearance

## :material-script: Scene

![app settings appearance scene](/talemate/img/0.29.0/app-settings-appearance-scene.png)

Allows you some control over how the message history is displayed.

docs/user-guide/app-settings/application.md (new file, 5 lines)

# :material-application-outline: Application

![app settings application](/talemate/img/0.29.0/app-settings-application.png)

Configure various API keys for integration with external services. (OpenAI, Anthropic, etc.)

docs/user-guide/app-settings/game.md (new file, 26 lines)

# Game

## :material-cog: General

![app settings game general](/talemate/img/0.29.0/app-settings-game-general.png)

##### Auto save

If enabled, the scene will save every time the game loop completes. This can also be toggled on or off directly from the main screen.

If a scene is set to be immutable, this setting will be disabled.

##### Auto progress

If enabled, the game will automatically progress to the next character after your turn. This can also be toggled on or off directly from the main screen.

##### Max backscroll

The maximum number of messages that will be displayed in the backscroll. This is a display only setting and does not affect the game in any way. (If you find your interface feels sluggish, try reducing this number.)

## :material-human-edit: Default character

![app settings game default character](/talemate/img/0.29.0/app-settings-game-default-character.png)

Lets you manage a basic default character.

This is only relevant when loading scenes that do not come with a default character. (e.g., mostly from other application exports, like ST character cards.)

docs/user-guide/app-settings/presets.md (new file, 70 lines)

# :material-tune: Presets

Change inference parameters, embedding parameters and global system prompt overrides.

## :material-matrix: Inference

!!! danger "Advanced settings. Use with caution."
    If these settings don't mean anything to you, you probably shouldn't be changing them. They control the way the AI generates text and can have a big impact on the quality of the output.

    This document will NOT explain what each setting does.

![app settings presets inference](/talemate/img/0.29.0/app-settings-presets-inference.png)

If you're familiar with editing inference parameters from other similar applications, be aware that there is a significant difference in how TaleMate handles these settings.

Agents take different actions, and based on that action one of the presets is selected.

That means that ALL presets are relevant and will be used at some point.

For example, analysis will use the `Analytical` preset, which is configured to be less random and more deterministic.

The `Conversation` preset is used by the conversation agent during dialogue generation.

The other presets are used for various creative tasks.

These are all experimental and will probably change / get merged in the future.

## :material-cube-unfolded: Embeddings

![app settings presets embeddings](/talemate/img/0.29.0/app-settings-presets-embeddings.png)

Allows you to add, remove and manage various embedding models for the memory agent to use via chromadb.

--8<-- "docs/user-guide/agents/memory/embeddings.md:embeddings_setup"

## :material-text-box: System Prompts

![app settings presets system prompts](/talemate/img/0.29.0/app-settings-presets-system-prompts.png)

This allows you to override the global system prompts for the entire application for each overarching prompt kind.

If these are not set, the default system prompt will be read from the templates that exist in `src/talemate/prompts/templates/{agent}/system-*.jinja2`.

This is useful if you want to change the default system prompts for the entire application.

The effect these have varies from model to model.

### Prompt types

- Conversation - Used for dialogue generation.
- Narration - Used for narrative generation.
- Creation - Used for other creative tasks like making new characters, locations etc.
- Direction - Used for guidance prompts and general scene direction.
- Analysis (JSON) - Used for analytical tasks that expect a JSON response.
- Analysis - Used for analytical tasks that expect a text response.
- Editing - Used for post-processing tasks like fixing exposition, adding detail etc.
- World State - Used for generating world state information. (This is sort of a mix of analysis and creation prompts.)
- Summarization - Used for summarizing text.

### Normal / Uncensored

Overrides are maintained for both normal and uncensored modes.

Currently local API clients (koboldcpp, textgenwebui, tabbyapi, llmstudio) will use the uncensored prompts, while the clients targeting official third party APIs will use the normal prompts.

The uncensored prompts are a work-around to prevent the LLM from refusing to generate text based on topic or content.

!!! note "Future plans"
    A toggle to switch between normal and uncensored prompts regardless of the client is planned for a future release.
@@ -2,7 +2,11 @@

The `Settings` tab allows you to configure various settings for the scene.

![world editor scene settings](/talemate/img/0.29.0/world-editor-scene-settings-1.png)

### Writing Style

If you have any [writing style templates](/talemate/user-guide/world-editor/templates/writing-style/) set up, you can select one here. Some agents may use this to influence their output.

### Locked save file

@@ -12,4 +16,10 @@ The user (or you) will be forced to save a new copy of the scene if they want to

### Experimental

This is simply a tag that lets the user know that this scene is experimental, and may take a strong LLM to perform well.

### Restoration Settings

Allows you to specify another save file of the same project to serve as a restoration point. Once set, you can use the **:material-backup-restore: Restore Scene** button to restore the scene to that point.

This will create a new copy of the scene with the restoration point as the base.