# Interacting with the scene

There are two main ways to interact with the scene: through dialogue and through scene actions.

## Your turn!

Whenever the input element at the bottom of the screen is available, it means it is your turn to do something.

By default, the main player character will be selected, but you can act as any active character or even the narrator. See [Acting as another character](#acting-as-another-character) below.

## Dialogue input

Write a message and hit enter to send it to the scene.

### Separate actions and dialogue

When writing out your character's turn, spoken words should be wrapped in `"` and actions should be wrapped in `*`. If you only supply one of the two markers, Talemate will automatically add the other.

That means if you enter `Elmer enters the room. "Hello everyone!"`, Talemate will automatically convert it to `*Elmer enters the room.* "Hello everyone!"`.

Likewise, if you enter `*Elmer enters the room.* Hello everyone!`, Talemate will also convert it to `*Elmer enters the room.* "Hello everyone!"`.

If no markers are provided, Talemate will assume the text is spoken.
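
To make the marker rules concrete, here is a minimal sketch in Python of how such a normalization could work. This is an illustrative approximation only, not Talemate's actual implementation; the function name and regular expressions are made up for the example.

```python
import re


def normalize_message(text: str) -> str:
    """Approximate the marker rules described above (illustrative only)."""
    has_quotes = '"' in text
    has_asterisks = "*" in text

    # No markers at all: the whole message is treated as spoken.
    if not has_quotes and not has_asterisks:
        return f'"{text}"'

    # Quotes only: everything outside the quotes becomes an action.
    if has_quotes and not has_asterisks:
        parts = re.split(r'("[^"]*")', text)
        return " ".join(
            part.strip() if part.startswith('"') else f"*{part.strip()}*"
            for part in parts if part.strip()
        )

    # Asterisks only: everything outside the asterisks becomes speech.
    if has_asterisks and not has_quotes:
        parts = re.split(r"(\*[^*]*\*)", text)
        return " ".join(
            part.strip() if part.startswith("*") else f'"{part.strip()}"'
            for part in parts if part.strip()
        )

    # Both markers supplied: nothing to add.
    return text


print(normalize_message('Elmer enters the room. "Hello everyone!"'))
# *Elmer enters the room.* "Hello everyone!"
print(normalize_message("*Elmer enters the room.* Hello everyone!"))
# *Elmer enters the room.* "Hello everyone!"
```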

### Linebreaks are ok!

You can use line breaks in your messages. To create a new line, press `shift+enter`.

## Acting as another character

Version 0.26 introduced the act-as feature, which allows you to act as another character in the scene. Press the `tab` key while the input is focused to cycle through all active characters and finally the narrator before returning to the main player character.

*Dialogue input - act as other character*

*Dialogue input - act as narrator*

## Quick action

If you start a message with the `@` character, the AI will generate the response based on the action you are taking. This is useful when you want to quickly generate a response without typing out the full action and narration yourself.
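
For example, you might type a (hypothetical) input such as `@searches the desk for a clue` and let the AI write out your character performing that action, along with any resulting narration.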

*Quick action*

*Quick action generated text*

This functionality was added in version 0.28.0.

## Autocomplete

When typing out your action / dialogue, you can hit the `ctrl+enter` key combination to generate an autocompletion of your current text.

!!! abstract "This works best if the client is in control of the prompt template"
    The success rate of this feature is reduced when the text generation API controls the prompt template, as Talemate cannot prefix the partial text.

    See [Prompt Templates](/talemate/user-guide/clients/prompt-templates) for more information.

## Auto progress

By default, Talemate will give the next turn to the AI after you have sent a message, automatically progressing the scene.

You can turn this off by disabling the auto progress setting, either in the game settings or via the shortcut next to the interaction input.

*Auto progress off*

## Scene Actions

*Tool bar*

A set of tools to help you interact with the scenario. Find out more about the various actions in the Scene Tools section of the user guide.

## Cancel Generation

Sometimes Talemate will be generating a response (or going through a chain of generations) that you want to cancel. You can do this by hitting the :material-stop-circle-outline: button that appears in the scene tools bar.

*Cancel generation*

!!! info
    While the generation is cancelled immediately, the current inference request will still be processed by the LLM backend. The Talemate UI will remain responsive, but the LLM API may require some time to finish the request.