* fix issue where saving a new scene would save into a "new scenario" directory instead of a relevantly named directory

* implement function to fork new scene file from specific message

* dynamic choice generation

* dynamic choice generation progress

* prompt tweaks

* disable choice generation by default
prompt tweaks

* prompt tweaks for assisted RAG tasks

* allow analyze_text_and_extract_context to include character context

* more prompt tweaks for RAG assist during conversation generation

* open director settings from dynamic action dialog

* adjust wording

* remove player choice message if the trigger message is removed (or regenerated)

* fix issue with dialogue cleanup where narration over multiple lines would end up being marked incorrectly

* dynamic action generation custom instructions
dynamic action generation narration for sensory actions

* fix actions when acting as another character

* 0.28.0

* conversation agent: split out generation settings, add actor instructions extension, add actor instruction offset slider

* prompt tweaks

* fix ai message regenerate if generated from choice

* cruft

* layered history implementation through summarizer
summarization tweaks

* show layered history in ux

* layered history fixes and tweaks
conversation actor instruction fixes

* more summarization fixes

* fix missing actor instructions

* prompt tweaks

* prompt tweaks

* force lower case when checking sensory type

* agent modal polish
implement find-natural-scene-termination summarizer action
some summarization tweaks

* integrate find_natural_scene_termination with layered history

* collect all denouements at once

* relock

* fix some issues with screenplay type formatting in conversation agent

* cleanup

* revert layered history summarization to use max_process_tokens instead of using AI to find scene termination, as that process falls apart in layer 1 and higher; at that point every item is a scene in itself.

* implement ai assisted digging through layered history to answer queries

* dig_layered_history tweaks and improvements

* prompt tweaks

* adjust budget

* adjust budget for RAG context

* layered_history disabled by default

* prompt tweaks to reinforcement updates

* prompt tweaks

* dig layered history - response without function call to be treated as answer

* clarify style keywords to avoid bleeding into the prompt as subject matter

* fix issue with cover image updates

* fix missing dialogue from context history

* fix issue where new scenes wouldn't load

* fix crash with layered summarization

* more context history fixes

* fix assured dialogue message in context history

* prompt tweaks

* tweaks to layered history generation

* prompt tweaks

* conversation agent can dig layered history for extra context

* some fixes to dig layered history

* scene fork adjust layered history

* layered history status indication

* allow configuration of message styles and colors

* fix issue where layered history generate would get stuck on layer 0

* dig layered history default to false

* prompt tweaks

* context investigation messages

* tweaks to context investigation

* context investigation polish of UX and allow specifying trigger

* prompt tweaks

* allow hiding of ci and director messages

* wire ci shortcut buttons

* prompt tweaks

* prompt tweaks

* carry on analysis when digging layered history

* improve quality of generate choices by anchoring to last line in the scene

* update hint message

* prompt tweaks

* change default value for max_process_tokens

* docs

* dig layered history only if there are layers

* always enforce num choices limit

* relock

* typos

* prompt tweaks

* docs for forking a scene

* prompt tweaks

* world editor rubber banding fixes follow up

* layered history cleanup fixes

* gracefully handle malformed dig() call

* handle malformed answer() call

* only generate choices if last content isn't player message

* include more context in autocomplete prompts

* prompt tweaks

* typo

* fix issue where inactive characters could not be deleted

* more character delete bugs

* dig layered history fixes

* discard empty context investigations

* fix issue with autocomplete no longer working in world editor

* prompt tweaks

* support single quotes

* prompt tweaks

* fix issue with context investigation if final message was narrator text

* Include the query in the context investigation message

* context investigations should note when historic events occurred

* instructions on how to use internal notes

* time_diff returns empty string when no time is supplied

* prompt tweaks

* fix date calculations for historic entries

* change default values

* prompt tweaks

* fix history regenerate continuing through page reload

* reorganize websocket tasks

* allow cancelling of history regenerate

* Capitalize first letter of summarization

* include base layer in context investigations

* prompt tweaks

* fix issue where context investigations would expand too much of the history at once

* attempt to determine character knowledge during context investigation

* prompt tweaks

* prompt tweaks

* fix missing timestamps

* more context during layer history digging

* fix issue with act-as not being able to select past the first npc if a scene had more than one active npcs in it

* docs

* error handling for malformed answer call

* timestamp calculation fixes and summarization improvements

* lock message manipulation while the ux is busy

* prompt tweaks

* toggling 'log debug messages' will log all messages to console even if no filter is specified

* layered history generation cancellable from ux

* prevent loading scene while another scene is currently loading

* improvements to choice generation prompt and error handling

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix issue with successive scene load not working

* correctly display timestamps and generated layers during history regen

* summarization improvements

* clean up context investigation prompt

* prompt tweaks

* increase response token size for dig_layered_history

* define missing presets

* missing preset

* prompt tweaks

* fix simulation suite

* attach punkt download to backend start, not frontend start

* dig layered history fixes

* prompt tweaks

* fix summarize_and_pin

* more fixes for time calculations

* relock

* prompt tweaks

* remove dupe entry from layered history

* bash version of update script

* prompt tweaks

* layered history defaults to enabled

* default decreased to 0.3 chance

* fix multi character natural flow selection with clients that don't support LLM coercion

* fix simulation suite call to change a character

* typo

* remove deprecated test

* use python3

* add missing 4o models

* add proper configs for 4o models

* prompt tweaks

* update reinforcement prompt ignores context investigations

* scene.snapshot formatting and dig_layered_history ignores reinforcements

* use end date instead of start date

* Reword 'Moments ago' to 'Recently' as it is more forgiving and applicable to longer time ranges

* fix time calculation issues during summarization of new entries

* no need for scoping

* don't display as range if start and end of entry are identical

* prompt tweaks
This commit is contained in:
veguAI 2024-11-24 15:43:27 +02:00 committed by GitHub
parent bb1cf6941b
commit 80256012ad
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
108 changed files with 5715 additions and 2501 deletions


@@ -1,3 +1,3 @@
# Coning soon
# Coming soon
Developer documentation is coming soon. Stay tuned!

9 binary image files added (not shown).


@@ -1,6 +1,8 @@
# Settings
![Conversation agent settings](/talemate/img/0.26.0/conversation-agent-settings.png)
## General
![Conversation agent general settings](/talemate/img/0.28.0/conversation-general-settings.png)
!!! note "Inference parameters"
Inference parameters are NOT configured through any individual agent.
@@ -11,33 +13,6 @@
The text-generation client to use for conversation generation.
##### Generation settings
Checkbox that exposes further settings to configure the conversation agent generation.
##### Format
The dialogue format as the AI will see it.
This currently comes in two choices:
- `Screenplay`
- `Chat (legacy)`
Visually this will make no difference to what you see, it may however affect how the AI interprets the dialogue.
##### Generation Length
The maximum length of the generated dialogue. (tokens)
##### Instructions
Extra instructions for the generation. This should be short and generic as it will be applied for all characters.
##### Jiggle
The amount of randomness to apply to the generation. This can help to avoid repetitive responses.
##### Auto Break Repetition
If checked and talemate detects a repetitive response (based on a threshold), it will automatically re-generate the response with increased randomness parameters.
@@ -62,7 +37,68 @@ If checked will inject relevant information into the context using relevancy thr
What method to use for long term memory selection
- `Context queries based on recent context` - will take the last 3 messagews in the scene and select relevant context from them. This is the fastes method, but may not always be the most relevant.
- `Context queries generated by AI` - will generaste a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.
## Generation
![Conversation agent generation settings](/talemate/img/0.28.0/conversation-generation-settings.png)
##### Format
The dialogue format as the AI will see it.
This currently comes in two choices:
- `Screenplay`
- `Chat (legacy)`
Visually this will make no difference to what you see, it may however affect how the AI interprets the dialogue.
##### Generation Length
The maximum length of the generated dialogue. (tokens)
##### Jiggle
The amount of randomness to apply to the generation. This can help to avoid repetitive responses.
##### Task Instructions
Extra instructions for the generation. This should be short and generic as it will be applied for all characters. This will be appended to the existing task instructions in the conversation prompt BEFORE the conversation history.
##### Actor Instructions
General, broad instructions for ALL actors in the scene. This will be appended to the existing actor instructions in the conversation prompt AFTER the conversation history.
##### Actor Instructions Offset
If > 0 will offset the instructions for the actor (both broad and character specific) into the history by that many turns. Some LLMs struggle to generate coherent continuations if the scene is interrupted by instructions right before the AI is asked to generate dialogue. This allows you to shift the instructions backwards.
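The offset behaviour can be sketched as follows. Note that `place_actor_instructions` is a hypothetical helper for illustration, not Talemate's actual implementation; it simply shows what shifting an instruction block `offset` messages back from the end of the history looks like.

```python
def place_actor_instructions(history: list, instructions: str, offset: int) -> list:
    """Return a new message list with `instructions` inserted `offset`
    messages before the end of `history` (0 = at the very end)."""
    if offset <= 0:
        return history + [instructions]
    # clamp so the instructions never move past the start of the history
    position = max(0, len(history) - offset)
    return history[:position] + [instructions] + history[position:]
```

With an offset of 2 and a four-message history, the instructions land between the second and third message, leaving the two most recent messages directly before the generation point.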
## Context Investigation
A new :material-flask: experimental feature introduced in `0.28.0` alongside the [layered history summarization](/talemate/user-guide/agents/summarizer/settings#layered-history).
If enabled, the AI will investigate the history for relevant information to include in the conversation prompt. Investigation works by digging through the various layers of the history, and extracting relevant information based on the final message in the scene.
This can be **very slow** depending on how many layers are enabled and generated. It can lead to a great improvement in the quality of the generated dialogue, but it is currently still a mixed bag. A strong LLM is almost a hard requirement for it to produce anything useful. 22B+ models are recommended.
![Conversation agent context investigation settings](/talemate/img/0.28.0/conversation-context-investigation-settings.png)
!!! note "Tips"
- This is experimental and results WILL vary in quality.
- Requires a strong LLM. 22B+ models are recommended.
- Good, clean summarization of the history is a hard requirement for this to work well. Regenerate your history if it's messy. (World Editor -> History -> Regenerate)
##### Enable context investigation
Enable or disable the context investigation feature.
##### Trigger
Allows you to specify when the context investigation should be triggered.
- Agent decides - the AI will decide when to trigger the context investigation based on the scene.
- Only when a question is asked - the AI will only trigger the context investigation when a question is asked.


@@ -1,6 +1,8 @@
# Settings
![Director agent settings](/talemate/img/0.26.0/director-agent-settings.png)
## General
![Director agent settings](/talemate/img/0.28.0/director-general-settings.png)
##### Direct
@@ -31,4 +33,34 @@ When an actor is given a direction, how is it to be injected into the context
If `Direction` is selected, the actor will be given the direction as a direct instruction, by the director.
If `Inner Monologue` is selected, the actor will be given the direction as a thought.
## Dynamic Actions
Dynamic actions are introduced in `0.28.0` and allow the director to generate a set of clickable choices for the player to choose from.
![Director agent dynamic actions settings](/talemate/img/0.28.0/director-dynamic-actions-settings.png)
##### Enable Dynamic Actions
If enabled the director will generate a set of clickable choices for the player to choose from.
##### Chance
The chance that the director will generate a set of dynamic actions when it's the player's turn.
This ranges from `0` to `1`. `0` means the director will never generate dynamic actions, `1` means the director will always generate dynamic actions.
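A minimal sketch of such a per-turn chance roll (hypothetical helper name; the actual director agent may implement this differently):

```python
import random

def should_generate_actions(chance: float) -> bool:
    """Roll once per player turn; chance is clamped to [0, 1]."""
    return random.random() < max(0.0, min(1.0, chance))
```

Because `random.random()` returns values in `[0, 1)`, a chance of `0` never fires and a chance of `1` always fires.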
##### Number of Actions
The number of actions to generate.
##### Never auto progress on action selection
If this is checked and you pick an action, the scene will NOT automatically pass the turn to the next actor.
##### Instructions
Allows you to provide extra specific instructions to the director on how to generate the dynamic actions.
For example, you could provide a list of actions to choose from, or a list of actions to avoid. Or specify that you always want a certain action to be included.


@ -1,6 +1,10 @@
# Settings
![Summarizer agent settings](/talemate/img/0.26.0/summarizer-agent-settings.png)
## General
General summarization settings.
![Summarizer agent general settings](/talemate/img/0.28.0/summarizer-general-settings.png)
##### Summarize to long term memory archive
@@ -21,4 +25,37 @@ The method used to summarize the scene dialogue.
###### Use preceding summaries to strengthen context
Help the AI summarize by including the last few summaries as additional context. Some models may incorporate this context into the new summary directly, so if you find yourself with a bunch of similar history entries, try setting this to 0.
## Layered History
Settings for the layered history summarization.
Talemate `0.28.0` introduces a new feature called layered history summarization. This feature allows the AI to summarize the scene dialogue in layers, with each layer providing a different level of detail.
Not only does this allow keeping more context in the history, albeit with earlier layers containing less detail, it also allows us to do history investigations to extract relevant information from the history during conversation and narration prompts.
Right now this is considered an experimental feature, and whether or not it's feasible in the long term will depend on how well it works in practice.
![Summarizer agent layered history settings](/talemate/img/0.28.0/summarizer-layered-history-settings.png)
##### Enable layered history
Allows you to enable or disable the layered history summarization.
!!! note "Enabling this on big scenes"
If you enable this on a big established scene, the next time the summarization agent runs, it will take a while to process the entire history and generate the layers.
##### Token threshold
The number of tokens in the layer that will trigger the summarization process to the next layer.
##### Maximum number of layers
The maximum number of layers that can be created. Raising this limit past 3 is likely to have diminishing returns. We have observed that usually by layer 3 you are down to single sentences for individual events, making it difficult to summarize further in a meaningful way.
##### Maximum tokens to process
Smaller LLMs may struggle with accurately summarizing long texts. This setting will split the text into chunks and summarize each chunk separately, then stitch them together in the next layer. If you're using a strong LLM (70B+), you can try setting this to be the same as the threshold.
Setting this higher than the token threshold does nothing.
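The chunking behaviour can be sketched roughly like this (a hypothetical helper operating on a pre-tokenized list; the real summarizer works on text and stitches the chunk summaries together in the next layer):

```python
def chunk_for_summarization(tokens: list[str], max_process_tokens: int) -> list[list[str]]:
    """Split a token sequence into chunks of at most max_process_tokens,
    so each chunk can be summarized separately."""
    return [
        tokens[i:i + max_process_tokens]
        for i in range(0, len(tokens), max_process_tokens)
    ]
```

If `max_process_tokens` equals the token threshold, the whole layer is summarized in a single pass, which is why setting it higher than the threshold has no effect.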


@ -4,4 +4,4 @@ If you have not configured the ElevenLabs TTS API, the voice agent will show tha
![Elevenlaps api key missing](/talemate/img/0.26.0/voice-agent-missing-api-key.png)
See the [ElevenLabs API setup](/apis/elevenlabs.md) for instructions on how to set up the API key.
See the [ElevenLabs API setup](/talemate/user-guide/apis/elevenlabs/) for instructions on how to set up the API key.


@@ -34,6 +34,16 @@ Version `0.26` introduces a new `act-as` feature, which allows you to act as ano
![Dialogue input - act as narrator](/talemate/img/0.26.0/interacting-input-act-as-narrator.png)
### Quick action
If you start a message with the `@` character you can have the AI generate the response based on what action you are taking. This is useful if you want to quickly generate a response without having to type out the full action and narration yourself.
![Quick action](/talemate/img/0.28.0/quick-action.png)
![Quick action generated text](/talemate/img/0.28.0/quick-action-generated-text.png)
This functionality was added in version `0.28.0`
### Autocomplete
When typing out your action / dialogue, you can hit the `ctrl+enter` key combination to generate an autocompletion of your current text.


@@ -28,4 +28,10 @@ Some scenes start out with a locked save file. This is so that this particular s
!!! info
Alternatively you can also unlock the save file through the [Scene editor](/talemate/user-guide/world-editor/scene/settings) found in **:material-earth-box: World Editor** :material-arrow-right: **:material-script: Scene** :material-arrow-right: **:material-cogs: Settings**.
## Forking a copy of a scene
You can create a new copy of a scene from any message in the scene by clicking the :material-source-fork: **Fork** button underneath the message.
All progress after the target message will be removed and a new scene will be created with the previous messages.
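A simplified sketch of the truncation involved in forking (hypothetical helper; the real implementation also rebuilds the layered history, re-imports the context database, and resets state reinforcements):

```python
def fork_history(history: list, archived: list[dict], index: int) -> tuple[list, list[dict]]:
    """Keep everything up to and including `index` (the forked message);
    drop archived summaries that extend past that point."""
    kept = history[:index + 1]
    kept_archived = [
        entry for entry in archived
        if "end" not in entry or entry["end"] < index
    ]
    return kept, kept_archived
```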

poetry.lock (generated): 3,571 lines changed; file diff suppressed because it is too large.


@@ -481,7 +481,11 @@ def game(TM):
TM.log.debug("SIMULATION SUITE: transform npc", npc=npc)
character_attributes = TM.agents.world_state.extract_character_sheet(name=npc.name, alteration_instructions=self.player_message.raw)
character_attributes = TM.agents.world_state.extract_character_sheet(
name=npc.name,
text=inject,
alteration_instructions=self.player_message.raw
)
TM.scene.set_character_attributes(npc.name, character_attributes)
character_description = TM.agents.creator.determine_character_description(npc.name)


@@ -65,6 +65,8 @@ class AgentAction(pydantic.BaseModel):
condition: Union[AgentActionConditional, None] = None
container: bool = False
icon: Union[str, None] = None
can_be_disabled: bool = False
experimental: bool = False
class AgentDetail(pydantic.BaseModel):


@@ -21,7 +21,7 @@ from talemate.emit import emit
from talemate.events import GameLoopEvent
from talemate.exceptions import LLMAccuracyError
from talemate.prompts import Prompt
from talemate.scene_message import CharacterMessage, DirectorMessage
from talemate.scene_message import CharacterMessage, DirectorMessage, ContextInvestigationMessage, NarratorMessage
from .base import (
Agent,
@@ -86,7 +86,9 @@ class ConversationAgent(Agent):
self.actions = {
"generation_override": AgentAction(
enabled=True,
label="Generation Settings",
container=True,
icon="mdi-atom-variant",
label="Generation",
config={
"format": AgentActionConfig(
type="text",
@@ -107,12 +109,6 @@
max=512,
step=32,
),
"instructions": AgentActionConfig(
type="text",
label="Instructions",
value="Write 1-3 sentences. Never wax poetic.",
description="Extra instructions to give the AI for dialog generatrion.",
),
"jiggle": AgentActionConfig(
type="number",
label="Jiggle (Increased Randomness)",
@@ -122,6 +118,29 @@
max=1.0,
step=0.1,
),
"instructions": AgentActionConfig(
type="blob",
label="Task Instructions",
value="Write 1-3 sentences. Never wax poetic.",
description="Allows to extend the task instructions - placed above the context history.",
),
"actor_instructions": AgentActionConfig(
type="blob",
label="Actor Instructions",
value="",
description="Allows to extend the actor instructions - placed towards the end of the context history.",
),
"actor_instructions_offset": AgentActionConfig(
type="number",
label="Actor Instructions Offset",
value=3,
description="Offsets the actor instructions into the context history, shifting it up N number of messages. 0 = at the end of the context history.",
min=0,
max=20,
step=1,
),
},
),
"auto_break_repetition": AgentAction(
@@ -176,11 +195,32 @@
{
"label": "AI compiled question and answers (slow)",
"value": "questions",
},
}
],
),
},
),
"investigate_context": AgentAction(
enabled=False,
label="Context Investigation",
container=True,
icon="mdi-text-search",
can_be_disabled=True,
experimental=True,
description="Will investigate the layered history of the scene to extract relevant information. This can be very slow, especially as number of layers increase. Layered history needs to be enabled in the summarizer agent.",
config={
"trigger": AgentActionConfig(
type="text",
label="Trigger",
description="The trigger to start the context investigation",
value="ai",
choices=[
{"label": "Agent decides", "value": "ai"},
{"label": "Only when a question is asked", "value": "question"},
]
),
}
),
}
@property
@@ -219,6 +259,26 @@
return details
@property
def generation_settings_task_instructions(self):
return self.actions["generation_override"].config["instructions"].value
@property
def generation_settings_actor_instructions(self):
return self.actions["generation_override"].config["actor_instructions"].value
@property
def generation_settings_actor_instructions_offset(self):
return self.actions["generation_override"].config["actor_instructions_offset"].value
@property
def investigate_context(self):
return self.actions["investigate_context"].enabled
@property
def investigate_context_trigger(self):
return self.actions["investigate_context"].config["trigger"].value
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)
@@ -433,6 +493,7 @@
self,
character: Character,
char_message: Optional[str] = "",
instruction: Optional[str] = None,
):
"""
Builds the prompt that drives the AI's conversational response
@@ -471,12 +532,9 @@
director_message = isinstance(scene_and_dialogue[-1], DirectorMessage)
except IndexError:
director_message = False
extra_instructions = ""
if self.actions["generation_override"].enabled:
extra_instructions = (
self.actions["generation_override"].config["instructions"].value
)
if self.investigate_context:
await self.run_context_investigation(character)
conversation_format = self.conversation_format
prompt = Prompt.get(
@@ -493,7 +551,11 @@
"talking_character": character,
"partial_message": char_message,
"director_message": director_message,
"extra_instructions": extra_instructions,
"extra_instructions": self.generation_settings_task_instructions, #backward compatibility
"task_instructions": self.generation_settings_task_instructions,
"actor_instructions": self.generation_settings_actor_instructions,
"actor_instructions_offset": self.generation_settings_actor_instructions_offset,
"direct_instruction": instruction,
"decensor": self.client.decensor_enabled,
},
)
@@ -526,11 +588,8 @@
if retrieval_method != "direct":
world_state = instance.get_agent("world_state")
history = self.scene.context_history(
min_dialogue=3,
max_dialogue=15,
keep_director=False,
sections=False,
add_archieved_history=False,
budget=int(self.client.max_token_length * 0.75),
)
text = "\n".join(history)
log.debug(
@@ -542,13 +601,15 @@
if retrieval_method == "questions":
self.current_memory_context = (
await world_state.analyze_text_and_extract_context(
text, f"continue the conversation as {character.name}"
text, f"continue the conversation as {character.name}",
include_character_context=True
)
).split("\n")
elif retrieval_method == "queries":
self.current_memory_context = (
await world_state.analyze_text_and_extract_context_via_queries(
text, f"continue the conversation as {character.name}"
text, f"continue the conversation as {character.name}",
include_character_context=True
)
)
@@ -567,10 +628,40 @@
return self.current_memory_context
async def build_prompt(self, character, char_message: str = ""):
async def run_context_investigation(self, character: Character | None = None):
# go backwards in the history if there is a ContextInvestigation message before
# there is a character or narrator message, just return
for idx in range(len(self.scene.history) - 1, -1, -1):
if isinstance(self.scene.history[idx], ContextInvestigationMessage):
return
if isinstance(self.scene.history[idx], (CharacterMessage, NarratorMessage)):
break
last_message = self.scene.last_message_of_type(["character", "narrator"])
if self.investigate_context_trigger == "question":
if not last_message:
return
if "?" not in str(last_message):
return
summarizer = instance.get_agent("summarizer")
result = await summarizer.dig_layered_history(str(last_message), character=character)
if not result.strip():
return
message = ContextInvestigationMessage(message=result)
self.scene.push_history([message])
emit("context_investigation", message)
async def build_prompt(self, character, char_message: str = "", instruction:str = None):
fn = self.build_prompt_default
return await fn(character, char_message=char_message)
return await fn(character, char_message=char_message, instruction=instruction)
def clean_result(self, result, character):
if "#" in result:
@@ -607,7 +698,7 @@
set_client_context_attribute("nuke_repetition", nuke_repetition)
@set_processing
async def converse(self, actor):
async def converse(self, actor, only_generate:bool = False, instruction:str = None) -> list[str] | list[CharacterMessage]:
"""
Have a conversation with the AI
"""
@@ -625,7 +716,7 @@
self.set_generation_overrides()
result = await self.client.send_prompt(await self.build_prompt(character))
result = await self.client.send_prompt(await self.build_prompt(character, instruction=instruction))
result = self.clean_result(result, character)
@@ -707,6 +798,9 @@
response_message = util.parse_messages_from_str(total_result, [character.name])
log.info("conversation agent", result=response_message)
if only_generate:
return response_message
emission = ConversationAgentEmission(
agent=self, generation=response_message, actor=actor, character=character


@@ -1,6 +1,7 @@
import asyncio
import json
import random
import uuid
from typing import TYPE_CHECKING, Tuple, Union
import pydantic
@@ -317,3 +318,88 @@ class AssistantMixin:
emit("autocomplete_suggestion", response)
return response
@set_processing
async def fork_scene(
self,
message_id: int,
save_name: str | None = None,
):
"""
Allows to fork a new scene from a specific message
in the current scene.
All content after the message will be removed and the
context database will be re imported ensuring a clean state.
All state reinforcements will be reset to their most recent
state before the message.
"""
emit("status", "Creating scene fork ...", status="busy")
try:
if not save_name:
# build a save name
uuid_str = str(uuid.uuid4())[:8]
save_name = f"{uuid_str}-forked"
log.info(f"Forking scene", message_id=message_id, save_name=save_name)
world_state = get_agent("world_state")
# does a message with the given id exist?
index = self.scene.message_index(message_id)
if index is None:
raise ValueError(f"Message with id {message_id} not found.")
# truncate scene.history keeping index as the last element
self.scene.history = self.scene.history[:index + 1]
# truncate scene.archived_history keeping the element where `end` is < `index`
# as the last element
self.scene.archived_history = [
x for x in self.scene.archived_history if "end" not in x or x["end"] < index
]
# the same needs to be done for layered history
# where each layer is truncated based on what's left in the previous layer
# using similar logic as above (checking `end` vs `index`)
# layer 0 checks archived_history
new_layered_history = []
for layer_number, layer in enumerate(self.scene.layered_history):
if layer_number == 0:
index = len(self.scene.archived_history) - 1
else:
index = len(new_layered_history[layer_number - 1]) - 1
new_layer = [
x for x in layer if x["end"] < index
]
new_layered_history.append(new_layer)
self.scene.layered_history = new_layered_history
# save the scene
await self.scene.save(copy_name=save_name)
log.info(f"Scene forked", save_name=save_name)
# re-emit history
await self.scene.emit_history()
emit("status", f"Updating world state ...", status="busy")
# reset state reinforcements
await world_state.update_reinforcements(force = True, reset= True)
# update world state
await self.scene.world_state.request_update()
emit("status", f"Scene forked", status="success")
except Exception as e:
log.exception("Scene fork failed", exc=e)
emit("status", "Scene fork failed", status="error")


@@ -17,7 +17,7 @@ from talemate.emit import emit, wait_for_input
from talemate.events import GameLoopActorIterEvent, GameLoopStartEvent, SceneStateEvent
from talemate.game.engine import GameInstructionsMixin
from talemate.prompts import Prompt
from talemate.scene_message import DirectorMessage, NarratorMessage
from talemate.scene_message import DirectorMessage, NarratorMessage, CharacterMessage
from .base import Agent, AgentAction, AgentActionConfig, set_processing
from .registry import register
@ -83,6 +83,51 @@ class DirectorAgent(GameInstructionsMixin, Agent):
),
},
),
"_generate_choices": AgentAction(
enabled=True,
container=True,
can_be_disabled=True,
experimental=True,
label="Dynamic Actions",
icon="mdi-tournament",
description="Allows the director to generate clickable choices for the player.",
config={
"chance": AgentActionConfig(
type="number",
label="Chance",
description="The chance to generate actions. 0 = never, 1 = always",
value=0.3,
min=0,
max=1,
step=0.1,
),
"num_choices": AgentActionConfig(
type="number",
label="Number of Actions",
description="The number of actions to generate",
value=3,
min=1,
max=10,
step=1,
),
"never_auto_progress": AgentActionConfig(
type="bool",
label="Never Auto Progress on Action Selection",
description="If enabled, the scene will not auto progress after you select an action.",
value=False,
),
"instructions": AgentActionConfig(
type="blob",
label="Instructions",
description="Provide some instructions to the director for generating actions.",
value="",
),
}
),
}
@property
@ -113,6 +158,26 @@ class DirectorAgent(GameInstructionsMixin, Agent):
def actor_direction_mode(self):
return self.actions["direct"].config["actor_direction_mode"].value
@property
def generate_choices_enabled(self):
return self.actions["_generate_choices"].enabled
@property
def generate_choices_chance(self):
return self.actions["_generate_choices"].config["chance"].value
@property
def generate_choices_num_choices(self):
return self.actions["_generate_choices"].config["num_choices"].value
@property
def generate_choices_never_auto_progress(self):
return self.actions["_generate_choices"].config["never_auto_progress"].value
@property
def generate_choices_instructions(self):
return self.actions["_generate_choices"].config["instructions"].value
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("agent.conversation.before_generate").connect(
@ -122,6 +187,7 @@ class DirectorAgent(GameInstructionsMixin, Agent):
self.on_player_dialog
)
talemate.emit.async_signals.get("scene_init").connect(self.on_scene_init)
talemate.emit.async_signals.get("player_turn_start").connect(self.on_player_turn_start)
async def on_scene_init(self, event: SceneStateEvent):
"""
@ -172,6 +238,31 @@ class DirectorAgent(GameInstructionsMixin, Agent):
event.game_loop.had_passive_narration = await self.direct(None)
async def on_player_turn_start(self, event: GameLoopStartEvent):
if not self.enabled:
return
if self.generate_choices_enabled:
# look backwards through history and abort if we encounter
# a character message with source "player" before either
# a character message with a different source or a narrator message
#
# this is so choices aren't generated when the player message was
# the most recent content in the scene
for i in range(len(self.scene.history) - 1, -1, -1):
message = self.scene.history[i]
if isinstance(message, NarratorMessage):
break
if isinstance(message, CharacterMessage):
if message.source == "player":
return
break
if random.random() < self.generate_choices_chance:
await self.generate_choices()
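The backward scan above can be sketched as a standalone predicate (message classes simplified to hypothetical stand-ins):

```python
import random

# simplified stand-ins for talemate.scene_message types
class NarratorMessage:
    pass

class CharacterMessage:
    def __init__(self, source):
        self.source = source

def should_offer_choices(history, chance, rng=random.random):
    # walk history backwards: if the most recent dialogue content is the
    # player's own message, never generate choices for it
    for message in reversed(history):
        if isinstance(message, NarratorMessage):
            break
        if isinstance(message, CharacterMessage):
            if message.source == "player":
                return False
            break
    return rng() < chance
```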
async def direct(self, character: Character) -> bool:
if not self.actions["direct"].enabled:
return False
@ -432,3 +523,50 @@ class DirectorAgent(GameInstructionsMixin, Agent):
self, kind: str, agent_function_name: str, auto: bool = False
):
return True
@set_processing
async def generate_choices(
self,
):
log.info("generate_choices")
response = await Prompt.request(
"director.generate-choices",
self.client,
"direction_long",
vars={
"max_tokens": self.client.max_token_length,
"scene": self.scene,
"player_character": self.scene.get_player_character(),
"num_choices": self.generate_choices_num_choices,
"instructions": self.generate_choices_instructions,
},
)
try:
choice_text = response.split("ACTIONS:", 1)[1]
choices = util.extract_list(choice_text)
# strip quotes
choices = [choice.strip().strip('"') for choice in choices]
# limit to num_choices
choices = choices[:self.generate_choices_num_choices]
except Exception as e:
log.error("generate_choices failed", error=str(e), response=response)
return
log.info("generate_choices done", choices=choices)
emit(
"player_choice",
response,
data={"choices": choices},
websocket_passthrough=True,
)
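The response parsing can be illustrated with a simplified list extractor (the project uses `util.extract_list`; this hypothetical stand-in only handles numbered and dashed items after the `ACTIONS:` marker):

```python
import re

def parse_choices(response: str, num_choices: int) -> list[str]:
    if "ACTIONS:" not in response:
        return []
    choice_text = response.split("ACTIONS:", 1)[1]
    choices = []
    for line in choice_text.splitlines():
        # match "1. ...", "2) ...", "- ..." or "* ..." items
        m = re.match(r'\s*(?:\d+[.)]|[-*])\s*(.+)', line)
        if m:
            # strip surrounding quotes, mirroring the agent code
            choices.append(m.group(1).strip().strip('"'))
    return choices[:num_choices]
```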


@ -795,7 +795,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
elif not where["$and"]:
where = None
# log.debug("chromadb agent get", text=text, where=where)
log.debug("chromadb agent get", text=text, where=where)
_results = self.db.query(query_texts=[text], where=where, n_results=limit)
@ -875,6 +875,10 @@ class ChromaDBMemoryAgent(MemoryAgent):
return None
def _get_document(self, id) -> dict:
if not id:
return {}
result = self.db.get(ids=[id] if isinstance(id, str) else id)
documents = {}

File diff suppressed because it is too large


@ -83,24 +83,24 @@ class Style(pydantic.BaseModel):
# Almost taken straight from some of the fooocus style presets, credit goes to the original author
STYLE_MAP["digital_art"] = Style(
keywords="digital artwork, masterpiece, best quality, high detail".split(", "),
keywords="in the style of a digital artwork, masterpiece, best quality, high detail".split(", "),
negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)
STYLE_MAP["concept_art"] = Style(
keywords="concept art, conceptual sketch, masterpiece, best quality, high detail".split(
keywords="in the style of concept art, conceptual sketch, masterpiece, best quality, high detail".split(
", "
),
negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)
STYLE_MAP["ink_illustration"] = Style(
keywords="ink illustration, painting, masterpiece, best quality".split(", "),
keywords="in the style of ink illustration, painting, masterpiece, best quality".split(", "),
negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)
STYLE_MAP["anime"] = Style(
keywords="anime, masterpiece, best quality, illustration".split(", "),
keywords="in the style of anime, masterpiece, best quality, illustration".split(", "),
negative_keywords="text, watermark, low quality, blurry, photo, 3d".split(", "),
)


@ -16,7 +16,6 @@ from talemate.events import GameLoopEvent
from talemate.instance import get_agent
from talemate.prompts import Prompt
from talemate.scene_message import (
DirectorMessage,
ReinforcementMessage,
TimePassageMessage,
)
@ -291,16 +290,18 @@ class WorldStateAgent(Agent):
self,
text: str,
goal: str,
include_character_context: bool = False,
):
response = await Prompt.request(
"world_state.analyze-text-and-extract-context",
self.client,
"analyze_freeform",
"analyze_freeform_long",
vars={
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"text": text,
"goal": goal,
"include_character_context": include_character_context,
},
)
@ -315,6 +316,7 @@ class WorldStateAgent(Agent):
self,
text: str,
goal: str,
include_character_context: bool = False,
) -> list[str]:
response = await Prompt.request(
"world_state.analyze-text-and-generate-rag-queries",
@ -325,6 +327,7 @@ class WorldStateAgent(Agent):
"max_tokens": self.client.max_token_length,
"text": text,
"goal": goal,
"include_character_context": include_character_context,
},
)
@ -506,7 +509,7 @@ class WorldStateAgent(Agent):
return response
@set_processing
async def update_reinforcements(self, force: bool = False):
async def update_reinforcements(self, force: bool = False, reset: bool = False):
"""
Queries due worldstate re-inforcements
"""
@ -514,7 +517,7 @@ class WorldStateAgent(Agent):
for reinforcement in self.scene.world_state.reinforce:
if reinforcement.due <= 0 or force:
await self.update_reinforcement(
reinforcement.question, reinforcement.character
reinforcement.question, reinforcement.character, reset=reset
)
else:
reinforcement.due -= 1
@ -692,7 +695,7 @@ class WorldStateAgent(Agent):
summary = await summarizer.summarize(
text,
extra_context=extra_context,
extra_context=[extra_context],
method="short",
extra_instructions="Pay particularly close attention to decisions, agreements or promises made.",
)


@ -29,6 +29,9 @@ SUPPORTED_MODELS = [
"gpt-4-turbo-2024-04-09",
"gpt-4-turbo",
"gpt-4o-2024-05-13",
"gpt-4o-2024-08-06",
"gpt-4o-2024-11-20",
"gpt-4o-latest",
"gpt-4o",
"gpt-4o-mini",
"o1-preview",
@ -38,6 +41,9 @@ SUPPORTED_MODELS = [
# any model starting with gpt-4- is assumed to support 'json_object'
# for others we need to explicitly state the model name
JSON_OBJECT_RESPONSE_MODELS = [
"gpt-4o-2024-08-06",
"gpt-4o-2024-11-20",
"gpt-4o-latest",
"gpt-4o",
"gpt-4o-mini",
"gpt-3.5-turbo-0125",
@ -209,6 +215,10 @@ class OpenAIClient(ClientBase):
self.max_token_length = min(max_token_length or 8192, 8192)
elif model == "gpt-3.5-turbo-16k":
self.max_token_length = min(max_token_length or 16384, 16384)
elif model.startswith("gpt-4o") and model != "gpt-4o-2024-05-13":
self.max_token_length = min(max_token_length or 16384, 16384)
elif model == "gpt-4o-2024-05-13":
self.max_token_length = min(max_token_length or 4096, 4096)
elif model == "gpt-4-1106-preview":
self.max_token_length = min(max_token_length or 128000, 128000)
else:


@ -83,6 +83,8 @@ PRESET_SUBSTRING_MAPPINGS = {
"creative": "creative",
"analytical": "analytical",
"analyze": "analytical",
"direction": "scene_direction",
"summarize": "summarization",
}
PRESET_MAPPING = {
@ -93,6 +95,8 @@ PRESET_MAPPING = {
"analyze_long": "analytical",
"analyze_freeform": "analytical",
"analyze_freeform_short": "analytical",
"analyze_freeform_medium": "analytical",
"analyze_freeform_medium_short": "analytical",
"narrate": "creative",
"create": "creative_instruction",
"create_short": "creative_instruction",
@ -132,7 +136,7 @@ def preset_for_kind(kind: str, client: "ClientBase") -> dict:
TOKEN_MAPPING = {
"conversation": 75,
"conversation_select_talking_actor": 30,
"summarize": 500,
"summarize": 512,
"analyze": 500,
"analyze_long": 2048,
"analyze_freeform": 500,
@ -154,7 +158,9 @@ TOKEN_MAPPING = {
TOKEN_SUBSTRING_MAPPINGS = {
"extensive": 2048,
"long": 1024,
"medium3": 750,
"medium2": 512,
"list": 300,
"medium": 192,
"short2": 128,
"short": 75,


@ -3,7 +3,7 @@ from .cmd_autocomplete import *
from .cmd_characters import *
from .cmd_debug_tools import *
from .cmd_dialogue import *
from .cmd_director import CmdDirectorDirect, CmdDirectorDirectWithOverride
from .cmd_director import *
from .cmd_exit import CmdExit
from .cmd_help import CmdHelp
from .cmd_info import CmdInfo


@ -7,12 +7,17 @@ import structlog
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.prompts.base import set_default_sectioning_handler
from talemate.instance import get_agent
__all__ = [
"CmdDebugOn",
"CmdDebugOff",
"CmdPromptChangeSectioning",
"CmdRunAutomatic",
"CmdSummarizerGenerateTimeline",
"CmdSummarizerUpdatedLayeredHistory",
"CmdSummarizerResetLayeredHistory",
"CmdSummarizerDigLayeredHistory",
]
log = structlog.get_logger("talemate.commands.cmd_debug_tools")
@ -178,3 +183,68 @@ class CmdDumpSceneSerialization(TalemateCommand):
async def run(self):
log.debug("dump_scene_serialization", serialization=self.scene.json)
@register
class CmdSummarizerGenerateTimeline(TalemateCommand):
"""
Command class for the 'summarizer_generate_timeline' command
"""
name = "summarizer_generate_timeline"
description = "Generate a timeline from the scene"
aliases = ["generate_timeline"]
async def run(self):
summarizer = get_agent("summarizer")
await summarizer.generate_timeline()
@register
class CmdSummarizerUpdatedLayeredHistory(TalemateCommand):
"""
Command class for the 'summarizer_updated_layered_history' command
"""
name = "summarizer_updated_layered_history"
description = "Update the stepped archive for the summarizer"
aliases = ["update_layered_history"]
async def run(self):
summarizer = get_agent("summarizer")
await summarizer.summarize_to_layered_history()
@register
class CmdSummarizerResetLayeredHistory(TalemateCommand):
"""
Command class for the 'summarizer_reset_layered_history' command
"""
name = "summarizer_reset_layered_history"
description = "Reset the stepped archive for the summarizer"
aliases = ["reset_layered_history"]
async def run(self):
summarizer = get_agent("summarizer")
self.scene.layered_history = []
await summarizer.summarize_to_layered_history()
@register
class CmdSummarizerDigLayeredHistory(TalemateCommand):
"""
Command class for the 'summarizer_dig_layered_history' command
"""
name = "summarizer_dig_layered_history"
description = "Dig into the layered history"
aliases = ["dig_layered_history"]
async def run(self):
if not self.args:
self.emit("system", "You must specify a query")
return
query = self.args[0]
summarizer = get_agent("summarizer")
await summarizer.dig_layered_history(query)


@ -4,6 +4,11 @@ from talemate.emit import emit, wait_for_input
from talemate.scene_message import DirectorMessage
from talemate.util import colored_text, wrap_text
__all__ = [
"CmdDirectorDirect",
"CmdDirectorDirectWithOverride",
"CmdDirectorGenerateChoices",
]
@register
class CmdDirectorDirect(TalemateCommand):
@ -64,3 +69,22 @@ class CmdDirectorDirectWithOverride(CmdDirectorDirect):
async def run(self):
await super().run(ask_for_input=True)
@register
class CmdDirectorGenerateChoices(TalemateCommand):
"""
Command class for the 'director_generate_choices' command
"""
name = "director_generate_choices"
description = "Calls a director to generate choices for a character"
aliases = ["generate_choices"]
async def run(self, ask_for_input=True):
director = self.scene.get_helper("director")
if not director:
self.system_message("No director found")
return True
choices = await director.agent.generate_choices()


@ -17,6 +17,7 @@ class CmdRebuildArchive(TalemateCommand):
async def run(self):
summarizer = self.scene.get_helper("summarizer")
memory = self.scene.get_helper("memory")
if not summarizer:
self.system_message("No summarizer found")
@ -27,11 +28,9 @@ class CmdRebuildArchive(TalemateCommand):
ah for ah in self.scene.archived_history if ah.get("end") is None
]
self.scene.ts = (
self.scene.archived_history[-1].ts
if self.scene.archived_history
else "PT0S"
)
self.scene.ts = "PT0S"
memory.delete({"typ": "history"})
entries = 0
total_entries = summarizer.agent.estimated_entry_count
@ -42,7 +41,10 @@ class CmdRebuildArchive(TalemateCommand):
status="busy",
)
more = await summarizer.agent.build_archive(self.scene)
self.scene.sync_time()
entries += 1
if not more:
break


@ -434,6 +434,33 @@ AnnotatedClient = Annotated[
]
class HistoryMessageStyle(BaseModel):
italic: bool = False
bold: bool = False
# Leave None for default color
color: str | None = None
class HidableHistoryMessageStyle(HistoryMessageStyle):
# certain messages can be hidden, but all messages are shown by default
show: bool = True
class SceneAppearance(BaseModel):
narrator_messages: HistoryMessageStyle = HistoryMessageStyle(italic=True)
character_messages: HistoryMessageStyle = HistoryMessageStyle()
director_messages: HidableHistoryMessageStyle = HidableHistoryMessageStyle()
time_messages: HistoryMessageStyle = HistoryMessageStyle()
context_investigation_messages: HidableHistoryMessageStyle = HidableHistoryMessageStyle()
class Appearance(BaseModel):
scene: SceneAppearance = SceneAppearance()
class Config(BaseModel):
clients: Dict[str, AnnotatedClient] = {}
@ -466,6 +493,8 @@ class Config(BaseModel):
recent_scenes: RecentScenes = RecentScenes()
presets: Presets = Presets()
appearance: Appearance = Appearance()
class Config:
extra = "ignore"


@ -3,7 +3,10 @@ from contextvars import ContextVar
import pydantic
import structlog
from talemate.exceptions import SceneInactiveError
__all__ = [
"assert_active_scene",
"scene_is_loading",
"rerun_context",
"active_scene",
@ -19,6 +22,8 @@ log = structlog.get_logger(__name__)
class InteractionState(pydantic.BaseModel):
act_as: str | None = None
from_choice: str | None = None
input: str | None = None
scene_is_loading = ContextVar("scene_is_loading", default=None)
@ -79,3 +84,11 @@ class Interaction:
def __exit__(self, *args):
interaction.reset(self.token)
def assert_active_scene(scene: object):
if not active_scene.get():
raise SceneInactiveError("Scene is not active")
if active_scene.get() != scene:
raise SceneInactiveError("Scene has changed")


@ -123,7 +123,17 @@ async def wait_for_input(
while input_received["message"] is None:
await asyncio.sleep(0.1)
interaction_state = interaction.get()
if interaction_state.input:
input_received["message"] = interaction_state.input
input_received["interaction"] = interaction_state
input_received["from_choice"] = interaction_state.from_choice
interaction_state.input = None
interaction_state.from_choice = None
break
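The injected-input path above can be sketched on its own. A simplified version (the real loop also receives websocket input; here only a pre-set `interaction_state.input`, e.g. from a clicked choice, is handled):

```python
import asyncio
from types import SimpleNamespace

async def wait_for_injected_input(interaction_state, poll=0.01):
    input_received = {"message": None, "from_choice": None}
    while input_received["message"] is None:
        await asyncio.sleep(poll)
        if interaction_state.input:
            input_received["message"] = interaction_state.input
            input_received["from_choice"] = interaction_state.from_choice
            # consume the injected input so it fires only once
            interaction_state.input = None
            interaction_state.from_choice = None
            break
    return input_received
```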
handlers["receive_input"].disconnect(input_receiver)
if input_received["message"] == "!abort":


@ -8,6 +8,8 @@ DirectorMessage = signal("director")
TimePassageMessage = signal("time")
StatusMessage = signal("status")
ReinforcementMessage = signal("reinforcement")
PlayerChoiceMessage = signal("player_choice")
ContextInvestigationMessage = signal("context_investigation")
ClearScreen = signal("clear_screen")
@ -49,6 +51,7 @@ handlers = {
"player": PlayerMessage,
"director": DirectorMessage,
"time": TimePassageMessage,
"context_investigation": ContextInvestigationMessage,
"reinforcement": ReinforcementMessage,
"request_input": RequestInput,
"receive_input": ReceiveInput,
@ -73,4 +76,5 @@ handlers = {
"autocomplete_suggestion": AutocompleteSuggestion,
"spice_applied": SpiceApplied,
"memory_request": MemoryRequest,
"player_choice": PlayerChoiceMessage,
}


@ -65,3 +65,8 @@ class GameLoopActorIterEvent(GameLoopBase):
@dataclass
class GameLoopNewMessageEvent(GameLoopBase):
message: SceneMessage
@dataclass
class PlayerTurnStartEvent(Event):
pass


@ -93,7 +93,7 @@ def create(scene: "Scene") -> "ScopedAPI":
validated = Arguments(budget=budget, keep_director=keep_director)
return scene.context_history(validated.budget, validated.keep_director)
return scene.context_history(validated.budget, keep_director=validated.keep_director)
def get_player_character(self) -> schema.CharacterSchema | None:
"""


@ -14,6 +14,7 @@ from talemate.instance import get_agent
from talemate.scene_message import SceneMessage
from talemate.util import iso8601_diff_to_human
from talemate.world_state.templates import GenerationOptions
from talemate.exceptions import GenerationCancelled
if TYPE_CHECKING:
from talemate.tale_mate import Scene
@ -78,7 +79,11 @@ def history_with_relative_time(history: list[str], scene_time: str) -> list[dict
{
"text": entry["text"],
"ts": entry["ts"],
"ts_start": entry.get("ts_start", None),
"ts_end": entry.get("ts_end", None),
"time": iso8601_diff_to_human(scene_time, entry["ts"]),
"time_start": iso8601_diff_to_human(scene_time, entry["ts_start"] if entry.get("ts_start") else None),
"time_end": iso8601_diff_to_human(scene_time, entry["ts_end"] if entry.get("ts_end") else None),
}
for entry in history
]
@ -97,10 +102,12 @@ async def rebuild_history(
scene.archived_history = [
ah for ah in scene.archived_history if ah.get("end") is None
]
scene.layered_history = []
scene.saved = False
scene.ts = scene.archived_history[-1].ts if scene.archived_history else "PT0S"
scene.sync_time()
summarizer = get_agent("summarizer")
@ -109,6 +116,8 @@ async def rebuild_history(
try:
while True:
await asyncio.sleep(0.1)
if not scene.active:
# scene is no longer active
@ -120,20 +129,25 @@ async def rebuild_history(
"status",
message=f"Rebuilding historical archive... {entries}/~{total_entries}",
status="busy",
data={"cancellable": True},
)
more = await summarizer.build_archive(
scene, generation_options=generation_options
)
scene.ts = scene.archived_history[-1]["ts"]
scene.sync_time()
if callback:
callback()
await callback()
entries += 1
if not more:
break
except GenerationCancelled:
log.info("Generation cancelled, stopping rebuild of historical archive")
emit("status", message="Rebuilding of archive cancelled", status="info")
return
except Exception as e:
log.exception("Error rebuilding historical archive", error=e)
emit("status", message="Error rebuilding historical archive", status="error")
@ -141,4 +155,9 @@ async def rebuild_history(
scene.sync_time()
await scene.commit_to_memory()
if summarizer.layered_history_enabled:
emit("status", message="Rebuilding layered history...", status="busy")
await summarizer.summarize_to_layered_history()
emit("status", message="Historical archive rebuilt", status="success")


@ -228,6 +228,7 @@ async def load_scene_from_data(
scene.memory_session_id = scene_data.get("memory_session_id", None)
scene.history = _load_history(scene_data["history"])
scene.archived_history = scene_data["archived_history"]
scene.layered_history = scene_data.get("layered_history", [])
scene.world_state = WorldState(**scene_data.get("world_state", {}))
scene.game_state = GameState(**scene_data.get("game_state", {}))
scene.context = scene_data.get("context", "")
@ -237,7 +238,7 @@ async def load_scene_from_data(
scene.assets.cover_image = scene_data.get("assets", {}).get("cover_image", None)
scene.assets.load_assets(scene_data.get("assets", {}).get("assets", {}))
scene.sync_time()
scene.fix_time()
log.debug("scene time", ts=scene.ts)
loading_status("Initializing long-term memory...")


@ -23,7 +23,7 @@ import structlog
import talemate.instance as instance
import talemate.thematic_generators as thematic_generators
from talemate.config import load_config
from talemate.context import rerun_context
from talemate.context import rerun_context, active_scene
from talemate.emit import emit
from talemate.exceptions import LLMAccuracyError, RenderPromptError
from talemate.util import (
@ -32,6 +32,7 @@ from talemate.util import (
extract_json,
fix_faulty_json,
remove_extra_linebreaks,
iso8601_diff_to_human,
)
from talemate.util.prompt import condensed
@ -366,8 +367,10 @@ class Prompt:
env.globals["instruct_text"] = self.instruct_text
env.globals["agent_action"] = self.agent_action
env.globals["retrieve_memories"] = self.retrieve_memories
env.globals["time_diff"] = self.time_diff
env.globals["uuidgen"] = lambda: str(uuid.uuid4())
env.globals["to_int"] = lambda x: int(x)
env.globals["to_str"] = lambda x: str(x)
env.globals["config"] = self.config
env.globals["len"] = lambda x: len(x)
env.globals["max"] = lambda x, y: max(x, y)
@ -386,6 +389,7 @@ class Prompt:
env.globals["llm_can_be_coerced"] = lambda: (
self.client.can_be_coerced if self.client else False
)
env.globals["text_to_chunks"] = self.text_to_chunks
env.globals["emit_narrator"] = lambda message: emit("system", message=message)
env.filters["condensed"] = condensed
ctx.update(self.vars)
@ -400,7 +404,7 @@ class Prompt:
# Render the template with the prompt variables
self.eval_context = {}
self.dedupe_enabled = True
#self.dedupe_enabled = True
try:
self.prompt = template.render(ctx)
if not sectioning_handler:
@ -599,6 +603,44 @@ class Prompt:
else:
emit("status", status=status, message=message)
def time_diff(self, iso8601_time: str):
scene = active_scene.get()
if not iso8601_time:
return ""
return iso8601_diff_to_human(iso8601_time, scene.ts)
def text_to_chunks(self, text: str, chunk_size: int = 512) -> list[str]:
"""
Splits a text string into chunks of roughly `chunk_size` characters, breaking on line boundaries.
Arguments:
- text: The text to split into chunks.
- chunk_size: approximate maximum number of characters per chunk.
"""
chunks = []
for line in text.split("\n"):
# don't push empty lines into empty chunks
if not line.strip() and (not chunks or not chunks[-1]):
continue
if not chunks:
chunks.append([line])
continue
if len("\n".join(chunks[-1])) + len(line) < chunk_size:
chunks[-1].append(line)
else:
chunks.append([line])
return ["\n\n".join(chunk) for chunk in chunks]
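The chunking behavior can be verified with a standalone copy of the method (shown here as a plain function for illustration):

```python
def text_to_chunks(text: str, chunk_size: int = 512) -> list[str]:
    chunks = []
    for line in text.split("\n"):
        # skip empty lines that would start an empty chunk
        if not line.strip() and (not chunks or not chunks[-1]):
            continue
        if not chunks:
            chunks.append([line])
            continue
        # lines accumulate until the joined chunk would reach chunk_size
        if len("\n".join(chunks[-1])) + len(line) < chunk_size:
            chunks[-1].append(line)
        else:
            chunks.append([line])
    return ["\n\n".join(chunk) for chunk in chunks]
```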
def set_prepared_response(self, response: str, prepend: str = ""):
"""
Set the prepared response.


@ -45,7 +45,7 @@ You may choose to have {{ talking_character.name}} respond to the conversation, o
Always contain actions in asterisks. For example, *{{ talking_character.name}} smiles*.
Always contain dialogue in quotation marks. For example, {{ talking_character.name}}: "Hello!"
{{ extra_instructions }}
{{ task_instructions }}
{% if scene.count_messages() >= 5 and not talking_character.dialogue_instructions %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif -%}
@ -90,12 +90,8 @@ Always contain dialogue in quotation marks. For example, {{ talking_character.na
{% endblock -%}
{% block scene_history -%}
{% set scene_context = scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), min_dialogue=15, sections=False, keep_director=talking_character.name) -%}
{%- if talking_character.dialogue_instructions and scene.count_messages() > 5 -%}
{%- if scene.count_messages() < 15 -%}
{%- set _ = scene_context.insert(-3, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+")") -%}
{%- else -%}
{%- set _ = scene_context.insert(-10, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+")") -%}
{%- endif -%}
{%- if actor_instructions_offset > 0 and talking_character.dialogue_instructions and scene.count_messages() > actor_instructions_offset -%}
{%- set _ = scene_context.insert(-actor_instructions_offset, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+" "+actor_instructions+")") -%}
{% endif -%}
{% for scene_line in scene_context -%}
{{ scene_line }}
@ -103,8 +99,11 @@ Always contain dialogue in quotation marks. For example, {{ talking_character.na
{% endfor %}
{% endblock -%}
<|CLOSE_SECTION|>
{% if scene.count_messages() < 5 %}
{% if not talking_character.dialogue_instructions %}(Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.){% else %}(Internal acting instructions for {{ talking_character.name }}: {{ talking_character.dialogue_instructions }}){% endif -%}
{% if scene.count_messages() < actor_instructions_offset or actor_instructions_offset == 0 %}
{% if not talking_character.dialogue_instructions %}({% if actor_instructions %} {{ actor_instructions }}{% else %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.{% endif -%}){% else %}(Internal acting instructions for {{ talking_character.name }}: {{ talking_character.dialogue_instructions }}{% if actor_instructions %} {{ actor_instructions }}{% endif %}){% endif -%}
{% endif -%}
{% if layered_history_investigation %}
(Internal notes - historic context: {{ layered_history_investigation }})
{% endif -%}
{% if rerun_context and rerun_context.direction -%}
{% if rerun_context.method == 'replace' -%}
@ -115,4 +114,9 @@ Always contain dialogue in quotation marks. For example, {{ talking_character.na
# Requested changes: {{ rerun_context.direction }}
{% endif -%}
{% endif -%}
{% if direct_instruction -%}
{{ talking_character.name }}'s next action: {{ direct_instruction }}
You must not add additional actions.
{% endif -%}
{{ bot_token }}{{ talking_character.name }}:{{ partial_message }}


@ -52,7 +52,7 @@ Emotions and actions should be written in italics. For example:
*smiles* "I'm so glad you're here."
END-OF-LINE
{{ extra_instructions }}
{{ task_instructions }}
STAY IN THE SCENE. YOU MUST NOT BREAK CHARACTER. YOU MUST NOT BREAK THE FOURTH WALL.
@ -63,6 +63,11 @@ YOU MUST ONLY WRITE NEW DIALOGUE FOR {{ talking_character.name.upper() }}.
{% if scene.count_messages() >= 5 and not talking_character.dialogue_instructions %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif -%}
<|CLOSE_SECTION|>
<|SECTION:How to use internal notes|>
Internal notes may be given to you to help you with consistency when writing.
They may be instructions on how the character should act or simply add some context that may inform the character's next dialogue.
<|CLOSE_SECTION|>
{% set general_reinforcements = scene.world_state.filter_reinforcements(insert=['all-context']) %}
{% set char_reinforcements = scene.world_state.filter_reinforcements(character=talking_character.name, insert=["conversation-context"]) %}
@ -104,21 +109,17 @@ YOU MUST ONLY WRITE NEW DIALOGUE FOR {{ talking_character.name.upper() }}.
{% endblock -%}
{% block scene_history -%}
{% set scene_context = scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), min_dialogue=15, sections=False, keep_director=talking_character.name) -%}
{%- if talking_character.dialogue_instructions and scene.count_messages() > 5 -%}
{%- if scene.count_messages() < 15 -%}
{%- set _ = scene_context.insert(-3, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+")") -%}
{%- else -%}
{%- set _ = scene_context.insert(-10, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+")") -%}
{%- endif -%}
{%- if actor_instructions_offset > 0 and talking_character.dialogue_instructions and scene.count_messages() > actor_instructions_offset -%}
{%- set _ = scene_context.insert(-actor_instructions_offset, "(Internal acting instructions for "+talking_character.name+": "+talking_character.dialogue_instructions+" "+actor_instructions+")") -%}
{% endif -%}
{% for scene_line in scene_context -%}
{{ scene_line }}END-OF-LINE
{{ scene_line }}
{% endfor %}
{% endblock -%}
<|CLOSE_SECTION|>
{% if scene.count_messages() < 5 %}
{% if not talking_character.dialogue_instructions %}(Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.){% else %}(Internal acting instructions for {{ talking_character.name }}: {{ talking_character.dialogue_instructions }}){% endif -%}
{% if scene.count_messages() < actor_instructions_offset or actor_instructions_offset == 0 %}
{% if not talking_character.dialogue_instructions %}({% if actor_instructions %} {{ actor_instructions }}{% else %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.{% endif -%}){% else %}(Internal acting instructions for {{ talking_character.name }}: {{ talking_character.dialogue_instructions }}{% if actor_instructions %} {{ actor_instructions }}{% endif %}){% endif -%}
{% endif -%}
{% if rerun_context and rerun_context.direction -%}
{% if rerun_context.method == 'replace' -%}
@ -129,6 +130,11 @@ YOU MUST ONLY WRITE NEW DIALOGUE FOR {{ talking_character.name.upper() }}.
# Requested changes: {{ rerun_context.direction }}
{% endif -%}
{% endif -%}
{% if direct_instruction -%}
{{ talking_character.name }}'s next action: {{ direct_instruction }}
You must not add additional actions. Dialogue generated should be natural sounding and realistic. Less is more.
{% endif -%}
{{ bot_token }}{{ talking_character.name.upper() }}
{% if partial_message -%}
{{ partial_message.strip() }}


@ -16,10 +16,10 @@ Only respond with the character name. For example, if you want to pick the chara
{% for scene_context in scene.context_history(budget=250, sections=False, add_archieved_history=False) -%}
{{ scene_context }}
{% endfor %}
{% if scene.history[-1].type == "narrator" %}
{% if llm_can_be_coerced() %}{% if scene.history[-1].type == "narrator" %}
{{ bot_token }}The next character to speak is
{% elif scene.prev_actor -%}
{{ bot_token }}The next character to respond to '{{ scene.history[-1].message }}' is
{% else -%}
{{ bot_token }}The next character to respond is
{% endif %}
{% endif %}{% endif %}


@ -6,7 +6,7 @@
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% for scene_context in scene.context_history(budget=min(2048, max_tokens-300-count_tokens(self.rendered_context())), min_dialogue=20, sections=False) -%}
{% for scene_context in scene.context_history(budget=max_tokens-300-count_tokens(self.rendered_context()), min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|CLOSE_SECTION|>


@ -0,0 +1,12 @@
<|SECTION:CHARACTERS|>
{% for character in scene.characters %}
### {{ character.name }}
{% if max_tokens > 6000 -%}
{{ character.sheet }}
{% else -%}
{{ character.filtered_sheet(['age', 'gender']) }}
{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
{% endif %}
{{ character.description }}
{% endfor %}
<|CLOSE_SECTION|>


@ -0,0 +1,50 @@
{% block rendered_context -%}
{% include "character-context.jinja2" -%}
{% endblock %}
<|SECTION:SCENE|>
{% block scene_history -%}
{% set scene_context = scene.context_history(budget=max_tokens-300-count_tokens(self.rendered_context()), min_dialogue=15, sections=False, keep_director=False) -%}
{% for scene_line in scene_context -%}
{{ scene_line }}
{% endfor %}
{% endblock -%}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Generate {{ num_choices }} interesting actions for {{ player_character.name }} to advance the current scene in this text adventure game. Consider:
1. Examining intriguing objects or characters for more detail
2. Interacting with the environment in meaningful ways
3. Taking actions that naturally progress the story
Format each action as a short, concise command from {{ player_character.name }}'s perspective, such as:
"Look at the strange artifact."
"Ask the merchant about the rumors."
"Climb the crumbling staircase."
"Inspect the mysterious footprints."
"Eavesdrop on the whispering guards."
"Pick up the discarded letter."
"Offer the beggar a coin."
"Attempt to decipher the ancient runes."
"Search the bookshelf for hidden compartments."
"Try the rusty key in the lock."
Requirements:
- The actions MUST fit the scene's tone and writing style. This is {{ scene.context }}.{% if decensor %} The reader has already consented and is expecting to be exposed to such themes.{% endif %}
- The text describing the action must be short and concise.
- Offer varied options that make sequential sense at the ending of the scene, without drastic pacing changes.
- The actions must be significantly different from each other.
{% if instructions %}- {{ instructions }}{% endif %}
Expected Response:
You MUST provide your response in the following format:
ANALYSIS: <Brief analysis of what happens at the end of the scene. Specifically pay attention to whether or not another character has had dialogue that could be responded to.>
PLANNING: <Think through directions to take the scene next. The directions must make sense in relation to the ending state of the scene as it is currently. If another character has said something, include direct responses to that character. Remember, {{ player_character.name }} is the next one to act, so plan from {{ player_character.name }}'s perspective.>
ACTIONS:
1. <first action choice, formatted as a short button label>
2. <second action choice, formatted as a short button label>
...
{{ set_prepared_response('ANALYSIS:') }}
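Since the template pins the response to start with `ANALYSIS:` via `set_prepared_response`, the caller only needs to pull the numbered items out of the `ACTIONS:` block. A minimal sketch of such a parser (a hypothetical helper, not the project's actual code):

```python
import re

def parse_action_choices(response: str) -> list[str]:
    """Extract the numbered ACTIONS list from a response that
    follows the ANALYSIS / PLANNING / ACTIONS format."""
    # isolate everything after the ACTIONS: header
    _, _, actions_block = response.partition("ACTIONS:")
    choices = []
    for line in actions_block.splitlines():
        # match lines like '1. Look at the strange artifact.'
        match = re.match(r"\s*\d+\.\s*(.+)", line)
        if match:
            # strip surrounding quotes the model sometimes adds
            choices.append(match.group(1).strip().strip('"'))
    return choices
```

Anything before `ACTIONS:` (the analysis and planning) is simply discarded here; a stricter implementation might validate that both sections are present.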


@ -0,0 +1,12 @@
<|SECTION:CHARACTERS|>
{% for character in scene.characters %}
### {{ character.name }}
{% if max_tokens > 6000 -%}
{{ character.sheet }}
{% else -%}
{{ character.filtered_sheet(['age', 'gender']) }}
{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
{% endif %}
{{ character.description }}
{% endfor %}
<|CLOSE_SECTION|>


@ -14,6 +14,7 @@
{% set final_line_number=len(scene_history) %}
{% for scene_context in scene_history -%}
{{ loop.index }}. {{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
{% if query.endswith("?") -%}


@ -1,6 +1,8 @@
{{ dialogue }}
<|SECTION:TASK|>
Examine the dialogue from the beginning and find the last line that marks a scene change. Repeat the line back to me exactly as it is written.
Examine the scene progress from the beginning and find the first line that marks the ending of a scene. Think of this in terms of a TV show or a play, where there is a build-up, a peak, and a denouement. You must identify the denouement point.
Repeat the line back to me exactly as it is written.
<|CLOSE_SECTION|>
{{ bot_token }}The first line that marks a scene change is:
{{ bot_token }}The first line that marks a denouement point is:


@ -0,0 +1,12 @@
<|SECTION:CHARACTERS|>
{% for character in scene.characters %}
### {{ character.name }}
{% if max_tokens > 6000 -%}
{{ character.sheet }}
{% else -%}
{{ character.filtered_sheet(['age', 'gender']) }}
{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
{% endif %}
{{ character.description }}
{% endfor %}
<|CLOSE_SECTION|>


@ -0,0 +1,135 @@
{% if context %}
<|SECTION:HISTORY|>
{% for entry in context %}
{{ entry["text"] }}
{% endfor %}
{% endif %}
{% set can_dig = layer > -1 %}
{% for entry in entries %}
{% if entry.get("layer") > -1 or layer == -1 %}<|SECTION:CHAPTER {{ loop.index }}|>
{{ time_diff(entry.get("ts_end", entry.get("ts"))) }}
{{ entry["text"] }}
<|CLOSE_SECTION|>{% endif %}
{% endfor %}
{% if is_initial -%}
<|SECTION:CURRENT SCENE|>
{% for entry in entries %}
{% if entry.get("layer") == -1 %}{{ entry["text"] }}
{% endif %}
{% endfor %}
{{ scene.snapshot(lines=15, ignore=['director', 'reinforcement']) }}
<|CLOSE_SECTION|>
{% endif %}
{% if is_initial or dig_question %}
<|SECTION:QUERY|>
{{ dig_question or query }}
{% endif %}
<|SECTION:TASK|>
The author of the scene has given YOU - the analyst - a query and is asking you to provide additional context to the actors in the scene.
{% if is_initial %}- Understand the query: what do we want to find out?
- For a query to be valid any of the following must be true:
- A character is trying to retrieve information in the form of a question.
    - A location, event, off-scene person or object is referred to that you could gather more information about.
- The query is invalid if any of these are true:
- The answer to the query is already contained within the current scene.
- If the query is invalid you must call abort() immediately.
{% endif -%}
- Read the provided chapters and select one that holds the answer or relevant context.{% if can_dig %} You can also decide to dig chapters for more information.{% else %}
- If no answer can be provided, but you can provide additional relevant context, that is also acceptable.{% endif %}
- Select a function to call to process the request.
### Available Functions
{% if can_dig %}- `dig(chapter_number, question)` to dig into a specific chapter for more information - number must be available and listed as a chapter above. You must call dig multiple times if there are multiple promising chapters to investigate.
- Valid chapters to dig: {% for entry in entries %}{% if entry.get("layer") > -1 %}{{ loop.index }}{% if not loop.last %}, {% endif %}{% endif %}{% endfor %}
    - The question you pass to the dig query must contain enough context to accurately target the event you want to query. Don't be vague; be specific by providing any relevant context you have learned so far. If you are targeting a specific event, mention it using a detailed description that leaves no doubt.
- Do not mention chapters in your question.{% else %}- `answer(answer)` to provide an answer or context or both.
- Use the history for context, but source the answer from the Chapter(s).
- You MUST NOT let the query impact the answer. The chapters are the source of truth. The query may imply or assume incorrect things.
    - The answer MUST be factual information and MUST NOT mention chapter numbers.
- Answer the query and provide contextual and circumstantial details.
- Limit the answer to two paragraphs.
- The answer text must be explanatory summarization, NOT narration.
    - For historic context, include a note about how long ago the situation occurred and use the past tense. You must always mention how long ago your sourced information was true.
{% if character %}- Also include a note as to how aware {{ character.name }} is of the information you provided in your answer.{% endif %}
{% endif %}
- `abort()` to stop the process if there are no avenues left to explore and there is no information to satisfy the query.
### Rules
- You MUST NOT mix functions
{%- if can_dig %}
- Digging is expensive. Only dig chapters if they are highly likely to be related to the query.{% endif %}
{%- if not can_dig %}
- When using the `answer()` function always write from the perspective of the investigator.{% endif %}
- Use untyped code blocks, so ``` instead of ```python.
- You must never invent information. Dig instead.
- End with `DONE` after calling a function.
- You must not invent or guess; you can, however, decide to provide extra context if a factual answer is not possible.
{% if is_initial %}- If the answer is contained in the current scene, the query is invalid and you must abort.{% endif %}
### Response Format
Follow this format exactly:
{% if is_initial %}QUERY: <Analysis of the query: what could be the reason this query was given to you? Be very strict with your evaluation. Many queries are given in error.>
ANALYSIS:
- character trying to retrieve information: <yes or no>.
- answer contained in current scene: <yes or no>.
- location, event, off-scene person or object mentioned: <yes or no>.
- query valid based on the above: <yes or no>.
<Quick Analysis of the provided information>
{% else %}
ANALYSIS: <Quick Analysis of the provided information>
{% endif -%}
FUNCTION SELECTED: <Quickly explain which function you have selected and why.>
CALL:
```
<function_name>(<arguments>)
```
DONE
<|CLOSE_SECTION|>
<|SECTION:EXAMPLES|>
{% if can_dig %}Digging:
CALL:
```
dig(3, "What is the significance of the red door? The red door here refers to the red door in Jason's basement.")
```
DONE
Digging multiple times:
Start with the most promising chapter first, then move to the next most promising chapter.
CALL:
```
dig(3, "What is the significance of the red door? The red door here refers to the red door in Jason's basement.")
dig(5, "What is the significance of the red door? The red door here refers to the red door in Jason's basement.")
```
DONE{% endif %}
{% if not can_dig %}Answering:
CALL:
```
answer("Two weeks ago James discovered that the red door led to the secret chamber where the treasure was hidden. James learned about it from his grandfather.{% if character %} James knows this information, as he was the one to discover it.{% endif %}")
```
DONE{% endif %}
Aborting:
CALL:
```
abort()
```
DONE
{{ bot_token }}{% if is_initial %}QUERY:{% else %}ANALYSIS:{% endif %}
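On the consuming side, the `CALL:` block has to be located and each `dig(...)`, `answer(...)`, or `abort()` line dispatched. A rough sketch of what that extraction could look like (function name and error handling are illustrative, not the project's real implementation):

```python
import re

def parse_calls(response: str) -> list[tuple[str, list[str]]]:
    """Pull function calls out of the untyped ``` code block(s)
    in a model response, as (name, args) tuples."""
    calls = []
    # grab the contents of each fenced block
    for block in re.findall(r"```(.*?)```", response, re.DOTALL):
        for line in block.strip().splitlines():
            match = re.match(r"(\w+)\((.*)\)\s*$", line.strip())
            if not match:
                continue
            name, raw_args = match.group(1), match.group(2)
            if name == "dig":
                # dig(chapter_number, question) - split on the first comma only,
                # so commas inside the question survive
                number, _, question = raw_args.partition(",")
                calls.append((name, [number.strip(), question.strip().strip('"')]))
            elif name == "answer":
                calls.append((name, [raw_args.strip().strip('"')]))
            else:  # abort()
                calls.append((name, []))
    return calls
```

Because the prompt allows multiple `dig` calls in one block, the helper returns a list rather than a single call.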


@ -0,0 +1,22 @@
<|SECTION:SCENE|>
{{ events[0] }}
<|CLOSE_SECTION|>
{% for event in events[1:] %}
<|SECTION:PROGRESS {{ loop.index }}|>
{{ event }}
<|CLOSE_SECTION|>
{% endfor %}
<|SECTION:TASK|>
Examine the scene progress from the beginning and find the progress items that mark the ending of a scene. Think of this in terms of a TV show or a play, where there is a build-up, a peak, and a denouement. You must identify the denouement points.
Provide a list of denouement points in the following format:
- Progress {N}
- Progress {N}
...
<|CLOSE_SECTION|>
{{ set_prepared_response("-") }}
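Since the response is seeded with `-` and asked to list `- Progress {N}` items, the caller can recover the progress indices with a small regex. An illustrative helper (not the project's actual code):

```python
import re

def parse_denouement_points(response: str) -> list[int]:
    """Extract progress indices from lines like '- Progress 4'."""
    return [int(n) for n in re.findall(r"-\s*Progress\s+(\d+)", response)]
```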


@ -1,38 +1,21 @@
{% set summary_target = "chapter "+to_str(num_extra_context+1) %}
{% if summarization_method == "facts" -%}
{% set output_type = "factual list" -%}
{% set max_length = "" %}
{% else -%}
{% set output_type = "narrative description" -%}
{% set max_length = " Length: 1 - 2 paragraphs" %}
{% endif -%}
{% if extra_context -%}
<|SECTION:PREVIOUS CONTEXT|>
{{ extra_context }}
<|SECTION:PREVIOUS CHAPTERS|>
{% for chapter_summary in extra_context %}
## Chapter {{ loop.index }}
{{ chapter_summary }}
{% endfor %}
<|CLOSE_SECTION|>
{% endif -%}
<|SECTION:TASK|>
Question: What happens explicitly within the dialogue section alpha below? Summarize into a {{output_type}}.
Content Context: This is a specific scene from {{ scene.context }}
{% if output_type == "narrative description" %}
Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif %}
{% if summarization_method == "long" -%}
This should be a detailed summary of the dialogue, including all the juicy details.
{% elif summarization_method == "short" -%}
This should be a short and specific summary of the dialogue, including the most important details. 2 - 3 sentences.
{% endif -%}
YOU MUST ONLY SUMMARIZE THE CONTENT IN DIALOGUE SECTION ALPHA.
{% if output_type == "narrative description" %}
Expected Answer: A summarized {{output_type}} of the dialogue section alpha, that can be inserted into the ongoing story in place of the dialogue.
{% elif output_type == "factual list" %}
Expected Answer: A highly accurate numerical chronological list of the events and state changes that occur in the dialogue section alpha. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. Note all the state changes. Leave nothing out.
{% endif %}
{% if extra_instructions -%}
{{ extra_instructions }}
{% endif -%}
<|CLOSE_SECTION|>
<|SECTION:DIALOGUE SECTION ALPHA|>
<|SECTION:{{ summary_target.upper() }} (To be summarized)|>
{{ dialogue }}
<|CLOSE_SECTION|>
{% if generation_options and generation_options.writing_style %}
@ -40,5 +23,41 @@ Expected Answer: A highly accurate numerical chronological list of the events an
{{ generation_options.writing_style.instructions }}
<|CLOSE_SECTION|>
{% endif %}
<|SECTION:SUMMARIZATION OF DIALOGUE SECTION ALPHA|>
{{ bot_token }}In the dialogue section alpha,
<|SECTION:TASK|>
Summarize {{ summary_target }} into a {{output_type}}.
This is a specific chapter from {{ scene.context }}.
{% if output_type == "narrative description" %}
The tone of the summary must match the tone of the dialogue.
{% endif %}
{% if summarization_method == "long" -%}
This should be a detailed summary of the dialogue, including all the juicy details.
{% set max_length = " Length: 1 - 3 paragraphs" %}
{% elif summarization_method == "short" -%}
This should be a short and specific summary of the dialogue, including the most important details. 2 - 3 sentences.
{% set max_length = " Length: 1 paragraph" %}
{% endif -%}
YOU MUST ONLY SUMMARIZE THE CONTENT EXPLICITLY STATED WITHIN {{ summary_target.upper() }}.
YOU MUST NOT INCLUDE OR REPEAT THE PREVIOUS CONTEXT IN YOUR SUMMARY.
YOU MUST NOT QUOTE DIALOGUE.
{% if output_type == "narrative description" %}
Provide a summarized {{output_type}} of {{ summary_target }}.
{% elif output_type == "factual list" %}
Provide a highly accurate numerical chronological list of the events and state changes that occur in {{ summary_target }}. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. Note all the state changes. Leave nothing out.
{% endif %}
{% if extra_context %}Use the previous context to inform your understanding of the whole story, but only summarize what is explicitly mentioned in {{ summary_target }}.{% endif -%}
{% if extra_instructions -%}
{{ extra_instructions }}
{% endif -%}
Your response must follow this format:
ANALYSIS: <brief analysis of the crossover point from previous chapters to {{ summary_target }}. How does {{ summary_target }} start, and what should be in the summary?>
SUMMARY: <summary of {{ summary_target }} based on analysis.{{ max_length }}>
<|CLOSE_SECTION|>
<|SECTION:SUMMARY OF {{ summary_target.upper() }}|>
{{ set_prepared_response("ANALYSIS:") }}
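With the prepared `ANALYSIS:` prefix, the usable summary is whatever follows the `SUMMARY:` marker. A hedged sketch of that split (assumed helper name; the real code may be stricter about a missing marker):

```python
def parse_summary(response: str) -> str:
    """Return the text after the SUMMARY: marker of an
    ANALYSIS:/SUMMARY: formatted response, falling back to the
    whole response if the marker is absent."""
    _, _, summary = response.partition("SUMMARY:")
    return summary.strip() if summary else response.strip()
```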


@ -0,0 +1,14 @@
{% if extra_context %}{% set section_name = "chapter 2" %}{% else %}{% set section_name = "chapter 1" %}{% endif %}
{% if extra_context %}
<|SECTION:PREVIOUS CONTEXT|>
{{ extra_context }}
<|CLOSE_SECTION|>
{% endif -%}
<|SECTION:{{ section_name }}|>
{{ content }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
List up to five major story developments that happen in {{ section_name }}.
<|CLOSE_SECTION|>
{{ set_prepared_response("1.") }}


@ -1,3 +1,46 @@
Instruction: Summarize the events within the dialogue as accurately as you can.
Expected Answer: A list of short narrative descriptions
Narrator answers:
{% if extra_context %}{% set section_name = "chapter 2" %}{% else %}{% set section_name = "chapter 1" %}{% endif %}
{% include "character-context.jinja2" -%}
{% if extra_context %}
<|SECTION:HISTORY|>
{{ extra_context }}
<|CLOSE_SECTION|>
{% endif -%}
<|SECTION:{{ section_name }}|>
{{ section_name.upper() }} START
{% for chunk in text_to_chunks(dialogue, chunk_size=2500) %}
CHUNK {{ loop.index }}:
{{ chunk }}
{% endfor %}
{{ section_name.upper() }} END
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Provide a compressed, short summary for {{ section_name }}.
Do not repeat any information from the previous context.
Compress each individual chunk, keeping the start and ending points as anchors.
Ensure the persistence of all important moments, decisions and story developments.
Specifically mention characters, locations and objects by name.
Consider the other chunks and the history to inform the context of the summarizations. Each chunk must be summarized in a way that it leads into the next chunk.
YOU MUST NOT ADD COMMENTARY.
YOU MUST NOT ADD COMBINED SUMMARIZATION OF ALL CHUNKS.
You must provide your response in the following format:
CHUNK 1: <summary of the first chunk>
CHUNK 2: <summary of the second chunk>
...
<|CLOSE_SECTION|>
{% if generation_options and generation_options.writing_style %}
<|SECTION:WRITING STYLE|>
{{ generation_options.writing_style.instructions }}
<|CLOSE_SECTION|>
{% endif %}
{{ set_prepared_response("CHUNK 1:")}}
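The template relies on a `text_to_chunks(dialogue, chunk_size=2500)` helper to split the dialogue before per-chunk summarization. One plausible greedy, line-preserving implementation (a sketch; the real helper may split on tokens rather than characters):

```python
def text_to_chunks(text: str, chunk_size: int = 2500) -> list[str]:
    """Greedily pack whole lines into chunks of at most
    chunk_size characters, never splitting a line."""
    chunks, current = [], ""
    for line in text.splitlines():
        # start a new chunk when adding this line would overflow
        if current and len(current) + len(line) + 1 > chunk_size:
            chunks.append(current)
            current = line
        else:
            current = f"{current}\n{line}" if current else line
    if current:
        chunks.append(current)
    return chunks
```

Keeping lines intact matters here because each chunk's start and end are used as anchors in the summarization instructions.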


@ -0,0 +1,14 @@
<|SECTION:STORY|>
{% for event in events %}
{{ event["text"] }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Identify the major events and milestones in the provided story.
Summarize them into a concise list of events. Each item should be a single sentence and to the point.
The list must be in chronological order, with the earliest event at the top and the latest event at the bottom.
<|CLOSE_SECTION|>


@ -1,5 +1,17 @@
{% set questions = instruct_text("Ask the author 5 important questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this. You also have unlimited access to the world database and can just ask for information directly. If you don't know what something is just ask directly.", text) %}
<|SECTION:CONTEXT|>
{% block character_context %}
{% if include_character_context %}{% include "character-context.jinja2" %}{% endif %}
{% endblock %}
{% set questions = instruct_text("Ask the narrator 1 important question to gather additional context to assist with the following goal: "+goal+"
1. Focus on established facts, lore, and background information.
2. Avoid asking for information already provided in the given context.
3. Address gaps in the current narrative or explore relevant backstory.
4. If characters mention specific states, locations, items, or other characters, prioritize queries about these.
5. Phrase queries as direct requests for information from the world database.
6. For unfamiliar elements, ask straightforward questions to clarify their nature or significance.
Your response must be the question only. Do not include any additional text or explanations.", self.character_context() + "\n\n" + text) %}
{%- with memory_query=questions -%}
{% include "extra-context.jinja2" %}
{% endwith %}


@ -2,23 +2,51 @@
<|SECTION:CONTEXT|>
{% include "extra-context.jinja2" %}
<|CLOSE_SECTION|>
{% if include_character_context %}{% include "character-context.jinja2" %}{% endif %}
{% endblock -%}
<|SECTION:SCENE|>
{{ text }}
<|SECTION:TASK|>
You have access to a vector database to retrieve relevant data to gather more established context for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include queries that help gather context for this.
You are assisting with an ongoing story. You have access to a vector database containing factual information about the characters, locations, events, and lore of this narrative world. Your task is to generate up to 5 specific, targeted queries to gather additional context for the current scene or conversation.
Please compile a list of up to 10 short queries to the database that will help us gather additional context for the actors to continue the ongoing conversation.
Gather additional context to assist with the following goal: {{ goal }}
Each query must be a short trigger keyword phrase and the database will match on semantic similarity.
Before generating the queries, you will be provided with:
1. A brief summary of the story context
2. Key character names and their roles
3. The most recent dialogue or scene description
Each query must be on its own line as raw unformatted text.
Using this information, create queries that:
- Seek new information not already provided in the given context
- Explore potential gaps in the current narrative
- Investigate background details that could enrich the scene
- Look for connections between current elements and established lore
Your response should look like this and contain only the queries and nothing else:
Your queries should focus on:
- Historical information about characters or locations
- Established relationships between characters
- Known facts about objects or concepts in the story world
- Past events that may be relevant to the current scene
Avoid queries that:
- Repeat information already given in the context
- Ask about characters' current thoughts, feelings, or intentions
- Seek speculative or future events
- Request information that would not be part of established lore or backstory
Each query should be:
- A short, focused keyword phrase
- Relevant to the current story context, but not redundant
- Designed to elicit specific, factual information not yet revealed
Format your response as a list of raw, unformatted text queries, each on its own line:
- <query 1>
- <query 2>
- ...
- <query 5>
Do not include any additional text, explanations, or formatting in your response.
After receiving the story context and recent dialogue, generate your list of targeted, non-redundant, lore-focused queries.
<|CLOSE_SECTION|>
{{ set_prepared_response('-') }}


@ -0,0 +1,12 @@
<|SECTION:CHARACTERS|>
{% for character in scene.characters %}
### {{ character.name }}
{% if max_tokens > 6000 -%}
{{ character.sheet }}
{% else -%}
{{ character.filtered_sheet(['age', 'gender']) }}
{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
{% endif %}
{{ character.description }}
{% endfor %}
<|CLOSE_SECTION|>


@ -1,3 +1,4 @@
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
@ -11,10 +12,14 @@
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% set scene_history=scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context())) -%}
{% set final_line_number=len(scene_history) -%}
{% set scene_history = scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), keep_context_investigation=False) -%}
{% set last_message = scene_history[-1] %}
{% set last_message_is_reinforcement = ("internal notes" in last_message.lower() and question in last_message)%}
{% if not last_message_is_reinforcement %}{% set final_line_number=len(scene_history) %}{% else %}{% set final_line_number=len(scene_history)-1 %}{% endif %}
{% for scene_context in scene_history -%}
{% if not (loop.last and last_message_is_reinforcement) -%}
{{ loop.index }}. {{ scene_context }}
{% endif -%}
{% endfor -%}
{% if not scene.history -%}
No dialogue so far
@ -45,7 +50,7 @@ YOUR ANSWER IS CONFIDENT, MAKE CREATIVE CHOICES AS NEEDED.
{% endif %}
The tone of your answer should be consistent with the tone of the story so far.
Question: {{ question }}
Question: {{ question }} (At line {{ final_line_number }} in the scene progression)
{% if answer %}Previous Answer: {{ answer }}
{% endif -%}
<|CLOSE_SECTION|>


@ -2,7 +2,17 @@ import enum
import re
from dataclasses import dataclass, field
import isodate
__all__ = [
"SceneMessage",
"CharacterMessage",
"NarratorMessage",
"DirectorMessage",
"TimePassageMessage",
"ReinforcementMessage",
"ContextInvestigationMessage",
"Flags",
"MESSAGES",
]
_message_id = 0
@ -110,6 +120,7 @@ class SceneMessage:
class CharacterMessage(SceneMessage):
typ = "character"
source: str = "ai"
from_choice: str | None = None
def __str__(self):
return self.message
@ -125,6 +136,10 @@ class CharacterMessage(SceneMessage):
@property
def raw(self):
return self.message.split(":", 1)[1].replace('"', "").replace("*", "").strip()
@property
def without_name(self) -> str:
return self.message.split(":", 1)[1]
@property
def as_movie_script(self):
@ -138,7 +153,15 @@ class CharacterMessage(SceneMessage):
message = self.message.split(":", 1)[1].replace('"', "").strip()
return f"\n{self.character_name.upper()}\n{message}\n"
return f"\n{self.character_name.upper()}\n{message}\nEND-OF-LINE\n"
def __dict__(self):
rv = super().__dict__()
if self.from_choice:
rv["from_choice"] = self.from_choice
return rv
def as_format(self, format: str, **kwargs) -> str:
if format == "movie_script":
@ -266,14 +289,32 @@ class ReinforcementMessage(SceneMessage):
def __str__(self):
question, _ = self.source.split(":", 1)
return (
f"# Internal notes for {self.character_name} - {question}: {self.message}"
f"# Internal note for {self.character_name} - {question}\n{self.message}"
)
def as_format(self, format: str, **kwargs) -> str:
if format == "movie_script":
message = str(self)[2:]
return f"\n({message})\n"
return self.message
return f"\n{self.message}\n"
@dataclass
class ContextInvestigationMessage(SceneMessage):
typ = "context_investigation"
source: str = "ai"
def __str__(self):
return (
f"# Internal note - {self.message}"
)
def as_format(self, format: str, **kwargs) -> str:
if format == "movie_script":
message = str(self)[2:]
return f"\n({message})\n"
return f"\n{self.message}\n"
MESSAGES = {
@ -283,4 +324,5 @@ MESSAGES = {
"director": DirectorMessage,
"time": TimePassageMessage,
"reinforcement": ReinforcementMessage,
"context_investigation": ContextInvestigationMessage,
}
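The `MESSAGES` map keys each message class by its `typ`, which is what makes type-driven rehydration of saved scenes possible once `context_investigation` is registered. A self-contained sketch of the pattern with stand-in classes (not the project's real dataclasses):

```python
from dataclasses import dataclass

@dataclass
class SceneMessage:
    message: str
    typ = "scene"  # class attribute, not a dataclass field

@dataclass
class ContextInvestigationMessage(SceneMessage):
    typ = "context_investigation"
    source: str = "ai"

    def __str__(self):
        return f"# Internal note - {self.message}"

MESSAGES = {
    "scene": SceneMessage,
    "context_investigation": ContextInvestigationMessage,
}

def message_from_dict(data: dict) -> SceneMessage:
    """Rehydrate a saved message by looking up its 'typ' key."""
    data = dict(data)  # don't mutate the caller's dict
    cls = MESSAGES[data.pop("typ")]
    return cls(**data)
```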


@ -24,167 +24,194 @@ async def websocket_endpoint(websocket, path):
log.info("frontend connected")
try:
# Create a task to send messages from the queue
async def send_messages():
while True:
# check if there are messages in the queue
if message_queue.empty():
await asyncio.sleep(0.01)
continue
message = await message_queue.get()
await websocket.send(json.dumps(message))
send_messages_task = asyncio.create_task(send_messages())
# Create a task to send regular client status updates
async def send_status():
while True:
await instance.emit_clients_status()
await instance.agent_ready_checks()
await asyncio.sleep(3)
send_status_task = asyncio.create_task(send_status())
        # create a task that will retrieve client bootstrap information
async def send_client_bootstraps():
while True:
try:
await instance.sync_client_bootstraps()
except Exception as e:
log.error(
"send_client_bootstraps",
error=e,
traceback=traceback.format_exc(),
)
await asyncio.sleep(15)
send_client_bootstraps_task = asyncio.create_task(send_client_bootstraps())
while True:
data = await websocket.recv()
data = json.loads(data)
action_type = data.get("type")
scene_data = None
log.debug("frontend message", action_type=action_type)
with ActiveScene(handler.scene):
if action_type == "load_scene":
if scene_task:
handler.scene.continue_scene = False
scene_task.cancel()
file_path = data.get("file_path")
scene_data = data.get("scene_data")
filename = data.get("filename")
reset = data.get("reset", False)
await message_queue.put(
{
"type": "system",
"message": "Loading scene file ...",
"id": "scene.loading",
"status": "loading",
}
)
async def scene_loading_done():
await message_queue.put(
{
"type": "system",
"message": "Scene file loaded ...",
"id": "scene.loaded",
"status": "success",
"data": {
"hidden": True,
"environment": handler.scene.environment,
},
}
)
if scene_data and filename:
file_path = handler.handle_character_card_upload(
scene_data, filename
)
log.info("load_scene", file_path=file_path, reset=reset)
# Create a task to load the scene in the background
scene_task = asyncio.create_task(
handler.load_scene(
file_path, reset=reset, callback=scene_loading_done
)
)
elif action_type == "interact":
log.debug("interact", data=data)
text = data.get("text")
with Interaction(act_as=data.get("act_as")):
if handler.waiting_for_input:
handler.send_input(text)
elif action_type == "request_scenes_list":
query = data.get("query", "")
handler.request_scenes_list(query)
elif action_type == "configure_clients":
await handler.configure_clients(data.get("clients"))
elif action_type == "configure_agents":
await handler.configure_agents(data.get("agents"))
elif action_type == "request_client_status":
await handler.request_client_status()
elif action_type == "delete_message":
handler.delete_message(data.get("id"))
elif action_type == "scene_config":
log.info("scene_config", data=data)
handler.apply_scene_config(data.get("scene_config"))
elif action_type == "request_scene_assets":
log.info("request_scene_assets", data=data)
handler.request_scene_assets(data.get("asset_ids"))
elif action_type == "upload_scene_asset":
log.info("upload_scene_asset")
handler.add_scene_asset(data=data)
elif action_type == "request_scene_history":
log.info("request_scene_history")
handler.request_scene_history()
elif action_type == "request_assets":
log.info("request_assets")
handler.request_assets(data.get("assets"))
elif action_type == "edit_message":
log.info("edit_message", data=data)
handler.edit_message(data.get("id"), data.get("text"))
elif action_type == "interrupt":
log.info("interrupt")
handler.scene.interrupt()
elif action_type == "request_app_config":
log.info("request_app_config")
await message_queue.put(
{
"type": "app_config",
"data": load_config(),
"version": VERSION,
}
)
else:
log.info("Routing to sub-handler", action_type=action_type)
await handler.route(data)
# handle disconnects
except (
websockets.exceptions.ConnectionClosed,
starlette.websockets.WebSocketDisconnect,
RuntimeError,
):
log.warning("frontend disconnected")
async def frontend_disconnect(exc):
nonlocal scene_task
log.warning(f"frontend disconnected: {exc}")
main_task.cancel()
send_messages_task.cancel()
send_status_task.cancel()
send_client_bootstraps_task.cancel()
test_connection_task.cancel()
handler.disconnect()
if handler.scene:
handler.scene.active = False
handler.scene.continue_scene = False
if scene_task:
scene_task.cancel()
# Create a task to send messages from the queue
async def send_messages():
while True:
# check if there are messages in the queue
if message_queue.empty():
await asyncio.sleep(0.01)
continue
message = await message_queue.get()
await websocket.send(json.dumps(message))
# Create a task to send regular client status updates
async def send_status():
while True:
await instance.emit_clients_status()
await instance.agent_ready_checks()
await asyncio.sleep(3)
# create a task that will retrieve client bootstrap information
async def send_client_bootstraps():
while True:
try:
await instance.sync_client_bootstraps()
except Exception as e:
log.error(
"send_client_bootstraps",
error=e,
traceback=traceback.format_exc(),
)
await asyncio.sleep(15)
# task to test connection
async def test_connection():
while True:
try:
await websocket.send(json.dumps({"type": "ping"}))
except Exception as e:
await frontend_disconnect(e)
await asyncio.sleep(1)
# main loop task
async def handle_messages():
nonlocal scene_task
try:
while True:
data = await websocket.recv()
data = json.loads(data)
action_type = data.get("type")
scene_data = None
log.debug("frontend message", action_type=action_type)
with ActiveScene(handler.scene):
if action_type == "load_scene":
if scene_task:
log.info("Unloading current scene")
handler.scene.continue_scene = False
scene_task.cancel()
file_path = data.get("file_path")
scene_data = data.get("scene_data")
filename = data.get("filename")
reset = data.get("reset", False)
await message_queue.put(
{
"type": "system",
"message": "Loading scene file ...",
"id": "scene.loading",
"status": "loading",
}
)
async def scene_loading_done():
await message_queue.put(
{
"type": "system",
"message": "Scene file loaded ...",
"id": "scene.loaded",
"status": "success",
"data": {
"hidden": True,
"environment": handler.scene.environment,
},
}
)
if scene_data and filename:
file_path = handler.handle_character_card_upload(
scene_data, filename
)
log.info("load_scene", file_path=file_path, reset=reset)
# Create a task to load the scene in the background
scene_task = asyncio.create_task(
handler.load_scene(
file_path, reset=reset, callback=scene_loading_done
)
)
elif action_type == "interact":
log.debug("interact", data=data)
text = data.get("text")
with Interaction(act_as=data.get("act_as")):
if handler.waiting_for_input:
handler.send_input(text)
elif action_type == "request_scenes_list":
query = data.get("query", "")
handler.request_scenes_list(query)
elif action_type == "configure_clients":
await handler.configure_clients(data.get("clients"))
elif action_type == "configure_agents":
await handler.configure_agents(data.get("agents"))
elif action_type == "request_client_status":
await handler.request_client_status()
elif action_type == "delete_message":
handler.delete_message(data.get("id"))
elif action_type == "scene_config":
log.info("scene_config", data=data)
handler.apply_scene_config(data.get("scene_config"))
elif action_type == "request_scene_assets":
log.info("request_scene_assets", data=data)
handler.request_scene_assets(data.get("asset_ids"))
elif action_type == "upload_scene_asset":
log.info("upload_scene_asset")
handler.add_scene_asset(data=data)
elif action_type == "request_scene_history":
log.info("request_scene_history")
handler.request_scene_history()
elif action_type == "request_assets":
log.info("request_assets")
handler.request_assets(data.get("assets"))
elif action_type == "edit_message":
log.info("edit_message", data=data)
handler.edit_message(data.get("id"), data.get("text"))
elif action_type == "interrupt":
log.info("interrupt")
handler.scene.interrupt()
elif action_type == "request_app_config":
log.info("request_app_config")
await message_queue.put(
{
"type": "app_config",
"data": load_config(),
"version": VERSION,
}
)
else:
log.info("Routing to sub-handler", action_type=action_type)
await handler.route(data)
# handle disconnects
except (
websockets.exceptions.ConnectionClosed,
starlette.websockets.WebSocketDisconnect,
RuntimeError,
) as exc:
await frontend_disconnect(exc)
main_task = asyncio.create_task(handle_messages())
send_messages_task = asyncio.create_task(send_messages())
send_status_task = asyncio.create_task(send_status())
send_client_bootstraps_task = asyncio.create_task(send_client_bootstraps())
test_connection_task = asyncio.create_task(test_connection())
await asyncio.gather(main_task, send_messages_task, send_status_task, send_client_bootstraps_task, test_connection_task)
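The disconnect handling above relies on `asyncio.gather` propagating the first exception so the surviving sibling tasks can be cancelled explicitly. A minimal, self-contained sketch of that pattern (task names here are illustrative, not from this file):

```python
import asyncio

async def main() -> str:
    # long-running worker, standing in for send_messages / send_status
    async def worker():
        await asyncio.sleep(10)

    # task that detects the dropped connection, as test_connection does
    async def watchdog():
        await asyncio.sleep(0.01)
        raise ConnectionError("frontend disconnected")

    worker_task = asyncio.create_task(worker())
    watchdog_task = asyncio.create_task(watchdog())
    try:
        # gather raises as soon as watchdog fails; worker is still pending
        await asyncio.gather(worker_task, watchdog_task)
    except ConnectionError:
        # mirror frontend_disconnect: cancel the remaining tasks
        worker_task.cancel()
        try:
            await worker_task
        except asyncio.CancelledError:
            pass
    return "cancelled" if worker_task.cancelled() else "running"

print(asyncio.run(main()))
```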

View file

@ -8,6 +8,10 @@ from talemate.instance import get_agent
log = structlog.get_logger("talemate.server.assistant")
class ForkScenePayload(pydantic.BaseModel):
message_id: int
save_name: str | None = None
class AssistantPlugin:
router = "assistant"
@ -86,3 +90,24 @@ class AssistantPlugin:
except Exception as e:
log.error("Error running autocomplete", error=str(e))
emit("autocomplete_suggestion", "")
async def handle_fork_new_scene(self, data: dict):
"""
Forks a new scene from a specific message
in the current scene.
All content after the message will be removed and the
context database will be re-imported, ensuring a clean state.
All state reinforcements will be reset to their most recent
state before the message.
"""
payload = ForkScenePayload(**data)
creator = get_agent("creator")
await creator.fork_scene(payload.message_id, payload.save_name)
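The `fork_new_scene` handler validates its input with the `ForkScenePayload` model above. A rough sketch of the expected payload shape, using a stdlib dataclass as a stand-in for the pydantic model (the simplified validation and the sample values are assumptions, not taken from the diff):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the pydantic ForkScenePayload shown in the diff
@dataclass
class ForkScenePayload:
    message_id: int
    save_name: Optional[str] = None

# the handler receives a dict like this and unpacks it into the model;
# save_name is optional and defaults to None
data = {"message_id": 42, "save_name": "fork-from-42"}
payload = ForkScenePayload(**data)
print(payload.message_id, payload.save_name)
```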

View file

@ -0,0 +1,45 @@
import pydantic
import structlog
import talemate.util as util
from talemate.emit import emit
from talemate.context import interaction
from talemate.instance import get_agent
from talemate.scene_message import CharacterMessage
log = structlog.get_logger("talemate.server.director")
class SelectChoicePayload(pydantic.BaseModel):
choice: str
class DirectorPlugin:
router = "director"
@property
def scene(self):
return self.websocket_handler.scene
def __init__(self, websocket_handler):
self.websocket_handler = websocket_handler
async def handle(self, data: dict):
log.info("director action", action=data.get("action"))
fn = getattr(self, f"handle_{data.get('action')}", None)
if fn is None:
return
await fn(data)
async def handle_generate_choices(self, data: dict):
director = get_agent("director")
await director.generate_choices()
async def handle_select_choice(self, data: dict):
payload = SelectChoicePayload(**data)
character = self.scene.get_player_character()
actor = character.actor
await actor.generate_from_choice(payload.choice)

View file

@ -31,6 +31,7 @@ async def install_punkt():
log.info("Downloading NLTK punkt tokenizer")
await asyncio.get_event_loop().run_in_executor(None, nltk.download, "punkt")
await asyncio.get_event_loop().run_in_executor(None, nltk.download, "punkt_tab")
log.info("Download complete")
async def log_stream(stream, log_func):
@ -65,7 +66,6 @@ async def run_frontend(host: str = "localhost", port: int = 8080):
preexec_fn=os.setsid if sys.platform != "win32" else None
)
asyncio.create_task(install_punkt())
log.info("talemate frontend started", host=host, port=port, server="uvicorn", process=process.pid)
@ -115,6 +115,9 @@ def run_server(args):
loop.run_until_complete(start_server)
# start task to install punkt
loop.create_task(install_punkt())
if not args.backend_only:
frontend_task = loop.create_task(run_frontend(args.frontend_host, args.frontend_port))
else:

View file

@ -21,6 +21,7 @@ from talemate.server import (
character_importer,
config,
devtools,
director,
quick_settings,
world_state_manager,
)
@ -72,6 +73,7 @@ class WebsocketHandler(Receiver):
self
),
devtools.DevToolsPlugin.router: devtools.DevToolsPlugin(self),
director.DirectorPlugin.router: director.DirectorPlugin(self),
}
self.set_agent_routers()
@ -474,6 +476,18 @@ class WebsocketHandler(Receiver):
),
}
)
def handle_context_investigation(self, emission: Emission):
self.queue_put(
{
"type": "context_investigation",
"message": emission.message,
"id": emission.id,
"flags": (
int(emission.message_object.flags) if emission.message_object else 0
),
}
)
def handle_prompt_sent(self, emission: Emission):
self.queue_put(

View file

@ -1,4 +1,4 @@
import base64
import asyncio
import uuid
from typing import Any, Union
@ -161,6 +161,7 @@ class SceneSettingsPayload(pydantic.BaseModel):
class SaveScenePayload(pydantic.BaseModel):
save_as: str | None = None
project_name: str | None = None
class RegenerateHistoryPayload(pydantic.BaseModel):
@ -869,7 +870,7 @@ class WorldStateManagerPlugin:
}
)
await self.scene.remove_actor(character.actor)
await self.scene.remove_character(character)
await self.signal_operation_done()
await self.handle_get_character_list({})
self.scene.emit_status()
@ -1000,44 +1001,71 @@ class WorldStateManagerPlugin:
async def handle_save_scene(self, data):
payload = SaveScenePayload(**data)
log.debug("Save scene", copy=payload.save_as)
log.debug("Save scene", copy=payload.save_as, project_name=payload.project_name)
if not self.scene.filename:
# scene has never been saved before
# specify project name (directory name)
self.scene.name = payload.project_name
await self.scene.save(auto=False, force=True, copy_name=payload.save_as)
self.scene.emit_status()
async def handle_request_scene_history(self, data):
history = history_with_relative_time(self.scene.archived_history, self.scene.ts)
layered_history = []
summarizer = get_agent("summarizer")
if summarizer.layered_history_enabled:
for layer in self.scene.layered_history:
layered_history.append(
history_with_relative_time(layer, self.scene.ts)
)
self.websocket_handler.queue_put(
{"type": "world_state_manager", "action": "scene_history", "data": history}
{"type": "world_state_manager", "action": "scene_history", "data": {
"history": history,
"layered_history": layered_history,
}}
)
async def handle_regenerate_history(self, data):
payload = RegenerateHistoryPayload(**data)
async def callback():
self.scene.emit_status()
await self.handle_request_scene_history(data)
#self.websocket_handler.queue_put(
# {
# "type": "world_state_manager",
# "action": "history_entry_added",
# "data": history_with_relative_time(
# self.scene.archived_history, self.scene.ts
# ),
# }
#)
def callback():
task = asyncio.create_task(rebuild_history(
self.scene, callback=callback, generation_options=payload.generation_options
))
async def done():
self.websocket_handler.queue_put(
{
"type": "world_state_manager",
"action": "history_entry_added",
"data": history_with_relative_time(
self.scene.archived_history, self.scene.ts
),
"action": "history_regenerated",
"data": payload.model_dump(),
}
)
await rebuild_history(
self.scene, callback=callback, generation_options=payload.generation_options
)
await self.signal_operation_done()
await self.handle_request_scene_history(data)
# when task is done, queue a message to the client
task.add_done_callback(lambda _: asyncio.create_task(done()))
self.websocket_handler.queue_put(
{
"type": "world_state_manager",
"action": "history_regenerated",
"data": payload.model_dump(),
}
)
await self.signal_operation_done()
await self.handle_request_scene_history(data)

View file

@ -45,6 +45,8 @@ from talemate.scene_message import (
ReinforcementMessage,
SceneMessage,
TimePassageMessage,
ContextInvestigationMessage,
MESSAGES as MESSAGE_TYPES,
)
from talemate.util import colored_text, count_tokens, extract_metadata, wrap_text
from talemate.util.prompt import condensed
@ -67,6 +69,7 @@ async_signals.register("game_loop_start")
async_signals.register("game_loop")
async_signals.register("game_loop_actor_iter")
async_signals.register("game_loop_new_message")
async_signals.register("player_turn_start")
class ActedAsCharacter(Exception):
@ -570,7 +573,7 @@ class Actor:
def history(self):
return self.scene.history
async def talk(self):
async def talk(self, instruction: str = None):
"""
Set the message to be sent to the AI
"""
@ -588,7 +591,7 @@ class Actor:
)
with ClientContext(conversation=conversation_context):
messages = await self.agent.converse(self)
messages = await self.agent.converse(self, instruction=instruction)
return messages
@ -619,7 +622,6 @@ class Player(Actor):
if not message:
# Display scene history length before the player character name
history_length = self.scene.history_length()
name = colored_text(self.character.name + ": ", self.character.color)
input = await wait_for_input(
f"[{history_length}] {name}",
@ -633,7 +635,27 @@ class Player(Actor):
if not message:
return
if not commands.Manager.is_command(message):
if message.startswith("@"):
character_message = await self.generate_from_choice(
message[1:], process=False, character=None if not act_as else self.scene.get_character(act_as)
)
if not character_message:
return
self.message = character_message.without_name
self.scene.push_history(character_message)
if act_as:
character = self.scene.get_character(act_as)
self.scene.process_npc_dialogue(character.actor, [character_message])
raise ActedAsCharacter()
else:
emit("character", character_message, character=self.character)
message = self.message
elif not commands.Manager.is_command(message):
if '"' not in message and "*" not in message:
message = f'"{message}"'
@ -659,16 +681,88 @@ class Player(Actor):
else:
# acting as the main player character
self.message = message
extra = {}
if input.get("from_choice"):
extra["from_choice"] = input["from_choice"]
self.scene.push_history(
CharacterMessage(
f"{self.character.name}: {message}", source="player"
f"{self.character.name}: {message}", source="player", **extra
)
)
emit("character", self.history[-1], character=self.character)
return message
async def generate_from_choice(self, choice:str, process:bool=True, character:Character=None) -> CharacterMessage:
character = self.character if not character else character
if not character:
raise TalemateError("Character not found during generate_from_choice")
actor = character.actor
conversation = self.scene.get_helper("conversation").agent
director = self.scene.get_helper("director").agent
narrator = self.scene.get_helper("narrator").agent
# sensory checks
sensory_checks = ["look", "listen", "smell", "taste", "touch", "feel"]
sensory_action = {
"look": "see",
"inspect": "see",
"examine": "see",
"observe": "see",
"watch": "see",
"view": "see",
"see": "see",
"listen": "hear",
"smell": "smell",
"taste": "taste",
"touch": "feel",
"feel": "feel",
}
if choice.lower().startswith(tuple(sensory_checks)):
# extract the sensory type
sensory_type = choice.split(" ", 1)[0].lower()
sensory_suffix = sensory_action.get(sensory_type, "experience")
log.debug("generate_from_choice", choice=choice, sensory_checks=True)
# sensory checks should trigger a narrator query instead of conversation
await narrator.action_to_narration(
"narrate_query",
emit_message=True,
query=f"{character.name} wants to \"{choice}\" - what does {character.name} {sensory_suffix} (your answer must be descriptive and detailed)?",
)
return
messages = await conversation.converse(actor, only_generate=True, instruction=choice)
message = messages[0]
message = util.ensure_dialog_format(message.strip(), character.name)
character_message = CharacterMessage(
message, source="player" if isinstance(actor, Player) else "ai", from_choice=choice
)
if not process:
return character_message
interaction_state = interaction.get()
if director.generate_choices_never_auto_progress:
self.scene.push_history(character_message)
emit("character", character_message, character=character)
else:
interaction_state.from_choice = choice
interaction_state.input = character_message.without_name
return character_message
class Scene(Emitter):
"""
@ -693,6 +787,7 @@ class Scene(Emitter):
self.history = []
self.archived_history = []
self.inactive_characters = {}
self.layered_history = []
self.assets = SceneAssets(scene=self)
self.description = ""
self.intro = ""
@ -760,6 +855,7 @@ class Scene(Emitter):
"game_loop_actor_iter": async_signals.get("game_loop_actor_iter"),
"game_loop_new_message": async_signals.get("game_loop_new_message"),
"scene_init": async_signals.get("scene_init"),
"player_turn_start": async_signals.get("player_turn_start"),
}
self.setup_emitter(scene=self)
@ -1039,9 +1135,27 @@ class Scene(Emitter):
if isinstance(self.history[idx], CharacterMessage):
if self.history[idx].source == "player":
return self.history[idx]
def last_message_of_type(self, typ: str | list[str], source: str = None):
"""
Returns the last message of the given type and source
"""
if not isinstance(typ, list):
typ = [typ]
for idx in range(len(self.history) - 1, -1, -1):
if self.history[idx].typ in typ and (
self.history[idx].source == source or not source
):
return self.history[idx]
def collect_messages(
self, typ: str = None, source: str = None, max_iterations: int = 100
self,
typ: str = None,
source: str = None,
max_iterations: int = 100,
max_messages: int | None = None,
):
"""
Finds all messages in the history that match the given typ and source
@ -1049,11 +1163,16 @@ class Scene(Emitter):
messages = []
iterations = 0
collected = 0
for idx in range(len(self.history) - 1, -1, -1):
if (not typ or self.history[idx].typ == typ) and (
not source or self.history[idx].source == source
):
messages.append(self.history[idx])
collected += 1
if max_messages is not None and collected >= max_messages:
break
iterations += 1
if iterations >= max_iterations:
@ -1061,13 +1180,31 @@ class Scene(Emitter):
return messages
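The `max_messages` addition to `collect_messages` caps how many matches the reverse scan keeps. A toy sketch of that scan, with `(typ, source)` tuples standing in for `SceneMessage` objects:

```python
# Minimal sketch of the reverse history scan with a max_messages cap;
# newest matches come first, mirroring the iteration order in the diff.
def collect(history, typ=None, source=None, max_messages=None):
    out = []
    for msg in reversed(history):
        m_typ, m_source = msg
        if (not typ or m_typ == typ) and (not source or m_source == source):
            out.append(msg)
            if max_messages is not None and len(out) >= max_messages:
                break
    return out

history = [
    ("character", "player"),
    ("narrator", "ai"),
    ("character", "ai"),
    ("character", "player"),
]
print(collect(history, typ="character", max_messages=2))
```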
def snapshot(self, lines: int = 3, ignore: list = None, start: int = None) -> str:
def snapshot(
self,
lines: int = 3,
ignore: list[str | SceneMessage] = None,
start: int = None,
as_format: str = "movie_script",
) -> str:
"""
Returns a snapshot of the scene history
"""
if not ignore:
ignore = [ReinforcementMessage, DirectorMessage]
ignore = [ReinforcementMessage, DirectorMessage, ContextInvestigationMessage]
else:
# ignore may also be a list of message type strings (e.g. 'director')
# convert to class types
_ignore = []
for item in ignore:
if isinstance(item, str):
_ignore.append(MESSAGE_TYPES.get(item))
elif isinstance(item, SceneMessage):
_ignore.append(item)
else:
raise ValueError("ignore must be a list of strings or SceneMessage types")
ignore = _ignore
collected = []
@ -1082,7 +1219,7 @@ class Scene(Emitter):
if len(collected) >= lines:
break
return "\n".join([str(message) for message in collected])
return "\n".join([message.as_format(as_format) for message in collected])
def push_archive(self, entry: data_objects.ArchiveEntry):
"""
@ -1158,6 +1295,23 @@ class Scene(Emitter):
if memory_helper:
await actor.character.commit_to_memory(memory_helper.agent)
async def remove_character(self, character: Character):
"""
Remove a character from the scene
Calls remove_actor if the character is active,
otherwise removes it from inactive_characters.
"""
for actor in self.actors:
if actor.character == character:
await self.remove_actor(actor)
if character.name in self.inactive_characters:
del self.inactive_characters[character.name]
async def remove_actor(self, actor: Actor):
"""
Remove an actor from the scene
@ -1332,51 +1486,144 @@ class Scene(Emitter):
return summary
def context_history(
self, budget: int = 2048, keep_director: Union[bool, str] = False, **kwargs
self, budget: int = 8192, **kwargs
):
parts_context = []
parts_dialogue = []
budget_context = int(0.5 * budget)
budget_dialogue = int(0.5 * budget)
keep_director = kwargs.get("keep_director", False)
keep_context_investigation = kwargs.get("keep_context_investigation", True)
conversation_format = self.conversation_format
actor_direction_mode = self.get_helper("director").agent.actor_direction_mode
history_offset = kwargs.get("history_offset", 0)
message_id = kwargs.get("message_id")
layered_history_enabled = self.get_helper("summarizer").agent.layered_history_enabled
include_reinfocements = kwargs.get("include_reinfocements", True)
assured_dialogue_num = kwargs.get("assured_dialogue_num", 5)
# if message id is provided, find the message in the history
if message_id:
history_len = len(self.history)
if history_offset:
log.warning(
"context_history",
message="history_offset is ignored when message_id is provided",
)
message_index = self.message_index(message_id)
history_start = message_index - 1
# CONTEXT
# collect context, ignore where end > len(history) - count
if not self.layered_history or not layered_history_enabled or not self.layered_history[0]:
# no layered history available
for i in range(len(self.archived_history) - 1, -1, -1):
archive_history_entry = self.archived_history[i]
end = archive_history_entry.get("end")
if end is None:
continue
try:
time_message = util.iso8601_diff_to_human(
archive_history_entry["ts"], self.ts
)
text = f"{time_message}: {archive_history_entry['text']}"
except Exception as e:
log.error("context_history", error=e, traceback=traceback.format_exc())
text = archive_history_entry["text"]
if count_tokens(parts_context) + count_tokens(text) > budget_context:
break
parts_context.insert(0, condensed(text))
else:
history_start = len(self.history) - (1 + history_offset)
# layered history available
# start with the last layer and work backwards
next_layer_start = None
for i in range(len(self.layered_history) - 1, -1, -1):
log.debug("context_history - layered history", i=i, next_layer_start=next_layer_start)
if not self.layered_history[i]:
continue
for layered_history_entry in self.layered_history[i][next_layer_start if next_layer_start is not None else 0:]:
time_message_start = util.iso8601_diff_to_human(
layered_history_entry["ts_start"], self.ts
)
time_message_end = util.iso8601_diff_to_human(
layered_history_entry["ts_end"], self.ts
)
if time_message_start == time_message_end:
time_message = time_message_start
else:
time_message = f"Start:{time_message_start}, End:{time_message_end}"
text = f"{time_message} {layered_history_entry['text']}"
parts_context.append(text)
next_layer_start = layered_history_entry["end"] + 1
# collect archived history entries that have not yet been
# summarized to the layered history
base_layer_start = self.layered_history[0][-1]["end"] + 1 if self.layered_history[0] else None
if base_layer_start is not None:
for archive_history_entry in self.archived_history[base_layer_start:]:
time_message = util.iso8601_diff_to_human(
archive_history_entry["ts"], self.ts
)
text = f"{time_message}: {archive_history_entry['text']}"
parts_context.append(condensed(text))
# collect dialogue
count = 0
for i in range(history_start, -1, -1):
count += 1
# warn if parts_context token count exceeds budget_context
if count_tokens(parts_context) > budget_context:
log.warning(
"context_history",
message="context exceeds budget",
context_tokens=count_tokens(parts_context),
budget=budget_context,
)
# chop off the top until it fits
while count_tokens(parts_context) > budget_context:
parts_context.pop(0)
# DIALOGUE
try:
summarized_to = self.archived_history[-1]["end"] if self.archived_history else 0
except KeyError:
# only static archived history entries exist (pre-entered history
# that doesn't have start and end timestamps)
summarized_to = 0
# if summarized_to is somehow greater than the history length, we have
# no way to determine where they sync up, so include as much of the
# dialogue as possible
if summarized_to and summarized_to >= history_len:
log.warning("context_history", message="summarized_to is greater than history length - may want to regenerate history")
summarized_to = 0
log.debug("context_history", summarized_to=summarized_to, history_len=history_len)
dialogue_messages_collected = 0
#for message in self.history[summarized_to if summarized_to is not None else 0:]:
for i in range(len(self.history) - 1, -1, -1):
message = self.history[i]
if i < summarized_to and dialogue_messages_collected >= assured_dialogue_num:
break
if message.hidden:
continue
if isinstance(message, ReinforcementMessage) and not include_reinfocements:
continue
if isinstance(message, DirectorMessage):
elif isinstance(message, DirectorMessage):
if not keep_director:
continue
@ -1387,45 +1634,30 @@ class Scene(Emitter):
elif isinstance(keep_director, str) and message.source != keep_director:
continue
elif isinstance(message, ContextInvestigationMessage) and not keep_context_investigation:
continue
if count_tokens(parts_dialogue) + count_tokens(message) > budget_dialogue:
break
parts_dialogue.insert(
0, message.as_format(conversation_format, mode=actor_direction_mode)
0,
message.as_format(conversation_format, mode=actor_direction_mode)
)
# collect context, ignore where end > len(history) - count
for i in range(len(self.archived_history) - 1, -1, -1):
archive_history_entry = self.archived_history[i]
end = archive_history_entry.get("end")
start = archive_history_entry.get("start")
if end is None:
continue
if start > len(self.history) - count:
continue
try:
time_message = util.iso8601_diff_to_human(
archive_history_entry["ts"], self.ts
)
text = f"{time_message}: {archive_history_entry['text']}"
except Exception as e:
log.error("context_history", error=e, traceback=traceback.format_exc())
text = archive_history_entry["text"]
if count_tokens(parts_context) + count_tokens(text) > budget_context:
break
parts_context.insert(0, condensed(text))
if isinstance(message, CharacterMessage):
dialogue_messages_collected += 1
if count_tokens(parts_context + parts_dialogue) < 1024:
intro = self.get_intro()
if intro:
parts_context.insert(0, intro)
return list(map(str, parts_context)) + list(map(str, parts_dialogue))
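`context_history` now splits the budget evenly between summarized context and recent dialogue, and trims the oldest context entries from the front when the half-budget is exceeded. A toy sketch of that trimming, with token counting reduced to whitespace word counts:

```python
# Simplified stand-in for talemate's count_tokens (word count, not real tokens)
def count_tokens(parts):
    if isinstance(parts, str):
        parts = [parts]
    return sum(len(p.split()) for p in parts)

def trim_to_budget(parts, budget):
    # chop off the oldest entries (front of the list) until it fits,
    # as the "chop off the top until it fits" loop above does
    parts = list(parts)
    while parts and count_tokens(parts) > budget:
        parts.pop(0)
    return parts

parts = ["one two three", "four five", "six seven eight nine"]
print(trim_to_budget(parts, budget=6))
```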
@ -1450,13 +1682,14 @@ class Scene(Emitter):
popped_reinforcement_messages = []
while isinstance(message, ReinforcementMessage):
while isinstance(message, (ReinforcementMessage, ContextInvestigationMessage)):
popped_reinforcement_messages.append(self.history.pop())
message = self.history[idx]
log.debug(f"Rerunning message: {message} [{message.id}]")
if message.source == "player":
if message.source == "player" and not message.from_choice:
log.warning("Cannot rerun player's message", message=message)
return
current_rerun_context = rerun_context.get()
@ -1562,6 +1795,13 @@ class Scene(Emitter):
character = self.get_character(character_name)
if character.is_player:
if message.from_choice:
log.info(f"Rerunning player's generated message: {message} [{message.id}]")
emit("remove_message", "", id=message.id)
await character.actor.generate_from_choice(message.from_choice)
return
emit("system", "Cannot rerun player's message")
return
@ -1569,8 +1809,8 @@ class Scene(Emitter):
# Call talk() for the most recent AI Actor
actor = character.actor
new_messages = await actor.talk()
new_messages = await actor.talk(instruction=message.from_choice)
# Print the new messages
for item in new_messages:
@ -1658,7 +1898,7 @@ class Scene(Emitter):
"scene_status",
scene=self.name,
scene_time=self.ts,
human_ts=util.iso8601_duration_to_human(self.ts, suffix=""),
human_ts=util.iso8601_duration_to_human(self.ts, suffix="") if self.ts else None,
saved=self.saved,
)
@ -1723,6 +1963,84 @@ class Scene(Emitter):
# TODO: need to adjust archived_history ts as well
# but removal also probably means the history needs to be regenerated
# anyway.
def fix_time(self):
"""
New implementation of sync_time that will fix time across the board
using the base history as the sole source of truth.
This means first identifying the time jumps in the base history by
looking for TimePassageMessages and then applying those time jumps
to the archived history and the layered history based on their start and end
indexes.
"""
try:
ts = self.ts
self._fix_time()
except Exception as e:
log.exception("fix_time", exc=e)
self.ts = ts
def _fix_time(self):
starting_time = "PT0S"
for archived_entry in self.archived_history:
if "ts" in archived_entry and "end" not in archived_entry:
starting_time = archived_entry["ts"]
elif "end" in archived_entry:
break
# store time jumps by index
time_jumps = []
for idx, message in enumerate(self.history):
if isinstance(message, TimePassageMessage):
time_jumps.append((idx, message.ts))
# now make the timejumps cumulative, meaning that each time jump
# will be the sum of all time jumps up to that point
cumulative_time_jumps = []
ts = starting_time
for idx, ts_jump in time_jumps:
ts = util.iso8601_add(ts, ts_jump)
cumulative_time_jumps.append((idx, ts))
try:
ending_time = cumulative_time_jumps[-1][1]
except IndexError:
# no time jumps found
ending_time = starting_time
self.ts = ending_time
return
# apply time jumps to the archived history
ts = starting_time
for _, entry in enumerate(self.archived_history):
if "end" not in entry:
continue
# we need to find best_ts by comparing entry["end"]
# index to time_jumps (find the closest time jump that is
# smaller than entry["end"])
best_ts = None
for jump_idx, jump_ts in cumulative_time_jumps:
if jump_idx < entry["end"]:
best_ts = jump_ts
else:
break
if best_ts:
entry["ts"] = best_ts
ts = entry["ts"]
else:
entry["ts"] = ts
# finally set scene time to last entry in time_jumps
log.debug("fix_time", ending_time=ending_time)
self.ts = ending_time
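`_fix_time` first turns the per-message time jumps into cumulative offsets, then picks the closest jump below each archive entry's `end` index. A sketch of both passes, with ISO-8601 durations reduced to `timedelta` values for brevity (the diff operates on ISO strings via `util.iso8601_add`):

```python
import datetime

# Pass 1: make the (index, jump) pairs cumulative, so each entry holds the
# total elapsed time up to that point in the base history.
def cumulative_jumps(time_jumps, start=datetime.timedelta(0)):
    out = []
    ts = start
    for idx, jump in time_jumps:
        ts += jump
        out.append((idx, ts))
    return out

# Pass 2: for an archive entry ending at entry_end, take the latest
# cumulative jump whose index is still below entry_end.
def best_ts_for(entry_end, cum, default):
    best = default
    for jump_idx, jump_ts in cum:
        if jump_idx < entry_end:
            best = jump_ts
        else:
            break
    return best

jumps = [(3, datetime.timedelta(hours=1)), (7, datetime.timedelta(hours=2))]
cum = cumulative_jumps(jumps)
print(cum)
print(best_ts_for(5, cum, datetime.timedelta(0)))
```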
def calc_time(self, start_idx: int = 0, end_idx: int = None):
"""
@ -1737,7 +2055,7 @@ class Scene(Emitter):
for message in self.history[start_idx:end_idx]:
if isinstance(message, TimePassageMessage):
util.iso8601_add(ts, message.ts)
ts = util.iso8601_add(ts, message.ts)
found = True
if not found:
@ -1914,6 +2232,7 @@ class Scene(Emitter):
if signal_game_loop:
await self.signals["game_loop"].send(game_loop)
turn_start = signal_game_loop
signal_game_loop = True
for actor in self.actors:
@ -1952,6 +2271,13 @@ class Scene(Emitter):
if not actor.character.is_player:
await self.call_automated_actions()
elif turn_start:
await self.signals["player_turn_start"].send(
events.PlayerTurnStartEvent(
scene=self,
event_type="player_turn_start",
)
)
try:
message = await actor.talk()
@ -2140,6 +2466,7 @@ class Scene(Emitter):
"history": scene.history,
"environment": scene.environment,
"archived_history": scene.archived_history,
"layered_history": scene.layered_history,
"characters": [actor.character.serialize for actor in scene.actors],
"inactive_characters": {
name: character.serialize
@ -2259,6 +2586,7 @@ class Scene(Emitter):
"history": scene.history,
"environment": scene.environment,
"archived_history": scene.archived_history,
"layered_history": scene.layered_history,
"characters": [actor.character.serialize for actor in scene.actors],
"inactive_characters": {
name: character.serialize

View file

@ -18,6 +18,7 @@ from thefuzz import fuzz
from talemate.scene_message import SceneMessage
from talemate.util.dialogue import *
from talemate.util.prompt import *
from talemate.util.response import *
log = structlog.get_logger("talemate.util")
@ -476,32 +477,35 @@ def duration_to_timedelta(duration):
return duration
# If it's an isodate.Duration object with separate year, month, day, hour, minute, second attributes
days = int(duration.years) * 365 + int(duration.months) * 30 + int(duration.days)
seconds = duration.tdelta.seconds
days = int(duration.years * 365 + duration.months * 30 + duration.days)
seconds = int(duration.tdelta.seconds if hasattr(duration, 'tdelta') else 0)
return datetime.timedelta(days=days, seconds=seconds)
def timedelta_to_duration(delta):
"""Convert a datetime.timedelta object to an isodate.Duration object."""
# Extract days and convert to years, months, and days
days = delta.days
years = days // 365
days %= 365
months = days // 30
days %= 30
# Convert remaining seconds to hours, minutes, and seconds
total_days = delta.days
# Convert days back to years and months
years = total_days // 365
remaining_days = total_days % 365
months = remaining_days // 30
days = remaining_days % 30
# Convert remaining seconds
seconds = delta.seconds
hours = seconds // 3600
seconds %= 3600
minutes = seconds // 60
seconds %= 60
return isodate.Duration(
years=years,
months=months,
days=days,
hours=hours,
minutes=minutes,
seconds=seconds,
seconds=seconds
)
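`timedelta_to_duration` approximates years as 365 days and months as 30 days when splitting a day count back into components. That split in isolation:

```python
# The days -> years/months/days decomposition used above
# (365-day years, 30-day months, matching the approximation in the diff)
def split_days(total_days):
    years, rem = divmod(total_days, 365)
    months, days = divmod(rem, 30)
    return years, months, days

print(split_days(400))
```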
@ -531,7 +535,58 @@ def iso8601_diff(duration_str1, duration_str2):
return difference
def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):
def flatten_duration_components(years: int, months: int, weeks: int, days: int,
hours: int, minutes: int, seconds: int):
"""
Flatten duration components based on total duration following specific rules.
Returns adjusted component values based on the total duration.
"""
total_days = years * 365 + months * 30 + weeks * 7 + days
total_months = total_days // 30
# Less than 1 day - keep original granularity
if total_days < 1:
return years, months, weeks, days, hours, minutes, seconds
# Less than 3 days - show only days and hours
elif total_days < 3:
if minutes >= 30: # Round up hours if 30+ minutes
hours += 1
return 0, 0, 0, total_days, hours, 0, 0
# Less than a month - show only days
elif total_days < 30:
return 0, 0, 0, total_days, 0, 0, 0
# Less than 6 months - show months and days
elif total_days < 180:
new_months = total_days // 30
new_days = total_days % 30
return 0, new_months, 0, new_days, 0, 0, 0
# Less than 1 year - show only months
elif total_months < 12:
new_months = total_months
if days > 15:  # Round up months if more than 15 days remain
new_months += 1
return 0, new_months, 0, 0, 0, 0, 0
# Less than 3 years - show years and months
elif total_months < 36:
new_years = total_months // 12
new_months = total_months % 12
return new_years, new_months, 0, 0, 0, 0, 0
# More than 3 years - show only years
else:
new_years = total_months // 12
if months >= 6: # Round up years if 6+ months remain
new_years += 1
return new_years, 0, 0, 0, 0, 0, 0
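A condensed sketch of the flattening thresholds above, useful for sanity-checking the rules (same 365/30-day approximations; this restatement is not the production function):

```python
# Condensed restatement of flatten_duration_components' thresholds.
def flatten(years, months, weeks, days, hours, minutes, seconds):
    total_days = years * 365 + months * 30 + weeks * 7 + days
    total_months = total_days // 30
    if total_days < 1:  # keep original granularity
        return (years, months, weeks, days, hours, minutes, seconds)
    if total_days < 3:  # days + hours, rounding up on 30+ minutes
        return (0, 0, 0, total_days, hours + (1 if minutes >= 30 else 0), 0, 0)
    if total_days < 30:  # days only
        return (0, 0, 0, total_days, 0, 0, 0)
    if total_days < 180:  # months + days
        return (0, total_days // 30, 0, total_days % 30, 0, 0, 0)
    if total_months < 12:  # months only
        return (0, total_months + (1 if days > 15 else 0), 0, 0, 0, 0, 0)
    if total_months < 36:  # years + months
        return (total_months // 12, total_months % 12, 0, 0, 0, 0, 0)
    return (total_months // 12 + (1 if months >= 6 else 0), 0, 0, 0, 0, 0, 0)

# 100 days collapses to 3 months, 10 days
assert flatten(0, 0, 0, 100, 0, 0, 0) == (0, 3, 0, 10, 0, 0, 0)
```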
def iso8601_duration_to_human(iso_duration, suffix: str = " ago",
zero_time_default: str = "Recently", flatten: bool = True):
# Parse the ISO8601 duration string into an isodate duration object
if not isinstance(iso_duration, isodate.Duration):
duration = isodate.parse_duration(iso_duration)
@@ -554,10 +609,15 @@ def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):
minutes = (duration.seconds % 3600) // 60
seconds = duration.seconds % 60
# Adjust for cases where duration is a timedelta object
# Convert days to weeks and days if applicable
weeks, days = divmod(days, 7)
# If flattening is requested, adjust the components
if flatten:
years, months, weeks, days, hours, minutes, seconds = flatten_duration_components(
years, months, weeks, days, hours, minutes, seconds
)
# Build the human-readable components
components = []
if years:
@@ -582,18 +642,18 @@ def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):
elif components:
human_str = components[0]
else:
human_str = "Moments"
return zero_time_default
return f"{human_str}{suffix}"
def iso8601_diff_to_human(start, end):
def iso8601_diff_to_human(start, end, flatten: bool = True):
if not start or not end:
return ""
diff = iso8601_diff(start, end)
return iso8601_duration_to_human(diff)
return iso8601_duration_to_human(diff, flatten=flatten)
def iso8601_add(date_a: str, date_b: str) -> str:
@@ -935,6 +995,14 @@ def ensure_dialog_format(line: str, talking_character: str = None) -> str:
if talking_character:
line = line[len(talking_character) + 1 :].lstrip()
if line.startswith('*') and line.endswith('*'):
if line.count("*") == 2 and not line.count('"'):
return f"{talking_character}: {line}" if talking_character else line
if line.startswith('"') and line.endswith('"'):
if line.count('"') == 2 and not line.count('*'):
return f"{talking_character}: {line}" if talking_character else line
lines = []
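The early-return checks added here read as: a line wrapped entirely in asterisks (narration) or entirely in quotes (dialogue), with no mixture of the two markers, is already well formed and can be passed through. A standalone sketch of that predicate:

```python
# Sketch of the early-return condition in ensure_dialog_format: a line
# fully wrapped in one marker style needs no further cleanup.
def already_formatted(line: str) -> bool:
    if line.startswith('*') and line.endswith('*'):
        return line.count('*') == 2 and '"' not in line
    if line.startswith('"') and line.endswith('"'):
        return line.count('"') == 2 and '*' not in line
    return False

assert already_formatted('*She walks away.*')
assert already_formatted('"Hello there."')
assert not already_formatted('*She says* "hi"')
```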


@@ -17,8 +17,10 @@ def extract_list(response: str) -> list:
items = []
# Locate the beginning of the list
lines = response.split("\n")
# strip empty lines
lines = [line for line in lines if line.strip() != ""]
list_start = None
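The added filter drops blank lines before the parser locates the start of the list, keeping line indices stable. For example (the response text is illustrative):

```python
# Blank lines are stripped before locating the start of the list.
response = "Here are some items:\n\n1. apple\n\n2. banana\n"
lines = [line for line in response.split("\n") if line.strip() != ""]
assert lines == ["Here are some items:", "1. apple", "2. banana"]
```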


@@ -1,3 +1,3 @@
__all__ = ["VERSION"]
VERSION = "0.27.0"
VERSION = "0.28.0"


@@ -1,12 +1,12 @@
{
"name": "talemate_frontend",
"version": "0.27.0",
"version": "0.28.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "talemate_frontend",
"version": "0.27.0",
"version": "0.28.0",
"dependencies": {
"@codemirror/lang-markdown": "^6.2.5",
"@codemirror/theme-one-dark": "^6.1.2",
@@ -4701,9 +4701,9 @@
"dev": true
},
"node_modules/cookie": {
"version": "0.6.0",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz",
"integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==",
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"dev": true,
"engines": {
"node": ">= 0.6"
@@ -4808,9 +4808,9 @@
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
},
"node_modules/cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"version": "7.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
"integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==",
"dev": true,
"dependencies": {
"path-key": "^3.1.0",
@@ -6161,9 +6161,9 @@
}
},
"node_modules/execa/node_modules/cross-spawn": {
"version": "6.0.5",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz",
"integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==",
"version": "6.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.6.tgz",
"integrity": "sha512-VqCUuhcd1iB+dsv8gxPttb5iZh/D0iubSP21g36KXdEuf6I5JiioesUVjpCdHV9MZRUfVFlvwtIUyPfxo5trtw==",
"dev": true,
"dependencies": {
"nice-try": "^1.0.4",
@@ -6228,9 +6228,9 @@
}
},
"node_modules/express": {
"version": "4.21.0",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.0.tgz",
"integrity": "sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng==",
"version": "4.21.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
"integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"dev": true,
"dependencies": {
"accepts": "~1.3.8",
@@ -6238,7 +6238,7 @@
"body-parser": "1.20.3",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.6.0",
"cookie": "0.7.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
@@ -7084,9 +7084,9 @@
}
},
"node_modules/http-proxy-middleware": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.6.tgz",
"integrity": "sha512-ya/UeJ6HVBYxrgYotAZo1KvPWlgB48kUJLDePFeneHsVujFaW5WNj2NgWCAE//B1Dl02BIfYlpNgBy8Kf8Rjmw==",
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.7.tgz",
"integrity": "sha512-fgVY8AV7qU7z/MmXJ/rxwbrtQH4jBQ9m7kp3llF0liB7glmFeVZFBepQb32T3y8n8k2+AEYuMPCpinYW+/CuRA==",
"dev": true,
"dependencies": {
"@types/http-proxy": "^1.17.8",


@@ -1,6 +1,6 @@
{
"name": "talemate_frontend",
"version": "0.27.0",
"version": "0.28.0",
"private": true,
"scripts": {
"serve": "vue-cli-service serve",


@@ -62,7 +62,7 @@
-->
</v-list-item>
</v-list>
<AgentModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveAgent" @update:dialog="updateDialog"></AgentModal>
<AgentModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveAgent" @update:dialog="updateDialog" ref="modal"></AgentModal>
</div>
</template>
@@ -162,6 +162,14 @@ export default {
updateDialog(newVal) {
this.state.dialog = newVal;
},
openSettings(agentName, section) {
let index = this.state.agents.findIndex(a => a.name === agentName);
if (index !== -1) {
this.editAgent(index);
if(section)
this.$refs.modal.tab = section;
}
},
handleMessage(data) {
// Handle agent_status message type
if (data.type === 'agent_status') {


@@ -19,46 +19,51 @@
<v-card-text class="scrollable-content">
<v-row>
<v-col cols="3">
<v-col cols="4">
<v-tabs v-model="tab" color="primary" direction="vertical">
<v-tab v-for="item in tabs" :key="item.name" v-model="tab" :value="item.name">
<v-icon>{{ item.icon }}</v-icon>
{{ item.label }}
</v-tab>
</v-tabs>
</v-col>
<v-col cols="9">
<v-col cols="8">
<v-window v-model="tab">
<v-window-item :value="item.name" v-for="item in tabs" :key="item.name">
<v-select v-if="agent.data.requires_llm_client && tab === '_config'" v-model="selectedClient" :items="agent.data.client" label="Client" @update:modelValue="save(false)"></v-select>
<v-alert type="warning" variant="tonal" density="compact" v-if="agent.data.experimental">
This agent is currently experimental and may significantly decrease performance and/or require strong LLMs to function properly.
</v-alert>
<v-sheet v-for="(action, key) in actionsForTab" :key="key" density="compact">
<div v-if="testActionConditional(action)">
<div>
<v-checkbox v-if="!actionAlwaysEnabled(key) && !action.container" :label="agent.data.actions[key].label" :messages="agent.data.actions[key].description" density="compact" color="primary" v-model="action.enabled" @update:modelValue="save(false)">
<v-checkbox v-if="!actionAlwaysVisible(key, action) && !action.container" :label="agent.data.actions[key].label" :messages="agent.data.actions[key].description" density="compact" color="primary" v-model="action.enabled" @update:modelValue="save(false)">
<!-- template details slot -->
<template v-slot:message="{ message }">
<span class="text-caption text-grey">{{ message }}</span>
<div class="text-caption text-grey mb-8">{{ message }}</div>
</template>
</v-checkbox>
<p v-else-if="action.container">{{ agent.data.actions[key].description }}</p>
<p v-else-if="action.container" class="text-muted mt-2">
{{ agent.data.actions[key].description }}
</p>
</div>
<div class="mt-2">
<div v-if="action.container && action.can_be_disabled">
<v-checkbox :label="'Enable '+action.label" color="primary" v-model="action.enabled" @update:modelValue="save(false)">
<!-- template details slot -->
</v-checkbox>
</div>
<div v-for="(action_config, config_key) in agent.data.actions[key].config" :key="config_key">
<div v-if="action.enabled || actionAlwaysEnabled(key)">
<div v-if="action.enabled || actionAlwaysVisible(key, action)">
<!-- render config widgets based on action_config.type (int, str, bool, float) -->
<v-text-field v-if="action_config.type === 'text' && action_config.choices === null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @keyup="save(true)"></v-text-field>
<v-textarea v-else-if="action_config.type === 'blob'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @keyup="save(true)" rows="5"></v-textarea>
<v-autocomplete v-else-if="action_config.type === 'text' && action_config.choices !== null" v-model="action.config[config_key].value" :items="action_config.choices" :label="action_config.label" :hint="action_config.description" density="compact" item-title="label" item-value="value" @update:modelValue="save(false)"></v-autocomplete>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" thumb-label @update:modelValue="save(true)" color="primary"></v-slider>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" @update:modelValue="save(true)" color="primary" thumb-label="always"></v-slider>
<v-checkbox v-if="action_config.type === 'bool'" v-model="action.config[config_key].value" :label="action_config.label" :messages="action_config.description" density="compact" @update:modelValue="save(false)" color="primary">
<!-- template details slot -->
@@ -70,6 +75,7 @@
</v-checkbox>
<v-alert v-if="action_config.note != null" variant="outlined" density="compact" color="grey-darken-1" icon="mdi-information">
<div class="text-caption text-mutedheader">{{ action_config.label }}</div>
{{ action_config.note }}
</v-alert>
</div>
@@ -81,6 +87,18 @@
</v-window>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-alert type="warning" variant="outlined" density="compact" v-if="agent.data.experimental">
<!-- small icon -->
<span class="text-caption">
This agent is currently experimental and may significantly decrease performance and/or require strong LLMs to function properly.
</span>
</v-alert>
</v-col>
</v-row>
</v-card-text>
</v-card>
</v-dialog>
@@ -109,7 +127,9 @@ export default {
// will cycle through all actions; each action that has `container` = true will be added to the tabs
// will always add a general tab for the general agent settings
let tabs = [{ name: "_config", label: "General", icon: "mdi-cog" }];
let tabs = [{ name: "_config", label: "General", icon: "mdi-cog", action: {} }];
console.log("Agent: ", this.agent);
for (let key in this.agent.actions) {
let action = this.agent.actions[key];
@@ -119,7 +139,7 @@ export default {
if(this.testActionConditional(action) === false)
continue;
tabs.push({ name: key, label: action.label, icon: action.icon });
tabs.push({ name: key, label: action.label, icon: action.icon, action:action });
}
}
@@ -177,8 +197,8 @@ export default {
return 'Enable';
}
},
actionAlwaysEnabled(actionName) {
if (actionName.charAt(0) === '_') {
actionAlwaysVisible(actionName, action) {
if (actionName.charAt(0) === '_' || action.container) {
return true;
} else {
return false;
@@ -220,7 +240,7 @@ export default {
this.saveTimeout = setTimeout(() => {
this.$emit('save', this.agent);
}, 500);
}, 1500);
//this.$emit('save', this.agent);
}


@@ -7,6 +7,10 @@
<v-icon start>mdi-gamepad-square</v-icon>
Game
</v-tab>
<v-tab value="appearance">
<v-icon start>mdi-palette-outline</v-icon>
Appearance
</v-tab>
<v-tab value="application">
<v-icon start>mdi-application</v-icon>
Application
@@ -90,6 +94,16 @@
</v-card>
</v-window-item>
<!-- APPEARANCE -->
<v-window-item value="appearance">
<AppConfigAppearance
ref="appearance"
:immutableConfig="app_config"
:sceneActive="sceneActive"
></AppConfigAppearance>
</v-window-item>
<!-- APPLICATION -->
<v-window-item value="application">
@@ -341,11 +355,13 @@
<script>
import AppConfigPresets from './AppConfigPresets.vue';
import AppConfigAppearance from './AppConfigAppearance.vue';
export default {
name: 'AppConfig',
components: {
AppConfigPresets,
AppConfigAppearance,
},
props: {
agentStatus: Object,
@@ -362,6 +378,9 @@ export default {
{title: 'General', icon: 'mdi-cog', value: 'general'},
{title: 'Default Character', icon: 'mdi-human-edit', value: 'character'},
],
appearance: [
{title: 'Scene', icon: 'mdi-script-text', value: 'scene'},
],
application: [
{title: 'OpenAI', icon: 'mdi-api', value: 'openai_api'},
{title: 'mistral.ai', icon: 'mdi-api', value: 'mistralai_api'},
@@ -472,6 +491,12 @@ export default {
}
}
// check if appearance component is present
if(this.$refs.appearance) {
// update app_config.appearance from $refs.appearance.config
this.app_config.appearance = this.$refs.appearance.get_config();
}
this.sendRequest({
action: 'save',
config: this.app_config,


@@ -0,0 +1,51 @@
<template>
<v-tabs color="secondary" v-model="tab">
<v-tab v-for="t in tabs" :key="t.value" :value="t.value">
<v-icon start>{{ t.icon }}</v-icon>
{{ t.title }}
</v-tab>
</v-tabs>
<v-window v-model="tab">
<v-window-item value="scene">
<AppConfigAppearanceScene ref="scene" :immutableConfig="immutableConfig" :sceneActive="sceneActive"></AppConfigAppearanceScene>
</v-window-item>
</v-window>
</template>
<script>
import AppConfigAppearanceScene from './AppConfigAppearanceScene.vue';
export default {
name: 'AppConfigAppearance',
components: {
AppConfigAppearanceScene,
},
props: {
immutableConfig: Object,
sceneActive: Boolean,
},
emits: [
],
data() {
return {
tab: 'scene',
tabs: [
{ title: 'Scene', icon: 'mdi-script-text', value: 'scene' },
]
}
},
methods: {
get_config() {
let config = {
scene: this.immutableConfig.appearance.scene,
};
if(this.$refs.scene) {
config.scene = this.$refs.scene.config;
}
return config;
}
},
}
</script>


@@ -0,0 +1,189 @@
<template>
<v-row class="ma-5" no-gutters>
<v-col cols="12">
<v-form v-for="config, typ in config" :key="typ">
<v-row>
<v-col cols="3" :class="(colorPickerTarget === typ ? 'text-highlight5' : '')">
<div class="text-caption">{{ typLabelMap[typ] }}</div>
</v-col>
<v-col cols="2">
<v-checkbox :disabled="!canSetStyleOn[typ]" density="compact" v-model="config.italic" label="Italic"></v-checkbox>
</v-col>
<v-col cols="2">
<v-checkbox :disabled="!canSetStyleOn[typ]" density="compact" v-model="config.bold" label="Bold"></v-checkbox>
</v-col>
<v-col cols="2">
<v-checkbox v-if="config.show !== undefined" density="compact" v-model="config.show" label="Show"></v-checkbox>
</v-col>
<v-col class="text-right" cols="3" v-if="canSetColorOn[typ]">
<v-icon class="mt-2" :color="getColor(typ, config.color)" @click="openColorPicker(typ, getColor(typ, config.color))">mdi-circle</v-icon>
<v-btn size="x-small" color="secondary" variant="text" class="mt-2" prepend-icon="mdi-refresh" @click="reset(typ, config)">Reset</v-btn>
</v-col>
</v-row>
</v-form>
</v-col>
</v-row>
<v-row class="ma-5" no-gutters>
<v-col cols="8">
<v-card elevation="7">
<v-card-text>
<div>
<span :style="buildCssStyles('narrator_messages', config.narrator_messages)">
The quick brown fox jumps over the lazy dog
</span>
<span :style="buildCssStyles('character_messages', config.character_messages)">
"Wow, that was a quick brown fox - did you see it?"
</span>
<div class="mt-3">
<v-chip :color="getColor('director_messages', config.director_messages.color)">
<v-icon class="mr-2">mdi-bullhorn</v-icon>
<span @click="toggle()">Guy looking at fox</span>
</v-chip>
</div>
<div class="mt-3" :style="buildCssStyles('director_messages', config.director_messages)">
<span>Director instructs</span>
<span class="ml-1 text-decoration-underline">Guy looking at fox</span>
<span class="ml-1">Stop looking at the fox.</span>
</div>
<div class="mt-3">
<v-chip :color="getColor('time_messages', config.time_messages.color)">
<v-icon class="mr-2">mdi-clock-outline</v-icon>
<span>3 days later</span>
</v-chip>
</div>
<div class="mt-3">
<!-- context investigations, similar to director messages, with both chip and text -->
<v-chip :color="getColor('context_investigation_messages', config.context_investigation_messages.color)">
<v-icon class="mr-2">mdi-text-search</v-icon>
<span>Context Investigation</span>
</v-chip>
</div>
<div class="mt-3" :style="buildCssStyles('context_investigation_messages', config.context_investigation_messages)">
<span>
"The fox was last seen in the forest"
</span>
</div>
</div>
</v-card-text>
</v-card>
</v-col>
<v-col cols="4">
<v-card :style="'opacity: '+(colorPickerTarget ? 1 : 0)">
<v-card-text>
<v-color-picker hide-inputs :disabled="colorPickerTarget === null" v-model="color" @update:model-value="onColorChange"></v-color-picker>
</v-card-text>
</v-card>
</v-col>
</v-row>
</template>
<script>
export default {
name: 'AppConfigAppearanceScene',
components: {
},
props: {
immutableConfig: Object,
sceneActive: Boolean,
},
emits: [
],
watch: {
immutableConfig: {
handler: function(newVal) {
console.log('immutableConfig changed', newVal);
if(!newVal) {
this.config = {};
return;
}
this.config = {...newVal.appearance.scene};
},
immediate: true,
deep: true,
},
},
data() {
return {
colorPicker: null,
color: "#000000",
colorPickerTarget: null,
defaultColors: {
"narrator_messages": "#B39DDB",
"character_messages": "#FFFFFF",
"director_messages": "#FF5722",
"time_messages": "#B39DDB",
"context_investigation_messages": "#607D8B",
},
typLabelMap: {
"narrator_messages": "Narrator Messages",
"character_messages": "Character Messages",
"director_messages": "Director Messages",
"time_messages": "Time Messages",
"context_investigation_messages": "Context Investigations",
},
config: {
scene: {}
},
canSetStyleOn: {
"narrator_messages": true,
"character_messages": true,
"director_messages": true,
"context_investigation_messages": true,
//"time_messages": true,
},
canSetColorOn: {
"narrator_messages": true,
"character_messages": true,
"director_messages": true,
"time_messages": true,
"context_investigation_messages": true,
},
}
},
methods: {
reset(typ, config) {
config.color = null;
this.color = this.getColor(typ, config.color);
},
onColorChange() {
this.config[this.colorPickerTarget].color = this.color;
},
buildCssStyles(typ, config) {
let styles = "";
if (config.italic) {
styles += "font-style: italic;";
}
if (config.bold) {
styles += "font-weight: bold;";
}
styles += "color: " + this.getColor(typ, config.color) + ";";
return styles;
},
openColorPicker(target, targetColor) {
this.color = targetColor;
this.colorPicker = true;
this.colorPickerTarget = target;
},
getColor(typ, color) {
// if color is null, fall back to the default color
if (color === null) {
return this.defaultColors[typ];
}
return color;
}
},
}
</script>


@@ -1,7 +1,7 @@
<template>
<v-alert variant="text" :color="color" icon="mdi-chat-outline" elevation="0" density="compact" @mouseover="hovered=true" @mouseleave="hovered=false">
<template v-slot:close>
<v-btn size="x-small" icon @click="deleteMessage">
<v-btn size="x-small" icon @click="deleteMessage" :disabled="uxLocked">
<v-icon>mdi-close</v-icon>
</v-btn>
</template>
@@ -32,7 +32,7 @@
>
</v-textarea>
<div v-else class="character-text" @dblclick="startEdit()">
<span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative, 'text-narrator': part.isNarrative }">
<span v-for="(part, index) in parts" :key="index" :style="getMessageStyle(part.isNarrative ? 'narrator' : 'character')">
<span>{{ part.text }}</span>
</span>
</div>
@@ -44,14 +44,19 @@
<v-chip size="x-small" color="grey-lighten-1" v-else-if="!editing && hovered" variant="text" class="mr-1">
<v-icon>mdi-pencil</v-icon>
Double-click to edit.</v-chip>
<v-chip size="x-small" label color="success" v-if="!editing && hovered" variant="outlined" @click="createPin(message_id)">
<!-- create pin -->
<v-chip size="x-small" label color="success" v-if="!editing && hovered" variant="outlined" @click="createPin(message_id)" :disabled="uxLocked">
<v-icon class="mr-1">mdi-pin</v-icon>
Create Pin
</v-chip>
<v-chip size="x-small" class="ml-2" label color="primary" v-if="!editing && hovered" variant="outlined" @click="fixMessageContinuityErrors(message_id)">
<v-icon class="mr-1">mdi-call-split</v-icon>
Fix Continuity Errors
<!-- fork scene -->
<v-chip size="x-small" class="ml-2" label color="primary" v-if="!editing && hovered" variant="outlined" @click="forkSceneInitiate(message_id)" :disabled="uxLocked">
<v-icon class="mr-1">mdi-source-fork</v-icon>
Fork Scene
</v-chip>
</v-sheet>
<div v-else style="height:24px">
@@ -61,8 +66,8 @@
<script>
export default {
props: ['character', 'text', 'color', 'message_id'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
props: ['character', 'text', 'color', 'message_id', 'uxLocked'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'forkSceneInitiate', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage', 'getMessageStyle'],
computed: {
parts() {
const parts = [];


@@ -0,0 +1,75 @@
<template>
<div>
<div class="context-investigation-container" v-if="show && minimized" >
<v-chip closable :color="getMessageColor('context_investigation', null)" class="clickable" @click:close="deleteMessage()" :disabled="uxLocked">
<v-icon class="mr-2">{{ icon }}</v-icon>
<span @click="toggle()">Context Investigation</span>
</v-chip>
</div>
<v-alert @click="toggle()" v-else-if="show" class="clickable" variant="text" type="info" :icon="icon" elevation="0" density="compact" @click:close="deleteMessage()" :color="getMessageColor('context_investigation', null)">
<span>{{ text }}</span>
<v-sheet color="transparent">
<v-btn color="secondary" variant="text" size="x-small" prepend-icon="mdi-eye-off" @click.stop="openAppConfig('appearance', 'scene')">Hide these messages</v-btn>
<v-btn color="primary" variant="text" size="x-small" prepend-icon="mdi-cogs" @click.stop="openAgentSettings('conversation', 'investigate_context')">Disable Context Investigations</v-btn>
</v-sheet>
</v-alert>
</div>
</template>
<script>
export default {
name: 'ContextInvestigationMessage',
data() {
return {
show: true,
minimized: true
}
},
computed: {
icon() {
return "mdi-text-search";
}
},
props: ['text', 'message_id', 'uxLocked'],
inject: ['requestDeleteMessage', 'getMessageStyle', 'getMessageColor', 'openAppConfig', 'openAgentSettings'],
methods: {
toggle() {
this.minimized = !this.minimized;
},
deleteMessage() {
this.requestDeleteMessage(this.message_id);
}
}
}
</script>
<style scoped>
.highlight {
font-style: italic;
margin-left: 2px;
margin-right: 2px;
}
.clickable {
cursor: pointer;
}
.highlight:before {
--content: "*";
}
.highlight:after {
--content: "*";
}
.context-investigation-container {
margin-left: 10px;
}
.context-investigation-text::after {
content: '"';
}
.context-investigation-text::before {
content: '"';
}
</style>


@@ -65,12 +65,14 @@ export default {
handleMessage(data) {
if(this.log_socket_messages) {
if(this.filter_socket_messages) {
if(this.filter_socket_messages != "" && this.filter_socket_messages != null) {
if(data.type.indexOf(this.filter_socket_messages) === -1) {
return;
}
}
console.log(data);
}
}
},


@@ -2,30 +2,29 @@
<div v-if="character">
<!-- actor instructions (character direction)-->
<div class="director-container" v-if="show && minimized" >
<v-chip closable color="deep-orange" class="clickable" @click:close="deleteMessage()">
<v-chip closable :color="getMessageColor('director', null)" class="clickable" @click:close="deleteMessage()" :disabled="uxLocked">
<v-icon class="mr-2">{{ icon }}</v-icon>
<span @click="toggle()">{{ character }}</span>
</v-chip>
</div>
<v-alert v-else-if="show" color="deep-orange" class="director-message clickable" variant="text" type="info" :icon="icon"
elevation="0" density="compact" @click:close="deleteMessage()" >
<v-alert v-else-if="show" class="clickable" variant="text" type="info" :icon="icon" :style="getMessageStyle('director')" elevation="0" density="compact" @click:close="deleteMessage()" :color="getMessageColor('director', null)">
<span v-if="direction_mode==='internal_monologue'">
<!-- internal monologue -->
<span class="director-character text-decoration-underline" @click="toggle()">{{ character }}</span>
<span class="director-instructs ml-1" @click="toggle()">thinks</span>
<span class="director-text ml-1" @click="toggle()">{{ text }}</span>
<span :style="getMessageStyle('director')" class="text-decoration-underline" @click="toggle()">{{ character }}</span>
<span :style="getMessageStyle('director')" class="ml-1" @click="toggle()">thinks</span>
<span :style="getMessageStyle('director')" class="director-text ml-1" @click="toggle()">{{ text }}</span>
</span>
<span v-else>
<!-- director instructs -->
<span class="director-instructs" @click="toggle()">Director instructs</span>
<span class="director-character ml-1 text-decoration-underline" @click="toggle()">{{ character }}</span>
<span class="director-text ml-1" @click="toggle()">{{ text }}</span>
<span :style="getMessageStyle('director')" @click="toggle()">Director instructs</span>
<span :style="getMessageStyle('director')" class="ml-1 text-decoration-underline" @click="toggle()">{{ character }}</span>
<span :style="getMessageStyle('director')" class="director-text ml-1" @click="toggle()">{{ text }}</span>
</span>
</v-alert>
</div>
<div v-else-if="action">
<v-alert color="deep-purple-lighten-2" class="director-message" variant="text" type="info" :icon="icon"
<v-alert :color="getMessageColor('director', null)" variant="text" type="info" :icon="icon"
elevation="0" density="compact" >
<div>{{ text }}</div>
@@ -54,8 +53,8 @@ export default {
}
}
},
props: ['text', 'message_id', 'character', 'direction_mode', 'action'],
inject: ['requestDeleteMessage'],
props: ['text', 'message_id', 'character', 'direction_mode', 'action', 'uxLocked'],
inject: ['requestDeleteMessage', 'getMessageStyle', 'getMessageColor'],
methods: {
toggle() {
this.minimized = !this.minimized;
@@ -69,7 +68,6 @@ export default {
<style scoped>
.highlight {
color: #9FA8DA;
font-style: italic;
margin-left: 2px;
margin-right: 2px;
@@ -87,23 +85,10 @@ export default {
--content: "*";
}
.director-message {
color: #9FA8DA;
}
.director-container {
margin-left: 10px;
}
.director-instructs {
/* Add your CSS styles for "Director instructs" here */
color: #BF360C;
}
.director-text {
/* Add your CSS styles for the actual instruction here */
color: #EF6C00;
}
.director-text::after {
content: '"';
}


@@ -18,7 +18,7 @@
<v-card-text v-if="config != null">
<div class="tiles">
<div class="tile" v-for="(scene, index) in recentScenes()" :key="index">
<v-card density="compact" elevation="7" @click="loadScene(scene)" color="primary" variant="outlined">
<v-card :disabled="!sceneLoadingAvailable || sceneIsLoading" density="compact" elevation="7" @click="loadScene(scene)" color="primary" variant="outlined">
<v-card-title>
{{ filenameToTitle(scene.filename) }}
</v-card-title>
@@ -42,6 +42,7 @@
export default {
name: 'IntroRecentScenes',
props: {
sceneIsLoading: Boolean,
sceneLoadingAvailable: Boolean,
config: Object,
},
@@ -177,4 +178,8 @@ export default {
max-width: 275px;
}
.v-card:disabled {
opacity: 0.5;
}
</style>


@@ -1,7 +1,7 @@
<template>
<v-row>
<v-col cols="12" v-if="sceneLoadingAvailable">
<IntroRecentScenes :config="config" :scene-loading-available="sceneLoadingAvailable" @request-scene-load="requestSceneLoad"/>
<IntroRecentScenes :config="config" :scene-is-loading="sceneIsLoading" :scene-loading-available="sceneLoadingAvailable" @request-scene-load="requestSceneLoad"/>
</v-col>
</v-row>
<v-row v-if="false">
@@ -38,6 +38,7 @@ export default {
props: {
version: String,
sceneLoadingAvailable: Boolean,
sceneIsLoading: Boolean,
config: Object,
},
emits: ['request-scene-load'],


@@ -1,7 +1,7 @@
<template>
<v-alert variant="text" color="narrator" icon="mdi-script-text-outline" elevation="0" density="compact" @mouseover="hovered=true" @mouseleave="hovered=false">
<template v-slot:close>
<v-btn size="x-small" icon @click="deleteMessage">
<v-btn size="x-small" icon @click="deleteMessage" :disabled="uxLocked">
<v-icon>mdi-close</v-icon>
</v-btn>
</template>
@@ -24,7 +24,7 @@
@keydown.escape.prevent="cancelEdit()">
</v-textarea>
<div v-else class="narrator-text" @dblclick="startEdit()">
<span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative, 'text-narrator': part.isNarrative }">
<span v-for="(part, index) in parts" :key="index" :style="getMessageStyle(part.isNarrative ? 'narrator' : 'character')">
{{ part.text }}
</span>
</div>
@@ -50,8 +50,8 @@
<script>
export default {
props: ['text', 'message_id'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
props: ['text', 'message_id', 'uxLocked'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage', 'getMessageStyle'],
computed: {
parts() {
const parts = [];


@@ -0,0 +1,80 @@
<template>
<v-alert color="muted" variant="text">
<template v-slot:close>
<v-btn size="x-small" icon @click="cancel">
<v-icon>mdi-close</v-icon>
</v-btn>
</template>
<v-card-title class="text-subtitle-1">
The
<span class="text-director text-secondary"><v-icon size="small">mdi-bullhorn</v-icon> Director</span>
suggests some actions
<v-btn variant="text" size="small" color="secondary" prepend-icon="mdi-refresh" @click="regenerate" :disabled="busy">Regenerate</v-btn>
<v-btn variant="text" size="small" color="primary" prepend-icon="mdi-cogs" @click="settings" :disabled="busy">Settings</v-btn>
</v-card-title>
<p v-if="busy">
<v-progress-linear color="primary" height="2" indeterminate></v-progress-linear>
</p>
<v-list density="compact" :disabled="busy">
<v-list-item v-for="(choice, index) in choices" :key="index" @click="selectChoice(index)">
<v-list-item-title>
{{ choice }}
</v-list-item-title>
</v-list-item>
<v-list-item @click="cancel" prepend-icon="mdi-cancel">
<v-list-item-title>Cancel</v-list-item-title>
</v-list-item>
</v-list>
</v-alert>
</template>
<script>
export default {
name: 'PlayerChoiceMessage',
props: {
choices: Array,
},
data() {
return {
busy: false,
}
},
watch: {
choices() {
this.busy = false;
}
},
inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput', 'unregisterMessageHandler', 'openAgentSettings'],
emits: ['close'],
methods: {
selectChoice(index) {
this.$emit('close');
this.getWebsocket().send(JSON.stringify({
type: "director",
action: "select_choice",
choice: this.choices[index],
}));
},
settings() {
this.openAgentSettings('director', '_generate_choices');
},
cancel() {
this.$emit('close');
},
regenerate() {
this.busy = true;
this.getWebsocket().send(JSON.stringify({
type: "director",
action: "generate_choices",
}));
},
}
}
</script>
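The component above talks to the director agent over the websocket with small JSON actions. A minimal sketch of those payloads as pure functions — `buildDirectorAction` and `selectChoicePayload` are hypothetical helpers, not part of the component; only the `type`/`action`/`choice` field shapes come from the code above:

```javascript
// Hypothetical helpers mirroring PlayerChoiceMessage's websocket payloads.
// The component builds these inline; factoring them out only illustrates
// the message shape the director agent receives.
function buildDirectorAction(action, extra = {}) {
  return JSON.stringify({ type: 'director', action, ...extra });
}

// selectChoice(index) resolves the chosen label before sending.
function selectChoicePayload(choices, index) {
  return buildDirectorAction('select_choice', { choice: choices[index] });
}
```

`regenerate` sends the `generate_choices` action with no extra fields and leaves `busy` set until a new `choices` prop arrives, which the watcher clears.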


@@ -5,6 +5,11 @@
<span class="headline">{{ title }}</span>
</v-card-title>
<v-card-text>
<v-alert v-if="instructions" color="muted" variant="text">
{{ instructions }}
</v-alert>
<v-form @submit.prevent="proceed" ref="form" v-model="valid">
<v-row v-if="inputType === 'multiline'">
<v-col cols="12">
@@ -45,6 +50,7 @@ export default {
name: "RequestInput",
props: {
title: String,
instructions: String,
inputType: {
type: String,
default: 'text',
@@ -54,6 +60,7 @@ export default {
return {
open: false,
valid: false,
extra_params: {},
input: '',
rules: {
required: value => !!value || 'Required.',
@@ -69,15 +76,16 @@ export default {
return;
}
this.$emit('continue', this.input);
this.$emit('continue', this.input, this.extra_params);
this.open = false;
},
cancel() {
this.$emit('cancel');
this.open = false;
},
openDialog() {
openDialog(extra_params) {
this.open = true;
this.extra_params = extra_params;
this.input = '';
}
}
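The `extra_params` change above is a round trip: whatever the caller passes to `openDialog()` is emitted back unchanged alongside the user's input on "continue". A minimal framework-free sketch of that pattern — `DialogStub` is an illustrative stand-in, not Talemate code:

```javascript
// Sketch of RequestInput's extra_params pass-through, without Vue.
// onContinue plays the role of the @continue event listener.
class DialogStub {
  constructor(onContinue) {
    this.onContinue = onContinue;
    this.open = false;
  }
  openDialog(extra_params) {
    this.open = true;
    this.extra_params = extra_params; // stored verbatim, never inspected
    this.input = '';
  }
  proceed() {
    // emit both the user's input and the caller's original params
    this.onContinue(this.input, this.extra_params);
    this.open = false;
  }
}
```

This is what lets the fork dialog below carry a `message_id` through the generic input component without RequestInput knowing anything about forking.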


@@ -1,10 +1,16 @@
<template>
<RequestInput
ref="requestForkName"
title="Save Forked Scene As"
instructions="A new copy of the scene will be forked from the message you've selected. All progress after the message will be removed, allowing you to make new choices and take the scene in a different direction."
@continue="(name, params) => { forkScene(params.message_id, name) }" />
<div class="message-container" ref="messageContainer" style="flex-grow: 1; overflow-y: auto;">
<div v-for="(message, index) in messages" :key="index">
<div v-if="message.type === 'character' || message.type === 'processing_input'"
:class="`message ${message.type}`" :id="`message-${message.id}`" :style="{ borderColor: message.color }">
<div class="character-message">
<CharacterMessage :character="message.character" :text="message.text" :color="message.color" :message_id="message.id" />
<CharacterMessage :character="message.character" :text="message.text" :color="message.color" :message_id="message.id" :uxLocked="uxLocked" />
</div>
</div>
<div v-else-if="message.type === 'request_input' && message.choices">
@@ -37,21 +43,31 @@
</div>
<div v-else-if="message.type === 'narrator'" :class="`message ${message.type}`">
<div class="narrator-message" :id="`message-${message.id}`">
<NarratorMessage :text="message.text" :message_id="message.id" />
<NarratorMessage :text="message.text" :message_id="message.id" :uxLocked="uxLocked" />
</div>
</div>
<div v-else-if="message.type === 'director'" :class="`message ${message.type}`">
<div v-else-if="message.type === 'director' && !getMessageTypeHidden(message.type)" :class="`message ${message.type}`">
<div class="director-message" :id="`message-${message.id}`">
<DirectorMessage :text="message.text" :message_id="message.id" :character="message.character" :direction_mode="message.direction_mode" :action="message.action"/>
<DirectorMessage :text="message.text" :message_id="message.id" :character="message.character" :direction_mode="message.direction_mode" :action="message.action" :uxLocked="uxLocked"/>
</div>
</div>
<div v-else-if="message.type === 'time'" :class="`message ${message.type}`">
<div class="time-message" :id="`message-${message.id}`">
<TimePassageMessage :text="message.text" :message_id="message.id" :ts="message.ts" />
<TimePassageMessage :text="message.text" :message_id="message.id" :ts="message.ts" :uxLocked="uxLocked" />
</div>
</div>
<div v-else-if="message.type === 'player_choice'" :class="`message ${message.type}`">
<div class="player-choice-message" :id="`message-player-choice`">
<PlayerChoiceMessage :choices="message.data.choices" @close="closePlayerChoice" :uxLocked="uxLocked" />
</div>
</div>
<div v-else-if="message.type === 'context_investigation' && !getMessageTypeHidden(message.type)" :class="`message ${message.type}`">
<div class="context-investigation-message" :id="`message-${message.id}`">
<ContextInvestigationMessage :text="message.text" :message_id="message.id" :uxLocked="uxLocked" />
</div>
</div>
<div v-else :class="`message ${message.type}`">
<div v-else-if="!getMessageTypeHidden(message.type)" :class="`message ${message.type}`">
{{ message.text }}
</div>
</div>
@@ -64,6 +80,9 @@ import NarratorMessage from './NarratorMessage.vue';
import DirectorMessage from './DirectorMessage.vue';
import TimePassageMessage from './TimePassageMessage.vue';
import StatusMessage from './StatusMessage.vue';
import RequestInput from './RequestInput.vue';
import PlayerChoiceMessage from './PlayerChoiceMessage.vue';
import ContextInvestigationMessage from './ContextInvestigationMessage.vue';
const MESSAGE_FLAGS = {
NONE: 0,
@@ -72,16 +91,35 @@ const MESSAGE_FLAGS = {
export default {
name: 'SceneMessages',
props: {
appearanceConfig: {
type: Object,
},
uxLocked: {
type: Boolean,
default: false,
},
},
components: {
CharacterMessage,
NarratorMessage,
DirectorMessage,
TimePassageMessage,
StatusMessage,
RequestInput,
PlayerChoiceMessage,
ContextInvestigationMessage,
},
data() {
return {
messages: [],
defaultColors: {
"narrator": "#B39DDB",
"character": "#FFFFFF",
"director": "#FF5722",
"time": "#B39DDB",
"context_investigation": "#607D8B",
},
}
},
inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput'],
@@ -90,10 +128,47 @@ export default {
requestDeleteMessage: this.requestDeleteMessage,
createPin: this.createPin,
fixMessageContinuityErrors: this.fixMessageContinuityErrors,
forkSceneInitiate: this.forkSceneInitiate,
getMessageColor: this.getMessageColor,
getMessageStyle: this.getMessageStyle,
}
},
methods: {
getMessageColor(typ, color) {
const config = this.appearanceConfig && this.appearanceConfig.scene ? this.appearanceConfig.scene[`${typ}_messages`] : null;
if (!config || !config.color) {
return this.defaultColors[typ];
}
return color || config.color;
},
getMessageTypeHidden(typ) {
// messages are hidden if appearanceConfig.scene[`${typ}_messages`].show is false
// (true and undefined both mean visible)
if(!this.appearanceConfig || !this.appearanceConfig.scene[`${typ}_messages`]) {
return false;
} else if(this.appearanceConfig && this.appearanceConfig.scene[`${typ}_messages`].show === false) {
return true;
}
return false;
},
getMessageStyle(typ) {
let styles = "";
// guard against missing appearance config (same check as getMessageColor)
const config = this.appearanceConfig && this.appearanceConfig.scene ? this.appearanceConfig.scene[`${typ}_messages`] : null;
if (!config) {
return "color: " + this.getMessageColor(typ, null) + ";";
}
if (config.italic) {
styles += "font-style: italic;";
}
if (config.bold) {
styles += "font-weight: bold;";
}
styles += "color: " + this.getMessageColor(typ, config.color) + ";";
return styles;
},
clear() {
this.messages = [];
},
@@ -164,6 +239,31 @@ export default {
].includes(type);
},
closePlayerChoice() {
// find the most recent player choice message and remove it
for (let i = this.messages.length - 1; i >= 0; i--) {
if (this.messages[i].type === 'player_choice') {
this.messages.splice(i, 1);
break;
}
}
},
forkSceneInitiate(message_id) {
this.$refs.requestForkName.openDialog(
{ message_id: message_id }
);
},
forkScene(message_id, save_name) {
this.getWebsocket().send(JSON.stringify({
type: 'assistant',
action: 'fork_new_scene',
message_id: message_id,
save_name: save_name,
}));
},
handleMessage(data) {
var i;
@@ -174,6 +274,14 @@ export default {
if (data.type == "remove_message") {
// if the last message is a player_choice message
// and the second to last message is the message to remove
// also remove the player_choice message
if (this.messages.length > 1 && this.messages[this.messages.length - 1].type === 'player_choice' && this.messages[this.messages.length - 2].id === data.id) {
this.messages.pop();
}
// find message where type == "character" and id == data.id
// remove that message from the array
let newMessages = [];
@@ -229,6 +337,13 @@ export default {
return;
}
// if the previous message was a player choice message, remove it
if (this.messageTypeIsSceneMessage(data.type)) {
if(this.messages.length > 0 && this.messages[this.messages.length - 1].type === 'player_choice') {
this.messages.pop();
}
}
if (data.type === 'character') {
const parts = data.message.split(':');
const character = parts.shift();
@@ -244,6 +359,9 @@ export default {
action: data.action
}
);
} else if (data.type === 'player_choice') {
this.messages.push({ id: data.id, type: data.type, data: data.data });
} else if (this.messageTypeIsSceneMessage(data.type)) {
this.messages.push({ id: data.id, type: data.type, text: data.message, color: data.color, character: data.character, status:data.status, ts:data.ts }); // Add color property to the message
} else if (data.type === 'status' && data.data && data.data.as_scene_message === true) {
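Two of the hunks above prune a trailing `player_choice` message: once when its trigger message is removed (or regenerated), and once when any new scene message arrives. A standalone sketch of both rules, assuming the same `{ id, type }` message shape used by the component:

```javascript
// Sketch of SceneMessages' player_choice pruning as pure functions.
// Function names are illustrative; the component performs this inline
// inside handleMessage.
function pruneOnRemove(messages, removedId) {
  const n = messages.length;
  // a trailing player_choice whose trigger message is being removed
  if (n > 1 && messages[n - 1].type === 'player_choice' && messages[n - 2].id === removedId) {
    messages.pop();
  }
  // then drop the removed message itself
  return messages.filter(m => m.id !== removedId);
}

function pruneOnNewSceneMessage(messages) {
  // any new scene message invalidates a pending choice prompt
  if (messages.length > 0 && messages[messages.length - 1].type === 'player_choice') {
    messages.pop();
  }
  return messages;
}
```

Both rules only ever touch the last element, which works because at most one `player_choice` prompt is shown at a time and it is always appended last.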


@@ -3,7 +3,7 @@
<v-progress-circular v-if="statusMessageType === 'busy'" indeterminate="disable-shrink" color="primary" size="20"></v-progress-circular>
<v-icon v-else>{{ notificationIcon() }}</v-icon>
<span class="ml-2">{{ statusMessageText }}</span>
<v-btn v-if="cancellable" class="ml-2" size="small" color="delete" icon rounded="0" elevation="0" variant="text" @click="cancel"><v-icon>mdi-cancel</v-icon></v-btn>
</v-snackbar>
</template>
@@ -15,11 +15,15 @@ export default {
statusMessage: false,
statusMessageText: '',
statusMessageType: '',
cancellable: false,
}
},
inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput'],
methods: {
cancel: function() {
this.getWebsocket().send(JSON.stringify({type: 'interrupt'}));
},
notificationTimeout: function() {
switch(this.statusMessageType) {
@@ -76,6 +80,7 @@ export default {
this.statusMessage = true;
this.statusMessageText = data.message;
this.statusMessageType = data.status;
this.cancellable = data.data && data.data.cancellable;
}
}
},


@@ -144,6 +144,7 @@
@request-scene-load="(path) => { resetViews(); $refs.loadScene.loadJsonSceneFromPath(path); }"
:version="version"
:scene-loading-available="ready && connected"
:scene-is-loading="loading"
:config="appConfig" />
</v-tabs-window-item>
<!-- SCENE -->
@@ -160,7 +161,7 @@
</v-alert>
</div>
<SceneMessages ref="sceneMessages" />
<SceneMessages ref="sceneMessages" :appearance-config="appConfig ? appConfig.appearance : {}" :ux-locked="uxLocked" />
<div style="flex-shrink: 0;">
<SceneTools
@@ -373,6 +374,25 @@ export default {
return Object.keys(this.clientStatus).sort((a, b) => {
return this.clientStatus[a].label.localeCompare(this.clientStatus[b].label);
});
},
uxLocked() {
// no scene loaded, not locked
if(!this.sceneActive) {
return false;
}
// if loading, ux is locked
if(this.loading) {
return true;
}
// if not waiting for input then ux is locked
if(!this.waitingForInput) {
return true;
}
return false;
}
},
mounted() {
@@ -398,6 +418,7 @@ export default {
scene: () => this.scene,
getClients: () => this.getClients(),
getAgents: () => this.getAgents(),
openAgentSettings: this.openAgentSettings,
requestSceneAssets: (asset_ids) => this.requestSceneAssets(asset_ids),
requestAssets: (assets) => this.requestAssets(assets),
openCharacterSheet: (characterName) => this.openCharacterSheet(characterName),
@@ -405,12 +426,13 @@ export default {
creativeEditor: () => this.$refs.creativeEditor,
requestAppConfig: () => this.requestAppConfig(),
appConfig: () => this.appConfig,
openAppConfig: this.openAppConfig,
configurationRequired: () => this.configurationRequired(),
getTrackedCharacterState: (name, question) => this.$refs.worldState.trackedCharacterState(name, question),
getTrackedWorldState: (question) => this.$refs.worldState.trackedWorldState(question),
getPlayerCharacterName: () => this.getPlayerCharacterName(),
formatWorldStateTemplateString: (templateString, characterName) => this.formatWorldStateTemplateString(templateString, characterName),
autocompleteRequest: (partialInput, callback, focus_element) => this.autocompleteRequest(partialInput, callback, focus_element),
autocompleteRequest: (partialInput, callback, focus_element, delay) => this.autocompleteRequest(partialInput, callback, focus_element, delay),
autocompleteInfoMessage: (active) => this.autocompleteInfoMessage(active),
toLabel: (value) => this.toLabel(value),
};
@@ -723,7 +745,8 @@ export default {
this.autocompleting = false
this.messageInput += completion;
},
this.$refs.messageInput
this.$refs.messageInput,
100,
);
return;
}
@@ -749,9 +772,13 @@ export default {
}));
},
autocompleteRequest(param, callback, focus_element) {
autocompleteRequest(param, callback, focus_element, delay=500) {
this.autocompleteCallback = callback;
this.autocompleteCallback = (completion) => {
setTimeout(() => {
callback(completion);
}, delay);
};
this.autocompleteFocusElement = focus_element;
this.autocompletePartialInput = param.partial;
@@ -778,16 +805,30 @@
}
let selectedCharacter = null;
let foundActAs = false;
for(let characterName of this.activeCharacters) {
if(this.actAs === null && characterName === playerCharacterName) {
continue;
// actAs is $narrator so we take the first character in the list
if(this.actAs === "$narrator") {
selectedCharacter = characterName;
break;
}
if(this.actAs === characterName) {
continue;
// actAs is null, so we take the first character in the list that is not
// the player character
if(this.actAs === null && characterName !== playerCharacterName) {
selectedCharacter = characterName;
break;
}
// actAs is set, so we take the first non-player character after the current actAs;
// if actAs is the last character in the list, we set actAs to null
if(foundActAs) {
selectedCharacter = characterName;
break;
} else {
if(characterName === this.actAs) {
foundActAs = true;
}
}
selectedCharacter = characterName;
break;
}
if(selectedCharacter === null || selectedCharacter === playerCharacterName) {
@@ -869,6 +910,9 @@ export default {
}
return null;
},
openAgentSettings(agentName, section) {
this.$refs.aiAgent.openSettings(agentName, section);
},
configurationRequired() {
if (!this.$refs.aiClient || this.connecting || (!this.connecting && !this.connected)) {
return false;
@@ -969,7 +1013,7 @@ export default {
},
messageInputLongHint() {
const DIALOG_HINT = "Ctrl+Enter to autocomplete, Shift+Enter for newline, Tab to act as another character";
const DIALOG_HINT = "Ctrl+Enter to autocomplete, Shift+Enter for newline, Tab to act as another character. Start messages with '@' to do an action. (e.g., '@look at the door')";
if(this.waitingForInput) {
if(this.inputRequestInfo.reason === "talk") {

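The act-as rewrite above cycles through three states: `"$narrator"` (take the first character), `null` (take the first non-player character), or a character name (take the one after it, wrapping back to `null` when it was last). A sketch of that selection as a pure function — `nextActAs` is a hypothetical extraction, not the component method, and it omits the player-character fallback check that follows the loop in the real code:

```javascript
// Sketch of the Tab-to-act-as cycling from the diff above.
function nextActAs(activeCharacters, playerName, actAs) {
  let foundActAs = false;
  for (const name of activeCharacters) {
    // $narrator: next press acts as the first character in the list
    if (actAs === '$narrator') return name;
    // null: start with the first non-player character
    if (actAs === null && name !== playerName) return name;
    // a character name: take the one right after it
    if (foundActAs) return name;
    if (name === actAs) foundActAs = true;
  }
  return null; // actAs was the last character: wrap back to null
}
```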

@@ -1,6 +1,6 @@
<template>
<div class="time-container" v-if="show && minimized" >
<v-chip closable @click:close="deleteMessage()" color="deep-purple-lighten-3">
<v-chip closable @click:close="deleteMessage()" :color="getMessageColor('time',null)" :disabled="uxLocked">
<v-icon class="mr-2">mdi-clock-outline</v-icon>
<span>{{ text }}</span>
</v-chip>
@@ -15,8 +15,8 @@ export default {
minimized: true
}
},
props: ['text', 'message_id', 'ts'],
inject: ['requestDeleteMessage'],
props: ['text', 'message_id', 'ts', 'uxLocked'],
inject: ['requestDeleteMessage', 'getMessageStyle', 'getMessageColor'],
methods: {
toggle() {
this.minimized = !this.minimized;


@@ -413,6 +413,7 @@ export default {
type: 'world_state_manager',
action: 'save_scene',
save_as: copy ? copy : null,
project_name: this.scene.data.filename ? this.scene.data.name : this.scene.data.title
}));
},


@@ -395,7 +395,7 @@ export default {
handleMessage(message) {
if(message.type == "image_generated") {
this.coverImageBusy = false;
if(message.data.context.character_name === this.character.name) {
if(this.character && message.data.context.character_name === this.character.name) {
this.loadCharacter(this.character.name);
}
}


@@ -29,7 +29,8 @@
placeholder="speak less formally, use more contractions, and be more casual."
v-model="dialogueInstructions" label="Acting Instructions"
:color="dialogueInstructionsDirty ? 'info' : null"
@update:model-value="queueUpdateCharacterActor()"
@update:model-value="dialogueInstructionsDirty = true"
@blur="updateCharacterActor(true)"
rows="3"
auto-grow></v-textarea>
<v-alert icon="mdi-bullhorn" density="compact" variant="text" color="grey">
@@ -57,15 +58,15 @@
:character="character.name"
:rewrite-enabled="false"
:generation-options="generationOptions"
@generate="content => { dialogueExamples.push(content); queueUpdateCharacterActor(500); }"
@generate="content => { dialogueExamples.push(content); updateCharacterActor(); }"
/>
<v-text-field v-model="dialogueExample" label="Add Dialogue Example" @keyup.enter="dialogueExamples.push(dialogueExample); dialogueExample = ''; queueUpdateCharacterActor();" dense></v-text-field>
<v-text-field v-model="dialogueExample" label="Add Dialogue Example" @keyup.enter="dialogueExamples.push(dialogueExample); dialogueExample = ''; updateCharacterActor();" dense></v-text-field>
<v-list density="compact" nav>
<v-list-item v-for="(example, index) in dialogueExamplesWithNameStripped" :key="index">
<template v-slot:prepend>
<v-btn color="red-darken-2" rounded="sm" size="x-small" icon variant="text" @click="dialogueExamples.splice(index, 1); queueUpdateCharacterActor()">
<v-btn color="red-darken-2" rounded="sm" size="x-small" icon variant="text" @click="dialogueExamples.splice(index, 1); updateCharacterActor()">
<v-icon>mdi-close-box-outline</v-icon>
</v-btn>
</template>
@@ -132,16 +133,13 @@ export default {
],
inject: ['getWebsocket', 'registerMessageHandler'],
methods: {
queueUpdateCharacterActor(delay = 1500) {
this.dialogueInstructionsDirty = true;
if (this.updateCharacterActorTimeout) {
clearTimeout(this.updateCharacterActorTimeout);
}
this.updateCharacterActorTimeout = setTimeout(this.updateCharacterActor, delay);
},
updateCharacterActor() {
updateCharacterActor(only_if_dirty = false) {
if(only_if_dirty && !this.dialogueInstructionsDirty) {
return;
}
this.getWebsocket().send(JSON.stringify({
type: "world_state_manager",
action: "update_character_actor",

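This file, and the character-editor panels that follow, replace the old debounced `queueUpdate`/`setTimeout` autosave with a dirty flag that is set on `@update:model-value` and flushed on `@blur`. A minimal sketch of the pattern outside Vue — `makeDirtyFlusher` is illustrative, not Talemate code, and it clears the flag on flush for simplicity (the real components manage the flag against server confirmation):

```javascript
// Sketch of the dirty-flag-on-input, flush-on-blur save pattern that
// replaces debounced autosave in the diffs above.
function makeDirtyFlusher(send) {
  return {
    dirty: false,
    markDirty() { this.dirty = true; },  // wired to @update:model-value
    flush(only_if_dirty = false) {       // wired to @blur
      if (only_if_dirty && !this.dirty) return false;
      this.dirty = false;
      send(); // e.g. websocket save request
      return true;
    },
  };
}
```

Compared to the debounce, this avoids firing a save mid-edit and never races a pending timeout against the next keystroke.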

@@ -78,7 +78,8 @@
:hint="autocompleteInfoMessage(busy)"
@keyup.ctrl.enter.stop="sendAutocompleteRequest"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, true)"
v-model="character.base_attributes[selected]">
</v-textarea>
@@ -254,19 +255,12 @@ export default {
}
},
queueUpdate(name, delay = 1500) {
if (this.updateTimeout !== null) {
clearTimeout(this.updateTimeout);
update(name, only_if_dirty = false) {
if(only_if_dirty && !this.dirty) {
return;
}
this.dirty = true;
this.updateTimeout = setTimeout(() => {
this.update(name);
}, delay);
},
update(name) {
return this.getWebsocket().send(JSON.stringify({
type: 'world_state_manager',
action: 'update_character_attribute',


@@ -18,7 +18,8 @@
:loading="busy"
@keyup.ctrl.enter.stop="sendAutocompleteRequest"
@update:model-value="queueUpdate()"
@update:model-value="dirty = true"
@blur="update(true)"
label="Description"
:hint="'A short description of the character. '+autocompleteInfoMessage(busy)">
</v-textarea>
@@ -75,18 +76,12 @@ export default {
}
},
methods: {
queueUpdate(delay = 1500) {
if (this.updateTimeout !== null) {
clearTimeout(this.updateTimeout);
update(only_if_dirty = false) {
if(only_if_dirty && !this.dirty) {
return;
}
this.dirty = true;
this.updateTimeout = setTimeout(() => {
this.update();
}, delay);
},
update() {
this.getWebsocket().send(JSON.stringify({
type: 'world_state_manager',
action: 'update_character_description',


@@ -77,7 +77,8 @@
@keyup.ctrl.enter.stop="sendAutocompleteRequest"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, true)"
v-model="character.details[selected]">
</v-textarea>
@@ -270,20 +271,12 @@ export default {
}
},
queueUpdate(name, delay = 1500) {
if (this.updateTimeout !== null) {
clearTimeout(this.updateTimeout);
update(name, only_if_dirty = false) {
if(only_if_dirty && !this.dirty) {
return;
}
this.dirty = true;
this.updateTimeout = setTimeout(() => {
this.update(name);
}, delay);
},
update(name) {
// if field is currently empty, don't send update, because that
// will cause a deletion
if (this.character.details[name] === "") {


@@ -58,7 +58,8 @@
:label="selected"
:disabled="working"
v-model="character.reinforcements[selected].answer"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, false, true)"
:color="dirty ? 'dirty' : ''">
</v-textarea>
@@ -70,7 +71,8 @@
type="number" min="1" max="100" step="1"
:disabled="working"
class="mb-2"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, false, true)"
:color="dirty ? 'dirty' : ''"></v-text-field>
</v-col>
<v-col cols="6">
@@ -81,7 +83,8 @@
label="Context Attachment Method"
class="mr-1 mb-1" variant="underlined"
density="compact"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, false, true)"
:color="dirty ? 'dirty' : ''">
</v-select>
</v-col>
@@ -92,7 +95,8 @@
<v-textarea rows="3" auto-grow max-rows="5"
label="Additional instructions to the AI for generating this state."
v-model="character.reinforcements[selected].instructions"
@update:modelValue="queueUpdate(selected)"
@update:modelValue="dirty = true"
@blur="update(selected, false, true)"
:disabled="working"
:color="dirty ? 'dirty' : ''"
></v-textarea>
@@ -333,24 +337,16 @@ export default {
this.character.reinforcements[name] = {...this.newReinforcment};
},
queueUpdate(name, delay = 1500) {
if (this.updateTimeout !== null) {
clearTimeout(this.updateTimeout);
update(name, updateState, only_if_dirty = false) {
if(only_if_dirty && !this.dirty) {
return;
}
this.dirty = true;
this.updateTimeout = setTimeout(() => {
this.update(name);
}, delay);
},
update(name, updateState) {
let interval = this.character.reinforcements[name].interval;
let instructions = this.character.reinforcements[name].instructions;
let insert = this.character.reinforcements[name].insert;
if (updateState === true)
this.busy = true;
this.busy = true;
this.getWebsocket().send(JSON.stringify({
type: 'world_state_manager',
action: 'set_character_detail_reinforcement',


@@ -1,44 +1,70 @@
<template>
<v-card>
<v-card-text>
<v-alert color="muted" density="compact" variant="text" icon="mdi-timer-sand-complete">
Whenever the scene is summarized, a new entry is added to the history.
This summarization happens either when a certain length threshold is met or when the scene time advances.
</v-alert>
<v-card-actions>
<v-spacer></v-spacer>
<ConfirmActionInline
action-label="Regenerate History"
confirm-label="Confirm"
color="warning"
icon="mdi-refresh"
:disabled="busy"
@confirm="regenerate"
/>
<v-spacer></v-spacer>
</v-card-actions>
<p v-if="busy">
<v-progress-linear color="primary" height="2" indeterminate></v-progress-linear>
</p>
<v-divider v-else class="mt-2"></v-divider>
<v-tabs v-model="tab" density="compact" color="secondary">
<v-tab key="base">Base</v-tab>
<v-tab v-for="(layer, index) in layers" :key="index">{{ layer.title }}</v-tab>
</v-tabs>
<v-sheet class="ma-4 text-caption text-center">
<span class="text-muted">Total time passed:</span> {{ scene.data.scene_time }}
</v-sheet>
<v-tabs-window v-model="tab">
<v-tabs-window-item key="base">
<v-card>
<v-card-text>
<v-alert color="muted" density="compact" variant="text" icon="mdi-timer-sand-complete">
Whenever the scene is summarized, a new entry is added to the history.
This summarization happens either when a certain length threshold is met or when the scene time advances.
</v-alert>
<v-card-actions>
<v-spacer></v-spacer>
<ConfirmActionInline
action-label="Regenerate History"
confirm-label="Confirm"
color="warning"
icon="mdi-refresh"
:disabled="busy"
@confirm="regenerate"
/>
<v-spacer></v-spacer>
</v-card-actions>
<p v-if="busy">
<v-progress-linear color="primary" height="2" indeterminate></v-progress-linear>
</p>
<v-divider v-else class="mt-2"></v-divider>
<v-sheet class="ma-4 text-caption text-center">
<span class="text-muted">Total time passed:</span> {{ scene.data.scene_time }}
</v-sheet>
<v-list slim density="compact">
<v-list-item v-for="(entry, index) in history" :key="index" class="text-body-2" prepend-icon="mdi-clock">
<v-list-item-subtitle>{{ entry.time }}</v-list-item-subtitle>
<div class="history-entry text-muted">
{{ entry.text }}
</div>
</v-list-item>
</v-list>
</v-card-text>
</v-card>
</v-tabs-window-item>
<v-tabs-window-item v-for="(layer, index) in layers" :key="index">
<v-card>
<v-card-text>
<v-list slim density="compact">
<v-list-item v-for="(entry, index) in layer.entries" :key="index" class="text-body-2" prepend-icon="mdi-clock">
<v-list-item-subtitle>{{ timespan(entry) }}</v-list-item-subtitle>
<div class="history-entry text-muted">
{{ entry.text }}
</div>
</v-list-item>
</v-list>
</v-card-text>
</v-card>
</v-tabs-window-item>
</v-tabs-window>
<v-list slim density="compact">
<v-list-item v-for="(entry, index) in history" :key="index" class="text-body-2" prepend-icon="mdi-clock">
<v-list-item-subtitle>{{ entry.time }}</v-list-item-subtitle>
<div class="history-entry text-muted">
{{ entry.text }}
</div>
</v-list-item>
</v-list>
</v-card-text>
</v-card>
</template>
<script>
@@ -57,7 +83,19 @@ export default {
data() {
return {
history: [],
layered_history: [],
busy: false,
tab: 'base',
}
},
computed: {
layers() {
return this.layered_history.map((layer, index) => {
return {
title: `Layer ${index}`,
entries: layer,
}
});
}
},
inject:[
@@ -74,6 +112,7 @@ export default {
},
regenerate() {
this.history = [];
this.layered_history = [];
this.busy = true;
this.getWebsocket().send(JSON.stringify({
type: "world_state_manager",
@@ -81,6 +120,13 @@ export default {
generation_options: this.generationOptions,
}));
},
timespan(entry) {
// if start and end differ, display them as a range
if(entry.time_start != entry.time_end) {
return `${entry.time_start} to ${entry.time_end}`;
}
return `${entry.time_end}`;
},
requestSceneHistory() {
this.getWebsocket().send(JSON.stringify({
type: "world_state_manager",
@@ -93,9 +139,11 @@
}
if(message.action == 'scene_history') {
this.history = message.data;
this.history = message.data.history;
this.layered_history = message.data.layered_history;
// reverse
this.history = this.history.reverse();
this.layered_history = this.layered_history.map(layer => layer.reverse());
} else if (message.action == 'history_entry_added') {
this.history = message.data;
// reverse
@@ -114,4 +162,9 @@ export default {
}
}
</script>
</script>
<style scoped>
.history-entry {
white-space: pre-wrap;
}
</style>
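The `timespan()` method above labels layered-history entries, collapsing entries whose start and end times match into a single timestamp. Extracted here as a standalone function, assuming the same `{ time_start, time_end }` entry shape the websocket handler stores:

```javascript
// Standalone version of SceneHistory's timespan() label helper.
function timespan(entry) {
  // entries that span time are shown as a range
  if (entry.time_start !== entry.time_end) {
    return `${entry.time_start} to ${entry.time_end}`;
  }
  // point-in-time entries show a single timestamp
  return `${entry.time_end}`;
}
```

Base-layer entries keep the single `entry.time` field, which is why only the layered tabs route through this helper.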

Some files were not shown because too many files have changed in this diff.