Mirror of https://github.com/vegu-ai/talemate.git (synced 2025-09-01 09:59:08 +00:00)
0.23.0 (#91)

* dockerfiles and docker-compose
* containerization fixes
* docker instructions
* readme
* don't mount src by default
* hf template determine fixes
* auto determine prompt template
* script to start talemate listening only on 127.0.0.1
* prompt tweaks
* auto narrate round every 3 rounds
* add return to start screen button
* only show return to start screen button if scene is active
* improvements to character creation
* dedicated property for scene title separate from the save directory name
* filter out negations into negative keywords
* increase auto narrate delay
* add character portrait keyword
* summarization should ignore most recent message, as it is often regenerated
* cohere client
* specify python3
* improve viable runpod text gen detection
* fix formatting in template preview
* cohere command-r plus template (correctness not yet verified)
* mistral client set to decensor
* fix issue with parsing json responses
* command-r prompts updated
* use official mistralai python client
* send max_tokens
* new input autocomplete functionality
* llama 3 templates
* add <|eot_id|> to stopping strings
* tooltip
* llama-3 identifier
* command-r and command-r plus prompt identifiers
* text-gen-webui client tweaks to make llama3 eos tokens work correctly
* better llama-3 detection
* better llama-3 finalizing of parameters
* streamline client prompt finalizers; reduce Yi model smoothing factor from 0.3 to 0.1 for the text-generation-webui client
* relock
* linting
* set 0.23.0
* add new gpt-4 models
* add note about connecting to text-gen-webui from docker
* fix openai image generation no longer working
* default to concept_art
This commit is contained in:
parent 27eba3bd63, commit 83027b3a0f
62 changed files with 2105 additions and 1085 deletions
Dockerfile.backend (new file, 25 lines)

@ -0,0 +1,25 @@
# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY ./src /app/src

# Copy poetry files
COPY pyproject.toml /app/
# If there's a poetry lock file, include the following line
COPY poetry.lock /app/

# Install poetry
RUN pip install poetry

# Install dependencies
RUN poetry install --no-dev

# Make port 5050 available to the world outside this container
EXPOSE 5050

# Run backend server
CMD ["poetry", "run", "python", "src/talemate/server/run.py", "runserver", "--host", "0.0.0.0", "--port", "5050"]
Dockerfile.frontend (new file, 17 lines)

@ -0,0 +1,17 @@
# Use an official node runtime as a parent image
FROM node:20

# Set the working directory in the container
WORKDIR /app

# Copy the frontend directory contents into the container at /app
COPY ./talemate_frontend /app

# Install any needed packages specified in package.json
RUN npm install

# Make port 8080 available to the world outside this container
EXPOSE 8080

# Run frontend server
CMD ["npm", "run", "serve"]
README.md (+20)

@ -43,6 +43,7 @@ Please read the documents in the `docs` folder for more advanced configuration a
 - [Installation](#installation)
   - [Windows](#windows)
   - [Linux](#linux)
+  - [Docker](#docker)
 - [Connecting to an LLM](#connecting-to-an-llm)
   - [OpenAI / mistral.ai / Anthropic](#openai--mistralai--anthropic)
   - [Text-generation-webui / LMStudio](#text-generation-webui--lmstudio)

@ -81,12 +82,29 @@ There is also a [troubleshooting guide](docs/troubleshoot.md) that might help.
 `nodejs v19 or v20` :warning: `v21` not supported yet.
 
-1. `git clone git@github.com:vegu-ai/talemate`
+1. `git clone https://github.com/vegu-ai/talemate.git`
 1. `cd talemate`
 1. `source install.sh`
 1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
 1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.
 
+### Docker
+
+1. `git clone https://github.com/vegu-ai/talemate.git`
+1. `cd talemate`
+1. `docker-compose up`
+1. Navigate your browser to http://localhost:8080
+
+:warning: When connecting to local APIs running on the host machine (e.g. text-generation-webui), you need to use `host.docker.internal` as the hostname.
+
+#### To shut down the Docker container
+
+Just closing the terminal window will not stop the Docker container. You need to run `docker-compose down` to stop the container.
+
+#### How to install Docker
+
+1. Download and install Docker Desktop from the [official Docker website](https://www.docker.com/products/docker-desktop).
+
 # Connecting to an LLM
 
 On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:
docker-compose.yml (new file, 27 lines)

@ -0,0 +1,27 @@
version: '3.8'

services:
  talemate-backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    ports:
      - "5050:5050"
    volumes:
      # can uncomment for dev purposes
      #- ./src/talemate:/app/src/talemate
      - ./config.yaml:/app/config.yaml
      - ./scenes:/app/scenes
      - ./templates:/app/templates
      - ./chroma:/app/chroma
    environment:
      - PYTHONUNBUFFERED=1

  talemate-frontend:
    build:
      context: .
      dockerfile: Dockerfile.frontend
    ports:
      - "8080:8080"
    volumes:
      - ./talemate_frontend:/app
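Note: for a quick smoke test after `docker-compose up`, a script along these lines can confirm both published ports respond. The port numbers come from the compose file above; the script itself is an illustrative sketch (it uses `httpx`, which is already a project dependency). The backend is a websocket server, so on port 5050 even a protocol error just proves the port is open.

# smoke_test.py - minimal reachability check for the two talemate containers
import httpx

SERVICES = {
    "frontend": "http://localhost:8080",  # vue dev server
    "backend": "http://localhost:5050",   # talemate websocket server
}

for name, url in SERVICES.items():
    try:
        response = httpx.get(url, timeout=5)
        print(f"{name}: port open (HTTP {response.status_code})")
    except httpx.HTTPError as exc:
        # connection refused means the container is down; a protocol error
        # on the backend still means something is listening
        print(f"{name}: {type(exc).__name__}: {exc}")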
install.sh

@ -1,7 +1,7 @@
 #!/bin/bash
 
 # create a virtual environment
-python -m venv talemate_env
+python3 -m venv talemate_env
 
 # activate the virtual environment
 source talemate_env/bin/activate
poetry.lock (generated, 1734 lines changed)

File diff suppressed because it is too large.
pyproject.toml

@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"
 [tool.poetry]
 name = "talemate"
-version = "0.22.0"
+version = "0.23.0"
 description = "AI-backed roleplay and narrative tools"
 authors = ["FinalWombat"]
 license = "GNU Affero General Public License v3.0"

@ -18,6 +18,9 @@ rope = "^0.22"
 isort = "^5.10"
 jinja2 = "^3.0"
 openai = ">=1"
+mistralai = ">=0.1.8"
+cohere = ">=5.2.2"
+anthropic = ">=0.19.1"
 requests = "^2.26"
 colorama = ">=0.4.6"
 Pillow = ">=9.5"

@ -39,7 +42,6 @@ thefuzz = ">=0.20.0"
 tiktoken = ">=0.5.1"
 nltk = ">=3.8.1"
 huggingface-hub = ">=0.20.2"
-anthropic = ">=0.19.1"
 RestrictedPython = ">7.1"
 
 # ChromaDB
Simulation Suite scene module (game.py)

@ -3,14 +3,16 @@ def game(TM):
 
     MSG_PROCESSED_INSTRUCTIONS = "Simulation suite processed instructions"
 
-    MSG_HELP = "Instructions to the simulation computer are only process if the computer is addressed at the beginning of the instruction. Please state your commands by addressing the computer by stating \"Computer,\" followed by an instruction. For example ... \"Computer, i want to experience being on a derelict spaceship.\""
+    MSG_HELP = "Instructions to the simulation computer are only processed if the computer is directly addressed at the beginning of the instruction. Please state your commands by addressing the computer by stating \"Computer,\" followed by an instruction. For example ... \"Computer, i want to experience being on a derelict spaceship.\""
 
     PROMPT_NARRATE_ROUND = "Narrate the simulation and reveal some new details to the player in one paragraph. YOU MUST NOT ADDRESS THE COMPUTER OR THE SIMULATION."
 
-    PROMPT_STARTUP = "Narrate the computer asking the user to state the nature of their desired simulation."
+    PROMPT_STARTUP = "Narrate the computer asking the user to state the nature of their desired simulation in a synthetic and soft sounding voice."
 
     CTX_PIN_UNAWARE = "Characters in the simulation ARE NOT AWARE OF THE COMPUTER."
 
+    AUTO_NARRATE_INTERVAL = 10
+
     def parse_sim_call_arguments(call: str) -> str:
         """
         Returns the value between the parentheses of a simulation call
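Note: the diff truncates the body of `parse_sim_call_arguments`. Based on its docstring, a minimal implementation might look like the sketch below; this is hypothetical, not the committed code.

# Hypothetical body matching the docstring "Returns the value between the
# parentheses of a simulation call".
def parse_sim_call_arguments(call: str) -> str:
    start = call.find("(")
    end = call.rfind(")")
    if start == -1 or end == -1 or end < start:
        return ""
    return call[start + 1 : end]

print(parse_sim_call_arguments('set_player_name("Eve")'))  # prints "Eve" (with quotes)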
@ -117,7 +119,7 @@ def game(TM):
             scene=TM.scene,
         )
 
-        calls = calls.split("\n")
+        self.calls = calls = calls.split("\n")
 
         calls = self.prepare_calls(calls)
@ -152,6 +154,33 @@ def game(TM):
 
         self.update_world_state = True
 
+        self.set_simulation_title(compiled)
+
+    def set_simulation_title(self, compiled_calls):
+
+        """
+        Generates a fitting title for the simulation based on the user's instructions
+        """
+
+        TM.log.debug("SIMULATION SUITE: set simulation title", name=TM.scene.title, compiled_calls=compiled_calls)
+
+        if not compiled_calls:
+            return
+
+        if TM.scene.title != "Simulation Suite":
+            # name already changed, no need to do it again
+            return
+
+        title = TM.agents.creator.contextual_generate_from_args(
+            "scene:simulation title",
+            "Create a fitting title for the simulated scenario that the user has requested. Your response MUST be a short but exciting, descriptive title.",
+            length=75
+        )
+
+        title = title.strip('"').strip()
+
+        TM.scene.set_title(title)
+
     def prepare_calls(self, calls):
         """
         Loops through calls and if a `set_player_name` call and a `set_player_persona` call are both
@ -320,6 +349,20 @@ def game(TM):
         else:
             character_name = TM.agents.creator.determine_character_name(character_name=f"{inject} - what is the name of the group of characters to be added to the scene? If no name can be extracted from the text, extract a short descriptive name instead. Respond only with the name.", group=True)
 
+        # sometimes add_ai_character and change_ai_character are called in the same instruction targeting
+        # the same character; if this happens we need to combine them into a single add_ai_character call
+
+        has_change_ai_character_call = TM.client.query_text_eval(f"Are there any calls to `change_ai_character` in the instruction for {character_name}?", "\n".join(self.calls))
+
+        if has_change_ai_character_call:
+            combined_arg = TM.agents.world_state.analyze_and_follow_instruction(
+                "\n".join(self.calls),
+                f"Combine the arguments of the function calls `add_ai_character` and `change_ai_character` for {character_name} into a single text string. Respond with the new argument."
+            )
+            call = f"add_ai_character({combined_arg})"
+            inject = f"The computer executes the function `{call}`"
+
         TM.emit_status("busy", f"Simulation suite adding character: {character_name}", as_scene_message=True)
 
         TM.log.debug("SIMULATION SUITE: add npc", name=character_name)
@ -429,6 +472,14 @@ def game(TM):
 
     def finalize_round(self):
 
+        # track rounds
+        rounds = TM.game_state.get_var("instr.rounds", 0)
+
+        # increase rounds
+        TM.game_state.set_var("instr.rounds", rounds + 1, commit=False)
+
+        has_issued_instructions = TM.game_state.has_var("instr.has_issued_instructions")
+
         if self.update_world_state:
             self.run_update_world_state()

@ -437,7 +488,7 @@ def game(TM):
             TM.game_state.set_var("instr.lastprocessed_call", self.player_message.id, commit=False)
             TM.emit_status("success", MSG_PROCESSED_INSTRUCTIONS, as_scene_message=True)
 
-        elif self.player_message and not TM.game_state.has_var("instr.has_issued_instructions"):
+        elif self.player_message and not has_issued_instructions:
             # simulation started, player message is NOT an instruction, and player has not given
             # any instructions
             self.guide_player()

@ -445,6 +496,10 @@ def game(TM):
         elif self.player_message and not TM.scene.npc_character_names():
             # simulation started, player message is NOT an instruction, but there are no npcs to interact with
             self.narrate_round()
 
+        elif rounds % AUTO_NARRATE_INTERVAL == 0 and rounds and TM.scene.npc_character_names() and has_issued_instructions:
+            # every AUTO_NARRATE_INTERVAL rounds, narrate the round
+            self.narrate_round()
+
     def guide_player(self):
         TM.agents.narrator.action_to_narration(
Simulation Suite scene file (simulation-suite.json)

@ -1,5 +1,6 @@
 {
     "name": "Simulation Suite",
+    "title": "Simulation Suite",
     "environment": "scene",
     "immutable_save": true,
     "restore_from": "simulation-suite.json",
src/talemate/__init__.py

@ -2,4 +2,4 @@ from .agents import Agent
 from .client import TextGeneratorWebuiClient
 from .tale_mate import *
 
-VERSION = "0.22.0"
+VERSION = "0.23.0"
@ -194,12 +194,12 @@ class Agent(ABC):
         return {
             "essential": self.essential,
         }
 
     @property
     def sanitized_action_config(self):
         if not getattr(self, "actions", None):
             return {}
 
         return {k: v.model_dump() for k, v in self.actions.items()}
 
     async def _handle_ready_check(self, fut: asyncio.Future):
@ -22,7 +22,14 @@ from talemate.events import GameLoopEvent
 from talemate.prompts import Prompt
 from talemate.scene_message import CharacterMessage, DirectorMessage
 
-from .base import Agent, AgentAction, AgentActionConfig, AgentDetail, AgentEmission, set_processing
+from .base import (
+    Agent,
+    AgentAction,
+    AgentActionConfig,
+    AgentDetail,
+    AgentEmission,
+    set_processing,
+)
 from .registry import register
 
 if TYPE_CHECKING:

@ -180,22 +187,22 @@ class ConversationAgent(Agent):
         if self.actions["generation_override"].enabled:
             return self.actions["generation_override"].config["format"].value
         return "movie_script"
 
     @property
     def conversation_format_label(self):
         value = self.conversation_format
 
         choices = self.actions["generation_override"].config["format"].choices
 
         for choice in choices:
             if choice["value"] == value:
                 return choice["label"]
 
         return value
 
     @property
     def agent_details(self) -> dict:
 
         details = {
             "client": AgentDetail(
                 icon="mdi-network-outline",

@ -208,9 +215,9 @@ class ConversationAgent(Agent):
             description="Generation format of the scene context, as seen by the AI",
             ).model_dump(),
         }
 
         return details
 
     def connect(self, scene):
         super().connect(scene)
         talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)

@ -567,7 +574,7 @@ class ConversationAgent(Agent):
     def clean_result(self, result, character):
         if "#" in result:
             result = result.split("#")[0]
 
         if "(Internal" in result:
             result = result.split("(Internal")[0]
@ -1,9 +1,11 @@
-from typing import TYPE_CHECKING, Union
+import asyncio
+from typing import TYPE_CHECKING, Tuple, Union
 
 import pydantic
 
 import talemate.util as util
 from talemate.agents.base import set_processing
 from talemate.emit import emit
 from talemate.prompts import Prompt
 
 if TYPE_CHECKING:

@ -22,7 +24,7 @@ class ContentGenerationContext(pydantic.BaseModel):
     original: Union[str, None] = None
 
     @property
-    def computed_context(self) -> (str, str):
+    def computed_context(self) -> Tuple[str, str]:
         typ, context = self.context.split(":", 1)
         return typ, context

@ -54,6 +56,8 @@ class AssistantMixin:
 
         return await self.contextual_generate(generation_context)
 
+    contextual_generate_from_args.exposed = True
+
     @set_processing
     async def contextual_generate(
         self,

@ -93,3 +97,45 @@ class AssistantMixin:
         content = util.strip_partial_sentences(content)
 
         return content.strip()
+
+    @set_processing
+    async def autocomplete_dialogue(
+        self,
+        input: str,
+        character: "Character",
+        emit_signal: bool = True,
+    ) -> str:
+        """
+        Autocomplete dialogue.
+        """
+
+        response = await Prompt.request(
+            f"creator.autocomplete-dialogue",
+            self.client,
+            "create_short",
+            vars={
+                "scene": self.scene,
+                "max_tokens": self.client.max_token_length,
+                "input": input.strip(),
+                "character": character,
+                "can_coerce": self.client.Meta().requires_prompt_template,
+            },
+            pad_prepended_response=False,
+            dedupe_enabled=False,
+        )
+
+        response = util.clean_dialogue(response, character.name)[
+            len(character.name + ":") :
+        ].strip()
+
+        if response.startswith(input):
+            response = response[len(input) :]
+
+        self.scene.log.debug(
+            "autocomplete_suggestion", suggestion=response, input=input
+        )
+
+        if emit_signal:
+            emit("autocomplete_suggestion", response)
+
+        return response
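Note: the trimming logic in `autocomplete_dialogue` above is worth seeing on concrete data. The model often echoes "CharacterName: <input>" before continuing; the code strips the name prefix, then the echoed input, so only the new text remains. The strings below are made up for illustration.

# Standalone demo of the prefix-trimming step, with toy data.
character_name = "Elara"
user_input = "I think we should"
raw = "Elara: I think we should head for the ruins before nightfall."

# drop the "Elara:" prefix
response = raw[len(character_name + ":"):].strip()
# drop the echoed input, keeping only the continuation
if response.startswith(user_input):
    response = response[len(user_input):]

print(response)  # " head for the ruins before nightfall."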
@ -192,7 +192,7 @@ class CharacterCreatorMixin:
             },
         )
         return content_context.strip()
 
     @set_processing
     async def determine_character_dialogue_instructions(
         self,

@ -201,13 +201,13 @@ class CharacterCreatorMixin:
         instructions = await Prompt.request(
             f"creator.determine-character-dialogue-instructions",
             self.client,
-            "create",
+            "create_concise",
             vars={
                 "character": character,
             },
         )
 
-        r = instructions.strip().strip('"').strip()
+        r = instructions.strip().split("\n")[0].strip('"').strip()
         return r
 
     @set_processing

@ -230,7 +230,7 @@ class CharacterCreatorMixin:
         self,
         character_name: str,
         allowed_names: list[str] = None,
-        group:bool = False,
+        group: bool = False,
     ) -> str:
         name = await Prompt.request(
             f"creator.determine-character-name",
@ -128,20 +128,19 @@ class ScenarioCreatorMixin:
             "text": text,
             },
         )
-        return description
+        return description.strip()
 
     @set_processing
     async def determine_content_context_for_description(
         self,
-        description:str,
+        description: str,
     ):
         content_context = await Prompt.request(
             f"creator.determine-content-context",
             self.client,
-            "create",
+            "create_short",
             vars={
                 "description": description,
             },
         )
-        return content_context.strip()
+        return content_context.lstrip().split("\n")[0].strip('"').strip()
@ -15,9 +15,9 @@ from talemate.agents.conversation import ConversationAgentEmission
 from talemate.automated_action import AutomatedAction
 from talemate.emit import emit, wait_for_input
 from talemate.events import GameLoopActorIterEvent, GameLoopStartEvent, SceneStateEvent
+from talemate.game.engine import GameInstructionsMixin
 from talemate.prompts import Prompt
 from talemate.scene_message import DirectorMessage, NarratorMessage
-from talemate.game.engine import GameInstructionsMixin
 
 from .base import Agent, AgentAction, AgentActionConfig, set_processing
 from .registry import register

@ -78,9 +78,9 @@ class DirectorAgent(GameInstructionsMixin, Agent):
                     {
                         "label": "Inner Monologue",
                         "value": "internal_monologue",
-                    }
-                ]
-            )
+                    },
+                ],
+            ),
         },
     ),
 }

@ -100,11 +100,11 @@ class DirectorAgent(GameInstructionsMixin, Agent):
     @property
     def direct_enabled(self):
         return self.actions["direct"].enabled
 
     @property
     def direct_actors_enabled(self):
         return self.actions["direct"].config["direct_actors"].value
 
     @property
     def direct_scene_enabled(self):
         return self.actions["direct"].config["direct_scene"].value

@ -287,7 +287,6 @@ class DirectorAgent(GameInstructionsMixin, Agent):
             self.scene.push_history(message)
         else:
             await self.run_scene_instructions(self.scene)
 
-
     @set_processing
     async def persist_characters_from_worldstate(

@ -329,7 +328,7 @@ class DirectorAgent(GameInstructionsMixin, Agent):
         creator = instance.get_agent("creator")
 
         self.scene.log.debug("persist_character", name=name)
 
         if determine_name:
             name = await creator.determine_character_name(name)
             self.scene.log.debug("persist_character", adjusted_name=name)

@ -367,11 +366,15 @@ class DirectorAgent(GameInstructionsMixin, Agent):
 
         self.scene.log.debug("persist_character", description=description)
 
-        dialogue_instructions = await creator.determine_character_dialogue_instructions(character)
+        dialogue_instructions = await creator.determine_character_dialogue_instructions(
+            character
+        )
 
         character.dialogue_instructions = dialogue_instructions
 
-        self.scene.log.debug("persist_character", dialogue_instructions=dialogue_instructions)
+        self.scene.log.debug(
+            "persist_character", dialogue_instructions=dialogue_instructions
+        )
 
         actor = self.scene.Actor(
             character=character, agent=instance.get_agent("conversation")

@ -404,10 +407,11 @@ class DirectorAgent(GameInstructionsMixin, Agent):
         self.scene.context = response.strip()
         self.scene.emit_status()
 
-    async def log_action(self, action:str, action_description:str):
+    async def log_action(self, action: str, action_description: str):
         message = DirectorMessage(message=action_description, action=action)
         self.scene.push_history(message)
         emit("director", message)
 
+    log_action.exposed = True
+
     def inject_prompt_paramters(
@ -617,6 +617,7 @@ class NarratorAgent(Agent):
         emit("narrator", narrator_message)
 
         return narrator_message
 
+    action_to_narration.exposed = True
 
     # LLM client related methods. These are called during or after the client
@ -140,7 +140,9 @@ class SummarizeAgent(Agent):
         if recent_entry:
             ts = recent_entry.get("ts", ts)
 
-        for i in range(start, len(scene.history)):
+        # we ignore the most recent entry, as the user may still choose to
+        # regenerate it
+        for i in range(start, max(start, len(scene.history) - 1)):
             dialogue = scene.history[i]
 
             # log.debug("build_archive", idx=i, content=str(dialogue)[:64]+"...")
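Note: the new range bound does two things at once, which a toy example makes clearer: the `- 1` skips the newest entry (it may still be regenerated), and the `max()` keeps the range empty rather than reversed when the history is empty or has a single entry. The history list below is illustrative.

# Demonstrating range(start, max(start, len(history) - 1)) on toy data.
history = ["msg0", "msg1", "msg2", "msg3"]
start = 1

for i in range(start, max(start, len(history) - 1)):
    print(i, history[i])  # prints msg1, msg2; msg3 (the newest) is skipped

# edge case: a single-entry history yields no iterations instead of an error
print(list(range(0, max(0, 1 - 1))))  # []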
@ -73,7 +73,7 @@ class VisualBase(Agent):
             ),
         "default_style": AgentActionConfig(
             type="text",
-            value="ink_illustration",
+            value="concept_art",
             choices=MAJOR_STYLES,
             label="Default Style",
             description="The default style to use for visual processing",

@ -219,15 +219,15 @@ class VisualBase(Agent):
         )
 
         await super().apply_config(*args, **kwargs)
 
         backend_fn = getattr(self, f"{self.backend.lower()}_apply_config", None)
         if backend_fn:
 
+            if not backend_changed and was_disabled and self.enabled:
+                # If the backend has not changed, but the agent was previously disabled
+                # and is now enabled, we need to trigger the backend apply_config function
+                backend_changed = True
+
             task = asyncio.create_task(
                 backend_fn(backend_changed=backend_changed, *args, **kwargs)
             )

@ -351,6 +351,9 @@ class VisualBase(Agent):
         vis_type_styles = self.vis_type_styles(context.vis_type)
         prompt = self.prepare_prompt(prompt, [vis_type_styles, thematic_style])
 
+        if context.vis_type == VIS_TYPES.CHARACTER:
+            prompt.keywords.append("character portrait")
+
         if not prompt:
             log.error(
                 "generate", error="No prompt provided and no context to generate from"

@ -429,6 +432,7 @@ class VisualBase(Agent):
     async def generate_environment_background(self, instructions: str = None):
         with VisualContext(vis_type=VIS_TYPES.ENVIRONMENT, instructions=instructions):
             await self.generate(format="landscape")
 
+    generate_environment_background.exposed = True
 
     async def generate_character_portrait(

@ -442,8 +446,10 @@ class VisualBase(Agent):
             instructions=instructions,
         ):
             await self.generate(format="portrait")
 
+    generate_character_portrait.exposed = True
+
 
 # apply mixins to the agent (from HANDLERS dict[str, cls])
 
 for mixin_backend, mixin in HANDLERS.items():
@ -1,5 +1,6 @@
 import base64
 import io
+from urllib.parse import unquote
 
 import httpx
 import structlog

@ -100,6 +101,8 @@ class OpenAIImageMixin:
         else:
             resolution = Resolution(width=1024, height=1024)
 
+        log.debug("openai_image_generate", resolution=resolution)
+
         response = await client.images.generate(
             model=self.openai_model_type,
             prompt=prompt.positive_prompt,

@ -110,8 +113,15 @@ class OpenAIImageMixin:
 
         download_url = response.data[0].url
 
+        # decode url because httpx will encode it again
+        download_url = unquote(download_url)
+        log.debug("openai_image_generate", download_url=download_url)
+
         async with httpx.AsyncClient() as client:
             response = await client.get(download_url, timeout=90)
+            log.debug("openai_image_generate", status_code=response.status_code)
+            if response.status_code >= 400:
+                raise ValueError(f"Error downloading image: {response.content}")
             # bytes to base64encoded
             image = base64.b64encode(response.content).decode("utf-8")
             await self.emit_image(image)
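Note: the `unquote()` fix above addresses the "openai image generation no longer working" item from the changelog. Per the code's own comment, httpx re-encodes the URL it is given, so an already percent-encoded pre-signed URL would get double-encoded and the signature would break. A toy illustration (the URL is made up):

from urllib.parse import quote, unquote

signed = "https://example.com/img.png?sig=abc%2Bdef"  # hypothetical pre-signed URL
print(unquote(signed))                # ...sig=abc+def  (decoded once, safe to re-encode)
print(quote(signed, safe=":/?=&"))    # ...sig=abc%252Bdef (double-encoded: broken signature)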
@ -31,6 +31,14 @@ class Style(pydantic.BaseModel):
     def load(self, prompt: str, negative_prompt: str = ""):
         self.keywords = prompt.split(", ")
         self.negative_keywords = negative_prompt.split(", ")
 
+        # loop through keywords and drop any starting with "no " and add to negative_keywords
+        # with "no " removed
+        for kw in self.keywords:
+            if kw.startswith("no "):
+                self.keywords.remove(kw)
+                self.negative_keywords.append(kw[3:])
+
         return self
 
     def prepend(self, *styles):
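Note: one thing to watch in the negation filter above: it removes items from `self.keywords` while iterating over that same list, which in Python skips the element that follows each removal (e.g. two adjacent "no ..." keywords). A behavior-equivalent sketch that avoids the pitfall, using made-up keywords:

# Safer variant: rebuild the list instead of mutating it mid-iteration.
keywords = ["forest", "no people", "no text", "sunset"]
negative_keywords = []

kept = []
for kw in keywords:
    if kw.startswith("no "):
        negative_keywords.append(kw[3:])  # "people", "text"
    else:
        kept.append(kw)
keywords = kept

print(keywords)           # ['forest', 'sunset']
print(negative_keywords)  # ['people', 'text']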
@ -212,6 +212,7 @@ class WorldStateAgent(Agent):
 
         self.next_update = 0
         await scene.world_state.request_update()
 
+    update_world_state.exposed = True
 
     @set_processing
@ -1,10 +1,11 @@
 import os
 
 import talemate.client.runpod
-from talemate.client.lmstudio import LMStudioClient
-from talemate.client.openai import OpenAIClient
-from talemate.client.mistral import MistralAIClient
 from talemate.client.anthropic import AnthropicClient
+from talemate.client.cohere import CohereClient
+from talemate.client.lmstudio import LMStudioClient
+from talemate.client.mistral import MistralAIClient
+from talemate.client.openai import OpenAIClient
 from talemate.client.openai_compat import OpenAICompatibleClient
 from talemate.client.registry import CLIENT_CLASSES, get_client_class, register
 from talemate.client.textgenwebui import TextGeneratorWebuiClient
@ -79,6 +79,8 @@ class ClientBase:
     conversation_retries: int = 2
     auto_break_repetition_enabled: bool = True
     decensor_enabled: bool = True
+    auto_determine_prompt_template: bool = False
+    finalizers: list[str] = []
     client_type = "base"
 
     class Meta(pydantic.BaseModel):

@ -97,6 +99,7 @@ class ClientBase:
     ):
         self.api_url = api_url
         self.name = name or self.client_type
+        self.auto_determine_prompt_template_attempt = None
         self.log = structlog.get_logger(f"client.{self.client_type}")
         if "max_token_length" in kwargs:
             self.max_token_length = (

@ -262,13 +265,30 @@ class ClientBase:
         self.current_status = status
 
         prompt_template_example, prompt_template_file = self.prompt_template_example()
+        has_prompt_template = (
+            prompt_template_file and prompt_template_file != "default.jinja2"
+        )
+
+        if not has_prompt_template and self.auto_determine_prompt_template:
+
+            # only attempt to determine the prompt template once per model and
+            # only if the model does not already have a prompt template
+
+            if self.auto_determine_prompt_template_attempt != self.model_name:
+                log.info("auto_determine_prompt_template", model_name=self.model_name)
+                self.auto_determine_prompt_template_attempt = self.model_name
+                self.determine_prompt_template()
+                prompt_template_example, prompt_template_file = (
+                    self.prompt_template_example()
+                )
+                has_prompt_template = (
+                    prompt_template_file and prompt_template_file != "default.jinja2"
+                )
 
         data = {
             "api_key": self.api_key,
             "prompt_template_example": prompt_template_example,
-            "has_prompt_template": (
-                prompt_template_file and prompt_template_file != "default.jinja2"
-            ),
+            "has_prompt_template": has_prompt_template,
             "template_file": prompt_template_file,
             "meta": self.Meta().model_dump(),
             "error_action": None,

@ -289,6 +309,15 @@ class ClientBase:
         if status_change:
             instance.emit_agent_status_by_client(self)
 
+    def determine_prompt_template(self):
+        if not self.model_name:
+            return
+
+        template = model_prompt.query_hf_for_prompt_template_suggestion(self.model_name)
+
+        if template:
+            model_prompt.create_user_override(template, self.model_name)
+
     async def get_model_name(self):
         models = await self.client.models.list()
         try:

@ -373,6 +402,14 @@ class ClientBase:
         else:
             parameters["extra_stopping_strings"] = dialog_stopping_strings
 
+    def finalize(self, parameters: dict, prompt: str):
+        for finalizer in self.finalizers:
+            fn = getattr(self, finalizer, None)
+            prompt, applied = fn(parameters, prompt)
+            if applied:
+                return prompt
+        return prompt
+
     async def generate(self, prompt: str, parameters: dict, kind: str):
         """
         Generates text from the given prompt and parameters.

@ -421,6 +458,9 @@ class ClientBase:
         finalized_prompt = self.prompt_template(
             self.get_system_message(kind), prompt
         ).strip(" ")
+
+        finalized_prompt = self.finalize(prompt_param, finalized_prompt)
 
         prompt_param = finalize(prompt_param)
 
         token_length = self.count_tokens(finalized_prompt)
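Note: the new finalizer chain gives each client a list of method names tried in order; each method takes `(parameters, prompt)` and returns `(prompt, applied)`, and the first one that applies wins (`finalize_llama3` and `finalize_YI` below follow this contract). A minimal standalone sketch of a conforming finalizer; the `<|demo|>` trigger token and the client class are made up for illustration:

# Hypothetical finalizer following the (parameters, prompt) -> (prompt, applied) contract.
class DemoClient:
    finalizers = ["finalize_demo"]

    def finalize_demo(self, parameters: dict, prompt: str) -> tuple[str, bool]:
        if "<|demo|>" not in prompt:
            return prompt, False  # not applicable, fall through to the next finalizer

        # mutate sampling parameters in place, adjust the prompt, report success
        parameters.setdefault("stopping_strings", []).append("<|demo|>")
        return prompt.rstrip() + "\n", True

    def finalize(self, parameters: dict, prompt: str) -> str:
        for name in self.finalizers:
            fn = getattr(self, name, None)
            prompt, applied = fn(parameters, prompt)
            if applied:
                return prompt
        return prompt


client = DemoClient()
params = {}
print(repr(client.finalize(params, "hello <|demo|>")))  # 'hello <|demo|>\n'
print(params)  # {'stopping_strings': ['<|demo|>']}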
src/talemate/client/cohere.py (new file, 225 lines)

import pydantic
import structlog
from cohere import AsyncClient

from talemate.client.base import ClientBase, ErrorAction
from talemate.client.registry import register
from talemate.config import load_config
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.util import count_tokens

__all__ = [
    "CohereClient",
]
log = structlog.get_logger("talemate")

# Edit this to add new models / remove old models
SUPPORTED_MODELS = [
    "command",
    "command-r",
    "command-r-plus",
]


class Defaults(pydantic.BaseModel):
    max_token_length: int = 16384
    model: str = "command-r-plus"


@register()
class CohereClient(ClientBase):
    """
    Cohere client for generating text.
    """

    client_type = "cohere"
    conversation_retries = 0
    auto_break_repetition_enabled = False
    decensor_enabled = True

    class Meta(ClientBase.Meta):
        name_prefix: str = "Cohere"
        title: str = "Cohere"
        manual_model: bool = True
        manual_model_choices: list[str] = SUPPORTED_MODELS
        requires_prompt_template: bool = False
        defaults: Defaults = Defaults()

    def __init__(self, model="command-r-plus", **kwargs):
        self.model_name = model
        self.api_key_status = None
        self.config = load_config()
        super().__init__(**kwargs)

        handlers["config_saved"].connect(self.on_config_saved)

    @property
    def cohere_api_key(self):
        return self.config.get("cohere", {}).get("api_key")

    def emit_status(self, processing: bool = None):
        error_action = None
        if processing is not None:
            self.processing = processing

        if self.cohere_api_key:
            status = "busy" if self.processing else "idle"
            model_name = self.model_name
        else:
            status = "error"
            model_name = "No API key set"
            error_action = ErrorAction(
                title="Set API Key",
                action_name="openAppConfig",
                icon="mdi-key-variant",
                arguments=[
                    "application",
                    "cohere_api",
                ],
            )

        if not self.model_name:
            status = "error"
            model_name = "No model loaded"

        self.current_status = status

        emit(
            "client_status",
            message=self.client_type,
            id=self.name,
            details=model_name,
            status=status,
            data={
                "error_action": error_action.model_dump() if error_action else None,
                "meta": self.Meta().model_dump(),
            },
        )

    def set_client(self, max_token_length: int = None):
        if not self.cohere_api_key:
            self.client = AsyncClient("sk-1111")
            log.error("No cohere API key set")
            if self.api_key_status:
                self.api_key_status = False
                emit("request_client_status")
                emit("request_agent_status")
            return

        if not self.model_name:
            self.model_name = "command-r-plus"

        if max_token_length and not isinstance(max_token_length, int):
            max_token_length = int(max_token_length)

        model = self.model_name

        self.client = AsyncClient(self.cohere_api_key)
        self.max_token_length = max_token_length or 16384

        if not self.api_key_status:
            if self.api_key_status is False:
                emit("request_client_status")
                emit("request_agent_status")
            self.api_key_status = True

        log.info(
            "cohere set client",
            max_token_length=self.max_token_length,
            provided_max_token_length=max_token_length,
            model=model,
        )

    def reconfigure(self, **kwargs):
        if kwargs.get("model"):
            self.model_name = kwargs["model"]
            self.set_client(kwargs.get("max_token_length"))

    def on_config_saved(self, event):
        config = event.data
        self.config = config
        self.set_client(max_token_length=self.max_token_length)

    def response_tokens(self, response: str):
        return count_tokens(response.text)

    def prompt_tokens(self, prompt: str):
        return count_tokens(prompt)

    async def status(self):
        self.emit_status()

    def prompt_template(self, system_message: str, prompt: str):
        if "<|BOT|>" in prompt:
            _, right = prompt.split("<|BOT|>", 1)
            if right:
                prompt = prompt.replace("<|BOT|>", "\nStart your response with: ")
            else:
                prompt = prompt.replace("<|BOT|>", "")

        return prompt

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)
        keys = list(parameters.keys())
        valid_keys = ["temperature", "max_tokens"]
        for key in keys:
            if key not in valid_keys:
                del parameters[key]

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """

        if not self.cohere_api_key:
            raise Exception("No cohere API key set")

        right = None
        expected_response = None
        try:
            _, right = prompt.split("\nStart your response with: ")
            expected_response = right.strip()
        except (IndexError, ValueError):
            pass

        human_message = prompt.strip()
        system_message = self.get_system_message(kind)

        self.log.debug(
            "generate",
            prompt=prompt[:128] + " ...",
            parameters=parameters,
            system_message=system_message,
        )

        try:
            response = await self.client.chat(
                model=self.model_name,
                preamble=system_message,
                message=human_message,
                **parameters,
            )

            self._returned_prompt_tokens = self.prompt_tokens(prompt)
            self._returned_response_tokens = self.response_tokens(response)

            log.debug("generated response", response=response.text)

            response = response.text

            if expected_response and expected_response.startswith("{"):
                if response.startswith("```json") and response.endswith("```"):
                    response = response[7:-3].strip()

            if right and response.startswith(right):
                response = response[len(right) :].strip()

            return response
        # except PermissionDeniedError as e:
        #     self.log.error("generate error", e=e)
        #     emit("status", message="cohere API: Permission Denied", status="error")
        #     return ""
        except Exception as e:
            raise
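Note: the `prompt_template` override above shows how the Cohere client handles talemate's `<|BOT|>` coercion marker: rather than prefilling the assistant turn, the marker is rewritten into a "Start your response with:" instruction, and `generate` later strips the echoed prefix from the reply. A tiny demonstration with a made-up prompt:

# Standalone demo of the <|BOT|> rewrite, toy strings only.
prompt = 'Describe the scene.<|BOT|>{"description":'

_, right = prompt.split("<|BOT|>", 1)
if right:
    prompt = prompt.replace("<|BOT|>", "\nStart your response with: ")

print(prompt)
# Describe the scene.
# Start your response with: {"description":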
@ -12,6 +12,7 @@ class Defaults(pydantic.BaseModel):
 
 @register()
 class LMStudioClient(ClientBase):
+    auto_determine_prompt_template: bool = True
     client_type = "lmstudio"
 
     class Meta(ClientBase.Meta):
@ -1,9 +1,8 @@
 import json
 
 import pydantic
 import structlog
-import tiktoken
-from openai import AsyncOpenAI, PermissionDeniedError
+from mistralai.async_client import MistralAsyncClient
+from mistralai.exceptions import MistralAPIStatusException
+from mistralai.models.chat_completion import ChatMessage
 
 from talemate.client.base import ClientBase, ErrorAction
 from talemate.client.registry import register

@ -25,6 +24,8 @@ SUPPORTED_MODELS = [
     "mistral-large-latest",
 ]
 
+JSON_OBJECT_RESPONSE_MODELS = SUPPORTED_MODELS
+
 
 class Defaults(pydantic.BaseModel):
     max_token_length: int = 16384

@ -41,7 +42,7 @@ class MistralAIClient(ClientBase):
     conversation_retries = 0
     auto_break_repetition_enabled = False
     # TODO: make this configurable?
-    decensor_enabled = False
+    decensor_enabled = True
 
     class Meta(ClientBase.Meta):
         name_prefix: str = "MistralAI"

@ -104,7 +105,7 @@ class MistralAIClient(ClientBase):
 
     def set_client(self, max_token_length: int = None):
         if not self.mistralai_api_key:
-            self.client = AsyncOpenAI(api_key="sk-1111")
+            self.client = MistralAsyncClient(api_key="sk-1111")
             log.error("No mistral.ai API key set")
             if self.api_key_status:
                 self.api_key_status = False

@ -120,9 +121,7 @@ class MistralAIClient(ClientBase):
 
         model = self.model_name
 
-        self.client = AsyncOpenAI(
-            api_key=self.mistralai_api_key, base_url="https://api.mistral.ai/v1/"
-        )
+        self.client = MistralAsyncClient(api_key=self.mistralai_api_key)
         self.max_token_length = max_token_length or 16384
 
         if not self.api_key_status:

@ -183,16 +182,23 @@ class MistralAIClient(ClientBase):
         if not self.mistralai_api_key:
             raise Exception("No mistral.ai API key set")
 
+        supports_json_object = self.model_name in JSON_OBJECT_RESPONSE_MODELS
         right = None
         expected_response = None
         try:
             _, right = prompt.split("\nStart your response with: ")
             expected_response = right.strip()
+            if expected_response.startswith("{") and supports_json_object:
+                parameters["response_format"] = {"type": "json_object"}
         except (IndexError, ValueError):
             pass
 
-        human_message = {"role": "user", "content": prompt.strip()}
-        system_message = {"role": "system", "content": self.get_system_message(kind)}
+        system_message = self.get_system_message(kind)
+
+        messages = [
+            ChatMessage(role="system", content=system_message),
+            ChatMessage(role="user", content=prompt.strip()),
+        ]
 
         self.log.debug(
             "generate",

@ -202,9 +208,9 @@ class MistralAIClient(ClientBase):
         )
 
         try:
-            response = await self.client.chat.completions.create(
+            response = await self.client.chat(
                 model=self.model_name,
-                messages=[system_message, human_message],
+                messages=messages,
                 **parameters,
             )

@ -216,7 +222,11 @@ class MistralAIClient(ClientBase):
             # older models don't support json_object response coercion
             # and often like to return the response wrapped in ```json
             # so we strip that out if the expected response is a json object
-            if expected_response and expected_response.startswith("{"):
+            if (
+                not supports_json_object
+                and expected_response
+                and expected_response.startswith("{")
+            ):
                 if response.startswith("```json") and response.endswith("```"):
                     response = response[7:-3].strip()

@ -224,9 +234,14 @@ class MistralAIClient(ClientBase):
                 response = response[len(right) :].strip()
 
             return response
-        except PermissionDeniedError as e:
+        except MistralAPIStatusException as e:
             self.log.error("generate error", e=e)
-            emit("status", message="mistral.ai API: Permission Denied", status="error")
+            if e.http_status in [403, 401]:
+                emit(
+                    "status",
+                    message="mistral.ai API: Permission Denied",
+                    status="error",
+                )
             return ""
         except Exception as e:
             raise
@ -1,3 +1,4 @@
+import json
 import os
 import shutil
 import tempfile

@ -155,11 +156,19 @@ class ModelPrompt:
         except ValueError:
             return None
 
-        models = list(
-            api.list_models(
-                filter=huggingface_hub.ModelFilter(model_name=model_name, author=author)
-            )
-        )
+        branch_name = "main"
+
+        # special popular cases
+
+        # bartowski
+
+        if author == "bartowski" and "exl2" in model_name:
+            # split model_name by exl2 and take the first part with "exl2" readded
+            # the second part is the branch name
+            model_name, branch_name = model_name.split("exl2_", 1)
+            model_name = f"{model_name}exl2"
+
+        models = list(api.list_models(model_name=model_name, author=author))
 
         if not models:
             return None

@ -167,9 +176,14 @@ class ModelPrompt:
         model = models[0]
 
         repo_id = f"{author}/{model_name}"
 
         # Check README.md
         with tempfile.TemporaryDirectory() as tmpdir:
             readme_path = huggingface_hub.hf_hub_download(
-                repo_id=repo_id, filename="README.md", cache_dir=tmpdir
+                repo_id=repo_id,
+                filename="README.md",
+                cache_dir=tmpdir,
+                revision=branch_name,
             )
             if not readme_path:
                 return None

@ -180,6 +194,24 @@ class ModelPrompt:
             if identifier(readme):
                 return f"{identifier.template_str}.jinja2"
 
+        # Check tokenizer_config.json
+        # "chat_template" key
+        with tempfile.TemporaryDirectory() as tmpdir:
+            config_path = huggingface_hub.hf_hub_download(
+                repo_id=repo_id,
+                filename="tokenizer_config.json",
+                cache_dir=tmpdir,
+                revision=branch_name,
+            )
+            if not config_path:
+                return None
+            with open(config_path) as f:
+                config = json.load(f)
+            for identifer_cls in TEMPLATE_IDENTIFIERS:
+                identifier = identifer_cls()
+                if identifier(config.get("chat_template", "")):
+                    return f"{identifier.template_str}.jinja2"
 
 
 model_prompt = ModelPrompt()
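Note: the bartowski/exl2 special case above splits the quantization branch out of the repo name, since bartowski publishes exl2 quants as branches of one repo. A worked example of just the string manipulation, on a hypothetical repo name:

# The repo name is made up; the string operations match the code above.
model_name = "SomeModel-exl2_6.0bpw"
branch_name = "main"

if "exl2" in model_name:
    model_name, branch_name = model_name.split("exl2_", 1)
    model_name = f"{model_name}exl2"

print(model_name)   # SomeModel-exl2
print(branch_name)  # 6.0bpw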
@ -197,6 +229,14 @@ class Llama2Identifier(TemplateIdentifier):
         return "[INST]" in content and "[/INST]" in content
 
 
+@register_template_identifier
+class Llama3Identifier(TemplateIdentifier):
+    template_str = "Llama3"
+
+    def __call__(self, content: str):
+        return "<|start_header_id|>" in content and "<|end_header_id|>" in content
+
+
 @register_template_identifier
 class ChatMLIdentifier(TemplateIdentifier):
     template_str = "ChatML"

@ -211,11 +251,42 @@ class ChatMLIdentifier(TemplateIdentifier):
         {{ coercion_message }}
         """
 
         return "<|im_start|>" in content and "<|im_end|>" in content
 
 
 @register_template_identifier
 class CommandRIdentifier(TemplateIdentifier):
     template_str = "CommandR"
 
     def __call__(self, content: str):
         """
         <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ system_message }}
         {{ user_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|>
         <|CHATBOT_TOKEN|>{{ coercion_message }}
         """
 
-        return (
-            "<|im_start|>system" in content
-            and "<|im_end|>" in content
-            and "<|im_start|>user" in content
-            and "<|im_start|>assistant" in content
-        )
+        return (
+            "<|START_OF_TURN_TOKEN|>" in content
+            and "<|END_OF_TURN_TOKEN|>" in content
+            and "<|SYSTEM_TOKEN|>" not in content
+        )
+
+
+@register_template_identifier
+class CommandRPlusIdentifier(TemplateIdentifier):
+    template_str = "CommandRPlus"
+
+    def __call__(self, content: str):
+        """
+        <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{{ system_message }}
+        <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ user_message }}
+        <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ coercion_message }}
+        """
+
+        return (
+            "<|START_OF_TURN_TOKEN|>" in content
+            and "<|END_OF_TURN_TOKEN|>" in content
+            and "<|SYSTEM_TOKEN|>" in content
+        )
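Note: the two Cohere identifiers disambiguate purely on `<|SYSTEM_TOKEN|>`: Command-R-plus templates carry a dedicated system turn, plain Command-R templates do not. A quick check of that logic on a minimal, trimmed template string (illustrative only):

# Disambiguation demo: same checks as the identifiers above, inlined.
content = (
    "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>sys"
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>hi"
)

is_command_r_plus = (
    "<|START_OF_TURN_TOKEN|>" in content
    and "<|END_OF_TURN_TOKEN|>" in content
    and "<|SYSTEM_TOKEN|>" in content
)
is_command_r = (
    "<|START_OF_TURN_TOKEN|>" in content
    and "<|END_OF_TURN_TOKEN|>" in content
    and "<|SYSTEM_TOKEN|>" not in content
)
print(is_command_r_plus, is_command_r)  # True False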
@ -26,6 +26,8 @@ SUPPORTED_MODELS = [
     "gpt-4-1106-preview",
     "gpt-4-0125-preview",
     "gpt-4-turbo-preview",
+    "gpt-4-turbo-2024-04-09",
+    "gpt-4-turbo",
 ]
 
 JSON_OBJECT_RESPONSE_MODELS = [

@ -90,7 +92,7 @@ def num_tokens_from_messages(messages: list[dict], model: str = "gpt-3.5-turbo-0
 
 class Defaults(pydantic.BaseModel):
     max_token_length: int = 16384
-    model: str = "gpt-4-turbo-preview"
+    model: str = "gpt-4-turbo"
 
 
 @register()

@ -113,7 +115,7 @@ class OpenAIClient(ClientBase):
         requires_prompt_template: bool = False
         defaults: Defaults = Defaults()
 
-    def __init__(self, model="gpt-4-turbo-preview", **kwargs):
+    def __init__(self, model="gpt-4-turbo", **kwargs):
         self.model_name = model
         self.api_key_status = None
         self.config = load_config()
@ -1,12 +1,13 @@
+import urllib
+
 import pydantic
 import structlog
-import urllib
 from openai import AsyncOpenAI, NotFoundError, PermissionDeniedError
 
 from talemate.client.base import ClientBase, ExtraField
 from talemate.client.registry import register
-from talemate.emit import emit
 from talemate.config import Client as BaseClientConfig
+from talemate.emit import emit
 
 log = structlog.get_logger("talemate.client.openai_compat")
@ -21,11 +21,13 @@ dotenv.load_dotenv()
 
 runpod.api_key = load_config().get("runpod", {}).get("api_key", "")
 
+TEXTGEN_IDENTIFIERS = ["textgen", "thebloke llms", "text-generation-webui"]
+
 
 def is_textgen_pod(pod):
     name = pod["name"].lower()
 
-    if "textgen" in name or "thebloke llms" in name:
+    if any(identifier in name for identifier in TEXTGEN_IDENTIFIERS):
         return True
 
     return False
@ -13,6 +13,12 @@ log = structlog.get_logger("talemate.client.textgenwebui")
 
 @register()
 class TextGeneratorWebuiClient(ClientBase):
+    auto_determine_prompt_template: bool = True
+    finalizers: list[str] = [
+        "finalize_llama3",
+        "finalize_YI",
+    ]
+
     client_type = "textgenwebui"
 
     class Meta(ClientBase.Meta):

@ -28,23 +34,42 @@ class TextGeneratorWebuiClient(ClientBase):
         parameters["max_new_tokens"] = parameters["max_tokens"]
         parameters["stop"] = parameters["stopping_strings"]
 
-        # Half temperature on -Yi- models
-        if self.model_name and self.is_yi_model():
-            parameters["smoothing_factor"] = 0.3
-            # also half the temperature
-            parameters["temperature"] = max(0.1, parameters["temperature"] / 2)
-            log.debug(
-                "applying temperature smoothing for Yi model",
-            )
-
     def set_client(self, **kwargs):
         self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")
 
-    def is_yi_model(self):
-        model_name = self.model_name.lower()
-        # regex match for yi encased by non-word characters
-        return bool(re.search(r"[\-_]yi[\-_]", model_name))
+    def finalize_llama3(self, parameters: dict, prompt: str) -> tuple[str, bool]:
+
+        if "<|eot_id|>" not in prompt:
+            return prompt, False
+
+        # llama3 instruct models need to add "<|eot_id|>", "<|end_of_text|>" to the stopping strings
+        parameters["stopping_strings"] += ["<|eot_id|>", "<|end_of_text|>"]
+
+        # also needs to add `skip_special_tokens` = False to the parameters
+        parameters["skip_special_tokens"] = False
+        log.debug("finalizing llama3 instruct parameters", parameters=parameters)
+
+        if prompt.endswith("<|end_header_id|>"):
+            # append two linebreaks
+            prompt += "\n\n"
+            log.debug("adjusting llama3 instruct prompt: missing linebreaks")
+
+        return prompt, True
+
+    def finalize_YI(self, parameters: dict, prompt: str) -> tuple[str, bool]:
+        model_name = self.model_name.lower()
+        # regex match for yi encased by non-word characters
+        if not bool(re.search(r"[\-_]yi[\-_]", model_name)):
+            return prompt, False
+
+        parameters["smoothing_factor"] = 0.1
+        # also half the temperature
+        parameters["temperature"] = max(0.1, parameters["temperature"] / 2)
+        log.debug(
+            "finalizing YI parameters",
+            parameters=parameters,
+        )
+        return prompt, True
 
     async def get_model_name(self):
         async with httpx.AsyncClient() as client:
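Note: the Yi-detection regex in `finalize_YI` requires "yi" to be fenced by "-" or "_" on both sides, so it only matches names that embed Yi as a distinct segment. The model names below are made up to show the behavior, including the edge case of a leading "yi":

import re

for name in ["nous-hermes-2-yi-34b", "yi-34b-chat", "yield-model"]:
    print(name, bool(re.search(r"[\-_]yi[\-_]", name)))
# nous-hermes-2-yi-34b True
# yi-34b-chat False   <- leading "yi" has no left fence, so it does not match
# yield-model False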
@ -1,4 +1,5 @@
|
|||
from .base import TalemateCommand
|
||||
from .cmd_autocomplete import *
|
||||
from .cmd_characters import *
|
||||
from .cmd_debug_tools import *
|
||||
from .cmd_dialogue import *
|
||||
|
|
src/talemate/commands/cmd_autocomplete.py (new file, 26 lines)

from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.emit import emit

__all__ = [
    "CmdAutocompleteDialogue",
]


@register
class CmdAutocompleteDialogue(TalemateCommand):
    """
    Command class for the 'autocomplete_dialogue' command
    """

    name = "autocomplete_dialogue"
    description = "Generate dialogue for an AI selected actor"
    aliases = ["acdlg"]

    async def run(self):

        input = self.args[0]
        creator = self.scene.get_helper("creator").agent
        character = self.scene.get_player_character()

        await creator.autocomplete_dialogue(input, character, emit_signal=True)
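Note: the command is a thin wrapper; the same autocomplete can be driven directly from the creator agent, mirroring what `run()` does. A hedged sketch, assuming you already have a loaded talemate scene with a player character:

import asyncio

async def demo(scene):
    # mirrors CmdAutocompleteDialogue.run(); "scene" is assumed to be a
    # loaded talemate Scene object
    creator = scene.get_helper("creator").agent
    character = scene.get_player_character()
    suggestion = await creator.autocomplete_dialogue(
        "I reach for the", character, emit_signal=False
    )
    print(suggestion)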
@ -1,13 +1,13 @@
+import copy
 import datetime
 import os
-import copy
-from typing import TYPE_CHECKING, ClassVar, Dict, Optional, TypeVar, Union, Any
-from typing_extensions import Annotated
+from typing import TYPE_CHECKING, Any, ClassVar, Dict, Optional, TypeVar, Union
 
 import pydantic
 import structlog
 import yaml
 from pydantic import BaseModel, Field
+from typing_extensions import Annotated
 
 from talemate.agents.registry import get_agent_class
 from talemate.client.registry import get_client_class
|
|||
api_key: Union[str, None] = None
|
||||
|
||||
|
||||
class CohereConfig(BaseModel):
|
||||
api_key: Union[str, None] = None
|
||||
|
||||
|
||||
class RunPodConfig(BaseModel):
|
||||
api_key: Union[str, None] = None
|
||||
|
||||
|
@ -322,6 +326,8 @@ class Config(BaseModel):
|
|||
|
||||
anthropic: AnthropicConfig = AnthropicConfig()
|
||||
|
||||
cohere: CohereConfig = CohereConfig()
|
||||
|
||||
runpod: RunPodConfig = RunPodConfig()
|
||||
|
||||
chromadb: ChromaDB = ChromaDB()
|
||||
|
|
|
@ -36,6 +36,8 @@ ConfigSaved = signal("config_saved")
 
 ImageGenerated = signal("image_generated")
 
+AutocompleteSuggestion = signal("autocomplete_suggestion")
+
 handlers = {
     "system": SystemMessage,
     "narrator": NarratorMessage,

@ -63,4 +65,5 @@ handlers = {
     "config_saved": ConfigSaved,
     "status": StatusMessage,
     "image_generated": ImageGenerated,
+    "autocomplete_suggestion": AutocompleteSuggestion,
 }
@ -1,13 +1,14 @@
|
|||
import os
|
||||
import importlib
|
||||
import asyncio
|
||||
import nest_asyncio
|
||||
import structlog
|
||||
import pydantic
|
||||
import importlib
|
||||
import os
|
||||
from typing import TYPE_CHECKING, Coroutine
|
||||
|
||||
import nest_asyncio
|
||||
import pydantic
|
||||
import structlog
|
||||
from RestrictedPython import compile_restricted, safe_globals
|
||||
from RestrictedPython.Eval import default_guarded_getiter,default_guarded_getitem
|
||||
from RestrictedPython.Guards import guarded_iter_unpack_sequence,safer_getattr
|
||||
from RestrictedPython.Eval import default_guarded_getitem, default_guarded_getiter
|
||||
from RestrictedPython.Guards import guarded_iter_unpack_sequence, safer_getattr
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from talemate.tale_mate import Scene
|
||||
|
@ -20,9 +21,12 @@ nest_asyncio.apply()
|
|||
|
||||
DEV_MODE = True
|
||||
|
||||
def compile_scene_module(module_code:str, **kwargs):
|
||||
|
||||
def compile_scene_module(module_code: str, **kwargs):
|
||||
# Compile the module code using RestrictedPython
|
||||
compiled_code = compile_restricted(module_code, filename='<scene instructions>', mode='exec')
|
||||
compiled_code = compile_restricted(
|
||||
module_code, filename="<scene instructions>", mode="exec"
|
||||
)
|
||||
|
||||
# Create a restricted globals dictionary
|
||||
restricted_globals = safe_globals.copy()
|
||||
|
@@ -30,62 +34,64 @@ def compile_scene_module(module_code:str, **kwargs):

    # Add custom variables, functions, or objects to the restricted globals
    restricted_globals.update(kwargs)
    restricted_globals['__name__'] = '__main__'
    restricted_globals['__metaclass__'] = type
    restricted_globals['_getiter_'] = default_guarded_getiter
    restricted_globals['_getitem_'] = default_guarded_getitem
    restricted_globals['_iter_unpack_sequence_'] = guarded_iter_unpack_sequence
    restricted_globals['getattr'] = safer_getattr
    restricted_globals["__name__"] = "__main__"
    restricted_globals["__metaclass__"] = type
    restricted_globals["_getiter_"] = default_guarded_getiter
    restricted_globals["_getitem_"] = default_guarded_getitem
    restricted_globals["_iter_unpack_sequence_"] = guarded_iter_unpack_sequence
    restricted_globals["getattr"] = safer_getattr
    restricted_globals["_write_"] = lambda x: x
    restricted_globals["hasattr"] = hasattr

    # Execute the compiled code with the restricted globals
    exec(compiled_code, restricted_globals, safe_locals)
    return safe_locals.get("game")


class GameInstructionsMixin:

    """
    Game instructions mixin for director agent.

    This allows Talemate scenarios to hook into the python api for more sophisticated
    gameplay mechanics and direct exposure to AI functionality.
    """

    @property
    def scene_module_path(self):
        return os.path.join(self.scene.save_dir, "game.py")

    async def scene_has_instructions(self, scene: "Scene") -> bool:
        """Returns True if the scene has instructions."""
        return await self.scene_has_module(scene) or await self.scene_has_template_instructions(scene)
        return await self.scene_has_module(
            scene
        ) or await self.scene_has_template_instructions(scene)

    async def run_scene_instructions(self, scene: "Scene"):
        """
        runs the game/__init__.py of the scene
        """
        if await self.scene_has_module(scene):
            await self.run_scene_module(scene)
        else:
            return await self.run_scene_template_instructions(scene)

    # SCENE TEMPLATE INSTRUCTIONS SUPPORT

    async def scene_has_template_instructions(self, scene: "Scene") -> bool:
        """Returns True if the scene has an instructions template."""
        instructions_template_path = os.path.join(scene.template_dir, "instructions.jinja2")
        instructions_template_path = os.path.join(
            scene.template_dir, "instructions.jinja2"
        )
        return os.path.exists(instructions_template_path)

    async def run_scene_template_instructions(self, scene: "Scene"):
        client = self.client
        game_state = scene.game_state

        if not await self.scene_has_template_instructions(self.scene):
            return

        log.info("Running scene instructions from jinja2 template", scene=scene)
        with PrependTemplateDirectories([scene.template_dir]):
            prompt = Prompt.get(
@@ -105,60 +111,59 @@ class GameInstructionsMixin:
                instructions=instructions,
            )
            return instructions

    # SCENE PYTHON INSTRUCTIONS SUPPORT

    async def run_scene_module(self, scene:"Scene"):
    async def run_scene_module(self, scene: "Scene"):
        """
        runs the game/__init__.py of the scene
        """
        if not await self.scene_has_module(scene):
            return

        await self.load_scene_module(scene)

        log.info("Running scene instructions from python module", scene=scene)

        with OpenScopedContext(self.scene, self.client):
            with PrependTemplateDirectories(self.scene.template_dir):
                scene._module()

        if DEV_MODE:
            # delete the module so it can be reloaded
            # on the next run
            del scene._module

    async def load_scene_module(self, scene:"Scene"):
    async def load_scene_module(self, scene: "Scene"):
        """
        loads the game.py of the scene
        """
        if not await self.scene_has_module(scene):
            return

        if hasattr(scene, "_module"):
            log.warning("Scene already has a module loaded")
            return

        # file path to the game/__init__.py file of the scene
        module_path = self.scene_module_path

        # read the file into _module property
        with open(module_path, "r") as f:
            module_code = f.read()
            scene._module = GameInstructionScope(
                agent=self,
                agent=self,
                log=log,
                scene=scene,
                module_function=compile_scene_module(module_code)
                scene=scene,
                module_function=compile_scene_module(module_code),
            )

    async def scene_has_module(self, scene:"Scene"):
    async def scene_has_module(self, scene: "Scene"):
        """
        checks if the scene has a game.py
        """
        return os.path.exists(self.scene_module_path)
        return os.path.exists(self.scene_module_path)
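Note: compile_scene_module (above) is the standard RestrictedPython pipeline: compile the untrusted module with compile_restricted, execute it against safe_globals plus the guard hooks registered in this diff, then pull the module's game entry point out of the locals. A minimal self-contained sketch of the same pattern; the one-line module body is a made-up example, and safe_locals is defined locally here (in the diff it comes from surrounding code):

    from RestrictedPython import compile_restricted, safe_globals
    from RestrictedPython.Eval import default_guarded_getitem, default_guarded_getiter
    from RestrictedPython.Guards import guarded_iter_unpack_sequence, safer_getattr

    module_code = "def game(api):\n    return 'scene module loaded'\n"

    # compile the untrusted source
    compiled = compile_restricted(module_code, filename="<scene instructions>", mode="exec")

    # restricted globals, registered the same way as in the diff
    restricted_globals = safe_globals.copy()
    restricted_globals["__name__"] = "__main__"
    restricted_globals["__metaclass__"] = type
    restricted_globals["_getiter_"] = default_guarded_getiter
    restricted_globals["_getitem_"] = default_guarded_getitem
    restricted_globals["_iter_unpack_sequence_"] = guarded_iter_unpack_sequence
    restricted_globals["getattr"] = safer_getattr

    safe_locals = {}
    exec(compiled, restricted_globals, safe_locals)
    game = safe_locals.get("game")  # the module's entry point
    print(game(None))  # -> scene module loaded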
@@ -1,17 +1,19 @@
from typing import TYPE_CHECKING, Coroutine, Callable, Any
import asyncio
import nest_asyncio
import contextvars
from typing import TYPE_CHECKING, Any, Callable, Coroutine

import nest_asyncio
import structlog
from talemate.emit import emit
from talemate.client.base import ClientBase
from talemate.instance import get_agent, AGENTS

from talemate.agents.base import Agent
from talemate.client.base import ClientBase
from talemate.emit import emit
from talemate.instance import AGENTS, get_agent
from talemate.prompts.base import Prompt

if TYPE_CHECKING:
    from talemate.tale_mate import Scene, Character
    from talemate.game.state import GameState
    from talemate.tale_mate import Character, Scene

__all__ = [
    "OpenScopedContext",

@@ -28,7 +30,8 @@ nest_asyncio.apply()

log = structlog.get_logger("talemate.game.scope")

def run_async(coro:Coroutine):

def run_async(coro: Coroutine):
    """
    runs a coroutine
    """
@@ -37,155 +40,153 @@ def run_async(coro:Coroutine):


class ScopedContext:
    def __init__(self, scene:"Scene" = None, client:ClientBase = None):
    def __init__(self, scene: "Scene" = None, client: ClientBase = None):
        self.scene = scene
        self.client = client


scoped_context = contextvars.ContextVar("scoped_context", default=ScopedContext())


class OpenScopedContext:
    def __init__(self, scene:"Scene", client:ClientBase):
    def __init__(self, scene: "Scene", client: ClientBase):
        self.scene = scene
        self.context = ScopedContext(
            scene = scene,
            client = client
        )
        self.context = ScopedContext(scene=scene, client=client)

    def __enter__(self):
        self.token = scoped_context.set(
            self.context
        )
        self.token = scoped_context.set(self.context)

    def __exit__(self, *args):
        scoped_context.reset(self.token)


class ObjectScope:

    """
    Defines a method for getting the scoped object
    """

    exposed_properties = []
    exposed_methods = []

    def __init__(self, get_scoped_object:Callable):
    def __init__(self, get_scoped_object: Callable):
        self.scope_object(get_scoped_object)

    def __getattr__(self, name:str):
    def __getattr__(self, name: str):
        if name in self.scoped_properties:
            return self.scoped_properties[name]()

        return super().__getattr__(name)

    def scope_object(self, get_scoped_object:Callable):
    def scope_object(self, get_scoped_object: Callable):
        self.scoped_properties = {}

        for prop in self.exposed_properties:
            self.scope_property(prop, get_scoped_object)

        for method in self.exposed_methods:
            self.scope_method(method, get_scoped_object)

    def scope_property(self, prop:str, get_scoped_object:Callable):
    def scope_property(self, prop: str, get_scoped_object: Callable):
        self.scoped_properties[prop] = lambda: getattr(get_scoped_object(), prop)

    def scope_method(self, method:str, get_scoped_object:Callable):
    def scope_method(self, method: str, get_scoped_object: Callable):
        def fn(*args, **kwargs):
            _fn = getattr(get_scoped_object(), method)

            # if coroutine, run it in the event loop
            if asyncio.iscoroutinefunction(_fn):
                rv = run_async(
                    _fn(*args, **kwargs)
                )
                rv = run_async(_fn(*args, **kwargs))
            elif callable(_fn):
                rv = _fn(*args, **kwargs)
            else:
                rv = _fn

            return rv

        fn.__name__ = method
        #log.debug("Setting", self, method, "to", fn.__name__)
        # log.debug("Setting", self, method, "to", fn.__name__)
        setattr(self, method, fn)


class ClientScope(ObjectScope):

    """
    Wraps the client with certain exposed
    methods that can be used in game logic implementations
    through the scene's game.py file.

    Exposed:

    - send_prompt
    """

    exposed_properties = [
        "send_prompt"
    ]

    exposed_properties = ["send_prompt"]

    def __init__(self):
        super().__init__(lambda: scoped_context.get().client)

    def render_and_request(self, template_name:str, kind:str="create", dedupe_enabled:bool=True, **kwargs):
        """
    def render_and_request(
        self,
        template_name: str,
        kind: str = "create",
        dedupe_enabled: bool = True,
        **kwargs,
    ):
        """
        Renders a prompt and sends it to the client
        """
        prompt = Prompt.get(template_name, kwargs)
        prompt.client = scoped_context.get().client
        prompt.dedupe_enabled = dedupe_enabled
        return run_async(prompt.send(scoped_context.get().client, kind))

    def query_text_eval(self, query: str, text: str):
        world_state = get_agent("world_state")
        query = f"{query} Answer with a yes or no."
        response = run_async(
            world_state.analyze_text_and_answer_question(text=text, query=query, short=True)
            world_state.analyze_text_and_answer_question(
                text=text, query=query, short=True
            )
        )
        return response.strip().lower().startswith("y")


class AgentScope(ObjectScope):

    """
    Wraps agent calls with certain exposed
    methods that can be used in game logic implementations

    Exposed:

    - action: calls an agent action
    - config: returns the agent's configuration
    """

    def __init__(self, agent:Agent):
    def __init__(self, agent: Agent):
        self.exposed_properties = [
            "sanitized_action_config",
        ]

        self.exposed_methods = []

        # loop through all methods on agent and add them to the scope
        # if the function has `exposed` attribute set to True
        for key in dir(agent):
            value = getattr(agent, key)
            if callable(value) and hasattr(value, "exposed") and value.exposed:
                self.exposed_methods.append(key)

        # log.debug("AgentScope", agent=agent, exposed_properties=self.exposed_properties, exposed_methods=self.exposed_methods)

        super().__init__(lambda: agent)
        self.config = lambda: agent.sanitized_action_config


class GameStateScope(ObjectScope):

    exposed_methods = [
        "set_var",
        "has_var",
@@ -193,17 +194,17 @@ class GameStateScope(ObjectScope):
        "get_or_set_var",
        "unset_var",
    ]

    def __init__(self):
        super().__init__(lambda: scoped_context.get().scene.game_state)

class LogScope:


class LogScope:
    """
    Wrapper for log calls
    """

    def __init__(self, log:object):
    def __init__(self, log: object):
        self.info = log.info
        self.error = log.error
        self.debug = log.debug

@@ -222,23 +223,28 @@ class CharacterScope(ObjectScope):
        "details",
        "is_player",
    ]

    exposed_methods = [
        "update",
        "set_detail",
        "set_base_attribute",
        "rename",
    ]


class SceneScope(ObjectScope):

    """
    Wraps scene calls with certain exposed
    methods that can be used in game logic implementations

    """

    exposed_properties = [
        "name",
        "title",
    ]

    exposed_methods = [
        "context",
        "context_history",
@@ -246,19 +252,20 @@ class SceneScope(ObjectScope):
        "npc_character_names",
        "restore",
        "set_content_context",
        "set_title",
    ]

    def __init__(self):
        super().__init__(lambda: scoped_context.get().scene)

    def get_character(self, name:str) -> "CharacterScope":
    def get_character(self, name: str) -> "CharacterScope":
        """
        returns a character by name
        """
        character = scoped_context.get().scene.get_character(name)
        if character:
            return CharacterScope(lambda: character)

    def get_player_character(self) -> "CharacterScope":
        """
        returns the player character

@@ -266,30 +273,32 @@ class SceneScope(ObjectScope):
        character = scoped_context.get().scene.get_player_character()
        if character:
            return CharacterScope(lambda: character)

    def history(self):
        return [h for h in scoped_context.get().scene.history]


class GameInstructionScope:

    def __init__(self, agent:Agent, log:object, scene:"Scene", module_function:callable):
    def __init__(
        self, agent: Agent, log: object, scene: "Scene", module_function: callable
    ):
        self.game_state = GameStateScope()
        self.client = ClientScope()
        self.agents = type('', (), {})()
        self.agents = type("", (), {})()
        self.scene = SceneScope()
        self.wait = run_async
        self.log = LogScope(log)
        self.module_function = module_function

        for key, agent in AGENTS.items():
            setattr(self.agents, key, AgentScope(agent))

    def __call__(self):
        self.module_function(self)

    def emit_status(self, status: str, message: str, **kwargs):
        if kwargs:
            emit("status", status=status, message=message, data=kwargs)
        else:
            emit("status", status=status, message=message)
            emit("status", status=status, message=message)
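Note: OpenScopedContext (above) is a thin context manager around a contextvars.ContextVar; it is what lets scene modules reach the active scene and client without threading them through every call. A stripped-down sketch of that set/reset pattern with stand-in values:

    import contextvars

    class ScopedContext:
        def __init__(self, scene=None, client=None):
            self.scene = scene
            self.client = client

    scoped_context = contextvars.ContextVar("scoped_context", default=ScopedContext())

    class OpenScopedContext:
        def __init__(self, scene, client):
            self.context = ScopedContext(scene=scene, client=client)

        def __enter__(self):
            # keep the token so the previous value can be restored on exit
            self.token = scoped_context.set(self.context)

        def __exit__(self, *args):
            scoped_context.reset(self.token)

    with OpenScopedContext("scene-object", "client-object"):
        assert scoped_context.get().scene == "scene-object"
    assert scoped_context.get().scene is None  # default restored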
@@ -73,6 +73,6 @@ class GameState(pydantic.BaseModel):
        if not self.has_var(key):
            self.set_var(key, value, commit=commit)
        return self.get_var(key)

    def unset_var(self, key: str):
        self.variables.pop(key, None)
        self.variables.pop(key, None)
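Note: get_or_set_var (context lines above) is a read-through default: write only when the key is missing, then return whatever is stored. A rough dict-backed equivalent of the two methods touched here:

    variables = {}

    def get_or_set_var(key, value):
        # only write the default when the key is missing
        if key not in variables:
            variables[key] = value
        return variables[key]

    def unset_var(key):
        variables.pop(key, None)  # tolerant of missing keys, as above

    assert get_or_set_var("round", 1) == 1
    assert get_or_set_var("round", 99) == 1  # existing value wins
    unset_var("round")
    unset_var("round")  # second call is a no-op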
@@ -125,9 +125,9 @@ async def load_scene_from_character_card(scene, file_path):
    character.base_attributes = {
        k.lower(): v for k, v in character.base_attributes.items()
    }

    character.dialogue_instructions = await creator.determine_character_dialogue_instructions(
        character
    )
    character.dialogue_instructions = (
        await creator.determine_character_dialogue_instructions(character)
    )

    # any values that are lists should be converted to strings joined by ,

@@ -181,6 +181,7 @@ async def load_scene_from_data(
    scene.experimental = scene_data.get("experimental", False)
    scene.help = scene_data.get("help", "")
    scene.restore_from = scene_data.get("restore_from", "")
    scene.title = scene_data.get("title", "")

    # reset = True
@@ -14,7 +14,7 @@ import random
import re
import uuid
from contextvars import ContextVar
from typing import Any
from typing import Any, Tuple

import jinja2
import nest_asyncio

@@ -34,8 +34,6 @@ from talemate.util import (
    remove_extra_linebreaks,
)

from typing import Tuple

__all__ = [
    "Prompt",
    "LoopedPrompt",
@@ -273,10 +271,17 @@ class Prompt:
        return prompt

    @classmethod
    async def request(cls, uid: str, client: Any, kind: str, vars: dict = None):
    async def request(
        cls, uid: str, client: Any, kind: str, vars: dict = None, **kwargs
    ):
        if "decensor" not in vars:
            vars.update(decensor=client.decensor_enabled)
        prompt = cls.get(uid, vars)

        # kwargs update prompt class attributes
        for key, value in kwargs.items():
            setattr(prompt, key, value)

        return await prompt.send(client, kind)

    @property
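Note: the new **kwargs on Prompt.request just copies keyword arguments onto the prompt object via setattr before sending, so callers can override per-request flags. The pattern on a stand-in class (Prompt internals omitted):

    class PromptStandIn:
        # defaults that a caller may want to override per request
        dedupe_enabled = True
        pad_prepended_response = True

    prompt = PromptStandIn()
    kwargs = {"dedupe_enabled": False}

    # kwargs update prompt attributes, mirroring the new request() body
    for key, value in kwargs.items():
        setattr(prompt, key, value)

    assert prompt.dedupe_enabled is False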
@@ -822,14 +827,9 @@ class Prompt:
            response = self.prepared_response.rstrip() + pad + response.strip()

        else:
            # we are waiting for a json response that may or may not already
            # include the prepared response. we first need to remove any duplicate
            # whitespace and line breaks and then check if the prepared response

            response = response.replace("\n", " ")
            response = re.sub(r"\s+", " ", response)

            if not response.lower().startswith(self.prepared_response.lower()):
                # awaiting json response, if the response does not start with a {
                # it means it's likely a coerced response and we need to prepend the prepared response
                if not response.lower().startswith("{"):
                    pad = " " if self.pad_prepended_response else ""
                    response = self.prepared_response.rstrip() + pad + response.strip()
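Note: the rewritten branch above handles coerced JSON generations: whitespace is normalized, and if the model's output neither echoes the prepared prefix nor starts with "{", the prefix is stitched back on. A standalone sketch of that check:

    import re

    def merge_prepared_json(prepared_response: str, response: str, pad_prepended: bool = True) -> str:
        # normalize whitespace before comparing against the prepared prefix
        response = re.sub(r"\s+", " ", response.replace("\n", " "))
        if not response.lower().startswith(prepared_response.lower()):
            # a response not starting with "{" is likely a coerced continuation
            if not response.lower().startswith("{"):
                pad = " " if pad_prepended else ""
                response = prepared_response.rstrip() + pad + response.strip()
        return response

    print(merge_prepared_json('{"name":', '"Elara", "mood": "wary"}'))
    # -> {"name": "Elara", "mood": "wary"}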
@@ -0,0 +1,25 @@
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
{% include "extra-context.jinja2" %}
{% endwith %}
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% for scene_context in scene.context_history(budget=min(2048, max_tokens-300-count_tokens(self.rendered_context())), min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Continue {{ character.name }}'s unfinished line in this screenplay.

Your response MUST only be the new parts of the dialogue, not the entire line.

Partial line: {{ character.name }}: {{ input }}
{% if not can_coerce -%}
Continuation:
<|CLOSE_SECTION|>
{%- else -%}
<|CLOSE_SECTION|>
{{ bot_token }}{{ input }}
{%- endif -%}
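Note: the can_coerce branch at the end of this new template is the key trick: for clients that support coercion, the player's partial line is replayed after {{ bot_token }} so the model believes it already began answering and simply continues. A rough sketch of assembling such a prompt string; the token value is a placeholder, not any particular model's:

    def build_autocomplete_prompt(task: str, character: str, partial: str, bot_token: str, can_coerce: bool) -> str:
        prompt = f"{task}\nPartial line: {character}: {partial}\n"
        if can_coerce:
            # replay the partial input as the start of the model's own answer
            prompt += f"{bot_token}{partial}"
        else:
            prompt += "Continuation:"
        return prompt

    print(build_autocomplete_prompt(
        "Continue the unfinished line.", "Elara", "I was about to", "<assistant>", True
    ))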
@@ -10,5 +10,6 @@ By default all actors are given the following instructions for their character(s)
Dialogue instructions: "Use an informal and colloquial register with a conversational tone. Overall, {{ character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy."

However, you can override this default instruction by providing your own instructions below.
Keep the format similar and stick to one paragraph.
<|CLOSE_SECTION|>
{{ bot_token }}Dialogue instructions:

@@ -23,10 +23,10 @@ Treat updates as absolute, the new character sheet will replace the old one.

Alteration instructions: {{ alteration_instructions }}
{% endif %}
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.

Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.

You must only generate attributes for {{ name }}. You are omniscient and can describe the character in detail.

Example:

Name: <character name>
@@ -1,5 +1,6 @@
from dataclasses import dataclass, field
import re
from dataclasses import dataclass, field

import isodate

_message_id = 0

@@ -32,7 +33,7 @@ class SceneMessage:
    source: str = ""

    hidden: bool = False

    typ = "scene"

    def __str__(self):

@@ -138,7 +139,7 @@ class NarratorMessage(SceneMessage):
class DirectorMessage(SceneMessage):
    action: str = "actor_instruction"
    typ = "director"

    @property
    def transformed_message(self):
        return self.message.replace("Director instructs ", "")
@@ -148,51 +149,58 @@ class DirectorMessage(SceneMessage):
        if self.action == "actor_instruction":
            return self.transformed_message.split(":", 1)[0]
        return ""

    @property
    def dialogue(self):
        if self.action == "actor_instruction":
            return self.transformed_message.split(":", 1)[1]
        return self.message

    @property
    def instructions(self):
        if self.action == "actor_instruction":
            return self.dialogue.replace('"','').replace("To progress the scene, i want you to ", "").strip()
            return (
                self.dialogue.replace('"', "")
                .replace("To progress the scene, i want you to ", "")
                .strip()
            )
        return self.message

    @property
    def as_inner_monologue(self):
        # instructions may be written referencing the character as you, your etc.,
        # so we need to replace those to fit a first person perspective

        # first we lowercase
        instructions = self.instructions.lower()

        if not self.character_name:
            return instructions

        # then we replace yourself with myself using regex, taking care of word boundaries
        instructions = re.sub(r"\byourself\b", "myself", instructions)

        # then we replace your with my using regex, taking care of word boundaries
        instructions = re.sub(r"\byour\b", "my", instructions)

        # then we replace you with i using regex, taking care of word boundaries
        instructions = re.sub(r"\byou\b", "i", instructions)

        return f"{self.character_name} thinks: I should {instructions}"

    @property
    def as_story_progression(self):
        return f"{self.character_name}'s next action: {self.instructions}"

    def __dict__(self):
        rv = super().__dict__()

        if self.action:
            rv["action"] = self.action

        return rv

    def __str__(self):
        """
        The director message is a special case and needs to be transformed
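Note: as_inner_monologue (above) leans on \b word boundaries so that "you" is rewritten without mangling longer words, and on replacing "yourself" before "your" before "you". The same rewrite in isolation:

    import re

    def to_first_person(instructions: str) -> str:
        instructions = instructions.lower()
        # order matters: longest pronoun forms first
        instructions = re.sub(r"\byourself\b", "myself", instructions)
        instructions = re.sub(r"\byour\b", "my", instructions)
        instructions = re.sub(r"\byou\b", "i", instructions)
        return instructions

    print(to_first_person("You should trust your instincts and pace yourself."))
    # -> "i should trust my instincts and pace myself."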
@@ -212,6 +220,7 @@ class DirectorMessage(SceneMessage):
        else:
            return f"# {self.as_story_progression}"


@dataclass
class TimePassageMessage(SceneMessage):
    ts: str = "PT0S"

@@ -238,7 +247,9 @@ class ReinforcementMessage(SceneMessage):

    def __str__(self):
        question, _ = self.source.split(":", 1)
        return f"# Internal notes for {self.character_name} - {question}: {self.message}"
        return (
            f"# Internal notes for {self.character_name} - {question}: {self.message}"
        )

    def as_format(self, format: str, **kwargs) -> str:
        if format == "movie_script":

@@ -389,7 +389,7 @@ class WebsocketHandler(Receiver):
            character = emission.message_object.source
        else:
            character = ""

        director = instance.get_agent("director")
        direction_mode = director.actor_direction_mode
@@ -541,6 +541,14 @@ class WebsocketHandler(Receiver):
            }
        )

    def handle_autocomplete_suggestion(self, emission: Emission):
        self.queue_put(
            {
                "type": "autocomplete_suggestion",
                "message": emission.message,
            }
        )

    def handle_audio_queue(self, emission: Emission):
        self.queue_put(
            {
@@ -265,12 +265,12 @@ class Character:

        orig_name = self.name
        self.name = new_name

        if orig_name.lower() == "you":
            # we don't want to replace "you" in the description
            # or anywhere else so we can just return here
            return

            return

        if self.description:
            self.description = self.description.replace(f"{orig_name}", self.name)
        for k, v in self.base_attributes.items():
@@ -756,6 +756,7 @@ class Scene(Emitter):
        self.static_tokens = 0
        self.max_tokens = 2048
        self.next_actor = None
        self.title = ""

        self.experimental = False
        self.help = ""

@@ -898,7 +899,13 @@ class Scene(Emitter):

    def set_intro(self, intro: str):
        self.intro = intro

    def set_name(self, name: str):
        self.name = name

    def set_title(self, title: str):
        self.title = title

    def set_content_context(self, content_context: str):
        self.context = content_context
@@ -1367,13 +1374,21 @@ class Scene(Emitter):
                if isinstance(message, DirectorMessage):
                    if not keep_director:
                        continue

                    if not message.character_name:
                        # skip director messages that are not character specific
                        # TODO: we may want to include these in the future
                        continue

                    elif isinstance(keep_director, str) and message.source != keep_director:
                        continue

                if count_tokens(parts_dialogue) + count_tokens(message) > budget_dialogue:
                    break

                parts_dialogue.insert(0, message.as_format(conversation_format, mode=actor_direction_mode))
                parts_dialogue.insert(
                    0, message.as_format(conversation_format, mode=actor_direction_mode)
                )

        # collect context, ignore where end > len(history) - count

@@ -1599,6 +1614,7 @@ class Scene(Emitter):
            self.name,
            status="started",
            data={
                "title": self.title or self.name,
                "environment": self.environment,
                "scene_config": self.scene_config,
                "player_character_name": (
@@ -890,10 +890,10 @@ def ensure_dialog_format(line: str, talking_character: str = None) -> str:
        line = line[len(talking_character) + 1 :].lstrip()

    lines = []

    has_asterisks = "*" in line
    has_quotes = '"' in line

    default_wrap = None
    if has_asterisks and not has_quotes:
        default_wrap = '"'

@@ -925,7 +925,7 @@ def ensure_dialog_format(line: str, talking_character: str = None) -> str:
    return line


def ensure_dialog_line_format(line: str, default_wrap:str=None) -> str:
def ensure_dialog_line_format(line: str, default_wrap: str = None) -> str:
    """
    a Python function that standardizes the formatting of dialogue and action/thought
    descriptions in text strings. This function is intended for use in a text-based

@@ -944,13 +944,13 @@ def ensure_dialog_line_format(line: str, default_wrap:str=None) -> str:
    line = line.strip()

    line = line.replace('"*', '"').replace('*"', '"')

    # if the line ends with a whitespace followed by a classifier, strip both from the end
    # as this indicates the remnants of a partial segment that was removed.
    if line.endswith(" *") or line.endswith(' "'):
        line = line[:-2]

    if "*" not in line and '"' not in line and default_wrap and line:
        # if the line is not wrapped in either asterisks or quotes, wrap it in the default
        # wrap, if specified - when it's specified it means the line was split and we

@@ -997,9 +997,9 @@ def ensure_dialog_line_format(line: str, default_wrap:str=None) -> str:
            else:
                if segment_open is None and c and c != " ":
                    if last_classifier == '"':
                        segment_open = '*'
                        segment_open = "*"
                        segment = f"{segment_open}{c}"
                    elif last_classifier == '*':
                    elif last_classifier == "*":
                        segment_open = '"'
                        segment = f"{segment_open}{c}"
                    else:
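Note: the character scanner above alternates wrappers: once a quoted segment closes, bare text is treated as action and wrapped in asterisks, and vice versa. A much-reduced sketch of just that alternation rule (not the full function):

    def wrap_unmarked(text: str, last_classifier: str) -> str:
        # after a quote, bare text is action; after an asterisk, bare text is speech
        wrapper = "*" if last_classifier == '"' else '"'
        return f"{wrapper}{text.strip()}{wrapper}"

    assert wrap_unmarked("she hesitates", '"') == "*she hesitates*"
    assert wrap_unmarked("Fine, have it your way.", "*") == '"Fine, have it your way."'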
2
start-local.bat
Normal file

@@ -0,0 +1,2 @@
start cmd /k "cd talemate_frontend && npm run serve -- --host 127.0.0.1 --port 8080"
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 127.0.0.1 --port 5050"

4
talemate_frontend/package-lock.json
generated

@@ -1,12 +1,12 @@
{
  "name": "talemate_frontend",
  "version": "0.22.0",
  "version": "0.23.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "talemate_frontend",
      "version": "0.22.0",
      "version": "0.23.0",
      "dependencies": {
        "@mdi/font": "7.4.47",
        "core-js": "^3.8.3",
@@ -1,6 +1,6 @@
{
  "name": "talemate_frontend",
  "version": "0.22.0",
  "version": "0.23.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",

@@ -157,6 +157,23 @@
        </v-row>
    </div>

    <!-- COHERE API -->
    <div v-if="applicationPageSelected === 'cohere_api'">
        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
            <v-alert-title>Cohere</v-alert-title>
            <div class="text-grey">
                Configure your Cohere API key here. You can get one from <a href="https://dashboard.cohere.com/api-keys" target="_blank">https://dashboard.cohere.com/api-keys</a>
            </div>
        </v-alert>
        <v-divider class="mb-2"></v-divider>
        <v-row>
            <v-col cols="12">
                <v-text-field type="password" v-model="app_config.cohere.api_key"
                    label="Cohere API Key"></v-text-field>
            </v-col>
        </v-row>
    </div>

    <!-- ELEVENLABS API -->
    <div v-if="applicationPageSelected === 'elevenlabs_api'">
        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
@@ -279,6 +296,7 @@ export default {
            {title: 'OpenAI', icon: 'mdi-api', value: 'openai_api'},
            {title: 'mistral.ai', icon: 'mdi-api', value: 'mistralai_api'},
            {title: 'Anthropic', icon: 'mdi-api', value: 'anthropic_api'},
            {title: 'Cohere', icon: 'mdi-api', value: 'cohere_api'},
            {title: 'ElevenLabs', icon: 'mdi-api', value: 'elevenlabs_api'},
            {title: 'RunPod', icon: 'mdi-api', value: 'runpod_api'},
        ],

@@ -51,7 +51,7 @@

    <v-card-text>
        <div class="text-caption" v-if="!client.data.has_prompt_template">No matching LLM prompt template found. Using default.</div>
        <pre>{{ client.data.prompt_template_example }}</pre>
        <div class="prompt-template-preview">{{ client.data.prompt_template_example }}</div>
    </v-card-text>
    <v-card-actions>
        <v-btn @click.stop="determineBestTemplate" prepend-icon="mdi-web-box">Determine via HuggingFace</v-btn>

@@ -250,4 +250,13 @@ export default {
        this.registerMessageHandler(this.handleMessage);
    },
}
</script>
</script>
<style scoped>

.prompt-template-preview {
    white-space: pre-wrap;
    font-family: monospace;
    font-size: 0.8rem;
}

</style>
@@ -140,6 +140,16 @@ export default {
            this.setWaitingForInput(false);
        },

        messageTypeIsSceneMessage(type) {
            return ![
                'request_input',
                'client_status',
                'agent_status',
                'status',
                'autocomplete_suggestion'
            ].includes(type);
        },

        handleMessage(data) {

            var i;

@@ -198,7 +208,7 @@ export default {
                    action: data.action
                }
            );
            } else if (data.type != 'request_input' && data.type != 'client_status' && data.type != 'agent_status' && data.type != 'status') {
            } else if (this.messageTypeIsSceneMessage(data.type)) {
                this.messages.push({ id: data.id, type: data.type, text: data.message, color: data.color, character: data.character, status:data.status, ts:data.ts }); // Add color property to the message
            } else if (data.type === 'status' && data.data && data.data.as_scene_message === true) {
@@ -50,6 +50,15 @@
    <v-icon class="ml-1 mr-3" v-else-if="isWaitingForInput()">mdi-keyboard</v-icon>
    <v-icon class="ml-1 mr-3" v-else>mdi-circle-outline</v-icon>

    <v-tooltip v-if="isWaitingForInput()" location="top" text="Request autocomplete suggestion for your input. [Ctrl+Enter while typing]">
        <template v-slot:activator="{ props }">
            <v-btn :disabled="messageInput.length < 5" class="hotkey mr-3" v-bind="props" @click="requestAutocompleteSuggestion" color="primary" icon>
                <v-icon>mdi-auto-fix</v-icon>
            </v-btn>
        </template>
    </v-tooltip>

    <v-divider vertical></v-divider>

@@ -372,6 +381,7 @@ export default {
        inactiveCharacters: Array,
        activeCharacters: Array,
        playerCharacterName: String,
        messageInput: String,
    },
    computed: {
        deactivatableCharacters: function() {

@@ -667,6 +677,10 @@ export default {
            this.sendHotButtonMessage(command)
        },

        requestAutocompleteSuggestion() {
            this.getWebsocket().send(JSON.stringify({ type: 'interact', text: `!acdlg:${this.messageInput}` }));
        },

        handleMessage(data) {

            if (data.type === "command_status") {
@@ -86,9 +86,13 @@

    <!-- app bar -->
    <v-app-bar app>
        <v-app-bar-nav-icon @click="toggleNavigation('game')"><v-icon>mdi-script</v-icon></v-app-bar-nav-icon>
        <v-app-bar-nav-icon size="x-small" @click="toggleNavigation('game')">
            <v-icon v-if="sceneDrawer">mdi-arrow-collapse-left</v-icon>
            <v-icon v-else>mdi-arrow-collapse-right</v-icon>
        </v-app-bar-nav-icon>

        <v-toolbar-title v-if="scene.name !== undefined">
            {{ scene.name || 'Untitled Scenario' }}
            {{ scene.title || 'Untitled Scenario' }}
            <span v-if="scene.saved === false" class="text-red">*</span>
            <v-chip size="x-small" v-if="scene.environment === 'creative'" class="ml-2"><v-icon text="Creative" size="14"
                class="mr-1">mdi-palette-outline</v-icon>Creative Mode</v-chip>

@@ -107,6 +111,9 @@
            Talemate
        </v-toolbar-title>
        <v-spacer></v-spacer>

        <v-app-bar-nav-icon v-if="sceneActive" @click="returnToStartScreen()"><v-icon>mdi-home</v-icon></v-app-bar-nav-icon>

        <VisualQueue ref="visualQueue" />
        <v-app-bar-nav-icon @click="toggleNavigation('debug')"><v-icon>mdi-bug</v-icon></v-app-bar-nav-icon>
        <v-app-bar-nav-icon @click="openAppConfig()"><v-icon>mdi-cog</v-icon></v-app-bar-nav-icon>

@@ -125,6 +132,7 @@

    <SceneTools
        @open-world-state-manager="onOpenWorldStateManager"
        :messageInput="messageInput"
        :playerCharacterName="getPlayerCharacterName()"
        :passiveCharacters="passiveCharacters"
        :inactiveCharacters="inactiveCharacters"

@@ -345,6 +353,7 @@ export default {
            if (data.type == "scene_status") {
                this.scene = {
                    name: data.name,
                    title: data.data.title,
                    environment: data.data.environment,
                    scene_time: data.data.scene_time,
                    saved: data.data.saved,
@@ -372,6 +381,23 @@ export default {
                return;
            }

            if (data.type === 'autocomplete_suggestion') {

                const completion = data.message;

                // append completion to messageInput, add a space if
                // neither messageInput ends with a space nor completion starts with a space
                // unless completion starts with !, ., or ?

                const completionStartsWithSentenceEnd = completion.startsWith('!') || completion.startsWith('.') || completion.startsWith('?') || completion.startsWith(')') || completion.startsWith(']') || completion.startsWith('}') || completion.startsWith('"') || completion.startsWith("'") || completion.startsWith("*") || completion.startsWith(",")

                if (this.messageInput.endsWith(' ') || completion.startsWith(' ') || completionStartsWithSentenceEnd) {
                    this.messageInput += completion;
                } else {
                    this.messageInput += ' ' + completion;
                }
            }

            if (data.type === 'request_input') {

                this.waitingForInput = true;
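Note: the suggestion handler above only decides whether to glue the completion on with a space; completions that begin with punctuation attach directly. The same join rule as a compact Python sketch:

    def join_completion(message_input: str, completion: str) -> str:
        # punctuation-led completions attach without an extra space
        attaches_directly = completion.startswith(
            ("!", ".", "?", ")", "]", "}", '"', "'", "*", ",")
        )
        if message_input.endswith(" ") or completion.startswith(" ") or attaches_directly:
            return message_input + completion
        return message_input + " " + completion

    assert join_completion("I open the", " door") == "I open the door"
    assert join_completion("Wait", "!") == "Wait!"
    assert join_completion("I open", "the door") == "I open the door"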
@@ -409,7 +435,14 @@ export default {
            }

        },
        sendMessage() {
        sendMessage(event) {

            // if ctrl+enter is pressed, request autocomplete
            if (event.ctrlKey && event.key === 'Enter') {
                this.websocket.send(JSON.stringify({ type: 'interact', text: `!acdlg: ${this.messageInput}` }));
                return;
            }

            if (!this.inputDisabled) {
                this.websocket.send(JSON.stringify({ type: 'interact', text: this.messageInput }));
                this.messageInput = '';

@@ -447,6 +480,16 @@ export default {
            else if (navigation == "debug")
                this.debugDrawer = !this.debugDrawer;
        },
        returnToStartScreen() {

            if(this.sceneActive && !this.scene.saved) {
                let confirm = window.confirm("Are you sure you want to return to the start screen? You will lose any unsaved progress.");
                if(!confirm)
                    return;
            }
            // reload
            document.location.reload();
        },
        getClients() {
            if (!this.$refs.aiClient) {
                return [];
2
templates/llm-prompt/std/CommandR.jinja2
Normal file

@@ -0,0 +1,2 @@
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ system_message }}
{{ user_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ coercion_message }}

1
templates/llm-prompt/std/CommandRPlus.jinja2
Normal file

@@ -0,0 +1 @@
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{{ system_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ user_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ coercion_message }}
7
templates/llm-prompt/std/Llama3.jinja2
Normal file

@@ -0,0 +1,7 @@
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_message }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ coercion_message }}
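Note: these prompt templates are flat jinja2 files with three slots: system_message, user_message, and an optional coercion_message that pre-seeds the assistant turn. Rendering one outside Talemate only takes a jinja2 environment (the loader path assumes the repository root):

    import jinja2

    env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates/llm-prompt/std"))
    template = env.get_template("Llama3.jinja2")

    prompt = template.render(
        system_message="You are a narrator.",
        user_message="Describe the harbor at dawn.",
        coercion_message="",  # left empty when no coercion is wanted
    )
    print(prompt)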
7
templates/llm-prompt/talemate/Llama-3.jinja2
Normal file

@@ -0,0 +1,7 @@
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_message }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ coercion_message }}

1
templates/llm-prompt/talemate/c4ai-command-r-plus.jinja2
Normal file

@@ -0,0 +1 @@
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{{ system_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ user_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ coercion_message }}

2
templates/llm-prompt/talemate/c4ai-command.jinja2
Normal file

@@ -0,0 +1,2 @@
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ system_message }}
{{ user_message }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ coercion_message }}